Science.gov

Sample records for image analysis criteria

  1. Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD): Development of Image Analysis Criteria and Examiner Reliability for Image Analysis

    PubMed Central

    Ahmad, Mansur; Hollender, Lars; Odont; Anderson, Quentin; Kartha, Krishnan; Ohrbach, Richard K.; Truelove, Edmond L.; John, Mike T.; Schiffman, Eric L.

    2011-01-01

    Introduction: As a part of a multi-site RDC/TMD Validation Project, comprehensive TMJ diagnostic criteria were developed for image analysis using panoramic radiography, magnetic resonance imaging (MRI), and computed tomography (CT). Methods: Inter-examiner reliability was estimated using the kappa (k) statistic, and agreement between rater pairs was characterized by overall, positive, and negative percent agreement. CT was the reference standard for assessing the validity of the other imaging modalities for detecting osteoarthritis (OA). Results: For the radiological diagnosis of OA, reliability of the three examiners was poor for panoramic radiography (k = 0.16), fair for MRI (k = 0.46), and close to the threshold for excellent for CT (k = 0.71). Using MRI, reliability was excellent for diagnosing disc displacement (DD) with reduction (k = 0.78) and DD without reduction (k = 0.94), and good for effusion (k = 0.64). Overall percent agreement for pair-wise ratings was ≥ 82% for all conditions. Positive percent agreement for diagnosing OA was 19% for panoramic radiography, 59% for MRI, and 84% for CT. Using MRI, positive percent agreement for diagnoses of any DD was 95% and for effusion was 81%. Negative percent agreement was ≥ 88% for all conditions. Compared to CT, panoramic radiography and MRI had poor and marginal sensitivity, respectively, but excellent specificity, in detecting OA. Conclusion: Comprehensive image analysis criteria for the RDC/TMD Validation Project were developed; they can reliably be employed for assessing OA using CT, and for assessing disc position and effusion using MRI. PMID:19464658
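
    The reliability statistics quoted above can be reproduced from paired ratings. Below is a minimal Python sketch of Cohen's kappa and the overall/positive/negative percent-agreement measures; the function names and any toy data are illustrative, not taken from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected inter-examiner agreement (Cohen's kappa)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement from each rater's marginal category frequencies.
    expected = sum(counts_a[c] * counts_b[c]
                   for c in set(rater_a) | set(rater_b)) / n ** 2
    return (observed - expected) / (1 - expected)

def percent_agreement(rater_a, rater_b, positive=1):
    """Overall, positive, and negative percent agreement for binary ratings."""
    pairs = list(zip(rater_a, rater_b))
    both_pos = sum(a == b == positive for a, b in pairs)
    both_neg = sum(a == b != positive for a, b in pairs)
    n_pos = sum(a == positive for a in rater_a) + sum(b == positive for b in rater_b)
    n_neg = 2 * len(pairs) - n_pos
    overall = 100.0 * (both_pos + both_neg) / len(pairs)
    # PPA = 2a/(2a+b+c), NPA = 2d/(2d+b+c), the usual pairwise definitions.
    return overall, 100.0 * 2 * both_pos / n_pos, 100.0 * 2 * both_neg / n_neg
```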

  2. Difference image analysis: automatic kernel design using information criteria

    NASA Astrophysics Data System (ADS)

    Bramich, D. M.; Horne, Keith; Alsubai, K. A.; Bachelet, E.; Mislis, D.; Parley, N.

    2016-03-01

    We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis that require very few external parameters to control the kernel design. Each method consists of two components: a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidates that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods, including six employing kernel regularization. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unregularized delta basis functions, combined with either the Akaike or Takeuchi information criterion, is the best kernel solution method in terms of photometric accuracy. Our results are validated by tests performed on two independent sets of real data. Finally, we provide some important recommendations for software implementations of difference image analysis.
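
    The selection step described above can be sketched as: fit each candidate kernel by least squares, then keep the candidate that minimises an information criterion. A minimal sketch assuming Gaussian noise, where chi2 stands in for -2 ln(likelihood) up to a constant (the Takeuchi criterion, which needs extra covariance terms, is omitted):

```python
def aic(chi2, n_params):
    """AIC for a Gaussian-noise least-squares fit: the 2k term
    penalises kernel complexity."""
    return chi2 + 2 * n_params

def select_kernel(candidates):
    """candidates: list of (name, chi2, n_basis_functions) for fitted
    kernel models; returns the name of the AIC-minimising model."""
    return min(candidates, key=lambda c: aic(c[1], c[2]))[0]
```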

  3. Can state-of-the-art HVS-based objective image quality criteria be used for image reconstruction techniques based on ROI analysis?

    NASA Astrophysics Data System (ADS)

    Dostal, P.; Krasula, L.; Klima, M.

    2012-06-01

    Various image processing techniques in multimedia technology are optimized using the visual attention feature of the human visual system. Because of spatial non-uniformity, different locations in an image differ in their importance to perception: the perceived image quality depends mainly on the quality of important locations known as regions of interest (ROI). The performance of such techniques is measured by subjective evaluation or by objective image quality criteria. Many state-of-the-art objective metrics are based on HVS properties: SSIM and MS-SSIM build on image structural information, VIF on the information the human brain can ideally gain from the reference image, and FSIM on low-level features that assign different importance to each location in the image. Still, none of these objective metrics incorporates an analysis of regions of interest. We address the question of whether these objective metrics can be used for effective evaluation of images reconstructed by processing techniques based on ROI analysis utilizing high-level features. In this paper the authors show that the state-of-the-art objective metrics do not correlate well with subjective evaluation when demosaicing based on ROI analysis is used for reconstruction. The ROI were computed from "ground truth" visual attention data. An algorithm combining two known demosaicing techniques on the basis of ROI location is proposed to reconstruct the ROI at fine quality while the rest of the image is reconstructed at low quality. The color image reconstructed by this ROI approach was compared with selected demosaicing techniques by objective criteria and subjective testing. The qualitative comparison of the objective and subjective results indicates that the state-of-the-art objective metrics are still not suitable for evaluating image processing techniques based on ROI analysis, and new criteria are needed.
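
    As a toy illustration of the gap the authors identify, a distortion measure can be made ROI-aware simply by weighting errors inside the region of interest more heavily. The sketch below is a hypothetical construction for illustration only; the roi_weight knob is ours, not a metric from the paper:

```python
def roi_weighted_mse(ref, test, roi_mask, roi_weight=0.9):
    """ref/test: equal-sized 2-D lists of pixel values; roi_mask: same
    shape, True inside the region of interest. Squared error inside the
    ROI is weighted by roi_weight, the background by 1 - roi_weight."""
    num = den = 0.0
    for r_row, t_row, m_row in zip(ref, test, roi_mask):
        for r, t, m in zip(r_row, t_row, m_row):
            w = roi_weight if m else 1 - roi_weight
            num += w * (r - t) ** 2
            den += w
    return num / den
```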

  4. Terahertz Wide-Angle Imaging and Analysis on Plane-wave Criteria Based on Inverse Synthetic Aperture Techniques

    NASA Astrophysics Data System (ADS)

    Gao, Jing Kun; Qin, Yu Liang; Deng, Bin; Wang, Hong Qiang; Li, Jin; Li, Xiang

    2016-04-01

    This paper presents two parts of work on terahertz imaging applications. The first part addresses problems that arise as the rotation angle increases. To compensate for the nonlinearity of terahertz radar systems, a calibration signal acquired from a bright target is commonly used. Generally, this compensation inserts an extra linear phase term into the intermediate frequency (IF) echo signal, which is undesirable in large-rotation-angle imaging applications. We carried out a detailed theoretical analysis of this problem, and a minimum entropy criterion was employed to estimate and compensate for the linear-phase errors. In the second part, the effects of spherical waves on terahertz inverse synthetic aperture imaging are analyzed, and analytic criteria for the plane-wave approximation were derived for different rotation angles. Experimental results for corner reflectors and an aircraft model, obtained with a 330-GHz linear frequency-modulated continuous wave (LFMCW) radar system, validated the necessity and effectiveness of the proposed compensation. Comparison of the experimental images obtained under the plane-wave assumption and with spherical-wave correction also proved highly consistent with the analytic criteria we derived.
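
    The minimum entropy idea in the first part can be sketched as a grid search over candidate linear-phase slopes, keeping the slope whose focused profile has the lowest entropy. This is an illustrative 1-D stand-in with a naive DFT and hypothetical parameters, not the authors' implementation:

```python
import cmath
import math

def entropy(intensities):
    """Shannon entropy of a normalised intensity profile (lower = sharper)."""
    total = sum(intensities)
    return -sum((i / total) * math.log(i / total) for i in intensities if i > 0)

def dft(signal):
    """Naive discrete Fourier transform (adequate for a short sketch)."""
    n = len(signal)
    return [sum(signal[m] * cmath.exp(-2j * cmath.pi * k * m / n)
                for m in range(n)) for k in range(n)]

def autofocus_slope(echo, slopes):
    """Remove each candidate linear phase, focus via the DFT, and keep
    the slope whose intensity profile has minimum entropy."""
    def focused_entropy(s):
        corrected = [x * cmath.exp(-1j * s * m) for m, x in enumerate(echo)]
        return entropy([abs(v) ** 2 for v in dft(corrected)])
    return min(slopes, key=focused_entropy)
```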

  5. Epilepsy Imaging Study Guideline Criteria

    PubMed Central

    Gaillard, William D; Cross, J Helen; Duncan, John S; Stefan, Hermann; Theodore, William H

    2011-01-01

    Recognition of limited economic resources, as well as potential adverse effects of ‘over testing,’ has increased interest in ‘evidence-based’ assessment of new medical technology. This creates a particular problem for evaluation and treatment of epilepsy, increasingly dependent on advanced imaging and electrophysiology, since there is a marked paucity of epilepsy diagnostic and prognostic studies that meet rigorous standards for evidence classification. The lack of high quality data reflects fundamental weaknesses in many imaging studies but also limitations in the assumptions underlying evidence classification schemes as they relate to epilepsy, and to the practicalities of conducting adequately powered studies of rapidly evolving technologies. We review the limitations of current guidelines and propose elements for imaging studies that can contribute meaningfully to the epilepsy literature. PMID:21740417

  6. Evaluation of quantitative image analysis criteria for the high-resolution microendoscopic detection of neoplasia in Barrett's esophagus

    NASA Astrophysics Data System (ADS)

    Muldoon, Timothy J.; Thekkek, Nadhi; Roblyer, Darren; Maru, Dipen; Harpaz, Noam; Potack, Jonathan; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca

    2010-03-01

    Early detection of neoplasia in patients with Barrett's esophagus is essential to improve outcomes. The aim of this ex vivo study was to evaluate the ability of high-resolution microendoscopic imaging and quantitative image analysis to identify neoplastic lesions in patients with Barrett's esophagus. Nine patients with pathologically confirmed Barrett's esophagus underwent endoscopic examination with biopsies or endoscopic mucosal resection. Resected fresh tissue was imaged with fiber bundle microendoscopy; images were analyzed by visual interpretation or by quantitative image analysis to predict whether the imaged sites were non-neoplastic or neoplastic. The best performing pair of quantitative features were chosen based on their ability to correctly classify the data into the two groups. Predictions were compared to the gold standard of histopathology. Subjective analysis of the images by expert clinicians achieved average sensitivity and specificity of 87% and 61%, respectively. The best performing quantitative classification algorithm relied on two image textural features and achieved a sensitivity and specificity of 87% and 85%, respectively. This ex vivo pilot trial demonstrates that quantitative analysis of images obtained with a simple microendoscope system can distinguish neoplasia in Barrett's esophagus with good sensitivity and specificity when compared to histopathology and to subjective image interpretation.
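
    The sensitivity and specificity figures above follow from comparing predicted labels against the histopathology gold standard; a minimal sketch (names and any example data are illustrative):

```python
def sensitivity_specificity(predicted, gold):
    """predicted/gold: parallel lists of booleans (True = neoplastic),
    with histopathology taken as the gold standard."""
    pairs = list(zip(predicted, gold))
    tp = sum(p and g for p, g in pairs)        # true positives
    tn = sum(not p and not g for p, g in pairs)  # true negatives
    fp = sum(p and not g for p, g in pairs)
    fn = sum(not p and g for p, g in pairs)
    return tp / (tp + fn), tn / (tn + fp)
```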

  7. Criteria for phonological process analysis.

    PubMed

    McReynolds, L V; Elbert, M

    1981-05-01

    Investigators have proposed that children with functional articulation disorders should be relabelled phonologically disordered. To support this proposal, evidence has been presented in the literature demonstrating that children's error patterns reflect the operation of phonological processes. However, no quantitative or qualitative criteria have been offered to differentiate these processes from surface error patterns. The purpose of the present descriptive study was to determine whether differences would be found when two kinds of process analyses were employed: a nonquantitative criteria analysis as conducted in the studies reported in the literature, and a quantitative criteria analysis. Speech samples were obtained from 13 children with functional articulation problems, and their errors were submitted to the two analysis procedures. Results indicated that the number of identified processes was smaller when minimum quantitative criteria were used than when no quantitative criteria were imposed. The decrease occurred in individual children's patterns as well as across the patterns of the 13 children. It is suggested that there is a need to establish reasonable quantitative and qualitative criteria for phonological process identification.
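
    The contrast between the two analyses can be mimicked with a minimal counting rule: credit a process only when it occurs some minimum number of times. The threshold of four below is an arbitrary placeholder, not the criterion the authors propose, and the process labels are illustrative:

```python
from collections import Counter

def identify_processes(observed_errors, min_count=4):
    """observed_errors: list of process labels, one per error token
    (e.g. 'stopping', 'fronting'). With min_count=1 every observed
    process is credited, mimicking the nonquantitative analyses."""
    counts = Counter(observed_errors)
    return sorted(p for p, c in counts.items() if c >= min_count)
```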

  8. Assessing the performance of four different categories of histological criteria in brain tumours grading by means of a computer-aided diagnosis image analysis system.

    PubMed

    Kostopoulos, S; Konstandinou, C; Sidiropoulos, K; Ravazoula, P; Kalatzis, I; Asvestas, P; Cavouras, D; Glotsos, D

    2015-10-01

    Brain tumours are considered among the most lethal and difficult to treat forms of cancer, with unknown aetiology and no realistic screening. In this study, we examine whether combining the descriptive criteria used by expert histopathologists in assessing histologic tissue samples with quantitative image analysis features may improve the diagnostic accuracy of brain tumour grading. Data comprised 61 cases of brain cancer (astrocytomas, oligodendrogliomas, meningiomas) collected from the archives of the University Hospital of Patras, Greece. Incorporating the physician's descriptive criteria and quantitative image analysis features into a discriminant function, a computer-aided diagnosis system was designed for discriminating low-grade from high-grade brain tumours. The physician's descriptive features, when used alone in the system, achieved high discrimination accuracy (93.4%). When the verbal descriptive features were combined with quantitative image analysis features, discrimination accuracy improved to 98.4%. Generalization of the proposed system to unseen data converged to an overall prediction accuracy of 86.7% ± 5.4%. Considering that histological grading affects treatment selection and that diagnostic errors can be notable in clinical practice, the proposed system may safeguard against diagnostic misinterpretations in everyday clinical practice.
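
    A discriminant function of the kind described combines weighted feature values into a single score. The sketch below is a hypothetical stand-in; the feature names, weights, and bias are ours, not the fitted function from the study:

```python
def discriminant_score(case, weights, bias=0.0):
    """case: dict of feature name -> value, mixing graded descriptive
    criteria with quantitative image features."""
    return sum(weights[name] * case[name] for name in weights) + bias

def grade(case, weights, bias=0.0):
    """Positive discriminant score -> high-grade, otherwise low-grade."""
    return "high-grade" if discriminant_score(case, weights, bias) > 0 else "low-grade"
```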

  9. Quality criteria for simulator images - A literature review

    NASA Astrophysics Data System (ADS)

    Padmos, Pieter; Milders, Maarten V.

    1992-12-01

    Quality criteria are presented for each of about 30 outside-world image features of computer-generated image systems on vehicle simulators (e.g., airplane, tank, ship). The criteria are derived from a literature review. In addition to purely physical properties related to image presentation (e.g., field size, contrast ratio, update frequency), attention is paid to image content (e.g., number of polygons, surface treatments, moving objects) and various other features (e.g., electro-optical aids, vehicle-terrain interactions, modeling tools, instruction tools). Included in this paper are an introduction to visual perception, separate discussions of each image feature including terminology definitions, and suggestions for further research.

  10. Prioritization criteria of objective index for disaster management by satellite image processing

    NASA Astrophysics Data System (ADS)

    Poursaber, Mohammad R.; Ariki, Yasuo; Safi, Mohammad

    2014-10-01

    The outputs of satellite image processing generally present varied information depending on the interpretation technique, the objects selected for object-based processing, the precision of processing, and the number and timing of the images used. This information must be managed well during a disaster management process based on satellite images. Very high resolution (VHR) optical satellite data are a potential source of detailed information on damage and geological changes over a large area in a short time. In this paper, we studied the area affected by the tsunami triggered by the Tohoku earthquake of 11 March 2011, using VHR data from GeoEye-1 satellite images. A set of pre- and post-earthquake images was used to perform visual change analysis through comparison of these data. These images cover the same area before the disaster, in normal condition, and after the disaster, which caused changes and other modifications to the area. Upon occurrence of a disaster, the images are used to estimate the extent of the damage. Then, based on disaster management criteria and the needs for recovery and reconstruction, priorities for object-based classification indexes are defined. In post-disaster management, they are used for reconstruction and sustainable development activities. Finally, a classified characteristic definition is proposed that can serve as sample index prioritization criteria for disaster management based on satellite image processing. These prioritization criteria are based on an object-based processing technique and can be further developed for other image processing methods.
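
    The pre/post comparison at the heart of such change analysis can be sketched as simple image differencing: flag a pixel as changed when its value differs by more than a noise threshold. The threshold of 30 grey levels below is an arbitrary placeholder, not a value from the paper:

```python
def changed_fraction(pre, post, threshold=30):
    """pre/post: equal-sized 2-D lists of grey-level pixel values for the
    same area before and after the event; returns the share of pixels
    whose absolute difference exceeds the threshold."""
    diffs = [abs(a - b)
             for row_a, row_b in zip(pre, post)
             for a, b in zip(row_a, row_b)]
    return sum(d > threshold for d in diffs) / len(diffs)
```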

  11. HER2 amplification in gastroesophageal adenocarcinoma: correlation of two antibodies using gastric cancer scoring criteria, H score, and digital image analysis with fluorescence in situ hybridization.

    PubMed

    Radu, Oana M; Foxwell, Tyler; Cieply, Kathleen; Navina, Sarah; Dacic, Sanja; Nason, Katie S; Davison, Jon M

    2012-04-01

    We assessed 103 resected gastroesophageal adenocarcinomas for HER2 amplification by fluorescence in situ hybridization (FISH) and 2 commercial immunohistochemical assays. Of 103, 30 (29%) were FISH-amplified. Both immunohistochemical assays had greater than 95% concordance with FISH. However, as a screening test for FISH amplification, the Ventana Medical Systems (Tucson, AZ) 4B5 antibody demonstrated superior sensitivity (87%) compared with the DAKO (Carpinteria, CA) A0485 (70%). Of the cases, 28 were immunohistochemically 3+ or immunohistochemically 2+/FISH-amplified with the 4B5 assay compared with only 22 cases with the A0485 assay, representing a large potential difference in patient eligibility for anti-HER2 therapy. Cases with low-level FISH amplification (HER2/CEP17, 2.2-4.0) express lower levels of HER2 protein compared with cases with high-level amplification (HER2/CEP17, ≥4.0), raising the possibility of a differential response to anti-HER2 therapy. The H score and digital image analysis may have a limited role in improving HER2 test performance.
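
    The low-level versus high-level distinction can be expressed directly from the HER2/CEP17 cut-offs quoted above (2.2 and 4.0); a minimal sketch, with the function name and category strings being our own labels:

```python
def her2_fish_category(her2_signals, cep17_signals):
    """Classify HER2 amplification from FISH signal counts:
    ratio < 2.2 not amplified, 2.2-4.0 low-level, >= 4.0 high-level."""
    ratio = her2_signals / cep17_signals
    if ratio < 2.2:
        return "not amplified"
    return "low-level" if ratio < 4.0 else "high-level"
```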

  12. Retinal Imaging and Image Analysis

    PubMed Central

    Abràmoff, Michael D.; Garvin, Mona K.; Sonka, Milan

    2011-01-01

    Many important eye diseases as well as systemic diseases manifest themselves in the retina. While a number of other anatomical structures contribute to the process of vision, this review focuses on retinal imaging and image analysis. Following a brief overview of the most prevalent causes of blindness in the industrialized world that includes age-related macular degeneration, diabetic retinopathy, and glaucoma, the review is devoted to retinal imaging and image analysis methods and their clinical implications. Methods for 2-D fundus imaging and techniques for 3-D optical coherence tomography (OCT) imaging are reviewed. Special attention is given to quantitative techniques for analysis of fundus photographs with a focus on clinically relevant assessment of retinal vasculature, identification of retinal lesions, assessment of optic nerve head (ONH) shape, building retinal atlases, and to automated methods for population screening for retinal diseases. A separate section is devoted to 3-D analysis of OCT images, describing methods for segmentation and analysis of retinal layers, retinal vasculature, and 2-D/3-D detection of symptomatic exudate-associated derangements, as well as to OCT-based analysis of ONH morphology and shape. Throughout the paper, aspects of image acquisition, image analysis, and clinical relevance are treated together considering their mutually interlinked relationships. PMID:21743764

  13. Analysis of the impact of safeguards criteria

    SciTech Connect

    Mullen, M.F.; Reardon, P.T.

    1981-01-01

    As part of the US Program of Technical Assistance to IAEA Safeguards, the Pacific Northwest Laboratory (PNL) was asked to assist in developing and demonstrating a model for assessing the impact of setting criteria for the application of IAEA safeguards. This report presents the results of PNL's work on the task. The report is in three parts: the first explains the technical approach and methodology, the second contains an example application of the methodology, and the third presents the conclusions of the study. PNL used the model and computer programs developed as part of Task C.5 (Estimation of Inspection Efforts) of the Program of Technical Assistance. The example application of the methodology involves low-enriched uranium conversion and fuel fabrication facilities. The effects of variations in seven parameters are considered: false alarm probability, goal probability of detection, detection goal quantity, the plant operator's measurement capability, the inspector's variables measurement capability, the inspector's attributes measurement capability, and annual plant throughput. Among the key results and conclusions of the analysis are the following: the variables with the greatest impact on the probability of detection are the inspector's measurement capability, the goal quantity, and the throughput; the variables with the greatest impact on inspection costs are the throughput, the goal quantity, and the goal probability of detection; there are important interactions between variables, that is, the effects of a given variable often depend on the level or value of some other variable, and with the methodology used in this study these interactions can be quantitatively analyzed; and reasonably good approximate prediction equations can be developed using the methodology described here.

  14. An Analysis of Stopping Criteria in Artificial Neural Networks

    DTIC Science & Technology

    1994-03-01

    AD-A278 491 — An Analysis of Stopping Criteria in Artificial Neural Networks. Thesis by Bruce Kostal, Captain, USAF, AFIT/GST/ENS/94M-07. Approved for public release; distribution unlimited.

  15. On Model Selection Criteria in Multimodel Analysis

    NASA Astrophysics Data System (ADS)

    Meyer, P. D.; Ye, M.; Neuman, S. P.

    2007-12-01

    Hydrologic systems are open and complex, rendering them prone to multiple conceptualizations and mathematical descriptions. There has been a growing tendency to postulate several alternative hydrologic models for a site and use model selection criteria to (a) rank these models, (b) eliminate some of them and/or (c) weigh and average predictions and statistics generated by multiple models. This has led to some debate among hydrogeologists about the merits and demerits of common model selection (also known as model discrimination or information) criteria such as AIC, AICc, BIC, and KIC and some lack of clarity about the proper interpretation and mathematical representation of each criterion. In particular, whereas we [Neuman, 2003; Ye et al., 2004, 2005; Meyer et al., 2007] have based our approach to multimodel hydrologic ranking and inference on the Bayesian criterion KIC (which reduces asymptotically to BIC), Poeter and Anderson [2005] have voiced a strong preference for the information-theoretic criterion AICc (which reduces asymptotically to AIC). Their preference stems in part from a perception that KIC and BIC require a "true" or "quasi-true" model to be in the set of alternatives while AIC and AICc are free of such an unreasonable requirement. We examine the model selection literature to find that (a) all published rigorous derivations of AIC and AICc require that the (true) model having generated the observational data be in the set of candidate models; (b) though BIC and KIC were originally derived by assuming that such a model is in the set, BIC has been rederived by Cavanaugh and Neath [1999] without the need for such an assumption; (c) KIC reduces to BIC as the number of observations becomes large relative to the number of adjustable model parameters, implying that it likewise does not require the existence of a true model in the set of alternatives; (d) if a true model is in the set, BIC and KIC select with probability one the true model as sample size
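
    For reference, the criteria debated above have simple closed forms (KIC additionally involves the Fisher information matrix and is omitted here). A small sketch, with ln_l the maximized log-likelihood, k the number of adjustable parameters, and n the sample size:

```python
import math

def aic(ln_l, k):
    """Akaike information criterion."""
    return -2 * ln_l + 2 * k

def aicc(ln_l, k, n):
    """Small-sample corrected AIC; reduces to AIC as n grows."""
    return aic(ln_l, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(ln_l, k, n):
    """Bayesian information criterion; KIC reduces to BIC for large n/k."""
    return -2 * ln_l + k * math.log(n)
```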

  16. Image Analysis and Modeling

    DTIC Science & Technology

    1976-03-01

    This report summarizes the results of the research program on Image Analysis and Modeling supported by the Defense Advanced Research Projects Agency...The objective is to achieve a better understanding of image structure and to use this knowledge to develop improved image models for use in image analysis and processing tasks such as information extraction, image enhancement and restoration, and coding. The ultimate objective of this research is

  17. GIS Based Multi-Criteria Decision Analysis For Cement Plant Site Selection For Cuddalore District

    NASA Astrophysics Data System (ADS)

    Chhabra, A.

    2015-12-01

    India's cement industry is a vital part of its economy, providing employment to more than a million people. On the back of growing demand from increased construction and infrastructure activity, the cement market in India is expected to grow at a compound annual growth rate (CAGR) of 8.96 percent during 2014-2019. In this study, GIS-based spatial Multi-Criteria Decision Analysis (MCDA) is used to determine the optimum and alternative sites for a cement plant. The technique uses a set of evaluation criteria that are quantifiable indicators of the extent to which decision objectives are realized. In combination with available GIS (Geographical Information System) and local ancillary data, the outputs of image analysis serve as input for the multi-criteria decision making system. The criteria were represented as GIS layers, which were combined in the GIS analysis to identify several potential sites. Satellite imagery from LANDSAT 8 and ASTER DEM data were used for the analysis. Cuddalore District in Tamil Nadu was selected as the study site because limestone mining is already being carried out in that region, meeting the raw material criterion for cement production. Other criteria considered were land use land cover (LULC) classification (built-up area, river, forest cover, wet land, barren land, harvest land and agriculture land), slope, and proximity to road, railway and drainage networks.
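
    The core of such an MCDA is a weighted overlay: each candidate cell gets a score per criterion, and the weighted sum ranks the sites. A minimal sketch; the criterion names, weights, and scores below are hypothetical, not the study's values:

```python
def suitability(cell_scores, weights):
    """cell_scores: dict criterion -> normalised score in [0, 1] for one
    raster cell; weights: dict criterion -> importance (summing to 1)."""
    return sum(weights[c] * cell_scores[c] for c in weights)

def rank_sites(sites, weights):
    """sites: dict site_name -> cell_scores; returns names, best first."""
    return sorted(sites, key=lambda s: suitability(sites[s], weights),
                  reverse=True)
```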

  18. Image-analysis library

    NASA Technical Reports Server (NTRS)

    1980-01-01

    MATHPAC image-analysis library is collection of general-purpose mathematical and statistical routines and special-purpose data-analysis and pattern-recognition routines for image analysis. MATHPAC library consists of Linear Algebra, Optimization, Statistical-Summary, Densities and Distribution, Regression, and Statistical-Test packages.

  19. Basics of image analysis

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hyperspectral imaging technology has emerged as a powerful tool for quality and safety inspection of food and agricultural products and in precision agriculture over the past decade. Image analysis is a critical step in implementing hyperspectral imaging technology; it is aimed to improve the qualit...

  20. 10 CFR 434.607 - Life cycle cost analysis criteria.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 3 2014-01-01 2014-01-01 false Life cycle cost analysis criteria. 434.607 Section 434.607 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Compliance Alternative § 434.607 Life cycle...

  1. 10 CFR 434.607 - Life cycle cost analysis criteria.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 10 Energy 3 2013-01-01 2013-01-01 false Life cycle cost analysis criteria. 434.607 Section 434.607 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Compliance Alternative § 434.607 Life cycle...

  2. 10 CFR 434.607 - Life cycle cost analysis criteria.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 3 2011-01-01 2011-01-01 false Life cycle cost analysis criteria. 434.607 Section 434.607 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Compliance Alternative § 434.607 Life cycle...

  3. 10 CFR 434.607 - Life cycle cost analysis criteria.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 3 2012-01-01 2012-01-01 false Life cycle cost analysis criteria. 434.607 Section 434.607 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Compliance Alternative § 434.607 Life cycle...

  4. 10 CFR 434.607 - Life cycle cost analysis criteria.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Life cycle cost analysis criteria. 434.607 Section 434.607 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Compliance Alternative § 434.607 Life cycle...

  5. Improvement and Extension of Shape Evaluation Criteria in Multi-Scale Image Segmentation

    NASA Astrophysics Data System (ADS)

    Sakamoto, M.; Honda, Y.; Kondo, A.

    2016-06-01

    Over the last decade, multi-scale image segmentation has attracted particular interest and is being used in practice for object-based image analysis. In this study, we address issues in multi-scale image segmentation, especially improving the validity of merging and the variety of derived region shapes. First, we introduced constraints on the application of the spectral criterion that suppress excessive merging between dissimilar regions. Second, we extended the smoothness criterion by modifying the definition of the extent of the object, to control shape diversity. Third, we developed a new shape criterion called aspect ratio. This criterion improves the reproducibility of object shapes so that they match the actual objects of interest: it constrains the aspect ratio of the object's bounding box while keeping the properties controlled by the conventional shape criteria. These improvements and extensions lead to more accurate, flexible, and diverse segmentation results according to the shape characteristics of the target of interest. Furthermore, we investigated a technique for quantitative and automatic parameterization in multi-scale image segmentation. This is achieved by comparing the segmentation result with a training area specified in advance, either maximizing the average area of the derived objects or satisfying an evaluation index called the F-measure. It has thus been possible to automate parameterization to suit the objectives, especially with respect to shape reproducibility.
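
    A bounding-box aspect-ratio constraint of the kind described can be sketched as below. The max_ratio ceiling is a hypothetical parameterisation for illustration; the paper's exact weighting of the criterion is not reproduced:

```python
def aspect_ratio_criterion(pixels, max_ratio=3.0):
    """pixels: list of (row, col) coordinates of one segmented region.
    Returns the bounding-box aspect ratio and whether a region of this
    shape would pass the ceiling (e.g. accept a proposed merge)."""
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    height = max(rows) - min(rows) + 1
    width = max(cols) - min(cols) + 1
    ratio = max(height, width) / min(height, width)
    return ratio, ratio <= max_ratio
```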

  6. A Study on the Basic Criteria for Selecting Heterogeneity Parameters of F18-FDG PET Images

    PubMed Central

    Forgacs, Attila; Pall Jonsson, Hermann; Dahlbom, Magnus; Daver, Freddie; D. DiFranco, Matthew; Opposits, Gabor; K. Krizsan, Aron; Garai, Ildiko; Czernin, Johannes; Varga, Jozsef; Tron, Lajos; Balkay, Laszlo

    2016-01-01

    Textural analysis might give new insights into the quantitative characterization of metabolically active tumors. More than thirty textural parameters have been investigated in previous F18-FDG studies. The purpose of the paper is to define basic requirements as a selection strategy to identify the most appropriate heterogeneity parameters for measuring textural features. Our predefined requirements were: a reliable heterogeneity parameter has to be volume independent, reproducible, and suitable for expressing quantitatively the degree of heterogeneity. Based on these criteria, we compared various suggested measures of homogeneity. A homogeneous cylindrical phantom was measured on three different PET/CT scanners using the commonly used protocol. In addition, a custom-made inhomogeneous tumor insert placed into the NEMA image quality phantom was imaged with a set of acquisition times and several different reconstruction protocols. PET data of 65 patients with proven lung lesions were retrospectively analyzed as well. Four heterogeneity parameters out of 27 were found to be the most attractive for characterizing the textural properties of metabolically active tumors in FDG PET images: Entropy, Contrast, Correlation, and Coefficient of Variation. These parameters were independent of delineated tumor volume (bigger than 25–30 ml), provided reproducible values (relative standard deviation < 10%), and showed high sensitivity to changes in heterogeneity. Phantom measurements are a viable way to test the reliability of heterogeneity parameters that would be of interest to nuclear imaging clinicians. PMID:27736888
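
    Two of the retained parameters have direct first-order definitions; a minimal sketch of Coefficient of Variation and a histogram-based Entropy over voxel intensities (the bin count of 8 is our placeholder, not the study's discretisation):

```python
import math
from collections import Counter

def coefficient_of_variation(values):
    """SD/mean of voxel intensities inside the delineated tumour volume."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return math.sqrt(var) / mean

def intensity_entropy(values, n_bins=8):
    """Shannon entropy of a discretised intensity histogram; higher
    values indicate a more heterogeneous uptake pattern."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0  # guard against a flat region
    bins = Counter(min(int((v - lo) / width), n_bins - 1) for v in values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in bins.values())
```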

  7. A Study on the Basic Criteria for Selecting Heterogeneity Parameters of F18-FDG PET Images.

    PubMed

    Forgacs, Attila; Pall Jonsson, Hermann; Dahlbom, Magnus; Daver, Freddie; D DiFranco, Matthew; Opposits, Gabor; K Krizsan, Aron; Garai, Ildiko; Czernin, Johannes; Varga, Jozsef; Tron, Lajos; Balkay, Laszlo

    2016-01-01

    Textural analysis might give new insights into the quantitative characterization of metabolically active tumors. More than thirty textural parameters have already been investigated in previous F18-FDG studies. The purpose of this paper is to define basic requirements, as a selection strategy, for identifying the most appropriate heterogeneity parameters for measuring textural features. Our predefined requirements were that a reliable heterogeneity parameter has to be volume independent, reproducible, and suitable for quantitatively expressing the degree of heterogeneity. Based on these criteria, we compared various suggested measures of homogeneity. A homogeneous cylindrical phantom was measured on three different PET/CT scanners using a commonly used protocol. In addition, a custom-made inhomogeneous tumor insert placed into the NEMA image quality phantom was imaged with a set of acquisition times and several different reconstruction protocols. PET data of 65 patients with proven lung lesions were retrospectively analyzed as well. Four heterogeneity parameters out of 27 were found to be the most attractive for characterizing the textural properties of metabolically active tumors in FDG PET images: Entropy, Contrast, Correlation, and Coefficient of Variation. These parameters were independent of the delineated tumor volume (larger than 25-30 ml), provided reproducible values (relative standard deviation < 10%), and showed high sensitivity to changes in heterogeneity. Phantom measurements are a viable way to test the reliability of heterogeneity parameters that would be of interest to nuclear imaging clinicians.

  8. Resolution criteria in double-slit microscopic imaging experiments

    NASA Astrophysics Data System (ADS)

    You, Shangting; Kuang, Cuifang; Zhang, Baile

    2016-09-01

    Double-slit imaging is widely used for verifying the resolution of high-resolution and super-resolution microscopies. However, due to fabrication limits, the slit width is generally non-negligible, which can affect the claimed resolution. In this paper we theoretically calculate the electromagnetic field distribution inside and near a metallic double slit using a waveguide mode expansion method, and obtain the far-field image by vectorial Fourier optics. We find that the slit width has minimal influence when the illuminating light is polarized parallel to the slits. In this case, the claimed resolution should be based on the center-to-center distance of the double slit.

  9. Improving diagnostic criteria for Propionibacterium acnes osteomyelitis: a retrospective analysis.

    PubMed

    Asseray, Nathalie; Papin, Christophe; Touchais, Sophie; Bemer, Pascale; Lambert, Chantal; Boutoille, David; Tequi, Brigitte; Gouin, François; Raffi, François; Passuti, Norbert; Potel, Gilles

    2010-07-01

    The identification of Propionibacterium acnes in cultures of bone and joint samples is always difficult to interpret because of the ubiquity of this microorganism. The aim of this study was to propose a diagnostic strategy to distinguish infections from contaminations. This was a retrospective analysis of the charts of all patients with ≥1 deep sample culture-positive for P. acnes. Every criterion was tested for sensitivity, specificity, and positive likelihood ratio, and then the diagnostic probability of combinations of criteria was calculated. Among 65 patients, 52 (80%) were considered truly infected with P. acnes, a diagnosis based on a multidisciplinary process. The most valuable diagnostic criteria were: ≥2 positive deep samples, peri-operative findings (necrosis, hardware loosening, etc.), and ≥2 surgical procedures. However, no single criterion was sufficient to ascertain the diagnosis. The following combinations of criteria had a diagnostic probability of >90%: ≥2 positive cultures + 1 criterion among peri-operative findings, local signs of infection, ≥2 previous operations, and orthopaedic devices; or 1 positive culture + 3 criteria among peri-operative findings, local signs of infection, ≥2 previous surgical operations, orthopaedic devices, and inflammatory syndrome. The diagnosis of P. acnes osteomyelitis was greatly improved by combining different criteria, allowing differentiation between infection and contamination.
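    The per-criterion statistics named above follow directly from a 2x2 table of criterion result versus final diagnosis. A minimal sketch with illustrative counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and positive likelihood ratio from a 2x2
    table: rows = criterion positive/negative, columns = infected/contaminated."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_positive = sensitivity / (1.0 - specificity)
    return sensitivity, specificity, lr_positive

# Hypothetical counts: a criterion positive in 45 of 50 infected patients
# and in 2 of 13 contaminated ones.
sens, spec, lr = diagnostic_metrics(tp=45, fp=2, fn=5, tn=11)
```

    A likelihood ratio well above 1 is what makes a criterion useful for "ruling in" infection; combining criteria, as the study does, multiplies the evidence each contributes.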

  10. Resolution criteria in double-slit microscopic imaging experiments

    PubMed Central

    You, Shangting; Kuang, Cuifang; Zhang, Baile

    2016-01-01

    Double-slit imaging is widely used for verifying the resolution of high-resolution and super-resolution microscopies. However, due to fabrication limits, the slit width is generally non-negligible, which can affect the claimed resolution. In this paper we theoretically calculate the electromagnetic field distribution inside and near a metallic double slit using a waveguide mode expansion method, and obtain the far-field image by vectorial Fourier optics. We find that the slit width has minimal influence when the illuminating light is polarized parallel to the slits. In this case, the claimed resolution should be based on the center-to-center distance of the double slit. PMID:27640808

  11. Multisensor Image Analysis System

    DTIC Science & Technology

    1993-04-15

    AD-A263 679. Multisensor Image Analysis System, Final Report. Authors: Dr. G. M. Flachs, Dr. Michael Giles, Dr. Jay Jordan, Dr. Eric...

  12. Coastal zone management with stochastic multi-criteria analysis.

    PubMed

    Félix, A; Baquerizo, A; Santiago, J M; Losada, M A

    2012-12-15

    The methodology for coastal management proposed in this study takes into account the physical processes of the coastal system and the stochastic nature of forcing agents. Simulation techniques are used to assess the uncertainty in the performance of a set of predefined management strategies based on different criteria representing the main concerns of interest groups. This statistical information as well as the distribution function that characterizes the uncertainty regarding the preferences of the decision makers is fed into a stochastic multi-criteria acceptability analysis that provides the probability of alternatives obtaining certain ranks and also calculates the preferences of a typical decision maker who supports an alternative. This methodology was applied as a management solution for Playa Granada in the Guadalfeo River Delta (Granada, Spain), where the construction of a dam in the river basin is causing severe erosion. The analysis of shoreline evolution took into account the coupled action of atmosphere, ocean, and land agents and their intrinsic stochastic character. This study considered five different management strategies. The criteria selected for the analysis were the economic benefits for three interest groups: (i) indirect beneficiaries of tourist activities; (ii) beach homeowners; and (iii) the administration. The strategies were ranked according to their effectiveness, and the relative importance given to each criterion was obtained.
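    The stochastic multi-criteria acceptability analysis (SMAA) step can be sketched as a Monte Carlo estimate of rank acceptabilities: for each alternative, the share of sampled weight vectors under which it attains each rank. The strategies, criteria, and scores below are hypothetical placeholders, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical criterion scores: rows = management strategies, columns =
# benefits to (tourism operators, beach homeowners, administration).
scores = np.array([
    [0.8, 0.2, 0.5],
    [0.4, 0.9, 0.3],
    [0.6, 0.6, 0.6],
])

def rank_acceptability(scores, n_samples=10000):
    """SMAA rank-acceptability matrix: acc[a, r] = fraction of uniformly
    sampled weight vectors under which alternative a attains rank r."""
    n_alt, n_crit = scores.shape
    acc = np.zeros((n_alt, n_alt))
    for _ in range(n_samples):
        w = rng.dirichlet(np.ones(n_crit))  # uniform on the weight simplex
        order = np.argsort(-scores @ w)     # alternatives, best to worst
        for rank, alt in enumerate(order):
            acc[alt, rank] += 1
    return acc / n_samples
```

    Each row of the result sums to 1; a strategy with high acceptability for rank 0 is robustly preferred across the uncertainty in decision-makers' weightings.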

  13. Investigation of various criteria for evaluation of aluminum thin foil ''smart sensors'' images

    NASA Astrophysics Data System (ADS)

    Panin, S. V.; Eremin, A. V.; Lyubutin, P. S.; Burkov, M. V.

    2014-10-01

    Various criteria for processing aluminum foil ''smart sensor'' images for the fatigue evaluation of carbon fiber reinforced polymer (CFRP) were analyzed. These are informative parameters used to assess image quality and surface relief and, accordingly, to characterize the fatigue damage state of CFRP. The sensitivity of all criteria to distortions, in particular Gaussian noise, blurring, and JPEG compression, was investigated. The main purpose of the research is the search for informative parameters for fatigue evaluation that are least sensitive to such distortions.
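    The sensitivity testing described above can be sketched as follows; `mean_gradient` is a stand-in informative parameter chosen for illustration, not one of the paper's criteria, and the noise/blur models are simple placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_gradient(img):
    """A simple surface-relief measure: mean gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

img = rng.random((64, 64))                               # synthetic relief image
noisy = img + rng.normal(0.0, 0.1, img.shape)            # Gaussian-noise distortion
blurred = (img + np.roll(img, 1, 0) + np.roll(img, 1, 1)) / 3  # crude blur

# Relative change of the parameter under each distortion: the smaller the
# change, the less distortion-sensitive (and more useful) the criterion.
sensitivity_noise = abs(mean_gradient(noisy) - mean_gradient(img)) / mean_gradient(img)
sensitivity_blur = abs(mean_gradient(blurred) - mean_gradient(img)) / mean_gradient(img)
```

    Running each candidate criterion through the same battery of distortions and ranking them by these relative changes mirrors the selection procedure the abstract describes.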

  14. Description, Recognition and Analysis of Biological Images

    SciTech Connect

    Yu Donggang; Jin, Jesse S.; Luo Suhuai; Pham, Tuan D.; Lai Wei

    2010-01-25

    Description, recognition, and analysis of biological images play an important role in describing and understanding the related biological information. Color images are separated by color reduction. A new and efficient linearization algorithm is introduced based on criteria of the difference chain code. A series of critical points is obtained from the linearized lines. The curvature angle, linearity, maximum linearity, convexity, concavity, and bend angle of the linearized lines are calculated from the starting line to the end line along all smoothed contours. This method can be used for shape description and recognition. The analysis, decision, and classification of biological images are based on the description of morphological structures, color information, and prior knowledge, which are associated with each other. The efficiency of the algorithms is demonstrated in two applications: the description, recognition, and analysis of color flower images, and the dynamic description, recognition, and analysis of cell-cycle images.
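    The difference chain code underlying the linearization step can be sketched in a few lines; the paper's specific linearization criteria are not reproduced here, only the basic operation on an 8-connected Freeman chain code:

```python
def chain_code_differences(codes):
    """First differences of an 8-connected Freeman chain code, taken
    modulo 8 so that turns are rotation-invariant. Runs of zeros mark
    straight (linearizable) segments; nonzero values mark direction
    changes, i.e. candidate critical points."""
    return [(b - a) % 8 for a, b in zip(codes, codes[1:])]
```

    For example, the code sequence 0, 0, 1, 2, 2 yields differences 0, 1, 1, 0: a straight run, two unit turns, then another straight run.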

  15. Release criteria and pathway analysis for radiological remediation

    SciTech Connect

    Subbaraman, G.; Tuttle, R.J.; Oliver, B.M. . Rocketdyne Div.); Devgun, J.S. )

    1991-01-01

    Site-specific activity concentrations were derived for soils contaminated with mixed fission products (MFP) or uranium-processing residues, using the Department of Energy (DOE) pathway analysis computer code RESRAD, at four different sites. The concentrations and other radiological parameters, such as limits on the background-subtracted gamma exposure rate, were used as the basis to arrive at release criteria for two of the sites. Valid statistical parameters, calculated for the distribution of radiological data obtained from site surveys, were then compared with the criteria to determine releasability or the need for further decontamination. For the other two sites, RESRAD has been used as a preremediation planning tool to derive residual material guidelines for uranium. 11 refs., 4 figs., 3 tabs.

  16. Discrimination of Different Brain Metastases and Primary CNS Lymphomas Using Morphologic Criteria and Diffusion Tensor Imaging.

    PubMed

    Bette, S; Wiestler, B; Delbridge, C; Huber, T; Boeckh-Behrens, T; Meyer, B; Zimmer, C; Gempt, J; Kirschke, J

    2016-12-01

    Purpose: Brain metastases are a common complication of cancer and occur in about 15-40% of patients with malignancies. The aim of this retrospective study was to differentiate between metastases from different primary tumors/CNS lymphomas using morphologic criteria, fractional anisotropy (FA), and the apparent diffusion coefficient (ADC). Materials and Methods: Morphologic criteria such as hemorrhage, cysts, pattern of contrast enhancement, and location were reported in 200 consecutive patients with brain metastases/primary CNS lymphomas. FA and ADC values were measured in regions of interest (ROIs) placed in the contrast-enhancing tumor part, the necrosis, and the non-enhancing peritumoral region (NEPTR). Differences between histopathological subtypes of metastases were analyzed using non-parametric tests, decision trees, and hierarchical clustering analysis. Results: Significant differences were found in morphologic criteria such as hemorrhage or pattern of contrast enhancement. In diffusion measurements, significant differences between the different tumor entities were found only for ADC in the contrast-enhancing tumor part. Among single tumor entities, primary CNS lymphomas showed significantly lower median ADC values in the contrast-enhancing tumor part (ADClymphoma 0.92 [0.83-1.07] vs. ADCno_lymphoma 1.35 [1.10-1.64], P = 0.001). Further differentiation between types of metastases was not possible using FA and ADC. Conclusion: There were morphologic differences among the main subtypes of brain metastases/CNS lymphomas. However, due to the high variability of common types of metastases and low specificity, prospective differentiation remained challenging. DTI including FA and ADC was not a reliable tool for differentiation between histopathological subtypes of brain metastases, except for CNS lymphomas, which showed lower ADC values. Biopsy, surgery, and staging remain essential for diagnosis.

  17. Use of Model-Segmentation Criteria in Clustering and Segmentation of Time Series and Digital Images.

    DTIC Science & Technology

    1983-05-05

    Use of Model-Segmentation Criteria in Clustering and Segmentation of Time Series and Digital Images, by Stanley L... This article treats the development and use of... multidimensional criteria and of the number of segment classes in the segmentation of time series and digital images. Criteria such as those of Akaike...

  18. [Semen analysis: spermiogram according to WHO 2010 criteria].

    PubMed

    Gottardo, F; Kliesch, S

    2011-01-01

    Semen analysis plays a key role in the diagnostics of male infertility. Semen analysis has to be performed according to World Health Organisation (WHO) criteria. The updated version of the WHO manual was completed at the end of 2009 and published in 2010. Standard procedures in semen analysis include evaluation of sperm concentration, motility, morphology and vitality. In this new version particular attention has been paid to internal and external quality control, helping to identify and correct incidental and systematic errors both in routine analysis as well as in the field of research. The new manual describes all laboratory solutions, procedures and calculation formulas, and focuses on the definition of cryptozoospermia or azoospermia. A chapter concerning cryopreservation of spermatozoa has been newly integrated. The following overview presents the most important aspects of the updated WHO manual.

  19. Image Retrieval: Theoretical Analysis and Empirical User Studies on Accessing Information in Images.

    ERIC Educational Resources Information Center

    Ornager, Susanne

    1997-01-01

    Discusses indexing and retrieval for effective searches of digitized images. Reports on an empirical study of criteria for analyzing and indexing digitized images, and of the different types of user queries made in newspaper image archives in Denmark. Concludes that it is necessary that the indexing represent both a factual and an expressional…

  20. Systemic Sclerosis Classification Criteria: Developing methods for multi-criteria decision analysis with 1000Minds

    PubMed Central

    Johnson, Sindhu R.; Naden, Raymond P.; Fransen, Jaap; van den Hoogen, Frank; Pope, Janet E.; Baron, Murray; Tyndall, Alan; Matucci-Cerinic, Marco; Denton, Christopher P.; Distler, Oliver; Gabrielli, Armando; van Laar, Jacob M.; Mayes, Maureen; Steen, Virginia; Seibold, James R.; Clements, Phillip; Medsger, Thomas A.; Carreira, Patricia E.; Riemekasten, Gabriela; Chung, Lorinda; Fessler, Barri J.; Merkel, Peter A.; Silver, Richard; Varga, John; Allanore, Yannick; Mueller-Ladner, Ulf; Vonk, Madelon C.; Walker, Ulrich A.; Cappelli, Susanna; Khanna, Dinesh

    2014-01-01

    Objective Classification criteria for systemic sclerosis (SSc) are being developed. The objectives were to: develop an instrument for collating case-data and evaluate its sensibility; use forced-choice methods to reduce and weight criteria; and explore agreement between experts on the probability that cases were classified as SSc. Study Design and Setting A standardized instrument was tested for sensibility. The instrument was applied to 20 cases covering a range of probabilities that each had SSc. Experts rank-ordered cases from highest to lowest probability; reduced and weighted the criteria using forced-choice methods; and re-ranked the cases. Consistency in rankings was evaluated using intraclass correlation coefficients (ICC). Results Experts endorsed clarity (83%), comprehensibility (100%), and face and content validity (100%). Criteria were weighted (points): finger skin thickening (14–22), finger-tip lesions (9–21), friction rubs (21), finger flexion contractures (16), pulmonary fibrosis (14), SSc-related antibodies (15), Raynaud’s phenomenon (13), calcinosis (12), pulmonary hypertension (11), renal crisis (11), telangiectasia (10), abnormal nailfold capillaries (10), esophageal dilation (7) and puffy fingers (5). The ICC across experts was 0.73 (95% CI 0.58–0.86) and improved to 0.80 (95% CI 0.68–0.90). Conclusions Using a sensible instrument and forced-choice methods, the number of criteria was reduced by 39% (from 23 to 14) and the criteria were weighted. Our methods reflect the rigors of measurement science and serve as a template for developing classification criteria. PMID:24721558

  1. Multi-criteria decision analysis: Limitations, pitfalls, and practical difficulties

    SciTech Connect

    Kujawski, Edouard

    2003-02-01

    The 2002 Winter Olympics women's figure skating competition is used as a case study to illustrate some of the limitations, pitfalls, and practical difficulties of Multi-Criteria Decision Analysis (MCDA). The paper compares several widely used models for synthesizing multiple attributes into a single aggregate value. The various MCDA models can provide conflicting rankings of the alternatives for a common set of information, even under states of certainty. Analysts involved in MCDA need to deal with two challenging tasks: (1) selecting an appropriate analysis method, and (2) properly interpreting the results. An additional trap is the availability of software tools that implement specific MCDA models and can beguile the user with quantitative scores. These conclusions are independent of the decision domain, and they should help foster better MCDA practices in many fields, including systems engineering trade studies.
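    The central point — that different aggregation models can rank the same alternatives differently even under certainty — is easy to demonstrate. A minimal sketch with hypothetical scores (not the skating data) comparing a weighted-sum and a weighted-product model:

```python
import math

# Two alternatives scored on two equally weighted attributes (illustrative
# numbers only): A is strong on one attribute and very weak on the other;
# B is balanced.
weights = [0.5, 0.5]
scores = {"A": [0.92, 0.10], "B": [0.50, 0.50]}

def weighted_sum(vals, w):
    """Additive aggregation: compensatory, rewards strong attributes."""
    return sum(wi * vi for wi, vi in zip(w, vals))

def weighted_product(vals, w):
    """Multiplicative aggregation: penalizes near-zero attributes."""
    return math.prod(vi ** wi for wi, vi in zip(w, vals))

sum_winner = max(scores, key=lambda a: weighted_sum(scores[a], weights))
prod_winner = max(scores, key=lambda a: weighted_product(scores[a], weights))
```

    Here the additive model ranks A first (0.51 vs. 0.50) while the multiplicative model ranks B first (0.50 vs. about 0.30), from identical inputs — exactly the kind of model-dependent reversal the paper warns about.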

  2. [New ASAS criteria for the diagnosis of spondyloarthritis: diagnosing sacroiliitis by magnetic resonance imaging].

    PubMed

    Banegas Illescas, M E; López Menéndez, C; Rozas Rodríguez, M L; Fernández Quintero, R M

    2014-01-01

    Radiographic sacroiliitis has been included in the diagnostic criteria for spondyloarthropathies since the Rome criteria were defined in 1961. However, in the last ten years, magnetic resonance imaging (MRI) has proven more sensitive in the evaluation of the sacroiliac joints in patients with suspected spondyloarthritis and symptoms of sacroiliitis; MRI has proven its usefulness not only for diagnosis of this disease, but also for the follow-up of the disease and response to treatment in these patients. In 2009, The Assessment of SpondyloArthritis international Society (ASAS) developed a new set of criteria for classifying and diagnosing patients with spondyloarthritis; one important development with respect to previous classifications is the inclusion of MRI positive for sacroiliitis as a major diagnostic criterion. This article focuses on the radiologic part of the new classification. We describe and illustrate the different alterations that can be seen on MRI in patients with sacroiliitis, pointing out the limitations of the technique and diagnostic pitfalls.

  3. Clinical, Neurocognitive, Structural Imaging and Dermatoglyphics in Schizophrenia According to Kraepelin Criteria

    PubMed Central

    GÜLEÇ, Hüseyin; ULUSOY KAYMAK, Semra; BİLİCİ, Mustafa; GANGAL, Ali; KAYIKÇIOĞLU, Temel; SARI, Ahmet; TAN, Üner

    2013-01-01

    Introduction A century ago, Kraepelin stated that the distinctive feature of schizophrenia was progressive deterioration. The Kraepelin criteria for schizophrenia are: (1) continuous hospitalization or complete dependence on others for obtaining basic necessities of life, (2) unemployment, and (3) no remission for the past five years. We aimed to determine the clinical appearance and structural biological features of Kraepelinian schizophrenia. Methods The sample consisted of 17 Kraepelinian patients, 30 non-Kraepelinian schizophrenic patients, and 43 healthy controls. The Clinical Global Impressions (CGI) and the Positive and Negative Syndrome Scales (PANSS) were used for clinical assessment. The Frontal Assessment Battery (FAB), the Verbal Fluency test, and the Color Trail Test (CTT) were included in the cognitive battery. Brain magnetic resonance imaging and dermatoglyphic measurements were performed for structural features. Results Duration of illness, hospitalization, suicide attempts, admission type, presence of a stressor, and treatment choice were similar between the two patient groups. Treatment resistance and a family history of schizophrenia were more common in Kraepelinian patients. PANSS and CGI subscale scores were also higher in this group. Only category fluency and CTT-I differed in Kraepelinian patients in comparison to the other patient group. Structural findings did not differ between the three groups. Conclusion Category fluency, which was lower in Kraepelinian patients, is an important marker of a degenerative process. The combination of severe clinical symptoms, family history of psychiatric illness, and nonresponse to treatment in this particular group of patients points to the need for further studies including cluster analysis in their methodology.

  4. A Speedy Cardiovascular Diseases Classifier Using Multiple Criteria Decision Analysis

    PubMed Central

    Lee, Wah Ching; Hung, Faan Hei; Tsang, Kim Fung; Tung, Hoi Ching; Lau, Wing Hong; Rakocevic, Veselin; Lai, Loi Lei

    2015-01-01

    Each year, some 30 percent of global deaths are caused by cardiovascular diseases. This figure is worsening due to both the increasing elderly population and severe shortages of medical personnel. The development of a cardiovascular diseases classifier (CDC) for auto-diagnosis will help solve the problem. Former CDCs did not achieve quick evaluation of cardiovascular diseases. In this letter, a new CDC to achieve speedy detection is investigated. This investigation incorporates analytic hierarchy process (AHP)-based multiple criteria decision analysis (MCDA) to develop feature vectors using a Support Vector Machine. The MCDA facilitates the efficient assignment of appropriate weightings to potential patients, thus scaling down the number of features. Since the new CDC adopts only the most meaningful features for discrimination between healthy persons and cardiovascular disease patients, speedy detection of cardiovascular diseases has been successfully implemented. PMID:25587978
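    The AHP step assigns criterion weights from pairwise comparisons. A minimal sketch of the standard eigenvector method; the comparison matrix and the three-feature setup are illustrative assumptions, not the letter's actual feature set:

```python
import numpy as np

# Hypothetical Saaty-scale pairwise comparison matrix for three clinical
# features (reciprocal by construction: A[j, i] = 1 / A[i, j]).
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])

def ahp_weights(A):
    """AHP criterion weights: the normalized principal eigenvector of the
    pairwise comparison matrix (Perron vector, so all entries positive)."""
    eigvals, eigvecs = np.linalg.eig(A)
    principal = eigvecs[:, np.argmax(eigvals.real)].real
    return principal / principal.sum()

w = ahp_weights(A)
```

    Features with the smallest weights can then be dropped before training the SVM, which is the feature-reduction role MCDA plays in the abstract.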

  5. Image analysis library software development

    NASA Technical Reports Server (NTRS)

    Guseman, L. F., Jr.; Bryant, J.

    1977-01-01

    The Image Analysis Library consists of a collection of general purpose mathematical/statistical routines and special purpose data analysis/pattern recognition routines basic to the development of image analysis techniques for support of current and future Earth Resources Programs. Work was done to provide a collection of computer routines and associated documentation which form a part of the Image Analysis Library.

  6. Digital Image Analysis of Cereals

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Image analysis is the extraction of meaningful information from images, mainly digital images by means of digital processing techniques. The field was established in the 1950s and coincides with the advent of computer technology, as image analysis is profoundly reliant on computer processing. As t...

  7. Medical Image Analysis Facility

    NASA Technical Reports Server (NTRS)

    1978-01-01

    To improve the quality of photos sent to Earth by unmanned spacecraft, NASA's Jet Propulsion Laboratory (JPL) developed a computerized image enhancement process that brings out detail not visible in the basic photo. JPL is now applying this technology to biomedical research in its Medical Image Analysis Facility, which employs computer enhancement techniques to analyze x-ray films of internal organs, such as the heart and lung. A major objective is the study of the effects of stress on persons with heart disease. In animal tests, computerized image processing is being used to study coronary artery lesions and the degree to which they reduce arterial blood flow when stress is applied. The photos illustrate the enhancement process. The upper picture is an x-ray photo in which the artery (dotted line) is barely discernible; in the post-enhancement photo at right, the whole artery and the lesions along its wall are clearly visible. The Medical Image Analysis Facility offers a faster means of studying the effects of complex coronary lesions in humans, and the research now being conducted on animals is expected to have important application to the diagnosis and treatment of human coronary disease. Other uses of the facility's image processing capability include analysis of muscle biopsy and pap smear specimens, and study of the microscopic structure of fibroprotein in the human lung. Working with JPL on experiments are NASA's Ames Research Center, the University of Southern California School of Medicine, and Rancho Los Amigos Hospital, Downey, California.

  8. Engineering design criteria for an image intensifier/image converter camera

    NASA Technical Reports Server (NTRS)

    Sharpsteen, J. T.; Lund, D. L.; Stoap, L. J.; Solheim, C. D.

    1976-01-01

    The design, display, and evaluation of an image intensifier/image converter camera which can be utilized in various requirements of space shuttle experiments are described. An image intensifier tube was utilized in combination with two brassboards as a power supply and used for evaluation of night photography in the field. Pictures were obtained showing field details which would have been indistinguishable to the naked eye or to an ordinary camera.

  9. Criteria for High Quality Biology Teaching: An Analysis

    ERIC Educational Resources Information Center

    Tasci, Guntay

    2015-01-01

    This study aims to analyze the process under which biology lessons are taught in terms of teaching quality criteria (TQC). Teaching quality is defined as the properties of efficient teaching and is considered to be the criteria used to measure teaching quality both in general and specific to a field. The data were collected through classroom…

  10. Interactive Image Analysis System Design,

    DTIC Science & Technology

    1982-12-01

    This report describes a design for an interactive image analysis system (IIAS), which implements terrain data extraction techniques. The design employs commercially available, state-of-the-art minicomputers and image display devices with proven software to achieve a cost-effective, reliable image analysis system. Additionally, the system is fully capable of supporting many generic types of image analysis and data processing, and is modularly...

  11. Parallel Algorithms for Image Analysis.

    DTIC Science & Technology

    1982-06-01

    Parallel Algorithms for Image Analysis. Technical report TR-1180. Author: Azriel Rosenfeld. Grant: AFOSR-77-3271. Keywords: image processing; image analysis; parallel processing; cellular computers.

  12. Brain Imaging Analysis

    PubMed Central

    BOWMAN, F. DUBOIS

    2014-01-01

    The increasing availability of brain imaging technologies has led to intense neuroscientific inquiry into the human brain. Studies often investigate brain function related to emotion, cognition, language, memory, and numerous other externally induced stimuli as well as resting-state brain function. Studies also use brain imaging in an attempt to determine the functional or structural basis for psychiatric or neurological disorders and, with respect to brain function, to further examine the responses of these disorders to treatment. Neuroimaging is a highly interdisciplinary field, and statistics plays a critical role in establishing rigorous methods to extract information and to quantify evidence for formal inferences. Neuroimaging data present numerous challenges for statistical analysis, including the vast amounts of data collected from each individual and the complex temporal and spatial dependence present. We briefly provide background on various types of neuroimaging data and analysis objectives that are commonly targeted in the field. We present a survey of existing methods targeting these objectives and identify particular areas offering opportunities for future statistical contribution. PMID:25309940

  13. Multi-Criteria Analysis in Naval Ship Design

    DTIC Science & Technology

    2005-03-01

    ...the cost with respect to the different designs. See Appendix F for the MATLAB code of the model. Numerous optimization problems involve systems with multiple and often contradictory criteria. Such contradictory criteria have been an issue for marine/naval engineering design studies for...

  14. DIDA - Dynamic Image Disparity Analysis.

    DTIC Science & Technology

    1982-12-31

    Keywords: Image Understanding; Dynamic Image Analysis; Disparity Analysis; Optical Flow; Real-Time Processing. Three aspects of dynamic image analysis must be studied: effectiveness, generality, and efficiency. In addition, efforts must be made to understand the... environment. A better understanding of the need for these limiting constraints is required. Efficiency is obviously important if dynamic image analysis is...

  15. Image-guided tumor ablation: standardization of terminology and reporting criteria.

    PubMed

    Goldberg, S Nahum; Grassi, Clement J; Cardella, John F; Charboneau, J William; Dodd, Gerald D; Dupuy, Damian E; Gervais, Debra A; Gillams, Alice R; Kane, Robert A; Lee, Fred T; Livraghi, Tito; McGahan, John; Phillips, David A; Rhim, Hyunchul; Silverman, Stuart G; Solbiati, Luigi; Vogl, Thomas J; Wood, Bradford J; Vedantham, Suresh; Sacks, David

    2009-07-01

    The field of interventional oncology with use of image-guided tumor ablation requires standardization of terminology and reporting criteria to facilitate effective communication of ideas and appropriate comparison between treatments that use different technologies, such as chemical (ethanol or acetic acid) ablation, and thermal therapies, such as radiofrequency (RF), laser, microwave, ultrasound, and cryoablation. This document provides a framework that will hopefully facilitate the clearest communication between investigators and will provide the greatest flexibility in comparison between the many new, exciting, and emerging technologies. An appropriate vehicle for reporting the various aspects of image-guided ablation therapy, including classification of therapies and procedure terms, appropriate descriptors of imaging guidance, and terminology to define imaging and pathologic findings, are outlined. Methods for standardizing the reporting of follow-up findings and complications and other important aspects that require attention when reporting clinical results are addressed. It is the group's intention that adherence to the recommendations will facilitate achievement of the group's main objective: improved precision and communication in this field that lead to more accurate comparison of technologies and results and, ultimately, to improved patient outcomes. The intent of this standardization of terminology is to provide an appropriate vehicle for reporting the various aspects of image-guided ablation therapy.

  16. A critical overview of the imaging arm of the ASAS criteria for diagnosing axial spondyloarthritis: what the radiologist should know.

    PubMed

    Aydingoz, Ustun; Yildiz, Adalet Elcin; Ozdemir, Zeynep Maras; Yildirim, Seray Akcalar; Erkus, Figen; Ergen, Fatma Bilge

    2012-01-01

    The Assessment in SpondyloArthritis international Society (ASAS) defined new criteria in 2009 for the classification of axial spondyloarthritis (SpA) in patients with ≥ 3 months of back pain who were aged <45 years at the onset of back pain. This represents a culmination of a number of efforts in the last 30 years starting with the 1984 modified New York criteria for ankylosing spondylitis, followed by the 1990 Amor criteria and the 1991 European Spondyloarthropathy Study Group criteria for SpA. The importance of new ASAS criteria for radiologists is that magnetic resonance imaging (MRI) takes center stage and is one of the major criteria for the diagnosis of axial SpA when active (or acute) inflammation is present on MRI that is highly suggestive of sacroiliitis associated with SpA. According to the new criteria, sacroiliitis on imaging plus ≥ 1 SpA features (such as inflammatory back pain, arthritis, heel enthesitis, uveitis, dactylitis, psoriasis, Crohn's disease/colitis, good response to non-steroidal anti-inflammatory drugs, family history for SpA, HLA-B27 positivity, or elevated C-reactive protein) is sufficient to make the diagnosis of axial SpA. A number of rules and pitfalls, however, are present in the diagnosis of active sacroiliitis on MRI. These points are highlighted in this review, and a potential shortcoming of the imaging arm of the ASAS criteria is addressed.

  17. A multiple criteria-based spectral partitioning method for remotely sensed hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Li, Jun; Plaza, Antonio; Sun, Yanli

    2016-10-01

    Hyperspectral remote sensing offers a powerful tool in many different application contexts. The imbalance between the high dimensionality of the data and the limited availability of training samples calls for the need to perform dimensionality reduction in practice. Among traditional dimensionality reduction techniques, feature extraction is one of the most widely used approaches due to its flexibility to transform the original spectral information into a subspace. In turn, band selection is important when the application requires preserving the original spectral information (especially the physically meaningful information) for the interpretation of the hyperspectral scene. In the case of hyperspectral image classification, both techniques need to discard most of the original features/bands in order to perform the classification using a feature set with much lower dimensionality. However, the discriminative information that allows a classifier to provide good performance is usually class-dependent, and the relevant information may live in weak features/bands that are usually discarded or lost through subspace transformation or band selection. As a result, in practice, it is challenging to use either feature extraction or band selection for classification purposes. Relevant lines of attack to address this problem have focused on multiple feature selection aiming at a suitable fusion of diverse features in order to provide relevant information to the classifier. In this paper, we present a new dimensionality reduction technique, called multiple criteria-based spectral partitioning, which is embedded in an ensemble learning framework to perform advanced hyperspectral image classification. Driven by the use of multiple band-priority criteria derived from classic band selection techniques, we obtain multiple spectral partitions from the original hyperspectral data that correspond to several band subgroups with much lower spectral dimensionality as compared with

  18. Reflections on ultrasound image analysis.

    PubMed

    Alison Noble, J

    2016-10-01

    Ultrasound (US) image analysis has advanced considerably in twenty years. Progress in ultrasound image analysis has always been fundamental to the advancement of image-guided interventions research due to the real-time acquisition capability of ultrasound and this has remained true over the two decades. But in quantitative ultrasound image analysis - which takes US images and turns them into more meaningful clinical information - thinking has perhaps more fundamentally changed. From roots as a poor cousin to Computed Tomography (CT) and Magnetic Resonance (MR) image analysis, both of which have richer anatomical definition and thus were better suited to the earlier eras of medical image analysis which were dominated by model-based methods, ultrasound image analysis has now entered an exciting new era, assisted by advances in machine learning and the growing clinical and commercial interest in employing low-cost portable ultrasound devices outside traditional hospital-based clinical settings. This short article provides a perspective on this change, and highlights some challenges ahead and potential opportunities in ultrasound image analysis which may both have high impact on healthcare delivery worldwide in the future but may also, perhaps, take the subject further away from CT and MR image analysis research with time.

  19. Spreadsheet-Like Image Analysis

    DTIC Science & Technology

    1992-08-01

    Technical Report ARPAD-TR-92002: Spreadsheet-Like Image Analysis. Paul Willson, U.S. Army, August 1992, 14 pages. Subject terms: image analysis, nondestructive inspection, spreadsheet, Macintosh software, neural network, signal processing.

  20. Knowledge-Based Image Analysis.

    DTIC Science & Technology

    1981-04-01

    Report ETL-0258: Knowledge-Based Image Analysis. George C. Stockman, Barbara A. Lambird, David Lavine, Laveen N. Kanal. Keywords: extraction, verification, region classification, pattern recognition, image analysis.

  1. Analysis of proposed criteria for human response to vibration

    NASA Technical Reports Server (NTRS)

    Janeway, R. N.

    1975-01-01

    The development of criteria for human vibration response is reviewed, including the evolution of the ISO standard 2631. The document is analyzed to show why its application to vehicle ride evaluation is strongly opposed. Alternative vertical and horizontal limits for comfort are recommended in the ground vehicle ride frequency range above 1 Hz. These values are derived by correlating the absorbed power findings of Pradko and Lee with other established criteria. Special emphasis is placed on working limits in the frequency range of 1 to 10 Hz, since this is the most significant area in ground vehicle ride evaluation.

  2. Image-guided tumor ablation: standardization of terminology and reporting criteria--a 10-year update.

    PubMed

    Ahmed, Muneeb; Solbiati, Luigi; Brace, Christopher L; Breen, David J; Callstrom, Matthew R; Charboneau, J William; Chen, Min-Hua; Choi, Byung Ihn; de Baère, Thierry; Dodd, Gerald D; Dupuy, Damian E; Gervais, Debra A; Gianfelice, David; Gillams, Alice R; Lee, Fred T; Leen, Edward; Lencioni, Riccardo; Littrup, Peter J; Livraghi, Tito; Lu, David S; McGahan, John P; Meloni, Maria Franca; Nikolic, Boris; Pereira, Philippe L; Liang, Ping; Rhim, Hyunchul; Rose, Steven C; Salem, Riad; Sofocleous, Constantinos T; Solomon, Stephen B; Soulen, Michael C; Tanaka, Masatoshi; Vogl, Thomas J; Wood, Bradford J; Goldberg, S Nahum

    2014-10-01

    Image-guided tumor ablation has become a well-established hallmark of local cancer therapy. The breadth of options available in this growing field increases the need for standardization of terminology and reporting criteria to facilitate effective communication of ideas and appropriate comparison among treatments that use different technologies, such as chemical (eg, ethanol or acetic acid) ablation, thermal therapies (eg, radiofrequency, laser, microwave, focused ultrasound, and cryoablation) and newer ablative modalities such as irreversible electroporation. This updated consensus document provides a framework that will facilitate the clearest communication among investigators regarding ablative technologies. An appropriate vehicle is proposed for reporting the various aspects of image-guided ablation therapy including classification of therapies, procedure terms, descriptors of imaging guidance, and terminology for imaging and pathologic findings. Methods are addressed for standardizing reporting of technique, follow-up, complications, and clinical results. As noted in the original document from 2003, adherence to the recommendations will improve the precision of communications in this field, leading to more accurate comparison of technologies and results, and ultimately to improved patient outcomes. Online supplemental material is available for this article.

  4. Air Pollution Monitoring Site Selection by Multiple Criteria Decision Analysis

    EPA Science Inventory

    Criteria air pollutants (particulate matter, sulfur dioxide, oxides of nitrogen, volatile organic compounds, and carbon monoxide) as well as toxic air pollutants are a global concern. A particular scenario that is receiving increased attention in the research is the exposure to t...

  5. Spotlight-8 Image Analysis Software

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2006-01-01

    Spotlight is a cross-platform GUI-based software package designed to perform image analysis on sequences of images generated by combustion and fluid physics experiments run in a microgravity environment. Spotlight can perform analysis on a single image in an interactive mode or perform analysis on a sequence of images in an automated fashion. Image processing operations can be employed to enhance the image before various statistics and measurement operations are performed. An arbitrarily large number of objects can be analyzed simultaneously with independent areas of interest. Spotlight saves results in a text file that can be imported into other programs for graphing or further analysis. Spotlight can be run on Microsoft Windows, Linux, and Apple OS X platforms.

  6. Appendage modal coordinate truncation criteria in hybrid coordinate dynamic analysis. [for spacecraft attitude control

    NASA Technical Reports Server (NTRS)

    Likins, P.; Ohkami, Y.; Wong, C.

    1976-01-01

    The paper examines the validity of the assumption that certain appendage-distributed (modal) coordinates can be truncated from a system model without unacceptable degradation of fidelity in hybrid coordinate dynamic analysis for attitude control of spacecraft with flexible appendages. Alternative truncation criteria are proposed and their interrelationships defined. Particular attention is given to truncation criteria based on eigenvalues, eigenvectors, and controllability and observability. No definitive resolution of the problem is advanced, and exhaustive study is required to obtain ultimate truncation criteria.

  7. Mapping tropical dry forest succession using multiple criteria spectral mixture analysis

    NASA Astrophysics Data System (ADS)

    Cao, Sen; Yu, Qiuyan; Sanchez-Azofeifa, Arturo; Feng, Jilu; Rivard, Benoit; Gu, Zhujun

    2015-11-01

    Tropical dry forests (TDFs) in the Americas are considered the first frontier of economic development with less than 1% of their total original coverage under protection. Accordingly, accurate estimates of their spatial extent, fragmentation, and degree of regeneration are critical in evaluating the success of current conservation policies. This study focused on a well-protected secondary TDF in Santa Rosa National Park (SRNP) Environmental Monitoring Super Site, Guanacaste, Costa Rica. We used spectral signature analysis of TDF ecosystem succession (early, intermediate, and late successional stages), and its intrinsic variability, to propose a new multiple criteria spectral mixture analysis (MCSMA) method on the shortwave infrared (SWIR) of HyMap image. Unlike most existing iterative mixture analysis (IMA) techniques, MCSMA tries to extract and make use of representative endmembers with spectral and spatial information. MCSMA then considers three criteria that influence the comparative importance of different endmember combinations (endmember models): root mean square error (RMSE); spatial distance (SD); and fraction consistency (FC), to create an evaluation framework to select a best-fit model. The spectral analysis demonstrated that TDFs have a high spectral variability as a result of biomass variability. By adopting two search strategies, the unmixing results showed that our new MCSMA approach had a better performance in root mean square error (early: 0.160/0.159; intermediate: 0.322/0.321; and late: 0.239/0.235); mean absolute error (early: 0.132/0.128; intermediate: 0.254/0.251; and late: 0.191/0.188); and systematic error (early: 0.045/0.055; intermediate: -0.211/-0.214; and late: 0.161/0.160), compared to the multiple endmember spectral mixture analysis (MESMA). This study highlights the importance of SWIR in differentiating successional stages in TDFs. The proposed MCSMA provides a more flexible and generalized means for the best-fit model determination
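    The model-selection step described above can be illustrated with a toy linear-unmixing routine: estimate abundance fractions for each candidate endmember combination by least squares, then keep the combination with the lowest reconstruction RMSE. This is a generic sketch of that single criterion, not the authors' MCSMA implementation; the `unmix` and `best_model` helpers and the three-band endmembers are invented for the example, and MCSMA additionally weighs spatial distance and fraction consistency.

```python
import numpy as np

def unmix(pixel, endmembers):
    """Least-squares abundance estimates and reconstruction RMSE for one
    candidate endmember combination (one "endmember model")."""
    E = np.asarray(endmembers, dtype=float).T      # bands x endmembers
    p = np.asarray(pixel, dtype=float)
    fractions, *_ = np.linalg.lstsq(E, p, rcond=None)
    rmse = float(np.sqrt(np.mean((E @ fractions - p) ** 2)))
    return fractions, rmse

def best_model(pixel, candidate_models):
    """Select the endmember combination with the lowest reconstruction RMSE.
    (RMSE is only one of MCSMA's three criteria; spatial distance and
    fraction consistency are omitted in this sketch.)"""
    return min(candidate_models, key=lambda m: unmix(pixel, m)[1])
```

    For a pixel that is an exact mixture of two endmembers, the two-endmember model reconstructs it with near-zero RMSE and is therefore selected over either single-endmember model.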

  8. Oncological image analysis: medical and molecular image analysis

    NASA Astrophysics Data System (ADS)

    Brady, Michael

    2007-03-01

    This paper summarises the work we have been doing on joint projects with GE Healthcare on colorectal and liver cancer, and with Siemens Molecular Imaging on dynamic PET. First, we recall the salient facts about cancer and oncological image analysis. Then we introduce some of the work that we have done on analysing clinical MRI images of colorectal and liver cancer, specifically the detection of lymph nodes and segmentation of the circumferential resection margin. In the second part of the paper, we shift attention to the complementary aspect of molecular image analysis, illustrating our approach with some recent work on: tumour acidosis, tumour hypoxia, and multiply drug resistant tumours.

  9. Imaging based enrichment criteria using deep learning algorithms for efficient clinical trials in MCI

    PubMed Central

    Ithapu, Vamsi K.; Okonkwo, Ozioma C.; Chappell, Richard J.; Dowling, N. Maritza; Johnson, Sterling C.

    2015-01-01

    The Mild Cognitive Impairment (MCI) stage of AD may be optimal for clinical trials to test potential treatments for preventing or delaying decline to dementia. However, MCI is heterogeneous in that not all cases progress to dementia within the time frame of a trial, and some may not have underlying AD pathology. Identifying those MCIs who are most likely to decline during a trial and thus most likely to benefit from treatment will improve trial efficiency and power to detect treatment effects. To this end, employing multi-modal imaging-derived inclusion criteria may be especially beneficial. Here, we present a novel multi-modal imaging marker that predicts future cognitive and neural decline from [F-18]fluorodeoxyglucose positron emission tomography (PET), amyloid florbetapir PET, and structural magnetic resonance imaging (MRI), based on a new deep learning algorithm (randomized denoising autoencoder marker, rDAm). Using ADNI2 MCI data, we show that employing rDAm as a trial enrichment criterion reduces the required sample estimates by at least five times compared to the no-enrichment regime, and leads to smaller trials with high statistical power, compared to existing methods. PMID:26093156

  10. 75 FR 69140 - NUREG-1953, Confirmatory Thermal-Hydraulic Analysis To Support Specific Success Criteria in the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-10

    Federal Register notice from the U.S. Nuclear Regulatory Commission regarding NUREG-1953, Confirmatory Thermal-Hydraulic Analysis to Support Specific Success Criteria in the Standardized Plant Analysis Risk Models.

  11. Hyperspectral image analysis. A tutorial.

    PubMed

    Amigo, José Manuel; Babamoradi, Hamid; Elcoroaristizabal, Saioa

    2015-10-08

    This tutorial aims at providing guidelines and practical tools to assist with the analysis of hyperspectral images. Topics such as hyperspectral image acquisition, image pre-processing, multivariate exploratory analysis, hyperspectral image resolution, classification, and final digital image processing are presented, and some guidelines are given and discussed. Due to the broad character of current applications and the vast number of multivariate methods available, this paper focuses on an industrial chemical framework to explain, in a step-wise manner, how to develop a classification methodology to differentiate between several types of plastics by using near-infrared hyperspectral imaging and Partial Least Squares - Discriminant Analysis. Thus, the reader is guided through every single step and oriented in order to adapt those strategies to the user's case.

  12. Vessel Labeling in Combined Confocal Scanning Laser Ophthalmoscopy and Optical Coherence Tomography Images: Criteria for Blood Vessel Discrimination

    PubMed Central

    Motte, Jeremias; Alten, Florian; Ewering, Carina; Osada, Nani; Kadas, Ella M.; Brandt, Alexander U.; Oberwahrenbrock, Timm; Clemens, Christoph R.; Eter, Nicole; Paul, Friedemann; Marziniak, Martin

    2014-01-01

    Introduction The diagnostic potential of optical coherence tomography (OCT) in neurological diseases is intensively discussed. Besides the sectional view of the retina, modern OCT scanners produce a simultaneous top-view confocal scanning laser ophthalmoscopy (cSLO) image including the option to evaluate retinal vessels. A correct discrimination between arteries and veins (labeling) is vital for detecting vascular differences between healthy subjects and patients. Up to now, criteria for labeling cSLO images generated by OCT scanners do not exist. Objective This study reviewed labeling criteria originally developed for color fundus photography (CFP) images. Methods The criteria were modified to reflect the cSLO technique, followed by development of a protocol for labeling blood vessels. These criteria were based on main aspects such as central light reflex, brightness, and vessel thickness, as well as on some additional criteria such as vascular crossing patterns and the context of the vessel tree. Results and Conclusion The criteria demonstrated excellent inter-rater agreement and validity, which seems to indicate that labeling of images might no longer require more than one rater. This algorithm extends the diagnostic possibilities offered by OCT investigations. PMID:25203135

  13. Radiologist and automated image analysis

    NASA Astrophysics Data System (ADS)

    Krupinski, Elizabeth A.

    1999-07-01

    Significant advances are being made in the area of automated medical image analysis. Part of the progress is due to the general advances being made in the types of algorithms used to process images and perform various detection and recognition tasks. A more important reason for this growth in medical image analysis processes, may be due however to a very different reason. The use of computer workstations, digital image acquisition technologies and the use of CRT monitors for display of medical images for primary diagnostic reading is becoming more prevalent in radiology departments around the world. With the advance in computer- based displays, however, has come the realization that displaying images on a CRT monitor is not the same as displaying film on a viewbox. There are perceptual, cognitive and ergonomic issues that must be considered if radiologists are to accept this change in technology and display. The bottom line is that radiologists' performance must be evaluated with these new technologies and image analysis techniques in order to verify that diagnostic performance is at least as good with these new technologies and image analysis procedures as with film-based displays. The goal of this paper is to address some of the perceptual, cognitive and ergonomic issues associated with reading radiographic images from digital displays.

  14. Histopathological Image Analysis: A Review

    PubMed Central

    Gurcan, Metin N.; Boucheron, Laura; Can, Ali; Madabhushi, Anant; Rajpoot, Nasir; Yener, Bulent

    2010-01-01

    Over the past decade, dramatic increases in computational power and improvement in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has now become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging to complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state-of-the-art CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology related problems being pursued in the United States and Europe. PMID:20671804

  15. Comparison of the EORTC criteria and PERCIST in solid tumors: a pooled analysis and review

    PubMed Central

    Kim, Jung Han

    2016-01-01

    Two sets of response criteria using PET are currently available to monitor metabolic changes in solid tumors: the criteria developed by the European Organization for Research and Treatment of Cancer (EORTC criteria) and the PET Response Criteria in Solid Tumors (PERCIST). We conducted this pooled study to investigate the strength of agreement between the EORTC criteria and PERCIST in the assessment of tumor response. We surveyed MEDLINE, EMBASE and PUBMED for articles with terms of the EORTC criteria and PERCIST between 2009 and January 2016. We searched for all the references of relevant articles and reviews using the ‘related articles’ feature in the PUBMED. There were six articles with the data on the comparison of the EORTC criteria and PERCIST. A total of 348 patients were collected; 190 (54.6%) with breast cancer, 81 with colorectal cancer, 45 with lung cancer, 14 with basal cell carcinoma in the skin, 12 with stomach cancer, and 6 with head and neck cancer. The agreement of tumor response between the EORTC criteria and PERCIST was excellent (k = 0.946). Of 348 patients, only 12 (3.4%) showed disagreement between the two criteria in the assessment of tumor response. The shift of tumor response between the EORTC criteria and PERCIST occurred mostly in patients with PMR and SMD. The estimated overall response rates were not significantly different between the two criteria (72.7% by EORTC vs. 73.6% by PERCIST). In conclusion, this pooled analysis demonstrates that the EORTC criteria and PERCIST showed almost perfect agreement in the assessment of tumor response. PMID:27517621
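    The agreement statistic used above (k = 0.946) is Cohen's kappa, which discounts the agreement two criteria would reach by chance. A minimal sketch, assuming paired per-patient response labels (e.g. CR/PR/SD/PD) rather than the pooled study's actual data:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters (or criteria sets) over the same cases."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # Observed agreement: fraction of cases where both assign the same label.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement under independence, from each rater's marginal counts.
    count_a, count_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(count_a[c] * count_b[c] for c in categories) / n ** 2
    return (p_o - p_e) / (1 - p_e)
```

    A kappa near 1 (as in this pooled analysis) means the two criteria agree almost everywhere beyond what their marginal response distributions would produce by chance.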

  16. Analysis and performance of various classification criteria sets in a Colombian cohort of patients with spondyloarthritis.

    PubMed

    Bautista-Molano, Wilson; Landewé, Robert B M; Londoño, John; Romero-Sanchez, Consuelo; Valle-Oñate, Rafael; van der Heijde, Désirée

    2016-07-01

    The objective of this study was to investigate the performance of classification criteria sets (Assessment of SpondyloArthritis international Society (ASAS), European Spondylarthropathy Study Group (ESSG), and Amor) for spondyloarthritis (SpA) in a clinical practice cohort in Colombia and provide insight into how rheumatologists follow the diagnostic path in patients suspected of SpA. Patients with a rheumatologist's diagnosis of SpA were retrospectively classified according to three criteria sets. Classification rate was defined as the proportion of patients fulfilling a particular criterion. Characteristics of patients fulfilling and not fulfilling each criterion were compared. The ASAS criteria classified 81 % of all patients (n = 581) as having either axial SpA (44 %) or peripheral SpA (37 %), whereas a lower proportion met ESSG criteria (74 %) and Amor criteria (53 %). There was a high degree of overlap among the different criteria, and 42 % of the patients met all three criteria. Patients fulfilling all three criteria sets were older (36 vs. 30 years), had more SpA features (3 vs. 1 features), and more frequently had a current or past history of back pain (77 vs. 43 %), inflammatory back pain (47 vs. 13 %), enthesitis (67 vs. 26 %), and buttock pain (37 vs. 13 %) vs. those not fulfilling any criteria. HLA-B27, radiographs, and MRI-SI were performed in 77, 59, and 24 % of the patients, respectively. The ASAS criteria classified more patients as having SpA in this Colombian cohort when the rheumatologist's diagnosis is used as an external standard. Although physicians do not perform HLA-B27 or imaging in all patients, they do require these tests if the clinical symptoms fall short of confirming SpA and suspicion remains.
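    The classification rate defined above (the proportion of patients fulfilling a criterion set) and the overlap across sets are straightforward to compute once each set is expressed as a predicate over patient records. The sketch below uses hypothetical boolean flags in place of the real ASAS/ESSG/Amor rules, which are considerably more involved:

```python
def classification_rates(patients, criteria):
    """Proportion of patients fulfilling each criterion set, plus the
    proportion fulfilling every set at once (the overlap)."""
    n = len(patients)
    rates = {name: sum(rule(p) for p in patients) / n
             for name, rule in criteria.items()}
    # Patients classified by all criteria sets simultaneously.
    rates["all"] = sum(all(rule(p) for rule in criteria.values())
                       for p in patients) / n
    return rates
```

    With real data, each predicate would encode the full entry and imaging/laboratory requirements of its criteria set rather than a single flag.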

  17. Flightspeed Integral Image Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2009-01-01

    The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptor for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses: Ease of integration (minimal external dependencies) Fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floatingpoint processor) Written entirely in C (easily modified) Mostly static memory allocation 8-bit image data The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image. This integral frame facilitates a wide range of fast image-processing functions. This toolkit has applicability to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational restraints. The software provides an order of magnitude speed increase over alternative software libraries currently in use by the research community. FIIAT can commercially support intelligent video cameras used in intelligent surveillance. It is also useful for object recognition by robots or other autonomous vehicles
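    The "integral image" structure described above is the standard summed-area table: one pass accumulates cumulative sums, after which the sum over any rectangular region costs four lookups. A plain-Python sketch of the general technique (FIIAT itself is a C library; this is not its code):

```python
def integral_image(img):
    """Summed-area table: ii[r][c] = sum of img[0..r][0..c], inclusive."""
    rows, cols = len(img), len(img[0])
    ii = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        run = 0  # running sum of the current row
        for c in range(cols):
            run += img[r][c]
            ii[r][c] = run + (ii[r - 1][c] if r else 0)
    return ii

def region_sum(ii, r0, c0, r1, c1):
    """Sum over the inclusive rectangle (r0,c0)-(r1,c1) in O(1): four lookups."""
    total = ii[r1][c1]
    if r0:
        total -= ii[r0 - 1][c1]
    if c0:
        total -= ii[r1][c0 - 1]
    if r0 and c0:
        total += ii[r0 - 1][c0 - 1]
    return total
```

    Because every rectangular sum is constant-time after the single build pass, texture and subwindow descriptors over many windows become cheap, which is what makes the approach attractive on computationally constrained flight hardware.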

  18. Decerns: A framework for multi-criteria decision analysis

    DOE PAGES

    Yatsalo, Boris; Didenko, Vladimir; Gritsyuk, Sergey; ...

    2015-02-27

    A new framework, Decerns, for multicriteria decision analysis (MCDA) of a wide range of practical problems on risk management is introduced. The Decerns framework contains a library of modules that form the basis for two scalable systems: DecernsMCDA for analysis of multicriteria problems, and DecernsSDSS for multicriteria analysis of spatial options. DecernsMCDA includes well-known MCDA methods and original methods for uncertainty treatment based on probabilistic approaches and fuzzy numbers. These MCDA methods are described along with a case study on a multicriteria location problem.
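    The simplest MCDA method of the kind such frameworks bundle is a weighted sum over normalized criterion scores. The sketch below uses min-max normalization and invented alternatives; it is a generic weighted-sum model, not a reconstruction of any Decerns module:

```python
def weighted_sum_mcda(alternatives, weights, maximize):
    """Rank alternatives by a weighted sum of min-max normalized scores.
    alternatives: {name: tuple of raw criterion scores}
    weights: one weight per criterion; maximize: True for benefit criteria,
    False for cost criteria (their normalized score is flipped)."""
    n_crit = len(weights)
    cols = list(zip(*alternatives.values()))  # per-criterion score lists
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]

    def norm(x, j):
        if hi[j] == lo[j]:
            return 0.0
        v = (x - lo[j]) / (hi[j] - lo[j])
        return v if maximize[j] else 1.0 - v  # flip cost criteria

    scores = {
        name: sum(weights[j] * norm(vals[j], j) for j in range(n_crit))
        for name, vals in alternatives.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

    Real MCDA systems layer uncertainty treatment (probability distributions, fuzzy numbers) and other aggregation rules on top of this basic structure.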

  19. Evaluation of expert criteria for preoperative magnetic resonance imaging of newly diagnosed breast cancer.

    PubMed

    Behrendt, Carolyn E; Tumyan, Lusine; Gonser, Laura; Shaw, Sara L; Vora, Lalit; Paz, I Benjamin; Ellenhorn, Joshua D I; Yim, John H

    2014-08-01

    Despite 2 randomized trials reporting no reduction in operations or local recurrence at 1 year, preoperative magnetic resonance imaging (MRI) is increasingly used in diagnostic workup of breast cancer. We evaluated 5 utilization criteria recently proposed by experts. Of women (n = 340) newly diagnosed with unilateral breast cancer who underwent bilateral MRI, most (69.4%) met at least 1 criterion before MRI: mammographic density (44.4%), under consideration for partial breast irradiation (PBI) (19.7%), genetic-familial risk (12.9%), invasive lobular carcinoma (11.8%), and multifocal/multicentric disease (10.6%). MRI detected occult malignant lesion or extension of index lesion in 21.2% of index, 3.3% of contralateral, breasts. No expert criterion was associated with MRI-detected malignant lesion, which associated instead with pre-MRI plan of lumpectomy without PBI (48.2% of subjects): Odds Ratio 3.05, 95% CI 1.57-5.91 (p adjusted for multiple hypothesis testing = 0.007, adjusted for index-vs-contralateral breast and covariates). The expert guidelines were not confirmed by clinical evidence.
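    The reported association (odds ratio 3.05, 95% CI 1.57-5.91) is covariate-adjusted, but the underlying quantities are the standard odds ratio and Wald confidence interval on the log-odds scale. A sketch of the unadjusted calculation from a 2x2 table, with illustrative counts only:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) from the four cell counts.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)
```

    An adjusted estimate like the one in the study would instead come from a logistic regression with the covariates included.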

  20. Distributed multi-criteria model evaluation and spatial association analysis

    NASA Astrophysics Data System (ADS)

    Scherer, Laura; Pfister, Stephan

    2015-04-01

    Model performance, if evaluated at all, is often communicated by a single indicator and at an aggregated level; this, however, does not capture the trade-offs between different indicators or the inherent spatial heterogeneity of model efficiency. In this study, we simulated the water balance of the Mississippi watershed using the Soil and Water Assessment Tool (SWAT). The model was calibrated against monthly river discharge at 131 measurement stations; the time series were bisected to allow for subsequent validation at the same gauges. Furthermore, the model was validated against evapotranspiration, which was available as a continuous raster based on remote sensing. Model performance was evaluated for each of the 451 sub-watersheds using four different criteria: 1) Nash-Sutcliffe efficiency (NSE), 2) percent bias (PBIAS), 3) root mean square error (RMSE) normalized to standard deviation (RSR), and 4) a combined indicator of the squared correlation coefficient and the linear regression slope (bR2). Conditions that might lead to poor model performance include aridity, very flat or very steep relief, snowfall, and dams, as indicated by previous research. In an attempt to explain spatial differences in model efficiency, the goodness of the model was spatially compared to these four phenomena by means of a bivariate spatial association measure that combines Pearson's correlation coefficient and Moran's index of spatial autocorrelation. To assess the model performance of the Mississippi watershed as a whole, three different averages of the sub-watershed results were computed by 1) applying equal weights, 2) weighting by the mean observed river discharge, and 3) weighting by the upstream catchment area and the square root of the time series length. Ratings of model performance differed significantly in space and according to the efficiency criterion. 
The model performed much better in the humid Eastern region than in the arid Western region which was confirmed by the
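    The efficiency criteria named above are standard goodness-of-fit measures and are straightforward to compute. A minimal sketch of three of them (NSE, PBIAS in the SWAT sign convention, and RSR); the function names and toy discharge series are illustrative, not from the study:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; values < 0 are worse
    than simply predicting the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias: 0 is perfect; positive values indicate model
    underestimation in the SWAT sign convention."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rsr(obs, sim):
    """RMSE normalised by the standard deviation of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2)) / obs.std()

# toy monthly discharge series (illustrative values only)
obs = np.array([10.0, 12.0, 8.0, 14.0, 11.0])
sim = np.array([9.5, 12.5, 7.0, 13.0, 11.5])
scores = {"NSE": nse(obs, sim), "PBIAS": pbias(obs, sim), "RSR": rsr(obs, sim)}
```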

  1. Image analysis for DNA sequencing

    NASA Astrophysics Data System (ADS)

    Palaniappan, Kannappan; Huang, Thomas S.

    1991-07-01

    There is a great deal of interest in automating the process of DNA (deoxyribonucleic acid) sequencing to support the analysis of genomic DNA, such as in the Human and Mouse Genome projects. In one class of gel-based sequencing protocols, autoradiograph images are generated in the final step and usually require manual interpretation to reconstruct the DNA sequence represented by the image. The need to handle a large volume of sequence information necessitates automation of the manual autoradiograph reading step through image analysis, in order to reduce the length of time required to obtain sequence data and to reduce transcription errors. Various adaptive image enhancement, segmentation, and alignment methods were applied to autoradiograph images. The methods are adaptive to the local characteristics of the image, such as noise, background signal, or the presence of edges. Once the two-dimensional data are converted to a set of aligned one-dimensional profiles, waveform analysis is used to determine the location of each band, which represents one nucleotide in the sequence. Different classification strategies, including a rule-based approach, are investigated to map the profile signals, augmented with the original two-dimensional image data as necessary, to textual DNA sequence information.
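    The band-locating step of the waveform analysis can be sketched as a simple peak search on a one-dimensional lane profile. This is a generic illustration, not the authors' method; the thresholds and the toy profile are hypothetical:

```python
import numpy as np

def find_bands(profile, min_height, min_separation=3):
    """Locate band centres in a 1-D lane profile as local maxima.

    A point is accepted as a band if it exceeds min_height, is a local
    maximum, and lies at least min_separation samples from a previously
    accepted (stronger-first) peak."""
    profile = np.asarray(profile, float)
    # candidate local maxima (greater than left neighbour, >= right neighbour)
    cand = [i for i in range(1, len(profile) - 1)
            if profile[i] >= min_height
            and profile[i] > profile[i - 1]
            and profile[i] >= profile[i + 1]]
    # accept strongest peaks first, enforcing minimum separation
    accepted = []
    for i in sorted(cand, key=lambda i: -profile[i]):
        if all(abs(i - j) >= min_separation for j in accepted):
            accepted.append(i)
    return sorted(accepted)

# toy lane profile with three bands
lane = np.array([0, 1, 5, 9, 5, 1, 0, 2, 8, 12, 8, 2, 0, 1, 6, 10, 6, 1, 0], float)
bands = find_bands(lane, min_height=4.0)
```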

  2. Errors from Image Analysis

    SciTech Connect

    Wood, William Monford

    2015-02-23

    We present a systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, and offer suggestions for improving the methodology of extracting quantitative information from radiographed objects.

  3. Family-based association analysis of alcohol dependence criteria and severity

    PubMed Central

    Wetherill, Leah; Kapoor, Manav; Agrawal, Arpana; Bucholz, Kathleen; Koller, Daniel; Bertelsen, Sarah E.; Le, Nhung; Wang, Jen-Chyong; Almasy, Laura; Hesselbrock, Victor; Kramer, John; Nurnberger, John I.; Schuckit, Marc; Tischfield, Jay A.; Xuei, Xiaoling; Porjesz, Bernice; Edenberg, Howard J.; Goate, Alison M.; Foroud, Tatiana

    2013-01-01

    Background Despite the high heritability of alcohol dependence (AD), the genes found to be associated with it account for only a small proportion of its total variability. The goal of this study was to identify and analyze phenotypes based on homogeneous classes of individuals to increase the power to detect genetic risk factors contributing to the risk of AD. Methods The 7 individual DSM-IV criteria for AD were analyzed using latent class analysis (LCA) to identify classes defined by the pattern of endorsement of the criteria. A genome-wide association study was performed in 118 extended European American families (n = 2,322 individuals) densely affected with AD to identify genes associated with AD, with each of the seven DSM-IV criteria, and with the probability of belonging to two of three latent classes. Results Heritability for DSM-IV AD was 61%, and ranged from 17% to 60% for the other phenotypes. A SNP in the olfactory receptor OR51L1 was significantly associated (p = 7.3 × 10−8) with the DSM-IV criterion of persistent desire to, or inability to, cut down on drinking. LCA revealed a three-class model: the “low risk” class (50%) rarely endorsed any criteria, and none met criteria for AD; the “moderate risk” class (33%) endorsed primarily 4 DSM-IV criteria, and 48% met criteria for AD; the “high risk” class (17%) manifested high endorsement probabilities for most criteria, and nearly all (99%) met criteria for AD. One single nucleotide polymorphism (SNP) in the sodium leak channel NALCN demonstrated genome-wide significance with the high risk class (p = 4.1 × 10−8). Analyses in an independent sample did not replicate these associations. Conclusion We explored the genetic contribution to several phenotypes derived from the DSM-IV alcohol dependence criteria. The strongest evidence of association was with SNPs in NALCN and OR51L1. PMID:24015780

  4. Data selection criteria in star-based monitoring of GOES imager visible-channel responsivities

    NASA Astrophysics Data System (ADS)

    Chang, I.-Lok; Crosby, David; Dean, Charles; Weinreb, Michael; Baltimore, Perry; Baucom, Jeanette; Han, Dejiang

    2004-10-01

    Monitoring the responsivities of the visible channels of the operational Geostationary Operational Environmental Satellites (GOES) is an ongoing effort at NOAA. Various techniques are being used. In this paper we describe the technique based on the analysis of star signals that are used in the GOES Orbit and Attitude Tracking System (OATS) for satellite attitude and orbit determination. Time series of OATS star observations give information on the degradation of the detectors of a visible channel. Investigations of star data from the past three years have led to several modifications of the method we initially used to calculate the exponential degradation coefficient of a star-signal time series. First we observed that different patterns of detector output versus time result when star images drift across the detector array along different trajectories. We found that certain trajectories should be rejected in the data analysis. We found also that some detector-dependent weighting coefficients used in the OATS analysis tend to scatter the star signals measured by different detectors. We present a set of modifications to our star monitoring algorithms for resolving such problems. Other simple enhancements to the algorithms will also be described. With these modifications, the time series of the star signals show less scatter. This allows for more confidence in the estimated degradation rates and a more realistic statistical analysis of the extent of uncertainty in those rates. The resulting time series and estimated degradation rates for the visible channels of the GOES-8 and GOES-10 Imagers will be presented.
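    A log-linear least-squares fit is one simple way to estimate an exponential degradation coefficient from a star-signal time series. This sketch is generic and is not the OATS implementation; the signal model and values are illustrative:

```python
import numpy as np

def degradation_rate(t, signal):
    """Estimate k and A assuming signal(t) ~ A * exp(-k * t), by fitting a
    straight line to log(signal) versus t (log-linear least squares)."""
    t, signal = np.asarray(t, float), np.asarray(signal, float)
    slope, intercept = np.polyfit(t, np.log(signal), 1)
    return -slope, np.exp(intercept)

# synthetic noiseless star-signal series: A = 100, k = 0.05 per year
t = np.linspace(0.0, 3.0, 30)
s = 100.0 * np.exp(-0.05 * t)
k, a = degradation_rate(t, s)
```

With noisy signals the same fit applies; the scatter reduction described in the abstract directly tightens the confidence interval on the fitted slope.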

  5. Multi-Source Image Analysis.

    DTIC Science & Technology

    1979-12-01

    Multi-source image analysis is an evaluation of remote sensor imagery for military geographic information. The imagery is confined to radar, thermal... three sensor systems, but at some test sites only one or two types were utilized. Sensor characteristics were evaluated in relationship to the targets... heating affects a TIR scanner's recorded temperature; careful image evaluation can be used to extract valuable military geographic information.

  6. Priority setting of health interventions: the need for multi-criteria decision analysis

    PubMed Central

    Baltussen, Rob; Niessen, Louis

    2006-01-01

    Priority setting of health interventions is often ad hoc, and resources are not used to an optimal extent. The underlying problem is that multiple criteria play a role and decisions are complex. Interventions may be chosen to maximize general population health, to reduce health inequalities of disadvantaged or vulnerable groups, and/or to respond to life-threatening situations, all with respect to practical and budgetary constraints. This is the type of problem that policy makers are typically bad at solving rationally, unaided. They tend to use heuristic or intuitive approaches to simplify complexity, and in the process, important information is ignored. Policy makers may also select interventions on political grounds alone. This indicates the need for rational and transparent approaches to priority setting. Over the past decades, a number of approaches have been developed, including evidence-based medicine, burden of disease analyses, cost-effectiveness analyses, and equity analyses. However, these approaches concentrate on single criteria only, whereas in reality policy makers need to make choices taking into account multiple criteria simultaneously. Moreover, they do not cover all criteria that are relevant to policy makers. Therefore, the development of a multi-criteria approach to priority setting is necessary, and this has indeed recently been identified as one of the most important issues in health system research. In other scientific disciplines, multi-criteria decision analysis is well developed, has gained widespread acceptance, and is routinely used. This paper presents the main principles of multi-criteria decision analysis. There are as yet only very few applications to guide resource allocation decisions in health. We call for a shift away from present priority setting tools in health, which tend to focus on single criteria, towards transparent and systematic approaches that take all relevant criteria into account simultaneously. PMID:16923181
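    The basic aggregation step of multi-criteria decision analysis can be illustrated with a weighted sum over normalised criterion scores. The interventions, criteria, and weights below are hypothetical examples, not from the paper:

```python
# Hypothetical interventions with criterion scores normalised to [0, 1].
interventions = {
    "intervention A": {"health_gain": 0.9, "equity": 0.3, "feasibility": 0.8},
    "intervention B": {"health_gain": 0.6, "equity": 0.9, "feasibility": 0.7},
    "intervention C": {"health_gain": 0.4, "equity": 0.6, "feasibility": 0.9},
}
# Criterion weights expressing policy priorities; they sum to 1.
weights = {"health_gain": 0.5, "equity": 0.3, "feasibility": 0.2}

def mcda_score(scores, weights):
    """Aggregate criterion scores into a single value per option."""
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(interventions,
                 key=lambda name: mcda_score(interventions[name], weights),
                 reverse=True)
```

The point of MCDA is that the weights make the trade-offs explicit: changing the equity weight changes the ranking transparently, rather than through an unstated heuristic.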

  7. An Analysis of P-3 Aircraft Service Period Adjustment Criteria

    DTIC Science & Technology

    1986-12-01

    A. ASPA (Aircraft Service Period Adjustment) ... B. A Review of Analysis Methods ... 1. Delphi ... uncertainty. A brief review of their procedures, advantages, and disadvantages is helpful to justify selecting the most appropriate method. 1. Delphi Technique: The Delphi Technique is a method of statistically refining the opinions of a group of experts or especially knowledgeable personnel.

  8. Multispectral Imaging Broadens Cellular Analysis

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Amnis Corporation, a Seattle-based biotechnology company, developed ImageStream to produce sensitive fluorescence images of cells in flow. The company responded to an SBIR solicitation from Ames Research Center, and proposed to evaluate several methods of extending the depth of field for its ImageStream system and implement the best as an upgrade to its commercial products. This would allow users to view whole cells at the same time, rather than just one section of each cell. Through Phase I and II SBIR contracts, Ames provided Amnis the funding the company needed to develop this extended functionality. For NASA, the resulting high-speed image flow cytometry process made its way into Medusa, a life-detection instrument built to collect, store, and analyze sample organisms from erupting hydrothermal vents, and has the potential to benefit space flight health monitoring. On the commercial end, Amnis has implemented the process in ImageStream, combining high-resolution microscopy and flow cytometry in a single instrument, giving researchers the power to conduct quantitative analyses of individual cells and cell populations at the same time, in the same experiment. ImageStream is also built for many other applications, including cell signaling and pathway analysis; classification and characterization of peripheral blood mononuclear cell populations; quantitative morphology; apoptosis (cell death) assays; gene expression analysis; analysis of cell conjugates; molecular distribution; and receptor mapping and distribution.

  9. Multivariate image analysis in biomedicine.

    PubMed

    Nattkemper, Tim W

    2004-10-01

    In recent years, multivariate imaging techniques have been developed and applied in biomedical research to an increasing degree. In research projects as well as in clinical studies, m-dimensional multivariate images (MVIs) are recorded and stored in databases for subsequent analysis. The complexity of the m-dimensional data and the growing number of high-throughput applications call for new strategies for applying image processing and data mining to support direct interactive analysis by human experts. This article provides an overview of proposed approaches for MVI analysis in biomedicine. After summarizing the biomedical MVI techniques, a two-level framework for MVI analysis is illustrated. Following this framework, state-of-the-art solutions from the fields of image processing and data mining are reviewed and discussed. Motivations for MVI data mining in biology and medicine are characterized, followed by an overview of graphical and auditory approaches for interactive data exploration. The paper concludes by summarizing open problems in MVI analysis and remarks on the future development of biomedical MVI analysis.

  10. Quantitative histogram analysis of images

    NASA Astrophysics Data System (ADS)

    Holub, Oliver; Ferreira, Sérgio T.

    2006-11-01

    A routine for histogram analysis of images has been written in the object-oriented, graphical development environment LabVIEW. The program converts an RGB bitmap image into an intensity-linear greyscale image according to selectable conversion coefficients. This greyscale image is subsequently analysed by plots of the intensity histogram and probability distribution of brightness, and by calculation of various parameters, including average brightness, standard deviation, variance, minimal and maximal brightness, mode, skewness and kurtosis of the histogram, and the median of the probability distribution. The program allows interactive selection of specific regions of interest (ROI) in the image and definition of lower and upper threshold levels (e.g., to permit the removal of a constant background signal). The results of the analysis of multiple images can be conveniently saved and exported for plotting in other programs, which allows fast analysis of relatively large sets of image data. The program file accompanies this manuscript together with a detailed description of two application examples: the analysis of fluorescence microscopy images, specifically of tau-immunofluorescence in primary cultures of rat cortical and hippocampal neurons, and the quantification of protein bands by Western blot. The possibilities and limitations of this kind of analysis are discussed.
    Program summary
    Title of program: HAWGC
    Catalogue identifier: ADXG_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXG_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computers: Mobile Intel Pentium III, AMD Duron
    Installations: No installation necessary; executable file together with necessary files for the LabVIEW Run-time engine
    Operating systems under which the program has been tested: Windows ME/2000/XP
    Programming language used: LabVIEW 7.0
    Memory required to execute with typical data: ~16 MB at start-up and ~160 MB used for
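    The core of such a routine (greyscale conversion with selectable coefficients, thresholding, and histogram statistics) can be sketched outside LabVIEW as well. This Python version is an illustration only, not the HAWGC program; the BT.601 luma weights are a common default, not necessarily the program's:

```python
import numpy as np

def grey_from_rgb(rgb, coeffs=(0.299, 0.587, 0.114)):
    """Convert an (h, w, 3) RGB array to greyscale with selectable
    conversion coefficients (ITU-R BT.601 luma weights by default)."""
    return np.asarray(rgb, float) @ np.asarray(coeffs, float)

def histogram_stats(grey, lower=0.0, upper=255.0):
    """Brightness statistics over pixels within [lower, upper] thresholds."""
    v = grey[(grey >= lower) & (grey <= upper)]
    mean, std = v.mean(), v.std()
    skew = np.mean(((v - mean) / std) ** 3) if std > 0 else 0.0
    kurt = np.mean(((v - mean) / std) ** 4) - 3.0 if std > 0 else 0.0
    return {"mean": mean, "std": std, "min": v.min(), "max": v.max(),
            "median": np.median(v), "skewness": skew, "kurtosis": kurt}

# demo on a random RGB image
rng = np.random.default_rng(7)
demo = rng.integers(0, 256, size=(32, 32, 3))
demo_stats = histogram_stats(grey_from_rgb(demo))
```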

  11. A mathematical analysis of the ABCD criteria for diagnosing malignant melanoma

    NASA Astrophysics Data System (ADS)

    Lee, Hyunju; Kwon, Kiwoon

    2017-03-01

    The medical community currently employs the ABCD (asymmetry, border irregularity, color variegation, and diameter of the lesion) criteria in the early diagnosis of malignant melanoma. Although many image segmentation and classification methods are used to analyze the ABCD criteria, studies containing mathematical justification of the parameters used to quantify the criteria are rare. In this paper, we suggest new parameters to assess asymmetry, border irregularity, and color variegation, and explain their mathematical meaning. The suggested parameters are then tested on 24 skin samples, displayed in three-dimensional coordinates, and compared to those presented in other studies (Ercal et al 1994 IEEE Trans. Biomed. Eng. 41 837–45, Cheerla and Frazier 2014 Int. J. Innovative Res. Sci., Eng. Technol. 3 9164–83) in terms of Pearson correlation coefficient and classification accuracy in determining the malignancy of the lesions.
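    One way to quantify the A (asymmetry) criterion, shown here purely as an illustration and not as the parameter proposed in the paper, is the fraction of the lesion mask that fails to overlap its own 180-degree rotation about the centroid:

```python
import numpy as np

def asymmetry_index(mask):
    """Fraction of the lesion mask not overlapping its own 180-degree
    rotation about the lesion centroid. 0 for a point-symmetric lesion.
    Illustrative parameter only."""
    mask = np.asarray(mask, bool)
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    # rotate each lesion pixel 180 degrees about the centroid
    rotated = np.zeros_like(mask)
    ry = np.round(2 * cy - ys).astype(int)
    rx = np.round(2 * cx - xs).astype(int)
    keep = (ry >= 0) & (ry < mask.shape[0]) & (rx >= 0) & (rx < mask.shape[1])
    rotated[ry[keep], rx[keep]] = True
    return np.logical_xor(mask, rotated).sum() / mask.sum()

# a point-symmetric square lesion has index 0
square = np.zeros((5, 5), bool)
square[1:4, 1:4] = True
idx = asymmetry_index(square)
```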

  12. A Unified Mathematical Approach to Image Analysis.

    DTIC Science & Technology

    1987-08-31

    The report describes four instances of the paradigm in detail. Directions for ongoing and future research are also indicated. Keywords: image processing; algorithms; segmentation; boundary detection; tomography; global image analysis.

  13. CLINICAL AUDIT OF IMAGE QUALITY IN RADIOLOGY USING VISUAL GRADING CHARACTERISTICS ANALYSIS.

    PubMed

    Tesselaar, Erik; Dahlström, Nils; Sandborg, Michael

    2016-06-01

    The aim of this work was to assess whether an audit of clinical image quality could be efficiently implemented within a limited time frame using visual grading characteristics (VGC) analysis. Lumbar spine radiography, bedside chest radiography and abdominal CT were selected. For each examination, images were acquired or reconstructed in two ways. Twenty images per examination were assessed by 40 radiology residents using visual grading of image criteria. The results were analysed using VGC. Inter-observer reliability was assessed. The results of the visual grading analysis were consistent with expected outcomes. The inter-observer reliability was moderate to good and correlated with perceived image quality (r² = 0.47). The median observation time per image or image series was within 2 min. These results suggest that the use of visual grading of image criteria to assess the quality of radiographs provides a rapid method for performing an image quality audit in a clinical environment.
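    The area under a VGC curve can be estimated nonparametrically, in the same manner as a Wilcoxon/Mann-Whitney statistic: it is the probability that a rating of one condition exceeds a rating of the other, with ties counting one half. The 5-point ratings below are hypothetical, not the study's data:

```python
def vgc_auc(ratings_a, ratings_b):
    """Nonparametric area under the VGC curve for ordinal quality ratings:
    P(rating_b > rating_a) + 0.5 * P(rating_b == rating_a).
    0.5 means no perceived quality difference between the conditions."""
    wins = sum((b > a) + 0.5 * (b == a)
               for a in ratings_a for b in ratings_b)
    return wins / (len(ratings_a) * len(ratings_b))

# hypothetical 5-point visual-grading scores for two acquisition settings
standard = [2, 3, 3, 2, 4, 3]
improved = [3, 4, 4, 3, 5, 4]
auc = vgc_auc(standard, improved)
```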

  14. 75 FR 80544 - NUREG-1953, Confirmatory Thermal-Hydraulic Analysis To Support Specific Success Criteria in the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-22

    NUCLEAR REGULATORY COMMISSION: Notice of availability of NUREG-1953, "Confirmatory Thermal-Hydraulic Analysis to Support Specific Success Criteria in the Standardized Plant Analysis Risk Models."

  15. Minimizing impacts of land use change on ecosystem services using multi-criteria heuristic analysis.

    PubMed

    Keller, Arturo A; Fournier, Eric; Fox, Jessica

    2015-06-01

    Development of natural landscapes to support human activities impacts the capacity of the landscape to provide ecosystem services. Typically, several ecosystem services are impacted at a single development site and various footprint scenarios are possible, thus a multi-criteria analysis is needed. Restoration potential should also be considered for the area surrounding the permanent impact site. The primary objective of this research was to develop a heuristic approach to analyze multiple criteria (e.g., impacts to various ecosystem services) in a spatial configuration with many potential development sites. The approach was to: (1) quantify the magnitude of terrestrial ecosystem service (biodiversity, carbon sequestration, nutrient and sediment retention, and pollination) impacts associated with a suite of land use change scenarios using the InVEST model; (2) normalize results across categories of ecosystem services to allow cross-service comparison; (3) apply the multi-criteria heuristic algorithm to select sites with the least impact to ecosystem services, including a spatial criterion (separation between sites). As a case study, the multi-criteria impact minimization algorithm was applied to InVEST output to select 25 potential development sites out of 204 possible locations (selected by other criteria) within a 24,000 ha property. This study advanced a generally applicable spatial multi-criteria approach for (1) considering many land use footprint scenarios, (2) balancing impact decisions across a suite of ecosystem services, and (3) determining the restoration potential of ecosystem services after impacts.
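    Step (3) can be illustrated with a greedy variant: repeatedly take the candidate site with the lowest normalised impact that also respects the spatial separation criterion. The coordinates and impact scores below are hypothetical, and the authors' heuristic may differ in detail:

```python
import math

def select_sites(candidates, n_select, min_separation):
    """Greedy sketch of multi-criteria site selection: candidates are
    (x, y, impact) tuples with impact already normalised across ecosystem
    services; lowest-impact sites are picked first, skipping any site
    closer than min_separation to one already selected."""
    chosen = []
    for x, y, impact in sorted(candidates, key=lambda c: c[2]):
        if all(math.hypot(x - cx, y - cy) >= min_separation
               for cx, cy, _ in chosen):
            chosen.append((x, y, impact))
        if len(chosen) == n_select:
            break
    return chosen

# hypothetical candidate sites: (x, y, normalised impact)
sites = [(0, 0, 0.2), (1, 0, 0.1), (5, 5, 0.3), (9, 9, 0.15), (5, 6, 0.25)]
picked = select_sites(sites, n_select=3, min_separation=2.0)
```

Note that the greedy pass trades optimality for speed; (0, 0) is skipped despite its low impact because it violates the separation criterion with the already chosen (1, 0).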

  16. Selecting Potential Targetable Biomarkers for Imaging Purposes in Colorectal Cancer Using TArget Selection Criteria (TASC): A Novel Target Identification Tool.

    PubMed

    van Oosten, Marleen; Crane, Lucia Ma; Bart, Joost; van Leeuwen, Fijs W; van Dam, Gooitzen M

    2011-04-01

    Peritoneal carcinomatosis (PC) of colorectal origin is associated with a poor prognosis. However, cytoreductive surgery combined with hyperthermic intraperitoneal chemotherapy is available for a selected group of PC patients, which significantly increases overall survival rates, up to 30%. Nevertheless, there is substantial room for improvement. Tumor targeting is expected to improve the treatment efficacy of colorectal cancer (CRC) further through 1) more sensitive preoperative tumor detection, thus reducing overtreatment; 2) better intraoperative detection and surgical elimination of residual disease using tumor-specific intraoperative imaging; and 3) tumor-specific targeted therapeutics. This review focuses, in particular, on the development of tumor-targeted imaging agents. A large number of biomarkers are known to be upregulated in CRC. However, to date, no validated criteria have been described for the selection of the most promising biomarkers for tumor targeting. Such a scoring system might improve the selection of the correct biomarker for imaging purposes. In this review, we present the TArget Selection Criteria (TASC) scoring system for selection of potential biomarkers for tumor-targeted imaging. By applying TASC to biomarkers for CRC, we identified seven biomarkers (carcinoembryonic antigen, CXC chemokine receptor 4, epidermal growth factor receptor, epithelial cell adhesion molecule, matrix metalloproteinases, mucin 1, and vascular endothelial growth factor A) that seem most suitable for tumor-targeted imaging applications in colorectal cancer. Further cross-validation studies in CRC and other tumor types are necessary to establish its definitive value.

  17. Do choosing wisely tools meet criteria for patient decision aids? A descriptive analysis of patient materials

    PubMed Central

    Légaré, France; Hébert, Jessica; Goh, Larissa; Lewis, Krystina B; Leiva Portocarrero, Maria Ester; Robitaille, Hubert; Stacey, Dawn

    2016-01-01

    Objectives Choosing Wisely is a remarkable physician-led campaign to reduce unnecessary or harmful health services. Some of the literature identifies Choosing Wisely as a shared decision-making approach. We evaluated the patient materials developed by Choosing Wisely Canada to determine whether they meet the criteria for shared decision-making tools known as patient decision aids. Design Descriptive analysis of all Choosing Wisely Canada patient materials. Data source In May 2015, we selected all Choosing Wisely Canada patient materials from its official website. Main outcomes and measures Four team members independently extracted characteristics of the English materials using the International Patient Decision Aid Standards (IPDAS) modified 16-item minimum criteria for qualifying and certifying patient decision aids. The research team discussed discrepancies between data extractors and reached a consensus. Descriptive analysis was conducted. Results Of the 24 patient materials assessed, 12 were about treatments, 11 were about screening and 1 was about prevention. The median score for patient materials using IPDAS criteria was 10/16 (range: 8–11) for screening topics and 6/12 (range: 6–9) for prevention and treatment topics. Commonly missed criteria were stating the decision (21/24 did not), providing balanced information on option benefits/harms (24/24 did not), citing evidence (24/24 did not) and updating policy (24/24 did not). Out of 24 patient materials, only 2 met the 6 IPDAS criteria to qualify as patient decision aids, and neither of these 2 met the 6 certifying criteria. Conclusions Patient materials developed by Choosing Wisely Canada do not meet the IPDAS minimal qualifying or certifying criteria for patient decision aids. Modifications to the Choosing Wisely Canada patient materials would help to ensure that they qualify as patient decision aids and thus as more effective shared decision-making tools. PMID:27566638

  18. The use of multi-criteria decision analysis to tackle waste management problems: a literature review.

    PubMed

    Achillas, Charisios; Moussiopoulos, Nicolas; Karagiannidis, Avraam; Banias, Georgias; Perkoulidis, George

    2013-02-01

    Problems in waste management have become more and more complex during recent decades. The increasing volumes of waste produced and society's growing environmental consciousness are prominent drivers pushing environmental managers towards a sustainable waste management scheme. In practice, however, many factors and often mutually conflicting criteria influence the search for solutions in real-life applications. This paper presents a review of the literature on multi-criteria decision aiding in waste management problems for all reported waste streams. Despite limitations, which are clearly stated, most of the work published in this field is reviewed. The present review aims to provide environmental managers and decision-makers with a thorough list of practical applications of multi-criteria decision analysis techniques used to solve real-life waste management problems, as well as the criteria most often employed in such applications according to the nature of the problem under study. Moreover, the paper explores the advantages and disadvantages of using multi-criteria decision analysis techniques in waste management problems in comparison to other available alternatives.

  19. Image analysis in medical imaging: recent advances in selected examples.

    PubMed

    Dougherty, G

    2010-01-01

    Medical imaging has developed into one of the most important fields within scientific imaging due to the rapid and continuing progress in computerised medical image visualisation and advances in analysis methods and computer-aided diagnosis. Several research applications are selected to illustrate the advances in image analysis algorithms and visualisation. Recent results, including previously unpublished data, are presented to illustrate the challenges and ongoing developments.

  20. Logical Criteria Applied in Writing and in Editing by Text Analysis.

    ERIC Educational Resources Information Center

    Mandersloot, Wim G. B.

    1996-01-01

    Argues that technical communication editing is most effective if it deals with structure first, and that structure deficiencies can be detected by applying a range of logical analysis criteria to each text part. Concludes that lists, headings, classifications, and organograms must comply with the laws of categorization and relevant logical…

  1. Quantitative multi-image analysis for biomedical Raman spectroscopic imaging.

    PubMed

    Hedegaard, Martin A B; Bergholt, Mads S; Stevens, Molly M

    2016-05-01

    Imaging by Raman spectroscopy enables unparalleled label-free insights into cell and tissue composition at the molecular level. With established approaches limited to single image analysis, there are currently no general guidelines or consensus on how to quantify biochemical components across multiple Raman images. Here, we describe a broadly applicable methodology for the combination of multiple Raman images into a single image for analysis. This is achieved by removing image specific background interference, unfolding the series of Raman images into a single dataset, and normalisation of each Raman spectrum to render comparable Raman images. Multivariate image analysis is finally applied to derive the contributing 'pure' biochemical spectra for relative quantification. We present our methodology using four independently measured Raman images of control cells and four images of cells treated with strontium ions from substituted bioactive glass. We show that the relative biochemical distribution per area of the cells can be quantified. In addition, using k-means clustering, we are able to discriminate between the two cell types over multiple Raman images. This study shows a streamlined quantitative multi-image analysis tool for improving cell/tissue characterisation and opens new avenues in biomedical Raman spectroscopic imaging.
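    The unfold-and-normalise steps can be sketched as follows. Normalising each spectrum to unit area is one common choice shown here for illustration; the paper's full pipeline also removes image-specific background interference first:

```python
import numpy as np

def combine_raman_images(images):
    """Unfold a list of (h, w, n_wavenumbers) Raman images into one
    (pixels x wavenumbers) matrix and normalise each spectrum to unit
    area, so spectra from differently scaled images become comparable."""
    mats = []
    for im in images:
        im = np.asarray(im, float)
        mats.append(im.reshape(-1, im.shape[-1]))  # (h, w, k) -> (h*w, k)
    unfolded = np.vstack(mats)
    area = unfolded.sum(axis=1, keepdims=True)
    return unfolded / np.where(area == 0, 1.0, area)  # guard empty spectra

# two synthetic "images" with different intensity scales
img_a = np.random.default_rng(0).random((4, 4, 10))
img_b = 5.0 * np.random.default_rng(1).random((3, 5, 10))
data = combine_raman_images([img_a, img_b])
```

The combined matrix can then be passed to multivariate analysis (e.g. k-means clustering across all pixels of all images at once), which is what makes cross-image comparison possible.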

  2. Imaging analysis of LDEF craters

    NASA Technical Reports Server (NTRS)

    Radicati di Brozolo, F.; Harris, D. W.; Chakel, J. A.; Fleming, R. H.; Bunch, T. E.

    1991-01-01

    Two small craters in Al from the Long Duration Exposure Facility (LDEF) experiment tray A11E00F (no. 74, 119 micron diameter, and no. 31, 158 micron diameter) were analyzed using Auger electron spectroscopy (AES), time-of-flight secondary ion mass spectrometry (TOF-SIMS), low voltage scanning electron microscopy (LVSEM), and SEM energy dispersive spectroscopy (EDS). High resolution images and sensitive elemental and molecular analyses were obtained with this combined approach. The results of these analyses are presented.

  3. Validating retinal fundus image analysis algorithms: issues and a proposal.

    PubMed

    Trucco, Emanuele; Ruggeri, Alfredo; Karnowski, Thomas; Giancardo, Luca; Chaum, Edward; Hubschman, Jean Pierre; Al-Diri, Bashir; Cheung, Carol Y; Wong, Damon; Abràmoff, Michael; Lim, Gilbert; Kumar, Dinesh; Burlina, Philippe; Bressler, Neil M; Jelinek, Herbert F; Meriaudeau, Fabrice; Quellec, Gwénolé; Macgillivray, Tom; Dhillon, Bal

    2013-05-01

    This paper concerns the validation of automatic retinal image analysis (ARIA) algorithms. For reasons of space and consistency, we concentrate on the validation of algorithms processing color fundus camera images, currently the largest section of the ARIA literature. We sketch the context (imaging instruments and target tasks) of ARIA validation, summarizing the main image analysis and validation techniques. We then present a list of recommendations focusing on the creation of large repositories of test data created by international consortia, easily accessible via moderated Web sites, including multicenter annotations by multiple experts, specific to clinical tasks, and capable of running submitted software automatically on the data stored, with clear and widely agreed-on performance criteria, to provide a fair comparison.

  4. The economics of project analysis: Optimal investment criteria and methods of study

    NASA Technical Reports Server (NTRS)

    Scriven, M. C.

    1979-01-01

    Insight is provided toward the development of an optimal program for investment analysis of project proposals offering commercial potential and its components. This involves a critique of economic investment criteria viewed in relation to requirements of engineering economy analysis. An outline for a systems approach to project analysis is given. Application of the Leontief input-output methodology to analysis of projects involving multiple processes and products is investigated. Effective application of elements of neoclassical economic theory to investment analysis of project components is demonstrated. Patterns of both static and dynamic activity levels are incorporated.

  5. Image analysis of dye stained patterns in soils

    NASA Astrophysics Data System (ADS)

    Bogner, Christina; Trancón y Widemann, Baltasar; Lange, Holger

    2013-04-01

    Quality of surface water and groundwater is directly affected by flow processes in the unsaturated zone. In general, it is difficult to measure or model water flow. Indeed, parametrization of hydrological models is problematic and often no unique solution exists. Dye tracer studies can be carried out to visualise flow patterns in soils directly. These experiments provide images of stained soil profiles, and their evaluation demands knowledge in hydrology as well as in image analysis and statistics. First, the photographs are converted to binary images, classifying the pixels into dye-stained and non-stained ones. Then, feature extraction is necessary to discern relevant hydrological information. In our study we propose to use several index functions to extract different (ideally complementary) features. We associate each image row with a feature vector (i.e. a certain number of image function values) and use these features to cluster the image rows to identify similar image areas. Because images of stained profiles might have different reasonable clusterings, we calculate multiple consensus clusterings. An expert can explore these different solutions and base his/her interpretation of predominant flow mechanisms on quantitative (objective) criteria. The complete workflow from reading in binary images to final clusterings has been implemented in the free R system, a language and environment for statistical computing. The calculation of image indices is part of our own package Indigo; manipulation of binary images, clustering, and visualization of results are done using either built-in facilities in R, additional R packages, or the LaTeX system.
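    The row-wise workflow described in this abstract (binarize, extract a feature vector per image row, cluster similar rows) can be sketched as follows. The two features and the toy threshold "clustering" are illustrative stand-ins, not the index functions or consensus clustering of the authors' Indigo package.

```python
def row_features(binary_image):
    """For each image row, compute (dye coverage fraction, number of stained runs)."""
    feats = []
    for row in binary_image:
        coverage = sum(row) / len(row)
        # a "run" starts wherever a stained pixel follows an unstained one (or the edge)
        runs = sum(1 for i, v in enumerate(row)
                   if v == 1 and (i == 0 or row[i - 1] == 0))
        feats.append((coverage, runs))
    return feats

def cluster_rows(features, threshold=0.5):
    """Toy 2-cluster split: rows with coverage below/above a threshold."""
    return [0 if f[0] < threshold else 1 for f in features]

img = [
    [1, 1, 1, 1, 0, 1],   # heavily stained row
    [1, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 0, 0],   # sparsely stained row
    [0, 0, 1, 0, 0, 0],
]
feats = row_features(img)
labels = cluster_rows(feats)
print(labels)  # top rows group together, bottom rows group together
```

A real analysis would use several such index functions per row and a proper clustering algorithm, run multiple times to form consensus clusterings.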

  6. Can medical criteria settle priority-setting debates? The need for ethical analysis.

    PubMed

    Dickenson, D L

    1999-01-01

    Medical criteria rooted in evidence-based medicine are often seen as a value-neutral 'trump card' which puts paid to any further debate about setting priorities for treatment. On this argument, doctors should stop providing treatment at the point when it becomes medically futile, and that is also the threshold at which the health purchaser should stop purchasing. This paper offers three kinds of ethical criteria as a counterweight to analysis based solely on medical criteria. The first set of arguments concerns futility, probability and utility; the second, justice and fairness; the third, consent and competence. The argument is illustrated by two recent case studies about futility and priority-setting: the U.S. example of 'Baby Ryan' and the U.K. case of 'Child B'.

  7. Planning applications in image analysis

    NASA Technical Reports Server (NTRS)

    Boddy, Mark; White, Jim; Goldman, Robert; Short, Nick, Jr.

    1994-01-01

    We describe two interim results from an ongoing effort to automate the acquisition, analysis, archiving, and distribution of satellite earth science data. Both results are applications of Artificial Intelligence planning research to the automatic generation of processing steps for image analysis tasks. First, we have constructed a linear conditional planner (CPed), used to generate conditional processing plans. Second, we have extended an existing hierarchical planning system to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time and resource-constrained environments.

  8. An analysis of the criteria used to diagnose children with Nonverbal Learning Disability (NLD).

    PubMed

    Mammarella, Irene C; Cornoldi, Cesare

    2014-01-01

    Based on a review of the literature, the diagnostic criteria used for children with nonverbal learning disabilities (NLD) were identified as follows: (a) low visuospatial intelligence; (b) discrepancy between verbal and visuospatial intelligence; (c) visuoconstructive and fine-motor coordination skills; (d) visuospatial memory tasks; (e) reading better than mathematical achievement; and (f) socioemotional skills. An analysis of the effect size was used to investigate the strength of criteria for diagnosing NLD considering 35 empirical studies published from January 1980 to February 2011. Overall, our results showed that the most important criteria for distinguishing children with NLD from controls were as follows: a low visuospatial intelligence with a relatively good verbal intelligence, visuoconstructive and fine-motor coordination impairments, good reading decoding together with low math performance. Deficits in visuospatial memory and social skills were also present. A preliminary set of criteria for diagnosing NLD was developed on these grounds. It was concluded, however, that, although some consensus is emerging, further research is needed to definitively establish shared diagnostic criteria for children with NLD.
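    Effect-size comparisons of the kind used in this review typically rest on standardized mean differences. A minimal Cohen's d with pooled standard deviation, on hypothetical scores (not the review's data), might look like this:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d with pooled standard deviation (illustrative of the
    effect-size comparisons used in such reviews; not the authors' code)."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# e.g. visuospatial scores of controls vs. an NLD group (hypothetical data)
controls = [100, 105, 98, 110, 102]
nld = [85, 90, 88, 92, 86]
print(round(cohens_d(controls, nld), 2))
```

A large d on a criterion (here, visuospatial performance) is what marks it as discriminating between groups.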

  9. Automated image analysis of uterine cervical images

    NASA Astrophysics Data System (ADS)

    Li, Wenjing; Gu, Jia; Ferris, Daron; Poirson, Allen

    2007-03-01

    Cervical Cancer is the second most common cancer among women worldwide and the leading cause of cancer mortality of women in developing countries. If detected early and treated adequately, cervical cancer can be virtually prevented. Cervical precursor lesions and invasive cancer exhibit certain morphologic features that can be identified during a visual inspection exam. Digital imaging technologies allow us to assist the physician with a Computer-Aided Diagnosis (CAD) system. In colposcopy, epithelium that turns white after application of acetic acid is called acetowhite epithelium. Acetowhite epithelium is one of the major diagnostic features observed in detecting cancer and pre-cancerous regions. Automatic extraction of acetowhite regions from cervical images has been a challenging task due to specular reflection, various illumination conditions, and most importantly, large intra-patient variation. This paper presents a multi-step acetowhite region detection system to analyze the acetowhite lesions in cervical images automatically. First, the system calibrates the color of the cervical images to be independent of screening devices. Second, the anatomy of the uterine cervix is analyzed in terms of cervix region, external os region, columnar region, and squamous region. Third, the squamous region is further analyzed and subregions based on three levels of acetowhite are identified. The extracted acetowhite regions are accompanied by color scores to indicate the different levels of acetowhite. The system has been evaluated on data from 40 human subjects and demonstrates high correlation with experts' annotations.
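    As a toy illustration of the acetowhite idea (bright, weakly saturated pixels score high), a minimal whiteness score and mask might look like the sketch below. The score formula and threshold are invented for illustration; the paper's calibrated, multi-step anatomical analysis is far more involved.

```python
def whiteness(pixel):
    """Crude acetowhite score: bright and unsaturated pixels score high.
    (Illustrative only; not the paper's calibrated color scoring.)"""
    r, g, b = pixel
    brightness = (r + g + b) / (3 * 255)
    saturation = (max(pixel) - min(pixel)) / max(max(pixel), 1)
    return brightness * (1 - saturation)

def acetowhite_mask(image, threshold=0.6):
    """Binary mask of candidate acetowhite pixels."""
    return [[1 if whiteness(p) >= threshold else 0 for p in row] for row in image]

# 2x2 toy RGB image: near-white pixels vs. saturated tissue-colored pixels
image = [[(240, 238, 235), (180, 60, 70)],
         [(250, 250, 250), (90, 40, 45)]]
print(acetowhite_mask(image))
```

A real system would first calibrate colors across devices and restrict the search to the segmented squamous region.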

  10. Statistical analysis of dynamic sequences for functional imaging

    NASA Astrophysics Data System (ADS)

    Kao, Chien-Min; Chen, Chin-Tu; Wernick, Miles N.

    2000-04-01

    Factor analysis of medical image sequences (FAMIS), which concerns the simultaneous identification of homogeneous regions (factor images) and the characteristic temporal variations (factors) inside these regions from a temporal sequence of images by statistical analysis, is one of the major challenges in medical imaging. In this research, we contribute to this important area by proposing a two-step approach. First, we study the use of the noise-adjusted principal component (NAPC) analysis developed by Lee et al. for identifying the characteristic temporal variations in dynamic scans acquired by PET and MRI. NAPC allows us to effectively reject data noise and substantially reduce data dimension based on signal-to-noise ratio considerations. Subsequently, a simple spatial analysis based on the criteria of minimum spatial overlap and non-negativity of the factor images is applied for extraction of the factors and factor images. In our simulation study, our preliminary results indicate that the proposed approach can accurately identify the factor images. However, the factors are not completely separated.
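    The NAPC step can be sketched as noise-whitening followed by ordinary PCA, so that components come out ordered by signal-to-noise ratio rather than raw variance. This is a generic reconstruction of the idea, not Lee et al.'s implementation, and the synthetic data are invented.

```python
import numpy as np

def noise_adjusted_pca(data, noise_cov):
    """Whiten by the noise covariance, then eigendecompose the whitened data
    covariance (a sketch of the NAPC idea).
    data: (n_samples, n_vars); noise_cov: (n_vars, n_vars), positive definite."""
    # Noise-whitening transform F such that F.T @ noise_cov @ F = I
    evals, evecs = np.linalg.eigh(noise_cov)
    F = evecs / np.sqrt(evals)            # scale eigenvector columns
    whitened = (data - data.mean(axis=0)) @ F
    cov = np.cov(whitened, rowvar=False)
    w, v = np.linalg.eigh(cov)
    order = np.argsort(w)[::-1]           # components sorted by SNR, descending
    return w[order], F @ v[:, order]

# synthetic dynamic data: one temporal signal shared by two of three variables
rng = np.random.default_rng(0)
signal = np.outer(np.sin(np.linspace(0, 3, 200)), [1.0, 0.5, 0.0])
data = signal + rng.normal(scale=0.1, size=(200, 3))
snr, components = noise_adjusted_pca(data, 0.01 * np.eye(3))
print(snr.round(1))  # the leading component carries most of the signal
```

Components with whitened eigenvalues near 1 are noise-dominated and can be discarded, which is the dimension-reduction step the abstract refers to.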

  11. A water quality monitoring network design using fuzzy theory and multiple criteria analysis.

    PubMed

    Chang, Chia-Ling; Lin, You-Tze

    2014-10-01

    A proper water quality monitoring design is required in a watershed, particularly in a water resource protected area. As numerous factors can influence the water quality monitoring design, this study applies multiple criteria analysis to evaluate the suitability of the water quality monitoring design in the Taipei Water Resource Domain (TWRD) in northern Taiwan. Seven criteria, which comprise percentage of farmland area, percentage of built-up area, amount of non-point source pollution, green cover ratio, landslide area ratio, ratio of over-utilization on hillsides, and density of water quality monitoring stations, are selected in the multiple criteria analysis. The criteria are normalized and weighted. The weighted method is applied to score the subbasins. Subbasins with higher scores have higher priority for increasing the density of water quality monitoring stations. The fuzzy theory is utilized to prioritize the need for a higher density of water quality monitoring stations. The results show that the need for more water quality stations in subbasin 2 in the Bei-Shih Creek Basin is much higher than those in the other subbasins. Furthermore, the existing water quality station in subbasin 2 requires maintenance. It is recommended that new water quality stations be built in subbasin 2.
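    The normalize-weight-score step can be sketched as follows, with hypothetical values for two of the seven criteria (the study's data and weights are not reproduced):

```python
def score_subbasins(criteria, weights):
    """Min-max normalize each criterion across subbasins, then weight and sum.
    Illustrative of the weighted-scoring step; all values are hypothetical."""
    names = list(criteria)
    normed = {}
    for c, values in criteria.items():
        lo, hi = min(values), max(values)
        normed[c] = [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]
    n = len(next(iter(criteria.values())))
    return [sum(weights[c] * normed[c][i] for c in names) for i in range(n)]

# three subbasins, two of the seven criteria (hypothetical values and weights)
criteria = {
    "farmland_pct": [40, 10, 25],
    "monitoring_density": [0.2, 0.8, 0.5],
}
weights = {"farmland_pct": 0.6, "monitoring_density": 0.4}
scores = score_subbasins(criteria, weights)
print(max(range(3), key=lambda i: scores[i]))  # subbasin with highest priority
```

The subbasin with the highest score would be the first candidate for additional monitoring stations.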

  12. Designing a Software for Flood Risk Assessment Based on Multi-Criteria Decision Analysis and Information Diffusion Methods

    NASA Astrophysics Data System (ADS)

    Musaoglu, N.; Saral, A.; Seker, D. Z.

    2012-12-01

    Flooding is one of the major natural disasters not only in Turkey but all over the world, and it causes serious damage and harm. It is estimated that of the total economic loss caused by all kinds of disasters, 40% was due to floods. In July 1995, the Ayamama Creek in Istanbul was flooded; the insurance sector received around 1,200 claims notices during that period, and insurance companies had to pay a total of $40 million for claims. In 2009, the same creek was flooded again, killing 31 people over two days, and insurance firms paid around €150 million for claims. To solve these kinds of problems, modern tools such as GIS and Remote Sensing should be utilized. In this study, a software was designed for flood risk analysis with the Analytic Hierarchy Process (AHP) and Information Diffusion (InfoDif) methods. In the developed software, five evaluation criteria were taken into account: slope, aspect, elevation, geology, and land use, which were extracted from the satellite sensor data. The Digital Elevation Model (DEM) of the Ayamama River Basin was acquired from the SPOT 5 satellite image with 2.5 meter spatial resolution. Slope and aspect values of the study basin were extracted from this DEM. The land use of the Ayamama Creek was obtained by performing object-oriented nearest-neighbor classification by image segmentation on a SPOT 5 image dated 2010. All produced data were used as input for the Multi-Criteria Decision Analysis (MCDA) part of this software. Criteria and their sub-criteria were weighted and flood vulnerability was determined with MCDA-AHP. Also, daily flood data were collected from the Florya Meteorological Station for the years 1975 to 2009, and the daily flood peak discharge was calculated with the Soil Conservation Service-Curve Number (SCS-CN) method and used as input in the InfoDif part of the software. Obtained results were verified using ground truth data and it has been clearly
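    AHP derives criterion weights from a pairwise comparison matrix. A common geometric-mean approximation, with Saaty's consistency ratio, is sketched below on a hypothetical matrix for three of the study's criteria (the comparisons are invented, not the study's):

```python
import math

def ahp_weights(pairwise):
    """AHP priorities via the geometric-mean approximation, plus a consistency
    ratio. The comparison matrix is hypothetical, not the study's."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1 / n) for row in pairwise]   # row geometric means
    total = sum(gm)
    w = [g / total for g in gm]
    # lambda_max estimate for the consistency index CI = (lambda_max - n)/(n - 1)
    lam = sum(sum(pairwise[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random consistency indices
    return w, ci / ri

# hypothetical pairwise importance of slope, elevation, land use
pw = [[1, 3, 5],
      [1 / 3, 1, 3],
      [1 / 5, 1 / 3, 1]]
weights, cr = ahp_weights(pw)
print([round(x, 2) for x in weights], round(cr, 3))
```

A consistency ratio below 0.1 is conventionally taken to mean the pairwise judgments are acceptably consistent.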

  13. Using multi-criteria decision analysis to assess the vulnerability of drinking water utilities.

    PubMed

    Joerin, Florent; Cool, Geneviève; Rodriguez, Manuel J; Gignac, Marc; Bouchard, Christian

    2010-07-01

    Outbreaks of microbiological waterborne disease have increased governmental concern regarding the importance of drinking water safety. Considering the multi-barrier approach to safe drinking water may improve management decisions to reduce contamination risks. However, the application of this approach must consider numerous and diverse kinds of information simultaneously. This makes it difficult for authorities to apply the approach to decision making. For this reason, multi-criteria decision analysis can be helpful in applying the multi-barrier approach to vulnerability assessment. The goal of this study is to propose an approach based on a multi-criteria analysis method in order to rank drinking water utilities (DWUs) based on their vulnerability to microbiological contamination. This approach is illustrated with an application carried out on 28 DWUs supplied by groundwater in the Province of Québec, Canada. The multi-criteria analysis method chosen is MACBETH (measuring attractiveness by a categorical based evaluation technique), allowing the assessment of a microbiological vulnerability indicator (MVI) for each DWU. Results are presented on a scale ranking DWUs from less vulnerable to most vulnerable to contamination. MVI results are tested using a sensitivity analysis on barrier weights and they are also compared with historical data on contamination at the utilities. The investigation demonstrates that MVI provides a good representation of the vulnerability of DWUs to microbiological contamination.
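    A weight sensitivity check of the kind the abstract mentions (perturb each barrier weight and see whether the vulnerability ranking changes) can be sketched generically. The utilities, barriers, scores, and weights below are hypothetical, and this simple weighted sum stands in for the MACBETH scoring.

```python
def rank(scores):
    """Rank utilities from most to least vulnerable (highest score first)."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])

def weighted_scores(matrix, weights):
    return [sum(w * v for w, v in zip(weights, row)) for row in matrix]

def perturb_weights(weights, idx, delta):
    """Shift one barrier weight by delta and renormalize to sum to 1."""
    w = list(weights)
    w[idx] = max(w[idx] + delta, 0.0)
    s = sum(w)
    return [x / s for x in w]

# rows: 3 utilities; columns: vulnerability on 3 barriers (hypothetical, 0..1)
m = [[0.9, 0.2, 0.4],
     [0.3, 0.8, 0.6],
     [0.1, 0.3, 0.2]]
base = [0.5, 0.3, 0.2]
base_rank = rank(weighted_scores(m, base))
stable = all(
    rank(weighted_scores(m, perturb_weights(base, i, d))) == base_rank
    for i in range(3) for d in (-0.05, 0.05)
)
print(base_rank, stable)
```

A ranking that survives all small perturbations suggests the vulnerability indicator is robust to the weighting choices.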

  14. Automated Microarray Image Analysis Toolbox for MATLAB

    SciTech Connect

    White, Amanda M.; Daly, Don S.; Willse, Alan R.; Protic, Miroslava; Chandler, Darrell P.

    2005-09-01

    The Automated Microarray Image Analysis (AMIA) Toolbox for MATLAB is a flexible, open-source microarray image analysis tool that allows the user to customize analysis of sets of microarray images. This tool provides several methods of identifying and quantifying spot statistics, as well as extensive diagnostic statistics and images to identify poor data quality or processing. The open nature of this software allows researchers to understand the algorithms used to provide intensity estimates and to modify them easily if desired.

  15. Multispectral Image Analysis of Hurricane Gilbert

    DTIC Science & Technology

    1989-05-19

    Multispectral Image Analysis of Hurricane Gilbert (unclassified). Personal author: Kleespies, Thomas J. (GL/LYS). Only fragments of the abstract survive; they discuss the components of the image in the red, green, and blue channels, cloud top height, and the relation of multispectral image analysis of scenes to the human range of vision.

  16. Multi-criteria decision analysis in conservation planning: Designing conservation area networks in San Diego County

    NASA Astrophysics Data System (ADS)

    MacDonald, Garrick Richard

    applicable to the research project; however, it did exhibit a few limitations. Both the advantages and disadvantages of ConsNet should be considered before using ConsNet for future conservation planning projects. The research project is an example of a large data scenario undertaken with a multiple criteria decision analysis (MCDA) approach.

  17. Enclosure fire hazard analysis using relative energy release criteria. [burning rate and combustion control

    NASA Technical Reports Server (NTRS)

    Coulbert, C. D.

    1978-01-01

    A method for predicting the probable course of fire development in an enclosure is presented. This fire modeling approach uses a graphic plot of five fire development constraints, the relative energy release criteria (RERC), to bound the heat release rates in an enclosure as a function of time. The five RERC are flame spread rate, fuel surface area, ventilation, enclosure volume, and total fuel load. They may be calculated versus time based on the specified or empirical conditions describing the specific enclosure, the fuel type and load, and the ventilation. The calculation of these five criteria, using the common basis of energy release rates versus time, provides a unifying framework for the utilization of available experimental data from all phases of fire development. The plot of these criteria reveals the probable fire development envelope and indicates which fire constraint will be controlling during a criteria time period. Examples of RERC application to fire characterization and control and to hazard analysis are presented along with recommendations for the further development of the concept.
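    The envelope idea (the enclosure heat-release rate is bounded at each time by whichever constraint is smallest) can be sketched as a pointwise minimum over the constraint curves. The curves below are invented placeholders, not Coulbert's empirical forms.

```python
def fire_envelope(times, constraints):
    """For each time, return (time, bounding heat-release rate, controlling
    constraint), i.e. the pointwise minimum over the RERC curves."""
    env = []
    for t in times:
        rates = {name: f(t) for name, f in constraints.items()}
        name = min(rates, key=rates.get)
        env.append((t, rates[name], name))
    return env

# three of the five RERC, as placeholder curves (kW vs. seconds, invented)
constraints = {
    "flame_spread": lambda t: 5.0 * t,                 # growth-limited early
    "ventilation": lambda t: 120.0,                    # ventilation cap
    "fuel_load": lambda t: max(300.0 - 2.0 * t, 0.0),  # burnout late
}
env = fire_envelope([5, 50, 140], constraints)
for t, q, name in env:
    print(t, q, name)
```

Reading the controlling constraint over time reproduces the qualitative picture in the abstract: flame spread governs early growth, ventilation caps the fully developed fire, and fuel load governs decay.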

  18. Quantitative analysis of in vivo confocal microscopy images: a review.

    PubMed

    Patel, Dipika V; McGhee, Charles N

    2013-01-01

    In vivo confocal microscopy (IVCM) is a non-invasive method of examining the living human cornea. The recent trend towards quantitative studies using IVCM has led to the development of a variety of methods for quantifying image parameters. When selecting IVCM images for quantitative analysis, it is important to be consistent regarding the location, depth, and quality of images. All images should be de-identified, randomized, and calibrated prior to analysis. Numerous image analysis software packages are available, each with its own advantages and disadvantages. Criteria for analyzing corneal epithelium, sub-basal nerves, keratocytes, endothelium, and immune/inflammatory cells have been developed, although there is inconsistency among research groups regarding parameter definition. The quantification of stromal nerve parameters, however, remains a challenge. Most studies report lower inter-observer repeatability compared with intra-observer repeatability, and observer experience is known to be an important factor. Standardization of IVCM image analysis through the use of a reading center would be crucial for any future large, multi-centre clinical trials using IVCM.

  19. Adaptation and Evaluation of a Multi-Criteria Decision Analysis Model for Lyme Disease Prevention.

    PubMed

    Aenishaenslin, Cécile; Gern, Lise; Michel, Pascal; Ravel, André; Hongoh, Valérie; Waaub, Jean-Philippe; Milord, François; Bélanger, Denise

    2015-01-01

    Designing preventive programs relevant to vector-borne diseases such as Lyme disease (LD) can be complex given the need to include multiple issues and perspectives into prioritizing public health actions. A multi-criteria decision aid (MCDA) model was previously used to rank interventions for LD prevention in Quebec, Canada, where the disease is emerging. The aim of the current study was to adapt and evaluate the decision model constructed in Quebec under a different epidemiological context, in Switzerland, where LD has been endemic for the last thirty years. The model adaptation was undertaken with a group of Swiss stakeholders using a participatory approach. The PROMETHEE method was used for multi-criteria analysis. Key elements and results of the MCDA model are described and contrasted with the Quebec model. All criteria and most interventions of the MCDA model developed for LD prevention in Quebec were directly transferable to the Swiss context. Four new decision criteria were added, and the list of proposed interventions was modified. Based on the overall group ranking, interventions targeting human populations were prioritized in the Swiss model, with the top ranked action being the implementation of a large communication campaign. The addition of criteria did not significantly alter the intervention rankings, but increased the capacity of the model to discriminate between highest and lowest ranked interventions. The current study suggests that beyond the specificity of the MCDA models developed for Quebec and Switzerland, their general structure captures the fundamental and common issues that characterize the complexity of vector-borne disease prevention. These results should encourage public health organizations to adapt, use and share MCDA models as an effective and functional approach to enable the integration of multiple perspectives and considerations in the prevention and control of complex public health issues such as Lyme disease or other vector-borne diseases.
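    PROMETHEE ranks alternatives by comparing them pairwise on each criterion and aggregating into net outranking flows. A minimal PROMETHEE II with the "usual" (0/1) preference function is sketched below; the interventions, scores, and weights are hypothetical, not the study's data.

```python
def promethee_net_flows(matrix, weights):
    """PROMETHEE II net flows with the 'usual' preference function
    (preference 1 if strictly better on a criterion, else 0)."""
    n = len(matrix)
    k = len(weights)

    def pref(a, b):  # weighted preference of alternative a over b
        return sum(weights[c] for c in range(k) if matrix[a][c] > matrix[b][c])

    flows = []
    for a in range(n):
        plus = sum(pref(a, b) for b in range(n) if b != a) / (n - 1)
        minus = sum(pref(b, a) for b in range(n) if b != a) / (n - 1)
        flows.append(plus - minus)  # net flow: positive = outranks on balance
    return flows

# rows: interventions (e.g. communication campaign first); columns: criteria
m = [[0.8, 0.6, 0.9],
     [0.5, 0.9, 0.4],
     [0.3, 0.2, 0.5]]
w = [0.5, 0.3, 0.2]
flows = promethee_net_flows(m, w)
print(max(range(3), key=lambda i: flows[i]))  # index of top-ranked intervention
```

Real applications use richer preference functions (linear, Gaussian, with thresholds), but the flow computation has the same shape.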

  20. Adaptation and Evaluation of a Multi-Criteria Decision Analysis Model for Lyme Disease Prevention

    PubMed Central

    Aenishaenslin, Cécile; Gern, Lise; Michel, Pascal; Ravel, André; Hongoh, Valérie; Waaub, Jean-Philippe; Milord, François; Bélanger, Denise

    2015-01-01

    Designing preventive programs relevant to vector-borne diseases such as Lyme disease (LD) can be complex given the need to include multiple issues and perspectives into prioritizing public health actions. A multi-criteria decision aid (MCDA) model was previously used to rank interventions for LD prevention in Quebec, Canada, where the disease is emerging. The aim of the current study was to adapt and evaluate the decision model constructed in Quebec under a different epidemiological context, in Switzerland, where LD has been endemic for the last thirty years. The model adaptation was undertaken with a group of Swiss stakeholders using a participatory approach. The PROMETHEE method was used for multi-criteria analysis. Key elements and results of the MCDA model are described and contrasted with the Quebec model. All criteria and most interventions of the MCDA model developed for LD prevention in Quebec were directly transferable to the Swiss context. Four new decision criteria were added, and the list of proposed interventions was modified. Based on the overall group ranking, interventions targeting human populations were prioritized in the Swiss model, with the top ranked action being the implementation of a large communication campaign. The addition of criteria did not significantly alter the intervention rankings, but increased the capacity of the model to discriminate between highest and lowest ranked interventions. The current study suggests that beyond the specificity of the MCDA models developed for Quebec and Switzerland, their general structure captures the fundamental and common issues that characterize the complexity of vector-borne disease prevention. These results should encourage public health organizations to adapt, use and share MCDA models as an effective and functional approach to enable the integration of multiple perspectives and considerations in the prevention and control of complex public health issues such as Lyme disease or other vector-borne diseases.

  1. Latent Class Analysis of DSM-5 Alcohol Use Disorder Criteria among Heavy-Drinking College Students

    PubMed Central

    Neighbors, Clayton

    2015-01-01

    The DSM-5 has created significant changes in the definition of alcohol use disorders (AUD). Limited work has considered the impact of these changes in specific populations, such as heavy-drinking college students. Latent class analysis (LCA) is a person-centered approach that divides a population into mutually exclusive and exhaustive latent classes, based on observable indicator variables. The present research was designed to examine whether there were distinct classes of heavy-drinking college students who met DSM-5 criteria for an AUD and whether gender, perceived social norms, use of protective behavioral strategies (PBS), drinking refusal self-efficacy (DRSE), self-perceptions of drinking identity, psychological distress, and membership in a fraternity/sorority would be associated with class membership. Three-hundred and ninety-four college students who met DSM-5 criteria for an AUD were recruited from three different universities. Two distinct classes emerged: Less Severe (86%), the majority of whom endorsed both drinking more than intended and tolerance, as well as met criteria for a mild AUD; and More Severe (14%), the majority of whom endorsed at least half of the DSM-5 AUD criteria and met criteria for a severe AUD. Relative to the Less Severe class, membership in the More Severe class was negatively associated with DRSE and positively associated with self-identification as a drinker. There is a distinct class of heavy-drinking college students with a more severe AUD and for whom intervention content needs to be more focused and tailored. Clinical implications are discussed. PMID:26051027
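    Latent class analysis with binary indicators (endorsed AUD criteria) can be fit with a small EM loop. The sketch below is didactic, on invented endorsement patterns, and is not the software used in the study.

```python
import math
import random

def lca_em(data, n_classes=2, n_iter=200, seed=1):
    """Tiny EM fit of a latent class model with binary indicators.
    Returns class proportions pi and per-class item probabilities theta."""
    rng = random.Random(seed)
    n, k = len(data), len(data[0])
    pi = [1.0 / n_classes] * n_classes                    # class proportions
    theta = [[rng.uniform(0.25, 0.75) for _ in range(k)]  # P(item = 1 | class)
             for _ in range(n_classes)]
    for _ in range(n_iter):
        # E-step: posterior class membership for each respondent
        resp = []
        for x in data:
            lik = [pi[c] * math.prod(theta[c][j] if x[j] else 1 - theta[c][j]
                                     for j in range(k)) for c in range(n_classes)]
            s = sum(lik)
            resp.append([l / s for l in lik])
        # M-step: update class proportions and endorsement probabilities
        for c in range(n_classes):
            w = sum(r[c] for r in resp)
            pi[c] = w / n
            theta[c] = [min(max(sum(r[c] * x[j] for r, x in zip(resp, data)) / w,
                                1e-6), 1 - 1e-6) for j in range(k)]
    return pi, theta

# 8 respondents endorsing two criteria, 2 endorsing all four (hypothetical)
data = [[1, 1, 0, 0]] * 8 + [[1, 1, 1, 1]] * 2
pi, theta = lca_em(data)
severe = max(range(2), key=lambda c: theta[c][2])  # class endorsing item 3
print(round(pi[severe], 2))
```

On these data the fit recovers a small "more severe" class endorsing all criteria and a larger "less severe" class, mirroring the two-class structure reported in the abstract.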

  2. Micro-CT imaging: Developing criteria for examining fetal skeletons in regulatory developmental toxicology studies - A workshop report.

    PubMed

    Solomon, Howard M; Makris, Susan L; Alsaid, Hasan; Bermudez, Oscar; Beyer, Bruce K; Chen, Antong; Chen, Connie L; Chen, Zhou; Chmielewski, Gary; DeLise, Anthony M; de Schaepdrijver, Luc; Dogdas, Belma; French, Julian; Harrouk, Wafa; Helfgott, Jonathan; Henkelman, R Mark; Hesterman, Jacob; Hew, Kok-Wah; Hoberman, Alan; Lo, Cecilia W; McDougal, Andrew; Minck, Daniel R; Scott, Lelia; Stewart, Jane; Sutherland, Vicki; Tatiparthi, Arun K; Winkelmann, Christopher T; Wise, L David; Wood, Sandra L; Ying, Xiaoyou

    2016-06-01

    During the past two decades the use and refinement of imaging modalities have markedly increased, making it possible to image embryos and fetuses used in pivotal nonclinical studies submitted to regulatory agencies. Implementing these technologies into the Good Laboratory Practice environment requires rigorous testing, validation, and documentation to ensure the reproducibility of data. A workshop on current practices and regulatory requirements was held with the goal of defining minimal criteria for the proper implementation of these technologies and subsequent submission to regulatory agencies. Micro-computed tomography (micro-CT) is especially well suited for high-throughput evaluations, and is gaining popularity to evaluate fetal skeletons to assess the potential developmental toxicity of test agents. This workshop was convened to help scientists in the developmental toxicology field understand and apply micro-CT technology to nonclinical toxicology studies and facilitate the regulatory acceptance of imaging data. Presentations and workshop discussions covered: (1) principles of micro-CT fetal imaging; (2) concordance of findings with conventional skeletal evaluations; and (3) regulatory requirements for validating the system. Establishing these requirements for micro-CT examination can provide a path forward for laboratories considering implementing this technology and provide regulatory agencies with a basis to consider the acceptability of data generated via this technology.

  3. Principles and clinical applications of image analysis.

    PubMed

    Kisner, H J

    1988-12-01

    Image processing has traveled to the lunar surface and back, finding its way into the clinical laboratory. Advances in digital computers have improved the technology of image analysis, resulting in a wide variety of medical applications. Offering improvements in turnaround time, standardized systems, increased precision, and walkaway automation, digital image analysis has likely found a permanent home as a diagnostic aid in the interpretation of microscopic as well as macroscopic laboratory images.

  4. Multiple criteria analysis of remotely piloted aircraft systems for monitoring the crops vegetation status

    NASA Astrophysics Data System (ADS)

    Cristea, L.; Luculescu, M. C.; Zamfira, S. C.; Boer, A. L.; Pop, S.

    2016-08-01

    The paper presents an analysis of Remotely Piloted Aircraft Systems (RPAS) used for monitoring the crops vegetation status. The study focuses on two types of RPAS, namely the flying wing and the multi-copter. The following criteria were taken into account: technical characteristics, power consumption, flight autonomy, flight conditions, costs, data acquisition systems used for monitoring, crops area and so on. Based on this analysis, advantages and disadvantages are emphasized offering a useful tool for choosing the proper solution according to the specific application conditions.

  5. FFDM image quality assessment using computerized image texture analysis

    NASA Astrophysics Data System (ADS)

    Berger, Rachelle; Carton, Ann-Katherine; Maidment, Andrew D. A.; Kontos, Despina

    2010-04-01

    Quantitative measures of image quality (IQ) are routinely obtained during the evaluation of imaging systems. These measures, however, do not necessarily correlate with the IQ of the actual clinical images, which can also be affected by factors such as patient positioning. No quantitative method currently exists to evaluate clinical IQ. Therefore, we investigated the potential of using computerized image texture analysis to quantitatively assess IQ. Our hypothesis is that image texture features can be used to assess IQ as a measure of the image signal-to-noise ratio (SNR). To test feasibility, the "Rachel" anthropomorphic breast phantom (Model 169, Gammex RMI) was imaged with a Senographe 2000D FFDM system (GE Healthcare) using 220 unique exposure settings (target/filter, kV, and mAs combinations). The mAs were varied from 10%-300% of that required for an average glandular dose (AGD) of 1.8 mGy. A 2.5 cm² retroareolar region of interest (ROI) was segmented from each image. The SNR was computed from the ROIs segmented from images linear with dose (i.e., raw images) after flat-field and off-set correction. Image texture features of skewness, coarseness, contrast, energy, homogeneity, and fractal dimension were computed from the Premium View™ postprocessed image ROIs. Multiple linear regression demonstrated a strong association between the computed image texture features and SNR (R² = 0.92, p ≤ 0.001). When including kV, target and filter as additional predictor variables, a stronger association with SNR was observed (R² = 0.95, p ≤ 0.001). The strong associations indicate that computerized image texture analysis can be used to measure image SNR and potentially aid in automating IQ assessment as a component of the clinical workflow. Further work is underway to validate our findings in larger clinical datasets.
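    Two of the named texture features can be computed directly from a gray-level ROI. This generic sketch uses toy data, not FFDM images, and the feature definitions are standard textbook forms rather than the paper's exact ones.

```python
import math

def roi_texture(roi):
    """Histogram skewness and (normalized) energy of a gray-level ROI.
    Energy is the sum of squared gray-level probabilities: high for flat,
    uniform regions, low for busy textures."""
    pixels = [p for row in roi for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    sd = math.sqrt(sum((p - mean) ** 2 for p in pixels) / n)
    skewness = sum(((p - mean) / sd) ** 3 for p in pixels) / n
    hist = {}
    for p in pixels:
        hist[p] = hist.get(p, 0) + 1
    energy = sum((c / n) ** 2 for c in hist.values())
    return skewness, energy

flat = [[5, 5], [5, 6]]        # nearly uniform toy ROI
busy = [[1, 9], [3, 200]]      # high-variation toy ROI
flat_skew, flat_energy = roi_texture(flat)
busy_skew, busy_energy = roi_texture(busy)
print(flat_energy > busy_energy)  # flat ROI has higher energy
```

In the study's setting, feature vectors like these (plus coarseness, contrast, homogeneity, and fractal dimension) would then be regressed against the measured SNR.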

  6. [The comparative analysis of the information value of the main clinical criteria used to diagnose bacterial vaginosis].

    PubMed

    Tsvetkova, A V; Murtazina, Z A; Markusheva, T V; Mavzutov, A R

    2015-05-01

    Bacterial vaginosis is one of the most frequent reasons for women to visit a gynecologist. Its diagnosis is predominantly based on the Amsel criteria (1983), whose objectivity is nowadays increasingly disputed. Discharge from the mucous membranes of the posterolateral fornix of the vagina was analyzed in 640 women with a clinical diagnosis of bacterial vaginosis. Light microscopy of discharge smears confirmed the diagnosis of bacterial vaginosis in the laboratory in 100 (15.63%) women. Complaints of burning and unpleasant odor, and the Amsel criterion of detection of "clue cells" against the background of pH > 4.5, were established as statistically significant for bacterial vaginosis. According to the study data, the mere occurrence of discharge is not statistically reliable for differentiating bacterial vaginosis from other inflammatory pathological conditions of the female reproductive tract. At the same time, detection of "clue cells" in smears reliably correlated with bacterial vaginosis.

  7. Using modified visual-inspection criteria to interpret functional analysis outcomes.

    PubMed

    Roane, Henry S; Fisher, Wayne W; Kelley, Michael E; Mevers, Joanna L; Bouxsein, Kelly J

    2013-01-01

    The development of functional analysis (FA) methodologies has allowed identification of the reinforcers that maintain problem behavior and has improved intervention efficacy in the form of function-based treatments. Despite the profound impact of FA on clinical practice and research, questions still remain about the methods by which clinicians and researchers interpret FA graphs. In the current study, 141 FA data sets were evaluated using the structured visual-inspection criteria developed by Hagopian et al. (1997), with the criteria modified for FAs of varying lengths. Interobserver agreement assessments revealed high agreement coefficients across expert judges, postdoctoral reviewers, master's-level reviewers, and postbaccalaureate reviewers. Once the validity of the modified visual-inspection procedures was established, their utility was examined by using them to categorize the maintaining reinforcement contingency related to problem behavior for all 141 data sets and the 101 participants who contributed them.
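    The interobserver-agreement assessments mentioned above (and the kappa values reported in the RDC/TMD record at the top of this page) are typically chance-corrected. A minimal Cohen's kappa, shown here on hypothetical rater data rather than the study's, looks like this:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on nominal categories."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # agreement expected by chance from each rater's marginal frequencies
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)

# two hypothetical raters categorising ten FA outcomes
a = ["attention", "escape", "escape", "tangible", "attention",
     "escape", "automatic", "attention", "escape", "tangible"]
b = ["attention", "escape", "tangible", "tangible", "attention",
     "escape", "automatic", "escape", "escape", "tangible"]
print(round(cohens_kappa(a, b), 3))   # prints 0.718
```

    Here 8/10 observed agreement is discounted by the 0.29 agreement expected by chance alone.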

  8. MetaQC: objective quality control and inclusion/exclusion criteria for genomic meta-analysis.

    PubMed

    Kang, Dongwan D; Sibille, Etienne; Kaminski, Naftali; Tseng, George C

    2012-01-01

    Genomic meta-analysis to combine relevant and homogeneous studies has been widely applied, but quality control (QC) and objective inclusion/exclusion criteria have been largely overlooked. Currently, inclusion/exclusion criteria mostly depend on ad hoc expert opinion or naïve thresholds on sample size or platform. There is a pressing need to develop a systematic QC methodology, as the decision of study inclusion greatly impacts the final meta-analysis outcome. In this article, we propose six quantitative quality control measures, covering internal homogeneity of coexpression structure among studies, external consistency of coexpression pattern with pathway databases, and accuracy and consistency of differentially expressed gene detection or enriched pathway identification. Each quality control index is defined as the minus-log-transformed P value from formal hypothesis testing. Principal component analysis biplots and a standardized mean rank are applied to assist visualization and decision making. We applied the proposed method to four large-scale examples, combining 7 brain cancer, 9 prostate cancer, 8 idiopathic pulmonary fibrosis, and 17 major depressive disorder studies, respectively. The identified problematic studies were further scrutinized for potential technical or biological causes of their lower quality to determine their exclusion from meta-analysis. The application and simulation results support a systematic quality assessment framework for genomic meta-analysis.
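    The summary step can be sketched as follows: each study gets several -log10(p) quality indices, studies are ranked per index, and a standardized mean rank flags candidates for exclusion. The study names, index values, and the convention that a low rank means poor quality are hypothetical illustrations, not MetaQC's actual data or code.

```python
import numpy as np

qc = {                       # rows: studies, cols: six QC indices (-log10 p)
    "study_A": [5.1, 4.2, 3.9, 6.0, 2.8, 3.3],
    "study_B": [0.4, 0.9, 1.1, 0.2, 0.7, 0.5],   # consistently poor
    "study_C": [3.2, 3.8, 2.5, 4.1, 3.0, 2.9],
}
names = list(qc)
M = np.array([qc[s] for s in names])

# rank each column (rank 1 = worst quality, i.e. smallest -log10 p)
ranks = M.argsort(axis=0).argsort(axis=0) + 1
smr = ranks.mean(axis=1) / len(names)          # standardized mean rank in (0, 1]

for name, score in sorted(zip(names, smr), key=lambda t: t[1]):
    print(f"{name}: {score:.2f}")
```

    A study that ranks last on every index (here `study_B`) receives the lowest standardized mean rank and is the natural candidate for scrutiny and possible exclusion.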

  9. Image analysis: a consumer's guide.

    PubMed

    Meyer, F

    1983-01-01

    Recent years have seen an explosion of image analysis systems, and it is hard for the pathologist or the cytologist to make the right choice of equipment. All machines are stupid; the only valuable thing is the human work put into them. So benefit from the work other people have done for you: choose a method that is widely used on many systems and has proved fertile in many domains, not only for your specific application of the day. Mathematical Morphology, supplemented by the linear convolutions present on all machines, is a strong candidate for becoming such a method. The paper illustrates a working day of an ideal system: research- and diagnosis-directed work during working hours, and automatic screening of cervical (or other) smears during the night.

  10. Spreadsheet-like image analysis

    NASA Astrophysics Data System (ADS)

    Wilson, Paul

    1992-08-01

    This report describes the design of a new software system being built by the Army to support and augment automated nondestructive inspection (NDI) on-line equipment implemented by the Army for detection of defective manufactured items. The new system recalls and post-processes (off-line) the NDI data sets archived by the on-line equipment in order to verify the correctness of the inspection analysis paradigms, to develop better analysis paradigms, and to gather statistics on the defects of the items inspected. The design of the system is similar to that of a spreadsheet, i.e., an array of cells which may be programmed to contain functions whose arguments are data from other cells and whose resultant is the output of that cell's function. Unlike a spreadsheet, the arguments and the resultants of a cell may be a matrix, such as a two-dimensional matrix of picture elements (pixels). Functions include matrix mathematics, neural networks, and image processing, as well as those ordinarily found in spreadsheets. The system employs all of the common environmental supports of the Macintosh computer, which is the hardware platform. The system allows the resultant of a cell to be displayed in any of multiple formats, such as a matrix of numbers, text, an image, or a chart. Each cell is a window onto the resultant. Like a spreadsheet, if the input value of any cell is changed, its effect is cascaded into the resultants of all cells whose functions use that value directly or indirectly. The system encourages the user to play what-if games, as ordinary spreadsheets do.

  11. Naval Signal and Image Analysis Conference Report

    DTIC Science & Technology

    1998-02-26

    Arlington Hilton Hotel in Arlington, Virginia. The meeting was by invitation only and consisted of investigators in the ONR Signal and Image Analysis Program ... in signal and image analysis. The conference provided an opportunity for technical interaction between academic researchers and Naval scientists and ... plan future directions for the ONR Signal and Image Analysis Program as well as informal recommendations to the Program Officer.

  12. Satellite image analysis using neural networks

    NASA Technical Reports Server (NTRS)

    Sheldon, Roger A.

    1990-01-01

    The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.

  13. Design criteria for a multiple input land use system. [digital image processing techniques

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.; Bryant, N. A.

    1975-01-01

    A design is presented that proposes the use of digital image processing techniques to interface existing geocoded data sets and information management systems with thematic maps and remote sensed imagery. The basic premise is that geocoded data sets can be referenced to a raster scan that is equivalent to a grid cell data set, and that images taken of thematic maps or from remote sensing platforms can be converted to a raster scan. A major advantage of the raster format is that x, y coordinates are implicitly recognized by their position in the scan, and z values can be treated as Boolean layers in a three-dimensional data space. Such a system permits the rapid incorporation of data sets, rapid comparison of data sets, and adaptation to variable scales by resampling the raster scans.
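    The raster premise above, where x, y coordinates are implicit in array position and each thematic layer is a Boolean plane combined cell by cell, can be sketched with NumPy. The layers (`urban`, `flood`) are hypothetical illustrations, not data from the paper.

```python
import numpy as np

shape = (4, 5)                                 # a tiny raster grid
urban = np.zeros(shape, dtype=bool)
urban[1:3, 1:4] = True                         # layer from a geocoded data set
flood = np.zeros(shape, dtype=bool)
flood[2:, :] = True                            # layer from remote-sensed imagery

# comparing data sets is elementwise Boolean algebra on the stacked layers;
# the row/column index of each cell is its (y, x) location
urban_at_risk = urban & flood
print(np.argwhere(urban_at_risk))              # grid cells present in both layers
```

    Stacking many such planes gives the "Boolean layers in a three-dimensional data space" of the abstract, and resampling the raster handles variable scales.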

  14. Regulatory analysis on criteria for the release of patients administered radioactive material. Final report

    SciTech Connect

    Schneider, S.; McGuire, S.A.

    1997-02-01

    This regulatory analysis was developed to respond to three petitions for rulemaking to amend 10 CFR parts 20 and 35 regarding release of patients administered radioactive material. The petitions requested revision of these regulations to remove the ambiguity that existed between the 1-millisievert (0.1-rem) total effective dose equivalent (TEDE) public dose limit in Part 20, adopted in 1991, and the activity-based release limit in 10 CFR 35.75 that, in some instances, would permit release of individuals in excess of the current public dose limit. Three alternatives for resolution of the petitions were evaluated. Under Alternative 1, NRC would amend its patient release criteria in 10 CFR 35.75 to match the annual public dose limit in Part 20 of 1 millisievert (0.1 rem) TEDE. Alternative 2 would maintain the status quo of using the activity-based release criteria currently found in 10 CFR 35.75. Under Alternative 3, the NRC would revise the release criteria in 10 CFR 35.75 to specify a dose limit of 5 millisieverts (0.5 rem) TEDE.

  15. Microscopy image segmentation tool: Robust image data analysis

    SciTech Connect

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K.

    2014-03-15

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  16. Microscopy image segmentation tool: robust image data analysis.

    PubMed

    Valmianski, Ilya; Monton, Carlos; Schuller, Ivan K

    2014-03-01

    We present a software package called Microscopy Image Segmentation Tool (MIST). MIST is designed for analysis of microscopy images which contain large collections of small regions of interest (ROIs). Originally developed for analysis of porous anodic alumina scanning electron images, MIST capabilities have been expanded to allow use in a large variety of problems including analysis of biological tissue, inorganic and organic film grain structure, as well as nano- and meso-scopic structures. MIST provides a robust segmentation algorithm for the ROIs, includes many useful analysis capabilities, and is highly flexible allowing incorporation of specialized user developed analysis. We describe the unique advantages MIST has over existing analysis software. In addition, we present a number of diverse applications to scanning electron microscopy, atomic force microscopy, magnetic force microscopy, scanning tunneling microscopy, and fluorescent confocal laser scanning microscopy.

  17. Imaging-based enrichment criteria using deep learning algorithms for efficient clinical trials in mild cognitive impairment.

    PubMed

    Ithapu, Vamsi K; Singh, Vikas; Okonkwo, Ozioma C; Chappell, Richard J; Dowling, N Maritza; Johnson, Sterling C

    2015-12-01

    The mild cognitive impairment (MCI) stage of Alzheimer's disease (AD) may be optimal for clinical trials to test potential treatments for preventing or delaying decline to dementia. However, MCI is heterogeneous in that not all cases progress to dementia within the time frame of a trial and some may not have underlying AD pathology. Identifying those MCIs who are most likely to decline during a trial and thus most likely to benefit from treatment will improve trial efficiency and power to detect treatment effects. To this end, using multimodal, imaging-derived, inclusion criteria may be especially beneficial. Here, we present a novel multimodal imaging marker that predicts future cognitive and neural decline from [F-18]fluorodeoxyglucose positron emission tomography (PET), amyloid florbetapir PET, and structural magnetic resonance imaging, based on a new deep learning algorithm (randomized denoising autoencoder marker, rDAm). Using ADNI2 MCI data, we show that using rDAm as a trial enrichment criterion reduces the required sample estimates by at least five times compared with the no-enrichment regime and leads to smaller trials with high statistical power, compared with existing methods.

  18. Regulatory analysis on criteria for the release of patients administered radioactive material

    SciTech Connect

    Schneider, S.; McGuire, S.A.; Behling, U.H.; Behling, K.; Goldin, D.

    1994-05-01

    The Nuclear Regulatory Commission (NRC) has received two petitions to amend its regulations in 10 CFR Parts 20 and 35 as they apply to doses received by members of the public exposed to patients released from a hospital after they have been administered radioactive material. While the two petitions are not identical they both request that the NRC establish a dose limit of 5 millisieverts (0.5 rem) per year for individuals exposed to patients who have been administered radioactive materials. This Regulatory Analysis evaluates three alternatives. Alternative 1 is for the NRC to amend its patient release criteria in 10 CFR 35.75 to use the more stringent dose limit of 1 millisievert per year in 10 CFR 20.1301(a) for its patient release criteria. Alternative 2 is for the NRC to continue using the existing patient release criteria in 10 CFR 35.75 of 1,110 megabecquerels of activity or a dose rate at one meter from the patient of 0.05 millisievert per hour. Alternative 3 is for the NRC to amend the patient release criteria in 10 CFR 35.75 to specify a dose limit of 5 millisieverts for patient release. The evaluation indicates that Alternative 1 would cause a prohibitively large increase in the national health care cost from retaining patients in a hospital longer and would cause significant personal and psychological costs to patients and their families. The choice of Alternatives 2 or 3 would affect only thyroid cancer patients treated with iodine-131. For those patients, Alternative 3 would result in less hospitalization than Alternative 2. Alternative 3 has a potential decrease in national health care cost of $30,000,000 per year but would increase the potential collective dose from released therapy patients by about 2,700 person-rem per year, mainly to family members.

  19. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.

  20. Image processing software for imaging spectrometry data analysis

    NASA Technical Reports Server (NTRS)

    Mazer, Alan; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-01-01

    Imaging spectrometers simultaneously collect image data in hundreds of spectral channels, from the near-UV to the IR, and can thereby provide direct surface materials identification by means resembling laboratory reflectance spectroscopy. Attention is presently given to a software system, the Spectral Analysis Manager (SPAM) for the analysis of imaging spectrometer data. SPAM requires only modest computational resources and is composed of one main routine and a set of subroutine libraries. Additions and modifications are relatively easy, and special-purpose algorithms have been incorporated that are tailored to geological applications.

  1. Image processing software for imaging spectrometry data analysis

    NASA Astrophysics Data System (ADS)

    Mazer, Alan; Martin, Miki; Lee, Meemong; Solomon, Jerry E.

    1988-02-01

    Imaging spectrometers simultaneously collect image data in hundreds of spectral channels, from the near-UV to the IR, and can thereby provide direct surface materials identification by means resembling laboratory reflectance spectroscopy. Attention is presently given to a software system, the Spectral Analysis Manager (SPAM) for the analysis of imaging spectrometer data. SPAM requires only modest computational resources and is composed of one main routine and a set of subroutine libraries. Additions and modifications are relatively easy, and special-purpose algorithms have been incorporated that are tailored to geological applications.

  2. SU-E-J-27: Appropriateness Criteria for Deformable Image Registration and Dose Propagation

    SciTech Connect

    Papanikolaou, P; Tuohy, Rachel; Mavroidis, P; Eng, T; Gutierrez, A; Stathakis, S

    2014-06-01

    Purpose: Several commercial software packages have recently been released that allow the user to apply deformable registration algorithms (DRA) for image fusion and dose propagation. Although the idea of anatomically tracking the daily patient dose in the context of adaptive radiotherapy, or merely adding the dose from prior treatment to the current one, is very intuitive, the accuracy and applicability of such algorithms need to be investigated, as they remain somewhat subjective. In our study, we used true anatomical data in which we introduced changes in the density, volume and location of segmented structures to test the DRA for its sensitivity and accuracy. Methods: The CT scan of a prostate patient was selected for this study. The CT images were first segmented to define structures such as the PTV, bladder, rectum, intestines and pelvic bone anatomy. To perform our study, we introduced anatomical changes in the reference patient image set in three different ways: (i) we kept the segmented volumes constant and changed the density of rectum and bladder in increments of 5%; (ii) we changed the volume of rectum and bladder in increments of 5%; and (iii) we kept the segmented volumes constant but changed their location by moving their COM in increments of 3 mm. Using the Velocity software, we evaluated the accuracy of the DRA for each incremental change in all three scenarios. Results: The DRA performs reasonably well when the differential density difference against the background is more than 5%. For the volume change study, the DRA results became unreliable for relative volume changes greater than 10%. Finally, for the location study, the DRA performance was acceptable for shifts below 9 mm. Conclusion: Site-specific and patient-specific QA for DRA is an important step in evaluating such algorithms prior to their use for dose propagation.

  3. Multi-criteria analysis on how to select solar radiation hydrogen production system

    NASA Astrophysics Data System (ADS)

    Badea, G.; Naghiu, G. S.; Felseghi, R.-A.; Rǎboacǎ, S.; Aşchilean, I.; Giurca, I.

    2015-12-01

    The purpose of this article is to present a method of selecting hydrogen-production systems using the electric power obtained in photovoltaic systems, and as a selecting method, we suggest the use of the Advanced Multi-Criteria Analysis based on the FRISCO formula. According to the case study on how to select the solar radiation hydrogen production system, the most convenient alternative is the alternative A4, namely the technical solution involving a hydrogen production system based on the electrolysis of water vapor obtained with concentrated solar thermal systems and electrical power obtained using concentrating photovoltaic systems.

  4. Multi-criteria analysis on how to select solar radiation hydrogen production system

    SciTech Connect

    Badea, G.; Naghiu, G. S.; Felseghi, R.-A.; Giurca, I.; Răboacă, S.; Aşchilean, I.

    2015-12-23

    The purpose of this article is to present a method of selecting hydrogen-production systems using the electric power obtained in photovoltaic systems, and as a selecting method, we suggest the use of the Advanced Multi-Criteria Analysis based on the FRISCO formula. According to the case study on how to select the solar radiation hydrogen production system, the most convenient alternative is the alternative A4, namely the technical solution involving a hydrogen production system based on the electrolysis of water vapor obtained with concentrated solar thermal systems and electrical power obtained using concentrating photovoltaic systems.

  5. ALARA Analysis of Radiological Control Criteria Associated with Alternatives for Disposal of Hazardous Wastes

    SciTech Connect

    Aaberg, Rosanne L.; Bilyard, Gordon R.; Branch, Kristi M.; Lavender, Jay C.; Miller, Peter L.

    2002-05-15

    This ALARA analysis of Radiological Control Criteria (RCC) considers alternatives to continued storage of certain DOE mixed wastes. It also considers the option of treating hazardous wastes generated by DOE facilities, which have a very low concentration of radionuclide contaminants, as purely hazardous waste. Alternative allowable contaminant levels examined correspond to doses to an individual ranging from 0.01 mrem/yr to 10 to 20 mrem/yr. Generic waste inventory data and radionuclide source terms are used in the assessment. Economic issues, potential health and safety issues, and qualitative factors relating to the use of RCCs are considered.

  6. Image registration with uncertainty analysis

    DOEpatents

    Simonson, Katherine M [Cedar Crest, NM]

    2011-03-22

    In an image registration method, edges are detected in a first image and a second image. A percentage of edge pixels in a subset of the second image that are also edges in the first image shifted by a translation is calculated. A best registration point is calculated based on a maximum percentage of edges matched. In a predefined search region, all registration points other than the best registration point are identified that are not significantly worse than the best registration point according to a predetermined statistical criterion.
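    The scheme as summarized in the abstract can be sketched directly: count the fraction of edge pixels that coincide under each candidate translation in a search region and keep the best. The binary edge maps below are illustrative; real use would start from an edge detector such as Sobel or Canny, and the patent's statistical criterion for "not significantly worse" registration points is omitted.

```python
import numpy as np

def best_shift(edges_a, edges_b, search=3):
    """Return the (dy, dx) translation maximising the matched-edge fraction."""
    best, best_score = (0, 0), -1.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(edges_b, dy, axis=0), dx, axis=1)
            # percentage of second-image edge pixels that are also first-image edges
            score = (edges_a & shifted).sum() / max(edges_b.sum(), 1)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best, best_score

a = np.zeros((16, 16), dtype=bool)
a[5, 2:10] = True                                # an edge in the first image
b = np.roll(np.roll(a, -2, axis=0), 1, axis=1)   # same edge, offset in the second
shift, score = best_shift(a, b)
print(shift, score)                              # prints (2, -1) 1.0
```

    In the patented method, all shifts whose scores are statistically indistinguishable from the best one would also be reported, quantifying registration uncertainty.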

  7. Hyperspectral image classification using functional data analysis.

    PubMed

    Li, Hong; Xiao, Guangrun; Xia, Tian; Tang, Y Y; Li, Luoqing

    2014-09-01

    The large number of spectral bands acquired by hyperspectral imaging sensors allows us to better distinguish many subtle objects and materials. Unlike other classical hyperspectral image classification methods in the multivariate analysis framework, in this paper, a novel method using functional data analysis (FDA) for accurate classification of hyperspectral images has been proposed. The central idea of FDA is to treat multivariate data as continuous functions. From this perspective, the spectral curve of each pixel in the hyperspectral images is naturally viewed as a function. This can be beneficial for making full use of the abundant spectral information. The relevance between adjacent pixel elements in the hyperspectral images can also be utilized reasonably. Functional principal component analysis is applied to solve the classification problem of these functions. Experimental results on three hyperspectral images show that the proposed method can achieve higher classification accuracies in comparison to some state-of-the-art hyperspectral image classification methods.
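    The FDA idea above can be sketched as follows: treat each pixel's spectrum as a smooth function, represent it by coefficients in a smooth basis, and apply principal component analysis to those coefficients. The synthetic two-class spectra and the simple polynomial basis are hypothetical stand-ins; the paper's pipeline and data differ.

```python
import numpy as np

rng = np.random.default_rng(1)
bands = np.linspace(0, 1, 100)                 # 100 spectral bands

# two synthetic classes with different smooth spectral shapes, plus noise
def make_class(shape_fn, n):
    return np.stack([shape_fn(bands) + rng.normal(0, 0.05, bands.size)
                     for _ in range(n)])

grass = make_class(lambda t: np.sin(2 * np.pi * t), 30)
soil = make_class(lambda t: 0.5 + 0.3 * t, 30)
X = np.vstack([grass, soil])

# "functional" step: project each spectrum onto a smooth polynomial basis
basis = np.vander(bands, 6, increasing=True)   # 1, t, ..., t^5
coefs = np.linalg.lstsq(basis, X.T, rcond=None)[0].T

# principal components of the coefficient representation
coefs_c = coefs - coefs.mean(axis=0)
_, _, vt = np.linalg.svd(coefs_c, full_matrices=False)
scores = coefs_c @ vt[:2].T                    # first two functional PCs

# the two classes separate along the leading component
print(scores[:30, 0].mean(), scores[30:, 0].mean())
```

    Smoothing into a basis before PCA is what lets the method exploit the continuity of the spectral curve rather than treating the bands as unordered multivariate coordinates.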

  8. Multi-criteria decision analysis for the optimal management of nitrate contamination of aquifers.

    PubMed

    Almasri, Mohammad N; Kaluarachchi, Jagath J

    2005-03-01

    We present an integrated methodology for the optimal management of nitrate contamination of ground water combining environmental assessment and economic cost evaluation through multi-criteria decision analysis. The proposed methodology incorporates an integrated physical modeling framework accounting for on-ground nitrogen loading and losses, soil nitrogen dynamics, and fate and transport of nitrate in ground water to compute the sustainable on-ground nitrogen loading such that the maximum contaminant level is not violated. A number of protection alternatives to stipulate the predicted sustainable on-ground nitrogen loading are evaluated using the decision analysis that employs the importance order of criteria approach for ranking and selection of the protection alternatives. The methodology was successfully demonstrated for the Sumas-Blaine aquifer in Washington State. The results showed the importance of using this integrated approach which predicts the sustainable on-ground nitrogen loadings and provides an insight into the economic consequences generated in satisfying the environmental constraints. The results also show that the proposed decision analysis framework, within certain limitations, is effective when selecting alternatives with competing demands.

  9. Multi-attribute criteria applied to electric generation energy system analysis LDRD.

    SciTech Connect

    Kuswa, Glenn W.; Tsao, Jeffrey Yeenien; Drennen, Thomas E.; Zuffranieri, Jason V.; Paananen, Orman Henrie; Jones, Scott A.; Ortner, Juergen G.; Brewer, Jeffrey D.; Valdez, Maximo M.

    2005-10-01

    This report began with a Laboratory-Directed Research and Development (LDRD) project to improve Sandia National Laboratories multidisciplinary capabilities in energy systems analysis. The aim is to understand how various electricity generating options can best serve needs in the United States. The initial product is documented in a series of white papers that span a broad range of topics, including the successes and failures of past modeling studies, sustainability, oil dependence, energy security, and nuclear power. Summaries of these projects are included here. These projects have provided a background and discussion framework for the Energy Systems Analysis LDRD team to carry out an inter-comparison of many of the commonly available electric power sources in present use, comparisons of those options, and efforts needed to realize progress towards those options. A computer aid has been developed to compare various options based on cost and other attributes such as technological, social, and policy constraints. The Energy Systems Analysis team has developed a multi-criteria framework that will allow comparison of energy options with a set of metrics that can be used across all technologies. This report discusses several evaluation techniques and introduces the set of criteria developed for this LDRD.

  10. Image and flow cytometry: companion techniques for adherent and non-adherent cell analysis and sorting.

    PubMed

    Métézeau, P

    1993-01-01

    Flow cytometry (FCM) is an analytical and preparative technique, whereas image analysis has traditionally been applied only to cell analysis. Recently, image analysis has been adapted as a preparative method using a new technique: image cytometry for analysis and sorting (ICAS). FCM and ICAS are complementary. Flow cytometry allows rapid, quantitative and precise study of fluorescence and light scattering in a large number of cells in suspension, while ICAS analyses fewer cells (adherent cells or tissue) on the basis of fluorescence, morphology and size. ICAS can use these criteria to destroy unwanted cells and hence sort selected cells. ICAS can also be used for confocal microscopy and laser surgery.

  11. A Mathematical Framework for Image Analysis

    DTIC Science & Technology

    1991-08-01

    The results reported here were derived from the research project 'A Mathematical Framework for Image Analysis' supported by the Office of Naval Research, contract N00014-88-K-0289 to Brown University. A common theme for the work reported is the use of probabilistic methods for problems in image analysis and image reconstruction. Five areas of research are described: rigid body recognition using a decision tree/combinatorial approach; nonrigid

  12. Image Reconstruction Using Analysis Model Prior

    PubMed Central

    Han, Yu; Du, Huiqian; Lam, Fan; Mei, Wenbo; Fang, Liping

    2016-01-01

    The analysis model has been previously exploited as an alternative to the classical sparse synthesis model for designing image reconstruction methods. Applying a suitable analysis operator on the image of interest yields a cosparse outcome which enables us to reconstruct the image from undersampled data. In this work, we introduce additional prior in the analysis context and theoretically study the uniqueness issues in terms of analysis operators in general position and the specific 2D finite difference operator. We establish bounds on the minimum measurement numbers which are lower than those in cases without using analysis model prior. Based on the idea of iterative cosupport detection (ICD), we develop a novel image reconstruction model and an effective algorithm, achieving significantly better reconstruction performance. Simulation results on synthetic and practical magnetic resonance (MR) images are also shown to illustrate our theoretical claims. PMID:27379171
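    The analysis (cosparse) model mentioned above can be made concrete with the 2D finite-difference operator the paper studies: applying it to a piecewise-constant image yields a representation that is mostly zero, and the set of zero entries is the cosupport. This is an illustrative sketch of the model only, not of the paper's ICD reconstruction algorithm.

```python
import numpy as np

img = np.zeros((8, 8))
img[2:6, 3:7] = 1.0                     # piecewise-constant: a flat square

# 2D finite-difference analysis operator: horizontal and vertical
# neighbour differences, stacked into one analysis representation
dx = np.diff(img, axis=1)
dy = np.diff(img, axis=0)
analysis = np.concatenate([dx.ravel(), dy.ravel()])

cosupport = np.isclose(analysis, 0)     # indices where the analysis image is zero
print(f"{cosupport.sum()} of {analysis.size} analysis coefficients are zero")
```

    Reconstruction from undersampled data then seeks an image whose finite differences vanish on a large cosupport, which is what makes the model a useful prior.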

  13. Coastal flooding as a parameter in multi-criteria analysis for industrial site selection

    NASA Astrophysics Data System (ADS)

    Christina, C.; Memos, C.; Diakoulaki, D.

    2014-12-01

    Natural hazards can trigger major industrial accidents, which apart from affecting industrial installations may cause a series of accidents with serious impacts on human health and the environment far beyond the site boundary. Such accidents, also called Na-Tech (natural-technical) accidents, deserve particular attention since they can cause releases of hazardous substances, possibly resulting in severe environmental pollution, explosions and/or fires. Many kinds of natural events, or in general terms natural causes, have led to industrial accidents: landslides, hurricanes, high winds, tsunamis, lightning, cold/hot temperatures, floods, heavy rains, etc. The scope of this paper is to examine coastal flooding as a cause of industrial accidents, such as the nuclear disaster in Fukushima, Japan, and the critical role of this parameter in industrial site selection. Land use planning is a complex procedure that requires multi-criteria decision analysis involving economic, environmental and social parameters. In this context, the parameter of natural hazard occurrence, such as coastal flooding, should be set by the decision makers for industrial site selection. In this paper we evaluate the influence that the parameter of accident risk triggered by coastal flooding has on the outcome of a multi-criteria decision analysis for industrial spatial planning. The flooding is analysed in the context of both sea- and inland-induced events.

  14. On the predictive information criteria for model determination in seismic hazard analysis

    NASA Astrophysics Data System (ADS)

    Varini, Elisa; Rotondi, Renata

    2016-04-01

    estimate, but it is hardly applicable to data which are not independent given the parameters (Watanabe, J. Mach. Learn. Res., 2010). A solution is given by the Ando and Tsay criterion, in which the joint density may be decomposed into the product of the conditional densities (Ando and Tsay, Int. J. Forecast., 2010). The above-mentioned criteria are global summary measures of model performance, but a more detailed analysis could be required to discover the reasons for poor global performance. In this latter case, a retrospective predictive analysis is performed on each individual observation. In this study we performed the Bayesian analysis of Italian data sets by four versions of a long-term hazard model known as the stress release model (Vere-Jones, J. Physics Earth, 1978; Bebbington and Harte, Geophys. J. Int., 2003; Varini and Rotondi, Environ. Ecol. Stat., 2015). We then illustrate their performance as evaluated by the Bayes factor, predictive information criteria and retrospective predictive analysis.

  15. Selecting Essential Information for Biosurveillance—A Multi-Criteria Decision Analysis

    PubMed Central

    Generous, Nicholas; Margevicius, Kristen J.; Taylor-McCabe, Kirsten J.; Brown, Mac; Daniel, W. Brent; Castro, Lauren; Hengartner, Andrea; Deshpande, Alina

    2014-01-01

    The National Strategy for Biosurveillance defines biosurveillance as “the process of gathering, integrating, interpreting, and communicating essential information related to all-hazards threats or disease activity affecting human, animal, or plant health to achieve early detection and warning, contribute to overall situational awareness of the health aspects of an incident, and to enable better decision-making at all levels.” However, the strategy does not specify how “essential information” is to be identified and integrated into the current biosurveillance enterprise, or what metrics qualify information as being “essential”. The question of data stream identification and selection requires a structured methodology that can systematically evaluate the tradeoffs between the many criteria that need to be taken into account. Multi-Attribute Utility Theory, a type of multi-criteria decision analysis, can provide a well-defined, structured approach that can offer solutions to this problem. While the use of Multi-Attribute Utility Theory as a practical method to apply formal scientific decision theoretical approaches to complex, multi-criteria problems has been demonstrated in a variety of fields, this method has never been applied to decision support in biosurveillance. We have developed a formalized decision support analytic framework that can facilitate identification of “essential information” for use in biosurveillance systems or processes, and we offer this framework to the global BSV community as a tool for optimizing the BSV enterprise. To demonstrate utility, we applied the framework to the problem of evaluating data streams for use in an integrated global infectious disease surveillance system. PMID:24489748

  16. Selecting essential information for biosurveillance--a multi-criteria decision analysis.

    PubMed

    Generous, Nicholas; Margevicius, Kristen J; Taylor-McCabe, Kirsten J; Brown, Mac; Daniel, W Brent; Castro, Lauren; Hengartner, Andrea; Deshpande, Alina

    2014-01-01

    The National Strategy for Biosurveillance defines biosurveillance as "the process of gathering, integrating, interpreting, and communicating essential information related to all-hazards threats or disease activity affecting human, animal, or plant health to achieve early detection and warning, contribute to overall situational awareness of the health aspects of an incident, and to enable better decision-making at all levels." However, the strategy does not specify how "essential information" is to be identified and integrated into the current biosurveillance enterprise, or what metrics qualify information as being "essential". The question of data stream identification and selection requires a structured methodology that can systematically evaluate the tradeoffs between the many criteria that need to be taken into account. Multi-Attribute Utility Theory, a type of multi-criteria decision analysis, can provide a well-defined, structured approach that can offer solutions to this problem. While the use of Multi-Attribute Utility Theory as a practical method to apply formal scientific decision theoretical approaches to complex, multi-criteria problems has been demonstrated in a variety of fields, this method has never been applied to decision support in biosurveillance. We have developed a formalized decision support analytic framework that can facilitate identification of "essential information" for use in biosurveillance systems or processes, and we offer this framework to the global BSV community as a tool for optimizing the BSV enterprise. To demonstrate utility, we applied the framework to the problem of evaluating data streams for use in an integrated global infectious disease surveillance system.
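A minimal sketch of the Multi-Attribute Utility Theory aggregation described above: each data stream receives a utility per criterion on a common scale, and a weighted additive utility ranks the streams. The criteria names, weights, and utilities below are hypothetical, not values from the study:

```python
import numpy as np

# Hypothetical criteria for evaluating a biosurveillance data stream
# (names and weights are illustrative, not from the study).
criteria = ["timeliness", "coverage", "specificity", "cost"]
weights = np.array([0.35, 0.30, 0.20, 0.15])   # must sum to 1

# Per-criterion utilities on a common 0-1 scale (higher is better);
# "cost" has already been converted so that cheaper -> higher utility.
streams = {
    "clinic_reports":    np.array([0.6, 0.9, 0.8, 0.7]),
    "news_scraping":     np.array([0.9, 0.7, 0.4, 0.9]),
    "lab_confirmations": np.array([0.3, 0.6, 1.0, 0.5]),
}

def maut_score(utilities, weights):
    """Additive multi-attribute utility: weighted sum of scaled utilities."""
    return float(weights @ utilities)

scores = {name: maut_score(u, weights) for name, u in streams.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

In a real application the utility functions and weights would be elicited from stakeholders; the additive form assumes the criteria are preferentially independent.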

  17. Spatially explicit multi-criteria decision analysis for managing vector-borne diseases

    PubMed Central

    2011-01-01

    The complex epidemiology of vector-borne diseases creates significant challenges in the design and delivery of prevention and control strategies, especially in light of rapid social and environmental changes. Spatial models for predicting disease risk based on environmental factors such as climate and landscape have been developed for a number of important vector-borne diseases. The resulting risk maps have proven value for highlighting areas for targeting public health programs. However, these methods generally only offer technical information on the spatial distribution of disease risk itself, which may be incomplete for making decisions in a complex situation. In prioritizing surveillance and intervention strategies, decision-makers often also need to consider spatially explicit information on other important dimensions, such as the regional specificity of public acceptance, population vulnerability, resource availability, intervention effectiveness, and land use. There is a need for a unified strategy for supporting public health decision making that integrates available data for assessing spatially explicit disease risk, with other criteria, to implement effective prevention and control strategies. Multi-criteria decision analysis (MCDA) is a decision support tool that allows for the consideration of diverse quantitative and qualitative criteria using both data-driven and qualitative indicators for evaluating alternative strategies with transparency and stakeholder participation. Here we propose a MCDA-based approach to the development of geospatial models and spatially explicit decision support tools for the management of vector-borne diseases. We describe the conceptual framework that MCDA offers as well as technical considerations, approaches to implementation and expected outcomes. 
We conclude that MCDA is a powerful tool that offers tremendous potential for use in public health decision-making in general and vector-borne disease management in particular.

  18. Assessing Interventions to Manage West Nile Virus Using Multi-Criteria Decision Analysis with Risk Scenarios

    PubMed Central

    Hongoh, Valerie; Campagna, Céline; Panic, Mirna; Samuel, Onil; Gosselin, Pierre; Waaub, Jean-Philippe; Ravel, André; Samoura, Karim; Michel, Pascal

    2016-01-01

    The recent emergence of West Nile virus (WNV) in North America highlights vulnerability to climate-sensitive diseases and stresses the importance of preventive efforts to reduce their public health impact. Effective prevention involves reducing environmental risk of exposure and increasing adoption of preventive behaviours, both of which depend on knowledge and acceptance of such measures. When making operational decisions about disease prevention and control, public health must take into account a wide range of operational, environmental, social and economic considerations in addition to intervention effectiveness. The current study aimed to identify, assess and rank possible risk reduction measures taking into account a broad set of criteria and perspectives applicable to the management of WNV in Quebec under increasing transmission risk scenarios, some of which may be related to ongoing warming in higher-latitude regions. A participatory approach was used to collect information on categories of concern to relevant stakeholders with respect to WNV prevention and control. Multi-criteria decision analysis was applied to examine stakeholder perspectives and their effect on strategy rankings under increasing transmission risk scenarios. Twenty-three preventive interventions were retained for evaluation using eighteen criteria identified by stakeholders. Combined evaluations revealed that, at the individual level, inspecting window screen integrity, wearing light-colored, long clothing, eliminating peridomestic larval sites and reducing outdoor activities at peak times were top interventions under six WNV transmission scenarios. At the regional level, the use of larvicides was a preferred strategy in five out of six scenarios, while use of adulticides and dissemination of sterile male mosquitoes were found to be among the least favoured interventions in almost all scenarios. Our findings suggest that continued public health efforts aimed at reinforcing individual

  19. Item Response Theory Analysis of DSM-IV Cannabis Abuse and Dependence Criteria in Adolescents

    ERIC Educational Resources Information Center

    Hartman, Christie A.; Gelhorn, Heather; Crowley, Thomas J.; Sakai, Joseph T.; Stallings, Michael; Young, Susan E.; Rhee, Soo Hyun; Corley, Robin; Hewitt, John K.; Hopfer, Christian J.

    2008-01-01

    A study examining the DSM-IV criteria for cannabis abuse and dependence among adolescents was conducted. Results indicate that the abuse and dependence criteria were not found to reflect different levels of severity of cannabis use.

  20. Comparative Analysis of Thermoeconomic Evaluation Criteria for an Actual Heat Engine

    NASA Astrophysics Data System (ADS)

    Özel, Gülcan; Açıkkalp, Emin; Savaş, Ahmet Fevzi; Yamık, Hasan

    2016-07-01

    In the present study, an actual heat engine is investigated by using different thermoeconomic evaluation criteria from the literature. A criterion that has not been investigated in detail is considered; it is called the ecologico-economic criterion (F_{EC}) and is defined as the difference between the cost of the system's power and the cost of its exergy destruction rate. All four criteria are applied to an irreversible Carnot heat engine, results are presented numerically, and some suggestions are made.

  1. Secure thin client architecture for DICOM image analysis

    NASA Astrophysics Data System (ADS)

    Mogatala, Harsha V. R.; Gallet, Jacqueline

    2005-04-01

    This paper presents a concept of Secure Thin Client (STC) Architecture for Digital Imaging and Communications in Medicine (DICOM) image analysis over the Internet. The STC Architecture provides in-depth analysis and design of customized reports for DICOM images using drag-and-drop and data warehouse technology. Using a personal computer and a common set of browsing software, STC can be used for analyzing and reporting detailed patient information, type of examinations, date, Computed Tomography (CT) dose index, and other relevant information stored within the images' header files as well as in the hospital databases. The STC Architecture is a three-tier architecture. The first tier consists of a drag-and-drop web-based interface and web server, which provides customized analysis and reporting ability to the users. The second tier consists of an online analytical processing (OLAP) server and database system, which serves fast, real-time, aggregated multi-dimensional data using OLAP technology. The third tier consists of a smart-algorithm-based software program which extracts DICOM tags from CT images in this particular application, irrespective of CT vendor, and transfers these tags into a secure database system. This architecture provides the Winnipeg Regional Health Authority (WRHA) with quality indicators for CT examinations in the hospitals. It also provides health care professionals with an analytical tool to optimize radiation dose and image quality parameters. The information is provided to the user by way of a secure socket layer (SSL) and role-based security criteria over the Internet. Although this particular application has been developed for the WRHA, this paper also discusses the effort to extend the Architecture to other hospitals in the region. Any DICOM tag from any imaging modality could be tracked with this software.

  2. Multi-criteria decision analysis for waste management in Saharawi refugee camps

    SciTech Connect

    Garfi, M. Tondelli, S.; Bonoli, A.

    2009-10-15

    The aim of this paper is to compare different waste management solutions in the Saharawi refugee camps (Algeria) and to test the feasibility of a decision-making method developed to be applied in particular conditions in which environmental and social aspects must be considered. It is based on multi-criteria analysis, and in particular on the analytic hierarchy process (AHP), a mathematical technique for multi-criteria decision making (Saaty, T.L., 1980. The Analytic Hierarchy Process. McGraw-Hill, New York, USA; Saaty, T.L., 1990. How to Make a Decision: The Analytic Hierarchy Process. European Journal of Operational Research; Saaty, T.L., 1994. Decision Making for Leaders: The Analytic Hierarchy Process in a Complex World. RWS Publications, Pittsburgh, PA), and on a participatory approach focusing on the local community's concerns. The research compares four different waste collection and management alternatives: waste collection by using three tipper trucks, disposal and burning in an open area; waste collection by using seven dumpers and disposal in a landfill; waste collection by using seven dumpers and three tipper trucks and disposal in a landfill; waste collection by using three tipper trucks and disposal in a landfill. The results show that the second and third solutions provide better scenarios for waste management. Furthermore, the discussion of the results points out the multidisciplinarity of the approach and the equilibrium between social, environmental and technical impacts. This is a very important aspect in a humanitarian and environmental project, confirming the appropriateness of the chosen method.

  3. Multi-criteria decision analysis for waste management in Saharawi refugee camps.

    PubMed

    Garfì, M; Tondelli, S; Bonoli, A

    2009-10-01

    The aim of this paper is to compare different waste management solutions in the Saharawi refugee camps (Algeria) and to test the feasibility of a decision-making method developed to be applied in particular conditions in which environmental and social aspects must be considered. It is based on multi-criteria analysis, and in particular on the analytic hierarchy process (AHP), a mathematical technique for multi-criteria decision making (Saaty, T.L., 1980. The Analytic Hierarchy Process. McGraw-Hill, New York, USA; Saaty, T.L., 1990. How to Make a Decision: The Analytic Hierarchy Process. European Journal of Operational Research; Saaty, T.L., 1994. Decision Making for Leaders: The Analytic Hierarchy Process in a Complex World. RWS Publications, Pittsburgh, PA), and on a participatory approach focusing on the local community's concerns. The research compares four different waste collection and management alternatives: waste collection by using three tipper trucks, disposal and burning in an open area; waste collection by using seven dumpers and disposal in a landfill; waste collection by using seven dumpers and three tipper trucks and disposal in a landfill; waste collection by using three tipper trucks and disposal in a landfill. The results show that the second and third solutions provide better scenarios for waste management. Furthermore, the discussion of the results points out the multidisciplinarity of the approach and the equilibrium between social, environmental and technical impacts. This is a very important aspect in a humanitarian and environmental project, confirming the appropriateness of the chosen method.
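The AHP step at the core of this method derives criteria weights as the principal eigenvector of a pairwise-comparison matrix, with a consistency check on the judgements. A sketch with a hypothetical 3x3 matrix (not the study's actual judgements):

```python
import numpy as np

# Illustrative AHP pairwise-comparison matrix for three criteria
# (e.g. environmental, social, technical); values are hypothetical
# Saaty-scale judgements, not those elicited in the study.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

def ahp_weights(A):
    """Priority vector = normalized principal eigenvector of A."""
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    return w / w.sum()

def consistency_ratio(A, w, ri=0.58):
    """Saaty's consistency ratio for a 3x3 matrix (random index RI = 0.58)."""
    lam = (A @ w / w).mean()            # lambda_max estimate
    ci = (lam - len(w)) / (len(w) - 1)  # consistency index
    return ci / ri

w = ahp_weights(A)
print(np.round(w, 3), round(consistency_ratio(A, w), 3))
```

Judgements with a consistency ratio above about 0.1 are usually sent back to the panel for revision.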

  4. A multi-criteria decision analysis assessment of waste paper management options

    SciTech Connect

    Hanan, Deirdre; Burnley, Stephen; Cooke, David

    2013-03-15

    Highlights: ► Isolated communities have particular problems in terms of waste management. ► An MCDA tool allowed a group of non-experts to evaluate waste management options. ► The group preferred local waste management solutions to export to the mainland. ► Gasification of paper was the preferred option followed by recycling. ► The group concluded that they could be involved in the decision making process. - Abstract: The use of Multi-criteria Decision Analysis (MCDA) was investigated in an exercise using a panel of local residents and stakeholders to assess the options for managing waste paper on the Isle of Wight. Seven recycling, recovery and disposal options were considered by the panel who evaluated each option against seven environmental, financial and social criteria. The panel preferred options where the waste was managed on the island with gasification and recycling achieving the highest scores. Exporting the waste to the English mainland for incineration or landfill proved to be the least preferred options. This research has demonstrated that MCDA is an effective way of involving community groups in waste management decision making.

  5. Spatial multi-criteria decision analysis to predict suitability for African swine fever endemicity in Africa

    PubMed Central

    2014-01-01

    Background African swine fever (ASF) is endemic in several countries of Africa and may pose a risk to all pig producing areas on the continent. Official ASF reporting is often rare and there remains limited awareness of the continent-wide distribution of the disease. In the absence of accurate ASF outbreak data and few quantitative studies on the epidemiology of the disease in Africa, we used spatial multi-criteria decision analysis (MCDA) to derive predictions of the continental distribution of suitability for ASF persistence in domestic pig populations as part of sylvatic or domestic transmission cycles. In order to incorporate the uncertainty in the relative importance of different criteria in defining suitability, we modelled decisions within the MCDA framework using a stochastic approach. The predictive performance of suitability estimates was assessed via a partial ROC analysis using ASF outbreak data reported to the OIE since 2005. Results Outputs from the spatial MCDA indicate that large areas of sub-Saharan Africa may be suitable for ASF persistence as part of either domestic or sylvatic transmission cycles. Areas with high suitability for pig to pig transmission (‘domestic cycles’) were estimated to occur throughout sub-Saharan Africa, whilst areas with high suitability for introduction from wildlife reservoirs (‘sylvatic cycles’) were found predominantly in East, Central and Southern Africa. Based on average AUC ratios from the partial ROC analysis, the predictive ability of suitability estimates for domestic cycles alone was considerably higher than suitability estimates for sylvatic cycles alone, or domestic and sylvatic cycles in combination. Conclusions This study provides the first standardised estimates of the distribution of suitability for ASF transmission associated with domestic and sylvatic cycles in Africa. We provide further evidence for the utility of knowledge-driven risk mapping in animal health, particularly in data
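The ROC validation used above reduces, in its simplest (non-partial) form, to the Mann-Whitney statistic: the probability that a randomly chosen outbreak location receives a higher suitability score than a randomly chosen background location. A sketch with invented scores:

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """AUC via the rank-sum (Mann-Whitney) statistic: the probability that
    a randomly chosen positive site outscores a random negative site."""
    pos = np.asarray(scores_pos, float)
    neg = np.asarray(scores_neg, float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Hypothetical suitability scores at outbreak (positive) and
# background (negative) locations.
auc = roc_auc([0.9, 0.8, 0.75, 0.6], [0.7, 0.5, 0.4, 0.3, 0.2])
print(round(auc, 3))
```

The partial ROC analysis in the study restricts this comparison to a range of omission rates rather than the full curve.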

  6. A fully multiple-criteria implementation of the Sobol' method for parameter sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Rosolem, Rafael; Gupta, Hoshin V.; Shuttleworth, W. James; Zeng, Xubin; de Gonçalves, Luis Gustavo Gonçalves

    2012-04-01

    We present a novel rank-based fully multiple-criteria implementation of the Sobol' variance-based sensitivity analysis approach that implements an objective strategy to evaluate parameter sensitivity when model evaluation involves several metrics of performance. The method is superior to single-criterion approaches while avoiding the subjectivity observed in "pseudo" multiple-criteria methods. Further, it contributes to our understanding of structural characteristics of a model and simplifies parameter estimation by identifying insensitive parameters that can be fixed to default values during model calibration studies. We illustrate the approach by applying it to the problem of identifying the most influential parameters in the Simple Biosphere 3 (SiB3) model using a network of flux towers in Brazil. We find 27-31 (out of 42) parameters to be influential, most (˜78%) of which are primarily associated with physiology, soil, and carbon properties, and that uncertainties in the physiological properties of the model contribute most to total model uncertainty in regard to energy and carbon fluxes. We also find that the second most important model component contributing to the total output uncertainty varies according to the flux analyzed; whereas morphological properties play an important role in sensible heat flux, soil properties are important for latent heat flux, and carbon properties (mainly associated with the soil respiration submodel) are important for carbon flux (as expected). These distinct sensitivities emphasize the need to account for the multioutput nature of land surface models during sensitivity analysis and parameter estimation. Applied to other similar models, our approach can help to establish which soil-plant-atmosphere processes matter most in land surface models of Amazonia and thereby aid in the design of field campaigns to characterize and measure the associated parameters. The approach can also be used with other sensitivity analysis
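The variance-based Sobol' indices underlying this approach can be estimated with a pick-freeze Monte Carlo scheme. The sketch below is a generic estimator applied to a toy additive model (not the SiB3 analysis); each first-order index recovers that input's share of the output variance:

```python
import numpy as np

def sobol_first_order(f, d, n=100_000, seed=0):
    """Monte Carlo estimate of first-order Sobol' indices S_i for a model
    f acting on rows of U(0,1)^d inputs (pick-freeze scheme)."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # freeze all inputs except x_i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Additive test model: y = 4*x1 + 2*x2 + x3 (variance shares 16:4:1)
model = lambda X: 4*X[:, 0] + 2*X[:, 1] + X[:, 2]
S = sobol_first_order(model, d=3)
print(np.round(S, 2))
```

The multiple-criteria extension in the paper then combines such indices computed under several performance metrics via ranking rather than a single objective.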

  7. Optical Analysis of Microscope Images

    NASA Astrophysics Data System (ADS)

    Biles, Jonathan R.

    Microscope images were analyzed with coherent and incoherent light using analog optical techniques. These techniques were found to be useful for analyzing large numbers of nonsymbolic, statistical microscope images. In the first part, phase-coherent transparencies containing 20-100 human multiple myeloma nuclei were simultaneously photographed at 100-power magnification using high-resolution holographic film developed to high contrast. An optical transform was obtained by focusing the laser onto each nuclear image and allowing the diffracted light to propagate onto a one-dimensional photosensor array. This method reduced the data to the position of the first two intensity minima and the intensity of successive maxima. These values were utilized to estimate the four most important cancer detection clues of nuclear size, shape, darkness, and chromatin texture. In the second part, the geometric and holographic methods of phase-incoherent optical processing were investigated for pattern recognition of real-time, diffuse microscope images. The theory and implementation of these processors were discussed in view of their mutual problems of dimness, image bias, and detector resolution. The dimness problem was solved by either using a holographic correlator or a speckle-free laser microscope. The latter was built using a spinning tilted mirror which caused the speckle to change so quickly that it averaged out during the exposure. To solve the bias problem, low-image-bias templates were generated by four techniques: microphotography of samples, creation of typical shapes by a computer graphics editor, transmission holography of photoplates of samples, and spatially coherent color image bias removal. The first of these templates was used to perform correlations with bacteria images. The aperture bias was successfully removed from the correlation with a video frame subtractor.
To overcome the limited detector resolution it is necessary to discover some analog nonlinear intensity

  8. Imaging flow cytometry for phytoplankton analysis.

    PubMed

    Dashkova, Veronika; Malashenkov, Dmitry; Poulton, Nicole; Vorobjev, Ivan; Barteneva, Natasha S

    2017-01-01

    This review highlights the concepts and instrumentation of imaging flow cytometry technology and in particular its use for phytoplankton analysis. Imaging flow cytometry, a hybrid technology combining the speed and statistical capabilities of flow cytometry with the imaging features of microscopy, is rapidly advancing as a cell imaging platform that overcomes many of the limitations of current techniques and has contributed significantly to the advancement of phytoplankton analysis in recent years. This review presents the various instrumentation relevant to the field and currently used for assessment of complex phytoplankton communities' composition and abundance, size structure determination, biovolume estimation, detection of harmful algal bloom species, evaluation of viability and metabolic activity, and other applications. We also present our data on viability and metabolic assessment of Aphanizomenon sp. cyanobacteria using the ImageStream X Mark II imaging cytometer. We highlight the immense potential of imaging flow cytometry for microalgal research, but also discuss its limitations and future developments.

  9. Digital Image Analysis for DETECHIP® Code Determination

    PubMed Central

    Lyon, Marcus; Wilson, Mark V.; Rouhier, Kerry A.; Symonsbergen, David J.; Bastola, Kiran; Thapa, Ishwor; Holmes, Andrea E.

    2013-01-01

    DETECHIP® is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP® used human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP®. Both a digital camera and a flatbed desktop photo scanner were used to obtain JPEG images. Color information within these digital images was obtained through the measurement of red-green-blue (RGB) values using software such as GIMP, Photoshop and ImageJ. Several different techniques were used to evaluate these color changes. It was determined that the flatbed scanner produced the clearest and most reproducible images. Furthermore, codes obtained using a macro written for use within ImageJ showed improved consistency versus previous methods. PMID:25267940
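The RGB measurement step can be sketched as follows: average the red, green, and blue channels over a spot region and compare against the blank background, much as an ImageJ macro would report region measurements. The synthetic array below stands in for a scanned JPEG (loading a real file would use e.g. PIL or ImageJ):

```python
import numpy as np

def mean_rgb(image, box):
    """Mean red-green-blue values over a rectangular spot of an HxWx3 array."""
    r0, r1, c0, c1 = box
    spot = image[r0:r1, c0:c1, :].reshape(-1, 3)
    return spot.mean(axis=0)

# Synthetic "scanned image": one colored spot on a white background.
img = np.full((64, 64, 3), 255, dtype=np.uint8)
img[10:20, 10:20] = (200, 50, 50)         # reddish analyte spot

rgb = mean_rgb(img.astype(float), (10, 20, 10, 20))
delta = np.array([255.0, 255.0, 255.0]) - rgb   # color change vs blank
print(rgb, delta)
```

A code for the array is then built by thresholding such per-spot color changes across all wells.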

  10. Materials characterization through quantitative digital image analysis

    SciTech Connect

    J. Philliber; B. Antoun; B. Somerday; N. Yang

    2000-07-01

    A digital image analysis system has been developed to allow advanced quantitative measurement of microstructural features. This capability is maintained as part of the microscopy facility at Sandia, Livermore. The system records images digitally, eliminating the use of film. Images obtained from other sources may also be imported into the system. Subsequent digital image processing enhances image appearance through contrast and brightness adjustments. The system measures a variety of user-defined microstructural features--including area fraction, particle size and spatial distributions, grain sizes and orientations of elongated particles. These measurements are made in a semi-automatic mode through the use of macro programs and a computer-controlled translation stage. A routine has been developed to create large montages of 50+ separate images. Individual image frames are matched to the nearest pixel to create seamless montages. Results from three different studies are presented to illustrate the capabilities of the system.

  11. Imaging System and Method for Biomedical Analysis

    DTIC Science & Technology

    2013-03-11

    Excerpts from the patent text cite prior art including “Lensless wide-field fluorescent imaging on a chip using compressive decoding of sparse objects” by A. Coskun et al., 136 Analyst No. 17, pp. 3512–3518 (7 September 2011), a lensless fluorescent on-chip imaging system for analysis, and [0008] an article published by Sang Jun Moon et al., “Integrating Micro-fluidics and Lensless Imaging for Point-of-Care,” 24 Biosens

  12. Theory of Image Analysis and Recognition.

    DTIC Science & Technology

    1983-01-24

    Excerpts from the report list contributors and technical reports, including: Narendra Ahuja (image models); Ramalingam Chellappa (image models); Matti Pietikainen (texture analysis); David G. Morgenthaler (3D digital geometry); Angela Y. Wu; “…Restoration Parameter Choice: A Quantitative Guide,” TR-965, October 1980; 70. Matti Pietikainen, “On the Use of Hierarchically Computed ‘Mexican Hat’…”; 81. Matti Pietikainen and Azriel Rosenfeld, “Image Segmentation by Texture Using Pyramid Node Linking,” TR-1008, February 1981; 82. David G.

  13. Analysis of dynamic brain imaging data.

    PubMed Central

    Mitra, P P; Pesaran, B

    1999-01-01

    Modern imaging techniques for probing brain function, including functional magnetic resonance imaging, intrinsic and extrinsic contrast optical imaging, and magnetoencephalography, generate large data sets with complex content. In this paper we develop appropriate techniques for analysis and visualization of such imaging data to separate the signal from the noise and characterize the signal. The techniques developed fall into the general category of multivariate time series analysis, and in particular we extensively use the multitaper framework of spectral analysis. We develop specific protocols for the analysis of fMRI, optical imaging, and MEG data, and illustrate the techniques by applications to real data sets generated by these imaging modalities. In general, the analysis protocols involve two distinct stages: "noise" characterization and suppression, and "signal" characterization and visualization. An important general conclusion of our study is the utility of a frequency-based representation, with short, moving analysis windows to account for nonstationarity in the data. Of particular note are 1) the development of a decomposition technique (space-frequency singular value decomposition) that is shown to be a useful means of characterizing the image data, and 2) the development of an algorithm, based on multitaper methods, for the removal of approximately periodic physiological artifacts arising from cardiac and respiratory sources. PMID:9929474
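The space-frequency singular value decomposition mentioned above can be sketched in a few lines: Fourier-transform each channel, then decompose the channels-by-frequencies matrix; a dominant leading singular value signals one coherent spatiotemporal mode. The 16-channel synthetic recording below is invented for illustration (the paper applies this within short moving windows and with multitaper estimates):

```python
import numpy as np

def space_frequency_svd(data, fs):
    """Space-frequency SVD: Fourier-transform each channel, then take the
    SVD of the channels-by-frequencies matrix. A dominant leading singular
    value indicates one coherent spatiotemporal mode."""
    F = np.fft.rfft(data, axis=1)            # channels x frequencies
    freqs = np.fft.rfftfreq(data.shape[1], d=1/fs)
    U, s, Vh = np.linalg.svd(F, full_matrices=False)
    return freqs, s, U

# Synthetic 16-channel recording: a shared 10 Hz rhythm (with per-channel
# gains) buried in independent noise.
rng = np.random.default_rng(1)
fs, n = 200, 1000
t = np.arange(n) / fs
gains = rng.random(16)[:, None]
data = gains * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal((16, n))

freqs, s, U = space_frequency_svd(data, fs)
leading_share = s[0]**2 / np.sum(s**2)
print(round(float(leading_share), 2))
```

The leading left singular vector gives the spatial pattern of the coherent mode, and the corresponding right singular vector localizes it in frequency.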

  14. [Studies of performance evaluation and criteria for trans-fatty acids analysis using GC-FID].

    PubMed

    Watanabe, Takahiro; Ishikawa, Tomoko; Matsuda, Rieko

    2013-01-01

    Performance evaluation methods and criteria for trans-fatty acids analysis using GC-FID were examined. The measurement method constructed in this study was based on the American Oil Chemists' Society (AOCS) Official Method Ce 1h-05. The method for fat extraction from general foods was based on the methods for nutrition labeling notified by the Ministry of Health, Labour and Welfare of Japan and on AOAC 996.06. To estimate trueness and precision, fortified samples were analyzed following the established experimental design. Five molecular species of trans-fatty acids that are rarely contained in foods were used for preparing the fortified samples. To estimate precision, more than four degrees of freedom of variance are required. Based on the results, the criteria for within-laboratory trueness and reproducibility will be set at 90-110% and 10% (RSD%), respectively.
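A minimal sketch of the trueness and precision computation implied by these criteria: recovery of a fortified (spiked) sample gives trueness, and the relative standard deviation of replicate recoveries gives within-laboratory precision. The replicate values and spike level below are invented:

```python
import statistics

def trueness_and_rsd(measured, spiked):
    """Within-laboratory trueness (mean recovery, %) and precision
    (relative standard deviation, %) for a fortified sample."""
    recoveries = [100.0 * m / spiked for m in measured]
    mean_rec = statistics.mean(recoveries)
    rsd = 100.0 * statistics.stdev(recoveries) / mean_rec
    return mean_rec, rsd

# Five replicate analyses (at least 4 degrees of freedom, as required)
# of a sample fortified with 2.00 g/100 g of a trans-fatty acid.
measured = [1.92, 2.05, 1.98, 2.10, 1.95]
trueness, rsd = trueness_and_rsd(measured, spiked=2.00)
print(round(trueness, 1), round(rsd, 1))

# Acceptance per the criteria above: 90-110% trueness, RSD <= 10%.
assert 90 <= trueness <= 110 and rsd <= 10
```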

  15. A GIS based spatially-explicit sensitivity and uncertainty analysis approach for multi-criteria decision analysis.

    PubMed

    Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas

    2014-03-01

    GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchy Process (AHP) and Ordered Weighted Averaging (OWA), implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility are analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the landslide susceptibility maps obtained from both MCDA techniques with known landslides show that AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.
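    The Monte Carlo weight-perturbation step described above can be sketched as follows; the criteria scores and AHP-style weights are invented for illustration and do not come from the study:

```python
import numpy as np

# Hypothetical susceptibility scores for 4 criteria (columns) at 5 map
# locations (rows), and illustrative AHP-derived criteria weights.
scores = np.array([[0.9, 0.2, 0.6, 0.4],
                   [0.3, 0.8, 0.5, 0.7],
                   [0.6, 0.6, 0.9, 0.1],
                   [0.2, 0.4, 0.3, 0.9],
                   [0.7, 0.9, 0.2, 0.5]])
base_weights = np.array([0.4, 0.3, 0.2, 0.1])

rng = np.random.default_rng(42)
n_runs = 5000
susceptibility = np.empty((n_runs, scores.shape[0]))
for i in range(n_runs):
    # Perturb weights with 10% relative noise, then renormalise to sum to 1.
    w = base_weights * rng.normal(1.0, 0.1, size=base_weights.size)
    w = np.clip(w, 0.0, None)
    w /= w.sum()
    susceptibility[i] = scores @ w       # weighted linear combination

mean = susceptibility.mean(axis=0)       # expected susceptibility per location
std = susceptibility.std(axis=0)         # uncertainty attributable to weights
print(mean.round(3), std.round(3))
```

    The per-location standard deviation is the part of the map uncertainty attributable to the criteria weights, which is the quantity the integrated uncertainty-sensitivity analysis decomposes.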

  16. Harnessing Ecosystem Models and Multi-Criteria Decision Analysis for the Support of Forest Management

    NASA Astrophysics Data System (ADS)

    Wolfslehner, Bernhard; Seidl, Rupert

    2010-12-01

    The decision-making environment in forest management (FM) has changed drastically during the last decades. Forest management planning is facing increasing complexity due to a widening portfolio of forest goods and services, a societal demand for a rational, transparent decision process and rising uncertainties concerning future environmental conditions (e.g., climate change). Methodological responses to these challenges include an intensified use of ecosystem models to provide an enriched, quantitative information base for FM planning. Furthermore, multi-criteria methods are increasingly used to amalgamate information, preferences, expert judgments and value expressions, in support of the participatory and communicative dimensions of modern forestry. Although the potential of combining these two approaches has been demonstrated in a number of studies, methodological aspects in interfacing forest ecosystem models (FEM) and multi-criteria decision analysis (MCDA) are scarcely addressed explicitly. In this contribution we review the state of the art in FEM and MCDA in the context of FM planning and highlight some of the crucial issues when combining ecosystem and preference modeling. We discuss issues and requirements in selecting approaches suitable for supporting FM planning problems from the growing body of FEM and MCDA concepts. We furthermore identify two major challenges in a harmonized application of FEM-MCDA: (i) the design and implementation of an indicator-based analysis framework capturing ecological and social aspects and their interactions relevant for the decision process, and (ii) holistic information management that supports consistent use of different information sources, provides meta-information as well as information on uncertainties throughout the planning process.

  17. Digital image processing in cephalometric analysis.

    PubMed

    Jäger, A; Döler, W; Schormann, T

    1989-01-01

    Digital image processing methods were applied to improve the practicability of cephalometric analysis. The individual X-ray film was digitized by the aid of a high resolution microscope-photometer. Digital processing was done using a VAX 8600 computer system. An improvement of the image quality was achieved by means of various digital enhancement and filtering techniques.

  18. Multi-level multi-criteria analysis of alternative fuels for waste collection vehicles in the United States.

    PubMed

    Maimoun, Mousa; Madani, Kaveh; Reinhart, Debra

    2016-04-15

    Historically, the U.S. waste collection fleet was dominated by diesel-fueled waste collection vehicles (WCVs); the growing need for sustainable waste collection has urged decision makers to incorporate economically efficient alternative fuels while mitigating environmental impacts. The pros and cons of alternative fuels complicate the decision-making process, calling for a comprehensive study that assesses the multiple factors involved. Multi-criteria decision analysis (MCDA) methods allow decision makers to select the best alternatives with respect to selection criteria. In this study, two MCDA methods, Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) and Simple Additive Weighting (SAW), were used to rank fuel alternatives for the U.S. waste collection industry with respect to a multi-level environmental and financial decision matrix. The environmental criteria consisted of life-cycle emissions, tail-pipe emissions, water footprint (WFP), and power density, while the financial criteria comprised vehicle cost, fuel price, fuel price stability, and fueling station availability. The overall analysis showed that conventional diesel is still the best option, followed by hydraulic-hybrid WCVs, landfill gas (LFG)-sourced natural gas, fossil natural gas, and biodiesel. The elimination of the WFP and power density criteria from the environmental criteria ranked biodiesel 100 (BD100) as an environmentally better alternative compared to other fossil fuels (diesel and natural gas). This result showed that considering the WFP and power density as environmental criteria can make a difference in the decision process. The elimination of the fueling station and fuel price stability criteria from the decision matrix ranked fossil natural gas second after LFG-sourced natural gas. This scenario was found to represent the status quo of the waste collection industry. A sensitivity analysis for the status quo scenario showed the overall ranking of diesel and
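    A minimal TOPSIS implementation in the spirit of the abstract above might look as follows; the fuels, criteria, scores, and weights are hypothetical stand-ins, not the study's decision matrix:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (columns) with TOPSIS.

    benefit[j] is True when larger values of criterion j are better.
    """
    m = np.asarray(matrix, dtype=float)
    # Vector-normalise each criterion column, then apply the weights.
    v = weights * m / np.linalg.norm(m, axis=0)
    # The ideal and anti-ideal points depend on the criterion direction.
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)   # closeness to the ideal, in [0, 1]

# Hypothetical fuels scored on (emissions, fuel price, station availability);
# lower is better for the first two criteria, higher for the last.
fuels = ["diesel", "natural gas", "biodiesel"]
matrix = [[7.0, 3.0, 9.0],
          [5.0, 4.0, 5.0],
          [4.0, 6.0, 3.0]]
closeness = topsis(matrix, weights=np.array([0.5, 0.3, 0.2]),
                   benefit=np.array([False, False, True]))
ranking = [fuels[i] for i in np.argsort(closeness)[::-1]]
print(ranking)
```

    SAW, the second method in the study, would instead rank by the weighted sum of the normalised scores directly; TOPSIS adds the geometric notion of distance to an ideal solution.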

  19. Three-dimensional freehand ultrasound: image reconstruction and volume analysis.

    PubMed

    Barry, C D; Allott, C P; John, N W; Mellor, P M; Arundel, P A; Thomson, D S; Waterton, J C

    1997-01-01

    A system is described that rapidly produces a regular 3-dimensional (3-D) data block suitable for processing by conventional image analysis and volume measurement software. The system uses electromagnetic spatial location of 2-dimensional (2-D) freehand-scanned ultrasound B-mode images, custom-built signal-conditioning hardware, UNIX-based computer processing and an efficient 3-D reconstruction algorithm. Utilisation of images from multiple angles of insonation, "compounding," reduces speckle contrast, improves structure coherence within the reconstructed grey-scale image and enhances the ability to detect structure boundaries and to segment and quantify features. Volume measurements using a series of water-filled latex and cylindrical foam rubber phantoms with volumes down to 0.7 mL show that a high degree of accuracy, precision and reproducibility can be obtained. Extension of the technique to handle in vivo data sets by allowing physiological criteria to be taken into account in selecting the images used for construction is also illustrated.

  20. Machine learning applications in cell image analysis.

    PubMed

    Kan, Andrey

    2017-04-04

    Machine learning (ML) refers to a set of automatic pattern recognition methods that have been successfully applied across various problem domains, including biomedical image analysis. This review focuses on ML applications for image analysis in light microscopy experiments, with typical tasks of segmenting and tracking individual cells and modelling reconstructed lineage trees. After describing a typical image analysis pipeline and highlighting the challenges of automatic analysis (for example, variability in cell morphology, tracking in the presence of clutter), this review gives a brief historical outlook on ML, followed by basic concepts and definitions required for understanding the examples. The article then presents several example applications at various image processing stages, including the use of supervised learning methods for improving cell segmentation and the application of active learning for tracking. The review concludes with remarks on parameter setting and future directions. Immunology and Cell Biology advance online publication, 4 April 2017; doi:10.1038/icb.2017.16.

  1. A Robust Actin Filaments Image Analysis Framework

    PubMed Central

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-01-01

    The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. actin, tubulin and intermediate filament cytoskeletons. Understanding cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any stress type. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation on the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least at some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) first the input image is decomposed into a ‘cartoon’ part corresponding to the filament structures in the image, and a noise/texture part, (ii) on the ‘cartoon’ image, we apply a multi-scale line detector coupled with (iii) a quasi-straight filament merging algorithm for fiber extraction. The proposed robust actin filaments image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filament orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets, and in osteoblasts

  2. ASCI 2010 appropriateness criteria for cardiac computed tomography: a report of the Asian Society of Cardiovascular Imaging Cardiac Computed Tomography and Cardiac Magnetic Resonance Imaging Guideline Working Group.

    PubMed

    Tsai, I-Chen; Choi, Byoung Wook; Chan, Carmen; Jinzaki, Masahiro; Kitagawa, Kakuya; Yong, Hwan Seok; Yu, Wei

    2010-02-01

    In Asia, healthcare systems, populations and patterns of disease differ from those of Western countries. The current reports on criteria for cardiac CT scans, provided by Western professional societies, are therefore not directly applicable to Asian practice. The Asian Society of Cardiovascular Imaging, the only society dedicated to cardiovascular imaging in Asia, formed a Working Group and invited 23 Technical Panel members representing a variety of Asian countries to rate 51 indications for cardiac CT in clinical practice in Asia. The indications were rated as 'appropriate' (7-9), 'uncertain' (4-6), or 'inappropriate' (1-3) on a scale of 1-9. The median score was used for the final result if there was no disagreement. The final ratings were 33 appropriate, 14 uncertain and 4 inappropriate indications; 20 of these showed high agreement (19 appropriate and 1 inappropriate). Specifically, the Asian representatives considered cardiac CT an appropriate modality for Kawasaki disease and congenital heart disease, both in follow-up and in symptomatic patients. In addition, except for some specified conditions, cardiac CT was considered an appropriate modality for one-stop-shop ischemic heart disease evaluation owing to its general appropriateness for coronary, structural and functional evaluation. This report is expected to have a significant impact on clinical practice, research and reimbursement policy in Asia.

  3. Appropriate use criteria for amyloid PET: a report of the Amyloid Imaging Task Force, the Society of Nuclear Medicine and Molecular Imaging, and the Alzheimer's Association.

    PubMed

    Johnson, Keith A; Minoshima, Satoshi; Bohnen, Nicolaas I; Donohoe, Kevin J; Foster, Norman L; Herscovitch, Peter; Karlawish, Jason H; Rowe, Christopher C; Carrillo, Maria C; Hartley, Dean M; Hedrick, Saima; Pappas, Virginia; Thies, William H

    2013-01-01

    Positron emission tomography (PET) of brain amyloid β is a technology that is becoming more available, but its clinical utility in medical practice requires careful definition. To provide guidance to dementia care practitioners, patients, and caregivers, the Alzheimer's Association and the Society of Nuclear Medicine and Molecular Imaging convened the Amyloid Imaging Taskforce (AIT). The AIT considered a broad range of specific clinical scenarios in which amyloid PET could potentially be used appropriately. Peer-reviewed, published literature was searched to ascertain available evidence relevant to these scenarios, and the AIT developed a consensus of expert opinion. Although empirical evidence of impact on clinical outcomes is not yet available, a set of specific appropriate use criteria (AUC) were agreed on that define the types of patients and clinical circumstances in which amyloid PET could be used. Both appropriate and inappropriate uses were considered and formulated, and are reported and discussed here. Because both dementia care and amyloid PET technology are in active development, these AUC will require periodic reassessment. Future research directions are also outlined, including diagnostic utility and patient-centered outcomes.

  5. On image analysis in fractography (Methodological Notes)

    NASA Astrophysics Data System (ADS)

    Shtremel', M. A.

    2015-10-01

    Like other spheres of image analysis, fractography has no universal method for information convolution. An effective characteristic of an image is found by analyzing the essence and origin of every class of objects. As follows from the geometric definition of a fractal curve, its projection onto any straight line covers a certain segment many times; therefore, neither a time series (a single-valued function of time) nor an image (a single-valued function of the plane) can be a fractal. For applications, multidimensional multiscale characteristics of an image are necessary. "Full" wavelet series break the law of conservation of information.

  6. Use of the European preliminary criteria, the Breiman-classification tree and the American-European criteria for diagnosis of primary Sjögren's Syndrome in daily practice: a retrospective analysis.

    PubMed

    Langegger, C; Wenger, M; Duftner, C; Dejaco, C; Baldissera, I; Moncayo, R; Schirmer, M

    2007-06-01

    This study was conducted to assess the use of the European preliminary criteria, the Breiman classification tree and the American-European criteria for diagnosis of primary Sjögren's Syndrome (pSS) in daily practice. A retrospective analysis of 17 consecutive patients with pSS (European criteria) was performed, evaluating the application of the Schirmer test, semiquantitative sialoscintigraphy, immunologic tests, including rheumatoid factor, antinuclear antibodies, Sjögren's syndrome autoantibodies (SS-A, SS-B), and lip biopsy. Of the 17 patients with pSS according to the European criteria, 15 patients fulfilled the classification tree (88.2%), and 4 patients fulfilled the American-European criteria (23.5%, P = 0.001). In the four patients fulfilling the American-European criteria, a positive result of the sialoscintigraphy was not crucial for the diagnosis according to these criteria. In conclusion, the American-European criteria are more stringent than the European preliminary criteria. We expect the role of sialoscintigraphy to be reduced when the American-European criteria are applied.

  7. Retinal image analysis: concepts, applications and potential.

    PubMed

    Patton, Niall; Aslam, Tariq M; MacGillivray, Thomas; Deary, Ian J; Dhillon, Baljean; Eikelboom, Robert H; Yogesan, Kanagasingam; Constable, Ian J

    2006-01-01

    As digital imaging and computing power increasingly develop, so too does the potential to use these technologies in ophthalmology. Image processing, analysis and computer vision techniques are increasing in prominence in all fields of medical science, and are especially pertinent to modern ophthalmology, which is heavily dependent on visually oriented signs. The retinal microvasculature is unique in that it is the only part of the human circulation that can be directly visualised non-invasively in vivo, readily photographed and subjected to digital image analysis. Exciting developments in image processing relevant to ophthalmology over the past 15 years include the progress made towards developing automated diagnostic systems for conditions such as diabetic retinopathy, age-related macular degeneration and retinopathy of prematurity. These diagnostic systems offer the potential to be used in large-scale screening programs, with significant resource savings, as well as being free from observer bias and fatigue. In addition, quantitative measurements of retinal vascular topography using digital image analysis of retinal photographs have been used as research tools to better understand the relationship between the retinal microvasculature and cardiovascular disease. Furthermore, advances in electronic media transmission increase the relevance of image processing in 'teleophthalmology' as an aid to clinical decision-making, with particular relevance to large rural-based communities. In this review, we outline the principles upon which retinal digital image analysis is based. We discuss current techniques used to automatically detect landmark features of the fundus, such as the optic disc, fovea and blood vessels. We review the use of image analysis in the automated diagnosis of pathology (with particular reference to diabetic retinopathy). We also review its role in defining and performing quantitative measurements of vascular topography.

  8. Use of power analysis to develop detectable significance criteria for sea urchin toxicity tests

    USGS Publications Warehouse

    Carr, R.S.; Biedenbach, J.M.

    1999-01-01

    When sufficient data are available, the statistical power of a test can be determined using power analysis procedures. The term “detectable significance” has been coined to refer to this criterion based on power analysis and past performance of a test. This power analysis procedure has been performed with sea urchin (Arbacia punctulata) fertilization and embryological development data from sediment porewater toxicity tests. Data from 3100 and 2295 tests for the fertilization and embryological development tests, respectively, were used to calculate the criteria and regression equations describing the power curves. Using Dunnett's test, minimum significant differences (MSDs) (β = 0.05) of 15.5% and 19% for the fertilization test, and 16.4% and 20.6% for the embryological development test, for α ≤ 0.05 and α ≤ 0.01, respectively, were determined. The use of this second criterion reduces type I (false positive) errors and helps to establish a critical level of difference based on the past performance of the test.
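    The power-analysis idea behind a detectable-significance criterion can be sketched with a simplified one-sided two-sample z approximation (the study itself used Dunnett's test, which this sketch does not reproduce); the SD and replicate count below are illustrative, not the study's data:

```python
from statistics import NormalDist

def min_detectable_difference(sd, n_per_group, alpha=0.05, power=0.95):
    """Smallest true difference between a treatment and control mean that
    the test can detect, under a one-sided two-sample z approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)   # type I error threshold
    z_beta = NormalDist().inv_cdf(power)        # type II error threshold
    return (z_alpha + z_beta) * sd * (2 / n_per_group) ** 0.5

# Illustrative values only: a control SD of 12% fertilisation and
# 5 replicates per treatment.
mdd = min_detectable_difference(sd=12.0, n_per_group=5)
print(round(mdd, 1))
```

    A treatment mean must differ from the control by at least this amount before the difference is both statistically detectable and, under the detectable-significance criterion, worth flagging.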

  9. Scenario and multiple criteria decision analysis for energy and environmental security of military and industrial installations.

    PubMed

    Karvetski, Christopher W; Lambert, James H; Linkov, Igor

    2011-04-01

    Military and industrial facilities need secure and reliable power generation. Grid outages can result in cascading infrastructure failures as well as security breaches and should be avoided. Adding redundancy and increasing reliability can require additional environmental, financial, logistical, and other considerations and resources. Uncertain scenarios consisting of emergent environmental conditions, regulatory changes, growth of regional energy demands, and other concerns result in further complications. Decisions on selecting energy alternatives are made on an ad hoc basis. The present work integrates scenario analysis and multiple criteria decision analysis (MCDA) to identify combinations of impactful emergent conditions and to perform a preliminary benefits analysis of energy and environmental security investments for industrial and military installations. Application of a traditional MCDA approach would require significant stakeholder elicitations under multiple uncertain scenarios. The approach proposed in this study develops and iteratively adjusts a scoring function for investment alternatives to find the scenarios with the most significant impacts on installation security. A robust prioritization of investment alternatives can be achieved by integrating stakeholder preferences and focusing modeling and decision-analytical tools on a few key emergent conditions and scenarios. The approach is described and demonstrated for a campus of several dozen interconnected industrial buildings within a major installation.

  10. Edge enhanced morphology for infrared image analysis

    NASA Astrophysics Data System (ADS)

    Bai, Xiangzhi; Liu, Haonan

    2017-01-01

    Edge information is critical for infrared images. Morphological operators have been widely used for infrared image analysis. However, the edge information in infrared images is weak, and conventional morphological operators do not make full use of it. To strengthen the edge information in morphological operators, edge-enhanced morphology is proposed in this paper. Firstly, the edge-enhanced dilation and erosion operators are defined and analyzed. Secondly, pseudo operators derived from the edge-enhanced dilation and erosion operators are defined. Finally, applications to infrared image analysis are shown to verify the effectiveness of the proposed edge-enhanced morphological operators. The proposed operators are useful for applications involving edge features and could be extended to a wide range of applications.
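    One plausible reading of an edge-enhanced dilation, sketched with SciPy grey-scale morphology; this illustrates the general idea of boosting edge contributions before a morphological operation, not the paper's exact operator definition:

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def edge_enhanced_dilation(img, size=3, k=0.5):
    """Grey-scale dilation of an edge-boosted image: the morphological
    gradient (dilation minus erosion) is added back with weight k before
    dilating, so weak infrared edges contribute more to the result.
    Illustrative reading only, not the paper's definition."""
    d = grey_dilation(img, size=(size, size))
    e = grey_erosion(img, size=(size, size))
    boosted = img + k * (d - e)           # emphasise edge regions
    return grey_dilation(boosted, size=(size, size))

# A dim square on a dark background: edge pixels receive extra weight,
# so the result exceeds the plain dilation near the boundary.
img = np.zeros((16, 16))
img[5:11, 5:11] = 1.0
out = edge_enhanced_dilation(img)
print(out.max(), img.max())
```

    The weight k controls how strongly edges are emphasised; with k = 0 the operator reduces to ordinary grey-scale dilation.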

  11. Net Clinical Benefit of Oral Anticoagulants: A Multiple Criteria Decision Analysis

    PubMed Central

    Yang, Yea-Huei Kao; Lu, Christine Y.

    2015-01-01

    Background This study quantitatively evaluated the comparative efficacy and safety of new oral anticoagulants (dabigatran, rivaroxaban, and apixaban) and warfarin for treatment of nonvalvular atrial fibrillation. We also compared these agents under different scenarios, including a population at high risk of stroke and primary vs. secondary stroke prevention. Methods We used multiple criteria decision analysis (MCDA) to assess the benefit-risk of these medications. Our MCDA models contained criteria for benefits (prevention of ischemic stroke and systemic embolism) and risks (intracranial and extracranial bleeding). We calculated a performance score for each drug accounting for benefits and risks in comparison to treatment alternatives. Results Overall, new agents had higher performance scores than warfarin; in order of performance scores: dabigatran 150 mg (0.529), rivaroxaban (0.462), apixaban (0.426), and warfarin (0.191). For patients at a higher risk of stroke (CHADS2 score ≥ 3), apixaban had the highest performance score (0.686); performance scores for other drugs were 0.462 for dabigatran 150 mg, 0.392 for dabigatran 110 mg, 0.271 for rivaroxaban, and 0.116 for warfarin. Dabigatran 150 mg had the highest performance score for primary stroke prevention, while dabigatran 110 mg had the highest performance score for secondary prevention. Conclusions Our results suggest that new oral anticoagulants might be preferred over warfarin. Selecting appropriate medicines according to the patient’s condition based on information from an integrated benefit-risk assessment of treatment options is crucial to achieve optimal clinical outcomes. PMID:25897861

  12. Sustainability assessment of tertiary wastewater treatment technologies: a multi-criteria analysis.

    PubMed

    Plakas, K V; Georgiadis, A A; Karabelas, A J

    2016-01-01

    The multi-criteria analysis gives the opportunity to researchers, designers and decision-makers to examine decision options in a multi-dimensional fashion. On this basis, four tertiary wastewater treatment (WWT) technologies were assessed regarding their sustainability performance in producing recycled wastewater, considering a 'triple bottom line' approach (i.e. economic, environmental, and social). These are powdered activated carbon adsorption coupled with ultrafiltration membrane separation (PAC-UF), reverse osmosis, ozone/ultraviolet-light oxidation and heterogeneous photo-catalysis coupled with low-pressure membrane separation (photocatalytic membrane reactor, PMR). The participatory method called simple multi-attribute rating technique exploiting ranks was employed for assigning weights to selected sustainability indicators. This sustainability assessment approach resulted in the development of a composite index as a final metric, for each WWT technology evaluated. The PAC-UF technology appears to be the most appropriate technology, attaining the highest composite value regarding the sustainability performance. A scenario analysis confirmed the results of the original scenario in five out of seven cases. In parallel, the PMR was highlighted as the technology with the least variability in its performance. Nevertheless, additional actions and approaches are proposed to strengthen the objectivity of the final results.

  13. A comparative analysis based on different strength criteria for evaluation of risk factor for dental implants.

    PubMed

    Natali, A N; Pavan, P G

    2002-04-01

    A numerical analysis is developed to study the interaction phenomena between endosseous titanium dental implants and the surrounding jawbone tissue. The interest is focused on the most appropriate evaluation of the stress state arising in the tissue around the implant under physiological loading. The problem is considered with regard to the linear elastic response of the bone and to short-term effects. Different configurations of the bone-implant system are described, using axisymmetric and three-dimensional models, by means of the finite element method. The investigation concerns the stress states induced in bone that approach a limit condition near the effective failure surface. Parameters commonly adopted in the literature, such as the von Mises stress, represent an excessive simplification of the problem formulation, leading to an incorrect evaluation of the real failure risk for the implant, because of the assumed isotropic and deviatoric nature of the stress measure. A more suitable criterion, such as the Tsai-Wu criterion, can be adopted to take into account the anisotropy that characterises the response of bone, as well as the influence of a hydrostatic stress state. The analysis offers a comparison of results obtained using different criteria, leading to an evaluation of the reliability of the procedure to be followed and of a risk factor for the implant investigated.
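    The plane-stress Tsai-Wu failure index contrasted with the isotropic von Mises equivalent stress can be sketched as follows; the strength values and stresses below are hypothetical round numbers, not physiological bone data:

```python
import numpy as np

def von_mises(s1, s2, tau):
    """Isotropic equivalent stress (plane stress): insensitive to the
    sign of the stress and to any hydrostatic component."""
    return np.sqrt(s1**2 - s1 * s2 + s2**2 + 3 * tau**2)

def tsai_wu_index(s1, s2, tau, Xt, Xc, Yt, Yc, S):
    """Plane-stress Tsai-Wu failure index: failure is predicted when the
    index reaches 1. Xt/Xc and Yt/Yc are tensile/compressive strengths
    along the two material axes, S the shear strength; unlike von Mises,
    the criterion is orthotropic and distinguishes tension from
    compression."""
    F1 = 1/Xt - 1/Xc
    F2 = 1/Yt - 1/Yc
    F11 = 1/(Xt * Xc)
    F22 = 1/(Yt * Yc)
    F66 = 1/S**2
    F12 = -0.5 * np.sqrt(F11 * F22)   # a common default interaction term
    return (F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2
            + F66*tau**2 + 2*F12*s1*s2)

# Illustrative strengths (MPa) for a bone-like orthotropic material and a
# hypothetical stress state; values chosen for demonstration only.
idx = tsai_wu_index(s1=60.0, s2=-20.0, tau=15.0,
                    Xt=120.0, Xc=150.0, Yt=50.0, Yc=80.0, S=40.0)
print(round(idx, 3), round(von_mises(60.0, -20.0, 15.0), 1))
```

    Comparing the two quantities for the same stress state shows why the choice of criterion matters: the von Mises value would be checked against a single strength, while the Tsai-Wu index already folds directional and tension-compression asymmetries into one failure measure.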

  14. Strategic rehabilitation planning of piped water networks using multi-criteria decision analysis.

    PubMed

    Scholten, Lisa; Scheidegger, Andreas; Reichert, Peter; Maurer, Max; Lienert, Judit

    2014-02-01

    To overcome the difficulties of strategic asset management of water distribution networks, a pipe failure and a rehabilitation model are combined to predict the long-term performance of rehabilitation strategies. Bayesian parameter estimation is performed to calibrate the failure and replacement model based on a prior distribution inferred from three large water utilities in Switzerland. Multi-criteria decision analysis (MCDA) and scenario planning build the framework for evaluating 18 strategic rehabilitation alternatives under future uncertainty. Outcomes for three fundamental objectives (low costs, high reliability, and high intergenerational equity) are assessed. Exploitation of stochastic dominance concepts helps to identify twelve non-dominated alternatives and local sensitivity analysis of stakeholder preferences is used to rank them under four scenarios. Strategies with annual replacement of 1.5-2% of the network perform reasonably well under all scenarios. In contrast, the commonly used reactive replacement is not recommendable unless cost is the only relevant objective. Exemplified for a small Swiss water utility, this approach can readily be adapted to support strategic asset management for any utility size and based on objectives and preferences that matter to the respective decision makers.

  15. Multi-criteria decision analysis in environmental sciences: ten years of applications and trends.

    PubMed

    Huang, Ivy B; Keisler, Jeffrey; Linkov, Igor

    2011-09-01

    Decision-making in environmental projects requires consideration of trade-offs between socio-political, environmental, and economic impacts and is often complicated by various stakeholder views. Multi-criteria decision analysis (MCDA) emerged as a formal methodology to integrate available technical information and stakeholder values to support decisions in many fields, and can be especially valuable in environmental decision making. This study reviews environmental applications of MCDA. Over 300 papers published between 2000 and 2009 reporting MCDA applications in the environmental field were identified through a series of queries in the Web of Science database. The papers were classified by their environmental application area and decision or intervention type. In addition, the papers were also classified by the MCDA methods used in the analysis (analytic hierarchy process, multi-attribute utility theory, and outranking). The results suggest that there is significant growth in environmental applications of MCDA over the last decade across all environmental application areas. Multiple MCDA tools have been successfully used for environmental applications. Even though the use of specific methods and tools varies among application areas and geographic regions, our review of the papers in which several methods were applied in parallel to the same problem indicates that the recommended course of action does not vary significantly with the method applied.

  16. GIS-based multicriteria municipal solid waste landfill suitability analysis: a review of the methodologies performed and criteria implemented.

    PubMed

    Demesouka, O E; Vavatsikos, A P; Anagnostopoulos, K P

    2014-04-01

    Multicriteria spatial decision support systems (MC-SDSS) have emerged from the integration of geographical information systems (GIS) and multiple criteria decision analysis (MCDA) methods. GIS-based MCDA allows the incorporation of conflicting objectives and decision maker (DM) preferences into spatial decision models. During recent decades, a variety of research articles have been published on the implementation of these methods and tools in real-world case studies. This article discusses, in detail, the criteria and methods implemented in GIS-based landfill siting suitability analysis, especially the exclusionary and non-exclusionary criteria that can be considered when selecting sites for municipal solid waste (MSW) landfills. The paper reviews 36 seminal articles in which the evaluation of candidate landfill sites is conducted using MCDA methods. After a brief description of the main components of an MC-SDSS and the applied decision rules, the review focuses on the criteria incorporated into the decision models. It provides a comprehensive guide to landfill siting analysis criteria, giving details of the utilization methods, their decision or exclusionary nature, and their monotonicity.

  17. Multi-Criteria Analysis for Biomass Utilization Applying Data Envelopment Analysis

    NASA Astrophysics Data System (ADS)

    Morimoto, Hidetsugu; Hoshino, Satoshi; Kuki, Yasuaki

    This paper considers material recycling, global warming prevention, and economic efficiency for 195 existing and planned Biomass Towns by applying DEA (Data Envelopment Analysis), which can evaluate the operational efficiency of entities such as private companies or projects. The results clarified that although the Biomass Towns can recycle material efficiently, prevention of global warming and business profitability were largely neglected in the Biomass Town designs. Moreover, from the point of view of operational efficiency, we suggest adjusting the scale of the Biomass Towns to enhance their efficiency, again applying DEA. We found that DEA was able to capture more potential improvements and indicators than cost-benefit analysis and cost-effectiveness analysis.
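In general, DEA scores each decision-making unit (DMU) by solving one linear program per unit; in the special single-input, single-output case the CCR efficiency reduces to each unit's output/input ratio normalized by the best observed ratio. A minimal sketch under that simplification, with hypothetical biomass-town figures:

```python
def dea_ccr_single(inputs, outputs):
    """CCR efficiency scores for the single-input, single-output case:
    each DMU's output/input ratio divided by the best observed ratio.
    (The general multi-input/multi-output case needs one LP per DMU.)"""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical data: operating cost (input) vs. recycled biomass (output)
cost    = [100.0, 120.0, 80.0, 150.0]
recycle = [ 50.0,  48.0, 56.0,  60.0]
scores = dea_ccr_single(cost, recycle)  # third town is the efficient frontier
```

Units with a score of 1.0 lie on the efficient frontier; scores below 1.0 quantify the potential improvement relative to it.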

  18. MRI Image Processing Based on Fractal Analysis

    PubMed

    Marusina, Mariya Y; Mochalina, Alexandra P; Frolova, Ekaterina P; Satikov, Valentin I; Barchuk, Anton A; Kuznetcov, Vladimir I; Gaidukov, Vadim S; Tarakanov, Segrey A

    2017-01-01

    Background: Cancer is one of the most common causes of human mortality, with about 14 million new cases and 8.2 million deaths reported in 2012. Early diagnosis of cancer through screening allows interventions to reduce mortality. Fractal analysis of medical images may be useful for this purpose. Materials and Methods: In this study, we examined magnetic resonance (MR) images of healthy livers and livers containing metastases from colorectal cancer. The fractal dimension and the Hurst exponent were chosen as diagnostic features for tomographic imaging, using the ImageJ software package for image processing and the FracLac plugin for fractal analysis of a 120x150 pixel area. Calculations of the fractal dimensions of pathological and healthy tissue samples were performed using the box-counting method. Results: In pathological cases (foci formation), the Hurst exponent was less than 0.5 (the region of unstable statistical characteristics). For healthy tissue, the Hurst exponent was greater than 0.5 (the zone of stable characteristics). Conclusions: The study indicated the possibility of employing rapid fractal analysis for the detection of focal lesions of the liver. The Hurst exponent can be used as an important diagnostic characteristic for the analysis of medical images.
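The box-counting method mentioned above estimates the fractal dimension as the slope of log N(s) against log(1/s), where N(s) is the number of boxes of side s that contain part of the structure. A minimal pure-Python sketch (not the authors' FracLac workflow):

```python
from math import log

def box_count_dimension(pixels, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the box-counting fractal dimension of a set of filled
    (x, y) pixel coordinates: slope of log N(s) versus log(1/s)."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(x // s, y // s) for x, y in pixels}  # occupied boxes of side s
        xs.append(log(1.0 / s))
        ys.append(log(len(boxes)))
    # least-squares slope of the log-log points
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Sanity check: a completely filled 64x64 region has dimension 2
filled = [(x, y) for x in range(64) for y in range(64)]
dim = box_count_dimension(filled)  # ~2.0
```

A Hurst exponent can be estimated analogously as the slope of a rescaled-range (R/S) log-log plot over window sizes.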

  19. Retinal imaging analysis based on vessel detection.

    PubMed

    Jamal, Arshad; Hazim Alkawaz, Mohammed; Rehman, Amjad; Saba, Tanzila

    2017-03-13

    With advances in digital imaging and computing power, computationally intelligent technologies are in high demand in ophthalmological care and treatment. In the current research, Retina Image Analysis (RIA) was developed for optometrists at the Eye Care Center in Management and Science University. This research aims to analyze the retina through vessel detection. The RIA assists in the analysis of retinal images, and specialists are served with various options such as saving, processing, and analyzing retinal images through its advanced interface layout. Additionally, RIA assists in the selection of vessel segments, processing these vessels by calculating their diameter, standard deviation, and length, and displaying the detected vessels on the retina. The Agile Unified Process was adopted as the development methodology, and the Retina Image Analysis procedure was implemented using MATLAB (R2011b). To conclude, Retina Image Analysis may help optometrists gain a better understanding when analyzing a patient's retina. Promising results are attained that are comparable to the state of the art.

  20. Experiment in multiple-criteria energy policy analysis. [Using HOPE (holistic preference evaluation)

    SciTech Connect

    Ho, J K

    1980-07-01

    An international panel of energy analysts participated in an experiment to use HOPE (holistic preference evaluation): an interactive parametric linear-programming method for multiple-criteria optimization. The criteria of cost, environmental effect, crude oil, and nuclear fuel were considered according to BESOM: an energy model for the US in the year 2000.

  1. The Politics of Determining Merit Aid Eligibility Criteria: An Analysis of the Policy Process

    ERIC Educational Resources Information Center

    Ness, Erik C.

    2010-01-01

    Despite the scholarly attention on the effects of merit aid on college access and choice, particularly on the significant effect that states' varied eligibility criteria play, no studies have examined the policy process through which merit aid criteria are determined. This is surprising given the recent attention to state-level policy dynamics and…

  2. The application of integral performance criteria to the analysis of discrete maneuvers in a driving simulator

    NASA Technical Reports Server (NTRS)

    Repa, B. S.; Zucker, R. S.; Wierwille, W. W.

    1977-01-01

    The influence of vehicle transient response characteristics on driver-vehicle performance in discrete maneuvers as measured by integral performance criteria was investigated. A group of eight ordinary drivers was presented with a series of eight vehicle transfer function configurations in a driving simulator. Performance in two discrete maneuvers was analyzed by means of integral performance criteria. Results are presented.
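Integral performance criteria of the kind used in such simulator studies reduce an error time history to a single number, e.g. ISE, IAE, and ITAE. A small sketch with a hypothetical sampled error signal (a decaying lane deviation after a discrete maneuver):

```python
def integral_criteria(errors, dt):
    """Common integral performance criteria for a sampled error signal e(t):
    ISE = integral of e^2, IAE = integral of |e|, ITAE = integral of t*|e|.
    Rectangle-rule approximation with sampling step dt."""
    ise = sum(e * e for e in errors) * dt
    iae = sum(abs(e) for e in errors) * dt
    itae = sum(i * dt * abs(e) for i, e in enumerate(errors)) * dt
    return ise, iae, itae

# Hypothetical lane-deviation error decaying after a lane-change maneuver
errors = [1.0 * (0.9 ** k) for k in range(100)]
ise, iae, itae = integral_criteria(errors, dt=0.1)
```

ITAE penalizes errors that persist late in the maneuver more heavily, which is why it is often preferred for scoring settling behavior.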

  3. Rock fracture image acquisition and analysis

    NASA Astrophysics Data System (ADS)

    Wang, W.; Zongpu, Jia; Chen, Liwan

    2007-12-01

    As a cooperation project between Sweden and China, this paper presents rock fracture image acquisition and analysis. Rock fracture images were acquired using UV illumination and visible optical illumination. To represent the fracture network reasonably, we set up models to characterize it; based on these models, a best-fit Feret method was used to automatically determine the fracture zone, and the fractures were then skeletonized to obtain endpoints, junctions, holes, particles, and branches. Based on the new parameters and a set of common parameters, the fracture network density, porosity, connectivity, and complexity can be obtained, and the fracture network characterized. In the following, we first present basic considerations and basic parameters for fractures (primary study of the characteristics of rock fractures), then set up a model for fracture network analysis, use the model to analyze fracture networks in different images (two-dimensional fracture network analysis based on slices), and finally give conclusions and suggestions.

  4. GIS, Geoscience, Multi-criteria Analysis and Integrated Management of the Coastal Zone

    NASA Astrophysics Data System (ADS)

    Kacimi, Y.; Barich, A.

    2011-12-01

    In this 3rd millennium, geology can be considered a science of decision that intervenes in all domains of society. It has moved beyond its academic dimension to spread into domains that were until now out of reach. Combining different Geoscience sub-disciplines reflects a strong will to demonstrate the contribution of this science and its impact on daily life, especially by making it applicable to various innovative projects. Geophysics, geochemistry, and structural geology are complementary disciplines that can be applied in perfect symbiosis in many domains such as construction, mining prospection, impact assessment, and the environment. This can be demonstrated by taking the data collected in these studies and integrating them into Geographic Information Systems (GIS) for multi-criteria analysis, which generally gives very impressive results. From this point, it is easy to build mining, eco-geotouristic, and risk assessment models for land use projects, and also for the integrated management of the coastal zone (IMCZ). Touristic projects in Morocco focus on its coast, which extends at least 3,500 km; managing this zone for building marinas or touristic infrastructure requires a deep and detailed study of marine currents along the coast, for example by creating surveillance models and a coastal hazards map. This innovative project will include geophysical, geochemical, and structural geology studies combined with a multi-criteria analysis. The data will be integrated into a GIS to establish a coastal map that will highlight low-erosion-risk zones and thus facilitate the implementation of ports and other construction projects. YES Morocco is a chapter of the International YES Network that aims to promote Geoscience in the service of society and the professional development of Young and Early Career Geoscientists.
Our commitment to such a project will be of a qualitative nature within an associative framework that will involve

  5. ACR appropriateness criteria jaundice.

    PubMed

    Lalani, Tasneem; Couto, Corey A; Rosen, Max P; Baker, Mark E; Blake, Michael A; Cash, Brooks D; Fidler, Jeff L; Greene, Frederick L; Hindman, Nicole M; Katz, Douglas S; Kaur, Harmeet; Miller, Frank H; Qayyum, Aliya; Small, William C; Sudakoff, Gary S; Yaghmai, Vahid; Yarmish, Gail M; Yee, Judy

    2013-06-01

    A fundamental consideration in the workup of a jaundiced patient is the pretest probability of mechanical obstruction. Ultrasound is the first-line modality to exclude biliary tract obstruction. When mechanical obstruction is present, additional imaging with CT or MRI can clarify etiology, define level of obstruction, stage disease, and guide intervention. When mechanical obstruction is absent, additional imaging can evaluate liver parenchyma for fat and iron deposition and help direct biopsy in cases where underlying parenchymal disease or mass is found. Imaging techniques are reviewed for the following clinical scenarios: (1) the patient with painful jaundice, (2) the patient with painless jaundice, and (3) the patient with a nonmechanical cause for jaundice. The ACR Appropriateness Criteria are evidence-based guidelines for specific clinical conditions that are reviewed every 2 years by a multidisciplinary expert panel. The guideline development and review include an extensive analysis of current medical literature from peer-reviewed journals and the application of a well-established consensus methodology (modified Delphi) to rate the appropriateness of imaging and treatment procedures by the panel. In those instances where evidence is lacking or not definitive, expert opinion may be used to recommend imaging or treatment.

  6. Single particle raster image analysis of diffusion.

    PubMed

    Longfils, M; Schuster, E; Lorén, N; Särkkä, A; Rudemo, M

    2017-04-01

    As a complement to the standard RICS method of analysing Raster Image Correlation Spectroscopy images by estimating the image correlation function, we introduce SPRIA, Single Particle Raster Image Analysis. Here, we start by identifying individual particles and estimating the diffusion coefficient of each particle by a maximum likelihood method. Averaging over the particles gives a diffusion coefficient estimate for the whole image. In examples with both simulated and experimental data, we show that the new method gives accurate estimates. It also directly provides standard error estimates. The method should be possible to extend to the study of heterogeneous materials and systems of particles with varying diffusion coefficients, as demonstrated in a simple simulation example. A requirement for applying the SPRIA method is that the particle concentration is low enough that individual particles can be identified. We also describe a bootstrap method for estimating the standard error of standard RICS.
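For free 2-D Brownian motion observed at interval dt, each displacement component is Gaussian with variance 2·D·dt, so the per-particle maximum-likelihood estimate of D is the summed squared displacement divided by 4·N·dt. A simulation sketch of this estimator (a simplified stand-in for the full SPRIA pipeline, which must first detect individual particles in the raster image):

```python
import random

def mle_diffusion(track, dt):
    """Maximum-likelihood estimate of the diffusion coefficient D from a
    single 2-D particle track: for Brownian motion each displacement
    component is N(0, 2*D*dt), so D_hat = sum(dx^2 + dy^2) / (4*N*dt)."""
    steps = list(zip(track, track[1:]))
    ssq = sum((x2 - x1) ** 2 + (y2 - y1) ** 2
              for (x1, y1), (x2, y2) in steps)
    return ssq / (4 * len(steps) * dt)

# Simulate one track with known D and recover it
random.seed(1)
D_true, dt = 1.0, 0.1
sigma = (2 * D_true * dt) ** 0.5     # per-axis step standard deviation
pos, track = (0.0, 0.0), []
for _ in range(5000):
    track.append(pos)
    pos = (pos[0] + random.gauss(0, sigma), pos[1] + random.gauss(0, sigma))
track.append(pos)
D_hat = mle_diffusion(track, dt)
```

Averaging such per-particle estimates over all detected particles gives the image-level estimate described in the abstract.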

  7. Functional data analysis in brain imaging studies.

    PubMed

    Tian, Tian Siva

    2010-01-01

    Functional data analysis (FDA) considers the continuity of the curves or functions, and is a topic of increasing interest in the statistics community. FDA is commonly applied to time-series and spatial-series studies. The development of functional brain imaging techniques in recent years made it possible to study the relationship between brain and mind over time. Consequently, an enormous amount of functional data is collected and needs to be analyzed. Functional techniques designed for these data are in strong demand. This paper discusses three statistically challenging problems utilizing FDA techniques in functional brain imaging analysis. These problems are dimension reduction (or feature extraction), spatial classification in functional magnetic resonance imaging studies, and the inverse problem in magneto-encephalography studies. The application of FDA to these issues is relatively new but has been shown to be considerably effective. Future efforts can further explore the potential of FDA in functional brain imaging studies.

  8. Particle Pollution Estimation Based on Image Analysis

    PubMed Central

    Liu, Chenbin; Tsow, Francis; Zou, Yi; Tao, Nongjian

    2016-01-01

    Exposure to fine particles can cause various diseases, and an easily accessible method to monitor the particles can help raise public awareness and reduce harmful exposures. Here we report a method to estimate PM air pollution based on analysis of a large number of outdoor images available for Beijing, Shanghai (China) and Phoenix (US). Six image features were extracted from the images, which were used, together with other relevant data, such as the position of the sun, date, time, geographic information and weather conditions, to predict PM2.5 index. The results demonstrate that the image analysis method provides good prediction of PM2.5 indexes, and different features have different significance levels in the prediction. PMID:26828757
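The paper predicts the PM2.5 index from six image features plus metadata; as an illustration of the underlying idea, here is an ordinary least-squares fit of the index against a single hypothetical haze feature (the numbers are invented for the sketch, not taken from the paper):

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b  # intercept, slope

# Hypothetical training pairs: image haze feature -> observed PM2.5 index
haze = [0.10, 0.25, 0.40, 0.55, 0.70]
pm25 = [35.0, 70.0, 110.0, 150.0, 185.0]
a, b = fit_linear(haze, pm25)
predict = lambda x: a + b * x  # predicted index for a new image's haze value
```

The actual method would fit a multivariate model over all six features plus sun position, time, and weather covariates; this one-feature fit only illustrates the regression step.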

  10. Quantitative analysis of qualitative images

    NASA Astrophysics Data System (ADS)

    Hockney, David; Falco, Charles M.

    2005-03-01

    We show optical evidence that demonstrates artists as early as Jan van Eyck and Robert Campin (c1425) used optical projections as aids for producing their paintings. We also have found optical evidence within works by later artists, including Bermejo (c1475), Lotto (c1525), Caravaggio (c1600), de la Tour (c1650), Chardin (c1750) and Ingres (c1825), demonstrating a continuum in the use of optical projections by artists, along with an evolution in the sophistication of that use. However, even for paintings where we have been able to extract unambiguous, quantitative evidence of the direct use of optical projections for producing certain of the features, this does not mean that paintings are effectively photographs. Because the hand and mind of the artist are intimately involved in the creation process, understanding these complex images requires more than can be obtained from only applying the equations of geometrical optics.

  11. On Two-Dimensional ARMA Models for Image Analysis.

    DTIC Science & Technology

    1980-03-24

    2-D ARMA models for image analysis. Particular emphasis is placed on restoration of noisy images using 2-D ARMA models. Computer results are... It is concluded that the models are very effective linear models for image analysis. (Author)

  12. VAICo: visual analysis for image comparison.

    PubMed

    Schmidt, Johanna; Gröller, M Eduard; Bruckner, Stefan

    2013-12-01

    Scientists, engineers, and analysts are confronted with ever larger and more complex sets of data, whose analysis poses special challenges. In many situations it is necessary to compare two or more datasets. Hence there is a need for comparative visualization tools to help analyze differences or similarities among datasets. In this paper an approach for comparative visualization for sets of images is presented. Well-established techniques for comparing images frequently place them side-by-side. A major drawback of such approaches is that they do not scale well. Other image comparison methods encode differences in images by abstract parameters like color. In this case information about the underlying image data gets lost. This paper introduces a new method for visualizing differences and similarities in large sets of images which preserves contextual information, but also allows the detailed analysis of subtle variations. Our approach identifies local changes and applies cluster analysis techniques to embed them in a hierarchy. The results of this process are then presented in an interactive web application which allows users to rapidly explore the space of differences and drill-down on particular features. We demonstrate the flexibility of our approach by applying it to multiple distinct domains.

  13. Rural tourism spatial distribution based on multi-criteria decision analysis and GIS

    NASA Astrophysics Data System (ADS)

    Zhang, Hongxian; Yang, Qingsheng

    2008-10-01

    Studying the spatial distribution of rural tourism can provide a scientific basis for decisions on developing rural economies. Traditional approaches to tourism spatial distribution have limitations in quantifying priority locations for tourism development at the level of small units: they can only produce overall distribution locations and indicate whether locations are suitable for tourism development, while the development ranking under different decision objectives should also be considered. This paper presents a way to rank locations for rural tourism development spatially by integrating multi-criteria decision analysis (MCDA) and geographic information systems (GIS). To develop rural economies, locations with inconvenient transportation, an undeveloped economy, and good tourism resources should be the first to develop rural tourism. Based on this objective, a tourism development priority utility is calculated for each town with MCDA and GIS; towns with higher priority utility can be selected for development first. The method was used successfully to rank rural tourism locations in Ningbo City. The results show that MCDA is an effective way to distribute rural tourism spatially according to specific decision objectives, and that rural tourism can promote economic development.
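A common way to compute such a development-priority utility is a weighted sum of normalized criterion scores. A minimal sketch with hypothetical towns and weights (the paper does not specify its exact aggregation rule):

```python
def weighted_sum_rank(alternatives, weights):
    """Rank alternatives by a weighted sum of criterion scores.
    `alternatives` maps name -> list of scores already normalized so that
    larger is better; `weights` should sum to 1."""
    utility = {name: sum(w * s for w, s in zip(weights, scores))
               for name, scores in alternatives.items()}
    return sorted(utility.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical towns scored on (transport need, economic need, tourism resource)
towns = {
    "Town A": [0.8, 0.9, 0.7],
    "Town B": [0.4, 0.5, 0.9],
    "Town C": [0.6, 0.7, 0.5],
}
ranking = weighted_sum_rank(towns, weights=[0.3, 0.3, 0.4])
```

In a GIS workflow each criterion would come from a raster or attribute layer, with the same aggregation applied per spatial unit.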

  14. Using soil function evaluation in multi-criteria decision analysis for sustainability appraisal of remediation alternatives.

    PubMed

    Volchko, Yevheniya; Norrman, Jenny; Rosén, Lars; Bergknut, Magnus; Josefsson, Sarah; Söderqvist, Tore; Norberg, Tommy; Wiberg, Karin; Tysklind, Mats

    2014-07-01

    Soil contamination is one of the major threats constraining the proper functioning of soil and thus the provision of ecosystem services. Remedial actions typically address only the chemical soil quality, reducing total contaminant concentrations to acceptable levels guided by land use. However, emerging regulatory requirements on soil protection demand a holistic view of soil assessment in remediation projects, accounting for a variety of soil functions. Such a view requires not only that contaminant concentrations be assessed and attended to, but also that other aspects be taken into account, thus also addressing physical and biological as well as other chemical soil quality indicators (SQIs). This study outlines how soil function assessment can be part of a holistic sustainability appraisal of remediation alternatives using multi-criteria decision analysis (MCDA). The paper presents a method for practitioners for evaluating the effects of remediation alternatives on selected ecological soil functions using a suggested minimum data set (MDS) containing physical, biological, and chemical SQIs. The measured SQIs are transformed into sub-scores by the use of scoring curves, which allows interpretation and the integration of soil quality data into the MCDA framework. The method is demonstrated at a study site (Marieberg, Sweden), and the results give an example of how soil analyses using the suggested MDS can be used for soil function assessment and subsequent input to the MCDA framework.
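The scoring-curve transformation can be illustrated as a piecewise-linear mapping from a measured SQI to a 0-1 sub-score. The indicator and breakpoints below are illustrative, not those used in the paper:

```python
def scoring_curve(value, points):
    """Piecewise-linear scoring curve mapping a measured soil quality
    indicator to a 0-1 sub-score. `points` is a sorted list of
    (measurement, score) breakpoints; values outside the range are clamped."""
    if value <= points[0][0]:
        return points[0][1]
    if value >= points[-1][0]:
        return points[-1][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= value <= x1:
            return y0 + (y1 - y0) * (value - x0) / (x1 - x0)

# Hypothetical "more is better" curve for soil organic matter (%)
som_curve = [(0.0, 0.0), (2.0, 0.5), (6.0, 1.0)]
sub_score = scoring_curve(4.0, som_curve)
```

"Less is better" or "optimum range" indicators are handled the same way by choosing decreasing or peaked breakpoint lists.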

  15. Extreme value distribution based gene selection criteria for discriminant microarray data analysis using logistic regression.

    PubMed

    Li, Wentian; Sun, Fengzhu; Grosse, Ivo

    2004-01-01

    One important issue commonly encountered in the analysis of microarray data is deciding which and how many genes should be selected for further studies. For discriminant microarray data analyses based on statistical models, such as logistic regression models, gene selection can be accomplished by comparing the maximum likelihood of the model given the real data, L(D|M), with the expected maximum likelihood of the model given an ensemble of surrogate data with randomly permuted labels, L(D(0)|M). Typically, the computational burden for obtaining L(D(0)|M) is immense, often exceeding the limits of available computing resources by orders of magnitude. Here, we propose an approach that circumvents such heavy computations by mapping the simulation problem to an extreme-value problem. We present the derivation of an asymptotic distribution of the extreme value as well as its mean, median, and variance. Using this distribution, we propose two gene selection criteria, and we apply them to two microarray datasets and three classification tasks for illustration.
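For contrast with the extreme-value shortcut proposed in the paper, the brute-force surrogate-data computation it avoids looks like this. The statistic here is a simple group-mean difference standing in for the maximum likelihood of a logistic regression, and the expression values are invented:

```python
import random

def permutation_p(values, labels, n_perm=200, seed=0):
    """Brute-force surrogate-data test: compare the observed group-mean
    difference with its distribution under randomly permuted labels.
    (A simplified stand-in for the likelihood statistic in the paper;
    the extreme-value approximation avoids this loop entirely.)"""
    rng = random.Random(seed)
    def stat(labs):
        g1 = [v for v, l in zip(values, labs) if l == 1]
        g0 = [v for v, l in zip(values, labs) if l == 0]
        return abs(sum(g1) / len(g1) - sum(g0) / len(g0))
    observed = stat(labels)
    exceed = 0
    for _ in range(n_perm):
        labs = labels[:]
        rng.shuffle(labs)            # surrogate data: permuted labels
        if stat(labs) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)

# Hypothetical expression values, clearly separated between the two classes
expr = [0.1, 0.2, 0.15, 0.1, 5.1, 5.0, 5.2, 4.9]
labs = [0, 0, 0, 0, 1, 1, 1, 1]
p_value = permutation_p(expr, labs)
```

Repeating this per gene across tens of thousands of genes is what makes the direct computation so costly.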

  16. Mapping wetland functions using Earth observation data and multi-criteria analysis.

    PubMed

    Rapinel, Sébastien; Hubert-Moy, Laurence; Clément, Bernard; Maltby, Edward

    2016-11-01

    Wetland functional assessment is commonly conducted based on field observations and is thus generally limited to small areas. However, wetland managers often need information on wetland functional performance over larger areas. For this purpose, we propose a new field-based functional assessment procedure in which wetland functions are evaluated and classified into hydrogeomorphic units according to a multi-criteria analysis approach. Wetland-related geographic information system layers derived from Earth observation data (LiDAR, multispectral, and radar data) are used in this study for a large-scale functional evaluation. These include maps of hydrogeomorphic units, ditches, vegetation, annual flood duration, biomass, meadow management, and wetland boundaries. To demonstrate the feasibility of this approach, a 132 km(2) international long-term ecological research site located in the west of France was assessed. Four wetland functions were evaluated: flood peak attenuation, low water attenuation, denitrification, and habitat. A spatial distribution map of the individual wetland functions was generated, and the intensity levels of the functions were highlighted. Antagonisms between functions within individual hydrogeomorphic units were also identified. Mapping of hydrological, biogeochemical, and ecological wetland functions over large areas can provide an efficient tool for policy makers and other stakeholders, including water authorities, nature conservation agencies, and farmers. Specifically, this tool has the potential to provide a mapping of ecosystem services, conservation management priorities, and possible improvements in water resources management.

  17. Multi-Criteria Decision Making for a Spatial Decision Support System on the Analysis of Changing Risk

    NASA Astrophysics Data System (ADS)

    Olyazadeh, Roya; van Westen, Cees; Bakker, Wim H.; Aye, Zar Chi; Jaboyedoff, Michel; Derron, Marc-Henri

    2014-05-01

    Natural hazard risk management requires decision making in several stages. Decision making on alternatives for risk reduction planning starts with an intelligence phase to recognize the decision problems and identify the objectives. Developing the alternatives and assigning variables to each of them by the decision makers constitute the design phase. The final phase evaluates the optimal choice by comparing the alternatives, defining indicators, assigning a weight to each, and ranking them. This process is referred to as Multi-Criteria Decision Making analysis (MCDM), Multi-Criteria Evaluation (MCE), or Multi-Criteria Analysis (MCA). In the framework of the ongoing 7th Framework Program project "CHANGES" (2011-2014, Grant Agreement No. 263953) of the European Commission, a Spatial Decision Support System is under development that aims to analyse changes in hydro-meteorological risk and provide support in selecting the best risk reduction alternative. This paper describes the module for Multi-Criteria Decision Making analysis (MCDM) that incorporates monetary and non-monetary criteria in the analysis of the optimal alternative. The MCDM module consists of several components. The first step is to define criteria (or indicators), which are subdivided into disadvantages (criteria that indicate the difficulty of implementing the risk reduction strategy, also referred to as costs) and advantages (criteria that indicate favorability, also referred to as benefits). In the next step the stakeholders can use the developed web-based tool for prioritizing the criteria and the decision matrix. Public participation plays a role in decision making, and this is also planned through the use of a mobile web version in which the general local public can indicate their agreement with the proposed alternatives. The application is being tested through a case study related to risk reduction in a mountainous valley in the Alps affected by flooding.
Four alternatives are evaluated in

  18. From Image Analysis to Computer Vision: Motives, Methods, and Milestones.

    DTIC Science & Technology

    1998-07-01

    images. Initially, work on digital image analysis dealt with specific classes of images such as text, photomicrographs, nuclear particle tracks, and aerial photographs; but by the 1960s, general algorithms and paradigms for image analysis began to be formulated. When the artificial intelligence... scene, but eventually from image sequences obtained by a moving camera; at this stage, image analysis had become scene analysis or computer vision

  19. Tunable filter-based multispectral imaging for detection of blood stains on construction material substrates. Part 1. Developing blood stain discrimination criteria.

    PubMed

    Janchaysang, Suwatwong; Sumriddetchkajorn, Sarun; Buranasiri, Prathan

    2012-10-10

    In this article, we establish blood stain detection criteria that are less substrate dependent for use in a liquid crystal tunable filter-based multispectral imaging system. Kubelka-Munk (KM) theory is applied to transform the acquired stains' reflectance spectra into less substrate-dependent spectra. Chosen spectral parameters are extracted from the KM absorbance spectra of several stain samples on several substrates. Blood discrimination criteria based on those spectral parameters are then established from empirical data, tested, and refined. In our method, instead of applying conventional contrast enhancement to the blood stain image, blood stain determination is executed mathematically via Boolean logic, resulting in more discriminative blood stain identification. This proposed approach allows for nondestructive, quick, discriminative, and easy-to-improve presumptive blood stain detection. Experimental results confirm that our blood stain discrimination criteria can be used to locate blood stains on several construction materials with high precision. True positive rates (sensitivity) from 0.60 to 0.95 are achieved depending on blood stain faintness and substrate type, along with true negative rates (specificity) between 0.55 and 0.96 and an identification time of 4-5 min. The established blood stain discrimination criteria will be incorporated into a real blood stain detection system in part 2 of this article, where system design and considerations as well as speed issues are discussed.
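The Kubelka-Munk transformation used above maps a measured reflectance R into an absorbance-like quantity K/S = (1 - R)^2 / (2R). A sketch of the transform plus a toy Boolean-logic stain rule (the 415/500 nm band contrast is an illustrative assumption about hemoglobin's Soret band, not the criteria established in the paper):

```python
def kubelka_munk(reflectance):
    """Kubelka-Munk function K/S = (1 - R)^2 / (2R), transforming a
    measured reflectance R (0 < R <= 1) into an absorbance-like quantity
    that is less dependent on the substrate."""
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

def looks_like_blood(km_spectrum):
    """Toy discrimination rule combining band values with Boolean logic.
    `km_spectrum` maps wavelength (nm) -> K/S value; the thresholds and
    the 415 vs. 500 nm comparison are hypothetical."""
    return km_spectrum[415] > 2 * km_spectrum[500] and km_spectrum[415] > 0.5
```

Per pixel, the real system would extract several such spectral parameters from the KM spectrum and AND/OR them into the final presumptive decision.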

  20. Selecting an image analysis minicomputer system

    NASA Technical Reports Server (NTRS)

    Danielson, R.

    1981-01-01

    Factors to be weighed when selecting a minicomputer system as the basis for an image analysis computer facility vary depending on whether the user organization procures a new computer or selects an existing facility to serve as the image analysis host. Some conditions not directly related to hardware or software should be considered, such as the flexibility of the computer center staff, their encouragement of innovation, and the availability of the host processor to a broad spectrum of potential user organizations. Particular attention must be given to: image analysis software capability; the facilities of a potential host installation; the central processing unit; the operating system and languages; main memory; disk storage; tape drives; hardcopy output; and other peripherals. The operational environment, accessibility, resource limitations, and operational support are also important. Charges made for program execution and data storage must also be examined.

  1. SCORE: a novel multi-criteria decision analysis approach to assessing the sustainability of contaminated land remediation.

    PubMed

    Rosén, Lars; Back, Pär-Erik; Söderqvist, Tore; Norrman, Jenny; Brinkhoff, Petra; Norberg, Tommy; Volchko, Yevheniya; Norin, Malin; Bergknut, Magnus; Döberl, Gernot

    2015-04-01

    The multi-criteria decision analysis (MCDA) method provides a comprehensive and transparent basis for performing sustainability assessments. Development of a relevant MCDA method requires consideration of a number of key issues, e.g. (a) definition of assessment boundaries, (b) definition of performance scales, both temporal and spatial, (c) selection of relevant criteria (indicators) that facilitate a comprehensive sustainability assessment while avoiding double-counting of effects, and (d) handling of uncertainties. Adding to the complexity is the typically wide variety of inputs, including quantifications based on existing data, expert judgements, and opinions expressed in interviews. The SCORE (Sustainable Choice Of REmediation) MCDA method was developed to provide a transparent assessment of the sustainability of possible remediation alternatives for contaminated sites relative to a reference alternative, considering key criteria in the economic, environmental, and social sustainability domains. The criteria were identified based on literature studies, interviews, and focus-group meetings. SCORE combines a linear additive model to rank the alternatives with a non-compensatory approach to identify alternatives regarded as non-sustainable. The key strengths of the SCORE method are as follows: a framework that at its core is designed to be flexible and transparent; the possibility to integrate both quantitative and qualitative estimations of criteria; its ability, unlike other sustainability assessment tools used in industry and academia, to allow for the alteration of boundary conditions where necessary; the inclusion of a full uncertainty analysis of the results, using Monte Carlo simulation; and a structure that allows preferences and opinions of involved stakeholders to be openly integrated into the analysis. A major insight from practical application of SCORE is that its most important contribution may be that it initiates a process where criteria
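The combination of a linear additive model with a non-compensatory rule can be sketched as follows; the criterion scores, weights, and veto thresholds below are illustrative, not SCORE's actual values:

```python
def score_alternative(scores, weights, vetoes):
    """SCORE-style appraisal sketch: a linear additive sustainability score
    plus a non-compensatory rule that flags the alternative as
    non-sustainable if any criterion falls below its veto threshold."""
    total = sum(w * s for w, s in zip(weights, scores))
    sustainable = all(s >= v for s, v in zip(scores, vetoes))
    return total, sustainable

# Criteria: (economic, environmental, social), scored on -1..1 relative
# to the reference alternative; hypothetical weights and veto thresholds.
total, ok = score_alternative([0.6, -0.8, 0.4],
                              weights=[0.4, 0.3, 0.3],
                              vetoes=[-0.5, -0.5, -0.5])
```

The veto step is what makes the approach non-compensatory: a good economic score cannot buy back a severe environmental deficit. SCORE's Monte Carlo uncertainty analysis would repeat this evaluation over sampled score distributions.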

  2. The impact of expert knowledge on natural hazard susceptibility assessment using spatial multi-criteria analysis

    NASA Astrophysics Data System (ADS)

    Karlsson, Caroline; Kalantari, Zahra; Mörtberg, Ulla; Olofsson, Bo; Lyon, Steve

    2016-04-01

    Road and railway networks are key factors in a country's economic growth. Inadequate infrastructure networks can be detrimental to a society if transport between locations is hindered or delayed. Logistical hindrances can often be avoided, whereas natural hindrances are more difficult to control. One natural hindrance that can have a severe adverse effect on both infrastructure and society is flooding. Intense and heavy rainfall events can trigger other natural hazards such as landslides and debris flow. Disruptions caused by landslides are similar to those of floods and increase maintenance costs considerably. The effect of natural disasters on society is likely to increase due to a changing climate with increasing precipitation. Therefore, there is a need for risk prevention and mitigation of natural hazards. Determining susceptible areas and incorporating them in the decision process may reduce infrastructural harm. Spatial multi-criteria analysis (SMCA) is a part of decision analysis which provides a set of procedures for analysing complex decision problems through a Geographic Information System (GIS). The aim of this study was to evaluate the usefulness of expert judgements for inundation, landslide and debris flow susceptibility assessments through an SMCA approach using hydrological, geological and land use factors. The sensitivity of the SMCA model was tested in relation to each perspective and its impact on the resulting susceptibility. A least-cost path function was used to compare new alternative road lines with the existing ones. This comparison was undertaken to identify the resulting differences in the susceptibility assessments using expert judgements as well as historical incidences of flooding and landslides, in order to discuss the usefulness of the model in road planning.
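The two GIS steps the study combines, a weighted overlay of expert-weighted factor rasters into a susceptibility surface and a least-cost path across that surface, can be sketched as below. The 4x4 rasters and the expert weights are invented for illustration; a real SMCA would operate on full-resolution GIS layers.

```python
import heapq

# Hypothetical expert weights and factor rasters (higher = more susceptible).
weights = {"hydrology": 0.5, "geology": 0.3, "landuse": 0.2}
rasters = {
    "hydrology": [[1, 4, 4, 1], [1, 5, 5, 1], [1, 2, 5, 1], [1, 1, 2, 1]],
    "geology":   [[2, 3, 3, 2], [2, 4, 4, 2], [1, 2, 4, 1], [1, 1, 2, 1]],
    "landuse":   [[1, 2, 2, 1], [1, 3, 3, 1], [1, 1, 3, 1], [1, 1, 1, 1]],
}
n = 4

# Weighted overlay: per-cell susceptibility as the weighted sum of factors.
cost = [[sum(weights[f] * rasters[f][r][c] for f in weights)
         for c in range(n)] for r in range(n)]

def least_cost_path(start, goal):
    """Dijkstra over 4-connected cells, accumulating susceptibility cost."""
    dist, queue = {start: 0.0}, [(0.0, start)]
    while queue:
        d, (r, c) = heapq.heappop(queue)
        if (r, c) == goal:
            return d
        if d > dist[(r, c)]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(queue, (nd, (nr, nc)))
    return float("inf")

route_cost = least_cost_path((0, 0), (3, 3))
print(f"least-cost route cost: {route_cost:.2f}")
```

Comparing the accumulated cost of an existing alignment against the least-cost alternative is what makes the expert weighting visible in road-planning terms.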

  3. Automated eXpert Spectral Image Analysis

    SciTech Connect

    Keenan, Michael R.

    2003-11-25

    AXSIA performs automated factor analysis of hyperspectral images. In such images, a complete spectrum is collected at each point in a 1-, 2- or 3-dimensional spatial array. One of the remaining obstacles to adopting these techniques for routine use is the difficulty of reducing the vast quantities of raw spectral data to meaningful information. Multivariate factor analysis techniques have proven effective for extracting the essential information from high-dimensional data sets into a limited number of factors that describe the spectral characteristics and spatial distributions of the pure components comprising the sample. AXSIA provides tools to estimate different types of factor models including Singular Value Decomposition (SVD), Principal Component Analysis (PCA), PCA with factor rotation, and Alternating Least Squares-based Multivariate Curve Resolution (MCR-ALS). As part of the analysis process, AXSIA can automatically estimate the number of pure components that comprise the data and can scale the data to account for Poisson noise. The data analysis methods are fundamentally based on eigenanalysis of the data crossproduct matrix coupled with orthogonal eigenvector rotation and constrained alternating least squares refinement. A novel method for automatically determining the number of significant components, which is based on the eigenvalues of the crossproduct matrix, has also been devised and implemented. The data can be compressed spectrally via PCA and spatially through wavelet transforms, and algorithms have been developed that perform factor analysis in the transform domain while retaining full spatial and spectral resolution in the final result. These latter innovations enable the analysis of larger-than-core-memory spectrum-images. AXSIA was designed to perform automated chemical phase analysis of spectrum-images acquired by a variety of chemical imaging techniques. Successful applications include Energy Dispersive X-ray Spectroscopy, X-ray Fluorescence
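The core eigenanalysis step described above (unfold the spectrum-image into a pixels-by-channels matrix, form the crossproduct matrix, and extract the dominant spectral factor from its eigenstructure) can be sketched with a toy two-component data set. The synthetic spectra and power-iteration solver below are illustrative stand-ins, not AXSIA's actual implementation.

```python
import random

random.seed(1)
channels = 8
# Two invented "pure component" spectra: low-band and high-band emitters.
pure_a = [1.0 if c < 4 else 0.0 for c in range(channels)]
pure_b = [0.0 if c < 4 else 1.0 for c in range(channels)]

# 16 "pixels", each a noisy mixture of the two pure spectra
# (the unfolded pixels x channels data matrix D).
D = []
for _ in range(16):
    f = random.random()
    D.append([f * a + (1 - f) * b + random.gauss(0, 0.01)
              for a, b in zip(pure_a, pure_b)])

# Crossproduct matrix C = D^T D  (channels x channels).
C = [[sum(D[p][i] * D[p][j] for p in range(len(D)))
      for j in range(channels)] for i in range(channels)]

def power_iteration(M, iters=200):
    """Leading eigenpair of a symmetric matrix by power iteration."""
    v = [1.0] * len(M)
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(len(M))) for i in range(len(M))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    eigval = sum(v[i] * sum(M[i][j] * v[j] for j in range(len(M)))
                 for i in range(len(M)))
    return eigval, v

eigval, factor = power_iteration(C)
print(f"leading eigenvalue: {eigval:.2f}")
```

In a full analysis the size of the significant eigenvalues would drive the automatic component-count estimate, and the eigenvectors would then be rotated and refined by constrained alternating least squares.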

  4. Objective facial photograph analysis using imaging software.

    PubMed

    Pham, Annette M; Tollefson, Travis T

    2010-05-01

    Facial analysis is an integral part of the surgical planning process. Clinical photography has long been an invaluable tool in the surgeon's practice not only for accurate facial analysis but also for enhancing communication between the patient and surgeon, for evaluating postoperative results, for medicolegal documentation, and for educational and teaching opportunities. From 35-mm slide film to the digital technology of today, clinical photography has benefited greatly from technological advances. With the development of computer imaging software, objective facial analysis becomes easier to perform and less time-consuming. Thus, while the original purpose of facial analysis remains the same, the process becomes much more efficient and allows for some objectivity. Although clinical judgment and artistry of technique are never compromised, the ability to perform objective facial photograph analysis using imaging software may become the standard in facial plastic surgery practices in the future.

  5. Dynamic Chest Image Analysis: Evaluation of Model-Based Pulmonary Perfusion Analysis With Pyramid Images

    DTIC Science & Technology

    2007-11-02

    Dynamic Chest Image Analysis aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the Dynamic Pulmonary Imaging technique [18,5,17,6]. We have proposed and evaluated a multiresolutional method with an explicit ventilation model based on pyramid images for ventilation analysis. We have further extended the method for ventilation analysis to pulmonary perfusion. This paper focuses on the clinical evaluation of our method for

  6. Recognizing systemic sclerosis: comparative analysis of various sets of classification criteria.

    PubMed

    Romanowska-Próchnicka, Katarzyna; Walczyk, Marcela; Olesińska, Marzena

    2016-01-01

    Systemic sclerosis is a complex disease characterized by autoimmunity, vasculopathy and tissue fibrosis. Although most patients present with some degree of skin sclerosis, which is a distinguishing hallmark, the clinical presentation varies greatly, complicating the diagnosis. In this regard, new classification criteria were jointly published in 2013 by the American College of Rheumatology (ACR) and the European League Against Rheumatism (EULAR). A major development in the new classification criteria is improved sensitivity, particularly for detecting early disease. The new criteria allow more cases to be classified as systemic sclerosis (SSc), which leads to earlier treatment. Moreover, earlier treatment is clinically beneficial in preventing disease progression, with its irreversible fibrosis and organ damage. The aim of this review is to give insight into the new classification criteria and current trends in the diagnosis of systemic sclerosis.

  7. Recognizing systemic sclerosis: comparative analysis of various sets of classification criteria

    PubMed Central

    Romanowska-Próchnicka, Katarzyna; Olesińska, Marzena

    2016-01-01

    Systemic sclerosis is a complex disease characterized by autoimmunity, vasculopathy and tissue fibrosis. Although most patients present with some degree of skin sclerosis, which is a distinguishing hallmark, the clinical presentation varies greatly, complicating the diagnosis. In this regard, new classification criteria were jointly published in 2013 by the American College of Rheumatology (ACR) and the European League Against Rheumatism (EULAR). A major development in the new classification criteria is improved sensitivity, particularly for detecting early disease. The new criteria allow more cases to be classified as systemic sclerosis (SSc), which leads to earlier treatment. Moreover, earlier treatment is clinically beneficial in preventing disease progression, with its irreversible fibrosis and organ damage. The aim of this review is to give insight into the new classification criteria and current trends in the diagnosis of systemic sclerosis. PMID:28115780

  8. Analysis of extensively washed hair from cocaine users and drug chemists to establish new reporting criteria.

    PubMed

    Morris-Kukoski, Cynthia L; Montgomery, Madeline A; Hammer, Rena L

    2014-01-01

    Samples from a self-proclaimed cocaine (COC) user, from 19 drug users (postmortem) and from 27 drug chemists were extensively washed and analyzed for COC, benzoylecgonine, norcocaine (NC), cocaethylene (CE) and aryl hydroxycocaines by liquid chromatography-tandem mass spectrometry. Published wash criteria and cutoffs were applied to the results. Additionally, the data were used to formulate new reporting criteria and interpretation guidelines for forensic casework. Applying the wash and reporting criteria, hair that was externally contaminated with COC was distinguished from hair collected from individuals known to have consumed COC. In addition, CE, NC and hydroxycocaine metabolites were only present in COC users' hair and not in drug chemists' hair. When properly applied, the use of an extended wash, along with the reporting criteria defined here, will exclude false-positive results from environmental contact with COC.

  9. Discussion paper on applicability of oil and grease analysis for RCRA closure criteria

    SciTech Connect

    1995-02-01

    A site characterization (SC) was performed for the Building 9409-5 Diked Tank Storage Facility. The initial SC indicated areas which had oil and grease levels above the criteria of the currently proposed RCRA closure plan. After further investigation, it was demonstrated that the oil and grease parameter may not be an accurate indication of a release from this facility and should not be included as a contaminant of concern in the closure criteria.

  10. Motion Analysis From Television Images

    NASA Astrophysics Data System (ADS)

    Silberberg, George G.; Keller, Patrick N.

    1982-02-01

    The Department of Defense ranges have relied on photographic instrumentation for gathering data on firings of all types of ordnance. A large inventory of cameras is available on the market for these tasks. A new class of optical instrumentation is beginning to appear which, in many cases, can directly replace photographic cameras for a great deal of the work being performed now. These are television cameras modified so they can stop motion, see in the dark, perform under hostile environments, and provide real-time information. This paper discusses techniques for modifying television cameras so they can be used for motion analysis.

  11. Addressing preference heterogeneity in public health policy by combining Cluster Analysis and Multi-Criteria Decision Analysis: Proof of Method.

    PubMed

    Kaltoft, Mette Kjer; Turner, Robin; Cunich, Michelle; Salkeld, Glenn; Nielsen, Jesper Bo; Dowie, Jack

    2015-01-01

    The use of subgroups based on biological-clinical and socio-demographic variables to deal with population heterogeneity is well-established in public policy. The use of subgroups based on preferences is rare, except when religion-based, and controversial. If it were decided to treat subgroup preferences as valid determinants of public policy, a transparent analytical procedure would be needed. In this proof-of-method study we show how public preferences could be incorporated into policy decisions in a way that respects both the multi-criterial nature of those decisions and the heterogeneity of the population in relation to the importance assigned to relevant criteria. It involves combining Cluster Analysis (CA), to generate the subgroup sets of preferences, with Multi-Criteria Decision Analysis (MCDA), to provide the policy framework into which the clustered preferences are entered. We employ three techniques of CA to demonstrate not only that different techniques produce different clusters, but also that choosing among techniques (as well as developing the MCDA structure) is an important task to be undertaken in implementing the approach outlined in any specific policy context. Data for the illustrative, not substantive, application are from a Randomized Controlled Trial of online decision aids for Australian men aged 40-69 years considering Prostate-specific Antigen testing for prostate cancer. We show that such analyses can provide policy-makers with insights into the criterion-specific needs of different subgroups. Implementing CA and MCDA in combination to assist in the development of policies on important health and community issues such as drug coverage, reimbursement, and screening programs poses major challenges (conceptual, methodological, ethical-political, and practical), but most are exposed by the techniques, not created by them.
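The CA-then-MCDA pipeline described above can be sketched as: cluster respondents by the criterion weights they assign, then score the policy options separately with each cluster's mean weights. The respondent weights, the toy k-means with deterministic initialization, and the two policy options are all invented for illustration.

```python
import random

random.seed(2)
# Ten hypothetical respondents' importance weights over two criteria
# (e.g. "benefit" vs "burden"); two preference subgroups with noise.
respondents = [[0.8, 0.2]] * 5 + [[0.3, 0.7]] * 5
respondents = [[w + random.gauss(0, 0.05) for w in r] for r in respondents]

def kmeans(points, iters=20):
    """Toy 2-means; initialized deterministically from the two end points."""
    centers = [points[0], points[-1]]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            groups[i].append(p)
        centers = [[sum(d) / len(g) for d in zip(*g)] if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

centers, groups = kmeans(respondents)

# MCDA step: weighted-sum score of each option under each cluster's weights.
options = {"screen_all": [0.9, 0.3], "screen_high_risk": [0.5, 0.8]}
best = {}
for ci, center in enumerate(centers):
    ranked = sorted(options, key=lambda o: -sum(w * s for w, s
                                                in zip(center, options[o])))
    best[ci] = ranked[0]
    print(f"cluster {ci} (n={len(groups[ci])}) prefers: {ranked[0]}")
```

The point of the combination is visible in the output: the same options, scored under different subgroup weight profiles, produce different preferred policies.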

  12. Multi-criteria multi-stakeholder decision analysis using a fuzzy-stochastic approach for hydrosystem management

    NASA Astrophysics Data System (ADS)

    Subagadis, Y. H.; Schütze, N.; Grundmann, J.

    2014-09-01

    The conventional methods used to solve multi-criteria multi-stakeholder problems are weakly formulated: they normally incorporate only homogeneous information at a time, and they aggregate the objectives of different decision-makers while neglecting water-society interactions. In this contribution, Multi-Criteria Group Decision Analysis (MCGDA) using a fuzzy-stochastic approach is proposed to rank a set of alternatives in water management decisions, incorporating heterogeneous information under uncertainty. The decision-making framework takes hydrologically, environmentally, and socio-economically motivated conflicting objectives into consideration. The criteria related to the performance of the physical system are optimized using multi-criteria simulation-based optimization, while fuzzy linguistic quantifiers are used to evaluate subjective criteria and to assess stakeholders' degree of optimism. The proposed methodology is applied to find effective and robust intervention strategies for the management of a coastal hydrosystem affected by saltwater intrusion due to excessive groundwater extraction for irrigated agriculture and municipal use. Preliminary results show that the MCGDA based on a fuzzy-stochastic approach gives useful support for robust decision-making and is sensitive to the decision makers' degree of optimism.
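One standard way to realize fuzzy linguistic quantifiers with a tunable degree of optimism is Yager's OWA (ordered weighted averaging) operator, with weights derived from a quantifier Q(r) = r**alpha. Whether the authors use exactly this formulation is not stated, so treat the sketch below, including the two invented intervention strategies and their criterion scores, as illustrative only.

```python
def owa(scores, alpha):
    """Aggregate criterion scores with quantifier-derived OWA weights.

    alpha < 1 concentrates weight on the best outcomes (optimistic);
    alpha > 1 concentrates weight on the worst outcomes (pessimistic).
    """
    n = len(scores)
    ordered = sorted(scores, reverse=True)          # best outcome first
    weights = [(i / n) ** alpha - ((i - 1) / n) ** alpha
               for i in range(1, n + 1)]
    return sum(w * s for w, s in zip(weights, ordered))

# Hypothetical scores on three criteria (hydrologic, economic, social).
strategies = {
    "reduce_pumping": [0.9, 0.4, 0.6],
    "desalinate":     [0.5, 0.7, 0.6],
}
best = {}
for alpha, attitude in [(0.5, "optimistic"), (2.0, "pessimistic")]:
    ranked = sorted(strategies, key=lambda s: -owa(strategies[s], alpha))
    best[attitude] = ranked[0]
    print(f"{attitude} (alpha={alpha}): best = {ranked[0]}")
```

With these numbers the ranking flips between the two attitudes, which is exactly the sensitivity to the decision makers' degree of optimism that the abstract reports.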

  13. Multilocus Genetic Analysis of Brain Images

    PubMed Central

    Hibar, Derrek P.; Kohannim, Omid; Stein, Jason L.; Chiang, Ming-Chang; Thompson, Paul M.

    2011-01-01

    The quest to identify genes that influence disease is now being extended to find genes that affect biological markers of disease, or endophenotypes. Brain images, in particular, provide exquisitely detailed measures of anatomy, function, and connectivity in the living brain, and have identified characteristic features for many neurological and psychiatric disorders. The emerging field of imaging genomics is discovering important genetic variants associated with brain structure and function, which in turn influence disease risk and fundamental cognitive processes. Statistical approaches for testing genetic associations are not straightforward to apply to brain images because the data in brain images is spatially complex and generally high dimensional. Neuroimaging phenotypes typically include 3D maps across many points in the brain, fiber tracts, shape-based analyses, and connectivity matrices, or networks. These complex data types require new methods for data reduction and joint consideration of the image and the genome. Image-wide, genome-wide searches are now feasible, but they can be greatly empowered by sparse regression or hierarchical clustering methods that isolate promising features, boosting statistical power. Here we review the evolution of statistical approaches to assess genetic influences on the brain. We outline the current state of multivariate statistics in imaging genomics, and future directions, including meta-analysis. We emphasize the power of novel multivariate approaches to discover reliable genetic influences with small effect sizes. PMID:22303368

  14. Medical image analysis with artificial neural networks.

    PubMed

    Jiang, J; Trundle, P; Ren, J

    2010-12-01

    Neural networks have been widely reported in the research community of medical imaging. We therefore provide a focused literature survey on recent neural network developments in computer-aided diagnosis, in medical image segmentation and edge detection towards visual content analysis, and in medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and of providing a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging.

  15. Curvelet Based Offline Analysis of SEM Images

    PubMed Central

    Shirazi, Syed Hamad; Haq, Nuhman ul; Hayat, Khizar; Naz, Saeeda; Haque, Ihsan ul

    2014-01-01

    Manual offline analysis, of a scanning electron microscopy (SEM) image, is a time consuming process and requires continuous human intervention and efforts. This paper presents an image processing based method for automated offline analyses of SEM images. To this end, our strategy relies on a two-stage process, viz. texture analysis and quantification. The method involves a preprocessing step, aimed at the noise removal, in order to avoid false edges. For texture analysis, the proposed method employs a state of the art Curvelet transform followed by segmentation through a combination of entropy filtering, thresholding and mathematical morphology (MM). The quantification is carried out by the application of a box-counting algorithm, for fractal dimension (FD) calculations, with the ultimate goal of measuring the parameters, like surface area and perimeter. The perimeter is estimated indirectly by counting the boundary boxes of the filled shapes. The proposed method, when applied to a representative set of SEM images, not only showed better results in image segmentation but also exhibited a good accuracy in the calculation of surface area and perimeter. The proposed method outperforms the well-known Watershed segmentation algorithm. PMID:25089617
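The box-counting step used above for quantification can be sketched directly: cover a segmented binary shape with boxes of decreasing size, count occupied boxes at each scale, and estimate the fractal dimension from the slope of log(count) versus log(1/size). The 8x8 binary "segmented SEM image" below is invented for illustration.

```python
import math

# Toy binary segmentation result (1 = object pixel).
shape = [
    [1, 1, 0, 0, 0, 0, 1, 1],
    [1, 1, 0, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 1, 1, 0, 0],
    [1, 1, 0, 0, 0, 0, 1, 1],
    [1, 1, 0, 0, 0, 0, 1, 1],
]
n = 8

def boxes_occupied(size):
    """Count boxes of the given side length containing any object pixel."""
    count = 0
    for r0 in range(0, n, size):
        for c0 in range(0, n, size):
            if any(shape[r][c]
                   for r in range(r0, r0 + size)
                   for c in range(c0, c0 + size)):
                count += 1
    return count

sizes = [1, 2, 4]
counts = [boxes_occupied(s) for s in sizes]

# Least-squares slope of log(count) vs log(1/size) estimates the dimension.
xs = [math.log(1 / s) for s in sizes]
ys = [math.log(c) for c in counts]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
fd = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
      / sum((x - mx) ** 2 for x in xs))
print(f"box counts: {counts}, estimated fractal dimension: {fd:.2f}")
```

The same occupied-box counts at the finest scale double as the area estimate, and counting only boundary boxes of the filled shapes gives the indirect perimeter estimate the abstract describes.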

  16. Fourier analysis: from cloaking to imaging

    NASA Astrophysics Data System (ADS)

    Wu, Kedi; Cheng, Qiluan; Wang, Guo Ping

    2016-04-01

    Regarding invisibility cloaks as an optical imaging system, we present a Fourier approach to analytically unify both Pendry cloaks and complementary media-based invisibility cloaks into one kind of cloak. By synthesizing different transfer functions, we can construct different devices to realize a series of interesting functions such as hiding objects (events), creating illusions, and performing perfect imaging. In this article, we give a brief review on recent works of applying Fourier approach to analysis invisibility cloaks and optical imaging through scattering layers. We show that, to construct devices to conceal an object, no constructive materials with extreme properties are required, making most, if not all, of the above functions realizable by using naturally occurring materials. As instances, we experimentally verify a method of directionally hiding distant objects and create illusions by using all-dielectric materials, and further demonstrate a non-invasive method of imaging objects completely hidden by scattering layers.

  17. Deep Learning in Medical Image Analysis.

    PubMed

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2017-03-09

    This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement. Expected final online publication date for the Annual Review of Biomedical Engineering Volume 19 is June 4, 2017. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.

  18. Multi-criteria decision analysis for bioenergy in the Centre Region of Portugal

    NASA Astrophysics Data System (ADS)

    Esteves, T. C. J.; Cabral, P.; Ferreira, A. J. D.; Teixeira, J. C.

    2012-04-01

    With the consumption of fossil fuels, the resources essential to Man's survival are being rapidly contaminated. A sustainable future may be achieved by the use of renewable energies, allowing countries without non-renewable energy resources to guarantee energetic sovereignty. Using bioenergy may mean a steep reduction and/or elimination of the external dependency, enhancing the countries' capital and potentially reducing of the negative effects that outcome from the use of fossil fuels, such as loss of biodiversity, air, water, and soil pollution, … This work's main focus is to increase bioenergy use in the centre region of Portugal by allying R&D to facilitate determination of bioenergy availability and distribution throughout the study area.This analysis is essential, given that nowadays this knowledge is still very limited in the study area. Geographic Information Systems (GIS) was the main tool used to asses this study, due to its unseeingly ability to integrate various types of information (such as alphanumerical, statistical, geographical, …) and various sources of biomass (forest, agricultural, husbandry, municipal and industrial residues, shrublands, used vegetable oil and energy crops) to determine the bioenergy potential of the study area, as well as their spatial distribution. By allying GIS with multi-criteria decision analysis, the initial table-like information of difficult comprehension is transformed into tangible and easy to read results: both intermediate and final results of the created models will facilitate the decision making process. General results show that the major contributors for the bioenergy potential in the Centre Region of Portugal are forest residues, which are mostly located in the inner region of the study area. However, a more detailed analysis should be made to analyze the viability to use energy crops. As a main conclusion, we can say that, although this region may not use only this type of energy to be completely

  19. Measuring toothbrush interproximal penetration using image analysis

    NASA Astrophysics Data System (ADS)

    Hayworth, Mark S.; Lyons, Elizabeth K.

    1994-09-01

    An image analysis method of measuring the effectiveness of a toothbrush in reaching the interproximal spaces of teeth is described. Artificial teeth are coated with a stain that approximates real plaque and then brushed with a toothbrush on a brushing machine. The teeth are then removed and turned sideways so that the interproximal surfaces can be imaged. The areas of stain that have been removed within masked regions that define the interproximal regions are measured and reported. These areas correspond to the interproximal areas of the tooth reached by the toothbrush bristles. The image analysis method produces more precise results (10-fold decrease in standard deviation) in a fraction (22%) of the time as compared to our prior visual grading method.
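The measurement at the heart of the method above reduces to counting, within a mask that defines the interproximal region, the pixels where stain has been removed. A minimal sketch, with a tiny synthetic grayscale "image" and an assumed brightness threshold standing in for the real photographs and calibration:

```python
THRESHOLD = 128        # assumed brightness separating stain from clean tooth

image = [              # 4x6 grayscale image: low = dark stain, high = clean
    [200, 210,  40,  50, 215, 220],
    [205, 220,  45, 190, 210, 225],
    [ 30, 200, 195, 200,  35,  40],
    [ 25,  35, 210, 205,  30,  45],
]
mask = [               # 1 marks the interproximal region of interest
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
]

in_mask = cleaned = 0
for row_img, row_mask in zip(image, mask):
    for px, m in zip(row_img, row_mask):
        if m:
            in_mask += 1
            if px > THRESHOLD:   # stain removed at this pixel
                cleaned += 1

print(f"interproximal area cleaned: {cleaned}/{in_mask} pixels")
```

Summing such counts per masked region, rather than grading by eye, is what yields the reported 10-fold improvement in precision.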

  20. Four Common Simplifications of Multi-Criteria Decision Analysis do not hold for River Rehabilitation

    PubMed Central

    2016-01-01

    River rehabilitation aims at alleviating negative effects of human impacts such as loss of biodiversity and reduction of ecosystem services. Such interventions entail difficult trade-offs between different ecological and often socio-economic objectives. Multi-Criteria Decision Analysis (MCDA) is a very suitable approach that helps assess the current ecological state and prioritize river rehabilitation measures in a standardized way, based on stakeholder or expert preferences. Applications of MCDA in river rehabilitation projects are often simplified, i.e. using a limited number of objectives and indicators, assuming linear value functions, aggregating individual indicator assessments additively, and/or assuming risk neutrality of experts. Here, we demonstrate an implementation of MCDA expert preference assessments for river rehabilitation and provide ample material for other applications. To test whether the above simplifications reflect common expert opinion, we carried out very detailed interviews with five river ecologists and a hydraulic engineer. We defined essential objectives and measurable quality indicators (attributes), elicited the experts' preferences for objectives on a standardized scale (value functions) and their risk attitude, and identified suitable aggregation methods. The experts recommended an extensive objectives hierarchy including between 54 and 93 essential objectives and between 37 and 61 essential attributes. For 81% of these, they defined non-linear value functions, and in 76% they recommended multiplicative aggregation. The experts were risk-averse or risk-prone (but never risk-neutral), depending on the current ecological state of the river and the experts' personal importance of objectives. We conclude that the four commonly applied simplifications clearly do not reflect the opinion of river rehabilitation experts. The optimal level of model complexity, however, remains highly case-study specific depending on data and resource
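The contrast the experts drew between the simplified and recommended models can be made concrete: additive aggregation is a weighted arithmetic mean of (possibly non-linear) criterion values, while multiplicative aggregation is a weighted geometric mean, which penalizes any nearly failed objective. The weights, river state, and exponential value function below are invented for illustration, not taken from the elicited preferences.

```python
import math

def value_fn(x, curvature=3.0):
    """Illustrative non-linear (concave) value function on [0, 1]."""
    return (1 - math.exp(-curvature * x)) / (1 - math.exp(-curvature))

weights = [0.5, 0.3, 0.2]
river_state = [0.9, 0.8, 0.05]     # third objective is nearly failed

values = [value_fn(x) for x in river_state]
additive = sum(w * v for w, v in zip(weights, values))
multiplicative = math.prod(v ** w for w, v in zip(weights, values))

print(f"additive: {additive:.2f}, multiplicative: {multiplicative:.2f}")
```

The additive score barely registers the failed objective, whereas the multiplicative score drops sharply, which is why the experts' preference for multiplicative aggregation is not a cosmetic detail.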

  1. Unsupervised hyperspectral image analysis using independent component analysis (ICA)

    SciTech Connect

    S. S. Chiang; I. W. Ginsberg

    2000-06-30

    In this paper, an ICA-based approach is proposed for hyperspectral image analysis. It can be viewed as a random version of the commonly used linear spectral mixture analysis, in which the abundance fractions in a linear mixture model are considered to be unknown independent signal sources. It does not require the full rank of the separating matrix or orthogonality as most ICA methods do. More importantly, the learning algorithm is designed based on the independency of the material abundance vector rather than the independency of the separating matrix generally used to constrain the standard ICA. As a result, the designed learning algorithm is able to converge to non-orthogonal independent components. This is particularly useful in hyperspectral image analysis since many materials extracted from a hyperspectral image may have similar spectral signatures and may not be orthogonal. The AVIRIS experiments have demonstrated that the proposed ICA provides an effective unsupervised technique for hyperspectral image classification.
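For contrast with the ICA approach, the deterministic linear spectral mixture model it randomizes can be sketched as ordinary least-squares unmixing: each pixel spectrum d is modeled as M a plus noise, with M holding the endmember spectra and a the abundance fractions. The endmembers and pixel below are invented for illustration; this is the baseline, not the paper's ICA algorithm.

```python
def solve2(A, b):
    """Solve a 2x2 linear system A x = b by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

# Endmember spectra (columns of M): 4 bands, 2 materials.
M = [[0.9, 0.1],
     [0.8, 0.2],
     [0.2, 0.7],
     [0.1, 0.9]]
pixel = [0.58, 0.56, 0.40, 0.42]   # observed spectrum = 0.6*m1 + 0.4*m2

# Normal equations: (M^T M) a = M^T d.
MtM = [[sum(M[r][i] * M[r][j] for r in range(4)) for j in range(2)]
       for i in range(2)]
Mtd = [sum(M[r][i] * pixel[r] for r in range(4)) for i in range(2)]
abundances = solve2(MtM, Mtd)
print("estimated abundances:", [round(a, 2) for a in abundances])
```

The ICA formulation replaces the known-endmember assumption with statistically independent, unknown abundance sources, which is what allows it to recover non-orthogonal components unsupervised.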

  2. Digital image analysis of haematopoietic clusters.

    PubMed

    Benzinou, A; Hojeij, Y; Roudot, A-C

    2005-02-01

    Counting and differentiating cell clusters is a tedious task when performed with a light microscope. Moreover, biased counts and interpretation are difficult to avoid because of the difficulties to evaluate the limits between different types of clusters. Presented here, is a computer-based application able to solve these problems. The image analysis system is entirely automatic, from the stage screening, to the statistical analysis of the results of each experimental plate. Good correlations are found with measurements made by a specialised technician.

  3. Fake fingerprint detection based on image analysis

    NASA Astrophysics Data System (ADS)

    Jin, Sang-il; Bae, You-suk; Maeng, Hyun-ju; Lee, Hyun-suk

    2010-01-01

    Fingerprint recognition systems have become prevalent in various security applications. However, recent studies have shown that it is not difficult to deceive such systems with fake fingerprints made of silicone or gelatin. Fake fingerprints have almost the same ridge-valley patterns as genuine fingerprints, so conventional systems are unable to detect them without a dedicated detection method. Many previous approaches to detecting fake fingers required extra sensors and thus lacked practicality. This paper proposes a practical and effective method that detects fake fingerprints using only an image sensor. Two criteria are introduced to differentiate genuine and fake fingerprints: the histogram distance and the Fourier spectrum distance. In the proposed method, after identifying an input fingerprint of a user, the system computes the two distances between the input and a reference derived from the registered fingerprints of the user. Depending on the two distances, the system classifies the input as a genuine fingerprint or a fake. In the experiment, 2,400 fingerprint images including 1,600 fakes were tested, and the proposed method showed a high recognition rate of 95%. The fake fingerprints were all accepted by a commercial system; thus, the use of these fake fingerprints qualifies the experiment.
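The two named criteria can be sketched on 1-D stand-ins: compare a gray-level histogram distance and a Fourier spectrum magnitude distance between the probe and the enrolled reference, and accept only if both fall under their thresholds. The signals, bin count, and thresholds below are all invented for illustration; the paper's actual features are computed on 2-D images.

```python
import cmath

def histogram(signal, bins=4, top=256):
    """Normalized gray-level histogram of a signal."""
    h = [0] * bins
    for v in signal:
        h[min(v * bins // top, bins - 1)] += 1
    return [c / len(signal) for c in h]

def spectrum(signal):
    """Magnitudes of the discrete Fourier transform (naive 1-D DFT)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

reference = [10, 200, 15, 190, 12, 205, 18, 195]   # enrolled genuine sample
genuine   = [12, 195, 18, 188, 14, 210, 15, 198]
fake      = [90, 120, 95, 115, 100, 110, 92, 118]  # flattened contrast

HIST_T, SPEC_T = 0.3, 200.0   # assumed acceptance thresholds

def is_genuine(probe):
    return (l1(histogram(probe), histogram(reference)) < HIST_T and
            l1(spectrum(probe), spectrum(reference)) < SPEC_T)

print("genuine accepted:", is_genuine(genuine))
print("fake rejected:", not is_genuine(fake))
```

The intuition carried over from the paper is that fake materials flatten the gray-level distribution and blur the periodic ridge-valley structure, so both distances grow for fakes.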

  4. COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES

    EPA Science Inventory



    COMPUTER ANALYSIS OF PLANAR GAMMA CAMERA IMAGES

    T Martonen1 and J Schroeter2

    1Experimental Toxicology Division, National Health and Environmental Effects Research Laboratory, U.S. EPA, Research Triangle Park, NC 27711 USA and 2Curriculum in Toxicology, Unive...

  5. Scale Free Reduced Rank Image Analysis.

    ERIC Educational Resources Information Center

    Horst, Paul

    In the traditional Guttman-Harris type image analysis, a transformation is applied to the data matrix such that each column of the transformed data matrix is the best least squares estimate of the corresponding column of the data matrix from the remaining columns. The model is scale free. However, it assumes (1) that the correlation matrix is…

  6. Using Image Analysis to Build Reading Comprehension

    ERIC Educational Resources Information Center

    Brown, Sarah Drake; Swope, John

    2010-01-01

    Content area reading remains a primary concern of history educators. In order to better prepare students for encounters with text, the authors propose the use of two image analysis strategies tied with a historical theme to heighten student interest in historical content and provide a basis for improved reading comprehension.

  7. Visualization of parameter space for image analysis.

    PubMed

    Pretorius, A Johannes; Bray, Mark-Anthony P; Carpenter, Anne E; Ruddle, Roy A

    2011-12-01

    Image analysis algorithms are often highly parameterized and much human input is needed to optimize parameter settings. This incurs a time cost of up to several days. We analyze and characterize the conventional parameter optimization process for image analysis and formulate user requirements. With this as input, we propose a change in paradigm by optimizing parameters based on parameter sampling and interactive visual exploration. To save time and reduce memory load, users are only involved in the first step (initialization of sampling) and the last step (visual analysis of output). This helps users to more thoroughly explore the parameter space and produce higher quality results. We describe a custom sampling plug-in we developed for CellProfiler, a popular biomedical image analysis framework. Our main focus is the development of an interactive visualization technique that enables users to analyze the relationships between sampled input parameters and corresponding output. We implemented this in a prototype called Paramorama. It provides users with a visual overview of parameters and their sampled values. User-defined areas of interest are presented in a structured way that includes image-based output and a novel layout algorithm. To find optimal parameter settings, users can tag high- and low-quality results to refine their search. We include two case studies to illustrate the utility of this approach.
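The sampling step that replaces manual tuning can be sketched as: enumerate parameter combinations up front, run the pipeline once per combination, and hand the collected outputs to visual inspection. The parameter names and the toy quality score below are invented for illustration; they are not Paramorama's or CellProfiler's actual API.

```python
import itertools

# Hypothetical parameter grid for an image-analysis pipeline.
param_grid = {
    "threshold": [0.3, 0.5, 0.7],
    "min_object_size": [10, 25, 50],
}

def run_pipeline(threshold, min_object_size):
    """Stand-in for a pipeline run; returns an invented quality score."""
    return 1.0 - abs(threshold - 0.5) - abs(min_object_size - 25) / 100

samples = []
keys = list(param_grid)
for combo in itertools.product(*param_grid.values()):
    params = dict(zip(keys, combo))
    samples.append((params, run_pipeline(**params)))

# In the actual tool, users visually explore all outputs; here we just
# rank the sampled settings by the toy score.
samples.sort(key=lambda s: -s[1])
for params, score in samples[:3]:
    print(params, f"score={score:.2f}")
```

The value of the interactive visualization is precisely that a real pipeline has no such scalar score: users judge the image-based outputs and tag good and bad regions of the sampled space instead.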

  8. Evaluation of the indications for performing magnetic resonance imaging of the female pelvis at a referral center for cancer, according to the American College of Radiology criteria

    PubMed Central

    Boaventura, Camila Silva; Rodrigues, Daniel Padilha; Silva, Olimpio Antonio Cornehl; Beltrani, Fabrício Henrique; de Melo, Rayssa Araruna Bezerra; Bitencourt, Almir Galvão Vieira; Mendes, Gustavo Gomes; Chojniak, Rubens

    2017-01-01

    Objective To evaluate the indications for performing magnetic resonance imaging of the female pelvis at a referral center for cancer. Materials and Methods This was a retrospective, single-center study, conducted by reviewing medical records and imaging reports. We included 1060 female patients who underwent magnetic resonance imaging of the pelvis at a cancer center between January 2013 and June 2014. The indications for performing the examination were classified according to the American College of Radiology (ACR) criteria. Results The mean age of the patients was 52.6 ± 14.8 years, and 49.8% were perimenopausal or postmenopausal. The majority (63.9%) had a history of cancer, which was gynecologic in 29.5% and nongynecologic in 34.4%. Of the patients evaluated, 44.0% had clinical complaints, the most common being pelvic pain (in 11.5%) and bleeding (in 9.8%), and 34.7% of patients had previously had abnormal findings on ultrasound. Most (76.7%) of the patients met the criteria for undergoing magnetic resonance imaging, according to the ACR guidelines. The main indications were evaluation of tumor recurrence after surgical resection (in 25.9%); detection and staging of gynecologic neoplasms (in 23.3%); and evaluation of pelvic pain or of a mass (in 17.1%). Conclusion In the majority of the cases evaluated, magnetic resonance imaging was clearly indicated according to the ACR criteria. The main indication was local recurrence after surgical treatment of pelvic malignancies, which is consistent with the routine protocols at cancer centers. PMID:28298725

  9. ImageJ: Image processing and analysis in Java

    NASA Astrophysics Data System (ADS)

    Rasband, W. S.

    2012-06-01

    ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.

  10. Good relationships between computational image analysis and radiological physics

    NASA Astrophysics Data System (ADS)

    Arimura, Hidetaka; Kamezawa, Hidemi; Jin, Ze; Nakamoto, Takahiro; Soufi, Mazen

    2015-09-01

    Good relationships between computational image analysis and radiological physics have been constructed for increasing the accuracy of medical diagnostic imaging and radiation therapy in radiological physics. Computational image analysis has been established based on applied mathematics, physics, and engineering. This review paper will introduce how computational image analysis is useful in radiation therapy with respect to radiological physics.

  11. Good relationships between computational image analysis and radiological physics

    SciTech Connect

    Arimura, Hidetaka; Kamezawa, Hidemi; Jin, Ze; Nakamoto, Takahiro; Soufi, Mazen

    2015-09-30

    Good relationships between computational image analysis and radiological physics have been constructed for increasing the accuracy of medical diagnostic imaging and radiation therapy in radiological physics. Computational image analysis has been established based on applied mathematics, physics, and engineering. This review paper will introduce how computational image analysis is useful in radiation therapy with respect to radiological physics.

  12. Analysis of diagnostic criteria in adamantiades-behçet disease: a retrospective study.

    PubMed

    di Meo, Nicola; Bergamo, S; Vidimari, P; Bonin, S; Trevisan, G

    2013-07-01

Adamantiades-Behçet's disease (ABD) is a chronic-relapsing, inflammatory and multi-systemic disease. Any organ or system may be involved: ABD presents a great variety of cutaneous and mucosal lesions, ocular manifestations, central and peripheral nervous system abnormalities, and joint as well as gastrointestinal involvement. Since clear pathognomonic clinical features and laboratory tests are lacking, the diagnosis of ABD relies mainly on its characteristic clinical features. Several sets of diagnostic criteria have been used. The International Study Group for Behçet Disease (ISGBD) in 1990 formulated a set of criteria to warrant uniformity of both diagnosis and classification. Subsequently, in 2006, a new set was proposed by the International Team for the Revision of the International Criteria for Behçet's Disease (ITR-ICBD), not only to unify the previous criteria but also to achieve better accuracy, with optimal sensitivity and specificity. The aims of this study are both to analyze the clinical features of ABD patients and to validate the ISGBD and ITR-ICBD criteria for the diagnosis of ABD in our cohort.

  13. Automated retinal image analysis over the internet.

    PubMed

    Tsai, Chia-Ling; Madore, Benjamin; Leotta, Matthew J; Sofka, Michal; Yang, Gehua; Majerovics, Anna; Tanenbaum, Howard L; Stewart, Charles V; Roysam, Badrinath

    2008-07-01

    Retinal clinicians and researchers make extensive use of images, and the current emphasis is on digital imaging of the retinal fundus. The goal of this paper is to introduce a system, known as retinal image vessel extraction and registration system, which provides the community of retinal clinicians, researchers, and study directors an integrated suite of advanced digital retinal image analysis tools over the Internet. The capabilities include vasculature tracing and morphometry, joint (simultaneous) montaging of multiple retinal fields, cross-modality registration (color/red-free fundus photographs and fluorescein angiograms), and generation of flicker animations for visualization of changes from longitudinal image sequences. Each capability has been carefully validated in our previous research work. The integrated Internet-based system can enable significant advances in retina-related clinical diagnosis, visualization of the complete fundus at full resolution from multiple low-angle views, analysis of longitudinal changes, research on the retinal vasculature, and objective, quantitative computer-assisted scoring of clinical trials imagery. It could pave the way for future screening services from optometry facilities.

  14. Digital imaging analysis to assess scar phenotype.

    PubMed

    Smith, Brian J; Nidey, Nichole; Miller, Steven F; Moreno Uribe, Lina M; Baum, Christian L; Hamilton, Grant S; Wehby, George L; Dunnwald, Martine

    2014-01-01

In order to understand the link between the genetic background of patients and wound clinical outcomes, it is critical to have a reliable method to assess the phenotypic characteristics of healed wounds. In this study, we present a novel imaging method that provides reproducible, sensitive, and unbiased assessments of postsurgical scarring. We used this approach to investigate the possibility that genetic variants in orofacial clefting genes are associated with suboptimal healing. Red-green-blue digital images of postsurgical scars of 68 patients, following unilateral cleft lip repair, were captured using the 3dMD imaging system. Morphometric and colorimetric data of repaired regions of the philtrum and upper lip were acquired using ImageJ software, and the unaffected contralateral regions were used as patient-specific controls. Repeatability of the method was high with intraclass correlation coefficient score > 0.8. This method detected a very significant difference in all three colors, and for all patients, between the scarred and the contralateral unaffected philtrum (p ranging from 1.20 × 10⁻⁵ to 1.95 × 10⁻¹⁴). Physicians' clinical outcome ratings from the same images showed high interobserver variability (overall Pearson coefficient = 0.49) as well as low correlation with digital image analysis results. Finally, we identified genetic variants in TGFB3 and ARHGAP29 associated with suboptimal healing outcome.
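Repeatability in studies like this one is commonly summarized with a one-way intraclass correlation coefficient. As a hedged illustration (this is not the authors' code; the function name `icc_1_1` and the example data are made up), ICC(1,1) can be computed from an n-subjects-by-k-raters matrix of measurements:

```python
import numpy as np

def icc_1_1(ratings):
    """One-way random-effects intraclass correlation ICC(1,1) for an
    (n subjects x k raters/repeats) matrix of measurements."""
    r = np.asarray(ratings, dtype=float)
    n, k = r.shape
    grand_mean = r.mean()
    subject_means = r.mean(axis=1)
    # One-way ANOVA mean squares: between subjects and within subjects
    ms_between = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((r - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

An ICC near 1 (as the > 0.8 reported above) means the between-subject variance dominates the within-subject (measurement) variance.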

  15. Digital imaging analysis to assess scar phenotype

    PubMed Central

    Smith, Brian J.; Nidey, Nichole; Miller, Steven F.; Moreno, Lina M.; Baum, Christian L.; Hamilton, Grant S.; Wehby, George L.; Dunnwald, Martine

    2015-01-01

In order to understand the link between the genetic background of patients and wound clinical outcomes, it is critical to have a reliable method to assess the phenotypic characteristics of healed wounds. In this study, we present a novel imaging method that provides reproducible, sensitive and unbiased assessments of post-surgical scarring. We used this approach to investigate the possibility that genetic variants in orofacial clefting genes are associated with suboptimal healing. Red-green-blue (RGB) digital images of post-surgical scars of 68 patients, following unilateral cleft lip repair, were captured using the 3dMD image system. Morphometric and colorimetric data of repaired regions of the philtrum and upper lip were acquired using ImageJ software and the unaffected contralateral regions were used as patient-specific controls. Repeatability of the method was high with intraclass correlation coefficient score > 0.8. This method detected a very significant difference in all three colors, and for all patients, between the scarred and the contralateral unaffected philtrum (P ranging from 1.20 × 10⁻⁵ to 1.95 × 10⁻¹⁴). Physicians’ clinical outcome ratings from the same images showed high inter-observer variability (overall Pearson coefficient = 0.49) as well as low correlation with digital image analysis results. Finally, we identified genetic variants in TGFB3 and ARHGAP29 associated with suboptimal healing outcome. PMID:24635173

  16. SU-E-T-679: Retrospective Analysis of the Sensitivity of Planar Dose Measurements To Gamma Analysis Criteria

    SciTech Connect

    Elguindi, S; Ezzell, G; Gagneur, J

    2015-06-15

Purpose: IMRT QA using planar dose measurements is still a widely used method for checking the accuracy of treatment plans. A pass/fail judgment is made using gamma analysis based on a single endpoint. Using more stringent criteria is a way to increase the sensitivity to planning and delivery errors. Before such implementation, it is necessary to understand how the sensitivity to different gamma criteria settings affects gamma passing rates (GPR). Methods: 752 IMRT QA measurements were re-analyzed with varying distance to agreement (DTA) and dose difference (DD) percentages using a Matlab program. Other quantifying information, such as the mean dose difference in the treatment target (defined as points that are greater than 80% of maximal dose), was stored in a relational database for retrospective analysis. Results: The average and standard deviation of GPR (%) fell from 99.84 ± (0.43) to 89.61 ± (6.08) when the DD was tightened from 5% to 1%, compared to a drop from 99.15 ± (1.19) to 95.00 ± (4.43) when the DTA was tightened from 5 mm to 1 mm. The mean dose difference (%) in the treatment target between measured and calculated dose was −1.96 ± (0.83), −0.09 ± (0.98), and 1.44 ± (0.86) for each of our institution’s three matched linear accelerators (LINAC 1, 2, and 3, respectively). For plans that are approximately 2.7 sigma below the mean GPR, an average of 78.4% of those plans were measured on LINAC 1 or 3, while only 48% of the total plans were run on those machines. Conclusion: The data demonstrate that when restricting gamma criteria such as the DD, the greatest indicator of reduced GPR at our institution is which matched LINAC the plan was measured on. While small, these differences manifest themselves at levels comparable to other treatment-related differences and can possibly confound the gamma analysis.
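The gamma analysis re-run in this study combines a dose-difference (DD) tolerance with a distance-to-agreement (DTA) tolerance. A minimal 1D sketch of the standard global gamma index (in the style of Low et al.) is shown below; this is only an illustration of the metric, not the authors' Matlab program, and the Gaussian profile data are made up:

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, positions, dd_pct=3.0, dta_mm=3.0):
    """Global 1D gamma index: for each reference point, take the minimum
    over all evaluated points of the combined dose-difference /
    distance-to-agreement metric."""
    ref_dose = np.asarray(ref_dose, dtype=float)
    eval_dose = np.asarray(eval_dose, dtype=float)
    positions = np.asarray(positions, dtype=float)
    norm = ref_dose.max()  # global normalization dose
    gammas = np.empty_like(ref_dose)
    for i in range(ref_dose.size):
        dose_term = (eval_dose - ref_dose[i]) / (norm * dd_pct / 100.0)
        dist_term = (positions - positions[i]) / dta_mm
        gammas[i] = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
    return gammas

# Example: a slightly shifted and rescaled dose profile (made-up data).
x = np.linspace(0.0, 50.0, 201)                   # positions in mm
ref = 100.0 * np.exp(-((x - 25.0) / 10.0) ** 2)   # "planned" dose
ev = 101.0 * np.exp(-((x - 25.5) / 10.0) ** 2)    # "measured" dose
g = gamma_1d(ref, ev, x, dd_pct=3.0, dta_mm=3.0)
gpr = 100.0 * float(np.mean(g <= 1.0))            # gamma passing rate (%)
```

Tightening either tolerance shrinks a denominator in the metric, so every point's gamma can only grow, which is why the GPR falls monotonically in the study's re-analysis.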

  17. Follow-up of multicentric HCC according to the mRECIST criteria: role of 320-Row CT with semi-automatic 3D analysis software for evaluating the response to systemic therapy

    PubMed Central

    TELEGRAFO, M.; DILORENZO, G.; DI GIOVANNI, G.; CORNACCHIA, I.; STABILE IANORA, A.A.; ANGELELLI, G.; MOSCHETTA, M.

    2016-01-01

Aim To evaluate the role of 320-detector row computed tomography (MDCT) with 3D analysis software in the follow-up of patients affected by multicentric hepatocellular carcinoma (HCC) treated with systemic therapy, by using the modified response evaluation criteria in solid tumors (mRECIST). Patients and methods 38 patients affected by multicentric HCC underwent MDCT. All exams were performed before and after iodinate contrast material intravenous injection by using a 320-detector row CT device. CT images were analyzed by two radiologists using multi-planar reconstructions (MPR) in order to assess the response to systemic therapy according to mRECIST criteria: complete response (CR), partial response (PR), progressive disease (PD), stable disease (SD). 30 days later, the same two radiologists evaluated target lesion response to systemic therapy according to mRECIST criteria by using 3D analysis software. The difference between the two systems in assessing HCC response to therapy was assessed by analysis of variance (ANOVA). Interobserver agreement between the two radiologists using MPR images and 3D analysis software was calculated by using Cohen’s Kappa test. Results PR occurred in 10/38 cases (26%), PD in 6/38 (16%), SD in 22/38 (58%). The ANOVA showed no statistically significant difference between the two systems for assessing target lesion response to therapy (p >0.05). Inter-observer agreement (k) was 0.62 for MPR image measurements and 0.86 for 3D analysis measurements. Conclusions 3D analysis software provides a semiautomatic system for assessing target lesion response to therapy according to mRECIST criteria in patients affected by multifocal HCC treated with systemic therapy. The reliability of 3D analysis software makes it useful in clinical practice. PMID:28098056

  18. ALISA: adaptive learning image and signal analysis

    NASA Astrophysics Data System (ADS)

    Bock, Peter

    1999-01-01

    ALISA (Adaptive Learning Image and Signal Analysis) is an adaptive statistical learning engine that may be used to detect and classify the surfaces and boundaries of objects in images. The engine has been designed, implemented, and tested at both the George Washington University and the Research Institute for Applied Knowledge Processing in Ulm, Germany over the last nine years with major funding from Robert Bosch GmbH and Lockheed-Martin Corporation. The design of ALISA was inspired by the multi-path cortical- column architecture and adaptive functions of the mammalian visual cortex.

  19. Characterization of microrod arrays by image analysis

    NASA Astrophysics Data System (ADS)

    Hillebrand, Reinald; Grimm, Silko; Giesa, Reiner; Schmidt, Hans-Werner; Mathwig, Klaus; Gösele, Ulrich; Steinhart, Martin

    2009-04-01

    The uniformity of the properties of array elements was evaluated by statistical analysis of microscopic images of array structures, assuming that the brightness of the array elements correlates quantitatively or qualitatively with a microscopically probed quantity. Derivatives and autocorrelation functions of cumulative frequency distributions of the object brightnesses were used to quantify variations in object properties throughout arrays. Thus, different specimens, the same specimen at different stages of its fabrication or use, and different imaging conditions can be compared systematically. As an example, we analyzed scanning electron micrographs of microrod arrays and calculated the percentage of broken microrods.

  20. Recent Advances in Morphological Cell Image Analysis

    PubMed Central

    Chen, Shengyong; Zhao, Mingzhu; Wu, Guang; Yao, Chunyan; Zhang, Jianwei

    2012-01-01

This paper summarizes the recent advances in image processing methods for morphological cell analysis. The topic of morphological analysis has received much attention with the increasing demands in both bioinformatics and biomedical applications. Among the many factors that affect the diagnosis of a disease, morphological cell analysis and statistics have contributed greatly to diagnostic results. Morphological cell analysis addresses cellular shape, cellular regularity, classification, statistics, diagnosis, and so forth. In the last 20 years, about 1000 publications have reported the use of morphological cell analysis in biomedical research. Relevant solutions encompass a rather wide application area, such as cell clump segmentation, morphological characteristics extraction, 3D reconstruction, abnormal cell identification, and statistical analysis. These reports are summarized in this paper to enable easy referral to suitable methods for practical solutions. Representative contributions and future research trends are also addressed. PMID:22272215

  1. Weighting of Criteria for Disease Prioritization Using Conjoint Analysis and Based on Health Professional and Student Opinion

    PubMed Central

    Stebler, Nadine; Schuepbach-Regula, Gertraud; Braam, Peter; Falzon, Laura Cristina

    2016-01-01

    Disease prioritization exercises have been used by several organizations to inform surveillance and control measures. Though most methodologies for disease prioritization are based on expert opinion, it is becoming more common to include different stakeholders in the prioritization exercise. This study was performed to compare the weighting of disease criteria, and the consequent prioritization of zoonoses, by both health professionals and students in Switzerland using a Conjoint Analysis questionnaire. The health professionals comprised public health and food safety experts, cantonal physicians and cantonal veterinarians, while the student group comprised first-year veterinary and agronomy students. Eight criteria were selected for this prioritization based on expert elicitation and literature review. These criteria, described on a 3-tiered scale, were evaluated through a choice-based Conjoint Analysis questionnaire with 25 choice tasks. Questionnaire results were analyzed to obtain importance scores (for each criterion) and mean utility values (for each criterion level), and the latter were then used to rank 16 zoonoses. While the most important criterion for both groups was “Severity of the disease in humans”, the second ranked criteria by the health professionals and students were “Economy” and “Treatment in humans”, respectively. Regarding the criterion “Control and Prevention”, health professionals tended to prioritize a disease when the control and preventive measures were described to be 95% effective, while students prioritized a disease if there were almost no control and preventive measures available. Bovine Spongiform Encephalopathy was the top-ranked disease by both groups. Health professionals and students agreed on the weighting of certain criteria such as “Severity” and “Treatment of disease in humans”, but disagreed on others such as “Economy” or “Control and Prevention”. Nonetheless, the overall disease ranking

  2. Diagnosing Behavior Disorders: An Analysis of State Definitions, Eligibility Criteria and Recommended Procedures.

    ERIC Educational Resources Information Center

    Swartz, Stanley L.; And Others

    Using information collected in a survey of all 50 states and the District of Columbia, the study analyzed state definitions of the "behavior disordered/emotionally disturbed" (BD/ED) category of handicapped children, program entrance and exit criteria, and procedures for referral, evaluation, and program placement. A general lack of…

  3. SENSITIVITY ANALYSIS OF THE APPLICATION OF CHEMICAL EXPOSURE CRITERIA FOR COMPARING SITES AND WATERSHEDS

    EPA Science Inventory

    A methodology was developed for deriving quantitative exposure criteria useful for comparing a site or watershed to a reference condition. The prototype method used indicators of exposures to oil contamination and combustion by-products, naphthalene and benzo(a)pyrene metabolites...

  4. Analysis of Time-Sharing Contract Agreements with Related Suggested Systems Evaluation Criteria.

    ERIC Educational Resources Information Center

    Chanoux, Jo Ann J.

    While avoiding evaluation or specification of individual companies, computer time-sharing commercial contract agreements are analyzed. Price and non-price contract elements are analyzed according to 22 evaluation criteria: confidentiality measures assumed by the vendor; consultation services available; package programs and user routines; languages…

  5. Gender bias in diagnostic criteria for personality disorders: an item response theory analysis.

    PubMed

    Jane, J Serrita; Oltmanns, Thomas F; South, Susan C; Turkheimer, Eric

    2007-02-01

    The authors examined gender bias in the diagnostic criteria for Diagnostic and Statistical Manual of Mental Disorders (4th ed., text revision; American Psychiatric Association, 2000) personality disorders. Participants (N=599) were selected from 2 large, nonclinical samples on the basis of information from self-report questionnaires and peer nominations that suggested the presence of personality pathology. All were interviewed with the Structured Interview for DSM-IV Personality (B. Pfohl, N. Blum, & M. Zimmerman, 1997). Using item response theory methods, the authors compared data from 315 men and 284 women, searching for evidence of differential item functioning in the diagnostic features of 10 personality disorder categories. Results indicated significant but moderate measurement bias pertaining to gender for 6 specific criteria. In other words, men and women with equivalent levels of pathology endorsed the items at different rates. For 1 paranoid personality disorder criterion and 3 antisocial criteria, men were more likely to endorse the biased items. For 2 schizoid personality disorder criteria, women were more likely to endorse the biased items.

  6. Automated quantitative image analysis of nanoparticle assembly

    NASA Astrophysics Data System (ADS)

    Murthy, Chaitanya R.; Gao, Bo; Tao, Andrea R.; Arya, Gaurav

    2015-05-01

The ability to characterize higher-order structures formed by nanoparticle (NP) assembly is critical for predicting and engineering the properties of advanced nanocomposite materials. Here we develop a quantitative image analysis software to characterize key structural properties of NP clusters from experimental images of nanocomposites. This analysis can be carried out on images captured at intermittent times during assembly to monitor the time evolution of NP clusters in a highly automated manner. The software outputs averages and distributions in the size, radius of gyration, fractal dimension, backbone length, end-to-end distance, anisotropic ratio, and aspect ratio of NP clusters as a function of time along with bootstrapped error bounds for all calculated properties. The polydispersity in the NP building blocks and biases in the sampling of NP clusters are accounted for through the use of probabilistic weights. This software, named Particle Image Characterization Tool (PICT), has been made publicly available and could be an invaluable resource for researchers studying NP assembly. To demonstrate its practical utility, we used PICT to analyze scanning electron microscopy images taken during the assembly of surface-functionalized metal NPs of differing shapes and sizes within a polymer matrix. PICT is used to characterize and analyze the morphology of NP clusters, providing quantitative information that can be used to elucidate the physical mechanisms governing NP assembly.
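Several of the per-cluster properties listed above (size, radius of gyration) are straightforward to compute once the image is segmented into connected pixel clusters. The sketch below is a generic illustration, not PICT itself; the function name `cluster_stats` and the 4-connectivity choice are assumptions:

```python
import numpy as np
from collections import deque

def cluster_stats(binary_img):
    """Label 4-connected pixel clusters via breadth-first search and
    compute, per cluster, the size (pixel count) and the radius of
    gyration about the cluster centroid."""
    img = np.asarray(binary_img, dtype=bool)
    visited = np.zeros_like(img, dtype=bool)
    h, w = img.shape
    stats = []
    for sy in range(h):
        for sx in range(w):
            if not img[sy, sx] or visited[sy, sx]:
                continue
            pixels, queue = [], deque([(sy, sx)])
            visited[sy, sx] = True
            while queue:                      # flood-fill one cluster
                y, x = queue.popleft()
                pixels.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and img[ny, nx] and not visited[ny, nx]:
                        visited[ny, nx] = True
                        queue.append((ny, nx))
            ys, xs = np.array(pixels).T
            cy, cx = ys.mean(), xs.mean()
            rg = float(np.sqrt(((ys - cy) ** 2 + (xs - cx) ** 2).mean()))
            stats.append({"size": len(pixels), "radius_of_gyration": rg})
    return stats
```

Other reported quantities, such as fractal dimension or backbone length, would build on the same labeled-pixel lists.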

  7. Estimation Criteria for Rock Brittleness Based on Energy Analysis During the Rupturing Process

    NASA Astrophysics Data System (ADS)

    Ai, Chi; Zhang, Jun; Li, Yu-wei; Zeng, Jia; Yang, Xin-liang; Wang, Ji-gang

    2016-12-01

Brittleness is one of the most important mechanical properties of rock: it plays a significant role in evaluating the risk of rock bursts and in the analysis of borehole-wall stability during shale gas development. Brittleness is also a critical parameter in the design of hydraulic fracturing. However, there is still no widely accepted definition of the concept of brittleness in rock mechanics. Although many criteria have been proposed to characterize rock brittleness, their applicability and reliability have yet to be verified. In this paper, the brittleness of rock under compression is defined as the ability of a rock to accumulate elastic energy during the pre-peak stage and to self-sustain fracture propagation in the post-peak stage. This ability is related to three types of energy: fracture energy, post-peak released energy and pre-peak dissipation energy. New brittleness evaluation indices B1 and B2 are proposed based on the stress-strain curve from the viewpoint of energy. The new indices can describe the entire transition of rock from absolute plasticity to absolute brittleness. In addition, the brittle characteristics reflected by other brittleness indices can be described, and the calculation results of B1 and B2 are continuous and monotonic. Triaxial compression tests on different types of rock were carried out under different confining pressures. Based on B1 and B2, the brittleness of different rocks shows different trends with rising confining pressure. The brittleness of red sandstone decreases with increasing confining pressure, whereas for black shale it initially increases and then decreases in a certain range of confining pressure. Granite displays a constant increasing trend. The brittleness anisotropy of black shale is discussed. The smaller the angle between the loading direction and the bedding plane, the greater the brittleness. The calculation of B1 and B2 requires experimental data, and the values of these two indices represent only
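The energy quantities behind such indices come from integrating the stress-strain curve before and after peak stress. The abstract does not give the exact formulas for B1 and B2, so the sketch below only illustrates the generic pre-peak/post-peak energy split by trapezoidal integration; the function name and the triangular test curves are made up:

```python
import numpy as np

def energy_partition(strain, stress):
    """Split the work done on the specimen (area under the stress-strain
    curve) at the peak stress: pre-peak energy accumulates before failure,
    post-peak energy is consumed while the rock weakens."""
    strain = np.asarray(strain, dtype=float)
    stress = np.asarray(stress, dtype=float)
    i_peak = int(np.argmax(stress))

    def trapz(y, x):
        # trapezoidal rule, written out to avoid version-specific numpy names
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

    pre = trapz(stress[:i_peak + 1], strain[:i_peak + 1])
    post = trapz(stress[i_peak:], strain[i_peak:])
    return pre, post
```

Intuitively, a more brittle response drops steeply after the peak, shrinking the post-peak area relative to the pre-peak elastic energy.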

  8. Alzheimer's disease - a neurospirochetosis. Analysis of the evidence following Koch's and Hill's criteria

    PubMed Central

    2011-01-01

It is established that chronic spirochetal infection can cause slowly progressive dementia, brain atrophy and amyloid deposition in late neurosyphilis. Recently it has been suggested that various types of spirochetes, in an analogous way to Treponema pallidum, could cause dementia and may be involved in the pathogenesis of Alzheimer's disease (AD). Here, we review all data available in the literature on the detection of spirochetes in AD and critically analyze the association and causal relationship between spirochetes and AD following established criteria of Koch and Hill. The results show a statistically significant association between spirochetes and AD (P = 1.5 × 10⁻¹⁷, OR = 20, 95% CI = 8-60, N = 247). When neutral techniques recognizing all types of spirochetes were used, or the highly prevalent periodontal pathogen Treponemas were analyzed, spirochetes were observed in the brain in more than 90% of AD cases. Borrelia burgdorferi was detected in the brain in 25.3% of AD cases analyzed and was 13 times more frequent in AD compared to controls. Periodontal pathogen Treponemas (T. pectinovorum, T. amylovorum, T. lecithinolyticum, T. maltophilum, T. medium, T. socranskii) and Borrelia burgdorferi were detected using species specific PCR and antibodies. Importantly, co-infection with several spirochetes occurs in AD. The pathological and biological hallmarks of AD were reproduced in vitro by exposure of mammalian cells to spirochetes. The analysis of reviewed data following Koch's and Hill's postulates shows a probable causal relationship between neurospirochetosis and AD. Persisting inflammation and amyloid deposition initiated and sustained by chronic spirochetal infection form together with the various hypotheses suggested to play a role in the pathogenesis of AD a comprehensive entity. As suggested by Hill, once the probability of a causal relationship is established prompt action is needed. Support and attention should be given to this field of AD research.

  9. Evidential Reasoning in Expert Systems for Image Analysis.

    DTIC Science & Technology

    1985-02-01

techniques to image analysis (IA). There is growing evidence that these techniques offer significant improvements in image analysis, particularly in the...(2) to provide a common framework for analysis, (3) to structure the ER process for major expert-system tasks in image analysis, and (4) to identify...approaches to three important tasks for expert systems in the domain of image analysis. This segment concluded with an assessment of the strengths

  10. Analysis of Handling Qualities Design Criteria for Active Inceptor Force-Feel Characteristics

    NASA Technical Reports Server (NTRS)

    Malpica, Carlos A.; Lusardi, Jeff A.

    2013-01-01

ratio. While these two studies produced boundaries for acceptable/unacceptable stick dynamics for rotorcraft, they were not able to provide guidance on how variations of the stick dynamics within the acceptable region impact handling qualities. More recently, a ground-based simulation study [5] suggested little benefit was to be obtained from variations of the damping ratio for a side-stick controller exhibiting high natural frequencies (greater than 17 rad/s) and damping ratios (greater than 2.0). A flight test campaign was conducted concurrently on the RASCAL JUH-60A in-flight simulator and the ACT/FHS EC-135 in-flight simulator [6]. Upon detailed analysis of the pilot evaluations, the study identified a clear preference for a high damping ratio and natural frequency of the center-stick inceptors. Side-stick controllers were found to be less sensitive to the damping. While these studies have compiled a substantial amount of data, in the form of qualitative and quantitative pilot opinion, a fundamental analysis of the effect of the inceptor force-feel system on flight control is found to be lacking. The study of Ref. [6] specifically concluded that a systematic analysis was necessary, since discrepancies with the assigned handling qualities showed that proposed analytical design metrics, or criteria, were not suitable. The overall goal of the present study is to develop a clearer fundamental understanding of the underlying mechanisms associated with the inceptor dynamics that govern the handling qualities, using a manageable analytical methodology.

  11. BioImage Suite: An integrated medical image analysis suite: An update.

    PubMed

    Papademetris, Xenophon; Jackowski, Marcel P; Rajeevan, Nallakkandi; DiStasio, Marcello; Okuda, Hirohito; Constable, R Todd; Staib, Lawrence H

    2006-01-01

BioImage Suite is an NIH-supported medical image analysis software suite developed at Yale. It leverages both the Visualization Toolkit (VTK) and the Insight Toolkit (ITK), and it includes many additional algorithms for image analysis, especially in the areas of segmentation, registration, diffusion-weighted image processing and fMRI analysis. BioImage Suite has a user-friendly interface developed in the Tcl scripting language. A final beta version is freely available for download.

  12. The synthesis and analysis of color images

    NASA Technical Reports Server (NTRS)

    Wandell, B. A.

    1985-01-01

    A method is described for performing the synthesis and analysis of digital color images. The method is based on two principles. First, image data are represented with respect to the separate physical factors, surface reflectance and the spectral power distribution of the ambient light, that give rise to the perceived color of an object. Second, the encoding is made efficient by using a basis expansion for the surface spectral reflectance and spectral power distribution of the ambient light that takes advantage of the high degree of correlation across the visible wavelengths normally found in such functions. Within this framework, the same basic methods can be used to synthesize image data for color display monitors and printed materials, and to analyze image data into estimates of the spectral power distribution and surface spectral reflectances. The method can be applied to a variety of tasks. Examples of applications include the color balancing of color images, and the identification of material surface spectral reflectance when the lighting cannot be completely controlled.

  13. Image analysis for measuring rod network properties

    NASA Astrophysics Data System (ADS)

    Kim, Dongjae; Choi, Jungkyu; Nam, Jaewook

    2015-12-01

    In recent years, metallic nanowires have been attracting significant attention as next-generation flexible transparent conductive films. The performance of such films depends on the network structure created by the nanowires. Understanding that structure, such as the connectivity, coverage, and alignment of the nanowires, requires knowledge of the individual nanowires inside microscopic images taken of the film. Although nanowires are flexible to a certain extent, they are usually depicted as rigid rods in many analytical and computational studies. Herein, we propose a simple and straightforward algorithm, based on filtering in the frequency domain, for detecting rod-shaped objects inside binary images. The proposed algorithm uses a specially designed filter in the frequency domain to detect image segments, namely, connected components aligned in a certain direction. Those components are post-processed and combined into single rod objects under a given merging rule. In this study, the microscopic properties of rod networks relevant to the opto-electric performance of transparent conductive films, namely the alignment distribution, length distribution, and area fraction, were measured. To verify the algorithm and find its optimum parameters, numerical experiments were performed on synthetic images with predefined properties. With proper parameters selected, the algorithm was used to investigate silver nanowire transparent conductive films fabricated by the dip-coating method.
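    Once rods have been detected, the network properties named in the abstract (alignment distribution, length distribution, area fraction) are straightforward to compute from the rod endpoints. The sketch below is illustrative only; the endpoint representation, the per-rod width, and the crude overlap-ignoring area estimate are assumptions, not the authors' method.

```python
import math

def rod_properties(rods, image_area):
    """Compute simple network statistics for rods given as
    ((x1, y1), (x2, y2), width) tuples. Illustrative representation only."""
    lengths, angles = [], []
    covered = 0.0
    for (x1, y1), (x2, y2), width in rods:
        length = math.hypot(x2 - x1, y2 - y1)
        lengths.append(length)
        # Orientation folded into [0, 180) degrees: a rod has no direction.
        angles.append(math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0)
        covered += length * width  # crude covered area, ignoring overlaps
    area_fraction = covered / image_area
    return lengths, angles, area_fraction

# Two synthetic rods: one horizontal, one vertical, both of width 1.
rods = [((0, 0), (10, 0), 1.0), ((0, 0), (0, 5), 1.0)]
lengths, angles, frac = rod_properties(rods, image_area=100.0)
```

From `lengths` and `angles` one can then build histograms to obtain the length and alignment distributions reported for the silver nanowire films.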

  14. Applying Multiple Criteria Decision Analysis to Comparative Benefit-Risk Assessment: Choosing among Statins in Primary Prevention.

    PubMed

    Tervonen, Tommi; Naci, Huseyin; van Valkenhoef, Gert; Ades, Anthony E; Angelis, Aris; Hillege, Hans L; Postmus, Douwe

    2015-10-01

    Decision makers in different health care settings need to weigh the benefits and harms of alternative treatment strategies. Such health care decisions include marketing authorization by regulatory agencies, practice guideline formulation by clinical groups, and treatment selection by prescribers and patients in clinical practice. Multiple criteria decision analysis (MCDA) is a family of formal methods that help make explicit the tradeoffs that decision makers accept between the benefit and risk outcomes of different treatment options. Despite the recent interest in MCDA, certain methodological aspects are poorly understood. This paper presents 7 guidelines for applying MCDA in benefit-risk assessment and illustrates their use in the selection of a statin drug for the primary prevention of cardiovascular disease. We provide guidance on the key methodological issues of how to define the decision problem, how to select a set of nonoverlapping evaluation criteria, how to synthesize and summarize the evidence, how to translate relative measures to absolute ones that permit comparisons between the criteria, how to define suitable scale ranges, how to elicit partial preference information from the decision makers, and how to incorporate uncertainty in the analysis. Our example on statins indicates that fluvastatin is likely to be the most preferred drug by our decision maker and that this result is insensitive to the amount of preference information incorporated in the analysis.
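    The guidelines above (nonoverlapping criteria, translating relative measures to absolute ones, defining scale ranges, eliciting weights) combine in the standard additive MCDA value model. The following is a minimal sketch of that model; the criterion names, weights, and scale ranges are invented for illustration and are not taken from the statin analysis.

```python
def mcda_score(criteria_values, weights, ranges):
    """Additive MCDA: rescale each criterion onto [0, 1] over its chosen
    scale range, then take the weighted sum. Names and data are illustrative."""
    total = 0.0
    for name, w in weights.items():
        lo, hi = ranges[name]  # worst and best plausible values
        partial = (criteria_values[name] - lo) / (hi - lo)
        total += w * partial
    return total

# Hypothetical benefit-risk profile of one treatment option.
weights = {"ldl_reduction": 0.6, "myopathy_risk": 0.4}
ranges  = {"ldl_reduction": (0.0, 50.0),   # % reduction, worst -> best
           "myopathy_risk": (0.02, 0.0)}   # inverted range: lower risk is better
option  = {"ldl_reduction": 30.0, "myopathy_risk": 0.01}
score = mcda_score(option, weights, ranges)
```

Scoring each treatment option this way and ranking by total score is the deterministic core; the paper's treatment of partial preference information and uncertainty layers on top of it.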

  15. Computerized image analysis of digitized infrared images of breasts from a scanning infrared imaging system

    NASA Astrophysics Data System (ADS)

    Head, Jonathan F.; Lipari, Charles A.; Elliot, Robert L.

    1998-10-01

    Infrared imaging of the breasts has been shown to be of value in risk assessment, detection, diagnosis, and prognosis of breast cancer. However, infrared imaging has not been widely accepted for a variety of reasons, including the lack of standardization of the subjective visual analysis method. The subjective nature of the standard visual analysis makes it difficult to achieve equivalent results with different equipment and different interpreters of the infrared patterns of the breasts. Therefore, this study was undertaken to develop more objective analysis methods for infrared images of the breasts, based on semiquantitative and quantitative analysis of computer-assisted, image-analysis-determined mean temperatures of whole breasts and of breast quadrants. When using objective quantitative data on whole breasts (comparing the difference in means of the left and right breasts), semiquantitative data on quadrants of the breasts (deriving an index by summing scores for each quadrant), or summed quantitative data on quadrants of the breasts, there was a decrease in the number of abnormal patterns (positives) in patients being screened for breast cancer and an increase in the number of abnormal patterns (true positives) in the breast cancer patients. It is hoped that the decrease in positives among women being screened for breast cancer will translate into a decrease in false positives, but larger numbers of women with longer follow-up will be needed to clarify this. A much larger group of breast cancer patients will also need to be studied to determine whether these objective image analysis methods truly increase the percentage of breast cancer patients presenting with abnormal infrared images of the breast.
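    The whole-breast comparison described above reduces to comparing mean temperatures of matched regions and flagging large left-right differences. A minimal sketch follows; the quadrant names, temperature values, and the 0.5 °C cutoff are illustrative assumptions, not thresholds from the study.

```python
def quadrant_asymmetry(left, right, threshold=0.5):
    """Flag quadrants whose mean temperature (degrees C) differs between the
    left and right breast by more than a threshold. Illustrative values only."""
    flags = {}
    for q in left:
        delta = abs(left[q] - right[q])
        flags[q] = delta > threshold
    return flags

# Hypothetical mean temperatures per quadrant from a computer-assisted analysis.
left  = {"upper_outer": 33.1, "upper_inner": 32.8,
         "lower_outer": 32.5, "lower_inner": 32.6}
right = {"upper_outer": 34.0, "upper_inner": 32.9,
         "lower_outer": 32.4, "lower_inner": 32.7}
flags = quadrant_asymmetry(left, right)
```

A semiquantitative index of the kind mentioned in the abstract can then be formed by summing a score over the flagged quadrants.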

  16. Vibration signature analysis of AFM images

    SciTech Connect

    Joshi, G.A.; Fu, J.; Pandit, S.M.

    1995-12-31

    Vibration signature analysis has been commonly used for machine condition monitoring and the control of errors. However, it has rarely been employed for the analysis of precision instruments such as an atomic force microscope (AFM). In this work, an AFM was used to collect vibration data from a sample positioning stage under different suspension and support conditions. Certain structural characteristics of the sample positioning stage show up as a result of the vibration signature analysis of the surface height images measured using an AFM. It is important to understand these vibration characteristics in order to reduce vibrational uncertainty, improve the damping and structural design, and eliminate imaging imperfections. The choice of method applied for vibration analysis may affect the results. Two methods, data dependent systems (DDS) analysis and Welch's periodogram averaging method, were investigated for application to this problem. Both techniques provide smooth spectrum plots from the data. Welch's periodogram provides a coarse resolution, limited by the number of samples, and requires a window to be chosen subjectively by the user. The DDS analysis provides sharper spectral peaks at a much higher resolution and a much lower noise floor. A decomposition of the signal variance in terms of the frequencies is provided as well. The technique is based on an objective model adequacy criterion.
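    Welch's periodogram averaging, as contrasted with DDS above, splits the signal into overlapping windowed segments and averages their periodograms, trading frequency resolution for variance reduction. The plain-Python sketch below (direct DFT, Hann window, 50% overlap) is for illustration; a production implementation would use an FFT routine such as `scipy.signal.welch`.

```python
import cmath, math

def welch_psd(signal, seg_len, overlap=0.5):
    """Welch's method: split the signal into overlapping Hann-windowed
    segments, compute each segment's periodogram with a direct DFT, and
    average them. Unoptimized sketch, not a library-grade estimator."""
    step = int(seg_len * (1 - overlap)) or 1
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / (seg_len - 1))
              for n in range(seg_len)]
    norm = sum(w * w for w in window)  # window power, for normalization
    psd = [0.0] * (seg_len // 2 + 1)
    count = 0
    for start in range(0, len(signal) - seg_len + 1, step):
        seg = [signal[start + n] * window[n] for n in range(seg_len)]
        for k in range(len(psd)):
            X = sum(seg[n] * cmath.exp(-2j * math.pi * k * n / seg_len)
                    for n in range(seg_len))
            psd[k] += abs(X) ** 2 / norm
        count += 1
    return [p / count for p in psd]

# A pure tone at 4 cycles per 32-sample segment should peak at DFT bin 4.
signal = [math.sin(2 * math.pi * 4 * t / 32) for t in range(128)]
psd = welch_psd(signal, seg_len=32)
```

The subjective choices the abstract mentions are visible here as the `window` and `seg_len` parameters: a longer segment sharpens resolution but averages fewer periodograms.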

  17. Pain related inflammation analysis using infrared images

    NASA Astrophysics Data System (ADS)

    Bhowmik, Mrinal Kanti; Bardhan, Shawli; Das, Kakali; Bhattacharjee, Debotosh; Nath, Satyabrata

    2016-05-01

    Medical Infrared Thermography (MIT) offers a potential non-invasive, non-contact, and radiation-free imaging modality for the assessment of abnormal, painful inflammation in the human body. The assessment of inflammation mainly depends on the emission of heat from the skin surface. Arthritis is a disease of joint damage that generates inflammation in one or more anatomical joints of the body. Osteoarthritis (OA) is the most frequently appearing form of arthritis, and rheumatoid arthritis (RA) is the most threatening form. In this study, inflammatory analysis has been performed on infrared images of patients suffering from RA and OA. For the analysis, a dataset of 30 bilateral knee thermograms was captured from patients with RA and OA by following a thermogram acquisition standard. The thermograms are pre-processed, and areas of interest are extracted for further processing. The spread of inflammation is investigated along with a statistical analysis of the pre-processed thermograms. The objectives of the study include: (i) generation of a novel thermogram acquisition standard for inflammatory pain disease; (ii) analysis of the spread of the inflammation related to RA and OA using K-means clustering; and (iii) first- and second-order statistical analysis of the pre-processed thermograms. The conclusion reflects that, in most cases, RA-related inflammation affects both knees, whereas inflammation related to OA is present in one knee only. Also, owing to the spread of inflammation in OA, contralateral asymmetries are detected through the statistical analysis.
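    The K-means step used to analyze the spread of inflammation can be illustrated on one dimension: temperature values partitioned into cooler background and warmer inflamed clusters. This is a toy sketch with synthetic temperatures, not the study's pipeline (which operates on full thermogram images).

```python
def kmeans_1d(values, k=2, iters=20):
    """Tiny 1-D k-means, e.g. for splitting thermogram pixel temperatures
    into cooler and warmer groups. Illustrative sketch only."""
    # Spread the initial centers across the sorted values.
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            i = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[i].append(v)
        # Recompute each center; keep the old one if its cluster emptied.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two well-separated synthetic temperature groups (degrees C).
temps = [30.1, 30.3, 30.2, 36.0, 36.4, 36.2]
centers = sorted(kmeans_1d(temps, k=2))
```

The warmer cluster's extent on each knee is what makes contralateral asymmetries, as described for OA, quantifiable.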

  18. ACR Appropriateness Criteria Myelopathy.

    PubMed

    Roth, Christopher J; Angevine, Peter D; Aulino, Joseph M; Berger, Kevin L; Choudhri, Asim F; Fries, Ian Blair; Holly, Langston T; Kendi, Ayse Tuba Karaqulle; Kessler, Marcus M; Kirsch, Claudia F; Luttrull, Michael D; Mechtler, Laszlo L; O'Toole, John E; Sharma, Aseem; Shetty, Vilaas S; West, O Clark; Cornelius, Rebecca S; Bykowski, Julie

    2016-01-01

    Patients presenting with myelopathic symptoms may have a number of causative intradural and extradural etiologies, including disc degenerative diseases, spinal masses, infectious or inflammatory processes, vascular compromise, and vertebral fracture. Patients may present acutely or insidiously and may progress toward long-term paralysis if not treated promptly and effectively. Noncontrast CT is the most appropriate first examination in acute trauma cases to diagnose vertebral fracture as the cause of acute myelopathy. In most nontraumatic cases, MRI is the modality of choice to evaluate the location, severity, and causative etiology of spinal cord myelopathy, and predicts which patients may benefit from surgery. Myelopathy from spinal stenosis and spinal osteoarthritis is best confirmed without MRI intravenous contrast. Many other myelopathic conditions are more easily visualized after contrast administration. Imaging performed should be limited to the appropriate spinal levels, based on history, physical examination, and clinical judgment. The ACR Appropriateness Criteria are evidence-based guidelines for specific clinical conditions that are reviewed every three years by a multidisciplinary expert panel. The guideline development and review include an extensive analysis of current medical literature from peer-reviewed journals, and the application of a well-established consensus methodology (modified Delphi) to rate the appropriateness of imaging and treatment procedures by the panel. In those instances in which evidence is lacking or not definitive, expert opinion may be used to recommend imaging or treatment.

  19. Quantitative image analysis of celiac disease.

    PubMed

    Ciaccio, Edward J; Bhagat, Govind; Lewis, Suzanne K; Green, Peter H

    2015-03-07

    We outline the use of quantitative techniques that are currently used for the analysis of celiac disease. Image processing techniques can be useful to statistically analyze the pixel data of endoscopic images that are acquired with standard or videocapsule endoscopy. It is shown how current techniques have evolved to become more useful for gastroenterologists who seek to understand celiac disease and to screen for it in suspected patients. New directions for focus in the development of methodology for diagnosis and treatment of this disease are suggested. It is evident that there are still broad areas with potential to expand the use of quantitative techniques for improved analysis in suspected or known celiac disease patients.

  20. Machine learning for medical images analysis.

    PubMed

    Criminisi, A

    2016-10-01

    This article discusses the application of machine learning for the analysis of medical images. Specifically: (i) We show how a special type of learning models can be thought of as automatically optimized, hierarchically-structured, rule-based algorithms, and (ii) We discuss how the issue of collecting large labelled datasets applies to both conventional algorithms as well as machine learning techniques. The size of the training database is a function of model complexity rather than a characteristic of machine learning methods.

  1. Global Methods for Image Motion Analysis

    DTIC Science & Technology

    1992-10-01

    …of images to determine egomotion and to extract information from the scene. Research in motion analysis has been focussed on the problems of…

  2. Skin age testing criteria: characterization of human skin structures by 500 MHz MRI multiple contrast and image processing.

    PubMed

    Sharma, Rakesh

    2010-07-21

    Ex vivo magnetic resonance microimaging (MRM) image characteristics are reported in human skin samples in different age groups. Human excised skin samples were imaged using a custom coil placed inside a 500 MHz NMR imager for high-resolution microimaging. Skin MRI images were processed for characterization of different skin structures. Contiguous cross-sectional T1-weighted 3D spin echo MRI, T2-weighted 3D spin echo MRI, and proton density images were compared with skin histopathology and NMR peaks. In all skin specimens, epidermis and dermis thickening and hair follicle size were measured using MRM. Optimized parameters TE and TR and multicontrast enhancement generated better MRI visibility of different skin components. Within high MR signal regions near the custom coil, MRI images with short echo time were comparable with digitized histological sections for skin structures of the epidermis, dermis, and hair follicles in 6 (67%) of the nine specimens. Skin percent tissue composition, measurement of the epidermis, dermis, sebaceous gland, and hair follicle size, and skin NMR peaks were signatures of skin type. The image processing determined the dimensionality of skin tissue components and skin typing. The ex vivo MRI images and histopathology of the skin may be used to measure skin structure, and skin NMR peaks combined with image processing may be a tool for determining skin typing and skin composition.

  3. Skin age testing criteria: characterization of human skin structures by 500 MHz MRI multiple contrast and image processing

    NASA Astrophysics Data System (ADS)

    Sharma, Rakesh

    2010-07-01

    Ex vivo magnetic resonance microimaging (MRM) image characteristics are reported in human skin samples in different age groups. Human excised skin samples were imaged using a custom coil placed inside a 500 MHz NMR imager for high-resolution microimaging. Skin MRI images were processed for characterization of different skin structures. Contiguous cross-sectional T1-weighted 3D spin echo MRI, T2-weighted 3D spin echo MRI, and proton density images were compared with skin histopathology and NMR peaks. In all skin specimens, epidermis and dermis thickening and hair follicle size were measured using MRM. Optimized parameters TE and TR and multicontrast enhancement generated better MRI visibility of different skin components. Within high MR signal regions near the custom coil, MRI images with short echo time were comparable with digitized histological sections for skin structures of the epidermis, dermis, and hair follicles in 6 (67%) of the nine specimens. Skin percent tissue composition, measurement of the epidermis, dermis, sebaceous gland, and hair follicle size, and skin NMR peaks were signatures of skin type. The image processing determined the dimensionality of skin tissue components and skin typing. The ex vivo MRI images and histopathology of the skin may be used to measure skin structure, and skin NMR peaks combined with image processing may be a tool for determining skin typing and skin composition.

  4. Optimal site selection for sitting a solar park using multi-criteria decision analysis and geographical information systems

    NASA Astrophysics Data System (ADS)

    Georgiou, Andreas; Skarlatos, Dimitrios

    2016-07-01

    Among renewable power sources, solar power is rapidly becoming popular because it is inexhaustible, clean, and dependable. It has also become more efficient, since the power conversion efficiency of photovoltaic solar cells has increased. Following these trends, solar power will become more affordable in years to come, and considerable investments are to be expected. Given the size of solar plants, the siting procedure is a crucial factor for their efficiency and financial viability. Many aspects influence such a decision: legal, environmental, technical, and financial, to name a few. This paper describes a general integrated framework to evaluate land suitability for the optimal placement of photovoltaic solar power plants, based on a combination of a geographic information system (GIS), remote sensing techniques, and multi-criteria decision-making methods. An application of the proposed framework to the Limassol district in Cyprus is further illustrated. The combination of a GIS and multi-criteria methods produces an excellent analysis tool that creates an extensive database of spatial and non-spatial data, which is used to structure the problem and promote the use of multiple criteria. A set of environmental, economic, social, and technical constraints, based on recent Cypriot legislation, European Union policies, and expert advice, identifies the potential sites for solar park installation. The pairwise comparison method, in the context of the analytic hierarchy process (AHP), is applied to estimate the criteria weights and establish their relative importance in site evaluation. In addition, four different methods to combine information layers and check their sensitivity were used. The first considered all the criteria as equally important and assigned them equal weights, whereas the others grouped the criteria and graded them according to their perceived importance. The overall suitability of the study
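    The AHP step named above turns a pairwise-comparison matrix into criterion weights. A common approximation is the row geometric mean, sketched below; the 3x3 matrix and criterion interpretation (e.g. slope vs. irradiance vs. road distance) are invented for illustration and are not from the Limassol study.

```python
def ahp_weights(pairwise):
    """Estimate AHP criterion weights from a reciprocal pairwise-comparison
    matrix via the row geometric-mean approximation. Illustrative sketch."""
    n = len(pairwise)
    geo = []
    for row in pairwise:
        g = 1.0
        for a in row:
            g *= a
        geo.append(g ** (1.0 / n))  # geometric mean of the row
    total = sum(geo)
    return [g / total for g in geo]  # normalize to sum to 1

# Hypothetical judgments: criterion 1 is 3x as important as criterion 2
# and 5x as important as criterion 3 (reciprocals below the diagonal).
matrix = [
    [1.0,     3.0,     5.0],
    [1 / 3.0, 1.0,     2.0],
    [1 / 5.0, 1 / 2.0, 1.0],
]
weights = ahp_weights(matrix)
```

These weights would then multiply the suitability score of each information layer before the layers are combined, as in the four combination schemes the abstract describes.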

  5. Tomographic spectral imaging: analysis of localized corrosion.

    SciTech Connect

    Michael, Joseph Richard; Kotula, Paul Gabriel; Keenan, Michael Robert

    2005-02-01

    Microanalysis is typically performed to analyze the near surface of materials. There are many instances where chemical information about the third spatial dimension is essential to the solution of materials analyses. The majority of 3D analyses, however, focus on limited spectral acquisition and/or analysis. For truly comprehensive 3D chemical characterization, 4D spectral images (a complete spectrum from each volume element of a region of a specimen) are needed. Furthermore, a robust statistical method is needed to extract the maximum amount of chemical information from that extremely large amount of data. In this paper, an example of the acquisition and multivariate statistical analysis of 4D (3 spatial and 1 spectral dimension) x-ray spectral images is described. The method of utilizing a single- or dual-beam FIB (without or with SEM) to get at 3D chemistry has been described by others with respect to secondary-ion mass spectrometry. The basic methodology described in those works has been modified for comprehensive x-ray microanalysis in a dual-beam FIB/SEM (FEI Co. DB-235). In brief, the FIB is used to serially section a site-specific region of a sample, and then the electron beam is rastered over the exposed surfaces, with x-ray spectral images being acquired at each section. All this is performed without rotating or tilting the specimen between FIB cutting and SEM imaging/x-ray spectral image acquisition. The resultant 4D spectral image is then unfolded (number of volume elements by number of channels) and subjected to the same multivariate curve resolution (MCR) approach that has proven successful for the analysis of lower-dimension x-ray spectral images. The TSI data sets can be in excess of 4 GB. This problem has been overcome (for now), and images up to 6 GB have been analyzed in this work. The method for analyzing such large spectral images is described in this presentation. A comprehensive 3D chemical analysis was performed on several corrosion specimens.

  6. F-106 data summary and model results relative to threat criteria and protection design analysis

    NASA Technical Reports Server (NTRS)

    Pitts, F. L.; Finelli, G. B.; Perala, R. A.; Rudolph, T. H.

    1986-01-01

    The NASA F-106 has acquired considerable data on the rates-of-change of EM parameters on the aircraft surface during 690 direct lightning strikes while penetrating thunderstorms at altitudes from 15,000 to 40,000 feet. The data are presently being used in updating previous lightning criteria and standards. The new lightning standards will, therefore, be the first which reflect actual aircraft responses measured at flight altitudes.

  7. GIS analysis of the siting criteria for the Mixed and Low-Level Waste Treatment Facility and the Idaho Waste Processing Facility

    SciTech Connect

    Hoskinson, R.L.

    1994-01-01

    This report summarizes a study conducted using the Arc/Info® geographic information system (GIS) to analyze the criteria used for site selection for the Mixed and Low-Level Waste Treatment Facility (MLLWTF) and the Idaho Waste Processing Facility (IWPF). The purpose of the analyses was to determine, based on predefined criteria, the areas on the INEL that best satisfied the criteria. The coverages used in this study were produced by importing into the GIS the AutoCAD files that produced the maps for a pre-site-selection draft report. The files were then converted to Arc/Info® GIS format. The initial analysis considered all of the criteria as having equal importance in determining the areas of the INEL that would best satisfy the requirements. Another analysis emphasized four of the criteria as 'must' criteria that had to be satisfied. Additional analyses considered other criteria that were considered for, but not included in, the predefined criteria. This GIS analysis of the siting criteria for the IWPF and MLLWTF provides a logical, repeatable, and defensible approach to the determination of candidate locations for the facilities. The results of the analyses support the selected candidate locations.

  8. Image analysis from root system pictures

    NASA Astrophysics Data System (ADS)

    Casaroli, D.; Jong van Lier, Q.; Metselaar, K.

    2009-04-01

    Root research has been hampered by a lack of good methods and by the amount of time involved in making measurements. In general, studies of root systems are made with either the monolith or the minirhizotron method, the latter used as a quantitative tool but requiring comparison with conventional destructive methods. This work aimed to analyze root system images of different crops, obtained from a root atlas book, in order to find the root length and root length density and correlate them with the literature. Images of five crops, Zea mays, Secale cereale, Triticum aestivum, Medicago sativa and Panicum miliaceum, were divided into horizontal and vertical layers. Root length distribution was analyzed for horizontal as well as vertical layers. To obtain the root length density, a cuboidal volume was assumed to correspond to each part of the image. The results of the regression analyses showed root length distributions according to horizontal or vertical layers. It was possible to find the root length distribution for single horizontal layers as a function of vertical layers, and also for single vertical layers as a function of horizontal layers. Regression analysis showed good fits when the root length distributions were grouped in horizontal layers according to the distance from the root center. When root length distributions were grouped according to soil horizons, the fits worsened. The resulting root length density estimates were lower than those commonly found in the literature, possibly because (1) the crop images came from single-plant situations, whereas the analyzed field experiments had more than one plant; (2) root overlapping may occur in the field; (3) root experiments, both in the field and in image analyses as performed here, are subject to sampling errors; and (4) the (hand-drawn) images used in this study may have omitted some of the smallest roots.
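    The cuboidal-volume assumption above makes the root length density computation a one-liner: total root length in a layer divided by the layer's assumed soil volume. The sketch below uses made-up lengths and dimensions purely to illustrate the calculation.

```python
def root_length_density(root_lengths_cm, layer_area_cm2, layer_depth_cm):
    """Root length density (cm of root per cm^3 of soil) for one layer,
    assuming a cuboidal volume behind the imaged plane, as in the text.
    All numbers here are illustrative."""
    volume_cm3 = layer_area_cm2 * layer_depth_cm
    return sum(root_lengths_cm) / volume_cm3

# Three root segments totaling 25 cm, in a 100 cm^2 x 10 cm deep layer.
rld = root_length_density([12.0, 8.0, 5.0],
                          layer_area_cm2=100.0, layer_depth_cm=10.0)
```

Because a drawing of a single plant assigns all measured length to that one cuboid, densities derived this way naturally undershoot multi-plant field values, which is the first explanation the abstract offers.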

  9. Image analysis applied to luminescence microscopy

    NASA Astrophysics Data System (ADS)

    Maire, Eric; Lelievre-Berna, Eddy; Fafeur, Veronique; Vandenbunder, Bernard

    1998-04-01

    We have developed a novel approach to study luminescent light emission during the migration of living cells by low-light imaging techniques. The equipment consists of an anti-vibration table with a hole for a direct output under the frame of an inverted microscope. The image is captured directly by an ultra-low-light-level photon-counting camera equipped with an image intensifier coupled by an optical fiber to a CCD sensor. This installation is dedicated to measuring, in a dynamic manner, the effect of SF/HGF (Scatter Factor/Hepatocyte Growth Factor) both on the activation of gene promoter elements and on cell motility. Epithelial cells were stably transfected with promoter elements containing Ets transcription factor-binding sites driving a luciferase reporter gene. Luminescent light emitted by individual cells was measured by image analysis. Images of luminescent spots were acquired with a high-aperture objective and exposure times of 10-30 min in photon-counting mode. The sensitivity of the camera was adjusted to a high value, which required the use of a segmentation algorithm dedicated to eliminating the background noise. Hence, image segmentation and treatments by mathematical morphology were particularly indicated under these experimental conditions. In order to estimate the orientation of cells during their migration, we used a dedicated skeleton algorithm applied to the oblong spots of variable intensities emitted by the cells. Kinetic changes of luminescent sources, and the distance and speed of migration, were recorded and then correlated with cellular morphological changes for each spot. Our results highlight the usefulness of mathematical morphology for quantifying kinetic changes in luminescence microscopy.

  10. Prevalence of restless legs syndrome in Ankara, Turkey: an analysis of diagnostic criteria and awareness.

    PubMed

    Yilmaz, Nesrin Helvaci; Akbostanci, Muhittin Cenk; Oto, Aycan; Aykac, Ozlem

    2013-09-01

    The aim of this study was threefold: (1) to investigate the prevalence of restless legs syndrome (RLS) in Ankara, Turkey; (2) to determine the predictive values of the diagnostic criteria; and (3) to determine the frequency of physician referrals and the frequency of receiving the correct diagnosis. A total of 815 individuals above the age of 15, from randomly selected addresses, were reached using a questionnaire composed of the four diagnostic criteria. Individuals who answered 'yes' to at least one question were interviewed by neurologists for the diagnosis of RLS. The frequency of physician referrals and the frequency of receiving the correct diagnosis were also determined for patients given a final diagnosis of RLS. The prevalence of RLS in Ankara was 5.52%; 41.0% of the individuals diagnosed with RLS had replied 'yes' to one, two, or three of the questions asked by the interviewers. However, only 21.3% of individuals who replied 'yes' to all four questions received the diagnosis of RLS. Among the patients with a final diagnosis of RLS, 25.7% had consulted a physician for the symptoms and 22.2% got the correct diagnosis. The RLS prevalence in Ankara lies between that reported for Western and Far Eastern countries, consistent with its geographical location. The diagnostic criteria may not be fully predictive when applied by non-physician pollsters. A physician's probability of correctly diagnosing RLS is still low.

  11. Criteria and indicators for the assessment of community forestry outcomes: a comparative analysis from Canada.

    PubMed

    Teitelbaum, Sara

    2014-01-01

    In Canada, there are few structured evaluations of community forestry despite more than twenty years of practice. This article presents a criteria and indicator framework, designed to elicit descriptive information about the types of socio-economic results being achieved by community forests in the Canadian context. The criteria and indicators framework draws on themes proposed by other researchers both in the field of community forestry and related areas. The framework is oriented around three concepts described as amongst the underlying objectives of community forestry, namely participatory governance, local economic benefits and multiple forest use. This article also presents the results of a field-based application of the criteria and indicators framework, comparing four case studies in three Canadian provinces. All four are community forests with direct tenure rights to manage and benefit from forestry activities. Results reveal that in terms of governance, the case studies adhere to two different models, which we name 'interest group' vs. 'local government'. Stronger participatory dimensions are evident in two case studies. In the area of local economic benefits, the four case studies perform similarly, with some of the strongest benefits being in employment creation, especially for those case studies that offer non-timber activities such as recreation and education. Two of four cases have clearly adopted a multiple-use approach to management.

  12. A Multi-Criteria Decision Analysis based methodology for quantitatively scoring the reliability and relevance of ecotoxicological data.

    PubMed

    Isigonis, Panagiotis; Ciffroy, Philippe; Zabeo, Alex; Semenzin, Elena; Critto, Andrea; Giove, Silvio; Marcomini, Antonio

    2015-12-15

    Ecotoxicological data are highly important for risk assessment processes and are used for deriving environmental quality criteria, which are enacted to assure the good quality of waters, soils, or sediments and to achieve desirable environmental quality objectives. It is therefore important to evaluate the reliability of the available data when analysing their possible use in the aforementioned processes. A thorough analysis of currently available frameworks for the assessment of ecotoxicological data has led to the identification of significant flaws, but at the same time of various opportunities for improvement. In this context, a new methodology based on Multi-Criteria Decision Analysis (MCDA) techniques has been developed with the aim of analysing the reliability and relevance of ecotoxicological data (which are produced through laboratory biotests for individual effects) in a transparent, quantitative way, through the use of expert knowledge, multiple criteria, and fuzzy logic. The proposed methodology can be used for the production of weighted Species Sensitivity Weighted Distributions (SSWD) as a component of the ecological risk assessment of chemicals in aquatic systems. The MCDA aggregation methodology is described in detail and demonstrated through examples in the article, and the hierarchically structured framework used for the evaluation and classification of ecotoxicological data is briefly discussed. The methodology is demonstrated for the aquatic compartment, but it can easily be tailored to other environmental compartments (soil, air, sediments).

  13. System analysis approach to deriving design criteria (Loads) for Space Shuttle and its payloads. Volume 2: Typical examples

    NASA Technical Reports Server (NTRS)

    Ryan, R. S.; Bullock, T.; Holland, W. B.; Kross, D. A.; Kiefling, L. A.

    1981-01-01

    The achievement of an optimized design from the system standpoint, under the low-cost, high-risk constraints of the present-day environment, was analyzed. The Space Shuttle illustrates the requirement for an analysis approach that considers all major disciplines (coupling between structures, control, propulsion, thermal, aeroelastic, and performance) simultaneously. The Space Shuttle and certain payloads, Space Telescope and Spacelab, are examined. The requirements for system analysis approaches and criteria, including dynamic modeling requirements, test requirements, control requirements, and the resulting design verification approaches, are illustrated. A survey of the problem, potential approaches available as solutions, implications for future systems, and projected technology development areas are addressed.

  14. System analysis approach to deriving design criteria (loads) for Space Shuttle and its payloads. Volume 1: General statement of approach

    NASA Technical Reports Server (NTRS)

    Ryan, R. S.; Bullock, T.; Holland, W. B.; Kross, D. A.; Kiefling, L. A.

    1981-01-01

    The Space Shuttle, the most complex transportation system designed to date, illustrates the requirement for an analysis approach that considers all major disciplines simultaneously. Its unique cross coupling, high sensitivity to aerodynamic uncertainties, and high performance requirements dictated a less conservative approach than those taken in previous programs. Analyses performed for the Space Shuttle and certain payloads, Space Telescope and Spacelab, are used as examples. These illustrate the requirements for system analysis approaches and criteria, including dynamic modeling requirements, test requirements, control requirements, and the resulting design verification approaches. A survey of the problem, potential approaches available as solutions, implications for future systems, and projected technology development areas are addressed.

  15. Hyperspectral imaging technology for pharmaceutical analysis

    NASA Astrophysics Data System (ADS)

    Hamilton, Sara J.; Lodder, Robert A.

    2002-06-01

    The sensitivity and spatial resolution of hyperspectral imaging instruments are tested in this paper using pharmaceutical applications. The first experiment tested the hypothesis that a near-IR tunable diode-based remote sensing system is capable of monitoring the degradation of hard gelatin capsules at a relatively long distance; spectra from the capsules were used to differentiate among capsules by atmospheric exposure. The second experiment tested the hypothesis that imaging spectrometry of tablets permits the identity and composition of multiple individual tablets to be determined simultaneously. A near-IR camera was used to collect thousands of spectra simultaneously from a field of blister-packaged tablets, and the number of tablets that a typical near-IR camera can currently analyze simultaneously was estimated to be approximately 1300. The bootstrap error-adjusted single-sample technique chemometric-imaging algorithm was used to draw probability-density contour plots that revealed tablet composition. The single-capsule analysis provides an indication of how far apart the sample and instrumentation can be while still maintaining adequate S/N, while the multiple-sample imaging experiment gives an indication of how many samples can be analyzed simultaneously while maintaining an adequate S/N and pixel coverage on each sample.

  16. Image analysis of Renaissance copperplate prints

    NASA Astrophysics Data System (ADS)

    Hedges, S. Blair

    2008-02-01

    From the fifteenth to the nineteenth centuries, prints were a common form of visual communication, analogous to photographs. Copperplate prints have many finely engraved black lines which were used to create the illusion of continuous tone. Line densities generally are 100-2000 lines per square centimeter and a print can contain more than a million total engraved lines 20-300 micrometers in width. Because hundreds to thousands of prints were made from a single copperplate over decades, variation among prints can have historical value. The largest variation is plate-related, which is the thinning of lines over successive editions as a result of plate polishing to remove time-accumulated corrosion. Thinning can be quantified with image analysis and used to date undated prints and books containing prints. Print-related variation, such as over-inking of the print, is a smaller but significant source. Image-related variation can introduce bias if images were differentially illuminated or not in focus, but improved imaging technology can limit this variation. The Print Index, the percentage of an area composed of lines, is proposed as a primary measure of variation. Statistical methods also are proposed for comparing and identifying prints in the context of a print database.
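    The Print Index defined above, the percentage of an area composed of lines, reduces to a thresholded pixel count. The following sketch illustrates the idea on a synthetic "scan"; the image, threshold value, and line layout are all invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented 8-bit grayscale "scan": light paper with dark engraved lines.
img = rng.integers(200, 256, size=(100, 100))        # paper background
img[::5, :] = rng.integers(0, 60, size=(20, 100))    # every 5th row is a line

threshold = 128                                        # split line pixels from paper
print_index = 100.0 * float(np.mean(img < threshold))  # % of area composed of lines
print(round(print_index, 1))
```

Comparing Print Index values for the same plate area across editions would then quantify the line thinning the abstract describes.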

  17. Quality assurance in dental radiography: intra-oral image quality analysis.

    PubMed

    Bolas, Andrew; Fitzgerald, Maurice

    With the introduction of criteria for clinical audit by the Irish Dental Council, and the statutory requirement on dentists to introduce this into their practice, this article will introduce the basic concepts of quality standards in intra-oral radiography and the subsequent application of these standards in an image quality audit cycle. Subjective image quality analysis is not a new concept, but its application can prove beneficial to both patient and dental practitioner. The ALARA (as low as reasonably achievable) principle is fundamental in radiation protection, and therefore the prevention of repeat exposures demonstrates one facet of this that the dental practitioner can employ within daily practice.

  18. Markov Random Fields, Stochastic Quantization and Image Analysis

    DTIC Science & Technology

    1990-01-01

    Markov random fields based on the lattice Z2 have been extensively used in image analysis in a Bayesian framework as a priori models for the ... of Image Analysis can be given some fundamental justification then there is a remarkable connection between Probabilistic Image Analysis, Statistical Mechanics and Lattice-based Euclidean Quantum Field Theory.

  19. Quantitative image analysis in the assessment of diffuse large B-cell lymphoma.

    PubMed

    Chabot-Richards, Devon S; Martin, David R; Myers, Orrin B; Czuchlewski, David R; Hunt, Kristin E

    2011-12-01

    Proliferation rates in diffuse large B-cell lymphoma have been associated with conflicting outcomes in the literature, more often with high proliferation associated with poor prognosis. In most studies, the proliferation rate was estimated by a pathologist using an immunohistochemical stain for the monoclonal antibody Ki-67. We hypothesized that a quantitative image analysis algorithm would give a more accurate estimate of the proliferation rate, leading to better associations with survival. In all, 84 cases of diffuse large B-cell lymphoma were selected according to the World Health Organization criteria. The Ki-67 percentage positivity estimated by the pathologist was recorded from the original report. The same slides were then scanned using an Aperio ImageScope, and Ki-67 percentage positivity was calculated using a computer-based quantitative immunohistochemistry nuclear algorithm. In addition, chart review was performed and survival time was recorded. The Ki-67 percentages estimated by the pathologist in the original report and by quantitative image analysis were significantly correlated (P<0.001), but the pathologist estimates were significantly higher than those from quantitative image analysis (P=0.021). There was less agreement at lower Ki-67 percentages. Comparison of Ki-67 percentage positivity versus survival did not show a significant association with either the pathologist estimate or quantitative image analysis. However, although not significant, there was a trend of worse survival at higher proliferation rates detected by the pathologist but not by quantitative image analysis. Interestingly, our data suggest that the Ki-67 percentage positivity as assessed by the pathologist may be more closely associated with survival outcome than that identified by quantitative image analysis. This may indicate that pathologists are better at selecting appropriate areas of the slide. More cases are needed to assess whether this finding would be statistically significant.

  20. Simple Low Level Features for Image Analysis

    NASA Astrophysics Data System (ADS)

    Falcoz, Paolo

    As human beings, we perceive the world around us mainly through our eyes, and give what we see the status of “reality”; as such we have historically tried to create ways of recording this reality so we could augment or extend our memory. From early attempts in photography, like the image produced in 1826 by the French inventor Nicéphore Niépce (Figure 2.1), to the latest high-definition camcorders, the number of recorded pieces of reality has increased exponentially, posing the problem of managing all that information. Most of the raw video material produced today has lost its memory-augmentation function, as it will hardly ever be viewed by any human; pervasive CCTVs are an example. They generate an enormous amount of data each day, but there is not enough “human processing power” to view them. Therefore the need for effective automatic image analysis tools is great, and a lot of effort has been put into them, both from academia and industry. In this chapter, a review of some of the most important image analysis tools is presented.

  1. Nursing image: an evolutionary concept analysis.

    PubMed

    Rezaei-Adaryani, Morteza; Salsali, Mahvash; Mohammadi, Eesa

    2012-12-01

    A long-term challenge to the nursing profession is the concept of image. In this study, we used the Rodgers' evolutionary concept analysis approach to analyze the concept of nursing image (NI). The aim of this concept analysis was to clarify the attributes, antecedents, consequences, and implications associated with the concept. We performed an integrative internet-based literature review to retrieve English literature published from 1980-2011. Findings showed that NI is a multidimensional, all-inclusive, paradoxical, dynamic, and complex concept. The media, invisibility, clothing style, nurses' behaviors, gender issues, and professional organizations are the most important antecedents of the concept. We found that NI is pivotal in staff recruitment and nursing shortage, resource allocation to nursing, nurses' job performance, workload, burnout and job dissatisfaction, violence against nurses, public trust, and salaries available to nurses. An in-depth understanding of the NI concept would assist nurses to eliminate negative stereotypes and build a more professional image for the nurse and the profession.

  2. Analysis on enhanced depth of field for integral imaging microscope.

    PubMed

    Lim, Young-Tae; Park, Jae-Hyeung; Kwon, Ki-Chul; Kim, Nam

    2012-10-08

    The depth of field of the integral imaging microscope is studied. In the integral imaging microscope, 3-D information is encoded in the form of elemental images. The distance between the intermediate plane and an object point determines the number of elemental images and the depth of field of the integral imaging microscope. From the analysis, it is found that the depth of field of the depth-plane image reconstructed by computational integral imaging reconstruction is longer than the depth of field of an optical microscope. From the analyzed relationship, an experiment using integral imaging microscopy and conventional microscopy was also performed to confirm the enhanced depth of field of integral imaging microscopy.

  3. Covariance of lucky images: performance analysis

    NASA Astrophysics Data System (ADS)

    Cagigal, Manuel P.; Valle, Pedro J.; Cagigas, Miguel A.; Villó-Pérez, Isidro; Colodro-Conde, Carlos; Ginski, C.; Mugrauer, M.; Seeliger, M.

    2017-01-01

    The covariance of ground-based lucky images is a robust and easy-to-use algorithm that allows us to detect faint companions surrounding a host star. In this paper, we analyse the influence of the number of processed frames, the frames' quality, the atmospheric conditions and the detection noise on companion detectability. This analysis has been carried out using both experimental and computer-simulated imaging data. Although the technique allows the detection of faint companions, camera detection noise and the use of a limited number of frames limit the minimum detectable companion intensity to around 1000 times fainter than that of the host star when placed at an angular distance corresponding to the first few Airy rings. The reachable contrast could be even larger when detecting companions with the assistance of an adaptive optics system.

  4. Uses of software in digital image analysis: a forensic report

    NASA Astrophysics Data System (ADS)

    Sharma, Mukesh; Jha, Shailendra

    2010-02-01

    Forensic image analysis requires expertise to interpret the content of an image, or the image itself, in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis and image authentication. It has wide applications in forensic science, ranging from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of image analysis performed. Through this paper the authors have tried to explain these tasks, which are described in three categories: image compression, image enhancement and restoration, and measurement extraction. The tasks are illustrated with examples such as signature comparison, counterfeit currency comparison and footwear sole impressions, using the software Canvas and Corel Draw.

  5. Machine Learning Interface for Medical Image Analysis.

    PubMed

    Zhang, Yi C; Kagen, Alexander C

    2016-10-11

    TensorFlow is a second-generation open-source machine learning software library with a built-in framework for implementing neural networks in a wide variety of perceptual tasks. Although TensorFlow usage is well established with computer vision datasets, the TensorFlow interface with DICOM formats for medical imaging remains to be established. Our goal is to extend the TensorFlow API to accept raw DICOM images as input; 1513 DaTscan DICOM images were obtained from the Parkinson's Progression Markers Initiative (PPMI) database. DICOM pixel intensities were extracted and shaped into tensors, or n-dimensional arrays, to populate the training, validation, and test input datasets for machine learning. A simple neural network was constructed in TensorFlow to classify images into normal or Parkinson's disease groups. Training was executed over 1000 iterations for each cross-validation set. The gradient descent optimization and Adagrad optimization algorithms were used to minimize cross-entropy between the predicted and ground-truth labels. Cross-validation was performed ten times to produce a mean accuracy of 0.938 ± 0.047 (95% CI 0.908-0.967). The mean sensitivity was 0.974 ± 0.043 (95% CI 0.947-1.00) and the mean specificity was 0.822 ± 0.207 (95% CI 0.694-0.950). We extended the TensorFlow API to enable DICOM compatibility in the context of DaTscan image analysis. We implemented a neural network classifier that produces diagnostic accuracies on par with excellent results from previous machine learning models. These results indicate the potential role of TensorFlow as a useful adjunct diagnostic tool in the clinical setting.
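    The training loop described here (sigmoid outputs, cross-entropy loss, Adagrad updates, 1000 iterations) can be sketched without the TensorFlow or DICOM specifics. This is a minimal stand-in, not the study's implementation: the synthetic data, image size, and learning rate below are invented, and a single logistic unit replaces the paper's neural network.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in data: 200 flattened 8x8 "images", two classes separated
# along the mean-intensity direction (the real study used DaTscan DICOM pixels).
X = rng.normal(size=(200, 64))
y = (X.mean(axis=1) > 0).astype(float)

w, b = np.zeros(64), 0.0
gw_acc, gb_acc = np.zeros(64), 0.0     # Adagrad per-parameter gradient accumulators
lr, eps = 0.5, 1e-8

for _ in range(1000):                  # 1000 iterations, as in the abstract
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
    gw = X.T @ (p - y) / len(y)        # gradient of mean cross-entropy w.r.t. w
    gb = float(np.mean(p - y))         # ... and w.r.t. b
    gw_acc += gw ** 2
    gb_acc += gb ** 2
    w -= lr * gw / np.sqrt(gw_acc + eps)   # Adagrad: step size shrinks per parameter
    b -= lr * gb / np.sqrt(gb_acc + eps)

acc = float(np.mean((p > 0.5) == y))   # training accuracy of last-iteration predictions
print(round(acc, 3))
```

Adagrad's per-parameter accumulator is the only difference from plain gradient descent, which is why the abstract can report both optimizers on the same loss.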

  6. A new multi criteria classification approach in a multi agent system applied to SEEG analysis.

    PubMed

    Kinié, A; Ndiaye, M; Montois, J J; Jacquelet, Y

    2007-01-01

    This work is focused on the study of the organization of SEEG signals during epileptic seizures using a multi-agent system approach. This approach is based on cooperative mechanisms of self-organization at the micro level and the emergence of a global function at the macro level. In order to evaluate this approach, we propose a distributed collaborative approach for the classification of the signals of interest. This new multi-criteria classification method is able to provide a relevant organisation of brain area structures and to bring out elements of epileptogenic networks. The method is compared to another classification approach, a fuzzy classification, and gives better results when applied to SEEG signals.

  7. Wavelet-based image analysis system for soil texture analysis

    NASA Astrophysics Data System (ADS)

    Sun, Yun; Long, Zhiling; Jang, Ping-Rey; Plodinec, M. John

    2003-05-01

    Soil texture is defined as the relative proportion of clay, silt and sand found in a given soil sample. It is an important physical property of soil that affects such phenomena as plant growth and agricultural fertility. Traditional methods used to determine soil texture are either time consuming (hydrometer), or subjective and experience-demanding (field tactile evaluation). Considering that textural patterns observed at soil surfaces are uniquely associated with soil textures, we propose an innovative approach to soil texture analysis, in which wavelet frames-based features representing texture contents of soil images are extracted and categorized by applying a maximum likelihood criterion. The soil texture analysis system has been tested successfully with an accuracy of 91% in classifying soil samples into one of three general categories of soil textures. In comparison with the common methods, this wavelet-based image analysis approach is convenient, efficient, fast, and objective.
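    The pipeline above, wavelet-derived texture features scored by a maximum likelihood criterion, can be illustrated with a one-level Haar transform and per-class Gaussian feature models. Everything here is a simplified, invented stand-in: the paper uses wavelet frames rather than this decimated transform, and the two texture classes below are synthetic, not real soil images.

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_energies(img):
    """One-level 2-D Haar transform; returns the mean energy of the four subbands
    (LL, LH, HL, HH) as a simple stand-in for wavelet-frame texture features."""
    a = (img[0::2] + img[1::2]) / 2                      # row averages
    d = (img[0::2] - img[1::2]) / 2                      # row details
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return np.array([np.mean(s ** 2) for s in (ll, lh, hl, hh)])

# Two invented texture classes standing in for soil categories:
# smooth (low-variance) vs. rough (high-variance) surfaces.
smooth = [haar_energies(rng.normal(0, 0.1, (32, 32)) + 1.0) for _ in range(30)]
rough = [haar_energies(rng.normal(0, 1.0, (32, 32)) + 1.0) for _ in range(30)]

def fit(samples):
    """Per-class Gaussian model of the feature vector (diagonal covariance)."""
    f = np.array(samples)
    return f.mean(axis=0), f.std(axis=0) + 1e-6

def loglik(x, mu, sd):
    return float(np.sum(-0.5 * ((x - mu) / sd) ** 2 - np.log(sd)))

models = {"smooth": fit(smooth), "rough": fit(rough)}

# Maximum likelihood criterion: assign an unseen sample to the best-scoring class.
sample = haar_energies(rng.normal(0, 1.0, (32, 32)) + 1.0)
label = max(models, key=lambda k: loglik(sample, *models[k]))
print(label)
```

The detail-subband energies (LH, HL, HH) carry the texture signal, which is why they separate the two classes so cleanly.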

  8. A GIS based spatially-explicit sensitivity and uncertainty analysis approach for multi-criteria decision analysis☆

    PubMed Central

    Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas

    2014-01-01

    GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We access the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster–Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty–sensitivity analysis approach, in which the uncertainty of landslide susceptibility model is decomposed and attributed to model's criteria weights. PMID:25843987
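    The weight-perturbation step of the uncertainty analysis can be sketched as a small Monte Carlo loop over a weighted-overlay model. This is a simplified illustration under invented data: the criterion scores, base weights, and ±20% perturbation range below are hypothetical, and the plain weighted sum stands in for the full AHP/OWA machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: standardized criterion scores for 5 map cells x 4 criteria,
# with base weights as they might come from an AHP pairwise-comparison matrix.
scores = rng.random((5, 4))
base_weights = np.array([0.4, 0.3, 0.2, 0.1])

def susceptibility(weights, scores):
    """Weighted linear combination of criterion scores (the basic GIS-MCDA overlay)."""
    w = np.asarray(weights, dtype=float)
    return scores @ (w / w.sum())

# Monte Carlo: perturb each weight by up to +/-20%, renormalize, recompute the map.
n_runs = 1000
runs = np.empty((n_runs, scores.shape[0]))
for i in range(n_runs):
    perturbed = base_weights * rng.uniform(0.8, 1.2, size=base_weights.size)
    runs[i] = susceptibility(perturbed, scores)

# Per-cell uncertainty: spread of susceptibility across the weight realizations.
uncertainty = runs.std(axis=0)
print(uncertainty.round(4))
```

Cells whose susceptibility varies most under weight perturbation are exactly where the criteria-weight uncertainty the abstract decomposes matters most.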

  9. Anthropogenic effects on the biota: towards a new system of principles and criteria for analysis of ecological hazards.

    PubMed

    Ostroumov, Sergei A

    2003-01-01

    The currently accepted system of criteria for evaluating environmental and ecological hazards of man-made chemicals (pollutants) is vulnerable to criticism. In this paper, a new concept of the system of approaches towards criteria for evaluating the ecological hazard from man-made impact is proposed. It is suggested to assess the man-made impacts (including effects of pollutants and xenobiotics) on the biota according to the following four levels of disturbance in biological and ecological systems: (1) the level of individual responses; (2) the level of aggregated responses of groups of organisms; (3) the level of stability and integrity of the ecosystem; (4) the level of contributions of the ecosystem to biospheric processes. On the basis of the author's experimental studies, an example is given of how to apply the proposed approach and the system of criteria to the analysis of concrete experimental data. To exemplify the efficiency of the proposed approach, it is shown how to use it to analyze new data on effects of a synthetic surfactant on water filtering by bivalves. It is concluded that the proposed approach will be helpful in better assessing environmental and ecological hazards from anthropogenic effects on biota, including effects of man-made chemicals polluting ecosystems.

  10. An analysis of the qualification criteria for small radioactive material shipping packages

    SciTech Connect

    McClure, J.D.

    1983-05-01

    The RAM package design certification process has two important elements, testing and acceptance. These terms sound similar but have specific meanings. Qualification testing, in the context of this study, is the imposition of simulated accident test conditions upon the candidate package design (normal transportation environments may also be included). Following qualification testing, the acceptance criteria provide the performance levels which, if demonstrated, indicate the ability of the RAM package to sustain the severity of the qualification testing sequence and yet maintain specified levels of package integrity. This study has used Severities of Transportation Accidents as a data base to examine the regulatory test criteria which are required to be met by small packages containing Type B quantities of radioactive material (RAM). The basic findings indicate that the present regulatory test standards provide significantly higher levels of protection for the surface transportation modes (truck, rail) than for RAM packages shipped by aircraft. It should also be noted that various risk assessment studies have shown that the risk to the public due to severe transport accidents in surface and air transport modes is very low. A key element in this study was the quantification of the severity of the transportation accident environment and of the severity of the present regulatory test standards (called qualification test standards in this document), so that a direct comparison could be made between them to assess the effectiveness of the existing qualification test standards. The manner in which this was accomplished is described.

  11. Research on automatic human chromosome image analysis

    NASA Astrophysics Data System (ADS)

    Ming, Delie; Tian, Jinwen; Liu, Jian

    2007-11-01

    Human chromosome karyotyping is one of the essential tasks in cytogenetics, especially in genetic syndrome diagnoses. In this paper, an automatic procedure is introduced for human chromosome image analysis. According to the different states of touching and overlapping chromosomes, several segmentation methods are proposed to achieve the best results. The medial axis is extracted by the middle-point algorithm. The chromosome band pattern is enhanced by an algorithm based on multiscale B-spline wavelets; band features are extracted from the average gray profile, gradient profile and shape profile, and computed with WDD (Weighted Density Distribution) descriptors. A multilayer classifier is used for classification. Experimental results demonstrate that the algorithms perform well.

  12. Application of risk-based multiple criteria decision analysis for selection of the best agricultural scenario for effective watershed management.

    PubMed

    Javidi Sabbaghian, Reza; Zarghami, Mahdi; Nejadhashemi, A Pouyan; Sharifi, Mohammad Bagher; Herman, Matthew R; Daneshvar, Fariborz

    2016-03-01

    Effective watershed management requires the evaluation of agricultural best management practice (BMP) scenarios which carefully consider the relevant environmental, economic, and social criteria involved. In the Multiple Criteria Decision-Making (MCDM) process, scenarios are first evaluated and then ranked to determine the most desirable outcome for the particular watershed. The main challenge of this process is the accurate identification of the best solution for the watershed in question, despite the various risk attitudes presented by the associated decision-makers (DMs). This paper introduces a novel approach for implementation of the MCDM process based on a comparative neutral risk/risk-based decision analysis, which results in the selection of the most desirable scenario for use in the entire watershed. At the sub-basin level, each scenario includes multiple BMPs with scores that have been calculated using the criteria derived from two cases of neutral risk and risk-based decision-making. The simple additive weighting (SAW) operator is applied for use in neutral risk decision-making, while the ordered weighted averaging (OWA) and induced OWA (IOWA) operators are effective for risk-based decision-making. At the watershed level, the BMP scores of the sub-basins are aggregated to calculate each scenarios' combined goodness measurements; the most desirable scenario for the entire watershed is then selected based on the combined goodness measurements. Our final results illustrate the type of operator and risk attitudes needed to satisfy the relevant criteria within the number of sub-basins, and how they ultimately affect the final ranking of the given scenarios. The methodology proposed here has been successfully applied to the Honeyoey Creek-Pine Creek watershed in Michigan, USA to evaluate various BMP scenarios and determine the best solution for both the stakeholders and the overall stream health.
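    The key contrast between the aggregation operators named above is that SAW attaches weights to criteria, while OWA attaches them to rank positions of the sorted scores, which is what encodes the decision-maker's risk attitude. A minimal sketch, with hypothetical BMP scores and weights (not values from the study):

```python
import numpy as np

def saw(scores, weights):
    """Simple additive weighting: weights attach to criteria."""
    w = np.asarray(weights, dtype=float)
    return float(np.asarray(scores, dtype=float) @ (w / w.sum()))

def owa(scores, weights):
    """Ordered weighted averaging: weights attach to rank positions.
    Loading the early (best-score) positions models an optimistic, risk-seeking
    decision-maker; loading the late positions models a pessimistic one."""
    w = np.asarray(weights, dtype=float)
    ordered = np.sort(np.asarray(scores, dtype=float))[::-1]   # best score first
    return float(ordered @ (w / w.sum()))

bmp_scores = [0.7, 0.4, 0.9]             # hypothetical criterion scores for one BMP
print(saw(bmp_scores, [0.5, 0.3, 0.2]))  # neutral-risk aggregation
print(owa(bmp_scores, [0.6, 0.3, 0.1]))  # optimistic: emphasizes the best criteria
print(owa(bmp_scores, [0.1, 0.3, 0.6]))  # pessimistic: emphasizes the worst criteria
```

The same score vector thus yields different BMP rankings depending on the operator and its weight profile, which is the effect the final ranking comparison in the abstract exposes.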

  13. ImageJ-MATLAB: a bidirectional framework for scientific image analysis interoperability.

    PubMed

    Hiner, Mark C; Rueden, Curtis T; Eliceiri, Kevin W

    2016-10-26

    ImageJ-MATLAB is a lightweight Java library facilitating bi-directional interoperability between MATLAB and ImageJ. By defining a standard for translation between matrix and image data structures, researchers are empowered to select the best tool for their image-analysis tasks.

  14. Dynamic and still microcirculatory image analysis for quantitative microcirculation research

    NASA Astrophysics Data System (ADS)

    Ying, Xiaoyou; Xiu, Rui-juan

    1994-05-01

    Based on analyses of various types of digital microcirculatory images (DMCI), we summarize the image features of DMCI, the digitizing demands for digital microcirculatory imaging, and the basic characteristics of DMCI processing. A dynamic and still imaging separation processing (DSISP) mode was designed for developing a DMCI workstation and for DMCI processing. The original images in this study were clinical microcirculatory images from human finger nail-bed and conjunctiva microvasculature, and intravital microvascular network images from animal tissues or organs. A series of dynamic and still microcirculatory image analysis functions were developed in this study. The experimental results indicate that most of the established analog video image analysis methods for microcirculatory measurement can be realized in a more flexible way based on DMCI. More information can be rapidly extracted from quality-improved DMCI by employing intelligent digital image analysis methods. The DSISP mode is very suitable for building a DMCI workstation.

  15. Sparse Superpixel Unmixing for Hyperspectral Image Analysis

    NASA Technical Reports Server (NTRS)

    Castano, Rebecca; Thompson, David R.; Gilmore, Martha

    2010-01-01

    Software was developed that automatically detects minerals that are present in each pixel of a hyperspectral image. An algorithm based on sparse spectral unmixing with Bayesian Positive Source Separation is used to produce mineral abundance maps from hyperspectral images. A superpixel segmentation strategy enables efficient unmixing in an interactive session. The algorithm computes statistically likely combinations of constituents based on a set of possible constituent minerals whose abundances are uncertain. A library of source spectra from laboratory experiments or previous remote observations is used. A superpixel segmentation strategy improves analysis time by orders of magnitude, permitting incorporation into an interactive user session (see figure). Mineralogical search strategies can be categorized as supervised or unsupervised. Supervised methods use a detection function, developed on previous data by hand or statistical techniques, to identify one or more specific target signals. Purely unsupervised results are not always physically meaningful, and may ignore subtle or localized mineralogy since they aim to minimize reconstruction error over the entire image. This algorithm offers advantages of both methods, providing meaningful physical interpretations and sensitivity to subtle or unexpected minerals.
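    Unmixing a pixel against a spectral library can be approximated with plain non-negative least squares standing in for the Bayesian Positive Source Separation the software actually uses; non-negativity alone already pushes most spurious abundances to zero. The library, abundances, and noise level below are synthetic, invented for illustration.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Synthetic spectral library: 50 bands x 6 candidate minerals (values invented).
library = rng.random((50, 6))

# A superpixel spectrum mixed from two endmembers, plus sensor noise.
true_abundances = np.array([0.7, 0.0, 0.3, 0.0, 0.0, 0.0])
pixel = library @ true_abundances + rng.normal(0.0, 0.01, 50)

# Solve min ||library @ x - pixel|| subject to x >= 0; the constraint yields
# a sparse abundance estimate (the real system adds Bayesian priors on top).
est, residual_norm = nnls(library, pixel)
print(est.round(2))
```

Running this per superpixel rather than per pixel is what buys the orders-of-magnitude speedup the abstract reports, since each segment is unmixed once.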

  16. Soil Surface Roughness through Image Analysis

    NASA Astrophysics Data System (ADS)

    Tarquis, A. M.; Saa-Requejo, A.; Valencia, J. L.; Moratiel, R.; Paz-Gonzalez, A.; Agro-Environmental Modeling

    2011-12-01

    Soil erosion is a complex phenomenon involving the detachment and transport of soil particles, storage and runoff of rainwater, and infiltration. The relative magnitude and importance of these processes depends on several factors, one of them being surface micro-topography, usually quantified through soil surface roughness (SSR). SSR greatly affects surface sealing and runoff generation, yet little information is available about the effect of roughness on the spatial distribution of runoff and on flow concentration. The methods commonly used to measure SSR involve measuring point elevation using a pin roughness meter or laser, both of which are labor-intensive and expensive. Lately, a simple and inexpensive technique based on the percentage of shadow in soil surface images has been developed to determine SSR in the field, in order to obtain measurements for widespread application. One of the first steps in this technique is image de-noising and thresholding to estimate the percentage of black pixels in the studied area. In this work, a series of soil surface images have been analyzed by applying several wavelet de-noising and thresholding algorithms to study the variation in the percentage of shadows and the shadow size distribution. Funding provided by the Spanish Ministerio de Ciencia e Innovación (MICINN) through project no. AGL2010-21501/AGR and by Xunta de Galicia through project no. INCITE08PXIB1621 is greatly appreciated.

  17. Monotonic correlation analysis of image quality measures for image fusion

    NASA Astrophysics Data System (ADS)

    Kaplan, Lance M.; Burks, Stephen D.; Moore, Richard K.; Nguyen, Quang

    2008-04-01

    The next generation of night vision goggles will fuse image intensified and long wave infra-red to create a hybrid image that will enable soldiers to better interpret their surroundings during nighttime missions. Paramount to the development of such goggles is the exploitation of image quality (IQ) measures to automatically determine the best image fusion algorithm for a particular task. This work introduces a novel monotonic correlation coefficient to investigate how well possible IQ features correlate to actual human performance, which is measured by a perception study. The paper will demonstrate how monotonic correlation can identify worthy features that could be overlooked by traditional correlation values.
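    The paper's monotonic correlation coefficient is its own contribution, but the motivation can be seen with the standard rank-based (Spearman) coefficient: it scores any strictly monotonic IQ-vs-performance relation as 1, even where the linear (Pearson) coefficient falls short. The feature and performance values below are invented for illustration.

```python
import numpy as np
from scipy.stats import spearmanr

# Invented example: one candidate IQ feature vs. measured human performance.
iq_feature = np.array([0.11, 0.25, 0.32, 0.47, 0.58, 0.71, 0.83])
performance = iq_feature ** 3 + 0.01      # monotonic but strongly nonlinear

rho, p_value = spearmanr(iq_feature, performance)
pearson = float(np.corrcoef(iq_feature, performance)[0, 1])

# The rank-based coefficient scores the perfectly monotonic relation as 1.0,
# while the linear coefficient understates it.
print(round(float(rho), 3), round(pearson, 3))
```

A feature ranked only by Pearson correlation could thus be passed over despite tracking human performance perfectly, which is the failure mode a monotonic coefficient avoids.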

  18. Correlative feature analysis of FFDM images

    NASA Astrophysics Data System (ADS)

    Yuan, Yading; Giger, Maryellen L.; Li, Hui; Sennett, Charlene

    2008-03-01

    Identifying the corresponding image pair of a lesion is an essential step for combining information from different views of the lesion to improve the diagnostic ability of both radiologists and CAD systems. Because of the non-rigidity of the breasts and the 2D projective property of mammograms, this task is not trivial. In this study, we present a computerized framework that differentiates the corresponding images from different views of a lesion from non-corresponding ones. A dual-stage segmentation method, which employs an initial radial gradient index (RGI)-based segmentation and an active contour model, was first applied to extract mass lesions from the surrounding tissues. Then various lesion features were automatically extracted from each of the two views of each lesion to quantify the characteristics of margin, shape, size, texture and context of the lesion, as well as its distance to the nipple. We employed a two-step method to select an effective subset of features, and combined it with a BANN to obtain a discriminant score, which yielded an estimate of the probability that the two images are of the same physical lesion. ROC analysis was used to evaluate the performance of the individual features and the selected feature subset in the task of distinguishing between corresponding and non-corresponding pairs. Using an FFDM database with 124 corresponding image pairs and 35 non-corresponding pairs, the distance feature yielded an AUC (area under the ROC curve) of 0.8 with leave-one-out evaluation by lesion, and the feature subset, which includes the distance feature, lesion size and lesion contrast, yielded an AUC of 0.86. The improvement obtained by using multiple features was statistically significant compared with single-feature performance (p < 0.001).
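The AUC used here has a convenient probabilistic reading: it is the probability that a randomly chosen corresponding pair receives a higher discriminant score than a randomly chosen non-corresponding pair (the Mann-Whitney formulation). A sketch with hypothetical scores, not values from the study:

```python
def auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) score pairs ranked correctly, ties counted as half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical discriminant scores for corresponding vs. non-corresponding pairs:
pos = [0.9, 0.8, 0.75, 0.6]
neg = [0.7, 0.5, 0.4]
print(round(auc(pos, neg), 4))  # → 0.9167
```

The quadratic loop is fine at this scale; for large score sets the same statistic is computed from rank sums.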

  19. Analysis of internal and external validity criteria for a computerized visual search task: A pilot study.

    PubMed

    Richard's, María M; Introzzi, Isabel; Zamora, Eliana; Vernucci, Santiago

    2017-01-01

    Inhibition is one of the main executive functions because of its fundamental role in cognitive and social development. Given the importance of reliable, computerized measures for assessing inhibitory performance, this research analyzes the internal and external validity criteria of a computerized conjunction search task designed to evaluate the role of perceptual inhibition. A sample of 41 children (21 females and 20 males) aged between 6 and 11 years (M = 8.49, SD = 1.47), intentionally selected from a privately run school in Mar del Plata (Argentina) and of middle socio-economic level, was assessed. The Conjunction Search Task from the TAC Battery and the Coding and Symbol Search tasks from the Wechsler Intelligence Scale for Children were used. Overall, the results confirm that the perceptual inhibition task from the TAC presents solid indices of internal and external validity, making it a valid measurement instrument for this process.

  20. A new multi criteria classification approach in a multi agent system applied to SEEG analysis

    PubMed Central

    Kinie, Abel; Ndiaye, Mamadou Lamine L.; Montois, Jean-Jacques; Jacquelet, Yann

    2007-01-01

    This work focuses on the organization of SEEG signals during epileptic seizures using a multi-agent system approach. This approach is based on cooperative mechanisms of self-organization at the micro level and the emergence of a global function at the macro level. In order to evaluate this approach, we propose a distributed collaborative approach for the classification of the signals of interest. This new multi-criteria classification method is able to provide a relevant organization of brain area structures and to bring out elements of epileptogenic networks. The method is compared with another classification approach, fuzzy classification, and gives better results when applied to SEEG signals.

  1. Deciding on Science: An Analysis of Higher Education Science Student Major Choice Criteria

    NASA Astrophysics Data System (ADS)

    White, Stephen Wilson

    The number of college students choosing to major in science, technology, engineering, and math (STEM) in the United States affects the size and quality of the American workforce (Winters, 2009). The number of graduates in these academic fields has been on the decline in the United States since the 1960s, which, according to Lips and McNeil (2009), has resulted in a diminished ability of the United States to compete in science and engineering on the world stage. The purpose of this research was to learn why students chose a STEM major and determine what decision criteria influenced this decision. According to Ajzen's (1991) theory of planned behavior (TPB), the key components of decision-making can be quantified and used as predictors of behavior. In this study the STEM majors' decision criteria were compared between different institution types (two-year, public four-year, and private four-year), and between demographic groups (age and sex). Career, grade, intrinsic, self-efficacy, and self-determination were reported as motivational factors by a majority of science majors participating in this study. Few students reported being influenced by friends and family when deciding to major in science. Science students overwhelmingly attributed the desire to solve meaningful problems as central to their decision to major in science. A majority of students surveyed credited a teacher for influencing their desire to pursue science as a college major. This new information about the motivational construct of the studied group of science majors can be applied to the previously stated problem of not enough STEM majors in the American higher education system to provide workers required to fill the demand of a globally STEM-competitive United States (National Academy of Sciences, National Academy of Engineering, & Institute of Medicine, 2010).

  2. Nonlinear analysis for image stabilization in IR imaging system

    NASA Astrophysics Data System (ADS)

    Xie, Zhan-lei; Lu, Jin; Luo, Yong-hong; Zhang, Mei-sheng

    2009-07-01

    In order to acquire stabilized images from an IR imaging system, an image stabilization system is required. Linear methods are often used in current research on such systems, and a simple PID controller can meet the demands of common users. In fact, an image stabilization system is a structure with nonlinear characteristics such as structural errors, friction and disturbances. In upgraded IR imaging systems, even an optimally designed conventional PID controller cannot meet the demands of higher accuracy and fast response when disturbances are present. To get high-quality stabilized images, these nonlinear effects must be rejected. Friction and gear clearance are key factors and play an important role in the image stabilization system. Friction induces a static error in the system. When the system runs at low speed, the stick-slip and creeping induced by friction not only decrease resolution and repeatability, but also increase the tracking error and the steady-state error. The accuracy of the system is also limited by gear clearance, and severe clearance brings on self-excited vibration. In this paper, the effects of different nonlinearities on image stabilization precision are analyzed, including friction and gear clearance. After analyzing the characteristics and influence of friction and gear clearance, a friction model composed of static friction, Coulomb friction and viscous friction is established with the MATLAB Simulink toolbox, and a gear-clearance nonlinearity model is built, providing a theoretical basis for future engineering practice.
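The static + Coulomb + viscous friction model described above can be sketched outside Simulink as well. All coefficients below are illustrative placeholders, not identified parameters from the paper:

```python
def friction_force(velocity, drive_force,
                   f_static=1.2, f_coulomb=1.0, b_viscous=0.5, eps=1e-6):
    """Classical friction model with static, Coulomb and viscous terms.
    Returns the friction force opposing motion (coefficients are illustrative)."""
    if abs(velocity) < eps:
        # Stiction: friction cancels the applied force up to the breakaway level,
        # which is what produces the static error and stick-slip at low speed.
        return -max(-f_static, min(f_static, drive_force))
    sign = 1.0 if velocity > 0 else -1.0
    # Sliding: constant Coulomb term plus a term proportional to velocity.
    return -(f_coulomb * sign + b_viscous * velocity)

print(friction_force(0.0, 0.8))  # stiction holds the load → -0.8
print(friction_force(2.0, 0.8))  # sliding: Coulomb + viscous → -2.0
```

In a simulation loop this force would be added to the plant dynamics, which is how stick-slip and creeping emerge at low commanded speeds.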

  3. Percent area coverage through image analysis

    NASA Astrophysics Data System (ADS)

    Wong, Chung M.; Hong, Sung M.; Liu, De-Ling

    2016-09-01

    The notion of percent area coverage (PAC) has been used to characterize surface cleanliness levels in the spacecraft contamination control community. Due to the lack of detailed particle data, PAC has been conventionally calculated by multiplying the particle surface density in predetermined particle size bins by a set of coefficients per MIL-STD-1246C. In deriving the set of coefficients, the surface particle size distribution is assumed to follow a log-normal relation between particle density and particle size, while the cross-sectional area function is given as a combination of regular geometric shapes. For particles with irregular shapes, the cross-sectional area function cannot describe the true particle area and, therefore, may introduce error in the PAC calculation. Other errors may also be introduced by using the log-normal surface particle size distribution function, which depends strongly on the environmental cleanliness and cleaning process. In this paper, we present PAC measurements from silicon witness wafers that collected fallout from a fabric material after vibration testing. PAC calculations were performed through analysis of microscope images and compared to values derived through the MIL-STD-1246C method. Our results showed that the MIL-STD-1246C method does provide a reasonable upper bound to the PAC values determined through image analysis, in particular for PAC values below 0.1.
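Under the regular-shape idealization discussed above, PAC can be computed directly from measured particle sizes. A sketch assuming circular cross-sections and hypothetical particle counts (the MIL-STD-1246C bin coefficients themselves are not reproduced here):

```python
import math

def percent_area_coverage(diameters_um, surface_area_cm2):
    """PAC: summed particle cross-sectional area as a percentage of the
    surface area, idealizing every particle as a circle of the given diameter."""
    area_um2 = sum(math.pi * (d / 2.0) ** 2 for d in diameters_um)
    surface_um2 = surface_area_cm2 * 1e8  # 1 cm^2 = 1e8 um^2
    return 100.0 * area_um2 / surface_um2

# Hypothetical fallout: 1000 particles of 50 um diameter on a 10 cm^2 wafer.
print(round(percent_area_coverage([50.0] * 1000, 10.0), 4))  # → 0.1963
```

For irregular particles, image analysis replaces the circular-area term with the actual segmented pixel area, which is precisely where the idealization can overestimate PAC.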

  4. The Scientific Image in Behavior Analysis.

    PubMed

    Keenan, Mickey

    2016-05-01

    Throughout the history of science, the scientific image has played a significant role in communication. With recent developments in computing technology, there has been an increase in the kinds of opportunities now available for scientists to communicate in more sophisticated ways. Within behavior analysis, though, we are only just beginning to appreciate the importance of going beyond the printing press to elucidate basic principles of behavior. The aim of this manuscript is to stimulate appreciation of both the role of the scientific image and the opportunities provided by a quick response code (QR code) for enhancing the functionality of the printed page. I discuss the limitations of imagery in behavior analysis ("Introduction"), and I show examples of what can be done with animations and multimedia for teaching philosophical issues that arise when teaching about private events ("Private Events 1 and 2"). Animations are also useful for bypassing ethical issues when showing examples of challenging behavior ("Challenging Behavior"). Each of these topics can be accessed only by scanning the QR code provided. This contingency has been arranged to help the reader embrace this new technology. In so doing, I hope to show its potential for going beyond the limitations of the printing press.

  5. Quantitative analysis of rib movement based on dynamic chest bone images: preliminary results

    NASA Astrophysics Data System (ADS)

    Tanaka, R.; Sanada, S.; Oda, M.; Mitsutaka, M.; Suzuki, K.; Sakuta, K.; Kawashima, H.

    2014-03-01

    Rib movement during respiration is one of the diagnostic criteria for pulmonary impairments. In general, rib movement is assessed by fluoroscopy. However, the shadows of lung vessels and bronchi overlapping the ribs prevent accurate quantitative analysis of rib movement. Recently, an image-processing technique for separating bones from soft tissue in static chest radiographs, called the "bone suppression technique", has been developed. Our purpose in this study was to evaluate the usefulness of dynamic bone images created by the bone suppression technique for quantitative analysis of rib movement. Dynamic chest radiographs of 10 patients were obtained using a dynamic flat-panel detector (FPD). A bone suppression technique based on a massive-training artificial neural network (MTANN) was applied to the dynamic chest images to create bone images. Velocity vectors were measured in local areas on the dynamic bone images and assembled into velocity maps. The velocity maps obtained with bone and original images for scoliosis and normal cases were compared to assess the advantages of bone images. With dynamic bone images, we were able to accurately quantify movements of the ribs and distinguish them from those of other lung structures. The limited rib movements of scoliosis patients appeared as reduced rib velocity vectors. Vector maps in all normal cases exhibited left-right symmetric distributions, whereas those in abnormal cases showed nonuniform distributions. In conclusion, dynamic bone images were useful for accurate quantitative analysis of rib movement: limited movement appeared as reduced velocity vectors and as a left-right asymmetric distribution on the vector maps. Thus, dynamic bone images can be a new diagnostic tool for quantitative analysis of rib movement without additional radiation dose.
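The abstract does not specify how the local velocity vectors are computed; one simple approach is block matching between consecutive frames, sketched here on toy arrays (the authors' actual method may differ):

```python
def block_velocity(frame0, frame1, y, x, size=3, search=2):
    """Find the integer displacement (dy, dx) within +/-search pixels that best
    matches a size x size block of frame0 at (y, x) inside frame1, by minimum
    sum of absolute differences (SAD)."""
    def sad(dy, dx):
        total = 0
        for i in range(size):
            for j in range(size):
                total += abs(frame0[y + i][x + j] - frame1[y + dy + i][x + dx + j])
        return total

    best = min(((sad(dy, dx), dy, dx)
                for dy in range(-search, search + 1)
                for dx in range(-search, search + 1)),
               key=lambda t: t[0])
    return best[1], best[2]

# Toy frames: a bright 3x3 feature that moves down by one pixel between frames.
frame0 = [[0] * 8 for _ in range(8)]
frame1 = [[0] * 8 for _ in range(8)]
for i in range(3):
    for j in range(3):
        frame0[3 + i][3 + j] = 9
        frame1[4 + i][3 + j] = 9
print(block_velocity(frame0, frame1, 3, 3))  # → (1, 0)
```

Repeating this over a grid of local areas and dividing by the inter-frame time yields the velocity map described above.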

  6. Effect of tree nuts on metabolic syndrome criteria: a systematic review and meta-analysis of randomised controlled trials

    PubMed Central

    Blanco Mejia, Sonia; Kendall, Cyril W C; Viguiliouk, Effie; Augustin, Livia S; Ha, Vanessa; Cozma, Adrian I; Mirrahimi, Arash; Maroleanu, Adriana; Chiavaroli, Laura; Leiter, Lawrence A; de Souza, Russell J; Jenkins, David J A; Sievenpiper, John L

    2014-01-01

    Objective To provide a broader evidence summary to inform dietary guidelines of the effect of tree nuts on criteria of the metabolic syndrome (MetS). Design We conducted a systematic review and meta-analysis of the effect of tree nuts on criteria of the MetS. Data sources We searched MEDLINE, EMBASE, CINAHL and the Cochrane Library (through 4 April 2014). Eligibility criteria for selecting studies We included relevant randomised controlled trials (RCTs) of ≥3 weeks reporting at least one criterion of the MetS. Data extraction Two or more independent reviewers extracted all relevant data. Data were pooled using the generic inverse variance method using random effects models and expressed as mean differences (MD) with 95% CIs. Heterogeneity was assessed by the Cochran Q statistic and quantified by the I2 statistic. Study quality and risk of bias were assessed. Results Eligibility criteria were met by 49 RCTs including 2226 participants who were otherwise healthy or had dyslipidaemia, MetS or type 2 diabetes mellitus. Tree nut interventions lowered triglycerides (MD=−0.06 mmol/L (95% CI −0.09 to −0.03 mmol/L)) and fasting blood glucose (MD=−0.08 mmol/L (95% CI −0.16 to −0.01 mmol/L)) compared with control diet interventions. There was no effect on waist circumference, high-density lipoprotein cholesterol or blood pressure with the direction of effect favouring tree nuts for waist circumference. There was evidence of significant unexplained heterogeneity in all analyses (p<0.05). Conclusions Pooled analyses show a MetS benefit of tree nuts through modest decreases in triglycerides and fasting blood glucose with no adverse effects on other criteria across nut types. As our conclusions are limited by the short duration and poor quality of the majority of trials, as well as significant unexplained between-study heterogeneity, there remains a need for larger, longer, high-quality trials. Trial registration number NCT01630980. PMID:25074070
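The pooling step described above (generic inverse variance with random effects, DerSimonian-Laird style) can be sketched in a few lines. The per-trial numbers below are hypothetical, not data from the review:

```python
def pool_inverse_variance(effects, variances):
    """DerSimonian-Laird random-effects pooling of mean differences.
    Returns (pooled MD, 95% CI lower, 95% CI upper)."""
    k = len(effects)
    w = [1.0 / v for v in variances]                     # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                   # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = (1.0 / sum(w_re)) ** 0.5
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical per-trial mean differences in triglycerides (mmol/L) and variances:
md, lo, hi = pool_inverse_variance([-0.08, -0.05, -0.04], [0.001, 0.002, 0.004])
print(round(md, 3), round(lo, 3), round(hi, 3))  # → -0.066 -0.113 -0.019
```

Here Q is below k−1, so the between-study variance estimate is zero and the random-effects result coincides with the fixed-effect one; with real heterogeneity (as reported in the review) tau2 would widen the interval.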

  7. ACR Appropriateness Criteria Crohn Disease.

    PubMed

    Kim, David H; Carucci, Laura R; Baker, Mark E; Cash, Brooks D; Dillman, Jonathan R; Feig, Barry W; Fowler, Kathryn J; Gage, Kenneth L; Noto, Richard B; Smith, Martin P; Yaghmai, Vahid; Yee, Judy; Lalani, Tasneem

    2015-10-01

    Crohn disease is a chronic inflammatory disorder involving the gastrointestinal tract, characterized by episodic flares and times of remission. Underlying structural damage occurs progressively, with recurrent bouts of inflammation. The diagnosis and management of this disease process is dependent on several clinical, laboratory, imaging, endoscopic, and histologic factors. In recent years, with the maturation of CT enterography and MR enterography, imaging has played an increasingly important role in Crohn disease. In addition to these specialized examination modalities, ultrasound and routine CT have potential uses. Fluoroscopy, radiography, and nuclear medicine may be less beneficial depending on the clinical scenario. The imaging modality best suited to evaluating this disease may change, depending on the target population, severity of presentation, and specific clinical situation. This document presents seven clinical scenarios (variants) in both the adult and pediatric populations and rates the appropriateness of the available imaging options. They are summarized in a consolidated table, and the underlying rationale and supporting literature are presented in the accompanying narrative. The ACR Appropriateness Criteria are evidence-based guidelines for specific clinical conditions that are reviewed every three years by a multidisciplinary expert panel. The guideline development and review include an extensive analysis of current medical literature from peer-reviewed journals and the application of a well-established consensus methodology (modified Delphi) to rate the appropriateness of imaging and treatment procedures by the panel. In those instances in which evidence is lacking or not definitive, expert opinion may be used to recommend imaging or treatment.

  8. High speed image correlation for vibration analysis

    NASA Astrophysics Data System (ADS)

    Siebert, T.; Wood, R.; Splitthof, K.

    2009-08-01

    Digital speckle correlation techniques have already proven to be an accurate displacement analysis tool for a wide range of applications. With the use of two cameras, three-dimensional measurements of contours and displacements can be carried out with a simple setup. Rapid developments in the fields of digital imaging and computer technology open these measurement methods to high-speed deformation and strain analysis, e.g. in material testing, fracture mechanics, advanced materials and component testing. The high resolution of the deformation measurements in space and time makes the technique well suited to the vibration analysis of objects. Since the system determines the absolute position and displacements of the object in space, it is capable of measuring high amplitudes and even objects with rigid-body movements. The absolute resolution depends on the field of view and is scalable. Calibration of the optical setup is a crucial point, which is discussed in detail. Examples of the analysis of harmonic vibrations and transient events from material research and industrial applications are presented. The results show typical features of the system.

  9. Cellular Image Analysis and Imaging by Flow Cytometry

    PubMed Central

    Basiji, David A.; Ortyn, William E.; Liang, Luchuan; Venkatachalam, Vidya; Morrissey, Philip

    2007-01-01

    Synopsis Imaging flow cytometry combines the statistical power and fluorescence sensitivity of standard flow cytometry with the spatial resolution and quantitative morphology of digital microscopy. The technique is a good fit for clinical applications by providing a convenient means for imaging and analyzing cells directly in bodily fluids. Examples are provided of the discrimination of cancerous from normal mammary epithelial cells and the high throughput quantitation of FISH probes in human peripheral blood mononuclear cells. The FISH application will be further enhanced by the integration of extended depth of field imaging technology with the current optical system. PMID:17658411

  10. Vector processing enhancements for real-time image analysis.

    SciTech Connect

    Shoaf, S.; APS Engineering Support Division

    2008-01-01

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.

  11. Vision-sensing image analysis for GTAW process control

    SciTech Connect

    Long, D.D.

    1994-11-01

    Image analysis of a gas tungsten arc welding (GTAW) process was completed using video images from a charge coupled device (CCD) camera inside a specially designed coaxial (GTAW) electrode holder. Video data was obtained from filtered and unfiltered images, with and without the GTAW arc present, showing weld joint features and locations. Data Translation image processing boards, installed in an IBM PC AT 386 compatible computer, and Media Cybernetics image processing software were used to investigate edge flange weld joint geometry for image analysis.

  12. [Diffuse idiopathic skeletal hyperostosis. Review of diagnostic criteria and analysis of 915 cases].

    PubMed

    Scutellari, P N; Orzincolo, C; Princivalle, M; Franceschini, F

    1992-06-01

    DISH is a common systemic skeletal disease, probably of dysmetabolic and/or degenerative origin, yet of unknown etiology. It is observed in middle-aged or elderly patients of both sexes, and is characterized by ossification of the anterior longitudinal ligament on the antero-lateral aspect of the spine, and by ossifying enthesopathy, in both the central and the peripheral skeleton. Diagnosis is solely based on radiographic abnormalities, according to the so-called Resnick criteria. In the present study, the spines of 915 patients (414 males, 501 females, mean age: 65 years) were considered, and the peripheral entheses (heel, patella and elbow) of 494 of them (234 males and 260 females). The incidence of DISH was 14.09% (129 cases): 17.6% in males (73 cases) and 11.7% in females (56 cases). DISH most frequently strikes in the sixth and seventh decades of life. The most affected sites of the spine were the dorsal portion (100%), especially in the D7-D11 segment (93%); the lumbar spine in L1-L3 (81%); and the cervical spine in the C5-C7 segment (69%). Peripheral areas of involvement were: pelvis (90%), heel (76%), elbow (46%) and knee (29%). The symptoms of DISH must be promptly detected: the disease is not asymptomatic, but presents with pain and stiffness in the spine, recurrent tendinitis and bursitis, and myelopathy.

  13. F-106 data summary and model results relative to threat criteria and protection design analysis

    NASA Technical Reports Server (NTRS)

    Pitts, F. L.; Finelli, G. B.; Perala, R. A.; Rudolph, T. H.

    1986-01-01

    The NASA F-106 has acquired considerable data on the rates-of-change of electromagnetic parameters on the aircraft surface during 690 direct lightning strikes while penetrating thunderstorms at altitudes ranging from 15,000 to 40,000 feet. These in-situ measurements have provided the basis for the first statistical quantification of the lightning electromagnetic threat to aircraft, appropriate for determining lightning indirect effects on aircraft. The data are presently being used in updating previous lightning criteria and standards developed over the years from ground-based measurements. The new lightning standards will, therefore, be the first which reflect actual aircraft responses measured at flight altitudes. The modeling technique developed to interpret and understand the direct-strike electromagnetic data acquired on the F-106 provides a means to model the interaction of the lightning channel with the F-106. The reasonable results obtained with the model, compared to measured responses, yield confidence that the model may be credibly applied to other aircraft types and used in the prediction of internal coupling effects in the design of lightning protection for new aircraft.

  14. DIAGNOSTIC IMAGING IN A DIRECT-ACCESS SPORTS PHYSICAL THERAPY CLINIC: A 2-YEAR RETROSPECTIVE PRACTICE ANALYSIS

    PubMed Central

    Dedekam, Erik A.; Johnson, Michael R.; Dembowski, Scott C.; Westrick, Richard B.; Goss, Donald L.

    2016-01-01

    Background While advanced diagnostic imaging is a large contributor to the growth in health care costs, direct access to physical therapy is associated with decreased rates of diagnostic imaging. No study has systematically evaluated, against evidence-based criteria, the appropriateness of advanced diagnostic imaging, including magnetic resonance imaging (MRI), when ordered by physical therapists. The primary purpose of this study was to describe the appropriateness of MRI or magnetic resonance arthrogram (MRA) exams ordered by physical therapists in a direct-access sports physical therapy clinic. Study Design Retrospective observational study of practice. Hypothesis Greater than 80% of advanced diagnostic imaging orders would have an American College of Radiology (ACR) Appropriateness Criteria rating of greater than 6, indicating an imaging order that is usually appropriate. Methods A 2-year retrospective analysis identified 108 MRI/MRA examination orders from four physical therapists. A board-certified radiologist determined the appropriateness of each order based on the ACR Appropriateness Criteria. The principal investigator and co-investigator radiologist assessed agreement between the clinical diagnosis and MRI/surgical findings. Results Knee (31%) and shoulder (25%) injuries were the most common. Overall, 55% of injuries were acute. The mean ACR rating was 7.7; scores from six to nine are considered appropriate orders, and higher ratings are better. The percentage of orders complying with the ACR Appropriateness Criteria was 83.2%. The physical therapists' clinical diagnosis was confirmed by MRI/MRA findings in 64.8% of cases and by surgical findings in 90% of cases. Conclusions Physical therapists providing musculoskeletal primary care in a direct-access sports physical therapy clinic appropriately ordered advanced diagnostic imaging in over 80% of cases. Future research should prospectively compare physical therapist

  15. Imaging Tests for the Diagnosis and Staging of Pancreatic Adenocarcinoma: A Meta-Analysis.

    PubMed

    Treadwell, Jonathan R; Zafar, Hanna M; Mitchell, Matthew D; Tipton, Kelley; Teitelbaum, Ursina; Jue, Jane

    2016-07-01

    Imaging tests are central to the diagnosis and staging of pancreatic adenocarcinoma. We performed a systematic review and meta-analysis of the pertinent evidence on 5 imaging tests (computed tomography (CT), magnetic resonance imaging, CT angiography, endoscopic ultrasound with fine-needle aspiration, and combined positron emission tomography with CT). Searches of several databases up to March 1, 2014, yielded 9776 articles, and 24 provided comparative effectiveness of 2 or more imaging tests. Multiple reviewers applied study inclusion criteria, extracted data from each study, rated the risk of bias, and graded the strength of evidence. Data included accuracy of diagnosis and resectability in primary untreated pancreatic adenocarcinoma, including tumor stage, nodal stage, metastases, and vascular involvement. Where possible, study results were combined using bivariate meta-analysis. Studies were at low or moderate risk of bias. Most comparisons between imaging tests were insufficient to permit conclusions, due to imprecision or inconsistency among study results. However, moderate-grade evidence revealed that CT and magnetic resonance imaging had similar sensitivities and specificities for both diagnosis and vascular involvement. Other conclusions were based on low-grade evidence. In general, more direct evidence is needed to inform decisions about imaging tests for pancreatic adenocarcinoma.

  16. Decerns: A framework for multi-criteria decision analysis

    SciTech Connect

    Yatsalo, Boris; Didenko, Vladimir; Gritsyuk, Sergey; Sullivan, Terry

    2015-02-27

    A new framework, Decerns, for multicriteria decision analysis (MCDA) of a wide range of practical risk-management problems is introduced. The Decerns framework contains a library of modules that form the basis for two scalable systems: DecernsMCDA, for analysis of multicriteria problems, and DecernsSDSS, for multicriteria analysis of spatial options. DecernsMCDA includes well-known MCDA methods as well as original methods for uncertainty treatment based on probabilistic approaches and fuzzy numbers. These MCDA methods are described along with a case study on a multicriteria location problem.
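The abstract does not detail Decerns' method library; as a minimal illustration of what an MCDA method does, here is a weighted-sum model (one of the simplest MCDA techniques) over hypothetical siting criteria:

```python
def weighted_sum_ranking(alternatives, weights):
    """Score each alternative as the weighted sum of its max-normalized
    criterion values (all criteria treated as benefit criteria, higher = better);
    returns alternative names sorted best-first."""
    n_criteria = len(weights)
    # Normalize each criterion to [0, 1] across alternatives by its maximum.
    maxima = [max(scores[i] for scores in alternatives.values())
              for i in range(n_criteria)]
    scored = {
        name: sum(w * (s / m) for w, s, m in zip(weights, scores, maxima))
        for name, scores in alternatives.items()
    }
    return sorted(scored, key=scored.get, reverse=True)

# Hypothetical siting options scored on (safety, cost-benefit, accessibility):
options = {"site A": [7.0, 4.0, 6.0],
           "site B": [9.0, 3.0, 5.0],
           "site C": [5.0, 8.0, 4.0]}
print(weighted_sum_ranking(options, [0.5, 0.3, 0.2]))  # → ['site B', 'site A', 'site C']
```

Methods such as those in Decerns additionally handle uncertain weights and scores (probabilistic or fuzzy), which this deterministic sketch omits.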

  17. ST segment/heart rate slope as a predictor of coronary artery disease: comparison with quantitative thallium imaging and conventional ST segment criteria

    SciTech Connect

    Finkelhor, R.S.; Newhouse, K.E.; Vrobel, T.R.; Miron, S.D.; Bahler, R.C.

    1986-08-01

    The ST segment shift relative to exercise-induced increments in heart rate, the ST/heart rate slope (ST/HR slope), has been proposed as a more accurate ECG criterion for diagnosing significant coronary artery disease (CAD). Its clinical utility with a standard treadmill protocol was compared with quantitative stress thallium (TI) imaging and standard treadmill criteria in 64 unselected patients who underwent coronary angiography. The overall diagnostic accuracy of the ST/HR slope was an improvement over TI and conventional ST criteria (81%, 67%, and 69%, respectively). For patients failing to reach 85% of their age-predicted maximal heart rate, its diagnostic accuracy was comparable with TI (77% and 74%). Its sensitivity in patients without prior myocardial infarction was equivalent to that of thallium (91% and 95%). The ST/HR slope was directly related to the angiographic severity (Gensini score) of CAD in patients without a prior infarction (r = 0.61, p < 0.001). The ST/HR slope was an improved ECG criterion for diagnosing CAD and compared favorably with TI imaging.
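The ST/HR slope is essentially the regression slope of ST-segment depression on heart rate across exercise stages. A least-squares sketch with hypothetical stage readings (not patient data from the study):

```python
def st_hr_slope(heart_rates, st_depressions_mv):
    """Ordinary least-squares slope of ST-segment depression (mV) against
    heart rate (beats/min); often reported in microvolts per beat/min."""
    n = len(heart_rates)
    mh = sum(heart_rates) / n
    ms = sum(st_depressions_mv) / n
    num = sum((h - mh) * (s - ms) for h, s in zip(heart_rates, st_depressions_mv))
    den = sum((h - mh) ** 2 for h in heart_rates)
    return num / den

# Hypothetical readings at successive exercise stages:
hr = [90, 110, 130, 150]          # beats/min
st = [0.00, 0.04, 0.08, 0.12]     # mV of ST depression
print(round(st_hr_slope(hr, st) * 1000, 3))  # slope in uV per beat/min → 2.0
```

Clinical criteria then compare this slope against a diagnostic cut-off; the cut-off itself is not given in the abstract.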

  18. Image analysis of nucleated red blood cells.

    PubMed

    Zajicek, G; Shohat, M; Melnik, Y; Yeger, A

    1983-08-01

    Bone marrow smears stained with Giemsa were scanned with a video camera under computer control. Forty-two cells representing the six differentiation classes of the red bone marrow were sampled. Each cell was digitized into 70 × 70 pixels, each pixel representing a square area of 0.4 µm² in the original image. The pixel gray values ranged between 0 and 255: zero stood for white, 255 for black, and the numbers in between for the various shades of gray. After separation and smoothing, the images were processed with a Sobel operator outlining the points of steepest gray-level change in the cell. These points constitute a closed curve, termed the inner cell boundary, separating the cell into an inner and an outer region. Two types of features were extracted from each cell: form features, e.g., area and length, and gray-level features. Twenty-two features were tested for their discriminative merit. After 16 were selected, the discriminant analysis program correctly classified all 42 cells into the six classes.
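The Sobel step described above can be sketched in a few lines: two 3×3 kernels estimate the horizontal and vertical gray-level gradients, and their magnitude peaks at the points of steepest change. Shown on a toy step-edge image, not bone marrow data:

```python
import math

KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal-gradient kernel
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical-gradient kernel

def sobel_magnitude(img):
    """Sobel gradient magnitude at each interior pixel (borders omitted)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * (w - 2) for _ in range(h - 2)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(KX[i][j] * img[y - 1 + i][x - 1 + j]
                     for i in range(3) for j in range(3))
            gy = sum(KY[i][j] * img[y - 1 + i][x - 1 + j]
                     for i in range(3) for j in range(3))
            out[y - 1][x - 1] = math.sqrt(gx * gx + gy * gy)
    return out

# A vertical step edge: the gradient is strong along the boundary column.
img = [[0, 0, 255, 255]] * 4
print(sobel_magnitude(img))  # → [[1020.0, 1020.0], [1020.0, 1020.0]]
```

Thresholding this magnitude map and tracing the resulting points yields a closed boundary curve like the inner cell boundary described above.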

  19. Some selected quantitative methods of thermal image analysis in Matlab.

    PubMed

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images, and shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, for the skin of a human foot and of a face. The full source code of the developed application is provided as an attachment. [Graphical abstract: the main window of the program during dynamic analysis of the foot thermal image.]

  20. APPLICATION OF PRINCIPAL COMPONENT ANALYSIS TO RELAXOGRAPHIC IMAGES

    SciTech Connect

    STOYANOVA,R.S.; OCHS,M.F.; BROWN,T.R.; ROONEY,W.D.; LI,X.; LEE,J.H.; SPRINGER,C.S.

    1999-05-22

Standard analysis methods for processing inversion recovery MR images have traditionally used single-pixel techniques, in which each pixel is independently fit to an exponential recovery and spatial correlations in the data set are ignored. By analyzing the image as a complete dataset, improved error analysis and automatic segmentation can be achieved. Here, the authors apply principal component analysis (PCA) to a series of relaxographic images. This procedure decomposes the 3-dimensional data set into three separate images and corresponding recovery times. They interpret the 3 images as spatial representations of gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF) content.
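    As a toy illustration of the decomposition (not the authors' relaxographic pipeline), PCA via SVD on a synthetic rank-3 image series recovers exactly three significant components; all sizes and values below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a relaxographic series: 3 "tissue" maps (GM, WM, CSF)
# mixed with 3 recovery curves across 12 inversion times.
n_pix, n_times = 400, 12
maps = rng.random((3, n_pix))
curves = rng.random((3, n_times))
data = maps.T @ curves                      # pixels x inversion times, rank 3

# PCA via SVD of the mean-centred data matrix
centred = data - data.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
explained = s ** 2 / (s ** 2).sum()
n_comp = int((explained > 1e-10).sum())
print("significant components:", n_comp)    # 3 for this rank-3 series
```

    The singular vectors in `Vt` play the role of the recovery-time profiles, and the columns of `U` the corresponding spatial component images.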

  1. Geostatistical analysis of groundwater level using Euclidean and non-Euclidean distance metrics and variable variogram fitting criteria

    NASA Astrophysics Data System (ADS)

    Theodoridou, Panagiota G.; Karatzas, George P.; Varouchakis, Emmanouil A.; Corzo Perez, Gerald A.

    2015-04-01

Groundwater level is important information in hydrological modelling. Geostatistical methods are often employed to map the free surface of an aquifer. In geostatistical analysis using Kriging techniques, the selection of the optimal variogram model is very important for optimal method performance. This work compares three different criteria (the least squares sum method, the Akaike Information Criterion, and Cressie's Indicator) for assessing how well a theoretical variogram fits the experimental one, and investigates the impact on the prediction results. Moreover, five different distance functions (Euclidean, Minkowski, Manhattan, Canberra, and Bray-Curtis) are applied to calculate the distance between observations, which affects both the variogram calculation and the Kriging estimator. Cross-validation analysis in terms of Ordinary Kriging is applied by sequentially using each distance metric with each of the three variogram fitting criteria. The spatial dependence of the observations in the tested dataset is studied by fitting classical variogram models and the Matérn model. The proposed comparison analysis was performed for a data set of two hundred fifty hydraulic head measurements distributed over an alluvial aquifer that covers an area of 210 km2. The study area is located in the Prefecture of Drama, which belongs to the Water District of East Macedonia (Greece). This area was selected on the basis of hydro-geological data availability and geological homogeneity. The analysis showed that the combination of the Akaike Information Criterion for the variogram fitting assessment and the Bray-Curtis distance metric provided the most accurate cross-validation results. The power-law variogram model provided the best fit to the experimental data. For this dataset, the proposed approach improves the prediction efficiency of Ordinary Kriging in comparison to the classical Euclidean distance metric. Therefore, maps of the spatial
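    Two of the distance functions compared above are easy to state directly; a minimal sketch (the Minkowski, Manhattan, and Canberra metrics follow the same pattern), with illustrative coordinates:

```python
import numpy as np

def euclidean(p, q):
    """Classical straight-line distance between two observation points."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sqrt(((p - q) ** 2).sum()))

def bray_curtis(p, q):
    """Bray-Curtis dissimilarity: summed absolute differences over summed sums."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.abs(p - q).sum() / np.abs(p + q).sum())

a, b = [2.0, 3.0], [5.0, 7.0]
print(euclidean(a, b))    # 5.0
print(bray_curtis(a, b))  # 7/17 ≈ 0.412
```

    In the study, such a metric replaces the Euclidean distance everywhere a point-pair separation enters the experimental variogram and the Kriging system.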

  2. Criteria for the use of regression analysis for remote sensing of sediment and pollutants

    NASA Technical Reports Server (NTRS)

    Whitlock, C. H.; Kuo, C. Y.; Lecroy, S. R. (Principal Investigator)

    1982-01-01

Data analysis procedures for quantification of water quality parameters that are already identified and are known to exist within the water body are considered. The linear multiple-regression technique was examined as a procedure for defining and calibrating data analysis algorithms for such instruments as spectrometers and multispectral scanners.
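    A linear multiple-regression calibration of the kind described can be sketched with ordinary least squares; the band radiances, coefficients, and noise level below are hypothetical, not values from the report:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration set: 30 water samples, radiance in 3 scanner bands.
X = rng.random((30, 3))
true_coef = np.array([4.0, -2.0, 1.5])
y = 10.0 + X @ true_coef + rng.normal(0, 0.01, 30)   # e.g. sediment concentration

# Fit intercept + band coefficients by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 2))   # close to [10, 4, -2, 1.5]
```

    Once calibrated against in-situ measurements, the fitted coefficients convert scanner radiances into water quality estimates for unsampled pixels.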

  3. Image analysis by integration of disparate information

    NASA Technical Reports Server (NTRS)

    Lemoigne, Jacqueline

    1993-01-01

Image analysis often starts with some preliminary segmentation which provides a representation of the scene needed for further interpretation. Segmentation can be performed in several ways, which are categorized as pixel-based, edge-based, and region-based. Each of these approaches is affected differently by various factors, and the final result may be improved by integrating several or all of these methods, thus taking advantage of their complementary nature. In this paper, we propose an approach that integrates pixel-based and edge-based results by utilizing an iterative relaxation technique. This approach has been implemented on a massively parallel computer and tested on some remotely sensed imagery from the Landsat-Thematic Mapper (TM) sensor.

  4. Advanced digital image analysis method dedicated to the characterization of the morphology of filamentous fungus.

    PubMed

    Hardy, N; Moreaud, M; Guillaume, D; Augier, F; Nienow, A; Béal, C; Ben Chaabane, F

    2017-02-06

    Filamentous fungi have a complex morphology that induces fermentation process development issues, as a consequence of viscosity increase and diffusion limitations. In order to better understand the relationship between viscosity changes and fungus morphology during fermentations of Trichoderma reesei, an accurate image analysis method has been developed to provide quantitative and representative data for morphological analysis. This method consisted of a new algorithm called FACE that allowed sharp images to be created at all positions, segmentation of fungus, and morphological analysis using skeleton and topological approaches. It was applied and validated by characterizing samples of an industrial strain of Trichoderma reesei that had or had not been exposed to an extreme shear stress. This method allowed many morphological characteristics to be identified, among which nine relevant criteria were extracted, regarding the impact of shear stress on the fungus and on the viscosity of the fermentation medium.

  5. Quantitative Analysis of High-Resolution Microendoscopic Images for Diagnosis of Esophageal Squamous Cell Carcinoma

    PubMed Central

    Shin, Dongsuk; Protano, Marion-Anna; Polydorides, Alexandros D.; Dawsey, Sanford M.; Pierce, Mark C.; Kim, Michelle Kang; Schwarz, Richard A.; Quang, Timothy; Parikh, Neil; Bhutani, Manoop S.; Zhang, Fan; Wang, Guiqi; Xue, Liyan; Wang, Xueshan; Xu, Hong; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca R.

    2014-01-01

    Background & Aims High-resolution microendoscopy is an optical imaging technique with the potential to improve the accuracy of endoscopic screening for esophageal squamous neoplasia. Although these microscopic images can readily be interpreted by trained personnel, quantitative image analysis software could facilitate the use of this technology in low-resource settings. In this study we developed and evaluated quantitative image analysis criteria for the evaluation of neoplastic and non-neoplastic squamous esophageal mucosa. Methods We performed image analysis of 177 patients undergoing standard upper endoscopy for screening or surveillance of esophageal squamous neoplasia, using high-resolution microendoscopy, at 2 hospitals in China and 1 in the United States from May 2010 to October 2012. Biopsies were collected from imaged sites (n=375); a consensus diagnosis was provided by 2 expert gastrointestinal pathologists and used as the standard. Results Quantitative information from the high-resolution images was used to develop an algorithm to identify high-grade squamous dysplasia or invasive squamous cell cancer, based on histopathology findings. Optimal performance was obtained using mean nuclear area as the basis for classification, resulting in sensitivities and specificities of 93% and 92% in the training set, 87% and 97% in the test set, and 84% and 95% in an independent validation set, respectively. Conclusions High-resolution microendoscopy with quantitative image analysis can aid in the identification of esophageal squamous neoplasia. Use of software-based image guides may overcome issues of training and expertise in low-resource settings, allowing for widespread use of these optical biopsy technologies. PMID:25066838
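    Classification on mean nuclear area, as described in the Results, reduces to a threshold rule scored by sensitivity and specificity. A sketch with invented areas, labels, and cutoff (not the study's data or its trained threshold):

```python
import numpy as np

def sens_spec(mean_nuclear_area, labels, threshold):
    """Call a site neoplastic when mean nuclear area exceeds the threshold."""
    pred = np.asarray(mean_nuclear_area, float) > threshold
    labels = np.asarray(labels, bool)
    tp = np.sum(pred & labels)        # true positives
    tn = np.sum(~pred & ~labels)      # true negatives
    return float(tp / labels.sum()), float(tn / (~labels).sum())

# Hypothetical mean nuclear areas (arbitrary units) and histopathology labels.
area  = np.array([40, 55, 120, 130, 45, 140, 60, 150])
truth = np.array([0, 0, 1, 1, 0, 1, 0, 1])
print(sens_spec(area, truth, threshold=100.0))   # (1.0, 1.0) for this toy data
```

    In practice the threshold is chosen on a training set and then evaluated on held-out test and validation sets, as the abstract reports.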

  6. Exercise and ankylosing spondylitis with New York modified criteria: a systematic review of controlled trials with meta-analysis.

    PubMed

    Martins, N A; Furtado, Guilherme Eustáquio; Campos, Maria João; Leitão, José Carlos; Filaire, Edith; Ferreira, José Pedro

    2014-01-01

Ankylosing spondylitis is a systemic rheumatic disease that affects the axial skeleton, causing inflammatory back pain and structural and functional changes that decrease quality of life. Several treatments for ankylosing spondylitis have been proposed, among them the use of exercise. The present study aims to synthesize information from the literature, identify the results of controlled clinical trials on exercise in patients with ankylosing spondylitis meeting the New York modified diagnostic criteria, and assess whether exercise is more effective than physical activity in reducing functional impairment. The sources of studies used were: LILACS, Pubmed, EBSCOhost, B-on, personal communication, manual research, and lists of references. The criteria used for study selection were: controlled clinical trials, participants meeting the New York modified diagnostic criteria for ankylosing spondylitis, and interventions through exercise. The variables studied were related to primary outcomes such as the BASFI (Bath Ankylosing Spondylitis Functional Index) as a functional index, the BASDAI (Bath Ankylosing Spondylitis Disease Activity Index) as an index of the intensity of disease activity, and the BASMI (Bath Ankylosing Spondylitis Metrology Index) as a metrological index assessing the patient's limitation of movement. Of the 603 studies identified, after screening only 37 articles were selected for eligibility assessment, from which 18 studies were included. Methodological quality was assessed using the PEDro scale to select the studies with high methodological quality. A cumulative meta-analysis was subsequently performed to compare exercise versus the usual level of physical activity. Exercise shows statistically significant outcomes for the BASFI, BASDAI and BASMI, greater than those found for the usual level of physical activity.

  7. Prognostic Relevance of Objective Response According to EASL Criteria and mRECIST Criteria in Hepatocellular Carcinoma Patients Treated with Loco-Regional Therapies: A Literature-Based Meta-Analysis

    PubMed Central

    Vincenzi, Bruno; Di Maio, Massimo; Silletta, Marianna; D’Onofrio, Loretta; Spoto, Chiara; Piccirillo, Maria Carmela; Daniele, Gennaro; Comito, Francesca; Maci, Eliana; Bronte, Giuseppe; Russo, Antonio; Santini, Daniele; Perrone, Francesco; Tonini, Giuseppe

    2015-01-01

Background The European Association for the Study of the Liver (EASL) criteria and the modified Response Evaluation Criteria in Solid Tumors (mRECIST) are currently adopted to evaluate radiological response in patients affected by HCC and treated with loco-regional procedures. Several studies explored the validity of these measurements in predicting survival, but definitive data are still lacking. Aim To conduct a systematic review of studies exploring the usefulness of mRECIST and EASL criteria in assessing radiological response in HCC undergoing loco-regional therapies and their validity in predicting survival. Methods A comprehensive search of the literature was performed in the electronic databases EMBASE, MEDLINE, COCHRANE LIBRARY, ASCO conferences and EASL conferences up to June 10, 2014. Our overall search strategy included terms for HCC, mRECIST, and EASL. Loco-regional procedures included transarterial embolization (TAE), transarterial chemoembolization (TACE) and cryoablation. Inter-method agreement between EASL and mRECIST was assessed using the k coefficient. For each set of criteria, overall survival was described in responders vs. non-responders, considering the response of all target lesions. Results Among 18 initially found publications, 7 reports including 1357 patients were considered eligible. All studies were published as full-text articles. The proportion of responders according to mRECIST and EASL criteria was 62.4% and 61.3%, respectively. In the pooled population, 1286 agreements were observed between the two methods (kappa statistic 0.928, 95% confidence interval 0.912–0.944). The HR for overall survival (responders versus non-responders) according to mRECIST and EASL was 0.39 (95% confidence interval 0.26–0.61, p<0.0001) and 0.38 (95% confidence interval 0.24–0.61, p<0.0001), respectively. Conclusion In this literature-based meta-analysis, mRECIST and EASL criteria showed very good concordance in HCC patients undergoing loco-regional treatments. Objective
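    The kappa coefficient used above for inter-method agreement can be computed directly from two sets of ratings; a minimal sketch with invented responder/non-responder calls (not the pooled study data):

```python
import numpy as np

def cohens_kappa(a, b):
    """Chance-corrected agreement between two categorical ratings."""
    a, b = np.asarray(a), np.asarray(b)
    po = float(np.mean(a == b))                  # observed agreement
    pe = 0.0                                     # agreement expected by chance
    for cat in np.union1d(a, b):
        pe += float(np.mean(a == cat)) * float(np.mean(b == cat))
    return (po - pe) / (1 - pe)

# Hypothetical per-patient responder calls (1 = responder) by each criterion.
mrecist = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
easl    = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1]
print(round(cohens_kappa(mrecist, easl), 3))     # 0.8
```

    A kappa near 0.93, as reported, indicates almost perfect agreement between the two response criteria.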

  8. Wild Fire Risk Map in the Eastern Steppe of Mongolia Using Spatial Multi-Criteria Analysis

    NASA Astrophysics Data System (ADS)

    Nasanbat, Elbegjargal; Lkhamjav, Ochirkhuyag

    2016-06-01

Grassland fire is a cause of major disturbance to ecosystems and economies throughout the world. This paper aimed to identify risk zones of wildfire distribution on the Eastern Steppe of Mongolia. The study selected variables for wildfire risk assessment using a combination of data sources, including socio-economic data, climate data, Geographic Information Systems, remotely sensed imagery, and statistical yearbook information, and the result was evaluated against field validation data. The input data were divided into main groups of factors (environmental, socio-economic, climate, and fire information) comprising eleven input variables, which were classified into five risk-level categories according to criteria importance and rank. All of the explanatory variables were integrated into a spatial model and used to estimate the wildfire risk index. Within the index, five categories were created, based on spatial statistics, to adequately assess the respective fire risk: very high risk, high risk, moderate risk, low risk, and very low risk. Approximately 68 percent of the study area was predicted, with good accuracy, to fall within the very high, high, and moderate risk zones. The percentages of actual fires in each fire risk zone were as follows: very high risk, 42 percent; high risk, 26 percent; moderate risk, 13 percent; low risk, 8 percent; and very low risk, 11 percent. The overall accuracy of correct prediction from the model was 62 percent. The model and results could support spatial decision making processes and preventative wildfire management strategies, and could help to improve ecological and biodiversity conservation management.
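    A weighted overlay of normalised factor layers, of the kind the spatial model above describes, can be sketched as follows; the layer names, weights, grid size, and values are all hypothetical:

```python
import numpy as np

# Hypothetical normalised factor layers (0..1) on a tiny 2 x 3 study grid.
layers = {
    "environmental":  np.array([[0.9, 0.4, 0.1], [0.7, 0.5, 0.2]]),
    "socio_economic": np.array([[0.8, 0.3, 0.2], [0.6, 0.4, 0.1]]),
    "climate":        np.array([[0.7, 0.5, 0.3], [0.9, 0.2, 0.2]]),
}
weights = {"environmental": 0.5, "socio_economic": 0.2, "climate": 0.3}

# Weighted linear combination gives the per-cell risk index.
risk = sum(w * layers[name] for name, w in weights.items())

# Classify into five ordinal levels: very low .. very high (0..4).
classes = np.digitize(risk, bins=[0.2, 0.4, 0.6, 0.8])
print(classes)
```

    In a real application each layer would be a raster of the study area and the weights would come from the multi-criteria ranking of the eleven input variables.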

  9. Paediatric x-ray radiation dose reduction and image quality analysis.

    PubMed

    Martin, L; Ruddlesden, R; Makepeace, C; Robinson, L; Mistry, T; Starritt, H

    2013-09-01

    Collaboration of multiple staff groups has resulted in significant reduction in the risk of radiation-induced cancer from radiographic x-ray exposure during childhood. In this study at an acute NHS hospital trust, a preliminary audit identified initial exposure factors. These were compared with European and UK guidance, leading to the introduction of new factors that were in compliance with European guidance on x-ray tube potentials. Image quality was assessed using standard anatomical criteria scoring, and visual grading characteristics analysis assessed the impact on image quality of changes in exposure factors. This analysis determined the acceptability of gradual radiation dose reduction below the European and UK guidance levels. Chest and pelvis exposures were optimised, achieving dose reduction for each age group, with 7%-55% decrease in critical organ dose. Clinicians confirmed diagnostic image quality throughout the iterative process. Analysis of images acquired with preliminary and final exposure factors indicated an average visual grading analysis result of 0.5, demonstrating equivalent image quality. The optimisation process and final radiation doses are reported for Carestream computed radiography to aid other hospitals in minimising radiation risks to children.

  10. Method for Determining Language Objectives and Criteria. Volume II. Methodological Tools: Computer Analysis, Data Collection Instruments.

    DTIC Science & Technology

    1979-05-25

This volume presents (1) methods for computer and hand analysis of numerical language performance data (with examples) and (2) samples of the interview, observation, and survey instruments used in collecting language data. (Author)

  11. A framework for joint image-and-shape analysis

    NASA Astrophysics Data System (ADS)

    Gao, Yi; Tannenbaum, Allen; Bouix, Sylvain

    2014-03-01

Techniques in medical image analysis are often used for comparison or regression on the intensities of images. In general, the domain of the image is a given Cartesian grid. Shape analysis, on the other hand, studies the similarities and differences among spatial objects of arbitrary geometry and topology. Usually, there is no function defined on the domain of shapes. Recently, there has been a growing need for defining and analyzing functions defined on the shape space, and for a coupled analysis of both the shapes and the functions defined on them. Following this direction, in this work we present a coupled analysis for both images and shapes. As a result, statistically significant discrepancies in both the image intensities and the underlying shapes are detected. The method is applied to both brain images for schizophrenia and heart images for atrial fibrillation patients.

  12. A multi-criteria analysis of options for energy recovery from municipal solid waste in India and the UK.

    PubMed

    Yap, H Y; Nixon, J D

    2015-12-01

    Energy recovery from municipal solid waste plays a key role in sustainable waste management and energy security. However, there are numerous technologies that vary in suitability for different economic and social climates. This study sets out to develop and apply a multi-criteria decision making methodology that can be used to evaluate the trade-offs between the benefits, opportunities, costs and risks of alternative energy from waste technologies in both developed and developing countries. The technologies considered are mass burn incineration, refuse derived fuel incineration, gasification, anaerobic digestion and landfill gas recovery. By incorporating qualitative and quantitative assessments, a preference ranking of the alternative technologies is produced. The effect of variations in decision criteria weightings are analysed in a sensitivity analysis. The methodology is applied principally to compare and assess energy recovery from waste options in the UK and India. These two countries have been selected as they could both benefit from further development of their waste-to-energy strategies, but have different technical and socio-economic challenges to consider. It is concluded that gasification is the preferred technology for the UK, whereas anaerobic digestion is the preferred technology for India. We believe that the presented methodology will be of particular value for waste-to-energy decision-makers in both developed and developing countries.

  13. Image pattern recognition supporting interactive analysis and graphical visualization

    NASA Technical Reports Server (NTRS)

    Coggins, James M.

    1992-01-01

    Image Pattern Recognition attempts to infer properties of the world from image data. Such capabilities are crucial for making measurements from satellite or telescope images related to Earth and space science problems. Such measurements can be the required product itself, or the measurements can be used as input to a computer graphics system for visualization purposes. At present, the field of image pattern recognition lacks a unified scientific structure for developing and evaluating image pattern recognition applications. The overall goal of this project is to begin developing such a structure. This report summarizes results of a 3-year research effort in image pattern recognition addressing the following three principal aims: (1) to create a software foundation for the research and identify image pattern recognition problems in Earth and space science; (2) to develop image measurement operations based on Artificial Visual Systems; and (3) to develop multiscale image descriptions for use in interactive image analysis.

  14. Three modality image registration of brain SPECT/CT and MR images for quantitative analysis of dopamine transporter imaging

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Yuzuho; Takeda, Yuta; Hara, Takeshi; Zhou, Xiangrong; Matsusako, Masaki; Tanaka, Yuki; Hosoya, Kazuhiko; Nihei, Tsutomu; Katafuchi, Tetsuro; Fujita, Hiroshi

    2016-03-01

Important features in Parkinson's disease (PD) are degenerations and losses of dopamine neurons in the corpus striatum. 123I-FP-CIT can visualize the activity of the dopamine neurons. The activity ratio of background to corpus striatum is used for diagnosis of PD and Dementia with Lewy Bodies (DLB). The specific activity can be observed in the corpus striatum on SPECT images, but the location and shape of the corpus striatum are often lost on SPECT images alone because of the low uptake. In contrast, MR images can visualize the location of the corpus striatum. The purpose of this study was to realize a quantitative image analysis of the SPECT images by using an image registration technique with brain MR images that can determine the region of the corpus striatum. In this study, an image fusion technique was used to fuse SPECT and MR images via an intervening CT image taken by SPECT/CT. Mutual information (MI) was used for the registration between the CT and MR images. Six SPECT/CT and four MR scans of phantom materials were taken while changing the direction. As a result of the image registrations, 16 of 24 combinations were registered within 1.3 mm. By applying the approach to 32 clinical SPECT/CT and MR cases, all of the cases were registered within 0.86 mm. In conclusion, our registration method has potential for superimposing MR images on SPECT images.
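    The mutual-information criterion used for CT-MR registration can be estimated from a joint grey-level histogram; a sketch on synthetic images (not the study's implementation, and the bin count is an arbitrary choice):

```python
import numpy as np

def mutual_information(img1, img2, bins=8):
    """MI estimated from the joint grey-level histogram of two aligned images."""
    joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
ct = rng.random((64, 64))
aligned = ct * 0.5 + 0.1                      # intensity-remapped but aligned
shuffled = rng.permutation(ct.ravel()).reshape(64, 64)

print(mutual_information(ct, aligned) > mutual_information(ct, shuffled))  # True
```

    An MI-based registration searches over transform parameters for the pose that maximises this quantity, which peaks when the two modalities are spatially aligned even though their intensities differ.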

  15. Medical Image Analysis by Cognitive Information Systems - a Review.

    PubMed

    Ogiela, Lidia; Takizawa, Makoto

    2016-10-01

This publication presents a review of medical image analysis systems. The paradigms of cognitive information systems are presented through examples of medical image analysis systems, together with the semantic processes applied to different types of medical images. Cognitive information systems were defined on the basis of methods for the semantic analysis and interpretation of information (here, medical images), applied to the cognitive meaning of the medical images contained in the analyzed data sets. Semantic analysis was proposed to analyze the meaning of the data; meaning is carried by information, for example by medical images. Medical image analysis is presented and discussed as applied to various types of medical images of selected human organs with different pathologies, analyzed using different classes of cognitive information systems. Cognitive information systems dedicated to medical image analysis were also defined for decision-support tasks. This is very important, for example, in diagnostic and therapy processes and in the selection of semantic aspects/features from the analyzed data sets; those features allow a new way of analysis to be created.

  16. LANDSAT-4 image data quality analysis

    NASA Technical Reports Server (NTRS)

    Anuta, P. E. (Principal Investigator)

    1982-01-01

Work done on evaluating the geometric and radiometric quality of early LANDSAT-4 sensor data is described. Band-to-band and channel-to-channel registration evaluations were carried out using a line correlator. Visual blink comparisons were run on an image display to observe band-to-band registration over 512 x 512 pixel blocks. The results indicate a 0.5-pixel line misregistration between the 1.55 to 1.75 and 2.08 to 2.35 micrometer bands and the first four bands. A four-pixel (30 m pixels) line and column misregistration of the thermal IR band was also observed. Radiometric evaluation included mean and variance analysis of individual detectors and principal components analysis. Results indicate that detector bias for all bands is very close to or within tolerance. Bright spots were observed in the thermal IR band on an 18-line by 128-pixel grid; no explanation for this was pursued. The general overall quality of the TM was judged to be very high.

  17. SAR Image Texture Analysis of Oil Spill

    NASA Astrophysics Data System (ADS)

    Ma, Long; Li, Ying; Liu, Yu

Oil spills seriously affect the marine ecosystem and cause political and scientific concern because of their impact on fragile marine and coastal ecosystems. In order to respond to emergencies in case of oil spills, it is necessary to monitor oil spills using remote sensing. Spaceborne SAR is considered a promising method to monitor oil spills and has attracted attention from many researchers; however, research on SAR image texture analysis of oil spills is rarely reported. On 7 December 2007, a crane-carrying barge hit the Hong Kong-registered tanker "Hebei Spirit", which released an estimated 10,500 metric tons of crude oil into the sea. Texture features for this oil spill were extracted from the GLCM (Grey Level Co-occurrence Matrix) using SAR as the data source. The affected area was extracted successfully after evaluating the capabilities of different texture features to monitor the oil spill. The results revealed that texture is an important feature for oil spill monitoring. Key words: oil spill, texture analysis, SAR
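    A GLCM and one derived texture feature (contrast) can be sketched in a few lines; the quantised patches below are synthetic stand-ins for a smooth slick and speckled clean sea, not real SAR data:

```python
import numpy as np

def glcm(img, levels=4, dx=1, dy=0):
    """Symmetric, normalised Grey Level Co-occurrence Matrix for one offset."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            a, b = img[i, j], img[i + dy, j + dx]
            m[a, b] += 1
            m[b, a] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast: weighted spread of co-occurrences off the diagonal."""
    idx = np.arange(p.shape[0])
    return float(np.sum(p * (idx[:, None] - idx[None, :]) ** 2))

rng = np.random.default_rng(0)
slick = np.zeros((16, 16), dtype=int)           # uniform slick: low contrast
sea = rng.integers(0, 4, (16, 16))              # speckled sea: high contrast

print(contrast(glcm(slick)), "<", contrast(glcm(sea)))
```

    Thresholding such texture features per window is one way the slick-covered (low-contrast) area can be separated from the surrounding sea in the SAR scene.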

  18. Ripening of salami: assessment of colour and aspect evolution using image analysis and multivariate image analysis.

    PubMed

    Fongaro, Lorenzo; Alamprese, Cristina; Casiraghi, Ernestina

    2015-03-01

During ripening of salami, colour changes occur due to oxidation phenomena involving myoglobin. Moreover, shrinkage due to dehydration results in aspect modifications, mainly ascribable to fat aggregation. The aim of this work was the application of image analysis (IA) and multivariate image analysis (MIA) techniques to the study of colour and aspect changes occurring in salami during ripening. IA results showed that the red, green, blue, and intensity parameters decreased due to the development of a globally darker colour, while heterogeneity increased due to fat aggregation. By applying MIA, different salami slice areas corresponding to fat and to three different degrees of oxidised meat were identified and quantified. It was thus possible to study the trend of these different areas as a function of ripening, making objective an evaluation usually performed by subjective visual inspection.

  19. A Global Approach to Image Texture Analysis

    DTIC Science & Technology

    1990-03-01

segmented images based on texture by convolution with small masks ranging from 3 x 3 to 7 x 7 pixels. The local approach is not optimal for the sea ice... image, then differences of texture will clearly be reflected in the two-dimensional power spectrum of the image. To look at spectral distribution... resulting from convolutions with Laws' masks are actually the values of image energy falling in a series of spectral bins. Consider the seventh-order

  20. Multi-criteria evaluation of CMIP5 GCMs for climate change impact analysis

    NASA Astrophysics Data System (ADS)

    Ahmadalipour, Ali; Rana, Arun; Moradkhani, Hamid; Sharma, Ashish

    2015-12-01

Climate change is expected to have severe impacts on the global hydrological cycle along with the food-water-energy nexus. Currently, there are many climate models used in predicting important climatic variables. Though there have been advances in the field, there are still many problems to be resolved related to reliability, uncertainty, and computing needs, among many others. In the present work, we have analyzed the performance of 20 different global climate models (GCMs) from the Climate Model Intercomparison Project Phase 5 (CMIP5) dataset over the Columbia River Basin (CRB) in the Pacific Northwest USA. We demonstrate a statistical multicriteria approach, using univariate and multivariate techniques, for selecting suitable GCMs to be used for climate change impact analysis in the region. Univariate methods include the mean, standard deviation, coefficient of variation, relative change (variability), Mann-Kendall test, and Kolmogorov-Smirnov test (KS-test); the multivariate methods used were principal component analysis (PCA), singular value decomposition (SVD), canonical correlation analysis (CCA), and cluster analysis. The analysis is performed on raw GCM data, i.e., before bias correction, for the precipitation and temperature climatic variables for all 20 models, to capture the reliability and nature of each model at the regional scale. The analysis is based on spatially averaged datasets of GCMs and observations for the period 1970 to 2000. A ranking is provided for each of the GCMs based on performance evaluated against gridded observational data on various temporal scales (daily, monthly, and seasonal). Results have provided insight into each of the methods employed in ranking the GCMs and the various statistical properties they address. Further, evaluation was also performed for raw GCM simulations against different sets of gridded observational datasets in the area.
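    One of the univariate criteria above, the two-sample KS statistic, directly yields a model ranking against observations; a sketch with invented model names and distributions (not CMIP5 data):

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: max distance between ECDFs."""
    x, y = np.sort(x), np.sort(y)
    grid = np.concatenate([x, y])
    cdf_x = np.searchsorted(x, grid, side="right") / len(x)
    cdf_y = np.searchsorted(y, grid, side="right") / len(y)
    return float(np.abs(cdf_x - cdf_y).max())

rng = np.random.default_rng(0)
obs = rng.normal(5.0, 2.0, 365)                  # gridded observations (toy)
gcms = {"gcm_a": rng.normal(5.1, 2.1, 365),      # close to the observed climate
        "gcm_b": rng.normal(8.0, 3.0, 365)}      # strongly biased model

ranking = sorted(gcms, key=lambda g: ks_statistic(obs, gcms[g]))
print(ranking)   # the model with the smaller KS distance ranks first
```

    The multivariate criteria (PCA, SVD, CCA, clustering) supplement such rankings by comparing spatial and temporal structure rather than marginal distributions alone.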

  1. Image segmentation by iterative parallel region growing with application to data compression and image analysis

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    1988-01-01

    Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing globally best merges first. Such a segmentation approach, and two implementations of the approach on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.

  2. Evolution of diagnostic criteria for multiple sclerosis.

    PubMed

    Przybek, Joanna; Gniatkowska, Inga; Mirowska-Guzel, Dagmara; Członkowska, Anna

    2015-01-01

    Multiple sclerosis is a chronic demyelinating disease of the central nervous system that occurs primarily in young adults. There is no single diagnostic test to recognize the disease. The diagnostic criteria, based on clinical examination and laboratory tests, have changed considerably over time. The first guidelines involved only the results of the patient's neurological examination. The diagnostic criteria developed by Poser in 1983 were based largely on the results of additional tests, including visual evoked potentials and analysis of cerebrospinal fluid. The McDonald criteria, developed in 2001 and updated in 2005 and 2010, reflected the diagnostic breakthrough caused by widespread use of magnetic resonance imaging (MRI). Currently, the diagnosis depends largely on the results of the MRI examination. An early diagnosis is particularly important for starting disease-modifying treatments.

  3. 30 CFR 250.1911 - What hazards analysis criteria must my SEMS program meet?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... criminal penalty; (iii) Control technology applicable to the operation your hazards analysis is evaluating; and (iv) A qualitative evaluation of the possible safety and health effects on employees, and potential impacts to the human and marine environments, which may result if the control technology fails....

  4. 30 CFR 250.1911 - What hazards analysis criteria must my SEMS program meet?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... criminal penalty; (iii) Control technology applicable to the operation your hazards analysis is evaluating; and (iv) A qualitative evaluation of the possible safety and health effects on employees, and potential impacts to the human and marine environments, which may result if the control technology fails....

  5. Chapter 1 Eligibility Factors and Weights: Using Probit Analysis To Determine Eligibility Criteria.

    ERIC Educational Resources Information Center

    Willis, John A.

    Kanawha County (West Virginia) schools use Z-scores to identify elementary students eligible for Chapter 1 services in reading and mathematics. A probit analysis of over 500 previously served students was used to determine the variables and weights in the Z-score equations. Independent variables were chosen from those commonly used to identify…
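The Z-score mechanism described can be sketched as a weighted sum of standardized scores compared against a cutoff. The variables, weights, and cutoff below are hypothetical placeholders, not the probit-derived values from the report:

```python
import numpy as np

# Hypothetical weights and cutoff; the report's actual probit-derived
# values are not given in this abstract.
WEIGHTS = {"reading_score": 0.6, "teacher_rating": 0.4}
CUTOFF = -0.5   # composite Z below this flags a student as eligible

def composite_z(scores):
    """Weighted sum of per-variable Z-scores across one cohort."""
    z_total = np.zeros(len(next(iter(scores.values()))))
    for var, weight in WEIGHTS.items():
        x = np.asarray(scores[var], dtype=float)
        z_total += weight * (x - x.mean()) / x.std()
    return z_total

scores = {"reading_score": [35, 60, 55, 80, 40],
          "teacher_rating": [2, 4, 3, 5, 1]}
z = composite_z(scores)
eligible = z < CUTOFF
```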

  6. Optimal design and evaluation criteria for acoustic emission pulse signature analysis

    NASA Technical Reports Server (NTRS)

    Houghton, J. R.; Townsend, M. A.; Packman, P. F.

    1977-01-01

    Successful pulse recording and evaluation is strongly dependent on the instrumentation system selected and the method of analyzing the pulse signature. The paper studies system design, signal analysis techniques, and their interdependences with a view toward defining optimal approaches to pulse signal analysis. For this purpose, the instrumentation system is modeled, and analytical pulses, representative of the types of acoustic emissions to be distinguished, are passed through the system. Particular attention is given to comparing frequency spectrum analysis with deconvolution, referred to as time-domain reconstruction of the pulse or pulse train. The possibility of optimal transducer-filter system parameters is investigated. Deconvolution of a pulse is shown to be a superior approach for transient pulse analysis. Reshaping of a transducer output back to the original input pulse is possible and gives an accurate representation of the generating pulse in the time domain. Any definable transducer and filter system can be used for measurement of pulses by means of the deconvolution method. Selection of design variables for general usage is discussed.
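The deconvolution (time-domain reconstruction) step can be illustrated with frequency-domain division. The exponential impulse response and the regularization constant below are illustrative assumptions, not the paper's instrumentation model:

```python
import numpy as np

def deconvolve(recorded, impulse_response, eps=1e-3):
    """Recover the generating pulse from the transducer output by
    frequency-domain division; the eps term (Wiener-style regularization)
    guards against division by near-zero spectral components."""
    n = len(recorded)
    R = np.fft.rfft(recorded)
    H = np.fft.rfft(impulse_response, n)
    return np.fft.irfft(R * np.conj(H) / (np.abs(H) ** 2 + eps), n)

# Synthetic example: a short pulse smeared by a decaying transducer response
pulse = np.zeros(128)
pulse[10:14] = 1.0
h = np.exp(-np.arange(32) / 4.0)        # assumed transducer impulse response
recorded = np.convolve(pulse, h)[:128]  # what the instrumentation records
recovered = deconvolve(recorded, h)
```

The recovered signal closely matches the original pulse in the time domain, which is the "reshaping of a transducer output back to the original input pulse" the abstract describes.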

  7. The Effects of Describing Antecedent Stimuli and Performance Criteria in Task Analysis Instruction for Graphing

    ERIC Educational Resources Information Center

    Tyner, Bryan C.; Fienup, Daniel M.

    2016-01-01

    Task analyses are ubiquitous to applied behavior analysis interventions, yet little is known about the factors that make them effective. Numerous task analyses have been published in behavior analytic journals for constructing single-subject design graphs; however, learner outcomes using these task analyses may fall short of what could be…

  8. Multi-criteria decision analysis for health technology assessment in Canada: insights from an expert panel discussion.

    PubMed

    Diaby, Vakaramoko; Goeree, Ron; Hoch, Jeffrey; Siebert, Uwe

    2015-02-01

    Multi-criteria decision analysis (MCDA), a decision-making tool, has received increasing attention in recent years, notably in the healthcare field. For Canada, it is unclear whether and how MCDA should be incorporated into the existing health technology assessment (HTA) decision-making process. To facilitate debate on improving HTA decision-making in Canada, a workshop was held in conjunction with the 8th World Congress on Health Economics of the International Health Economics Association in Toronto, Canada in July 2011. The objective of the workshop was to discuss the potential benefits and challenges related to the use of MCDA for HTA decision-making in Canada. This paper summarizes and discusses the recommendations of an expert panel convened at the workshop to discuss opportunities and concerns with reference to the implementation of MCDA in Canada.

  9. New methods for the analysis of invasion processes: multi-criteria evaluation of the invasion of Hydrilla verticillata in Guatemala.

    PubMed

    Monterroso, I; Binimelis, R; Rodríguez-Labajos, B

    2011-03-01

    The study described in this article incorporates stakeholders' views on aquatic invasion processes and combines expert analysis with information from field work into an evaluation exercise. Management scenarios are designed based on available technical data and stakeholders' perceptions. These scenarios are evaluated using the Social Multi-Criteria Evaluation framework employing the NAIADE model. Two evaluations are carried out, technical and social. Social acceptance of different management scenarios, distribution of costs and benefits, and attribution of responsibility are discussed. The case study was carried out on Lake Izabal, a body of water connected to the Caribbean Sea in Northeastern Guatemala. In 2000, local fishermen reported the presence of an alien species in the lake, the macrophyte Hydrilla verticillata. Two years later, this alien species was established around the entire lakeshore, damaging the ecosystem, endangering native species and the subsistence of local inhabitants through impacts on transportation, fishing practices, and tourism.

  10. Wave-Optics Analysis of Pupil Imaging

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H.; Bos, Brent J.

    2006-01-01

    Pupil imaging performance is analyzed from the perspective of physical optics. A multi-plane diffraction model is constructed by propagating the scalar electromagnetic field, surface by surface, along the optical path comprising the pupil imaging optical system. Modeling results are compared with pupil images collected in the laboratory. The experimental setup, although generic for pupil imaging systems in general, has application to the James Webb Space Telescope (JWST) optical system characterization, where the pupil images are used as a constraint to the wavefront sensing and control process. Practical design considerations follow from the diffraction modeling and are discussed in the context of the JWST Observatory.
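Plane-to-plane propagation of a scalar field, the building block of such a multi-plane diffraction model, is commonly implemented with the angular-spectrum method. A minimal sketch under assumed sampling parameters, not the JWST model:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled scalar field a distance z: one plane-to-plane
    step of a multi-plane diffraction model."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    # keep only propagating components; drop evanescent ones
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Uniformly illuminated circular pupil, propagated a short distance
n, dx, wl = 128, 10e-6, 0.5e-6
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] * dx
pupil = (x ** 2 + y ** 2 < (0.25e-3) ** 2).astype(complex)
out = angular_spectrum_propagate(pupil, wl, dx, z=5e-3)
```

Chaining such steps, one per optical surface, yields the surface-by-surface propagation the abstract describes.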

  11. Identification in residue analysis based on liquid chromatography with tandem mass spectrometry: Experimental evidence to update performance criteria.

    PubMed

    Mol, Hans G J; Zomer, Paul; García López, Mónica; Fussell, Richard J; Scholten, Jos; de Kok, Andre; Wolheim, Anne; Anastassiades, Michelangelo; Lozano, Ana; Fernandez Alba, Amadeo

    2015-05-11

    Liquid chromatography-tandem mass spectrometry (LC-MS/MS) is one of the most widely used techniques for identification (and quantification) of residues and contaminants across a number of different chemical domains. Although the same analytical technique is used, the parameters and criteria for identification vary depending on where in the world the analysis is performed and for what purpose (e.g. determination of pesticides, veterinary drugs, forensic toxicology, sports doping). The rationale for these differences is not clear and in most cases the criteria are essentially based on expert opinions rather than underpinned by experimental data. In the current study, the variability of the two key identification parameters, retention time and ion ratio, was assessed and compared against requirements set out in different legal and guidance documents. The study involved the analysis of 120 pesticides, representing various chemical classes, polarities, molecular weights, and detector response factors, in 21 different fruit and vegetable matrices of varying degrees of complexity. The samples were analysed non-fortified, and fortified at 10, 50 and 200 μg kg(-1), in five laboratories using different LC-MS/MS instruments and conditions. In total, over 135,000 extracted-ion chromatograms were manually verified to provide an extensive data set for the assessment. The experimental data do not support relative tolerances for retention time, or different tolerances for ion ratios depending on relative abundance of the two product ions measured. Retention times in today's chromatographic systems are sufficiently stable to justify an absolute tolerance of ±0.1 min. Ion ratios are stable as long as sufficient response is obtained for both product ions. Ion ratio deviations are typically within ±20% (relative), and within ±45% (relative) when the response of the product ions is close to the limit of detection. Ion ratio tolerances up to 50% did not result in false positives and
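The tolerances supported by the data (an absolute retention-time tolerance of ±0.1 min and a relative ion-ratio tolerance of ±20%) translate directly into a simple identification check; function and argument names here are illustrative:

```python
def identify(rt_obs, rt_ref, ratio_obs, ratio_ref,
             rt_tol=0.1, ratio_tol=0.20):
    """Check an LC-MS/MS identification against an absolute retention-time
    tolerance (+/-0.1 min) and a relative ion-ratio tolerance (+/-20%),
    the tolerances the study's data support."""
    rt_ok = abs(rt_obs - rt_ref) <= rt_tol
    ratio_ok = abs(ratio_obs - ratio_ref) <= ratio_tol * ratio_ref
    return rt_ok and ratio_ok

# A candidate 0.05 min from the reference with a 10% ion-ratio deviation
# passes; one 0.3 min away fails on retention time alone.
```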

  12. Image analysis for dental bone quality assessment using CBCT imaging

    NASA Astrophysics Data System (ADS)

    Suprijanto; Epsilawati, L.; Hajarini, M. S.; Juliastuti, E.; Susanti, H.

    2016-03-01

    Cone beam computerized tomography (CBCT) is one of the X-ray imaging modalities applied in dentistry. This modality can visualize the oral region in 3D at high resolution. The CBCT jaw image carries potential information for the assessment of bone quality that is often used for pre-operative implant planning. We propose a comparison method based on the normalized histogram (NH) of the region of the inter-dental septum and premolar teeth. The NH characteristics from normal and abnormal bone conditions are then compared and analyzed. Four test parameters are proposed: the difference between teeth and bone average intensity (s), the ratio between bone and teeth average intensity (n) of the NH, the difference between teeth and bone peak value (Δp) of the NH, and the ratio between teeth and bone NH range (r). The results showed that n, s, and Δp have the potential to serve as classification parameters of dental calcium density.
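A minimal sketch of computing the four NH parameters from bone and teeth region intensities. The bin count, ROI handling, and synthetic data are assumptions; only the parameter definitions follow the abstract:

```python
import numpy as np

def nh_parameters(bone_roi, teeth_roi, bins=64, vrange=(0, 255)):
    """Compute s (difference of average intensities), n (ratio of
    averages), delta_p (difference of normalized-histogram peak values),
    and r (ratio of intensity ranges) for two ROIs."""
    bone = np.asarray(bone_roi, float).ravel()
    teeth = np.asarray(teeth_roi, float).ravel()
    h_b, _ = np.histogram(bone, bins=bins, range=vrange, density=True)
    h_t, _ = np.histogram(teeth, bins=bins, range=vrange, density=True)
    s = teeth.mean() - bone.mean()
    n = bone.mean() / teeth.mean()
    delta_p = h_t.max() - h_b.max()
    r = (teeth.max() - teeth.min()) / (bone.max() - bone.min())
    return s, n, delta_p, r

# Synthetic intensities standing in for segmented CBCT regions
rng = np.random.default_rng(0)
bone = rng.normal(90, 15, 500).clip(0, 255)
teeth = rng.normal(180, 10, 500).clip(0, 255)
s, n, dp, r = nh_parameters(bone, teeth)
```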

  13. Modified distance in average linkage based on M-estimator and MADn criteria in hierarchical cluster analysis

    NASA Astrophysics Data System (ADS)

    Muda, Nora; Othman, Abdul Rahman

    2015-10-01

    The process of grouping a set of objects into classes of similar objects is called clustering. It divides a large group of observations into smaller groups so that the observations within each group are relatively similar and the observations in different groups are relatively dissimilar. In this study, an agglomerative method in hierarchical cluster analysis is chosen, and clusters are constructed using an average linkage technique. Average linkage requires a distance between clusters, calculated as the average distance between all pairs of points, one from each group. This average distance is not robust when an outlier is present, so it needs to be modified. To do so, outliers are first detected using the MADn criterion and the average distance is recalculated without them. Next, the distance in average linkage is calculated based on a modified one-step M-estimator (MOM). The resulting clusters are presented in a dendrogram. To evaluate the goodness of the modified distance in average linkage clustering, a bootstrap analysis is conducted on the dendrogram and the bootstrap value (BP) is assessed for each branch that forms a group, to ensure the reliability of the branches constructed. This study found that the average linkage technique with the modified distance is significantly superior to the usual average linkage technique when an outlier is present. The two techniques perform similarly when there is no outlier.
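The robust distance can be sketched as: flag outlying pairwise distances with the MADn criterion, then average the remainder. The 1.4826 consistency constant is standard; the 2.24 cutoff is a common choice and may differ from the paper's:

```python
import numpy as np

def madn_outliers(x, cutoff=2.24):
    """Flag values whose distance from the median exceeds cutoff * MADn,
    where MADn = 1.4826 * MAD is consistent for normal data."""
    x = np.asarray(x, float)
    med = np.median(x)
    madn = 1.4826 * np.median(np.abs(x - med))
    return np.abs(x - med) > cutoff * madn

def robust_average_linkage_distance(group_a, group_b):
    """Average of pairwise distances between two clusters, recomputed
    after dropping MADn-flagged outlying distances."""
    d = np.abs(np.subtract.outer(np.asarray(group_a, float),
                                 np.asarray(group_b, float))).ravel()
    keep = ~madn_outliers(d)
    return d[keep].mean() if keep.any() else d.mean()

a = [1.0, 1.2, 0.9, 50.0]   # 50.0 is an outlying observation
b = [5.0, 5.1, 4.8]
robust = robust_average_linkage_distance(a, b)
```

Here the plain average distance is inflated to about 14 by the outlier, while the robust version stays near 4, illustrating why the modification matters for the linkage step.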

  14. An optimal control approach to pilot/vehicle analysis and Neal-Smith criteria

    NASA Technical Reports Server (NTRS)

    Bacon, B. J.; Schmidt, D. K.

    1984-01-01

    The approach of Neal and Smith was merged with advances in pilot modeling by means of optimal control techniques. While confirming the findings of Neal and Smith, a methodology that explicitly includes the pilot's objective in attitude tracking was developed. More importantly, the method yields the required system bandwidth along with a better pilot model directly applicable to closed-loop analysis of systems of any order.

  15. The Dynairship. [structural design criteria and feasibility analysis of an airplane - airship

    NASA Technical Reports Server (NTRS)

    Miller, W. M., Jr.

    1975-01-01

    A feasibility analysis for the construction and use of a combination airplane-airship named 'Dynairship' is undertaken. Payload capacities, fuel consumption, and the structural design of the craft are discussed and compared to a conventional commercial aircraft (a Boeing 747). Cost estimates of construction and operation of the craft are also discussed. The various uses of the craft are examined (e.g., police work, materials handling, and ocean surveillance), and aerodynamic configurations and photographs are shown.

  16. Measurement and Analysis Infrastructure Diagnostic (MAID) Evaluation Criteria, Version 1.0

    DTIC Science & Technology

    2010-02-01

    model is developed (e.g., linear regression), confidence intervals are calculated and displayed to illustrate the uncertainty associated with the...fitted regression line (the average dependent variable values). 3.10 When a statistical model is used (e.g., linear regression) for prediction, a... Regression Diagnostics: An Introduction. Sage, 1991. [Frees 1996] Frees, Edward W. Data Analysis Using Regression Models. Prentice Hall, 1996

  17. Analysis of Anechoic Chamber Testing of the Hurricane Imaging Radiometer

    NASA Technical Reports Server (NTRS)

    Fenigstein, David; Ruf, Chris; James, Mark; Simmons, David; Miller, Timothy; Buckley, Courtney

    2010-01-01

    The Hurricane Imaging Radiometer System (HIRAD) is a new airborne passive microwave remote sensor developed to observe hurricanes. HIRAD incorporates synthetic thinned array radiometry technology, which uses Fourier synthesis to reconstruct images from an array of correlated antenna elements. The HIRAD system response to a point emitter has been measured in an anechoic chamber. With this data, a Fourier inversion image reconstruction algorithm has been developed. Performance analysis of the apparatus is presented, along with an overview of the image reconstruction algorithm.
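The Fourier-synthesis principle behind thinned-array radiometry can be illustrated in one dimension: correlated antenna pairs sample the spatial-frequency spectrum (visibilities) of the scene, and an inverse Fourier transform reconstructs a band-limited image. This sketch is illustrative only, not the HIRAD algorithm:

```python
import numpy as np

n = 64
scene = np.zeros(n)
scene[20:28] = 1.0                 # a warm feature in the scene

vis = np.fft.fft(scene)            # idealized correlated measurements
freqs = np.fft.fftfreq(n)
vis[np.abs(freqs) > 0.2] = 0.0     # finite array: high frequencies unmeasured

image = np.fft.ifft(vis).real      # Fourier inversion reconstruction
```

The reconstruction is a smoothed version of the scene: total brightness is preserved (the DC visibility is measured) while fine detail beyond the array's longest baseline is lost.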

  18. Covariance Based Pre-Filters and Screening Criteria for Conjunction Analysis

    NASA Astrophysics Data System (ADS)

    George, E.; Chan, K.

    2012-09-01

    Several relationships are developed relating object size, initial covariance and range at closest approach to probability of collision. These relationships address the following questions: - Given the objects' initial covariance and combined hard body size, what is the maximum possible value of the probability of collision (Pc)? - Given the objects' initial covariance, what is the maximum combined hard body radius for which the probability of collision does not exceed the tolerance limit? - Given the objects' initial covariance and the combined hard body radius, what is the minimum miss distance for which the probability of collision does not exceed the tolerance limit? - Given the objects' initial covariance and the miss distance, what is the maximum combined hard body radius for which the probability of collision does not exceed the tolerance limit? The first relationship above allows the elimination of object pairs from conjunction analysis (CA) on the basis of the initial covariance and hard-body sizes of the objects. The application of this pre-filter to present day catalogs with estimated covariance results in the elimination of approximately 35% of object pairs as unable to ever conjunct with a probability of collision exceeding 1 × 10^-6. Because Pc is directly proportional to object size and inversely proportional to covariance size, this pre-filter will have a significantly larger impact on future catalogs, which are expected to contain a much larger fraction of small debris tracked only by a limited subset of available sensors. This relationship also provides a mathematically rigorous basis for eliminating objects from analysis entirely based on element set age or quality - a practice commonly done by rough rules of thumb today. Further, these relations can be used to determine the required geometric screening radius for all objects. This analysis reveals that the screening volumes for small objects are much larger than needed, while the screening volumes for
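The first pre-filter question has a closed form under a simplifying circular-covariance assumption: the probability of collision is maximized at zero miss distance, giving Pc_max = 1 - exp(-HBR^2 / (2 sigma^2)). A sketch of the pre-filter idea under that assumption (the paper treats the general covariance case):

```python
import math

def max_collision_probability(hbr, sigma):
    """Upper bound on Pc for combined hard-body radius hbr (m) and a
    circular combined position uncertainty of standard deviation sigma
    (m), attained at zero miss distance. Circular-covariance sketch."""
    return 1.0 - math.exp(-hbr ** 2 / (2.0 * sigma ** 2))

# A 10 m combined hard body inside a 10 km (1-sigma) circular covariance
# can never reach Pc = 1e-6, so the pair can be screened out at that
# tolerance without further analysis.
pc_max = max_collision_probability(10.0, 10_000.0)
```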

  19. Antenna trajectory error analysis in backprojection-based SAR images

    NASA Astrophysics Data System (ADS)

    Wang, Ling; Yazıcı, Birsen; Yanik, H. Cagri

    2014-06-01

    We present an analysis of the positioning errors in Backprojection (BP)-based Synthetic Aperture Radar (SAR) images due to antenna trajectory errors for a monostatic SAR traversing a straight linear trajectory. Our analysis is developed using microlocal analysis, which can provide an explicit quantitative relationship between the trajectory error and the positioning error in BP-based SAR images. The analysis is applicable to arbitrary trajectory errors in the antenna and can be extended to arbitrary imaging geometries. We present numerical simulations to demonstrate our analysis.

  20. Slide Set: reproducible image analysis and batch processing with ImageJ

    PubMed Central

    Nanes, Benjamin A.

    2015-01-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets that are common in biology. This paper introduces Slide Set, a framework for reproducible image analysis and batch processing with ImageJ. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution. PMID:26554504

  1. Slide Set: Reproducible image analysis and batch processing with ImageJ.

    PubMed

    Nanes, Benjamin A

    2015-11-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.
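The batch-processing pattern Slide Set automates (a table associating each image with a region of interest, one analysis command repeated per row, and the parameters saved alongside the results) can be sketched in standalone form. Slide Set itself is an ImageJ plugin; this Python analog with synthetic images is illustrative only:

```python
import numpy as np

def mean_intensity(image, roi):
    """The 'analysis command' repeated over every row: mean intensity
    inside a rectangular ROI given as (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = roi
    return float(image[y0:y1, x0:x1].mean())

# The data table: each row pairs an image with its region of interest
rng = np.random.default_rng(1)
data_table = [
    {"image": rng.random((32, 32)), "roi": (0, 16, 0, 16)},
    {"image": rng.random((32, 32)), "roi": (8, 24, 8, 24)},
]

# Saving the parameters with each result is what makes the run reproducible
params = {"command": "mean_intensity"}
results = [{"value": mean_intensity(row["image"], row["roi"]),
            "params": params} for row in data_table]
```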

  2. Image and Data-analysis Tools For Paleoclimatic Reconstructions

    NASA Astrophysics Data System (ADS)

    Pozzi, M.

    Proposed here is a directory of instruments and computing resources chosen to address the problems that concern paleoclimatic reconstructions. In particular, the following points are discussed: 1) Numerical analysis of paleo-data (fossil abundances, species analyses, isotopic signals, chemical-physical parameters, biological data): a) statistical analyses (univariate, diversity, rarefaction, correlation, ANOVA, F and T tests, Chi^2) b) multidimensional analyses (principal components, correspondence, cluster analysis, seriation, discriminant, autocorrelation, spectral analysis) neural analyses (backpropagation net, Kohonen feature map, Hopfield net, genetic algorithms) 2) Graphical analysis (visualization tools) of paleo-data (quantitative and qualitative fossil abundances, species analyses, isotopic signals, chemical-physical parameters): a) 2-D data analyses (graph, histogram, ternary, survivorship) b) 3-D data analyses (direct volume rendering, isosurfaces, segmentation, surface reconstruction, surface simplification, generation of tetrahedral grids). 3) Quantitative and qualitative digital image analysis (macro- and microfossil image analysis, Scanning Electron Microscope and Optical Polarized Microscope image capture and analysis, morphometric data analysis, 3-D reconstructions): a) 2D image analysis (correction of image defects, enhancement of image detail, converting texture and directionality to grey scale or colour differences, visual enhancement using pseudo-colour, pseudo-3D, thresholding of image features, binary image processing, measurements, stereological measurements, measuring features on a white background) b) 3D image analysis (basic stereological procedures, two dimensional structures: area fraction from the point count, volume fraction from the point count, three dimensional structures: surface area and the line intercept count, three dimensional microstructures: line length and the

  3. Analysis of the First NIF Neutron Images

    NASA Astrophysics Data System (ADS)

    Wilson, D. C.; Batha, S.; Grim, G. P.; Guler, N.; Kline, J. L.; Kyrala, G. A.; Merrill, F. E.; Morgan, G. L.; Vinyard, N. S.; Volegov, P. L.; Bradley, D. K.; Clark, D. S.; Dixit, S. N.; Fittinghoff, D. N.; Glenn, S. M.; Glenzer, S.; Izumi, N.; Jones, O. S.; Le Pape, S.; Ma, T.; MacKinnon, A. J.; Sepke, S. M.; Spears, B. K.; Tommasini, R.; McKenty, P.

    2011-10-01

    Neutron imaging at the National Ignition Facility obtained its first images from both directly laser-driven and X-radiation-driven implosions. A directly driven DT-filled glass microballoon gave an oblate image (P2/P0 = -45%) whose size (P0 = 70 μm) fit within the X-ray images. Simulations using the polar direct drive laser pointing give a round image of P0 ~95 μm. However, as the electron flux limiter is reduced from 0.06 to 0.03, the image becomes oblate. The observed asymmetry can be reproduced by transferring ~10% of the energy from the outer laser beams to the inner. Radiation-driven implosions of ignition capsules with 20% D and 50% D produced ~30 μm radius oblate images in 12-15 MeV neutrons. Images in 10-12 MeV neutrons, which have experienced one scattering in the fuel and number ~4% of the primaries, were larger (~44-56 μm). Image sizes indicate the compression of the fuel and are consistent with observed 10-12/13-15 MeV yield ratios. Work funded by the USDOE at LANL, LLNL, NSTEC and LLE.

  4. Serum 5'nucleotidase activity in rats: a method for automated analysis and criteria for interpretation.

    PubMed

    Carakostas, Michael C.; Power, Richard J.; Banerjee, Asit K.

    1990-01-01

    A manual kit for determining serum 5'nucleotidase (5'NT, EC 3.1.3.5) activity was adapted for use with rat samples on a large discrete clinical chemistry analyzer. The precision of the method was good (within-run C.V. = 2.14%; between-run C.V. = 5.5%). A comparison of the new automated method with a manual and a semi-automated method gave regression statistics of y = 1.18x - 3.66 (Sy.x = 4.54) and y = 0.733x + 1.97 (Sy.x = 1.69), respectively. Temperature conversion factors provided by the kit manufacturer for human samples were determined to be inaccurate for converting results from rat samples. Analysis of components contributing to normal variation in rat serum 5'NT activity showed age and sex to be major factors. Increased serum 5'NT activity was observed in female rats when compared to male rats beginning at about 5 to 6 weeks of age. An analysis of variance of serum 5'NT, alkaline phosphatase, and GGT activities observed over a 9-week period in normal rats suggests several advantages for 5'NT as a predictor of biliary lesions in rats.

  5. Holographic Interferometry and Image Analysis for Aerodynamic Testing

    DTIC Science & Technology

    1980-09-01

    tunnels, (2) development of automated image analysis techniques for reducing quantitative flow-field data from holographic interferograms, and (3) investigation and development of software for the application of digital image analysis to other photographic techniques used in wind tunnel testing.

  6. Image analysis of neuropsychological test responses

    NASA Astrophysics Data System (ADS)

    Smith, Stephen L.; Hiller, Darren L.

    1996-04-01

    This paper reports recent advances in the development of an automated approach to neuropsychological testing. High performance image analysis algorithms have been developed as part of a convenient and non-invasive computer-based system to provide an objective assessment of patient responses to figure-copying tests. Tests of this type are important in determining the neurological function of patients following stroke through evaluation of their visuo-spatial performance. Many conventional neuropsychological tests suffer from the serious drawback that subjective judgement on the part of the tester is required in the measurement of the patient's response, which leads to a qualitative neuropsychological assessment that can be both inconsistent and inaccurate. Results for this automated approach are presented for three clinical populations: patients suffering right hemisphere stroke are compared with adults with no known neurological disorder, and a population of normal 11-year-old school children is presented to demonstrate the sensitivity of the technique. As well as providing a more reliable and consistent diagnosis, this technique is sufficiently sensitive to monitor a patient's progress over a period of time and will provide the neuropsychologist with a practical means of evaluating the effectiveness of therapy or medication administered as part of a rehabilitation program.

  7. Hierarchical manifold learning for regional image analysis.

    PubMed

    Bhatia, Kanwal K; Rao, Anil; Price, Anthony N; Wolz, Robin; Hajnal, Joseph V; Rueckert, Daniel

    2014-02-01

    We present a novel method of hierarchical manifold learning which aims to automatically discover regional properties of image datasets. While traditional manifold learning methods have become widely used for dimensionality reduction in medical imaging, they suffer from only being able to consider whole images as single data points. We extend conventional techniques by additionally examining local variations, in order to produce spatially-varying manifold embeddings that characterize a given dataset. This involves constructing manifolds in a hierarchy of image patches of increasing granularity, while ensuring consistency between hierarchy levels. We demonstrate the utility of our method in two very different settings: 1) to learn the regional correlations in motion within a sequence of time-resolved MR images of the thoracic cavity; 2) to find discriminative regions of 3-D brain MR images associated with neurodegenerative disease.

  8. Digital image processing and analysis for activated sludge wastewater treatment.

    PubMed

    Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed

    2015-01-01

    Activated sludge systems are generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI) and chemical oxygen demand (COD). These tests are conducted in the laboratory and can take many hours to yield a final measurement. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict its future state. The characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation and analysis in the specific context of activated sludge. In the latter part, additional procedures such as z-stacking and image stitching are introduced for wastewater image preprocessing, which have not previously been used in the context of activated sludge. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, the image-analysis-based morphological parameters and their correlation with regard to monitoring and prediction of activated sludge are discussed. It is thus observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.
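A minimal threshold-label-measure sketch in the spirit of the segmentation and morphological-parameter steps reviewed here, run on a synthetic image (the chapter's actual pipeline is more elaborate):

```python
import numpy as np
from scipy import ndimage

# Synthetic grayscale "sludge image": two bright flocs on a dim background
img = np.zeros((64, 64))
img[10:20, 10:20] = 0.9      # a large floc
img[40:44, 40:44] = 0.8      # a small floc
img += 0.05 * np.random.default_rng(2).random(img.shape)  # background noise

# Segmentation by global threshold, then connected-component labelling
binary = img > 0.5
labels, n_flocs = ndimage.label(binary)

# A simple morphological parameter per floc: its area in pixels
areas = ndimage.sum(binary, labels, index=range(1, n_flocs + 1))
```

Parameters such as these, tracked over time, are what get correlated with TSSol, SVI, and COD in the monitoring approach described above.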

  9. Criteria for submitting photos.

    PubMed

    Vallarelli, Andrelou Fralete Ayres

    2011-01-01

    Dermatological photography is used as a supplement to dermatological examination with the function of providing additional knowledge and information. Its quality depends on the expertise of the photographer-dermatologist in recording the relevant elements present. Therefore, the dermatologist should know basic principles of photography and the journal editors should ensure that the articles have high-quality images. This article suggests criteria to improve the quality of photographs submitted to journals for publication.

  10. An investigation of model selection criteria for technical analysis of moving average

    NASA Astrophysics Data System (ADS)

    Jasemi, Milad; Kimiagari, Ali M.

    2012-05-01

    Moving averages are one of the most popular and easy-to-use tools available to a technical analyst, and they also form the building blocks for many other technical indicators and overlays. Building a moving average (MA) model requires determining four factors: (1) the approach for issuing signals, (2) the technique for calculating the MA, (3) the length of the MA, and (4) the band. After a literature review of technical analysis (TA) from the perspective of MA and some discussion of MA as a TA tool, this paper highlights the effects that each of the first three factors has on the performance of MA as a TA tool. The results, based on experiments with real data, indicate that the choice of the first and second factors is not critical, and that more attention should be paid to the remaining factors.
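The first three factors map naturally onto code: a calculation technique (factor 2, here a simple moving average), a length (factor 3), and a signal-issuing approach (factor 1, here a fast/slow crossover). The crossover rule is an illustrative choice, not one taken from the paper:

```python
import numpy as np

def moving_average(prices, length):
    """Simple moving average: one choice for factor 2 (calculation
    technique); 'length' is factor 3."""
    kernel = np.ones(length) / length
    return np.convolve(prices, kernel, mode="valid")

def crossover_signals(prices, fast=3, slow=5):
    """Factor 1 (signal-issuing approach) as a classic crossover:
    signal is True wherever the fast MA sits above the slow MA."""
    f = moving_average(prices, fast)
    s = moving_average(prices, slow)
    f = f[len(f) - len(s):]   # align both series at the final price
    return f > s

prices = np.array([1., 2., 3., 4., 5., 4., 3., 2., 1.])
signal = crossover_signals(prices)
```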

  11. Performance analysis and parametric optimum criteria of a regeneration Bose-Otto engine

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Liu, Sanqiu; Du, Jianqiang

    2009-05-01

    A general regenerative model of the Otto engine cycle working with an ideal Bose gas is used to discuss the influence of quantum degeneracy, regeneration and finite rate heat transfer on the performance of the cycle. Based on the model, expressions for some important parameters, such as the power output and efficiency of the Bose-Otto engine cycle, are derived analytically. By means of numerical calculation and illustration, the influence of the compression ratio of the two isochoric processes and the regenerator effectiveness on the performance of the cycle are discussed and evaluated in detail. Moreover, the general optimal performance characteristics of the cycle are revealed. This analysis could provide a general theoretical tool for the optimal design and operation of real power plants.

  12. A modern approach to pilot/vehicle analysis and the Neal-Smith criteria

    NASA Technical Reports Server (NTRS)

    Bacon, B. J.; Schmidt, D. K.

    1982-01-01

    The present investigation is concerned with the development of a better pilot-modelling technique via optimal control theory, taking into account concepts concerning 'pilot rating' considered by Neal and Smith (1970). The investigation conducted by Neal and Smith had the objective of providing data on the effects of flight control system dynamics and of developing a design criterion capable of pinpointing pilot problem areas encountered in performing a given task. Neal and Smith devised a 'pilot-in-the-loop' analysis capable of showing problem areas in pitch attitude tracking. Unfortunately, the method employed has some drawbacks. The current investigation therefore attempts to provide an alternate approach that makes use of an optimal control pilot model. An optimal control model (OCM) had been discussed by Kleinman et al. (1970). It is shown that the alternate approach, based on the OCM, offers some distinct advantages.

  13. Computer-based image analysis in breast pathology

    PubMed Central

    Gandomkar, Ziba; Brennan, Patrick C.; Mello-Thoms, Claudia

    2016-01-01

    Whole slide imaging (WSI) has the potential to be utilized in telepathology, teleconsultation, quality assurance, clinical education, and digital image analysis to aid pathologists. In this paper, the potential added benefits of computer-assisted image analysis in breast pathology are reviewed and discussed. One of the major advantages of WSI systems is the possibility of doing computer-based image analysis on the digital slides. The purpose of computer-assisted analysis of breast virtual slides can be (i) segmentation of desired regions or objects such as diagnostically relevant areas, epithelial nuclei, lymphocyte cells, tubules, and mitotic figures, (ii) classification of breast slides based on breast cancer (BCa) grades, the invasive potential of tumors, or cancer subtypes, (iii) prognosis of BCa, or (iv) immunohistochemical quantification. While encouraging results have been achieved in this area, further progress is still required to make computer-based image analysis of breast virtual slides acceptable for clinical practice. PMID:28066683

  14. The challenge of obtaining information necessary for multi-criteria decision analysis implementation: the case of physiotherapy services in Canada

    PubMed Central

    2013-01-01

    Background As fiscal constraints dominate health policy discussions across Canada and globally, priority-setting exercises are becoming more common to guide the difficult choices that must be made. In this context, it becomes highly desirable to have accurate estimates of the value of specific health care interventions. Economic evaluation is a well-accepted method to estimate the value of health care interventions. However, economic evaluation has significant limitations, which have led to an increase in the use of Multi-Criteria Decision Analysis (MCDA). One key concern with MCDA is the availability of the information necessary for implementation. In the fall of 2011, the Canadian Physiotherapy Association embarked on a project aimed at providing a valuation of physiotherapy services that is both evidence-based and relevant to resource allocation decisions. The framework selected for this project was MCDA. We report on how we addressed the challenge of obtaining some of the information necessary for MCDA implementation. Methods MCDA criteria were selected and areas of physiotherapy practice were identified. Building up the necessary information base was a three-step process. First, a literature review was conducted for each practice area on each criterion. The next step was to conduct interviews with experts in each of the practice areas to critique the results of the literature review and to fill gaps where the literature was absent or insufficient. Finally, the results of the individual interviews were validated by a national committee to ensure consistency across all practice areas and that a national-level perspective was applied. Results Despite a lack of research evidence on many of the considerations relevant to the estimation of the value of physiotherapy services (the criteria), sufficient information was obtained to facilitate MCDA implementation at the local level.
Conclusions The results of this research project serve two purposes: 1) a method to

  15. Multimodal digital color imaging system for facial skin lesion analysis

    NASA Astrophysics Data System (ADS)

    Bae, Youngwoo; Lee, Youn-Heum; Jung, Byungjo

    2008-02-01

    In dermatology, various digital imaging modalities have been used as important tools to quantitatively evaluate the treatment effect on skin lesions. Cross-polarization color images have been used to evaluate skin chromophore (melanin and hemoglobin) information, and parallel-polarization images to evaluate skin texture information. In addition, UV-A-induced fluorescent images have been widely used to evaluate various skin conditions such as sebum, keratosis, sun damage, and vitiligo. In order to maximize the efficacy of evaluating various skin lesions, it is necessary to integrate these imaging modalities into a single imaging system. In this study, we propose a multimodal digital color imaging system that provides four different digital color images: a standard color image, parallel- and cross-polarization color images, and a UV-A-induced fluorescent color image. Herein, we describe the imaging system and present examples of image analysis. By analyzing the color information and morphological features of facial skin lesions, we are able to evaluate various skin lesions comparatively and simultaneously. In conclusion, we believe the multimodal color imaging system can be utilized as an important assistive tool in dermatology.

  16. Analysis of Images from Experiments Investigating Fragmentation of Materials

    SciTech Connect

    Kamath, C; Hurricane, O

    2007-09-10

    Image processing techniques have been used extensively to identify objects of interest in image data and extract representative characteristics for these objects. However, this can be a challenge due to the presence of noise in the images and the variation across images in a dataset. When the number of images to be analyzed is large, the algorithms used must also be relatively insensitive to the choice of parameters and lend themselves to partial or full automation. This not only avoids manual analysis which can be time consuming and error-prone, but also makes the analysis reproducible, thus enabling comparisons between images which have been processed in an identical manner. In this paper, we describe our approach to extracting features for objects of interest in experimental images. Focusing on the specific problem of fragmentation of materials, we show how we can extract statistics for the fragments and the gaps between them.
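
    As a concrete illustration of the kind of fragment statistics described above, the following sketch (our own, not the authors' pipeline; it assumes the image has already been thresholded to a binary grid) labels 4-connected fragments and reports their sizes. Gap statistics could be obtained the same way by labeling the background instead.

```python
# Illustrative connected-component labeling for fragment statistics.
from collections import deque

def label_fragments(binary):
    """Label 4-connected foreground regions in a 2D 0/1 grid.
    Returns (label grid, list of fragment sizes in pixels)."""
    rows, cols = len(binary), len(binary[0])
    labels = [[0] * cols for _ in range(rows)]
    sizes = []
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not labels[r][c]:
                next_label += 1
                size = 0
                queue = deque([(r, c)])       # breadth-first flood fill
                labels[r][c] = next_label
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
                sizes.append(size)
    return labels, sizes
```

    From the size list, summary statistics (fragment count, mean size, size distribution) follow directly; robustness to noise in real data would require the preprocessing the paper discusses.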

  17. Comparison of sonochemiluminescence images using image analysis techniques and identification of acoustic pressure fields via simulation.

    PubMed

    Tiong, T Joyce; Chandesa, Tissa; Yap, Yeow Hong

    2017-05-01

    One common method to determine the existence of cavitational activity in power ultrasonics systems is capturing images of sonoluminescence (SL) or sonochemiluminescence (SCL) in a dark environment. Conventionally, the light emitted from SL or SCL was detected by counting photons. Though this method is effective, it cannot identify the sonochemical zones of an ultrasonic system. SL/SCL images, on the other hand, enable identification of 'active' sonochemical zones. However, these images often provide only qualitative data, as harvesting light-intensity data from them is tedious and requires high-resolution images. In this work, we propose a new image analysis technique that uses pseudo-colouring to quantify the SCL zones based on the intensities of the SCL images, followed by comparison of the active SCL zones with COMSOL-simulated acoustic pressure zones.
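
    The quantification idea can be sketched as follows (an assumption-laden illustration, not the authors' implementation; the intensity thresholds are arbitrary placeholders): bin the pixel intensities of an SCL image into bands, as a pseudo-colour map would, and report the area fraction of each band.

```python
# Illustrative intensity-band quantification of an SCL image.
# Threshold values are placeholders, not the study's calibration.

def zone_fractions(image, thresholds=(50, 150, 250)):
    """Return the fraction of pixels falling in each intensity band,
    from darkest ([0, t1)) to brightest ([tn, inf))."""
    flat = [p for row in image for p in row]
    n = len(flat)
    bounds = (0,) + tuple(thresholds) + (float("inf"),)
    fractions = []
    for lo, hi in zip(bounds, bounds[1:]):
        count = sum(1 for p in flat if lo <= p < hi)
        fractions.append(count / n)
    return fractions
```

    The brightest band's fraction is a simple proxy for the 'active' sonochemical area, which could then be compared against simulated acoustic pressure zones.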

  18. Multi-criteria decision analysis of concentrated solar power with thermal energy storage and dry cooling.

    PubMed

    Klein, Sharon J W

    2013-12-17

    Decisions about energy backup and cooling options for parabolic trough (PT) concentrated solar power have technical, economic, and environmental implications. Although PT development has increased rapidly in recent years, energy policies do not address backup or cooling option requirements, and very few studies directly compare the diverse implications of these options. This is the first study to compare the annual capacity factor, levelized cost of energy (LCOE), water consumption, land use, and life cycle greenhouse gas (GHG) emissions of PT with different backup options (minimal backup (MB), thermal energy storage (TES), and fossil fuel backup (FF)) and different cooling options (wet (WC) and dry (DC)). Multicriteria decision analysis was used with five preference scenarios to identify the highest-scoring energy backup-cooling combination for each preference scenario. MB-WC had the highest score in the Economic and Climate Change-Economy scenarios, while FF-DC and FF-WC had the highest scores in the Equal and Availability scenarios, respectively. TES-DC had the highest score for the Environmental scenario. DC was ranked 1-3 in all preference scenarios. Direct comparisons between GHG emissions and LCOE and between GHG emissions and land use suggest a preference for TES if backup is required for PT plants to compete with baseload generators.
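
    The weighted-sum scoring underlying this kind of multicriteria decision analysis can be sketched as follows (a generic illustration; the criteria, weights, and min-max normalization here are our assumptions, not the study's actual preference scenarios).

```python
# Generic weighted-sum MCDA sketch; weights model a preference scenario.

def mcda_scores(alternatives, weights, maximize):
    """Score alternatives by weighted sum of min-max normalized criteria.
    alternatives: {name: [criterion values]}
    weights: per-criterion weights; maximize: per-criterion direction flags."""
    n = len(weights)
    cols = list(zip(*alternatives.values()))   # values per criterion
    scores = {}
    for name, vals in alternatives.items():
        total = 0.0
        for j in range(n):
            lo, hi = min(cols[j]), max(cols[j])
            norm = 0.0 if hi == lo else (vals[j] - lo) / (hi - lo)
            if not maximize[j]:
                norm = 1.0 - norm              # lower is better (e.g. LCOE)
            total += weights[j] * norm
        scores[name] = total
    return scores
```

    Re-running the scoring with different weight vectors, one per preference scenario, is what allows a study like this to report a highest-scoring combination per scenario.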

  19. Design criteria for molecular mimics of fragments of the β-turn. 1. Cα atom analysis

    NASA Astrophysics Data System (ADS)

    Garland, S. L.; Dean, P. M.

    1999-09-01

    Peptides represent an extensive class of biologically active molecules. They may be used as leads in the development of novel therapeutic agents provided the pharmacophoric information present within them can be translated into non-peptide analogs that lack the peptide backbone and are stable to proteolysis. This is the rationale for peptidomimetic drug design. Frequently, the β-turn has been implicated as a conformation important for biological recognition of peptides. Empirical evidence from known peptidomimetics, coupled with a theoretical model of peptide binding and the observation that glycine and proline residues are common within the β-turn, has suggested the design of molecules to mimic the placement of between two and four of the side-chains. The moderate number of different β-turn conformations, combined with the combinatorial nature of side-chain selection, complicates the procedure. In this paper, cluster analysis has been used to classify the arrangement of Cα atoms about the various fragments of the β-turn. Recombination of the observed patterns provides a general model for the β-turn which may be used as an effective screen for potential peptidomimetic scaffolds in chemical databases.

  20. PIZZARO: Forensic analysis and restoration of image and video data.

    PubMed

    Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan

    2016-07-01

    This paper introduces a set of methods for image and video forensic analysis. They were designed to help assess image and video credibility and origin, and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from the best practices used in criminal investigations utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results, complemented with existing approaches (LCD re-capture detection and denoising), were implemented in the PIZZARO software tool, which consists of the image processing functionality as well as reporting and archiving functions to ensure the repeatability of image analysis procedures, thus fulfilling the formal aspects of image/video analysis work. A comparison of the newly proposed methods with state-of-the-art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases, as well as the method design, were developed in tight cooperation between scientists from the Institute of Criminalistics, the National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences.

  1. Object-based image analysis using multiscale connectivity.

    PubMed

    Braga-Neto, Ulisses; Goutsias, John

    2005-06-01

    This paper introduces a novel approach for image analysis based on the notion of multiscale connectivity. We use the proposed approach to design several novel tools for object-based image representation and analysis which exploit the connectivity structure of images in a multiscale fashion. More specifically, we propose a nonlinear pyramidal image representation scheme, which decomposes an image at different scales by means of multiscale grain filters. These filters gradually remove connected components from an image that fail to satisfy a given criterion. We also use the concept of multiscale connectivity to design a hierarchical data partitioning tool. We employ this tool to construct another image representation scheme, based on the concept of component trees, which organizes partitions of an image in a hierarchical multiscale fashion. In addition, we propose a geometrically-oriented hierarchical clustering algorithm which generalizes the classical single-linkage algorithm. Finally, we propose two object-based multiscale image summaries, reminiscent of the well-known (morphological) pattern spectrum, which can be useful in image analysis and image understanding applications.

  2. Dehazing method through polarimetric imaging and multi-scale analysis

    NASA Astrophysics Data System (ADS)

    Cao, Lei; Shao, Xiaopeng; Liu, Fei; Wang, Lin

    2015-05-01

    An approach for haze removal utilizing polarimetric imaging and multi-scale analysis has been developed to address the problem that haze weakens the interpretation of remote sensing images through poor visibility and short detection distance. On the one hand, the polarization effects of the airlight and the object radiance in the imaging procedure have been considered. On the other hand, the fact that objects and haze possess different frequency distribution properties has been emphasized. Multi-scale analysis through the wavelet transform is therefore employed so that the low-frequency components dominated by haze and the high-frequency coefficients occupied by image details and edges can be processed separately. Based on the measurement of polarization by Stokes parameters, three linearly polarized images (0°, 45°, and 90°) were taken in haze weather, from which the best polarized image Imin and the worst one Imax can be synthesized. These two haze-contaminated polarized images are then decomposed into different spatial layers with wavelet analysis; the low-frequency images are processed via a polarization dehazing algorithm, while the high-frequency components are manipulated with a nonlinear transform. The final haze-free image is then reconstructed by inverse wavelet transform. Experimental results verify that the dehazing method proposed in this study can strongly improve image visibility and increase detection distance through haze for imaging warning and remote sensing systems.
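
    The Stokes-parameter step described above can be sketched as follows (a minimal illustration assuming ideal, co-registered intensity measurements; plain lists stand in for image arrays): from the three linearly polarized measurements I(0°), I(45°), I(90°), compute S0, S1, S2 and synthesize the best (minimum) and worst (maximum) polarized images.

```python
# Per-pixel synthesis of Imax/Imin from three linear polarization channels.
import math

def stokes_min_max(i0, i45, i90):
    """Return (Imax, Imin) per pixel from I(0), I(45), I(90)."""
    imax, imin = [], []
    for a, b, c in zip(i0, i45, i90):
        s0 = a + c                 # total intensity
        s1 = a - c                 # 0/90 degree preference
        s2 = 2 * b - a - c         # 45/135 degree preference
        lp = math.sqrt(s1 * s1 + s2 * s2)   # linearly polarized part
        imax.append((s0 + lp) / 2)
        imin.append((s0 - lp) / 2)
    return imax, imin
```

    In the pipeline the abstract describes, these two synthesized images (not the raw channels) would then be fed to the wavelet decomposition and the polarization dehazing step.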

  3. A linear mixture analysis-based compression for hyperspectral image analysis

    SciTech Connect

    C. I. Chang; I. W. Ginsberg

    2000-06-30

    In this paper, the authors present a fully constrained least squares linear spectral mixture analysis-based compression technique for hyperspectral image analysis, particularly target detection and classification. Unlike most compression techniques that directly deal with image gray levels, the proposed compression approach generates the abundance fractional images of potential targets present in an image scene and then encodes these fractional images so as to achieve data compression. Since the vital information used for image analysis is generally preserved and retained in the abundance fractional images, the loss of information may have very little impact on image analysis. On some occasions, it even improves analysis performance. Airborne visible infrared imaging spectrometer (AVIRIS) data experiments demonstrate that it can effectively detect and classify targets while achieving very high compression ratios.
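
    The unmixing step that produces abundance fractional images can be sketched as follows (a simplification of the paper's fully constrained method: only the sum-to-one constraint is enforced here, via a heavily weighted extra equation, and the nonnegativity constraint is omitted).

```python
# Sum-to-one constrained least-squares unmixing sketch (nonnegativity omitted).
import numpy as np

def unmix(pixels, endmembers, weight=1e3):
    """Estimate abundance fractions per pixel.
    pixels: (n, bands) spectra; endmembers: (m, bands) -> (n, m) abundances."""
    m = endmembers.shape[0]
    # Append a heavily weighted row of ones to softly enforce sum(a) == 1.
    A = np.vstack([endmembers.T, weight * np.ones((1, m))])
    out = []
    for p in pixels:
        b = np.concatenate([p, [weight]])   # matching right-hand side
        a, *_ = np.linalg.lstsq(A, b, rcond=None)
        out.append(a)
    return np.array(out)
```

    Each column of the result, reshaped to the scene's spatial dimensions, is one abundance fractional image; the paper's scheme then encodes those few fractional images instead of the full hyperspectral cube.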

  4. Low-cost image analysis system

    SciTech Connect

    Lassahn, G.D.

    1995-01-01

    The author has developed an Automatic Target Recognition system based on parallel processing using transputers. This approach gives a powerful, fast image processing system at relatively low cost. The system scans multi-sensor (e.g., several infrared bands) image data to find any identifiable target, such as a physical object or a type of vegetation.

  5. Whole-breast irradiation: a subgroup analysis of criteria to stratify for prone position treatment

    SciTech Connect

    Ramella, Sara; Trodella, Lucio; Ippolito, Edy; Fiore, Michele; Cellini, Francesco; Stimato, Gerardina; Gaudino, Diego; Greco, Carlo; Ramponi, Sara; Cammilluzzi, Eugenio; Cesarini, Claudio; Piermattei, Angelo; Cesario, Alfredo; D'Angelillo, Rolando Maria

    2012-07-01

    To select, among breast cancer patients and according to breast volume size, those who may benefit from 3D conformal radiotherapy applied with a prone-position technique after conservative surgery. Thirty-eight patients with early-stage breast cancer were grouped according to the target volume (TV) measured in the supine position: small (≤400 mL), medium (400-700 mL), and large (≥700 mL). An ad hoc designed and built device was used for the prone set-up to displace the contralateral breast away from the tangential field borders. All patients underwent treatment-planning computed tomography in both the supine and prone positions. Dosimetric data exploring dose distribution and the volume of normal tissue irradiated were calculated for each patient in both positions. The homogeneity index, hot-spot areas, the maximum dose, and the lung constraints were significantly reduced in the prone position (p < 0.05). The maximum heart distance and the V5Gy did not vary consistently between the 2 positions (p = 0.06 and p = 0.7, respectively). The number of necessary monitor units was significantly higher in the supine position (312 vs. 232, p < 0.0001). The subgroup analysis pointed out the advantage in lung sparing in all TV groups (small, medium, and large) for all the evaluated dosimetric constraints (central lung distance, maximum lung distance, and V5Gy; p < 0.0001). In the small TV group, a dose reduction in nontarget areas of 22% in the prone position was detected (p = 0.056); in the medium and large TV groups, the difference was about -10% (p = NS). The decrease in hot-spot areas in nontarget tissues was 73%, 47%, and 80% for small, medium, and large TVs in the prone position, respectively. Although prone breast radiotherapy is normally proposed for patients with large breasts, this study gives evidence of dosimetric benefit in all patient subgroups irrespective of breast volume size.

  6. Analysis of deaths in patients awaiting heart transplantation: impact on patient selection criteria.

    PubMed Central

    Haywood, G. A.; Rickenbacher, P. R.; Trindade, P. T.; Gullestad, L.; Jiang, J. P.; Schroeder, J. S.; Vagelos, R.; Oyer, P.; Fowler, M. B.

    1996-01-01

    OBJECTIVE: To analyse the clinical characteristics of patients who died on the Stanford heart transplant waiting list and to develop a method for risk stratifying status 2 patients (outpatients). METHODS: Data were reviewed from all patients over 18 years, excluding retransplants, who were accepted for heart transplantation over an eight year period from 1986 to 1994. RESULTS: 548 patients were accepted for heart transplantation; 53 died on the waiting list, and 52 survived on the waiting list for over one year. On multivariate analysis only peak oxygen consumption (peak VO2: 11.7 (SD 2.7) v 15.1 (5.2) ml/kg/min, P = 0.02) and cardiac output (3.97 (1.03) v 4.79 (1.06) litres/min, P = 0.04) were found to be independent prognostic risk factors. Peak VO2 and cardiac index (CI) were then analysed in the last 141 consecutive patients accepted for cardiac transplantation. All deaths and 88% of the deteriorations to status 1 on the waiting list occurred in patients with either a CI < 2.0 or a VO2 < 12. Of those with a CI < 2.0 and a VO2 < 12, 38% died or deteriorated to status 1 in the first year on the waiting list. Patients with CI >= 2.0 and VO2 >= 12 all survived throughout follow-up. Using a Cox proportional hazards model with CI and peak VO2 as covariates, tables were constructed predicting the chance of surviving for (a) 60 days and (b) 1 year on the waiting list. CONCLUSIONS: These data provide a basis for risk stratification of status 2 patients on the heart transplant waiting list. PMID:8665337
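
    The resulting stratification rule can be encoded directly (an illustrative sketch based on the cutoffs reported above; the function name and the 'intermediate' label for patients failing only one cutoff are ours, not the study's terminology).

```python
# Illustrative encoding of the CI / peak VO2 cutoffs reported in the study.

def risk_group(cardiac_index, peak_vo2):
    """Classify a status 2 candidate by the reported cutoffs:
    CI in L/min/m^2, peak VO2 in ml/kg/min."""
    low_ci = cardiac_index < 2.0
    low_vo2 = peak_vo2 < 12.0
    if low_ci and low_vo2:
        return "high risk"          # 38% died or deteriorated within 1 year
    if low_ci or low_vo2:
        return "intermediate risk"  # fails one cutoff
    return "low risk"               # all such patients survived follow-up
```

    The study's Cox model with these two covariates extends this categorical rule to continuous survival-probability tables for 60 days and 1 year on the waiting list.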

  7. Whole-slide imaging and automated image analysis: considerations and opportunities in the practice of pathology.

    PubMed

    Webster, J D; Dunstan, R W

    2014-01-01

    Digital pathology, the practice of pathology using digitized images of pathologic specimens, has been transformed in recent years by the development of whole-slide imaging systems, which allow for the evaluation and interpretation of digital images of entire histologic sections. Applications of whole-slide imaging include rapid transmission of pathologic data for consultations and collaborations, standardization and distribution of pathologic materials for education, tissue specimen archiving, and image analysis of histologic specimens. Histologic image analysis allows for the acquisition of objective measurements of histomorphologic, histochemical, and immunohistochemical properties of tissue sections, increasing both the quantity and quality of data obtained from histologic assessments. Currently, numerous histologic image analysis software solutions are commercially available. Choosing the appropriate solution depends on considerations of the investigative question, computer programming and image analysis expertise, and cost. However, all studies using histologic image analysis require careful consideration of preanalytical variables, such as tissue collection, fixation, and processing, and of experimental design, including sample selection, controls, reference standards, and the variables being measured. The fields of digital pathology and histologic image analysis are continuing to evolve, and their potential impact on pathology is still growing. These methodologies will increasingly transform the practice of pathology, allowing it to mature toward a quantitative science. However, this maturation requires pathologists to be at the forefront of the process, ensuring their appropriate application and the validity of their results. Therefore, histologic image analysis and the field of pathology should co-evolve, creating a symbiotic relationship that results in high-quality, reproducible, objective data.

  8. An image analysis system for near-infrared (NIR) fluorescence lymph imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Jingdan; Zhou, Shaohua Kevin; Xiang, Xiaoyan; Rasmussen, John C.; Sevick-Muraca, Eva M.

    2011-03-01

    Quantitative analysis of lymphatic function is crucial for understanding the lymphatic system and diagnosing the associated diseases. Recently, a near-infrared (NIR) fluorescence imaging system was developed for real-time imaging of lymphatic propulsion following intradermal injection of a microdose of a NIR fluorophore distal to the lymphatics of interest. However, the previous analysis software [3, 4] is underdeveloped, requiring extensive time and effort to analyze a NIR image sequence. In this paper, we develop a number of image processing techniques to automate the data analysis workflow, including an object-tracking algorithm to stabilize the subject and remove motion artifacts, an image representation named the flow map to characterize lymphatic flow more reliably, and an automatic algorithm to compute lymph velocity and frequency of propulsion. By integrating all these techniques into one system, the analysis workflow significantly reduces the amount of required user interaction and improves the reliability of the measurement.

  9. Carotid plaque characterization using CT and MRI scans for synergistic image analysis

    NASA Astrophysics Data System (ADS)

    Getzin, Matthew; Xu, Yiqin; Rao, Arhant; Madi, Saaussan; Bahadur, Ali; Lennartz, Michelle R.; Wang, Ge

    2014-09-01

    Noninvasive determination of plaque vulnerability has been a holy grail of medical imaging. Despite advances in tomographic technologies, there is currently no effective way to identify vulnerable atherosclerotic plaques with high sensitivity and specificity. Computed tomography (CT) and magnetic resonance imaging (MRI) are widely used, but neither provides sufficient information on plaque properties. Thus, we are motivated to combine CT and MRI imaging to determine whether the composite information can better reflect the histological determination of plaque vulnerability. Two human endarterectomy specimens (1 symptomatic carotid and 1 stable femoral) were imaged using Scanco Medical Viva CT40 and Bruker Pharmascan 16 cm 7T horizontal MRI/MRS systems. μCT scans were done at 55 kVp and a tube current of 70 mA. Samples underwent RARE-VTR and MSME pulse sequences to measure T1 values, T2 values, and proton density. The specimens were processed for histology and scored for vulnerability using the American Heart Association criteria. Single-modality analyses were performed through segmentation of key imaging biomarkers (i.e., calcification and lumen), image registration, measurement of the fibrous capsule, and multi-component T1 and T2 decay modeling. Feature differences were analyzed between the unstable and stable controls, i.e., the symptomatic carotid and the stable femoral plaque, respectively. By building on the techniques used in this study, synergistic CT+MRI analysis may provide a promising solution for plaque characterization in vivo.

  10. Using Multi-Criteria Analysis for the Study of Human Impact on Agro-Forestry Ecosystem in the Region of Khenchela (algeria)

    NASA Astrophysics Data System (ADS)

    Bouzekri, A.; Benmessaoud, H.

    2016-06-01

    The objective of this work is to study and analyze the human impact on the agro-forestry-pastoral ecosystem of the Khenchela region through the application of multi-criteria analysis methods integrated with geographic information systems. Our methodology is based on a weighted linear combination of four criteria representing proximity to roads, urban areas, water resources, and agricultural land. The results show the effect of urbanization and socio-economic activity on the degradation of the physical environment, and indicate that 32% of the total area is very sensitive to human impact.

  11. Comparative analysis of image classification methods for automatic diagnosis of ophthalmic images

    NASA Astrophysics Data System (ADS)

    Wang, Liming; Zhang, Kai; Liu, Xiyang; Long, Erping; Jiang, Jiewei; An, Yingying; Zhang, Jia; Liu, Zhenzhen; Lin, Zhuoling; Li, Xiaoyan; Chen, Jingjing; Cao, Qianzhong; Li, Jing; Wu, Xiaohang; Wang, Dongni; Li, Wangting; Lin, Haotian

    2017-01-01

    There are many image classification methods, but it remains unclear which are most helpful for analyzing and intelligently identifying ophthalmic images. We selected representative slit-lamp images, which show the complexity of ocular images, as research material to compare image classification algorithms for diagnosing ophthalmic diseases. To facilitate this study, several feature extraction algorithms and classifiers were combined to automatically diagnose pediatric cataract on the same dataset, and their performance was compared using multiple criteria. This comparative study reveals the general characteristics of the existing methods for automatic identification of ophthalmic images and provides new insights into their strengths and shortcomings. The most relevant methods (local binary pattern + SVM, wavelet transformation + SVM) achieve an average accuracy of 87% and can be adopted in specific situations to aid doctors in preliminary disease screening. Furthermore, some methods requiring fewer computational resources and less time could be applied in remote places or on mobile devices to assist individuals in understanding the condition of their bodies. In addition, this work should help to accelerate the development of innovative approaches and the application of these methods to assist doctors in diagnosing ophthalmic disease.

  12. Comparative analysis of image classification methods for automatic diagnosis of ophthalmic images

    PubMed Central

    Wang, Liming; Zhang, Kai; Liu, Xiyang; Long, Erping; Jiang, Jiewei; An, Yingying; Zhang, Jia; Liu, Zhenzhen; Lin, Zhuoling; Li, Xiaoyan; Chen, Jingjing; Cao, Qianzhong; Li, Jing; Wu, Xiaohang; Wang, Dongni; Li, Wangting; Lin, Haotian

    2017-01-01

    There are many image classification methods, but it remains unclear which are most helpful for analyzing and intelligently identifying ophthalmic images. We selected representative slit-lamp images, which show the complexity of ocular images, as research material to compare image classification algorithms for diagnosing ophthalmic diseases. To facilitate this study, several feature extraction algorithms and classifiers were combined to automatically diagnose pediatric cataract on the same dataset, and their performance was compared using multiple criteria. This comparative study reveals the general characteristics of the existing methods for automatic identification of ophthalmic images and provides new insights into their strengths and shortcomings. The most relevant methods (local binary pattern + SVM, wavelet transformation + SVM) achieve an average accuracy of 87% and can be adopted in specific situations to aid doctors in preliminary disease screening. Furthermore, some methods requiring fewer computational resources and less time could be applied in remote places or on mobile devices to assist individuals in understanding the condition of their bodies. In addition, this work should help to accelerate the development of innovative approaches and the application of these methods to assist doctors in diagnosing ophthalmic disease. PMID:28139688
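
    One of the feature extractors named above, the local binary pattern (LBP), can be sketched as follows (the basic 8-neighbour variant on a plain 2D grid; the study's exact LBP configuration and the SVM classification stage are not reproduced here).

```python
# Basic 8-neighbour local binary pattern (LBP) over a 2D grid.

def lbp_codes(image):
    """Return 8-bit LBP codes for the interior pixels of a 2D grid:
    each neighbour >= centre contributes one bit to the code."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    rows, cols = len(image), len(image[0])
    codes = []
    for r in range(1, rows - 1):
        row = []
        for c in range(1, cols - 1):
            centre = image[r][c]
            code = 0
            for bit, (dr, dc) in enumerate(offsets):
                if image[r + dr][c + dc] >= centre:
                    code |= 1 << bit
            row.append(code)
        codes.append(row)
    return codes
```

    In a pipeline like the one compared here, a histogram of these codes over the image (or over image patches) would form the feature vector fed to the SVM.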

  13. Cost-benefit analysis of selective screening criteria for Chlamydia trachomatis infection in women attending Colorado family planning clinics.

    PubMed

    Humphreys, J T; Henneberry, J F; Rickard, R S; Beebe, J L

    1992-01-01

    Women attending family planning clinics in Colorado during 1988 were screened for Chlamydia trachomatis infection by enzyme immunoassay (EIA, Chlamydiazyme, Abbott Laboratories; Abbott Park, IL). Cervical specimens from 11,793 women attending 22 family planning clinics were analyzed. Patient history and physical exams were used to assess risk factors for infection. A total of 913 individuals (7.7%) had positive culture results for C. trachomatis. Multivariate analysis showed that infection was significantly related to endocervical bleeding, cervical mucopurulent discharge, a new sexual partner in the last 3 months or multiple previous sexual partners (greater than 3) in the last year, pregnancy, the use of oral contraceptives, and age. Increased odds ratios were observed for the combination of endocervical bleeding and mucopurulent discharge, and for a sexual history that included new partners over the previous year as well as the most recent 3 months. A combination of these criteria was used to selectively screen women attending Colorado family planning clinics on an ongoing basis. A cost-benefit analysis employing a previously reported model showed a significant financial benefit associated with universal screening over either selective screening or no screening for C. trachomatis in this population.

  14. Detailed Analysis of Criteria and Particle Emissions from a Very Large Crude Carrier Using a Novel ECA Fuel.

    PubMed

    Gysel, Nicholas R; Welch, William A; Johnson, Kent; Miller, Wayne; Cocker, David R

    2017-02-07

    Ocean going vessels (OGVs) operating within emission control areas (ECA) are required to use fuels with ≤0.1 wt % sulfur. Until recently, only distillate fuels could meet this limit. Refiners have now created a novel low-sulfur heavy fuel oil (LSHFO) that meets the limit, raising the question of whether nitric oxide (NOx) and particulate matter (PM) emissions are the same for the two fuels. This project characterized criteria pollutants and undertook a detailed analysis of PM emissions from a very large crude oil carrier (VLCC) using a distillate ECA fuel (MGO) and the novel LSHFO. Results showed NOx emission factors were ∼5% higher with MGO than LSHFO. PM2.5 emission factors were ∼3 times higher with LSHFO than MGO, while both were below values reported by Lloyds, U.S. EPA and CARB. A detailed analysis of the PM revealed it was >90% organic carbon (OC) for both fuels. Elemental carbon (EC) and soot measured with an AVL microsoot sensor (MSS) reflected black carbon. PM size distributions showed unimodal peaks for both MGO (20-30 nm) and LSHFO (30-50 nm). Particle number (PN) emissions were 28% and 17% higher with the PPS-M compared to the SMPS for LSHFO and MGO, respectively.

  15. Evaluating community investments in the mining sector using multi-criteria decision analysis to integrate SIA with business planning

    SciTech Connect

    Esteves, A.M.

    2008-05-15

    Gaining senior management's commitment to long-term social development projects, which are characterised by uncertainty and complexity, is made easier if projects are shown to benefit the site's strategic goals. However, even though the business case for community investment may have been accepted at a general level, as a strategy for competitive differentiation, risk mitigation and a desire to deliver - and to be seen to deliver - a 'net benefit' to affected communities, mining operations still face implementation challenges. Case study research on mining companies, including interviews with social investment decision-makers, has assisted in developing the Social Investment Decision Analysis Tool (SIDAT), a decision model for evaluating social projects in order to create value for both the company and the community. Multi-criteria decision analysis techniques that integrate business planning processes with social impact assessment have proved useful in assisting mining companies to think beyond the traditional drivers (i.e. seeking access to required lands and peaceful relations with neighbours) to the broader issues of how they can meet their business goals and contribute to sustainable development in the regions in which they operate.
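    The core of any multi-criteria decision analysis is a scoring rule that trades off weighted criteria. The abstract does not describe SIDAT's internals, so the following is only a minimal weighted-sum sketch; the criteria, weights and project ratings are hypothetical:

```python
def weighted_score(option, weights):
    """Weighted-sum score for one option's criterion ratings (0-10 scale)."""
    return sum(weights[c] * option[c] for c in weights)

# Hypothetical criteria and weights for screening social investment projects.
weights = {"business_value": 0.4, "community_benefit": 0.4, "risk_reduction": 0.2}
projects = {
    "local_training": {"business_value": 6, "community_benefit": 9, "risk_reduction": 5},
    "road_upgrade":   {"business_value": 8, "community_benefit": 5, "risk_reduction": 7},
}

# Rank projects by descending score.
ranked = sorted(projects, key=lambda p: weighted_score(projects[p], weights), reverse=True)
print(ranked)
```

    Real MCDA tools add sensitivity analysis over the weights, which is where the "uncertainty and complexity" mentioned above is usually handled.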

  16. Bayesian principal geodesic analysis for estimating intrinsic diffeomorphic image variability.

    PubMed

    Zhang, Miaomiao; Fletcher, P Thomas

    2015-10-01

    In this paper, we present a generative Bayesian approach for estimating the low-dimensional latent space of diffeomorphic shape variability in a population of images. We develop a latent variable model for principal geodesic analysis (PGA) that provides a probabilistic framework for factor analysis in the space of diffeomorphisms. A sparsity prior in the model results in automatic selection of the number of relevant dimensions by driving unnecessary principal geodesics to zero. To infer model parameters, including the image atlas, principal geodesic deformations, and the effective dimensionality, we introduce an expectation maximization (EM) algorithm. We evaluate our proposed model on 2D synthetic data and the 3D OASIS brain database of magnetic resonance images, and show that the automatically selected latent dimensions from our model are able to reconstruct unobserved testing images with lower error than both linear principal component analysis (LPCA) in the image space and tangent space principal component analysis (TPCA) in the diffeomorphism space.

  17. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    PubMed Central

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  18. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    PubMed

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets.

  19. Autonomous image data reduction by analysis and interpretation

    NASA Technical Reports Server (NTRS)

    Eberlein, Susan; Yates, Gigi; Ritter, Niles

    1988-01-01

    Image data is a critical component of the scientific information acquired by space missions. Compression of image data is required due to the limited bandwidth of the data transmission channel and limited memory space on the acquisition vehicle. This need becomes more pressing when dealing with multispectral data where each pixel may comprise 300 or more bytes. An autonomous, real time, on-board image analysis system for an exploratory vehicle such as a Mars Rover is developed. The completed system will be capable of interpreting image data to produce reduced representations of the image, and of making decisions regarding the importance of data based on current scientific goals. Data from multiple sources, including stereo images, color images, and multispectral data, are fused into single image representations. Analysis techniques emphasize artificial neural networks. Clusters are described by their outlines and class values. These analysis and compression techniques are coupled with decision making capacity for determining importance of each image region. Areas determined to be noise or uninteresting can be discarded in favor of more important areas. Thus limited resources for data storage and transmission are allocated to the most significant images.

  20. Autonomous image data reduction by analysis and interpretation

    NASA Astrophysics Data System (ADS)

    Eberlein, Susan; Yates, Gigi; Ritter, Niles

    Image data is a critical component of the scientific information acquired by space missions. Compression of image data is required due to the limited bandwidth of the data transmission channel and limited memory space on the acquisition vehicle. This need becomes more pressing when dealing with multispectral data where each pixel may comprise 300 or more bytes. An autonomous, real time, on-board image analysis system for an exploratory vehicle such as a Mars Rover is developed. The completed system will be capable of interpreting image data to produce reduced representations of the image, and of making decisions regarding the importance of data based on current scientific goals. Data from multiple sources, including stereo images, color images, and multispectral data, are fused into single image representations. Analysis techniques emphasize artificial neural networks. Clusters are described by their outlines and class values. These analysis and compression techniques are coupled with decision-making capacity for determining importance of each image region. Areas determined to be noise or uninteresting can be discarded in favor of more important areas. Thus limited resources for data storage and transmission are allocated to the most significant images.

  1. Histology image analysis for carcinoma detection and grading

    PubMed Central

    He, Lei; Long, L. Rodney; Antani, Sameer; Thoma, George R.

    2012-01-01

    This paper presents an overview of the image analysis techniques in the domain of histopathology, specifically for the objective of automated carcinoma detection and classification. As in other biomedical imaging areas such as radiology, many computer-assisted diagnosis (CAD) systems have been implemented to aid histopathologists and clinicians in cancer diagnosis and research; these systems attempt to significantly reduce the labor and subjectivity of traditional manual interpretation of histology images. The task of automated histology image analysis is usually not simple due to the unique characteristics of histology imaging, including the variability in image preparation techniques, clinical interpretation protocols, and the complex structures and very large size of the images themselves. In this paper we discuss those characteristics, provide relevant background information about slide preparation and interpretation, and review the application of digital image processing techniques to the field of histology image analysis. In particular, emphasis is given to state-of-the-art image segmentation methods for feature extraction and disease classification. Four major carcinomas of cervix, prostate, breast, and lung are selected to illustrate the functions and capabilities of existing CAD systems. PMID:22436890

  2. An exploratory analysis of Indiana and Illinois biotic assemblage data in support of state nutrient criteria development

    EPA Science Inventory

    EPA recognizes the importance of nutrient criteria in protecting designated uses from eutrophication effects associated with elevated phosphorus and nitrogen in streams and has worked with states over the past 12 years to assist them in developing nutrient criteria. Towards that ...

  3. Identifying radiotherapy target volumes in brain cancer by image analysis.

    PubMed

    Cheng, Kun; Montgomery, Dean; Feng, Yang; Steel, Robin; Liao, Hanqing; McLaren, Duncan B; Erridge, Sara C; McLaughlin, Stephen; Nailon, William H

    2015-10-01

    To establish the optimal radiotherapy fields for treating brain cancer patients, the tumour volume is often outlined on magnetic resonance (MR) images, where the tumour is clearly visible, and mapped onto computerised tomography images used for radiotherapy planning. This process requires considerable clinical experience and is time consuming, a burden that will continue to grow as more complex image sequences are used. Here, the potential of image analysis techniques for automatically identifying the radiation target volume on MR images, and thereby assisting clinicians with this difficult task, was investigated. A gradient-based level set approach was applied to the MR images of five patients with grades II, III and IV malignant cerebral glioma. The relationship between the target volumes produced by image analysis and those produced by a radiation oncologist was also investigated. The contours produced by image analysis were compared with the contours produced by an oncologist and used for treatment. In 93% of cases, the Dice similarity coefficient was found to be between 60 and 80%. This feasibility study demonstrates that image analysis has the potential for automatic outlining in the management of brain cancer patients; however, more testing and validation on a much larger patient cohort are required.
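    The Dice similarity coefficient reported above is straightforward to compute from two binary contour masks; the masks below are hypothetical:

```python
def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks (flat lists of 0/1)."""
    intersection = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    size_a, size_b = sum(a), sum(b)
    if size_a + size_b == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / (size_a + size_b)

auto   = [1, 1, 1, 0, 0, 1]  # hypothetical automated contour mask
manual = [1, 1, 0, 0, 1, 1]  # hypothetical oncologist contour mask
print(dice_coefficient(auto, manual))
```

    A value of 1 indicates perfect overlap; the 60-80% range reported in the study corresponds to moderate agreement between automated and manual contours.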

  4. Identifying radiotherapy target volumes in brain cancer by image analysis

    PubMed Central

    Cheng, Kun; Montgomery, Dean; Feng, Yang; Steel, Robin; Liao, Hanqing; McLaren, Duncan B.; Erridge, Sara C.; McLaughlin, Stephen

    2015-01-01

    To establish the optimal radiotherapy fields for treating brain cancer patients, the tumour volume is often outlined on magnetic resonance (MR) images, where the tumour is clearly visible, and mapped onto computerised tomography images used for radiotherapy planning. This process requires considerable clinical experience and is time consuming, a burden that will continue to grow as more complex image sequences are used. Here, the potential of image analysis techniques for automatically identifying the radiation target volume on MR images, and thereby assisting clinicians with this difficult task, was investigated. A gradient-based level set approach was applied to the MR images of five patients with grades II, III and IV malignant cerebral glioma. The relationship between the target volumes produced by image analysis and those produced by a radiation oncologist was also investigated. The contours produced by image analysis were compared with the contours produced by an oncologist and used for treatment. In 93% of cases, the Dice similarity coefficient was found to be between 60 and 80%. This feasibility study demonstrates that image analysis has the potential for automatic outlining in the management of brain cancer patients; however, more testing and validation on a much larger patient cohort are required. PMID:26609418

  5. Research of second harmonic generation images based on texture analysis

    NASA Astrophysics Data System (ADS)

    Liu, Yao; Li, Yan; Gong, Haiming; Zhu, Xiaoqin; Huang, Zufang; Chen, Guannan

    2014-09-01

    Texture analysis plays a crucial role in identifying objects or regions of interest in an image. It has been applied to a variety of medical image processing tasks, ranging from the detection of disease and the segmentation of specific anatomical structures to differentiation between healthy and pathological tissues. Second harmonic generation (SHG) microscopy, a potential noninvasive tool for imaging biological tissues with reduced phototoxicity and photobleaching, has been widely used in medicine. In this paper, we clarified the principles of texture analysis, including statistical, transform, structural and model-based methods, and gave examples of its applications, reviewing studies of the technique. Moreover, we applied texture analysis to SHG images for the differentiation of human skin scar tissues. A texture analysis method based on local binary pattern (LBP) and wavelet transform was used to extract texture features of SHG images from collagen in normal and abnormal scars, and the scar SHG images were then classified into normal or abnormal ones. Compared with other texture analysis methods with respect to receiver operating characteristic analysis, LBP combined with wavelet transform was demonstrated to achieve higher accuracy. It can provide a new way for clinical diagnosis of scar types. Finally, future developments of texture analysis in SHG imaging are discussed.
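    The local binary pattern features mentioned above encode each pixel's neighbourhood as an 8-bit code; a histogram of these codes over the image then serves as the texture descriptor. A minimal sketch for a single 3x3 patch (pixel values hypothetical):

```python
def lbp_code(patch):
    """Basic 8-neighbour local binary pattern code for the centre of a 3x3 patch.

    Neighbours are read clockwise from the top-left; each contributes one bit
    (1 if >= centre), giving a code in 0..255.
    """
    centre = patch[1][1]
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (y, x) in enumerate(coords):
        if patch[y][x] >= centre:
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))
```

    In a full pipeline such codes are histogrammed per image (or per wavelet subband) and the histograms are fed to a classifier such as an SVM.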

  6. Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge-Based Image Analysis.

    DTIC Science & Technology

    1988-01-19

    approach for the analysis of aerial images. In this approach image analysis is performed at three levels of abstraction, namely iconic or low-level image analysis, symbolic or medium-level image analysis, and semantic or high-level image analysis. Domain-dependent knowledge about prototypical urban

  7. Pattern Recognition Software and Techniques for Biological Image Analysis

    PubMed Central

    Shamir, Lior; Delaney, John D.; Orlov, Nikita; Eckley, D. Mark; Goldberg, Ilya G.

    2010-01-01

    The increasing prevalence of automated image acquisition systems is enabling new types of microscopy experiments that generate large image datasets. However, there is a perceived lack of robust image analysis systems required to process these diverse datasets. Most automated image analysis systems are tailored for specific types of microscopy, contrast methods, probes, and even cell types. This imposes significant constraints on experimental design, limiting their application to the narrow set of imaging methods for which they were designed. One of the approaches to address these limitations is pattern recognition, which was originally developed for remote sensing, and is increasingly being applied to the biology domain. This approach relies on training a computer to recognize patterns in images rather than developing algorithms or tuning parameters for specific image processing tasks. The generality of this approach promises to enable data mining in extensive image repositories, and provide objective and quantitative imaging assays for routine use. Here, we provide a brief overview of the technologies behind pattern recognition and its use in computer vision for biological and biomedical imaging. We list available software tools that can be used by biologists and suggest practical experimental considerations to make the best use of pattern recognition techniques for imaging assays. PMID:21124870

  8. Pattern recognition software and techniques for biological image analysis.

    PubMed

    Shamir, Lior; Delaney, John D; Orlov, Nikita; Eckley, D Mark; Goldberg, Ilya G

    2010-11-24

    The increasing prevalence of automated image acquisition systems is enabling new types of microscopy experiments that generate large image datasets. However, there is a perceived lack of robust image analysis systems required to process these diverse datasets. Most automated image analysis systems are tailored for specific types of microscopy, contrast methods, probes, and even cell types. This imposes significant constraints on experimental design, limiting their application to the narrow set of imaging methods for which they were designed. One of the approaches to address these limitations is pattern recognition, which was originally developed for remote sensing, and is increasingly being applied to the biology domain. This approach relies on training a computer to recognize patterns in images rather than developing algorithms or tuning parameters for specific image processing tasks. The generality of this approach promises to enable data mining in extensive image repositories, and provide objective and quantitative imaging assays for routine use. Here, we provide a brief overview of the technologies behind pattern recognition and its use in computer vision for biological and biomedical imaging. We list available software tools that can be used by biologists and suggest practical experimental considerations to make the best use of pattern recognition techniques for imaging assays.

  9. Trajectory analysis for magnetic particle imaging.

    PubMed

    Knopp, T; Biederer, S; Sattel, T; Weizenecker, J; Gleich, B; Borgert, J; Buzug, T M

    2009-01-21

    Recently a new imaging technique called magnetic particle imaging was proposed. The method uses the nonlinear response of magnetic nanoparticles when a time varying magnetic field is applied. Spatial encoding is achieved by moving a field-free point through an object of interest while the field strength in the vicinity of the point is high. A resolution in the submillimeter range is provided even for fast data acquisition sequences. In this paper, a simulation study is performed on different trajectories moving the field-free point through the field of view. The purpose is to provide mandatory information for the design of a magnetic particle imaging scanner. Trajectories are compared with respect to density, speed and image quality when applied in data acquisition. Since simulation of the involved physics is a time-demanding task, an efficient implementation utilizing caching techniques is also presented.
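    Trajectory density comparisons of the kind described can be prototyped by sampling candidate field-free-point paths and measuring how much of the field of view they cover. The sketch below uses Lissajous curves and a coarse grid-coverage proxy; the frequencies and grid size are illustrative, not taken from the paper:

```python
import math

def lissajous(n_points, fx, fy):
    """Sample a Lissajous trajectory for the field-free point over one period."""
    return [(math.sin(2 * math.pi * fx * t / n_points),
             math.sin(2 * math.pi * fy * t / n_points))
            for t in range(n_points)]

def coverage(points, bins=8):
    """Coverage proxy: fraction of cells in a coarse grid visited by the path."""
    cells = {(min(int((x + 1) / 2 * bins), bins - 1),
              min(int((y + 1) / 2 * bins), bins - 1)) for x, y in points}
    return len(cells) / (bins * bins)

# A near-unity frequency ratio (16:17) densely fills the field of view,
# while equal frequencies (16:16, zero phase) collapse onto a diagonal line.
print(coverage(lissajous(2000, 16, 17)) > coverage(lissajous(2000, 16, 16)))
```

    A real scanner-design study would score trajectories by reconstruction quality and acquisition speed as well, not only by geometric density.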

  10. Introducing PLIA: Planetary Laboratory for Image Analysis

    NASA Astrophysics Data System (ADS)

    Peralta, J.; Hueso, R.; Barrado, N.; Sánchez-Lavega, A.

    2005-08-01

    We present a graphical software tool developed under IDL software to navigate, process and analyze planetary images. The software has a complete Graphical User Interface and is cross-platform. It can also run under the IDL Virtual Machine without the need to own an IDL license. The set of tools included allow image navigation (orientation, centring and automatic limb determination), dynamical and photometric atmospheric measurements (winds and cloud albedos), cylindrical and polar projections, as well as image treatment under several procedures. Being written in IDL, it is modular and easy to modify and grow for adding new capabilities. We show several examples of the software capabilities with Galileo-Venus observations: Image navigation, photometrical corrections, wind profiles obtained by cloud tracking, cylindrical projections and cloud photometric measurements. Acknowledgements: This work has been funded by Spanish MCYT PNAYA2003-03216, fondos FEDER and Grupos UPV 15946/2004. R. Hueso acknowledges a post-doc fellowship from Gobierno Vasco.

  11. Radar images analysis for scattering surfaces characterization

    NASA Astrophysics Data System (ADS)

    Piazza, Enrico

    1998-10-01

    According to the different problems and techniques related to the detection and recognition of airplanes and vehicles moving on the airport surface, the present work mainly deals with the processing of images gathered by a high-resolution radar sensor. The radar images used to test the investigated algorithms are sequences of images obtained in field experiments carried out by the Electronic Engineering Department of the University of Florence. The radar is a Ka-band radar operating at the 'Leonardo da Vinci' Airport in Fiumicino (Rome). The images obtained from the radar scan converter are digitized and put into x, y (pixel) coordinates. For correct matching, the images are converted to true geometrical coordinates (meters) on the basis of fixed points on an airport map. By correlating the airplane 2-D multipoint template with actual radar images, the value of the signal at the points covered by the template can be extracted. Results for many observations show a typical response for the main sections of the fuselage and the wings. For the fuselage, the back-scattered echo is low at the prow, becomes larger near the center of the aircraft, and then decreases again toward the tail. For the wings, the signal grows with a fairly regular slope from the fuselage to the tips, where the signal is strongest.

  12. A critical assessment of the performance criteria in confirmatory analysis for veterinary drug residue analysis using mass spectrometric detection in selected reaction monitoring mode.

    PubMed

    Berendsen, Bjorn J A; Meijer, Thijs; Wegh, Robin; Mol, Hans G J; Smyth, Wesley G; Armstrong Hewitt, S; van Ginkel, Leen; Nielen, Michel W F

    2016-05-01

    Besides the identification point system to assure adequate set-up of instrumentation, European Commission Decision 2002/657/EC includes performance criteria regarding relative ion abundances in mass spectrometry and chromatographic retention time. In confirmatory analysis, the relative abundance of two product ions acquired in selected reaction monitoring (SRM) mode, the ion ratio, should be within certain ranges for confirmation of the identity of a substance. The acceptable tolerance of the ion ratio varies with the relative abundance of the two product ions, and for retention time CD 2002/657/EC allows a tolerance of 5%. Because of rapid technical advances in analytical instruments and new approaches applied in the field of contaminant testing in food products (multi-compound and multi-class methods), a critical assessment of these criteria is justified. In this study a large number of representative, though challenging, sample extracts were prepared, including muscle, urine, milk and liver, spiked with 100 registered and banned veterinary drugs at levels ranging from 0.5 to 100 µg/kg. These extracts were analysed in SRM mode under different chromatographic conditions and on mass spectrometers from different vendors. In the initial study, robust data were collected using four different instrumental set-ups. Based on a unique and highly relevant data set, consisting of over 39 000 data points, the ion ratio and retention time criteria for applicability in confirmatory analysis were assessed. The outcomes were verified based on a collaborative trial including laboratories from all over the world. It was concluded that the ion ratio deviation is not related to the value of the ion ratio, but rather to the intensity of the lowest product ion. Therefore a fixed ion ratio deviation tolerance of 50% (relative) is proposed, which also is applicable for compounds present at sub-ppb levels or having poor ionisation efficiency. Furthermore, it was observed that retention time
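    The sliding ion-ratio tolerances of CD 2002/657/EC referred to above (wider tolerance as the reference relative abundance falls, for LC-MS in SRM mode) can be sketched as a simple check. The tolerance bands below follow the Decision as summarised here; the example ratios are hypothetical:

```python
def ec_2002_657_tolerance(ref_abundance_pct):
    """Maximum permitted relative deviation (%) of the ion ratio under
    CD 2002/657/EC for LC-MS, as a function of the reference relative abundance."""
    if ref_abundance_pct > 50:
        return 20.0
    if ref_abundance_pct > 20:
        return 25.0
    if ref_abundance_pct > 10:
        return 30.0
    return 50.0

def ion_ratio_confirmed(ref_ratio_pct, sample_ratio_pct):
    """True if the sample ion ratio falls within the EC tolerance around the reference."""
    tol = ec_2002_657_tolerance(ref_ratio_pct)
    deviation = abs(sample_ratio_pct - ref_ratio_pct) / ref_ratio_pct * 100.0
    return deviation <= tol

# Hypothetical example: reference ratio 40%, sample ratio 52% (30% relative deviation).
print(ion_ratio_confirmed(40.0, 52.0))
```

    Under the study's proposed fixed 50% (relative) tolerance the same example would pass, which illustrates the practical difference between the two schemes.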

  13. Criteria for structural test

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The results of a study to define criteria and techniques of design, analysis and test which permit the use of a single major structural test article for performing dynamic, fatigue, and static testing are presented. The criteria developed are applicable to both space vehicles and aircraft structures operating in the subsonic or supersonic regime. The feasibility of such an approach was demonstrated by defining test interactions, compatibilities and incompatibilities between the three different types of tests. The results of the study indicate that the single test article concept is feasible with a testing sequence of dynamic test followed by fatigue and static tests.

  14. Unsupervised analysis of small animal dynamic Cerenkov luminescence imaging

    NASA Astrophysics Data System (ADS)

    Spinelli, Antonello E.; Boschi, Federico

    2011-12-01

    Clustering analysis (CA) and principal component analysis (PCA) were applied to dynamic Cerenkov luminescence images (dCLI). In order to investigate the performance of the proposed approaches, two distinct dynamic data sets obtained by injecting mice with 32P-ATP and 18F-FDG were acquired using the IVIS 200 optical imager. The k-means clustering algorithm was applied to dCLI and implemented using Interactive Data Language (IDL) 8.1. We show that cluster analysis allows us to obtain good agreement between the clusters and the corresponding emission regions, such as the bladder, the liver, and the tumor. We also show a good correspondence between the time-activity curves of the different regions obtained by CA and by manual region-of-interest analysis on dCLI and PCA images. We conclude that CA provides an automatic unsupervised method for the analysis of preclinical dynamic Cerenkov luminescence image data.
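    The k-means clustering applied to dynamic image data groups pixels by the similarity of their time-activity curves. A self-contained sketch with hypothetical "washout" and "uptake" curves (the actual analysis used IDL on IVIS data):

```python
import random

def kmeans(curves, k, iters=50, seed=0):
    """Naive k-means over pixel time-activity curves (lists of equal length)."""
    rng = random.Random(seed)
    centroids = [list(c) for c in rng.sample(curves, k)]
    for _ in range(iters):
        # Assign each curve to its nearest centroid (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for c in curves:
            d = [sum((a - b) ** 2 for a, b in zip(c, m)) for m in centroids]
            clusters[d.index(min(d))].append(c)
        # Move each centroid to the mean of its members.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = [sum(v) / len(members) for v in zip(*members)]
    return centroids, clusters

# Hypothetical curves: a "washout" region and an "uptake" region.
washout = [[9, 6, 3, 1], [8, 6, 2, 1], [9, 5, 3, 2]]
uptake  = [[1, 3, 6, 9], [2, 3, 7, 8]]
cents, clusters = kmeans(washout + uptake, k=2)
print(sorted(len(m) for m in clusters))
```

    Averaging each cluster's member curves then yields the per-region time-activity curves compared against manual ROI analysis in the abstract.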

  15. Analysis of live cell images: Methods, tools and opportunities.

    PubMed

    Nketia, Thomas A; Sailem, Heba; Rohde, Gustavo; Machiraju, Raghu; Rittscher, Jens

    2017-02-15

    Advances in optical microscopy, biosensors and cell culturing technologies have transformed live cell imaging. Thanks to these advances, live cell imaging plays an increasingly important role in basic biology research as well as at all stages of drug development. Image analysis methods are needed to extract quantitative information from these vast and complex data sets. The aim of this review is to provide an overview of available image analysis methods for live cell imaging, in particular the required preprocessing, image segmentation, cell tracking and data visualisation methods. The opportunities provided by recent advances in machine learning, especially deep learning, and computer vision are also discussed. This review includes an overview of the different available software packages and toolkits.
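    Of the pipeline stages listed (preprocessing, segmentation, tracking, visualisation), cell tracking is the easiest to illustrate compactly. Below is a greedy nearest-neighbour linker between two frames of hypothetical cell centroids; it is a generic sketch, not any specific package's algorithm:

```python
def track_cells(frame_a, frame_b, max_dist=5.0):
    """Greedy nearest-neighbour linking of cell centroids between two frames.

    Returns {index in frame_a: index in frame_b}; cells that move farther
    than max_dist are left unlinked (treated as disappeared).
    """
    links = {}
    taken = set()
    for i, (ax, ay) in enumerate(frame_a):
        best, best_d = None, max_dist
        for j, (bx, by) in enumerate(frame_b):
            if j in taken:
                continue
            d = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            links[i] = best
            taken.add(best)
    return links

a = [(10.0, 10.0), (30.0, 12.0)]
b = [(31.0, 13.0), (11.0, 9.5)]  # cells moved slightly; detection order shuffled
print(track_cells(a, b))
```

    Production trackers replace the greedy pass with globally optimal assignment and add models for division, merging and gap closing.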

  16. Digital Image Analysis for DETECHIP(®) Code Determination.

    PubMed

    Lyon, Marcus; Wilson, Mark V; Rouhier, Kerry A; Symonsbergen, David J; Bastola, Kiran; Thapa, Ishwor; Holmes, Andrea E; Sikich, Sharmin M; Jackson, Abby

    2012-08-01

    DETECHIP(®) is a molecular sensing array used for identification of a large variety of substances. Previous methodology for the analysis of DETECHIP(®) used human vision to distinguish color changes induced by the presence of the analyte of interest. This paper describes several analysis techniques using digital images of DETECHIP(®). Both a digital camera and a flatbed desktop photo scanner were used to obtain JPEG images. Color information within these digital images was obtained through the measurement of red-green-blue (RGB) values using software such as GIMP, Photoshop and ImageJ. Several different techniques were used to evaluate these color changes. It was determined that the flatbed scanner produced the clearest and most reproducible images. Furthermore, codes obtained using a macro written for use within ImageJ showed improved consistency versus previous methods.
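    Measuring RGB values over a spot region, as described, amounts to averaging the channels over a rectangle. A minimal sketch with a hypothetical 2x2 region (a real analysis would read the pixels from a scanned JPEG via ImageJ or similar):

```python
def mean_rgb(pixels, region):
    """Mean R, G, B over a rectangular region of a row-major pixel grid.

    pixels: list of rows, each a list of (r, g, b) tuples
    region: (row0, row1, col0, col1), half-open bounds
    """
    r0, r1, c0, c1 = region
    block = [px for row in pixels[r0:r1] for px in row[c0:c1]]
    n = len(block)
    return tuple(round(sum(ch[i] for ch in block) / n, 1) for i in range(3))

# Hypothetical 2x2 "spot" region of a sensor-array image.
img = [[(200, 200, 40), (190, 210, 50)],
       [(180, 205, 45), (194, 201, 49)]]
print(mean_rgb(img, (0, 2, 0, 2)))
```

    Comparing such per-spot means before and after analyte exposure, channel by channel, yields the kind of color-change code the paper describes.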

  17. Geostationary microwave imagers detection criteria

    NASA Technical Reports Server (NTRS)

    Stacey, J. M.

    1986-01-01

    Geostationary orbit is investigated as a vantage point from which to remotely sense the surface features of the planet and its atmosphere with microwave sensors. The geometrical relationships associated with geostationary altitude are developed to produce an efficient search pattern for the detection of emitting media and metal objects. Power transfer equations are derived from first principles and explain the expected values of the signal-to-clutter ratios for the detection of aircraft, ships, and buoys and for the detection of natural features where they are manifested as cold and warm eddies. The transport of microwave power is described for modeled detection, where the direction of power flow is explained by the Zeroth and Second Laws of Thermodynamics. Mathematical expressions are derived that elucidate the detectability of natural emitting media and metal objects. Signal-to-clutter ratio comparisons are drawn among detectable objects that show relative detectability with a thermodynamic sensor and with a short-pulse radar.

  18. Geopositioning Precision Analysis of Multiple Image Triangulation Using Lro Nac Lunar Images

    NASA Astrophysics Data System (ADS)

    Di, K.; Xu, B.; Liu, B.; Jia, M.; Liu, Z.

    2016-06-01

    This paper presents an empirical analysis of the geopositioning precision of multiple image triangulation using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images at the Chang'e-3 (CE-3) landing site. Nine LROC NAC images are selected for comparative analysis of geopositioning precision. Rigorous sensor models of the images are established based on collinearity equations with interior and exterior orientation elements retrieved from the corresponding SPICE kernels. Rational polynomial coefficients (RPCs) of each image are derived by least-squares fitting using a vast number of virtual control points generated according to the rigorous sensor models. Experiments with different combinations of images are performed for comparison. The results demonstrate that the plane coordinates can achieve a precision of 0.54 m to 2.54 m, with a height precision of 0.71 m to 8.16 m, when only two images are used for three-dimensional triangulation. There is a general trend that the geopositioning precision, especially the height precision, improves as the convergence angle of the two images increases from several degrees to about 50°. However, the image matching precision should also be taken into consideration when choosing image pairs for triangulation. The precisions obtained using all nine images are 0.60 m, 0.50 m, and 1.23 m in the along-track, cross-track, and height directions, which are better than those of most combinations of two or more images. However, triangulation with fewer, well-selected images can produce better precision than using all the images.
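
The multi-image triangulation underlying this precision analysis can be illustrated with a generic least-squares intersection of viewing rays. This is a simplified stand-in for the paper's sensor-model triangulation, not the LROC NAC pipeline; the function name and ray parameterization are assumptions:

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares intersection point of several 3-D rays.

    Each ray is origin + t * direction. Minimizes the sum of squared
    perpendicular distances from the point to all rays; with more rays
    (or a larger convergence angle) the solution is better conditioned.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += P
        b += P @ np.asarray(o, float)
    return np.linalg.solve(A, b)
```

For two nearly parallel rays the matrix `A` becomes ill-conditioned, which mirrors the paper's finding that height precision degrades at small convergence angles.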

  19. Analysis of filtering techniques and image quality in pixel duplicated images

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; McLauchlan, Lifford

    2009-08-01

    When images undergo filtering operations, valuable information beyond the intended noise or frequencies can be lost due to the averaging of neighboring pixels. When the image is enlarged by duplicating pixels, such filtering effects can be reduced and more information retained, which could be critical when analyzing image content automatically. Analysis of retinal images could reveal many diseases at an early stage, as long as minor changes that depart from a normal retinal scan can be identified and enhanced. In this paper, typical filtering techniques are applied to an early-stage diabetic retinopathy image which has undergone digital pixel duplication. The same techniques are applied to the original images for comparison. The effects of filtering are then demonstrated for both pixel-duplicated and original images to show the information retention capability of pixel duplication. Image quality is computed based on published metrics. Our analysis shows that pixel duplication is effective in retaining information under smoothing operations such as mean filtering in the spatial domain, as well as lowpass and highpass filtering in the frequency domain, depending on the filter window size. Blocking effects due to image compression and pixel duplication become apparent in frequency analysis.
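
The retention effect described above is easy to reproduce. The sketch below (our own illustration, not the paper's code) duplicates pixels with nearest-neighbour replication and applies a mean filter; a single-pixel feature survives the filter noticeably better in the duplicated image:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def pixel_duplicate(image, factor=2):
    """Enlarge by integer pixel duplication (nearest-neighbour replication)."""
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

def mean_filter(image, k=3):
    """k x k mean filter over the valid region (no padding)."""
    return sliding_window_view(image, (k, k)).mean(axis=(-2, -1))
```

After 2x duplication, a lone bright pixel becomes a 2x2 block, so a 3x3 mean filter can cover four copies of it at once and the filtered response is four times stronger.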

  20. Value-Based Assessment of New Medical Technologies: Towards a Robust Methodological Framework for the Application of Multiple Criteria Decision Analysis in the Context of Health Technology Assessment.

    PubMed

    Angelis, Aris; Kanavos, Panos

    2016-05-01

    In recent years, multiple criteria decision analysis (MCDA) has emerged as a likely alternative to address shortcomings in health technology assessment (HTA) by offering a more holistic perspective to value assessment and acting as an alternative priority setting tool. In this paper, we argue that MCDA needs to subscribe to robust methodological processes related to the selection of objectives, criteria and attributes in order to be meaningful in the context of healthcare decision making and fulfil its role in value-based assessment (VBA). We propose a methodological process, based on multi-attribute value theory (MAVT) methods comprising five distinct phases, outline the stages involved in each phase and discuss their relevance in the HTA process. Importantly, criteria and attributes need to satisfy a set of desired properties, otherwise the outcome of the analysis can produce spurious results and misleading recommendations. Assuming the methodological process we propose is adhered to, the application of MCDA presents three very distinct advantages to decision makers in the context of HTA and VBA: first, it acts as an instrument for eliciting preferences on the performance of alternative options across a wider set of explicit criteria, leading to a more complete assessment of value; second, it allows the elicitation of preferences across the criteria themselves to reflect differences in their relative importance; and, third, the entire process of preference elicitation can be informed by direct stakeholder engagement, and can therefore reflect their own preferences. All features are fully transparent and facilitate decision making.

  1. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.

    1999-01-01

    Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart show a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.

  3. Fractal analysis for reduced reference image quality assessment.

    PubMed

    Xu, Yong; Liu, Delei; Quan, Yuhui; Le Callet, Patrick

    2015-07-01

    In this paper, multifractal analysis is adapted to reduced-reference image quality assessment (RR-IQA). A novel RR-IQA approach is proposed, which measures the difference of spatial arrangement between the reference image and the distorted image in terms of spatial regularity measured by fractal dimension. An image is first expressed in the Log-Gabor domain. Then, fractal dimensions are computed on each Log-Gabor subband and concatenated as a feature vector. Finally, the extracted features are pooled as the quality score of the distorted image using the l1 distance. Compared with existing approaches, the proposed method measures image quality from the perspective of the spatial distribution of image patterns. The proposed method was evaluated on seven public benchmark data sets. Experimental results have demonstrated the excellent performance of the proposed method in comparison with state-of-the-art approaches.
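
The fractal dimension at the core of this approach is commonly estimated by box counting. The sketch below is a generic box-counting estimator on a binary mask (the paper computes dimensions on Log-Gabor subbands, which is not reproduced here; function name and scale set are assumptions):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8)):
    """Estimate the box-counting (fractal) dimension of a binary mask.

    Counts occupied s x s boxes at each scale s, then fits
    log N(s) ~ -D log s by least squares; D is the returned dimension.
    """
    counts = []
    for s in sizes:
        h, w = mask.shape
        m = mask[:h - h % s, :w - w % s]  # trim so the grid divides evenly
        hh, ww = m.shape
        # pool s x s blocks: a box is "occupied" if any pixel in it is set
        blocks = m.reshape(hh // s, s, ww // s, s).any(axis=(1, 3))
        counts.append(blocks.sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

A filled region should come out near dimension 2 and a straight line near 1, which is a quick sanity check on any implementation.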

  4. Basic research planning in mathematical pattern recognition and image analysis

    NASA Technical Reports Server (NTRS)

    Bryant, J.; Guseman, L. F., Jr.

    1981-01-01

    Fundamental problems encountered while attempting to develop automated techniques for applications of remote sensing are discussed under the following categories: (1) geometric and radiometric preprocessing; (2) spatial, spectral, temporal, syntactic, and ancillary digital image representation; (3) image partitioning, proportion estimation, and error models in object scene inference; (4) parallel processing and image data structures; and (5) continuing studies in polarization; computer architectures and parallel processing; and the applicability of "expert systems" to interactive analysis.

  5. Four dimensional reconstruction and analysis of plume images

    NASA Astrophysics Data System (ADS)

    Dhawan, Atam P.; Disimile, Peter J.; Peck, Charles, III

    Results of a time-history based three-dimensional reconstruction of cross-sectional images corresponding to a specific planar location of the jet structure are reported. The experimental set-up is described, and three-dimensional displays of time-history based reconstruction of the jet structure are presented. Future developments in image analysis, quantification and interpretation, and flow visualization of rocket engine plume images are expected to provide a tool for correlating engine diagnostic features with visible flow structures.

  6. An Analysis of the Magneto-Optic Imaging System

    NASA Technical Reports Server (NTRS)

    Nath, Shridhar

    1996-01-01

    The Magneto-Optic Imaging system is being used for the detection of defects in airframes and other aircraft structures. The system has been successfully applied to detecting surface cracks, but has difficulty in the detection of sub-surface defects such as corrosion. The intent of the grant was to understand the physics of the MOI better, in order to use it effectively for detecting corrosion and for classifying surface defects. Finite element analysis, image classification, and image processing are addressed.

  7. Independent component analysis based filtering for penumbral imaging

    SciTech Connect

    Chen Yenwei; Han Xianhua; Nozaki, Shinya

    2004-10-01

    We propose a filtering method based on independent component analysis (ICA) for Poisson noise reduction. In the proposed filtering, the image is first transformed to the ICA domain and then the noise components are removed by soft thresholding (shrinkage). The proposed filter, which is used as a preprocessing step for the reconstruction, has been successfully applied to penumbral imaging. Both simulation results and experimental results show that the reconstructed image is dramatically improved in comparison with reconstruction without the noise-removing filter.
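
The shrinkage step used in the ICA domain is standard soft thresholding. A minimal sketch of that step alone (the ICA transform itself is not shown; the threshold value would be tuned to the noise level):

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft thresholding (shrinkage): shrink magnitudes toward zero by t,
    zeroing any coefficient whose magnitude is below the threshold."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```

In the filtering described above, this would be applied to the ICA-domain coefficients of the noisy image before transforming back and reconstructing.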

  8. Terahertz grayscale imaging using spatial frequency domain analysis

    NASA Astrophysics Data System (ADS)

    Lv, Zhihui; Sun, Lin; Zhang, Dongwen; Yuan, Jianmin

    2011-11-01

    We report a gray-scale imaging technique that uses broadband terahertz pulses. By exploiting the spatial distribution of different frequency content, image information can be acquired through analysis in the terahertz frequency domain. Unlike conventional methods, which use CCDs (charge-coupled devices) or spot scanning, our scheme requires only a single-pixel detector and a single measurement, and is expected to enable high-SNR terahertz imaging at high speed.

  10. "Multimodal Contrast" from the Multivariate Analysis of Hyperspectral CARS Images

    NASA Astrophysics Data System (ADS)

    Tabarangao, Joel T.

    The typical contrast mechanism employed in multimodal CARS microscopy involves the use of other nonlinear imaging modalities such as two-photon excitation fluorescence (TPEF) microscopy and second harmonic generation (SHG) microscopy to produce a molecule-specific pseudocolor image. In this work, I explore the use of unsupervised multivariate statistical analysis tools such as Principal Component Analysis (PCA) and Vertex Component Analysis (VCA) to provide better contrast using the hyperspectral CARS data alone. Using simulated CARS images, I investigate how the quadratic dependence of the CARS signal on concentration affects pixel clustering and classification, and I find that a normalization step is necessary to improve pixel color assignment. Using an atherosclerotic rabbit aorta test image, I show that the VCA algorithm provides pseudocolor contrast that is comparable to multimodal imaging, thus showing that much of the information gleaned from a multimodal approach can be sufficiently extracted from the CARS hyperspectral stack itself.
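
The PCA branch of this analysis, including the normalization step that decouples clustering from concentration, can be sketched as follows (a generic PCA pseudocolor mapping, not the thesis code; VCA is not shown and the L1 normalization is our assumption for the normalization step):

```python
import numpy as np

def pca_pseudocolor(stack, n_components=3):
    """Map an (H, W, B) hyperspectral stack to (H, W, n_components)
    pseudocolor scores via PCA on the per-pixel spectra.

    Spectra are L1-normalized first so that pixels with the same spectral
    shape but different concentration map to the same point.
    """
    h, w, b = stack.shape
    X = stack.reshape(-1, b).astype(float)
    X = X / np.maximum(X.sum(axis=1, keepdims=True), 1e-12)  # normalization
    X -= X.mean(axis=0)
    # principal directions from the SVD of the centred data
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    comps = X @ Vt[:n_components].T
    return comps.reshape(h, w, n_components)
```

Mapping the first three component scores to RGB gives a pseudocolor image in which spectrally distinct regions separate, the role played by the TPEF/SHG channels in conventional multimodal contrast.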

  11. Computer Vision-Based Image Analysis of Bacteria.

    PubMed

    Danielsen, Jonas; Nordenfelt, Pontus

    2017-01-01

    Microscopy is an essential tool for studying bacteria, but it is today mostly used in a qualitative or at best semi-quantitative manner, often involving time-consuming manual analysis. This makes it difficult to assess the importance of individual bacterial phenotypes, especially when there are only subtle differences in features such as shape, size, or signal intensity, which are typically very difficult for the human eye to discern. With computer vision-based image analysis - where computer algorithms interpret image data - it is possible to achieve an objective and reproducible quantification of images in an automated fashion. Besides being a much more efficient and consistent way to analyze images, this can also reveal important information that was previously hard to extract with traditional methods. Here, we present basic concepts of automated image processing, segmentation, and analysis that can be relatively easily implemented for use in bacterial research.
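
The threshold-segment-measure pipeline such chapters describe can be condensed into a few lines. This is a generic sketch (not the authors' protocol; the function name and fixed threshold are assumptions, and real data would need background correction and an automatic threshold such as Otsu's):

```python
import numpy as np
from scipy import ndimage

def segment_and_measure(image, threshold):
    """Threshold, label connected objects, and report per-object
    (area, mean intensity) pairs -- a minimal quantification pipeline."""
    mask = image > threshold
    labels, n = ndimage.label(mask)          # connected-component labeling
    idx = np.arange(1, n + 1)
    areas = ndimage.sum(mask, labels, index=idx)
    means = ndimage.mean(image, labels, index=idx)
    return labels, list(zip(areas, means))
```

Per-object areas and intensities are exactly the kind of phenotype features (size, signal level) that are hard to score consistently by eye but trivial to quantify once objects are labeled.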

  12. Uncooled LWIR imaging: applications and market analysis

    NASA Astrophysics Data System (ADS)

    Takasawa, Satomi

    2015-05-01

    The evolution of infrared (IR) imaging sensor technology for the defense market has played an important role in developing the commercial market, as dual use of the technology has expanded. In particular, technologies for both pixel-pitch reduction and vacuum packaging have evolved drastically in the area of uncooled long-wave IR (LWIR; 8-14 μm wavelength region) imaging sensors, increasing the opportunity to create new applications. From a macroscopic point of view, the uncooled LWIR imaging market is divided into two areas. One is a high-end market, which requires uncooled LWIR imaging sensors with sensitivity as close as possible to that of cooled sensors, while the other is a low-end market driven by miniaturization and price reduction. In the latter case especially, approaches toward the consumer market have recently appeared, such as applications of uncooled LWIR imaging sensors in night vision for automobiles and in smartphones. The appearance of such commodity products will surely change existing business models. Further technological innovation is necessary to create a consumer market, and there will be room for other companies that supply components and materials, such as lens and getter materials, to enter the consumer market.

  13. Tilted planes in 3D image analysis

    NASA Astrophysics Data System (ADS)

    Pargas, Roy P.; Staples, Nancy J.; Malloy, Brian F.; Cantrell, Ken; Chhatriwala, Murtuza

    1998-03-01

    Reliable 3D wholebody scanners which output digitized 3D images of a complete human body are now commercially available. This paper describes a software package, called 3DM, being developed by researchers at Clemson University, which manipulates and extracts measurements from such images. The focus of this paper is on tilted planes, a 3DM tool which allows a user to define a plane through a scanned image, tilt it in any direction, and effectively define three disjoint regions on the image: the points on the plane and the points on either side of the plane. With tilted planes, the user can accurately take measurements required in applications such as apparel manufacturing. The user can manually segment the body rather precisely. Tilted planes assist the user in analyzing the form of the body and classifying the body in terms of body shape. Finally, tilted planes allow the user to eliminate extraneous and unwanted points often generated by a 3D scanner. This paper describes the user interface for tilted planes, the equations defining the plane as the user moves it through the scanned image, an overview of the algorithms, and the interaction of the tilted plane feature with other tools in 3DM.
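
The three-region partition a tilted plane induces reduces to the sign of each point's distance to the plane. A minimal sketch of that computation (our own illustration, not 3DM's implementation; names and the tolerance parameter are assumptions):

```python
import numpy as np

def partition_by_plane(points, normal, point_on_plane, tol=1e-9):
    """Split an (N, 3) point cloud into (on-plane, above, below) by the
    signed distance to the plane defined by a normal and a point on it."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    d = (np.asarray(points, float) - np.asarray(point_on_plane, float)) @ n
    points = np.asarray(points)
    return points[np.abs(d) <= tol], points[d > tol], points[d < -tol]
```

Tilting the plane amounts to changing `normal` and `point_on_plane`; the same signed-distance test then reclassifies every scanned point into the three disjoint regions.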

  14. Computer-Assisted Digital Image Analysis of Plus Disease in Retinopathy of Prematurity.

    PubMed

    Kemp, Pavlina S; VanderVeen, Deborah K

    2016-01-01

    The objective of this study is to review the current state and role of computer-assisted analysis in diagnosis of plus disease in retinopathy of prematurity. Diagnosis and documentation of retinopathy of prematurity are increasingly being supplemented by digital imaging. The incorporation of computer-aided techniques has the potential to add valuable information and standardization regarding the presence of plus disease, an important criterion in deciding the necessity of treatment of vision-threatening retinopathy of prematurity. A review of literature found that several techniques have been published examining the process and role of computer aided analysis of plus disease in retinopathy of prematurity. These techniques use semiautomated image analysis techniques to evaluate retinal vascular dilation and tortuosity, using calculated parameters to evaluate presence or absence of plus disease. These values are then compared with expert consensus. The study concludes that computer-aided image analysis has the potential to use quantitative and objective criteria to act as a supplemental tool in evaluating for plus disease in the setting of retinopathy of prematurity.

  15. Towards Building Computerized Image Analysis Framework for Nucleus Discrimination in Microscopy Images of Diffuse Glioma

    PubMed Central

    Kong, Jun; Cooper, Lee; Kurc, Tahsin; Brat, Daniel; Saltz, Joel

    2012-01-01

    As an effort to build an automated and objective system for pathologic image analysis, we present, in this paper, a computerized image processing method for identifying nuclei, a basic biological unit of diagnostic utility, in microscopy images of glioma tissue samples. The complete analysis includes multiple processing steps, involving mode detection with color and spatial information for pixel clustering, background normalization leveraging morphological operations, boundary refinement with deformable models, and clumped nuclei separation using watershed. In aggregate, our validation dataset includes 220 nuclei from 11 distinct tissue regions selected at random by an experienced neuropathologist. Computerized nuclei detection results are in good concordance with human markups by both visual appraisement and quantitative measures. We compare the performance of the proposed analysis algorithm with that of CellProfiler, a classical software package for cell image processing, and demonstrate the superiority of our method. PMID:22255853
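
The clumped-nuclei separation step mentioned above typically starts by finding one marker per nucleus from the distance transform of the binary mask; the watershed then grows regions from those markers. A sketch of the marker step only (generic, not the paper's algorithm; the fractional distance threshold is an assumption):

```python
import numpy as np
from scipy import ndimage

def split_clump_seeds(mask, min_distance_frac=0.7):
    """Find one seed per nucleus in a clumped binary mask by thresholding
    the Euclidean distance transform (the marker step before watershed)."""
    dist = ndimage.distance_transform_edt(mask)
    seeds = dist > min_distance_frac * dist.max()
    markers, n = ndimage.label(seeds)
    return markers, n
```

Two overlapping disks, which merge into a single connected component, still yield two separate seeds because the distance transform peaks at each disk center.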

  16. Ringed impact craters on Venus: An analysis from Magellan images

    NASA Technical Reports Server (NTRS)

    Alexopoulos, Jim S.; Mckinnon, William B.

    1992-01-01

    We have analyzed cycle 1 Magellan images covering approximately 90 percent of the venusian surface and have identified 55 unequivocal peak-ring craters and multiringed impact basins. This comprehensive study (52 peak-ring craters and at least 3 multiringed impact basins) complements our earlier independent analysis of Arecibo and Venera images and initial Magellan data and that of the Magellan team.

  17. VIDA: an environment for multidimensional image display and analysis

    NASA Astrophysics Data System (ADS)

    Hoffman, Eric A.; Gnanaprakasam, Daniel; Gupta, Krishanu B.; Hoford, John D.; Kugelmass, Steven D.; Kulawiec, Richard S.

    1992-06-01

    Since the first dynamic volumetric studies were done in the early 1980s on the dynamic spatial reconstructor (DSR), there has been a surge of interest in volumetric and dynamic imaging using a number of tomographic techniques. Knowledge gained in handling DSR image data has readily transferred to the current use of a number of other volumetric and dynamic imaging modalities, including cine and spiral CT, MR, and PET. This in turn has led to our development of a new image display and quantitation package which we have named VIDA™ (volumetric image display and analysis). VIDA is written in C, runs under the UNIX™ operating system, and uses the XView toolkit to conform to the Open Look™ graphical user interface specification. A shared memory structure has been designed which allows for the manipulation of multiple volumes simultaneously. VIDA utilizes a windowing environment and allows execution of multiple processes simultaneously. Available programs include: oblique sectioning, volume rendering, region of interest analysis, interactive image segmentation/editing, algebraic image manipulation, conventional cardiac mechanics analysis, homogeneous strain analysis, tissue blood flow evaluation, etc. VIDA is built modularly, allowing new programs to be developed and integrated easily. An emphasis has been placed upon image quantitation for the purpose of physiological evaluation.

  18. Higher Education Institution Image: A Correspondence Analysis Approach.

    ERIC Educational Resources Information Center

    Ivy, Jonathan

    2001-01-01

    Investigated how marketing is used to convey higher education institution type image in the United Kingdom and South Africa. Using correspondence analysis, revealed the unique positionings created by old and new universities and technikons in these countries. Also identified which marketing tools they use in conveying their image. (EV)

  19. An Online Image Analysis Tool for Science Education

    ERIC Educational Resources Information Center

    Raeside, L.; Busschots, B.; Waddington, S.; Keating, J. G.

    2008-01-01

    This paper describes an online image analysis tool developed as part of an iterative, user-centered development of an online Virtual Learning Environment (VLE) called the Education through Virtual Experience (EVE) Portal. The VLE provides a Web portal through which schoolchildren and their teachers create scientific proposals, retrieve images and…

  20. Disability in Physical Education Textbooks: An Analysis of Image Content

    ERIC Educational Resources Information Center

    Taboas-Pais, Maria Ines; Rey-Cao, Ana

    2012-01-01

    The aim of this paper is to show how images of disability are portrayed in physical education textbooks for secondary schools in Spain. The sample was composed of 3,316 images published in 36 textbooks by 10 publishing houses. A content analysis was carried out using a coding scheme based on categories employed in other similar studies and adapted…

  1. The ImageJ ecosystem: An open platform for biomedical image analysis.

    PubMed

    Schindelin, Johannes; Rueden, Curtis T; Hiner, Mark C; Eliceiri, Kevin W

    2015-01-01

    Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more advanced image processing and analysis techniques. A wide range of software is available - from commercial to academic, special-purpose to Swiss army knife, small to large - but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on the life sciences, and continues to do so. From its inception, ImageJ has grown significantly, due largely to being freely available and to its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters a suite of software tools, making multitudes of image-analysis technologies easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts the life sciences, how it inspires other projects, and how it is in turn influenced by the coevolving projects within the ImageJ ecosystem.

  2. Image analysis for denoising full-field frequency-domain fluorescence lifetime images.

    PubMed

    Spring, B Q; Clegg, R M

    2009-08-01

    Video-rate fluorescence lifetime-resolved imaging microscopy (FLIM) is a quantitative imaging technique for measuring dynamic processes in biological specimens. FLIM offers valuable information in addition to simple fluorescence intensity imaging; for instance, the fluorescence lifetime is sensitive to the microenvironment of the fluorophore allowing reliable differentiation between concentration differences and dynamic quenching. Homodyne FLIM is a full-field frequency-domain technique for imaging fluorescence lifetimes at every pixel of a fluorescence image simultaneously. If a single modulation frequency is used, video-rate image acquisition is possible. Homodyne FLIM uses a gain-modulated image intensified charge-coupled device (ICCD) detector, which unfortunately is a major contribution to the noise of the measurement. Here we introduce image analysis for denoising homodyne FLIM data. The denoising routine is fast, improves the extraction of the fluorescence lifetime value(s) and increases the sensitivity and fluorescence lifetime resolving power of the FLIM instrument. The spatial resolution (especially the high spatial frequencies not related to noise) of the FLIM image is preserved, because the denoising routine does not blur or smooth the image. By eliminating the random noise known to be specific to photon noise and from the intensifier amplification, the fidelity of the spatial resolution is improved. The polar plot projection, a rapid FLIM analysis method, is used to demonstrate the effectiveness of the denoising routine with exemplary data from both physical and complex biological samples. We also suggest broader impacts of the image analysis for other fluorescence microscopy techniques (e.g. super-resolution imaging).
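
The polar plot projection referenced above rests on standard single-frequency relations between the measured phase shift, the demodulation ratio, and the fluorescence lifetime. A sketch for the homodyne case (function names are ours; real FLIM data would supply a per-pixel `phi` and `m` from the denoised images):

```python
import numpy as np

def lifetimes_from_homodyne(phi, m, f_mod):
    """Phase and modulation lifetimes from the measured phase shift `phi`
    (radians) and demodulation ratio `m` at modulation frequency f_mod (Hz).
    For a single-exponential decay the two estimates coincide."""
    omega = 2 * np.pi * f_mod
    tau_phase = np.tan(phi) / omega
    tau_mod = np.sqrt(1.0 / m**2 - 1.0) / omega
    return tau_phase, tau_mod

def polar_plot_coords(phi, m):
    """Polar (phasor) plot coordinates; single-exponential decays fall on
    the universal semicircle centred at (0.5, 0) with radius 0.5."""
    return m * np.cos(phi), m * np.sin(phi)
```

Plotting each pixel at these polar coordinates is what makes the method fast: lifetime heterogeneity appears as spread off the semicircle without any per-pixel fitting.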

  3. System Matrix Analysis for Computed Tomography Imaging.

    PubMed

    Flores, Liubov; Vidal, Vicent; Verdú, Gumersindo

    2015-01-01

    In practical applications of computed tomography imaging (CT), it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. On the other hand, the high radiation dose imposed on patients is also undesired. These issues demand that high quality CT images can be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate elements of the matrix and we present results based on real projection data.
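
The Siddon-style computation the abstract refers to produces, for each ray, the exact intersection length with every pixel it crosses - one row of the system matrix. A compact sketch under simplifying assumptions (unit pixel spacing, grid spanning [0, nx] x [0, ny]; the function name is ours):

```python
import numpy as np

def ray_pixel_lengths(p0, p1, nx, ny):
    """Exact intersection length of the segment p0 -> p1 with each cell of
    an nx x ny unit-pixel grid. Returns a dict {(ix, iy): length}."""
    p0 = np.asarray(p0, float)
    p1 = np.asarray(p1, float)
    d = p1 - p0
    alphas = [0.0, 1.0]
    # parametric values where the ray crosses vertical/horizontal grid lines
    for axis, n in ((0, nx), (1, ny)):
        if d[axis] != 0:
            a = (np.arange(n + 1) - p0[axis]) / d[axis]
            alphas.extend(a[(a > 0) & (a < 1)])
    alphas = np.unique(alphas)
    total = np.linalg.norm(d)
    row = {}
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d  # segment midpoint identifies the cell
        ix, iy = int(mid[0]), int(mid[1])
        if 0 <= ix < nx and 0 <= iy < ny:
            row[(ix, iy)] = row.get((ix, iy), 0.0) + (a1 - a0) * total
    return row
```

Stacking one such row per detector measurement gives the sparse system matrix whose accuracy, as the abstract argues, conditions the quality of iterative few-view reconstruction.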

  5. SLAR image interpretation keys for geographic analysis

    NASA Technical Reports Server (NTRS)

    Coiner, J. C.

    1972-01-01

    A means for side-looking airborne radar (SLAR) imagery to become a more widely used data source in geoscience and agriculture is suggested by providing interpretation keys as an easily implemented interpretation model. Interpretation problems faced by the researcher wishing to employ SLAR are specifically described, and the use of various types of image interpretation keys to overcome these problems is suggested. With examples drawn from agriculture and vegetation mapping, direct and associate dichotomous image interpretation keys are discussed and methods of constructing keys are outlined. Initial testing of the keys, key-based automated decision rules, and the role of the keys in an information system for agriculture are developed.

  6. Analysis of PETT images in psychiatric disorders

    SciTech Connect

    Brodie, J.D.; Gomez-Mont, F.; Volkow, N.D.; Corona, J.F.; Wolf, A.P.; Wolkin, A.; Russell, J.A.G.; Christman, D.; Jaeger, J.

    1983-01-01

    A quantitative method is presented for studying the pattern of metabolic activity in a set of Positron Emission Transaxial Tomography (PETT) images. Using complex Fourier coefficients as a feature vector for each image, cluster, principal components, and discriminant function analyses are used to empirically describe metabolic differences between control subjects and patients with DSM III diagnosis for schizophrenia or endogenous depression. We also present data on the effects of neuroleptic treatment on the local cerebral metabolic rate of glucose utilization (LCMRGI) in a group of chronic schizophrenics using the region of interest approach. 15 references, 4 figures, 3 tables.

  7. Electron Microscopy and Image Analysis for Selected Materials

    NASA Technical Reports Server (NTRS)

    Williams, George

    1999-01-01

    This particular project was completed in collaboration with the metallurgical diagnostics facility. The objective of this research had four major components. First, we required training in the operation of the environmental scanning electron microscope (ESEM) for imaging of selected materials including biological specimens. The types of materials range from cyanobacteria and diatoms to cloth, metals, sand, composites and other materials. Second, to obtain training in surface elemental analysis technology using energy dispersive x-ray (EDX) analysis, and in the preparation of x-ray maps of these same materials. Third, to provide training for the staff of the metallurgical diagnostics and failure analysis team in the area of image processing and image analysis technology using NIH Image software. Finally, we were to assist in the sample preparation, observing, imaging, and elemental analysis for Mr. Richard Hoover, one of NASA MSFC's solar physicists and Marshall's principal scientist for the agency-wide virtual Astrobiology Institute. These materials have been collected from various places around the world including the Fox Tunnel in Alaska, Siberia, Antarctica, ice core samples from near Lake Vostok, thermal vents in the ocean floor, hot springs and many others. We were successful in our efforts to obtain high quality, high resolution images of various materials including selected biological ones. Surface analyses (EDX) and x-ray maps were easily prepared with this technology. We also discovered and used some applications for NIH Image software in the metallurgical diagnostics facility.

  8. Memory-Augmented Cellular Automata for Image Analysis.

    DTIC Science & Technology

    1978-11-01

    case in which each cell has memory size proportional to the logarithm of the input size, showing the increased capabilities of these machines for executing a variety of basic image analysis and recognition tasks. (Author)

  9. A performance analysis system for MEMS using automated imaging methods

    SciTech Connect

    LaVigne, G.F.; Miller, S.L.

    1998-08-01

    The ability to make in-situ performance measurements of MEMS operating at high speeds has been demonstrated using a new image analysis system. Significant improvements in performance and reliability have directly resulted from the use of this system.

  10. Simulation of radiographic images for quality and dose analysis

    NASA Astrophysics Data System (ADS)

    Winslow, Mark P.

    A software package, Virtual Photographic Radiographic Imaging Simulator (ViPRIS), has been developed for optimizing x-ray radiographic imaging. A tomographic phantom, VIP-Man, constructed from Visible Human anatomical color images, is used to simulate the scattered portion of an x-ray image and to compute organ doses using the EGSnrc Monte Carlo code. The primary portion of an x-ray image is simulated using the projection ray-tracing method through the Visible Human CT data set. To produce a realistic image, the software simulates quantum noise, blurring effects, lesions, detector absorption efficiency, and other imaging artifacts. The primary and scattered portions of an x-ray chest image are combined to form a final image for observer studies using computerized simulated observers. Absorbed doses in organs and tissues of the segmented VIP-Man phantom were also obtained from the Monte Carlo simulations to derive the effective dose, which is a radiation risk indicator. Approximately 2000 simulated images and 200,000 vectorized image data files were analyzed using ROC/AUC analysis. Results demonstrated the usefulness of this approach and the software for studying x-ray image quality and radiation dose.
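The quantum-noise step of such a simulation is commonly modeled with Poisson statistics on detected photon counts. A hedged numpy sketch on an invented disc phantom follows; the photon count N0, grid size and phantom are illustrative assumptions, not ViPRIS parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical primary image: attenuation line integrals through a disc phantom
# (a stand-in for the ray-traced Visible Human projection; sizes are invented)
n = 64
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
mu_path = 2.0 * (x**2 + y**2 < 0.5)        # line integral of attenuation
N0 = 1.0e4                                  # incident photons per detector pixel
expected_counts = N0 * np.exp(-mu_path)     # Beer-Lambert mean detected counts
noisy_counts = rng.poisson(expected_counts)          # quantum (Poisson) noise
# convert back to noisy line integrals, guarding against zero counts
noisy_path = -np.log(np.maximum(noisy_counts, 1) / N0)
```

The relative noise level scales as 1/sqrt(counts), so lowering N0 in this sketch mimics a lower-dose acquisition with visibly noisier line integrals.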

  11. Segmentation and learning in the quantitative analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Ruggiero, Christy; Ross, Amy; Porter, Reid

    2015-02-01

    In material science and bio-medical domains the quantity and quality of microscopy images is rapidly increasing and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest to use machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.

  12. Blind image analysis for the compositional and structural characterization of plant cell walls.

    PubMed

    Perera, Pradeep N; Schmidt, Martin; Schuck, P James; Adams, Paul D

    2011-09-30

    A new image analysis strategy is introduced to determine the composition and the structural characteristics of plant cell walls by combining Raman microspectroscopy and unsupervised data mining methods. The proposed method consists of three main steps: spectral preprocessing, spatial clustering of the image and, finally, estimation of the spectral profiles of pure components and their weights. Point spectra of Raman maps of cell walls were preprocessed to remove noise and fluorescence contributions and compressed with PCA. Processed spectra were then subjected to k-means clustering to identify spatial segregations in the images. Cell wall images were reconstructed with cluster identities, and each cluster was represented by the average spectrum of all the pixels in the cluster. Pure-component spectra were estimated by a spectral entropy minimization criterion with simulated annealing optimization. Two pure spectral estimates representing lignin and carbohydrates were recovered and their spatial distributions were calculated. Our approach partitioned the cell walls into many sublayers based on their composition, thus enabling composition analysis at subcellular levels. It also overcame the well-known problem that native lignin spectra in lignocellulosics have high spectral overlap with contributions from cellulose and hemicelluloses, thus opening up new avenues for microanalyses of the monolignol composition of native lignin and of carbohydrates without chemical or mechanical extraction of the cell wall materials.
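The compression-then-clustering chain described (PCA compression of point spectra followed by k-means) can be sketched on synthetic two-component spectra. The peak positions, component names and noise level below are invented for illustration and are not the paper's data or parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic Raman-like spectra for two hypothetical pure components
# ("lignin"-like and "carbohydrate"-like; peak positions are invented)
wn = np.linspace(300, 1800, 400)
peak = lambda c, w: np.exp(-0.5 * ((wn - c) / w) ** 2)
comp_a = peak(1600, 30) + 0.5 * peak(1120, 25)
comp_b = peak(1090, 25) + 0.4 * peak(380, 40)
labels = np.repeat([0, 1], 100)
X = np.array([(comp_a + 0.1 * comp_b if l == 0 else comp_b + 0.1 * comp_a)
              + 0.02 * rng.standard_normal(wn.size) for l in labels])

# PCA compression: project mean-centered spectra onto leading components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:5].T            # keep 5 principal components

# plain Lloyd's k-means on the PCA scores
def kmeans(Z, k, iters=100, seed=0):
    r = np.random.default_rng(seed)
    centers = Z[r.choice(len(Z), k, replace=False)]
    for _ in range(iters):
        assign = ((Z[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        new = np.array([Z[assign == j].mean(0) if np.any(assign == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return assign

assign = kmeans(scores, 2)        # cluster identities per spectrum
```

In the paper's pipeline each cluster's mean spectrum would then feed the entropy-minimization step; that stage is omitted here.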

  13. Image Segmentation Analysis for NASA Earth Science Applications

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2010-01-01

    NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.
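The region-growing core of hierarchical segmentation can be illustrated with a toy greedy merger: start with one region per pixel and repeatedly merge the most similar 4-adjacent pair. This is only a stand-in for the idea; RHSEG itself adds recursive subdivision and region-class grouping, which are not shown:

```python
import numpy as np

def merge_regions(img, n_regions):
    """Greedily merge the most similar 4-adjacent regions (by mean intensity)
    until n_regions remain. A toy illustration of region-growing segmentation."""
    h, w = img.shape
    labels = np.arange(h * w).reshape(h, w)
    means = {i: float(v) for i, v in enumerate(img.ravel())}
    sizes = dict.fromkeys(means, 1)
    adj = {i: set() for i in means}            # region adjacency graph
    for yy in range(h):
        for xx in range(w):
            i = yy * w + xx
            if xx + 1 < w:
                adj[i].add(i + 1); adj[i + 1].add(i)
            if yy + 1 < h:
                adj[i].add(i + w); adj[i + w].add(i)
    while len(means) > n_regions:
        # most similar adjacent pair of regions
        _, a, b = min((abs(means[p] - means[q]), p, q)
                      for p in adj for q in adj[p] if p < q)
        # merge region b into region a, updating mean, size and adjacency
        sizes[a], means[a] = sizes[a] + sizes[b], \
            (means[a] * sizes[a] + means[b] * sizes[b]) / (sizes[a] + sizes[b])
        for c in adj[b]:
            adj[c].discard(b)
            if c != a:
                adj[c].add(a); adj[a].add(c)
        del adj[b], means[b], sizes[b]
        labels[labels == b] = a
    return labels

# usage: a tiny two-intensity image collapses to its two halves
seg = merge_regions(np.array([[0., 0., 1., 1.],
                              [0., 0., 1., 1.],
                              [0., 0., 1., 1.]]), 2)
```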

  14. Pathology imaging informatics for quantitative analysis of whole-slide images

    PubMed Central

    Kothari, Sonal; Phan, John H; Stokes, Todd H; Wang, May D

    2013-01-01

    Objectives With the objective of bringing clinical decision support systems to reality, this article reviews histopathological whole-slide imaging informatics methods, associated challenges, and future research opportunities. Target audience This review targets pathologists and informaticians who have a limited understanding of the key aspects of whole-slide image (WSI) analysis and/or a limited knowledge of state-of-the-art technologies and analysis methods. Scope First, we discuss the importance of imaging informatics in pathology and highlight the challenges posed by histopathological WSI. Next, we provide a thorough review of current methods for: quality control of histopathological images; feature extraction that captures image properties at the pixel, object, and semantic levels; predictive modeling that utilizes image features for diagnostic or prognostic applications; and data and information visualization that explores WSI for de novo discovery. In addition, we highlight future research directions and discuss the impact of large public repositories of histopathological data, such as the Cancer Genome Atlas, on the field of pathology informatics. Following the review, we present a case study to illustrate a clinical decision support system that begins with quality control and ends with predictive modeling for several cancer endpoints. Currently, state-of-the-art software tools only provide limited image processing capabilities instead of complete data analysis for clinical decision-making. We aim to inspire researchers to conduct more research in pathology imaging informatics so that clinical decision support can become a reality. PMID:23959844

  15. Non-Imaging Software/Data Analysis Requirements

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The analysis software needs of the non-imaging planetary data user are discussed. Assumptions as to the nature of the planetary science data centers where the data are physically stored are advanced, the scope of the non-imaging data is outlined, and facilities that users are likely to need to define and access data are identified. Data manipulation and analysis needs and display graphics are discussed.

  16. LANDSAT-4 image data quality analysis

    NASA Technical Reports Server (NTRS)

    Anuta, P. E.

    1984-01-01

    Methods were developed for estimating point spread functions from image data. Roads and bridges in dark backgrounds are being examined as well as other smoothing methods for reducing noise in the estimated point spread function. Tomographic techniques were used to estimate two dimensional point spread functions. Reformatting software changes were implemented to handle formats for LANDSAT-5 data.

  17. Applying Image Matching to Video Analysis

    DTIC Science & Technology

    2010-09-01

    "American Classics VII: Don't be a Chicken of Dumplings". The frames were extracted using the ffmpeg program [29]. The first two images from the set... F. "ffmpeg software". http://www.ffmpeg.org/. 30: Hess, R. "SIFT software". http://web.engr.oregonstate.edu/hess. 31: Bay, H., Van Gool, L. and

  18. Fiji - an Open Source platform for biological image analysis

    PubMed Central

    Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert

    2013-01-01

    Fiji is a distribution of the popular Open Source software ImageJ focused on biological image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image processing algorithms. Fiji facilitates the transformation of novel algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities. PMID:22743772

  19. Independent component analysis applications on THz sensing and imaging

    NASA Astrophysics Data System (ADS)

    Balci, Soner; Maleski, Alexander; Nascimento, Matheus Mello; Philip, Elizabath; Kim, Ju-Hyung; Kung, Patrick; Kim, Seongsin M.

    2016-05-01

    We report the Independent Component Analysis (ICA) technique applied to THz spectroscopy and imaging to achieve blind source separation. A reference water vapor absorption spectrum was extracted via ICA; ICA was then applied to a THz spectroscopic image in order to remove the absorption of water molecules from each pixel. For this purpose, silica gel was chosen as the material of interest for its strong water absorption. The resulting image clearly showed that ICA effectively removed the water content in the detected signal, allowing us to image the silica gel beads distinctly even though they were completely embedded in water before ICA was applied.
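Blind source separation of the kind described can be illustrated with a compact symmetric FastICA implementation on synthetic mixed signals. The sources (a sine and a square wave, stand-ins for a water-vapor reference and a material signature), the mixing matrix, and the tanh nonlinearity are assumptions for this sketch; the paper's THz data and ICA variant may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# two synthetic sources and an invented 2x2 mixing matrix
t = np.linspace(0, 8 * np.pi, 2000)
S = np.vstack([np.sin(t), np.sign(np.sin(3 * t))])
A = np.array([[1.0, 0.6], [0.5, 1.0]])
X = A @ S                                  # observed mixtures

# center and whiten the observations
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / X.shape[1])
Z = E @ np.diag(d ** -0.5) @ E.T @ X

def sym_decorrelate(W):
    s, u = np.linalg.eigh(W @ W.T)
    return u @ np.diag(s ** -0.5) @ u.T @ W

# symmetric FastICA fixed-point iteration with tanh nonlinearity
W = sym_decorrelate(rng.standard_normal((2, 2)))
for _ in range(200):
    G = np.tanh(W @ Z)
    W_new = sym_decorrelate(G @ Z.T / Z.shape[1]
                            - np.diag((1 - G**2).mean(axis=1)) @ W)
    if np.max(np.abs(np.abs(np.diag(W_new @ W.T)) - 1)) < 1e-9:
        W = W_new
        break
    W = W_new

S_est = W @ Z   # recovered sources, up to permutation and sign
```

Each recovered component should correlate strongly (up to sign) with exactly one of the original sources.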

  20. A collaborative biomedical image mining framework: application on the image analysis of microscopic kidney biopsies.

    PubMed

    Goudas, T; Doukas, C; Chatziioannou, A; Maglogiannis, I

    2013-01-01

    The analysis and characterization of biomedical image data is a complex procedure involving several processing phases, such as data acquisition, preprocessing, segmentation, feature extraction and classification. The proper combination and parameterization of the utilized methods rely heavily on the given image data set and experiment type. They may thus necessitate advanced image processing and classification knowledge and skills on the part of the biomedical expert. In this work, an application exploiting web services and applying ontological modeling is presented that enables the intelligent creation of image mining workflows. The described tool can be directly integrated into the RapidMiner, Taverna or similar workflow management platforms. A case study dealing with the creation of a sample workflow for the analysis of kidney biopsy microscopy images is presented to demonstrate the functionality of the proposed framework.

  1. Automated fine structure image analysis method for discrimination of diabetic retinopathy stage using conjunctival microvasculature images

    PubMed Central

    Khansari, Maziyar M; O’Neill, William; Penn, Richard; Chau, Felix; Blair, Norman P; Shahidi, Mahnaz

    2016-01-01

    The conjunctiva is a densely vascularized mucous membrane covering the sclera of the eye with a unique advantage of accessibility for direct visualization and non-invasive imaging. The purpose of this study is to apply an automated quantitative method for discrimination of different stages of diabetic retinopathy (DR) using conjunctival microvasculature images. Fine structural analysis of conjunctival microvasculature images was performed by ordinary least squares regression and Fisher linear discriminant analysis. Conjunctival images between groups of non-diabetic and diabetic subjects at different stages of DR were discriminated. The automated method's discrimination rates were higher than those determined by human observers. The method allows sensitive and rapid discrimination by assessment of conjunctival microvasculature images and can be potentially useful for DR screening and monitoring. PMID:27446692
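The Fisher linear discriminant step can be sketched on hypothetical two-class feature vectors. The feature meanings (e.g. vessel diameter and tortuosity indices), class distributions and threshold rule below are invented assumptions, not the study's regression-derived features:

```python
import numpy as np

rng = np.random.default_rng(0)

# invented 2-D feature vectors for a control group and a patient group
n = 200
cov = [[0.3, 0.1], [0.1, 0.2]]
X0 = rng.multivariate_normal([2.0, 1.0], cov, n)   # control group
X1 = rng.multivariate_normal([3.0, 2.0], cov, n)   # patient group

# Fisher direction: w maximizes between-class over within-class scatter
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0.T) * (n - 1) + np.cov(X1.T) * (n - 1)   # within-class scatter
w = np.linalg.solve(Sw, m1 - m0)

# classify by projecting onto w and thresholding at the midpoint of class means
thr = 0.5 * ((X0 @ w).mean() + (X1 @ w).mean())
acc = (np.sum(X0 @ w <= thr) + np.sum(X1 @ w > thr)) / (2 * n)
```

With well-separated classes the projected 1-D scores overlap little, so the midpoint threshold already yields high accuracy.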

  2. Comparison of the Direct Scoring Method and Multi-Criteria Decision Analysis for Dredged Material Management Decision Making

    DTIC Science & Technology

    2009-09-01

    regulations is required. There are three main concerns with this approach: lack of weighting, categorical scores, and subjectivity. An initial concern...provided as they may fit into the hypothetical upland disposal site selection for dredged material example below. Three main criteria were identified...cost, environmental, and social (Figure 2). Each main criterion is divided into sub-criteria. The cost criterion is divided into construction

  3. Analysis of Intrinsic Stability Criteria for Isotropic Third-Order Green Elastic and Compressible Neo-Hookean Solids

    DTIC Science & Technology

    2014-03-01

    requires positive definiteness of the tensor of elastic constants. In contrast, structural instability criteria are global rather than local...1975). Although early calculations (Born, 1940; Misra, 1940) focused on unstressed lattices, this criterion has often been posited for stressed...construed as an official Department of the Army position unless so designated by other authorized documents. Citation of manufacturer's or trade names

  4. ATS-F notching criteria as determined by analysis and test. [dynamic structural model sinusoidal vibration tests

    NASA Technical Reports Server (NTRS)

    Swales, T. G.; Alexander, J.; Luzier, R.

    1974-01-01

    Description of the development of the notching criteria for the ATS-F structural model sinusoidal vibration test. A brief description of the ATS-F spacecraft is followed by a definition of the significant structural tests conducted as part of the structural model test program. The interrelationship of these tests with dynamic analyses in the development of the notching criteria for the qualification level vibration test is discussed in some detail. A brief summary of the vibration test results is also included.

  5. Analysis of Multipath Pixels in SAR Images

    NASA Astrophysics Data System (ADS)

    Zhao, J. W.; Wu, J. C.; Ding, X. L.; Zhang, L.; Hu, F. M.

    2016-06-01

    As the received radar signal is the sum of signal contributions overlaid in a single pixel regardless of travel path, the multipath effect must be seriously tackled: multiple-bounce returns are added to direct scatter echoes, which leads to ghost scatterers. Most existing solutions to the multipath problem attempt to recover the signal propagation path. To facilitate the signal propagation simulation, many aspects must be given in advance, such as sensor parameters, the geometry of the objects (shape, location, orientation, mutual position between adjacent buildings) and the physical parameters of the surface (roughness, correlation length, permittivity), which determine the strength of the radar signal backscattered to the SAR sensor. However, it is not practical to obtain a highly detailed object model of an unfamiliar area by field survey, as this is laborious and time-consuming. In this paper, SAR imaging simulation based on RaySAR is conducted first, aiming at a basic understanding of multipath effects and at providing a baseline for further comparison. Besides the pre-imaging simulation, the after-imaging product, i.e. the radar images, is also taken into consideration. Both COSMO-SkyMed ascending and descending SAR images of the Lupu Bridge in Shanghai are used for the experiment. As a result, the reflectivity map and signal distribution map of different bounce levels are simulated and validated against a 3D real model. Statistical indexes such as phase stability, mean amplitude, amplitude dispersion, coherence and mean-sigma ratio in the case of layover are analyzed in combination with the RaySAR output.

  6. Urban Vulnerability Assessment to Seismic Hazard through Spatial Multi-Criteria Analysis. Case Study: the Bucharest Municipality/Romania

    NASA Astrophysics Data System (ADS)

    Armas, Iuliana; Dumitrascu, Silvia; Bostenaru, Maria

    2010-05-01

    In the context of an explosive increase in the value of the damage caused by natural disasters, an alarming challenge in the third millennium is the rapid growth of urban population in vulnerable areas. Cities are, by definition, very fragile socio-ecological systems with a high level of vulnerability when it comes to environmental changes and are responsible for important transformations of the space, determining dysfunctions shown in the state of the natural variables (Parker and Mitchell, 1995, The OFDA/CRED International Disaster Database). A contributing factor is the demographic dynamic that affects urban areas. The aim of this study is to estimate the overall vulnerability of the urban area of Bucharest in the context of the seismic hazard, by using environmental, socio-economic, and physical measurable variables in the framework of a spatial multi-criteria analysis. For this approach the capital city of Romania was chosen based on its high vulnerability due to the explosive urban development and the advanced state of degradation of the buildings (most of the building stock being built between 1940 and 1977). Combining these attributes with the seismic hazard induced by the Vrancea source, Bucharest was ranked as the 10th capital city worldwide in terms of seismic risk. Over 40 years of experience in the natural risk field shows that the only directly accessible way to reduce the natural risk is by reducing the vulnerability of the space (Adger et al., 2001, Turner et al., 2003; UN/ISDR, 2004, Dayton-Johnson, 2004, Kasperson et al., 2005; Birkmann, 2006 etc.). In effect, reducing the vulnerability of urban spaces would imply lower costs produced by natural disasters. Applying the SMCA method reveals a circular pattern, signaling as hot spots the Bucharest historic centre (located on a river terrace and with aged building stock) and peripheral areas (isolated from the emergency centers and defined by precarious social and economic
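The core of a spatial multi-criteria analysis is a weighted overlay of normalized criterion rasters. A hedged sketch follows; the criterion layers, weights and hot-spot threshold are invented for illustration and are not the study's actual indicators:

```python
import numpy as np

rng = np.random.default_rng(0)

# invented, normalized criterion rasters over a city grid
# (0 = least vulnerable, 1 = most vulnerable per criterion)
shape = (50, 50)
criteria = {
    "building_age":   (rng.random(shape), 0.5),   # (layer, weight)
    "pop_density":    (rng.random(shape), 0.3),
    "dist_emergency": (rng.random(shape), 0.2),
}

# weighted linear overlay: the standard SMCA aggregation step
total_w = sum(w for _, w in criteria.values())
vulnerability = sum(layer * w for layer, w in criteria.values()) / total_w

# flag the most vulnerable decile as hot spots
hotspots = vulnerability > np.quantile(vulnerability, 0.9)
```

Real applications would derive each layer from GIS data and elicit the weights from experts (e.g. by pairwise comparison) rather than fixing them as here.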

  7. Diagnostic support for glaucoma using retinal images: a hybrid image analysis and data mining approach.

    PubMed

    Yu, Jin; Abidi, Syed Sibte Raza; Artes, Paul; McIntyre, Andy; Heywood, Malcolm

    2005-01-01

    The availability of modern imaging techniques such as Confocal Scanning Laser Tomography (CSLT) for capturing high-quality optic nerve images offers the potential for developing automatic and objective methods for diagnosing glaucoma. We present a hybrid approach that features the analysis of CSLT images using moment methods to derive abstract image-defining features. The features are then used to train classifiers for automatically distinguishing CSLT images of normal and glaucoma patients. As a first step, in this paper we present investigations in feature subset selection methods for reducing the relatively large input space produced by the moment methods. We use neural networks and support vector machines to determine a subset of moments that offer high classification accuracy. We demonstrate the efficacy of our methods to discriminate between healthy and glaucomatous optic disks based on shape information automatically derived from optic disk topography and reflectance images.

  8. Automated image analysis method for detecting and quantifying macrovesicular steatosis in hematoxylin and eosin-stained histology images of human livers.

    PubMed

    Nativ, Nir I; Chen, Alvin I; Yarmush, Gabriel; Henry, Scot D; Lefkowitch, Jay H; Klein, Kenneth M; Maguire, Timothy J; Schloss, Rene; Guarrera, James V; Berthiaume, Francois; Yarmush, Martin L

    2014-02-01

    Large-droplet macrovesicular steatosis (ld-MaS) in more than 30% of liver graft hepatocytes is a major risk factor for liver transplantation. An accurate assessment of the ld-MaS percentage is crucial for determining liver graft transplantability, which is currently based on pathologists' evaluations of hematoxylin and eosin (H&E)-stained liver histology specimens, with the predominant criteria being the relative size of the lipid droplets (LDs) and their propensity to displace a hepatocyte's nucleus to the cell periphery. Automated image analysis systems aimed at objectively and reproducibly quantifying ld-MaS do not accurately differentiate large LDs from small-droplet macrovesicular steatosis and do not take into account LD-mediated nuclear displacement; this leads to a poor correlation with pathologists' assessments. Here we present an improved image analysis method that incorporates nuclear displacement as a key image feature for segmenting and classifying ld-MaS from H&E-stained liver histology slides. 52,000 LDs in 54 digital images from 9 patients were analyzed, and the performance of the proposed method was compared against the performance of current image analysis methods and the ld-MaS percentage evaluations of 2 trained pathologists from different centers. We show that combining nuclear displacement and LD size information significantly improves the separation between large and small macrovesicular LDs (specificity = 93.7%, sensitivity = 99.3%) and the correlation with pathologists' ld-MaS percentage assessments (linear regression coefficient of determination = 0.97). This performance vastly exceeds that of other automated image analyzers, which typically underestimate or overestimate pathologists' ld-MaS scores. This work demonstrates the potential of automated ld-MaS analysis in monitoring the steatotic state of livers. The image analysis principles demonstrated here may help to standardize ld-MaS scores among centers and ultimately help

  9. Parameter-Based Performance Analysis of Object-Based Image Analysis Using Aerial and Quikbird-2 Images

    NASA Astrophysics Data System (ADS)

    Kavzoglu, T.; Yildiz, M.

    2014-09-01

    Opening new possibilities for research, very high resolution (VHR) imagery acquired by recent commercial satellites and aerial systems requires advanced approaches and techniques that can handle large volumes of data with high local variance. Delineation of land use/cover information from VHR images is a hot research topic in remote sensing. In recent years, object-based image analysis (OBIA) has become a popular solution for image analysis tasks as it considers shape, texture and content information associated with the image objects. The most important stage of OBIA is the image segmentation process applied prior to classification. Determination of optimal segmentation parameters is of crucial importance for the performance of the selected classifier. In this study, the effectiveness and applicability of the segmentation method in relation to its parameters was analysed using two VHR images, an aerial photo and a Quickbird-2 image. The multi-resolution segmentation technique was employed with its optimal parameters of scale, shape and compactness, which were defined after an extensive trial process on the data sets. A nearest neighbour classifier was applied on the segmented images, and then the accuracy assessment was applied. Results show that segmentation parameters have a direct effect on the classification accuracy, and low values of scale-shape combinations produce the highest classification accuracies. Also, the compactness parameter was found to have minimal effect on the construction of image objects, hence it can be set to a constant value in image classification.

  10. Digital pathology and image analysis in tissue biomarker research.

    PubMed

    Hamilton, Peter W; Bankhead, Peter; Wang, Yinhai; Hutchinson, Ryan; Kieran, Declan; McArt, Darragh G; James, Jacqueline; Salto-Tellez, Manuel

    2014-11-01

    Digital pathology and the adoption of image analysis have grown rapidly in the last few years. This is largely due to the implementation of whole slide scanning, advances in software and computer processing capacity and the increasing importance of tissue-based research for biomarker discovery and stratified medicine. This review sets out the key application areas for digital pathology and image analysis, with a particular focus on research and biomarker discovery. A variety of image analysis applications are reviewed, including nuclear morphometry and tissue architecture analysis, but with emphasis on immunohistochemistry and fluorescence analysis of tissue biomarkers. Digital pathology and image analysis have important roles across the drug/companion diagnostic development pipeline, including biobanking, molecular pathology, tissue microarray analysis and molecular profiling of tissue, and these important developments are reviewed. Underpinning all of these important developments is the need for high quality tissue samples, and the impact of pre-analytical variables on tissue research is discussed. This requirement is combined with practical advice on setting up and running a digital pathology laboratory. Finally, we discuss the need to integrate digital image analysis data with epidemiological, clinical and genomic data in order to fully understand the relationship between genotype and phenotype and to drive discovery and the delivery of personalized medicine.

  11. Infrared thermal facial image sequence registration analysis and verification

    NASA Astrophysics Data System (ADS)

    Chen, Chieh-Li; Jian, Bo-Lin

    2015-03-01

    To study the emotional responses of subjects to the International Affective Picture System (IAPS), infrared thermal facial image sequences are preprocessed for registration before further analysis, such that the variance caused by minor and irregular subject movements is reduced. Without affecting the comfort level and inducing minimal harm, this study proposes an infrared thermal facial image sequence registration process that reduces the deviations caused by the unconscious head shaking of the subjects. A fixed image for registration is produced through the localization of the centroid of the eye region as well as image translation and rotation processes. The thermal image sequence is then automatically registered using the proposed two-stage genetic algorithm. The deviation before and after image registration is demonstrated by image quality indices. The results show that the infrared thermal image sequence registration process proposed in this study is effective in localizing facial images accurately, which will be beneficial to the correlation analysis of psychological information related to the facial area.

  12. Guidance on priority setting in health care (GPS-Health): the inclusion of equity criteria not captured by cost-effectiveness analysis

    PubMed Central

    2014-01-01

    This Guidance for Priority Setting in Health Care (GPS-Health), initiated by the World Health Organization, offers a comprehensive map of equity criteria that are relevant to health care priority setting and should be considered in addition to cost-effectiveness analysis. The guidance, in the form of a checklist, is especially targeted at decision makers who set priorities at national and sub-national levels, and those who interpret findings from cost-effectiveness analysis. It is also targeted at researchers conducting cost-effectiveness analysis to improve reporting of their results in the light of these other criteria. The guidance was developed through a series of expert consultation meetings and involved three steps: i) methods and normative concepts were identified through a systematic review; ii) the review findings were critically assessed in the expert consultation meetings, which resulted in a draft checklist of normative criteria; iii) the checklist was validated through an extensive hearing process with input from a range of relevant stakeholders. The GPS-Health incorporates criteria related to the disease an intervention targets (severity of disease, capacity to benefit, and past health loss); characteristics of social groups an intervention targets (socioeconomic status, area of living, gender; race, ethnicity, religion and sexual orientation); and non-health consequences of an intervention (financial protection, economic productivity, and care for others). PMID:25246855

  13. Eliciting and Combining Decision Criteria Using a Limited Palette of Utility Functions and Uncertainty Distributions: Illustrated by Application to Pest Risk Analysis.

    PubMed

    Holt, Johnson; Leach, Adrian W; Schrader, Gritta; Petter, Françoise; MacLeod, Alan; van der Gaag, Dirk Jan; Baker, Richard H A; Mumford, John D

    2014-01-01

    Utility functions in the form of tables or matrices have often been used to combine discretely rated decision-making criteria. Matrix elements are usually specified individually, so no single rule or principle can easily be stated for the utility function as a whole. A series of five matrices is presented that aggregates criteria two at a time using simple rules expressing a varying degree of constraint of the lower rating over the higher. A further nine possible matrices were obtained by using a different rule on either side of the main diagonal of the matrix to describe situations where the criteria have a differential influence on the outcome. Uncertainties in the criteria are represented by three alternative frequency distributions, from which the assessors select the most appropriate. The output of the utility function is a distribution of rating frequencies that depends on the distributions of the input criteria. In pest risk analysis (PRA), seven of these utility functions were required to mimic the logic by which assessors for the European and Mediterranean Plant Protection Organization arrive at an overall rating of pest risk. The framework enables the development of PRAs that are consistent and easy to understand, criticize, compare, and change. When tested in workshops, PRA practitioners thought that the approach accorded with both the logic and the level of resolution that they used in their risk assessments.
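
    The idea of pairwise utility matrices and of propagating rating-frequency distributions through them can be sketched generically. This is not the EPPO scheme's actual seven matrices; the 5-point scale and the two example rules (a "min" rule where the lower rating fully constrains the pair, and a compensatory "mean" rule) are assumptions for illustration.

```python
import numpy as np

RATINGS = [1, 2, 3, 4, 5]   # hypothetical 5-point ordinal scale

def min_rule(a, b):
    """Strongest constraint: the pair is only as good as its lower rating."""
    return min(a, b)

def utility_matrix(rule):
    """Tabulate a pairwise aggregation rule as a 5x5 utility matrix."""
    return np.array([[rule(a, b) for b in RATINGS] for a in RATINGS])

def aggregate_distributions(p_a, p_b, rule):
    """Propagate the rating-frequency distributions of two independent
    criteria through the rule, yielding the output distribution."""
    out = np.zeros(len(RATINGS))
    for i, a in enumerate(RATINGS):
        for j, b in enumerate(RATINGS):
            out[rule(a, b) - 1] += p_a[i] * p_b[j]
    return out

min_matrix = utility_matrix(min_rule)
```

    Asymmetric matrices (a different rule on either side of the diagonal) follow by branching on `a > b` inside the rule.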

  14. Analysis of imaging quality under the systematic parameters for thermal imaging system

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Jin, Weiqi

    2009-07-01

    The integration of a thermal imaging system with a radar system can increase the range of target identification and strengthen the accuracy and reliability of detection; such integrated systems are state-of-the-art and mainstream for searching for invasive targets and guarding homeland security. In operation, however, the thermal imaging system can produce degraded images, which may have serious consequences during search and detection. In this paper, we study why and how these degraded images occur using the principles of wave optics, and we establish a mathematical imaging model that captures the course of ray transmission. We then give particular attention to the systematic parameters of the model, analysing in detail each parameter that can affect the imaging process and how it does so. This comprehensive study yields detailed information about how these parameters govern the diffractive phenomena. The analytical results are confirmed by comparing experimental images with MATLAB-simulated images, and simulated images based on the revised parameters agree well with images acquired in practice.
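
    The abstract's MATLAB model is not reproduced here, but the basic mechanism it studies, a system parameter shaping a diffraction pattern, can be sketched with a far-field (Fraunhofer) calculation: the pattern is the squared magnitude of the aperture's Fourier transform, so a narrower aperture spreads energy further from the axis. The square aperture and grid size are assumptions for the example.

```python
import numpy as np

def fraunhofer_pattern(aperture):
    """Far-field (Fraunhofer) intensity pattern of an aperture,
    computed as |FFT|^2 with the zero frequency centred."""
    field = np.fft.fftshift(np.fft.fft2(aperture))
    return np.abs(field) ** 2

def square_aperture(n, width):
    """n x n grid with a centred square opening of the given width."""
    a = np.zeros((n, n))
    lo, hi = n // 2 - width // 2, n // 2 + width // 2
    a[lo:hi, lo:hi] = 1.0
    return a

# The aperture width stands in for a "systematic parameter":
# the narrow aperture produces a broader diffraction pattern.
narrow = fraunhofer_pattern(square_aperture(256, 8))
wide = fraunhofer_pattern(square_aperture(256, 32))
```

    For the 8-pixel aperture the first zero of the pattern falls 256/8 = 32 frequency bins from the centre, four times further out than for the 32-pixel aperture.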

  15. A Fluorescent Live Imaging Screening Assay Based on Translocation Criteria Identifies Novel Cytoplasmic Proteins Implicated in G Protein-coupled Receptor Signaling Pathways.

    PubMed

    Lecat, Sandra; Matthes, Hans W D; Pepperkok, Rainer; Simpson, Jeremy C; Galzi, Jean-Luc

    2015-05-01

    Several cytoplasmic proteins involved in G protein-coupled receptor signaling cascades, such as beta-arrestin2, are known to translocate to the plasma membrane upon receptor activation. Based on this example, and in order to identify new cytoplasmic proteins implicated in the ON-and-OFF cycle of G protein-coupled receptors, a live-imaging screen of fluorescently labeled cytoplasmic proteins was performed using translocation criteria. The screening of 193 fluorescently tagged human proteins identified eight that responded to activation of the tachykinin NK2 receptor by a change in their intracellular localization. We previously presented the functional characterization of one of these proteins, REDD1, which translocates to the plasma membrane. Here we report the results of the entire screen. The process of cell activation was recorded on videos at different time points, and all the videos can be visualized on a dedicated website. The proteins BAIAP3 and BIN1 partially translocated to the plasma membrane upon activation of NK2 receptors. ARHGAP12 and PKM2 translocated toward membrane blebs. Three proteins that associate with the cytoskeleton were of particular interest: PLEKHH2 rearranged from individual dots located near the cell-substrate adhesion surface into lines of dots; the speriolin-like protein SPATC1L redistributed to cell-cell junctions; and the chloride intracellular channel protein CLIC2 translocated from actin-enriched plasma membrane bundles to cell-cell junctions upon activation of NK2 receptors. CLIC2 and one of its close paralogs, CLIC4, were further shown to respond with the same translocation pattern to muscarinic M3 and lysophosphatidic acid (LPA) receptors. This screen allowed us to identify potential actors in signaling pathways downstream of G protein-coupled receptors and could be scaled up for high-content screening.

  16. Analysis of radar images by means of digital terrain models

    NASA Technical Reports Server (NTRS)

    Domik, G.; Leberl, F.; Kobrick, M.

    1984-01-01

    It is pointed out that digital terrain models are of increasing importance in the processing, analysis, and interpretation of remote sensing data. In studies of radar images, digital terrain models have particular significance because radar reflection is a function of terrain characteristics. A procedure for the analysis and interpretation of radar images is discussed. The procedure is based on computer simulation, which makes it possible to produce simulated radar images from a digital terrain model. The simulated radar images are used for the geometric and radiometric rectification of real radar images. The employed procedures are described, and the results obtained for a test area in Northern California are discussed.
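
    The core of such a simulator, reflectance driven by terrain geometry, can be illustrated with a minimal sketch that is much simpler than the authors' method: surface normals are taken from the DEM gradient and brightness is made proportional to the cosine of the local incidence angle. The Lambertian backscatter model, look geometry, and cell size are all assumptions for the example.

```python
import numpy as np

def simulate_radar_image(dem, look_elevation_deg=45.0, cell=1.0):
    """Lambertian-style backscatter sketch: brightness proportional to
    the cosine of the local incidence angle, with negative cosines
    (slopes facing away) clipped to zero. Illumination from the left."""
    dz_dy, dz_dx = np.gradient(dem, cell)
    # Unnormalised surface normals (-dz/dx, -dz/dy, 1).
    nx, ny, nz = -dz_dx, -dz_dy, np.ones_like(dem)
    norm = np.sqrt(nx ** 2 + ny ** 2 + nz ** 2)
    el = np.deg2rad(look_elevation_deg)
    # Unit vector from the surface toward the sensor (to the left, elevated).
    sx, sy, sz = -np.cos(el), 0.0, np.sin(el)
    cos_inc = (nx * sx + ny * sy + nz * sz) / norm
    return np.clip(cos_inc, 0.0, None)

flat = simulate_radar_image(np.zeros((8, 8)))
ramp = simulate_radar_image(np.tile(np.arange(8.0), (8, 1)))  # rises to the right
```

    Slopes tilted toward the sensor return more energy than flat terrain, which is the effect the simulated images exploit for radiometric rectification; layover, shadowing, and a realistic backscatter law are omitted.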

  17. The Land Analysis System (LAS) for multispectral image processing

    USGS Publications Warehouse

    Wharton, S. W.; Lu, Y. C.; Quirk, Bruce K.; Oleson, Lyndon R.; Newcomer, J. A.; Irani, Frederick M.

    1988-01-01

    The Land Analysis System (LAS) is an interactive software system available in the public domain for the analysis, display, and management of multispectral and other digital image data. LAS provides over 240 applications functions and utilities, a flexible user interface, complete online and hard-copy documentation, extensive image-data file management, reformatting and conversion utilities, and high-level, device-independent access to image display hardware. The authors summarize the capabilities of the current release of LAS (version 4.0) and discuss plans for future development. Particular emphasis is given to the issue of system portability and the importance of removing and/or isolating hardware and software dependencies.

  18. Visualization and analysis of 3D microscopic images.

    PubMed

    Long, Fuhui; Zhou, Jianlong; Peng, Hanchuan

    2012-01-01

    In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline these visualization and analysis modules using examples of profiling the single-cell gene-expression of C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain.
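
    Of the three task categories the primer names, segmentation is the easiest to show in miniature: a global threshold followed by connected-component labeling separates bright objects in a volume. The sketch below is a generic illustration, not the primer's software; the synthetic "nuclei", the 6-connectivity choice, and the hand-rolled BFS labeler (a stand-in for `scipy.ndimage.label`) are assumptions.

```python
import numpy as np
from collections import deque

def label_3d(mask):
    """Label 6-connected components in a 3-D boolean volume via BFS."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                      # voxel already claimed by a component
        current += 1
        labels[seed] = current
        queue = deque([seed])
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in steps:
                nb = (z + dz, y + dy, x + dx)
                if all(0 <= nb[i] < mask.shape[i] for i in range(3)) \
                        and mask[nb] and not labels[nb]:
                    labels[nb] = current
                    queue.append(nb)
    return labels, current

# Synthetic volume: two bright "nuclei" on a dark background.
vol = np.zeros((16, 16, 16))
vol[2:5, 2:5, 2:5] = 1.0
vol[10:13, 10:13, 10:13] = 1.0
labels, n_objects = label_3d(vol > 0.5)   # segmentation by global threshold
```

    Real microscopy segmentation needs adaptive thresholds, denoising, and object splitting, but the threshold-then-label skeleton is where most pipelines start.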

  19. Visualization and Analysis of 3D Microscopic Images

    PubMed Central

    Long, Fuhui; Zhou, Jianlong; Peng, Hanchuan

    2012-01-01

    In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline these visualization and analysis modules using examples of profiling the single-cell gene-expression of C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain. PMID:22719236

  20. Person identification using fractal analysis of retina images

    NASA Astrophysics Data System (ADS)

    Ungureanu, Constantin; Corniencu, Felicia

    2004-10-01

    Biometrics is the automated recognition of a person based on physiological or behavioral characteristics. Among the features measured are the retina scan, voice, and fingerprint. A retina-based biometric involves the analysis of the blood vessels situated at the back of the eye. In this paper we present a method that uses fractal analysis to characterize retina images. The fractal dimension (FD) of the retinal vessels was measured for 20 images, and a different FD value was obtained for each image. The algorithm provides good accuracy and is cheap and easy to implement.
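
    The paper does not specify its FD estimator, but the standard approach for binary vessel maps is box counting: count the boxes of side s that contain vessel pixels and fit the slope of log(count) against log(1/s). The sketch below, with square power-of-two images and two test shapes of known dimension, is an illustrative assumption rather than the authors' implementation.

```python
import numpy as np

def box_counting_dimension(mask):
    """Estimate the fractal dimension of a binary image by box counting:
    slope of log(occupied box count) versus log(1 / box size).
    Assumes a square image whose side is a power of two."""
    n = mask.shape[0]
    sizes, counts = [], []
    size = n
    while size >= 1:
        blocks = mask.reshape(n // size, size, n // size, size)
        counts.append(blocks.any(axis=(1, 3)).sum())  # boxes touching the set
        sizes.append(size)
        size //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                          np.log(np.array(counts)), 1)
    return slope

# Sanity checks on shapes of known dimension:
# a filled square should come out near 2, a straight line near 1.
filled = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
```

    A vessel tree segmented from a retina image would land between these extremes, and the per-image variation in that value is what the paper uses to distinguish individuals.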