Sample records for image processing methodology

  1. Currently available methodologies for the processing of intravascular ultrasound and optical coherence tomography images.

    PubMed

    Athanasiou, Lambros; Sakellarios, Antonis I; Bourantas, Christos V; Tsirka, Georgia; Siogkas, Panagiotis; Exarchos, Themis P; Naka, Katerina K; Michalis, Lampros K; Fotiadis, Dimitrios I

    2014-07-01

    Optical coherence tomography and intravascular ultrasound are the most widely used methodologies in clinical practice as they provide high resolution cross-sectional images that allow comprehensive visualization of the lumen and plaque morphology. Several methods have been developed in recent years to process the output of these imaging modalities, which allow fast, reliable and reproducible detection of the luminal borders and characterization of plaque composition. These methods have proven useful in the study of the atherosclerotic process as they have facilitated analysis of a vast amount of data. This review presents currently available intravascular ultrasound and optical coherence tomography processing methodologies for segmenting and characterizing the plaque area, highlighting their advantages and disadvantages, and discusses the future trends in intravascular imaging.

  2. A new hyperspectral image compression paradigm based on fusion

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; Melián, José; López, Sebastián; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed in the satellite which carries the hyperspectral sensor. Hence, this process must be performed by space-qualified hardware, with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first one is to spatially degrade the remotely sensed hyperspectral image to obtain a low resolution hyperspectral image. The second step is to spectrally degrade the remotely sensed hyperspectral image to obtain a high resolution multispectral image. These two degraded images are then sent to the earth surface, where they must be fused using a fusion algorithm for hyperspectral and multispectral images, in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on-board, becomes very simple, with the fusion process used to reconstruct the image being the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image. The results obtained in these simulations corroborate the benefits of the proposed methodology.
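
    As a hedged illustration of the two on-board degradation steps described above, the NumPy sketch below averages spatial blocks to form the low-resolution hyperspectral product and projects each spectrum through an assumed box-shaped spectral response matrix to form the multispectral product; all sizes, names, and the box responses are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def spatial_degrade(hsi, factor=4):
    """Average non-overlapping factor x factor blocks in each band,
    producing the low-resolution hyperspectral image (step 1)."""
    rows, cols, bands = hsi.shape
    r, c = rows // factor * factor, cols // factor * factor
    blocks = hsi[:r, :c].reshape(r // factor, factor, c // factor, factor, bands)
    return blocks.mean(axis=(1, 3))

def spectral_degrade(hsi, srf):
    """Project the full spectrum of every pixel onto a few broad bands,
    producing the high-resolution multispectral image (step 2).
    srf: (n_ms_bands, n_hs_bands) spectral response matrix."""
    return hsi @ srf.T

# Illustrative use: 200 hyperspectral bands collapsed to 4 broad bands.
hsi = np.random.rand(128, 128, 200).astype(np.float32)
srf = np.kron(np.eye(4), np.ones((1, 50))) / 50.0  # box-shaped responses
lr_hsi = spatial_degrade(hsi, factor=4)   # (32, 32, 200) -> downlink
hr_msi = spectral_degrade(hsi, srf)       # (128, 128, 4) -> downlink
# The compression ratio is fixed in advance by `factor` and srf.shape[0].
```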

  3. Imaging Girls: Visual Methodologies and Messages for Girls' Education

    ERIC Educational Resources Information Center

    Magno, Cathryn; Kirk, Jackie

    2008-01-01

    This article describes the use of visual methodologies to examine images of girls used by development agencies to portray and promote their work in girls' education, and provides a detailed discussion of three report cover images. It details the processes of methodology and tool development for the visual analysis and presents initial 'readings'…

  4. A 3D THz image processing methodology for a fully integrated, semi-automatic and near real-time operational system

    NASA Astrophysics Data System (ADS)

    Brook, A.; Cristofani, E.; Vandewal, M.; Matheis, C.; Jonuscheit, J.; Beigang, R.

    2012-05-01

    The present study proposes a fully integrated, semi-automatic and near real-time image processing methodology developed for Frequency-Modulated Continuous-Wave (FMCW) THz images with center frequencies around 100 GHz and 300 GHz. The quality control of aeronautics composite multi-layered materials and structures using Non-Destructive Testing is the main focus of this work. Image processing is applied to the 3-D images to extract useful information. The data are processed by extracting areas of interest. The detected areas are then subjected to image analysis for more detailed investigation, managed by a spatial model. Finally, the post-processing stage examines and evaluates the spatial accuracy of the extracted information.

  5. Effects of image processing on the detective quantum efficiency

    NASA Astrophysics Data System (ADS)

    Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na

    2010-04-01

    Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, as the methodologies for such characterizations have not been standardized, the results of these studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate how the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) are affected by the image processing algorithm. Image performance parameters such as MTF, NPS, and DQE were evaluated using the International Electrotechnical Commission (IEC 62220-1) RQA5 radiographic technique. Computed radiography (CR) images of a hand in the posterior-anterior (PA) projection for measuring the signal-to-noise ratio (SNR), a slit image for measuring the MTF, and a uniform (white) image for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. The results showed that all of the modified images considerably influenced the evaluation of SNR, MTF, NPS, and DQE. Images modified by the post-processing had a higher DQE than the MUSICA=0 image. This suggests that MUSICA values, as a post-processing step, affect the image when image quality is evaluated. In conclusion, the control parameters of image processing should be accounted for when characterizing image quality in a consistent way. The results of this study can serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring MTF, NPS, and DQE.
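
    For reference, the quantities named in this abstract are related by the standard expression used with IEC 62220-1 measurements; this is the textbook relation, not a formula quoted from the paper.

```latex
% Standard DQE relation used with IEC 62220-1 measurements:
%   u    : spatial frequency (mm^-1)
%   q    : photon fluence of the RQA5 beam (photons / mm^2)
%   NNPS : noise power spectrum normalised by the squared mean signal
\mathrm{DQE}(u) \;=\; \frac{\mathrm{SNR}_{\mathrm{out}}^{2}(u)}{\mathrm{SNR}_{\mathrm{in}}^{2}(u)}
            \;=\; \frac{\mathrm{MTF}^{2}(u)}{q \cdot \mathrm{NNPS}(u)}
```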

  6. Development and Current Status of Skull-Image Superimposition - Methodology and Instrumentation.

    PubMed

    Lan, Y

    1992-12-01

    This article presents a review of the literature and an evaluation of the development and application of skull-image superimposition technology - both instrumentation and methodology - contributed by a number of scholars since 1935. Along with a comparison of the methodologies involved in the two superimposition techniques - photographic and video - the author characterizes the techniques in practice and the recent advances in computer image superimposition processing technology. The major disadvantage of conventional approaches is their reliance on subjective interpretation. Through painstaking comparison and analysis, computer image processing technology can make more conclusive identifications by directly testing and evaluating the various programmed indices. Copyright © 1992 Central Police University.

  7. A Metadata-Based Approach for Analyzing UAV Datasets for Photogrammetric Applications

    NASA Astrophysics Data System (ADS)

    Dhanda, A.; Remondino, F.; Santana Quintero, M.

    2018-05-01

    This paper proposes a methodology for pre-processing and analysing Unmanned Aerial Vehicle (UAV) datasets before photogrammetric processing. In cases where images are gathered without a detailed flight plan and at regular acquisition intervals, the datasets can be quite large and time-consuming to process. A method is presented to calculate the image overlap and filter out images to reduce large block sizes and speed up photogrammetric processing. The Python-based algorithm that implements this methodology leverages the metadata in each image to determine the end and side overlap of grid-based UAV flights. Using user input, the algorithm filters out images that are not needed for photogrammetric processing. The result is an algorithm that can speed up photogrammetric processing and provide valuable information to the user about the flight path.
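
    A minimal sketch of the overlap computation such an algorithm might perform, assuming nadir imagery and flat terrain; altitude, focal length, sensor dimensions and GPS spacing would come from each image's metadata, and all values shown are hypothetical, not the paper's parameters.

```python
import math

def ground_footprint(altitude_m, focal_mm, sensor_w_mm, sensor_h_mm):
    """Ground coverage (width, height in metres) of one nadir image."""
    scale = altitude_m / (focal_mm / 1000.0)
    return sensor_w_mm / 1000.0 * scale, sensor_h_mm / 1000.0 * scale

def overlap(footprint_len_m, spacing_m):
    """Fractional overlap between neighbouring images along one axis."""
    return max(0.0, 1.0 - spacing_m / footprint_len_m)

# Illustrative values: 120 m AGL, 8.8 mm focal length, 13.2 x 8.8 mm sensor.
fw, fh = ground_footprint(120, 8.8, 13.2, 8.8)
end_overlap = overlap(fh, spacing_m=30)    # spacing from consecutive GPS fixes
side_overlap = overlap(fw, spacing_m=60)   # spacing between adjacent flight lines
print(f"end {end_overlap:.0%}, side {side_overlap:.0%}")
# Images whose overlap with kept neighbours exceeds a user threshold
# can then be filtered out before photogrammetric processing.
```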

  8. Methodology for the Elimination of Reflection and System Vibration Effects in Particle Image Velocimetry Data Processing

    NASA Technical Reports Server (NTRS)

    Bremmer, David M.; Hutcheson, Florence V.; Stead, Daniel J.

    2005-01-01

    A methodology to eliminate model reflection and system vibration effects from post-processed particle image velocimetry data is presented. Reflection and vibration lead to loss of data and biased velocity calculations in PIV processing. A series of algorithms was developed to alleviate these problems. Reflections emanating from the model surface caused by the laser light sheet are removed from the PIV images by subtracting an image in which only the reflections are visible from all of the images within a data acquisition set. The result is a set of PIV images where only the seeded particles are apparent. Fiduciary marks painted on the surface of the test model were used as reference points in the images. By locating the centroids of these marks it was possible to shift all of the images to a common reference frame. This image alignment procedure, as well as the subtraction of model reflections, is performed by a first algorithm. Once the images have been shifted, they are compared with a background image that was recorded under no-flow conditions. The second and third algorithms find the coordinates of the fiduciary marks in the acquisition set images and the background image and calculate the displacement between these images. The final algorithm shifts all of the images so that the fiduciary mark centroids lie in the same location as the background image centroids. This methodology effectively eliminated the effects of vibration so that unbiased data could be used for PIV processing. The PIV data used for this work were generated at the NASA Langley Research Center Quiet Flow Facility. The experiment entailed flow visualization near the flap side edge region of an airfoil model. Commercial PIV software was used for data acquisition and processing. In this paper, the experiment and the PIV acquisition of the data are described. The methodology used to develop the algorithms for reflection and system vibration removal is stated, and the implementation, testing and validation of these algorithms are presented.
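
    The reflection-subtraction and fiduciary-mark alignment steps can be sketched as follows with SciPy; a single global centroid stands in for the per-mark centroids of the actual algorithms, and the brightness threshold is an assumed parameter.

```python
import numpy as np
from scipy import ndimage

def remove_reflections(images, reflection_only):
    """Subtract an image containing only the model reflections from every
    frame in the acquisition set, leaving only the seeded particles."""
    return [np.clip(img - reflection_only, 0, None) for img in images]

def fiduciary_shift(img, background, mark_threshold):
    """Displacement between a frame and the no-flow background image,
    estimated from the centroids of bright fiduciary marks."""
    c_img = ndimage.center_of_mass(img > mark_threshold)
    c_bkg = ndimage.center_of_mass(background > mark_threshold)
    return np.subtract(c_bkg, c_img)

def align_to_background(img, background, mark_threshold):
    """Shift the frame so its fiduciary centroids coincide with the
    background's, removing apparent motion caused by system vibration."""
    dy, dx = fiduciary_shift(img, background, mark_threshold)
    return ndimage.shift(img, (dy, dx), order=1, mode="nearest")
```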

  9. Fast and Accurate Cell Tracking by a Novel Optical-Digital Hybrid Method

    NASA Astrophysics Data System (ADS)

    Torres-Cisneros, M.; Aviña-Cervantes, J. G.; Pérez-Careta, E.; Ambriz-Colín, F.; Tinoco, Verónica; Ibarra-Manzano, O. G.; Plascencia-Mora, H.; Aguilera-Gómez, E.; Ibarra-Manzano, M. A.; Guzman-Cabrera, R.; Debeir, Olivier; Sánchez-Mondragón, J. J.

    2013-09-01

    An innovative methodology to detect and track cells in microscope images, enhanced by optical cross-correlation techniques, is proposed in this paper. In order to increase the tracking sensitivity, image pre-processing has been implemented as a morphological operator on the microscope image. Results show that the pre-processing step allows cells to be tracked over additional frames, thereby increasing robustness. The proposed methodology can be used to analyze different problems such as mitosis, cell collisions, and cell overlapping, and is ultimately intended to help identify and treat illnesses and malignancies.
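
    A hedged digital stand-in for the optical correlator described above: morphological opening as the pre-processing operator, followed by FFT-based cross-correlation against a cell template. Function names and the 3x3 operator size are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage
from scipy.signal import fftconvolve

def preprocess(frame, size=3):
    """Morphological opening (grey erosion, then dilation) suppresses
    small bright noise before correlation."""
    eroded = ndimage.grey_erosion(frame, size=(size, size))
    return ndimage.grey_dilation(eroded, size=(size, size))

def locate_cell(frame, template):
    """Best-match position of a cell template in the frame via FFT-based
    cross-correlation (correlation = convolution with flipped template)."""
    t = template - template.mean()
    corr = fftconvolve(frame - frame.mean(), t[::-1, ::-1], mode="same")
    return np.unravel_index(np.argmax(corr), corr.shape)
```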

  10. A Methodology for Anatomic Ultrasound Image Diagnostic Quality Assessment.

    PubMed

    Hemmsen, Martin Christian; Lange, Theis; Brandt, Andreas Hjelm; Nielsen, Michael Bachmann; Jensen, Jorgen Arendt

    2017-01-01

    This paper discusses methods for the assessment of ultrasound image quality based on our experiences with evaluating new methods for anatomic imaging. It presents a methodology to ensure a fair assessment between competing imaging methods using clinically relevant evaluations. The methodology is valuable in the continuing process of method optimization and guided development of new imaging methods. It includes a three-phase study plan covering initial prototype development through clinical assessment. Recommendations for the clinical assessment protocol, software, and statistical analysis are presented. Earlier uses of the methodology have shown that it ensures validity of the assessment, as it separates the influences of developer, investigator, and assessor once a research protocol has been established. This separation reduces confounding influences on the result from the developer and properly reveals the clinical value. This paper exemplifies the methodology using recent studies of synthetic aperture sequential beamforming tissue harmonic imaging.

  11. Image processing in digital pathology: an opportunity to solve inter-batch variability of immunohistochemical staining

    NASA Astrophysics Data System (ADS)

    van Eycke, Yves-Rémi; Allard, Justine; Salmon, Isabelle; Debeir, Olivier; Decaestecker, Christine

    2017-02-01

    Immunohistochemistry (IHC) is a widely used technique in pathology to evidence protein expression in tissue samples. However, this staining technique is known for presenting inter-batch variations. Whole slide imaging in digital pathology offers a possibility to overcome this problem by means of image normalisation techniques. In the present paper we propose a methodology to objectively evaluate the need for image normalisation and to identify the best way to perform it. This methodology uses tissue microarray (TMA) materials and statistical analyses to evidence the possible variations occurring at colour and intensity levels as well as to evaluate the efficiency of image normalisation methods in correcting them. We applied our methodology to test different methods of image normalisation based on blind colour deconvolution that we adapted for IHC staining. These tests were carried out for different IHC experiments on different tissue types and targeting different proteins with different subcellular localisations. Our methodology enabled us to establish and validate inter-batch normalisation transforms which correct the non-relevant IHC staining variations. The normalised image series were then processed to extract coherent quantitative features characterising the IHC staining patterns.
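
    The sketch below illustrates the general idea with a fixed-matrix stain separation from scikit-image rather than the authors' blind colour deconvolution: stains are unmixed, each stain channel is linearly rescaled so its robust range matches a reference batch, and the image is recomposed. The percentile limits are assumed parameters.

```python
import numpy as np
from skimage.color import rgb2hed, hed2rgb

def normalise_ihc(image_rgb, reference_rgb, p=(1, 99)):
    """Deconvolve haematoxylin/eosin/DAB stains, rescale each stain
    channel to match the reference batch's robust range, recompose."""
    src, ref = rgb2hed(image_rgb), rgb2hed(reference_rgb)
    out = np.empty_like(src)
    for ch in range(3):
        s_lo, s_hi = np.percentile(src[..., ch], p)
        r_lo, r_hi = np.percentile(ref[..., ch], p)
        scale = (r_hi - r_lo) / max(s_hi - s_lo, 1e-8)
        out[..., ch] = (src[..., ch] - s_lo) * scale + r_lo
    return hed2rgb(out)
```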

  12. Image processing in digital pathology: an opportunity to solve inter-batch variability of immunohistochemical staining

    PubMed Central

    Van Eycke, Yves-Rémi; Allard, Justine; Salmon, Isabelle; Debeir, Olivier; Decaestecker, Christine

    2017-01-01

    Immunohistochemistry (IHC) is a widely used technique in pathology to evidence protein expression in tissue samples. However, this staining technique is known for presenting inter-batch variations. Whole slide imaging in digital pathology offers a possibility to overcome this problem by means of image normalisation techniques. In the present paper we propose a methodology to objectively evaluate the need for image normalisation and to identify the best way to perform it. This methodology uses tissue microarray (TMA) materials and statistical analyses to evidence the possible variations occurring at colour and intensity levels as well as to evaluate the efficiency of image normalisation methods in correcting them. We applied our methodology to test different methods of image normalisation based on blind colour deconvolution that we adapted for IHC staining. These tests were carried out for different IHC experiments on different tissue types and targeting different proteins with different subcellular localisations. Our methodology enabled us to establish and validate inter-batch normalisation transforms which correct the non-relevant IHC staining variations. The normalised image series were then processed to extract coherent quantitative features characterising the IHC staining patterns. PMID:28220842

  13. Linear- and Repetitive Feature Detection Within Remotely Sensed Imagery

    DTIC Science & Technology

    2017-04-01

    applicable to Python or other programming languages with image-processing capabilities. … The first methodology uses … remotely sensed images that are in panchromatic or true-color formats. Image-processing techniques, including Hough transforms, machine learning, and … data fusion … context-based processing …

  14. Population-based imaging biobanks as source of big data.

    PubMed

    Gatidis, Sergios; Heber, Sophia D; Storz, Corinna; Bamberg, Fabian

    2017-06-01

    Advances of computational sciences over the last decades have enabled the introduction of novel methodological approaches in biomedical research. Acquiring extensive and comprehensive data about a research subject and subsequently extracting significant information has opened new possibilities in gaining insight into biological and medical processes. This so-called big data approach has recently found entrance into medical imaging and numerous epidemiological studies have been implementing advanced imaging to identify imaging biomarkers that provide information about physiological processes, including normal development and aging but also on the development of pathological disease states. The purpose of this article is to present existing epidemiological imaging studies and to discuss opportunities, methodological and organizational aspects, and challenges that population imaging poses to the field of big data research.

  15. Possibilities of Processing Archival Photogrammetric Images Captured by Rollei 6006 Metric Camera Using Current Method

    NASA Astrophysics Data System (ADS)

    Dlesk, A.; Raeva, P.; Vach, K.

    2018-05-01

    Processing analog photogrammetric negatives using current methods brings new challenges and possibilities, for example, the creation of a 3D model from archival images, which enables the comparison of the historical and current states of cultural heritage objects. The main purpose of this paper is to present the possibilities of processing archival analog images captured by the Rollei 6006 metric photogrammetric camera. In 1994, the Czech company EuroGV s.r.o. carried out photogrammetric measurements of the former limestone quarry Great America, located in the Central Bohemian Region of the Czech Republic. All the negatives of the photogrammetric images, complete documentation, coordinates of geodetically measured ground control points, calibration reports and exterior orientations of the images calculated in the Combined Adjustment Program are preserved and were available for the current processing. The negatives were scanned and processed using the structure-from-motion (SfM) method. The result of the research is a statement of the accuracy that can be expected from the proposed methodology using Rollei metric images originally acquired for terrestrial intersection photogrammetry.

  16. Process perspective on image quality evaluation

    NASA Astrophysics Data System (ADS)

    Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

    2008-01-01

    The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes, and ignoring these processes and using only a few test images can lead to biased results. By using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape, and a countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations for the test image content, but not for the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention. The lack of an easily recognizable context in the test image may have contributed to this inconsistency. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.

  17. Measuring the complexity of design in real-time imaging software

    NASA Astrophysics Data System (ADS)

    Sangwan, Raghvinder S.; Vercellone-Smith, Pamela; Laplante, Phillip A.

    2007-02-01

    Due to the intricacies of the algorithms involved, the design of imaging software is considered to be more complex than that of non-image processing software (Sangwan et al., 2005). A recent investigation (Larsson and Laplante, 2006) examined the complexity of several image processing and non-image processing software packages along a wide variety of metrics, including those postulated by McCabe (1976), Chidamber and Kemerer (1994), and Martin (2003). This work found that it was not always possible to quantitatively compare the complexity between imaging applications and non-image processing systems. Newer research and an accompanying tool (Structure 101, 2006), however, provide a greatly simplified approach to measuring software complexity. Therefore it may be possible to definitively quantify the complexity differences between imaging and non-imaging software, between imaging and real-time imaging software, and between software programs of the same application type. In this paper, we review prior results and describe the methodology for measuring complexity in imaging systems. We then apply a new complexity measurement methodology to several sets of imaging and non-imaging code in order to compare the complexity differences between the two types of applications. The benefit of such quantification is far-reaching, for example, leading to more easily measured performance improvement and quality in real-time imaging code.
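
    One widely used metric the cited works build on, McCabe's cyclomatic complexity, can be computed for Python code in a few lines; the set of counted decision nodes below is one common variant, not the exact rule set of the tools discussed.

```python
import ast

# Decision points that add independent paths through the code
# (one common counting variant; BoolOp covers and/or short circuits).
_BRANCHES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
             ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """McCabe-style complexity: one base path plus one per decision point."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, _BRANCHES) for node in ast.walk(tree))

print(cyclomatic_complexity(
    "def f(x):\n"
    "    if x > 0:\n"
    "        return x\n"
    "    for i in range(3):\n"
    "        x += i\n"
    "    return x\n"))  # -> 3 (base path + if + for)
```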

  18. An image-processing methodology for extracting bloodstain pattern features.

    PubMed

    Arthur, Ravishka M; Humburg, Philomena J; Hoogenboom, Jerry; Baiker, Martin; Taylor, Michael C; de Bruin, Karla G

    2017-08-01

    There is a growing trend in forensic science to develop methods to make forensic pattern comparison tasks more objective. This has generally involved the application of suitable image-processing methods to provide numerical data for identification or comparison. This paper outlines a unique image-processing methodology that can be utilised by analysts to generate reliable pattern data that will assist them in forming objective conclusions about a pattern. A range of features were defined and extracted from a laboratory-generated impact spatter pattern. These features were based in part on bloodstain properties commonly used in the analysis of spatter bloodstain patterns. The values of these features were consistent with properties reported qualitatively for such patterns. The image-processing method developed shows considerable promise as a way to establish measurable discriminating pattern criteria that are lacking in current bloodstain pattern taxonomies. Copyright © 2017 Elsevier B.V. All rights reserved.
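
    A hedged sketch of the kind of per-stain features such a pipeline might extract from a thresholded pattern image, using scikit-image region properties; the impact-angle formula arcsin(width/length) is the classic bloodstain-pattern relation, and the feature set here is illustrative, not the paper's.

```python
import numpy as np
from skimage.measure import label, regionprops

def stain_features(binary_mask):
    """Per-stain features from a thresholded pattern image: size, shape
    (fitted ellipse), orientation, and the classic impact-angle estimate
    alpha = arcsin(width / length)."""
    feats = []
    for r in regionprops(label(binary_mask)):
        if r.major_axis_length == 0:       # skip degenerate single pixels
            continue
        ratio = min(r.minor_axis_length / r.major_axis_length, 1.0)
        feats.append({
            "area_px": r.area,
            "eccentricity": r.eccentricity,
            "orientation_rad": r.orientation,
            "impact_angle_deg": np.degrees(np.arcsin(ratio)),
        })
    return feats
```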

  19. A Methodology and Implementation for Annotating Digital Images for Context-appropriate Use in an Academic Health Care Environment

    PubMed Central

    Goede, Patricia A.; Lauman, Jason R.; Cochella, Christopher; Katzman, Gregory L.; Morton, David A.; Albertine, Kurt H.

    2004-01-01

    Use of digital medical images has become common over the last several years, coincident with the release of inexpensive, mega-pixel quality digital cameras and the transition to digital radiology operation by hospitals. One problem that clinicians, medical educators, and basic scientists encounter when handling images is the difficulty of using business and graphic arts commercial-off-the-shelf (COTS) software in multicontext authoring and interactive teaching environments. The authors investigated and developed software-supported methodologies to help clinicians, medical educators, and basic scientists become more efficient and effective in their digital imaging environments. The software that the authors developed provides the ability to annotate images based on a multispecialty methodology for annotation and visual knowledge representation. This annotation methodology is designed by consensus, with contributions from the authors and physicians, medical educators, and basic scientists in the Departments of Radiology, Neurobiology and Anatomy, Dermatology, and Ophthalmology at the University of Utah. The annotation methodology functions as a foundation for creating, using, reusing, and extending dynamic annotations in a context-appropriate, interactive digital environment. The annotation methodology supports the authoring process as well as output and presentation mechanisms. The annotation methodology is the foundation for a Windows implementation that allows annotated elements to be represented as structured eXtensible Markup Language and stored separate from the image(s). PMID:14527971

  20. Multiattribute selection of acute stroke imaging software platform for Extending the Time for Thrombolysis in Emergency Neurological Deficits (EXTEND) clinical trial.

    PubMed

    Churilov, Leonid; Liu, Daniel; Ma, Henry; Christensen, Soren; Nagakane, Yoshinari; Campbell, Bruce; Parsons, Mark W; Levi, Christopher R; Davis, Stephen M; Donnan, Geoffrey A

    2013-04-01

    The appropriateness of a software platform for rapid MRI assessment of the amount of salvageable brain tissue after stroke is critical for both the validity of the Extending the Time for Thrombolysis in Emergency Neurological Deficits (EXTEND) Clinical Trial of stroke thrombolysis beyond 4.5 hours and for stroke patient care outcomes. The objective of this research is to develop and implement a methodology for selecting the acute stroke imaging software platform most appropriate for the setting of a multi-centre clinical trial. A multi-disciplinary decision making panel formulated the set of preferentially independent evaluation attributes. Alternative Multi-Attribute Value Measurement methods were used to identify the best imaging software platform followed by sensitivity analysis to ensure the validity and robustness of the proposed solution. Four alternative imaging software platforms were identified. RApid processing of PerfusIon and Diffusion (RAPID) software was selected as the most appropriate for the needs of the EXTEND trial. A theoretically grounded generic multi-attribute selection methodology for imaging software was developed and implemented. The developed methodology assured both a high quality decision outcome and a rational and transparent decision process. This development contributes to stroke literature in the area of comprehensive evaluation of MRI clinical software. At the time of evaluation, RAPID software presented the most appropriate imaging software platform for use in the EXTEND clinical trial. The proposed multi-attribute imaging software evaluation methodology is based on sound theoretical foundations of multiple criteria decision analysis and can be successfully used for choosing the most appropriate imaging software while ensuring both robust decision process and outcomes. © 2012 The Authors. International Journal of Stroke © 2012 World Stroke Organization.
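
    The core of a multi-attribute value measurement is a weighted additive score per alternative; the sketch below shows the mechanics with entirely hypothetical attributes, weights, and platform names, not the EXTEND panel's actual attributes or scores.

```python
def additive_value(scores, weights):
    """Additive multi-attribute value: each platform gets the weighted sum
    of its single-attribute value scores (all scaled to 0-1)."""
    return {alt: sum(weights[a] * v for a, v in attrs.items())
            for alt, attrs in scores.items()}

# Hypothetical attributes and scores for illustration only.
weights = {"speed": 0.4, "accuracy": 0.4, "usability": 0.2}
scores = {
    "platform_A": {"speed": 0.9, "accuracy": 0.7, "usability": 0.8},
    "platform_B": {"speed": 0.6, "accuracy": 0.9, "usability": 0.9},
}
ranked = sorted(additive_value(scores, weights).items(),
                key=lambda kv: kv[1], reverse=True)
print(ranked)  # sensitivity analysis would perturb `weights` and re-rank
```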

  1. Methodology for diagnosing of skin cancer on images of dermatologic spots by spectral analysis.

    PubMed

    Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué

    2015-10-01

    In this paper a new methodology for diagnosing skin cancer on images of dermatologic spots using image processing is presented. Currently skin cancer is one of the most frequent diseases in humans. This methodology is based on Fourier spectral analysis using filters such as the classic, inverse and k-law nonlinear filters. The sample images were obtained by a medical specialist, and a new spectral technique was developed to obtain a quantitative measurement of the complex pattern found in cancerous skin spots. Finally, a spectral index is calculated to obtain a range of spectral indices defined for skin cancer. Our results show a confidence level of 95.4%.
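
    A minimal sketch of a phase-preserving k-law spectral filter and a simple derived scalar index; the index definition here is an illustrative stand-in, since the paper's exact spectral-index computation and calibrated ranges are not reproduced in the abstract.

```python
import numpy as np

def k_law_spectrum(image, k=0.3):
    """Phase-preserving k-law nonlinearity on the Fourier spectrum:
    |F|^k * exp(i*phase(F)); k < 1 flattens the spectrum and boosts
    the fine structure of the spot pattern."""
    F = np.fft.fft2(image)
    return np.abs(F) ** k * np.exp(1j * np.angle(F))

def spectral_index(image, k=0.3, rel_threshold=0.1):
    """Illustrative scalar index: fraction of spectral samples whose
    k-law magnitude exceeds a fraction of the peak. Ranges of such an
    index would be calibrated against diagnosed sample images."""
    mag = np.abs(k_law_spectrum(image, k))
    return float(np.mean(mag > rel_threshold * mag.max()))
```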

  2. Methodology for diagnosing of skin cancer on images of dermatologic spots by spectral analysis

    PubMed Central

    Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué

    2015-01-01

    In this paper a new methodology for diagnosing skin cancer on images of dermatologic spots using image processing is presented. Currently skin cancer is one of the most frequent diseases in humans. This methodology is based on Fourier spectral analysis using filters such as the classic, inverse and k-law nonlinear filters. The sample images were obtained by a medical specialist, and a new spectral technique was developed to obtain a quantitative measurement of the complex pattern found in cancerous skin spots. Finally, a spectral index is calculated to obtain a range of spectral indices defined for skin cancer. Our results show a confidence level of 95.4%. PMID:26504638

  3. A methodology to event reconstruction from trace images.

    PubMed

    Milliet, Quentin; Delémont, Olivier; Sapin, Eric; Margot, Pierre

    2015-03-01

    The widespread use of digital imaging devices for surveillance (CCTV) and entertainment (e.g., mobile phones, compact cameras) has increased the number of images recorded and the opportunities to consider images as traces or documentation of criminal activity. The forensic science literature focuses almost exclusively on technical issues and evidence assessment [1]. Earlier steps in the investigation phase have been neglected and must be considered. This article is the first comprehensive description of a methodology for event reconstruction using images. This formal methodology was conceptualised from practical experiences and applied to different contexts and case studies to test and refine it. Based on this practical analysis, we propose a systematic approach that includes a preliminary analysis followed by four main steps. These steps form a sequence in which the results of each step rely on those of the previous step. However, the methodology is not strictly linear; it is a cyclic, iterative progression for obtaining knowledge about an event. The preliminary analysis is a pre-evaluation phase, wherein the potential relevance of images is assessed. In the first step, images are detected and collected as pertinent trace material; the second step involves organising them and assessing their quality and informative potential. The third step includes reconstruction using clues about space, time and actions. Finally, in the fourth step, the images are evaluated and selected as evidence. These steps are described and illustrated using practical examples. The paper outlines how images elicit information about persons, objects, space, time and actions throughout the investigation process to reconstruct an event step by step. We emphasise the hypothetico-deductive reasoning framework, which demonstrates the contribution of images to generating, refining or eliminating propositions or hypotheses. This methodology provides a sound basis for extending image use as evidence and, more generally, as clues in investigation and crime reconstruction processes. Copyright © 2015 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.

  4. Automated Recognition of Vegetation and Water Bodies on the Territory of Megacities in Satellite Images of Visible and IR Bands

    NASA Astrophysics Data System (ADS)

    Mozgovoy, Dmitry K.; Hnatushenko, Volodymyr V.; Vasyliev, Volodymyr V.

    2018-04-01

    Vegetation and water bodies are fundamental elements of urban ecosystems, and their mapping is critical for urban and landscape planning and management. A methodology for the automated recognition of vegetation and water bodies on the territory of megacities in satellite images of sub-meter spatial resolution in the visible and IR bands is proposed. By processing multispectral images from the SuperView-1A satellite, vector layers of recognized vegetation and water objects were obtained. Analysis of the image processing results showed a sufficiently high accuracy in delineating the boundaries of recognized objects and a good separation of classes. The developed methodology provides a significant increase in the efficiency and reliability of updating maps of large cities while reducing financial costs. Due to the high degree of automation, the proposed methodology can be implemented as a geo-information web service operating in the interests of a wide range of public services and commercial institutions.
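
    The abstract does not name the recognition features; a common baseline for vegetation and water mapping from visible and IR bands uses the NDVI and NDWI indices, sketched below with assumed thresholds.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: high for vegetation."""
    return (nir - red) / (nir + red + 1e-8)

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters): high for water."""
    return (green - nir) / (green + nir + 1e-8)

def classify(red, green, nir, veg_t=0.3, water_t=0.2):
    """Per-pixel vegetation and water masks from visible + IR bands;
    thresholds are illustrative and would be tuned per sensor/scene."""
    vegetation = ndvi(nir, red) > veg_t
    water = ndwi(green, nir) > water_t
    return vegetation, water & ~vegetation
```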

  5. Advanced technology development for image gathering, coding, and processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1990-01-01

    Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.

  6. Unsupervised Detection of Planetary Craters by a Marked Point Process

    NASA Technical Reports Server (NTRS)

    Troglio, G.; Benediktsson, J. A.; Le Moigne, J.; Moser, G.; Serpico, S. B.

    2011-01-01

    With the launch of several planetary missions in the last decade, a large number of planetary images is being acquired. Because of the huge amount of acquired data, automatic and robust processing techniques are preferable for data analysis. Here, the aim is to achieve a robust and general methodology for crater detection. A novel technique based on a marked point process is proposed. First, the contours in the image are extracted. The object boundaries are modeled as a configuration of an unknown number of random ellipses, i.e., the contour image is considered as a realization of a marked point process. Then, an energy function is defined, containing both an a priori energy and a likelihood term. The global minimum of this function is estimated by using reversible-jump Markov chain Monte Carlo dynamics and a simulated annealing scheme. The main idea behind marked point processes is to model objects within a stochastic framework: marked point processes represent a very promising current approach in stochastic image modeling and provide a powerful and methodologically rigorous framework to efficiently map and detect objects and structures in an image with excellent robustness to noise. The proposed method for crater detection has several feasible applications. One such application area is image registration by matching the extracted features.
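
    A skeleton of the simulated-annealing optimisation over ellipse configurations; the energy and proposal functions, which would carry the prior and likelihood terms described above, are left as user-supplied callables, and acceptance follows the usual Metropolis rule. This shows the optimisation shell only, not the paper's energy model.

```python
import math
import random

def anneal_ellipses(energy, propose, init, t0=1.0, cooling=0.995, steps=20000):
    """Simulated annealing for a marked point process: `init` is a list of
    ellipse marks (x, y, a, b, theta); `propose` applies a random
    birth/death/perturbation move; `energy` = prior + likelihood terms."""
    state, e = list(init), energy(init)
    t = t0
    for _ in range(steps):
        cand = propose(state)
        e_cand = energy(cand)
        # Metropolis acceptance: always accept downhill, sometimes uphill.
        if e_cand < e or random.random() < math.exp((e - e_cand) / t):
            state, e = cand, e_cand
        t *= cooling                      # annealing schedule
    return state, e
```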

  7. An innovative and shared methodology for event reconstruction using images in forensic science.

    PubMed

    Milliet, Quentin; Jendly, Manon; Delémont, Olivier

    2015-09-01

    This study presents an innovative methodology for forensic science image analysis for event reconstruction. The methodology is based on experiences from real cases. It provides real added value to technical guidelines such as standard operating procedures (SOPs) and enriches the community of practices at stake in this field. This bottom-up solution outlines the many facets of analysis and the complexity of the decision-making process. Additionally, the methodology provides a backbone for articulating more detailed and technical procedures and SOPs. It emerged from a grounded theory approach; data from individual and collective interviews with eight Swiss and nine European forensic image analysis experts were collected and interpreted in a continuous, circular and reflexive manner. Throughout the process of conducting interviews and panel discussions, similarities and discrepancies were discussed in detail to provide a comprehensive picture of practices and points of view and to ultimately formalise shared know-how. Our contribution sheds light on the complexity of the choices, actions and interactions along the path of data collection and analysis, enhancing both the researchers' and participants' reflexivity. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. Automatic Reacquisition of Satellite Positions by Detecting Their Expected Streaks in Astronomical Images

    NASA Astrophysics Data System (ADS)

    Levesque, M.

    Artificial satellites, and particularly space junk, drift continuously from their known orbits. In the surveillance-of-space context, they must be observed frequently to ensure that the corresponding orbital parameter database entries are up-to-date. Autonomous ground-based optical systems are periodically tasked to observe these objects, calculate the difference between their predicted and real positions, and update the object orbital parameters. The real satellite positions are provided by the detection of the satellite streaks in astronomical images specifically acquired for this purpose. This paper presents the image processing techniques used to detect and extract the satellite positions. The methodology includes several processing steps: image background estimation and removal, star detection and removal, an iterative matched filter for streak detection, and finally false-alarm rejection algorithms. This detection methodology is able to detect very faint objects. Simulated data were used to evaluate the methodology's performance and determine the sensitivity limits within which the algorithm can perform detection without false alarms, which is essential to avoid corruption of the orbital parameter database.
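
    A hedged sketch of such a detection chain: background estimation and removal, suppression of point-like stars via a morphological opening with a line-shaped element, and a matched filter swept over candidate streak orientations. Kernel sizes, the angle step, and function names are assumptions, not the paper's parameters.

```python
import numpy as np
from scipy import ndimage

def streak_kernel(length=21, angle_deg=0.0):
    """Unit-energy line kernel approximating the expected streak."""
    k = np.zeros((length, length))
    k[length // 2, :] = 1.0
    k = ndimage.rotate(k, angle_deg, reshape=False, order=1)
    return k / np.linalg.norm(k)

def detect_streak(image, angles=range(0, 180, 10), length=21):
    """Background removal, then per-orientation star suppression and
    matched filtering; returns the peak response and its orientation."""
    flat = image - ndimage.median_filter(image, size=51)  # background
    best = (-np.inf, None)
    for a in angles:
        k = streak_kernel(length, a)
        line = k > 0.5 * k.max()                 # line-shaped footprint
        # Opening with a line element erases point sources (stars) but
        # preserves streaks aligned with the element.
        opened = ndimage.grey_opening(flat, footprint=line)
        peak = ndimage.convolve(opened, k).max() # matched-filter response
        if peak > best[0]:
            best = (peak, a)
    return best
```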

  9. Development of a Methodology for Predicting Forest Area for Large-Area Resource Monitoring

    Treesearch

    William H. Cooke

    2001-01-01

    The U.S. Department of Agriculture, Forest Service, Southern Research Station, appointed a remote-sensing team to develop an image-processing methodology for mapping forest lands over large geographic areas. The team has presented a repeatable methodology, which is based on regression modeling of Advanced Very High Resolution Radiometer (AVHRR) and Landsat Thematic…

  10. Short Project-Based Learning with MATLAB Applications to Support the Learning of Video-Image Processing

    ERIC Educational Resources Information Center

    Gil, Pablo

    2017-01-01

    University courses concerning Computer Vision and Image Processing are generally taught using a traditional methodology that is focused on the teacher rather than on the students. This approach is consequently not effective when teachers seek to attain cognitive objectives involving their students' critical thinking. This manuscript covers the…

  11. Designing Image Analysis Pipelines in Light Microscopy: A Rational Approach.

    PubMed

    Arganda-Carreras, Ignacio; Andrey, Philippe

    2017-01-01

    With the progress of microscopy techniques and the rapidly growing amounts of acquired imaging data, there is an increased need for automated image processing and analysis solutions in biological studies. Each new application requires the design of a specific image analysis pipeline, by assembling a series of image processing operations. Many commercial and free bioimage analysis software packages are now available, and several textbooks and reviews have presented the mathematical and computational fundamentals of image processing and analysis. Tens, if not hundreds, of algorithms and methods have been developed and integrated into image analysis software, resulting in a combinatorial explosion of possible image processing sequences. This paper presents a general guideline methodology to rationally address the design of image processing and analysis pipelines. The originality of the proposed approach is to follow an iterative, backwards procedure from the target objectives of analysis. The proposed goal-oriented strategy should help biologists to better apprehend image analysis in the context of their research and should allow them to efficiently interact with image processing specialists.

  12. Semi-automatic image analysis methodology for the segmentation of bubbles and drops in complex dispersions occurring in bioreactors

    NASA Astrophysics Data System (ADS)

    Taboada, B.; Vega-Alvarado, L.; Córdova-Aguilar, M. S.; Galindo, E.; Corkidi, G.

    2006-09-01

    Characterization of multiphase systems occurring in fermentation processes is a time-consuming and tedious process when manual methods are used. This work describes a new semi-automatic methodology for the on-line assessment of the diameters of oil drops and air bubbles occurring in a complex simulated fermentation broth. High-quality digital images were obtained from the interior of a mechanically stirred tank. These images were pre-processed to find segments of edges belonging to the objects of interest. The contours of air bubbles and oil drops were then reconstructed using an improved Hough transform algorithm, which was tested in two-, three- and four-phase simulated fermentation model systems. The results were compared against those obtained manually by a trained observer, showing no statistically significant differences. The method was able to reduce the total processing time for the measurements of bubbles and drops in different systems by 21-50% and the manual intervention time for the segmentation procedure by 80-100%.
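
    The circle-detection core of such a pipeline can be sketched with scikit-image's circular Hough transform; the paper's improved Hough algorithm is not reproduced here, and the radius range and peak count are assumed parameters.

```python
import numpy as np
from skimage import feature, transform

def detect_bubbles(gray, radii=np.arange(5, 40, 2), n=50):
    """Circle detection for bubbles/drops: Canny edge segments feed a
    circular Hough transform; peaks give centres and radii in pixels."""
    edges = feature.canny(gray, sigma=2.0)
    hough = transform.hough_circle(edges, radii)
    accums, cx, cy, r = transform.hough_circle_peaks(
        hough, radii, total_num_peaks=n)
    return list(zip(cx, cy, r, accums))

# Diameters (2*r, scaled by the optical calibration) can then be checked
# or corrected manually, mirroring the semi-automatic workflow.
```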

  13. Advances in interpretation of subsurface processes with time-lapse electrical imaging

    USGS Publications Warehouse

    Singha, Kaminit; Day-Lewis, Frederick D.; Johnson, Tim B.; Slater, Lee D.

    2015-01-01

    Electrical geophysical methods, including electrical resistivity, time-domain induced polarization, and complex resistivity, have become commonly used to image the near subsurface. Here, we outline their utility for time-lapse imaging of hydrological, geochemical, and biogeochemical processes, focusing on new instrumentation, processing, and analysis techniques specific to monitoring. We review data collection procedures, parameters measured, and petrophysical relationships and then outline the state of the science with respect to inversion methodologies, including coupled inversion. We conclude by highlighting recent research focused on innovative applications of time-lapse imaging in hydrology, biology, ecology, and geochemistry, among other areas of interest.

  14. Advances in interpretation of subsurface processes with time-lapse electrical imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singha, Kamini; Day-Lewis, Frederick D.; Johnson, Timothy C.

    2015-03-15

    Electrical geophysical methods, including electrical resistivity, time-domain induced polarization, and complex resistivity, have become commonly used to image the near subsurface. Here, we outline their utility for time-lapse imaging of hydrological, geochemical, and biogeochemical processes, focusing on new instrumentation, processing, and analysis techniques specific to monitoring. We review data collection procedures, parameters measured, and petrophysical relationships and then outline the state of the science with respect to inversion methodologies, including coupled inversion. We conclude by highlighting recent research focused on innovative applications of time-lapse imaging in hydrology, biology, ecology, and geochemistry, among other areas of interest.

  15. The PICWidget

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey; Fox, Jason; Rabe, Kenneth; Shu, I-Hsiang; Powell, Mark

    2007-01-01

    The Plug-in Image Component Widget (PICWidget) is a software component for building digital imaging applications. The component is part of a methodology described in GIS Methodology for Planning Planetary-Rover Operations (NPO-41812), which appears elsewhere in this issue of NASA Tech Briefs. Planetary rover missions return a large number and wide variety of image data products that vary in complexity in many ways. Supported by a powerful, flexible image-data-processing pipeline, the PICWidget can process and render many types of imagery, including (but not limited to) thumbnail, subframed, downsampled, stereoscopic, and mosaic images; images coregistered with orbital data; and synthetic red/green/blue images. The PICWidget is capable of efficiently rendering images from data representing many more pixels than are available at the computer workstation where the images are to be displayed. The PICWidget is implemented as an Eclipse plug-in using the Standard Widget Toolkit, which provides a straightforward interface for re-use of the PICWidget in any number of application programs built upon the Eclipse application framework. Because the PICWidget is tile-based and performs aggressive tile caching, it has the flexibility to perform faster or slower, depending on whether more or less memory is available.
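
    The actual widget is a Java/Eclipse SWT component, but the tile-caching idea it relies on can be sketched language-agnostically; below is a minimal LRU tile cache in Python with illustrative names, not the PICWidget API.

```python
from collections import OrderedDict

class TileCache:
    """Minimal LRU tile cache illustrating tile-based rendering: only
    recently used tiles stay in memory, so images far larger than
    available RAM can still be displayed responsively."""

    def __init__(self, capacity, load_tile):
        self.capacity = capacity          # max tiles kept in memory
        self.load_tile = load_tile        # (col, row, level) -> pixel data
        self._tiles = OrderedDict()

    def get(self, col, row, level):
        key = (col, row, level)
        if key in self._tiles:
            self._tiles.move_to_end(key)  # mark as most recently used
        else:
            if len(self._tiles) >= self.capacity:
                self._tiles.popitem(last=False)  # evict least recently used
            self._tiles[key] = self.load_tile(col, row, level)
        return self._tiles[key]
```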

  16. A New Vegetation Segmentation Approach for Cropped Fields Based on Threshold Detection from Hue Histograms

    PubMed Central

    Hassanein, Mohamed; El-Sheimy, Naser

    2018-01-01

    Over the last decade, the use of unmanned aerial vehicle (UAV) technology has evolved significantly in different applications, as it provides a special platform capable of combining the benefits of terrestrial and aerial remote sensing. Therefore, such technology has been established as an important source of data collection for different precision agriculture (PA) applications such as crop health monitoring and weed management. Generally, these PA applications depend on performing a vegetation segmentation process as an initial step, which aims to detect the vegetation objects in collected agriculture field images. The main result of the vegetation segmentation process is a binary image, where vegetation is presented in white and the remaining objects are presented in black. Such a process can easily be performed using different vegetation indexes derived from multispectral imagery. Recently, to expand the use of UAV imagery systems for PA applications, it has become important to reduce the cost of such systems by using low-cost RGB cameras. Thus, developing vegetation segmentation techniques for RGB images is a challenging problem. This paper introduces a new vegetation segmentation methodology for low-cost UAV RGB images, which depends on using the Hue color channel. The proposed methodology follows the assumption that the colors in any agriculture field image can be divided into vegetation and non-vegetation colors. Therefore, four main steps are developed to detect five different threshold values using the hue histogram of the RGB image; these thresholds are capable of discriminating the dominant color, either vegetation or non-vegetation, within the agriculture field image. The achieved results for implementing the proposed methodology showed its ability to generate accurate and stable vegetation segmentation performance, with a mean accuracy of 87.29% and a standard deviation of 12.5%. PMID:29670055
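
    A single-interval stand-in for the paper's five-threshold hue-histogram analysis: find the dominant hue mode and grow an interval around it until the histogram falls below a fraction of the peak. The bin count and the 5% floor are assumed parameters, not the paper's.

```python
import numpy as np
from skimage.color import rgb2hsv

def hue_dominant_mask(image_rgb, bins=180):
    """Binary mask of the dominant hue class (vegetation or background):
    build the hue histogram, locate its peak, and expand an interval
    around the peak until the counts drop below 5% of the peak value."""
    hue = rgb2hsv(image_rgb)[..., 0]           # hue in [0, 1)
    hist, edges = np.histogram(hue, bins=bins, range=(0.0, 1.0))
    peak = int(np.argmax(hist))
    floor = 0.05 * hist[peak]
    lo = hi = peak
    while lo > 0 and hist[lo - 1] > floor:
        lo -= 1
    while hi < bins - 1 and hist[hi + 1] > floor:
        hi += 1
    return (hue >= edges[lo]) & (hue < edges[hi + 1])
```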

  17. Three-dimensional imaging of flat natural and cultural heritage objects by a Compton scattering modality

    NASA Astrophysics Data System (ADS)

    Guerrero Prado, Patricio; Nguyen, Mai K.; Dumas, Laurent; Cohen, Serge X.

    2017-01-01

    Characterization and interpretation of flat ancient material objects, such as those found in archaeology, paleoenvironments, paleontology, and cultural heritage, have remained a challenging task to perform by means of conventional x-ray tomography methods due to their anisotropic morphology and flattened geometry. To overcome the limitations of the mentioned methodologies for such samples, an imaging modality based on Compton scattering is proposed in this work. Classical x-ray tomography treats Compton scattering data as noise in the image formation process, while in Compton scattering tomography the conditions are set such that Compton data become the principal image contrasting agent. Under these conditions, we are able, first, to avoid relative rotations between the sample and the imaging setup, and second, to obtain three-dimensional data even when the object is supported by a dense material by exploiting backscattered photons. Mathematically this problem is addressed by means of a conical Radon transform and its inversion. The image formation process and object reconstruction model are presented. The feasibility of this methodology is supported by numerical simulations.

  18. Automated motion artifact removal for intravital microscopy, without a priori information.

    PubMed

    Lee, Sungon; Vinegoni, Claudio; Sebas, Matthew; Weissleder, Ralph

    2014-03-28

    Intravital fluorescence microscopy, through extended penetration depth and imaging resolution, provides the ability to image at cellular and subcellular resolution in live animals, presenting an opportunity for new insights into in vivo biology. Unfortunately, physiologically induced motion components due to respiration and cardiac activity are major sources of image artifacts and impose severe limitations on the effective imaging resolution that can ultimately be achieved in vivo. Here we present a novel imaging methodology capable of automatically removing motion artifacts during intravital microscopy imaging of organs and orthotopic tumors. The method is universally applicable to different laser scanning modalities, including confocal and multiphoton microscopy, and offers artifact-free reconstructions independent of the physiological motion source and imaged organ. The methodology, which is based on raw data acquisition followed by image processing, is demonstrated here for both cardiac and respiratory motion compensation in the mouse heart, kidney, liver, pancreas and dorsal window chamber.

  19. Automated motion artifact removal for intravital microscopy, without a priori information

    PubMed Central

    Lee, Sungon; Vinegoni, Claudio; Sebas, Matthew; Weissleder, Ralph

    2014-01-01

    Intravital fluorescence microscopy, through extended penetration depth and imaging resolution, provides the ability to image at cellular and subcellular resolution in live animals, presenting an opportunity for new insights into in vivo biology. Unfortunately, physiologically induced motion components due to respiration and cardiac activity are major sources of image artifacts and impose severe limitations on the effective imaging resolution that can ultimately be achieved in vivo. Here we present a novel imaging methodology capable of automatically removing motion artifacts during intravital microscopy imaging of organs and orthotopic tumors. The method is universally applicable to different laser scanning modalities, including confocal and multiphoton microscopy, and offers artifact-free reconstructions independent of the physiological motion source and imaged organ. The methodology, which is based on raw data acquisition followed by image processing, is demonstrated here for both cardiac and respiratory motion compensation in the mouse heart, kidney, liver, pancreas and dorsal window chamber. PMID:24676021

  20. Digital-image processing and image analysis of glacier ice

    USGS Publications Warehouse

    Fitzpatrick, Joan J.

    2013-01-01

    This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.
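
    The document's workflow is built on commercial tools, but the core grain-statistics step can be sketched with scikit-image (threshold, label, measure); the Otsu threshold and the bright-grain-interior assumption are illustrative simplifications, not the handbook's procedure.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def grain_statistics(gray):
    """Grain statistics from a grayscale thin-section image: segment
    grains (bright interiors assumed), label them, and report per-grain
    equivalent diameters plus simple aggregate size statistics."""
    grains = gray > threshold_otsu(gray)
    labels = label(grains)
    diam = np.array([r.equivalent_diameter for r in regionprops(labels)])
    if diam.size == 0:
        return {"n_grains": 0, "mean_diameter_px": 0.0,
                "median_diameter_px": 0.0}
    return {"n_grains": int(diam.size),
            "mean_diameter_px": float(diam.mean()),
            "median_diameter_px": float(np.median(diam))}
```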

  1. Sensor fusion for synthetic vision

    NASA Technical Reports Server (NTRS)

    Pavel, M.; Larimer, J.; Ahumada, A.

    1991-01-01

    Display methodologies are explored for fusing images gathered by millimeter wave sensors with images rendered from an on-board terrain database to facilitate visually guided flight and ground operations in low visibility conditions. An approach to fusion based on multiresolution image representation and processing is described, which facilitates the fusion of images that differ in resolution, both within and between images. To investigate possible fusion methods, a workstation-based simulation environment is being developed.
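
    A minimal sketch of multiresolution fusion with a Laplacian pyramid: per-level selection of the stronger detail coefficient from either source, with averaged low-pass residuals. Filter widths, the level count, and the selection rule are assumptions, not the authors' method.

```python
import numpy as np
from scipy import ndimage

def pyr_down(img):
    """Blur then decimate by 2 (Gaussian pyramid step)."""
    return ndimage.gaussian_filter(img, 1.0)[::2, ::2]

def pyr_up(img, shape):
    """Resample back up to a given shape via bilinear interpolation."""
    return ndimage.zoom(img, (shape[0] / img.shape[0],
                              shape[1] / img.shape[1]), order=1)

def laplacian_pyramid(img, levels=4):
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        down = pyr_down(cur)
        pyr.append(cur - pyr_up(down, cur.shape))  # detail band
        cur = down
    pyr.append(cur)                                # low-pass residual
    return pyr

def fuse(sensor_img, synthetic_img, levels=4):
    """Keep the stronger detail coefficient from either source at each
    scale, average the residuals, then collapse the pyramid."""
    pa = laplacian_pyramid(sensor_img, levels)
    pb = laplacian_pyramid(synthetic_img, levels)
    details = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in zip(pa[:-1], pb[:-1])]
    out = 0.5 * (pa[-1] + pb[-1])
    for band in reversed(details):
        out = pyr_up(out, band.shape) + band
    return out
```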

  2. An evaluation of the directed flow graph methodology

    NASA Technical Reports Server (NTRS)

    Snyder, W. E.; Rajala, S. A.

    1984-01-01

    The applicability of the Directed Graph Methodology (DGM) to the design and analysis of special purpose image and signal processing hardware was evaluated. A special purpose image processing system was designed and described using DGM. The design, suitable for very large scale integration (VLSI), implements a region labeling technique. Two computer chips were designed, both using metal-nitride-oxide-silicon (MNOS) technology, as well as a functional system utilizing those chips to perform real-time region labeling. The system is described in terms of DGM primitives. As it is currently implemented, DGM is inappropriate for describing synchronous, tightly coupled, special purpose systems. The nature of the DGM formalism lends itself more readily to modeling networks of general purpose processors.
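
    A software analogue of the region labeling technique the chips implement, as classic two-pass connected-component labeling with union-find (4-connectivity); this illustrates the algorithm class only, not the MNOS hardware design.

```python
import numpy as np

def label_regions(binary):
    """Two-pass connected-component (region) labeling with union-find."""
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    labels = np.zeros(binary.shape, dtype=int)
    next_label = 1
    for i, j in np.argwhere(binary):          # first pass: provisional labels
        neighbours = [labels[i - 1, j] if i > 0 else 0,
                      labels[i, j - 1] if j > 0 else 0]
        roots = {find(n) for n in neighbours if n}
        if not roots:
            parent[next_label] = next_label
            labels[i, j] = next_label
            next_label += 1
        else:
            keep = min(roots)
            labels[i, j] = keep
            for r in roots:                   # record label equivalences
                parent[r] = keep
    for i, j in np.argwhere(labels):          # second pass: resolve labels
        labels[i, j] = find(labels[i, j])
    return labels
```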

  3. Image gathering, coding, and processing: End-to-end optimization for efficient and robust acquisition of visual information

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.; Fales, Carl L.

    1990-01-01

    Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.

  4. On a methodology for robust segmentation of nonideal iris images.

    PubMed

    Schmid, Natalia A; Zuo, Jinyu

    2010-06-01

    Iris biometric is one of the most reliable biometrics with respect to performance. However, this reliability is a function of the ideality of the data. One of the most important steps in processing nonideal data is reliable and precise segmentation of the iris pattern from the remaining background. In this paper, a segmentation methodology that aims at compensating for various nonidealities contained in iris images during segmentation is proposed. The virtue of this methodology lies in its capability to reliably segment nonideal imagery that is simultaneously affected by such factors as specular reflection, blur, lighting variation, occlusion, and off-angle viewing. We demonstrate the robustness of our segmentation methodology by evaluating ideal and nonideal data sets, namely, the Chinese Academy of Sciences iris data version 3 interval subdirectory, the iris challenge evaluation data, the West Virginia University (WVU) data, and the WVU off-angle data. Furthermore, we compare our performance to that of our implementation of Camus and Wildes's algorithm and Masek's algorithm. We demonstrate considerable improvement in segmentation performance over the aforementioned algorithms.
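
    For contrast with the robust methodology above, a minimal baseline iris/pupil localization can be written with a Hough circle transform; nonideal imagery defeats a naive approach like this, which is precisely the problem the paper addresses. The file name and parameter values below are hypothetical.

        # Naive circular-boundary baseline (OpenCV), for illustration only.
        import cv2
        import numpy as np

        eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
        blur = cv2.medianBlur(eye, 7)                      # damp specular highlights
        circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1,
                                   minDist=eye.shape[0] // 2,
                                   param1=100, param2=30,
                                   minRadius=20, maxRadius=120)
        if circles is not None:
            x, y, r = np.round(circles[0, 0]).astype(int)  # strongest candidate
            mask = np.zeros_like(eye)
            cv2.circle(mask, (x, y), r, 255, -1)           # crude iris-region mask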

  5. A method to perform a fast fourier transform with primitive image transformations.

    PubMed

    Sheridan, Phil

    2007-05-01

    The Fourier transform is one of the most important transformations in image processing. A major component of this influence comes from the ability to implement it efficiently on a digital computer. This paper describes a new methodology to perform a fast Fourier transform (FFT). This methodology emerges from considerations of the natural physical constraints imposed by image capture devices (camera/eye). The novel aspects of the specific FFT method described include: 1) the bit-wise reversal re-grouping operation of the conventional FFT is replaced by the use of lossless image rotation and scaling, and 2) the usual arithmetic operations of complex multiplication are replaced with integer addition. The FFT presented in this paper is developed by extending a discrete and finite image algebra, named Spiral Honeycomb Image Algebra (SHIA), to a continuous version, named SHIAC.

  6. On the use of EEG or MEG brain imaging tools in neuromarketing research.

    PubMed

    Vecchiato, Giovanni; Astolfi, Laura; De Vico Fallani, Fabrizio; Toppi, Jlenia; Aloise, Fabio; Bez, Francesco; Wei, Daming; Kong, Wanzeng; Dai, Jounging; Cincotti, Febo; Mattia, Donatella; Babiloni, Fabio

    2011-01-01

    Here we present an overview of published papers of interest for marketing research employing electroencephalogram (EEG) and magnetoencephalogram (MEG) methods. The interest in these methodologies lies in their high temporal resolution, as opposed to the functional magnetic resonance imaging (fMRI) methodology also largely used in marketing research. In addition, EEG and MEG technologies have greatly improved their spatial resolution in recent decades with the introduction of advanced signal processing methodologies. By presenting data gathered through MEG and high-resolution EEG, we show the kind of information it is possible to gather with these methodologies while subjects watch marketing-relevant stimuli; such information relates to the memorization and pleasantness of those stimuli. We note that temporal and frequency patterns of brain signals are able to provide descriptors conveying information about the cognitive and emotional processes of subjects observing commercial advertisements. This information could be unobtainable through the common tools used in standard marketing research. We also show an example of how an EEG methodology could be used to analyze cultural differences in the viewing of video commercials for carbonated beverages in Western and Eastern countries.

  7. On the Use of EEG or MEG Brain Imaging Tools in Neuromarketing Research

    PubMed Central

    Vecchiato, Giovanni; Astolfi, Laura; De Vico Fallani, Fabrizio; Toppi, Jlenia; Aloise, Fabio; Bez, Francesco; Wei, Daming; Kong, Wanzeng; Dai, Jounging; Cincotti, Febo; Mattia, Donatella; Babiloni, Fabio

    2011-01-01

    Here we present an overview of published papers of interest for marketing research employing electroencephalogram (EEG) and magnetoencephalogram (MEG) methods. The interest in these methodologies lies in their high temporal resolution, as opposed to the functional magnetic resonance imaging (fMRI) methodology also largely used in marketing research. In addition, EEG and MEG technologies have greatly improved their spatial resolution in recent decades with the introduction of advanced signal processing methodologies. By presenting data gathered through MEG and high-resolution EEG, we show the kind of information it is possible to gather with these methodologies while subjects watch marketing-relevant stimuli; such information relates to the memorization and pleasantness of those stimuli. We note that temporal and frequency patterns of brain signals are able to provide descriptors conveying information about the cognitive and emotional processes of subjects observing commercial advertisements. This information could be unobtainable through the common tools used in standard marketing research. We also show an example of how an EEG methodology could be used to analyze cultural differences in the viewing of video commercials for carbonated beverages in Western and Eastern countries. PMID:21960996

  8. Optimization of oncological ¹⁸F-FDG PET/CT imaging based on a multiparameter analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menezes, Vinicius O., E-mail: vinicius@radtec.com.br; Machado, Marcos A. D.; Queiroz, Cleiton C.

    2016-02-15

    Purpose: This paper describes a method to achieve consistent clinical image quality in ¹⁸F-FDG scans accounting for patient habitus, dose regimen, image acquisition, and processing techniques. Methods: Oncological PET/CT scan data for 58 subjects were evaluated retrospectively to derive analytical curves that predict image quality. Patient noise equivalent count rate and coefficient of variation (CV) were used as metrics in the analysis. Optimized acquisition protocols were identified and prospectively applied to 179 subjects. Results: The adoption of different schemes for three body mass ranges (<60 kg, 60–90 kg, >90 kg) allows improved image quality with both point spread function and ordered-subsets expectation maximization-3D reconstruction methods. The application of this methodology showed that CV improved significantly (p < 0.0001) in clinical practice. Conclusions: Consistent oncological PET/CT image quality on a high-performance scanner was achieved from an analysis of the relations existing between dose regimen, patient habitus, acquisition, and processing techniques. The proposed methodology may be used by PET/CT centers to develop protocols to standardize PET/CT imaging procedures and achieve better patient management and cost-effective operations.
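
    The image-quality metric at the heart of this analysis, the coefficient of variation over a uniform reference region, together with the body-mass grouping, can be expressed as below. This is a generic sketch; the ROI definition and thresholds used in the paper are what matter clinically.

        # Sketch of the CV metric and body-mass grouping; values are illustrative.
        import numpy as np

        def coefficient_of_variation(roi_voxels):
            roi = np.asarray(roi_voxels, dtype=float)   # e.g., voxels from a liver ROI
            return roi.std() / roi.mean()               # lower CV = less image noise

        def mass_group(weight_kg):
            if weight_kg < 60:
                return "<60 kg"
            return "60-90 kg" if weight_kg <= 90 else ">90 kg"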

  9. Applying industrial engineering practices to radiology.

    PubMed

    Rosen, Len

    2004-01-01

    Seven hospitals in Oregon and Washington have successfully adopted the Toyota Production System (TPS). Developed by Taiichi Ohno, TPS focuses on finding efficiencies and cost savings in manufacturing processes. A similar effort has occurred in Canada, where Toronto's Hospital for Sick Children has developed a database for its diagnostic imaging department built on the principles of TPS applied to patient encounters. Developed over the last 5 years, the database currently manages all interventional patient procedures for quality assurance, inventory, equipment, and labor. By applying industrial engineering methodology to manufacturing processes, it is possible to manage these constraints, eliminate the obstacles to achieving streamlined processes, and keep the cost of delivering products and services under control. Industrial engineering methodology has encouraged all stakeholders in manufacturing plants to become participants in dealing with constraints. It has empowered those on the shop floor as well as management to become partners in the change process. Using a manufacturing process model to organize patient procedures enables imaging departments and imaging centers to generate reports that can help them understand utilization of labor, materials, equipment, and rooms. Administrators can determine the cost of individual procedures as well as the total and average cost of specific procedure types. When Toronto's Hospital for Sick Children first applied industrial engineering methodology to interventional radiology patient encounters in medical imaging, it focused on materials management. Early in the process, the return on investment became apparent as the department improved its management of more than $500,000 of inventory. The calculated accumulated savings over 4 years for 10,000 interventional procedures alone amounted to more than $140,000. The medical imaging department in this hospital is only now beginning to apply what it has learned to other factors contributing to case cost. It has started to analyze its service contracts with equipment vendors. The department also is accumulating data to measure room, equipment, and labor utilization. The hospital now has a true picture of the real cost associated with each patient encounter in medical imaging. It can now begin to manage case costs, perform better capacity planning, create more effective relationships with its material suppliers, and optimize scheduling of patients and staff.

  10. Supervised restoration of degraded medical images using multiple-point geostatistics.

    PubMed

    Pham, Tuan D

    2012-06-01

    Reducing noise in medical images has been an important issue of research and development for medical diagnosis, patient treatment, and validation of biomedical hypotheses. Noise inherently exists in medical and biological images due to acquisition and transmission in any imaging device. Unlike image enhancement, image restoration is the process of removing noise from a degraded image in order to recover as much of its original version as possible. This paper presents a statistically supervised approach for medical image restoration using the concept of multiple-point geostatistics. Experimental results have shown the effectiveness of the proposed technique, which has potential as a new methodology for medical and biological image processing. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  11. Classification of malignant and benign lung nodules using taxonomic diversity index and phylogenetic distance.

    PubMed

    de Sousa Costa, Robherson Wector; da Silva, Giovanni Lucca França; de Carvalho Filho, Antonio Oseas; Silva, Aristófanes Corrêa; de Paiva, Anselmo Cardoso; Gattass, Marcelo

    2018-05-23

    Lung cancer is the leading cause of cancer death worldwide and has one of the lowest survival rates after diagnosis. Therefore, this study proposes a methodology for classifying lung nodules as benign or malignant based on image processing and pattern recognition techniques. Mean phylogenetic distance (MPD) and the taxonomic diversity index (Δ) were used as texture descriptors. Finally, a genetic algorithm in conjunction with a support vector machine was applied to select the best training model. The proposed methodology was tested on computed tomography (CT) images from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), with a best sensitivity of 93.42%, specificity of 91.21%, accuracy of 91.81%, and area under the ROC curve of 0.94. The results demonstrate the promising performance of texture extraction techniques using mean phylogenetic distance and the taxonomic diversity index combined with phylogenetic trees. Graphical Abstract: Stages of the proposed methodology.

  12. Image-guided endobronchial ultrasound

    NASA Astrophysics Data System (ADS)

    Higgins, William E.; Zang, Xiaonan; Cheirsilp, Ronnarit; Byrnes, Patrick; Kuhlengel, Trevor; Bascom, Rebecca; Toth, Jennifer

    2016-03-01

    Endobronchial ultrasound (EBUS) is now recommended as a standard procedure for in vivo verification of extraluminal diagnostic sites during cancer-staging bronchoscopy. Yet, physicians vary considerably in their skills at using EBUS effectively. Regarding existing bronchoscopy guidance systems, studies have shown their effectiveness in the lung-cancer management process. With such a system, a patient's X-ray computed tomography (CT) scan is used to plan a procedure to regions of interest (ROIs). This plan is then used during follow-on guided bronchoscopy. Recent clinical guidelines for lung cancer, however, also dictate using positron emission tomography (PET) imaging for identifying suspicious ROIs and aiding in the cancer-staging process. While researchers have attempted to use guided bronchoscopy systems in tandem with PET imaging and EBUS, no true EBUS-centric guidance system exists. We now propose a full multimodal image-based methodology for guiding EBUS. The complete methodology involves two components: 1) a procedure planning protocol that gives bronchoscope movements appropriate for live EBUS positioning; and 2) a guidance strategy and associated system graphical user interface (GUI) designed for image-guided EBUS. We present results demonstrating the operation of the system.

  13. Context-specific selection of algorithms for recursive feature tracking in endoscopic image using a new methodology.

    PubMed

    Selka, F; Nicolau, S; Agnus, V; Bessaid, A; Marescaux, J; Soler, L

    2015-03-01

    In minimally invasive surgery, the tracking of deformable tissue is a critical component for image-guided applications. Deformation of the tissue can be recovered by tracking features using tissue surface information (texture, color, ...). Recent work in this field has shown success in acquiring tissue motion. However, the performance evaluation of detection and tracking algorithms on such images is still difficult and not standardized, mainly due to the lack of ground truth for real data. Moreover, no quantitative work has been undertaken to evaluate the benefit of a pre-process based on image filtering, which can improve feature tracking robustness and avoid the need for supplementary outlier-removal techniques. In this paper, we propose a methodology to validate detection and feature tracking algorithms, using a trick based on forward-backward tracking that provides artificial ground truth data. We describe a clear and complete methodology to evaluate and compare different detection and tracking algorithms. In addition, we extend our framework to propose a strategy to identify the best combinations from a set of detector, tracker, and pre-process algorithms, according to the live intra-operative data. Experimental results have been performed on in vivo datasets and show that the pre-process can have a strong influence on tracking performance and that our strategy to find the best combinations is relevant for a reasonable computation cost. Copyright © 2014 Elsevier Ltd. All rights reserved.
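
    The forward-backward "trick" is easy to sketch: track features from frame A to frame B, track the results back to A, and use the round-trip displacement as an error measure that needs no manual ground truth. A minimal version with OpenCV's pyramidal Lucas-Kanade tracker (one of many detector/tracker combinations the paper evaluates) might look like this:

        # Forward-backward tracking error sketch; parameters are illustrative.
        import cv2
        import numpy as np

        def forward_backward_error(img0, img1, pts0):
            # pts0: float32 array of shape (N, 1, 2) of feature locations in img0
            lk = dict(winSize=(21, 21), maxLevel=3)
            pts1, st1, _ = cv2.calcOpticalFlowPyrLK(img0, img1, pts0, None, **lk)
            back, st0, _ = cv2.calcOpticalFlowPyrLK(img1, img0, pts1, None, **lk)
            err = np.linalg.norm(pts0 - back, axis=2).ravel()  # round-trip distance
            ok = (st1.ravel() == 1) & (st0.ravel() == 1)
            return err, ok   # small round-trip error => track deemed reliable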

  14. CT radiation profile width measurement using CR imaging plate raw data

    PubMed Central

    Yang, Chang‐Ying Joseph

    2015-01-01

    This technical note demonstrates computed tomography (CT) radiation profile measurement using computed radiography (CR) imaging plate raw data, showing that it is possible to perform the CT collimation width measurement with a single scan without saturating the imaging plate. Previously described methods require careful adjustments to the CR reader settings in order to avoid signal clipping in the CR-processed image. CT radiation profile measurements were taken as part of routine quality control on 14 CT scanners from four vendors. CR cassettes were placed on the CT scanner bed, raised to isocenter, and leveled. Axial scans were taken at all available collimations, advancing the cassette for each scan. The CR plates were processed and raw CR data were analyzed using MATLAB scripts to measure collimation widths. The raw data approach was compared with previously established methodology. The quality control analysis scripts are released as open source under Creative Commons licensing. A log-linear relationship was found between raw pixel value and air kerma, and raw data collimation width measurements were in agreement with CR-processed, bit-reduced data using previously described methodology. The raw data approach, with intrinsically wider dynamic range, allows improved measurement flexibility and precision. As a result, we demonstrate a methodology for CT collimation width measurements using a single CT scan and without the need for CR scanning parameter adjustments, which is more convenient for routine quality control work. PACS numbers: 87.57.Q-, 87.59.bd, 87.57.uq PMID:26699559

  15. Segmentation-free image processing and analysis of precipitate shapes in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Bales, Ben; Pollock, Tresa; Petzold, Linda

    2017-06-01

    Segmentation-based image analysis techniques are routinely employed for quantitative analysis of complex microstructures containing two or more phases. The primary advantage of these approaches is that spatial information on the distribution of phases is retained, enabling subjective judgements of the quality of the segmentation and the subsequent analysis process. The downside is that computing micrograph segmentations from morphologically complex microstructures gathered with error-prone detectors is challenging and, if no special care is taken, the artifacts of the segmentation will make any subsequent analysis and conclusions uncertain. In this paper we demonstrate, using a two-phase nickel-base superalloy microstructure as a model system, a new methodology for analysis of precipitate shapes using a segmentation-free approach based on the histogram of oriented gradients feature descriptor, a classic tool in image analysis. The benefits of this methodology for analysis of microstructure in two and three dimensions are demonstrated.
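
    The descriptor underlying the approach is standard and available off the shelf; a minimal sketch with scikit-image is shown below (the file name and cell sizes are hypothetical). The paper's contribution lies in how the resulting orientation statistics are mined for precipitate shape information, which is not reproduced here.

        # Histogram-of-oriented-gradients features from a micrograph (sketch).
        from skimage import io
        from skimage.feature import hog

        micrograph = io.imread("micrograph.png", as_gray=True)
        features, hog_image = hog(micrograph, orientations=8,
                                  pixels_per_cell=(16, 16), cells_per_block=(1, 1),
                                  visualize=True)
        # 'features' summarizes local gradient orientations without any segmentation.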

  16. Initial Results from Fitting Resolved Modes using HMI Intensity Observations

    NASA Astrophysics Data System (ADS)

    Korzennik, Sylvain G.

    2017-08-01

    The HMI project recently started processing the continuum intensity images following global helioseismology procedures similar to those used to process the velocity images. The spatial decomposition of these images has produced time series of spherical harmonic coefficients for degrees up to l=300, using a different apodization than the one used for velocity observations. The first 360 days of observations were processed and made available. I present initial results from fitting these time series using my state-of-the-art fitting methodology and compare the derived mode characteristics to those estimated using co-eval velocity observations.

  17. Utilizing remote sensing of thematic mapper data to improve our understanding of estuarine processes and their influence on the productivity of estuarine-dependent fisheries

    NASA Technical Reports Server (NTRS)

    Browder, Joan A.; May, L. Nelson, Jr.; Rosenthal, Alan; Baumann, Robert H.; Gosselink, James G.

    1987-01-01

    A stochastic spatial computer model addressing coastal resource problems in Louisiana is being refined and validated using thematic mapper (TM) imagery. The TM images of brackish marsh sites were processed and data were tabulated on spatial parameters from TM images of the salt marsh sites. The Fisheries Image Processing System (FIPS) was used to analyze the TM scene. Activities were concentrated on improving the structure of the model and developing a structure and methodology for calibrating the model with spatial-pattern data from the TM imagery.

  18. Evaluation of the visual performance of image processing pipes: information value of subjective image attributes

    NASA Astrophysics Data System (ADS)

    Nyman, G.; Häkkinen, J.; Koivisto, E.-M.; Leisti, T.; Lindroos, P.; Orenius, O.; Virtanen, T.; Vuori, T.

    2010-01-01

    Subjective image quality data for 9 image processing pipes and 8 image contents (taken with a mobile phone camera, 72 natural-scene test images altogether) were collected from 14 test subjects. A triplet comparison setup and a hybrid qualitative/quantitative methodology were applied. MOS data and spontaneous, subjective image quality attributes for each test image were recorded. The use of positive and negative image quality attributes by the experimental subjects suggested a significant difference between the subjective spaces of low and high image quality. The robustness of the attribute data was shown by correlating the DMOS data of the test images against their corresponding average subjective attribute vector length data. The findings demonstrate the information value of spontaneous, subjective image quality attributes in evaluating image quality at variable quality levels. We discuss the implications of these findings for the development of sensitive performance measures and methods for profiling image processing systems and their components, especially at high image quality levels.

  19. 7T MRI subthalamic nucleus atlas for use with 3T MRI.

    PubMed

    Milchenko, Mikhail; Norris, Scott A; Poston, Kathleen; Campbell, Meghan C; Ushe, Mwiza; Perlmutter, Joel S; Snyder, Abraham Z

    2018-01-01

    Deep brain stimulation (DBS) of the subthalamic nucleus (STN) reduces motor symptoms in most patients with Parkinson disease (PD), yet may produce untoward effects. Investigation of DBS effects requires accurate localization of the STN, which can be difficult to identify on magnetic resonance images collected with clinically available 3T scanners. The goal of this study is to develop a high-quality STN atlas that can be applied to standard 3T images. We created a high-definition STN atlas derived from seven older participants imaged at 7T. This atlas was nonlinearly registered to a standard template representing 56 patients with PD imaged at 3T. This process required development of methodology for nonlinear multimodal image registration. We demonstrate mm-scale STN localization accuracy by comparison of our 3T atlas with a publicly available 7T atlas. We also demonstrate less agreement with an earlier histological atlas. STN localization error in the 56 patients imaged at 3T was less than 1 mm on average. Our methodology enables accurate STN localization in individuals imaged at 3T. The STN atlas and underlying 3T average template in MNI space are freely available to the research community. The image registration methodology developed in the course of this work may be generally applicable to other datasets.

  20. Image-Guided Abdominal Surgery and Therapy Delivery

    PubMed Central

    Galloway, Robert L.; Herrell, S. Duke; Miga, Michael I.

    2013-01-01

    Image-guided surgery has become the standard of care in intracranial neurosurgery, providing more exact resections while minimizing damage to healthy tissue. Moving that process to abdominal organs presents additional challenges in the form of image segmentation, image-to-physical-space registration, and organ motion and deformation. In this paper, we present methodologies and results for addressing these challenges in two specific organs: the liver and the kidney. PMID:25077012

  1. A methodology for automated CPA extraction using liver biopsy image analysis and machine learning techniques.

    PubMed

    Tsipouras, Markos G; Giannakeas, Nikolaos; Tzallas, Alexandros T; Tsianou, Zoe E; Manousou, Pinelopi; Hall, Andrew; Tsoulos, Ioannis; Tsianos, Epameinondas

    2017-03-01

    Collagen proportional area (CPA) extraction in liver biopsy images provides the degree of fibrosis expansion in liver tissue, which is the most characteristic histological alteration in hepatitis C virus (HCV) infection. Assessment of the fibrotic tissue is currently based on semiquantitative staging scores such as Ishak and Metavir. Since its introduction as a fibrotic tissue assessment technique, CPA calculation based on image analysis techniques has proven to be more accurate than semiquantitative scores. However, CPA has yet to reach everyday clinical practice, since the lack of standardized and robust methods of computerized image analysis for CPA assessment has proven to be a major limitation. The current work introduces a three-stage, fully automated methodology for CPA extraction based on machine learning techniques. Specifically, clustering algorithms have been employed for background-tissue separation and for fibrosis detection in liver tissue regions, in the first and third stages of the methodology, respectively. Because the image contains several types of tissue regions (such as blood clots, muscle tissue, structural collagen, etc.), classification algorithms have been employed to identify liver tissue regions and exclude all other non-liver tissue regions from the CPA computation. For the evaluation of the methodology, 79 liver biopsy images were employed, obtaining a 1.31% mean absolute CPA error with a 0.923 concordance correlation coefficient. The proposed methodology is designed to (i) avoid the manual threshold-based and region selection processes widely used in similar approaches presented in the literature, and (ii) minimize CPA calculation time. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
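
    Stripped to its core, CPA computation is a ratio of collagen-like pixels to total tissue pixels after color clustering. The sketch below uses k-means as the clustering stage; the cluster-to-class assignment is purely hypothetical, and the paper's classification stage for excluding non-liver regions is omitted.

        # Illustrative CPA computation via color clustering (not the full pipeline).
        import numpy as np
        from skimage import io
        from sklearn.cluster import KMeans

        pixels = io.imread("biopsy.png")[..., :3].reshape(-1, 3).astype(float)
        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
        # Assumption: cluster indices for background/tissue/collagen are identified
        # by inspecting the cluster centers; the indices below are placeholders.
        background, tissue, collagen = 0, 1, 2
        counts = np.bincount(km.labels_, minlength=3)
        cpa = 100.0 * counts[collagen] / (counts[tissue] + counts[collagen])
        print(f"CPA = {cpa:.2f}%")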

  2. Cellular neural network-based hybrid approach toward automatic image registration

    NASA Astrophysics Data System (ADS)

    Arun, Pattathal VijayaKumar; Katiyar, Sunil Kumar

    2013-01-01

    Image registration is a key component of various image processing operations that involve the analysis of different image data sets. Automatic image registration domains have witnessed the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape as well as contextual information has limited the attainable accuracy. A framework for accurate feature shape modeling and adaptive resampling using advanced techniques such as vector machines, cellular neural networks (CNN), the scale invariant feature transform (SIFT), coresets, and cellular automata is proposed. The CNN has been found to be effective in improving the feature matching and resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. The salient features of this work are CNN-based SIFT feature point optimization, adaptive resampling, and intelligent object modelling. The developed methodology has been compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the approach. The system dynamically uses spectral and spatial information to represent contextual knowledge using a CNN-Prolog approach. The methodology is also shown to be effective in providing intelligent interpretation and adaptive resampling.
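
    The SIFT matching and transform estimation that such frameworks build upon can be sketched in a few lines of OpenCV; the CNN-based feature optimization and adaptive resampling that constitute the paper's contribution are not shown. File names are hypothetical.

        # Generic SIFT-based registration baseline (OpenCV), for illustration.
        import cv2
        import numpy as np

        ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
        mov = cv2.imread("moving.png", cv2.IMREAD_GRAYSCALE)
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(ref, None)
        k2, d2 = sift.detectAndCompute(mov, None)
        pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
        good = [m for m, n in pairs if m.distance < 0.75 * n.distance]  # ratio test
        src = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # moving -> ref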

  3. Clinical Applications of a CT Window Blending Algorithm: RADIO (Relative Attenuation-Dependent Image Overlay).

    PubMed

    Mandell, Jacob C; Khurana, Bharti; Folio, Les R; Hyun, Hyewon; Smith, Stacy E; Dunne, Ruth M; Andriole, Katherine P

    2017-06-01

    A methodology is described using Adobe Photoshop and Adobe ExtendScript to process DICOM images with a Relative Attenuation-Dependent Image Overlay (RADIO) algorithm to visualize the full dynamic range of CT in one view, without requiring a change in window and level settings. The potential clinical uses for such an algorithm are described in a pictorial overview, including applications in emergency radiology, oncologic imaging, and nuclear medicine and molecular imaging.
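
    The underlying idea of window blending is simple to sketch: render several standard window/level presets and combine them so that lung, soft tissue, and bone detail are all visible at once. The sketch below is a naive equal-weight illustration of the concept, not the published RADIO algorithm; the presets are typical values.

        # Naive CT window-blending sketch (Hounsfield units in, display values out).
        import numpy as np

        def window(hu, level, width):
            lo = level - width / 2.0
            return np.clip((hu - lo) / width, 0.0, 1.0)  # normalized display value

        def blend(hu):
            lung = window(hu, level=-600, width=1500)
            soft = window(hu, level=40, width=400)
            bone = window(hu, level=400, width=1800)
            return (lung + soft + bone) / 3.0            # single full-range view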

  4. Real-Time Spaceborne Synthetic Aperture Radar Float-Point Imaging System Using Optimized Mapping Methodology and a Multi-Node Parallel Accelerating Technique

    PubMed Central

    Li, Bingyi; Chen, Liang; Yu, Wenyue; Xie, Yizhuang; Bian, Mingming; Zhang, Qingjun; Pang, Long

    2018-01-01

    With the development of satellite payload technology and very large-scale integrated (VLSI) circuit technology, on-board real-time synthetic aperture radar (SAR) imaging systems have facilitated rapid response to disasters. A key goal of on-board SAR imaging system design is to achieve high real-time processing performance under severe size, weight, and power consumption constraints. This paper presents a multi-node prototype system for real-time SAR imaging processing. We decompose the commonly used chirp scaling (CS) SAR imaging algorithm into two parts according to their computing features. Linearization and logic-memory optimum allocation methods are adopted to realize the nonlinear part in a reconfigurable structure, and a two-part bandwidth balance method is used to realize the linear part. Thus, floating-point SAR imaging processing can be integrated into a single Field Programmable Gate Array (FPGA) chip instead of relying on distributed technologies. A single processing node requires 10.6 s and consumes 17 W to focus 25-km swath width, 5-m resolution stripmap SAR raw data with a granularity of 16,384 × 16,384. The design methodology of the multi-FPGA parallel accelerating system under the real-time principle is introduced. As a proof of concept, a prototype with four processing nodes and one master node is implemented using a Xilinx xc6vlx315t FPGA. The weight and volume of a single machine are 10 kg and 32 cm × 24 cm × 20 cm, respectively, and the power consumption is under 100 W. The real-time performance of the proposed design is demonstrated on Chinese Gaofen-3 stripmap continuous imaging. PMID:29495637

  5. Analyzing Media: Metaphors as Methodologies.

    ERIC Educational Resources Information Center

    Meyrowitz, Joshua

    Students have little intuitive insight into the process of thinking and structuring ideas. The image of metaphor for a phenomenon acts as a kind of methodology for the study of the phenomenon by (1) defining the key issues or problems; (2) shaping the type of research questions that are asked; (3) defining the type of data that are searched out;…

  6. A Robust Actin Filaments Image Analysis Framework

    PubMed Central

    Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem

    2016-01-01

    The cytoskeleton is a highly dynamic protein network that plays a central role in numerous cellular physiological processes and is traditionally divided into three components according to its chemical composition, i.e. the actin, tubulin, and intermediate filament cytoskeletons. Understanding cytoskeleton dynamics is of prime importance to unveil mechanisms involved in cell adaptation to any type of stress. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation on the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and heavy blurring introduced by (high-throughput) automated scans. However, although there exists a considerable number of image-based analytical tools to address the image processing and analysis, most of them are unfit to cope with the aforementioned challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least at some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) the input image is decomposed into a 'cartoon' part corresponding to the filament structures in the image and a noise/texture part; (ii) a multi-scale line detector is applied to the 'cartoon' image, coupled with (iii) a quasi-straight filament merging algorithm for fiber extraction. The proposed robust actin filament image analysis framework allows extracting individual filaments in the presence of noise, artifacts, and heavy blurring. Moreover, it provides numerous parameters, such as filament orientation, position, and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets and osteoblasts grown in two different conditions: static (control) and fluid shear stress. The proposed methodology exhibited higher sensitivity values and similar accuracy compared to state-of-the-art methods. PMID:27551746
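
    A rough open-source analogue of the three steps, using total-variation denoising for the cartoon/texture split and a multi-scale ridge filter for line detection, is sketched below; the merging step is reduced to a simple threshold, and the file name is hypothetical.

        # Simplified analogue of the filament-extraction pipeline (scikit-image).
        from skimage import io, filters
        from skimage.restoration import denoise_tv_chambolle

        img = io.imread("actin.png", as_gray=True)
        cartoon = denoise_tv_chambolle(img, weight=0.1)      # (i) cartoon/texture split
        ridges = filters.sato(cartoon, black_ridges=False)   # (ii) multi-scale line filter
        filaments = ridges > filters.threshold_otsu(ridges)  # (iii) crude extraction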

  7. Evolutionary Computing Methods for Spectral Retrieval

    NASA Technical Reports Server (NTRS)

    Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seugwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Geivanna

    2009-01-01

    A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.
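
    The retrieval loop can be illustrated with SciPy's differential evolution (an evolutionary method standing in here for the Genetic Algorithms and Simulated Annealing named above) and a toy forward model; a real retrieval would replace the Gaussian absorption line with a radiative-transfer model.

        # Toy evolutionary retrieval: fit forward-model parameters to a spectrum.
        import numpy as np
        from scipy.optimize import differential_evolution

        wavelengths = np.linspace(1.0, 5.0, 200)      # microns, illustrative grid

        def forward_model(params):                    # hypothetical absorption line
            depth, width, center = params
            return 1.0 - depth * np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

        rng = np.random.default_rng(0)
        observed = forward_model([0.3, 0.2, 2.7]) + rng.normal(0.0, 0.01, wavelengths.size)

        def fitness(params):                          # dissimilarity to be minimized
            return np.sum((forward_model(params) - observed) ** 2)

        result = differential_evolution(fitness, bounds=[(0, 1), (0.05, 1), (1, 5)])
        print(result.x)                               # retrieved parameters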

  8. Application of Six Sigma methodology to a diagnostic imaging process.

    PubMed

    Taner, Mehmet Tolga; Sezen, Bulent; Atwat, Kamal M

    2012-01-01

    This paper aims to apply the Six Sigma methodology to improve workflow by eliminating the causes of failure in the medical imaging department of a private Turkish hospital. Implementation of the define, measure, analyse, improve, and control (DMAIC) improvement cycle, workflow charts, fishbone diagrams, and Pareto charts was employed, together with rigorous data collection in the department. The identification of root causes of repeat sessions and delays was followed by failure mode and effect analysis, hazard analysis, and decision tree analysis. The most frequent causes of failure were malfunction of the RIS/PACS system and improper positioning of patients. Subsequent to extensive training of professionals, the sigma level was increased from 3.5 to 4.2. The data were collected over only four months. Six Sigma's data measurement and process improvement methodology is the impetus for health care organisations to rethink their workflow and reduce malpractice. It involves measuring, recording, and reporting data on a regular basis. This enables the administration to monitor workflow continuously. The improvements in the workflow under study, made by determining the failures and potential risks associated with radiologic care, will have a positive impact on society in terms of patient safety. Having eliminated repeat examinations, the risk of being exposed to more radiation was also minimised. This paper supports the need to apply Six Sigma and presents an evaluation of the process in an imaging department.

  9. Image Processing for Planetary Limb/Terminator Extraction

    NASA Technical Reports Server (NTRS)

    Udomkesmalee, S.; Zhu, D. Q.; Chu, C. -C.

    1995-01-01

    A novel image segmentation technique for extracting the limb and terminator of planetary bodies is proposed. Conventional edge-based histogramming approaches are used to trace object boundaries. The limb and terminator bifurcation is achieved by locating the harmonized segment in the two equations representing the 2-D parameterized boundary curve. Real planetary images from Voyager 1 and 2 served as representative test cases to verify the proposed methodology.

  10. Hybrid Discrete Wavelet Transform and Gabor Filter Banks Processing for Features Extraction from Biomedical Images

    PubMed Central

    Lahmiri, Salim; Boukadoum, Mounir

    2013-01-01

    A new methodology for automatic feature extraction from biomedical images and subsequent classification is presented. The approach exploits the spatial orientation of high-frequency textural features of the processed image as determined by a two-step process. First, the two-dimensional discrete wavelet transform (DWT) is applied to obtain the HH high-frequency subband image. Then, a Gabor filter bank is applied to the latter at different frequencies and spatial orientations to obtain a new Gabor-filtered image, whose entropy and uniformity are computed. Finally, the obtained statistics are fed to a support vector machine (SVM) binary classifier. The approach was validated on mammograms, retina, and brain magnetic resonance (MR) images. The obtained classification accuracies show better performance in comparison to common approaches that use only the DWT or Gabor filter banks for feature extraction. PMID:27006906
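
    The two-step feature extraction is straightforward to sketch with PyWavelets and scikit-image, here reduced to a single Gabor frequency and orientation rather than the full filter bank; the bin count and file name are illustrative.

        # DWT HH subband -> Gabor filtering -> entropy/uniformity features (sketch).
        import numpy as np
        import pywt
        from skimage import io
        from skimage.filters import gabor

        img = io.imread("mammogram.png", as_gray=True)
        _, (_, _, hh) = pywt.dwt2(img, "haar")         # HH (diagonal detail) subband
        real, _ = gabor(hh, frequency=0.2, theta=np.pi / 4)

        hist, _ = np.histogram(real, bins=64)
        p = hist / hist.sum()
        p = p[p > 0]
        entropy = -np.sum(p * np.log2(p))              # statistics fed to the SVM
        uniformity = np.sum(p ** 2)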

  11. 3D Texture Features Mining for MRI Brain Tumor Identification

    NASA Astrophysics Data System (ADS)

    Rahim, Mohd Shafry Mohd; Saba, Tanzila; Nayer, Fatima; Syed, Afraz Zahra

    2014-03-01

    Medical image segmentation is a process to extract regions of interest and to divide an image into its individual meaningful, homogeneous components; these components correspond closely to the objects of interest in the image. Medical image segmentation is a mandatory initial step for computer-aided diagnosis and therapy. It is a sophisticated and challenging task because of the complex nature of medical images, and successful medical image analysis depends heavily on segmentation accuracy. Texture is one of the major features used to identify regions of interest in an image or to classify an object, but 2D texture features yield poor classification results. Hence, this paper presents 3D feature extraction using texture analysis, with an SVM as the segmentation technique in the testing methodology.

  12. Human perception testing methodology for evaluating EO/IR imaging systems

    NASA Astrophysics Data System (ADS)

    Graybeal, John J.; Monfort, Samuel S.; Du Bosq, Todd W.; Familoni, Babajide O.

    2018-04-01

    The U.S. Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) Perception Lab is tasked with supporting the development of sensor systems for the U.S. Army by evaluating human performance of emerging technologies. Typical research questions involve detection, recognition and identification as a function of range, blur, noise, spectral band, image processing techniques, image characteristics, and human factors. NVESD's Perception Lab provides an essential bridge between the physics of the imaging systems and the performance of the human operator. In addition to quantifying sensor performance, perception test results can also be used to generate models of human performance and to drive future sensor requirements. The Perception Lab seeks to develop and employ scientifically valid and efficient perception testing procedures within the practical constraints of Army research, including rapid development timelines for critical technologies, unique guidelines for ethical testing of Army personnel, and limited resources. The purpose of this paper is to describe NVESD Perception Lab capabilities, recent methodological improvements designed to align our methodology more closely with scientific best practice, and to discuss goals for future improvements and expanded capabilities. Specifically, we discuss modifying our methodology to improve training, to account for human fatigue, to improve assessments of human performance, and to increase experimental design consultation provided by research psychologists. Ultimately, this paper outlines a template for assessing human perception and overall system performance related to EO/IR imaging systems.

  13. Retinal imaging analysis based on vessel detection.

    PubMed

    Jamal, Arshad; Hazim Alkawaz, Mohammed; Rehman, Amjad; Saba, Tanzila

    2017-07-01

    With advances in digital imaging and computing power, computationally intelligent technologies are in high demand for use in ophthalmological care and treatment. In the current research, Retina Image Analysis (RIA) is developed for optometrists at the Eye Care Center of Management and Science University. This research aims to analyze the retina through vessel detection. The RIA assists in the analysis of retinal images, and specialists are served with various options, such as saving, processing, and analyzing retinal images, through its advanced interface layout. Additionally, RIA assists in the selection of vessel segments, processing these vessels by calculating their diameter, standard deviation, and length, and displaying the detected vessels on the retina. The Agile Unified Process is adopted as the methodology in developing this research. To conclude, Retina Image Analysis might help the optometrist to gain a better understanding when analyzing the patient's retina. Finally, the Retina Image Analysis procedure is developed using MATLAB (R2011b). Promising results are attained that are comparable with the state of the art. © 2017 Wiley Periodicals, Inc.

  14. Central and Divided Visual Field Presentation of Emotional Images to Measure Hemispheric Differences in Motivated Attention.

    PubMed

    O'Hare, Aminda J; Atchley, Ruth Ann; Young, Keith M

    2017-11-16

    Two dominant theories on lateralized processing of emotional information exist in the literature. One theory posits that unpleasant emotions are processed by right frontal regions, while pleasant emotions are processed by left frontal regions. The other theory posits that the right hemisphere is more specialized for the processing of emotional information overall, particularly in posterior regions. Assessing the different roles of the cerebral hemispheres in processing emotional information can be difficult without the use of neuroimaging methodologies, which are not accessible or affordable to all scientists. Divided visual field presentation of stimuli can allow for the investigation of lateralized processing of information without the use of neuroimaging technology. This study compared central versus divided visual field presentations of emotional images to assess differences in motivated attention between the two hemispheres. The late positive potential (LPP) was recorded using electroencephalography (EEG) and event-related potentials (ERPs) methodologies to assess motivated attention. Future work will pair this paradigm with a more active behavioral task to explore the behavioral impacts on the attentional differences found.

  15. Near ground level sensing for spatial analysis of vegetation

    NASA Technical Reports Server (NTRS)

    Sauer, Tom; Rasure, John; Gage, Charlie

    1991-01-01

    Measured changes in vegetation indicate the dynamics of ecological processes and can identify the impacts from disturbances. Traditional methods of vegetation analysis tend to be slow because they are labor intensive; as a result, these methods are often confined to small local area measurements. Scientists need new algorithms and instruments that will allow them to efficiently study environmental dynamics across a range of different spatial scales. A new methodology that addresses this problem is presented. This methodology includes the acquisition, processing, and presentation of near ground level image data and its corresponding spatial characteristics. The systematic approach taken encompasses a feature extraction process, a supervised and unsupervised classification process, and a region labeling process yielding spatial information.

  16. Performance evaluation methodology for historical document image binarization.

    PubMed

    Ntirogiannis, Konstantinos; Gatos, Basilis; Pratikakis, Ioannis

    2013-02-01

    Document image binarization is of great importance in the document image analysis and recognition pipeline since it affects further stages of the recognition process. The evaluation of a binarization method aids in studying its algorithmic behavior, as well as verifying its effectiveness, by providing qualitative and quantitative indication of its performance. This paper addresses a pixel-based binarization evaluation methodology for historical handwritten/machine-printed document images. In the proposed evaluation scheme, the recall and precision evaluation measures are properly modified using a weighting scheme that diminishes any potential evaluation bias. Additional performance metrics of the proposed evaluation scheme consist of the percentage rates of broken and missed text, false alarms, background noise, character enlargement, and merging. Several experiments conducted in comparison with other pixel-based evaluation measures demonstrate the validity of the proposed evaluation scheme.
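
    The unweighted core of any such pixel-based scheme reduces to recall, precision, and their F-measure against a ground-truth image; the paper's contribution is the weighting that removes evaluation bias, which is omitted from this sketch.

        # Plain pixel-based recall/precision/F-measure (without the paper's weights).
        import numpy as np

        def evaluate(result, ground_truth):            # boolean arrays, True = text
            tp = np.logical_and(result, ground_truth).sum()
            recall = tp / ground_truth.sum()
            precision = tp / result.sum()
            f_measure = 2 * recall * precision / (recall + precision)
            return recall, precision, f_measure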

  17. Single-shot and single-sensor high/super-resolution microwave imaging based on metasurface.

    PubMed

    Wang, Libo; Li, Lianlin; Li, Yunbo; Zhang, Hao Chi; Cui, Tie Jun

    2016-06-01

    Real-time high-resolution (including super-resolution) imaging with low-cost hardware is a long sought-after goal in various imaging applications. Here, we propose broadband single-shot and single-sensor high-/super-resolution imaging by using a spatio-temporally dispersive metasurface and an image reconstruction algorithm. The spatio-temporal dispersion of the metasurface ensures the feasibility of the single-shot, single-sensor imager for super- and high-resolution imaging, since it can efficiently convert the detailed spatial information of the probed object into a one-dimensional time- or frequency-dependent signal acquired by a single sensor fixed in the far-field region. The imaging quality can be improved by applying a feature-enhanced reconstruction algorithm in post-processing, and the achievable imaging resolution is related to the distance between the object and the metasurface: when the object is placed in the vicinity of the metasurface, super-resolution imaging can be realized. The proposed imaging methodology provides a unique means of performing real-time data acquisition and high-/super-resolution imaging without employing expensive hardware (e.g., a mechanical scanner, an antenna array, etc.). We expect that this methodology could make potential breakthroughs in the areas of microwave, terahertz, optical, and even ultrasound imaging.

  18. A method to calibrate channel friction and bathymetry parameters of a Sub-Grid hydraulic model using SAR flood images

    NASA Astrophysics Data System (ADS)

    Wood, M.; Neal, J. C.; Hostache, R.; Corato, G.; Chini, M.; Giustarini, L.; Matgen, P.; Wagener, T.; Bates, P. D.

    2015-12-01

    Synthetic Aperture Radar (SAR) satellites are capable of all-weather, day-and-night observations that can discriminate between land and smooth open-water surfaces over large scales. Because of this, there has been much interest in the use of SAR satellite data to improve our understanding of water processes, in particular fluvial flood inundation mechanisms. Past studies show that integrating SAR-derived data with hydraulic models can improve simulations of flooding. However, while much of this work focuses on improving model channel roughness values or inflows in ungauged catchments, improvement of model bathymetry is often overlooked. The provision of good bathymetric data is critical to the performance of hydraulic models, but there are only a small number of ways to obtain bathymetry information where no direct measurements exist, and spatially distributed river depths are rarely available. We present a methodology for concurrently calibrating model average channel depth and roughness parameters using SAR images of flood extent and a Sub-Grid model utilising hydraulic geometry concepts. The methodology uses real data from the European Space Agency's archive of ENVISAT[1] Wide Swath Mode images of the River Severn between Worcester and Tewkesbury during flood peaks between 2007 and 2010. Historic ENVISAT WSM images are currently free and easy to access from the archive, but the methodology can be applied with any available SAR data. The approach makes use of the SAR image processing algorithm of Giustarini[2] et al. (2013) to generate binary flood maps. A unique feature of the calibration methodology is the additional use of parameter 'identifiability' to locate the parameters with higher accuracy within a pre-assigned range (adopting the DYNIA method proposed by Wagener[3] et al., 2003). [1] https://gpod.eo.esa.int/services/ [2] Giustarini. 2013. 'A Change Detection Approach to Flood Mapping in Urban Areas Using TerraSAR-X'. IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 4. [3] Wagener. 2003. 'Towards reduced uncertainty in conceptual rainfall-runoff modelling: Dynamic identifiability analysis'. Hydrol. Process. 17, 455-476.
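
    A typical measure of fit between a modeled flood extent and a SAR-derived binary flood map, of the kind maximized during such calibrations, can be computed as below; this generic sketch is not the DYNIA identifiability analysis itself.

        # Binary flood-extent fit statistic F = A / (A + B + C) (sketch).
        import numpy as np

        def flood_fit(model_map, sar_map):             # boolean arrays, True = wet
            a = np.logical_and(model_map, sar_map).sum()    # wet in both
            b = np.logical_and(model_map, ~sar_map).sum()   # model only (overprediction)
            c = np.logical_and(~model_map, sar_map).sum()   # SAR only (underprediction)
            return a / (a + b + c)                          # 1.0 = perfect agreement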

  19. Recent advances in live cell imaging of hepatoma cells

    PubMed Central

    2014-01-01

    Live cell imaging enables the study of dynamic processes in living cells in real time by the use of suitable reporter proteins and the staining of specific cellular structures and/or organelles. With the availability of advanced optical devices and improved cell culture protocols, it has become a rapidly growing research methodology. The success of this technique relies mainly on the selection of suitable reporter proteins, the construction of recombinant plasmids possessing cell-type-specific promoters, and reliable methods of gene transfer. This review aims to provide an overview of recent developments in the field of marker proteins (bioluminescent and fluorescent) and methodologies (fluorescence resonance energy transfer, fluorescence recovery after photobleaching, and the proximity ligation assay) employed to achieve improved imaging of biological processes in hepatoma cells. Moreover, different expression systems for marker proteins and modes of gene transfer are discussed, with emphasis on the study of lipid droplet formation in hepatocytes as an example. PMID:25005127

  20. Detection of differential viewing patterns to erotic and non-erotic stimuli using eye-tracking methodology.

    PubMed

    Lykins, Amy D; Meana, Marta; Kambe, Gretchen

    2006-10-01

    As a first step in the investigation of the role of visual attention in the processing of erotic stimuli, eye-tracking methodology was employed to measure eye movements during erotic scene presentation. Because eye-tracking is a novel methodology in sexuality research, we attempted to determine whether the eye-tracker could detect differences (should they exist) in visual attention to erotic and non-erotic scenes. A total of 20 men and 20 women were presented with a series of erotic and non-erotic images, and their eye movements were tracked during image presentation. Comparisons between erotic and non-erotic image groups showed significant differences on two of three dependent measures of visual attention (number of fixations and total time) in both men and women. As hypothesized, there was a significant Stimulus x Scene Region interaction, indicating that participants visually attended to the body more in the erotic stimuli than in the non-erotic stimuli, as evidenced by a greater number of fixations and longer total time devoted to that region. These findings provide support for the application of eye-tracking methodology as a measure of visual attentional capture in sexuality research. Future applications of this methodology to expand our knowledge of the role of cognition in sexuality are suggested.

  1. An efficient approach to integrated MeV ion imaging.

    PubMed

    Nikbakht, T; Kakuee, O; Solé, V A; Vosuoghi, Y; Lamehi-Rachti, M

    2018-03-01

    An ionoluminescence (IL) spectral imaging system, alongside the common MeV ion imaging facilities such as µ-PIXE and µ-RBS, has been implemented at the Van de Graaff laboratory in Tehran. Versatile processing software is required to handle the large amount of data concurrently collected in µ-IL and common MeV ion imaging measurements through their respective methodologies. The open-source freeware PyMca, with image processing and multivariate analysis capabilities, is employed to simultaneously process common MeV ion imaging and µ-IL data. Herein, the program was adapted to support the OM_DAQ listmode data format. The appropriate performance of the µ-IL data acquisition system is confirmed through a case study. Moreover, the capabilities of the software for simultaneous analysis of µ-PIXE and µ-RBS experimental data are presented. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. A methodology for evaluation of an interactive multispectral image processing system

    NASA Technical Reports Server (NTRS)

    Kovalick, William M.; Newcomer, Jeffrey A.; Wharton, Stephen W.

    1987-01-01

    Because of the considerable cost of an interactive multispectral image processing system, an evaluation of a prospective system should be performed to ascertain if it will be acceptable to the anticipated users. Evaluation of a developmental system indicated that the important system elements include documentation, user friendliness, image processing capabilities, and system services. The criteria and evaluation procedures for these elements are described herein. The following factors contributed to the success of the evaluation of the developmental system: (1) careful review of documentation prior to program development, (2) construction and testing of macromodules representing typical processing scenarios, (3) availability of other image processing systems for referral and verification, and (4) use of testing personnel with an applications perspective and experience with other systems. This evaluation was done in addition to and independently of program testing by the software developers of the system.

  3. Associative architecture for image processing

    NASA Astrophysics Data System (ADS)

    Adar, Rutie; Akerib, Avidan

    1997-09-01

    This article presents a new generation of parallel processing architecture for real-time image processing. The approach is implemented in a real-time image processor chip, called the Xium™-2, based on combining a fully associative array, which provides the parallel engine, with a serial RISC core on the same die. The architecture is fully programmable and can be programmed to implement a wide range of color image processing, computer vision, and media processing functions in real time. The associative part of the chip is based on the patent-pending methodology of Associative Computing Ltd. (ACL), which condenses 2048 associative processors, each of 128 'intelligent' bits, onto the chip. Each bit can be a processing bit or a memory bit. At only 33 MHz, in a 0.6-micron manufacturing process, the chip has a computational power of 3 billion ALU operations per second and 66 billion string search operations per second. The fully programmable nature of the Xium™-2 chip enables developers to use ACL tools to write their own proprietary algorithms combined with existing image processing and analysis functions from ACL's extended set of libraries.

  4. An Efficient Computational Framework for the Analysis of Whole Slide Images: Application to Follicular Lymphoma Immunohistochemistry

    PubMed Central

    Samsi, Siddharth; Krishnamurthy, Ashok K.; Gurcan, Metin N.

    2012-01-01

    Follicular lymphoma (FL) is one of the most common non-Hodgkin lymphomas in the United States. Diagnosis and grading of FL is based on the review of histopathological tissue sections under a microscope and is influenced by human factors such as fatigue and reader bias. Computer-aided image analysis tools can help improve the accuracy of diagnosis and grading and act as another tool at the pathologist's disposal. Our group has been developing algorithms for identifying follicles in immunohistochemical images. These algorithms have been tested and validated on small images extracted from whole slide images. However, the use of these algorithms for analyzing the entire whole slide image requires significant changes to the processing methodology, since the images are relatively large (on the order of 100k × 100k pixels). In this paper we discuss the challenges involved in analyzing whole slide images and propose potential computational methodologies for addressing these challenges. We discuss the use of parallel computing tools on commodity clusters and compare the performance of serial and parallel implementations of our approach. PMID:22962572
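
    Since the paper's own cluster implementation is not reproduced in the abstract, here is a minimal sketch of the general tile-and-parallelize pattern it describes, assuming a hypothetical analyze_tile function and Python's multiprocessing; the tile size and placeholder measurement are illustrative.

    ```python
    # Tile a large slide and process tiles in parallel on a multicore machine.
    import numpy as np
    from multiprocessing import Pool

    TILE = 2048  # pixels per tile edge; whole slides are on the order of 100k x 100k

    def analyze_tile(args):
        """Run an analysis (here, a placeholder measurement) on one tile."""
        r, c, tile = args
        return r, c, float(tile.mean())  # stand-in for the real segmentation result

    def tiles(image):
        for r in range(0, image.shape[0], TILE):
            for c in range(0, image.shape[1], TILE):
                yield r, c, image[r:r + TILE, c:c + TILE]

    if __name__ == "__main__":
        slide = np.random.randint(0, 255, (8192, 8192), dtype=np.uint8)  # small stand-in
        with Pool(processes=4) as pool:
            results = pool.map(analyze_tile, tiles(slide))
        print(len(results), "tiles analyzed")
    ```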

  5. Efficient testing methodologies for microcameras in a gigapixel imaging system

    NASA Astrophysics Data System (ADS)

    Youn, Seo Ho; Marks, Daniel L.; McLaughlin, Paul O.; Brady, David J.; Kim, Jungsang

    2013-04-01

    Multiscale parallel imaging--based on a monocentric optical design--promises revolutionary advances in diverse imaging applications by enabling high resolution, real-time image capture over a wide field-of-view (FOV), including sports broadcasting, wide-field microscopy, astronomy, and security surveillance. The recently demonstrated AWARE-2 is a gigapixel camera consisting of an objective lens and 98 microcameras spherically arranged to capture an image over a FOV of 120° by 50°, using computational image processing to form a composite image of 0.96 gigapixels. Since microcameras are capable of individually adjusting exposure, gain, and focus, true parallel imaging is achieved with a high dynamic range. From the integration perspective, manufacturing and verifying consistent quality of microcameras is key to the successful realization of AWARE cameras. We have developed an efficient testing methodology that utilizes a precisely fabricated dot grid chart as a calibration target to extract critical optical properties such as optical distortion, veiling glare index, and modulation transfer function to validate the imaging performance of microcameras. This approach utilizes an AWARE objective lens simulator which mimics the actual objective lens but operates with a short object distance, suitable for a laboratory environment. Here we describe the principles of the methodologies developed for AWARE microcameras and discuss the experimental results with our prototype microcameras. Reference: Brady, D. J., Gehm, M. E., Stack, R. A., Marks, D. L., Kittle, D. S., Golish, D. R., Vera, E. M., and Feller, S. D., "Multiscale gigapixel photography," Nature 486, 386-389 (2012).
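
    As a hedged illustration of dot-grid-based distortion measurement (a simplification, not the paper's actual calibration algorithm), the sketch below fits a single radial distortion coefficient to matched ideal and detected dot centers; all names and the one-parameter model are assumptions.

    ```python
    # Least-squares fit of r_d = r_u * (1 + k1 * r_u**2) from a dot grid.
    import numpy as np

    def estimate_k1(ideal_xy, detected_xy, center):
        """Closed-form minimizer of sum((r_d - r_u - k1 * r_u**3)**2) over k1."""
        r_u = np.linalg.norm(ideal_xy - center, axis=1)
        r_d = np.linalg.norm(detected_xy - center, axis=1)
        return np.sum(r_u**3 * (r_d - r_u)) / np.sum(r_u**6)

    ideal = np.array([[x, y] for x in range(10) for y in range(10)], dtype=float)
    center = np.array([4.5, 4.5])
    r2 = np.sum((ideal - center)**2, axis=1)
    detected = center + (ideal - center) * (1 + 1e-3 * r2)[:, None]  # synthetic barrel term
    print(estimate_k1(ideal, detected, center))  # recovers ~1e-3
    ```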

  6. A novel mesh processing based technique for 3D plant analysis

    PubMed Central

    2012-01-01

    Background: In recent years, imaging-based, automated, non-invasive, and non-destructive high-throughput plant phenotyping platforms have become popular tools for plant biology, underpinning the field of plant phenomics. Such platforms acquire and record large amounts of raw data that must be accurately and robustly calibrated, reconstructed, and analysed, requiring the development of sophisticated image understanding and quantification algorithms. The raw data can be processed in different ways, and the past few years have seen the emergence of two main approaches: 2D image processing and 3D mesh processing algorithms. Direct image quantification methods (usually 2D) dominate the current literature due to their comparative simplicity. However, 3D mesh analysis provides tremendous potential to accurately estimate specific morphological features cross-sectionally and monitor them over time. Results: In this paper, we present a novel 3D mesh based technique developed for temporal high-throughput plant phenomics and perform initial tests for the analysis of Gossypium hirsutum vegetative growth. Based on plant meshes previously reconstructed from multi-view images, the methodology involves several stages, including morphological mesh segmentation, phenotypic parameter estimation, and tracking of plant organs over time. The initial study focuses on presenting and validating the accuracy of the methodology on dicotyledons such as cotton, but we believe the approach will be more broadly applicable. This study involved applying our technique to a set of six Gossypium hirsutum (cotton) plants studied over four time-points. Manual measurements, performed for each plant at every time-point, were used to assess the accuracy of our pipeline and quantify the error on the estimated morphological parameters. Conclusion: By directly comparing our automated mesh-based quantitative data with manual measurements of individual stem height, leaf width and leaf length, we obtained mean absolute errors of 9.34%, 5.75%, and 8.78%, and correlation coefficients of 0.88, 0.96, and 0.95, respectively. The temporal matching of leaves was accurate in 95% of cases, and the average execution time required to analyse a plant over four time-points was 4.9 minutes. The mesh processing based methodology is thus considered suitable for quantitative 4D monitoring of plant phenotypic features. PMID:22553969

  7. An Imager Gaussian Process Machine Learning Methodology for Cloud Thermodynamic Phase classification

    NASA Astrophysics Data System (ADS)

    Marchant, B.; Platnick, S. E.; Meyer, K.

    2017-12-01

    The determination of cloud thermodynamic phase from MODIS and VIIRS instruments is an important first step in cloud optical retrievals, since ice and liquid clouds have different optical properties. To continue improving the cloud thermodynamic phase classification algorithm, a machine-learning approach, based on Gaussian processes, has been developed. The new proposed methodology provides cloud phase uncertainty quantification and improves the algorithm portability between MODIS and VIIRS. We will present new results, through comparisons between MODIS and CALIOP v4, and for VIIRS as well.
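
    A minimal sketch of a Gaussian-process phase classifier in the spirit of the abstract, using scikit-learn; the features, labels, and kernel are synthetic placeholders rather than MODIS/VIIRS/CALIOP data or the authors' model. The predicted class probabilities provide the kind of per-pixel uncertainty the abstract highlights.

    ```python
    # Toy Gaussian-process classifier for a two-class (liquid/ice) problem.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessClassifier
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))                  # per-pixel spectral features (synthetic)
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 0 = liquid, 1 = ice (toy labels)

    gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0))
    gpc.fit(X, y)

    proba = gpc.predict_proba(X[:5])  # class probabilities double as an uncertainty measure
    print(proba)
    ```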

  8. Advances in medical image computing.

    PubMed

    Tolxdorff, T; Deserno, T M; Handels, H; Meinzer, H-P

    2009-01-01

    Medical image computing has become a key technology in high-tech applications in medicine and a ubiquitous part of modern imaging systems and the related processes of clinical diagnosis and intervention. Over the past years significant progress has been made in the field, both at the methodological and at the application level. Despite this progress there are still big challenges to meet in order to establish image processing routinely in health care. In this issue, selected contributions of the German Conference on Medical Image Processing (BVM) are assembled to present the latest advances in the field of medical image computing. The winners of the scientific awards of the German Conference on Medical Image Processing (BVM) 2008 were invited to submit a manuscript on their latest developments and results for possible publication in Methods of Information in Medicine. Finally, seven excellent papers were selected to describe important aspects of recent advances in the field of medical image processing. The selected papers give an impression of the breadth and heterogeneity of new developments. New methods for improved image segmentation, non-linear image registration and modeling of organs are presented together with applications of image analysis methods in different medical disciplines. Furthermore, state-of-the-art tools and techniques to support the development and evaluation of medical image processing systems in practice are described. The selected articles describe different aspects of the intense development in medical image computing. The image processing methods presented enable new insights into the patient's image data and have the future potential to improve medical diagnostics and patient treatment.

  9. New Windows based Color Morphological Operators for Biomedical Image Processing

    NASA Astrophysics Data System (ADS)

    Pastore, Juan; Bouchet, Agustina; Brun, Marcel; Ballarin, Virginia

    2016-04-01

    Morphological image processing is well known as an efficient methodology for image processing and computer vision. With the wide use of color in many areas, interest in color perception and processing has been growing rapidly. Many models have been proposed to extend morphological operators to the field of color images, dealing with new problems not present in the binary and gray-level contexts. These solutions usually deal with the lattice structure of the color space, or provide it with total orders, in order to define basic operators with the required properties. In this work we propose a new locally defined ordering, in the context of window-based morphological operators, for the definition of erosion-like and dilation-like operators, which provides the desired properties expected from color morphology while avoiding some of the drawbacks of prior approaches. Experimental results show that the proposed color operators can be used efficiently for color image processing.
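
    A toy sketch of the window-based idea: within each window, pixels are ranked by a scalar ordering function and the 'smallest' color is kept, giving an erosion-like operator. The luminance-based order used here is an illustrative stand-in, not the locally defined ordering proposed in the paper.

    ```python
    # Erosion-like color operator via a scalar ordering inside each window.
    import numpy as np

    def color_erode(img, k=3):
        """Erosion-like operator on an (H, W, 3) float image with a k x k window."""
        pad = k // 2
        padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
        out = np.empty_like(img)
        # Illustrative ordering: luminance; the paper defines its own local order.
        order = lambda px: 0.299 * px[..., 0] + 0.587 * px[..., 1] + 0.114 * px[..., 2]
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                window = padded[i:i + k, j:j + k].reshape(-1, 3)
                out[i, j] = window[np.argmin(order(window))]  # 'smallest' color wins
        return out

    img = np.random.rand(32, 32, 3)
    print(color_erode(img).shape)
    ```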

  10. [Quantitative data analysis for live imaging of bone].

    PubMed

    Seno, Shigeto

    Because bone is a hard tissue, it has long been difficult to observe the interior of living bone tissue. With recent progress in microscopy and fluorescent probe technology, it has become possible to observe the various activities of the cells that form bone tissue. On the other hand, the quantitative increase in data and the diversification and complexity of the images make it difficult to perform quantitative analysis by visual inspection, and the development of methodologies for microscopic image processing and data analysis has therefore been expected. In this article, we introduce the research field of bioimage informatics, which lies at the boundary of biology and information science, and then outline the basic image processing technology for quantitative analysis of live imaging data of bone.

  11. Movement measurement of isolated skeletal muscle using imaging microscopy

    NASA Astrophysics Data System (ADS)

    Elias, David; Zepeda, Hugo; Leija, Lorenzo S.; Sossa, Humberto; de la Rosa, Jose I.

    1997-05-01

    An imaging-microscopy methodology is presented for measuring contraction movement in chemically stimulated crustacean skeletal muscle, whose movement speed is about 0.02 mm/s. For this, a CCD camera coupled to a microscope and a high-speed digital image acquisition system capable of capturing 960 images per second are used. The images are digitally processed in a PC and displayed on a video monitor. A maximal field of 0.198 × 0.198 mm² and a spatial resolution of 3.5 micrometers are obtained.
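
    One standard way to turn such image sequences into displacement (and hence speed) estimates is phase correlation between successive frames; the sketch below is an illustrative assumption, not the authors' algorithm, and handles integer pixel shifts only.

    ```python
    # Measure the inter-frame shift of a moving specimen by phase correlation.
    import numpy as np

    def shift_between(frame_a, frame_b):
        """Return the (dy, dx) integer shift of frame_b relative to frame_a."""
        F = np.fft.fft2(frame_b) * np.conj(np.fft.fft2(frame_a))
        corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real   # phase-correlation surface
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        h, w = frame_a.shape
        return (dy - h if dy > h // 2 else dy), (dx - w if dx > w // 2 else dx)

    a = np.random.rand(64, 64)
    b = np.roll(a, (3, -2), axis=(0, 1))   # synthetic movement: 3 rows down, 2 columns left
    print(shift_between(a, b))             # ~(3, -2); pixel size and frame rate convert this to mm/s
    ```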

  12. Split-screen display system and standardized methods for ultrasound image acquisition and multi-frame data processing

    NASA Technical Reports Server (NTRS)

    Selzer, Robert H. (Inventor); Hodis, Howard N. (Inventor)

    2011-01-01

    A standardized acquisition methodology assists operators to accurately replicate high resolution B-mode ultrasound images obtained over several spaced-apart examinations, utilizing a split-screen display in which the arterial ultrasound image from an earlier examination is displayed on one side of the screen while a real-time "live" ultrasound image from a current examination is displayed next to the earlier image on the opposite side of the screen. By viewing both images, whether simultaneously or alternately, while manually adjusting the ultrasound transducer, an operator is able to bring into view the real-time image that best matches a selected image from the earlier ultrasound examination. Utilizing this methodology, dynamic material properties of arterial structures, such as intima-media thickness (IMT) and diameter, are measured in a standard region over successive image frames. Each frame of the sequence has its echo edge boundaries automatically determined by using the immediately prior frame's true echo edge coordinates as initial boundary conditions. Computerized echo edge recognition and tracking over multiple successive image frames enhances measurement of arterial diameter and IMT and allows for improved vascular dimension measurements, including vascular stiffness and IMT determinations.
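
    The frame-to-frame tracking idea (each frame's edge search initialized at the prior frame's edge coordinates) can be sketched as follows; the column-wise search window and gradient criterion are illustrative assumptions, not the patented method.

    ```python
    # Track a near-horizontal echo edge from one frame to the next.
    import numpy as np

    def track_edge(frame, prev_edge, search=5):
        """frame: (rows, cols) image; prev_edge: per-column row indices of the edge."""
        grad = np.abs(np.diff(frame, axis=0))          # vertical intensity gradient
        new_edge = np.empty_like(prev_edge)
        for c in range(frame.shape[1]):
            lo = max(prev_edge[c] - search, 0)
            hi = min(prev_edge[c] + search, grad.shape[0] - 1)
            new_edge[c] = lo + np.argmax(grad[lo:hi + 1, c])  # strongest edge near prior estimate
        return new_edge

    frame = np.zeros((100, 50)); frame[60:, :] = 1.0   # synthetic wall boundary
    prev = np.full(50, 58)                             # edge estimate from the prior frame
    print(track_edge(frame, prev)[:5])                 # snaps to row 59 (the gradient row)
    ```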

  13. Fractional-order TV-L2 model for image denoising

    NASA Astrophysics Data System (ADS)

    Chen, Dali; Sun, Shenshen; Zhang, Congrong; Chen, YangQuan; Xue, Dingyu

    2013-10-01

    This paper proposes a new fractional-order total variation (TV) denoising method, which provides a much more elegant and effective way of treating the problems of algorithm implementation, the ill-posed inverse, regularization parameter selection and the blocky effect. Two fractional-order TV-L2 models are constructed for image denoising. The majorization-minimization (MM) algorithm is used to decompose these two complex fractional TV optimization problems into a set of linear optimization problems which can be solved by the conjugate gradient algorithm. The final adaptive numerical procedure is given. Finally, we report experimental results which show that the proposed methodology avoids the blocky effect and achieves state-of-the-art performance. In addition, two medical image processing experiments are presented to demonstrate the validity of the proposed methodology.
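
    The abstract does not reproduce the model, but a fractional-order TV-L2 denoising problem of this kind is commonly written as follows (a hedged reconstruction using the usual conventions, not necessarily the paper's exact formulation):

    ```latex
    \min_{u} \; \lVert \nabla^{\alpha} u \rVert_{1}
        + \frac{\lambda}{2} \lVert u - f \rVert_{2}^{2},
    \qquad \text{typically } 1 < \alpha < 2,
    ```

    where f is the noisy image, u the denoised estimate, λ the regularization parameter, and ∇^α a fractional-order difference operator (α = 1 recovers the classical TV model). The MM step replaces the nonsmooth ℓ1 term with quadratic majorizers, which is what yields the set of linear problems solvable by conjugate gradient.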

  14. Content-based quality evaluation of color images: overview and proposals

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Richard, Noel; Colantoni, Philippe; Fernandez-Maloigne, Christine

    2003-12-01

    The automatic prediction of perceived quality from image data in general, and the assessment of particular image characteristics or attributes that may need improvement in particular, is becoming an increasingly important part of intelligent imaging systems. The purpose of this paper is to propose that the color imaging community develop a software package, available on the internet, to help users select the approach best suited to a given application. The ultimate goal of this project is to propose, and then implement, an open and unified color imaging system to set up a favourable context for the evaluation and analysis of color imaging processes. Many different methods for measuring the performance of a process have been proposed by different researchers. In this paper, we discuss the advantages and shortcomings of most of the main analysis criteria and performance measures currently used. The aim is not to establish a harsh competition between algorithms or processes, but rather to test and compare the efficiency of methodologies, firstly to highlight the strengths and weaknesses of a given algorithm or methodology on a given image type, and secondly to make these results publicly available. This paper focuses on two important unsolved problems: Why is it so difficult to select a color space that gives better results than another? And why is it so difficult to select an image quality metric that agrees with the judgment of the human visual system? Several methods used either in color imaging or in image quality are thus discussed. Proposals for content-based image measures, and means of developing a standard test suite, are then presented. We advocate an evaluation protocol based on an automated procedure; this is the ultimate goal of our proposal.

  15. Near-infrared hyperspectral imaging for quality analysis of agricultural and food products

    NASA Astrophysics Data System (ADS)

    Singh, C. B.; Jayas, D. S.; Paliwal, J.; White, N. D. G.

    2010-04-01

    Agricultural and food processing industries are always looking to implement real-time quality monitoring techniques as a part of good manufacturing practices (GMPs) to ensure the high quality and safety of their products. Near-infrared (NIR) hyperspectral imaging is gaining popularity as a powerful non-destructive tool for quality analysis of several agricultural and food products. This technique has the ability to analyse spectral data in a spatially resolved manner (i.e., each pixel in the image has its own spectrum) by applying both conventional image processing and the chemometric tools used in spectral analyses. The hyperspectral imaging technique has demonstrated potential in detecting defects and contaminants in meats, fruits, cereals, and processed food products. This paper discusses the methodology of hyperspectral imaging in terms of hardware, software, calibration, data acquisition and compression, and the development of prediction and classification algorithms, and it presents a thorough review of the current applications of hyperspectral imaging in the analyses of agricultural and food products.

  16. Computational Burden Resulting from Image Recognition of High Resolution Radar Sensors

    PubMed Central

    López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L.; Rufo, Elena

    2013-01-01

    This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. Starting from actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition are burdensome and time-consuming processes, so to determine the most suitable implementation platform, an analysis of the computational complexity is of great interest. To this end, and since target identification must be completed in real time, the computational burden of both processes, image generation and comparison with the database, is explained separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation. PMID:23609804

  17. Computational burden resulting from image recognition of high resolution radar sensors.

    PubMed

    López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L; Rufo, Elena

    2013-04-22

    This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. Starting from actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition are burdensome and time-consuming processes, so to determine the most suitable implementation platform, an analysis of the computational complexity is of great interest. To this end, and since target identification must be completed in real time, the computational burden of both processes, image generation and comparison with the database, is explained separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation.

  18. Analysis and Implementation of Methodologies for the Monitoring of Changes in Eye Fundus Images

    NASA Astrophysics Data System (ADS)

    Gelroth, A.; Rodríguez, D.; Salvatelli, A.; Drozdowicz, B.; Bizai, G.

    2011-12-01

    We present a support system for change detection in fundus images of the same patient taken at different time intervals. This process is useful for monitoring pathologies that last for long periods of time, as ophthalmologic pathologies usually do. We propose a flow of preprocessing, processing and postprocessing applied to a set of images selected from a public database that show pathological progression. A test interface was developed to select the images to be compared, apply the different methods developed, and display the results. We measure the system performance in terms of sensitivity, specificity and computation time. We have obtained good results: higher than 84% for the first two parameters, and processing times lower than 3 seconds for 512x512 pixel images. For the specific case of detection of changes associated with bleeding, the system responds with sensitivity and specificity over 98%.

  19. Single-shot and single-sensor high/super-resolution microwave imaging based on metasurface

    PubMed Central

    Wang, Libo; Li, Lianlin; Li, Yunbo; Zhang, Hao Chi; Cui, Tie Jun

    2016-01-01

    Real-time high-resolution (including super-resolution) imaging with low-cost hardware is a long sought-after goal in various imaging applications. Here, we propose broadband single-shot and single-sensor high-/super-resolution imaging by using a spatio-temporal dispersive metasurface and an image reconstruction algorithm. The metasurface with spatio-temporal dispersive properties ensures the feasibility of the single-shot and single-sensor imager for super- and high-resolution imaging, since it can efficiently convert the detailed spatial information of the probed object into a one-dimensional time- or frequency-dependent signal acquired by a single sensor fixed in the far-field region. The imaging quality can be improved by applying a feature-enhanced reconstruction algorithm in post-processing, and the achievable imaging resolution is related to the distance between the object and the metasurface. When the object is placed in the vicinity of the metasurface, super-resolution imaging can be realized. The proposed imaging methodology provides a unique means to perform real-time data acquisition and reconstruct high-/super-resolution images without employing expensive hardware (e.g. mechanical scanners, antenna arrays, etc.). We expect that this methodology could make potential breakthroughs in the areas of microwave, terahertz, optical, and even ultrasound imaging. PMID:27246668

  20. Immersion and dry lithography monitoring for flash memories (after develop inspection and photo cell monitor) using a darkfield imaging inspector with advanced binning technology

    NASA Astrophysics Data System (ADS)

    Parisi, P.; Mani, A.; Perry-Sullivan, C.; Kopp, J.; Simpson, G.; Renis, M.; Padovani, M.; Severgnini, C.; Piacentini, P.; Piazza, P.; Beccalli, A.

    2009-12-01

    After-develop inspection (ADI) and photo-cell monitoring (PM) are part of a comprehensive lithography process monitoring strategy. Capturing defects of interest (DOI) in the lithography cell rather than at later process steps shortens the cycle time and allows for wafer re-work, reducing overall cost and improving yield. Low contrast DOI and multiple noise sources make litho inspection challenging. Broadband brightfield inspectors provide the highest sensitivity to litho DOI and are traditionally used for ADI and PM. However, a darkfield imaging inspector has shown sufficient sensitivity to litho DOI, providing a high-throughput option for litho defect monitoring. On the darkfield imaging inspector, a very high sensitivity inspection is used in conjunction with advanced defect binning to detect pattern issues and other DOI and minimize nuisance defects. For ADI, this darkfield inspection methodology enables the separation and tracking of 'color variation' defects that correlate directly to CD variations allowing a high-sampling monitor for focus excursions, thereby reducing scanner re-qualification time. For PM, the darkfield imaging inspector provides sensitivity to critical immersion litho defects at a lower cost-of-ownership. This paper describes litho monitoring methodologies developed and implemented for flash devices for 65nm production and 45nm development using the darkfield imaging inspector.

  1. Rhetorical Approaches to Crisis Communication: The Research, Development, and Validation of an Image Repair Situational Theory for Educational Leaders

    ERIC Educational Resources Information Center

    Vogelaar, Robert J.

    2005-01-01

    In this project a product to aid educational leaders in the process of communicating in crisis situations is presented. The product was created and received a formative evaluation using an educational research and development methodology. Ultimately, an administrative training course that utilized an Image Repair Situational Theory was developed.…

  2. Obtaining optic disc center and pixel region by automatic thresholding methods on morphologically processed fundus images.

    PubMed

    Marin, Diego; Gegundez-Arias, Manuel E; Suero, Angel; Bravo, Jose M

    2015-02-01

    Development of automatic retinal disease diagnosis systems based on retinal image computer analysis can provide remarkably quicker screening programs for early detection. Such systems are mainly focused on the detection of the earliest ophthalmic signs of illness and require previous identification of fundal landmark features such as optic disc (OD), fovea or blood vessels. A methodology for accurate center-position location and OD retinal region segmentation on digital fundus images is presented in this paper. The methodology performs a set of iterative opening-closing morphological operations on the original retinography intensity channel to produce a bright region-enhanced image. Taking blood vessel confluence at the OD into account, a 2-step automatic thresholding procedure is then applied to obtain a reduced region of interest, where the center and the OD pixel region are finally obtained by performing the circular Hough transform on a set of OD boundary candidates generated through the application of the Prewitt edge detector. The methodology was evaluated on 1200 and 1748 fundus images from the publicly available MESSIDOR and MESSIDOR-2 databases, acquired from diabetic patients and thus being clinical cases of interest within the framework of automated diagnosis of retinal diseases associated to diabetes mellitus. This methodology proved highly accurate in OD-center location: average Euclidean distance between the methodology-provided and actual OD-center position was 6.08, 9.22 and 9.72 pixels for retinas of 910, 1380 and 1455 pixels in size, respectively. On the other hand, OD segmentation evaluation was performed in terms of Jaccard and Dice coefficients, as well as the mean average distance between estimated and actual OD boundaries. Comparison with the results reported by other reviewed OD segmentation methodologies shows our proposal renders better overall performance. Its effectiveness and robustness make this proposed automated OD location and segmentation method a suitable tool to be integrated into a complete prescreening system for early diagnosis of retinal diseases. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
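
    The main steps named in the abstract (morphological bright-region enhancement, automatic thresholding, and a circular Hough transform on edge candidates) can be sketched with OpenCV as below; the kernel sizes and Hough parameters are illustrative guesses, not the paper's values, and a random array stands in for the retinography intensity channel.

    ```python
    # Sketch of an optic-disc location pipeline: morphology, threshold, Hough.
    import cv2
    import numpy as np

    fundus = np.random.randint(0, 255, (605, 700), dtype=np.uint8)  # stand-in intensity channel

    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    opened = cv2.morphologyEx(fundus, cv2.MORPH_OPEN, kernel)    # suppress vessels
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)   # enhance bright regions

    # Reduced region of interest (the paper's 2-step procedure simplified to Otsu here).
    _, roi = cv2.threshold(closed, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # HoughCircles runs its own internal edge detection; the paper instead feeds
    # Prewitt edge candidates to the circular Hough transform.
    circles = cv2.HoughCircles(closed, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                               param1=150, param2=30, minRadius=40, maxRadius=90)
    if circles is not None:
        x, y, r = circles[0, 0]
        print("OD center estimate:", (x, y), "radius:", r)
    ```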

  3. High Throughput Multispectral Image Processing with Applications in Food Science.

    PubMed

    Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John

    2015-01-01

    Recently, machine vision has been gaining attention in food science as well as in the food industry concerning food quality assessment and monitoring. Within the framework of the implementation of Process Analytical Technology (PAT) in the food industry, image processing can be used not only in the estimation and even prediction of food quality but also in the detection of adulteration. Towards these applications in food science, we present here a novel methodology for automated image analysis of several kinds of food products, e.g. meat, vanilla crème and table olives, so as to increase objectivity and data reproducibility, lower the cost of information extraction, and speed up quality assessment, without human intervention. The outcome of image processing is propagated to the downstream analysis. The developed multispectral image processing method is based on an unsupervised machine learning approach (Gaussian mixture models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we prove its efficiency and robustness against the currently available semi-manual software, showing that the developed method is a high-throughput approach appropriate for massive data extraction from food samples.
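
    A minimal sketch of the Gaussian-mixture segmentation step on a multispectral stack, assuming synthetic data and an arbitrary component count; the paper's unsupervised band-selection scheme is not reproduced here.

    ```python
    # GMM-based unsupervised segmentation of multispectral pixels.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    h, w, bands = 120, 160, 18                     # hypothetical multispectral frame
    cube = np.random.rand(h, w, bands)

    pixels = cube.reshape(-1, bands)               # one spectral vector per pixel
    gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
    labels = gmm.fit_predict(pixels)               # e.g., background / sample / defect

    segmentation = labels.reshape(h, w)
    print(np.bincount(labels))                     # pixel counts per segment
    ```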

  4. Digital image comparison by subtracting contextual transformations—percentile rank order differentiation

    USGS Publications Warehouse

    Wehde, M. E.

    1995-01-01

    The common method of digital image comparison by subtraction imposes various constraints on the image contents. Precise registration of images is required to assure proper evaluation of surface locations. The attribute being measured and the calibration and scaling of the sensor are also important to the validity and interpretability of the subtraction result. Influences of sensor gains and offsets complicate the subtraction process. The presence of any uniform systematic transformation component in one of two images to be compared distorts the subtraction results and requires analyst intervention to interpret or remove it. A new technique has been developed to overcome these constraints. Images to be compared are first transformed using the cumulative relative frequency as a transfer function. The transformed images represent the contextual relationship of each surface location with respect to all others within the image. The process of differentiating between the transformed images results in a percentile rank ordered difference. This process produces consistent terrain-change information even when the above requirements necessary for subtraction are relaxed. This technique may be valuable to an appropriately designed hierarchical terrain-monitoring methodology because it does not require human participation in the process.
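
    The described transform is straightforward to express in code: each image is mapped through its own cumulative relative frequency (empirical CDF), and the transformed images are differenced to give the percentile-rank-ordered difference. The sketch below is a direct, hedged rendering of that idea on synthetic data.

    ```python
    # Percentile rank order differentiation of two co-registered images.
    import numpy as np
    from scipy.stats import rankdata

    def percentile_rank(image):
        """Map each pixel to its percentile rank within its own image (0..1)."""
        flat = rankdata(image.ravel(), method="average") / image.size
        return flat.reshape(image.shape)

    img_t1 = np.random.rand(100, 100)
    img_t2 = img_t1 + 0.3 * (np.random.rand(100, 100) < 0.05)   # sparse 'terrain change'

    prod = percentile_rank(img_t2) - percentile_rank(img_t1)    # rank-order difference
    print(prod.min(), prod.max())  # changed pixels stand out; a uniform gain/offset cancels
    ```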

  5. Advanced imaging techniques for the study of plant growth and development.

    PubMed

    Sozzani, Rosangela; Busch, Wolfgang; Spalding, Edgar P; Benfey, Philip N

    2014-05-01

    A variety of imaging methodologies are being used to collect data for quantitative studies of plant growth and development from living plants. Multi-level data, from macroscopic to molecular, and from weeks to seconds, can be acquired. Furthermore, advances in parallelized and automated image acquisition enable the throughput to capture images from large populations of plants under specific growth conditions. Image-processing capabilities allow for 3D or 4D reconstruction of image data and automated quantification of biological features. These advances facilitate the integration of imaging data with genome-wide molecular data to enable systems-level modeling. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Processing Satellite Imagery To Detect Waste Tire Piles

    NASA Technical Reports Server (NTRS)

    Skiles, Joseph; Schmidt, Cynthia; Wuinlan, Becky; Huybrechts, Catherine

    2007-01-01

    A methodology for processing commercially available satellite spectral imagery has been developed to enable identification and mapping of waste tire piles in California. The California Integrated Waste Management Board initiated the project and provided funding for the method's development. The methodology uses a combination of commercially available image-processing and georeferencing software to develop a model that specifically distinguishes between tire piles and other objects. The methodology reduces the time that must be spent to initially survey a region for tire sites, thereby increasing the time inspectors and managers have available for remediation of the sites. Remediation is needed because millions of used tires are discarded every year, waste tire piles pose fire hazards, and mosquitoes often breed in water trapped in tires. It should be possible to adapt the methodology to regions outside California by modifying some of the algorithms implemented in the software to account for geographic differences in spectral characteristics associated with terrain and climate. The task of identifying tire piles in satellite imagery is uniquely challenging because of their low reflectance levels: tires tend to be spectrally confused with shadows and deep water, both of which reflect little light to satellite-borne imaging systems. In this methodology, the challenge is met, in part, by use of software that implements the Tire Identification from Reflectance (TIRe) model. The development of the TIRe model included incorporation of lessons learned in previous research on the detection and mapping of tire piles by use of manual/visual and/or computational analysis of aerial and satellite imagery. The TIRe model is a computational model for identifying tire piles and discriminating between tire piles and other objects. The input to the TIRe model is the georeferenced but otherwise raw satellite spectral images of a geographic region to be surveyed. The TIRe model identifies the darkest objects in the images and, on the basis of spatial and spectral image characteristics, discriminates against other dark objects, which can include vegetation, some bodies of water, and dark soils. The TIRe model can identify piles of as few as 100 tires. The output of the TIRe model is a binary mask showing areas containing suspected tire piles and spectrally similar features. This mask is overlaid on the original satellite imagery and examined by a trained image analyst, who strives to further discriminate against non-tire objects that the TIRe model tentatively identified as tire piles. After the analyst has made adjustments, the mask is used to create a synoptic, geographically accurate tire-pile survey map, which can be overlaid with a road map and/or any other map or set of georeferenced data, according to a customer's preferences.

  7. Towards a Unified Approach to Information Integration - A review paper on data/information fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitney, Paul D.; Posse, Christian; Lei, Xingye C.

    2005-10-14

    Information or data fusion of data from different sources is ubiquitous in many applications, from epidemiology, medical, biological, political, and intelligence to military applications. Data fusion involves integration of spectral, imaging, text, and many other sensor data. For example, in epidemiology, information is often obtained from many studies conducted by different researchers in different regions with different protocols. In the medical field, the diagnosis of a disease is often based on imaging (MRI, X-ray, CT), clinical examination, and lab results. In the biological field, information is obtained from studies conducted on many different species. In the military field, information is obtained from radar sensors, text messages, chemical-biological sensors, acoustic sensors, optical warning, and many other sources. Many methodologies are used in the data integration process, from classical and Bayesian methods to evidence-based expert systems. The implementation of data integration ranges from pure software design to a mixture of software and hardware. In this review we summarize the methodologies and implementations of the data fusion process, and illustrate in more detail the methodologies involved in three examples. We propose a unified multi-stage and multi-path mapping approach to the data fusion process, and point out future prospects and challenges.

  8. Space images processing methodology for assessment of atmosphere pollution impact on forest-swamp territories

    NASA Astrophysics Data System (ADS)

    Polichtchouk, Yuri; Tokareva, Olga; Bulgakova, Irina V.

    2003-03-01

    Methodological problems of satellite image processing for assessing the impact of atmospheric pollution on forest ecosystems using geoinformation systems are addressed. The approach to quantitative assessment of the impact of atmospheric pollution on forest ecosystems is based on calculating the relative areas of forest landscapes that fall inside atmospheric pollution zones. The landscape structure of forested territories in the southern part of Western Siberia is determined by processing medium-resolution images from the Resource-O spacecraft. Particular attention is given to the modeling of pollution zones caused by gas flaring at oil fields. Pollution zones were delineated by modeling contaminant dispersal in the atmosphere with standard models, and the areas of polluted landscapes are calculated as a function of atmospheric pollution level.

  9. Novel wavelength diversity technique for high-speed atmospheric turbulence compensation

    NASA Astrophysics Data System (ADS)

    Arrasmith, William W.; Sullivan, Sean F.

    2010-04-01

    The defense, intelligence, and homeland security communities are driving a need for software-dominant, real-time or near-real-time atmospheric-turbulence-compensated imagery. The development of parallel processing capabilities is finding application in diverse areas including image processing, target tracking, pattern recognition, and image fusion, to name a few. A novel approach to the computationally intensive case of software-dominant optical and near-infrared imaging through atmospheric turbulence is addressed in this paper. Previously, the somewhat conventional wavelength diversity method has been used to compensate for atmospheric turbulence with great success. We apply a new correlation-based approach to the wavelength diversity methodology using a parallel processing architecture, enabling high-speed atmospheric turbulence compensation. Methods for optical imaging through distributed turbulence are discussed, simulation results are presented, and computational and performance assessments are provided.

  10. Finding glenoid surface on scapula in 3D medical images for shoulder joint implant operation planning: 3D OCR

    NASA Astrophysics Data System (ADS)

    Mohammad Sadeghi, Majid; Kececi, Emin Faruk; Bilsel, Kerem; Aralasmak, Ayse

    2017-03-01

    Medical imaging has great importance in the earlier detection, better treatment and follow-up of diseases. 3D medical image analysis with CT and MRI images has also been used to aid surgery by enabling patient-specific implant fabrication, where having a precise three-dimensional model of the associated body parts is essential. In this paper, a 3D image processing methodology for finding the plane on which the glenoid surface has a maximum surface area is proposed. Finding this surface is the first step in designing a patient-specific shoulder joint implant.

  11. Super-Resolution Imaging Strategies for Cell Biologists Using a Spinning Disk Microscope

    PubMed Central

    Hosny, Neveen A.; Song, Mingying; Connelly, John T.; Ameer-Beg, Simon; Knight, Martin M.; Wheeler, Ann P.

    2013-01-01

    In this study we use a spinning disk confocal microscope (SD) to generate super-resolution images of multiple cellular features from any plane in the cell. We obtain super-resolution images by using stochastic intensity fluctuations of biological probes, combining Photoactivated Localization Microscopy (PALM)/Stochastic Optical Reconstruction Microscopy (STORM) methodologies. We compared different image analysis algorithms for processing super-resolution data to identify the most suitable for analysis of particular cell structures. SOFI was chosen for X and Y and was able to achieve a resolution of ca. 80 nm; higher resolutions, better than 30 nm, were possible depending on the super-resolution image analysis algorithm used. Our method uses low laser power and fluorescent probes which are available either commercially or through the scientific community, and is therefore gentle enough for biological imaging. Through comparative studies with structured illumination microscopy (SIM) and widefield epifluorescence imaging we identified that our methodology was advantageous for imaging cellular structures which are not immediately at the cell-substrate interface, including the nuclear architecture and mitochondria. We have shown that it is possible to obtain two-colour images, which highlights the potential this technique has for high-content screening, imaging of multiple epitopes and live cell imaging. PMID:24130668

  12. A Novel Methodology for Improving Plant Pest Surveillance in Vineyards and Crops Using UAV-Based Hyperspectral and Spatial Data.

    PubMed

    Vanegas, Fernando; Bratanov, Dmitry; Powell, Kevin; Weiss, John; Gonzalez, Felipe

    2018-01-17

    Recent advances in remotely sensed imagery and geospatial image processing using unmanned aerial vehicles (UAVs) have enabled the rapid and ongoing development of monitoring tools for crop management and the detection/surveillance of insect pests. This paper describes a UAV remote sensing-based methodology to increase the efficiency of existing surveillance practices (human inspectors and insect traps) for detecting pest infestations (e.g., grape phylloxera in vineyards). The methodology uses a UAV integrated with advanced digital hyperspectral, multispectral, and RGB sensors. We implemented the methodology for the development of a predictive model for phylloxera detection. In this method, we explore the combination of airborne RGB, multispectral, and hyperspectral imagery with ground-based data at two separate time periods and under different levels of phylloxera infestation. We describe the technology used (the sensors, the UAV, and the flight operations), the processing workflow of the datasets from each imagery type, and the methods for combining multiple airborne and ground-based datasets. Finally, we present relevant results on the correlation between the different processed datasets. The objective of this research is to develop a novel methodology for collecting, processing, analysing and integrating multispectral, hyperspectral, ground and spatial data to remotely sense different variables in different applications, such as, in this case, plant pest surveillance. The development of such a methodology would provide researchers, agronomists, and UAV practitioners with reliable data collection protocols and methods to achieve faster processing techniques and integrate multiple sources of data in diverse remote sensing applications.

  13. Meta-analysis of the technical performance of an imaging procedure: guidelines and statistical methodology.

    PubMed

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2015-02-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  14. Meta-analysis of the technical performance of an imaging procedure: Guidelines and statistical methodology

    PubMed Central

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2017-01-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test–retest repeatability data for illustrative purposes. PMID:24872353

  15. Image analysis of skin color heterogeneity focusing on skin chromophores and the age-related changes in facial skin.

    PubMed

    Kikuchi, Kumiko; Masuda, Yuji; Yamashita, Toyonobu; Kawai, Eriko; Hirao, Tetsuji

    2015-05-01

    Heterogeneity with respect to skin color tone is one of the key factors in visual perception of facial attractiveness and age. However, there have been few studies on quantitative analyses of the color heterogeneity of facial skin. The purpose of this study was to develop image evaluation methods for skin color heterogeneity focusing on skin chromophores and then characterize ethnic differences and age-related changes. A facial imaging system equipped with an illumination unit and a high-resolution digital camera was used to develop image evaluation methods for skin color heterogeneity. First, melanin and/or hemoglobin images were obtained using pigment-specific image-processing techniques, which involved conversion from Commission Internationale de l'Eclairage XYZ color values to melanin and/or hemoglobin indexes as measures of their contents. Second, a spatial frequency analysis with threshold settings was applied to the individual images. Cheek skin images of 194 healthy Asian and Caucasian female subjects were acquired using the imaging system. Applying this methodology, the skin color heterogeneity of Asian and Caucasian faces was characterized. The proposed pigment-specific image-processing techniques allowed visual discrimination of skin redness from skin pigmentation. In the heterogeneity analyses of cheek skin color, age-related changes in melanin were clearly detected in Asian and Caucasian skin. Furthermore, it was found that the heterogeneity indexes of hemoglobin were significantly higher in Caucasian skin than in Asian skin. We have developed evaluation methods for skin color heterogeneity by image analyses based on the major chromophores, melanin and hemoglobin, with special reference to their size. This methodology focusing on skin color heterogeneity should be useful for better understanding of aging and ethnic differences. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  16. A validated methodology for the 3D reconstruction of cochlea geometries using human microCT images

    NASA Astrophysics Data System (ADS)

    Sakellarios, A. I.; Tachos, N. S.; Rigas, G.; Bibas, T.; Ni, G.; Böhnke, F.; Fotiadis, D. I.

    2017-05-01

    Accurate reconstruction of the inner ear is a prerequisite for the modelling and understanding of inner ear mechanics. In this study, we present a semi-automated methodology for accurate reconstruction of the major inner ear structures (scalae, basilar membrane, stapes and semicircular canals). For this purpose, high resolution microCT images of a human specimen were used. The segmentation methodology is based on an iterative level set algorithm which provides the borders of the structures of interest. An enhanced coupled level set method, which allows simultaneous labeling of multiple structures without any overlapping regions, has been developed for this purpose. The marching cubes algorithm was applied in order to extract the surface from the segmented volume. The reconstructed geometries are then post-processed so that the basilar membrane geometry realistically represents physiologic dimensions. The final reconstructed model is compared to the available data from the literature. The results show that our generated inner ear structures are in good agreement with the published ones, while our approach is the most realistic in terms of the reconstruction of basilar membrane thickness and width.
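
    The surface-extraction step can be illustrated with scikit-image's marching cubes on a synthetic segmented volume (a sphere standing in for one of the scalae); the level value simply sits between the background and structure labels.

    ```python
    # Extract a triangle mesh from a binary segmentation with marching cubes.
    import numpy as np
    from skimage import measure

    z, y, x = np.mgrid[-32:32, -32:32, -32:32]
    volume = (x**2 + y**2 + z**2 < 20**2).astype(np.float32)   # binary 'segmentation'

    # Level 0.5 lies on the boundary between background (0) and structure (1).
    verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
    print(verts.shape, faces.shape)  # mesh vertices and triangles for post-processing
    ```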

  17. Container Surface Evaluation by Function Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendelberger, James G.

    Container images are analyzed for specific surface features, such as pits, cracks, and corrosion. The detection of these features is confounded by complicating features, including shape/curvature, welds, edges, scratches, and foreign objects, among others. A method is provided to discriminate between the various features. The method consists of estimating the image background, determining a residual image, and post-processing to determine the features present. The methodology is not finalized, but it demonstrates the feasibility of a method to determine the kind and size of the features present.
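
    A minimal sketch of the background-estimation/residual idea, assuming a Gaussian-smoothed background model and a 3-sigma outlier rule; both are illustrative stand-ins for the report's function estimation and post-processing.

    ```python
    # Flag candidate surface features as outliers from a smooth background model.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    image = np.random.rand(256, 256)
    image[100:104, 120:124] -= 0.8                      # synthetic dark pit

    background = gaussian_filter(image, sigma=15)       # estimate of the smooth surface
    residual = image - background
    features = np.abs(residual) > 3 * residual.std()    # post-processing: outlier pixels

    print(features.sum(), "candidate feature pixels")
    ```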

  18. Implementation of Canny and Isotropic Operator with Power Law Transformation to Identify Cervical Cancer

    NASA Astrophysics Data System (ADS)

    Amalia, A.; Rachmawati, D.; Lestari, I. A.; Mourisa, C.

    2018-03-01

    Colposcopy has been used primarily to diagnose pre-cancerous and cancerous lesions because the procedure gives a magnified view of the tissues of the vagina and the cervix. However, the poor quality of colposcopy images sometimes makes it challenging for physicians to recognize and analyze them. Generally, implementations of image processing to identify cervical cancer have had to employ complex classification or clustering methods. In this study, we wanted to show that cervical cancer can be identified by applying edge detection to the colposcopy image alone. We implement and compare two edge detection operators, the isotropic and the Canny operator. The research methodology in this paper consists of image processing, training, and testing stages. In the image processing step, the colposcopy image is transformed by an nth-root power-law transformation to obtain a better detection result, followed by the edge detection process. Training is the process of labelling all dataset images with the cervical cancer stage; this process involved a pathologist as an expert in diagnosing the colposcopy images as a reference. Testing is the process of deciding the cancer stage classification by comparing the similarity of a colposcopy image in the testing stage with the images from the training process. We used 30 images as a dataset. Both the Canny and the isotropic operator achieve the same accuracy of 80%. The average running time for the Canny operator implementation is 0.3619206 ms, while the isotropic operator takes 1.49136262 ms. The results show that the Canny operator is preferable because it generates more precise edges in less time.
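
    The two image processing steps named above, an nth-root power-law transformation followed by Canny edge detection, can be sketched with OpenCV as follows; the gamma value and Canny thresholds are illustrative, not the study's settings.

    ```python
    # Power-law (gamma) enhancement followed by Canny edge detection.
    import cv2
    import numpy as np

    img = np.random.randint(0, 255, (480, 640), dtype=np.uint8)  # stand-in grayscale frame

    gamma = 0.5                                   # nth root: out = in**(1/n), here n = 2
    lut = ((np.arange(256) / 255.0) ** gamma * 255).astype(np.uint8)
    enhanced = cv2.LUT(img, lut)                  # power-law transformation via lookup table

    edges = cv2.Canny(enhanced, threshold1=50, threshold2=150)
    print(edges.dtype, edges.max())               # binary edge map (0 or 255)
    ```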

  19. Onboard FPGA-based SAR processing for future spaceborne systems

    NASA Technical Reports Server (NTRS)

    Le, Charles; Chan, Samuel; Cheng, Frank; Fang, Winston; Fischman, Mark; Hensley, Scott; Johnson, Robert; Jourdan, Michael; Marina, Miguel; Parham, Bruce

    2004-01-01

    We present a real-time, high-performance and fault-tolerant FPGA-based hardware architecture for the processing of synthetic aperture radar (SAR) images in future spaceborne systems. In particular, we discuss the integrated design approach, from top-level algorithm specifications and system requirements, design methodology, functional verification and performance validation, down to hardware design and implementation.

  20. Building high-performance system for processing a daily large volume of Chinese satellites imagery

    NASA Astrophysics Data System (ADS)

    Deng, Huawu; Huang, Shicun; Wang, Qi; Pan, Zhiqiang; Xin, Yubin

    2014-10-01

    The number of Earth observation satellites from China has increased dramatically in recent years, and those satellites are acquiring a large volume of imagery daily. As the main portal for image processing and distribution for those Chinese satellites, the China Centre for Resources Satellite Data and Application (CRESDA) has been working with PCI Geomatics during the last three years to solve two issues in this regard: processing the large volume of data (about 1,500 scenes or 1 TB per day) in a timely manner and generating geometrically accurate orthorectified products. After three years of research and development, a high performance system has been built and successfully delivered. The high performance system has a service-oriented architecture and can be deployed to a cluster of computers that may be configured with high-end computing power. The high performance is gained, first, by making image processing algorithms parallel using high-performance graphics processing unit (GPU) cards and multiple cores from multiple CPUs, and, second, by distributing processing tasks to a cluster of computing nodes. While achieving performance up to thirty times (and even more) faster than traditional practice, a particular methodology was developed to improve the geometric accuracy of images acquired from Chinese satellites (including HJ-1 A/B, ZY-1-02C, ZY-3, GF-1, etc.). The methodology consists of fully automatic collection of dense ground control points (GCPs) from various resources and the application of those points to improve the photogrammetric model of the images. The delivered system is up and running at CRESDA for pre-operational production and has been generating good return on investment by eliminating a great amount of manual labor and increasing daily data throughput more than tenfold with fewer operators. Future work, such as development of more performance-optimized algorithms, robust image matching methods and application workflows, is identified to improve the system in the coming years.

  1. Patient-centered outcomes research in radiology: trends in funding and methodology.

    PubMed

    Lee, Christoph I; Jarvik, Jeffrey G

    2014-09-01

    The creation of the Patient-Centered Outcomes Research Trust Fund and the Patient-Centered Outcomes Research Institute (PCORI) through the Patient Protection and Affordable Care Act of 2010 presents new opportunities for funding patient-centered comparative effectiveness research (CER) in radiology. We provide an overview of the evolution of federal funding and priorities for CER with a focus on radiology-related priority topics over the last two decades, and discuss the funding processes and methodological standards outlined by PCORI. We introduce key paradigm shifts in research methodology that will be required on the part of radiology health services researchers to obtain competitive federal grant funding in patient-centered outcomes research. These paradigm shifts include direct engagement of patients and other stakeholders at every stage of the research process, from initial conception to dissemination of results. We will also discuss the increasing use of mixed methods and novel trial designs. One of these trial designs, the pragmatic trial, has the potential to be readily applied to evaluating the effectiveness of diagnostic imaging procedures and imaging-based interventions among diverse patient populations in real-world settings. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  2. Monitoring of activated sludge settling ability through image analysis: validation on full-scale wastewater treatment plants.

    PubMed

    Mesquita, D P; Dias, O; Amaral, A L; Ferreira, E C

    2009-04-01

    In recent years, a great deal of attention has been focused on research into activated sludge processes, where the solid-liquid separation phase is frequently considered of critical importance because of the different problems that severely affect the compaction and settling of the sludge. Bearing that in mind, in this work image analysis routines were developed in the Matlab environment, allowing the identification and characterization of microbial aggregates and protruding filaments in eight different wastewater treatment plants over a combined period of 2 years. Monitoring of the activated sludge contents allowed for the detection of bulking events, proving that the developed image analysis methodology is adequate for continuous examination of the morphological changes in microbial aggregates and subsequent estimation of the sludge volume index. In fact, the results obtained proved that the developed image analysis methodology is a feasible method for the continuous monitoring of activated sludge systems and the identification of disturbances.
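
    As an illustration of this kind of analysis, here is a minimal sketch, assuming scikit-image, of separating compact aggregates from protruding filaments; the thresholds and size cut-offs are illustrative assumptions, not the authors' Matlab routines.

```python
# Identify aggregates (labelled regions) and protruding filaments (skeleton
# of what remains after removing compact cores). Parameters illustrative.
import numpy as np
from skimage import io, filters, measure, morphology

img = io.imread("sludge.png", as_gray=True)      # hypothetical micrograph
binary = img > filters.threshold_otsu(img)       # foreground = biomass

labels = measure.label(binary)
aggregates = [r for r in measure.regionprops(labels) if r.area > 10]
total_aggregate_area = sum(r.area for r in aggregates)

cores = morphology.binary_opening(binary, morphology.disk(5))
filaments = morphology.skeletonize(binary & ~cores)
filament_length = filaments.sum()                # skeleton length in pixels

# Filament-length-to-aggregate-area ratios have been linked to settling (SVI).
print(filament_length / max(total_aggregate_area, 1))
```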

  3. Analytical robustness of quantitative NIR chemical imaging for Islamic paper characterization

    NASA Astrophysics Data System (ADS)

    Mahgoub, Hend; Gilchrist, John R.; Fearn, Thomas; Strlič, Matija

    2017-07-01

    Recently, spectral imaging techniques such as multispectral (MSI) and hyperspectral imaging (HSI) have gained importance in the field of heritage conservation. This paper explores the analytical robustness of quantitative chemical imaging for Islamic paper characterization by focusing on the effect of different measurement and processing parameters, i.e. acquisition conditions and calibration, on the accuracy of the collected spectral data. This provides a better understanding of a technique that can deliver a measure of change in collections through imaging. For the quantitative model, a special calibration target was devised using 105 samples from a well-characterized reference Islamic paper collection. Two material properties were of interest: starch sizing and the cellulose degree of polymerization (DP). Multivariate data analysis methods were used to develop discrimination and regression models, which served as an evaluation methodology for the metrology of quantitative NIR chemical imaging. Spectral data were collected using a pushbroom HSI scanner (Gilden Photonics Ltd) in the 1000-2500 nm range with a spectral resolution of 6.3 nm, using a mirror scanning setup and halogen illumination. Data were acquired under different measurement conditions and acquisition parameters. Preliminary results suggest that measurement parameters such as the use of different lenses and different scanning backgrounds may not have a great influence on the quantitative results. Moreover, the evaluation methodology allowed selection of the best pre-treatment method to apply to the data.
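
    A minimal sketch of the kind of multivariate calibration described above, regressing a reference property (e.g. DP) on NIR spectra with partial least squares, assuming scikit-learn; file names, array shapes and the component count are illustrative.

```python
# PLS regression from NIR spectra to degree of polymerization (DP).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

X = np.load("spectra.npy")   # (105 samples, n_wavelengths) reflectance spectra
y = np.load("dp.npy")        # (105,) reference DP values for the calibration set

pls = PLSRegression(n_components=8)
scores = cross_val_score(pls, X, y, cv=10, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")
pls.fit(X, y)                # final model, applied pixel-wise to the HSI cube
```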

  4. Development of a methodology for structured reporting of information in echocardiography.

    PubMed

    Homorodean, Călin; Olinic, Maria; Olinic, Dan

    2012-03-01

    In order to conduct research relying on ultrasound images, it is necessary to access a large number of relevant cases represented by images and their interpretation. The DICOM standard defines the structured reporting information object; templates are tree-like structures that offer structural guidance in report construction. Our aim was to lay the foundations of a structured reporting methodology in echocardiography through the generation of a consistent set of DICOM templates. We developed an information system capable of managing echocardiographic images and structured reports. In order to perform a complete description of the cardiac structures, we used 1900 coded concepts organized into 344 contexts by their semantic meaning in a variety of cardiac diseases. We developed 30 templates, with up to 10 nesting levels, arranged in a pyramid-like architecture. Two templates are used for reporting every measurement and description: "EchoMeasurement" and "EchoDescription". Intermediate-level templates specify how to report the features of echo-Doppler findings: "Spectral Curve", "Color Jet", "Intracardiac mass". Templates for every cardiovascular structure include the previous ones, and "Echocardiography Procedure Report" includes all other templates. The templates were tested by reporting the echo features of 100 patients through the analysis of 500 DICOM images. The benefits of these templates were demonstrated during the testing process through the quality of the echocardiography report, the ability to argue for and link every diagnostic feature to a defining image, and the opportunities opened up for education and research. In the future, our template-based reporting methodology might be extended to other imaging modalities.
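
    A hedged sketch of how one nested SR content item of this kind might be assembled with pydicom; the concept codes and template names below are illustrative placeholders, not the authors' actual DICOM templates.

```python
# Build a tiny SR content tree: a CONTAINER holding one numeric measurement.
# All codes use a placeholder "99LOCAL" scheme; they are not real LOINC codes.
from pydicom.dataset import Dataset
from pydicom.sequence import Sequence

def code(value, scheme, meaning):
    c = Dataset()
    c.CodeValue, c.CodingSchemeDesignator, c.CodeMeaning = value, scheme, meaning
    return c

num = Dataset()                          # an "EchoMeasurement"-style item
num.RelationshipType = "CONTAINS"
num.ValueType = "NUM"
num.ConceptNameCodeSequence = Sequence([code("EF", "99LOCAL", "Ejection fraction")])

mv = Dataset()
mv.NumericValue = "60"
mv.MeasurementUnitsCodeSequence = Sequence([code("%", "UCUM", "percent")])
num.MeasuredValueSequence = Sequence([mv])

report = Dataset()                       # root container of the report tree
report.ValueType = "CONTAINER"
report.ConceptNameCodeSequence = Sequence(
    [code("EchoProc", "99LOCAL", "Echocardiography Procedure Report")])
report.ContentSequence = Sequence([num]) # nesting mirrors the template hierarchy
```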

  5. Computer analysis of gallbladder ultrasonic images towards recognition of pathological lesions

    NASA Astrophysics Data System (ADS)

    Ogiela, M. R.; Bodzioch, S.

    2011-06-01

    This paper presents a new approach to gallbladder ultrasonic (US) image processing and analysis aimed at automatic detection and interpretation of disease symptoms in processed US images. First, the paper presents a new heuristic method for filtering gallbladder contours from images. A major stage in this filtering is segmenting and sectioning off the areas occupied by the organ. The paper provides an inventive algorithm for the holistic extraction of gallbladder image contours, based on rank filtering as well as on the analysis of line-profile sections of the examined organs. The second part concerns detecting the most important symptoms of gallbladder lesions. Automating a diagnostic process always comes down to developing algorithms that analyze the object of the diagnosis and verify the occurrence of symptoms related to a given condition. The methodology of computer analysis of US gallbladder images presented here is clearly utilitarian in nature and, after standardization, can be used as a technique for supporting the diagnosis of selected gallbladder disorders using images of this organ.

  6. Assessment of cluster yield components by image analysis.

    PubMed

    Diago, Maria P; Tardaguila, Javier; Aleixos, Nuria; Millan, Borja; Prats-Montalban, Jose M; Cubero, Sergio; Blasco, Jose

    2015-04-01

    Berry weight, berry number and cluster weight are key parameters for yield estimation in the wine and table grape industries. Current yield prediction methods are destructive, labour-demanding and time-consuming. In this work, a new methodology based on image analysis was developed to determine cluster yield components in a fast and inexpensive way. Clusters of seven different red grapevine (Vitis vinifera L.) varieties were photographed under laboratory conditions and their cluster yield components determined manually after image acquisition. Two algorithms, based on the Canny and the logarithmic image processing approaches, were tested to find the contours of the berries in the images prior to berry detection performed by means of the Hough transform. Results were obtained in two ways: by analysing either a single image of the cluster or four images per cluster from different orientations. The best results (R² between 69% and 95% in berry detection and between 65% and 97% in cluster weight estimation) were achieved using four images and the Canny algorithm. The capability of the image-analysis model to predict berry weight was 84%. The new, low-cost methodology presented here enabled the assessment of cluster yield components, saving time and providing inexpensive information in comparison with current manual methods. © 2014 Society of Chemical Industry.
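
    A minimal sketch of the berry-detection chain (edge response feeding a circular Hough transform), assuming OpenCV; the file name and all parameter values are illustrative, not the paper's settings.

```python
# Detect roughly circular berries with a circular Hough transform. With
# cv2.HOUGH_GRADIENT, a Canny edge pass is applied internally (param1 is
# its upper hysteresis threshold).
import cv2

img = cv2.imread("cluster.jpg")                     # hypothetical cluster photo
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                      # suppress specular noise

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=15,
                           param1=150, param2=30, minRadius=8, maxRadius=40)

n_berries = 0 if circles is None else circles.shape[1]
print(f"berries detected: {n_berries}")
# Cluster weight can then be regressed on berry count and mean berry radius.
```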

  7. An update on technical and methodological aspects for cardiac PET applications.

    PubMed

    Presotto, Luca; Busnardo, Elena; Gianolli, Luigi; Bettinardi, Valentino

    2016-12-01

    Positron emission tomography (PET) is indicated for a large number of cardiac diseases: perfusion and viability studies are commonly used to evaluate coronary artery disease; PET can also be used to assess sarcoidosis and endocarditis, as well as to investigate amyloidosis. Furthermore, plaque characterization is a hot topic for research. Most of these studies are technically very challenging. High count rates and short acquisition times characterize perfusion scans, while very small targets have to be imaged in inflammation/infection and plaque examinations. Furthermore, cardiac PET suffers from respiratory and cardiac motion blur. Each type of study has specific requirements from the technical and methodological point of view, so PET systems with high overall performance are required. Furthermore, in the era of hybrid PET/computed tomography (CT) and PET/magnetic resonance imaging (MRI) systems, the combination of complementary functional and anatomical information can be used to improve diagnosis and prognosis. Moreover, PET images can be qualitatively and quantitatively improved by exploiting information from the other modality using advanced algorithms. In this review we report the latest technological and methodological innovations for cardiac PET applications, with particular reference to the state of the art of hybrid PET/CT and PET/MRI. We also report the most recent advances in software, from reconstruction algorithms to image processing and analysis programs.

  8. Functional imaging of the brain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ell, P.J.; Jarritt, P.H.; Costa, D.C.

    1987-07-01

    The radionuclide tracer method is unique among imaging methodologies in its ability to trace organ or tissue function and metabolism. Physical processes such as electron or proton density assessment or resonance, edge identification, and electrical or ultrasonic impedance do not pertain to the image generation process in nuclear medicine, or do so only in a rather secondary manner. The nuclear medicine imaging study is primarily a study of the chemical nature, distribution and interaction of the tracer/radiopharmaceutical with the cellular system under investigation: thyroid cells with sodium iodide, reticuloendothelial cells with colloidal particles, adrenal medulla cells with metaiodobenzylguanidine, and so on. In the two most recent areas of nuclear medicine expansion, oncology (with labelled monoclonal antibodies) and neurology and psychiatry (with a whole new series of lipid-soluble radiopharmaceuticals), specific cell systems can also be targeted and hence imaged and investigated. The study of structure, as masterfully performed by Virchow and all his successors over more than a century, is now definitely the prerogative of imaging systems that excel in spatial and contrast resolution. The investigation of function and metabolism, however, has clearly passed from the laboratory animal protocol and experiment to direct investigation in man, this being the achievement of the radionuclide tracer methodology. In this article, we review present interest and developments in the part of nuclear medicine activity that is aimed at the study of the neurological or psychiatric patient.

  9. A methodology for the semi-automatic digital image analysis of fragmental impactites

    NASA Astrophysics Data System (ADS)

    Chanou, A.; Osinski, G. R.; Grieve, R. A. F.

    2014-04-01

    A semi-automated digital image analysis method is developed for the comparative textural study of impact melt-bearing breccias. The method uses the free ImageJ software developed by the National Institutes of Health (NIH). Digital image analysis is performed on scans of hand samples (10-15 cm across), based on macroscopic interpretations of the rock components. All image processing and segmentation are done semi-automatically, with the least possible manual intervention. The areal fraction of components is estimated, and modal abundances can be deduced where the physical optical properties (e.g., contrast, color) of the samples allow it. Other measurable parameters include, for example, clast size, clast-preferred orientations, average box-counting dimension or fragment shape complexity, and nearest-neighbor distances (NnD). This semi-automated method allows the analysis of a larger number of samples in a relatively short time. Textures, granulometry, and shape descriptors are of considerable importance in rock characterization. The methodology is used to determine variations in the physical characteristics of several examples of fragmental impactites.

  10. Using genetically modified tomato crop plants with purple leaves for absolute weed/crop classification.

    PubMed

    Lati, Ran N; Filin, Sagi; Aly, Radi; Lande, Tal; Levin, Ilan; Eizenberg, Hanan

    2014-07-01

    Weed/crop classification is considered the main problem in developing precise weed-management methodologies because crops and weeds share similar hues. Great effort has been invested in the development of classification models, most based on expensive sensors and complicated algorithms; however, satisfactory results are not consistently obtained owing to imaging conditions in the field. We report on an innovative approach that combines advances in genetic engineering with robust image-processing methods to detect weeds and distinguish them from crop plants by manipulating the crop's leaf color. We demonstrate this on genetically modified tomato (germplasm AN-113), which expresses a purple leaf color. Autonomous weed/crop classification is performed using an invariant-hue transformation that is applied to images acquired by a standard consumer camera (visible wavelength) and handles variations in illumination intensity. The integration of these methodologies is simple and effective, and classification results were accurate and stable under a wide range of imaging conditions. Using this approach, we simplify the most complicated stage in image-based weed/crop classification models. © 2013 Society of Chemical Industry.
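
    A minimal sketch of hue-based crop/weed separation, assuming OpenCV; the hue windows below are illustrative assumptions, and the paper's invariant-hue transform, which additionally handles illumination changes, is not reproduced here.

```python
# Separate purple crop foliage from green weed foliage by hue windows.
import cv2

bgr = cv2.imread("field.jpg")                       # hypothetical field image
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)

# OpenCV hue runs 0-179: purple sits roughly in 130-160, green in 35-85.
crop_mask = cv2.inRange(hsv, (130, 60, 40), (160, 255, 255))   # purple tomato
weed_mask = cv2.inRange(hsv, (35, 60, 40), (85, 255, 255))     # green weeds

coverage = weed_mask.mean() / 255
print(f"weed cover fraction: {coverage:.2%}")
```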

  11. Functional Magnetic Resonance Imaging of the Domestic Dog: Research, Methodology, and Conceptual Issues

    PubMed Central

    Thompkins, Andie M.; Deshpande, Gopikrishna; Waggoner, Paul; Katz, Jeffrey S.

    2017-01-01

    Neuroimaging of the domestic dog is a rapidly expanding research topic in terms of the cognitive domains being investigated. Because dogs have shared both a physical and social world with humans for thousands of years, they provide a unique and socially relevant means of investigating a variety of shared human and canine psychological phenomena. Additionally, their trainability allows for neuroimaging to be carried out noninvasively in an awake and unrestrained state. In this review, a brief overview of functional magnetic resonance imaging (fMRI) is followed by an analysis of recent research with dogs using fMRI. Methodological and conceptual concerns found across multiple studies are raised, and solutions to these issues are suggested. With the research capabilities brought by canine functional imaging, findings may improve our understanding of canine cognitive processes, identify neural correlates of behavioral traits, and provide early-life selection measures for dogs in working roles. PMID:29456781

  12. Development of a diaphragmatic motion-based elastography framework for assessment of liver stiffness

    NASA Astrophysics Data System (ADS)

    Weis, Jared A.; Johnsen, Allison M.; Wile, Geoffrey E.; Yankeelov, Thomas E.; Abramson, Richard G.; Miga, Michael I.

    2015-03-01

    Evaluation of mechanical stiffness imaging biomarkers through magnetic resonance elastography (MRE) has shown considerable promise for non-invasive assessment of liver stiffness to monitor hepatic fibrosis. MRE typically requires specialized externally applied vibratory excitation and scanner-specific motion-sensitive pulse sequences. In this work, we have developed an elasticity imaging approach that utilizes natural diaphragmatic respiratory motion to induce deformation, eliminating the need for external deformation excitation hardware and specialized pulse sequences. Our approach uses clinically available standard-of-care volumetric imaging acquisitions, combined with offline model-based post-processing, to generate volumetric estimates of stiffness within the liver and surrounding tissue structures. We have previously developed a novel methodology for non-invasive elasticity imaging that utilizes a model-based elasticity reconstruction algorithm and MR image volumes acquired under different states of deformation. In prior work, deformation was externally applied through inflation of an air bladder placed within the MR radiofrequency coil. Here, we extend the methodology with the goal of determining the feasibility of assessing liver mechanical stiffness using diaphragmatic respiratory motion between end-inspiration and end-expiration breath-holds as the source of deformation. We present initial investigations towards applying this methodology to assess liver stiffness in healthy volunteers and cirrhotic patients. Our preliminary results suggest that this method is capable of non-invasive, image-based assessment of liver stiffness using natural diaphragmatic respiratory motion, and they provide considerable enthusiasm for extending our approach towards monitoring liver stiffness in cirrhotic patients with limited impact on the standard-of-care clinical imaging acquisition workflow.

  13. CT Dose Optimization in Pediatric Radiology: A Multiyear Effort to Preserve the Benefits of Imaging While Reducing the Risks.

    PubMed

    Greenwood, Taylor J; Lopez-Costa, Rodrigo I; Rhoades, Patrick D; Ramírez-Giraldo, Juan C; Starr, Matthew; Street, Mandie; Duncan, James; McKinstry, Robert C

    2015-01-01

    The marked increase in radiation exposure from medical imaging, especially in children, has caused considerable alarm and spurred efforts to preserve the benefits while reducing the risks of imaging. Applying the principles of the Image Gently campaign, data-driven process and quality improvement techniques such as process mapping and flowcharting, cause-and-effect diagrams, Pareto analysis, statistical process control (control charts), failure mode and effects analysis, "lean" or Six Sigma methodology, and closed feedback loops led to a multiyear program that reduced overall computed tomographic (CT) examination volume more than fourfold and concurrently decreased radiation exposure per CT study without compromising diagnostic utility. This systematic approach, involving education, streamlining access to magnetic resonance imaging and ultrasonography, auditing against benchmarks, applying modern CT technology, and revising CT protocols, led to a more than twofold reduction in CT radiation exposure between 2005 and 2012 for patients at the authors' institution while maintaining diagnostic utility. (©)RSNA, 2015.

  14. A Novel Methodology for Improving Plant Pest Surveillance in Vineyards and Crops Using UAV-Based Hyperspectral and Spatial Data

    PubMed Central

    Vanegas, Fernando; Weiss, John; Gonzalez, Felipe

    2018-01-01

    Recent advances in remotely sensed imagery and geospatial image processing using unmanned aerial vehicles (UAVs) have enabled the rapid and ongoing development of monitoring tools for crop management and the detection/surveillance of insect pests. This paper describes a UAV remote-sensing-based methodology to increase the efficiency of existing surveillance practices (human inspectors and insect traps) for detecting pest infestations (e.g., grape phylloxera in vineyards). The methodology uses a UAV integrated with advanced digital hyperspectral, multispectral, and RGB sensors. We implemented the methodology to develop a predictive model for phylloxera detection. In this method, we explore the combination of airborne RGB, multispectral, and hyperspectral imagery with ground-based data at two separate time periods and under different levels of phylloxera infestation. We describe the technology used (the sensors, the UAV, and the flight operations), the processing workflow of the datasets from each imagery type, and the methods for combining multiple airborne and ground-based datasets. Finally, we present relevant results on the correlation between the different processed datasets. The objective of this research is to develop a novel methodology for collecting, processing, analysing and integrating multispectral, hyperspectral, ground and spatial data to remotely sense different variables in different applications, such as, in this case, plant pest surveillance. Such a methodology would provide researchers, agronomists, and UAV practitioners with reliable data collection protocols and methods to achieve faster processing techniques and to integrate multiple sources of data in diverse remote sensing applications. PMID:29342101

  15. Challenges of implementing digital technology in motion picture distribution and exhibition: testing and evaluation methodology

    NASA Astrophysics Data System (ADS)

    Swartz, Charles S.

    2003-05-01

    The process of distributing and exhibiting a motion picture has changed little since the Lumière brothers presented the first motion picture to an audience in 1895. While this analog photochemical process is capable of producing screen images of great beauty and expressive power, more often the consumer experience is diminished by third generation prints and by the wear and tear of the mechanical process. Furthermore, the film industry globally spends approximately $1B annually manufacturing and shipping prints. Alternatively, distributing digital files would theoretically yield great benefits in terms of image clarity and quality, lower cost, greater security, and more flexibility in the cinema (e.g., multiple language versions). In order to understand the components of the digital cinema chain and evaluate the proposed technical solutions, the Entertainment Technology Center at USC in 2000 established the Digital Cinema Laboratory as a critical viewing environment, with the highest quality film and digital projection equipment. The presentation describes the infrastructure of the Lab, test materials, and testing methodologies developed for compression evaluation, and lessons learned up to the present. In addition to compression, the Digital Cinema Laboratory plans to evaluate other components of the digital cinema process as well.

  16. Task-based image quality assessment in radiation therapy: initial characterization and demonstration with CT simulation images

    NASA Astrophysics Data System (ADS)

    Dolly, Steven R.; Anastasio, Mark A.; Yu, Lifeng; Li, Hua

    2017-03-01

    In current radiation therapy practice, image quality is still assessed subjectively or by utilizing physically-based metrics. Recently, a methodology for objective task-based image quality (IQ) assessment in radiation therapy was proposed by Barrett et al. [1] In this work, we present a comprehensive implementation and evaluation of this new IQ assessment methodology. A modular simulation framework was designed to perform an automated, computer-simulated end-to-end radiation therapy treatment. The fully simulated framework utilizes new learning-based stochastic object models (SOMs) to obtain known organ boundaries, generates a set of images directly from the numerical phantoms created with the SOMs, and automates the image segmentation and treatment planning steps of a radiation therapy workflow. By use of this computational framework, therapeutic operating characteristic (TOC) curves can be computed, and the area under the TOC curve (AUTOC) can be employed as a figure of merit to guide optimization of different components of the treatment planning process. The developed computational framework is employed to optimize X-ray CT pre-treatment imaging. We demonstrate that use of the radiation-therapy-based IQ measures leads to different imaging parameters than those obtained by use of physically-based measures.

  17. Quality initiatives: improving patient flow for a bone densitometry practice: results from a Mayo Clinic radiology quality initiative.

    PubMed

    Aakre, Kenneth T; Valley, Timothy B; O'Connor, Michael K

    2010-03-01

    Lean Six Sigma process improvement methodologies have been used in manufacturing for some time, and they are equally applicable to radiology as a way to identify opportunities for improvement in patient care delivery settings. A multidisciplinary team of physicians and staff conducted a 100-day quality improvement project with the guidance of a quality advisor. Using the DMAIC framework (define, measure, analyze, improve, and control), time studies were performed for all aspects of patient and technologist involvement. From these studies, value stream maps for the current and future states were developed, and tests of change were implemented. Comprehensive value stream maps showed that before implementation of process changes, an average of 20.95 minutes was required to complete a bone densitometry study. Two process changes (i.e., tests of change) were undertaken. First, the location for completing a patient assessment form was moved from inside the imaging room to the waiting area, enabling patients to complete the form while waiting for the technologist. Second, patients were instructed to sit in a waiting area immediately outside the imaging rooms, rather than in the main reception area, which is far removed from the imaging area. Realignment of these process steps, with reduced technologist travel distances, resulted in a 3-minute average decrease in patient cycle time. This represented a 15% reduction in the initial patient cycle time with no change in staff or costs. Radiology process improvement projects can yield positive results despite small incremental changes.

  18. Advanced metrology by offline SEM data processing

    NASA Astrophysics Data System (ADS)

    Lakcher, Amine; Schneider, Loïc.; Le-Gratiet, Bertrand; Ducoté, Julien; Farys, Vincent; Besacier, Maxime

    2017-06-01

    Today's technology nodes contain increasingly complex designs, bringing growing challenges to chip manufacturing process steps. Efficient metrology is necessary to assess the process variability of these complex patterns and thus extract relevant data to generate process-aware design rules and improve OPC models. Today, process variability is mostly addressed through the analysis of in-line monitoring features, which are often designed to support robust measurements and as a consequence are not always very representative of critical design rules. CD-SEM is the main CD metrology technique used in chip manufacturing, but it is challenged when it comes to measuring metrics such as tip-to-tip, tip-to-line, areas or necking in high quantity and with robustness. CD-SEM images contain a great deal of information that is not always used in metrology. Suppliers provide tools that allow engineers to extract the SEM contours of their features and convert them into a GDS file. Contours can be seen as the signature of a shape, as they contain all of its dimensional data. The methodology is thus to use the CD-SEM to take high-quality images, generate SEM contours from them, and build a database out of them. The contours feed an offline metrology tool that processes them to extract different metrics. Two previous papers showed that it is possible to perform complex measurements on hotspots at different process steps (lithography, etch, copper CMP) by using SEM contours with an in-house offline metrology tool. In the current paper, the methodology presented previously is expanded to improve its robustness and is combined with the use of phylogeny to classify the SEM images according to their geometrical proximities.

  19. Modeling and Characterization of Near-Crack-Tip Plasticity from Micro- to Nano-Scales

    NASA Technical Reports Server (NTRS)

    Glaessgen, Edward H.; Saether, Erik; Hochhalter, Jacob; Smith, Stephen W.; Ransom, Jonathan B.; Yamakov, Vesselin; Gupta, Vipul

    2010-01-01

    Methodologies for understanding the plastic deformation mechanisms related to crack propagation at the nano-, meso- and micro-length scales are being developed. These efforts include the development and application of several computational methods including atomistic simulation, discrete dislocation plasticity, strain gradient plasticity and crystal plasticity; and experimental methods including electron backscattered diffraction and video image correlation. Additionally, methodologies for multi-scale modeling and characterization that can be used to bridge the relevant length scales from nanometers to millimeters are being developed. The paper focuses on the discussion of newly developed methodologies in these areas and their application to understanding damage processes in aluminum and its alloys.

  1. The Visible Heart® project and free-access website 'Atlas of Human Cardiac Anatomy'.

    PubMed

    Iaizzo, Paul A

    2016-12-01

    Pre- and post-evaluations of implantable cardiac devices require innovative and critical testing in all phases of the design process. The Visible Heart® Project was successfully launched in 1997, and 3 years later the Atlas of Human Cardiac Anatomy website was online. The Visible Heart® methodologies and Atlas website can be used to better understand human cardiac anatomy and disease states and/or to improve cardiac device design throughout the development process. To date, Visible Heart® methodologies have been used to reanimate 75 human hearts, all considered non-viable for transplantation. The Atlas is a unique free-access website featuring novel images of functional and fixed human cardiac anatomies from >400 human heart specimens. Furthermore, this website includes education tutorials on anatomy, physiology, congenital heart disease and various imaging modalities. For instance, the Device Tutorial provides examples of commonly deployed devices that were present at the time of in vitro reanimation or were subsequently delivered, including: leads, catheters, valves, annuloplasty rings, leadless pacemakers and stents. Another section of the website displays 3D models of vasculature, blood volumes, and/or tissue volumes reconstructed from computed tomography (CT) and magnetic resonance imaging (MRI) of various heart specimens. A new section allows the user to interact with various heart models. Visible Heart® methodologies have enabled our laboratory to reanimate 75 human hearts and visualize functional cardiac anatomies and device/tissue interfaces. The website freely shares all images, video clips and CT/MRI DICOM files in honour of the generous gifts received from donors and their families. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2016. For Permissions, please email: journals.permissions@oup.com.

  2. MRI Superresolution Using Self-Similarity and Image Priors

    PubMed Central

    Manjón, José V.; Coupé, Pierrick; Buades, Antonio; Collins, D. Louis; Robles, Montserrat

    2010-01-01

    In typical clinical settings of Magnetic Resonance Imaging (MRI), both low- and high-resolution images of different types are routinely acquired. In some cases, the acquired low-resolution images have to be upsampled to match other high-resolution images for posterior analysis or post-processing such as registration or multimodal segmentation. However, classical interpolation techniques are not able to recover the high-frequency information lost during the acquisition process. In the present paper, a new superresolution method is proposed to reconstruct high-resolution images from low-resolution ones using information from coplanar high-resolution images acquired from the same subject. Furthermore, the reconstruction process is constrained to be physically plausible with respect to the MR acquisition model, which allows a meaningful interpretation of the results. Experiments on synthetic and real data show the effectiveness of the proposed approach. A comparison with classical state-of-the-art interpolation techniques demonstrates the improved performance of the proposed methodology. PMID:21197094

  3. Intranucleus Single-Molecule Imaging in Living Cells.

    PubMed

    Shao, Shipeng; Xue, Boxin; Sun, Yujie

    2018-06-01

    Many critical processes occurring in mammalian cells are stochastic and can be directly observed at the single-molecule level within their physiological environment, which would otherwise be obscured in an ensemble measurement. There are various fundamental processes in the nucleus, such as transcription, replication, and DNA repair, the study of which can greatly benefit from intranuclear single-molecule imaging. However, the number of such studies is relatively small mainly because of lack of proper labeling and imaging methods. In the past decade, tremendous efforts have been devoted to developing tools for intranuclear imaging. Here, we mainly describe the recent methodological developments of single-molecule imaging and their emerging applications in the live nucleus. We also discuss the remaining issues and provide a perspective on future developments and applications of this field. Copyright © 2018 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  4. Heuristic Enhancement of Magneto-Optical Images for NDE

    NASA Astrophysics Data System (ADS)

    Cacciola, Matteo; Megali, Giuseppe; Pellicanò, Diego; Calcagno, Salvatore; Versaci, Mario; Morabito, FrancescoCarlo

    2010-12-01

    The quality of measurements in nondestructive testing and evaluation plays a key role in assessing the reliability of different inspection techniques. Each technique, like the magneto-optic imaging treated here, is affected by particular types of noise related to the specific device used for acquisition. Therefore, ever more accurate image processing is often required by relevant applications, for instance when implementing integrated solutions for flaw detection and characterization. The aim of this paper is to propose a preprocessing procedure based on independent component analysis (ICA) to ease the detection of rivets and/or flaws in the specimens under test. The proposed approach is compared with other advanced image processing methodologies used for denoising magneto-optic images (MOIs), in order to show the advantages and weaknesses of ICA in improving the accuracy and performance of rivet/flaw detection.
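
    A hedged sketch in the spirit of the ICA preprocessing described: image patches are decomposed with FastICA and near-Gaussian (noise-like) components are suppressed before reconstruction. This is an illustrative variant assuming scikit-learn and SciPy, not the authors' procedure, and all parameters are assumptions.

```python
# Patch-based ICA denoising: drop low-kurtosis (Gaussian-like) sources.
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.image import (extract_patches_2d,
                                              reconstruct_from_patches_2d)

img = np.load("moi.npy")                   # hypothetical magneto-optic image
patches = extract_patches_2d(img, (8, 8))
X = patches.reshape(len(patches), -1)

ica = FastICA(n_components=32, random_state=0)
S = ica.fit_transform(X)                   # independent sources per patch

k = kurtosis(S, axis=0)                    # Gaussian noise has kurtosis ~ 0
S[:, np.abs(k) < 0.5] = 0.0                # suppress noise-dominated sources

X_dn = ica.inverse_transform(S)
denoised = reconstruct_from_patches_2d(X_dn.reshape(patches.shape), img.shape)
```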

  5. A simple solution for model comparison in bold imaging: the special case of reward prediction error and reward outcomes.

    PubMed

    Erdeniz, Burak; Rohe, Tim; Done, John; Seidler, Rachael D

    2013-01-01

    Conventional neuroimaging techniques provide information about condition-related changes of the BOLD (blood-oxygen-level dependent) signal, indicating only where and when the underlying cognitive processes occur. Recently, with the help of a new approach called "model-based" functional neuroimaging (fMRI), researchers are able to visualize changes in the internal variables of a time-varying learning process, such as the reward prediction error or the predicted reward value of a conditional stimulus. However, despite being extremely beneficial to the imaging community for understanding the neural correlates of decision variables, the model-based approach to brain imaging data is also methodologically challenging because of the multicollinearity problem in statistical analysis. There are multiple sources of multicollinearity in functional neuroimaging, including investigations of closely related variables and/or experimental designs that do not account for this. The source of multicollinearity discussed in this paper arises from correlation between different subjective variables that are calculated very close together in time. Here, we review methodological approaches to analyzing such data by discussing the special case of separating the reward prediction error signal from reward outcomes.
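
    A small sketch of one standard diagnostic for this problem, the variance inflation factor of design-matrix regressors, assuming statsmodels; the file name is hypothetical.

```python
# Quantify regressor overlap in a model-based fMRI design matrix.
import numpy as np
from statsmodels.stats.outliers_influence import variance_inflation_factor

X = np.load("design_matrix.npy")   # (n_scans, n_regressors), e.g. prediction
                                   # error and reward outcome convolved with HRF
vifs = [variance_inflation_factor(X, i) for i in range(X.shape[1])]
for i, v in enumerate(vifs):
    print(f"regressor {i}: VIF = {v:.1f}")   # VIF >> 10 flags problematic overlap
```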

  6. Target Transformation Constrained Sparse Unmixing (ttcsu) Algorithm for Retrieving Hydrous Minerals on Mars: Application to Southwest Melas Chasma

    NASA Astrophysics Data System (ADS)

    Lin, H.; Zhang, X.; Wu, X.; Tarnas, J. D.; Mustard, J. F.

    2018-04-01

    Quantitative analysis of hydrated minerals from hyperspectral remote sensing data is fundamental for understanding Martian geologic processes. Because of the difficulty of selecting endmembers from hyperspectral images, sparse unmixing algorithms have been proposed for application to CRISM data of Mars. However, sparse unmixing becomes challenging when the endmember library grows dramatically. Here, we propose a new methodology, termed Target Transformation Constrained Sparse Unmixing (TTCSU), to accurately detect hydrous minerals on Mars. A new version of the target transformation technique proposed in our recent work was used to obtain potential detections from CRISM data. Sparse unmixing constrained with these detections as prior information was then applied to CRISM single-scattering albedo images, which were calculated using a Hapke radiative transfer model. This methodology increases the success rate of the automatic endmember selection of sparse unmixing and yields more accurate abundances. CRISM images of southwest Melas Chasma, an area that has been well analyzed in previous work, were used to validate our methodology. The sulfate jarosite was detected in southwest Melas Chasma; its distribution is consistent with previous work and its abundance is comparable. More validations will be carried out in future work.
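
    An illustrative sketch of the core abundance-estimation step as a pixel-wise non-negative least-squares solve, assuming SciPy; the TTCSU sparsity and target-transformation constraints are only approximated here by a pre-selected small library, and the file names are hypothetical.

```python
# Solve for non-negative endmember abundances pixel by pixel.
import numpy as np
from scipy.optimize import nnls

A = np.load("library.npy")        # (n_bands, n_endmembers) selected spectra
cube = np.load("ssa_cube.npy")    # (rows, cols, n_bands) single-scattering albedo

rows, cols, bands = cube.shape
abund = np.zeros((rows, cols, A.shape[1]))
for r in range(rows):
    for c in range(cols):
        abund[r, c], _ = nnls(A, cube[r, c])   # residual norm ignored here
jarosite_map = abund[..., 0]                   # if column 0 holds jarosite
```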

  7. Development of 2D deconvolution method to repair blurred MTSAT-1R visible imagery

    NASA Astrophysics Data System (ADS)

    Khlopenkov, Konstantin V.; Doelling, David R.; Okuyama, Arata

    2014-09-01

    Spatial cross-talk has been discovered in the visible-channel data of the Multi-functional Transport Satellite (MTSAT)-1R. The slight image blurring is attributed to an imperfection in the mirror surface, caused either by flawed polishing or by a dust contaminant. An image-processing methodology is described that employs a two-dimensional deconvolution routine to recover the original undistorted MTSAT-1R data counts. The methodology assumes that the dispersed portion of the signal is small and distributed randomly around the optical axis, which allows the image blurring to be described by a point spread function (PSF) based on a Gaussian profile. The PSF is described by four parameters, which are solved with a maximum likelihood estimator using coincident, collocated MTSAT-2 images as truth. A subpixel image-matching technique is used to align the MTSAT-2 pixels to the MTSAT-1R projection and to correct for navigation errors and cloud displacement due to the time and viewing-geometry differences between the two satellite observations. An optimal set of PSF parameters is derived by an iterative routine based on the four-dimensional Powell's conjugate direction method that minimizes the difference between PSF-corrected MTSAT-1R and collocated MTSAT-2 images. This iterative approach is computationally intensive and was optimized analytically as well as by coding in assembly language with parallel processing. The PSF parameters were found to be consistent over the 5 days of available daytime coincident MTSAT-1R and MTSAT-2 images, and they can easily be applied to the MTSAT-1R imager pixel-level counts to restore the original quality of the entire MTSAT-1R record.
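
    A hedged sketch of the correction idea: a Gaussian-profile PSF (sharp core plus a small dispersed halo) and an off-the-shelf deconvolution from scikit-image standing in for the authors' own routine; all values are illustrative.

```python
# Build a delta-plus-Gaussian-halo PSF and deconvolve with Richardson-Lucy.
import numpy as np
from skimage import restoration

def gaussian_psf(size=15, sigma=2.0, leak=0.05):
    """Sharp core plus a small halo dispersed around the optical axis."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    halo = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    psf = (1 - leak) * (np.hypot(xx, yy) == 0) + leak * halo / halo.sum()
    return psf / psf.sum()

img = np.load("mtsat1r_counts.npy").astype(float)  # hypothetical pixel counts
restored = restoration.richardson_lucy(img / img.max(), gaussian_psf(),
                                       num_iter=20)
```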

  8. On Adapting the Tensor Voting Framework to Robust Color Image Denoising

    NASA Astrophysics Data System (ADS)

    Moreno, Rodrigo; Garcia, Miguel Angel; Puig, Domenec; Julià, Carme

    This paper presents an adaptation of the tensor voting framework for color image denoising while preserving edges. Tensors are used to encode the CIELAB color channels, the uniformity, and the edginess of image pixels. A specific voting process is proposed to propagate color from a pixel to its neighbors by considering the distance between pixels, the perceptual color difference (using an optimized version of CIEDE2000), a uniformity measurement, and the likelihood of the pixels being impulse noise. The original colors are corrected with those encoded by the tensors obtained after the voting process. Peak signal-to-noise ratios and visual inspection show that the proposed methodology performs better than state-of-the-art techniques.

  9. Use of commercial off-the-shelf digital cameras for scientific data acquisition and scene-specific color calibration

    PubMed Central

    Akkaynak, Derya; Treibitz, Tali; Xiao, Bei; Gürkan, Umut A.; Allen, Justine J.; Demirci, Utkan; Hanlon, Roger T.

    2014-01-01

    Commercial off-the-shelf digital cameras are inexpensive and easy-to-use instruments that can be used for quantitative scientific data acquisition if images are captured in raw format and processed so that they maintain a linear relationship with scene radiance. Here we describe the image-processing steps required for consistent data acquisition with color cameras. In addition, we present a method for scene-specific color calibration that increases the accuracy of color capture when a scene contains colors that are not well represented in the gamut of a standard color-calibration target. We demonstrate applications of the proposed methodology in the fields of biomedical engineering, artwork photography, perception science, marine biology, and underwater imaging. PMID:24562030

  10. Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images

    NASA Astrophysics Data System (ADS)

    Rector, Travis A.; Levay, Zoltan G.; Frattare, Lisa M.; English, Jayanne; Pu'uohau-Pummill, Kirk

    2007-02-01

    The quality of modern astronomical data and the agility of current image-processing software enable the visualization of data in a way that exceeds the traditional definition of an astronomical image. Two developments in particular have led to a fundamental change in how astronomical images can be assembled. First, the availability of high-quality multiwavelength and narrowband data allow for images that do not correspond to the wavelength sensitivity of the human eye, thereby introducing ambiguity in the usage and interpretation of color. Second, many image-processing software packages now use a layering metaphor that allows for any number of astronomical data sets to be combined into a color image. With this technique, images with as many as eight data sets have been produced. Each data set is intensity-scaled and colorized independently, creating an immense parameter space that can be used to assemble the image. Since such images are intended for data visualization, scaling and color schemes must be chosen that best illustrate the science. A practical guide is presented on how to use the layering metaphor to generate publication-ready astronomical images from as many data sets as desired. A methodology is also given on how to use intensity scaling, color, and composition to create contrasts in an image that highlight the scientific detail. Examples of image creation are discussed.
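
    A toy sketch of the layering metaphor, assuming NumPy: each data set is intensity-scaled independently (here an asinh stretch), tinted with its own color, and the colorized layers are combined with a screen blend; file names, colors and the stretch are illustrative.

```python
# Independently scale, colorize, and screen-blend several data sets.
import numpy as np

def stretch(data, soft=10.0):
    d = (data - data.min()) / np.ptp(data)
    return np.arcsinh(soft * d) / np.arcsinh(soft)

def colorize(data, rgb):
    return stretch(data)[..., None] * np.asarray(rgb)  # (H, W, 3) tinted layer

layers = [colorize(np.load("halpha.npy"), (1.0, 0.2, 0.2)),  # hypothetical data
          colorize(np.load("oiii.npy"),   (0.2, 1.0, 0.4)),
          colorize(np.load("sii.npy"),    (0.3, 0.3, 1.0))]

composite = np.zeros_like(layers[0])
for layer in layers:                    # screen blend keeps highlights unclipped
    composite = 1 - (1 - composite) * (1 - layer)
```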

  11. The effect of image sharpness on quantitative eye movement data and on image quality evaluation while viewing natural images

    NASA Astrophysics Data System (ADS)

    Vuori, Tero; Olkkonen, Maria

    2006-01-01

    The aim of this study is to test both customer image-quality ratings (subjective image quality) and a physical measurement of user behavior (eye-movement tracking) to find differences in customer satisfaction between imaging technologies. A methodological aim is to find out whether eye movements can be used quantitatively in image-quality preference studies. In general, we want to map objective, physically measurable image quality to subjective evaluations and eye-movement data. We conducted a series of image-quality tests in which the test subjects evaluated image quality while we recorded their eye movements. Results show that eye-movement parameters change consistently according to the instructions given to the user and according to physical image quality; for example, saccade duration increased with increasing blur. The results indicate that eye-movement tracking could be used to differentiate the image-quality evaluation strategies that users adopt. They also show that eye movements can help in mapping between technological and subjective image quality. Furthermore, these results lend some empirical support to top-down perceptual processes in image-quality perception and evaluation by showing differences between perceptual processes when the cognitive task varies.

  12. Three-dimensional image contrast using biospeckle

    NASA Astrophysics Data System (ADS)

    Godinho, Robson Pierangeli; Braga, Roberto A., Jr.

    2010-09-01

    The biospeckle laser (BSL) technique has been applied in many areas of knowledge, and a variety of approaches have been presented to obtain the best results in biological and non-biological samples, in fast or slow activities, and in defined flows of material or in random activities. The methodologies reported in the literature vary in the apparatus used for image assembly and in the way the collected data are processed. The image-processing step in turn presents a variety of procedures, with first- or second-order statistical analysis and with different sizes of collected data. One way to access the biospeckle in defined flow, such as capillary blood flow in live animals, is the image contrast technique, which uses only one image of the illuminated sample. That approach presents some problems related to image resolution, which is reduced during contrast processing. In order to aid visualization of the low-resolution image formed by the contrast technique, this work presents a three-dimensional procedure as a reliable alternative for enhancing the final image. The work is based on parallel processing, with the generation of a virtual map of amplitudes, while maintaining the quasi-online character of the contrast technique. It was therefore possible to present in the same display the observed material, the image contrast result and, in addition, the three-dimensional image with adjustable rotation. The platform also offers the user the possibility of accessing the 3D image offline.
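
    A minimal sketch of the underlying single-image contrast computation (local K = sigma/mean, LASCA-style), assuming SciPy; the window size and file name are illustrative.

```python
# Local speckle contrast K = sigma/mean over a sliding window of one image.
import numpy as np
from scipy.ndimage import uniform_filter

img = np.load("biospeckle.npy").astype(float)   # one laser speckle image
w = 7
mean = uniform_filter(img, w)
sq_mean = uniform_filter(img ** 2, w)
contrast = np.sqrt(np.maximum(sq_mean - mean ** 2, 0)) / (mean + 1e-9)

# The 3D view described above is then just this 2D map rendered as a surface
# (e.g. with matplotlib's plot_surface) with interactive rotation.
```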

  13. An automatic panoramic image reconstruction scheme from dental computed tomography images

    PubMed Central

    Papakosta, Thekla K; Savva, Antonis D; Economopoulos, Theodore L; Gröhndal, H G

    2017-01-01

    Objectives: Panoramic images of the jaws are extensively used for dental examinations and/or surgical planning because they provide a general overview of the patient's maxillary and mandibular regions. Panoramic images are two-dimensional projections of three-dimensional (3D) objects. Therefore, it should be possible to reconstruct them from 3D radiographic representations of the jaws, produced by CBCT scanning, obviating the need for additional exposure to X-rays, should there be a need of panoramic views. The aim of this article is to present an automated method for reconstructing panoramic dental images from CBCT data. Methods: The proposed methodology consists of a series of sequential processing stages for detecting a fitting dental arch which is used for projecting the 3D information of the CBCT data to the two-dimensional plane of the panoramic image. The detection is based on a template polynomial which is constructed from a training data set. Results: A total of 42 CBCT data sets of real clinical pre-operative and post-operative representations from 21 patients were used. Eight data sets were used for training the system and the rest for testing. Conclusions: The proposed methodology was successfully applied to CBCT data sets, producing corresponding panoramic images, suitable for examining pre-operatively and post-operatively the patients' maxillary and mandibular regions. PMID:28112548
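
    A hedged sketch of the arch-fitting and projection idea, assuming NumPy/SciPy: fit a polynomial dental arch to bright dental voxels in an axial slab, then sample the volume along the arch to build panoramic columns; the polynomial degree, thresholds and file names are illustrative assumptions, not the paper's template-based detector.

```python
# Fit a polynomial arch and sample the CBCT volume along it.
import numpy as np
from scipy.ndimage import map_coordinates

vol = np.load("cbct.npy")                     # hypothetical (z, y, x) volume
slab = vol[vol.shape[0] // 2]                 # axial slice through the teeth
ys, xs = np.nonzero(slab > slab.mean() * 2)   # bright (dental) voxels

arch = np.poly1d(np.polyfit(xs, ys, deg=4))   # polynomial dental arch
x_path = np.linspace(xs.min(), xs.max(), 800)
y_path = arch(x_path)

# One panoramic column per arch point: the volume sampled down z at (y, x).
z_idx = np.arange(vol.shape[0])
pano = np.stack([map_coordinates(vol,
                                 [z_idx,
                                  np.full_like(z_idx, y, dtype=float),
                                  np.full_like(z_idx, x, dtype=float)],
                                 order=1)
                 for x, y in zip(x_path, y_path)], axis=1)
```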

  14. Time-varying bispectral analysis of visually evoked multi-channel EEG

    NASA Astrophysics Data System (ADS)

    Chandran, Vinod

    2012-12-01

    Theoretical foundations of higher-order spectral analysis are revisited to examine the use of time-varying bicoherence on non-stationary signals with a classical short-time Fourier approach. A methodology is developed to apply this to evoked EEG responses where a stimulus-locked time reference is available. Short-time windowed ensembles of the response at the same offset from the reference are treated as ergodic cyclostationary processes within a non-stationary random process. Bicoherence can then be estimated reliably, with known levels at which it differs significantly from zero, and tracked as a function of offset from the stimulus. When this methodology is applied to multi-channel EEG, it is possible to obtain information about phase synchronization in different regions of the brain as the neural response develops. The methodology is applied to analyze evoked EEG responses to flash visual stimuli presented to the left and right eye separately. The EEG electrode array is segmented based on bicoherence evolution over time, using the mean absolute difference as a measure of dissimilarity. Segment maps confirm the importance of the occipital region in visual processing and demonstrate a link between the frontal and occipital regions during the response. Maps are constructed using bicoherence at bifrequencies that include the alpha-band frequency of 8 Hz as well as 4 and 20 Hz. Differences are observed between responses from the left eye and the right eye, and also between subjects. The methodology shows potential as a neurological functional imaging technique that can be further developed for diagnosis and monitoring using scalp EEG, which is less invasive and less expensive than magnetic resonance imaging.
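
    A minimal sketch of the ensemble bicoherence estimate at one stimulus-locked offset, assuming NumPy; the windowing and the frequency-bin bookkeeping are illustrative.

```python
# Ensemble bicoherence at bifrequency (f1, f2) from stimulus-locked windows.
# trials: (n_trials, n_samples) array, all at the same offset from the stimulus.
import numpy as np

def bicoherence(trials, f1, f2):
    X = np.fft.rfft(trials * np.hanning(trials.shape[1]), axis=1)
    num = np.sum(X[:, f1] * X[:, f2] * np.conj(X[:, f1 + f2]))
    den = np.sqrt(np.sum(np.abs(X[:, f1] * X[:, f2]) ** 2) *
                  np.sum(np.abs(X[:, f1 + f2]) ** 2))
    return np.abs(num) / den      # 0 = no phase coupling, 1 = full coupling

# For a 1-s window sampled so that bin k = k Hz, coupling at (8 Hz, 8 Hz) is
# bicoherence(windows, 8, 8); tracking it across offsets gives the time course.
```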

  15. Methodological development of topographic correction in 2D/3D ToF-SIMS images using AFM images

    NASA Astrophysics Data System (ADS)

    Jung, Seokwon; Lee, Nodo; Choi, Myungshin; Lee, Jungmin; Cho, Eunkyunng; Joo, Minho

    2018-02-01

    Time-of-flight secondary-ion mass spectrometry (ToF-SIMS) is an emerging technique that provides chemical information directly from the surface of electronic materials, e.g. OLEDs and solar cells. It is a very versatile and highly sensitive mass spectrometric technique that provides surface molecular information, with its lateral distribution, as a two-dimensional (2D) molecular image. Extending the usefulness of ToF-SIMS, a 3D molecular image can be generated by acquiring multiple 2D images in a stack. These ToF-SIMS imaging techniques provide insight into the complex structures of unknown composition in electronic materials. However, one drawback of ToF-SIMS is that it cannot represent topographic information in 2D and 3D mapping images. To overcome this technical limitation, topographic information from an ex-situ technique such as atomic force microscopy (AFM) has been combined with the chemical information from SIMS, providing both chemical and physical information in one image. The key to combining the two different images obtained by ToF-SIMS and AFM is an image-processing algorithm that performs resizing and alignment by comparing specific pixel information in each image. In this work, we present the methodological development of a semiautomatic alignment and 3D structure interpolation system for combining 2D/3D images obtained by ToF-SIMS and AFM measurements, which provides useful analytical information in a single representation.
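
    A hedged sketch of the alignment step using a standard technique (subpixel phase cross-correlation from scikit-image) in place of the authors' pixel-comparison algorithm; the file names are hypothetical, and the AFM map is simply resampled to the SIMS pixel grid first.

```python
# Resize the AFM map to the SIMS grid, then register it by phase correlation.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation
from skimage.transform import resize

sims = np.load("sims_total_ion.npy")               # 2D ToF-SIMS intensity map
afm = np.load("afm_height.npy")                    # AFM topography, other size

afm_rs = resize(afm, sims.shape, anti_aliasing=True)
offset, _, _ = phase_cross_correlation(sims, afm_rs, upsample_factor=10)
afm_aligned = nd_shift(afm_rs, offset)             # topography on the SIMS grid
# Each SIMS pixel can now carry both a mass spectrum and a height value.
```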

  16. Segmentation of epidermal tissue with histopathological damage in images of haematoxylin and eosin stained human skin

    PubMed Central

    2014-01-01

    Background Digital image analysis has the potential to address issues surrounding traditional histological techniques including a lack of objectivity and high variability, through the application of quantitative analysis. A key initial step in image analysis is the identification of regions of interest. A widely applied methodology is that of segmentation. This paper proposes the application of image analysis techniques to segment skin tissue with varying degrees of histopathological damage. The segmentation of human tissue is challenging as a consequence of the complexity of the tissue structures and inconsistencies in tissue preparation, hence there is a need for a new robust method with the capability to handle the additional challenges materialising from histopathological damage. Methods A new algorithm has been developed which combines enhanced colour information, created following a transformation to the L*a*b* colourspace, with general image intensity information. A colour normalisation step is included to enhance the algorithm’s robustness to variations in the lighting and staining of the input images. The resulting optimised image is subjected to thresholding and the segmentation is fine-tuned using a combination of morphological processing and object classification rules. The segmentation algorithm was tested on 40 digital images of haematoxylin & eosin (H&E) stained skin biopsies. Accuracy, sensitivity and specificity of the algorithmic procedure were assessed through the comparison of the proposed methodology against manual methods. Results Experimental results show the proposed fully automated methodology segments the epidermis with a mean specificity of 97.7%, a mean sensitivity of 89.4% and a mean accuracy of 96.5%. When a simple user interaction step is included, the specificity increases to 98.0%, the sensitivity to 91.0% and the accuracy to 96.8%. The algorithm segments effectively for different severities of tissue damage. Conclusions Epidermal segmentation is a crucial first step in a range of applications including melanoma detection and the assessment of histopathological damage in skin. The proposed methodology is able to segment the epidermis with different levels of histological damage. The basic method framework could be applied to segmentation of other epithelial tissues. PMID:24521154
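
    A minimal sketch of the segmentation chain described above (L*a*b* colour feature, Otsu thresholding, then morphological clean-up), assuming scikit-image; the channel weights and structuring-element sizes are illustrative assumptions, not the paper's tuned algorithm.

```python
# Segment a candidate epidermal region from an H&E-stained image.
import numpy as np
from skimage import io, color, filters, morphology, measure

rgb = io.imread("h_and_e.png")[..., :3] / 255.0    # hypothetical H&E image
lab = color.rgb2lab(rgb)

# Combine colour (a* separates eosin pink) with inverted lightness (L*).
feature = 0.6 * lab[..., 1] + 0.4 * (100 - lab[..., 0])
binary = feature > filters.threshold_otsu(feature)

binary = morphology.remove_small_objects(binary, min_size=500)
binary = morphology.binary_closing(binary, morphology.disk(3))

# Keep the largest connected region as the epidermis candidate.
labels = measure.label(binary)
largest = max(measure.regionprops(labels), key=lambda r: r.area)
epidermis = labels == largest.label
```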

  17. ISSUES IN DIGITAL IMAGE PROCESSING OF AERIAL PHOTOGRAPHY FOR MAPPING SUBMERSED AQUATIC VEGETATION

    EPA Science Inventory

    The paper discusses the numerous issues that needed to be addressed when developing a methodology for mapping Submersed Aquatic Vegetation (SAV) from digital aerial photography. Specifically, we discuss 1) choice of film; 2) consideration of tide and weather constraints; 3) in-s...

  18. Mixed deep learning and natural language processing method for fake-food image recognition and standardization to help automated dietary assessment.

    PubMed

    Mezgec, Simon; Eftimov, Tome; Bucher, Tamara; Koroušić Seljak, Barbara

    2018-04-06

    The present study tested the combination of an established and a validated food-choice research method (the 'fake food buffet') with a new food-matching technology to automate the data collection and analysis. The methodology combines fake-food image recognition using deep learning and food matching and standardization based on natural language processing. The former is specific because it uses a single deep learning network to perform both the segmentation and the classification at the pixel level of the image. To assess its performance, measures based on the standard pixel accuracy and Intersection over Union were applied. Food matching firstly describes each of the recognized food items in the image and then matches the food items with their compositional data, considering both their food names and their descriptors. The final accuracy of the deep learning model trained on fake-food images acquired by 124 study participants and providing fifty-five food classes was 92·18 %, while the food matching was performed with a classification accuracy of 93 %. The present findings are a step towards automating dietary assessment and food-choice research. The methodology outperforms other approaches in pixel accuracy, and since it is the first automatic solution for recognizing the images of fake foods, the results could be used as a baseline for possible future studies. As the approach enables a semi-automatic description of recognized food items (e.g. with respect to FoodEx2), these can be linked to any food composition database that applies the same classification and description system.
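
    The reported evaluation measures can be reproduced from two integer label maps. The sketch below computes standard pixel accuracy and mean Intersection over Union from a confusion matrix; the label maps and class count are placeholders.

        import numpy as np

        def pixel_accuracy_and_miou(pred, truth, n_classes):
            """Pixel accuracy and mean IoU from same-shape integer label maps."""
            conf = np.zeros((n_classes, n_classes), dtype=np.int64)
            np.add.at(conf, (truth.ravel(), pred.ravel()), 1)
            acc = np.trace(conf) / conf.sum()
            inter = np.diag(conf).astype(float)
            union = conf.sum(0) + conf.sum(1) - np.diag(conf)
            iou = inter[union > 0] / union[union > 0]
            return acc, iou.mean()

        truth = np.random.randint(0, 5, (64, 64))   # stand-in annotation, 5 classes
        pred = truth.copy(); pred[:8] = 0           # stand-in network output
        print(pixel_accuracy_and_miou(pred, truth, 5))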

  19. From Biography to Osteobiography: An Example of Anthropological Historical Identification of the Remains of St. Paul.

    PubMed

    Mihanović, Frane; Jerković, Ivan; Kružić, Ivana; Anđelinović, Šimun; Janković, Stipan; Bašić, Željana

    2017-09-01

    In the identification process of historical figures, and especially in cases of Saints' bodies or mummified remains, any method that involves physical encroachment or sampling is often not allowed. In such cases, one of the few remaining possibilities is the application of nondestructive radiographical and anthropological methods. However, although there have been a few attempts at such analyses, no systematic standard methodology has been developed to date. In this study, we developed a methodological approach that was used to test the authenticity of the alleged body of Saint Paul the Confessor. After imaging the remains with MSCT and post-processing, the images were analyzed by an interdisciplinary team to explore the contents beneath the binding media (i.e., the remains) and to obtain osteobiographical data for comparison with historical biological data. The obtained results (ancestry, sex, age, occupation, and social status) were consistent with the historical data. Although the methodological approach proved appropriate in this case, identity could not be fully confirmed owing to the discrepancy in the amount of data. Nonetheless, the hypothesis that the remains do not belong to St. Paul was rejected, whilst positive identification receives support. Anat Rec, 300:1535-1546, 2017. © 2017 Wiley Periodicals, Inc.

  20. Computerized methodology for micro-CT and histological data inflation using an IVUS based translation map.

    PubMed

    Athanasiou, Lambros S; Rigas, George A; Sakellarios, Antonis I; Exarchos, Themis P; Siogkas, Panagiotis K; Naka, Katerina K; Panetta, Daniele; Pelosi, Gualtiero; Vozzi, Federico; Michalis, Lampros K; Parodi, Oberdan; Fotiadis, Dimitrios I

    2015-10-01

    A framework for the inflation of micro-CT and histology data using intravascular ultrasound (IVUS) images is presented. The proposed methodology consists of three steps. In the first step the micro-CT/histological images are manually co-registered with IVUS by experts using fiducial points as landmarks. In the second step the lumens of both the micro-CT/histological images and the IVUS images are automatically segmented. Finally, in the third step the micro-CT/histological images are inflated by applying a transformation method to each image. The transformation method is based on the difference between the IVUS and micro-CT/histological contours. To validate the proposed image inflation methodology, plaque areas in the inflated micro-CT and histological images are compared with those in the IVUS images. The proposed methodology for inflating micro-CT/histological images increases the sensitivity of plaque area matching between the inflated and the IVUS images (by 7% and 22% in histological and micro-CT images, respectively). Copyright © 2015 Elsevier Ltd. All rights reserved.
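
    The transformation step can be pictured as a per-angle radial scaling that expands the shrunken micro-CT/histology lumen contour onto the IVUS lumen contour. The sketch below is a simplified reading of such a contour-difference-driven warp with placeholder radii; the paper's exact transformation may differ.

        import numpy as np

        def inflate_points(pts, center, hist_r, ivus_r):
            """Radially scale histology points so the histology lumen radius
            matches the IVUS lumen radius in each angular bin (a sketch)."""
            rel = pts - center
            ang = np.arctan2(rel[:, 1], rel[:, 0])
            bins = ((ang + np.pi) / (2 * np.pi) * len(hist_r)).astype(int) % len(hist_r)
            scale = ivus_r[bins] / hist_r[bins]
            return center + rel * scale[:, None]

        n_bins = 360
        hist_r = np.full(n_bins, 80.0)    # shrunken histology lumen radii (pixels)
        ivus_r = np.full(n_bins, 100.0)   # in vivo IVUS lumen radii (pixels)
        pts = np.array([[150.0, 100.0], [100.0, 30.0]])
        print(inflate_points(pts, np.array([100.0, 100.0]), hist_r, ivus_r))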

  1. Brain imaging registry for neurologic diagnosis and research

    NASA Astrophysics Data System (ADS)

    Hoo, Kent S., Jr.; Wong, Stephen T. C.; Knowlton, Robert C.; Young, Geoffrey S.; Walker, John; Cao, Xinhua; Dillon, William P.; Hawkins, Randall A.; Laxer, Kenneth D.

    2002-05-01

    The purpose of this paper is to demonstrate the importance of building a brain imaging registry (BIR) on top of existing medical information systems, including the Picture Archiving and Communication System (PACS) environment. We describe the design framework for a cluster of data marts whose purpose is to provide clinicians and researchers efficient access to a large volume of raw and processed patient images and associated data that originate from multiple operational systems over time and are spread across different hospital departments and laboratories. The framework is designed using object-oriented analysis and design methodology. The BIR data marts each contain complete image and textual data relating to patients with a particular disease.

  2. A new approach to subjectively assess quality of plenoptic content

    NASA Astrophysics Data System (ADS)

    Viola, Irene; Řeřábek, Martin; Ebrahimi, Touradj

    2016-09-01

    Plenoptic content is becoming increasingly popular thanks to the availability of acquisition and display devices. Owing to image-based rendering techniques, plenoptic content can be rendered in real time in an interactive manner, allowing virtual navigation through the captured scenes. This way of consuming content enables new experiences and therefore introduces several challenges in terms of plenoptic data processing, transmission and, consequently, visual quality evaluation. In this paper, we propose a new methodology to subjectively assess the visual quality of plenoptic content. We also introduce a prototype software to perform subjective quality assessment according to the proposed methodology. The proposed methodology is further applied to assess the visual quality of a light field compression algorithm. Results show that this methodology can be successfully used to assess the visual quality of plenoptic content.

  3. Photoacoustic and fluorescent imaging GAF2 photoswitchable chromoproteins (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Chee, Ryan K.; Li, Yan; Paproski, Robert J.; Campbell, Robert E.; Zemp, Roger J.

    2017-03-01

    Molecular photoacoustic imaging is hindered by hemoglobin background signal. Photoswitchable chromoproteins can be used to obtain images with significantly reduced background signal. Molecular imaging of multiple biological processes via multiple chromoproteins is difficult due to overlapping imaging spectra. Using a new rate-of-change imaging methodology, we can obtain molecular images with multiple chromoproteins with overlapping imaging spectra. We also present a new photoswitchable chromoprotein, GAF2, which is significantly smaller than BphP1, which has shown promise for photoswitchable photoacoustic imaging [Yao et al., Nat. Meth. 13, 67-73 (2016)]. We use BphP1 and GAF2 with photoacoustic (Vevo LAZR, Fujifilm Visualsonics Inc) and fluorescence (In vivo Xtreme, Bruker) imaging systems to show background-free multiplexed images. We image before, after, and during photoconversion to obtain background-free rate-of-change images and compare our results to difference imaging and spectral demixing. After phantom imaging, we inject mice with different chromoprotein-expressing E. coli bacteria to show multiplexed images of bacterial infections. We show distinguishable differences in the rate of change between GAF2 and BphP1. We obtain rate-of-change feasibility images and in vivo images in mice showing the ability to differentiate between GAF2 and BphP1 even though they are spectrally similar. We photoconvert both GAF2 and BphP1 using 550 nm and 735 nm light. Phantom studies suggest a 10-20 dB improvement in the rate-of-change and difference images in comparison to images with background. Multiplexed background-free molecular imaging using chromoproteins could prove to be a promising new imaging methodology, especially when combined with spectral demixing.

  4. RNA imaging: tracking in real-time RNA transport in neurons using molecular beacons and confocal microscopy.

    PubMed

    Zepeda, Angélica; Arias, Clorinda; Flores-Jasso, Fabian; Vaca, Luis

    2013-01-01

    RNAs are present within eukaryotic cells and are involved in several biological processes. RNA transport within cell compartments is important for proper cell function. Understanding in depth the cellular processes in which RNA is involved requires a method that reveals RNA localization in real time, in a sub-cellular context, in living cells. In this protocol we describe a method for imaging RNA in living cells, and in particular in neuronal cultures, based on cell microinjection of molecular beacons (MBs) in conjunction with confocal microscopy. This methodology overcomes some of the main obstacles to imaging RNA in live cells, since microinjection allows the delivery of the probe to a desired cellular compartment and MBs bind with high specificity to their target RNA without inhibiting its function. The proper design of the MBs is essential to obtain RNA-MB association at the temperature of the cell cytosol. MBs designed with other purposes in mind (such as PCR experiments) favour association with their targets at high temperatures, rendering them unsuitable for live cell imaging. The methodology described in this chapter allows the study of RNA transport to different regions of neurons and may be combined with the tagging of proteins of interest to measure co-transport of the protein and the RNA to different cellular regions. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. Spatial frequency characteristics at image decision-point locations for observers with different radiological backgrounds in lung nodule detection

    NASA Astrophysics Data System (ADS)

    Pietrzyk, Mariusz W.; Manning, David J.; Dix, Alan; Donovan, Tim

    2009-02-01

    Aim: The goal of the study is to determine the spatial frequency characteristics at image locations of observers' overt and covert decisions, and to find out whether there are any similarities within observer groups with the same radiological experience or the same accuracy score. Background: The radiological task is described as a visual-search decision-making procedure involving visual perception and cognitive processing. Humans perceive the world through a number of spatial frequency channels, each sensitive to visual information carried by different spatial frequency ranges and orientations. Recent studies have shown that particular physical properties of local and global image-based elements are correlated with the performance and the level of experience of human observers in breast cancer and lung nodule detection. Neurological findings in visual perception inspired wavelet applications in vision research because the methodology tries to mimic the brain's processing algorithms. Methods: A wavelet approach to the analysis of a set of postero-anterior chest radiographs has been used to characterize the perceptual preferences of observers with different levels of experience in the radiological task. Psychophysical methodology has been applied to track eye movements over the image, and particular ROIs related to the observers' fixation clusters have been analysed in the spaces framed by Daubechies functions. Results: Significant differences have been found between the spatial frequency characteristics at the locations of different decisions.
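
    A subband-energy profile of the kind used to characterize ROIs can be computed with PyWavelets. The sketch below decomposes a fixation-cluster ROI with a Daubechies basis and reports relative energy per subband; the ROI, wavelet order and level count are placeholder assumptions.

        import numpy as np
        import pywt

        roi = np.random.rand(128, 128)    # stand-in grayscale ROI around a fixation

        # Three-level 2D wavelet decomposition with a Daubechies basis.
        coeffs = pywt.wavedec2(roi, wavelet="db4", level=3)

        # Relative energy per subband summarizes spatial-frequency content.
        energies = {"approx": float(np.sum(coeffs[0] ** 2))}
        for lvl, (ch, cv, cd) in enumerate(coeffs[1:], start=1):
            energies[f"detail{lvl}_H"] = float(np.sum(ch ** 2))
            energies[f"detail{lvl}_V"] = float(np.sum(cv ** 2))
            energies[f"detail{lvl}_D"] = float(np.sum(cd ** 2))
        total = sum(energies.values())
        print({k: round(v / total, 4) for k, v in energies.items()})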

  6. A neuromorphic approach to satellite image understanding

    NASA Astrophysics Data System (ADS)

    Partsinevelos, Panagiotis; Perakakis, Manolis

    2014-05-01

    Remote sensing satellite imagery provides high-altitude, top-down views of large geographic regions, and as such the depicted features are not always easily recognizable. Nevertheless, geoscientists familiar with remote sensing data gradually gain experience and enhance their satellite image interpretation skills. The aim of this study is to devise a novel computational neuro-centered classification approach for feature extraction and image understanding. Object recognition through image processing is related to a series of known image/feature-based attributes including size, shape, association, texture, etc. The objective of the study is to weight these attribute values towards the enhancement of feature recognition. The key cognitive experimentation concern is to define the point at which a user recognizes a feature as it varies in terms of the above-mentioned attributes, and to relate that point to the corresponding attribute values. Towards this end, we have set up an experimentation methodology that utilizes cognitive data from brain signals (EEG) and eye gaze data (eye tracking) of subjects watching satellite images of varying attributes; this allows the collection of rich real-time data that will be used for designing the image classifier. Since the data are already labeled by users (using an input device), a first step is to compare the performance of various machine-learning algorithms on the collected data. In the long run, the aim of this work is to investigate the automatic classification of unlabeled images (unsupervised learning) based purely on image attributes. The outcome of this innovative process is twofold. First, in an abundance of remote sensing image datasets we may define the essential image specifications in order to collect the appropriate data for each application and improve processing and resource efficiency: for a fault extraction application at a given scale, for example, a medium-resolution 4-band image may be more effective than costly, multispectral, very high resolution imagery. Second, we attempt to relate the understanding of experienced against non-experienced users in order to indirectly assess the possible limits of purely computational systems; in other words, to probe the conceptual limits of computation versus human cognition in feature recognition from satellite imagery. Preliminary results of this pilot study show relations between the collected data and the differentiation of the image attributes, which indicates that our methodology can lead to important results.

  7. Verification of nonlinear dynamic structural test results by combined image processing and acoustic analysis

    NASA Astrophysics Data System (ADS)

    Tene, Yair; Tene, Noam; Tene, G.

    1993-08-01

    An interactive data fusion methodology combining video, audio, and nonlinear structural dynamic analysis, with potential application in forensic engineering, is presented. The methodology was developed and successfully demonstrated in the analysis of a heavy transportable bridge collapse that occurred during preparation for testing. Multiple bridge element failures were identified after the collapse, including fracture, cracks and rupture of high-performance structural materials. A videotape recording from a hand-held camcorder was the only source of information about the collapse sequence. The interactive data fusion methodology resulted in extracting relevant information from the videotape and from dynamic nonlinear structural analysis, leading to a full account of the sequence of events during the bridge collapse.

  8. Creating Polyphony with Exploratory Web Documentation in Singapore

    ERIC Educational Resources Information Center

    Lim, Sirene; Hoo, Lum Chee

    2012-01-01

    We introduce and reflect on "Images of Teaching", an ongoing web documentation research project on preschool teaching in Singapore. This paper discusses the project's purpose, methodological process, and our learning points as researchers who aim to contribute towards inquiry-based professional learning. The website offers a window into…

  9. Dialoguing with Dreams in Existential Art Therapy

    ERIC Educational Resources Information Center

    Moon, Bruce L.

    2007-01-01

    This article presents a theoretical and methodological framework for interactive dialogue and analysis of dream images in existential art therapy. In this phenomenological-existential approach, the client and art therapist are regarded as equal partners with respect to sharing in the process of creation and discovery of meaning (Frankl, 1955,…

  10. Identification, regression and validation of an image processing degradation model to assess the effects of aeromechanical turbulence due to installation aircraft

    NASA Astrophysics Data System (ADS)

    Miccoli, M.; Usai, A.; Tafuto, A.; Albertoni, A.; Togna, F.

    2016-10-01

    The propagation environment around airborne platforms may significantly degrade the performance of Electro-Optical (EO) self-protection systems installed onboard. To ensure a sufficient level of protection, it is necessary to determine the best sensor/effector installation positions that guarantee that the aeromechanical turbulence, generated by the engine exhausts and the rotor downwash, does not interfere with the normal operation of the imaging systems. Since radiation propagation in turbulence is a hardly predictable process, a high-level approach was proposed in which, instead of studying the medium under turbulence, the turbulence effects on the imaging systems' processing are assessed by means of an equivalent statistical model representation, allowing the definition of a Turbulence index that classifies different levels of turbulence intensity. Hence, a general measurement methodology for the degradation of imaging system performance in turbulence conditions was developed. The analysis of the performance degradation starts by evaluating the effects of turbulence with a given index on the image processing chain (i.e., thresholding, blob analysis). The processing-in-turbulence (PIT) index is then derived by combining the effects of the given turbulence on the different image processing primitive functions. By evaluating the corresponding PIT index for a sufficient number of testing directions, it is possible to map the performance degradation around the aircraft installation for a generic imaging system, and to identify the best installation position for the sensors/effectors composing the EO self-protection suite.

  11. ESARR: enhanced situational awareness via road sign recognition

    NASA Astrophysics Data System (ADS)

    Perlin, V. E.; Johnson, D. B.; Rohde, M. M.; Lupa, R. M.; Fiorani, G.; Mohammad, S.

    2010-04-01

    The enhanced situational awareness via road sign recognition (ESARR) system provides vehicle position estimates in the absence of GPS signal via automated processing of roadway fiducials (primarily directional road signs). Sign images are detected and extracted from a vehicle-mounted camera system, then preprocessed and read via a custom optical character recognition (OCR) system specifically designed to cope with low-quality input imagery. Vehicle motion and 3D scene geometry estimation enable efficient and robust sign detection with low false alarm rates. Multi-level text processing coupled with GIS database validation enables effective interpretation even of extremely low resolution, low contrast sign images. This paper reports on ESARR development progress, including the design and architecture, image processing framework, localization methodologies, and results to date. Highlights of the real-time vehicle-based directional road-sign detection and interpretation system are described, along with the challenges and progress in overcoming them.

  12. Fuzzy logic and image processing techniques for the interpretation of seismic data

    NASA Astrophysics Data System (ADS)

    Orozco-del-Castillo, M. G.; Ortiz-Alemán, C.; Urrutia-Fucugauchi, J.; Rodríguez-Castellanos, A.

    2011-06-01

    Since interpretation of seismic data is usually a tedious and repetitive task, the ability to do so automatically or semi-automatically has become an important objective of recent research. We believe that the vagueness and uncertainty in the interpretation process make fuzzy logic an appropriate tool to deal with seismic data. In this work we developed a semi-automated fuzzy inference system to detect the internal architecture of a mass transport complex (MTC) in seismic images. We propose that the observed characteristics of an MTC can be expressed as fuzzy if-then rules consisting of linguistic values associated with fuzzy membership functions. The construction of the fuzzy inference system and the various image processing techniques are presented. We conclude that this is a well-suited problem for fuzzy logic, since the application of the proposed methodology yields a semi-automatically interpreted MTC that closely resembles the MTC from expert manual interpretation.
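
    A fuzzy if-then rule of the kind described can be evaluated with a few lines of NumPy. In the sketch below, the attributes, membership-function corners and the rule itself are invented placeholders; only the mechanics (triangular memberships, minimum as AND) reflect standard Mamdani-style inference.

        import numpy as np

        def trimf(x, a, b, c):
            """Triangular membership function with corners a <= b <= c."""
            eps = 1e-12
            return float(np.clip(min((x - a) / (b - a + eps),
                                     (c - x) / (c - b + eps)), 0.0, 1.0))

        # Hypothetical seismic attributes at one location, normalized to [0, 1].
        amplitude, chaos = 0.62, 0.80

        amp_high = trimf(amplitude, 0.5, 1.0, 1.0)   # "amplitude is high"
        chaos_high = trimf(chaos, 0.4, 1.0, 1.0)     # "chaos is high"

        # IF amplitude is high AND chaos is high THEN MTC interior (min = AND).
        print(f"membership in 'MTC interior': {min(amp_high, chaos_high):.2f}")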

  13. Ensemble Empirical Mode Decomposition based methodology for ultrasonic testing of coarse grain austenitic stainless steels.

    PubMed

    Sharma, Govind K; Kumar, Anish; Jayakumar, T; Purnachandra Rao, B; Mariyappa, N

    2015-03-01

    A signal processing methodology is proposed in this paper for effective reconstruction of ultrasonic signals in coarse grained, highly scattering austenitic stainless steel. The proposed methodology comprises Ensemble Empirical Mode Decomposition (EEMD) processing of ultrasonic signals and the application of a signal minimisation algorithm on selected Intrinsic Mode Functions (IMFs) obtained by EEMD. The methodology is applied to ultrasonic signals obtained from austenitic stainless steel specimens of different grain size, with and without defects. The influence of probe frequency and the data length of a signal on EEMD decomposition is also investigated. For a particular sampling rate and probe frequency, the same range of IMFs can be used to reconstruct the ultrasonic signal, irrespective of the grain size in the range of 30-210 μm investigated in this study. This methodology is successfully employed for detection of defects in 50 mm thick coarse grain austenitic stainless steel specimens. A signal-to-noise ratio improvement of better than 15 dB is observed for the ultrasonic signal obtained from a 25 mm deep flat bottom hole in a 200 μm grain size specimen. For ultrasonic signals obtained from defects at different depths, a minimum of 7 dB additional enhancement in SNR is achieved compared with the sum-of-selected-IMFs approach. The application of the minimisation algorithm to the EEMD-processed signal in the proposed methodology proves effective for adaptive signal reconstruction with improved signal-to-noise ratio. The methodology was further employed for successful imaging of defects in a B-scan. Copyright © 2014. Published by Elsevier B.V.
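
    The EEMD-and-reconstruct step can be sketched with the PyEMD package (an assumption; the paper does not name an implementation). The probe frequency, IMF range and noise level below are illustrative.

        import numpy as np
        from PyEMD import EEMD   # pip install EMD-signal (assumed implementation)

        fs = 100e6                                   # hypothetical sampling rate
        t = np.arange(4096) / fs
        rng = np.random.default_rng(0)
        # Stand-in A-scan: a 5 MHz echo buried in grain-scattering noise.
        echo = np.exp(-((t - 20e-6) ** 2) / (2 * (0.5e-6) ** 2)) * np.sin(2 * np.pi * 5e6 * t)
        noisy = echo + 0.8 * rng.standard_normal(t.size)

        imfs = EEMD(trials=50).eemd(noisy, t)
        # Reconstruct from a selected IMF range (IMFs 2-4 here is an assumption;
        # the selection depends on probe frequency and sampling rate).
        recon = imfs[1:4].sum(axis=0)

        def snr_db(ref, est):
            return 10 * np.log10(np.sum(ref ** 2) / np.sum((ref - est) ** 2))

        print(f"SNR: {snr_db(echo, noisy):.1f} dB -> {snr_db(echo, recon):.1f} dB")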

  14. The impact of temporal sampling resolution on parameter inference for biological transport models.

    PubMed

    Harrison, Jonathan U; Baker, Ruth E

    2018-06-25

    Imaging data has become an essential tool to explore key biological questions at various scales, for example the motile behaviour of bacteria or the transport of mRNA, and it has the potential to transform our understanding of important transport mechanisms. Often these imaging studies require us to compare biological species or mutants, and to do this we need to quantitatively characterise their behaviour. Mathematical models offer a quantitative description of a system that enables us to perform this comparison, but to relate mechanistic mathematical models to imaging data, we need to estimate their parameters. In this work we study how collecting data at different temporal resolutions impacts our ability to infer parameters of biological transport models; performing exact inference for simple velocity jump process models in a Bayesian framework. The question of how best to choose the frequency with which data is collected is prominent in a host of studies because the majority of imaging technologies place constraints on the frequency with which images can be taken, and the discrete nature of observations can introduce errors into parameter estimates. In this work, we mitigate such errors by formulating the velocity jump process model within a hidden states framework. This allows us to obtain estimates of the reorientation rate and noise amplitude for noisy observations of a simple velocity jump process. We demonstrate the sensitivity of these estimates to temporal variations in the sampling resolution and extent of measurement noise. We use our methodology to provide experimental guidelines for researchers aiming to characterise motile behaviour that can be described by a velocity jump process. In particular, we consider how experimental constraints resulting in a trade-off between temporal sampling resolution and observation noise may affect parameter estimates. Finally, we demonstrate the robustness of our methodology to model misspecification, and then apply our inference framework to a dataset that was generated with the aim of understanding the localization of RNA-protein complexes.
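
    A velocity jump process of the kind studied can be simulated exactly and then observed at a chosen frame rate, which is the setting the inference targets. The sketch below is a minimal 1D run-and-tumble simulator with placeholder parameter values; the paper's hidden-states inference would estimate the reorientation rate and noise amplitude from the resulting observations.

        import numpy as np

        rng = np.random.default_rng(1)
        lam, v, sigma = 1.0, 1.0, 0.05   # reorientation rate, speed, obs. noise
        T, dt_obs = 50.0, 0.5            # total time, temporal sampling resolution

        # Exact simulation: exponential run times, new direction at each event.
        t, x, dirn = 0.0, 0.0, 1.0
        event_t, event_x = [0.0], [0.0]
        while t < T:
            run = rng.exponential(1.0 / lam)
            t += run
            x += dirn * v * run
            dirn = rng.choice([-1.0, 1.0])
            event_t.append(t)
            event_x.append(x)

        # Noisy positions at the imaging frame rate (the trajectory is piecewise
        # linear, so interpolation between events is exact).
        t_obs = np.arange(0.0, T, dt_obs)
        x_obs = np.interp(t_obs, event_t, event_x) + sigma * rng.standard_normal(t_obs.size)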

  15. Methodology for classification of geographical features with remote sensing images: Application to tidal flats

    NASA Astrophysics Data System (ADS)

    Revollo Sarmiento, G. N.; Cipolletti, M. P.; Perillo, M. M.; Delrieux, C. A.; Perillo, Gerardo M. E.

    2016-03-01

    Tidal flats generally exhibit ponds of diverse size, shape, orientation and origin. Studying the genesis, evolution, stability and erosive mechanisms of these geographic features is critical to understand the dynamics of coastal wetlands. However, monitoring these locations through direct access is hard and expensive, not always feasible, and environmentally damaging. Processing remote sensing images is a natural alternative for the extraction of qualitative and quantitative data due to their non-invasive nature. In this work, a robust methodology for automatic classification of ponds and tidal creeks in tidal flats using Google Earth images is proposed. The applicability of our method is tested in nine zones with different morphological settings. Each zone is processed by a segmentation stage, where ponds and tidal creeks are identified. Next, each geographical feature is measured and a set of shape descriptors is calculated. This dataset, together with an a-priori classification of each geographical feature, is used to define a regression model, which allows an extensive automatic classification of large volumes of data, discriminating ponds and tidal creeks from various other geographical features. In all cases, we identified and automatically classified different geographic features with an average accuracy over 90% (89.7% in the worst case, and 99.4% in the best case). These results show the feasibility of using freely available Google Earth imagery for the automatic identification and classification of complex geographical features. Also, the presented methodology may be easily applied in other wetlands of the world, perhaps employing other remote sensing imagery.
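
    The descriptor-plus-regression step can be sketched with scikit-image and scikit-learn. Everything below is a placeholder (synthetic mask, random class labels); it only illustrates the shape-descriptor and regression mechanics, not the paper's trained model.

        import numpy as np
        from scipy import ndimage
        from skimage import measure
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        # Stand-in binary segmentation of water features (smooth random blobs).
        mask = ndimage.gaussian_filter(rng.random((256, 256)), sigma=4) > 0.52
        labels = measure.label(mask)

        feats = [[r.area, r.perimeter, r.eccentricity, r.solidity,
                  r.major_axis_length / max(r.minor_axis_length, 1e-9)]
                 for r in measure.regionprops(labels)]
        X = np.asarray(feats)

        # A-priori labels (1 = pond, 0 = tidal creek) would come from experts;
        # random placeholders here just demonstrate the regression step.
        y = rng.integers(0, 2, len(X))
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        print("pond probability, first feature:", clf.predict_proba(X[:1])[0, 1])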

  16. Quality assurance and quality control in mammography: a review of available guidance worldwide.

    PubMed

    Reis, Cláudia; Pascoal, Ana; Sakellaris, Taxiarchis; Koutalonis, Manthos

    2013-10-01

    This paper reviews available guidance for quality assurance (QA) in mammography and discusses its contribution to harmonising practices worldwide. A literature search was performed on different sources to identify guidance documents for QA in mammography available worldwide from international bodies, healthcare providers and professional/scientific associations. The guidance documents identified were reviewed, and a selection was compared for type of guidance (clinical/technical), technology and proposed QA methodologies, focusing on dose and image quality (IQ) performance assessment. Fourteen protocols (targeted at conventional and digital mammography) were reviewed. All included recommendations for testing the acquisition, processing and display systems associated with mammographic equipment. All guidance reviewed highlighted the importance of dose assessment and of testing the Automatic Exposure Control (AEC) system. Recommended tests for assessment of IQ showed variations in the proposed methodologies. Recommended testing focused on assessment of low-contrast detection, spatial resolution and noise. QC of image display is recommended following the American Association of Physicists in Medicine guidelines. The existing QA guidance for mammography is derived from key documents (American College of Radiology and European Union guidelines) and proposes similar tests despite variations in detail and methodologies. Studies reporting QA data should provide detail on the experimental technique to allow robust data comparison. Countries aiming to implement a mammography QA program may select/prioritise the tests depending on available technology and resources. Key points:
    • An effective QA program should be practical to implement in a clinical setting.
    • QA should address the various stages of the imaging chain: acquisition, processing and display.
    • AEC system QC testing is simple to implement and provides information on equipment performance.

  17. Specific methodology for capacitance imaging by atomic force microscopy: A breakthrough towards an elimination of parasitic effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estevez, Ivan (Concept Scientific Instruments, ZA de Courtaboeuf, 2 rue de la Terre de Feu, 91940 Les Ulis); Chrétien, Pascal

    2014-02-24

    On the basis of a home-made nanoscale impedance measurement device associated with a commercial atomic force microscope, a specific operating process is proposed in order to improve absolute (in the sense of “nonrelative”) capacitance imaging by drastically reducing the parasitic effects due to stray capacitance, surface topography, and sample tilt. The method, combining a two-pass image acquisition with the exploitation of approach curves, has been validated on sets of calibration samples consisting of square parallel-plate capacitors for which theoretical capacitance values were numerically calculated.

  18. Nuclear medicine and quantitative imaging research (quantitative studies in radiopharmaceutical science): Comprehensive progress report, April 1, 1986-December 31, 1988

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooper, M.D.; Beck, R.N.

    1988-06-01

    This document describes several years of research to improve PET imaging and diagnostic techniques in man. The program addresses problems involving the basic science and technology underlying the physical and conceptual tools of radioactive tracer methodology as they relate to the measurement of structural and functional parameters of physiologic importance in health and disease. The principal tool is quantitative radionuclide imaging. The overall objective of this program is to further the development and transfer of radiotracer methodology from basic theory to routine clinical practice, in order that individual patients and society as a whole will receive the maximum net benefit from the new knowledge gained. The focus of the research is on the development of new instruments and radiopharmaceuticals, and the evaluation of these through the phase of clinical feasibility. The reports in the study were processed separately for the databases. (TEM)

  19. Simulation of a complete X-ray digital radiographic system for industrial applications.

    PubMed

    Nazemi, E; Rokrok, B; Movafeghi, A; Choopan Dastjerdi, M H

    2018-05-19

    Simulating X-ray images is of great importance in industry and medicine. Such simulation permits optimization of the parameters that affect image quality without the limitations of an experimental procedure. This study presents a novel methodology to simulate a complete industrial X-ray digital radiographic system, composed of an X-ray tube and a computed radiography (CR) image plate, using the Monte Carlo N-Particle eXtended (MCNPX) code. In this work, an industrial X-ray tube with a maximum voltage of 300 kV and a current of 5 mA was simulated. A 3-layer uniform plate, including a polymer overcoat layer, a phosphor layer and a polycarbonate backing layer, was also defined and simulated as the CR imaging plate. To model image formation in the image plate, the absorbed dose was first calculated in each pixel inside the phosphor layer of the CR imaging plate using the mesh tally in the MCNPX code, and then converted to a gray value using a mathematical relationship determined in a separate procedure. To validate the simulation results, an experimental setup was designed, and images of two step wedges made of aluminum and steel were captured experimentally and compared with the simulations. The results show that the simulated images are in good agreement with the experimental ones, demonstrating the ability of the proposed methodology to simulate an industrial X-ray imaging system. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. A statistical pixel intensity model for segmentation of confocal laser scanning microscopy images.

    PubMed

    Calapez, Alexandre; Rosa, Agostinho

    2010-09-01

    Confocal laser scanning microscopy (CLSM) has been widely used in the life sciences for the characterization of cell processes because it allows the recording of the distribution of fluorescence-tagged macromolecules on a section of the living cell. It is in fact the cornerstone of many molecular transport and interaction quantification techniques where the identification of regions of interest through image segmentation is usually a required step. In many situations, because of the complexity of the recorded cellular structures or because of the amounts of data involved, image segmentation is either too difficult or too inefficient to be done by hand, and automated segmentation procedures have to be considered. Given the nature of CLSM images, statistical segmentation methodologies appear as natural candidates. In this work we propose a model to be used for statistical unsupervised CLSM image segmentation. The model is derived from the CLSM image formation mechanics and its performance is compared to the existing alternatives. Results show that it provides a much better description of the data on classes characterized by their mean intensity, making it suitable not only for segmentation methodologies with known number of classes but also for use with schemes aiming at the estimation of the number of classes through the application of cluster selection criteria.
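
    For classes characterized by their mean intensity, a mixture model fitted to pixel intensities with a cluster selection criterion is the standard construction. The sketch below uses a generic Gaussian mixture with BIC in scikit-learn as a stand-in for the paper's CLSM-specific intensity model; all data are synthetic.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # Stand-in CLSM pixel intensities: background and one labelled structure.
        pixels = np.concatenate([rng.normal(30, 5, 20000),
                                 rng.normal(90, 10, 8000)]).reshape(-1, 1)

        # Fit mixtures with increasing class counts; select by BIC.
        best, best_bic = None, np.inf
        for k in range(1, 6):
            gmm = GaussianMixture(n_components=k, random_state=0).fit(pixels)
            if gmm.bic(pixels) < best_bic:
                best, best_bic = gmm, gmm.bic(pixels)

        segmentation = best.predict(pixels)   # per-pixel class assignment
        print("classes:", best.n_components, "means:", best.means_.ravel().round(1))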

  1. Monitoring machining conditions by infrared images

    NASA Astrophysics Data System (ADS)

    Borelli, Joao E.; Gonzaga Trabasso, Luis; Gonzaga, Adilson; Coelho, Reginaldo T.

    2001-03-01

    During the machining process, knowledge of the temperature is the most important factor in tool analysis. It allows control of the main factors that influence tool use, lifetime and waste. The temperature in the contact area between the workpiece and the tool results from material removal during the cutting operation, and it is difficult to obtain because the tool and the workpiece are in motion. One way to measure the temperature in this situation is to detect the infrared radiation. This work presents a new methodology for the diagnosis and monitoring of machining processes using infrared images. The infrared image provides a gray-tone map of the elements in the process: tool, workpiece and chips. Each gray tone in the image corresponds to a certain temperature for each of those materials, and the relationship between gray tones and temperature is obtained by prior calibration of the infrared camera. The system developed in this work uses an infrared camera, a frame grabber board and software composed of three modules. The first module performs image acquisition and processing. The second module extracts image features and forms the feature vector. Finally, the third module uses fuzzy logic to evaluate the feature vector and outputs the tool state diagnosis.
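
    The gray-tone-to-temperature mapping is a calibration lookup applied per pixel (and, per the entry, per material). A minimal sketch with invented calibration pairs:

        import numpy as np

        # Calibration pairs (gray tone -> temperature, deg C) for one material,
        # obtained beforehand from camera calibration; values are placeholders.
        gray_cal = np.array([20, 60, 100, 150, 200, 255], dtype=float)
        temp_cal = np.array([150, 280, 420, 600, 790, 950], dtype=float)

        def gray_to_temperature(img_gray):
            """Map an 8-bit infrared image to temperature via the calibration curve."""
            return np.interp(img_gray.astype(float), gray_cal, temp_cal)

        frame = np.random.randint(0, 256, (240, 320))   # stand-in IR frame
        print("max temperature:", gray_to_temperature(frame).max(), "deg C")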

  2. Responding mindfully to distressing psychosis: A grounded theory analysis.

    PubMed

    Abba, Nicola; Chadwick, Paul; Stevenson, Chris

    2008-01-01

    This study investigates the psychological process involved when people with current distressing psychosis learn to respond mindfully to unpleasant psychotic sensations (voices, thoughts, and images). Sixteen participants were interviewed on completion of a mindfulness group program. Grounded theory methodology was used to generate a theory of the core psychological process, using a systematically applied set of methods linking analysis with data collection. The inducted theory describes the experience of relating differently to psychosis through a three-stage process: centering in awareness of psychosis; allowing voices, thoughts, and images to come and go without reaction or struggle; and reclaiming power through acceptance of psychosis and the self. The conceptual and clinical applications of the theory and its limits are discussed.

  3. Determination of the coalescence temperature of latexes by environmental scanning electron microscopy.

    PubMed

    Gonzalez, Edurne; Tollan, Christopher; Chuvilin, Andrey; Barandiaran, Maria J; Paulis, Maria

    2012-08-01

    A new methodology for quantitative characterization of the coalescence process of waterborne polymer dispersion (latex) particles by environmental scanning electron microscopy (ESEM) is proposed. The experimental setup has been developed to provide reproducible latex monolayer depositions, optimized contrast of the latex particles, and a reliable readout of the sample temperature. Quantification of the coalescence process under dry conditions has been performed by image processing based on evaluation of the image autocorrelation function. As a proof of concept the coalescence of two latexes with known and differing glass transition temperatures has been measured. It has been shown that a reproducibility of better than 1.5 °C can be obtained for the measurement of the coalescence temperature.
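
    The image autocorrelation function used for quantification can be computed efficiently via the Wiener-Khinchin theorem. The sketch below is generic; the peak-width metric at the end is an invented illustration of how loss of particle contrast during coalescence could be tracked.

        import numpy as np

        def autocorrelation_2d(img):
            """Normalized 2D autocorrelation via FFT (Wiener-Khinchin)."""
            f = img - img.mean()
            spec = np.fft.fft2(f)
            ac = np.fft.fftshift(np.fft.ifft2(spec * np.conj(spec)).real)
            return ac / ac.max()

        frame = np.random.rand(256, 256)        # stand-in ESEM frame
        ac = autocorrelation_2d(frame)
        # As particles coalesce, the central peak broadens and periodic side
        # peaks fade; a peak-width metric tracked versus temperature locates
        # the coalescence point.
        half_width = int(np.sum(ac[ac.shape[0] // 2] > 0.5))
        print("autocorrelation peak width (pixels):", half_width)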

  4. Image processing and machine learning in the morphological analysis of blood cells.

    PubMed

    Rodellar, J; Alférez, S; Acevedo, A; Molina, A; Merino, A

    2018-05-01

    This review focuses on how image processing and machine learning can be useful for the morphological characterization and automatic recognition of cell images captured from peripheral blood smears. The basics of the 3 core elements (segmentation, quantitative features, and classification) are outlined, and recent literature is discussed. Although red blood cells are a significant part of this context, this study focuses on malignant lymphoid cells and blast cells. There is no doubt that these technologies may help the cytologist to perform efficient, objective, and fast morphological analysis of blood cells. They may also help in the interpretation of some morphological features and may serve as learning and survey tools. Although research is still needed, it is important to define screening strategies to exploit the potential of image-based automatic recognition systems integrated in the daily routine of laboratories along with other analysis methodologies. © 2018 John Wiley & Sons Ltd.

  5. Multi-modal image registration: matching MRI with histology

    NASA Astrophysics Data System (ADS)

    Alic, Lejla; Haeck, Joost C.; Klein, Stefan; Bol, Karin; van Tiel, Sandra T.; Wielopolski, Piotr A.; Bijster, Magda; Niessen, Wiro J.; Bernsen, Monique; Veenland, Jifke F.; de Jong, Marion

    2010-03-01

    Spatial correspondence between histology and multi-sequence MRI can provide information about the capability of non-invasive imaging to characterize cancerous tissue. However, the shrinkage and deformation occurring during the excision of the tumor and during histological processing complicate the co-registration of MR images with histological sections. This work proposes a methodology to establish a detailed 3D relation between histology sections and in vivo MRI tumor data. The key features of the methodology are very dense histological sampling (up to 100 histology slices per tumor), mutual-information-based non-rigid B-spline registration, the utilization of the whole 3D data sets, and the exploitation of an intermediate ex vivo MRI. In this proof-of-concept paper, the methodology was applied to one tumor. We found that, after registration, the visual alignment of tumor borders and internal structures was fairly accurate. Utilizing the intermediate ex vivo MRI, it was possible to account for changes caused by the excision of the tumor: we observed a tumor expansion of 20%. The effects of fixation, dehydration and histological sectioning could also be determined: 26% shrinkage of the tumor was found. The annotation of viable tissue, performed in histology and transformed to the in vivo MRI, matched clearly with high-intensity regions in MRI. With this methodology, histological annotation can be directly related to the corresponding in vivo MRI. This is a vital step in evaluating the feasibility of multi-spectral MRI to depict histological ground truth.
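
    The mutual-information B-spline registration named in the abstract can be set up, for example, in SimpleITK (the toolkit choice and all parameters below are assumptions; file names are placeholders).

        import SimpleITK as sitk

        fixed = sitk.ReadImage("invivo_mri.nii.gz", sitk.sitkFloat32)
        moving = sitk.ReadImage("exvivo_mri.nii.gz", sitk.sitkFloat32)

        # Non-rigid B-spline transform on a coarse control-point grid.
        tx = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])

        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5,
                                 numberOfIterations=100)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetInitialTransform(tx, inPlace=True)
        out_tx = reg.Execute(fixed, moving)

        # Resample the moving image into the fixed image's space.
        sitk.WriteImage(sitk.Resample(moving, fixed, out_tx, sitk.sitkLinear, 0.0),
                        "exvivo_registered.nii.gz")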

  6. A method to optimize the processing algorithm of a computed radiography system for chest radiography.

    PubMed

    Moore, C S; Liney, G P; Beavis, A W; Saunderson, J R

    2007-09-01

    A test methodology using an anthropomorphic-equivalent chest phantom is described for the optimization of the Agfa computed radiography "MUSICA" processing algorithm for chest radiography. The contrast-to-noise ratio (CNR) in the lung, heart and diaphragm regions of the phantom, and the "system modulation transfer function" (sMTF) in the lung region, were measured using test tools embedded in the phantom. Using these parameters the MUSICA processing algorithm was optimized with respect to low-contrast detectability and spatial resolution. Two optimum "MUSICA parameter sets" were derived respectively for maximizing the CNR and sMTF in each region of the phantom. Further work is required to find the relative importance of low-contrast detectability and spatial resolution in chest images, from which the definitive optimum MUSICA parameter set can then be derived. Prior to this further work, a compromised optimum MUSICA parameter set was applied to a range of clinical images. A group of experienced image evaluators scored these images alongside images produced from the same radiographs using the MUSICA parameter set in clinical use at the time. The compromised optimum MUSICA parameter set was shown to produce measurably better images.
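
    The CNR figure of merit used throughout such optimization is simple to compute from two regions of interest; a minimal sketch with placeholder ROI coordinates:

        import numpy as np

        def cnr(image, roi_sig, roi_bg):
            """Contrast-to-noise ratio between two ROIs given as (r0, r1, c0, c1)."""
            s = image[roi_sig[0]:roi_sig[1], roi_sig[2]:roi_sig[3]]
            b = image[roi_bg[0]:roi_bg[1], roi_bg[2]:roi_bg[3]]
            return abs(s.mean() - b.mean()) / b.std()

        img = np.random.normal(100, 5, (512, 512))   # stand-in phantom image
        img[100:140, 100:140] += 12                  # low-contrast detail
        print(f"CNR: {cnr(img, (100, 140, 100, 140), (200, 240, 200, 240)):.2f}")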

  7. Hybrid cardiac imaging with MR-CAT scan: a feasibility study.

    PubMed

    Hillenbrand, C; Sandstede, J; Pabst, T; Hahn, D; Haase, A; Jakob, P M

    2000-06-01

    We demonstrate the feasibility of a new versatile hybrid imaging concept, the combined acquisition technique (CAT), for cardiac imaging. The cardiac CAT approach, which combines new methodology with existing technology, essentially integrates fast low-angle shot (FLASH) and echo-planar imaging (EPI) modules in a sequential fashion, whereby each acquisition module is employed with independently optimized imaging parameters. One important CAT sequence optimization feature is the ability to use different bandwidths for different acquisition modules. Twelve healthy subjects were imaged using three cardiac CAT acquisition strategies: a) CAT was used to reduce breath-hold durations while maintaining constant spatial resolution; b) CAT was used to increase spatial resolution in a given breath-hold time; and c) single-heartbeat CAT imaging was performed. The results obtained demonstrate the feasibility of cardiac imaging using the CAT approach and the potential of this technique to accelerate the imaging process with almost conserved image quality. Copyright 2000 Wiley-Liss, Inc.

  8. Standing on the shoulders of giants: improving medical image segmentation via bias correction.

    PubMed

    Wang, Hongzhi; Das, Sandhitsu; Pluta, John; Craige, Caryne; Altinay, Murat; Avants, Brian; Weiner, Michael; Mueller, Susanne; Yushkevich, Paul

    2010-01-01

    We propose a simple strategy to improve automatic medical image segmentation. The key idea is that without deep understanding of a segmentation method, we can still improve its performance by directly calibrating its results with respect to manual segmentation. We formulate the calibration process as a bias correction problem, which is addressed by machine learning using training data. We apply this methodology on three segmentation problems/methods and show significant improvements for all of them.
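
    The calibration idea, learning to predict the manual label from the automatic result plus simple context features, can be sketched with any off-the-shelf classifier. Everything below (features, synthetic bias, model choice) is a placeholder illustrating the mechanics, not the paper's setup.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        # Per-voxel training data: automatic label plus a context feature;
        # the target is the manual (ground-truth) label.
        auto_label = rng.integers(0, 2, 5000)
        intensity = rng.normal(0, 1, 5000)
        X = np.column_stack([auto_label, intensity])
        y = (auto_label ^ (intensity > 1.5)).astype(int)   # synthetic bias

        corrector = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
        corrected = corrector.predict(X)   # bias-corrected segmentation labels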

  9. Simulation and optimization of volume holographic imaging systems in Zemax.

    PubMed

    Wissmann, Patrick; Oh, Se Baek; Barbastathis, George

    2008-05-12

    We present a new methodology for ray-tracing analysis of volume holographic imaging (VHI) systems. Using the k-sphere formulation, we apply geometrical relationships to describe the volumetric diffraction effects imposed on rays passing through a volume hologram. We explain the k-sphere formulation in conjunction with the ray-tracing process and describe its implementation in a Zemax UDS (User Defined Surface). We conclude with examples of simulation and optimization results and demonstrate the consistency and usefulness of the proposed model.

  10. Automated generation of image products for Mars Exploration Rover Mission tactical operations

    NASA Technical Reports Server (NTRS)

    Alexander, Doug; Zamani, Payam; Deen, Robert; Andres, Paul; Mortensen, Helen

    2005-01-01

    This paper will discuss, from design to implementation, the methodologies applied to MIPL's automated pipeline processing as a 'system of systems' integrated with the MER GDS. Overviews of the interconnected product-generating systems are also provided, with emphasis on interdependencies, including those for a) geometric rectification of camera lens distortions, b) generation of stereo disparity, c) derivation of 3-dimensional coordinates in XYZ space, d) generation of unified terrain meshes, e) camera-to-target ranging (distance) and f) multi-image mosaicking.

  11. Sorting Olive Batches for the Milling Process Using Image Processing

    PubMed Central

    Puerto, Daniel Aguilera; Martínez Gila, Diego Manuel; Gámez García, Javier; Gómez Ortega, Juan

    2015-01-01

    The quality of virgin olive oil obtained in the milling process is directly bound to the characteristics of the olives. Hence, the correct classification of the different incoming olive batches is crucial to reach the maximum quality of the oil. The aim of this work is to provide an automatic inspection system, based on computer vision, to classify automatically the different batches of olives entering the milling process. The classification is based on the differentiation between ground and tree olives. For this purpose, three different varieties have been studied (Picudo, Picual and Hojiblanco). The samples were obtained by picking the olives directly from the tree or from the ground. The feature vector of each sample was obtained from the histograms of the olive images. Moreover, different image preprocessing steps were employed, and two classification techniques were used: discriminant analysis and neural networks. The proposed methodology has been validated successfully, obtaining good classification results. PMID:26147729
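
    The histogram-feature-plus-discriminant-analysis route can be sketched with scikit-learn; images, bin count and labels below are placeholders for the paper's olive-batch data.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def histogram_features(channel, bins=32):
            """Normalized intensity histogram of one channel as a feature vector."""
            h, _ = np.histogram(channel, bins=bins, range=(0, 255))
            return h / h.sum()

        rng = np.random.default_rng(0)
        # Stand-ins for olive images; labels: 1 = tree olives, 0 = ground olives.
        X = np.array([histogram_features(rng.integers(0, 256, (64, 64)))
                      for _ in range(40)])
        y = rng.integers(0, 2, 40)

        clf = LinearDiscriminantAnalysis().fit(X, y)
        print("training accuracy:", clf.score(X, y))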

  12. Compiled visualization with IPI method for analysing of liquid liquid mixing process

    NASA Astrophysics Data System (ADS)

    Jasikova, Darina; Kotek, Michal; Kysela, Bohus; Sulc, Radek; Kopecky, Vaclav

    2018-06-01

    The article deals with research on the mixing process using visualization techniques and the IPI method. The characteristics of the size distribution and the evolution of the disintegration of the two liquid phases were studied. A methodology has been proposed for the visualization and image analysis of data acquired during the initial phase of the mixing process. The IPI method was used for a subsequent detailed study of the disintegrated droplets. The article describes the advantages of using the appropriate method, presents the limits of each method, and compares them.

  13. Image simulation for automatic license plate recognition

    NASA Astrophysics Data System (ADS)

    Bala, Raja; Zhao, Yonghui; Burry, Aaron; Kozitsky, Vladimir; Fillion, Claude; Saunders, Craig; Rodríguez-Serrano, José

    2012-01-01

    Automatic license plate recognition (ALPR) is an important capability for traffic surveillance applications, including toll monitoring and detection of different types of traffic violations. ALPR is a multi-stage process comprising plate localization, character segmentation, optical character recognition (OCR), and identification of originating jurisdiction (i.e. state or province). Training of an ALPR system for a new jurisdiction typically involves gathering vast amounts of license plate images and associated ground truth data, followed by iterative tuning and optimization of the ALPR algorithms. The substantial time and effort required to train and optimize the ALPR system can result in excessive operational cost and overhead. In this paper we propose a framework to create an artificial set of license plate images for accelerated training and optimization of ALPR algorithms. The framework comprises two steps: the synthesis of license plate images according to the design and layout for a jurisdiction of interest; and the modeling of imaging transformations and distortions typically encountered in the image capture process. Distortion parameters are estimated by measurements of real plate images. The simulation methodology is successfully demonstrated for training of OCR.
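
    The distortion-modeling step can be sketched with OpenCV: render a clean plate, then apply perspective, blur and sensor noise. Corner offsets and noise levels below are illustrative; in the paper they are estimated from measurements of real plate images.

        import cv2
        import numpy as np

        # Stand-in synthesized plate (the paper renders the jurisdiction layout).
        plate = np.full((120, 360, 3), 255, np.uint8)
        cv2.putText(plate, "ABC 1234", (30, 80), cv2.FONT_HERSHEY_SIMPLEX,
                    2, (0, 0, 0), 5)

        # Perspective distortion from an off-axis camera.
        h, w = plate.shape[:2]
        src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        dst = np.float32([[10, 8], [w - 4, 0], [w, h - 6], [0, h]])
        M = cv2.getPerspectiveTransform(src, dst)
        warped = cv2.warpPerspective(plate, M, (w, h), borderValue=(255, 255, 255))

        # Optical blur and sensor noise typical of the capture chain.
        blurred = cv2.GaussianBlur(warped, (5, 5), 1.2)
        noisy = np.clip(blurred + np.random.normal(0, 6, blurred.shape),
                        0, 255).astype(np.uint8)
        cv2.imwrite("synthetic_plate.png", noisy)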

  14. Updating National Topographic Data Base Using Change Detection Methods

    NASA Astrophysics Data System (ADS)

    Keinan, E.; Felus, Y. A.; Tal, Y.; Zilberstien, O.; Elihai, Y.

    2016-06-01

    The traditional method for updating a topographic database on a national scale is a complex process that requires human resources, time and the development of specialized procedures. In many National Mapping and Cadastre Agencies (NMCAs), the updating cycle takes a few years. Today, however, reality is dynamic and changes occur every day; therefore, users expect the existing database to portray the current reality. Global mapping projects based on community volunteers, such as OSM, update their databases every day through crowdsourcing. In order to fulfil users' requirements for rapid updating, a new methodology that maps major interest areas while preserving associated decoding information should be developed. Until recently, automated processes did not yield satisfactory results, and a typical process included comparing images from different periods. The success rates in identifying objects were low, and most were accompanied by a high percentage of false alarms. As a result, the automatic process required significant editorial work that made it uneconomical. In recent years, developments in mapping technologies, advances in image processing algorithms and computer vision, together with the development of digital aerial cameras with a NIR band and Very High Resolution satellites, have allowed the implementation of a cost-effective automated process. The automatic process is based on high-resolution Digital Surface Model analysis, Multi Spectral (MS) classification, MS segmentation, object analysis and shape-forming algorithms. This article reviews the results of a novel change detection methodology as a first step towards updating the NTDB at the Survey of Israel.

  15. New Directions in 3D Medical Modeling: 3D-Printing Anatomy and Functions in Neurosurgical Planning

    PubMed Central

    Árnadóttir, Íris; Gíslason, Magnús; Ólafsson, Ingvar

    2017-01-01

    This paper illustrates the feasibility and utility of combining cranial anatomy and brain function on the same 3D-printed model, as evidenced by a neurosurgical planning case study of a 29-year-old female patient with a low-grade frontal-lobe glioma. We herein report the rapid prototyping methodology utilized in conjunction with surgical navigation to prepare and plan a complex neurosurgery. The method introduced here combines CT and MRI images with DTI tractography, while using various image segmentation protocols to 3D model the skull base, tumor, and five eloquent fiber tracts. This 3D model is rapid-prototyped and coregistered with patient images and a reported surgical navigation system, establishing a clear link between the printed model and surgical navigation. This methodology highlights the potential for advanced neurosurgical preparation, which can begin before the patient enters the operation theatre. Moreover, the work presented here demonstrates the workflow developed at the National University Hospital of Iceland, Landspitali, focusing on the processes of anatomy segmentation, fiber tract extrapolation, MRI/CT registration, and 3D printing. Furthermore, we present a qualitative and quantitative assessment for fiber tract generation in a case study where these processes are applied in the preparation of brain tumor resection surgery. PMID:29065569

  16. Performance Evaluation of 18F Radioluminescence Microscopy Using Computational Simulation

    PubMed Central

    Wang, Qian; Sengupta, Debanti; Kim, Tae Jin; Pratx, Guillem

    2017-01-01

    Purpose: Radioluminescence microscopy can visualize the distribution of beta-emitting radiotracers in live single cells with high resolution. Here, we perform a computational simulation of 18F positron imaging using this modality to better understand how radioluminescence signals are formed and to assist in optimizing the experimental setup and image processing.

    Methods: First, the transport of charged particles through the cell and scintillator and the resulting scintillation is modeled using the GEANT4 Monte-Carlo simulation. Then, the propagation of the scintillation light through the microscope is modeled by a convolution with a depth-dependent point-spread function, which models the microscope response. Finally, the physical measurement of the scintillation light using an electron-multiplying charge-coupled device (EMCCD) camera is modeled using a stochastic numerical photosensor model, which accounts for various sources of noise. The simulated output of the EMCCD camera is further processed using our ORBIT image reconstruction methodology to evaluate the endpoint images.

    Results: The EMCCD camera model was validated against experimentally acquired images, and the simulated noise, as measured by the standard deviation of a blank image, was found to be accurate within 2% of the actual detection. Furthermore, point-source simulations found that a reconstructed spatial resolution of 18.5 μm can be achieved near the scintillator. As the source is moved away from the scintillator, spatial resolution degrades at a rate of 3.5 μm per μm distance. These results agree well with the experimentally measured spatial resolution of 30-40 μm (live cells). The simulation also shows that the system sensitivity is 26.5%, which is also consistent with our previous experiments. Finally, an image of a simulated sparse set of single cells is visually similar to the measured cell image.

    Conclusions: Our simulation methodology agrees with experimental measurements taken with radioluminescence microscopy. This in silico approach can be used to guide further instrumentation developments and to provide a framework for improving image reconstruction. PMID:28273348
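
    The stochastic EMCCD photosensor model can be sketched in a few lines: Poisson shot noise, gamma-distributed electron-multiplication gain, Gaussian read noise, then digitization. All parameter values below are illustrative assumptions, not the paper's settings.

        import numpy as np

        rng = np.random.default_rng(0)

        def emccd_frame(photon_mean, em_gain=300.0, read_noise=8.0,
                        dark_current=0.02, adc_gain=12.0, offset=100.0):
            """Stochastic EMCCD sketch: shot noise, EM gain, read noise, ADC."""
            photons = rng.poisson(photon_mean + dark_current)     # shot + dark
            electrons = np.zeros(photons.shape)
            nz = photons > 0
            # Stochastic EM gain is well approximated by a gamma distribution.
            electrons[nz] = rng.gamma(shape=photons[nz], scale=em_gain)
            electrons += rng.normal(0.0, read_noise, photons.shape)
            return offset + electrons / adc_gain                  # digital counts

        # PSF-blurred scintillation light would supply 'photon_mean'.
        frame = emccd_frame(np.full((128, 128), 1.5))
        print("frame std (noise level, counts):", round(float(frame.std()), 2))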

  17. Diffusion Weighted Image Denoising Using Overcomplete Local PCA

    PubMed Central

    Manjón, José V.; Coupé, Pierrick; Concha, Luis; Buades, Antonio; Collins, D. Louis; Robles, Montserrat

    2013-01-01

    Diffusion Weighted Images (DWI) normally show a low Signal to Noise Ratio (SNR) due to the presence of noise from the measurement process, which complicates and biases the estimation of quantitative diffusion parameters. In this paper, a new denoising methodology is proposed that takes into consideration the multicomponent nature of multi-directional DWI datasets such as those employed in diffusion imaging. The new filter reduces random noise in multicomponent DWI by locally shrinking less significant Principal Components using an overcomplete approach. The proposed method is compared with state-of-the-art methods using synthetic and real clinical MR images, showing improved performance in terms of denoising quality and estimation of diffusion parameters. PMID:24019889
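    A minimal sketch of the core idea, local PCA with eigenvalue shrinkage applied over overlapping (hence overcomplete) patches, is given below for a 2-D multi-directional stack. The hard-threshold rule and the global noise estimate are simplifying assumptions and do not reproduce the authors' exact filter.

```python
import numpy as np

def local_pca_denoise(stack, patch=4, tau=2.3):
    """Denoise an (H, W, K) stack of K diffusion-weighted images."""
    H, W, K = stack.shape
    out = np.zeros((H, W, K))
    hits = np.zeros((H, W, 1))
    # Crude global noise level from the median absolute deviation.
    sigma = np.median(np.abs(stack - np.median(stack))) / 0.6745
    for i in range(H - patch + 1):
        for j in range(W - patch + 1):
            X = stack[i:i + patch, j:j + patch, :].reshape(-1, K)
            mu = X.mean(axis=0)
            Xc = X - mu
            w, V = np.linalg.eigh(Xc.T @ Xc / Xc.shape[0])
            keep = w > (tau * sigma) ** 2        # shrink small components
            Y = Xc @ V[:, keep] @ V[:, keep].T + mu
            out[i:i + patch, j:j + patch, :] += Y.reshape(patch, patch, K)
            hits[i:i + patch, j:j + patch, 0] += 1
    return out / hits                            # average overlapping estimates
```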

  18. Neuroimaging Feature Terminology: A Controlled Terminology for the Annotation of Brain Imaging Features.

    PubMed

    Iyappan, Anandhi; Younesi, Erfan; Redolfi, Alberto; Vrooman, Henri; Khanna, Shashank; Frisoni, Giovanni B; Hofmann-Apitius, Martin

    2017-01-01

    Ontologies and terminologies enable interoperability of knowledge and data in a standard manner among interdisciplinary research groups. Existing imaging ontologies capture general aspects of the imaging domain as a whole, such as methodological concepts or calibrations of imaging instruments. However, none of the existing ontologies covers the diagnostic features measured by imaging technologies in the context of neurodegenerative diseases. Therefore, the Neuro-Imaging Feature Terminology (NIFT) was developed to organize the knowledge domain of brain features measured by imaging technologies in association with neurodegenerative diseases. The purpose is to identify quantitative imaging biomarkers that can be extracted from multi-modal brain imaging data. The terminology attempts to cover measured features and parameters in brain scans relevant to disease progression. In this paper, we demonstrate the systematic retrieval of measured indices from the literature and show how the extracted knowledge can be further used for disease modeling that integrates neuroimaging features with molecular processes.

  19. Image enhancements of Landsat 8 (OLI) and SAR data for preliminary landslide identification and mapping applied to the central region of Kenya

    NASA Astrophysics Data System (ADS)

    Mwaniki, M. W.; Kuria, D. N.; Boitt, M. K.; Ngigi, T. G.

    2017-04-01

    Image enhancements lead to improved performance and increased accuracy in feature extraction, recognition, identification, classification and hence change detection. This increases the utility of remote sensing for environmental applications and aids disaster monitoring of geohazards involving large areas. The main aim of this study was to compare the effect of image enhancement applied to synthetic aperture radar (SAR) data and Landsat 8 imagery in landslide identification and mapping. The methodology involved pre-processing the Landsat 8 imagery, image co-registration, and despeckling of the SAR data, after which the Landsat 8 imagery was enhanced by Principal and Independent Component Analysis (PCA and ICA), a spectral index involving bands 7 and 4, and a False Colour Composite (FCC) built from the components bearing the most geologic information. The SAR data were processed using textural and edge filters and computation of SAR incoherence. The enhanced spatial, textural and edge information from the SAR data was incorporated into the spectral information from the Landsat 8 imagery during knowledge-based classification. The methodology was tested in the central highlands of Kenya, characterized by rugged terrain and frequent rainfall-induced landslides. The results showed that the SAR data complemented the Landsat 8 data, whose spectral information was enriched by the FCC with enhanced geologic information. The SAR classification depicted landslides along ridges and lineaments, important information lacking in the Landsat 8 image classification. The success of landslide identification and classification was attributed to the enhancement of geologic features by spectral, textural and roughness properties.
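    The PCA enhancement named above reduces, for a single band stack, to an eigen-decomposition of the inter-band covariance; the later components often isolate the subtler (e.g. geologic) variation. A generic sketch, with the band stack as an assumed (H, W, B) reflectance array:

```python
import numpy as np

def principal_components(stack):
    """Return the (H, W, B) stack rotated onto its principal components,
    ordered from highest to lowest variance."""
    H, W, B = stack.shape
    X = stack.reshape(-1, B).astype(float)
    X -= X.mean(axis=0)
    w, V = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(w)[::-1]              # descending variance
    return (X @ V[:, order]).reshape(H, W, B)
```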

  20. Winging It: Using Digital Imaging To Investigate Butterfly Metamorphosis

    ERIC Educational Resources Information Center

    Bowen, Anne; Bell, Randy L.

    2004-01-01

    One of the best ways to inspire interest in biology is through observations of living things. Unfortunately, this important component of science methodology is often left out because of the difficulty of including it in the classroom. Additionally, amazing processes occur in nature that few have the chance to observe. This article reviews a…

  1. Chain of evidence generation for contrast enhancement in digital image forensics

    NASA Astrophysics Data System (ADS)

    Battiato, Sebastiano; Messina, Giuseppe; Strano, Daniela

    2010-01-01

    The quality of images produced by digital cameras has improved greatly since the early days of digital photography. Nevertheless, it is not unusual in image forensics to encounter wrongly exposed pictures, mainly due to obsolete techniques or old technologies, but also to backlight conditions. Recovering details that are otherwise invisible requires stretching the image contrast. Forensic rules for producing evidence require complete documentation of the processing steps, enabling replication of the entire process. Automating enhancement techniques is therefore difficult and must be carefully documented. This work presents an automatic procedure for finding contrast enhancement settings, allowing both image correction and automatic script generation. The technique is based on a preprocessing step which extracts the features of the image and selects correction parameters. The parameters are then saved as JavaScript code that is used in the second step of the approach to correct the image. The generated script is Adobe Photoshop compliant (a tool widely used in image forensics analysis), thus permitting replication of the enhancement steps. Experiments on a dataset of images are also reported, showing the effectiveness of the proposed methodology.
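    The parameter-extraction step can be pictured as computing a pair of stretch limits from the image histogram and serializing them for later replay. A minimal, hypothetical sketch (the paper's feature analysis is more elaborate):

```python
import numpy as np

def stretch_params(img, clip=0.5):
    """Black/white points from percentiles; `clip` is the percentage
    trimmed at each histogram tail."""
    lo, hi = np.percentile(img, [clip, 100.0 - clip])
    return float(lo), float(hi)

def apply_stretch(img, lo, hi):
    """Replayable linear contrast stretch using the documented limits."""
    out = (img.astype(float) - lo) / max(hi - lo, 1e-12)
    return np.clip(out, 0.0, 1.0)

# The (lo, hi) pair is what a generated script would record, so the exact
# enhancement can be replicated on demand for the chain of evidence.
```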

  2. Imaging dynamic redox processes with genetically encoded probes.

    PubMed

    Ezeriņa, Daria; Morgan, Bruce; Dick, Tobias P

    2014-08-01

    Redox signalling plays an important role in many aspects of physiology, including that of the cardiovascular system. Perturbed redox regulation has been associated with numerous pathological conditions; nevertheless, the causal relationships between redox changes and pathology often remain unclear. Redox signalling involves the production of specific redox species at specific times in specific locations. However, until recently, the study of these processes has been impeded by a lack of appropriate tools and methodologies that afford the necessary redox species specificity and spatiotemporal resolution. Recently developed genetically encoded fluorescent redox probes now allow dynamic real-time measurements, of defined redox species, with subcellular compartment resolution, in intact living cells. Here we discuss the available genetically encoded redox probes in terms of their sensitivity and specificity and highlight where uncertainties or controversies currently exist. Furthermore, we outline major goals for future probe development and describe how progress in imaging methodologies will improve our ability to employ genetically encoded redox probes in a wide range of situations. This article is part of a special issue entitled "Redox Signalling in the Cardiovascular System."

  3. Soil Moisture Content Estimation Based on Sentinel-1 and Auxiliary Earth Observation Products. A Hydrological Approach

    PubMed Central

    Alexakis, Dimitrios D.; Mexis, Filippos-Dimitrios K.; Vozinaki, Anthi-Eirini K.; Daliakopoulos, Ioannis N.; Tsanis, Ioannis K.

    2017-01-01

    A methodology for elaborating multi-temporal Sentinel-1 and Landsat 8 satellite images for estimating topsoil Soil Moisture Content (SMC) to support hydrological simulation studies is proposed. After pre-processing the remote sensing data, backscattering coefficient, Normalized Difference Vegetation Index (NDVI), thermal infrared temperature and incidence angle parameters are assessed for their potential to infer ground measurements of SMC, collected at the top 5 cm. A non-linear approach using Artificial Neural Networks (ANNs) is tested. The methodology is applied in Western Crete, Greece, where a SMC gauge network was deployed during 2015. The performance of the proposed algorithm is evaluated using leave-one-out cross validation and sensitivity analysis. ANNs prove efficient for SMC estimation, yielding R² values between 0.7 and 0.9. The proposed methodology is used to support a hydrological simulation with the HEC-HMS model, applied at the Keramianos basin which is ungauged for SMC. Results and model sensitivity highlight the contribution of combining Sentinel-1 SAR and Landsat 8 images for improving SMC estimates and supporting hydrological studies. PMID:28635625
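    A minimal sketch of the regression step follows, using scikit-learn's MLPRegressor with leave-one-out cross-validation. The feature layout and network size are illustrative assumptions; the placeholder arrays stand in for the Cretan gauge data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Rows: [sigma0_backscatter, NDVI, thermal_temperature, incidence_angle]
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 4))                  # placeholder predictors
y = rng.uniform(0.05, 0.35, size=40)          # placeholder SMC (top 5 cm)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,),
                                   max_iter=5000, random_state=0))
pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"leave-one-out R^2 = {r2:.2f}")
```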

  4. Soil Moisture Content Estimation Based on Sentinel-1 and Auxiliary Earth Observation Products. A Hydrological Approach.

    PubMed

    Alexakis, Dimitrios D; Mexis, Filippos-Dimitrios K; Vozinaki, Anthi-Eirini K; Daliakopoulos, Ioannis N; Tsanis, Ioannis K

    2017-06-21

    A methodology for elaborating multi-temporal Sentinel-1 and Landsat 8 satellite images for estimating topsoil Soil Moisture Content (SMC) to support hydrological simulation studies is proposed. After pre-processing the remote sensing data, backscattering coefficient, Normalized Difference Vegetation Index (NDVI), thermal infrared temperature and incidence angle parameters are assessed for their potential to infer ground measurements of SMC, collected at the top 5 cm. A non-linear approach using Artificial Neural Networks (ANNs) is tested. The methodology is applied in Western Crete, Greece, where a SMC gauge network was deployed during 2015. The performance of the proposed algorithm is evaluated using leave-one-out cross validation and sensitivity analysis. ANNs prove efficient for SMC estimation, yielding R² values between 0.7 and 0.9. The proposed methodology is used to support a hydrological simulation with the HEC-HMS model, applied at the Keramianos basin which is ungauged for SMC. Results and model sensitivity highlight the contribution of combining Sentinel-1 SAR and Landsat 8 images for improving SMC estimates and supporting hydrological studies.

  5. Evaluation of a Change Detection Methodology by Means of Binary Thresholding Algorithms and Informational Fusion Processes

    PubMed Central

    Molina, Iñigo; Martinez, Estibaliz; Arquero, Agueda; Pajares, Gonzalo; Sanchez, Javier

    2012-01-01

    Landcover is subject to continuous change on a wide variety of temporal and spatial scales, and those changes produce significant effects on human and natural activities. Maintaining an updated spatial database of the changes that have occurred allows better monitoring of the Earth’s resources and management of the environment. Change detection (CD) techniques using images from different sensors, such as satellite imagery and aerial photographs, have proven to be suitable and reliable data sources from which updated information can be extracted efficiently, so that changes can also be inventoried and monitored. In this paper, a multisource CD methodology for multiresolution datasets is applied. First, different change indices are computed; then, different thresholding algorithms for change/no_change are applied to these indices in order to better estimate the statistical parameters of the two categories; finally, the indices are integrated into a multisource change detection fusion process, which generates a single CD result from several combinations of indices. This methodology has been applied to datasets with different spectral and spatial resolution properties, and the obtained results are evaluated by means of a quality control analysis as well as complementary graphical representations. The suggested methodology has also proven efficient at identifying the change detection index with the highest contribution. PMID:22737023
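    The thresholding-plus-fusion chain can be sketched as follows, here substituting Otsu's method for the paper's set of thresholding algorithms and a simple majority vote for its informational fusion process; both substitutions are assumptions for illustration.

```python
import numpy as np
from skimage.filters import threshold_otsu

def change_mask(index_img):
    """Binarize one change index into change/no_change."""
    return index_img > threshold_otsu(index_img)

def fuse(masks):
    """Majority-vote fusion of several binary change maps."""
    stack = np.stack(masks, axis=0)
    return stack.sum(axis=0) > stack.shape[0] / 2.0
```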

  6. Evaluation of a change detection methodology by means of binary thresholding algorithms and informational fusion processes.

    PubMed

    Molina, Iñigo; Martinez, Estibaliz; Arquero, Agueda; Pajares, Gonzalo; Sanchez, Javier

    2012-01-01

    Landcover is subject to continuous change on a wide variety of temporal and spatial scales, and those changes produce significant effects on human and natural activities. Maintaining an updated spatial database of the changes that have occurred allows better monitoring of the Earth's resources and management of the environment. Change detection (CD) techniques using images from different sensors, such as satellite imagery and aerial photographs, have proven to be suitable and reliable data sources from which updated information can be extracted efficiently, so that changes can also be inventoried and monitored. In this paper, a multisource CD methodology for multiresolution datasets is applied. First, different change indices are computed; then, different thresholding algorithms for change/no_change are applied to these indices in order to better estimate the statistical parameters of the two categories; finally, the indices are integrated into a multisource change detection fusion process, which generates a single CD result from several combinations of indices. This methodology has been applied to datasets with different spectral and spatial resolution properties, and the obtained results are evaluated by means of a quality control analysis as well as complementary graphical representations. The suggested methodology has also proven efficient at identifying the change detection index with the highest contribution.

  7. Advance Preparation in Task-Switching: Converging Evidence from Behavioral, Brain Activation, and Model-Based Approaches

    PubMed Central

    Karayanidis, Frini; Jamadar, Sharna; Ruge, Hannes; Phillips, Natalie; Heathcote, Andrew; Forstmann, Birte U.

    2010-01-01

    Recent research has taken advantage of the temporal and spatial resolution of event-related brain potentials (ERPs) and functional magnetic resonance imaging (fMRI) to identify the time course and neural circuitry of preparatory processes required to switch between different tasks. Here we overview some key findings contributing to understanding strategic processes in advance preparation. Findings from these methodologies are compatible with advance preparation conceptualized as a set of processes activated for both switch and repeat trials, but with substantial variability as a function of individual differences and task requirements. We then highlight new approaches that attempt to capitalize on this variability to link behavior and brain activation patterns. One approach examines correlations among behavioral, ERP and fMRI measures. A second “model-based” approach accounts for differences in preparatory processes by estimating quantitative model parameters that reflect latent psychological processes. We argue that integration of behavioral and neuroscientific methodologies is key to understanding the complex nature of advance preparation in task-switching. PMID:21833196

  8. A novel neural network based image reconstruction model with scale and rotation invariance for target identification and classification for Active millimetre wave imaging

    NASA Astrophysics Data System (ADS)

    Agarwal, Smriti; Bisht, Amit Singh; Singh, Dharmendra; Pathak, Nagendra Prasad

    2014-12-01

    Millimetre wave (MMW) imaging is gaining tremendous interest among researchers, with potential applications in security screening, standoff personal screening, automotive collision avoidance, and more. Current state-of-the-art imaging techniques, viz. microwave and X-ray imaging, suffer from lower resolution and harmful ionizing radiation, respectively. In contrast, MMW imaging operates at lower power and is non-ionizing, hence medically safe. Despite these favourable attributes, MMW imaging faces several challenges: it remains a relatively unexplored area and lacks a suitable imaging methodology for extracting complete target information. With these challenges in view, an active MMW imaging radar system at 60 GHz was designed for standoff imaging applications. A C-scan (horizontal and vertical scanning) methodology was developed that provides a cross-range resolution of 8.59 mm. The paper further details a suitable target identification and classification methodology. For identification of regular-shape targets, a mean-standard deviation based segmentation technique was formulated and validated using a different target shape. For classification, a probability density function based target material discrimination methodology was proposed and validated on a different dataset. Lastly, a novel artificial neural network based, scale and rotation invariant image reconstruction methodology is proposed to counter distortions in the image caused by noise, rotation or scale variations. Once trained with sample images, the neural network automatically handles these deformations and successfully reconstructs the corrected image for the test targets. The techniques developed in this paper are tested and validated using four regular shapes: rectangle, square, triangle and circle.
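    The mean-standard deviation segmentation named above amounts to a global threshold derived from image statistics. A minimal sketch, assuming a bright target against a darker background and an illustrative factor k:

```python
import numpy as np

def mean_std_segment(img, k=1.5):
    """Binary mask of pixels brighter than mean + k * std."""
    return img > img.mean() + k * img.std()
```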

  9. Land use mapping from CBERS-2 images with open source tools by applying different classification algorithms

    NASA Astrophysics Data System (ADS)

    Sanhouse-García, Antonio J.; Rangel-Peraza, Jesús Gabriel; Bustos-Terrones, Yaneth; García-Ferrer, Alfonso; Mesas-Carrascosa, Francisco J.

    2016-02-01

    Land cover classification is often based on differences in characteristics between classes combined with great homogeneity within each of them. This cover is obtained through field work or by means of processing satellite images. Field work involves high costs; therefore, digital image processing techniques have become an important alternative for performing this task. However, in some developing countries, and particularly in the Casacoima municipality in Venezuela, geographic information systems are scarce due to the lack of updated information and the high costs of software licenses. This research proposes a low-cost methodology for developing thematic mapping of local land use and coverage types in areas with scarce resources. Thematic mapping was developed from CBERS-2 images and spatial information available on the network using open source tools. Supervised per-pixel and per-region classification methods were applied using different classification algorithms and comparing them among themselves. Per-pixel classification was based on the Maxver (maximum likelihood) and Euclidean distance (minimum distance) algorithms, while per-region classification was based on the Bhattacharya algorithm. Satisfactory results were obtained from per-region classification, with an overall reliability of 83.93% and a kappa index of 0.81. The Maxver algorithm showed a reliability value of 73.36% and a kappa index of 0.69, while Euclidean distance obtained values of 67.17% and 0.61 for reliability and kappa index, respectively. The proposed methodology proved very useful for cartographic processing and updating, which in turn supports the development of management and land-use plans. Hence, open source tools proved to be an economically viable alternative not only for forestry organizations but also for the general public, allowing projects to be developed in economically depressed and/or environmentally threatened areas.
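    The minimum-distance (Euclidean) classifier used above assigns each pixel to the class with the nearest training centroid. A compact sketch, where X is a table of training pixel spectra and y their class labels:

```python
import numpy as np

def train_centroids(X, y):
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def classify_min_distance(pixels, classes, centroids):
    """pixels: (N, bands) array; returns the nearest-centroid label per pixel."""
    d = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]
```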

  10. GreenView and GreenLand Applications Development on SEE-GRID Infrastructure

    NASA Astrophysics Data System (ADS)

    Mihon, Danut; Bacu, Victor; Gorgan, Dorian; Mészáros, Róbert; Gelybó, Györgyi; Stefanut, Teodor

    2010-05-01

    The GreenView and GreenLand applications [1] have been developed through the SEE-GRID-SCI (SEE-GRID eInfrastructure for regional eScience) FP7 project co-funded by the European Commission [2]. The development of environment applications is a challenge for Grid technologies and software development methodologies. This presentation exemplifies the development of the GreenView and GreenLand applications over the SEE-GRID infrastructure by the Grid Application Development Methodology [3]. Today's environmental applications are used in various domains of Earth Science such as meteorology, ground and atmospheric pollution, ground metal detection and weather prediction. These applications run on satellite images (e.g. Landsat, MERIS, MODIS, etc.), and the accuracy of the output results depends mostly on the quality of these images. The main drawback of such environmental applications is the computational power and storage capacity needed (some images are almost 1 GB in size) to process such a large data volume. Most applications requiring high computational resources have therefore migrated onto the Grid infrastructure, which provides computing power by running the atomic application components on different Grid nodes in sequential or parallel mode. The middleware used between the Grid infrastructure and client applications is ESIP (Environment Oriented Satellite Image Processing Platform), which is based on the gProcess platform [4]. In its current form, gProcess is used for launching new processes on the Grid nodes, as well as for monitoring the execution status of these processes. This presentation highlights two case studies of Grid based environmental applications, GreenView and GreenLand [5]. GreenView is used in conjunction with MODIS (Moderate Resolution Imaging Spectroradiometer) satellite images and meteorological datasets to produce pseudo-colored temperature and vegetation maps for different geographical CEE (Central Eastern Europe) regions. GreenLand, on the other hand, is used to generate maps for different vegetation indexes (e.g. NDVI, EVI, SAVI, GEMI) based on Landsat satellite images. Both applications use interpolation and random value generation algorithms, as well as specific formulas for computing vegetation index values. The GreenView and GreenLand applications have been tested over the SEE-GRID infrastructure and the performance evaluation is reported in [6]. Future work aims at improving execution time (through better parallelization of jobs), extending the geographical areas to other parts of the Earth, and providing new user interaction techniques on spatial data and large sets of satellite images. References [1] GreenView application on Wiki, http://wiki.egee-see.org/index.php/GreenView [2] SEE-GRID-SCI Project, http://www.see-grid-sci.eu/ [3] Gorgan D., Stefanut T., Bâcu V., Mihon D., Grid based Environment Application Development Methodology, SCICOM, 7th International Conference on "Large-Scale Scientific Computations", 4-8 June, 2009, Sozopol, Bulgaria, (To be published by Springer), (2009). [4] Gorgan D., Bacu V., Stefanut T., Rodila D., Mihon D., Grid based Satellite Image Processing Platform for Earth Observation Applications Development. IDAACS'2009 - IEEE Fifth International Workshop on "Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications", 21-23 September, Cosenza, Italy, IEEE Published in Computer Press, 247-252 (2009).
[5] Mihon D., Bacu V., Stefanut T., Gorgan D., "Grid Based Environment Application Development - GreenView Application". ICCP2009 - IEEE 5th International Conference on Intelligent Computer Communication and Processing, 27 Aug, 2009 Cluj-Napoca. Published by IEEE Computer Press, pp. 275-282 (2009). [6] Danut Mihon, Victor Bacu, Dorian Gorgan, Róbert Mészáros, Györgyi Gelybó, Teodor Stefanut, Practical Considerations on the GreenView Application Development and Execution over SEE-GRID. SEE-GRID-SCI User Forum, 9-10 Dec 2009, Bogazici University, Istanbul, Turkey, ISBN: 978-975-403-510-0, pp. 167-175 (2009).
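    The vegetation indexes computed by GreenLand (e.g. NDVI, SAVI) reduce to simple band arithmetic on reflectance arrays. A sketch with the standard formulas (the `nir` and `red` arrays are assumed to be surface reflectance):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + eps)

def savi(nir, red, L=0.5, eps=1e-9):
    """Soil Adjusted Vegetation Index with soil factor L."""
    return (1.0 + L) * (nir - red) / (nir + red + L + eps)
```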

  11. The effect of SUV discretization in quantitative FDG-PET Radiomics: the need for standardized methodology in tumor texture analysis

    NASA Astrophysics Data System (ADS)

    Leijenaar, Ralph T. H.; Nalbantov, Georgi; Carvalho, Sara; van Elmpt, Wouter J. C.; Troost, Esther G. C.; Boellaard, Ronald; Aerts, Hugo J. W. L.; Gillies, Robert J.; Lambin, Philippe

    2015-08-01

    FDG-PET-derived textural features describing intra-tumor heterogeneity are increasingly investigated as imaging biomarkers. As part of the process of quantifying heterogeneity, image intensities (SUVs) are typically resampled into a reduced number of discrete bins. We focused on the implications of the manner in which this discretization is implemented. Two methods were evaluated: (1) RD, dividing the SUV range into D equally spaced bins, where the intensity resolution (i.e. bin size) varies per image; and (2) RB, maintaining a constant intensity resolution B. Clinical feasibility was assessed on 35 lung cancer patients, imaged before and in the second week of radiotherapy. Forty-four textural features were determined for different D and B at both imaging time points. Feature values depended on the intensity resolution, and of the two methods assessed, RB was shown to allow a meaningful inter- and intra-patient comparison of feature values. Overall, patients ranked differently according to feature values (used as a surrogate for textural feature interpretation) between the two discretization methods. Our study shows that the manner of SUV discretization has a crucial effect on the resulting textural features and their interpretation, emphasizing the importance of standardized methodology in tumor texture analysis.
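    The two discretization schemes can be written down directly; the sketch below is a plausible reading of RD and RB (bin conventions vary between implementations, so treat the exact indexing as an assumption):

```python
import numpy as np

def discretize_rd(suv, D=64):
    """R_D: split this image's own SUV range into D equal bins,
    so the bin size differs from image to image."""
    lo, hi = float(suv.min()), float(suv.max())
    b = np.floor(D * (suv - lo) / (hi - lo + 1e-12))
    return np.clip(b, 0, D - 1).astype(int) + 1

def discretize_rb(suv, B=0.5):
    """R_B: fixed bin size B in SUV units, so bin counts differ per image
    but intensity resolution, and hence feature comparability, is constant."""
    return np.floor(suv / B).astype(int) + 1
```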

  12. Noninvasive three-dimensional live imaging methodology for the spindles at meiosis and mitosis

    NASA Astrophysics Data System (ADS)

    Zheng, Jing-gao; Huo, Tiancheng; Tian, Ning; Chen, Tianyuan; Wang, Chengming; Zhang, Ning; Zhao, Fengying; Lu, Danyu; Chen, Dieyan; Ma, Wanyun; Sun, Jia-lin; Xue, Ping

    2013-05-01

    The spindle plays a crucial role in normal chromosome alignment and segregation during meiosis and mitosis. Studying spindles in living cells noninvasively is of great value in assisted reproduction technology (ART). Here, we present a novel spindle imaging methodology, full-field optical coherence tomography (FF-OCT). Without any dye labeling and fixation, we demonstrate the first successful application of FF-OCT to noninvasive three-dimensional (3-D) live imaging of the meiotic spindles within the mouse living oocytes at metaphase II as well as the mitotic spindles in the living zygotes at metaphase and telophase. By post-processing of the 3-D dataset obtained with FF-OCT, the important morphological and spatial parameters of the spindles, such as short and long axes, spatial localization, and the angle of meiotic spindle deviation from the first polar body in the oocyte were precisely measured with the spatial resolution of 0.7 μm. Our results reveal the potential of FF-OCT as an imaging tool capable of noninvasive 3-D live morphological analysis for spindles, which might be useful to ART related procedures and many other spindle related studies.

  13. Translational research of optical molecular imaging for personalized medicine.

    PubMed

    Qin, C; Ma, X; Tian, J

    2013-12-01

    In the medical imaging field, molecular imaging is a rapidly developing discipline spanning many imaging modalities, providing effective tools to visualize, characterize, and measure molecular and cellular mechanisms in the complex biological processes of living organisms. These tools deepen our understanding of biology and accelerate preclinical research, including cancer studies and drug discovery. Among molecular imaging modalities, optical imaging has evolved considerably and has seen spectacular advances in basic biomedical research and new drug development, although its penetration depth and the optical probes approved for clinical use remain limited. With the completion of human genome sequencing and the emergence of personalized medicine, a specific drug must be matched not only to the right disease but also to the right person, and optical molecular imaging can serve as a strong adjunct to personalized medicine by helping find the optimal drug based on an individual's proteome and genome. In this process, computational methodology and imaging systems, as well as the biomedical applications of optical molecular imaging, will play a crucial role. This review focuses on recent representative translational studies of optical molecular imaging for personalized medicine, preceded by a concise introduction. Finally, the current challenges and future development of optical molecular imaging are discussed from the authors' perspective.

  14. Evaluation of an Area-Based matching algorithm with advanced shape models

    NASA Astrophysics Data System (ADS)

    Re, C.; Roncella, R.; Forlani, G.; Cremonese, G.; Naletto, G.

    2014-04-01

    Nowadays, the scientific institutions involved in planetary mapping are working on new strategies to produce accurate high-resolution DTMs from space images at planetary scale, usually dealing with extremely large data volumes. From a methodological point of view, despite the introduction of a series of new image matching algorithms (e.g. Semi-Global Matching) that yield superior results (in particular, usually smooth and continuous surfaces) with lower processing times, the preference in this field still goes to well-established area-based matching techniques. Many efforts are consequently directed at improving each phase of the photogrammetric process, from image pre-processing to DTM interpolation. In this context, the Dense Matcher software (DM) developed at the University of Parma has recently been optimized to cope with the very high resolution images provided by the most recent missions (LROC NAC and HiRISE), focusing mainly on improving the correlation phase and automating the process. Important changes have been made to the correlation algorithm, while maintaining its high precision and accuracy, by implementing an advanced version of the Least Squares Matching (LSM) algorithm. In particular, an iterative algorithm has been developed to adapt the geometric transformation in image resampling using different shape functions, as originally proposed by other authors in different applications.
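    For orientation, the heart of an area-based matcher is a similarity search; LSM then refines the best integer-pixel match to sub-pixel precision with a geometric/radiometric model. The brute-force sketch below uses zero-mean normalized cross-correlation and is only a didactic stand-in for DM's optimized correlator.

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation of equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_match(template, search):
    """Exhaustive integer-pixel search of `template` inside `search`."""
    th, tw = template.shape
    best, pos = -2.0, (0, 0)
    for i in range(search.shape[0] - th + 1):
        for j in range(search.shape[1] - tw + 1):
            s = zncc(template, search[i:i + th, j:j + tw])
            if s > best:
                best, pos = s, (i, j)
    return pos, best
```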

  15. Classification of rice grain varieties arranged in scattered and heap fashion using image processing

    NASA Astrophysics Data System (ADS)

    Bhat, Sudhanva; Panat, Sreedath; N, Arunachalam

    2017-03-01

    Inspection and classification of food grains is a manual process in many food grain processing industries. Automating this process would benefit industries facing a shortage of skilled workforce, and machine vision techniques are a popular approach for developing such automation. Most existing work on the topic identifies the rice variety by analyzing images of well separated, isolated rice grains, from which many geometrical features can be extracted. This paper proposes techniques to estimate geometrical parameters from images of scattered as well as heaped rice grains, where grain boundaries are not clearly identifiable. A convexity-based methodology is proposed to separate touching rice grains in scattered rice grain images and obtain their geometrical parameters. For the heaped arrangement, a Pixel-Distance Contribution Function is defined and used to obtain points inside rice grains and then to find grain boundary points. These points are fitted with the equation of an ellipse to estimate grain lengths and breadths. The proposed techniques are applied to images of scattered and heaped rice grains of different varieties, and each variety is shown to give a unique set of results.
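    Once boundary points for a single grain have been recovered, the ellipse fit is a standard operation; for instance, OpenCV's fitEllipse needs at least five points. The coordinates below are fabricated for illustration; in the paper the points come from the convexity analysis or the Pixel-Distance Contribution Function.

```python
import numpy as np
import cv2

# Hypothetical boundary points of one grain, as an (N, 2) float32 array.
pts = np.array([[10, 30], [14, 38], [22, 42], [30, 38],
                [34, 30], [30, 22], [22, 18], [14, 22]], dtype=np.float32)

(cx, cy), (d1, d2), angle = cv2.fitEllipse(pts)
length, breadth = max(d1, d2), min(d1, d2)   # full axis lengths in pixels
print(f"length={length:.1f}px breadth={breadth:.1f}px angle={angle:.0f}deg")
```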

  16. Matrix phased array (MPA) imaging technology for resistance spot welds

    NASA Astrophysics Data System (ADS)

    Na, Jeong K.; Gleeson, Sean T.

    2014-02-01

    A three-dimensional MPA probe has been incorporated with a high speed phased array electronic board to visualize nugget images of resistance spot welds. The primary application area of this battery operated portable MPA ultrasonic imaging system is the automotive industry, where a conventional destructive testing process is commonly adopted to check the quality of resistance spot welds in auto bodies. Considering the average of five thousand spot welds in a medium-size passenger vehicle, the time and effort devoted to popping welds and measuring nugget size are immense, in addition to the millions of dollars' worth of scrap metal recycled per plant per year. This wasteful, labor intensive destructive testing process has become less reliable as auto body sheet metal has transitioned from thick, heavy mild steels to thin, light high strength steels. Consequently, developing a non-destructive inspection methodology has become inevitable. In this paper, the fundamental aspects of the current 3-D probe design, data acquisition algorithms, and the weld nugget imaging process are discussed.

  17. Building Development Monitoring in Multitemporal Remotely Sensed Image Pairs with Stochastic Birth-Death Dynamics.

    PubMed

    Benedek, C; Descombes, X; Zerubia, J

    2012-01-01

    In this paper, we introduce a new probabilistic method which integrates building extraction with change detection in remotely sensed image pairs. A global optimization process attempts to find the optimal configuration of buildings, considering the observed data, prior knowledge, and interactions between the neighboring building parts. We present methodological contributions on three key issues: 1) We implement a novel object-change modeling approach based on Multitemporal Marked Point Processes, which simultaneously exploits low-level change information between the time layers and object-level building description to recognize and separate changed and unaltered buildings. 2) To answer the challenges of data heterogeneity in aerial and satellite image repositories, we construct a flexible hierarchical framework which can create various building appearance models from different elementary feature-based modules. 3) To simultaneously satisfy the convergence, optimality, and computational complexity constraints raised by the increased data quantity, we adopt the quick Multiple Birth and Death optimization technique for change detection purposes, and propose a novel nonuniform stochastic object birth process which generates relevant objects with higher probability based on low-level image features.

  18. Matrix phased array (MPA) imaging technology for resistance spot welds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Na, Jeong K.; Gleeson, Sean T.

    2014-02-18

    A three-dimensional MPA probe has been incorporated with a high speed phased array electronic board to visualize nugget images of resistance spot welds. The primary application area of this battery operated portable MPA ultrasonic imaging system is the automotive industry, where a conventional destructive testing process is commonly adopted to check the quality of resistance spot welds in auto bodies. Considering the average of five thousand spot welds in a medium-size passenger vehicle, the time and effort devoted to popping welds and measuring nugget size are immense, in addition to the millions of dollars' worth of scrap metal recycled per plant per year. This wasteful, labor intensive destructive testing process has become less reliable as auto body sheet metal has transitioned from thick, heavy mild steels to thin, light high strength steels. Consequently, developing a non-destructive inspection methodology has become inevitable. In this paper, the fundamental aspects of the current 3-D probe design, data acquisition algorithms, and the weld nugget imaging process are discussed.

  19. Pulse compression favourable aperiodic infrared imaging approach for non-destructive testing and evaluation of bio-materials

    NASA Astrophysics Data System (ADS)

    Mulaveesala, Ravibabu; Dua, Geetika; Arora, Vanita; Siddiqui, Juned A.; Muniyappa, Amarnath

    2017-05-01

    In recent years, aperiodic, transient, pulse compression favourable infrared imaging methodologies have been demonstrated as reliable, quantitative, remote characterization and evaluation techniques for the testing and evaluation of various biomaterials. The present work demonstrates a pulse compression favourable aperiodic thermal wave imaging technique, frequency modulated thermal wave imaging, for bone diagnostics, specifically considering bone with tissue, skin and muscle overlayers. In order to assess the capability of the proposed frequency modulated thermal wave imaging technique to detect density variations in a multi-layered skin-fat-muscle-bone structure, finite element modeling and simulation studies have been carried out. Furthermore, frequency and time domain post-processing approaches have been applied to the temporal temperature data in order to improve the detection capabilities of frequency modulated thermal wave imaging.
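    The pulse compression at the heart of frequency modulated thermal wave imaging is a matched-filter (cross-correlation) operation between the recorded thermal response and the reference chirp. A toy 1-D sketch follows; the excitation band, sampling rate, delay and noise level are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import chirp, correlate

fs = 10.0                                    # samples per second
t = np.arange(0.0, 100.0, 1.0 / fs)          # 100 s excitation window
ref = chirp(t, f0=0.01, t1=100.0, f1=0.5)    # low-frequency LFM excitation

# Fabricated response: attenuated copy delayed by 10 s plus noise.
delay = int(10.0 * fs)
resp = np.zeros_like(ref)
resp[delay:] = 0.3 * ref[:-delay]
resp += 0.05 * np.random.default_rng(0).normal(size=ref.size)

cc = correlate(resp, ref, mode="full")       # pulse compression
lag = np.argmax(cc) - (ref.size - 1)
print(f"estimated delay = {lag / fs:.1f} s")
```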

  20. On use of image quality metrics for perceptual blur modeling: image/video compression case

    NASA Astrophysics Data System (ADS)

    Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn

    2018-02-01

    Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems where the modulation transfer function (MTF) may be imposed by a nonlinear degradation process. Previous research relying on image quality metrics (IQM) methods, which heuristically estimate perceived MTF, has supported the idea that an average perceived MTF can be used to model some types of degradation, such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x.264 encoding suggest that compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves, with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.

  1. Associative image analysis: a method for automated quantification of 3D multi-parameter images of brain tissue

    PubMed Central

    Bjornsson, Christopher S; Lin, Gang; Al-Kofahi, Yousef; Narayanaswamy, Arunachalam; Smith, Karen L; Shain, William; Roysam, Badrinath

    2009-01-01

    Brain structural complexity has confounded prior efforts to extract quantitative image-based measurements. We present a systematic ‘divide and conquer’ methodology for analyzing three-dimensional (3D) multi-parameter images of brain tissue to delineate and classify key structures, and compute quantitative associations among them. To demonstrate the method, thick (~100 μm) slices of rat brain tissue were labeled using 3–5 fluorescent signals, and imaged using spectral confocal microscopy and unmixing algorithms. Automated 3D segmentation and tracing algorithms were used to delineate cell nuclei, vasculature, and cell processes. From these segmentations, a set of 23 intrinsic and 8 associative image-based measurements was computed for each cell. These features were used to classify astrocytes, microglia, neurons, and endothelial cells. Associations among cells and between cells and vasculature were computed and represented as graphical networks to enable further analysis. The automated results were validated using a graphical interface that permits investigator inspection and corrective editing of each cell in 3D. Nuclear counting accuracy was >89%, and cell classification accuracy ranged from 81% to 92% depending on cell type. We present a software system named FARSIGHT implementing our methodology. Its output is a detailed XML file containing measurements that may be used for diverse quantitative hypothesis-driven and exploratory studies of the central nervous system. PMID:18294697

  2. WE-B-BRC-01: Current Methodologies in Risk Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rath, F.

    Prospective quality management techniques, long used by engineering and industry, have become a growing aspect of efforts to improve quality management and safety in healthcare. These techniques are of particular interest to medical physics as the scope and complexity of clinical practice continue to grow, thus making the prescriptive methods we have used harder to apply and potentially less effective for our interconnected and highly complex healthcare enterprise, especially in imaging and radiation oncology. An essential part of most prospective methods is the need to assess the various risks associated with problems, failures, errors, and design flaws in our systems. We therefore begin with an overview of risk assessment methodologies used in healthcare and industry and discuss their strengths and weaknesses. The rationale for use of process mapping, failure modes and effects analysis (FMEA) and fault tree analysis (FTA) by TG-100 will be described, as well as suggestions for the way forward. This is followed by discussion of radiation oncology specific risk assessment strategies and issues, including the TG-100 effort to evaluate IMRT and other ways to think about risk in the context of radiotherapy. Incident learning systems, local as well as the ASTRO/AAPM ROILS system, can also be useful in the risk assessment process. Finally, risk in the context of medical imaging will be discussed. Radiation (and other) safety considerations, as well as lack of quality and certainty, all contribute to the potential risks associated with suboptimal imaging. The goal of this session is to summarize a wide variety of risk analysis methods and issues to give the medical physicist access to tools which can better define risks (and their importance) which we work to mitigate with both prescriptive and prospective risk-based quality management methods. Learning Objectives: Description of risk assessment methodologies used in healthcare and industry Discussion of radiation oncology-specific risk assessment strategies and issues Evaluation of risk in the context of medical imaging and image quality E. Samei: Research grants from Siemens and GE.

  3. Ten Years of Vegetation Change in Northern California Marshlands Detected using Landsat Satellite Image Analysis

    NASA Technical Reports Server (NTRS)

    Potter, Christopher

    2013-01-01

    The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) methodology was applied to detect changes in perennial vegetation cover at marshland sites in Northern California reported to have undergone restoration between 1999 and 2009. Results showed extensive contiguous areas of restored marshland plant cover at 10 of the 14 sites selected. Gains from either woody shrub cover and/or recovery of herbaceous cover that remains productive and evergreen on a year-round basis could be mapped from the image results. However, LEDAPS may not be highly sensitive to changes in wetlands that have been restored mainly with seasonal herbaceous cover (e.g., vernal pools), due to the ephemeral nature of the plant greenness signal. Based on this evaluation, the LEDAPS methodology would be capable of fulfilling a pressing need for consistent, continual, low-cost monitoring of changes in marshland ecosystems of the Pacific Flyway.

  4. Detecting the presence-absence of bluefin tuna by automated analysis of medium-range sonars on fishing vessels.

    PubMed

    Uranga, Jon; Arrizabalaga, Haritz; Boyra, Guillermo; Hernandez, Maria Carmen; Goñi, Nicolas; Arregui, Igor; Fernandes, Jose A; Yurramendi, Yosu; Santiago, Josu

    2017-01-01

    This study presents a methodology for the automated analysis of commercial medium-range sonar signals for detecting the presence/absence of bluefin tuna (Thunnus thynnus) in the Bay of Biscay. The approach uses image processing techniques to analyze sonar screenshots. For each sonar image we extracted measurable regions and analyzed their characteristics. Scientific data were used to classify each region into a class ("tuna" or "no-tuna") and build a dataset to train and evaluate classification models using supervised learning. The methodology performed well when validated with commercial sonar screenshots, and has the potential to automatically analyze high volumes of data at low cost. This represents a first milestone towards the development of acoustic, fishery-independent indices of abundance for bluefin tuna in the Bay of Biscay. Future research lines and additional alternatives to inform stock assessments are also discussed.
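    The region-classification step maps naturally onto a standard supervised-learning pipeline. The sketch below is a generic stand-in: the feature names and the random-forest classifier are assumptions, and the placeholder arrays substitute for the labelled sonar regions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# One row per sonar-image region, e.g. [area, mean_intensity,
# elongation, depth]; labels are 1 = tuna, 0 = no-tuna.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))               # placeholder region features
y = rng.integers(0, 2, size=200)            # placeholder labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC = {auc.mean():.2f}")
```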

  5. Detecting the presence-absence of bluefin tuna by automated analysis of medium-range sonars on fishing vessels

    PubMed Central

    Uranga, Jon; Arrizabalaga, Haritz; Boyra, Guillermo; Hernandez, Maria Carmen; Goñi, Nicolas; Arregui, Igor; Fernandes, Jose A.; Yurramendi, Yosu; Santiago, Josu

    2017-01-01

    This study presents a methodology for the automated analysis of commercial medium-range sonar signals for detecting the presence/absence of bluefin tuna (Thunnus thynnus) in the Bay of Biscay. The approach uses image processing techniques to analyze sonar screenshots. For each sonar image we extracted measurable regions and analyzed their characteristics. Scientific data were used to classify each region into a class (“tuna” or “no-tuna”) and build a dataset to train and evaluate classification models using supervised learning. The methodology performed well when validated with commercial sonar screenshots, and has the potential to automatically analyze high volumes of data at low cost. This represents a first milestone towards the development of acoustic, fishery-independent indices of abundance for bluefin tuna in the Bay of Biscay. Future research lines and additional alternatives to inform stock assessments are also discussed. PMID:28152032

  6. VET in the European Aircraft and Space Industry

    ERIC Educational Resources Information Center

    Bremer, Rainer

    2008-01-01

    Purpose: This article aims to take up a mirror image-oriented position on the EQF and the announced ECVET system. It examines the effects that the EQF transformation process into the respective NQFs might have on the underlying systems of vocational education and training. Design/methodology/approach: A comparison is drawn between…

  7. Spotlight SAR interferometry for terrain elevation mapping and interferometric change detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eichel, P.H.; Ghiglia, D.C.; Jakowatz, C.V. Jr.

    1996-02-01

    In this report, we employ an approach quite different from any previous work; we show that a new methodology leads to a simpler and clearer understanding of the fundamental principles of SAR interferometry. This methodology also allows implementation of an important collection mode that has not been demonstrated to date. Specifically, we introduce the following six new concepts for the processing of interferometric SAR (INSAR) data: (1) processing using spotlight mode SAR imaging (allowing ultra-high resolution), as opposed to conventional strip-mapping techniques; (2) derivation of the collection geometry constraints required to avoid decorrelation effects in two-pass INSAR; (3) derivation of maximum likelihood estimators for phase difference and the change parameter employed in interferometric change detection (ICD); (4) processing for the two-pass case wherein the platform ground tracks make a large crossing angle; (5) a robust least-squares method for two-dimensional phase unwrapping formulated as a solution to Poisson's equation, instead of using traditional path-following techniques; and (6) the existence of a simple linear scale factor that relates phase differences between two SAR images to terrain height. We show both theoretical analysis and numerous examples employing real SAR collections to demonstrate the innovations listed above.
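    Concept (5), least-squares phase unwrapping via Poisson's equation, has a classic FFT-family implementation: build the wrapped-gradient divergence and invert the discrete Laplacian with a cosine transform. The sketch below follows that textbook recipe (Neumann boundaries, unweighted case) rather than the report's exact formulation.

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    return (p + np.pi) % (2.0 * np.pi) - np.pi

def unwrap_lsq(psi):
    """Unweighted least-squares unwrapping of a wrapped phase image psi."""
    M, N = psi.shape
    dx = wrap(np.diff(psi, axis=1))          # wrapped column differences
    dy = wrap(np.diff(psi, axis=0))          # wrapped row differences
    rho = np.zeros((M, N))                   # divergence of wrapped gradient
    rho[:, :-1] += dx
    rho[:, 1:] -= dx
    rho[:-1, :] += dy
    rho[1:, :] -= dy
    # Solve Laplacian(phi) = rho under Neumann boundaries via the 2-D DCT.
    r = dctn(rho, norm="ortho")
    i = np.arange(M)[:, None]
    j = np.arange(N)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
    denom[0, 0] = 1.0                        # avoid dividing the DC term by 0
    phi = r / denom
    phi[0, 0] = 0.0                          # the solution's free constant
    return idctn(phi, norm="ortho")
```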

  8. Methodological approach for the assessment of ultrasound reproducibility of cardiac structure and function: a proposal of the study group of Echocardiography of the Italian Society of Cardiology (Ultra Cardia SIC) Part I

    PubMed Central

    2011-01-01

    When applying echo-Doppler imaging for either clinical or research purposes, it is very important to select the most adequate modality/technology and to choose the most reliable and reproducible measurements. Quality control is a mainstay for reducing variability among institutions and operators and must be achieved by using appropriate procedures for the acquisition, storage and interpretation of echo-Doppler data. This goal can be met by employing an echo core laboratory (ECL) responsible for standardizing image acquisition processes (performed at the peripheral echo-labs) and analysis (by monitoring and optimizing the internal intra- and inter-reader variability of measurements). Accordingly, the Working Group of Echocardiography of the Italian Society of Cardiology decided to design standardized procedures for imaging acquisition in peripheral laboratories and for reading procedures, and to propose a methodological approach to assess the reproducibility of echo-Doppler parameters of cardiac structure and function using both standard and advanced technologies. A number of cardiologists experienced in cardiac ultrasound were involved in setting up an ECL available for future studies involving complex imaging or including echo-Doppler measures as primary or secondary efficacy or safety end-points. The present manuscript describes the methodology of the procedures (imaging acquisition and measurement reading) and documents the work done so far to test the reproducibility of the different echo-Doppler modalities (standard and advanced). These procedures can also be suggested for use in non-referral echocardiographic laboratories as an "inside" quality check, with the aim of optimizing the clinical consistency of echo-Doppler data. PMID:21943283

  9. Combined semantic and similarity search in medical image databases

    NASA Astrophysics Data System (ADS)

    Seifert, Sascha; Thoma, Marisa; Stegmaier, Florian; Hammon, Matthias; Kramer, Martin; Huber, Martin; Kriegel, Hans-Peter; Cavallaro, Alexander; Comaniciu, Dorin

    2011-03-01

    The current diagnostic process at hospitals is mainly based on reviewing and comparing images from multiple time points and modalities in order to monitor disease progression over a period of time. For ambiguous cases, however, the radiologist relies heavily on reference literature or a second opinion. Although there is a vast amount of acquired imagery stored in PACS systems which could be reused for decision support, these data sets suffer from weak search capabilities. We therefore present a search methodology which enables the physician to carry out intelligent search scenarios on medical image databases, combining ontology-based semantic search with appearance-based similarity search. It enabled the elimination of 12% of the top ten hits that would arise without taking the semantic context into account.

  10. Comparative study of different approaches for multivariate image analysis in HPTLC fingerprinting of natural products such as plant resin.

    PubMed

    Ristivojević, Petar; Trifković, Jelena; Vovk, Irena; Milojković-Opsenica, Dušanka

    2017-01-01

    With the introduction of phytochemical fingerprint analysis as a method of screening complex natural products for the presence of bioactive compounds, the use of chemometric classification methods, the application of powerful scanning, image capturing and processing devices and algorithms, and advances in the development of novel stationary phases and separation modalities, high-performance thin-layer chromatography (HPTLC) fingerprinting is becoming an attractive and fruitful field of separation science. Multivariate image analysis is crucial for proper data acquisition. In the current study, different image processing procedures were studied and compared in detail on HPTLC chromatograms of plant resins. Variables such as gray intensities of pixels along the solvent front, peak areas and mean peak values were used as input data and compared to obtain the best classification models. Important steps in image analysis, namely baseline removal, denoising, target peak alignment and normalization, were pointed out. The numerical data set based on the mean values of selected bands and the intensities of pixels along the solvent front proved the most convenient for planar-chromatographic profiling, although it requires at least basic knowledge of image processing methodology, and can be proposed for further investigation in HPTLC fingerprinting.

  11. Advances in low-cost long-wave infrared polymer windows

    NASA Astrophysics Data System (ADS)

    Weimer, Wayne A.; Klocek, Paul

    1999-07-01

    Recent improvements in engineered polymeric material compositions and advances in processing methodologies developed and patented at Raytheon Systems Company have produced long wave IR windows at exceptionally low costs. These UV stabilized, high strength windows incorporating subwavelength structured antireflection surfaces are enabling IR imaging systems to penetrate commercial markets and will reduce the cost of systems delivered to the military. The optical and mechanical properties of these windows will be discussed in detail with reference to the short and long-term impact on military IR imaging systems.

  12. GPR-Based Water Leak Models in Water Distribution Systems

    PubMed Central

    Ayala-Cabrera, David; Herrera, Manuel; Izquierdo, Joaquín; Ocaña-Levario, Silvia J.; Pérez-García, Rafael

    2013-01-01

    This paper addresses the problem of leakage in water distribution systems through the use of ground penetrating radar (GPR) as a nondestructive method. Laboratory tests are performed to extract features of water leakage from the obtained GPR images. Moreover, a test in a real-world urban system under real conditions is performed. Feature extraction is performed by interpreting GPR images with the support of a pre-processing methodology based on an appropriate combination of statistical methods and multi-agent systems. The results of these tests are presented, interpreted, analyzed and discussed in this paper.

  13. Classical and neural methods of image sequence interpolation

    NASA Astrophysics Data System (ADS)

    Skoneczny, Slawomir; Szostakowski, Jaroslaw

    2001-08-01

    The image interpolation problem is encountered in many areas. Examples include interpolation in the coding/decoding process for transmission purposes, reconstruction of a full frame from two interlaced sub-frames in conventional TV or HDTV, and reconstruction of missing frames in old, damaged cinematic sequences. In this paper an overview of interframe interpolation methods is presented. Both direct and motion-compensated interpolation techniques are illustrated by examples. The methodology may be classical or based on neural networks, depending on the demands of the specific interpolation problem.

  14. A FPGA implementation for linearly unmixing a hyperspectral image using OpenCL

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; López, Sebastián.; Sarmiento, Roberto

    2017-10-01

    Hyperspectral imaging systems provide images in which single pixels carry information from across the electromagnetic spectrum of the scene under analysis. These systems divide the spectrum into many contiguous channels, which may even lie outside the visible part of the spectrum. The main advantage of hyperspectral imaging technology is that certain objects leave unique fingerprints in the electromagnetic spectrum, known as spectral signatures, which make it possible to distinguish between materials that may look the same in a traditional RGB image. Accordingly, the most important hyperspectral imaging applications are related to distinguishing or identifying materials in a particular scene. In hyperspectral imaging applications under real-time constraints, the huge amount of information provided by the hyperspectral sensors has to be rapidly processed and analysed. For this purpose, parallel hardware devices, such as Field Programmable Gate Arrays (FPGAs), are typically used. However, developing hardware applications typically requires expertise in the specific targeted device, as well as in the tools and methodologies that can be used to implement the desired algorithms on that device. In this scenario, the Open Computing Language (OpenCL) emerges as a very interesting solution in which a single high-level synthesis design language can be used to efficiently develop applications on multiple, different hardware devices. In this work, the Fast Algorithm for Linearly Unmixing Hyperspectral Images (FUN) has been implemented on a Bitware Stratix V Altera FPGA using OpenCL. The obtained results demonstrate the suitability of OpenCL as a viable design methodology for quickly creating efficient FPGA designs for real-time hyperspectral imaging applications.
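
    The linear mixing model underlying such unmixing treats each pixel spectrum as a non-negative combination of endmember spectra. The NumPy sketch below solves that model by least squares with crude constraint handling; it illustrates the problem the FUN algorithm targets, not the FUN algorithm itself or its OpenCL kernel:

    ```python
    import numpy as np

    def unmix(pixels, endmembers):
        """Least-squares abundances for the linear mixing model pixel = E @ a.
        pixels: (n_pixels, n_bands); endmembers: (n_endmembers, n_bands)."""
        E = endmembers.T                           # (n_bands, n_endmembers)
        a, *_ = np.linalg.lstsq(E, pixels.T, rcond=None)
        a = np.clip(a, 0.0, None)                  # crude non-negativity constraint
        a /= a.sum(axis=0, keepdims=True) + 1e-12  # crude sum-to-one constraint
        return a.T                                 # (n_pixels, n_endmembers)
    ```

    On an FPGA, the appeal of OpenCL is that a kernel expressing this per-pixel solve can be pipelined across pixels by the high-level synthesis toolchain rather than hand-written in a hardware description language.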

  15. Large Area Scene Selection Interface (LASSI). Methodology of Selecting Landsat Imagery for the Global Land Survey 2005

    NASA Technical Reports Server (NTRS)

    Franks, Shannon; Masek, Jeffrey G.; Headley, Rachel M.; Gasch, John; Arvidson, Terry

    2009-01-01

    The Global Land Survey (GLS) 2005 is a cloud-free, orthorectified collection of Landsat imagery acquired during the 2004-2007 epoch intended to support global land-cover and ecological monitoring. Due to the numerous complexities in selecting imagery for the GLS2005, NASA and the U.S. Geological Survey (USGS) sponsored the development of an automated scene selection tool, the Large Area Scene Selection Interface (LASSI), to aid in the selection of imagery for this data set. This innovative approach to scene selection applied a user-defined weighting system to various scene parameters: image cloud cover, image vegetation greenness, choice of sensor, and the ability of the Landsat 7 Scan Line Corrector (SLC)-off pair to completely fill image gaps, among others. The parameters considered in scene selection were weighted according to their relative importance to the data set, along with the algorithm's sensitivity to that weight. This paper describes the methodology and analysis that established the parameter weighting strategy, as well as the post-screening processes used in selecting the optimal data set for GLS2005.
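
    The weighting idea can be sketched as a scalar score per candidate acquisition; the metric names, normalization to [0, 1] and weights below are hypothetical placeholders, not the actual LASSI parameter set:

    ```python
    def score_scene(scene, weights):
        """Weighted sum of per-scene quality metrics; higher is better.
        'scene' maps metric names to normalized values in [0, 1]."""
        return sum(weights[k] * scene[k] for k in weights)

    # Illustrative weights only; LASSI's real parameters and weighting
    # strategy are described in the paper itself.
    weights = {"clear_fraction": 0.4, "greenness": 0.3,
               "sensor_preference": 0.1, "gap_fill_ability": 0.2}

    candidates = [
        {"clear_fraction": 0.95, "greenness": 0.7,
         "sensor_preference": 1.0, "gap_fill_ability": 0.8},
        {"clear_fraction": 0.99, "greenness": 0.5,
         "sensor_preference": 0.5, "gap_fill_ability": 0.9},
    ]
    best = max(candidates, key=lambda s: score_scene(s, weights))
    ```

    The paper's sensitivity analysis then asks how strongly the chosen scene set changes as each weight is perturbed, which is what grounded the final weighting strategy.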

  16. On the Performance Evaluation of 3D Reconstruction Techniques from a Sequence of Images

    NASA Astrophysics Data System (ADS)

    Eid, Ahmed; Farag, Aly

    2005-12-01

    The performance evaluation of 3D reconstruction techniques is not a simple problem to solve. This is not only due to the increased dimensionality of the problem but also due to the lack of standardized and widely accepted testing methodologies. This paper presents a unified framework for the performance evaluation of different 3D reconstruction techniques. This framework includes a general problem formalization, different measuring criteria, and a classification method as a first step in standardizing the evaluation process. Performance characterization of two standard 3D reconstruction techniques, stereo and space carving, is also presented. The evaluation is performed on the same data set using an image reprojection testing methodology to reduce the dimensionality of the evaluation domain. Also, different measuring strategies are presented and applied to the stereo and space carving techniques. These measuring strategies have shown consistent results in quantifying the performance of these techniques. Additional experiments are performed on the space carving technique to study the effect of the number of input images and the camera pose on its performance.

  17. A Framework for Reproducible Latent Fingerprint Enhancements.

    PubMed

    Carasso, Alfred S

    2014-01-01

    Photoshop processing of latent fingerprints is the preferred methodology among law enforcement forensic experts, but that approach is not fully reproducible and may lead to questionable enhancements. Alternative, independent, fully reproducible enhancements, using IDL Histogram Equalization and IDL Adaptive Histogram Equalization, can produce better-defined ridge structures, along with considerable background information. Applying a systematic slow-motion smoothing procedure to such IDL enhancements, based on the rapid FFT solution of a Lévy stable fractional diffusion equation, can attenuate background detail while preserving ridge information. The resulting smoothed latent print enhancements are comparable to, but distinct from, forensic Photoshop images suitable for input into automated fingerprint identification systems (AFIS). In addition, this progressive smoothing procedure can be reexamined by displaying the suite of progressively smoother IDL images. That suite can be stored, providing an audit trail that allows monitoring for possible loss of useful information in transit to the user-selected optimal image. Such independent and fully reproducible enhancements provide a valuable frame of reference that may be helpful in informing, complementing, and possibly validating the forensic Photoshop methodology.
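
    A minimal sketch of the FFT-based smoothing step described above, assuming a grayscale image held as a 2-D NumPy array: the Fourier multiplier exp(-t|k|^alpha) solves a Lévy stable fractional diffusion equation, with alpha = 2 recovering ordinary Gaussian (heat) smoothing; grid scaling and boundary handling are simplified relative to the paper:

    ```python
    import numpy as np

    def fractional_diffusion_smooth(img, t, alpha=1.0):
        """Evaluate u(t) = F^{-1}[ exp(-t * |k|^alpha) * F[u0] ], the FFT
        solution of a fractional diffusion equation; smaller alpha gives
        the heavier-tailed, Levy-stable smoothing kernels."""
        F = np.fft.fft2(img.astype(np.float64))
        ky = np.fft.fftfreq(img.shape[0])[:, None]
        kx = np.fft.fftfreq(img.shape[1])[None, :]
        k = 2.0 * np.pi * np.hypot(ky, kx)
        return np.real(np.fft.ifft2(F * np.exp(-t * k ** alpha)))

    # The 'slow motion' audit trail: a suite of progressively smoother images.
    # suite = [fractional_diffusion_smooth(img, t, alpha=0.5) for t in (1, 2, 4, 8)]
    ```

    Storing the whole suite, as the abstract suggests, lets an examiner verify that no ridge information was lost on the way to the chosen image.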

  18. A spectral approach for the quantitative description of cardiac collagen network from nonlinear optical imaging.

    PubMed

    Masè, Michela; Cristoforetti, Alessandro; Avogaro, Laura; Tessarolo, Francesco; Piccoli, Federico; Caola, Iole; Pederzolli, Carlo; Graffigna, Angelo; Ravelli, Flavia

    2015-01-01

    The assessment of collagen structure in cardiac pathology, such as atrial fibrillation (AF), is essential for a complete understanding of the disease. This paper introduces a novel methodology for the quantitative description of collagen network properties, based on the combination of nonlinear optical microscopy with a spectral approach of image processing and analysis. Second-harmonic generation (SHG) microscopy was applied to atrial tissue samples from cardiac surgery patients, providing label-free, selective visualization of the collagen structure. The spectral analysis framework, based on 2D-FFT, was applied to the SHG images, yielding a multiparametric description of collagen fiber orientation (angle and anisotropy indexes) and texture scale (dominant wavelength and peak dispersion indexes). The proof-of-concept application of the methodology showed the capability of our approach to detect and quantify differences in the structural properties of the collagen network in AF versus sinus rhythm patients. These results suggest the potential of our approach in the assessment of collagen properties in cardiac pathologies related to a fibrotic structural component.
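
    A compact sketch of how orientation and anisotropy indexes can be read off a 2D-FFT power spectrum, assuming a grayscale SHG image as a 2-D NumPy array; the circular-moment estimator used here is one standard choice and not necessarily the authors' exact parametrization:

    ```python
    import numpy as np

    def fiber_orientation(img):
        """Dominant fiber angle and an anisotropy index from the 2-D power
        spectrum: the spectrum of an oriented texture concentrates along
        the direction perpendicular to the fibers."""
        P = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
        h, w = P.shape
        y, x = np.mgrid[0:h, 0:w]
        theta = np.arctan2(y - h // 2, x - w // 2)
        # Power-weighted circular moments at twice the angle (orientation
        # is a 180-degree periodic quantity).
        c = np.sum(P * np.cos(2 * theta))
        s = np.sum(P * np.sin(2 * theta))
        angle_spec = 0.5 * np.arctan2(s, c)       # spectrum's dominant direction
        fiber_angle = angle_spec + np.pi / 2      # fibers lie perpendicular to it
        anisotropy = np.hypot(c, s) / np.sum(P)   # 0 = isotropic, 1 = fully aligned
        return fiber_angle, anisotropy
    ```

    Texture-scale indexes such as a dominant wavelength would come from the radial distribution of the same spectrum, complementing the angular statistics computed here.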

  19. A Framework for Reproducible Latent Fingerprint Enhancements

    PubMed Central

    Carasso, Alfred S.

    2014-01-01

    Photoshop processing1 of latent fingerprints is the preferred methodology among law enforcement forensic experts, but that appproach is not fully reproducible and may lead to questionable enhancements. Alternative, independent, fully reproducible enhancements, using IDL Histogram Equalization and IDL Adaptive Histogram Equalization, can produce better-defined ridge structures, along with considerable background information. Applying a systematic slow motion smoothing procedure to such IDL enhancements, based on the rapid FFT solution of a Lévy stable fractional diffusion equation, can attenuate background detail while preserving ridge information. The resulting smoothed latent print enhancements are comparable to, but distinct from, forensic Photoshop images suitable for input into automated fingerprint identification systems, (AFIS). In addition, this progressive smoothing procedure can be reexamined by displaying the suite of progressively smoother IDL images. That suite can be stored, providing an audit trail that allows monitoring for possible loss of useful information, in transit to the user-selected optimal image. Such independent and fully reproducible enhancements provide a valuable frame of reference that may be helpful in informing, complementing, and possibly validating the forensic Photoshop methodology. PMID:26601028

  20. Comparison of 250 MHz electron spin echo and continuous wave oxygen EPR imaging methods for in vivo applications

    PubMed Central

    Epel, Boris; Sundramoorthy, Subramanian V.; Barth, Eugene D.; Mailer, Colin; Halpern, Howard J.

    2011-01-01

    Purpose: The authors compare two electron paramagnetic resonance imaging modalities at 250 MHz to determine the advantages and disadvantages of those modalities for in vivo oxygen imaging. Methods: Electron spin echo (ESE) and continuous wave (CW) methodologies were used to obtain three-dimensional images of a narrow-linewidth, water-soluble, nontoxic oxygen-sensitive trityl molecule, OX063, in vitro and in vivo. The authors also examined sequential images obtained from the same animal injected intravenously with the trityl spin probe to determine the temporal stability of the methodologies. Results: A study of phantoms with different oxygen concentrations revealed a threefold advantage of the ESE methodology in terms of reduced imaging time and more precise oxygen resolution for samples with less than 70 torr oxygen partial pressure. Above ~100 torr, CW performed better. The images produced by both methodologies showed pO2 distributions with similar mean values. However, ESE images demonstrated superior performance in low pO2 regions while missing voxels in high pO2 regions. Conclusions: ESE and CW have different areas of applicability. ESE is superior for hypoxia studies in tumors. PMID:21626937

  1. Image analysis method for the measurement of water saturation in a two-dimensional experimental flow tank

    NASA Astrophysics Data System (ADS)

    Belfort, Benjamin; Weill, Sylvain; Lehmann, François

    2017-04-01

    A novel, non-invasive imaging technique that determines 2D maps of water content in unsaturated porous media is presented. The method directly relates digitally measured intensities to the water content of the porous medium and requires the classical image analysis steps, i.e., normalization, filtering, background subtraction, scaling and calibration. The main advantages of this approach are that no calibration experiment is needed and that no tracer or dye is injected into the flow tank. The procedure enables effective processing of a large number of photographs and thus produces 2D water content maps at high temporal resolution. A drainage/imbibition experiment in a 2D flow tank with inner dimensions of 40 cm x 14 cm x 6 cm (L x W x D) is carried out to validate the methodology. The accuracy of the proposed approach is assessed using numerical simulations with a state-of-the-art computational code that solves the Richards equation. Comparison of the cumulative mass leaving and entering the flow tank, and of the water content maps produced by the photographic measurement technique and the numerical simulations, demonstrates the efficiency and high accuracy of the proposed method for investigating vadose zone flow processes. Application examples for a larger flow tank with various boundary conditions are finally presented to illustrate the potential of the methodology.
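
    Under the assumption that pixel intensity varies monotonically between a dry and a fully saturated reference image, the normalization and background-subtraction chain can be condensed into a per-pixel rescaling; this sketch omits the scaling and calibration steps of the full procedure, and the filter choice is a placeholder:

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def water_content_map(frame, dry_ref, wet_ref):
        """Map measured intensities to an effective saturation in [0, 1]
        by rescaling each pixel between its dry and fully saturated
        reference values (a simplified version of the image chain)."""
        frame = median_filter(frame.astype(np.float64), size=3)     # denoise
        theta = (frame - dry_ref) / (wet_ref - dry_ref + 1e-12)     # normalize
        return np.clip(theta, 0.0, 1.0)
    ```

    Applying this to every photograph of the drainage/imbibition sequence is what yields the high-temporal-resolution 2D water content maps the abstract describes.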

  2. Radiation levels and image quality in patients undergoing chest X-ray examinations

    NASA Astrophysics Data System (ADS)

    de Oliveira, Paulo Márcio Campos; do Carmo Santana, Priscila; de Sousa Lacerda, Marco Aurélio; da Silva, Teógenes Augusto

    2017-11-01

    Patient dose monitoring for different radiographic procedures has been used as a parameter to evaluate the performance of radiology services; skin entrance absorbed dose values for each type of examination have been internationally established and recommended with the aim of patient protection. In this work, a methodology for dose evaluation was applied to three diagnostic services: one with conventional film processing and two with digital computed radiography processing techniques. The x-ray beam parameters were selected and doses (specifically, entrance surface dose and incident air kerma) were evaluated on the basis of images approved under the European criteria during postero-anterior (PA) and lateral (LAT) incidences. Data were collected from 200 patients, corresponding to 200 PA and 100 LAT incidences. Results showed that the dose distributions in the three diagnostic services were very different; the best relation between dose and image quality was found in the institution with chemical film processing. This work contributed to disseminating a radiation protection culture by emphasizing the need for continuous dose reduction without losing diagnostic image quality.

  3. Emotion Modulation of Visual Attention: Categorical and Temporal Characteristics

    PubMed Central

    Ciesielski, Bethany G.; Armstrong, Thomas; Zald, David H.; Olatunji, Bunmi O.

    2010-01-01

    Background Experimental research has shown that emotional stimuli can either enhance or impair attentional performance. However, the relative effects of specific emotional stimuli and the specific time course of these differential effects are unclear. Methodology/Principal Findings In the present study, participants (n = 50) searched for a single target within a rapid serial visual presentation of images. Irrelevant fear, disgust, erotic or neutral images preceded the target by two, four, six, or eight items. At lag 2, erotic images induced the greatest deficits in subsequent target processing compared to other images, consistent with a large emotional attentional blink. Fear and disgust images also produced larger attentional blinks at lag 2 than neutral images. Erotic, fear, and disgust images continued to induce greater deficits than neutral images at lags 4 and 6. However, target processing deficits induced by erotic, fear, and disgust images at intermediate lags (lags 4 and 6) did not consistently differ from each other. In contrast to performance at lags 2, 4, and 6, enhancement in target processing for emotional stimuli relative to neutral stimuli was observed at lag 8. Conclusions/Significance These findings suggest that task-irrelevant emotional information, particularly erotica, impairs intentional allocation of attention at early temporal stages, but at later temporal stages emotional stimuli can have an enhancing effect on directed attention. These data suggest that the effects of emotional stimuli on attention can be both positive and negative depending upon temporal factors. PMID:21079773

  4. Automated x-ray/light field congruence using the LINAC EPID panel.

    PubMed

    Polak, Wojciech; O'Doherty, Jim; Jones, Matt

    2013-03-01

    X-ray/light field alignment is a test described in many guidelines for the routine quality control of clinical linear accelerators (LINACs). Currently, the gold-standard method for measuring alignment is the use of radiographic film. However, many modern LINACs are equipped with an electronic portal imaging device (EPID) that may be used to perform this test, thus reducing overall cost, processing and analysis time, removing operator dependency, and removing the requirement to sustain the departmental film processor. This work describes a novel method of utilizing the EPID together with a custom in-house designed jig and automatic image processing software, allowing measurement of the light field size, the x-ray field size, and the congruence between them. The authors present results of testing the method for aS1000 and aS500 Varian EPID detectors on six LINACs at a range of energies (6, 10, and 15 MV) in comparison with the results obtained from radiographic film. Reproducibility of the software in fully automatic operation under a range of operating conditions for a single image showed a congruence of 0.01 cm with a coefficient of variation of 0. Slight variation in congruence repeatability was noted with semiautomatic processing by four independent operators, due to manual marking of positions on the jig. Testing of the fully automatic method shows a high precision of 0.02 mm, compared to a maximum of 0.06 mm determined by film processing. Intra-individual examination of operator measurements of congruence was shown to vary by as much as 0.75 mm. Similar congruence measurements of 0.02 mm were also determined for a lower-resolution EPID (aS500 model) after rescaling of the image to the aS1000 image size. The designed methodology proved to be time efficient, cost effective, and at least as accurate as the gold-standard radiographic film. Additionally, congruence testing can easily be performed for all four cardinal gantry angles, which can be difficult with radiographic film. Therefore, the authors propose it can be used as an alternative to the radiographic film method, allowing decommissioning of the film processor.

  5. Exploitation of realistic computational anthropomorphic phantoms for the optimization of nuclear imaging acquisition and processing protocols.

    PubMed

    Loudos, George K; Papadimitroulas, Panagiotis G; Kagadis, George C

    2014-01-01

    Monte Carlo (MC) simulations play a crucial role in nuclear medical imaging since they can provide the ground truth for clinical acquisitions by integrating and quantifying all physical parameters that affect image quality. Over the last decade, a number of realistic computational anthropomorphic models have been developed to serve imaging as well as other biomedical engineering applications. The combination of MC techniques with realistic computational phantoms can provide a powerful tool for pre- and post-processing in imaging, data analysis and dosimetry. This work aims to create a global database of simulated Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) exams; the methodology, as well as its first elements, are presented. Simulations are performed using the well-validated GATE open-source toolkit, standard anthropomorphic phantoms and activity distributions of various radiopharmaceuticals derived from the literature. The resulting images, projections and sinograms of each study are provided in the database and can be further exploited to evaluate processing and reconstruction algorithms. Patient studies with different characteristics are included in the database, and different computational phantoms were tested for the same acquisitions. These include the XCAT, Zubal and Virtual Family phantoms, some of which are used for the first time in nuclear imaging. The created database will be freely available, and our current work is directed towards its extension by simulating additional clinical pathologies.

  6. Vision-based in-line fabric defect detection using yarn-specific shape features

    NASA Astrophysics Data System (ADS)

    Schneider, Dorian; Aach, Til

    2012-01-01

    We develop a methodology for automatic in-line flaw detection in industrial woven fabrics. Where state-of-the-art detection algorithms apply texture analysis methods to low-resolution (~200 ppi) image data, we describe here a process flow that segments single yarns in high-resolution (~1000 ppi) textile images. Four yarn shape features are extracted, allowing precise detection and measurement of defects. The degree of precision reached allows detected defects to be classified according to their nature, an innovation in the field of automatic fabric flaw detection. The design meets real-time requirements and copes with adverse conditions caused by loom vibrations and dirt. The entire process flow is discussed, followed by an evaluation on a database of real-life industrial fabric images. This work pertains to the construction of an on-loom defect detection system to be used in manufacturing practice.

  7. Automatic crack detection and classification method for subway tunnel safety monitoring.

    PubMed

    Zhang, Wenyu; Zhang, Zhenjiang; Qi, Dapeng; Liu, Yun

    2014-10-16

    Cracks are an important indicator of the safety status of infrastructures. This paper presents an automatic crack detection and classification methodology for subway tunnel safety monitoring. With the application of high-speed complementary metal-oxide-semiconductor (CMOS) industrial cameras, the tunnel surface can be captured and stored in digital images. In the next step, local dark regions with potential crack defects are segmented from the original gray-scale images by utilizing morphological image processing techniques and thresholding operations. In the feature extraction process, we present a distance-histogram-based shape descriptor that effectively describes the spatial shape difference between cracks and other irrelevant objects. Along with other features, the classification successfully removes over 90% of misidentified objects. Also, compared with the original gray-scale images, over 90% of the crack length is preserved in the final output binary images. The proposed approach was tested on safety monitoring for Beijing Subway Line 1. The experimental results revealed rules for parameter settings and also proved that the proposed approach is effective and efficient for automatic crack detection and classification.
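
    A simplified sketch of the segmentation stage, assuming a grayscale tunnel image as a NumPy array: a morphological black-hat emphasizes locally dark, thin structures, a threshold binarizes them, and small components are dropped. The paper's distance-histogram descriptor and classifier would then operate on the surviving components; the threshold and size values here are placeholders, not the paper's settings:

    ```python
    import numpy as np
    from scipy import ndimage

    def crack_candidates(gray, dark_thresh=60, min_pixels=30):
        """Segment locally dark regions as crack candidates and discard
        small blobs; returns a boolean mask of surviving components."""
        gray = gray.astype(np.float64)
        # Black-hat: closing minus original highlights thin dark structures.
        blackhat = ndimage.grey_closing(gray, size=15) - gray
        binary = blackhat > dark_thresh
        labels, n = ndimage.label(binary)
        sizes = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
        keep_ids = np.nonzero(sizes >= min_pixels)[0] + 1
        return np.isin(labels, keep_ids)
    ```

    A shape descriptor such as a histogram of pairwise pixel distances within each component is what lets elongated cracks be separated from compact stains and joints downstream.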

  8. Automatic Crack Detection and Classification Method for Subway Tunnel Safety Monitoring

    PubMed Central

    Zhang, Wenyu; Zhang, Zhenjiang; Qi, Dapeng; Liu, Yun

    2014-01-01

    Cracks are an important indicator of the safety status of infrastructures. This paper presents an automatic crack detection and classification methodology for subway tunnel safety monitoring. With the application of high-speed complementary metal-oxide-semiconductor (CMOS) industrial cameras, the tunnel surface can be captured and stored in digital images. In the next step, local dark regions with potential crack defects are segmented from the original gray-scale images by utilizing morphological image processing techniques and thresholding operations. In the feature extraction process, we present a distance-histogram-based shape descriptor that effectively describes the spatial shape difference between cracks and other irrelevant objects. Along with other features, the classification successfully removes over 90% of misidentified objects. Also, compared with the original gray-scale images, over 90% of the crack length is preserved in the final output binary images. The proposed approach was tested on safety monitoring for Beijing Subway Line 1. The experimental results revealed rules for parameter settings and also proved that the proposed approach is effective and efficient for automatic crack detection and classification. PMID:25325337

  9. Automated image processing of LANDSAT 2 digital data for watershed runoff prediction

    NASA Technical Reports Server (NTRS)

    Sasso, R. R.; Jensen, J. R.; Estes, J. E.

    1977-01-01

    The U.S. Soil Conservation Service (SCS) model for watershed runoff prediction uses soil and land cover information as its major drivers. Kern County Water Agency (KCWA) is implementing the SCS model to predict runoff for 10,400 sq km of mountainous watershed in Kern County, California. The Remote Sensing Unit, University of California, Santa Barbara, was commissioned by KCWA to conduct a 230 sq km feasibility study in the Lake Isabella, California region to evaluate remote sensing methodologies which could ultimately be extrapolated to the entire 10,400 sq km Kern County watershed. Results indicate that digital image processing of Landsat 2 data will provide the usable land cover required by KCWA as input to the SCS runoff model.

  10. Methodological considerations in conducting an olfactory fMRI study.

    PubMed

    Vedaei, Faezeh; Fakhri, Mohammad; Harirchian, Mohammad Hossein; Firouznia, Kavous; Lotfi, Yones; Ali Oghabian, Mohammad

    2013-01-01

    The sense of smell involves complex chemosensory processing in humans and animals that allows them to connect with the environment through one of their chief sensory systems. In the field of functional brain imaging, many studies have focused on locating brain regions that are involved during olfactory processing. Despite a wealth of literature about brain networks in different olfactory tasks, there is a paucity of data regarding task design. Moreover, considering the importance of olfactory tasks for patients with a variety of neurological diseases, special considerations should be addressed for patients. In this article, we review current olfaction tasks for behavioral studies and functional neuroimaging assessments, as well as technical principles regarding the utilization of these tasks in functional magnetic resonance imaging studies.

  11. Relative Panoramic Camera Position Estimation for Image-Based Virtual Reality Networks in Indoor Environments

    NASA Astrophysics Data System (ADS)

    Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.

    2017-09-01

    Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated at lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite System (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  12. Neuroimaging Feature Terminology: A Controlled Terminology for the Annotation of Brain Imaging Features

    PubMed Central

    Iyappan, Anandhi; Younesi, Erfan; Redolfi, Alberto; Vrooman, Henri; Khanna, Shashank; Frisoni, Giovanni B.; Hofmann-Apitius, Martin

    2017-01-01

    Ontologies and terminologies are used for the interoperability of knowledge and data in a standard manner among interdisciplinary research groups. Existing imaging ontologies capture general aspects of the imaging domain as a whole, such as methodological concepts or calibrations of imaging instruments. However, none of the existing ontologies covers the diagnostic features measured by imaging technologies in the context of neurodegenerative diseases. Therefore, the Neuro-Imaging Feature Terminology (NIFT) was developed to organize the knowledge domain of brain features measured by imaging technologies in association with neurodegenerative diseases. The purpose is to identify quantitative imaging biomarkers that can be extracted from multi-modal brain imaging data. This terminology attempts to cover measured features and parameters in brain scans relevant to disease progression. In this paper, we demonstrate the systematic retrieval of measured indices from the literature and how the extracted knowledge can be further used for disease modeling that integrates neuroimaging features with molecular processes. PMID:28731430

  13. A strategy to optimize CT pediatric dose with a visual discrimination model

    NASA Astrophysics Data System (ADS)

    Gutierrez, Daniel; Gudinchet, François; Alamo-Maestre, Leonor T.; Bochud, François O.; Verdun, Francis R.

    2008-03-01

    Technological developments of computed tomography (CT) have led to a drastic increase in its clinical utilization, creating concerns about patient exposure. To better control the dose to patients, we propose a methodology to find an objective compromise between dose and image quality by means of a visual discrimination model. A GE LightSpeed-Ultra scanner was used to perform the acquisitions. A QRM 3D low-contrast resolution phantom (QRM, Germany) was scanned using CTDIvol values in the range of 1.7 to 103 mGy. Raw data obtained with the highest CTDIvol were afterwards processed to simulate dose reductions by white noise addition. Noise realism of the simulations was verified by comparing the shapes and amplitudes of the normalized noise power spectra (NNPS) and standard deviation measurements. Patient images were acquired using the Diagnostic Reference Levels (DRL) proposed in Switzerland. Dose reduction was then simulated, as for the QRM phantom, to obtain five different CTDIvol levels, down to 3.0 mGy. Image quality of the phantom images was assessed with the Sarnoff JNDmetrix visual discrimination model and compared to an assessment made by means of the ROC methodology, taken as a reference. For patient images a similar approach was taken, but using the Visual Grading Analysis (VGA) method as the reference. A relationship between Sarnoff JNDmetrix and ROC results was established for low-contrast detection in phantom images, demonstrating that the Sarnoff JNDmetrix can be used for the qualification of images with highly correlated noise. Patient image qualification showed a threshold of conspicuity loss only for children over 35 kg.

  14. A state-of-the-art review on segmentation algorithms in intravascular ultrasound (IVUS) images.

    PubMed

    Katouzian, Amin; Angelini, Elsa D; Carlier, Stéphane G; Suri, Jasjit S; Navab, Nassir; Laine, Andrew F

    2012-09-01

    Over the past two decades, intravascular ultrasound (IVUS) image segmentation has remained a challenge for researchers while the use of this imaging modality is rapidly growing in catheterization procedures and in research studies. IVUS provides cross-sectional grayscale images of the arterial wall and the extent of atherosclerotic plaques with high spatial resolution in real time. In this paper, we review recently developed image processing methods for the detection of media-adventitia and luminal borders in IVUS images acquired with different transducers operating at frequencies ranging from 20 to 45 MHz. We discuss methodological challenges, lack of diversity in reported datasets, and weaknesses of quantification metrics that make IVUS segmentation still an open problem despite all efforts. In conclusion, we call for a common reference database, validation metrics, and ground-truth definition with which new and existing algorithms could be benchmarked.

  15. Applied high-speed imaging for the icing research program at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Slater, Howard; Owens, Jay; Shin, Jaiwon

    1992-01-01

    The Icing Research Tunnel at NASA Lewis Research Center provides scientists a scaled, controlled environment to simulate natural icing events. The closed-loop, low-speed, refrigerated wind tunnel offers the experimental capability to test for icing certification requirements, analytical model validation and calibration techniques, cloud physics instrumentation refinement, advanced ice protection systems, and rotorcraft icing methodology development. The test procedures for these objectives all require a high degree of visual documentation, both in real-time data acquisition and in post-test image processing. Information is provided to scientific, technical, and industrial imaging specialists, as well as to research personnel, about the high-speed and conventional imaging systems used on the recent ice protection technology program. Various imaging examples for some of these tests are presented. Additional imaging examples are available from the NASA Lewis Research Center's Photographic and Printing Branch.

  16. Forensic detection of noise addition in digital images

    NASA Astrophysics Data System (ADS)

    Cao, Gang; Zhao, Yao; Ni, Rongrong; Ou, Bo; Wang, Yongbin

    2014-03-01

    We propose a technique to detect the global addition of noise to a digital image. As an anti-forensics tool, noise addition is typically used to disguise the visual traces of image tampering or to remove the statistical artifacts left behind by other operations. As such, the blind detection of noise addition has become imperative as well as beneficial for authenticating image content and recovering the image processing history, which is the goal of general forensics techniques. Specifically, special image blocks, including constant and strip blocks, are used to construct the features for identifying the noise addition manipulation. The influence of noising on the blockwise pixel value distribution is formulated and analyzed formally. A methodology of detectability recognition followed by binary decision is proposed to ensure the applicability and reliability of noise detection. Extensive experimental results demonstrate the efficacy of our proposed noise detector.
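
    One concrete way to exploit the constant blocks mentioned above: count the fraction of small blocks whose pixels are exactly equal, which collapses toward zero once global noise is added. This is a hedged illustration of the idea, not the paper's full detector or its decision rule:

    ```python
    import numpy as np

    def constant_block_fraction(gray, k=2):
        """Fraction of k-by-k blocks whose pixels are all equal. Natural
        or JPEG-decoded images retain many such flat blocks; adding even
        weak global noise destroys almost all of them, so a near-zero
        fraction flags possible noise addition."""
        h, w = gray.shape
        blocks = gray[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k)
        flat = blocks.max(axis=(1, 3)) == blocks.min(axis=(1, 3))
        return flat.mean()
    ```

    The detectability-recognition step in the paper addresses the case where an image has few flat blocks to begin with, so that a low fraction alone would not be reliable evidence.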

  17. Integrated Imaging and Vision Techniques for Industrial Inspection: A Special Issue on Machine Vision and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zheng; Ukida, H.; Ramuhalli, Pradeep

    2010-06-05

    Imaging- and vision-based techniques play an important role in industrial inspection. The sophistication of the techniques assures high-quality performance of the manufacturing process through precise positioning, online monitoring, and real-time classification. Advanced systems incorporating multiple imaging and/or vision modalities provide robust solutions to complex situations and problems in industrial applications. A diverse range of industries, including aerospace, automotive, electronics, pharmaceutical, biomedical, semiconductor, and food/beverage, have benefited from recent advances in multi-modal imaging, data fusion, and computer vision technologies. Many of the open problems in this context are in the general area of image analysis methodologies (preferably in an automated fashion). This editorial article introduces a special issue of this journal highlighting recent advances and demonstrating the successful applications of integrated imaging and vision technologies in industrial inspection.

  18. Applied high-speed imaging for the icing research program at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Slater, Howard; Owens, Jay; Shin, Jaiwon

    1991-01-01

    The Icing Research Tunnel at NASA Lewis Research Center provides scientists a scaled, controlled environment to simulate natural icing events. The closed-loop, low-speed, refrigerated wind tunnel offers the experimental capability to test for icing certification requirements, analytical model validation and calibration techniques, cloud physics instrumentation refinement, advanced ice protection systems, and rotorcraft icing methodology development. The test procedures for these objectives all require a high degree of visual documentation, both in real-time data acquisition and in post-test image processing. Information is provided to scientific, technical, and industrial imaging specialists, as well as to research personnel, about the high-speed and conventional imaging systems used on the recent ice protection technology program. Various imaging examples for some of these tests are presented. Additional imaging examples are available from the NASA Lewis Research Center's Photographic and Printing Branch.

  19. Mining biomedical images towards valuable information retrieval in biomedical and life sciences

    PubMed Central

    Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas

    2016-01-01

    Biomedical images are helpful sources for scientists and practitioners in drawing significant hypotheses, exemplifying approaches and describing experimental results in the published biomedical literature. In recent decades, there has been an enormous increase in the production and publication of heterogeneous biomedical images, which creates a need for bioimaging platforms for feature extraction and analysis of text and content in biomedical images, to be exploited in effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, methodologies used, results produced, accuracies achieved and limitations. Our comparative conclusions include current challenges for bioimaging software with selective image mining, embedded text extraction and processing of complex natural language queries. PMID:27538578

  20. [One decade of functional imaging in schizophrenia research. From visualisation of basic information processing steps to molecular-genetic oriented imaging].

    PubMed

    Tost, H; Meyer-Lindenberg, A; Ruf, M; Demirakça, T; Grimm, O; Henn, F A; Ende, G

    2005-02-01

    Modern neuroimaging techniques such as magnetic resonance imaging (MRI) and positron emission tomography (PET) have contributed tremendously to our current understanding of psychiatric disorders in the context of functional, biochemical and microstructural alterations of the brain. Since the mid-nineties, functional MRI has provided major insights into the neurobiological correlates of signs and symptoms in schizophrenia. The current paper reviews important fMRI studies of the past decade in the domains of motor, visual, auditory, attentional and working memory function. Special emphasis is given to new methodological approaches, such as the visualisation of medication effects and the functional characterisation of risk genes.

  1. Delineation of karst terranes in complex environments: Application of modern developments in the wavelet theory and data mining

    NASA Astrophysics Data System (ADS)

    Alperovich, Leonid; Averbuch, Amir; Eppelbaum, Lev; Zheludev, Valery

    2013-04-01

    Karst areas occupy about 14% of the world's land. Karst terranes of different origin create difficult conditions for construction, industrial activity and tourism, and are a source of heightened danger for the environment. Mapping of karst (sinkhole) hazards will obviously be one of the most significant problems of engineering geophysics in the XXI century. Given the complexity of geological media, some unfavourable environments and the known ambiguity of geophysical data analysis, examination by a single geophysical method may be insufficient. Wavelet methodology as a whole has a significant impact on cardinal problems of geophysical signal processing: denoising of signals, enhancement of signals, distinguishing of signals with closely related characteristics, and integrated analysis of different geophysical fields (satellite, airborne, earth-surface or underground observations). We developed a three-phase approach to the integrated geophysical localization of subsurface karsts (the same approach could be used for subsequent monitoring of karst dynamics). The first phase consists of modeling to compute the various geophysical effects characterizing karst phenomena. The second phase develops the signal processing approaches for analyzing profile or areal geophysical observations. Finally, the third phase integrates these methods to create a new method for the combined interpretation of different geophysical data. Our combined geophysical analysis builds on modern developments in wavelet techniques for signal and image processing. The development of this integrated methodology will enable recognition of karst terranes even at a small signal-to-noise ratio in complex geological environments. For analyzing the geophysical data, we used a technique based on an algorithm that characterizes a geophysical image by a limited number of parameters. This set of parameters serves as a signature of the image and is utilized to discriminate images containing a karst cavity (K) from images containing no karst (N). The algorithm consists of the following main phases: (a) collection of the database, (b) characterization of geophysical images, and (c) dimensionality reduction. Each image is then characterized by the histogram of its coherency directions. As a result of these steps we obtain two sets, K and N, of signature vectors for images from sections containing karst cavities and non-karst subsurface, respectively.
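
    A minimal sketch of such an image signature, assuming a 2-D array of gridded geophysical field values: a magnitude-weighted histogram of local gradient directions, which could then feed the dimensionality-reduction and K-versus-N discrimination steps. The estimator below is illustrative, not the authors' exact coherency measure:

    ```python
    import numpy as np

    def direction_histogram(img, nbins=18):
        """Signature of a geophysical image: a histogram of local gradient
        (coherency) directions, weighted by gradient magnitude, normalized
        to sum to one so images of different size remain comparable."""
        gy, gx = np.gradient(img.astype(np.float64))
        angle = np.mod(np.arctan2(gy, gx), np.pi)   # orientation in [0, pi)
        mag = np.hypot(gx, gy)
        hist, _ = np.histogram(angle, bins=nbins, range=(0.0, np.pi),
                               weights=mag)
        return hist / (hist.sum() + 1e-12)
    ```

    Collecting these signature vectors for known karst (K) and non-karst (N) sections gives the two labeled sets on which a classifier can be trained after dimensionality reduction.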

  2. Direct Bio-printing with Heterogeneous Topology Design.

    PubMed

    Ahsan, Amm Nazmul; Xie, Ruinan; Khoda, Bashir

    2017-01-01

    Bio-additive manufacturing is a promising tool for fabricating porous scaffold structures to expedite tissue regeneration processes. Unlike most traditional bulk-material objects, the microstructures of tissues and organs are mostly highly anisotropic, heterogeneous, and porous in nature. However, modeling the internal heterogeneity of tissue/organ structures in a traditional CAD environment is difficult and oftentimes inaccurate. Besides, the de facto STL conversion of bio-models introduces a loss of information and accumulates further errors at each subsequent step (build orientation, slicing, tool-path planning) of the bio-printing process plan. We propose a topology-based scaffold design methodology to accurately represent the heterogeneous internal architecture of tissues/organs. An image analysis technique is used to digitize the topology information contained in medical images of tissues/organs. A weighted topology reconstruction algorithm is implemented to represent the heterogeneity with parametric functions. The parametric functions are then used to map the spatial material distribution. The generated information is transferred directly to the 3D bio-printer, and the heterogeneous porous tissue scaffold structure is manufactured without an STL file. The proposed methodology is implemented to verify the effectiveness of the approach, and the designed example structure is bio-fabricated with a deposition-based bio-additive manufacturing system.

  3. Representation of scientific methodology in secondary science textbooks

    NASA Astrophysics Data System (ADS)

    Binns, Ian C.

    The purpose of this investigation was to assess the representation of scientific methodology in secondary science textbooks. More specifically, this study looked at how textbooks introduced scientific methodology and to what degree the examples from the rest of the textbook, the investigations, and the images were consistent with the text's description of scientific methodology, if at all. The sample included eight secondary science textbooks from two publishers, McGraw-Hill/Glencoe and Harcourt/Holt, Rinehart & Winston. Data consisted, first, of all student text and teacher text that referred to scientific methodology. Second, all investigations in the textbooks were analyzed. Finally, any images that depicted scientists working were also collected and analyzed. The text analysis and activity analysis used the ethnographic content analysis approach developed by Altheide (1996). The rubrics used for the text analysis and activity analysis were initially guided by the Benchmarks (AAAS, 1993), the NSES (NRC, 1996), and the nature of science literature. Preliminary analyses helped to refine each of the rubrics and ground them in the data. Image analysis used stereotypes identified in the DAST literature. Findings indicated that all eight textbooks presented mixed views of scientific methodology in their initial descriptions. Five textbooks placed more emphasis on the traditional view and three placed more emphasis on the broad view. Results also revealed that the initial descriptions, examples, investigations, and images all emphasized the broad view for Glencoe Biology and the traditional view for Chemistry: Matter and Change. The initial descriptions, examples, investigations, and images in the other six textbooks were not consistent. Overall, the textbook with the most appropriate depiction of scientific methodology was Glencoe Biology and the textbook with the least appropriate depiction was Physics: Principles and Problems. These findings suggest that, compared to earlier investigations, textbooks have begun to improve in how they represent scientific methodology. However, there is still much room for improvement. Future research needs to consider how textbooks impact teachers' and students' understandings of scientific methodology.

  4. From intuition to statistics in building subsurface structural models

    USGS Publications Warehouse

    Brandenburg, J.P.; Alpak, F.O.; Naruk, S.; Solum, J.

    2011-01-01

    Experts associated with the oil and gas exploration industry suggest that combining forward trishear models with stochastic global optimization algorithms allows a quantitative assessment of the uncertainty associated with a given structural model. The methodology is applied to incompletely imaged structures related to deepwater hydrocarbon reservoirs and results are compared to prior manual palinspastic restorations and borehole data. This methodology is also useful for extending structural interpretations into other areas of limited resolution, such as subsalt in addition to extrapolating existing data into seismic data gaps. This technique can be used for rapid reservoir appraisal and potentially have other applications for seismic processing, well planning, and borehole stability analysis.

  5. Go With the Flow, on Jupiter and Snow. Coherence from Model-Free Video Data Without Trajectories

    NASA Astrophysics Data System (ADS)

    AlMomani, Abd AlRahman R.; Bollt, Erik

    2018-06-01

    Viewing a data set such as the clouds of Jupiter, coherence is readily apparent to human observers, especially the Great Red Spot, but also other great storms and persistent structures. There are now many different definitions and perspectives mathematically describing coherent structures, but we take an image processing perspective here. We describe the inference of coherent sets of a fluidic system directly from image data, without attempting to first model the underlying flow fields, related to a concept in image processing called motion tracking. In contrast to standard spectral methods for image processing, which are generally related to a symmetric affinity matrix and hence to standard spectral graph theory, we need a non-symmetric affinity, which arises naturally from the underlying arrow of time. We develop an anisotropic, directed diffusion operator corresponding to flow on a directed graph, built from a directed affinity matrix designed with coherence in mind, and the corresponding spectral graph theory from the graph Laplacian. Our methodology is not offered as more accurate than traditional methods of finding coherent sets; rather, our approach works with alternative kinds of data sets, in the absence of a vector field. Our examples include partitioning the weather and cloud structures of Jupiter, a lake-effect snow event local to Potsdam, NY, on Earth, and the benchmark double-gyre test system.
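
    As a toy illustration of the directed, non-symmetric construction (far simpler than the paper's operator), one can link cells of one frame to cells of the next by intensity similarity, row-normalize into a random-walk operator on a directed graph, and split the domain with a leading nontrivial eigenvector. Everything here, including the similarity kernel and the two-frame restriction, is an assumption for illustration only:

    ```python
    import numpy as np
    from scipy.linalg import eig

    def directed_coherence(frames, eps=0.1):
        """Toy coherence partition from two frames without trajectories.
        frames: two small, coarsely binned 2-D grids (e.g. 16x16); a full
        affinity between all cell pairs is built, so resolution must stay low."""
        f0, f1 = frames[0].ravel(), frames[1].ravel()
        # Directed affinity: cell i at time t -> cell j at time t+1,
        # weighted by how similar their intensities are.
        A = np.exp(-((f0[:, None] - f1[None, :]) ** 2) / eps)
        P = A / A.sum(axis=1, keepdims=True)   # row-stochastic, not symmetric
        vals, vecs = eig(P)
        order = np.argsort(-vals.real)
        phi = vecs[:, order[1]].real           # leading nontrivial eigenvector
        return (phi > np.median(phi)).reshape(frames[0].shape)
    ```

    The non-symmetry of P is the point: transposing it would describe transport backward in time, which is exactly the arrow-of-time structure the abstract says a symmetric affinity cannot express.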

  6. A graph-based approach to detect spatiotemporal dynamics in satellite image time series

    NASA Astrophysics Data System (ADS)

    Guttler, Fabio; Ienco, Dino; Nin, Jordi; Teisseire, Maguelonne; Poncelet, Pascal

    2017-08-01

    Enhancing the frequency of satellite acquisitions represents a key issue for the Earth Observation community nowadays. Repeated observations are crucial for monitoring purposes, particularly when intra-annual processes should be taken into account. Time series of images constitute a valuable source of information in these cases. The goal of this paper is to propose a new methodological framework to automatically detect and extract spatiotemporal information from satellite image time series (SITS). Existing methods dealing with this kind of data are usually classification-oriented and cannot provide information about evolutions and temporal behaviors. In this paper we propose a graph-based strategy that combines object-based image analysis (OBIA) with data mining techniques. Image objects computed at each individual timestamp are connected across the time series, generating a set of evolution graphs. Each evolution graph is associated with a particular area within the study site and stores information about its temporal evolution. Such information can be deeply explored at the evolution-graph scale or used to compare the graphs and supply a general picture at the study-site scale. We validated our framework on two study sites located in the south of France, involving different types of natural, semi-natural and agricultural areas. The results obtained from a Landsat SITS support the quality of the methodological approach and illustrate how the framework can be employed to extract and characterize spatiotemporal dynamics.

  7. Field-based rice classification in Wuhua county through integration of multi-temporal Sentinel-1A and Landsat-8 OLI data

    NASA Astrophysics Data System (ADS)

    Yang, Huijin; Pan, Bin; Wu, Wenfu; Tai, Jianhao

    2018-07-01

    Rice is one of the most important cereals in the world. With agricultural land changing, there is an urgent need to update information about rice planting areas. This study aims to map rice planting areas with a field-based approach through the integration of multi-temporal Sentinel-1A and Landsat-8 OLI data in Wuhua County of South China, an area with many basins and mountains. Using multi-temporal SAR and optical images, the paper proposes a methodology for the identification of rice-planting areas. This methodology mainly consists of SSM applied to the time series of SAR images for the calculation of a similarity measure, an image segmentation process applied to the pan-sharpened optical image for the search for homogeneous objects, and the integration of SAR and optical data for the elimination of speckles. The study compares the per-pixel approach with the per-field approach; the results show that the highest accuracy (91.38%), obtained with the field-based approach, is 1.18% higher than that of the pixel-based approach for VH polarization, a gain attributable to the elimination of speckle noise, as a comparison of the two rice maps shows. Therefore, the integration of Sentinel-1A and Landsat-8 OLI images with a field-based approach has great potential for mapping rice or other crops' areas.

  8. How to investigate neuro-biochemical relationships on a regional level in humans? Methodological considerations for combining functional with biochemical imaging.

    PubMed

    Duncan, Niall W; Wiebking, Christine; Muñoz-Torres, Zeidy; Northoff, Georg

    2014-01-15

    There is an increasing interest in combining different imaging modalities to investigate the relationship between neural and biochemical activity. More specifically, imaging techniques like MRS and PET that allow for biochemical measurement are combined with techniques like fMRI and EEG that measure neural activity in different states. Such combination of neural and biochemical measures raises not only technical issues, such as merging the different data sets, but also several methodological issues. These methodological issues – ranging from hypothesis generation and hypothesis-guided use of technical facilities to target measures and experimental measures – are the focus of this paper. We discuss the various methodological problems and issues raised by the combination of different imaging methodologies in order to investigate neuro-biochemical relationships on a regional level in humans. For example, the choice of transmitter and scan type is discussed, along with approaches to allow the establishment of particular specificities (such as regional or biochemical) to in turn make results fully interpretable. An algorithm that can be used as a form of checklist for designing such multimodal studies is presented. The paper concludes that while several methodological and technical caveats need to be overcome and addressed, multimodal imaging of the neuro-biochemical relationship provides an important tool to better understand the physiological mechanisms of the human brain.

  9. How to investigate neuro-biochemical relationships on a regional level in humans? Methodological considerations for combining functional with biochemical imaging.

    PubMed

    Duncan, Niall W; Wiebking, Christine; Munoz-Torres, Zeidy; Northoff, Georg

    2013-10-25

    There is an increasing interest in combining different imaging modalities to investigate the relationship between neural and biochemical activity. More specifically, imaging techniques like MRS and PET that allow for biochemical measurement are combined with techniques like fMRI and EEG that measure neural activity in different states. Such combination of neural and biochemical measures raises not only technical issues, such as merging the different data sets, but also several methodological issues. These methodological issues - ranging from hypothesis generation and hypothesis-guided use of technical facilities to target measures and experimental measures - are the focus of this paper. We discuss the various methodological problems and issues raised by the combination of different imaging methodologies in order to investigate neuro-biochemical relationships on a regional level in humans. For example, the choice of transmitter and scan type is discussed, along with approaches to allow the establishment of particular specificities (such as regional or biochemical) to in turn make results fully interpretable. An algorithm that can be used as a form of checklist for designing such multimodal studies is presented. The paper concludes that while several methodological and technical caveats need to be overcome and addressed, multimodal imaging of the neuro-biochemical relationship provides an important tool to better understand the physiological mechanisms of the human brain. Copyright © 2013. Published by Elsevier B.V.

  10. A global "imaging" view on systems approaches in immunology.

    PubMed

    Ludewig, Burkhard; Stein, Jens V; Sharpe, James; Cervantes-Barragan, Luisa; Thiel, Volker; Bocharov, Gennady

    2012-12-01

    The immune system exhibits an enormous complexity. High throughput methods such as the "-omic" technologies generate vast amounts of data that facilitate dissection of immunological processes at ever finer resolution. Using high-resolution data-driven systems analysis, causal relationships between complex molecular processes and particular immunological phenotypes can be constructed. However, processes in tissues, organs, and the organism itself (so-called higher level processes) also control and regulate the molecular (lower level) processes. Reverse systems engineering approaches, which focus on the examination of the structure, dynamics and control of the immune system, can help to understand the construction principles of the immune system. Such integrative mechanistic models can properly describe, explain, and predict the behavior of the immune system in health and disease by combining both higher and lower level processes. Moving from molecular and cellular levels to a multiscale systems understanding requires the development of methodologies that integrate data from different biological levels into multiscale mechanistic models. In particular, 3D imaging techniques and 4D modeling of the spatiotemporal dynamics of immune processes within lymphoid tissues are central for such integrative approaches. Both dynamic and global organ imaging technologies will be instrumental in facilitating comprehensive multiscale systems immunology analyses as discussed in this review. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Localized water reverberation phases and their impact on back-projection images

    NASA Astrophysics Data System (ADS)

    Yue, H.; Castillo, J.; Yu, C.; Meng, L.; Zhan, Z.

    2017-12-01

    Coherent radiators imaged by back-projection (BP) are commonly interpreted as part of the rupture process. Nevertheless, artifacts introduced by structure-related phases are rarely discriminated from the rupture process. In this study, we adopt the logic of empirical Green's function (EGF) analysis to discriminate between rupture and structure effects. We re-examine the waveforms and BP images of the 2012 Mw 7.2 Indian Ocean earthquake and an EGF event (Mw 6.2). The P-wave codas of both events present a similar shape with a characteristic period of approximately 10 s, and are back-projected as coherent radiators near the trench. The S-wave BP does not image energy radiation near the trench. We interpret these coda waves as localized water reverberation phases excited near the trench. We performed 2D waveform modeling using a realistic bathymetry model and found that the sharp near-trench bathymetry traps the acoustic water waves, forming localized reverberation phases. These waves can be imaged as coherent near-trench radiators with features similar to those in the observations. We present a methodology to discriminate between rupture and propagation effects in BP images, which can serve as a criterion for subevent identification.
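
    The delay-and-stack beamforming at the heart of back-projection explains why trapped water phases can masquerade as rupture: any signal that is coherent across the array after travel-time alignment stacks constructively, whether it originates from the fault or from the near-trench water column. Below is a minimal NumPy sketch of the stacking step only; the array shapes and travel-time inputs are hypothetical and this is not the authors' implementation.

```python
import numpy as np

def back_project(seismograms, travel_times, dt):
    """Shift-and-stack beamforming over candidate source grid points.

    seismograms  : (n_stations, n_samples) array of P waveforms
    travel_times : (n_points, n_stations) predicted travel times (s)
                   from each candidate grid point to each station
    dt           : sample interval (s)
    Returns stacked beam power for each candidate grid point.
    """
    n_points, n_stations = travel_times.shape
    n_samples = seismograms.shape[1]
    power = np.zeros(n_points)
    for p in range(n_points):
        stack = np.zeros(n_samples)
        for s in range(n_stations):
            # delay each trace by its predicted travel time before stacking
            shift = int(round(travel_times[p, s] / dt))
            stack[:n_samples - shift] += seismograms[s, shift:]
        power[p] = np.sum(stack ** 2)
    return power
```

    Grid points whose beams carry high power become the "coherent radiators"; the EGF comparison described in the abstract is what separates genuine subevents from such structure-related artifacts.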

  12. Good reasons to implement quality assurance in nationwide breast cancer screening programs in Croatia and Serbia: results from a pilot study.

    PubMed

    Ciraj-Bjelac, Olivera; Faj, Dario; Stimac, Damir; Kosutic, Dusko; Arandjic, Danijela; Brkic, Hrvoje

    2011-04-01

    The purpose of this study is to investigate the need for and the possible achievements of a comprehensive QA programme, and to look at the effects of simple corrective actions on image quality in Croatia and Serbia. The paper focuses on activities related to the technical and radiological aspects of QA. The methodology consisted of two phases. The aim of the first phase was the initial assessment of mammography practice in terms of image quality, patient dose and equipment performance in a selected number of mammography units in Croatia and Serbia. Subsequently, corrective actions were suggested and implemented, and the same parameters were re-assessed. Most of the suggested corrective actions were simple, low-cost and possible to implement immediately, as they related to working habits in mammography units, such as film processing and darkroom conditions. It has been demonstrated how a simple quantitative assessment of image quality can be used for optimisation purposes. Analysis of image quality parameters such as OD (optical density), gradient and contrast demonstrated general similarities between mammography practices in Croatia and Serbia. The applied methodology should be expanded to a larger number of hospitals and applied on a regular basis. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.

  13. Low cost, multiscale and multi-sensor application for flooded area mapping

    NASA Astrophysics Data System (ADS)

    Giordan, Daniele; Notti, Davide; Villa, Alfredo; Zucca, Francesco; Calò, Fabiana; Pepe, Antonio; Dutto, Furio; Pari, Paolo; Baldo, Marco; Allasia, Paolo

    2018-05-01

    Flood mapping and estimation of the maximum water depth are essential elements for first damage evaluation, civil protection intervention planning and detection of areas where remediation is needed. In this work, we present and discuss a methodology for mapping and quantifying flood severity over floodplains. The proposed methodology takes a multiscale and multi-sensor approach using free or low-cost data and sensors. We applied this method to the November 2016 Piedmont (northwestern Italy) flood. We first mapped the flooded areas at the basin scale using free satellite data of low to medium-high resolution from both SAR (Sentinel-1, COSMO-SkyMed) and multispectral sensors (MODIS, Sentinel-2). Using very- and ultra-high-resolution images from a low-cost aerial platform and a remotely piloted aerial system, we refined the flooded zone and detected the most damaged sectors. The presented method considers both urbanised and non-urbanised areas. Nadiral images have several limitations, in particular in urbanised areas, where terrestrial images were used to overcome them. Very- and ultra-high-resolution images were processed with structure from motion (SfM) to produce 3-D models. These data, combined with an available digital terrain model, allowed us to obtain maps of the flooded area, the maximum high-water level and damaged infrastructure.

  14. Lung parenchymal analysis on dynamic MRI in thoracic insufficiency syndrome to assess changes following surgical intervention

    NASA Astrophysics Data System (ADS)

    Jagadale, Basavaraj N.; Udupa, Jayaram K.; Tong, Yubing; Wu, Caiyun; McDonough, Joseph; Torigian, Drew A.; Campbell, Robert M.

    2018-02-01

    General surgeons, orthopedists, and pulmonologists individually treat patients with thoracic insufficiency syndrome (TIS). The benefits of growth-sparing procedures such as Vertical Expandable Prosthetic Titanium Rib (VEPTR) insertion for treating patients with TIS have been demonstrated. At present, however, there is no objective assessment metric to examine the different thoracic structural components individually as to their roles in the syndrome, in contributing to dynamics and function, and in influencing treatment outcome. Using thoracic dynamic MRI (dMRI), we have been developing a methodology to overcome this problem. In this paper, we extend this methodology from our previous structural analysis approaches to examining lung tissue properties. We process the T2-weighted dMRI acquisitions through a series of steps involving 4D image construction, intensity non-uniformity correction and standardization of the 4D image, lung segmentation, and estimation of the parameters describing lung tissue intensity distributions in the 4D image. Based on pre- and post-operative dMRI data sets from 25 TIS patients (predominantly neuromuscular and congenital conditions), we demonstrate how lung tissue can be characterized by the estimated distribution parameters. Our results show that standardized T2-weighted image intensity values decrease from the pre- to the post-operative condition, likely reflecting improved lung aeration post-operatively. In both pre- and post-operative conditions, the intensity values also decrease from end-expiration to end-inspiration, supporting the basic premise of our approach.
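
    As a rough illustration of the final step, the sketch below summarizes the standardized T2-weighted intensity distribution inside a segmented lung mask at one respiratory phase. The simple moment and percentile parameters are stand-ins; the study's actual distribution parameterization is not reproduced here.

```python
import numpy as np

def lung_intensity_stats(volume, lung_mask):
    """Summarize the standardized intensity distribution inside a lung mask.

    volume    : 3D array of standardized intensities at one time point
    lung_mask : boolean 3D array marking lung voxels
    Returns a dict of simple distribution parameters.
    """
    vals = volume[lung_mask]
    return {
        "mean": float(vals.mean()),
        "std": float(vals.std()),
        "median": float(np.median(vals)),
        # interquartile range as a robust spread estimate
        "iqr": float(np.percentile(vals, 75) - np.percentile(vals, 25)),
    }

# Comparing end-expiration vs end-inspiration, or pre- vs post-operative
# scans, then reduces to comparing these parameter sets across time points.
```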

  15. Evaluation of color grading impact in restoration process of archive films

    NASA Astrophysics Data System (ADS)

    Fliegel, Karel; Vítek, Stanislav; Páta, Petr; Janout, Petr; Myslík, Jiří; Pecák, Josef; Jícha, Marek

    2016-09-01

    Color grading of archive films is a very particular task in the process of their restoration. The ultimate goal of color grading here is to achieve the same look of the movie as intended at the time of its first presentation. The role of the expert restorer, the expert group and the digital colorist in this complicated process is to find the optimal settings of the digital color grading system so that the resulting image look is as close as possible to the estimate of the original reference release print adjusted by the expert group of cinematographers. A methodology for subjective assessment of perceived differences between the outcomes of color grading is introduced, and results of a subjective study are presented. Techniques for objective assessment of perceived differences are discussed, and their performance is evaluated using ground truth obtained from the subjective experiment. In particular, a solution based on a calibrated digital single-lens reflex camera and subsequent analysis of image features captured from the projection screen is described. The system, based on our previous work, is further developed so that it can be used for the analysis of projected images. It allows assessing color differences in these images and predicting their impact on the perceived difference in image look.

  16. Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications

    PubMed Central

    Moussa, Adel; El-Sheimy, Naser; Habib, Ayman

    2017-01-01

    Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data are sometimes unable to meet the requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates in a horizontal direction, based on comparisons between landslide scarps extracted from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes large numbers of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research. PMID:29057847

  17. Time Series UAV Image-Based Point Clouds for Landslide Progression Evaluation Applications.

    PubMed

    Al-Rawabdeh, Abdulla; Moussa, Adel; Foroutan, Marzieh; El-Sheimy, Naser; Habib, Ayman

    2017-10-18

    Landslides are major and constantly changing threats to urban landscapes and infrastructure. It is essential to detect and capture landslide changes regularly. Traditional methods for monitoring landslides are time-consuming, costly, dangerous, and the quality and quantity of the data are sometimes unable to meet the requirements of geotechnical projects. This motivates the development of more automatic and efficient remote sensing approaches for landslide progression evaluation. Automatic change detection involving low-altitude unmanned aerial vehicle image-based point clouds, although proven, is relatively unexplored, and little research has been done in terms of accounting for volumetric changes. In this study, a methodology for automatically deriving change displacement rates in a horizontal direction, based on comparisons between landslide scarps extracted from multiple time periods, has been developed. Compared with the iterative closest projected point (ICPP) registration method, the developed method takes full advantage of automated geometric measuring, leading to fast processing. The proposed approach easily processes large numbers of images from different epochs and enables the creation of registered image-based point clouds without the use of extensive ground control point information or further processing such as interpretation and image correlation. The produced results are promising for use in the field of landslide research.

  18. Building an outpatient imaging center: A case study at genesis healthcare system, part 2.

    PubMed

    Yanci, Jim

    2006-01-01

    In the second of 2 parts, this article focuses on process improvement projects through a case study at Genesis HealthCare System, located in Zanesville, OH. Operational efficiency is a key step in developing a freestanding diagnostic imaging center. The process improvement projects began with an Expert Improvement Session (EIS) on the scheduling process. An EIS is a facilitated meeting that can last anywhere from 3 hours to 2 days. Its intention is to take a group of people involved with the problem or operational process and work to understand current failures or breakdowns in the process. Recommendations are jointly developed to overcome any current deficiencies, and a work plan is structured to create ownership of the changes. A total of 11 EIS sessions occurred over the course of this project, covering 5 areas: the scheduling/telephone call process, pre-registration, verification/pre-certification, MRI throughput, and CT throughput. A single example of a project focused on the process improvement efforts follows. All of the process improvement projects used a quasi-"DMAIC" methodology (Define, Measure, Analyze, Improve, and Control).

  19. IJ-OpenCV: Combining ImageJ and OpenCV for processing images in biomedicine.

    PubMed

    Domínguez, César; Heras, Jónathan; Pascual, Vico

    2017-05-01

    The effective processing of biomedical images usually requires the interoperability of diverse software tools that have different aims but are complementary. The goal of this work is to develop a bridge to connect two of those tools: ImageJ, a program for image analysis in life sciences, and OpenCV, a computer vision and machine learning library. Based on a thorough analysis of ImageJ and OpenCV, we detected the features of these systems that could be enhanced, and developed a library to combine both tools, taking advantage of the strengths of each system. The library was implemented on top of the SciJava converter framework. We also provide a methodology to use this library. We have developed the publicly available library IJ-OpenCV that can be employed to create applications combining features from both ImageJ and OpenCV. From the perspective of ImageJ developers, they can use IJ-OpenCV to easily create plugins that use any functionality provided by the OpenCV library and explore different alternatives. From the perspective of OpenCV developers, this library provides a link to the ImageJ graphical user interface and all its features to handle regions of interest. The IJ-OpenCV library bridges the gap between ImageJ and OpenCV, allowing the connection and the cooperation of these two systems. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Distributed computing methodology for training neural networks in an image-guided diagnostic application.

    PubMed

    Plagianakos, V P; Magoulas, G D; Vrahatis, M N

    2006-03-01

    Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, training neural networks utilizing various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
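
    The partitioning idea is independent of the parallel virtual machine used: each processor evaluates the error and gradient on its own slice of the training set, and the partial results are combined into a full-batch update. Below is a minimal Python sketch with `multiprocessing` standing in for PVM and a linear least-squares model standing in for the neural network; it is an illustration of the data-partitioning scheme, not the paper's implementation.

```python
import numpy as np
from multiprocessing import Pool

def partial_grad(args):
    """Gradient of the mean squared error on one data partition."""
    w, X, y = args
    residual = X @ w - y
    return X.T @ residual / len(y)

def distributed_gradient(w, partitions, pool):
    # each worker evaluates the gradient on its own slice of the training
    # set; averaging equal-sized partial results gives the full gradient
    grads = pool.map(partial_grad, [(w, X, y) for X, y in partitions])
    return np.mean(grads, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(1000, 10)), rng.normal(size=1000)
    parts = [(X[i::4], y[i::4]) for i in range(4)]   # 4-way partition
    w = np.zeros(10)
    with Pool(4) as pool:
        for _ in range(100):                         # plain gradient descent
            w -= 0.1 * distributed_gradient(w, parts, pool)
```

    The speedup reported in the abstract comes from exactly this structure: the expensive error/gradient evaluation parallelizes across partitions, while only the small parameter vector is synchronized.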

  1. Research on Multi-Temporal PolInSAR Modeling and Applications

    NASA Astrophysics Data System (ADS)

    Hong, Wen; Pottier, Eric; Chen, Erxue

    2014-11-01

    In the study of theory and processing methodology, we apply accurate topographic phase to the Freeman-Durden decomposition for PolInSAR data. In addition, we present a TomoSAR imaging method based on convex optimization regularization theory. The target decomposition and reconstruction performance will be evaluated using multi-temporal L- and P-band fully polarimetric images acquired in the BioSAR campaigns. In the study of hybrid Quad-Pol system performance, we analyse the expression of the range ambiguity to signal ratio (RASR) in this architecture. Simulations are used to verify its advantage in the improvement of range ambiguities.

  2. Research on Multi-Temporal PolInSAR Modeling and Applications

    NASA Astrophysics Data System (ADS)

    Hong, Wen; Pottier, Eric; Chen, Erxue

    2014-11-01

    In the study of theory and processing methodology, we apply accurate topographic phase to the Freeman-Durden decomposition for PolInSAR data. In addition, we present a TomoSAR imaging method based on convex optimization regularization theory. The target decomposition and reconstruction performance will be evaluated using multi-temporal L- and P-band fully polarimetric images acquired in the BioSAR campaigns. In the study of hybrid Quad-Pol system performance, we analyse the expression of the range ambiguity to signal ratio (RASR) in this architecture. Simulations are used to verify its advantage in the improvement of range ambiguities.

  3. System engineering for image and video systems

    NASA Astrophysics Data System (ADS)

    Talbot, Raymond J., Jr.

    1997-02-01

    The National Law Enforcement and Corrections Technology Centers (NLECTC) support public law enforcement agencies with technology development, evaluation, planning, architecture, and implementation. The NLECTC Western Region has a particular emphasis on surveillance and imaging issues. Among its activities, working with government and industry, NLECTC-WR produces 'Guides to Best Practices and Acquisition Methodologies' that help government organizations make better-informed purchasing and operational decisions. This presentation includes specific examples from current activities. Through these systematic procedures, it is possible to design solutions optimally matched to the desired outcomes and provide a process for continuous improvement and greater public awareness of success.

  4. Functional imaging with low-resolution brain electromagnetic tomography (LORETA): a review.

    PubMed

    Pascual-Marqui, R D; Esslen, M; Kochi, K; Lehmann, D

    2002-01-01

    This paper reviews several recent publications that have successfully used the functional brain imaging method known as LORETA. Emphasis is placed on the electrophysiological and neuroanatomical basis of the method, on the localization properties of the method, and on the validation of the method in real experimental human data. Papers that criticize LORETA are briefly discussed. LORETA publications in the 1994-1997 period based localization inference on images of raw electric neuronal activity. In 1998, a series of papers appeared that based localization inference on the statistical parametric mapping methodology applied to high-time resolution LORETA images. Starting in 1999, quantitative neuroanatomy was added to the methodology, based on the digitized Talairach atlas provided by the Brain Imaging Centre, Montreal Neurological Institute. The combination of these methodological developments has placed LORETA at a level that compares favorably to the more classical functional imaging methods, such as PET and fMRI.

  5. System and method for improving ultrasound image acquisition and replication for repeatable measurements of vascular structures

    NASA Technical Reports Server (NTRS)

    Selzer, Robert H. (Inventor); Hodis, Howard N. (Inventor)

    2006-01-01

    High-resolution B-mode ultrasound images of the common carotid artery are obtained with an ultrasound transducer using a standardized methodology. Subjects are supine with the head counter-rotated 45 degrees using a head pillow. The jugular vein and carotid artery are located and positioned in a vertically stacked orientation. The transducer is rotated 90 degrees around the centerline of the transverse image of the stacked structure to obtain a longitudinal image while maintaining the vessels in a stacked position. A computerized methodology assists operators in accurately replicating images obtained over several spaced-apart examinations. The methodology utilizes a split-screen display in which the arterial ultrasound image from an earlier examination is displayed on one side of the screen while a real-time live ultrasound image from the current examination is displayed next to it on the opposite side. By viewing both images, whether simultaneously or alternately, while manually adjusting the ultrasound transducer, an operator is able to bring into view the real-time image that best matches a selected image from the earlier ultrasound examination. Utilizing this methodology for the measurement of vascular dimensions such as carotid arterial IMT and diameter, the coefficient of variation is substantially reduced, to values of approximately 1.0% to 1.25%. All images contain anatomical landmarks for reproducing probe angulation, including visualization of the carotid bulb and stacking of the jugular vein above the carotid artery, and the initial instrumentation settings used at the baseline measurement are maintained during all follow-up examinations.

  6. SEM contour based metrology for microlens process studies in CMOS image sensor technologies

    NASA Astrophysics Data System (ADS)

    Lakcher, Amine; Ostrovsky, Alain; Le-Gratiet, Bertrand; Berthier, Ludovic; Bidault, Laurent; Ducoté, Julien; Jamin-Mornet, Clémence; Mortini, Etienne; Besacier, Maxime

    2018-03-01

    From the first digital cameras, which appeared during the 1970s, to the cameras of current smartphones, image sensors have undergone significant technological development in recent decades. The development of CMOS image sensor technologies in the 1990s has been the main driver of this progress. The main component of an image sensor is the pixel. A pixel contains a photodiode connected to transistors, but only the photodiode area is light sensitive, which results in a significant loss of efficiency. To solve this issue, microlenses are used to focus the incident light on the photodiode. A microlens array is made of a transparent material, and each microlens has a spherical cap shape. To obtain this spherical shape, a lithography process generates resist blocks which are then annealed above their glass transition temperature (reflow). Even though the dimensions involved are larger than in advanced IC nodes, microlenses are sensitive to process variability during lithography and reflow. Good control of the microlens dimensions is key to optimizing the process and thus the performance of the final product. The purpose of this paper is to apply SEM contour metrology [1, 2, 3, 4] to microlenses in order to develop a relevant monitoring methodology and to propose new metrics for engineers to evaluate their process or optimize the design of the microlens arrays.

  7. GLOBECOM '84 - Global Telecommunications Conference, Atlanta, GA, November 26-29, 1984, Conference Record. Volume 3

    NASA Astrophysics Data System (ADS)

    Attention is given to quality assurance methodologies in development life cycles, optical intercity transmission systems, multiaccess protocols, system and technology aspects of regional/domestic satellites, advances in SSB-AM radio transmission over terrestrial and satellite networks, and development environments for telecommunications systems. Other subjects addressed concern business communication networks for voice and data, VLSI in local networks and communication protocols, product evaluation and support, an update regarding Videotex, topics in communication theory, topics in radio propagation, a status report on the societal effects of technology in the workplace, digital image processing, and adaptive signal processing for communications. The management of the reliability function in the development process is considered, along with gigabit technologies for long-distance, large-capacity optical transmission equipment, the application of gallium arsenide analog and digital integrated circuits for high-speed fiber-optic communications, and a simple algorithm for image data coding.

  8. Automated measurement of human body shape and curvature using computer vision

    NASA Astrophysics Data System (ADS)

    Pearson, Jeremy D.; Hobson, Clifford A.; Dangerfield, Peter H.

    1993-06-01

    A system to measure the surface shape of the human body has been constructed. The system uses a fringe pattern generated by projection of multi-stripe structured light. The optical methodology used is fully described and the algorithms used to process acquired digital images are outlined. The system has been applied to the measurement of the shape of the human back in scoliosis.

  9. Real-Time Measurement of Width and Height of Weld Beads in GMAW Processes.

    PubMed

    Pinto-Lopera, Jesús Emilio; S T Motta, José Mauricio; Absi Alfaro, Sadek Crisostomo

    2016-09-15

    Weld bead geometry is one of the most important parameters associated with weld quality in welding processes. It is a significant requirement in a welding project, especially in automatic welding systems where a specific width, height, or penetration of the weld bead is needed. This paper presents a novel technique for real-time measurement of the width and height of weld beads in gas metal arc welding (GMAW) using a single high-speed camera and a long-pass optical filter in a passive vision system. The measuring method is based on digital image processing techniques, and the image calibration process is based on projective transformations. The measurement process takes less than 3 milliseconds per image, which allows a transfer rate of more than 300 frames per second. The proposed methodology can be used in any metal transfer mode of a gas metal arc welding process and does not suffer from occlusion problems. The responses of the measurement system presented here are in good agreement with off-line data collected by a common laser-based 3D scanner. Each measurement was compared using a statistical Welch's t-test of the null hypothesis, which in no case exceeded the significance level α = 0.01, validating the results and the performance of the proposed vision system.
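
    The two ingredients named in the abstract, projective image calibration and intensity-based measurement, can be compressed into a short OpenCV sketch. The pixel/millimetre correspondences and the bright-bead assumption below are hypothetical placeholders; the published system's processing chain is more involved.

```python
import cv2
import numpy as np

# Hypothetical calibration: pixel corners of a reference target paired with
# their known metric coordinates (mm) define the projective transformation.
px = np.float32([[102, 88], [521, 95], [515, 402], [98, 396]])
mm = np.float32([[0, 0], [40, 0], [40, 30], [0, 30]])
H = cv2.getPerspectiveTransform(px, mm)

def bead_dimensions(frame):
    """Width/height (mm) of the bright bead profile in one 8-bit grey frame."""
    rect = cv2.warpPerspective(frame, H, (40, 30))   # 1 px == 1 mm after warp
    _, bw = cv2.threshold(rect, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ys, xs = np.nonzero(bw)
    if xs.size == 0:
        return None
    # bounding-box extent of the segmented bead, in millimetres
    return xs.max() - xs.min() + 1, ys.max() - ys.min() + 1
```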

  10. A theoretical-experimental methodology for assessing the sensitivity of biomedical spectral imaging platforms, assays, and analysis methods.

    PubMed

    Leavesley, Silas J; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter; Rich, Thomas C

    2018-01-01

    Spectral imaging technologies have been used for many years by the remote sensing community. More recently, these approaches have been applied to biomedical problems, where they have shown great promise. However, biomedical spectral imaging has been complicated by the high variance of biological data and the reduced ability to construct test scenarios with fixed ground truths. Hence, it has been difficult to objectively assess and compare biomedical spectral imaging assays and technologies. Here, we present a standardized methodology that allows assessment of the performance of biomedical spectral imaging equipment, assays, and analysis algorithms. This methodology incorporates real experimental data and a theoretical sensitivity analysis, preserving the variability present in biomedical image data. We demonstrate that this approach can be applied in several ways: to compare the effectiveness of spectral analysis algorithms, to compare the response of different imaging platforms, and to assess the level of target signature required to achieve a desired performance. Results indicate that it is possible to compare even very different hardware platforms using this methodology. Future applications could include a range of optimization tasks, such as maximizing detection sensitivity or acquisition speed, providing high utility for investigators ranging from design engineers to biomedical scientists. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Rational Variety Mapping for Contrast-Enhanced Nonlinear Unsupervised Segmentation of Multispectral Images of Unstained Specimen

    PubMed Central

    Kopriva, Ivica; Hadžija, Mirko; Popović Hadžija, Marijana; Korolija, Marina; Cichocki, Andrzej

    2011-01-01

    A methodology is proposed for nonlinear contrast-enhanced unsupervised segmentation of multispectral (color) microscopy images of principally unstained specimens. The methodology exploits spectral diversity and spatial sparseness to find anatomical differences between materials (cells, nuclei, and background) present in the image. It consists of rth-order rational variety mapping (RVM) followed by matrix/tensor factorization. Sparseness constraint implies duality between nonlinear unsupervised segmentation and multiclass pattern assignment problems. Classes not linearly separable in the original input space become separable with high probability in the higher-dimensional mapped space. Hence, RVM mapping has two advantages: it takes implicitly into account nonlinearities present in the image (ie, they are not required to be known) and it increases spectral diversity (ie, contrast) between materials, due to increased dimensionality of the mapped space. This is expected to improve performance of systems for automated classification and analysis of microscopic histopathological images. The methodology was validated using RVM of the second and third orders of the experimental multispectral microscopy images of unstained sciatic nerve fibers (nervus ischiadicus) and of unstained white pulp in the spleen tissue, compared with a manually defined ground truth labeled by two trained pathophysiologists. The methodology can also be useful for additional contrast enhancement of images of stained specimens. PMID:21708116
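
    The pipeline, a nonlinear lift of each pixel's spectral vector followed by a factorization under a nonnegativity constraint, can be imitated loosely in Python. The sketch below uses scikit-learn's polynomial feature map and NMF as stand-ins and is not the authors' exact RVM/tensor-factorization algorithm.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.decomposition import NMF

def rvm_like_segmentation(img, order=2, n_classes=3):
    """Monomial mapping of each pixel's spectrum, then NMF clustering.

    img : (rows, cols, bands) multispectral image with nonnegative
          intensities, so all monomial features stay nonnegative for NMF.
    Returns a per-pixel class label map.
    """
    X = img.reshape(-1, img.shape[-1]).astype(float)
    # a second-order mapping lifts pixels into a higher-dimensional space
    # where nonlinearly mixed classes are more likely to be separable
    Xm = PolynomialFeatures(degree=order, include_bias=False).fit_transform(X)
    W = NMF(n_components=n_classes, max_iter=500).fit_transform(Xm)
    return W.argmax(axis=1).reshape(img.shape[:2])
```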

  12. Mining biomedical images towards valuable information retrieval in biomedical and life sciences.

    PubMed

    Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas

    2016-01-01

    Biomedical images are helpful sources for the scientists and practitioners in drawing significant hypotheses, exemplifying approaches and describing experimental results in published biomedical literature. In last decades, there has been an enormous increase in the amount of heterogeneous biomedical image production and publication, which results in a need for bioimaging platforms for feature extraction and analysis of text and content in biomedical images to take advantage in implementing effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, used methodologies, produced results, achieved accuracies and limitations. Our comparative conclusions include current challenges for bioimaging software with selective image mining, embedded text extraction and processing of complex natural language queries. © The Author(s) 2016. Published by Oxford University Press.

  13. Multi-frequency Defect Selective Imaging via Nonlinear Ultrasound

    NASA Astrophysics Data System (ADS)

    Solodov, Igor; Busse, Gerd

    The concept of defect-selective ultrasonic nonlinear imaging is based on visualization of strongly nonlinear inclusions in the form of localized cracked defects. For intense excitation, the ultrasonic response of defects is affected by mechanical constraint between their fragments that makes their vibrations extremely nonlinear. The cracked flaws, therefore, efficiently generate multiple new frequencies, which can be used as a nonlinear "tag" to detect and image them. In this paper, the methodologies of nonlinear scanning laser vibrometry (NSLV) and nonlinear air-coupled emission (NACE) are applied for nonlinear imaging of various defects in hi-tech and constructional materials. A broad database obtained demonstrates evident advantages of the nonlinear approach over its linear counterpart. The higher-order nonlinear frequencies provide increase in signal-to-noise ratio and enhance the contrast of imaging. Unlike conventional ultrasonic instruments, the nonlinear approach yields abundant multi-frequency information on defect location. The application of image recognition and processing algorithms is described and shown to improve reliability and quality of ultrasonic imaging.

  14. Importance of methodology on (99m)technetium dimercapto-succinic acid scintigraphic image quality: imaging pilot study for RIVUR (Randomized Intervention for Children With Vesicoureteral Reflux) multicenter investigation.

    PubMed

    Ziessman, Harvey A; Majd, Massoud

    2009-07-01

    We reviewed our experience with (99m)technetium dimercapto-succinic acid scintigraphy obtained during an imaging pilot study for a multicenter investigation (Randomized Intervention for Children With Vesicoureteral Reflux) of the effectiveness of daily antimicrobial prophylaxis for preventing recurrent urinary tract infection and renal scarring. We analyzed imaging methodology and its relation to diagnostic image quality. (99m)Technetium dimercapto-succinic acid imaging guidelines were provided to participating sites. High-resolution planar imaging with parallel hole or pinhole collimation was required. Two core reviewers evaluated all submitted images. Analysis included appropriate views, presence or lack of patient motion, adequate magnification, sufficient counts and diagnostic image quality. Inter-reader agreement was evaluated. We evaluated 70, (99m)technetium dimercapto-succinic acid studies from 14 institutions. Variability was noted in methodology and image quality. Correlation (r value) between dose administered and patient age was 0.780. For parallel hole collimator imaging good correlation was noted between activity administered and counts (r = 0.800). For pinhole imaging the correlation was poor (r = 0.110). A total of 10 studies (17%) were rejected for quality issues of motion, kidney overlap, inadequate magnification, inadequate counts and poor quality images. The submitting institution was informed and provided with recommendations for improving quality, and resubmission of another study was required. Only 4 studies (6%) were judged differently by the 2 reviewers, and the differences were minor. Methodology and image quality for (99m)technetium dimercapto-succinic acid scintigraphy varied more than expected between institutions. The most common reason for poor image quality was inadequate count acquisition with insufficient attention to the tradeoff between administered dose, length of image acquisition, start time of imaging and resulting image quality. Inter-observer core reader agreement was high. The pilot study ensured good diagnostic quality standardized images for the Randomized Intervention for Children With Vesicoureteral Reflux investigation.

  15. Collaborative Initiative on Fetal Alcohol Spectrum Disorders: Methodology of Clinical Projects

    PubMed Central

    Mattson, Sarah N.; Foroud, Tatiana; Sowell, Elizabeth R.; Jones, Kenneth Lyons; Coles, Claire D.; Fagerlund, Åse; Autti-Rämö, Ilona; May, Philip A.; Adnams, Colleen M.; Konovalova, Valentina; Wetherill, Leah; Arenson, Andrew D.; Barnett, William K.; Riley, Edward P.

    2009-01-01

    The Collaborative Initiative on Fetal Alcohol Spectrum Disorders (CIFASD) was created in 2003 to further understanding of fetal alcohol spectrum disorders. Clinical and basic science projects collect data across multiple sites using standardized methodology. This paper describes the methodology being used by the clinical projects that pertain to assessment of children and adolescents. Domains being addressed are dysmorphology, neurobehavior, 3D facial imaging, and brain imaging. PMID:20036488

  16. Study of Tissue Phantoms, Tissues, and Contrast Agent with the Biophotoacoustic Radar and Comparison to Ultrasound Imaging for Deep Subsurface Imaging

    NASA Astrophysics Data System (ADS)

    Alwi, R.; Telenkov, S.; Mandelis, A.; Gu, F.

    2012-11-01

    In this study, the imaging capability of our wide-spectrum frequency-domain photoacoustic (FD-PA) imaging alias "photoacoustic radar" methodology for imaging of soft tissues is explored. A practical application of the mathematical correlation processing method with relatively long (1 ms) frequency-modulated optical excitation is demonstrated for reconstruction of the spatial location of the PA sources. Image comparison with ultrasound (US) modality was investigated to see the complementarity between the two techniques. The obtained results with a phased array probe on tissue phantoms and their comparison to US images demonstrated that the FD-PA technique has strong potential for deep subsurface imaging with excellent contrast and high signal-to-noise ratio. FD-PA images of blood vessels in a human wrist and an in vivo subcutaneous tumor in a rat model are presented. As in other imaging modalities, the employment of contrast agents is desirable to improve the capability of medical diagnostics. Therefore, this study also evaluated and characterized the use of Food and Drug Administration (FDA)-approved superparamagnetic iron oxide nanoparticles (SPION) as PA contrast agents.

  17. Axial segmentation of lungs CT scan images using canny method and morphological operation

    NASA Astrophysics Data System (ADS)

    Noviana, Rina; Febriani, Rasal, Isram; Lubis, Eva Utari Cintamurni

    2017-08-01

    Segmentation is a very important topic in digital image processing. It is found in various fields of image analysis, particularly in medical imaging. Axial segmentation of lung CT scans is useful for the diagnosis of abnormalities and for surgery planning, as it allows every section within the lungs to be examined. The results of the segmentation can be used to detect the presence of nodules. The methods used in this analysis are image cropping, image binarization, Canny edge detection and morphological operations. Image cropping is performed to separate the lung areas, which form the region of interest (ROI), from the rest of the CT scan image. The binarization step generates a binary image with two grey-level values, black and white. The Canny method is used for edge detection, and a morphological operation is applied to smooth the lung edges. The segmentation methodology shows a good result, producing a very smooth edge. Moreover, the image background can also be removed in order to keep the main focus on the lungs.
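
    A minimal OpenCV rendering of the described chain (crop, binarize, Canny, morphological smoothing) is sketched below; the crop box and thresholds are placeholders chosen for illustration, not values from the paper.

```python
import cv2

def segment_lungs(ct_slice):
    """Rough axial lung segmentation on an 8-bit greyscale CT slice."""
    roi = ct_slice[60:450, 40:470]                    # hypothetical crop box
    # Otsu binarization: air-filled lungs are dark, so invert the threshold
    _, bw = cv2.threshold(roi, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    edges = cv2.Canny(bw, 50, 150)                    # lung boundary edges
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    # morphological closing bridges small gaps so the contour comes out smooth
    return cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
```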

  18. Bayesian superresolution

    NASA Astrophysics Data System (ADS)

    Isakson, Steve Wesley

    2001-12-01

    Well-known principles of physics explain why resolution restrictions occur in images produced by optical diffraction-limited systems. The limitations involved are present in all diffraction-limited imaging systems, including acoustical and microwave. In most circumstances, however, prior knowledge about the object and the imaging system can lead to resolution improvements. In this dissertation I outline a method to incorporate prior information into the process of reconstructing images to superresolve the object beyond the above limitations. This dissertation research develops the details of this methodology. The approach can provide the most-probable global solution employing a finite number of steps in both far-field and near-field images. In addition, in order to overcome the effects of noise present in any imaging system, this technique provides a weighted image that quantifies the likelihood of various imaging solutions. By utilizing Bayesian probability, the procedure is capable of incorporating prior information about both the object and the noise to overcome the resolution limitation present in many imaging systems. Finally I will present an imaging system capable of detecting the evanescent waves missing from far-field systems, thus improving the resolution further.
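
    To make the Bayesian idea concrete in its simplest form: with a Gaussian noise model and a Gaussian prior, the most-probable image has a closed form. The sketch below implements that special case as Fourier-domain Tikhonov/Wiener deconvolution; the dissertation's method is far more general (a global search over solutions, near-field and evanescent-wave data), which this toy example does not capture.

```python
import numpy as np

def map_deconvolve(blurred, psf, noise_to_prior=1e-2):
    """MAP image estimate under Gaussian noise and a Gaussian prior.

    With both densities Gaussian, the most-probable image has the closed
    Wiener/Tikhonov form  X = conj(H) * Y / (|H|^2 + lambda)  in Fourier
    space, where lambda is the noise-to-prior variance ratio.
    """
    H = np.fft.fft2(psf, s=blurred.shape)   # transfer function of the system
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + noise_to_prior)
    return np.real(np.fft.ifft2(X))
```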

  19. Optimization of a novel improver gel formulation for Barbari flat bread using response surface methodology.

    PubMed

    Pourfarzad, Amir; Haddad Khodaparast, Mohammad Hossein; Karimi, Mehdi; Mortazavi, Seyed Ali

    2014-10-01

    Nowadays, the use of bread improvers has become an essential part of improving production methods and the quality of bakery products. In the present study, response surface methodology (RSM) was used to determine the optimum improver gel formulation giving the best quality, shelf life, sensory and image properties for Barbari flat bread. Sodium stearoyl-2-lactylate (SSL), diacetyl tartaric acid esters of monoglyceride (DATEM) and propylene glycol (PG) were the constituents of the gel considered in this study. A second-order polynomial model was fitted to each response, and the regression coefficients were determined using the least-squares method. The optimum gel formulation was found to be 0.49 % SSL, 0.36 % DATEM and 0.5 % PG when the desirability function method was applied. There was good agreement between the experimental data and their predicted counterparts. The results show that RSM, image processing and texture analysis are useful tools to investigate, approximate and predict a large number of bread properties.
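
    The model-fitting step reduces to ordinary least squares on a quadratic basis. Here is a small NumPy sketch with hypothetical factor levels and responses; the desirability-function optimization would then be run over the fitted surface.

```python
import numpy as np

def fit_second_order(X, y):
    """Least-squares fit of a full quadratic response surface in 3 factors.

    X : (n, 3) design matrix of factor levels (e.g. SSL, DATEM, PG fractions)
    y : (n,) measured response (e.g. a bread quality score)
    Returns coefficients for [1, x1, x2, x3, x1^2, x2^2, x3^2,
                              x1*x2, x1*x3, x2*x3].
    """
    x1, x2, x3 = X.T
    A = np.column_stack([np.ones(len(y)), x1, x2, x3,
                         x1**2, x2**2, x3**2,
                         x1*x2, x1*x3, x2*x3])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta
```

    The optimum reported in the abstract (0.49 % SSL, 0.36 % DATEM, 0.5 % PG) is, in this notation, the point maximizing the combined desirability over such fitted surfaces.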

  20. Aberration correction for transcranial photoacoustic tomography of primates employing adjunct image data

    NASA Astrophysics Data System (ADS)

    Huang, Chao; Nie, Liming; Schoonover, Robert W.; Guo, Zijian; Schirra, Carsten O.; Anastasio, Mark A.; Wang, Lihong V.

    2012-06-01

    A challenge in photoacoustic tomography (PAT) brain imaging is to compensate for aberrations in the measured photoacoustic data due to their propagation through the skull. By use of information regarding the skull morphology and composition obtained from adjunct x-ray computed tomography image data, we developed a subject-specific imaging model that accounts for such aberrations. A time-reversal-based reconstruction algorithm was employed with this model for image reconstruction. The image reconstruction methodology was evaluated in experimental studies involving phantoms and monkey heads. The results establish that our reconstruction methodology can effectively compensate for skull-induced acoustic aberrations and improve image fidelity in transcranial PAT.

  1. Longitudinal assessment of treatment effects on pulmonary ventilation using 1H/3He MRI multivariate templates

    NASA Astrophysics Data System (ADS)

    Tustison, Nicholas J.; Contrella, Benjamin; Altes, Talissa A.; Avants, Brian B.; de Lange, Eduard E.; Mugler, John P.

    2013-03-01

    The utility of pulmonary functional imaging techniques, such as hyperpolarized 3He MRI, has encouraged their inclusion in research studies for longitudinal assessment of disease progression and the study of treatment effects. We present methodology for performing voxelwise statistical analysis of ventilation maps derived from hyperpolarized 3He MRI which incorporates multivariate template construction using simultaneous acquisition of 1H and 3He images. Additional processing steps include intensity normalization, bias correction, 4-D longitudinal segmentation, and generation of expected ventilation maps prior to voxelwise regression analysis. The analysis is demonstrated on a cohort of eight individuals with diagnosed cystic fibrosis (CF) undergoing treatment, imaged five times at two-week intervals under a prescribed treatment schedule.
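
    The voxelwise regression step can be pictured as fitting, at every voxel of the registered 4D data, a trend of ventilation against session time. Below is a vectorized NumPy sketch in which a plain linear trend stands in for the paper's regression model.

```python
import numpy as np

def voxelwise_slopes(ventilation_4d, times):
    """Per-voxel linear trend of ventilation across imaging sessions.

    ventilation_4d : (t, x, y, z) registered ventilation maps
    times          : (t,) acquisition times (e.g. weeks)
    Returns an (x, y, z) map of fitted slopes (a treatment-effect proxy).
    """
    t = np.asarray(times, dtype=float)
    tc = t - t.mean()
    v = ventilation_4d.reshape(len(t), -1)
    # closed-form least-squares slope, evaluated for all voxels at once
    slopes = (tc @ (v - v.mean(axis=0))) / (tc @ tc)
    return slopes.reshape(ventilation_4d.shape[1:])
```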

  2. Validating a new methodology for optical probe design and image registration in fNIRS studies

    PubMed Central

    Wijeakumar, Sobanawartiny; Spencer, John P.; Bohache, Kevin; Boas, David A.; Magnotta, Vincent A.

    2015-01-01

    Functional near-infrared spectroscopy (fNIRS) is an imaging technique that relies on the principle of shining near-infrared light through tissue to detect changes in hemodynamic activation. An important methodological issue encountered is the creation of optimized probe geometry for fNIRS recordings. Here, across three experiments, we describe and validate a processing pipeline designed to create an optimized, yet scalable probe geometry based on selected regions of interest (ROIs) from the functional magnetic resonance imaging (fMRI) literature. In experiment 1, we created a probe geometry optimized to record changes in activation from target ROIs important for visual working memory. Positions of the sources and detectors of the probe geometry on an adult head were digitized using a motion sensor and projected onto a generic adult atlas and a segmented head obtained from the subject's MRI scan. In experiment 2, the same probe geometry was scaled down to fit a child's head and later digitized and projected onto the generic adult atlas and a segmented volume obtained from the child's MRI scan. Using visualization tools and by quantifying the amount of intersection between target ROIs and channels, we show that out of 21 ROIs, 17 and 19 ROIs intersected with fNIRS channels from the adult and child probe geometries, respectively. Further, both the adult atlas and adult subject-specific MRI approaches yielded similar results and can be used interchangeably. However, results suggest that segmented heads obtained from MRI scans be used for registering children's data. Finally, in experiment 3, we further validated our processing pipeline by creating a different probe geometry designed to record from target ROIs involved in language and motor processing. PMID:25705757

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harben, P E; Harris, D; Myers, S

    Seismic imaging and tracking methods have intelligence and monitoring applications. Current systems, however, do not adequately calibrate or model the unknown geological heterogeneity. Current systems are also not designed for rapid data acquisition and analysis in the field. This project seeks to build the core technological capabilities, coupled with innovative deployment, processing, and analysis methodologies, to allow seismic methods to be effectively utilized in the applications of seismic imaging and vehicle tracking where rapid (minutes to hours) and real-time analysis is required. The goal of this project is to build capabilities in acquisition system design and utilization, in full 3D finite difference modeling, and in statistical characterization of geological heterogeneity. Such capabilities, coupled with a rapid field analysis methodology based on matched field processing, are applied to problems associated with surveillance, battlefield management, finding hard and deeply buried targets, and portal monitoring. This project benefits the U.S. military and intelligence community in support of LLNL's national-security mission. FY03 was the final year of this project. In the 2.5 years this project was active, numerous and varied developments and milestones were accomplished. A wireless communication module for seismic data was developed to facilitate rapid seismic data acquisition and analysis. The E3D code was enhanced to include topographic effects. Codes were developed to implement the Karhunen-Loeve (K-L) statistical methodology for generating geological heterogeneity that can be utilized in E3D modeling. The matched field processing methodology applied to vehicle tracking, based on a field calibration to characterize geological heterogeneity, was tested and successfully demonstrated in a tank tracking experiment at the Nevada Test Site. A 3-seismic-array vehicle tracking testbed was installed on-site at LLNL for testing real-time seismic tracking methods. A field experiment was conducted over a tunnel at the Nevada Test Site that quantified the tunnel reflection signal and, coupled with modeling, identified key needs and requirements in the experimental layout of sensors. A large field experiment was conducted at the Lake Lynn Laboratory, a mine safety research facility in Pennsylvania, over a tunnel complex in realistic, difficult conditions. This experiment gathered the necessary data for a full 3D attempt to apply the methodology. The experiment also collected data to analyze the capabilities to detect and locate in-tunnel explosions for mine safety and other applications.

  4. Retinal Image Quality Assessment for Spaceflight-Induced Vision Impairment Study

    NASA Technical Reports Server (NTRS)

    Vu, Amanda Cadao; Raghunandan, Sneha; Vyas, Ruchi; Radhakrishnan, Krishnan; Taibbi, Giovanni; Vizzeri, Gianmarco; Grant, Maria; Chalam, Kakarla; Parsons-Wingerter, Patricia

    2015-01-01

    Long-term exposure to space microgravity poses significant risks for visual impairment. Evidence suggests such vision changes are linked to cephalad fluid shifts, prompting a need to directly quantify microgravity-induced retinal vascular changes. The quality of retinal images used for such vascular remodeling analysis, however, is dependent on imaging methodology. For our exploratory study, we hypothesized that retinal images captured using fluorescein imaging methodologies would be of higher quality in comparison to images captured without fluorescein. A semi-automated image quality assessment was developed using Vessel Generation Analysis (VESGEN) software and MATLAB® image analysis toolboxes. An analysis of ten images found that the fluorescein imaging modality provided a 36% increase in overall image quality (two-tailed p=0.089) in comparison to nonfluorescein imaging techniques.

  5. Retinal image registration for eye movement estimation.

    PubMed

    Kolar, Radim; Tornow, Ralf P; Odstrcilik, Jan

    2015-01-01

    This paper describes a novel methodology for eye fixation measurement using a unique video-ophthalmoscope setup and an advanced image registration approach. The representation of eye movements via the Poincare plot is also introduced. The properties, limitations and perspectives of this methodology are discussed.
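
    A Poincare plot of an eye-position trace is simply the signal plotted against a lag-one copy of itself; stable fixation appears as a tight cluster along the diagonal. A minimal sketch, with illustrative naming only:

```python
import numpy as np
import matplotlib.pyplot as plt

def poincare_plot(x):
    """Lag-1 Poincare plot of a horizontal eye-position trace x(t)."""
    x = np.asarray(x, dtype=float)
    plt.scatter(x[:-1], x[1:], s=4)      # each point pairs x(t) with x(t+1)
    plt.xlabel("position at sample t")
    plt.ylabel("position at sample t+1")
    plt.axis("equal")
    plt.show()
```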

  6. Integration of High-resolution Data for Temporal Bone Surgical Simulations

    PubMed Central

    Wiet, Gregory J.; Stredney, Don; Powell, Kimerly; Hittle, Brad; Kerwin, Thomas

    2016-01-01

    Purpose To report on the state of the art in obtaining high-resolution 3D data of the microanatomy of the temporal bone and to process those data for integration into a surgical simulator. Specifically, we report on our experience in this area and discuss the issues involved in furthering the field. Data Sources Current temporal bone image acquisition and image processing established in the literature, as well as in-house methodological development. Review Methods We reviewed the current English literature for the techniques used in computer-based temporal bone simulation systems to obtain and process anatomical data for use within the simulation. Search terms included "temporal bone simulation, surgical simulation, temporal bone." Articles were chosen and reviewed that directly addressed data acquisition and processing/segmentation and enhancement, with emphasis given to computer-based systems. We present the results from this review in relation to our approach. Conclusions High-resolution CT imaging (≤100 μm voxel resolution), along with unique image processing and rendering algorithms and structure-specific enhancement, is needed for high-level training and assessment using temporal bone surgical simulators. Higher-resolution clinical scanning and automated processes that run in efficient time frames are needed before these systems can routinely support pre-surgical planning. Additionally, protocols such as that provided in this manuscript need to be disseminated to increase the number and variety of virtual temporal bones available for training and performance assessment. PMID:26762105

  7. A "Do-It-Yourself" phenotyping system: measuring growth and morphology throughout the diel cycle in rosette shaped plants.

    PubMed

    Dobrescu, Andrei; Scorza, Livia C T; Tsaftaris, Sotirios A; McCormick, Alistair J

    2017-01-01

    Improvements in high-throughput phenotyping technologies are rapidly expanding the scope and capacity of plant biology studies to measure growth traits. Nevertheless, the costs of commercial phenotyping equipment and infrastructure remain prohibitively expensive for wide-scale uptake, while academic solutions can require significant local expertise. Here we present a low-cost methodology for plant biologists to build their own phenotyping system for quantifying growth rates and phenotypic characteristics of Arabidopsis thaliana rosettes throughout the diel cycle. We constructed an image capture system consisting of a near-infrared (NIR, 940 nm) LED panel with a mounted Raspberry Pi NoIR camera and developed a MATLAB-based software module (iDIEL Plant) to characterise rosette expansion. Our software was able to accurately segment and characterise multiple rosettes within an image, regardless of plant arrangement or genotype, and to batch process image sets. To further validate our system, wild-type Arabidopsis plants (Col-0) and two mutant lines with reduced Rubisco contents, pale leaves and slow-growth phenotypes (1a3b and 1a2b) were grown on a single plant tray. Plants were imaged from 9 to 24 days after germination every 20 min throughout the 24 h light-dark growth cycle (i.e. the diel cycle). The resulting dataset provided a dynamic and uninterrupted characterisation of differences in rosette growth and expansion rates over time for the three lines tested. Our methodology offers a straightforward solution for setting up automated, scalable and low-cost phenotyping facilities in a wide range of lab environments, which could greatly increase the processing power and scalability of Arabidopsis soil growth experiments.
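
    As a loose Python analogue of the segmentation step (the published iDIEL Plant module is MATLAB-based, and the threshold choice and minimum speck size here are placeholders), one rosette-area pass over a single NIR tray image might look like this:

```python
import cv2

def rosette_areas(nir_image, min_px=200):
    """Segment every rosette in one 8-bit NIR tray image; return pixel areas.

    Leaves reflect strongly at 940 nm, so rosettes appear bright against
    the soil background and a global Otsu threshold is a reasonable start.
    """
    blur = cv2.GaussianBlur(nir_image, (5, 5), 0)
    _, bw = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(bw)
    # one area per detected rosette, ignoring specks below min_px
    return [int(stats[i, cv2.CC_STAT_AREA]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_px]
```

    Relative growth rate between consecutive frames then follows per rosette as ln(A2/A1)/(t2 - t1).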

  8. Application of atomic force microscopy as a nanotechnology tool in food science.

    PubMed

    Yang, Hongshun; Wang, Yifen; Lai, Shaojuan; An, Hongjie; Li, Yunfei; Chen, Fusheng

    2007-05-01

    Atomic force microscopy (AFM) provides a method for detecting nanoscale structural information. First, this review explains the fundamentals of AFM, including its principle, operation, and analysis. Applications of AFM in food science and technology research are then reported, including qualitative macromolecule and polymer imaging, complicated or quantitative structure analysis, molecular interaction, molecular manipulation, surface topography, and nanofood characterization. The results suggest that AFM can bring insightful knowledge of food properties, and that AFM analysis can be used to illustrate some mechanisms of property changes during processing and storage. However, the current difficulty in applying AFM to food research is the lack of appropriate methodology for different food systems. A better understanding of AFM technology and the development of corresponding methodologies for complicated food systems would lead to a more in-depth understanding of food properties at the macromolecular level and broaden AFM's applications. The AFM results could greatly improve food processing and storage technologies.

  9. Nutrient Stress Detection in Corn Using Neural Networks and AVIRIS Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Estep, Lee

    2001-01-01

    AVIRIS image cube data have been processed for the detection of nutrient stress in corn, both by known ratio-type algorithms and by trained neural networks. The USDA ARS Variable Rate Nitrogen Application (VRAT) experimental farm in Shelton, NE, was the site used in the study. Upon application of ANOVA and Dunnett multiple comparison tests to the outcomes of both the neural network processing and the ratio-type algorithms, it was found that the neural network methodology provides a better overall capability to separate nutrient-stressed crops from in-field controls.
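
    In outline, the neural approach consumes whole per-pixel spectra rather than a hand-picked band ratio. Below is a minimal scikit-learn sketch with placeholder random data; real work would use labelled AVIRIS spectra from the VRAT plots, and only the 224-band shape reflects the actual sensor.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder data: per-pixel reflectance spectra (n_pixels, n_bands) with
# 0/1 labels (0 = in-field control, 1 = nutrient-stressed). AVIRIS delivers
# 224 spectral bands per pixel.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(500, 224))
y = rng.integers(0, 2, size=500)          # placeholder labels, not real data

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
stress_map = clf.predict(X)               # per-pixel stress classification
```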

  10. Autonomous control systems: applications to remote sensing and image processing

    NASA Astrophysics Data System (ADS)

    Jamshidi, Mohammad

    2001-11-01

    One of the main challenges of any control (or image processing) paradigm is being able to handle complex systems under unforeseen uncertainties. A system may be called complex here if its dimension (order) is too high and its model (if available) is nonlinear and interconnected, and if information on the system is so uncertain that classical techniques cannot easily handle the problem. Examples of complex systems are power networks, space robotic colonies, the national air traffic control system, an integrated manufacturing plant, the Hubble Telescope, and the International Space Station. Soft computing, a consortium of methodologies such as fuzzy logic, neuro-computing, genetic algorithms and genetic programming, has proven to be a powerful tool for adding autonomy and semi-autonomy to many complex systems. For such systems, the size of the soft computing control architecture will be nearly infinite. In this paper, new paradigms using soft computing approaches are utilized to design autonomous controllers and image enhancers for a number of application areas. These applications are satellite array formations for synthetic aperture radar interferometry (InSAR) and the enhancement of analog and digital images.

  11. A New Experiment on Bengali Character Recognition

    NASA Astrophysics Data System (ADS)

    Barman, Sumana; Bhattacharyya, Debnath; Jeon, Seung-Whan; Kim, Tai-Hoon; Kim, Haeng-Kon

    This paper presents a method that uses a view-based approach in a Bangla Optical Character Recognition (OCR) system, providing a reduced data set to the ANN classification engine rather than the traditional OCR methods. It describes how Bangla characters are processed, trained and then recognized with the use of a backpropagation artificial neural network. This is the first published account of using a segmentation-free optical character recognition system for Bangla with a view-based approach. The methodology presented here assumes that the OCR pre-processor has presented the input images to the classification engine described here. The size and the font face used to render the characters are also significant in both training and classification. The images are first converted into grayscale and then to binary images; these images are then scaled to fit a pre-determined area with a fixed but significant number of pixels. The feature vectors are then formed by extracting the characteristic points, which in this case are simply a series of 0s and 1s of fixed length. Finally, an artificial neural network is chosen for the training and classification process.
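
    A minimal sketch of the preprocessing chain described above (grayscale, binarization, scaling to a fixed area, flattening to a 0/1 feature vector) might look as follows; the 16 × 16 target size and threshold are assumptions, not values from the paper.

```python
# Hedged sketch: character image -> fixed-length 0/1 feature vector
# for an ANN classifier; parameters are illustrative.
import numpy as np
from PIL import Image

def to_feature_vector(path, size=(16, 16), thresh=128):
    img = Image.open(path).convert("L")   # convert to grayscale
    img = img.resize(size)                # scale to a predetermined area
    binary = (np.asarray(img) < thresh).astype(np.uint8)  # ink pixels = 1
    return binary.flatten()               # series of 0s and 1s, fixed length
```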

  12. Unified modeling language and design of a case-based retrieval system in medical imaging.

    PubMed Central

    LeBozec, C.; Jaulent, M. C.; Zapletal, E.; Degoulet, P.

    1998-01-01

    One goal of artificial intelligence research into case-based reasoning (CBR) systems is to develop approaches for designing useful and practical interactive case-based environments. Explaining each step of the design of the case base and of the retrieval process is critical for the application of case-based systems to the real world. We describe herein our approach to the design of IDEM (Images and Diagnosis from Examples in Medicine), a medical image case-based retrieval system for pathologists. Our approach is based on the expressiveness of an object-oriented modeling language standard: the Unified Modeling Language (UML). We created a set of diagrams in UML notation illustrating the steps of the CBR methodology we used. The key aspects of this approach were selecting the relevant objects of the system according to user requirements and enabling visualization of cases and of the components of the case retrieval process. Further evaluation of the expressiveness of the design document is required, but UML seems to be a promising formalism that improves communication between developers and users. PMID:9929346

  13. 3D characterization of trans- and inter-lamellar fatigue crack in (α + β) Ti alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babout, Laurent, E-mail: Laurent.babout@p.lodz.pl; Jopek, Łukasz; Preuss, Michael

    2014-12-15

    This paper presents a three-dimensional image processing strategy that has been developed to quantitatively analyze and correlate the path of a fatigue crack with the lamellar microstructure found in Ti-6246. The analysis is carried out on X-ray microtomography images acquired in situ during uniaxial fatigue testing. The crack, the primary β-grain boundaries and the α lamellae have been segmented separately and merged for the first time to allow a better characterization and understanding of their mutual interaction. This has particularly emphasized the role of translamellar crack growth at a very high propagation angle with regard to the lamellar orientation, supporting the central role of colonies favorably oriented for basal 〈a〉 slip in guiding the crack in the fully lamellar microstructure of the Ti alloy. - Highlights: • 3D tomography images reveal strong short fatigue crack interaction with α lamellae. • The proposed 3D image processing methodology makes their segmentation possible. • Crack-lamellae orientation maps show the prevalence of translamellar cracking. • The angle study supports the influence of basal/prismatic slip on the crack path.

  14. Understanding Physiological and Degenerative Natural Vision Mechanisms to Define Contrast and Contour Operators

    PubMed Central

    Demongeot, Jacques; Fouquet, Yannick; Tayyab, Muhammad; Vuillerme, Nicolas

    2009-01-01

    Background: Dynamical systems like neural networks based on lateral inhibition have a large field of applications in image processing, robotics and morphogenesis modeling. In this paper, we propose some examples of dynamical flows used in image contrasting and contouring. Methodology: First we present the physiological basis of retinal function, showing the role of lateral inhibition in the generation of optical illusions and pathologic processes. Then, based on these biological considerations about real vision mechanisms, we study an enhancement method for contrasting medical images, using either a discrete neural network approach or its continuous version, i.e. a non-isotropic diffusion-reaction partial differential system. Following this, we introduce other continuous operators based on similar biomimetic approaches: a chemotactic contrasting method, a viability contouring algorithm and an attentional focus operator. Then, we introduce the new notion of mixed potential Hamiltonian flows, compare it with the watershed method, and use it for contouring. Conclusions: We conclude by showing the utility of these biomimetic methods with some examples of application in medical imaging and computer-assisted surgery. PMID:19547712

  15. Taking the lead from our colleagues in medical education: the use of images of the in-vivo setting in teaching concepts of pharmaceutical science.

    PubMed

    Curley, Louise E; Kennedy, Julia; Hinton, Jordan; Mirjalili, Ali; Svirskis, Darren

    2017-01-01

    Despite pharmaceutical sciences being a core component of pharmacy curricula, few published studies have focussed on innovative methodologies to teach the content. This commentary identifies imaging techniques that can visualise oral dosage forms in-vivo and observe formulation disintegration in order to achieve a better understanding of in-vivo performance. Images formed through these techniques can provide students with a deeper appreciation of the fate of oral formulations in the body than standard disintegration and dissolution testing, which is conducted in-vitro. Such images of the in-vivo setting can be used in teaching to give context to both theory and experimental work, thereby increasing student understanding and supporting students in correlating in-vitro and in-vivo processes.

  16. A computational pipeline for quantification of pulmonary infections in small animal models using serial PET-CT imaging.

    PubMed

    Bagci, Ulas; Foster, Brent; Miller-Jaster, Kirsten; Luna, Brian; Dey, Bappaditya; Bishai, William R; Jonsson, Colleen B; Jain, Sanjay; Mollura, Daniel J

    2013-07-23

    Infectious diseases are the second leading cause of death worldwide. In order to better understand and treat them, an accurate evaluation using multi-modal imaging techniques for anatomical and functional characterization is needed. For non-invasive imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), there have been many engineering improvements that have significantly enhanced the resolution and contrast of the images, but there are still insufficient computational algorithms available for researchers to use when accurately quantifying imaging data from anatomical structures and functional biological processes. Since the development of such tools may potentially translate basic research into the clinic, this study focuses on the development of a quantitative and qualitative image analysis platform that provides a computational radiology perspective for pulmonary infections in small animal models. Specifically, we designed (a) a fast and robust automated and semi-automated image analysis platform and a quantification tool that can facilitate accurate diagnostic measurements of pulmonary lesions as well as volumetric measurements of anatomical structures, and incorporated (b) an image registration pipeline into our proposed framework for volumetric comparison of serial scans. This is an important investigational tool for small animal infectious disease models that can help advance researchers' understanding of infectious diseases. We tested the utility of our proposed methodology using sequentially acquired CT and PET images of rabbit, ferret, and mouse models with respiratory infections of Mycobacterium tuberculosis (TB), H1N1 flu virus, and an aerosolized respiratory pathogen (necrotic TB), for a total of 92, 44, and 24 scans for the respective studies, with half of the scans from CT and the other half from PET. Institutional Administrative Panel on Laboratory Animal Care approvals were obtained prior to conducting this research. First, the proposed computational framework registered PET and CT images to provide spatial correspondences between images. Second, the lungs from the CT scans were segmented using an interactive region growing (IRG) segmentation algorithm with mathematical morphology operations to avoid false positive (FP) uptake in PET images. Finally, we segmented significant radiotracer uptake from the PET images in lung regions determined from CT and computed metabolic volumes of the significant uptake. All segmentation processes were compared with expert radiologists' delineations (ground truths). Metabolic and gross volumes of lesions were automatically computed with the segmentation processes using PET and CT images, and percentage changes in those volumes over time were calculated. Standardized uptake value (SUV) analysis from PET images was conducted as a complementary quantitative metric for disease severity assessment. Thus, the severity and extent of pulmonary lesions were examined through both PET and CT images using the aforementioned quantification metrics outputted from the proposed framework. Each animal study was evaluated within the same subject class, and all steps of the proposed methodology were evaluated separately. We quantified the accuracy of the proposed algorithm with respect to state-of-the-art segmentation algorithms. For evaluation of the segmentation results, the Dice similarity coefficient (DSC) was used as an overlap measure and the Hausdorff distance as a shape dissimilarity measure. Significant correlations regarding the estimated lesion volumes were obtained in both CT and PET images with respect to the ground truths (R² = 0.8922, p < 0.01 and R² = 0.8664, p < 0.01, respectively). The segmentation accuracy (DSC, %) was 93.4 ± 4.5% for normal lung CT scans and 86.0 ± 7.1% for pathological lung CT scans. Experiments showed excellent agreement (all above 85%) with expert evaluations for both structural and functional imaging modalities. Apart from the quantitative analysis of each animal, we also qualitatively showed how metabolic volumes changed over time by examining serial PET/CT scans. Evaluation of the registration processes was based on anatomical landmark points precisely defined by expert clinicians. Average errors of 2.66, 3.93, and 2.52 mm were found in the rabbit, ferret, and mouse data, respectively (all within the resolution limits). Quantitative results obtained from the proposed methodology were visually related to the progress and severity of the pulmonary infections, as verified by the participating radiologists. Moreover, we demonstrated that lesions due to the infections were metabolically active and multi-focal in nature, and we observed similar patterns in the CT images as well. Consolidation and ground glass opacity were the main abnormal imaging patterns and consistently appeared in all CT images. We also found that the gross and metabolic lesion volume percentages follow the same trend as the SUV-based evaluation in the longitudinal analysis. We explored the feasibility of using PET and CT imaging modalities in three distinct small animal models for two diverse pulmonary infections. We concluded from the clinical findings, derived from the proposed computational pipeline, that PET-CT imaging is an invaluable hybrid modality for tracking pulmonary infections longitudinally in small animals and has great potential to become routinely used in clinics. Our proposed methodology showed that automated computer-aided lesion detection and quantification of pulmonary infections in small animal models is efficient and accurate compared with the clinical standard of manual and semi-automated approaches. Automated analysis of images in pre-clinical applications can increase the efficiency and quality of pre-clinical findings, which ultimately inform downstream experimental design in human clinical studies; this innovation will allow researchers and clinicians to more effectively allocate study resources with respect to research demands without compromising accuracy.
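
    For reference, the Dice similarity coefficient used in the evaluation reduces to a few lines of NumPy, sketched below; this is a generic implementation, not the authors' code, and SciPy's scipy.spatial.distance.directed_hausdorff provides the companion shape-dissimilarity measure.

```python
# Generic Dice similarity coefficient between two binary masks.
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    # 2|A∩B| / (|A| + |B|); defined as 1.0 when both masks are empty.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```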

  17. CARES: Completely Automated Robust Edge Snapper for carotid ultrasound IMT measurement on a multi-institutional database of 300 images: a two stage system combining an intensity-based feature approach with first order absolute moments

    NASA Astrophysics Data System (ADS)

    Molinari, Filippo; Acharya, Rajendra; Zeng, Guang; Suri, Jasjit S.

    2011-03-01

    The carotid intima-media thickness (IMT) is the most widely used marker for the progression of atherosclerosis and the onset of cardiovascular disease. Computer-aided measurements improve accuracy, but usually require user interaction. In this paper we characterize a new and completely automated technique for carotid segmentation and IMT measurement based on the merits of two previously developed techniques. We used an integrated approach of intelligent image feature extraction and line fitting to automatically locate the carotid artery in the image frame, followed by wall interface extraction based on a Gaussian edge operator. We call our system CARES. We validated CARES on a multi-institutional database of 300 carotid ultrasound images. The IMT measurement bias was 0.032 +/- 0.141 mm, better than that of other automated techniques and comparable to that of user-driven methodologies. CARES processed 96% of the images, leading to a figure of merit of 95.7%. CARES ensures complete automation and high accuracy in IMT measurement; hence it could be a suitable clinical tool for processing large datasets in multicenter studies of atherosclerosis.

  18. 3D-Printed Tissue-Mimicking Phantoms for Medical Imaging and Computational Validation Applications

    PubMed Central

    Shahmirzadi, Danial; Li, Ronny X.; Doyle, Barry J.; Konofagou, Elisa E.; McGloughlin, Tim M.

    2014-01-01

    Abdominal aortic aneurysm (AAA) is a permanent, irreversible dilation of the distal region of the aorta. Recent efforts have focused on improved AAA screening and biomechanics-based failure prediction. Idealized and patient-specific AAA phantoms are often employed to validate numerical models and imaging modalities. To produce such phantoms, the investment casting process is frequently used, reconstructing the 3D vessel geometry from computed tomography patient scans. In this study the alternative use of 3D printing to produce phantoms is investigated. The mechanical properties of flexible 3D-printed materials are benchmarked against proven elastomers. We demonstrate the utility of this process with particular application to the emerging imaging modality of ultrasound-based pulse wave imaging, a noninvasive diagnostic methodology being developed to obtain regional vascular wall stiffness properties, differentiating normal and pathologic tissue in vivo. Phantom wall displacements under pulsatile loading conditions were observed, showing good correlation to fluid–structure interaction simulations and regions of peak wall stress predicted by finite element analysis. 3D-printed phantoms show a strong potential to improve medical imaging and computational analysis, potentially helping bridge the gap between experimental and clinical diagnostic tools. PMID:28804733

  19. Segmentation methodology for automated classification and differentiation of soft tissues in multiband images of high-resolution ultrasonic transmission tomography.

    PubMed

    Jeong, Jeong-Won; Shin, Dae C; Do, Synho; Marmarelis, Vasilis Z

    2006-08-01

    This paper presents a novel segmentation methodology for automated classification and differentiation of soft tissues using multiband data obtained with the newly developed system of high-resolution ultrasonic transmission tomography (HUTT) for imaging biological organs. This methodology extends and combines two existing approaches: the L-level-set active contour (AC) segmentation approach and the agglomerative hierarchical k-means approach for unsupervised clustering (UC). To prevent the iterative minimization in the AC algorithm from being trapped in a local minimum, we introduce a multiresolution approach that applies the level-set functions at successively increasing resolutions of the image data. The resulting AC clusters are subsequently rearranged by the UC algorithm, which seeks the optimal set of clusters yielding the minimum within-cluster distances in the feature space. The presented results from Monte Carlo simulations and experimental animal-tissue data demonstrate that the proposed methodology outperforms other existing methods without depending on heuristic parameters and provides a reliable means for soft tissue differentiation in HUTT images.

  20. 3D contour fluorescence spectroscopy with Brus model: Determination of size and band gap of double stranded DNA templated silver nanoclusters

    NASA Astrophysics Data System (ADS)

    Kamalraj, Devaraj; Yuvaraj, Selvaraj; Yoganand, Coimbatore Paramasivam; Jaffer, Syed S.

    2018-01-01

    Here, we propose a new synthetic methodology for silver nanocluster preparation using a double-stranded DNA (ds-DNA) template, which, to the best of our knowledge, has not been reported previously. A new computational method was formulated to determine the size of the nanoclusters and their band gaps by using the steady-state 3D contour fluorescence technique with the Brus model. The structure and size of nanoclusters are generally determined using high-resolution transmission electron microscopy (HR-TEM). Before HR-TEM imaging, however, samples must undergo a drying process, which causes aggregation and forms larger polycrystalline particles; the procedure is also time-consuming and expensive. With the current methodology, we determined the size and band gap of the nanoclusters in liquid form, without any polycrystalline aggregation, using the 3D contour fluorescence technique as an alternative to the HR-TEM method.
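
    For context, the Brus effective-mass model referenced above relates the band gap of a cluster of radius R to the bulk gap through quantum-confinement and Coulomb terms; the standard textbook form is shown below (the silver-specific parameter values used in the paper are not reproduced here).

```latex
E_g(R) = E_g^{\mathrm{bulk}}
       + \frac{\hbar^{2}\pi^{2}}{2R^{2}}
         \left(\frac{1}{m_{e}^{*}} + \frac{1}{m_{h}^{*}}\right)
       - \frac{1.8\,e^{2}}{4\pi\varepsilon_{0}\varepsilon_{r}R}
```

    Here m_e* and m_h* are the effective electron and hole masses and ε_r is the relative permittivity of the medium; inverting this relation yields a size estimate from an optically measured band gap.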

  1. 2dx_automator: implementation of a semiautomatic high-throughput high-resolution cryo-electron crystallography pipeline.

    PubMed

    Scherer, Sebastian; Kowal, Julia; Chami, Mohamed; Dandey, Venkata; Arheit, Marcel; Ringler, Philippe; Stahlberg, Henning

    2014-05-01

    The introduction of direct electron detectors (DED) to cryo-electron microscopy has tremendously increased the signal-to-noise ratio (SNR) and quality of the recorded images. We discuss the optimal use of DEDs for cryo-electron crystallography, introduce a new automatic image processing pipeline, and demonstrate the vast improvement in the resolution achieved by the use of both together, especially for highly tilted samples. The new processing pipeline (now included in the software package 2dx) exploits the high SNR and frame readout frequency of DEDs to automatically correct for beam-induced sample movement, and reliably processes individual crystal images without human interaction as data are being acquired. A new graphical user interface (GUI) condenses all information required for quality assessment in one window, allowing the imaging conditions to be verified and adjusted during the data collection session. With this new pipeline an automatically generated unit cell projection map of each recorded 2D crystal is available less than 5 min after the image was recorded. The entire processing procedure yielded a three-dimensional reconstruction of the 2D-crystallized ion-channel membrane protein MloK1 with a much-improved resolution of 5 Å in-plane and 7 Å in the z-direction, within 2 days of data acquisition and simultaneous processing. The results obtained are superior to those delivered by conventional photographic film-based methodology for the same sample, and demonstrate the importance of drift correction. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Towards modeling of cardiac micro-structure with catheter-based confocal microscopy: a novel approach for dye delivery and tissue characterization.

    PubMed

    Lasher, Richard A; Hitchcock, Robert W; Sachse, Frank B

    2009-08-01

    This work presents a methodology for modeling of cardiac tissue micro-structure. The approach is based on catheter-based confocal imaging systems, which are emerging as tools for diagnosis in various clinical disciplines. A limitation of these systems is that a fluorescent marker must be available in sufficient concentration in the imaged region. We introduce a novel method for the local delivery of fluorescent markers to cardiac tissue based on a hydrogel carrier brought into contact with the tissue surface. The method was tested with living rabbit cardiac tissue and applied to acquire three-dimensional image stacks with a standard inverted confocal microscope and two-dimensional images with a catheter-based confocal microscope. We processed these image stacks to obtain spatial models and quantitative data on tissue microstructure. Volumes of atrial and ventricular myocytes were 4901 ± 1713 and 10,299 ± 3598 μm³ (mean ± SD), respectively. Atrial and ventricular myocyte volume fractions were 72.4 ± 4.7% and 79.7 ± 2.9% (mean ± SD), respectively. Atrial and ventricular myocyte densities were 165,571 ± 55,836 and 86,957 ± 32,280 cells/mm³ (mean ± SD), respectively. These statistical data and spatial descriptions of tissue microstructure provide important input for modeling studies of cardiac tissue function. We propose that the described methodology can also be used to characterize diseased tissue and allows for personalized modeling of cardiac tissue.

  3. Concurrent application of TMS and near-infrared optical imaging: methodological considerations and potential artifacts

    PubMed Central

    Parks, Nathan A.

    2013-01-01

    The simultaneous application of transcranial magnetic stimulation (TMS) with non-invasive neuroimaging provides a powerful method for investigating functional connectivity in the human brain and the causal relationships between areas in distributed brain networks. TMS has been combined with numerous neuroimaging techniques, including electroencephalography (EEG), functional magnetic resonance imaging (fMRI), and positron emission tomography (PET). Recent work has also demonstrated the feasibility and utility of combining TMS with non-invasive near-infrared optical imaging techniques: functional near-infrared spectroscopy (fNIRS) and the event-related optical signal (EROS). Simultaneous TMS and optical imaging affords a number of advantages over other neuroimaging methods but also involves a unique set of methodological challenges and considerations. This paper describes the methodology of concurrently performing optical imaging during the administration of TMS, focusing on experimental design, potential artifacts, and approaches to controlling for these artifacts. PMID:24065911

  4. Pressure ulcer image segmentation technique through synthetic frequencies generation and contrast variation using toroidal geometry.

    PubMed

    David, Ortiz P; Sierra-Sosa, Daniel; Zapirain, Begoña García

    2017-01-06

    Pressure ulcers have become a subject of study in recent years due to the high costs of treatment and the decreased quality of life of patients. These chronic wounds are related to the global increase in life expectancy, with geriatric and physically disabled patients being the most affected by this condition. Diagnosis and treatment of these injuries by medical personnel usually takes weeks or even months. Using non-invasive techniques such as image processing, it is possible to analyse ulcers and aid in their diagnosis. This paper proposes a novel technique for image segmentation based on contrast changes, using synthetic frequencies obtained from the grayscale value available in each pixel of the image. These synthetic frequencies are calculated using the model of energy density in an electric field to describe a relation between a constant density and the image amplitude at a pixel. A toroidal geometry is used to decompose the image into different contrast levels by varying the synthetic frequencies. The decomposed image is then binarized by applying Otsu's threshold, allowing the contours that describe the contrast variations to be obtained. Morphological operations are used to obtain the desired segment of the image. The proposed technique is evaluated on a database of 51 pressure ulcer images provided by the Centre IGURCO. Segmentation of these pressure ulcer images can aid in diagnosis and treatment. To provide evidence of the technique's performance, digital image correlation was used as a measure, comparing the segments obtained using the methodology with the real segments. The proposed technique is compared with two benchmark algorithms. The technique achieves an average correlation of 0.89 with a variation of ±0.1 and a computational time of 9.04 seconds, presenting better segmentation results than the benchmark algorithms while using less computational time and without the need for an initial condition.
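
    The closing steps of the pipeline (Otsu binarization, morphological clean-up, contour extraction) are standard operations; a minimal OpenCV sketch is given below for orientation. The synthetic-frequency decomposition itself is specific to the paper and is not reproduced.

```python
# Illustrative sketch of Otsu thresholding followed by morphological
# operations and contour extraction; the kernel size is an assumption.
import cv2
import numpy as np

def segment_contours(gray):
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove specks
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours
```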

  5. Precise design-based defect characterization and root cause analysis

    NASA Astrophysics Data System (ADS)

    Xie, Qian; Venkatachalam, Panneerselvam; Lee, Julie; Chen, Zhijin; Zafar, Khurram

    2017-03-01

    As semiconductor manufacturing continues its march towards more advanced technology nodes, it becomes increasingly important to identify and characterize design weak points, which is typically done using a combination of inline inspection data and the physical layout (or design). However, the employed methodologies have been somewhat imprecise, relying greatly on statistical techniques to signal excursions. For example, the defect location error that is inherent to inspection tools prevents them from reporting the true locations of defects. Therefore, common operations such as background-based binning that are designed to identify frequently failing patterns cannot reliably identify specific weak patterns. They can only identify an approximate set of possible weak patterns, but within these sets there are many perfectly good patterns. Additionally, characterizing the failure rate of a known weak pattern based on inline inspection data also involves a lot of fuzziness due to coordinate uncertainty. SEM (Scanning Electron Microscope) review attempts to come to the rescue by capturing high resolution images of the regions surrounding the reported defect locations, but SEM images are reviewed by human operators, and the weak patterns revealed in those images must be manually identified and classified. Compounding the problem is the fact that a single review SEM image may contain multiple defective patterns, and several of those patterns might not appear defective to the human eye. In this paper we describe a significantly improved methodology that brings advanced computer image processing and design-overlay techniques to better address the challenges posed by today's leading technology nodes. Specifically, new software techniques allow the computer to analyze review SEM images in detail, to overlay those images with the reference design to detect every defect that might be present in all regions of interest within the overlaid reference design (including several classes of defects that human operators will typically miss), to obtain the exact defect location on design, to compare all defective patterns thus detected against a library of known patterns, and to classify all defective patterns as either new or known. By applying the computer to these tasks, we automate the entire process from defective pattern identification to pattern classification with high precision, and we perform this operation en masse during R&D, ramp, and volume production. By adopting this methodology, whenever a specific weak pattern is identified, we are able to run a series of characterization operations to ultimately arrive at the root cause. These characterization operations can include (a) searching all pre-existing review SEM images for the presence of the specific weak pattern to determine whether there is any spatial (within die or within wafer) or temporal (within any particular date range, before or after a mask revision, etc.) correlation, (b) understanding the failure rate of the specific weak pattern to prioritize the urgency of the problem, and (c) comparing the weak pattern against an OPC (Optical Proximity Correction) verification report or a PWQ (Process Window Qualification)/FEM (Focus Exposure Matrix) result to assess the likelihood of it being a litho-sensitive pattern. After resolving the specific weak pattern, we categorize it as a known pattern, and the engineer moves forward with discovering new weak patterns.

  6. Breaking the limits of structural and mechanical imaging of the heterogeneous structure of coal macerals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, L.; Tselev, A.; Jesse, S.

    The correlation between local mechanical (elasto-plastic) and structural (composition) properties of coal presents significant fundamental and practical interest for coal processing, for the development of rheological models of coal-to-coke transformations, and for advancing novel approaches. Here, we explore the relationship between the local structural, chemical composition and mechanical properties of coal using a combination of confocal micro-Raman imaging and band excitation atomic force acoustic microscopy (BE-AFAM) for a bituminous coal. This allows high-resolution imaging (tens of nm) of the mechanical properties of the heterogeneous (banded) architecture of coal and their correlation with the optical gap, average crystallite size, the bond-bending disorder of sp2 aromatic double bonds and the defect density. This methodology hence allows the structural and mechanical properties of coal components (lithotypes, microlithotypes, and macerals) to be understood and related to local chemical structure, potentially allowing for knowledge-based modelling and optimization of coal utilization processes.

  7. The effect of different standard illumination conditions on color balance failure in offset printed images on glossy coated paper expressed by color difference

    NASA Astrophysics Data System (ADS)

    Spiridonov, I.; Shopova, M.; Boeva, R.; Nikolov, M.

    2012-05-01

    One of the biggest problems in color reproduction processes is the color shift that occurs when images are viewed under different illuminants. Process ink colors and their combinations that match under one light source will often appear different under another light source. This problem is referred to as color balance failure or color inconstancy. The main goals of the present study are to investigate and determine the color balance failure (color inconstancy) of offset printed images, expressed by color difference and color gamut changes, under three of the illuminants most commonly used in practice: CIE D50, CIE F2 and CIE A. The results obtained are important from both a scientific and a practical point of view. For the first time, a methodology is suggested and implemented for the examination and estimation of color shifts by studying a large number of color and gamut changes in various ink combinations under different illuminants.
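
    As a point of reference for how such color shifts are expressed numerically, the simplest color-difference formula over CIELAB coordinates is the CIE76 ΔE*ab shown below; whether the study used this or a more recent formula (e.g., CIEDE2000) is not stated here, so treat it as a generic illustration.

```python
# CIE76 Delta E*ab: Euclidean distance between two CIELAB triples,
# e.g. the same printed patch measured under two illuminants.
import math

def delta_e_76(lab1, lab2):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# example with hypothetical values:
# delta_e_76((52.1, 42.3, 20.1), (50.8, 44.0, 18.7))
```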

  8. A Statistical Representation of Pyrotechnic Igniter Output

    NASA Astrophysics Data System (ADS)

    Guo, Shuyue; Cooper, Marcia

    2017-06-01

    The output of simplified pyrotechnic igniters for research investigations is statistically characterized by monitoring the post-ignition external flow field with Schlieren imaging. Unique to this work is a detailed quantification of all measurable manufacturing parameters (e.g., bridgewire length, charge cavity dimensions, powder bed density) and the associated shock-motion variability in the tested igniters. To demonstrate the experimental precision of the recorded Schlieren images and the image processing methodologies developed, commercial exploding bridgewires with different wire parameters were tested. Finally, a statistically significant population of manufactured igniters was tested within the Schlieren arrangement, resulting in a characterization of the nominal output. Comparisons between the variances measured throughout the manufacturing processes and the calculated output variance provide insight into the critical device phenomena that dominate performance. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's NNSA under contract DE-AC04-94AL85000.

  9. A Hitchhiker's Guide to Functional Magnetic Resonance Imaging

    PubMed Central

    Soares, José M.; Magalhães, Ricardo; Moreira, Pedro S.; Sousa, Alexandre; Ganz, Edward; Sampaio, Adriana; Alves, Victor; Marques, Paulo; Sousa, Nuno

    2016-01-01

    Functional Magnetic Resonance Imaging (fMRI) studies have become increasingly popular with both clinicians and researchers, as they are capable of providing unique insights into brain function. However, multiple technical considerations (ranging from the specifics of paradigm design to imaging artifacts, complex protocol definition, and the multitude of processing and analysis methods, as well as intrinsic methodological limitations) must be considered and addressed in order to optimize fMRI analysis and to arrive at the most accurate and grounded interpretation of the data. In practice, the researcher/clinician must choose, from many available options, the most suitable software tool for each stage of the fMRI analysis pipeline. Herein we provide a straightforward guide designed to address, for each of the major stages, the techniques and tools involved in the process. We have developed this guide both to help those new to the technique overcome the most critical difficulties in its use, and to serve as a resource for the neuroimaging community. PMID:27891073

  10. Identification and super-resolution imaging of ligand-activated receptor dimers in live cells

    NASA Astrophysics Data System (ADS)

    Winckler, Pascale; Lartigue, Lydia; Giannone, Gregory; de Giorgi, Francesca; Ichas, François; Sibarita, Jean-Baptiste; Lounis, Brahim; Cognet, Laurent

    2013-08-01

    Molecular interactions are key to many chemical and biological processes, such as protein function. In many signaling processes they occur in sub-cellular areas displaying nanoscale organization and involving molecular assemblies. The nanometric dimensions and the dynamic nature of the interactions make their investigation complex in live cells. While super-resolution fluorescence microscopies offer live-cell molecular imaging with sub-wavelength resolution, they lack the specificity for distinguishing interacting molecule populations. Here we combine super-resolution microscopy and single-molecule Förster Resonance Energy Transfer (FRET) to identify dimers of receptors induced by ligand binding and to provide super-resolved images of their membrane distribution in live cells. By developing a two-color universal Point-Accumulation-In-the-Nanoscale-Topography (uPAINT) method, dimers of epidermal growth factor receptors (EGFR) activated by EGF are studied at ultra-high densities, revealing preferential cell-edge sub-localization. This methodology, which is specifically devoted to the study of interacting molecules, may find other applications in biological systems where understanding of molecular organization is crucial.

  11. Simultaneous profiling of activity patterns in multiple neuronal subclasses.

    PubMed

    Parrish, R Ryley; Grady, John; Codadu, Neela K; Trevelyan, Andrew J; Racca, Claudia

    2018-06-01

    Neuronal networks typically comprise heterogeneous populations of neurons. A core objective when seeking to understand such networks, therefore, is to identify what roles these different neuronal classes play. Acquiring single-cell electrophysiology data for multiple cell classes can prove to be a large and daunting task. Alternatively, Ca2+ network imaging provides activity profiles of large numbers of neurons simultaneously, but without distinguishing between cell classes. We therefore developed a strategy for combining cellular electrophysiology, Ca2+ network imaging, and immunohistochemistry to provide activity profiles for multiple cell classes at once. This involves cross-referencing easily identifiable landmarks between imaging of the live and fixed tissue, and then using custom MATLAB functions to realign the two imaging data sets, correcting for distortions of the tissue introduced by the fixation or immunohistochemical processing. We illustrate the methodology for analyses of activity profiles during epileptiform events recorded in mouse brain slices. We further demonstrate the activity profile of a population of parvalbumin-positive interneurons before, during, and after a seizure-like event. Current approaches to Ca2+ network imaging analyses are severely limited in their ability to subclassify neurons, and often rely on transgenic approaches to identify cell classes. In contrast, our methodology is a generic, affordable, and flexible technique for characterizing neuronal behaviour with respect to classification based on morphological and neurochemical identity. We present a new approach for analysing Ca2+ network imaging datasets, and use it to explore parvalbumin-positive interneuron activity during epileptiform events. Copyright © 2018 Elsevier B.V. All rights reserved.
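
    The realignment step described above can be sketched with a landmark-based affine fit; the example below uses scikit-image for illustration (the paper's own implementation is in custom MATLAB functions), and the landmark coordinates are placeholders.

```python
# Hedged sketch: estimate an affine transform from matched landmarks in
# the live-imaging and fixed-tissue frames, then warp one onto the other.
import numpy as np
from skimage import transform

live = np.array([[10.0, 12.0], [200.0, 30.0], [120.0, 180.0]])   # (x, y) landmarks
fixed = np.array([[14.0, 10.0], [205.0, 26.0], [118.0, 185.0]])  # same points, fixed tissue
tform = transform.estimate_transform("affine", fixed, live)
# aligned = transform.warp(fixed_image, tform.inverse)  # map fixed -> live frame
```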

  12. Remote-sensing application for facilitating land resource assessment and monitoring for utility-scale solar energy development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamada, Yuki; Grippo, Mark A.

    2015-01-01

    A monitoring plan that incorporates regional datasets and integrates cost-effective data collection methods is necessary to sustain the long-term environmental monitoring of utility-scale solar energy development in expansive, environmentally sensitive desert environments. Using very high spatial resolution (VHSR; 15 cm) multispectral imagery collected in November 2012 and January 2014, an image processing routine was developed to characterize ephemeral streams, vegetation, and land surface in the southwestern United States, where increased utility-scale solar development is anticipated. In addition to knowledge about desert landscapes, the methodology integrates existing spectral indices and transformations (e.g., the visible atmospherically resistant index and principal components); a newly developed index, the erosion resistance index (ERI); and digital terrain and surface models, all of which were derived from a common VHSR image. The methodology identified fine-scale ephemeral streams in greater detail than the National Hydrography Dataset and accurately estimated vegetation distribution and the fractional cover of various surface types. The ERI classified surface types spanning a range of erosive potentials. The remote-sensing methodology could ultimately reduce uncertainty and monitoring costs for all stakeholders by providing a cost-effective monitoring approach that accurately characterizes the land resources at potential development sites.

  13. Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Ming

    Because of the low-expense, high-efficiency image collection process and the rich 3D and texture information presented in the images, the combined use of 2D airborne nadir and oblique images to reconstruct 3D geometric scenes has a promising market for future commercial uses such as urban planning or first response. The methodology introduced in this thesis provides a feasible way towards fully automated 3D city modeling from oblique and nadir airborne imagery. In this thesis, the difficulty of matching 2D images with large disparity is avoided by grouping the images first and applying the 3D registration afterward. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. The point clouds are then refined by noise removal and surface smoothing processes. Since the point clouds extracted from different image groups use independent coordinate systems, translation, rotation and scale differences exist between them. To determine these differences, 3D keypoints and their features are extracted. For each pair of point clouds, an initial alignment and a more accurate registration are applied in succession. The final transform matrix presents the parameters describing the required translation, rotation and scale. The methodology presented in the thesis has been shown to behave well for test data. The robustness of this method is examined by adding artificial noise to the test data. For Pictometry oblique aerial imagery, the initial alignment provides a rough result that contains a larger offset than that of the test data, because of the lower quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration result is evaluated by comparing it to the result obtained from manual selection of matched points. Using the method introduced, point clouds extracted from different image groups can be combined with each other to build a more complete point cloud, or be used as a complement to existing point clouds extracted from other sources. This research will both improve the state of the art of 3D city modeling and inspire new ideas in related fields.

  14. Fractional order integration and fuzzy logic based filter for denoising of echocardiographic image.

    PubMed

    Saadia, Ayesha; Rashdi, Adnan

    2016-12-01

    Ultrasound is widely used for imaging due to its cost effectiveness and safety. However, ultrasound images are inherently corrupted with speckle noise, which severely affects their quality and creates difficulty for physicians in diagnosis. To get the maximum benefit from ultrasound imaging, image denoising is an essential requirement. To perform image denoising, a two-stage methodology using a fuzzy weighted mean and a fractional integration filter is proposed in this research work. In stage 1, image pixels are processed by applying a 3 × 3 window around each pixel, and fuzzy logic is used to assign weights to the pixels in each window, replacing the central pixel of the window with the weighted mean of all neighboring pixels in the same window. Noise suppression is achieved by assigning weights to the pixels while preserving edges and other important features of the image. In stage 2, the resultant image is further improved by a fractional order integration filter. The effectiveness of the proposed methodology has been analyzed for standard test images artificially corrupted with speckle noise and for real ultrasound B-mode images. Results of the proposed technique have been compared with different state-of-the-art techniques including Lsmv, Wiener, Geometric filter, Bilateral, Non-local means, Wavelet, Perona et al., Total variation (TV), the Global Adaptive Fractional Integral Algorithm (GAFIA) and the Improved Fractional Order Differential (IFD) model. The comparison has been done on a quantitative and qualitative basis. For quantitative analysis, metrics such as Peak Signal to Noise Ratio (PSNR), Speckle Suppression Index (SSI), Structural Similarity (SSIM), Edge Preservation Index (β) and Correlation Coefficient (ρ) have been used. Simulations have been done using MATLAB. Simulation results on artificially corrupted standard test images and two real echocardiographic images reveal that the proposed method outperforms existing image denoising techniques reported in the literature. The proposed method for denoising echocardiographic images is effective in noise suppression/removal: it not only removes noise from an image but also preserves edges and other important structures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
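
    A sketch of the stage-1 idea is given below: each 3 × 3 neighbor receives a membership weight that decays with its intensity distance from the window median, and the central pixel is replaced by the weighted mean. The Gaussian membership function and its width are assumptions; the paper's exact fuzzy rules are not reproduced.

```python
# Hedged sketch of a fuzzy-weighted-mean stage; the membership function
# and sigma are illustrative choices, not the authors' specification.
import numpy as np

def fuzzy_weighted_mean(img, sigma=20.0):
    out = np.empty(img.shape, dtype=float)
    padded = np.pad(img.astype(float), 1, mode="edge")
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + 3, j:j + 3]
            # weights close to 1 for pixels near the window median
            w = np.exp(-((win - np.median(win)) ** 2) / (2 * sigma ** 2))
            out[i, j] = (w * win).sum() / w.sum()
    return out
```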

  15. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach

    PubMed Central

    Teng, Santani

    2017-01-01

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044019

  16. Multi-stage robust scheme for citrus identification from high resolution airborne images

    NASA Astrophysics Data System (ADS)

    Amorós-López, Julia; Izquierdo Verdiguier, Emma; Gómez-Chova, Luis; Muñoz-Marí, Jordi; Zoilo Rodríguez-Barreiro, Jorge; Camps-Valls, Gustavo; Calpe-Maravilla, Javier

    2008-10-01

    Identification of land cover types is one of the most critical activities in remote sensing. Nowadays, managing land resources by using remote sensing techniques is becoming a common procedure to speed up the process while reducing costs. However, data analysis procedures should satisfy the accuracy figures demanded by institutions and governments for further administrative actions. This paper presents a methodological scheme to update the citrus Geographical Information System (GIS) of the Comunidad Valenciana (an autonomous region of Spain). The proposed approach introduces a multi-stage automatic scheme to reduce visual photointerpretation and ground validation tasks. First, an object-oriented feature extraction process is carried out for each cadastral parcel from very high spatial resolution (VHR) images (0.5 m) acquired in the visible and near infrared. Next, several automatic classifiers (decision trees, multilayer perceptron, and support vector machines) are trained and combined to improve the final accuracy of the results. The proposed strategy fulfills the high accuracy demanded by policy makers by combining automatic classification methods with available visual photointerpretation resources. A level of confidence based on the agreement between classifiers enables effective management by fixing the number of parcels to be reviewed. The proposed methodology can be applied to similar problems and applications.

  17. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach.

    PubMed

    Cichy, Radoslaw Martin; Teng, Santani

    2017-02-19

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.

  18. CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.

    PubMed

    Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos

    2013-12-31

    Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric, and their output varies greatly with changes in parameters. Most previously reported results perform registration using a fixed parameter setting and use the results as input to the subsequent classification step. The variation in registration results due to the choice of parameters thus translates into variation in the performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, thereby making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way compared to the conventional approach.

  19. Rational variety mapping for contrast-enhanced nonlinear unsupervised segmentation of multispectral images of unstained specimen.

    PubMed

    Kopriva, Ivica; Hadžija, Mirko; Popović Hadžija, Marijana; Korolija, Marina; Cichocki, Andrzej

    2011-08-01

    A methodology is proposed for nonlinear contrast-enhanced unsupervised segmentation of multispectral (color) microscopy images of principally unstained specimens. The methodology exploits spectral diversity and spatial sparseness to find anatomical differences between materials (cells, nuclei, and background) present in the image. It consists of r-th order rational variety mapping (RVM) followed by matrix/tensor factorization. The sparseness constraint implies a duality between nonlinear unsupervised segmentation and multiclass pattern assignment problems. Classes not linearly separable in the original input space become separable with high probability in the higher-dimensional mapped space. Hence, RVM mapping has two advantages: it implicitly takes into account nonlinearities present in the image (i.e., they are not required to be known), and it increases the spectral diversity (i.e., contrast) between materials, due to the increased dimensionality of the mapped space. This is expected to improve the performance of systems for automated classification and analysis of microscopic histopathological images. The methodology was validated using RVM of the second and third orders on experimental multispectral microscopy images of unstained sciatic nerve fibers (nervus ischiadicus) and of unstained white pulp in spleen tissue, compared with a manually defined ground truth labeled by two trained pathophysiologists. The methodology can also be useful for additional contrast enhancement of images of stained specimens. Copyright © 2011 American Society for Investigative Pathology. Published by Elsevier Inc. All rights reserved.
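
    The mapping idea can be approximated with an ordinary polynomial feature expansion followed by a generic factorization, as sketched below; this is a stand-in under stated assumptions (PolynomialFeatures for the variety mapping, NMF for the factorization), not the authors' algorithm.

```python
# Rough analogue of an order-2 mapping plus factorization on per-pixel spectra.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.decomposition import NMF

pixels = np.random.rand(1000, 3)                   # N pixels x 3 color bands
mapped = PolynomialFeatures(degree=2).fit_transform(pixels)  # lift to monomials
parts = NMF(n_components=3, init="nndsvda", max_iter=500).fit_transform(mapped)
labels = parts.argmax(axis=1)                      # per-pixel class assignment
```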

  20. New Developments in Observer Performance Methodology in Medical Imaging

    PubMed Central

    Chakraborty, Dev P.

    2011-01-01

    A common task in medical imaging is assessing whether a new imaging system, or a variant of an existing one, is an improvement over an existing imaging technology. Imaging systems are generally quite complex, consisting of several components (e.g., image acquisition hardware, image processing and display hardware and software, and image interpretation by radiologists), each of which can affect performance. While it may appear odd to include the radiologist as a “component” of the imaging chain, since the radiologist’s decision determines subsequent patient care, the effect of the human interpretation has to be included. Physical measurements like modulation transfer function, signal to noise ratio, etc., are useful for characterizing the non-human parts of the imaging chain under idealized and often unrealistic conditions, such as uniform background phantoms, target objects with sharp edges, etc. Measuring the effect on performance of the entire imaging chain, including the radiologist, and using real clinical images, requires different methods that fall under the rubric of observer performance methods or “ROC analysis”. The purpose of this paper is to review recent developments in this field, particularly with respect to the free-response method. PMID:21978444

  1. Progress in sensor performance testing, modeling and range prediction using the TOD method: an overview

    NASA Astrophysics Data System (ADS)

    Bijl, Piet; Hogervorst, Maarten A.; Toet, Alexander

    2017-05-01

    The Triangle Orientation Discrimination (TOD) methodology includes i) a widely applicable, accurate end-to-end EO/IR sensor test, ii) an image-based sensor system model and iii) a Target Acquisition (TA) range model. The method has been extensively validated against TA field performance for a wide variety of well- and under-sampled imagers, systems with advanced image processing techniques such as dynamic super resolution and local adaptive contrast enhancement, and sensors showing smear or noise drift, for both static and dynamic test stimuli and as a function of target contrast. Recently, significant progress has been made in various directions. Dedicated visual and NIR test charts for lab and field testing are available and thermal test benches are on the market. Automated sensor testing using an objective synthetic human observer is within reach. Both an analytical and an image-based TOD model have recently been developed and are being implemented in the European Target Acquisition model ECOMOS and in the EOSTAR TDA. Further, the methodology is being applied for design optimization of high-end security camera systems. Finally, results from a recent perception study suggest that DRI ranges for real targets can be predicted by replacing the relevant distinctive target features by TOD test patterns of the same characteristic size and contrast, enabling a new TA modeling approach. This paper provides an overview.

  2. Evaluation of the Anisotropic Radiative Conductivity of a Low-Density Carbon Fiber Material from Realistic Microscale Imaging

    NASA Technical Reports Server (NTRS)

    Nouri, Nima; Panerai, Francesco; Tagavi, Kaveh A.; Mansour, Nagi N.; Martin, Alexandre

    2015-01-01

    The radiative heat transfer inside a low-density carbon fiber insulator is analyzed using a three-dimensional direct simulation model. A robust procedure is presented for the numerical calculation of the geometric configuration factor to compute the radiative energy exchange among the small discretized surface areas of the fibrous material. The methodology is applied to a polygonal mesh of a fibrous insulator obtained from three-dimensional microscale imaging of the real material. The anisotropic values of the radiative conductivity are calculated for that geometry. The results capture both the directional and the temperature dependence of the radiative conductivity.
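
    The elementary quantity in such a direct simulation is the geometric configuration (view) factor between pairs of small surface patches. The sketch below evaluates the standard differential-patch approximation dF ≈ cosθ1 cosθ2 A2 / (π r²); the patch geometry is illustrative, not taken from the paper's fiber mesh.

        # Sketch: geometric configuration (view) factor between two small
        # surface patches, the quantity summed over a polygonal mesh in
        # surface-to-surface radiative exchange. Geometry is illustrative.
        import numpy as np

        def patch_view_factor(c1, n1, c2, n2, area2):
            """dF_{1->2} ~ cos(t1) * cos(t2) * A2 / (pi * r^2) for small patches."""
            r_vec = c2 - c1
            r2 = r_vec @ r_vec
            r = np.sqrt(r2)
            cos1 = (n1 @ r_vec) / (np.linalg.norm(n1) * r)
            cos2 = (-n2 @ r_vec) / (np.linalg.norm(n2) * r)
            if cos1 <= 0.0 or cos2 <= 0.0:       # patches do not see each other
                return 0.0
            return cos1 * cos2 * area2 / (np.pi * r2)

        # Two facing 1 mm^2 patches, 1 cm apart along z
        F = patch_view_factor(np.array([0., 0., 0.]), np.array([0., 0., 1.]),
                              np.array([0., 0., 0.01]), np.array([0., 0., -1.]), 1e-6)
        print(F)  # ~3.18e-3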

  3. Optical and digital pattern recognition; Proceedings of the Meeting, Los Angeles, CA, Jan. 13-15, 1987

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Editor); Schenker, Paul (Editor)

    1987-01-01

    The papers presented in this volume provide an overview of current research in both optical and digital pattern recognition, with a theme of identifying overlapping research problems and methodologies. Topics discussed include image analysis and low-level vision, optical system design, object analysis and recognition, real-time hybrid architectures and algorithms, high-level image understanding, and optical matched filter design. Papers are presented on synthetic estimation filters for a control system; white-light correlator character recognition; optical AI architectures for intelligent sensors; interpreting aerial photographs by segmentation and search; and optical information processing using a new photopolymer.

  4. Stereoscopic Feature Tracking System for Retrieving Velocity of Surface Waters

    NASA Astrophysics Data System (ADS)

    Zuniga Zamalloa, C. C.; Landry, B. J.

    2017-12-01

    The present work is concerned with retrieving the surface velocity of flows using a stereoscopic setup and finding the correspondence between the images via feature tracking (FT). Feature tracking provides the key benefit of substantially reducing the level of user input. In contrast to other commonly used methods (e.g., normalized cross-correlation), FT does not require the user to prescribe interrogation window sizes and removes the need for masking when specularities are present. The results of the current FT methodology are comparable to those obtained via Large Scale Particle Image Velocimetry while requiring little to no user input, which allowed for rapid, automated processing of imagery.
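
    As a rough illustration of such an FT pipeline, the sketch below detects Shi-Tomasi corners and tracks them with pyramidal Lucas-Kanade optical flow using OpenCV. It is a generic stand-in for the authors' tracker; the file names, frame rate, and parameter values are assumptions.

        # Sketch: feature tracking between two frames with Shi-Tomasi corners
        # and pyramidal Lucas-Kanade optical flow. Generic FT pipeline in the
        # spirit of the paper; file names and parameters are placeholders.
        import cv2
        import numpy as np

        prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
        curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

        # Detect features without any user-defined interrogation windows
        p0 = cv2.goodFeaturesToTrack(prev, maxCorners=500, qualityLevel=0.01,
                                     minDistance=7)

        # Track them into the next frame
        p1, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)

        good0, good1 = p0[status == 1], p1[status == 1]
        dt = 1.0 / 30.0                      # frame interval (assumed 30 fps)
        velocities = (good1 - good0) / dt    # pixel/s; scale to m/s via calibration
        print(velocities.mean(axis=0))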

  5. Multi-scale Gaussian representation and outline-learning based cell image segmentation.

    PubMed

    Farhan, Muhammad; Ruusuvuori, Pekka; Emmenlauer, Mario; Rämö, Pauli; Dehio, Christoph; Yli-Harja, Olli

    2013-01-01

    High-throughput genome-wide screening to study gene-specific functions, e.g. for drug discovery, demands fast automated image analysis methods to assist in unraveling the full potential of such studies. Image segmentation is typically at the forefront of such analysis as the performance of the subsequent steps, for example, cell classification, cell tracking etc., often relies on the results of segmentation. We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from image background using a novel approach based on image enhancement and the coefficient of variation of a multi-scale Gaussian scale-space representation. A novel outline-learning based classification method is developed using regularized logistic regression with embedded feature selection which classifies image pixels as outline/non-outline to give cytoplasm outlines. Refinement of the detected outlines to separate cells from each other is performed in a post-processing step where the nuclei segmentation is used as contextual information. We evaluate the proposed segmentation methodology using two challenging test cases, presenting images with completely different characteristics, with cells of varying size, shape, texture and degrees of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set, that is, only 7 and 5 features for the two cases. Quantitative comparison of the results for the two test cases against state-of-the-art methods shows that our methodology outperforms them with an increase of 4-9% in segmentation accuracy and a maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks.
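
    The sketch below illustrates the framework's two main ingredients on toy data: the per-pixel coefficient of variation over a Gaussian scale-space as a foreground cue, and an L1-regularized logistic regression whose embedded feature selection yields the kind of sparse outline model described above. The thresholds, feature set, and labels are placeholders, not the paper's.

        # Sketch: coefficient of variation over a Gaussian scale-space as a
        # foreground cue, plus L1-regularized logistic regression for sparse
        # outline classification. Toy data throughout.
        import numpy as np
        from scipy.ndimage import gaussian_filter
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        img = rng.random((128, 128))

        # Multi-scale Gaussian scale-space and its per-pixel coefficient of variation
        scales = [1, 2, 4, 8, 16]
        stack = np.stack([gaussian_filter(img, s) for s in scales])
        cv_map = stack.std(axis=0) / (stack.mean(axis=0) + 1e-12)
        foreground = cv_map > np.percentile(cv_map, 75)   # crude cytoplasm mask
        print(foreground.mean(), "of pixels flagged as cytoplasm")

        # Sparse outline/non-outline pixel classifier (embedded feature selection)
        X = rng.random((1000, 50))                  # generic per-pixel feature set
        y = rng.integers(0, 2, 1000)                # outline labels (toy)
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
        print((clf.coef_ != 0).sum(), "features kept")  # small sparse model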

  6. Multi-scale Gaussian representation and outline-learning based cell image segmentation

    PubMed Central

    2013-01-01

    Background High-throughput genome-wide screening to study gene-specific functions, e.g. for drug discovery, demands fast automated image analysis methods to assist in unraveling the full potential of such studies. Image segmentation is typically at the forefront of such analysis as the performance of the subsequent steps, for example, cell classification, cell tracking etc., often relies on the results of segmentation. Methods We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from image background using a novel approach based on image enhancement and the coefficient of variation of a multi-scale Gaussian scale-space representation. A novel outline-learning based classification method is developed using regularized logistic regression with embedded feature selection which classifies image pixels as outline/non-outline to give cytoplasm outlines. Refinement of the detected outlines to separate cells from each other is performed in a post-processing step where the nuclei segmentation is used as contextual information. Results and conclusions We evaluate the proposed segmentation methodology using two challenging test cases, presenting images with completely different characteristics, with cells of varying size, shape, texture and degrees of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set, that is, only 7 and 5 features for the two cases. Quantitative comparison of the results for the two test cases against state-of-the-art methods shows that our methodology outperforms them with an increase of 4-9% in segmentation accuracy and a maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks. PMID:24267488

  7. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection

    PubMed Central

    Ou, Yangming; Resnick, Susan M.; Gur, Ruben C.; Gur, Raquel E.; Satterthwaite, Theodore D.; Furth, Susan; Davatzikos, Christos

    2016-01-01

    Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past 5 or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data-type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via the use of several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used for processing thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package, and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web-based platform for remote processing of medical images. PMID:26679328
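
    The simplest consensus step underlying any such ensemble is a per-voxel majority vote over the warped atlas labelings, sketched below. MUSE goes further, weighting atlases by local similarity and modulating boundaries by the target's intensity profile; those refinements are omitted here, and the ensemble is synthetic.

        # Sketch: per-voxel majority vote over an ensemble of warped atlas
        # labelings, the simplest consensus step in multi-atlas segmentation.
        # MUSE's similarity ranking and boundary modulation are not shown.
        import numpy as np

        def majority_vote(labelings):
            """labelings: array (n_atlases, X, Y, Z) of integer labels."""
            n_labels = labelings.max() + 1
            # Count votes per label at each voxel, then take the argmax
            votes = np.stack([(labelings == k).sum(axis=0) for k in range(n_labels)])
            return votes.argmax(axis=0)

        ensemble = np.random.default_rng(2).integers(0, 4, size=(15, 32, 32, 32))
        fused = majority_vote(ensemble)
        print(fused.shape)   # (32, 32, 32)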

  8. A false-alarm aware methodology to develop robust and efficient multi-scale infrared small target detection algorithm

    NASA Astrophysics Data System (ADS)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2018-03-01

    False alarm rate and detection rate are still two contradictory metrics for infrared small target detection in an infrared search and track (IRST) system, despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, in this paper a false alarm aware methodology is presented to reduce the false alarm rate while the detection rate remains undegraded. To this end, the advantages and disadvantages of each detection algorithm are investigated and the sources of the false alarms are determined. Two target detection algorithms having independent false alarm sources are chosen in such a way that the disadvantages of one algorithm can be compensated by the advantages of the other. In this work, multi-scale average absolute gray difference (AAGD) and Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm has good capability for real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images demonstrate the effectiveness and performance of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, our proposed methodology is expandable to any pair of detection algorithms which have different false alarm sources.
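
    As one plausible reading of a multi-scale AAGD-style detector (for illustration only, not the paper's exact filter), the sketch below scores each pixel by the absolute difference between a local inner mean and a larger surrounding mean, taking the maximum response over several scales.

        # Sketch: a multi-scale average-absolute-gray-difference-style small
        # target response: inner mean vs. larger surrounding mean at several
        # scales. One plausible reading of AAGD, for illustration only.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def aagd_response(img, scales=(3, 5, 7, 9)):
            img = img.astype(float)
            responses = []
            for s in scales:
                inner = uniform_filter(img, size=s)          # candidate target mean
                outer = uniform_filter(img, size=3 * s)      # local background mean
                responses.append(np.abs(inner - outer))
            return np.max(responses, axis=0)                 # best scale per pixel

        frame = np.random.default_rng(3).random((256, 256))
        detection_map = aagd_response(frame)
        targets = detection_map > detection_map.mean() + 4 * detection_map.std()
        print(targets.sum(), "candidate detections")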

  9. Quantitative analysis of histopathological findings using image processing software.

    PubMed

    Horai, Yasushi; Kakimoto, Tetsuhiro; Takemoto, Kana; Tanaka, Masaharu

    2017-10-01

    In evaluating pathological changes in drug efficacy and toxicity studies, morphometric analysis can be quite robust. In this experiment, we examined whether morphometric changes of major pathological findings in various tissue specimens stained with hematoxylin and eosin could be recognized and quantified using image processing software. Using Tissue Studio, hypertrophy of hepatocytes and adrenocortical cells could be quantified based on the method of a previous report, but the regions of red pulp, white pulp, and marginal zones in the spleen could not be recognized when using a single setting condition. Using Image-Pro Plus, lipid-derived vacuoles in the liver and mucin-derived vacuoles in the intestinal mucosa could be quantified using two criteria (area and/or roundness). Vacuoles derived from phospholipid could not be quantified when small lipid deposits coexisted in the liver and adrenal cortex. Mononuclear inflammatory cell infiltration in the liver could be quantified to some extent, except for specimens with many clustered infiltrating cells. Adipocyte size and the mean linear intercept could be quantified easily and efficiently using morphological processing and the macro tool provided in Image-Pro Plus. These methodologies are expected to form a base system that can recognize morphometric features and quantitatively analyze pathological findings through the use of information technology.
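
    The area and roundness criteria described above can be reproduced generically with scikit-image region properties, as in the sketch below; the mask, size threshold, and roundness cutoff are illustrative rather than the study's Image-Pro Plus settings.

        # Sketch: quantifying vacuole-like structures by area and roundness,
        # mirroring the two criteria described above. Thresholds are
        # illustrative, not the study's settings.
        import numpy as np
        from skimage.measure import label, regionprops

        binary = np.random.default_rng(4).random((512, 512)) > 0.995  # toy mask
        labeled = label(binary)

        vacuoles = []
        for region in regionprops(labeled):
            area = region.area
            # Roundness = 4*pi*A / P^2 (1.0 for a perfect circle)
            roundness = 4 * np.pi * area / (region.perimeter ** 2 + 1e-12)
            if area >= 5 and roundness >= 0.5:      # area and/or roundness criteria
                vacuoles.append((region.label, area, roundness))

        print(len(vacuoles), "objects passed both criteria")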

  10. Unmanned Aerial Vehicle Systems for Remote Estimation of Flooded Areas Based on Complex Image Processing.

    PubMed

    Popescu, Dan; Ichim, Loretta; Stoican, Florin

    2017-02-23

    Floods are the natural disasters that cause the most economic damage at the global level. Therefore, flood monitoring and damage estimation are very important for the population, authorities and insurance companies. The paper proposes an original solution to this problem, based on a hybrid network and complex image processing. As the first novelty, a multilevel system with two components, terrestrial and aerial, was proposed and designed by the authors as support for image acquisition from a delimited region. The terrestrial component contains a Ground Control Station, acting as a coordinator at a distance, which communicates via the internet with multiple Ground Data Terminals, which form a fixed-node network for data acquisition and communication. The aerial component contains mobile nodes: fixed-wing UAVs. In order to evaluate flood damage, two tasks must be accomplished by the network: area coverage and image processing. The second novelty of the paper consists of texture analysis in a deep neural network, taking into account new criteria for feature selection and patch classification. Color and spatial information extracted from the chromatic co-occurrence matrix and the mass fractal dimension were used as well. Finally, the experimental results from a real mission demonstrate the validity of the proposed methodologies and the performance of the algorithms.
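
    The chromatic co-occurrence matrix generalizes the classic gray-level co-occurrence matrix (GLCM) across color channels. The sketch below shows only the standard grayscale form with scikit-image (version 0.19+ naming), as a rough indication of the kind of texture features fed to a patch classifier.

        # Sketch: classic gray-level co-occurrence matrix (GLCM) texture
        # features. The paper's chromatic co-occurrence matrix extends this
        # idea across color channels; only the grayscale form is shown.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        patch = np.random.default_rng(5).integers(0, 256, (64, 64), dtype=np.uint8)

        glcm = graycomatrix(patch, distances=[1, 2], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)

        features = {prop: graycoprops(glcm, prop).mean()
                    for prop in ("contrast", "homogeneity", "energy", "correlation")}
        print(features)   # feed such features to a patch classifier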

  11. Unmanned Aerial Vehicle Systems for Remote Estimation of Flooded Areas Based on Complex Image Processing

    PubMed Central

    Popescu, Dan; Ichim, Loretta; Stoican, Florin

    2017-01-01

    Floods are the natural disasters that cause the most economic damage at the global level. Therefore, flood monitoring and damage estimation are very important for the population, authorities and insurance companies. The paper proposes an original solution to this problem, based on a hybrid network and complex image processing. As the first novelty, a multilevel system with two components, terrestrial and aerial, was proposed and designed by the authors as support for image acquisition from a delimited region. The terrestrial component contains a Ground Control Station, acting as a coordinator at a distance, which communicates via the internet with multiple Ground Data Terminals, which form a fixed-node network for data acquisition and communication. The aerial component contains mobile nodes: fixed-wing UAVs. In order to evaluate flood damage, two tasks must be accomplished by the network: area coverage and image processing. The second novelty of the paper consists of texture analysis in a deep neural network, taking into account new criteria for feature selection and patch classification. Color and spatial information extracted from the chromatic co-occurrence matrix and the mass fractal dimension were used as well. Finally, the experimental results from a real mission demonstrate the validity of the proposed methodologies and the performance of the algorithms. PMID:28241479

  12. A high performance biometric signal and image processing method to reveal blood perfusion towards 3D oxygen saturation mapping

    NASA Astrophysics Data System (ADS)

    Imms, Ryan; Hu, Sijung; Azorin-Peris, Vicente; Trico, Michaël; Summers, Ron

    2014-03-01

    Non-contact imaging photoplethysmography (PPG) is a recent development in the field of physiological data acquisition, currently undergoing a large amount of research to characterize and define the range of its capabilities. Contact-based PPG techniques have been used broadly in clinical scenarios for a number of years to obtain direct information about the degree of oxygen saturation for patients. With the advent of imaging techniques, there is strong potential to enable access to additional information such as multi-dimensional blood perfusion and saturation mapping. The further development of effective opto-physiological monitoring techniques depends upon novel modelling techniques coupled with improved sensor design and effective signal processing methodologies. The biometric signal and imaging processing platform (bSIPP) provides a comprehensive set of features for extraction and analysis of recorded iPPG data, enabling direct comparison with other biomedical diagnostic tools such as ECG and EEG. Additionally, utilizing information about the nature of tissue structure has enabled the generation of an engineering model describing the behaviour of light during its travel through biological tissue. This enables the relative oxygen saturation and blood perfusion in different layers of the tissue to be estimated, which has the potential to be a useful diagnostic tool.

  13. Development of a table tennis robot for ball interception using visual feedback

    NASA Astrophysics Data System (ADS)

    Parnichkun, Manukid; Thalagoda, Janitha A.

    2016-07-01

    This paper presents a concept for intercepting a moving table tennis ball using a robot. The robot has four degrees of freedom (DOF), which are simplified in such a way that the system is able to perform the task within the bounded limit. It employs computer vision to localize the ball. For ball identification, Colour Based Threshold Segmentation (CBTS) and Background Subtraction (BS) methodologies are used. Coordinate Transformation (CT) is employed to transform the data from the camera coordinate frame to the general coordinate frame. The sensory system consists of two HD web cameras. Because the computation time of image processing from the web cameras is long, it is not possible to intercept the table tennis ball using image processing alone; therefore, a projectile motion model is employed to predict the final destination of the ball.
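
    A minimal drag-free version of such a projectile prediction is sketched below: the ball's velocity is estimated from two consecutive 3-D fixes and the trajectory is solved for the crossing of a target plane. The positions, frame rate, and interception height are placeholders; a real system would also model drag and spin.

        # Sketch: predicting where a ball crosses a given plane from two
        # localized 3-D positions, using a drag-free projectile model.
        # All numbers are illustrative.
        import numpy as np

        g = np.array([0.0, 0.0, -9.81])           # gravity, m/s^2

        def predict_at_height(p0, p1, dt, z_target):
            """Estimate velocity from two fixes, then solve for the crossing point."""
            v = (p1 - p0) / dt                     # finite-difference velocity
            # Solve z(t) = z1 + vz*t + 0.5*gz*t^2 = z_target for the later root
            a, b, c = 0.5 * g[2], v[2], p1[2] - z_target
            t = (-b - np.sqrt(b * b - 4 * a * c)) / (2 * a)
            return p1 + v * t + 0.5 * g * t * t

        p0 = np.array([2.00, 0.10, 0.40])         # ball fixes from stereo vision (m)
        p1 = np.array([1.90, 0.11, 0.42])
        print(predict_at_height(p0, p1, dt=1 / 30, z_target=0.30))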

  14. Revealing 3D Ultrastructure and Morphology of Stem Cell Spheroids by Electron Microscopy.

    PubMed

    Jaros, Josef; Petrov, Michal; Tesarova, Marketa; Hampl, Ales

    2017-01-01

    Cell culture methods have been developed in efforts to produce biologically relevant systems for developmental and disease modeling, and appropriate analytical tools are essential. Knowledge of ultrastructural characteristics provides the basis for revealing in situ the cellular morphology, cell-cell interactions, organelle distribution, niches in which cells reside, and much more. The traditional method for 3D visualization of ultrastructural components, serial sectioning using transmission electron microscopy (TEM), is very labor-intensive owing to the painstaking TEM slice preparation and the subsequent image processing of the whole collection. In this chapter, we present serial block-face scanning electron microscopy, together with a complex methodology for spheroid formation, contrasting of cellular compartments, image processing, and 3D visualization. The described technique is effective for detailed morphological analysis of stem cell spheroids, organoids, as well as organotypic cell cultures.

  15. Determination of injection molding process windows for optical lenses using response surface methodology.

    PubMed

    Tsai, Kuo-Ming; Wang, He-Yi

    2014-08-20

    This study focuses on determining injection molding process windows for obtaining optimal imaging optical properties (astigmatism, coma, and spherical aberration) of plastic lenses. The Taguchi experimental method was first used to identify the optimized combination of parameters and the significant factors affecting the imaging optical properties of the lens. Full factorial experiments were then implemented based on the significant factors to build the response surface models. The injection molding process windows for lenses with optimized optical properties were determined based on the surface models, and confirmation experiments were performed to verify their validity. The results indicated that the significant factors affecting the optical properties of lenses are mold temperature, melt temperature, and cooling time. According to the experimental data for the significant factors, the oblique ovals delimiting the process windows for the different optical properties, in terms of melt temperature and cooling time, can be obtained using a curve-fitting approach. The confirmation experiments revealed that the average errors for astigmatism, coma, and spherical aberration are 3.44%, 5.62%, and 5.69%, respectively. The results indicated that the proposed process windows are highly reliable.
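
    A second-order response surface of the kind used here can be fitted and thresholded as in the sketch below, which regresses a synthetic aberration measurement on melt temperature and cooling time and marks the region satisfying a specification limit. All numbers are placeholders, not the study's data.

        # Sketch: fitting a second-order response surface over two significant
        # factors (melt temperature, cooling time) and thresholding it to
        # outline a process window. Synthetic placeholder data.
        import numpy as np
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import LinearRegression

        # Full-factorial design over the two significant factors
        melt_T = np.array([230, 230, 230, 250, 250, 250, 270, 270, 270], float)
        cool_t = np.array([10, 20, 30, 10, 20, 30, 10, 20, 30], float)
        astig = np.array([0.08, 0.05, 0.06, 0.04, 0.02, 0.03, 0.07, 0.05, 0.06])

        X = np.column_stack([melt_T, cool_t])
        quad = PolynomialFeatures(degree=2, include_bias=False)
        model = LinearRegression().fit(quad.fit_transform(X), astig)

        # Evaluate the surface on a grid; the process window is the region
        # where the predicted aberration stays below a specification limit.
        Tg, tg = np.meshgrid(np.linspace(230, 270, 50), np.linspace(10, 30, 50))
        grid = np.column_stack([Tg.ravel(), tg.ravel()])
        pred = model.predict(quad.transform(grid)).reshape(Tg.shape)
        window = pred < 0.04                      # spec limit (illustrative)
        print(window.mean(), "of the grid lies inside the process window")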

  16. Measurement of Galactic Logarithmic Spiral Arm Pitch Angle Using Two-dimensional Fast Fourier Transform Decomposition

    NASA Astrophysics Data System (ADS)

    Davis, Benjamin L.; Berrier, Joel C.; Shields, Douglas W.; Kennefick, Julia; Kennefick, Daniel; Seigar, Marc S.; Lacy, Claud H. S.; Puerari, Ivânio

    2012-04-01

    A logarithmic spiral is a prominent feature appearing in a majority of observed galaxies. This feature has long been associated with the traditional Hubble classification scheme, but historically quoted pitch angles of spiral galaxies have been almost exclusively qualitative. We have developed a methodology, utilizing two-dimensional fast Fourier transformations of images of spiral galaxies, in order to isolate and measure the pitch angles of their spiral arms. Our technique provides a quantitative way to measure this morphological feature. This will allow comparison of spiral galaxy pitch angle to other galactic parameters and tests of spiral arm genesis theories. In this work, we detail our image processing and analysis of spiral galaxy images and discuss the robustness of our analysis techniques.

  17. Self-registering spread-spectrum barcode method

    DOEpatents

    Cummings, Eric B.; Even Jr., William R.

    2004-11-09

    A novel spread spectrum barcode methodology is disclosed that allows a barcode to be read in its entirety even when a significant fraction or majority of the barcode is obscured. The barcode methodology makes use of registration or clocking information that is distributed along with the encoded user data across the barcode image. This registration information allows for the barcode image to be corrected for imaging distortion such as zoom, rotation, tilt, curvature, and perspective.

  18. A Chip and Pixel Qualification Methodology on Imaging Sensors

    NASA Technical Reports Server (NTRS)

    Chen, Yuan; Guertin, Steven M.; Petkov, Mihail; Nguyen, Duc N.; Novak, Frank

    2004-01-01

    This paper presents a qualification methodology for imaging sensors. In addition to overall chip reliability characterization based on the sensor's overall figures of merit, such as Dark Rate, Linearity, Dark Current Non-Uniformity, Fixed Pattern Noise and Photon Response Non-Uniformity, a simulation technique is proposed and used to project pixel reliability. The projected pixel reliability is directly related to imaging quality and provides additional sensor reliability information and performance control.

  19. Dynamic Denoising of Tracking Sequences

    PubMed Central

    Michailovich, Oleg; Tannenbaum, Allen

    2009-01-01

    In this paper, we describe an approach to the problem of simultaneously enhancing image sequences and tracking the objects of interest that they represent. The enhancement part of the algorithm is based on Bayesian wavelet denoising, which has been chosen due to its exceptional ability to incorporate diverse a priori information into the process of image recovery. In particular, we demonstrate that, in dynamic settings, useful statistical priors can come both from some reasonable assumptions on the properties of the image to be enhanced as well as from the images that have already been observed before the current scene. Using such priors forms the main contribution of the present paper, namely the proposal of dynamic denoising as a tool for simultaneously enhancing and tracking image sequences. Within the proposed framework, the previous observations of a dynamic scene are employed to enhance its present observation. The mechanism that allows the fusion of the information within successive image frames is Bayesian estimation, while transferring the useful information between the images is governed by a Kalman filter that is used for both prediction and estimation of the dynamics of tracked objects. Therefore, in this methodology, the processes of target tracking and image enhancement “collaborate” in an interlacing manner, rather than being applied separately. The dynamic denoising is demonstrated on several examples of SAR imagery. The results demonstrated in this paper indicate a number of advantages of the proposed dynamic denoising over “static” approaches, in which the tracking images are enhanced independently of each other. PMID:18482881
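
    The inter-frame information transfer described above rests on the standard Kalman predict/update recursion, sketched below for a generic one-dimensional constant-velocity target; the Bayesian wavelet denoising stage and the SAR imagery are omitted.

        # Sketch: the constant-velocity Kalman recursion that carries
        # information between frames in such a tracking/denoising loop.
        # Generic 1-D toy; the wavelet denoising stage is not shown.
        import numpy as np

        F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (pos, vel)
        H = np.array([[1.0, 0.0]])               # we observe position only
        Q = 1e-3 * np.eye(2)                     # process noise
        R = np.array([[0.25]])                   # measurement noise

        x = np.array([0.0, 0.0])                 # state estimate
        P = np.eye(2)                            # state covariance

        for z in [0.9, 2.1, 2.9, 4.2, 5.0]:      # noisy positions from the images
            # Predict
            x = F @ x
            P = F @ P @ F.T + Q
            # Update
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + K @ (np.array([z]) - H @ x)
            P = (np.eye(2) - K @ H) @ P

        print(x)   # fused position/velocity estimate after the sequence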

  20. Effect of Experience of Use on The Process of Formation of Stereotype Images on Shapes of Products

    NASA Astrophysics Data System (ADS)

    Kwak, Yong-Min; Yamanaka, Toshimasa

    It is necessary to explain the terms used in this research to help readers better understand its contents. Originally, stereotype meant the lead plate cast from a mold in letterpress printing, but it is now used as a term for a simplified and fixed notion toward a certain group (“knowledge in fixed form”), or for an image simplified and generalized over the members of a certain group.[1] Stereotypes are generally invoked in negative cases, but they have both positive and negative sides.[2, 3] I believe that research on the factors that form stereotype[4] images commonly felt by a large number of people may suggest a new research methodology for areas that require a high level of creative thinking, such as design and research on emotions. Stereotype images arise for persons and groups, and for the images of countries, enterprises, and other organizations. For example, as in the commonly heard remark ‘He may be oo because he is from oo’, we hold strong images of characteristics shared by persons belonging to certain categories, tying regions and persons to dividing categories.[5, 6] In this research, I define such images as stereotype images. The same phenomenon appears for articles used in daily life. In this research, I established the hypothesis that stereotype images also exist for products and verified it through experiments.

  1. The Application of MRI for Depiction of Subtle Blood Brain Barrier Disruption in Stroke

    PubMed Central

    Israeli, David; Tanne, David; Daniels, Dianne; Last, David; Shneor, Ran; Guez, David; Landau, Efrat; Roth, Yiftach; Ocherashvilli, Aharon; Bakon, Mati; Hoffman, Chen; Weinberg, Amit; Volk, Talila; Mardor, Yael

    2011-01-01

    The development of imaging methodologies for detecting blood-brain-barrier (BBB) disruption may help predict stroke patients' propensity to develop hemorrhagic complications following reperfusion. We have developed a delayed contrast extravasation MRI-based methodology enabling real-time depiction of subtle BBB abnormalities in humans with high sensitivity to BBB disruption and high spatial resolution. The increased sensitivity to subtle BBB disruption is obtained by acquiring T1-weighted MRI at relatively long delays (~15 minutes) after contrast injection and subtracting from them images acquired immediately after contrast administration. In addition, the relatively long delays allow for acquisition of high resolution images resulting in high resolution BBB disruption maps. The sensitivity is further increased by image preprocessing with corrections for intensity variations and with whole body (rigid+elastic) registration. Since only two separate time points are required, the time between the two acquisitions can be used for acquiring routine clinical data, keeping the total imaging time to a minimum. A proof of concept study was performed in 34 patients with ischemic stroke and 2 patients with brain metastases undergoing high resolution T1-weighted MRI acquired at 3 time points after contrast injection. The MR images were pre-processed and subtracted to produce BBB disruption maps. BBB maps of patients with brain metastases and ischemic stroke presented different patterns of BBB opening. The significant advantage of the long extravasation time was demonstrated by a dynamic-contrast-enhancement study performed continuously for 18 min. The high sensitivity of our methodology enabled depiction of clear BBB disruption in 27% of the stroke patients who did not have abnormalities on conventional contrast-enhanced MRI. In 36% of the patients, who had abnormalities detectable by conventional MRI, the BBB disruption volumes were significantly larger in the maps than in conventional MRI. These results demonstrate the advantages of delayed contrast extravasation in increasing the sensitivity to subtle BBB disruption in ischemic stroke patients. The calculated disruption maps provide clear depiction of significant volumes of BBB disruption unattainable by conventional contrast-enhanced MRI. PMID:21209786
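
    Stripped of the registration and clinical detail, the core of such a delayed-extravasation map is a normalized subtraction of the early post-contrast volume from the late one, as in the toy sketch below; the normalization, delay, and threshold are placeholders for the study's actual preprocessing.

        # Sketch: the core subtraction step behind delayed-extravasation maps,
        # after registration and intensity correction (both assumed done).
        # Positive values mark voxels where contrast accumulated over time.
        import numpy as np

        rng = np.random.default_rng(6)
        early = rng.random((64, 64, 32))                 # T1w shortly after injection
        late = early + 0.02 * rng.random((64, 64, 32))   # T1w ~15 min later (toy)

        # Normalize each volume by its median intensity to correct global drift
        early_n = early / np.median(early)
        late_n = late / np.median(late)

        bbb_map = late_n - early_n                # delayed enhancement map
        disrupted = bbb_map > (bbb_map.mean() + 3 * bbb_map.std())
        print(disrupted.sum(), "voxels flagged as subtle BBB disruption")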

  2. The application of MRI for depiction of subtle blood brain barrier disruption in stroke.

    PubMed

    Israeli, David; Tanne, David; Daniels, Dianne; Last, David; Shneor, Ran; Guez, David; Landau, Efrat; Roth, Yiftach; Ocherashvilli, Aharon; Bakon, Mati; Hoffman, Chen; Weinberg, Amit; Volk, Talila; Mardor, Yael

    2010-12-26

    The development of imaging methodologies for detecting blood-brain-barrier (BBB) disruption may help predict stroke patients' propensity to develop hemorrhagic complications following reperfusion. We have developed a delayed contrast extravasation MRI-based methodology enabling real-time depiction of subtle BBB abnormalities in humans with high sensitivity to BBB disruption and high spatial resolution. The increased sensitivity to subtle BBB disruption is obtained by acquiring T1-weighted MRI at relatively long delays (~15 minutes) after contrast injection and subtracting from them images acquired immediately after contrast administration. In addition, the relatively long delays allow for acquisition of high resolution images resulting in high resolution BBB disruption maps. The sensitivity is further increased by image preprocessing with corrections for intensity variations and with whole body (rigid+elastic) registration. Since only two separate time points are required, the time between the two acquisitions can be used for acquiring routine clinical data, keeping the total imaging time to a minimum. A proof of concept study was performed in 34 patients with ischemic stroke and 2 patients with brain metastases undergoing high resolution T1-weighted MRI acquired at 3 time points after contrast injection. The MR images were pre-processed and subtracted to produce BBB disruption maps. BBB maps of patients with brain metastases and ischemic stroke presented different patterns of BBB opening. The significant advantage of the long extravasation time was demonstrated by a dynamic-contrast-enhancement study performed continuously for 18 min. The high sensitivity of our methodology enabled depiction of clear BBB disruption in 27% of the stroke patients who did not have abnormalities on conventional contrast-enhanced MRI. In 36% of the patients, who had abnormalities detectable by conventional MRI, the BBB disruption volumes were significantly larger in the maps than in conventional MRI. These results demonstrate the advantages of delayed contrast extravasation in increasing the sensitivity to subtle BBB disruption in ischemic stroke patients. The calculated disruption maps provide clear depiction of significant volumes of BBB disruption unattainable by conventional contrast-enhanced MRI.

  3. Real-Time Measurement of Width and Height of Weld Beads in GMAW Processes

    PubMed Central

    Pinto-Lopera, Jesús Emilio; S. T. Motta, José Mauricio; Absi Alfaro, Sadek Crisostomo

    2016-01-01

    Associated with weld quality, the weld bead geometry is one of the most important parameters in welding processes. It is a significant requirement in a welding project, especially in automatic welding systems where a specific width, height, or penetration of the weld bead is needed. This paper presents a novel technique for real-time measurement of the width and height of weld beads in gas metal arc welding (GMAW) using a single high-speed camera and a long-pass optical filter in a passive vision system. The measuring method is based on digital image processing techniques, and the image calibration process is based on projective transformations. The measurement process takes less than 3 milliseconds per image, which allows a transfer rate of more than 300 frames per second. The proposed methodology can be used in any metal transfer mode of a gas metal arc welding process and does not have occlusion problems. The responses of the measurement system presented here are in good agreement with off-line data collected by a common laser-based 3D scanner. Each measurement was compared using a statistical Welch’s t-test, which in no case rejected the null hypothesis at the significance level α = 0.01, validating the results and the performance of the proposed vision system. PMID:27649198
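
    The projective calibration step can be illustrated with OpenCV's homography utilities: four pixel-to-millimetre correspondences from a calibration target define a perspective transform, after which bead edge points detected in the image can be measured metrically. The correspondences below are placeholders.

        # Sketch: projective (homography) calibration used to convert pixel
        # measurements of a weld bead into millimetres. Correspondences are
        # placeholders for a real calibration target.
        import cv2
        import numpy as np

        # Pixel corners of a known 20 mm x 20 mm calibration square in the image
        src = np.float32([[102, 88], [398, 95], [390, 402], [95, 395]])
        dst = np.float32([[0, 0], [20, 0], [20, 20], [0, 20]])   # mm coordinates

        Hmat = cv2.getPerspectiveTransform(src, dst)

        def to_mm(points_px):
            """Map Nx2 pixel coordinates to metric (mm) world coordinates."""
            pts = points_px.reshape(-1, 1, 2).astype(np.float32)
            return cv2.perspectiveTransform(pts, Hmat).reshape(-1, 2)

        # Width of the bead = distance between its two detected edge points
        edges_px = np.array([[210.0, 240.0], [262.0, 243.0]])
        left, right = to_mm(edges_px)
        print("bead width:", np.linalg.norm(right - left), "mm")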

  4. Molecular imaging and the unification of multilevel mechanisms and data in medical physics.

    PubMed

    Nikiforidis, George C; Sakellaropoulos, George C; Kagadis, George C

    2008-08-01

    Molecular imaging (MI) constitutes a recently developed imaging approach, in which modalities and agents have been reinvented and used in novel combinations in order to expose and measure biologic processes occurring at molecular and cellular levels. It is an approach that bridges the gap between modalities acquiring data at high (e.g., computed tomography, magnetic resonance imaging, and positron-emitting isotopes) and low (e.g., PCR, microarrays) levels of biological organization. While data integration methodologies will lead to improved diagnostic and prognostic performance, interdisciplinary collaboration, triggered by MI, will result in a better perception of the underlying biological mechanisms. Toward the development of a unifying theory describing these mechanisms, medical physicists can formulate new hypotheses, provide the physical constraints bounding them, and consequently design appropriate experiments. Their new scientific and working environment calls for interventions in their syllabi to educate scientists with enhanced capabilities for holistic views and synthesis.

  5. Image denoising in mixed Poisson-Gaussian noise.

    PubMed

    Luisier, Florian; Blu, Thierry; Unser, Michael

    2011-03-01

    We propose a general methodology (PURE-LET) to design and optimize a wide class of transform-domain thresholding algorithms for denoising images corrupted by mixed Poisson-Gaussian noise. We express the denoising process as a linear expansion of thresholds (LET) that we optimize by relying on a purely data-adaptive unbiased estimate of the mean-squared error (MSE), derived in a non-Bayesian framework (PURE: Poisson-Gaussian unbiased risk estimate). We provide a practical approximation of this theoretical MSE estimate for the tractable optimization of arbitrary transform-domain thresholding. We then propose a pointwise estimator for undecimated filterbank transforms, which consists of subband-adaptive thresholding functions with signal-dependent thresholds that are globally optimized in the image domain. We finally demonstrate the potential of the proposed approach through extensive comparisons with state-of-the-art techniques that are specifically tailored to the estimation of Poisson intensities. We also present denoising results obtained on real images of low-count fluorescence microscopy.

  6. Monocular correspondence detection for symmetrical objects by template matching

    NASA Astrophysics Data System (ADS)

    Vilmar, G.; Besslich, Philipp W., Jr.

    1990-09-01

    We describe a possibility to reconstruct 3-D information from a single view of a 3-D bilaterally symmetric object. The symmetry assumption allows us to obtain a “second view” from a different viewpoint by a simple reflection of the monocular image. We therefore have to solve the correspondence problem in a special case where known feature-based or area-based binocular approaches fail. In principle, our approach is based on frequency-domain template matching of the features on the epipolar lines. During a training period, our system “learns” the assignment of correspondence models to image features. The object shape is interpolated when no template matches the image features. This is an important advantage of the methodology, because no “real world” image holds the symmetry assumption perfectly. To simplify the training process we used single views of human faces (e.g., passport photos), but our system is trainable on any other kind of objects.

  7. Reversible integer wavelet transform for blind image hiding method

    PubMed Central

    Bibi, Nargis; Mahmood, Zahid; Akram, Tallha; Naqvi, Syed Rameez

    2017-01-01

    In this article, a blind, reversible data hiding methodology for embedding secret data into a cover image is proposed. The key advantage of this research work is to resolve the privacy and secrecy issues raised during data transmission over the internet. Firstly, the data are decomposed into sub-bands using integer wavelets. For decomposition, the Fresnelet transform is utilized, which encrypts the secret data by choosing a unique key parameter to construct a dummy pattern. The dummy pattern is then embedded into an approximation sub-band of the cover image. Our proposed method exhibits high capacity and great imperceptibility of the embedded secret data. With the utilization of a family of integer wavelets, the proposed approach becomes more efficient for the hiding and retrieval process. It retrieves the hidden data blindly, without requiring the original cover image. PMID:28498855
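
    The reversibility that such schemes rely on comes from integer-to-integer lifting wavelets, as in the minimal Haar lifting sketch below; the Fresnelet-based encryption and embedding stages are not shown.

        # Sketch: a reversible integer Haar transform via lifting, the
        # property that lets hidden data be embedded and later removed
        # without loss. Encryption/embedding stages are not shown.
        import numpy as np

        def haar_int_forward(x):
            a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
            d = a - b                      # detail coefficients
            s = b + (d >> 1)               # approximation (integer average)
            return s, d

        def haar_int_inverse(s, d):
            b = s - (d >> 1)               # exact, since s = b + (d >> 1)
            a = d + b
            x = np.empty(2 * len(s), dtype=np.int64)
            x[0::2], x[1::2] = a, b
            return x

        pixels = np.array([12, 200, 37, 41, 250, 3, 98, 99])
        s, d = haar_int_forward(pixels)
        assert np.array_equal(haar_int_inverse(s, d), pixels)   # perfectly reversible
        print(s, d)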

  8. A RESTful image gateway for multiple medical image repositories.

    PubMed

    Valente, Frederico; Viana-Ferreira, Carlos; Costa, Carlos; Oliveira, José Luis

    2012-05-01

    Mobile technologies are increasingly important components in telemedicine systems and are becoming powerful decision support tools. Universal access to data may already be achieved by resorting to the latest generation of tablet devices and smartphones. However, the protocols employed for communicating with image repositories are not suited to exchange data with mobile devices. In this paper, we present an extensible approach to solving the problem of querying and delivering data in a format that is suitable for the bandwidth and graphic capacities of mobile devices. We describe a three-tiered component-based gateway that acts as an intermediary between medical applications and a number of Picture Archiving and Communication Systems (PACS). The interface with the gateway is accomplished using Hypertext Transfer Protocol (HTTP) requests following a Representational State Transfer (REST) methodology, which relieves developers from dealing with complex medical imaging protocols and allows the processing of data on the server side.

  9. Observer efficiency in discrimination tasks simulating malignant and benign breast lesions imaged with ultrasound

    PubMed Central

    Abbey, Craig K.; Zemp, Roger J.; Liu, Jie; Lindfors, Karen K.; Insana, Michael F.

    2009-01-01

    We investigate and extend the ideal observer methodology developed by Smith and Wagner to detection and discrimination tasks related to breast sonography. We provide a numerical approach for evaluating the ideal observer acting on radio-frequency (RF) frame data, which involves inversion of large nonstationary covariance matrices, and we describe a power-series approach to computing this inverse. Considering a truncated power series suggests that the RF data be Wiener-filtered before forming the final envelope image. We have compared human performance for Wiener-filtered and conventional B-mode envelope images using psychophysical studies for 5 tasks related to breast cancer classification. We find significant improvements in visual detection and discrimination efficiency in four of these five tasks. We also use the Smith-Wagner approach to distinguish between human and processing inefficiencies, and find that generally the principal limitation comes from the information lost in computing the final envelope image. PMID:16468454

  10. Classification of endoscopic capsule images by using color wavelet features, higher order statistics and radial basis functions.

    PubMed

    Lima, C S; Barbosa, D; Ramos, J; Tavares, A; Monteiro, L; Carvalho, L

    2008-01-01

    This paper presents a system to support medical diagnosis and detection of abnormal lesions by processing capsule endoscopic images. Endoscopic images possess rich information expressed by texture. Texture information can be efficiently extracted from medium scales of the wavelet transform. The set of features proposed in this paper to code textural information is named color wavelet covariance (CWC). CWC coefficients are based on the covariances of second-order textural measures, and an optimum subset of them is proposed. Third- and fourth-order moments are added to cope with distributions that tend to become non-Gaussian, especially in some pathological cases. The proposed approach is supported by a classifier based on a radial basis function procedure for the characterization of the image regions along the video frames. The whole methodology has been applied to real data containing 6 full endoscopic exams and reached 95% specificity and 93% sensitivity.

  11. Bayesian multi-scale smoothing of photon-limited images with applications to astronomy and medicine

    NASA Astrophysics Data System (ADS)

    White, John

    Multi-scale models for smoothing Poisson signals or images have gained much attention over the past decade. A new Bayesian model is developed using the concept of the Chinese restaurant process to find structures in two-dimensional images when performing image reconstruction or smoothing. This new model performs very well when compared to other leading methodologies for the same problem; it is developed and evaluated theoretically and empirically throughout Chapter 2. The newly developed Bayesian model is extended to three-dimensional images in Chapter 3. The third dimension has numerous different applications, such as different energy spectra, another spatial index, or possibly a temporal dimension. Simulation studies show that this method is promising in reducing error. A further development removes background noise in the image. This removal can further reduce the error and is done using a modeling adjustment and post-processing techniques. These details are given in Chapter 4. Applications to real-world problems are given throughout. Photon-based images are common in astronomical imaging due to the collection of different types of energy such as X-rays. Applications to real astronomical images are given, consisting of X-ray images from the Chandra X-ray Observatory satellite. Diagnostic medicine uses many types of imaging, such as magnetic resonance imaging and computed tomography, that can also benefit from smoothing techniques such as the one developed here. Reducing the amount of radiation a patient receives makes images noisier, but this can be mitigated through the use of image smoothing techniques. Both types of images represent potential real-world uses for these methods.

  12. The 1990 Goddard Conference on Space Applications of Artificial Intelligence

    NASA Technical Reports Server (NTRS)

    Rash, James L. (Editor)

    1990-01-01

    The papers presented at the 1990 Goddard Conference on Space Applications of Artificial Intelligence are given. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The proceedings fall into the following areas: Planning and Scheduling, Fault Monitoring/Diagnosis, Image Processing and Machine Vision, Robotics/Intelligent Control, Development Methodologies, Information Management, and Knowledge Acquisition.

  13. Range Image Processing for Local Navigation of an Autonomous Land Vehicle.

    DTIC Science & Technology

    1986-09-01

    ...such as doing long-term exploration missions on the surface of the planets which mankind may wish to investigate. Certainly, mankind will soon return... intelligence programming, walking technology, and vision sensors, to name but a few. The purpose of this thesis will be to investigate, by simulation... bitmap graphics, both of which are important to this simulation. Finally, the methodology for displaying the symbolic information generated by the...

  14. A Novel Semi-Supervised Methodology for Extracting Tumor Type-Specific MRS Sources in Human Brain Data

    PubMed Central

    Ortega-Martorell, Sandra; Ruiz, Héctor; Vellido, Alfredo; Olier, Iván; Romero, Enrique; Julià-Sapé, Margarida; Martín, José D.; Jarman, Ian H.; Arús, Carles; Lisboa, Paulo J. G.

    2013-01-01

    Background The clinical investigation of human brain tumors often starts with a non-invasive imaging study, providing information about the tumor extent and location, but little insight into the biochemistry of the analyzed tissue. Magnetic Resonance Spectroscopy can complement imaging by supplying a metabolic fingerprint of the tissue. This study analyzes single-voxel magnetic resonance spectra, which represent signal information in the frequency domain. Given that a single voxel may contain a heterogeneous mix of tissues, signal source identification is a relevant challenge for the problem of tumor type classification from the spectroscopic signal. Methodology/Principal Findings Non-negative matrix factorization techniques have recently shown their potential for the identification of meaningful sources from brain tissue spectroscopy data. In this study, we use a convex variant of these methods that is capable of handling negatively-valued data and generating sources that can be interpreted as tumor class prototypes. A novel approach to convex non-negative matrix factorization is proposed, in which prior knowledge about class information is utilized in model optimization. Class-specific information is integrated into this semi-supervised process by setting the metric of a latent variable space where the matrix factorization is carried out. The reported experimental study comprises 196 cases from different tumor types drawn from two international, multi-center databases. The results indicate that the proposed approach outperforms a purely unsupervised process by achieving near perfect correlation of the extracted sources with the mean spectra of the tumor types. It also improves tissue type classification. Conclusions/Significance We show that source extraction by unsupervised matrix factorization benefits from the integration of the available class information, so operating in a semi-supervised learning manner, for discriminative source identification and brain tumor labeling from single-voxel spectroscopy data. We are confident that the proposed methodology has wider applicability for biomedical signal processing. PMID:24376744

  15. Image-guided positioning and tracking.

    PubMed

    Ruan, Dan; Kupelian, Patrick; Low, Daniel A

    2011-01-01

    Radiation therapy aims at maximizing tumor control while minimizing normal tissue complication. The introduction of stereotactic treatment explores the volume effect and achieves dose escalation to tumor target with small margins. The use of ablative irradiation dose and sharp dose gradients requires accurate tumor definition and alignment between patient and treatment geometry. Patient geometry variation during treatment may significantly compromise the conformality of delivered dose and must be managed properly. Setup error and interfraction/intrafraction motion are incorporated in the target definition process by expanding the clinical target volume to planning target volume, whereas the alignment between patient and treatment geometry is obtained with an adaptive control process, by taking immediate actions in response to closely monitored patient geometry. This article focuses on the monitoring and adaptive response aspect of the problem. The term "image" in "image guidance" will be used in a most general sense, to be inclusive of some important point-based monitoring systems that can be considered as degenerate cases of imaging. Image-guided motion adaptive control, as a comprehensive system, involves a hierarchy of decisions, each of which balances simplicity versus flexibility and accuracy versus robustness. Patient specifics and machine specifics at the treatment facility also need to be incorporated into the decision-making process. Identifying operation bottlenecks from a system perspective and making informed compromises are crucial in the proper selection of image-guidance modality, the motion management mechanism, and the respective operation modes. Not intended as an exhaustive exposition, this article focuses on discussing the major issues and development principles for image-guided motion management systems. We hope this information and these methodologies will help conscientious practitioners adopt image-guided motion management systems accounting for patient and institute specifics, and embrace advances in knowledge and new technologies subsequent to the publication of this article.

  16. A methodology for evaluating detection performance of ultrasonic array imaging algorithms for coarse-grained materials.

    PubMed

    Van Pamel, Anton; Brett, Colin R; Lowe, Michael J S

    2014-12-01

    Improving the ultrasound inspection capability for coarse-grained metals remains of longstanding interest and is expected to become increasingly important for next-generation electricity power plants. Conventional ultrasonic A-, B-, and C-scans have been found to suffer from strong background noise caused by grain scattering, which can severely limit the detection of defects. However, in recent years, array probes and full matrix capture (FMC) imaging algorithms have unlocked exciting possibilities for improvements. To improve and compare these algorithms, we must rely on robust methodologies to quantify their performance. This article proposes such a methodology to evaluate the detection performance of imaging algorithms. For illustration, the methodology is applied to some example data using three FMC imaging algorithms: the total focusing method (TFM), phase-coherent imaging (PCI), and decomposition of the time-reversal operator with multiple scattering filter (DORT MSF). However, it is important to note that this is solely to illustrate the methodology; this article does not attempt the broader investigation of different cases that would be needed to compare the performance of these algorithms in general. The methodology considers the statistics of detection, presenting the detection performance as probability of detection (POD) and probability of false alarm (PFA). A test sample of coarse-grained nickel superalloy, manufactured to represent materials used for future power plant components and containing some simple artificial defects, is used to illustrate the method on the candidate algorithms. The data are captured in pulse-echo mode using 64-element array probes at center frequencies of 1 and 5 MHz. In this particular case, it turns out that all three algorithms perform very similarly when comparing their flaw detection capabilities.
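
    At its core, the evaluation sweeps a detection threshold over image amplitudes sampled at known flaw and flaw-free locations, tracing out POD against PFA; the sketch below does this on toy Gaussian scores, which stand in for the amplitudes produced by TFM, PCI, or DORT MSF.

        # Sketch: turning per-pixel image amplitudes at known flaw and
        # flaw-free locations into POD/PFA pairs by sweeping a detection
        # threshold. Scores are toy data, not real inspection amplitudes.
        import numpy as np

        rng = np.random.default_rng(7)
        flaw_scores = rng.normal(3.0, 1.0, 200)     # image amplitude at real flaws
        noise_scores = rng.normal(0.0, 1.0, 5000)   # amplitude in flaw-free regions

        thresholds = np.linspace(-2, 6, 100)
        pod = [(flaw_scores > t).mean() for t in thresholds]
        pfa = [(noise_scores > t).mean() for t in thresholds]

        # A single operating point, e.g. the threshold giving PFA = 1%
        t_star = np.quantile(noise_scores, 0.99)
        print("POD at 1% PFA:", (flaw_scores > t_star).mean())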

  17. Status and Perspectives of Neutron Imaging Facilities

    NASA Astrophysics Data System (ADS)

    Lehmann, E.; Trtik, P.; Ridikas, D.

    The methodology and the application range of neutron imaging techniques have been significantly improved at numerous facilities worldwide in the last decades. This progress has been achieved by new detector systems, the setup of dedicated, optimized and flexible beam lines, and a much better understanding of the complete imaging process thanks to complementary simulations. Furthermore, new applications and research topics have been found and implemented. However, since the quality and the number of neutron imaging facilities depend largely on access to suitable beam ports, there is still an enormous potential to implement state-of-the-art neutron imaging techniques at many more facilities. On the one hand, there are prominent and powerful sources which do not accept the implementation of neutron imaging techniques because priorities are set exclusively for neutron scattering and irradiation techniques. On the other hand, there are modern and useful devices which remain under-utilized and have neither the capacity nor the know-how to develop attractive user programs and/or industrial partnerships. In this overview of the international status of neutron imaging facilities, we specify details about the current situation.

  18. Some new classification methods for hyperspectral remote sensing

    NASA Astrophysics Data System (ADS)

    Du, Pei-jun; Chen, Yun-hao; Jones, Simon; Ferwerda, Jelle G.; Chen, Zhi-jun; Zhang, Hua-peng; Tan, Kun; Yin, Zuo-xia

    2006-10-01

    Hyperspectral Remote Sensing (HRS) is one of the most significant recent achievements of Earth Observation technology. Classification is the most commonly employed processing methodology. In this paper three new hyperspectral RS image classification methods are analyzed: object-oriented HRS image classification, HRS image classification based on information fusion, and HRS image classification by Back Propagation Neural Network (BPNN). An OMIS HRS image is used as the example data. Object-oriented techniques have gained popularity for RS image classification in recent years. In such a method, image segmentation is first used to extract regions from the pixel information based on homogeneity criteria; spectral parameters such as the mean vector, texture and NDVI, and spatial/shape parameters such as aspect ratio, convexity, solidity, roundness and orientation, are then calculated for each region; finally, the image is classified using the region feature vectors and suitable classifiers such as an artificial neural network (ANN). This proves that object-oriented methods can improve classification accuracy, since they utilize information and features both from the point and its neighborhood, and the processing unit is a polygon (in which all pixels are homogeneous and belong to the same class). HRS image classification based on information fusion initially divides all bands of the image into different groups and extracts features from every group according to the properties of each group. Three levels of information fusion, data-level fusion, feature-level fusion and decision-level fusion, are used in HRS image classification. An Artificial Neural Network (ANN) can perform well in RS image classification. In order to promote the advances of ANNs for HRS image classification, the Back Propagation Neural Network (BPNN), the most commonly used neural network, is applied to HRS image classification.

  19. How to Reduce Head CT Orders in Children with Hydrocephalus Using the Lean Six Sigma Methodology: Experience at a Major Quaternary Care Academic Children's Center.

    PubMed

    Tekes, A; Jackson, E M; Ogborn, J; Liang, S; Bledsoe, M; Durand, D J; Jallo, G; Huisman, T A G M

    2016-06-01

    Lean Six Sigma methodology is increasingly used to drive improvement in patient safety, quality of care, and cost-effectiveness throughout the US health care delivery system. To demonstrate our value as specialists, radiologists can combine lean methodologies with imaging expertise to optimize imaging elements-of-care pathways. In this article, we describe a Lean Six Sigma project with the goal of reducing the relative use of pediatric head CTs in our population of patients with hydrocephalus by 50% within 6 months. We applied a Lean Six Sigma methodology using a multidisciplinary team at a quaternary care academic children's center. The existing baseline imaging practice for hydrocephalus was outlined in a Kaizen session, and potential interventions were discussed. An improved radiation-free workflow with ultrafast MR imaging was created. Baseline data were collected for 3 months by using the departmental radiology information system. Data collection continued postintervention and during the control phase (each for 3 months). The percentage of neuroimaging per technique (head CT, head ultrasound, ultrafast brain MR imaging, and routine brain MR imaging) was recorded during each phase. The improved workflow resulted in a 75% relative reduction in the percentage of hydrocephalus imaging performed by CT between the pre- and postintervention/control phases (Z-test, P = .0001). Our lean interventions in the pediatric hydrocephalus care pathway resulted in a significant reduction in head CT orders and increased use of ultrafast brain MR imaging. © 2016 by American Journal of Neuroradiology.
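
    The reported comparison of CT share before and after the intervention is a two-proportion Z-test, reproduced generically below with placeholder counts rather than the study's data.

        # Sketch: the two-proportion Z-test used to compare the share of
        # hydrocephalus imaging done by CT before and after the intervention.
        # Counts are placeholders, not the study's data.
        import numpy as np
        from scipy.stats import norm

        def two_proportion_ztest(x1, n1, x2, n2):
            p1, p2 = x1 / n1, x2 / n2
            p = (x1 + x2) / (n1 + n2)                      # pooled proportion
            se = np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
            z = (p1 - p2) / se
            return z, 2 * norm.sf(abs(z))                  # two-sided p-value

        # e.g. 80/100 head CTs pre-intervention vs 20/100 post (illustrative)
        z, p = two_proportion_ztest(80, 100, 20, 100)
        print(f"Z = {z:.2f}, p = {p:.2g}")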

  20. A Comparison of Hyperelastic Warping of PET Images with Tagged MRI for the Analysis of Cardiac Deformation

    DOE PAGES

    Veress, Alexander I.; Klein, Gregory; Gullberg, Grant T.

    2013-01-01

    The objectives of the following research were to evaluate the utility of a deformable image registration technique known as hyperelastic warping for the measurement of local strains in the left ventricle through the analysis of clinical, gated PET image datasets. Two normal human male subjects were sequentially imaged with PET and tagged MRI imaging. Strain predictions were made for systolic contraction using warping analyses of the PET images and HARP-based strain analyses of the MRI images. Coefficient of determination R2 values were computed for the comparison of circumferential and radial strain predictions produced by each methodology. There was good correspondence between the methodologies, with R2 values of 0.78 for the radial strains of both hearts, and R2 = 0.81 and R2 = 0.83 for the circumferential strains. The strain predictions were not statistically different (P ≤ 0.01). A series of sensitivity results indicated that the methodology was relatively insensitive to alterations in image intensity, random image noise, and alterations in fiber structure. This study demonstrated that warping was able to provide strain predictions of systolic contraction of the LV consistent with those provided by tagged MRI.
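
    The agreement metric used above is the coefficient of determination between two strain fields; a minimal sketch of that comparison step, using the standard R2 definition (the authors may have computed it from a regression fit instead), is:

      import numpy as np

      def r_squared(ref, pred):
          """Coefficient of determination between two strain maps."""
          ref, pred = np.ravel(ref), np.ravel(pred)
          ss_res = np.sum((ref - pred) ** 2)
          ss_tot = np.sum((ref - ref.mean()) ** 2)
          return 1.0 - ss_res / ss_tot

      # e.g. r_squared(harp_circumferential, warping_circumferential)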

  1. Modeling 4D Pathological Changes by Leveraging Normative Models

    PubMed Central

    Wang, Bo; Prastawa, Marcel; Irimia, Andrei; Saha, Avishek; Liu, Wei; Goh, S.Y. Matthew; Vespa, Paul M.; Van Horn, John D.; Gerig, Guido

    2016-01-01

    With the increasing use of efficient multimodal 3D imaging, clinicians are able to access longitudinal imaging to stage pathological diseases, to monitor the efficacy of therapeutic interventions, or to assess and quantify rehabilitation efforts. Analysis of such four-dimensional (4D) image data presenting pathologies, including disappearing and newly appearing lesions, represents a significant challenge due to the presence of complex spatio-temporal changes. Image analysis methods for such 4D image data have to include not only a concept for joint segmentation of 3D datasets to account for inherent correlations of subject-specific repeated scans but also a mechanism to account for large deformations and the destruction and formation of lesions (e.g., edema, bleeding) due to underlying physiological processes associated with damage, intervention, and recovery. In this paper, we propose a novel joint segmentation-registration framework to tackle the inherent problem of image registration in the presence of objects not present in all images of the time series. Our methodology models 4D changes in pathological anatomy across time and also provides an explicit mapping of a healthy normative template to a subject’s image data with pathologies. Since atlas-moderated segmentation methods cannot explain the appearance and location of pathological structures that are not represented in the template atlas, the new framework provides different options for initialization via a supervised learning approach, iterative semisupervised active learning, and also transfer learning, which results in a fully automatic 4D segmentation method. We demonstrate the effectiveness of our novel approach with synthetic experiments and a 4D multimodal MRI dataset of severe traumatic brain injury (TBI), including validation via comparison to expert segmentations. However, the proposed methodology is generic in regard to different clinical applications requiring quantitative analysis of 4D imaging representing spatio-temporal changes of pathologies. PMID:27818606

  2. Mapping land cover from satellite images: A basic, low cost approach

    NASA Technical Reports Server (NTRS)

    Elifrits, C. D.; Barney, T. W.; Barr, D. J.; Johannsen, C. J.

    1978-01-01

    Simple, inexpensive methodologies developed for mapping general land cover and land use categories from LANDSAT images are reported. One methodology, a stepwise, interpretive, direct tracing technique, was developed through working with university students from different disciplines who had no previous experience in satellite image interpretation. The technique results in maps that are very accurate in relation to actual land cover, particularly relative to the small investment in skill, time, and money needed to produce the products.

  3. FDTD-based optical simulations methodology for CMOS image sensors pixels architecture and process optimization

    NASA Astrophysics Data System (ADS)

    Hirigoyen, Flavien; Crocherie, Axel; Vaillant, Jérôme M.; Cazaux, Yvon

    2008-02-01

    This paper presents a new FDTD-based optical simulation model dedicated to describing the optical performance of CMOS image sensors, taking diffraction effects into account. Following the market trend and industrialization constraints, CMOS image sensors must be easily embedded into ever smaller packages, which are now equipped with auto-focus and, soon, zoom systems. Due to miniaturization, the ray-tracing models used to evaluate pixel optical performance are no longer accurate enough to describe the light propagation inside the sensor, because of diffraction effects. Thus we adopt a more fundamental description to take these diffraction effects into account: we chose to model the propagation of light with Maxwell's equations and to solve this propagation with software built on an FDTD-based (Finite-Difference Time-Domain) engine. We present in this article the complete methodology of this modeling: on one hand, incoherent plane waves are propagated to approximate a product-use diffuse-like source; on the other hand, we use periodic conditions to limit the size of the simulated model and both memory and computation time. After having presented the correlation of the model with measurements, we illustrate its use in the optimization of a 1.75μm pixel.
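
    The core of any FDTD engine is a leapfrog update of staggered electric and magnetic field grids; the following 1D toy in normalized units is a stand-in illustration of that idea, not the commercial solver used in the paper:

      import numpy as np

      # 1D Yee scheme in normalized units (magic time step: c*dt/dz = 1).
      n, steps = 400, 600
      ez = np.zeros(n)        # electric field samples
      hy = np.zeros(n - 1)    # magnetic field, staggered half a cell
      for t in range(steps):
          hy += ez[1:] - ez[:-1]                       # update H from curl E
          ez[1:-1] += hy[1:] - hy[:-1]                 # update E from curl H
          ez[n // 4] += np.exp(-((t - 30) / 10) ** 2)  # soft Gaussian source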

  4. Lithographic image simulation for the 21st century with 19th-century tools

    NASA Astrophysics Data System (ADS)

    Gordon, Ronald L.; Rosenbluth, Alan E.

    2004-01-01

    Simulation of lithographic processes in semiconductor manufacturing has gone from a crude learning tool 20 years ago to a critical part of yield enhancement strategy today. Although many disparate models, championed by equally disparate communities, exist to describe various photoresist development phenomena, these communities would all agree that the one piece of the simulation picture that can, and must, be computed accurately is the image intensity in the photoresist. The imaging of a photomask onto a thin-film stack is one of the only phenomena in the lithographic process that is described fully by well-known, definitive physical laws. Although many approximations are made in the derivation of the Fourier transform relations between the mask object, the pupil, and the image, these and their impacts are well-understood and need little further investigation. The imaging process in optical lithography is modeled as a partially-coherent, Kohler illumination system. As Hopkins has shown, we can separate the computation into 2 pieces: one that takes information about the illumination source, the projection lens pupil, the resist stack, and the mask size or pitch, and the other that only needs the details of the mask structure. As the latter piece of the calculation can be expressed as a fast Fourier transform, it is the first piece that dominates. This piece involves computation of a potentially large number of numbers called Transmission Cross-Coefficients (TCCs), which are correlations of the pupil function weighted with the illumination intensity distribution. The advantage of performing the image calculations this way is that the computation of these TCCs represents an up-front cost, not to be repeated if one is only interested in changing the mask features, which is the case in Model-Based Optical Proximity Correction (MBOPC). The down side, however, is that the number of these expensive double integrals that must be performed increases as the square of the mask unit cell area; this number can cause even the fastest computers to balk if one needs to study medium- or long-range effects. One can reduce this computational burden by approximating with a smaller area, but accuracy is usually a concern, especially when building a model that will purportedly represent a manufacturing process. This work will review the current methodologies used to simulate the intensity distribution in air above the resist and address the above problems. More to the point, a methodology has been developed to eliminate the expensive numerical integrations in the TCC calculations, as the resulting integrals in many cases of interest can be either evaluated analytically, or replaced by analytical functions accurate to within machine precision. With the burden of computing these numbers lightened, more accurate representations of the image field can be realized, and better overall models are then possible.
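
    The split described above can be made concrete in a toy 1D Hopkins computation, in which the TCC matrix is the up-front cost and the image of any mask is then a bilinear form in the mask spectrum; the source extent, pupil cutoff, and mask pattern below are illustrative assumptions (a real implementation integrates over a 2D source and pupil):

      import numpy as np

      N = 64
      f = np.arange(-N // 2, N // 2) / N              # mask-cell frequencies
      pupil = lambda ff: (np.abs(ff) <= 0.25).astype(float)
      src = np.linspace(-0.1, 0.1, 11)                # discretized source points

      # TCC(fi, fj) = sum_s P(fi + s) conj(P(fj + s)) -- the up-front cost
      tcc = sum(np.outer(pupil(f + s), pupil(f + s)) for s in src) / len(src)

      mask = (np.arange(N) % 16 < 8).astype(float)    # equal lines and spaces
      M = np.fft.fftshift(np.fft.fft(mask)) / N       # spectrum, ordered like f

      x = np.arange(N)
      e = np.exp(2j * np.pi * np.outer(x, f))         # e[k, i] = exp(2*pi*i*f_i*x_k)
      A = tcc * np.outer(M, np.conj(M))
      intensity = np.real(np.einsum('ki,ij,kj->k', e, A, np.conj(e)))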

  5. Reading Out Single-Molecule Digital RNA and DNA Isothermal Amplification in Nanoliter Volumes with Unmodified Camera Phones

    PubMed Central

    2016-01-01

    Digital single-molecule technologies are expanding diagnostic capabilities, enabling the ultrasensitive quantification of targets, such as viral load in HIV and hepatitis C infections, by directly counting single molecules. Replacing fluorescent readout with a robust visual readout that can be captured by any unmodified cell phone camera will facilitate the global distribution of diagnostic tests, including in limited-resource settings where the need is greatest. This paper describes a methodology for developing a visual readout system for digital single-molecule amplification of RNA and DNA by (i) selecting colorimetric amplification-indicator dyes that are compatible with the spectral sensitivity of standard mobile phones, and (ii) identifying an optimal ratiometric image-processing scheme for a selected dye to achieve a readout that is robust to lighting conditions and camera hardware and provides unambiguous quantitative results, even for colorblind users. We also include an analysis of the limitations of this methodology, and provide a microfluidic approach that can be applied to expand dynamic range and improve reaction performance, allowing ultrasensitive, quantitative measurements at volumes as low as 5 nL. We validate this methodology using SlipChip-based digital single-molecule isothermal amplification with λDNA as a model and hepatitis C viral RNA as a clinically relevant target. The innovative combination of isothermal amplification chemistry in the presence of a judiciously chosen indicator dye and ratiometric image processing with SlipChip technology allowed the sequence-specific visual readout of single nucleic acid molecules in nanoliter volumes with an unmodified cell phone camera. When paired with devices that integrate sample preparation and nucleic acid amplification, this hardware-agnostic approach will increase the affordability and the distribution of quantitative diagnostic and environmental tests. PMID:26900709
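
    A minimal sketch of the ratiometric idea, classifying each reaction well by a ratio of color channels so that overall brightness cancels; the channel pair and threshold are illustrative assumptions, not the paper's calibrated values:

      import numpy as np

      def well_is_positive(rgb_patch, threshold=1.2):
          """Call a well by a channel ratio, not absolute intensity."""
          r = rgb_patch[..., 0].astype(float).mean()
          g = rgb_patch[..., 1].astype(float).mean()
          return (g / (r + 1e-9)) > threshold   # ratio cancels overall brightness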

  6. Development of image processing method to detect noise in geostationary imagery

    NASA Astrophysics Data System (ADS)

    Khlopenkov, Konstantin V.; Doelling, David R.

    2016-10-01

    The Clouds and the Earth's Radiant Energy System (CERES) has incorporated imagery from 16 individual geostationary (GEO) satellites across five contiguous domains since March 2000. In order to derive broadband fluxes that are uniform across satellite platforms, it is important to ensure a good quality of the input raw count data. GEO data obtained by older GEO imagers (such as MTSAT-1, Meteosat-5, Meteosat-7, GMS-5, and GOES-9) are known to frequently contain various types of noise caused by transmission errors, sync errors, stray light contamination, and others. This work presents an image processing methodology designed to detect most kinds of noise and corrupt data in all bands of raw imagery from modern and historic GEO satellites. The algorithm is based on a set of different approaches to detecting abnormal image patterns, including inter-line and inter-pixel differences within a scanline, correlation between scanlines, analysis of spatial variance, and a 2D Fourier analysis of the image spatial frequencies. In spite of its computational complexity, the described method is highly optimized for performance to facilitate volume processing of multi-year data and runs in fully automated mode. The reliability of this noise detection technique has been assessed by human supervision for each GEO dataset obtained during selected time periods in 2005 and 2006. This assessment demonstrated an overall detection accuracy of over 99.5% and a false alarm rate of under 0.3%. The described noise detection routine is currently used in volume processing of historical GEO imagery for subsequent production of global gridded data products and for cross-platform calibration.
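
    One member of the family of detectors described above can be sketched as follows: flag scanlines whose difference from the previous line is anomalous relative to the image's own line-to-line statistics (the 5-sigma threshold is an illustrative assumption):

      import numpy as np

      def noisy_scanlines(img, k=5.0):
          """Flag scanlines that differ sharply from their predecessor."""
          d = np.abs(np.diff(img.astype(float), axis=0)).mean(axis=1)
          bad = np.zeros(img.shape[0], dtype=bool)
          bad[1:] = d > d.mean() + k * d.std()
          return np.flatnonzero(bad)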

  7. Securing Color Fidelity in 3D Architectural Heritage Scenarios.

    PubMed

    Gaiani, Marco; Apollonio, Fabrizio Ivan; Ballabeni, Andrea; Remondino, Fabio

    2017-10-25

    Ensuring color fidelity in image-based 3D modeling of heritage scenarios is nowadays still an open research matter. Image colors are important during data processing as they affect algorithm outcomes; therefore their correct treatment, reduction and enhancement is fundamental. In this contribution, we present an automated solution developed to improve the radiometric quality of image datasets and the performance of two main steps of the photogrammetric pipeline (camera orientation and dense image matching). The suggested solution aims to achieve a robust automatic color balance and exposure equalization, stability of the RGB-to-gray image conversion and faithful color appearance of a digitized artifact. The innovative aspects of the article are: complete automation, better color target detection, a MATLAB implementation of the ACR scripts created by Fraser and the use of a specific weighted polynomial regression. A series of tests are presented to demonstrate the efficiency of the developed methodology and to evaluate color accuracy ('color characterization').
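
    As a hedged illustration of the color-balancing step only, the following applies the simple gray-world assumption; the paper's actual method relies on a color target and a weighted polynomial regression:

      import numpy as np

      def gray_world_balance(img):
          """Scale each channel so the image mean is neutral gray."""
          img = img.astype(float)
          means = img.reshape(-1, 3).mean(axis=0)
          return np.clip(img * (means.mean() / means), 0, 255).astype(np.uint8)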

  8. Time-resolved polarization imaging by pump-probe (stimulated emission) fluorescence microscopy.

    PubMed Central

    Buehler, C; Dong, C Y; So, P T; French, T; Gratton, E

    2000-01-01

    We report the application of pump-probe fluorescence microscopy in time-resolved polarization imaging. We derived the equations governing the pump-probe stimulated emission process and characterized the pump and probe laser power levels for signal saturation. Our emphasis is to use this novel methodology to image polarization properties of fluorophores across entire cells. As a feasibility study, we imaged a 15-microm orange latex sphere and found that there is depolarization that is possibly due to energy transfer among fluorescent molecules inside the sphere. We also imaged a mouse fibroblast labeled with CellTracker Orange CMTMR (5-(and-6)-(((4-chloromethyl)benzoyl)amino)tetramethyl-rhodamine). We observed that Orange CMTMR complexed with glutathione rotates fast, indicating the relatively low fluid-phase viscosity of the cytoplasmic microenvironment as seen by Orange CMTMR. The measured rotational correlation time ranged from approximately 30 to approximately 150 ps. This work demonstrates the effectiveness of stimulated emission measurements in acquiring high-resolution, time-resolved polarization information across the entire cell. PMID:10866979

  9. NIR time domain diffuse optical tomography experiments on human forearm

    NASA Astrophysics Data System (ADS)

    Zhao, Huijuan; Gao, Feng; Tanikawa, Yukari; Homma, Kazuhiro; Yamada, Yukio

    2003-07-01

    To date, the applications of near infrared (NIR) diffuse optical tomography (DOT) have mostly focused on the potential of imaging the female breast, human head hemodynamics, and the neonatal head. For neonates suffering from ischaemia or hemorrhage in the brain, bedside monitoring of the cerebral perfusion status, e.g., blood oxygen saturation and blood volume, is necessary to avoid permanent injury. NIR DOT is one of the promising tools because it is noninvasive, small in size, and movable. Prior to achieving the ultimate goal of imaging the infant brain and the female breast using DOT, in this paper the developed methodologies are validated by imaging in vivo human forearms. The absolute absorption- and scattering-coefficient images revealed the inner structure of the forearm, and the bones were clearly distinguished from the muscle. The differential images showed the changes in oxy-hemoglobin, deoxy-hemoglobin and blood volume during hand-gripping exercises, which are consistent with the physiological processes reported in the literature.

  10. Securing Color Fidelity in 3D Architectural Heritage Scenarios

    PubMed Central

    Apollonio, Fabrizio Ivan; Ballabeni, Andrea; Remondino, Fabio

    2017-01-01

    Ensuring color fidelity in image-based 3D modeling of heritage scenarios is nowadays still an open research matter. Image colors are important during data processing as they affect algorithm outcomes; therefore their correct treatment, reduction and enhancement is fundamental. In this contribution, we present an automated solution developed to improve the radiometric quality of image datasets and the performance of two main steps of the photogrammetric pipeline (camera orientation and dense image matching). The suggested solution aims to achieve a robust automatic color balance and exposure equalization, stability of the RGB-to-gray image conversion and faithful color appearance of a digitized artifact. The innovative aspects of the article are: complete automation, better color target detection, a MATLAB implementation of the ACR scripts created by Fraser and the use of a specific weighted polynomial regression. A series of tests are presented to demonstrate the efficiency of the developed methodology and to evaluate color accuracy (‘color characterization’). PMID:29068359

  11. From Disabled Students to Disabled Brains: The Medicalizing Power of Rhetorical Images in the Israeli Learning Disabilities Field.

    PubMed

    Katchergin, Ofer

    2017-09-01

    The neurocentric worldview that identifies the essence of the human being with the material brain has become a central paradigm in current academic discourse. Israeli researchers also seek to understand educational principles and processes via neuroscientific models. Against this background, the article uncovers the central role that visual brain images play in the learning-disabilities field in Israel. It examines the place brain images have in the professional imagination of didactic-diagnosticians as well as their influence on the diagnosticians' clinical attitudes. It relies on two theoretical fields: the sociology and anthropology of the body, and the sociology of neuromedical knowledge. The research consists of three methodologies: ethnographic observations, in-depth interviews, and rhetorical analysis of visual and verbal texts. It uncovers the various rhetorical and ideological functions of brain images in the field. It also charts the repertoire of rhetorical devices which are utilized to strengthen the neuroreductionist messages contained in the images.

  12. Goal-oriented rectification of camera-based document images.

    PubMed

    Stamatopoulos, Nikolaos; Gatos, Basilis; Pratikakis, Ioannis; Perantonis, Stavros J

    2011-04-01

    Document digitization with either flatbed scanners or camera-based systems results in document images which often suffer from warping and perspective distortions that deteriorate the performance of current OCR approaches. In this paper, we present a goal-oriented rectification methodology to compensate for undesirable document image distortions, aiming to improve the OCR result. Our approach relies upon a coarse-to-fine strategy. First, a coarse rectification is accomplished with the aid of a computationally low-cost transformation which addresses the projection of a curved surface to a 2-D rectangular area. The projection of the curved surface on the plane is guided only by the appearance of the textual content in the document image, while incorporating a transformation which does not depend on specific model primitives or camera setup parameters. Second, pose normalization is applied at the word level, aiming to restore all the local distortions of the document image. Experimental results on various document images with a variety of distortions demonstrate the robustness and effectiveness of the proposed rectification methodology, using a consistent evaluation methodology that accounts for OCR accuracy and a newly introduced measure based on a semi-automatic procedure.
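
    As a simplified stand-in for the coarse step only, the following applies a plain perspective (homography) correction with OpenCV; the paper's coarse transform handles a curved surface, which a single homography cannot, and the corner coordinates and filename here are hypothetical:

      import cv2
      import numpy as np

      # Hypothetical corner coordinates of the page in the photo.
      corners = np.float32([[42, 31], [610, 58], [598, 790], [25, 770]])
      w, h = 600, 800
      target = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
      H = cv2.getPerspectiveTransform(corners, target)
      page = cv2.imread('document.jpg')            # hypothetical input image
      rectified = cv2.warpPerspective(page, H, (w, h))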

  13. WE-B-BRC-03: Risk in the Context of Medical Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samei, E.

    Prospective quality management techniques, long used by engineering and industry, have become a growing aspect of efforts to improve quality management and safety in healthcare. These techniques are of particular interest to medical physics as the scope and complexity of clinical practice continue to grow, thus making the prescriptive methods we have used harder to apply and potentially less effective for our interconnected and highly complex healthcare enterprise, especially in imaging and radiation oncology. An essential part of most prospective methods is the need to assess the various risks associated with problems, failures, errors, and design flaws in our systems. We therefore begin with an overview of risk assessment methodologies used in healthcare and industry and discuss their strengths and weaknesses. The rationale for the use of process mapping, failure modes and effects analysis (FMEA) and fault tree analysis (FTA) by TG-100 will be described, as well as suggestions for the way forward. This is followed by a discussion of radiation oncology specific risk assessment strategies and issues, including the TG-100 effort to evaluate IMRT and other ways to think about risk in the context of radiotherapy. Incident learning systems, local as well as the ASTRO/AAPM ROILS system, can also be useful in the risk assessment process. Finally, risk in the context of medical imaging will be discussed. Radiation (and other) safety considerations, as well as lack of quality and certainty, all contribute to the potential risks associated with suboptimal imaging. The goal of this session is to summarize a wide variety of risk analysis methods and issues to give the medical physicist access to tools which can better define risks (and their importance) which we work to mitigate with both prescriptive and prospective risk-based quality management methods. Learning Objectives: Description of risk assessment methodologies used in healthcare and industry; Discussion of radiation oncology-specific risk assessment strategies and issues; Evaluation of risk in the context of medical imaging and image quality. E. Samei: Research grants from Siemens and GE.
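
    The scoring step inside an FMEA can be illustrated in a few lines: each failure mode receives a risk priority number, RPN = severity × occurrence × detectability, and mitigation effort is ranked by RPN; the failure modes and 1-10 scores below are illustrative, not from TG-100:

      # Illustrative failure modes with (severity, occurrence, detectability).
      failure_modes = {
          'wrong patient selected':      (9, 3, 4),
          'protocol not updated':        (6, 5, 3),
          'contrast dose miscalculated': (8, 2, 6),
      }
      rpn = {m: s * o * d for m, (s, o, d) in failure_modes.items()}
      for mode in sorted(rpn, key=rpn.get, reverse=True):
          print(f'{rpn[mode]:4d}  {mode}')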

  14. Automated recognition of the pericardium contour on processed CT images using genetic algorithms.

    PubMed

    Rodrigues, É O; Rodrigues, L O; Oliveira, L S N; Conci, A; Liatsis, P

    2017-08-01

    This work proposes the use of Genetic Algorithms (GA) for tracing and recognizing the pericardium contour of the human heart in Computed Tomography (CT) images. We assume that each slice of the pericardium can be modelled by an ellipse, the parameters of which need to be optimally determined. An optimal ellipse would be one that closely follows the pericardium contour and, consequently, appropriately separates the epicardial and mediastinal fats of the human heart. Tracing and automatically identifying the pericardium contour aids medical diagnosis. Usually, this process is done manually or not done at all due to the effort required. Moreover, detecting the pericardium may improve previously proposed automated methodologies that separate the two types of fat associated with the human heart. Quantification of these fats provides important health risk marker information, as they are associated with the development of certain cardiovascular pathologies. Finally, we conclude that GA offers satisfactory solutions in a feasible amount of processing time. Copyright © 2017 Elsevier Ltd. All rights reserved.
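
    A minimal GA sketch of the idea above: evolve ellipse parameters (center, axes, rotation) so that candidate contour points lie close to the ellipse; the fitness function, parameter bounds, and GA settings are illustrative assumptions:

      import numpy as np

      rng = np.random.default_rng(0)

      def fitness(p, pts):
          """Negative mean deviation of contour points from the ellipse."""
          cx, cy, a, b, th = p
          c, s = np.cos(th), np.sin(th)
          x, y = pts[:, 0] - cx, pts[:, 1] - cy
          u, v = c * x + s * y, -s * x + c * y
          return -np.mean(np.abs(u ** 2 / a ** 2 + v ** 2 / b ** 2 - 1.0))

      def ga_ellipse(pts, pop=60, gens=100, sigma=2.0):
          """pts: (N, 2) candidate contour points, e.g. from edge detection."""
          P = rng.uniform([100, 100, 40, 40, 0],
                          [400, 400, 200, 200, np.pi], size=(pop, 5))
          for _ in range(gens):
              f = np.array([fitness(p, pts) for p in P])
              elite = P[np.argsort(f)[-pop // 4:]]               # selection
              P = elite[rng.integers(len(elite), size=pop)]      # reproduction
              P = P + rng.normal(0, sigma, P.shape)              # mutation
              P[:, 2:4] = np.maximum(P[:, 2:4], 1.0)             # keep axes > 0
          return P[np.argmax([fitness(p, pts) for p in P])]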

  15. A Preliminary Comparison of Three Dimensional Particle Tracking and Sizing using Plenoptic Imaging and Digital In-line Holography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guildenbecher, Daniel Robert; Munz, Elise Dahnke; Farias, Paul Abraham

    2015-12-01

    Digital in-line holography and plenoptic photography are two techniques for single-shot, volumetric measurement of 3D particle fields. Here we present a preliminary comparison of the two methods by applying plenoptic imaging to experimental configurations that have been previously investigated with digital in-line holography. These experiments include the tracking of secondary droplets from the impact of a water drop on a thin film of water and tracking of pellets from a shotgun. Both plenoptic imaging and digital in-line holography successfully quantify the 3D nature of these particle fields. This includes measurement of the 3D particle position, individual particle sizes, and three-component velocity vectors. For the initial processing methods presented here, both techniques give out-of-plane positional accuracy of approximately 1-2 particle diameters. For a fixed image sensor, digital holography achieves higher effective in-plane spatial resolutions. However, collimated and coherent illumination makes holography susceptible to image distortion through index of refraction gradients, as demonstrated in the shotgun experiments. On the other hand, plenoptic imaging allows for a simpler experimental configuration. Furthermore, due to the use of diffuse, white-light illumination, plenoptic imaging is less susceptible to image distortion in the shotgun experiments. Additional work is needed to better quantify sources of uncertainty, particularly in the plenoptic experiments, as well as to develop data processing methodologies optimized for the plenoptic measurement.

  16. A Preliminary Comparison of Three Dimensional Particle Tracking and Sizing using Plenoptic Imaging and Digital In-line Holography [PowerPoint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guildenbecher, Daniel Robert; Munz, Elise Dahnke; Farias, Paul Abraham

    2015-12-01

    Digital in-line holography and plenoptic photography are two techniques for single-shot, volumetric measurement of 3D particle fields. Here we present a preliminary comparison of the two methods by applying plenoptic imaging to experimental configurations that have been previously investigated with digital in-line holography. These experiments include the tracking of secondary droplets from the impact of a water drop on a thin film of water and tracking of pellets from a shotgun. Both plenoptic imaging and digital in-line holography successfully quantify the 3D nature of these particle fields. This includes measurement of the 3D particle position, individual particle sizes, and three-component velocity vectors. For the initial processing methods presented here, both techniques give out-of-plane positional accuracy of approximately 1-2 particle diameters. For a fixed image sensor, digital holography achieves higher effective in-plane spatial resolutions. However, collimated and coherent illumination makes holography susceptible to image distortion through index of refraction gradients, as demonstrated in the shotgun experiments. On the other hand, plenoptic imaging allows for a simpler experimental configuration. Furthermore, due to the use of diffuse, white-light illumination, plenoptic imaging is less susceptible to image distortion in the shotgun experiments. Additional work is needed to better quantify sources of uncertainty, particularly in the plenoptic experiments, as well as to develop data processing methodologies optimized for the plenoptic measurement.

  17. A forensic science perspective on the role of images in crime investigation and reconstruction.

    PubMed

    Milliet, Quentin; Delémont, Olivier; Margot, Pierre

    2014-12-01

    This article presents a global vision of images in forensic science. The proliferation of perspectives on the use of images throughout criminal investigations and the increasing demand for research on this topic seem to demand a forensic science-based analysis. In this study, the definitions of and concepts related to material traces are revisited and applied to images, and a structured approach is used to persuade the scientific community to extend and improve the use of images as traces in criminal investigations. Current research efforts focus on technical issues and evidence assessment. This article provides a sound foundation for rationalising and explaining the processes involved in the production of clues from trace images. For example, the mechanisms through which these visual traces become clues of presence or action are described. An extensive literature review of forensic image analysis emphasises the existing guidelines and knowledge available for answering investigative questions (who, what, where, when and how). However, complementary developments are still necessary to demystify many aspects of image analysis in forensic science, including how to review and select images or use them to reconstruct an event or assist intelligence efforts. The hypothetico-deductive reasoning pathway used to discover unknown elements of an event or crime can also help scientists understand the underlying processes involved in their decision making. An analysis of a single image in an investigative or probative context is used to demonstrate the highly informative potential of images as traces and/or clues. Research efforts should be directed toward formalising the extraction and combination of clues from images. An appropriate methodology is key to expanding the use of images in forensic science. Copyright © 2014 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.

  18. Internal representations for face detection: an application of noise-based image classification to BOLD responses.

    PubMed

    Nestor, Adrian; Vettel, Jean M; Tarr, Michael J

    2013-11-01

    What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations. Copyright © 2012 Wiley Periodicals, Inc.
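
    The reverse-correlation step behind a classification image can be sketched as the difference between the mean noise field on high-response trials and on low-response trials; the arrays below are stand-ins for the actual trials:

      import numpy as np

      noise_fields = np.random.randn(500, 64, 64)  # one white-noise field per trial
      response = np.random.randn(500)              # e.g. face-area BOLD amplitude
      hi = response > np.median(response)
      classification_image = (noise_fields[hi].mean(axis=0)
                              - noise_fields[~hi].mean(axis=0))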

  19. Digital imaging information technology for biospeckle activity assessment relative to bacteria and parasites.

    PubMed

    Ramírez-Miquet, Evelio E; Cabrera, Humberto; Grassi, Hilda C; de J Andrades, Efrén; Otero, Isabel; Rodríguez, Dania; Darias, Juan G

    2017-08-01

    This paper reports on the biospeckle processing of biological activity using a visualization scheme based upon digital imaging information technology. Activity related to bacterial growth in agar plates and to parasites affected by a drug is monitored via the speckle patterns generated by a coherent source incident on the microorganisms. We present experimental results to demonstrate the potential application of this methodology for following the activity in time. Digital imaging information technology is an alternative visualization enabling the study of speckle dynamics, which is correlated with the activity of bacteria and parasites. In this method, the changes in Red-Green-Blue (RGB) color component density are considered markers of bacterial growth and of parasite motility in the presence of a drug. The RGB data were used to generate a two-dimensional surface plot allowing an analysis of the color distribution on the speckle images. The proposed visualization is compared to the outcomes of the generalized differences and the temporal difference methods. A quantification of the activity is performed using a parameterization of the temporal difference method. The adopted digital image processing technique has been found suitable for monitoring motility and morphological changes in the bacterial population over time and for detecting and distinguishing a short-term drug action on parasites.
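
    The temporal-difference measure mentioned above reduces, in its simplest form, to the average absolute change between consecutive speckle frames; the frame stack below is a stand-in:

      import numpy as np

      frames = np.random.rand(100, 128, 128)   # stand-in T x H x W speckle stack
      td_map = np.abs(np.diff(frames, axis=0)).mean(axis=0)  # per-pixel activity
      activity_index = td_map.mean()           # scalar activity over the field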

  20. Balancing Science Objectives and Operational Constraints: A Mission Planner's Challenge

    NASA Technical Reports Server (NTRS)

    Weldy, Michelle

    1996-01-01

    The Air Force Miniature Sensor Technology Integration (MSTI-3) satellite's primary mission is to characterize Earth's atmospheric background clutter. MSTI-3 will use three cameras for data collection: a mid-wave infrared imager, a short wave infrared imager, and a visible imaging spectrometer. Mission science objectives call for the collection of over 2 million images within the one year mission life. In addition, operational constraints limit camera usage to four operations of twenty minutes per day, with no more than 10,000 data and calibration images collected per day. To balance the operational constraints and science objectives, the mission planning team has designed a planning process to generate event schedules and sensor operation timelines. Each set of constraints, including spacecraft performance capabilities, the camera filters, the geographical regions and spacecraft-Sun-Earth geometries of interest, and remote tracking station deconfliction, has been accounted for in this methodology. To aid in this process, the mission planning team is building a series of tools from commercial off-the-shelf software. These include the mission manifest, which builds a daily schedule of events, and the MSTI Scene Simulator, which helps build geometrically correct scans. These tools provide an efficient, responsive, and highly flexible architecture that maximizes data collection while minimizing mission planning time.

  1. Segmentation of lung fields using Chan-Vese active contour model in chest radiographs

    NASA Astrophysics Data System (ADS)

    Sohn, Kiwon

    2011-03-01

    A CAD tool for chest radiographs consists of several procedures, and the very first step is segmentation of lung fields. We develop a novel methodology for segmentation of lung fields in chest radiographs that satisfies the following two requirements. First, we aim to develop a segmentation method that does not need a training stage with manual estimation of anatomical features in a large training dataset of images. Secondly, for ease of implementation, it is desirable to apply a well established model that is widely used for various image-partitioning practices. The Chan-Vese active contour model, which is based on the Mumford-Shah functional in the level set framework, is applied for segmentation of lung fields. With the use of this model, segmentation of lung fields can be carried out without detailed prior knowledge of the radiographic anatomy of the chest, yet in some chest radiographs, the trachea regions are unfavorably segmented out in addition to the lung field contours. To eliminate artifacts from the trachea, we locate the upper end of the trachea, find a vertical center line of the trachea and delineate it, and then brighten the trachea region to make it less distinctive. The segmentation process is finalized by subsequent morphological operations. We randomly select 30 images from the Japanese Society of Radiological Technology image database to test the proposed methodology, and the results are shown. We hope our segmentation technique can help promote the adoption of CAD tools, especially for emerging chest radiographic imaging techniques such as dual energy radiography and chest tomosynthesis.
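
    A minimal sketch of the core segmentation step using scikit-image's morphological variant of Chan-Vese (the paper uses the level-set formulation, and the trachea suppression and morphological cleanup are omitted here):

      import numpy as np
      from skimage.segmentation import morphological_chan_vese

      # 'radiograph' stands in for a preprocessed chest image scaled to [0, 1];
      # in scikit-image < 0.19 the iteration argument is named 'iterations'.
      radiograph = np.random.rand(256, 256)
      lung_mask = morphological_chan_vese(radiograph, num_iter=100,
                                          init_level_set='checkerboard',
                                          smoothing=3)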

  2. Photogrammetric Techniques for Paleoanthropological Objects Preserving and Studying

    NASA Astrophysics Data System (ADS)

    Knyaz, V. A.; Leybova, N. A.; Galeev, R.; Novikov, M.; Gaboutchian, A. V.

    2018-05-01

    Paleo-anthropological research has a specificity closely tied to the objects it studies. Their complicated shape arises from the anatomical features of the human skull and other skeletal bones. The degree of preservation is associated with the fragility of palaeo-anthropological material, which usually has high historical and scientific value. The circumstances mentioned above enhance the relevance of implementing photogrammetry in anthropological studies. Such a combination of scientific methodologies with up-to-date technology creates a potential for improving various stages of palaeo-anthropological studies. These include accurate documentation of anthropological material and the creation of databases accessible to a wide range of users, predominantly research scientists and students; preservation of highly valuable samples and the possibility of sharing information as 3D images or printed copies, improving co-operation of scientists world-wide; and the potential for replicating contact anthropometric studies on 3D images or printed copies, providing for the development of new biometric methods. This paper presents an approach based on photogrammetric techniques and non-contact measurements, providing technological and methodological development of paleo-anthropological studies, including data capturing, processing and representation.

  3. A Hidden Portrait by Edgar Degas

    NASA Astrophysics Data System (ADS)

    Thurrowgood, David; Paterson, David; de Jonge, Martin D.; Kirkham, Robin; Thurrowgood, Saul; Howard, Daryl L.

    2016-08-01

    The preservation and understanding of cultural heritage depends increasingly on in-depth chemical studies. Rapid technological advances are forging connections between scientists and arts communities, enabling revolutionary new techniques for non-invasive technical study of culturally significant, highly prized artworks. We have applied a non-invasive, rapid, high definition X-ray fluorescence (XRF) elemental mapping technique to a French Impressionist painting using a synchrotron radiation source, and show how this technology can advance scholarly art interpretation and preservation. We have obtained detailed technical understanding of a painting which could not be resolved by conventional techniques. Here we show 31.6 megapixel scanning XRF derived elemental maps and report a novel image processing methodology utilising these maps to produce a false colour representation of a “hidden” portrait by Edgar Degas. This work provides a cohesive methodology for both imaging and understanding the chemical composition of artworks, and enables scholarly understandings of cultural heritage, many of which have eluded conventional technologies. We anticipate that the outcome from this work will encourage the reassessment of some of the world’s great art treasures.

  4. Spectral mapping of brain functional connectivity from diffusion imaging.

    PubMed

    Becker, Cassiano O; Pequito, Sérgio; Pappas, George J; Miller, Michael B; Grafton, Scott T; Bassett, Danielle S; Preciado, Victor M

    2018-01-23

    Understanding the relationship between the dynamics of neural processes and the anatomical substrate of the brain is a central question in neuroscience. On the one hand, modern neuroimaging technologies, such as diffusion tensor imaging, can be used to construct structural graphs representing the architecture of white matter streamlines linking cortical and subcortical structures. On the other hand, temporal patterns of neural activity can be used to construct functional graphs representing temporal correlations between brain regions. Although some studies provide evidence that whole-brain functional connectivity is shaped by the underlying anatomy, the observed relationship between function and structure is weak, and the rules by which anatomy constrains brain dynamics remain elusive. In this article, we introduce a methodology to map the functional connectivity of a subject at rest from his or her structural graph. Using our methodology, we are able to systematically account for the role of structural walks in the formation of functional correlations. Furthermore, in our empirical evaluations, we observe that the eigenmodes of the mapped functional connectivity are associated with the activity patterns of different cognitive systems.
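
    One way to realize the walk-based mapping idea is to approximate functional connectivity as a weighted series of powers of the normalized structural adjacency matrix, each power counting walks of that length; the coefficients below are illustrative placeholders for values that would be fit per subject, and this sketch is not the authors' exact estimator:

      import numpy as np

      def map_functional(A, coeffs=(0.0, 0.6, 0.25, 0.1)):
          """Weighted series of powers of the normalized structural adjacency."""
          d = A.sum(axis=1) + 1e-12
          An = A / np.sqrt(np.outer(d, d))       # symmetric normalization
          return sum(c * np.linalg.matrix_power(An, k)
                     for k, c in enumerate(coeffs))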

  5. A Hidden Portrait by Edgar Degas

    PubMed Central

    Thurrowgood, David; Paterson, David; de Jonge, Martin D.; Kirkham, Robin; Thurrowgood, Saul; Howard, Daryl L.

    2016-01-01

    The preservation and understanding of cultural heritage depends increasingly on in-depth chemical studies. Rapid technological advances are forging connections between scientists and arts communities, enabling revolutionary new techniques for non-invasive technical study of culturally significant, highly prized artworks. We have applied a non-invasive, rapid, high definition X-ray fluorescence (XRF) elemental mapping technique to a French Impressionist painting using a synchrotron radiation source, and show how this technology can advance scholarly art interpretation and preservation. We have obtained detailed technical understanding of a painting which could not be resolved by conventional techniques. Here we show 31.6 megapixel scanning XRF derived elemental maps and report a novel image processing methodology utilising these maps to produce a false colour representation of a “hidden” portrait by Edgar Degas. This work provides a cohesive methodology for both imaging and understanding the chemical composition of artworks, and enables scholarly understandings of cultural heritage, many of which have eluded conventional technologies. We anticipate that the outcome from this work will encourage the reassessment of some of the world’s great art treasures. PMID:27490856

  6. Handheld ultrasound array imaging device

    NASA Astrophysics Data System (ADS)

    Hwang, Juin-Jet; Quistgaard, Jens

    1999-06-01

    A handheld ultrasound imaging device, one that weighs less than five pounds, has been developed for diagnosing trauma on the combat battlefield as well as for a variety of commercial mobile diagnostic applications. This handheld device consists of four component ASICs, each designed using state-of-the-art microelectronics technologies. These ASICs are integrated with a convex array transducer to allow high quality imaging of soft tissues and blood flow in real time. The device is designed to be battery driven or AC powered, with built-in image storage and cineloop playback capability. Design methodologies for a handheld device are fundamentally different from those for a cart-based system. As the system architecture, the signal and image processing algorithms, and the image control circuitry and software in this device are designed for large-scale integration, the imaging performance of this device is adequate for the intended applications. To extend battery life, low-power design rules and power management circuits are incorporated in the design of each component ASIC. The performance of the prototype device is currently being evaluated for various applications, such as a primary image screening tool, fetal imaging in obstetrics, and foreign object detection and wound assessment for emergency care.

  7. Quantitative image fusion in infrared radiometry

    NASA Astrophysics Data System (ADS)

    Romm, Iliya; Cukurel, Beni

    2018-05-01

    Towards high-accuracy infrared radiance estimates, measurement practices and processing techniques aimed to achieve quantitative image fusion using a set of multi-exposure images of a static scene are reviewed. The conventional non-uniformity correction technique is extended, as the original is incompatible with quantitative fusion. Recognizing the inherent limitations of even the extended non-uniformity correction, an alternative measurement methodology, which relies on estimates of the detector bias using self-calibration, is developed. Combining data from multi-exposure images, two novel image fusion techniques that ultimately provide high tonal fidelity of a photoquantity are considered: ‘subtract-then-fuse’, which conducts image subtraction in the camera output domain and partially negates the bias frame contribution common to both the dark and scene frames; and ‘fuse-then-subtract’, which reconstructs the bias frame explicitly and conducts image fusion independently for the dark and the scene frames, followed by subtraction in the photoquantity domain. The performances of the different techniques are evaluated for various synthetic and experimental data, identifying the factors contributing to potential degradation of the image quality. The findings reflect the superiority of the ‘fuse-then-subtract’ approach, conducting image fusion via per-pixel nonlinear weighted least squares optimization.
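
    A sketch of the simpler 'subtract-then-fuse' path: each exposure votes for the per-pixel photoquantity with a weight that de-emphasizes under- and over-exposed samples; the hat weight and saturation level are illustrative assumptions:

      import numpy as np

      def fuse(frames, darks, t_exp, sat=4095):
          """Per-pixel weighted estimate of the photoquantity."""
          q = np.zeros(frames[0].shape, dtype=float)
          wsum = np.zeros_like(q)
          for img, dark, t in zip(frames, darks, t_exp):
              x = img.astype(float)
              w = 1.0 - np.abs(2.0 * x / sat - 1.0)   # hat weight, 0 at extremes
              q += w * (x - dark) / t                 # radiance from one exposure
              wsum += w
          return q / np.maximum(wsum, 1e-9)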

  8. A data grid for imaging-based clinical trials

    NASA Astrophysics Data System (ADS)

    Zhou, Zheng; Chao, Sander S.; Lee, Jasper; Liu, Brent; Documet, Jorge; Huang, H. K.

    2007-03-01

    Clinical trials play a crucial role in testing new drugs or devices in modern medicine. Medical imaging has also become an important tool in clinical trials because images provide a unique and fast diagnosis with visual observation and quantitative assessment. A typical imaging-based clinical trial consists of: 1) A well-defined rigorous clinical trial protocol, 2) a radiology core that has a quality control mechanism, a biostatistics component, and a server for storing and distributing data and analysis results; and 3) many field sites that generate and send image studies to the radiology core. As the number of clinical trials increases, it becomes a challenge for a radiology core servicing multiple trials to have a server robust enough to administrate and quickly distribute information to participating radiologists/clinicians worldwide. The Data Grid can satisfy the aforementioned requirements of imaging based clinical trials. In this paper, we present a Data Grid architecture for imaging-based clinical trials. A Data Grid prototype has been implemented in the Image Processing and Informatics (IPI) Laboratory at the University of Southern California to test and evaluate performance in storing trial images and analysis results for a clinical trial. The implementation methodology and evaluation protocol of the Data Grid are presented.

  9. Exogenous Molecular Probes for Targeted Imaging in Cancer: Focus on Multi-modal Imaging

    PubMed Central

    Joshi, Bishnu P.; Wang, Thomas D.

    2010-01-01

    Cancer is one of the major causes of mortality and morbidity in our healthcare system. Molecular imaging is an emerging methodology for the early detection of cancer, guidance of therapy, and monitoring of response. The development of new instruments and exogenous molecular probes that can be labeled for multi-modality imaging is critical to this process. Today, molecular imaging is at a crossroad, and new targeted imaging agents are expected to broadly expand our ability to detect and manage cancer. This integrated imaging strategy will permit clinicians to not only localize lesions within the body but also to manage their therapy by visualizing the expression and activity of specific molecules. This information is expected to have a major impact on drug development and understanding of basic cancer biology. At this time, a number of molecular probes have been developed by conjugating various labels to affinity ligands for targeting in different imaging modalities. This review will describe the current status of exogenous molecular probes for optical, scintigraphic, MRI and ultrasound imaging platforms. Furthermore, we will also shed light on how these techniques can be used synergistically in multi-modal platforms and how these techniques are being employed in current research. PMID:22180839

  10. Semi-automatic mapping of linear-trending bedforms using 'Self-Organizing Maps' algorithm

    NASA Astrophysics Data System (ADS)

    Foroutan, M.; Zimbelman, J. R.

    2017-09-01

    Increased application of high resolution spatial data, such as high resolution satellite or Unmanned Aerial Vehicle (UAV) images from Earth, as well as High Resolution Imaging Science Experiment (HiRISE) images from Mars, makes it necessary to develop automated techniques capable of extracting detailed geomorphologic elements from such large data sets. Model validation by repeated images in environmental management studies, such as of climate-related changes, as well as increasing access to high-resolution satellite images, underlines the demand for detailed automatic image-processing techniques in remote sensing. This study presents a methodology based on an unsupervised Artificial Neural Network (ANN) algorithm, known as Self Organizing Maps (SOM), to achieve the semi-automatic extraction of linear features with small footprints on satellite images. SOM is based on competitive learning and is efficient for handling huge data sets. We applied the SOM algorithm to high resolution satellite images of Earth and Mars (Quickbird, Worldview and HiRISE) in order to facilitate and speed up image analysis and to improve the accuracy of the results. About 98% overall accuracy and a 0.001 quantization error in the recognition of small linear-trending bedforms demonstrate a promising framework.
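
    A minimal self-contained SOM sketch: a grid of code vectors is pulled toward samples with a neighborhood that shrinks over time; the grid size, rates, and stand-in feature vectors are illustrative assumptions:

      import numpy as np

      rng = np.random.default_rng(1)

      def train_som(X, rows=10, cols=10, iters=5000, lr0=0.5, sig0=3.0):
          """Competitive learning: move the winner and its neighbors toward x."""
          W = rng.random((rows, cols, X.shape[1]))
          gy, gx = np.mgrid[0:rows, 0:cols]
          for t in range(iters):
              x = X[rng.integers(len(X))]
              d = np.linalg.norm(W - x, axis=2)
              by, bx = np.unravel_index(d.argmin(), d.shape)  # best matching unit
              frac = 1.0 - t / iters
              sig, lr = sig0 * frac + 0.5, lr0 * frac
              h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sig ** 2))
              W += lr * h[..., None] * (x - W)                # pull toward sample
          return W

      features = rng.random((2000, 6))   # stand-in per-pixel feature vectors
      codebook = train_som(features)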

  11. A computational pipeline for quantification of pulmonary infections in small animal models using serial PET-CT imaging

    PubMed Central

    2013-01-01

    Background Infectious diseases are the second leading cause of death worldwide. In order to better understand and treat them, an accurate evaluation using multi-modal imaging techniques for anatomical and functional characterizations is needed. For non-invasive imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), there have been many engineering improvements that have significantly enhanced the resolution and contrast of the images, but there are still insufficient computational algorithms available for researchers to use when accurately quantifying imaging data from anatomical structures and functional biological processes. Since the development of such tools may potentially translate basic research into the clinic, this study focuses on the development of a quantitative and qualitative image analysis platform that provides a computational radiology perspective for pulmonary infections in small animal models. Specifically, we designed (a) a fast and robust automated and semi-automated image analysis platform and a quantification tool that can facilitate accurate diagnostic measurements of pulmonary lesions as well as volumetric measurements of anatomical structures, and incorporated (b) an image registration pipeline to our proposed framework for volumetric comparison of serial scans. This is an important investigational tool for small animal infectious disease models that can help advance researchers’ understanding of infectious diseases. Methods We tested the utility of our proposed methodology by using sequentially acquired CT and PET images of rabbit, ferret, and mouse models with respiratory infections of Mycobacterium tuberculosis (TB), H1N1 flu virus, and an aerosolized respiratory pathogen (necrotic TB) for a total of 92, 44, and 24 scans for the respective studies, with half of the scans from CT and the other half from PET. Institutional Administrative Panel on Laboratory Animal Care approvals were obtained prior to conducting this research. First, the proposed computational framework registered PET and CT images to provide spatial correspondences between images. Second, the lungs from the CT scans were segmented using an interactive region growing (IRG) segmentation algorithm with mathematical morphology operations to avoid false positive (FP) uptake in PET images. Finally, we segmented significant radiotracer uptake from the PET images in lung regions determined from CT and computed metabolic volumes of the significant uptake. All segmentation processes were compared with expert radiologists’ delineations (ground truths). Metabolic and gross volumes of lesions were automatically computed with the segmentation processes using PET and CT images, and percentage changes in those volumes over time were calculated. Standardized uptake value (SUV) analysis from PET images was conducted as a complementary quantitative metric for disease severity assessment. Thus, the severity and extent of pulmonary lesions were examined through both PET and CT images using the aforementioned quantification metrics outputted from the proposed framework. Results Each animal study was evaluated within the same subject class, and all steps of the proposed methodology were evaluated separately. We quantified the accuracy of the proposed algorithm with respect to state-of-the-art segmentation algorithms. For evaluation of the segmentation results, the Dice similarity coefficient (DSC) as an overlap measure and the Hausdorff distance as a shape dissimilarity measure were used. Significant correlations regarding the estimated lesion volumes were obtained both in CT and PET images with respect to the ground truths (R2 = 0.8922, p < 0.01 and R2 = 0.8664, p < 0.01, respectively). The segmentation accuracy (DSC (%)) was 93.4±4.5% for normal lung CT scans and 86.0±7.1% for pathological lung CT scans. Experiments showed excellent agreement (all above 85%) with expert evaluations for both structural and functional imaging modalities. Apart from the quantitative analysis of each animal, we also qualitatively showed how metabolic volumes changed over time by examining serial PET/CT scans. Evaluation of the registration processes was based on precisely defined anatomical landmark points by expert clinicians. Average errors of 2.66, 3.93, and 2.52 mm were found in the rabbit, ferret, and mouse data, respectively (all within the resolution limits). Quantitative results obtained from the proposed methodology were visually related to the progress and severity of the pulmonary infections as verified by the participating radiologists. Moreover, we demonstrated that lesions due to the infections were metabolically active and appeared multi-focal in nature, and we observed similar patterns in the CT images as well. Consolidation and ground glass opacity were the main abnormal imaging patterns and consistently appeared in all CT images. We also found that the gross and metabolic lesion volume percentages follow the same trend as the SUV-based evaluation in the longitudinal analysis. Conclusions We explored the feasibility of using PET and CT imaging modalities in three distinct small animal models for two diverse pulmonary infections. We concluded from the clinical findings, derived from the proposed computational pipeline, that PET-CT imaging is an invaluable hybrid modality for tracking pulmonary infections longitudinally in small animals and has great potential to become routinely used in clinics. Our proposed methodology showed that automated computer-aided lesion detection and quantification of pulmonary infections in small animal models is efficient and accurate compared to the clinical standard of manual and semi-automated approaches. Automated analysis of images in pre-clinical applications can increase the efficiency and quality of pre-clinical findings that ultimately inform downstream experimental design in human clinical studies; this innovation will allow researchers and clinicians to more effectively allocate study resources with respect to research demands without compromising accuracy. PMID:23879987
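
    The two segmentation measures used above can be computed directly from binary masks; directed_hausdorff operates on point coordinates, hence the argwhere calls (the masks themselves are stand-ins here):

      import numpy as np
      from scipy.spatial.distance import directed_hausdorff

      def dice(a, b):
          """Overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      def hausdorff(a, b):
          """Symmetric Hausdorff distance between two masks (in pixels)."""
          pa, pb = np.argwhere(a), np.argwhere(b)
          return max(directed_hausdorff(pa, pb)[0],
                     directed_hausdorff(pb, pa)[0])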

  12. Infrared Imaging of Carbon and Ceramic Composites: Data Reproducibility

    NASA Astrophysics Data System (ADS)

    Knight, B.; Howard, D. R.; Ringermacher, H. I.; Hudson, L. D.

    2010-02-01

    Infrared NDE techniques have proven to be superior for imaging of flaws in ceramic matrix composites (CMC) and carbon silicon carbide composites (C/SiC). Not only can one obtain accurate depth gauging of flaws such as delaminations and layered porosity in complex-shaped components such as airfoils and other aeronautical components, but also excellent reproducibility of image data is obtainable using the STTOF (Synthetic Thermal Time-of-Flight) methodology. The imaging of large complex shapes is fast and reliable. This methodology as applied to large C/SiC flight components at the NASA Dryden Flight Research Center will be described.

  13. Comprehensive analysis of line-edge and line-width roughness for EUV lithography

    NASA Astrophysics Data System (ADS)

    Bonam, Ravi; Liu, Chi-Chun; Breton, Mary; Sieg, Stuart; Seshadri, Indira; Saulnier, Nicole; Shearer, Jeffrey; Muthinti, Raja; Patlolla, Raghuveer; Huang, Huai

    2017-03-01

Pattern transfer fidelity is always a major challenge for any lithography process and needs continuous improvement. Lithographic processes in the semiconductor industry are primarily driven by optical imaging on photosensitive polymeric materials (resists). Quality of pattern transfer can be assessed by quantifying multiple parameters such as feature size uniformity (CD), placement, roughness, and sidewall angles. Roughness in features primarily corresponds to variation of the line edge or line width and has gained considerable significance, particularly as feature sizes shrink to the same order as the variations themselves. This has caused downstream processes (etch (RIE), chemical mechanical polish (CMP), etc.) to reconsider their respective tolerance levels. A very important aspect of this work is the relevance of roughness metrology from pattern formation at the resist through subsequent processes, particularly electrical validity. A major drawback of the current LER/LWR metric (sigma) is its lack of relevance across multiple downstream processes, which affects material selection at various unit processes. In this work we present a comprehensive assessment of line-edge and line-width roughness at multiple lithographic transfer processes. To simulate the effect of roughness, a pattern was designed with periodic jogs on the edges of lines with varying amplitudes and frequencies. Numerous methodologies have been proposed to analyze roughness, and in this work we apply them to programmed roughness structures to assess each technique's sensitivity. This work also aims to identify a methodology for quantifying roughness that remains relevant across downstream processes.
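
    The sigma metric criticized above is simply the standard deviation of the sampled edge or width; a minimal sketch of the conventional 3-sigma LER/LWR estimates (our illustration, with hypothetical inputs, not the paper's code) follows.

    ```python
    import numpy as np

    def ler_3sigma(edge_nm):
        """3-sigma line-edge roughness from sampled edge positions (nm)."""
        return 3.0 * np.std(edge_nm, ddof=1)

    def lwr_3sigma(left_edge_nm, right_edge_nm):
        """3-sigma line-width roughness from paired edge samples (nm)."""
        return 3.0 * np.std(right_edge_nm - left_edge_nm, ddof=1)
    ```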

  14. Segmentation of medical images using explicit anatomical knowledge

    NASA Astrophysics Data System (ADS)

    Wilson, Laurie S.; Brown, Stephen; Brown, Matthew S.; Young, Jeanne; Li, Rongxin; Luo, Suhuai; Brandt, Lee

    1999-07-01

Knowledge-based image segmentation is defined in terms of the separation of image analysis procedures from the representation of knowledge. Such an architecture is particularly suitable for medical image segmentation because of the large amount of structured domain knowledge. A general methodology for the application of knowledge-based methods to medical image segmentation is described. This includes frames for knowledge representation, fuzzy logic for anatomical variations, and a strategy for determining the order of segmentation from the model specification. The method has been applied to three separate problems: 3D thoracic CT, chest X-rays, and CT angiography. The application of the same methodology to such a range of applications suggests a major role in medical imaging for segmentation methods incorporating representation of anatomical knowledge.

  15. From Acoustic Segmentation to Language Processing: Evidence from Optical Imaging

    PubMed Central

    Obrig, Hellmuth; Rossi, Sonja; Telkemeyer, Silke; Wartenburger, Isabell

    2010-01-01

During language acquisition in infancy and when learning a foreign language, the segmentation of the auditory stream into words and phrases is a complex process. Intuitively, learners use “anchors” to segment the acoustic speech stream into meaningful units like words and phrases. Regularities on a segmental (e.g., phonological) or suprasegmental (e.g., prosodic) level can provide such anchors. Regarding the neuronal processing of these two kinds of linguistic cues, a left-hemispheric dominance for segmental and a right-hemispheric bias for suprasegmental information has been reported in adults. Though lateralization is common in a number of higher cognitive functions, its prominence in language may also be a key to understanding the rapid emergence of the language network in infants and the ease with which we master our language in adulthood. One question here is whether the hemispheric lateralization is driven by linguistic input per se or whether non-linguistic, especially acoustic factors, “guide” the lateralization process. Methodologically, functional magnetic resonance imaging provides unsurpassed anatomical detail for such an enquiry. However, instrumental noise, experimental constraints and interference with EEG assessment limit its applicability, particularly in infants and also when investigating the link between auditory and linguistic processing. Optical methods have the potential to fill this gap. Here we review a number of recent studies using optical imaging to investigate hemispheric differences during segmentation and basic auditory feature analysis in language development. PMID:20725516

  16. Modeling the UO2 ex-AUC pellet process and predicting the fuel rod temperature distribution under steady-state operating condition

    NASA Astrophysics Data System (ADS)

    Hung, Nguyen Trong; Thuan, Le Ba; Thanh, Tran Chi; Nhuan, Hoang; Khoai, Do Van; Tung, Nguyen Van; Lee, Jin-Young; Jyothi, Rajesh Kumar

    2018-06-01

The modeling of the uranium dioxide pellet fabrication process from ammonium uranyl carbonate-derived uranium dioxide powder (UO2 ex-AUC powder) and the prediction of the fuel rod temperature distribution are reported in this paper. Response surface methodology (RSM) and the FRAPCON-4.0 code were used to model the process and to predict the fuel rod temperature under steady-state operating conditions. The AP-1000 fuel rod design of Westinghouse Electric Corporation, with the pellet fabrication parameters taken from this study, provided the input data for the code. The predictions suggest a relationship between the fabrication parameters of the UO2 pellets and their temperature distribution in the nuclear reactor.
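
    Response surface methodology typically fits a second-order polynomial to designed-experiment data; the sketch below is our assumption about the model form (the abstract does not specify it) and fits a two-factor quadratic surface by least squares.

    ```python
    import numpy as np

    def fit_quadratic_rsm(X, y):
        """Fit y = b0 + b1*x1 + b2*x2 + b3*x1*x2 + b4*x1^2 + b5*x2^2
        to two-factor experimental data by ordinary least squares."""
        x1, x2 = X[:, 0], X[:, 1]
        A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coeffs
    ```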

  17. Atomic force microscopy and spectroscopy to probe single membrane proteins in lipid bilayers.

    PubMed

    Sapra, K Tanuj

    2013-01-01

The atomic force microscope (AFM) has opened vast avenues hitherto inaccessible to the biological scientist. The high temporal (millisecond) and spatial (nanometer) resolutions of the AFM are suited for studying many biological processes in their native conditions. The AFM cantilever stylus is aptly termed a "lab on a tip" owing to its versatility as an imaging tool as well as a handle to manipulate single bonds and proteins. Recent examples assert that the AFM can be used to study the mechanical properties and monitor processes of single proteins and single cells, thus affording insight into important mechanistic details. This chapter specifically focuses on practical and analytical protocols of single-molecule AFM methodologies related to high-resolution imaging and single-molecule force spectroscopy of membrane proteins. Both techniques are operator oriented and require specialized working knowledge of the instrument as well as theoretical and practical skills.

  18. Quantitative mass imaging of single biological macromolecules.

    PubMed

    Young, Gavin; Hundt, Nikolas; Cole, Daniel; Fineberg, Adam; Andrecka, Joanna; Tyler, Andrew; Olerinyova, Anna; Ansari, Ayla; Marklund, Erik G; Collier, Miranda P; Chandler, Shane A; Tkachenko, Olga; Allen, Joel; Crispin, Max; Billington, Neil; Takagi, Yasuharu; Sellers, James R; Eichmann, Cédric; Selenko, Philipp; Frey, Lukas; Riek, Roland; Galpin, Martin R; Struwe, Weston B; Benesch, Justin L P; Kukura, Philipp

    2018-04-27

    The cellular processes underpinning life are orchestrated by proteins and their interactions. The associated structural and dynamic heterogeneity, despite being key to function, poses a fundamental challenge to existing analytical and structural methodologies. We used interferometric scattering microscopy to quantify the mass of single biomolecules in solution with 2% sequence mass accuracy, up to 19-kilodalton resolution, and 1-kilodalton precision. We resolved oligomeric distributions at high dynamic range, detected small-molecule binding, and mass-imaged proteins with associated lipids and sugars. These capabilities enabled us to characterize the molecular dynamics of processes as diverse as glycoprotein cross-linking, amyloidogenic protein aggregation, and actin polymerization. Interferometric scattering mass spectrometry allows spatiotemporally resolved measurement of a broad range of biomolecular interactions, one molecule at a time. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  19. Setting up a proper power spectral density (PSD) and autocorrelation analysis for material and process characterization

    NASA Astrophysics Data System (ADS)

    Rutigliani, Vito; Lorusso, Gian Francesco; De Simone, Danilo; Lazzarino, Frederic; Rispens, Gijsbert; Papavieros, George; Gogolides, Evangelos; Constantoudis, Vassilios; Mack, Chris A.

    2018-03-01

Power spectral density (PSD) analysis is playing an increasingly critical role in the understanding of line-edge roughness (LER) and linewidth roughness (LWR) in a variety of applications across the industry. It is an essential step in obtaining an unbiased LWR estimate, as well as an extremely useful tool for process and material characterization. However, the PSD estimate can be affected by both random and systematic artifacts caused by image acquisition and measurement settings, which could irremediably alter its information content. In this paper, we report on the impact of various setting parameters (smoothing image processing filters, pixel size, and SEM noise levels) on the PSD estimate. We also discuss the use of the PSD analysis tool in a variety of cases. Looking beyond the basic roughness estimate, we use PSD and autocorrelation analysis to characterize resist blur [1], as well as low- and high-frequency roughness content, and we apply this technique to guide the EUV material stack selection. Our results clearly indicate that, if properly used, the PSD methodology is a very sensitive tool for investigating material and process variations.
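
    As a concrete reference for the analysis described here, a minimal periodogram-style PSD estimate for a detrended edge profile might look as follows (one common normalization; the paper's exact settings and windowing are not specified, so treat this as a sketch).

    ```python
    import numpy as np

    def edge_psd(edge_nm, pixel_nm):
        """One-sided PSD of a line-edge profile (nm^3), periodogram style.

        Normalized so that sum(psd) * df approximates the edge variance
        (even-length profile assumed for the Nyquist bin)."""
        x = edge_nm - edge_nm.mean()
        n = len(x)
        spec = np.fft.rfft(x)
        psd = (np.abs(spec) ** 2) * pixel_nm / n
        psd[1:-1] *= 2.0                        # fold negative frequencies
        freqs = np.fft.rfftfreq(n, d=pixel_nm)  # cycles per nm
        return freqs, psd
    ```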

  20. Estimating Regional Mass Balance of Himalayan Glaciers Using Hexagon Imagery: An Automated Approach

    NASA Astrophysics Data System (ADS)

    Maurer, J. M.; Rupper, S.

    2013-12-01

Currently there is much uncertainty regarding the present and future state of Himalayan glaciers, which supply meltwater for river systems vital to more than 1.4 billion people living throughout Asia. Previous assessments of regional glacier mass balance in the Himalayas using various remote sensing and field-based methods give inconsistent results, and most assessments are over relatively short (e.g., single decade) timescales. This study aims to quantify multi-decadal changes in volume and extent of Himalayan glaciers through efficient use of the large database of declassified 1970-80s era Hexagon stereo imagery. Automation of the DEM extraction process provides an effective workflow for many images to be processed and glacier elevation changes quantified with minimal user input. The tedious procedure of manual ground control point selection necessary for block-bundle adjustment (as ephemeris data are not available for the declassified images) is automated using the Maximally Stable Extremal Regions algorithm, which matches image elements between raw Hexagon images and georeferenced Landsat 15-m panchromatic images. Additional automated Hexagon DEM processing, co-registration, and bias correction allow for direct comparison with modern ASTER and SRTM elevation data, thus quantifying glacier elevation and area changes over several decades across largely inaccessible mountainous regions. As a consistent methodology is used for all glaciers, results will likely reveal significant spatial and temporal patterns in regional ice mass balance. Ultimately, these findings could have important implications for future water resource management in light of environmental change.
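
    Once two DEMs are co-registered, the geodetic mass-balance step reduces to differencing over the glacier mask; a schematic version is below (our illustration, with an assumed volume-to-mass conversion density of 850 kg m^-3, a common choice that the abstract does not state).

    ```python
    import numpy as np

    def geodetic_balance(dem_old, dem_new, glacier_mask, years, density=850.0):
        """Mean elevation change rate (m/yr) and specific mass balance
        (m w.e./yr) over a glacier mask from two co-registered DEMs."""
        dh = np.where(glacier_mask, dem_new - dem_old, np.nan)
        dh_rate = np.nanmean(dh) / years
        return dh_rate, dh_rate * density / 1000.0
    ```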

  1. De-noising of 3D multiple-coil MR images using modified LMMSE estimator.

    PubMed

    Yaghoobi, Nima; Hasanzadeh, Reza P R

    2018-06-20

De-noising is a crucial topic in Magnetic Resonance Imaging (MRI); the aim is to suppress noise while losing as little Magnetic Resonance (MR) image information as possible and preserving details. Nowadays multiple-coil MRI systems are preferred to single-coil ones because they accelerate the imaging process. Because the noise models in single-coil and multiple-coil MRI systems differ, de-noising methods adapted to single-coil systems do not work appropriately with multiple-coil ones. The noise in single-coil MRI systems is Rician, while in multiple-coil systems (if no subsampling occurs in k-space, or if GRAPPA reconstruction is performed across the coils) it obeys a noncentral chi (nc-χ) distribution. In this paper, a new filtering method based on the Linear Minimum Mean Square Error (LMMSE) estimator is proposed for multiple-coil MR images corrupted by nc-χ noise. In the presented method, to achieve an optimum similarity selection of voxels, the Bayesian Mean Square Error (BMSE) criterion is used and proved for the nc-χ noise model, and a nonlocal voxel selection methodology is proposed for the nc-χ distribution. The results illustrate robust and accurate performance compared to the related state-of-the-art methods, whether on ideal nc-χ images or GRAPPA-reconstructed ones. Copyright © 2018. Published by Elsevier Inc.
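
    The closed-form backbone of LMMSE-type schemes for nc-χ data is the second-moment relation E[M²] = A² + 2Lσ²; the simplified sketch below applies only that moment correction with a local box average. It is emphatically not the authors' full estimator (which adds the BMSE-driven nonlocal voxel selection); names and the neighbourhood choice are our assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def ncchi_moment_correction(m, sigma, coils, radius=1):
        """Estimate the noise-free signal A from nc-chi magnitude data
        using E[M^2] = A^2 + 2*L*sigma^2, with <M^2> taken over a local
        box neighbourhood of half-width `radius`."""
        m2_local = uniform_filter(m.astype(float) ** 2, size=2 * radius + 1)
        a2 = np.maximum(m2_local - 2 * coils * sigma**2, 0.0)
        return np.sqrt(a2)
    ```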

  2. Lesion registration for longitudinal disease tracking in an imaging informatics-based multiple sclerosis eFolder

    NASA Astrophysics Data System (ADS)

    Ma, Kevin; Liu, Joseph; Zhang, Xuejun; Lerner, Alex; Shiroishi, Mark; Amezcua, Lilyana; Liu, Brent

    2016-03-01

    We have designed and developed a multiple sclerosis eFolder system for patient data storage, image viewing, and automatic lesion quantification results stored in DICOM-SR format. The web-based system aims to be integrated in DICOM-compliant clinical and research environments to aid clinicians in patient treatments and data analysis. The system needs to quantify lesion volumes, identify and register lesion locations to track shifts in volume and quantity of lesions in a longitudinal study. In order to perform lesion registration, we have developed a brain warping and normalizing methodology using Statistical Parametric Mapping (SPM) MATLAB toolkit for brain MRI. Patients' brain MR images are processed via SPM's normalization processes, and the brain images are analyzed and warped according to the tissue probability map. Lesion identification and contouring are completed by neuroradiologists, and lesion volume quantification is completed by the eFolder's CAD program. Lesion comparison results in longitudinal studies show key growth and active regions. The results display successful lesion registration and tracking over a longitudinal study. Lesion change results are graphically represented in the web-based user interface, and users are able to correlate patient progress and changes in the MRI images. The completed lesion and disease tracking tool would enable the eFolder to provide complete patient profiles, improve the efficiency of patient care, and perform comprehensive data analysis through an integrated imaging informatics system.

  3. COMPUTATIONAL ANALYSIS BASED ON ARTIFICIAL NEURAL NETWORKS FOR AIDING IN DIAGNOSING OSTEOARTHRITIS OF THE LUMBAR SPINE.

    PubMed

    Veronezi, Carlos Cassiano Denipotti; de Azevedo Simões, Priscyla Waleska Targino; Dos Santos, Robson Luiz; da Rocha, Edroaldo Lummertz; Meláo, Suelen; de Mattos, Merisandra Côrtes; Cechinel, Cristian

    2011-01-01

To ascertain the advantages of applying artificial neural networks to recognize patterns on lumbar spine radiographs in order to aid in the process of diagnosing primary osteoarthritis. This was a cross-sectional descriptive analytical study with a quantitative approach and an emphasis on diagnosis. The training set was composed of images collected between January and July 2009 from patients who had undergone lateral-view digital radiographs of the lumbar spine, which were provided by a radiology clinic located in the municipality of Criciúma (SC). Out of the total of 260 images gathered, those with distortions, those presenting pathological conditions that altered the architecture of the lumbar spine and those with patterns that were difficult to characterize were discarded, resulting in 206 images. The image database (n = 206) was then subdivided, resulting in 68 radiographs for the training stage, 68 images for tests and 70 for validation. A hybrid neural network based on Kohonen self-organizing maps and on Multilayer Perceptron networks was used. After 90 cycles, the validation was carried out on the best results, achieving accuracy of 62.85%, sensitivity of 65.71% and specificity of 60%. Even though the effectiveness shown was moderate, this study is still innovative. The values show that the technique used has a promising future, pointing towards further studies on image and cycle processing methodology with a larger quantity of radiographs.
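
    The reported accuracy, sensitivity and specificity follow the usual confusion-matrix definitions; for clarity, a minimal sketch:

    ```python
    def diagnostic_metrics(tp, tn, fp, fn):
        """Standard validation metrics from confusion-matrix counts."""
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        sensitivity = tp / (tp + fn)   # true-positive rate
        specificity = tn / (tn + fp)   # true-negative rate
        return accuracy, sensitivity, specificity
    ```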

  4. Visual performance-based image enhancement methodology: an investigation of contrast enhancement algorithms

    NASA Astrophysics Data System (ADS)

    Neriani, Kelly E.; Herbranson, Travis J.; Reis, George A.; Pinkus, Alan R.; Goodyear, Charles D.

    2006-05-01

While vast numbers of image enhancing algorithms have already been developed, the majority of these algorithms have not been assessed in terms of their visual performance-enhancing effects using militarily relevant scenarios. The goal of this research was to apply a visual performance-based assessment methodology to evaluate six algorithms that were specifically designed to enhance the contrast of digital images. The image enhancing algorithms used in this study included three different histogram equalization algorithms, the Autolevels function, the Recursive Rational Filter technique described in Marsi, Ramponi, and Carrato [1] and the multiscale Retinex algorithm described in Rahman, Jobson and Woodell [2]. The methodology used in the assessment has been developed to acquire objective human visual performance data as a means of evaluating the contrast enhancement algorithms. Objective performance metrics, response time and error rate, were used to compare algorithm-enhanced images versus two baseline conditions: original non-enhanced images and contrast-degraded images. Observers completed a visual search task using a spatial forced-choice paradigm. Observers searched images for a target (a military vehicle) hidden among foliage and then indicated in which quadrant of the screen the target was located. Response time and percent correct were measured for each observer. Results of the study and future directions are discussed.
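
    Of the algorithms assessed, global histogram equalization has the simplest form; a minimal 8-bit sketch (our own, not the study's exact variant, and assuming a non-constant input image) is:

    ```python
    import numpy as np

    def histogram_equalize(img):
        """Global histogram equalization of an 8-bit greyscale image."""
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = hist.cumsum().astype(float)
        cdf_min = cdf[cdf > 0].min()
        lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
        return np.clip(lut, 0, 255).astype(np.uint8)[img]
    ```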

  5. The other-race effect in face learning: Using naturalistic images to investigate face ethnicity effects in a learning paradigm.

    PubMed

    Hayward, William G; Favelle, Simone K; Oxner, Matt; Chu, Ming Hon; Lam, Sze Man

    2017-05-01

    The other-race effect in face identification has been reported in many situations and by many different ethnicities, yet it remains poorly understood. One reason for this lack of clarity may be a limitation in the methodologies that have been used to test it. Experiments typically use an old-new recognition task to demonstrate the existence of the other-race effect, but such tasks are susceptible to different social and perceptual influences, particularly in terms of the extent to which all faces are equally individuated at study. In this paper we report an experiment in which we used a face learning methodology to measure the other-race effect. We obtained naturalistic photographs of Chinese and Caucasian individuals, which allowed us to test the ability of participants to generalize their learning to new ecologically valid exemplars of a face identity. We show a strong own-race advantage in face learning, such that participants required many fewer trials to learn names of own-race individuals than those of other-race individuals and were better able to identify learned own-race individuals in novel naturalistic stimuli. Since our methodology requires individuation of all faces, and generalization over large image changes, our finding of an other-race effect can be attributed to a specific deficit in the sensitivity of perceptual and memory processes to other-race faces.

  6. Kingfisher: a system for remote sensing image database management

    NASA Astrophysics Data System (ADS)

    Bruzzo, Michele; Giordano, Ferdinando; Dellepiane, Silvana G.

    2003-04-01

At present, retrieval methods in remote sensing image databases are mainly based on spatial-temporal information. The increasing number of images collected by the ground stations of earth observing systems emphasizes the need for database management with intelligent data retrieval capabilities. The purpose of the proposed method is to realize a new content-based retrieval system for remote sensing image databases with an innovative search tool based on image similarity. This methodology is quite innovative for this application: many systems currently exist for photographic images, for example QBIC and IKONA, but they are not able to properly extract and describe remote sensing image content. The target database is an archive of images originated from an X-SAR sensor (spaceborne mission, 1994). The best content descriptors, mainly texture parameters, guarantee high retrieval performance and can be extracted without losses independently of image resolution. The latter property allows the DBMS (Database Management System) to process a low amount of information, as in the case of quick-look images, improving time performance and memory access without reducing retrieval accuracy. The matching technique has been designed to enable image management (database population and retrieval) independently of image dimensions (width and height). Local and global content descriptors are compared with the query image during the retrieval phase, and the results seem very encouraging.
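
    At query time, content-based retrieval of this kind reduces to a nearest-neighbour search in descriptor space; schematically (hypothetical texture-descriptor vectors, not Kingfisher's actual matching code):

    ```python
    import numpy as np

    def retrieve(query_desc, db_descs, k=5):
        """Return indices of the k database images whose texture
        descriptors are closest (Euclidean) to the query descriptor."""
        dists = np.linalg.norm(db_descs - query_desc, axis=1)
        return np.argsort(dists)[:k]
    ```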

  7. Pre-Hardware Optimization of Spacecraft Image Processing Software Algorithms and Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Petrick, David J.; Day, John H. (Technical Monitor)

    2001-01-01

Spacecraft telemetry rates have steadily increased over the last decade, presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image processing application. Although large super-computer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The solution is based on a Personal Computer (PC) platform and synergy of optimized software algorithms and re-configurable computing hardware technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processing (DSP). It has been shown in [1] and [2] that this configuration can provide superior, inexpensive performance for a chosen application on the ground station or on-board a spacecraft. However, since this technology is still maturing, intensive pre-hardware steps are necessary to achieve the benefits of hardware implementation. This paper describes these steps for the GOES-8 application, a software project developed using Interactive Data Language (IDL) (Trademark of Research Systems, Inc.) on a Workstation/UNIX platform. The solution involves converting the application to a PC/Windows/RC platform, selected mainly by the availability of low cost, adaptable high-speed RC hardware. In order for the hybrid system to run, the IDL software was modified to account for platform differences. It was interesting to examine the gains and losses in performance on the new platform, as well as unexpected observations before implementing hardware. After substantial pre-hardware optimization steps, the necessity of hardware implementation for bottleneck code in the PC environment became evident; it was solved beginning with the methodology described in [1] and [2] and by implementing a novel methodology for this specific application [6]. The PC-RC interface bandwidth problem for the class of applications with moderate input-output data rates but large intermediate multi-thread data streams has been addressed and mitigated. This opens a new class of satellite image processing applications to the solution of bottleneck problems using RC technologies. The level of abstraction at which science algorithms must be specified for RC hardware implementation is also described. Selected Matlab functions already implemented in hardware were investigated for their direct applicability to the GOES-8 application with the intent to create a library of Matlab and IDL RC functions for ongoing work. A complete class of spacecraft image processing applications using embedded re-configurable computing technology to meet real-time requirements, including performance results and comparison with the existing system, is described in this paper.

  8. Advanced microscopic methods for the detection of adhesion barriers in immunology in medical imaging

    NASA Astrophysics Data System (ADS)

    Lawrence, Shane

    2017-07-01

Advanced methods of microscopy, and the advanced techniques of analysis stemming therefrom, have developed greatly in the past few years. The use of single discrete methods has given way to the combination of methods, which means an increase in data for processing to progress to the analysis and diagnosis of ailments and diseases which can be viewed by each and any method. This presentation shows the combination of such methods, gives examples of the data which arise from each individual method and from the combined methodology, and suggests how such data can be streamlined to enable conclusions to be drawn about the particular biological and biochemical considerations that arise. In this particular project the subject of the methodology was human lactoferrin (hlf) and the relation of its adhesion properties to the overcoming of barriers to adhesion, mainly on the perimeter of the cellular unit, and how this affects the process of immunity in any particular case.

  9. Flame filtering and perimeter localization of wildfires using aerial thermal imagery

    NASA Astrophysics Data System (ADS)

    Valero, Mario M.; Verstockt, Steven; Rios, Oriol; Pastor, Elsa; Vandecasteele, Florian; Planas, Eulàlia

    2017-05-01

Airborne thermal infrared (TIR) imaging systems are being increasingly used for wildfire tactical monitoring since they show important advantages over spaceborne platforms and visible sensors while becoming much more affordable and much lighter than multispectral cameras. However, the analysis of aerial TIR images entails a number of difficulties which have thus far prevented monitoring tasks from being totally automated. One of these issues that needs to be addressed is the appearance of flame projections during the geo-correction of off-nadir images. Filtering these flames is essential in order to accurately estimate the geographical location of the fuel burning interface. Therefore, we present a methodology which allows the automatic localisation of the active fire contour free of flame projections. The actively burning area is detected in TIR georeferenced images through a combination of intensity thresholding techniques, morphological processing and active contours. Subsequently, flame projections are filtered out by the temporal frequency analysis of the appropriate contour descriptors. The proposed algorithm was tested on footage acquired during three large-scale field experimental burns. Results suggest this methodology may be suitable for automating the acquisition of quantitative data about the fire evolution. As future work, a revision of the low-pass filter implemented for the temporal analysis (currently a median filter) is recommended. The availability of up-to-date information about the fire state would improve situational awareness during an emergency response and may be used to calibrate data-driven simulators capable of emitting short-term accurate forecasts of the subsequent fire evolution.
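
    The thresholding-plus-morphology stage lends itself to a compact OpenCV sketch; the version below is our simplification (function name, kernel size and the radiometric cutoff `temp_threshold` are illustrative assumptions), without the active-contour refinement or temporal flame filtering.

    ```python
    import cv2
    import numpy as np

    def active_fire_contour(tir_frame, temp_threshold):
        """Segment the actively burning region in a georeferenced TIR
        frame: intensity thresholding plus morphological cleaning."""
        mask = (tir_frame > temp_threshold).astype(np.uint8)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop speckle
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill gaps
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return max(contours, key=cv2.contourArea) if contours else None
    ```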

  10. Solar Data Mining at Georgia State University

    NASA Astrophysics Data System (ADS)

    Angryk, R.; Martens, P. C.; Schuh, M.; Aydin, B.; Kempton, D.; Banda, J.; Ma, R.; Naduvil-Vadukootu, S.; Akkineni, V.; Küçük, A.; Filali Boubrahimi, S.; Hamdi, S. M.

    2016-12-01

In this talk we give an overview of research projects related to solar data analysis conducted at Georgia State University. We will provide an update on multiple advances made by our research team in the analysis of image parameters, spatio-temporal pattern mining and temporal data analysis, and on our experiences with big, heterogeneous solar data visualization, analysis, processing and storage. We will talk about up-to-date data mining methodologies and their importance for big data-driven solar physics research.

  11. Optical Spectroscopy and Imaging for the Noninvasive Evaluation of Engineered Tissues

    PubMed Central

    Rice, William L.; Hronik-Tupaj, Marie; Kaplan, David L.

    2008-01-01

    Optical spectroscopy and imaging approaches offer the potential to noninvasively assess different aspects of the cellular, extracellular matrix, and scaffold components of engineered tissues. In addition, the combination of multiple imaging modalities within a single instrument is highly feasible, allowing acquisition of complementary information related to the structure, organization, biochemistry, and physiology of the sample. The ability to characterize and monitor the dynamic interactions that take place as engineered tissues develop promises to enhance our understanding of the interdependence of processes that ultimately leads to functional tissue outcomes. It is expected that this information will impact significantly upon our abilities to optimize the design of biomaterial scaffolds, bioreactors, and cell systems. Here, we review the principles and performance characteristics of the main methodologies that have been exploited thus far, and we present examples of corresponding tissue engineering studies. PMID:18844604

  12. Use of simulated experiments for material characterization of brittle materials subjected to high strain rate dynamic tension

    PubMed Central

    Saletti, Dominique

    2017-01-01

    Rapid progress in ultra-high-speed imaging has allowed material properties to be studied at high strain rates by applying full-field measurements and inverse identification methods. Nevertheless, the sensitivity of these techniques still requires a better understanding, since various extrinsic factors present during an actual experiment make it difficult to separate different sources of errors that can significantly affect the quality of the identified results. This study presents a methodology using simulated experiments to investigate the accuracy of the so-called spalling technique (used to study tensile properties of concrete subjected to high strain rates) by numerically simulating the entire identification process. The experimental technique uses the virtual fields method and the grid method. The methodology consists of reproducing the recording process of an ultra-high-speed camera by generating sequences of synthetically deformed images of a sample surface, which are then analysed using the standard tools. The investigation of the uncertainty of the identified parameters, such as Young's modulus along with the stress–strain constitutive response, is addressed by introducing the most significant user-dependent parameters (i.e. acquisition speed, camera dynamic range, grid sampling, blurring), proving that the used technique can be an effective tool for error investigation. This article is part of the themed issue ‘Experimental testing and modelling of brittle materials at high strain rates’. PMID:27956505
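
    The synthetic-deformation step at the heart of such simulated experiments can be sketched compactly; below is a hedged backward-warping example (our illustration, not the article's code; `deform_image` and the pixel-unit displacement fields `ux`/`uy` are assumed inputs), using cubic interpolation.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def deform_image(ref, ux, uy):
        """Warp a reference grid image with displacement fields (ux, uy)
        in pixels: deformed(x) ~ ref(x - u), i.e. backward mapping."""
        yy, xx = np.mgrid[0:ref.shape[0], 0:ref.shape[1]].astype(float)
        return map_coordinates(ref, [yy - uy, xx - ux],
                               order=3, mode='nearest')
    ```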

  13. Widefield quantitative multiplex surface enhanced Raman scattering imaging in vivo

    NASA Astrophysics Data System (ADS)

    McVeigh, Patrick Z.; Mallia, Rupananda J.; Veilleux, Israel; Wilson, Brian C.

    2013-04-01

In recent years numerous studies have shown the potential advantages of molecular imaging in vitro and in vivo using contrast agents based on surface enhanced Raman scattering (SERS); however, the low throughput of traditional point-scanned imaging methodologies has limited their use in biological imaging. In this work we demonstrate that direct widefield Raman imaging based on a tunable filter is capable of quantitative multiplex SERS imaging in vivo, and that this imaging is possible with acquisition times which are orders of magnitude lower than achievable with comparable point-scanned methodologies. The system, designed for small animal imaging, has a linear response from 0.01 to 100 pM, acquires typical in vivo images in <10 s, and with suitable SERS reporter molecules is capable of multiplex imaging without compensation for spectral overlap. To demonstrate the utility of widefield Raman imaging in biological applications, we show quantitative imaging of four simultaneous SERS reporter molecules in vivo with resulting probe quantification that is in excellent agreement with known quantities (R2>0.98).

  14. Controlled electrostatic methodology for imaging indentations in documents.

    PubMed

    Yaraskavitch, Luke; Graydon, Matthew; Tanaka, Tobin; Ng, Lay-Keow

    2008-05-20

The electrostatic process for imaging indentations on documents using the ESDA device is investigated under controlled experimental settings. An in-house modified commercial xerographic developer housing is used to control the uniformity and volume of toner deposition, allowing for reproducible image development. Along with this novel development tool, an electrostatic voltmeter and fixed environmental conditions facilitate an optimization process. Sample documents are preconditioned in a humidity cabinet with microprocessor control, and the significant benefit of humidification above 70% RH on image quality is verified. Improving on the subjective methods of previous studies, image quality analysis is carried out in an objective and reproducible manner using the PIAS-II. For the seven commercial paper types tested, the optimum ESDA operating point is found to be at an electric potential near -400 V at the Mylar surface; however, for most paper types, the optimum operating regime is found to be quite broad, spanning relatively small electric potentials between -200 and -550 V. At -400 V, the film right above an indented area generally carries a voltage which is 30-50 V less negative than the non-indented background. In contrast with Seward's findings [G.H. Seward, Model for electrostatic imaging of forensic evidence via discharge through Mylar-paper path, J. Appl. Phys. 83 (3) (1998) 1450-1456; G.H. Seward, Practical implications of the charge transport model for electrostatic detection apparatus (ESDA), J. Forensic Sci. 44 (4) (1999) 832-836], a period of charge decay before image development is not required when operating in this optimal regime. A brief investigation of the role played by paper-to-paper friction during the indentation process is conducted using our optimized development method.

  15. Smart concrete slabs with embedded tubular PZT transducers for damage detection

    NASA Astrophysics Data System (ADS)

    Gao, Weihang; Huo, Linsheng; Li, Hongnan; Song, Gangbing

    2018-02-01

The objective of this study is to develop a new concept and methodology of a smart concrete slab (SCS) with an embedded tubular lead zirconate titanate (PZT) transducer array for image-based damage detection. Stress waves, as the detecting signals, are generated by the embedded tubular piezoceramic transducers in the SCS. Tubular piezoceramic transducers are used due to their capacity to generate radially uniform stress waves in a two-dimensional concrete slab (such as bridge decks and walls), increasing the monitoring range. A circular-type delay-and-sum (DAS) imaging algorithm is developed to image the active acoustic sources based on the direct response received by each sensor. After the scattering signals from the damage are obtained by subtracting the baseline response of the concrete structures from those of the defective ones, the elliptical-type DAS imaging algorithm is employed to process the scattering signals and reconstruct the image of the damage. Finally, two experiments, including active acoustic source monitoring and damage imaging for concrete structures, are carried out to illustrate and demonstrate the effectiveness of the proposed method.
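
    The circular DAS step admits a compact sketch; the following back-projects sensor traces by one-way travel times to image an active acoustic source (our simplification, assuming emission at t = 0 and a constant wave speed; names are illustrative, and the authors' elliptical variant for scattered signals is not shown).

    ```python
    import numpy as np

    def das_source_image(signals, sensors_xy, grid_x, grid_y, wave_speed, fs):
        """Circular delay-and-sum: back-project each sensor trace by its
        one-way travel time and sum over sensors at every grid pixel."""
        img = np.zeros((len(grid_y), len(grid_x)))
        for iy, y in enumerate(grid_y):
            for ix, x in enumerate(grid_x):
                acc = 0.0
                for trace, (sx, sy) in zip(signals, sensors_xy):
                    delay = np.hypot(x - sx, y - sy) / wave_speed
                    k = int(round(delay * fs))
                    if k < len(trace):
                        acc += trace[k]
                img[iy, ix] = abs(acc)
        return img
    ```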

  16. Correlative 3D imaging of Whole Mammalian Cells with Light and Electron Microscopy

    PubMed Central

    Murphy, Gavin E.; Narayan, Kedar; Lowekamp, Bradley C.; Hartnell, Lisa M.; Heymann, Jurgen A. W.; Fu, Jing; Subramaniam, Sriram

    2011-01-01

    We report methodological advances that extend the current capabilities of ion-abrasion scanning electron microscopy (IA–SEM), also known as focused ion beam scanning electron microscopy, a newly emerging technology for high resolution imaging of large biological specimens in 3D. We establish protocols that enable the routine generation of 3D image stacks of entire plastic-embedded mammalian cells by IA-SEM at resolutions of ~10 to 20 nm at high contrast and with minimal artifacts from the focused ion beam. We build on these advances by describing a detailed approach for carrying out correlative live confocal microscopy and IA–SEM on the same cells. Finally, we demonstrate that by combining correlative imaging with newly developed tools for automated image processing, small 100 nm-sized entities such as HIV-1 or gold beads can be localized in SEM image stacks of whole mammalian cells. We anticipate that these methods will add to the arsenal of tools available for investigating mechanisms underlying host-pathogen interactions, and more generally, the 3D subcellular architecture of mammalian cells and tissues. PMID:21907806

  17. 3D image processing architecture for camera phones

    NASA Astrophysics Data System (ADS)

    Atanassov, Kalin; Ramachandra, Vikas; Goma, Sergio R.; Aleksic, Milivoje

    2011-03-01

Putting high quality and easy-to-use 3D technology into the hands of regular consumers has become a recent challenge as interest in 3D technology has grown. Making 3D technology appealing to the average user requires that it be made fully automatic and foolproof. Designing a fully automatic 3D capture and display system requires: 1) identifying critical 3D technology issues like camera positioning, disparity control rationale, and screen geometry dependency, and 2) designing a methodology to automatically control them. Implementing 3D capture functionality on phone cameras necessitates designing algorithms to fit within the processing capabilities of the device. Various constraints like sensor position tolerances, sensor 3A tolerances, post-processing, 3D video resolution and frame rate should be carefully considered for their influence on the 3D experience. Issues with migrating functions such as zoom and pan from the 2D usage model (both during capture and display) to 3D need to be resolved to ensure the highest level of user experience. It is also very important that the 3D usage scenario (including interactions between the user and the capture/display device) is carefully considered. Finally, both the processing power of the device and the practicality of the scheme need to be taken into account while designing the calibration and processing methodology.

  18. WE-E-12A-01: Medical Physics 1.0 to 2.0: MRI, Displays, Informatics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pickens, D; Flynn, M; Peck, D

Medical Physics 2.0 is a bold vision for an existential transition of clinical imaging physics in the face of the new realities of value-based and evidence-based medicine, comparative effectiveness, and meaningful use. It speaks to how clinical imaging physics can expand beyond traditional insular models of inspection and acceptance testing, oriented toward compliance, towards team-based models of operational engagement, prospective definition and assurance of effective use, and retrospective evaluation of clinical performance. Organized into four sessions of the AAPM, this particular session focuses on three specific modalities as outlined below. MRI 2.0: This presentation will look into the future of clinical MR imaging and what the clinical medical physicist will need to be doing as the technology of MR imaging evolves. Many of the measurement techniques used today will need to be expanded to address the advent of higher field imaging systems and dedicated imagers for specialty applications. Included will be the need to address quality assurance and testing metrics for multi-channel MR imagers and hybrid devices such as MR/PET systems. New pulse sequences and acquisition methods, increasing use of MR spectroscopy, and real-time guidance procedures will place the burden on the medical physicist to define and use new tools to properly evaluate these systems, but the clinical applications must be understood so that these tools are used correctly. Finally, new rules, clinical requirements, and regulations will mean that the medical physicist must actively work to keep her/his sites compliant and must work closely with physicians to ensure best performance of these systems. Informatics Display 1.0 to 2.0: Medical displays are an integral part of medical imaging operation. The DICOM and AAPM (TG18) efforts have led to clear definitions of performance requirements of monochrome medical displays that can be followed by medical physicists to ensure proper performance. However, effective implementation of that oversight has been challenging due to the number and extent of medical displays in use at a facility. The advent of color display and mobile displays has added additional challenges to the task of the medical physicist. This informatics display lecture first addresses the current display guidelines (the 1.0 paradigm) and further outlines the initiatives and prospects for color and mobile displays (the 2.0 paradigm). Informatics Management 1.0 to 2.0: Imaging informatics is part of every radiology practice today. Imaging informatics covers everything from the ordering of a study, through the data acquisition and processing, display and archiving, reporting of findings and the billing for the services performed. The standardization of the processes used to manage the information and methodologies to integrate these standards is being developed and advanced continuously. These developments are done in an open forum and imaging organizations and professionals all have a part in the process. In the Informatics Management presentation, the flow of information and the integration of the standards used in the processes will be reviewed. The role of radiologists and physicists in the process will be discussed. Current methods (the 1.0 paradigm) and evolving methods (the 2.0 paradigm) for validation of informatics systems function will also be discussed. Learning Objectives: Identify requirements for improving quality assurance and compliance tools for advanced and hybrid MRI systems.
Identify the need for new quality assurance metrics and testing procedures for advanced systems. Identify new hardware systems and new procedures needed to evaluate MRI systems. Understand the components of current medical physics expectations for medical displays. Understand the role and prospect of medical physics for color and mobile display devices. Understand the different areas of imaging informatics and the methodology for developing informatics standards. Understand the current status of informatics standards, the role of physicists and radiologists in the process, and the current technology for validating the function of these systems.

  19. Unified modeling language and design of a case-based retrieval system in medical imaging.

    PubMed

    LeBozec, C; Jaulent, M C; Zapletal, E; Degoulet, P

    1998-01-01

One goal of artificial intelligence research into case-based reasoning (CBR) systems is to develop approaches for designing useful and practical interactive case-based environments. Explaining each step of the design of the case base and of the retrieval process is critical for the application of case-based systems to the real world. We describe herein our approach to the design of IDEM (Images and Diagnosis from Examples in Medicine), a medical image case-based retrieval system for pathologists. Our approach is based on the expressiveness of an object-oriented modeling language standard: the Unified Modeling Language (UML). We created a set of diagrams in UML notation illustrating the steps of the CBR methodology we used. The key aspects of this approach were selecting the relevant objects of the system according to user requirements and enabling visualization of cases and of the components of the case retrieval process. Further evaluation of the expressiveness of the design document is required, but UML seems to be a promising formalism for improving communication between developers and users.

  20. Slice-thickness evaluation in CT and MRI: an alternative computerised procedure.

    PubMed

    Acri, G; Tripepi, M G; Causa, F; Testagrossa, B; Novario, R; Vermiglio, G

    2012-04-01

The efficient use of computed tomography (CT) and magnetic resonance imaging (MRI) equipment necessitates establishing adequate quality-control (QC) procedures. In particular, verifying the accuracy of slice thickness (ST) requires scan exploration of phantoms containing test objects (plane, cone or spiral). To simplify such procedures, a novel phantom and a computerised LabView-based procedure have been devised, enabling determination of the full width at half maximum (FWHM) in real time. The phantom consists of a polymethyl methacrylate (PMMA) box diagonally crossed by a PMMA septum dividing the box into two sections. The phantom images were acquired and processed using the LabView-based procedure. The LabView (LV) results were compared with those obtained by processing the same phantom images with commercial software, and the Fisher exact test (F test) was conducted on the resulting data sets to validate the proposed methodology. In all cases there was no statistically significant variation between the two procedures; the LV procedure can therefore be proposed as a valuable alternative to other commonly used procedures and be reliably used on any CT or MRI scanner.
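
    The FWHM computation at the heart of the procedure is straightforward; a sketch with linear interpolation at the half-maximum crossings (our version, not the LabView code; a single-peaked profile is assumed) is:

    ```python
    import numpy as np

    def fwhm_mm(profile, pixel_mm):
        """FWHM of a 1-D slice profile, interpolating the crossings."""
        p = profile - profile.min()
        half = p.max() / 2.0
        above = np.where(p >= half)[0]
        lo, hi = above[0], above[-1]
        left = lo - (p[lo] - half) / (p[lo] - p[lo - 1]) if lo > 0 else lo
        right = (hi + (p[hi] - half) / (p[hi] - p[hi + 1])
                 if hi < len(p) - 1 else hi)
        return (right - left) * pixel_mm
    ```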

  1. Full-field modal analysis during base motion excitation using high-speed 3D digital image correlation

    NASA Astrophysics Data System (ADS)

    Molina-Viedma, Ángel J.; López-Alba, Elías; Felipe-Sesé, Luis; Díaz, Francisco A.

    2017-10-01

In recent years, many efforts have been made to exploit full-field measurement optical techniques for modal identification. Three-dimensional digital image correlation using high-speed cameras has been extensively employed for this purpose. Modal identification algorithms are applied to process the frequency response functions (FRF), which relate the displacement response of the structure to the excitation force. However, one of the most common tests for modal analysis involves the base motion excitation of a structural element instead of force excitation. In this case, the relationship between response and excitation is typically based on displacements, which are known as transmissibility functions. In this study, a methodology for experimental modal analysis using high-speed 3D digital image correlation and base motion excitation tests is proposed. In particular, a cantilever beam was excited from its base with a random signal, using a clamped edge joint. Full-field transmissibility functions were obtained through the beam and converted into FRF for proper identification, considering a single degree-of-freedom theoretical conversion. Subsequently, modal identification was performed using a circle-fit approach. The proposed methodology facilitates the management of the typically large number of data points involved in the DIC measurement during modal identification. Moreover, it was possible to determine the natural frequencies, damping ratios and full-field mode shapes without requiring any additional tests. Finally, the results were experimentally validated by comparing them with those obtained by employing traditional accelerometers, analytical models and finite element method analyses. The comparison was performed by using the quantitative indicator modal assurance criterion. The results showed a high level of correspondence, consolidating the proposed experimental methodology.
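
    The abstract does not spell out its SDOF conversion; one standard identity for base excitation (our assumption, derived from m*z'' + c*z' + k*z = -m*y'' with z = x - y) relates transmissibility T(ω) = X/Y to a receptance-like FRF between relative displacement and base acceleration as H(ω) = (T(ω) − 1)/ω², sketched below.

    ```python
    import numpy as np

    def transmissibility_to_frf(T, freq_hz):
        """Convert complex transmissibility X/Y to a receptance-like FRF
        between relative displacement (x - y) and base acceleration:
        H(w) = (T(w) - 1) / w**2, with w in rad/s (w != 0)."""
        w = 2.0 * np.pi * np.asarray(freq_hz)
        return (np.asarray(T) - 1.0) / w**2
    ```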

  2. Imaging characteristics of photogrammetric camera systems

    USGS Publications Warehouse

    Welch, R.; Halliday, J.

    1973-01-01

In view of the current interest in high-altitude and space photographic systems for photogrammetric mapping, the United States Geological Survey (U.S.G.S.) undertook a comprehensive research project designed to explore the practical aspects of applying the latest image quality evaluation techniques to the analysis of such systems. The project had two direct objectives: (1) to evaluate the imaging characteristics of current U.S.G.S. photogrammetric camera systems; and (2) to develop methodologies for predicting the imaging capabilities of photogrammetric camera systems, comparing conventional systems with new or different types of systems, and analyzing the image quality of photographs. Image quality was judged in terms of a number of evaluation factors including response functions, resolving power, and the detectability and measurability of small detail. The limiting capabilities of the U.S.G.S. 6-inch and 12-inch focal length camera systems were established by analyzing laboratory and aerial photographs in terms of these evaluation factors. In the process, the contributing effects of relevant parameters such as lens aberrations, lens aperture, shutter function, image motion, film type, and target contrast were examined, yielding procedures for analyzing image quality and for predicting and comparing performance capabilities. © 1973.

  3. The influence of image valence on visual attention and perception of risk in drivers.

    PubMed

    Jones, M P; Chapman, P; Bailey, K

    2014-12-01

Currently there is little research into the relationship between emotion and driving in the context of advertising and distraction. Research that has looked into this also has methodological limitations that could be affecting the results rather than emotional processing itself (Trick et al., 2012). The current study investigated the relationship between image valence and risk perception, eye movements and physiological reactions. Participants watched hazard perception clips onto which emotional images from the International Affective Picture System had been overlaid. They rated how hazardous or safe they felt, whilst eye movements, galvanic skin response and heart rate were recorded. Results suggested that participants were more aware of potential hazards when a neutral image had been shown, in comparison to positive and negative valenced images; that is, participants showed higher subjective ratings of risk, larger physiological responses and marginally longer fixation durations when viewing a hazard after a neutral image, but this effect was attenuated after emotional images. It appears that emotional images reduce sensitivity to potential hazards, and we suggest that future studies could apply these findings to higher-fidelity paradigms such as driving simulators. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Modeling Of Object- And Scene-Prototypes With Hierarchically Structured Classes

    NASA Astrophysics Data System (ADS)

    Ren, Z.; Jensch, P.; Ameling, W.

    1989-03-01

The success of knowledge-based image analysis methodology and implementation tools depends largely on an appropriately and efficiently built model wherein the domain-specific context information and the inherent structure of the observed image scene have been encoded. For identifying an object in an application environment, a computer vision system needs to know, first, the description of the object to be found in an image or in an image sequence and, second, the corresponding relationships between object descriptions within the image sequence. This paper presents models of image objects and scenes by means of hierarchically structured classes. Using the topovisual formalism of graph and higraph, we are currently studying principally the relational aspect and data abstraction of the modeling, in order to visualize the structural nature resident in image objects and scenes and to formalize their descriptions. The goal is to expose the structure of the image scene and the correspondence of image objects in the low-level image interpretation process. The object-based system design approach has been applied to build the model base. We utilize the object-oriented programming language C++ for designing, testing and implementing the abstracted entity classes and the operation structures which have been modeled topovisually. The reference images used for modeling prototypes of objects and scenes are from industrial environments as well as medical applications.

  5. Object-Based Image Analysis Beyond Remote Sensing - the Human Perspective

    NASA Astrophysics Data System (ADS)

    Blaschke, T.; Lang, S.; Tiede, D.; Papadakis, M.; Györi, A.

    2016-06-01

    We introduce a prototypical methodological framework for a place-based GIS-RS system for the spatial delineation of place while incorporating spatial analysis and mapping techniques using methods from different fields such as environmental psychology, geography, and computer science. The methodological lynchpin for this to happen - when aiming to delineate place in terms of objects - is object-based image analysis (OBIA).

  6. Spatio-temporal alignment of pedobarographic image sequences.

    PubMed

    Oliveira, Francisco P M; Sousa, Andreia; Santos, Rubim; Tavares, João Manuel R S

    2011-07-01

    This article presents a methodology to align plantar pressure image sequences simultaneously in time and space. The spatial position and orientation of a foot in a sequence are changed to match the foot represented in a second sequence. Simultaneously with the spatial alignment, the temporal scale of the first sequence is transformed with the aim of synchronizing the two input footsteps. Consequently, the spatial correspondence of the foot regions along the sequences as well as the temporal synchronizing is automatically attained, making the study easier and more straightforward. In terms of spatial alignment, the methodology can use one of four possible geometric transformation models: rigid, similarity, affine, or projective. In the temporal alignment, a polynomial transformation up to the 4th degree can be adopted in order to model linear and curved time behaviors. Suitable geometric and temporal transformations are found by minimizing the mean squared error (MSE) between the input sequences. The methodology was tested on a set of real image sequences acquired from a common pedobarographic device. When used in experimental cases generated by applying geometric and temporal control transformations, the methodology revealed high accuracy. In addition, the intra-subject alignment tests from real plantar pressure image sequences showed that the curved temporal models produced better MSE results (P < 0.001) than the linear temporal model. This article represents an important step forward in the alignment of pedobarographic image data, since previous methods can only be applied on static images.
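
    Schematically, the temporal part of the optimization evaluates the MSE after a polynomial time warp; a nearest-frame sketch follows (our simplification of the article's joint spatio-temporal search; function and parameter names are illustrative).

    ```python
    import numpy as np

    def warped_mse(seq_a, seq_b, coeffs):
        """MSE between two image sequences after warping A's normalized
        time axis with t' = c0 + c1*t + ... + cK*t**K (coeffs as c0..cK);
        frames of B are sampled by nearest neighbour."""
        seq_a, seq_b = np.asarray(seq_a), np.asarray(seq_b)
        t = np.linspace(0.0, 1.0, len(seq_a))
        t_w = np.polyval(coeffs[::-1], t)
        idx = np.clip(np.round(t_w * (len(seq_b) - 1)).astype(int),
                      0, len(seq_b) - 1)
        return float(np.mean((seq_a - seq_b[idx]) ** 2))
    ```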

  7. Detection of explosives on the surface of banknotes by Raman hyperspectral imaging and independent component analysis.

    PubMed

    Almeida, Mariana R; Correa, Deleon N; Zacca, Jorge J; Logrado, Lucio Paulo Lima; Poppi, Ronei J

    2015-02-20

    The aim of this study was to develop a methodology using Raman hyperspectral imaging and chemometric methods for the identification of pre- and post-blast explosive residues on banknote surfaces. The explosives studied were of military, commercial and propellant uses. After acquisition of the hyperspectral images, independent component analysis (ICA) was applied to extract the pure spectra and the distribution of the corresponding image constituents. The performance of the methodology was evaluated by the explained variance and the lack of fit of the models, by comparing the ICA-recovered spectra with the reference spectra using correlation coefficients, and by the presence of rotational ambiguity in the ICA solutions. The methodology was applied to forensic samples to solve an automated teller machine explosion case. Independent component analysis proved to be a suitable curve-resolution method, achieving performance equivalent to multivariate curve resolution with alternating least squares (MCR-ALS). At low concentrations, MCR-ALS showed some limitations, as it did not provide the correct solution. The detection limit of the methodology presented in this study was 50 μg cm⁻².
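
    To make the unmixing step concrete, here is a hedged sketch of applying ICA to a hyperspectral cube unfolded into a pixels-by-bands matrix, recovering candidate pure spectra and their spatial abundance maps. The cube shape, component count, and scikit-learn's FastICA are illustrative choices, not the study's exact pipeline.

    ```python
    # Unmix a (synthetic) hyperspectral cube with FastICA: each recovered
    # component gives a spectrum and a per-pixel abundance map.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    h, w, bands = 32, 32, 400                  # hypothetical cube dimensions
    cube = rng.random((h, w, bands))           # stand-in for measured spectra

    X = cube.reshape(-1, bands)                # pixels x wavenumbers
    ica = FastICA(n_components=3, random_state=0, max_iter=500)
    scores = ica.fit_transform(X)              # per-pixel mixing scores
    spectra = ica.components_                  # recovered source spectra

    abundance_maps = scores.reshape(h, w, -1)  # one map per component
    print(spectra.shape, abundance_maps.shape)
    ```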

  8. Conceptual transitions in methods of skull-photo superimposition that impact the reliability of identification: a review.

    PubMed

    Jayaprakash, Paul T

    2015-01-01

    Establishing identification during skull-photo superimposition relies on correlating the salient morphological features of an unidentified skull with those of a face image of a suspected dead individual using image overlay processes. Technical progress in the overlay process has included the incorporation of video cameras, image-mixing devices and software that enables real-time vision mixing. Conceptual transitions have occurred in the superimposition methods that involve 'life-size' images, that achieve orientation of the skull to the posture of the face in the photograph, and that assess the extent of match. A recent report on the reliability of identification using the superimposition method adopted the currently prevalent methods and suggested an increased rate of failures when skulls were compared with related and unrelated face images. The reported reduction in reliability prompted a review of the transitions in the concepts involved in skull-photo superimposition. The currently prevalent practices of visualizing the superimposed images at less than 'life-size', relying on cranial and facial landmarks in the frontal plane when orienting the skull for matching, and evaluating the match on a morphological basis using mix-mode alone are the major departures in methodology that may have reduced identification reliability. The need to reassess the reliability of the method incorporating the concepts that practitioners have considered appropriate is stressed.

  9. Phage display and molecular imaging: expanding fields of vision in living subjects.

    PubMed

    Cochran, R; Cochran, Frank

    2010-01-01

    In vivo molecular imaging enables non-invasive visualization of biological processes within living subjects, and holds great promise for diagnosis and monitoring of disease. The ability to create new agents that bind to molecular targets and deliver imaging probes to desired locations in the body is critically important to further advance this field. To address this need, phage display, an established technology for the discovery and development of novel binding agents, is increasingly becoming a key component of many molecular imaging research programs. This review discusses the expanding role played by phage display in the field of molecular imaging with a focus on in vivo applications. Furthermore, new methodological advances in phage display that can be directly applied to the discovery and development of molecular imaging agents are described. Various phage library selection strategies are summarized and compared, including selections against purified target, intact cells, and ex vivo tissue, plus in vivo homing strategies. An outline of the process for converting polypeptides obtained from phage display library selections into successful in vivo imaging agents is provided, including strategies to optimize in vivo performance. Additionally, the use of phage particles as imaging agents is also described. In the latter part of the review, a survey of phage-derived in vivo imaging agents is presented, and important recent examples are highlighted. Other imaging applications are also discussed, such as the development of peptide tags for site-specific protein labeling and the use of phage as delivery agents for reporter genes. The review concludes with a discussion of how phage display technology will continue to impact both basic science and clinical applications in the field of molecular imaging.

  10. A review on color normalization and color deconvolution methods in histopathology.

    PubMed

    Onder, Devrim; Zengin, Selen; Sarioglu, Sulen

    2014-01-01

    Histopathologists benefit from a wide range of colored dyes that provide useful information about lesions and tissue composition. Despite its advantages, the staining process introduces complex variations in stain concentration and correlation, tissue fixation type, and fixation time. Together with improvements in computing power and the development of novel image analysis methods, these imperfections have led to the emergence of several color normalization algorithms. This article is a review of the currently available digital color normalization methods for bright-field histopathology. We describe the proposed color normalization methodologies in detail, together with the lesion and tissue types used in the corresponding experiments. We also present the quantitative validation approaches for each proposed methodology, where available.
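
    As a concrete example of the color-deconvolution family of methods reviewed here, the sketch below uses scikit-image's rgb2hed, a standard routine that separates an H&E-DAB-stained RGB image into per-stain channels; the input file name is hypothetical.

    ```python
    # Color deconvolution of a bright-field histopathology image into
    # hematoxylin, eosin, and DAB channels via scikit-image.
    import numpy as np
    from skimage import io
    from skimage.color import rgb2hed

    rgb = io.imread("he_slide.png")[..., :3] / 255.0  # drop alpha, scale to [0, 1]
    hed = rgb2hed(rgb)                                # stain-separated channels

    hematoxylin = hed[..., 0]
    eosin = hed[..., 1]
    print("per-stain channel ranges:",
          hematoxylin.min(), hematoxylin.max(), eosin.min(), eosin.max())
    ```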

  11. 3D Volumetric Analysis of Fluid Inclusions Using Confocal Microscopy

    NASA Astrophysics Data System (ADS)

    Proussevitch, A.; Mulukutla, G.; Sahagian, D.; Bodnar, B.

    2009-05-01

    Fluid inclusions preserve valuable information regarding hydrothermal, metamorphic, and magmatic processes. The molar quantities of liquid and gaseous components in the inclusions can be estimated from volumetric measurements at room temperature combined with knowledge of the PVTX properties of the fluid and homogenization temperatures. Thus, accurate measurements of inclusion volumes and their two phase components are critical. One of the greatest advantages of Laser Scanning Confocal Microscopy (LSCM) applied to fluid inclusion analysis is that it is affordable for large numbers of samples, given the appropriate software analysis tools and methodology. Our present work is directed toward developing those tools and methods. For the last decade LSCM has been considered a potential method for inclusion volume measurement. Nevertheless, adequate and accurate measurement by LSCM has not yet been achieved for fluid inclusions containing non-fluorescing fluids, owing to many technical challenges in image analysis, even though the cost of collecting raw LSCM imagery has decreased dramatically in recent years. These problems mostly relate to the image analysis methodology and software tools needed for pre-processing and image segmentation, which enable solid, liquid and gaseous components to be delineated. Other challenges involve image quality and contrast, which are controlled by the fluorescence of the material (most aqueous fluid inclusions do not fluoresce at the appropriate laser wavelengths), its optical properties, and the application of transmitted and/or reflected confocal illumination. In this work we have identified the key problems of image analysis and propose some potential solutions. For instance, we found that higher-contrast pseudo-confocal transmitted-light images can be overlaid with poor-contrast true-confocal reflected-light images within the same stack of z-ordered slices. This approach allows one to narrow the interface boundaries between the phases before applying segmentation routines. In turn, we found that an active contour segmentation technique works best for these types of geomaterials. The method was developed by adapting a medical software package implemented with the Insight Toolkit (ITK) set of algorithms developed for segmentation of anatomical structures. We have developed a manual analysis procedure with a potential 2 micron resolution in 3D volume rendering that is specifically designed for application to fluid inclusion volume measurements.
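
    The authors adapted an ITK-based medical package for the active-contour step; as a rough stand-in, the sketch below runs scikit-image's morphological Chan-Vese active contour on one synthetic confocal slice. The data and parameters are illustrative, not the authors' implementation.

    ```python
    # Delineate a bubble-like region in a noisy synthetic slice with a
    # morphological Chan-Vese active contour (level-set evolution).
    import numpy as np
    from skimage.segmentation import morphological_chan_vese

    yy, xx = np.mgrid[0:128, 0:128]
    slice_img = ((yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2).astype(float)
    slice_img += 0.2 * np.random.default_rng(1).standard_normal(slice_img.shape)

    # Evolve the level set for a fixed number of iterations from a
    # checkerboard initialization; smoothing regularizes the contour.
    mask = morphological_chan_vese(slice_img, 100,
                                   init_level_set="checkerboard", smoothing=2)
    print("segmented pixels in slice:", int(mask.sum()))
    ```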

  12. Methodological challenges of optical tweezers-based X-ray fluorescence imaging of biological model organisms at synchrotron facilities.

    PubMed

    Vergucht, Eva; Brans, Toon; Beunis, Filip; Garrevoet, Jan; Bauters, Stephen; De Rijcke, Maarten; Deruytter, David; Janssen, Colin; Riekel, Christian; Burghammer, Manfred; Vincze, Laszlo

    2015-07-01

    Recently, a radically new synchrotron radiation-based elemental imaging approach for the analysis of biological model organisms and single cells in their natural in vivo state was introduced. The methodology combines optical tweezers (OT) technology for non-contact laser-based sample manipulation with synchrotron radiation confocal X-ray fluorescence (XRF) microimaging for the first time at ESRF-ID13. The optical manipulation possibilities and limitations of biological model organisms, the OT setup developments for XRF imaging and the confocal XRF-related challenges are reported. In general, the applicability of the OT-based setup is extended with the aim of introducing the OT XRF methodology in all research fields where highly sensitive in vivo multi-elemental analysis is of relevance at the (sub)micrometre spatial resolution level.

  13. Rapid assessment of forest canopy and light regime using smartphone hemispherical photography.

    PubMed

    Bianchi, Simone; Cahalan, Christine; Hale, Sophie; Gibbons, James Michael

    2017-12-01

    Hemispherical photography (HP), implemented with cameras equipped with "fisheye" lenses, is a widely used method for describing forest canopies and light regimes. A promising technological advance is the availability of low-cost fisheye lenses for smartphone cameras. However, smartphone camera sensors cannot record a full hemisphere. We investigate whether smartphone HP is a cheaper and faster but still adequate operational alternative to traditional cameras for describing forest canopies and light regimes. We collected hemispherical pictures with both smartphone and traditional cameras in 223 forest sample points, across different overstory species and canopy densities. The smartphone image acquisition followed a faster and simpler protocol than that for the traditional camera. We automatically thresholded all images. We processed the traditional camera images for Canopy Openness (CO) and Site Factor estimation. For smartphone images, we took two pictures with different orientations per point and used two processing protocols: (i) we estimated and averaged total canopy gap from the two single pictures, and (ii) merging the two pictures together, we formed images closer to full hemispheres and estimated CO and Site Factors from them. We compared the same parameters obtained from different cameras and estimated generalized linear mixed models (GLMMs) between them. Total canopy gap estimated with the first processing protocol for smartphone pictures was on average significantly higher than CO estimated from traditional camera images, although with a consistent bias. Canopy Openness and Site Factors estimated from the merged smartphone pictures of the second processing protocol were on average significantly higher than those from traditional camera images, although with relatively small absolute differences and scatter. Smartphone HP is an acceptable alternative to HP using traditional cameras, providing similar results with a faster and cheaper methodology. Smartphone outputs can be used directly as they are for ecological studies, or converted with specific models for better comparison with traditional cameras.
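
    A minimal sketch of the automatic-thresholding step described above: a gray-scale hemispherical photo is binarized with Otsu's method and the total canopy gap is taken as the fraction of sky pixels inside the circular image footprint. The file name and footprint geometry are assumptions for illustration.

    ```python
    # Estimate total canopy gap from a hemispherical photo by Otsu
    # thresholding within a circular footprint centered on the image.
    import numpy as np
    from skimage import io
    from skimage.filters import threshold_otsu

    img = io.imread("hemiphoto.png", as_gray=True)
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    footprint = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= (min(h, w) / 2) ** 2

    sky = img > threshold_otsu(img[footprint])   # bright pixels = sky
    gap_fraction = sky[footprint].mean()
    print(f"total canopy gap: {gap_fraction:.1%}")
    ```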

  14. Pansharpening Techniques to Detect Mass Monument Damaging in Iraq

    NASA Astrophysics Data System (ADS)

    Baiocchi, V.; Bianchi, A.; Maddaluno, C.; Vidale, M.

    2017-05-01

    The recent mass destruction of monuments in Iraq cannot be monitored with terrestrial survey methodologies, for obvious safety reasons. For the same reasons, the use of classical aerial photogrammetry is not advisable, so multispectral Very High Resolution (VHR) satellite imagery is the natural choice. Nowadays, VHR satellite image resolutions approach those of airborne photogrammetric images, and the images are usually acquired in multispectral mode. The fusion of the panchromatic band with the multispectral bands is called pan-sharpening, and it can be carried out using different algorithms and strategies. The correct pansharpening methodology for a specific image must be chosen considering the multispectral characteristics of the satellite used and the particular application. In this paper, a first definition of guidelines for the use of VHR multispectral imagery to detect monument destruction in unsafe areas is reported. The proposed methodology, agreed with UNESCO and soon to be used in Libya for the coastal area, has produced a first report delivered to the Iraqi authorities. Some of the most evident examples are reported to show the capabilities of identifying damage using VHR images.
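
    The paper does not prescribe a single algorithm, but a Brovey-style band-ratio transform is one common pansharpening strategy and illustrates the idea; the arrays below are synthetic stand-ins for a co-registered panchromatic band and upsampled multispectral bands.

    ```python
    # Brovey-style pansharpening: rescale each multispectral band by the
    # ratio of the panchromatic band to a crude intensity component.
    import numpy as np

    rng = np.random.default_rng(0)
    pan = rng.random((256, 256))              # high-resolution panchromatic band
    ms = rng.random((3, 256, 256)) + 0.1      # multispectral bands on the pan grid

    intensity = ms.mean(axis=0)               # simple intensity estimate
    sharpened = ms * (pan / (intensity + 1e-6))

    print(sharpened.shape)
    ```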

  15. WE-AB-BRA-07: Quantitative Evaluation of 2D-2D and 2D-3D Image Guided Radiation Therapy for Clinical Trial Credentialing, NRG Oncology/RTOG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giaddui, T; Yu, J; Xiao, Y

    Purpose: 2D-2D kV image guided radiation therapy (IGRT) credentialing evaluation for clinical trial qualification was historically qualitative, through submitting screen captures of the fusion process. However, as quantitative DICOM 2D-2D and 2D-3D image registration tools are implemented in clinical practice for better precision, especially in centers that treat patients with protons, better IGRT credentialing techniques are needed. The aim of this work is to establish methodologies for quantitatively reviewing IGRT submissions based on DICOM 2D-2D and 2D-3D image registration, and to test the methodologies in reviewing 2D-2D and 2D-3D IGRT submissions for RTOG/NRG Oncology clinical trial qualifications. Methods: DICOM 2D-2D and 2D-3D automated and manual image registration have been tested using the Harmony tool in MIM software. 2D kV orthogonal portal images are fused with the reference digitally reconstructed radiographs (DRR) in the 2D-2D registration, while the 2D portal images are fused with the DICOM planning CT image in the 2D-3D registration. The Harmony tool allows alignment of the two images used in the registration process and also calculates the required shifts. Shifts calculated using MIM are compared with those submitted by institutions for IGRT credentialing. Reported shifts are considered acceptable if differences are less than 3 mm. Results: Several tests have been performed on the 2D-2D and 2D-3D registration. The results indicated good agreement between submitted and calculated shifts. A workflow for reviewing these IGRT submissions has been developed and will eventually be used to review IGRT submissions. Conclusion: The IROC Philadelphia RTQA center has developed and tested a new workflow for reviewing DICOM 2D-2D and 2D-3D IGRT credentialing submissions made by different cancer clinical centers, especially proton centers. The NRG Center for Innovation in Radiation Oncology (CIRO) and the IROC RTQA center continue their collaborative efforts to enhance quality assurance services and to adapt consistently to new advances in radiation therapy. This project was supported by NCI grants U10CA180868, U10CA180822, U24CA180803, U24CA12014 and PA CURE Grant.

  16. The Handbook of Medical Image Perception and Techniques

    NASA Astrophysics Data System (ADS)

    Samei, Ehsan; Krupinski, Elizabeth

    2014-07-01

    1. Medical image perception Ehsan Samei and Elizabeth Krupinski; Part I. Historical Reflections and Theoretical Foundations: 2. A short history of image perception in medical radiology Harold Kundel and Calvin Nodine; 3. Spatial vision research without noise Arthur Burgess; 4. Signal detection theory, a brief history Arthur Burgess; 5. Signal detection in radiology Arthur Burgess; 6. Lessons from dinners with the giants of modern image science Robert Wagner; Part II. Science of Image Perception: 7. Perceptual factors in reading medical images Elizabeth Krupinski; 8. Cognitive factors in reading medical images David Manning; 9. Satisfaction of search in traditional radiographic imaging Kevin Berbaum, Edmund Franken, Robert Caldwell and Kevin Schartz; 10. The role of expertise in radiologic image interpretation Calvin Nodine and Claudia Mello-Thoms; 11. A primer of image quality and its perceptual relevance Robert Saunders and Ehsan Samei; 12. Beyond the limitations of human vision Maria Petrou; Part III. Perception Metrology: 13. Logistical issues in designing perception experiments Ehsan Samei and Xiang Li; 14. ROC analysis: basic concepts and practical applications Georgia Tourassi; 15. Multi-reader ROC Steve Hillis; 16. Recent developments in FROC methodology Dev Chakraborty; 17. Observer models as a surrogate to perception experiments Craig Abbey and Miguel Eckstein; 18. Implementation of observer models Matthew Kupinski; Part IV. Decision Support and Computer Aided Detection: 19. CAD: an image perception perspective Maryellen Giger and Weijie Chen; 20. Common designs of CAD studies Yulei Jiang; 21. Perceptual effect of CAD in reading chest images Matthew Freedman and Teresa Osicka; 22. Perceptual issues in mammography and CAD Michael Ulissey; 23. How perceptual factors affect the use and accuracy of CAD for interpretation of CT images Ronald Summers; 24. CAD: risks and benefits for radiologists' decisions Eugenio Alberdi, Andrey Povyakalo, Lorenzo Strigini and Peter Ayton; Part V. Optimization and Practical Issues: 25. Optimization of 2D and 3D radiographic systems Jeff Siewerdson; 26. Applications of AFC methodology in optimization of CT imaging systems Kent Ogden and Walter Huda; 27. Perceptual issues in reading mammograms Margarita Zuley; 28. Perceptual optimization of display processing techniques Richard Van Metter; 29. Optimization of display systems Elizabeth Krupinski and Hans Roehrig; 30. Ergonomic radiologist workplaces in the PACS environment Carl Zylack; Part VI. Epilogue: 31. Future prospects of medical image perception Ehsan Samei and Elizabeth Krupinski; Index.

  18. Multi-Modal Curriculum Learning for Semi-Supervised Image Classification.

    PubMed

    Gong, Chen; Tao, Dacheng; Maybank, Stephen J; Liu, Wei; Kang, Guoliang; Yang, Jie

    2016-07-01

    Semi-supervised image classification aims to classify a large quantity of unlabeled images by typically harnessing scarce labeled images. Existing semi-supervised methods often suffer from inadequate classification accuracy when encountering difficult yet critical images, such as outliers, because they treat all unlabeled images equally and conduct classifications in an imperfectly ordered sequence. In this paper, we employ the curriculum learning methodology by investigating the difficulty of classifying every unlabeled image. The reliability and the discriminability of these unlabeled images are particularly investigated for evaluating their difficulty. As a result, an optimized image sequence is generated during the iterative propagations, and the unlabeled images are logically classified from simple to difficult. Furthermore, since images are usually characterized by multiple visual feature descriptors, we associate each kind of features with a teacher, and design a multi-modal curriculum learning (MMCL) strategy to integrate the information from different feature modalities. In each propagation, each teacher analyzes the difficulties of the currently unlabeled images from its own modality viewpoint. A consensus is subsequently reached among all the teachers, determining the currently simplest images (i.e., a curriculum), which are to be reliably classified by the multi-modal learner. This well-organized propagation process leveraging multiple teachers and one learner enables our MMCL to outperform five state-of-the-art methods on eight popular image data sets.

  19. Multi-resolution analysis for region of interest extraction in thermographic nondestructive evaluation

    NASA Astrophysics Data System (ADS)

    Ortiz-Jaramillo, B.; Fandiño Toro, H. A.; Benitez-Restrepo, H. D.; Orjuela-Vargas, S. A.; Castellanos-Domínguez, G.; Philips, W.

    2012-03-01

    Infrared Non-Destructive Testing (INDT) is known as an effective and rapid method for nondestructive inspection. It can detect a broad range of near-surface structural flaws in metallic and composite components. Those flaws are modeled as smooth contours centered at peaks of stored thermal energy, termed Regions of Interest (ROIs), and dedicated methodologies must detect their presence. In this paper, we present a methodology for ROI extraction in INDT tasks based on multi-resolution analysis, which is robust to low ROI contrast and non-uniform heating. Non-uniform heating affects low spatial frequencies and hinders the detection of relevant points in the image. The methodology combines local correlation, Gaussian scale analysis and local edge detection. Local correlation between the image and a Gaussian window provides interest points related to ROIs; a Gaussian window is used because thermal behavior is well modeled by Gaussian smooth contours. The Gaussian scale is used to analyze details in the image through multi-resolution analysis, mitigating low contrast and non-uniform heating and avoiding manual selection of the Gaussian window size. Finally, local edge detection provides a good estimate of the ROI boundaries. The resulting methodology performs as well as or better than the dedicated algorithms proposed in the state of the art.
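
    As a hedged sketch of the multi-resolution idea, the code below compresses the correlation/scale/edge pipeline into a difference-of-Gaussians response computed at several scales, keeping local maxima as candidate ROIs; it is a simplified proxy under stated assumptions, not the paper's algorithm.

    ```python
    # Multi-scale blob response on a synthetic thermogram: difference of
    # Gaussians at several scales, then local-maximum suppression.
    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    rng = np.random.default_rng(2)
    thermogram = rng.random((128, 128))
    thermogram[40:48, 60:68] += 1.5          # synthetic hot spot (flaw)

    responses = []
    for sigma in (2, 4, 8):                  # coarse-to-fine Gaussian scales
        dog = gaussian_filter(thermogram, sigma) - gaussian_filter(thermogram, 2 * sigma)
        responses.append(dog)
    response = np.max(responses, axis=0)     # best response across scales

    peaks = (response == maximum_filter(response, size=15)) & (response > 0.2)
    print("candidate ROI locations:", np.argwhere(peaks)[:5])
    ```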

  20. Global-Context Based Salient Region Detection in Nature Images

    NASA Astrophysics Data System (ADS)

    Bao, Hong; Xu, De; Tang, Yingjun

    Visual saliency detection provides an alternative methodology to image description in many applications such as adaptive content delivery and image retrieval. One of the main aims of visual attention in computer vision is to detect and segment the salient regions in an image. In this paper, we employ matrix decomposition to detect salient objects in natural images. To efficiently eliminate high-contrast noise regions in the background, we integrate global context information into the saliency detection. The most salient region can therefore be easily selected as the one that is globally most isolated. The proposed approach intrinsically provides an alternative methodology for modeling attention with low implementation complexity. Experiments show that our approach achieves much better performance than existing state-of-the-art methods.

  1. Integrated Survey Procedures for the Virtual Reading and Fruition of Historical Buildings

    NASA Astrophysics Data System (ADS)

    Scandurra, S.; Pulcrano, M.; Cirillo, V.; Campi, M.; di Luggo, A.; Zerlenga, O.

    2018-05-01

    This paper presents the developments of research related to the integration of digital survey methodologies with reference to image-based and range-based technologies. Starting from the processing of point clouds, the data were used both for the geometric interpretation of the space and for the production of three-dimensional models that describe its constitutive and morphological relationships. The subject of the study was the church of San Carlo all'Arena in Naples (Italy), for which an HBIM model was produced that is semantically consistent with the real building. Starting from the data acquired, a visualization system was created for the virtual exploration of the building.

  2. Application of Remote Sensing for the Analysis of Environmental Changes in Albania

    NASA Astrophysics Data System (ADS)

    Frasheri, N.; Beqiraj, G.; Bushati, S.; Frasheri, A.

    2016-08-01

    This paper presents a review of remote sensing studies carried out to investigate environmental changes in Albania. Using often simple methodologies and general-purpose image processing software, and exploiting free Internet archives of satellite imagery, significant results were obtained for hot spots of environmental change. Such areas include sea coasts experiencing marine transgression, temporal variations of vegetation and aerosols, lakes, landslides and regional tectonics. The Internet archives of the European Space Agency (ESA) and the US Geological Survey (USGS) were used.

  3. New technology and regional studies in human ecology: A Papua New Guinea example

    NASA Technical Reports Server (NTRS)

    Morren, George E. B., Jr.

    1991-01-01

    Two key issues in using technologies such as digital image processing and geographic information systems are a conceptually and methodologically valid research design and the exploitation of varied sources of data. With this realized, the new technologies offer anthropologists the opportunity to test hypotheses about spatial and temporal variations in the features of interest within a regionally coherent mosaic of social groups and landscapes. Current research on the Mountain OK of Papua New Guinea is described with reference to these issues.

  4. Validation of GOES-9 Satellite-Derived Cloud Properties over the Tropical Western Pacific Region

    NASA Technical Reports Server (NTRS)

    Khaiyer, Mandana M.; Nordeen, Michele L.; Doeling, David R.; Chakrapani, Venkatasan; Minnis, Patrick; Smith, William L., Jr.

    2004-01-01

    Real-time processing of hourly GOES-9 images in the ARM TWP region began operationally in October 2003 and is continuing. The ARM sites provide an excellent source for validating this new satellite-derived cloud and radiation property dataset. Derived cloud amounts, heights, and broadband shortwave fluxes are compared with similar quantities derived from ground-based instrumentation. The results will provide guidance for estimating uncertainties in the GOES-9 products and for developing improvements in the retrieval methodologies and input.

  5. Space applications of artificial intelligence; 1990 Goddard Conference, Greenbelt, MD, May 1, 2, 1990, Selected Papers

    NASA Technical Reports Server (NTRS)

    Rash, James L. (Editor)

    1990-01-01

    The papers presented at the 1990 Goddard Conference on Space Applications of Artificial Intelligence are given. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The proceedings fall into the following areas: Planning and Scheduling, Fault Monitoring/Diagnosis, Image Processing and Machine Vision, Robotics/Intelligent Control, Development Methodologies, Information Management, and Knowledge Acquisition.

  6. Clinical Physiologic Research Instrumentation: An Approach Using Modular Elements and Distributed Processing

    PubMed Central

    Hagen, R. W.; Ambos, H. D.; Browder, M. W.; Roloff, W. R.; Thomas, L. J.

    1979-01-01

    The Clinical Physiologic Research System (CPRS) developed from our experience in applying computers to medical instrumentation problems. This experience revealed a set of applications with a commonality in data acquisition, analysis, input/output, and control needs that could be met by a portable system. The CPRS demonstrates a practical methodology for integrating commercial instruments with distributed modular elements of local design in order to make facile responses to changing instrumentation needs in clinical environments.

  7. Evaluation of dose from kV cone-beam computed tomography during radiotherapy: a comparison of methodologies

    NASA Astrophysics Data System (ADS)

    Buckley, J.; Wilkinson, D.; Malaroda, A.; Metcalfe, P.

    2017-01-01

    Three alternative methodologies to the Computed Tomography Dose Index for the evaluation of Cone-Beam Computed Tomography dose are compared: the Cone-Beam Dose Index (CBDI), the methodology recommended in IAEA Human Health Report No. 5, and the methodology recommended by AAPM Task Group 111 (TG-111). The protocols were evaluated for Pelvis and Thorax scan modes on Varian® On-Board Imager (OBI) and Truebeam kV XI imaging systems. The weighted planar average dose was highest for the AAPM methodology across all scans, with the CBDI being the second highest overall. For the XI system, decreases of 17.96% and 1.14% from the TG-111 protocol to the IAEA and CBDI protocols, respectively, were observed in Pelvis mode, and decreases of 18.15% and 13.10% in Thorax mode. For the OBI system, the corresponding variations were 16.46% and 7.14% in Pelvis mode, and 15.93% relative to the CBDI protocol in Thorax mode.

  8. Prefrontal Cortex, Emotion, and Approach/Withdrawal Motivation

    PubMed Central

    Spielberg, Jeffrey M.; Stewart, Jennifer L.; Levin, Rebecca L.; Miller, Gregory A.; Heller, Wendy

    2010-01-01

    This article provides a selective review of the literature and current theories regarding the role of prefrontal cortex, along with some other critical brain regions, in emotion and motivation. Seemingly contradictory findings have often appeared in this literature. Research attempting to resolve these contradictions has been the basis of new areas of growth and has led to more sophisticated understandings of emotional and motivational processes as well as neural networks associated with these processes. Progress has, in part, depended on methodological advances that allow for increased resolution in brain imaging. A number of issues are currently in play, among them the role of prefrontal cortex in emotional or motivational processes. This debate fosters research that will likely lead to further refinement of conceptualizations of emotion, motivation, and the neural processes associated with them. PMID:20574551

  10. Joint Segmentation of Anatomical and Functional Images: Applications in Quantification of Lesions from PET, PET-CT, MRI-PET, and MRI-PET-CT Images

    PubMed Central

    Bagci, Ulas; Udupa, Jayaram K.; Mendhiratta, Neil; Foster, Brent; Xu, Ziyue; Yao, Jianhua; Chen, Xinjian; Mollura, Daniel J.

    2013-01-01

    We present a novel method for the joint segmentation of anatomical and functional images. Our proposed methodology unifies the domains of anatomical and functional images, represents them in a product lattice, and performs simultaneous delineation of regions based on random walk image segmentation. Furthermore, we propose a simple yet effective object/background seed localization method to make the proposed segmentation process fully automatic. Our study uses PET, PET-CT, MRI-PET, and fused MRI-PET-CT scans (77 studies in all) from 56 patients who had various lesions in different body regions. We validated the effectiveness of the proposed method on different PET phantoms as well as on clinical images with respect to the ground truth segmentation provided by clinicians. Experimental results indicate that the presented method is superior to the threshold and Bayesian methods commonly used in PET image segmentation, is more accurate and robust than other PET-CT segmentation methods recently published in the literature, and is general in the sense that it simultaneously segments multiple scans in real time with the high accuracy needed in routine clinical use. PMID:23837967
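
    The delineation step builds on random-walk image segmentation; the sketch below shows the underlying primitive with scikit-image's random_walker and hand-placed object/background seeds on a synthetic lesion, standing in for the paper's automatic seed localization.

    ```python
    # Random-walker segmentation of a noisy synthetic "lesion" from two
    # labeled seed pixels (1 = object, 2 = background; 0 = unlabeled).
    import numpy as np
    from skimage.segmentation import random_walker

    rng = np.random.default_rng(3)
    yy, xx = np.mgrid[0:96, 0:96]
    img = ((yy - 48) ** 2 + (xx - 48) ** 2 < 15 ** 2) * 1.0
    img += 0.3 * rng.standard_normal(img.shape)

    labels = np.zeros(img.shape, dtype=np.int32)
    labels[48, 48] = 1       # object seed (hypothetical lesion centre)
    labels[5, 5] = 2         # background seed

    segmentation = random_walker(img, labels, beta=130)
    print("lesion pixels:", int((segmentation == 1).sum()))
    ```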

  11. The impact of the condenser on cytogenetic image quality in digital microscope system.

    PubMed

    Ren, Liqiang; Li, Zheng; Li, Yuhua; Zheng, Bin; Li, Shibo; Chen, Xiaodong; Liu, Hong

    2013-01-01

    Optimizing the operational parameters of a digital microscope system is an important technique for acquiring high quality cytogenetic images and facilitating the karyotyping process, so that the efficiency and accuracy of diagnosis can be improved. This study investigated the impact of the condenser on cytogenetic image quality and system working performance using a prototype digital microscope image scanning system. Both theoretical analysis and experimental validation, through objective evaluation of a resolution test chart and subjective observation of large numbers of specimens, were conducted. The results show that optimal image quality and a large depth of field (DOF) are obtained simultaneously when the numerical aperture of the condenser is set to 60%-70% of that of the corresponding objective. Under this condition, more analyzable chromosomes and more diagnostic information are obtained. As a result, the system shows higher working stability and fewer restrictions on the implementation of algorithms such as autofocusing, especially when the system is designed for high throughput continuous image scanning. Although the above quantitative results were obtained using a specific prototype system under the experimental conditions reported in this paper, the presented evaluation methodologies can provide valuable guidelines for optimizing operational parameters in cytogenetic imaging with high throughput continuous scanning microscopes in clinical practice.

  12. Knowledge Driven Image Mining with Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Oza, Nikunj

    2004-01-01

    This paper presents a new methodology for automatic knowledge-driven image mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. In that high dimensional feature space, linear clustering, prediction, and classification algorithms can be applied and the results can be mapped back down to the original image space. Thus, highly nonlinear structure in the image can be recovered through the use of well-known linear mathematics in the feature space. This process has a number of advantages over traditional methods in that it allows nonlinear interactions to be modeled with only a marginal increase in computational cost. In this paper, we present the theory of Mercer Kernels, describe their use in image mining, discuss a new method to generate Mercer Kernels directly from data, and compare the results with existing algorithms on data from the MODIS (Moderate Resolution Imaging Spectroradiometer) instrument taken over the Arctic region. We also discuss the potential application of these methods to the Intelligent Archive, a NASA initiative for developing a tagged image data warehouse for the Earth Sciences.
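
    One common reading of a mixture density kernel, sketched under stated assumptions below, scores two samples by the agreement of their Gaussian-mixture posterior memberships, K(x, y) = Σ_c P(c|x) P(c|y), which yields a symmetric positive semi-definite (Mercer) kernel; the paper's ensemble construction is omitted.

    ```python
    # Build a data-driven Mercer kernel from Gaussian-mixture posteriors:
    # samples landing in the same mixture components get high similarity.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(4)
    pixels = np.vstack([rng.normal(0, 1, (100, 5)),    # stand-in image features
                        rng.normal(3, 1, (100, 5))])

    gmm = GaussianMixture(n_components=4, random_state=0).fit(pixels)
    post = gmm.predict_proba(pixels)          # P(c | x) for each sample
    K = post @ post.T                         # kernel matrix over all samples

    print("kernel shape:", K.shape, "symmetric:", np.allclose(K, K.T))
    ```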

  14. A novel methodology for litho-to-etch pattern fidelity correction for SADP process

    NASA Astrophysics Data System (ADS)

    Chen, Shr-Jia; Chang, Yu-Cheng; Lin, Arthur; Chang, Yi-Shiang; Lin, Chia-Chi; Lai, Jun-Cheng

    2017-03-01

    For 2x nm node semiconductor devices and beyond, more aggressive resolution enhancement techniques (RETs) such as source-mask co-optimization (SMO), litho-etch-litho-etch (LELE) and self-aligned double patterning (SADP) are utilized for low-k1 lithography processes. In the SADP process, pattern fidelity is extremely critical, since a slight photoresist (PR) top-loss or profile roughness may impact the later core trim process, owing to its sensitivity to the environment. During the subsequent sidewall formation and core removal processes, weaknesses in the core trim profile may worsen and induce serious defects that affect the final electrical performance. To predict PR top-loss, rigorous lithography simulation can provide a reference for modifying mask layouts, but it requires a much longer run time and is not capable of full-field mask data preparation. In this paper, we first introduce an algorithm that utilizes multiple intensity levels from conventional aerial image simulation to assess the physical profile from lithography through the core trim etching steps. Subsequently, a novel correction method is utilized to improve post-etch pattern fidelity without sacrificing the lithographic process window. The results not only matched the PR top-loss in rigorous lithography simulation, but also agreed with post-etch wafer data. Furthermore, this methodology can be incorporated with OPC and post-OPC verification to improve the core trim profile and final pattern fidelity at an early stage.

  15. Classification of riparian forest species and health condition using multi-temporal and hyperspatial imagery from unmanned aerial system.

    PubMed

    Michez, Adrien; Piégay, Hervé; Lisein, Jonathan; Claessens, Hugues; Lejeune, Philippe

    2016-03-01

    Riparian forests are critically endangered by many anthropogenic pressures and natural hazards. The importance of riparian zones has been acknowledged by European Directives, involving multi-scale monitoring. The use of very-high-resolution and hyperspatial imagery in a multi-temporal approach is an emerging topic, a trend reinforced by the recent and rapid growth of the use of unmanned aerial systems (UAS), which has prompted the development of innovative methodologies. Our study proposes a methodological framework to explore how a set of multi-temporal images acquired during a vegetative period can differentiate some of the deciduous riparian forest species and their health conditions. More specifically, the developed approach uses a process of variable selection to identify which variables derived from UAS imagery and which scale of image analysis are the most relevant to our objectives. The methodological framework is applied to two study sites to describe the riparian forest through two fundamental characteristics: species composition and health condition. These characteristics were selected not only because of their use as proxies for the ecological integrity of the riparian zone but also because of their use in river management. The comparison of various scales of image analysis identified the smallest object-based image analysis (OBIA) objects (ca. 1 m²) as the most relevant scale. Variables derived from spectral information (band ratios) were identified as the most appropriate, followed by variables related to the vertical structure of the forest. Classification results show good overall accuracies for the species composition of the riparian forest (five classes; 79.5% and 84.1% for site 1 and site 2). The classification scenario regarding the health condition of the black alders of site 1 performed best (90.6%). The quality of the classification models developed with a UAS-based, cost-effective, and semi-automatic approach competes successfully with those developed using more expensive imagery, such as multispectral and hyperspectral airborne imagery. The high overall accuracy obtained in classifying diseased alders opens the door to applications dedicated to monitoring the health condition of riparian forests. Our methodological framework will allow UAS users to manage the large metric datasets derived from such dense image time series.

  16. A correlative imaging based methodology for accurate quantitative assessment of bone formation in additive manufactured implants.

    PubMed

    Geng, Hua; Todd, Naomi M; Devlin-Mullin, Aine; Poologasundarampillai, Gowsihan; Kim, Taek Bo; Madi, Kamel; Cartmell, Sarah; Mitchell, Christopher A; Jones, Julian R; Lee, Peter D

    2016-06-01

    A correlative imaging methodology was developed to accurately quantify bone formation in the complex lattice structure of additively manufactured implants. Micro computed tomography (μCT) and histomorphometry were combined, integrating the best features of both while demonstrating the limitations of each imaging modality. This semi-automatic methodology registered the modalities using a coarse-graining technique to speed the registration of 2D histology sections to high resolution 3D μCT datasets. Once registered, qualitative and quantitative histomorphometric bone descriptors were directly correlated with 3D quantitative bone descriptors, such as bone ingrowth and bone contact. The correlative imaging allowed the significant volumetric shrinkage of histology sections to be quantified for the first time (~15%). The technique also demonstrated the importance of the location of the histological section, showing that an offset of up to 30% can be introduced. The results were used to quantitatively demonstrate the effectiveness of 3D-printed titanium lattice implants.

  17. Detection of eviscerated poultry spleen enlargement by machine vision

    NASA Astrophysics Data System (ADS)

    Tao, Yang; Shao, June J.; Skeeles, John K.; Chen, Yud-Ren

    1999-01-01

    The size of a poultry spleen is an indication of whether the bird is wholesome or has a virus-related disease. This study explored the possibility of detecting poultry spleen enlargement with a computer imaging system to assist human inspectors in food safety inspections. Images of 45-day-old hybrid turkey internal viscera were taken using fluorescent and UV lighting systems. Image processing algorithms including linear transformation, morphological operations, and statistical analyses were developed to distinguish the spleen from its surroundings and then to detect abnormal spleens. Experimental results demonstrated that the imaging method could effectively distinguish spleens from other organs and intestines. Based on a total sample of 57 birds, the classification rates for the correct detection of normal and abnormal birds were 92% on a self-test set and 95% on an independent test set. The methodology indicated the feasibility of using automated machine vision systems in the future to inspect internal organs and check the wholesomeness of poultry carcasses.

  18. Diagnosis of skin cancer using image processing

    NASA Astrophysics Data System (ADS)

    Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué; Coronel-Beltrán, Ángel

    2014-10-01

    In this paper, a methodology for classifying skin cancer in images of dermatologic spots based on spectral analysis using the K-law Fourier non-linear technique is presented. The image is segmented and binarized to build the function that contains the area of interest. The image is divided into its respective RGB channels to obtain the spectral properties of each channel. The green channel contains the most information and therefore this channel is always chosen. This information is multiplied point by point with a binary mask, and a Fourier transform written in nonlinear form is applied to the result. Where the real part of this spectrum is positive, the spectral density takes unit values; otherwise it is zero. Finally, the ratio of the sum of the unit values of the spectral density to the sum of the values of the binary mask is calculated. This ratio is called the spectral index. When the calculated value falls within the spectral index range, three types of cancer can be detected. Values outside this range correspond to benign lesions.
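
    Below is a sketch of the described spectral-index computation, with the K-law nonlinearity taken as F/|F|^k (a common reading; k is a free parameter here) and synthetic stand-ins for the green channel and lesion mask.

    ```python
    # Spectral index: fraction of K-law spectrum samples with positive real
    # part, normalized by the lesion-mask area (illustrative data).
    import numpy as np

    rng = np.random.default_rng(5)
    green = rng.random((128, 128))                 # green channel of the spot image
    mask = np.zeros_like(green)
    mask[32:96, 32:96] = 1.0                       # binary mask of the lesion area

    k = 0.3
    F = np.fft.fft2(green * mask)
    F_klaw = F / (np.abs(F) ** k + 1e-12)          # K-law nonlinear spectrum

    density = (F_klaw.real > 0).astype(float)      # unit values where Re > 0
    spectral_index = density.sum() / mask.sum()
    print("spectral index:", spectral_index)
    ```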

  19. Functional optical coherence tomography: principles and progress

    NASA Astrophysics Data System (ADS)

    Kim, Jina; Brown, William; Maher, Jason R.; Levinson, Howard; Wax, Adam

    2015-05-01

    In the past decade, several functional extensions of optical coherence tomography (OCT) have emerged, and this review highlights key advances in instrumentation, theoretical analysis, signal processing and clinical application of these extensions. We review five principal extensions: Doppler OCT (DOCT), polarization-sensitive OCT (PS-OCT), optical coherence elastography (OCE), spectroscopic OCT (SOCT), and molecular imaging OCT. The former three have been further developed with studies in both ex vivo and in vivo human tissues. This review emphasizes the newer techniques of SOCT and molecular imaging OCT, which show excellent potential for clinical application but have yet to be well reviewed in the literature. SOCT elucidates tissue characteristics, such as oxygenation and carcinogenesis, by detecting wavelength-dependent absorption and scattering of light in tissues. While SOCT measures endogenous biochemical distributions, molecular imaging OCT detects exogenous molecular contrast agents. These newer advances in functional OCT broaden the potential clinical application of OCT by providing novel ways to understand tissue activity that cannot be accomplished by other current imaging methodologies.

  1. Image analysis technique as a tool to identify morphological changes in Trametes versicolor pellets according to exopolysaccharide or laccase production.

    PubMed

    Tavares, Ana P M; Silva, Rui P; Amaral, António L; Ferreira, Eugénio C; Xavier, Ana M R B

    2014-02-01

    An image analysis technique was applied to identify morphological changes in pellets of the white-rot fungus Trametes versicolor in agitated submerged cultures during the production of exopolysaccharide (EPS) or ligninolytic enzymes. Batch tests under four different experimental conditions were carried out. Two different culture media were used, namely yeast medium or Trametes defined medium, and the addition of ligninolytic inducers such as xylidine or pulp and paper industrial effluent was evaluated. Laccase activity, EPS production, and final biomass content were determined for the batch assays, and pellet morphology was assessed by image analysis techniques. The data obtained made it possible to relate the choice of metabolic pathway to the experimental conditions: laccase production in the Trametes defined medium, or EPS production in the rich yeast medium experiments. Furthermore, the image processing and analysis methodology allowed a better comprehension of the physiological phenomena with respect to the corresponding morphological stages of the pellets.

  2. Ultrafast Ultrasound Imaging With Cascaded Dual-Polarity Waves.

    PubMed

    Zhang, Yang; Guo, Yuexin; Lee, Wei-Ning

    2018-04-01

    Ultrafast ultrasound imaging using plane or diverging waves, instead of focused beams, has greatly advanced the development of novel ultrasound imaging methods for evaluating tissue functions beyond anatomical information. However, the sonographic signal-to-noise ratio (SNR) of ultrafast imaging remains limited due to the lack of transmission focusing, and thus insufficient acoustic energy delivery. We hereby propose a new ultrafast ultrasound imaging methodology with cascaded dual-polarity waves (CDWs), which consists of a pulse train with positive and negative polarities. A new coding scheme and a corresponding linear decoding process were designed to obtain recovered signals with increased amplitude, thus increasing the SNR without sacrificing the frame rate. The newly designed CDW ultrafast ultrasound imaging technique achieved higher quality B-mode images than coherent plane-wave compounding (CPWC) and multiplane wave (MW) imaging in a calibration phantom, ex vivo pork belly, and in vivo human back muscle. CDW imaging shows a significant improvement in SNR (10.71 dB versus CPWC and 7.62 dB versus MW), penetration depth (36.94% versus CPWC and 35.14% versus MW), and contrast ratio in deep regions (5.97 dB versus CPWC and 5.05 dB versus MW) without compromising other image quality metrics, such as spatial resolution and frame rate. The enhanced image quality and ultrafast frame rates offered by CDW imaging hold great potential for various novel imaging applications.
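
    The CDW pulse-train design is more elaborate than can be shown here, but the shared principle of polarity-coded transmits with linear decoding can be illustrated with a simple 2x2 Hadamard-style code: two transmits send a+b and a-b, and summing/subtracting the echoes recovers each wave at doubled signal energy while uncorrelated noise grows only by sqrt(2), raising SNR. This is a generic sketch, not the CDW scheme itself.

    ```python
    # Polarity-coded transmits with linear decoding (illustrative echoes).
    import numpy as np

    rng = np.random.default_rng(7)
    a = np.sin(np.linspace(0, 20, 500))          # echo of plane wave A (stand-in)
    b = np.cos(np.linspace(0, 20, 500))          # echo of plane wave B (stand-in)
    noise = lambda: 0.1 * rng.standard_normal(500)

    rx1 = a + b + noise()                        # transmit 1: both positive polarity
    rx2 = a - b + noise()                        # transmit 2: wave B sign-flipped

    a_hat = (rx1 + rx2) / 2                      # linear decode
    b_hat = (rx1 - rx2) / 2
    print("mean decode error:", float(np.abs(a_hat - a).mean()))
    ```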

  3. Benefits of utilizing CellProfiler as a characterization tool for U-10Mo nuclear fuel

    DOE PAGES

    Collette, R.; Douglas, J.; Patterson, L.; ...

    2015-05-01

    Automated image processing techniques have the potential to aid in the performance evaluation of nuclear fuels by eliminating judgment calls that may vary from person to person or sample to sample. Analysis of in-core fuel performance is required for design and safety evaluations related to almost every aspect of the nuclear fuel cycle. This study presents a methodology for assessing the quality of uranium-molybdenum fuel images and describes image analysis routines designed for the characterization of several important microstructural properties. The analyses are performed in CellProfiler, an open-source program designed to enable biologists without training in computer vision or programming to automatically extract cellular measurements from large image sets. The quality metric scores an image based on three parameters: the illumination gradient across the image, the overall focus of the image, and the fraction of the image that contains scratches. The metric presents the user with the ability to 'pass' or 'fail' an image based on a reproducible quality score. Passable images may then be characterized through a separate CellProfiler pipeline, which employs a variety of common image analysis techniques. The results demonstrate the ability to reliably pass or fail images based on their illumination, focus, and scratch fraction, followed by automatic extraction of morphological data with respect to fission gas voids, interaction layers, and grain boundaries.
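
    A hedged sketch of the pass/fail idea: illumination gradient and focus are scored with two standard proxies (the range of a heavily blurred image and the variance of the Laplacian) and gated on hypothetical thresholds; the scratch-fraction term and the actual CellProfiler modules are omitted.

    ```python
    # Score an image for illumination gradient and focus, then gate it on
    # illustrative thresholds (not CellProfiler's own criteria).
    import numpy as np
    from scipy.ndimage import gaussian_filter, laplace

    def quality_check(img, grad_tol=0.2, focus_tol=1e-4):
        background = gaussian_filter(img, sigma=img.shape[0] / 8)  # low-pass field
        illumination_gradient = background.max() - background.min()
        focus_score = laplace(img).var()       # sharper images -> higher variance
        ok = illumination_gradient < grad_tol and focus_score > focus_tol
        return ok, illumination_gradient, focus_score

    img = np.random.default_rng(6).random((256, 256))
    print(quality_check(img))
    ```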

  4. Do photographic images of pain improve communication during pain consultations?

    PubMed

    Padfield, Deborah; Zakrzewska, Joanna M; Williams, Amanda C de C

    2015-01-01

    Visual images may facilitate the communication of pain during consultations. The aim was to assess whether photographic images of pain enrich the content and/or process of pain consultations by comparing patients' and clinicians' ratings of the consultation experience. Photographic images of pain previously co-created by patients with a photographer were provided to new patients attending pain clinic consultations. Seventeen patients who selected and used images that best expressed their pain were compared with 21 patients who were not shown images. Ten clinicians conducted assessments in each condition. After consultation, patients and clinicians completed ratings of aspects of communication and, when images were used, of how they influenced the consultation. The majority of both patients and clinicians reported that images enhanced the consultation. Ratings of communication were generally high, with no differences between those with and without images (with the exception of confidence in the treatment plan, which was rated more highly in the image group). However, patients' and clinicians' ratings of communication were inversely related only in consultations with images. Methodological shortcomings may underlie the present findings of no difference. It is also possible that using images raised patients' and clinicians' expectations and encouraged emotional disclosure, in response to which clinicians were dissatisfied with their performance. Using images in clinical encounters did not have a negative impact on the consultation, nor did it improve communication or satisfaction. These findings will inform future analysis of behaviour in the video-recorded consultations.

  5. Towards an in-plane methodology to track breast lesions using mammograms and patient-specific finite-element simulations

    NASA Astrophysics Data System (ADS)

    Lapuebla-Ferri, Andrés; Cegoñino-Banzo, José; Jiménez-Mocholí, Antonio-José; Pérez del Palomar, Amaya

    2017-11-01

    In breast cancer screening or diagnosis, it is usual to combine different images in order to locate a lesion as accurately as possible. These images are generated using a single imaging technique or several. As x-ray-based mammography is widely used, a breast lesion is located in the plane of the image (mammogram), but tracking it across mammograms corresponding to different views is a challenging task for medical physicians. Accordingly, simulation tools and methodologies that use patient-specific numerical models can facilitate the task of fusing information from different images. Additionally, these tools need to be as straightforward as possible to facilitate their translation to the clinical area. This paper presents a patient-specific, finite-element-based and semi-automated simulation methodology to track breast lesions across mammograms. A realistic three-dimensional computer model of a patient’s breast was generated from magnetic resonance imaging to simulate mammographic compressions in cranio-caudal (CC, head-to-toe) and medio-lateral oblique (MLO, shoulder-to-opposite hip) directions. For each simulated compression, a virtual mammogram was obtained and subsequently superimposed onto the corresponding real mammogram, using the nipple as a common feature. Two-dimensional rigid-body transformations were applied, and the error distance measured between the centroids of the tumors previously located on each image was 3.84 mm and 2.41 mm for CC and MLO compression, respectively. Considering that the scope of this work is to conceive a methodology translatable to clinical practice, the results indicate that it could be helpful in supporting the tracking of breast lesions.
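
    The superposition step reduces to a 2D rigid-body transformation anchored on the nipple, after which the lesion error is a centroid-to-centroid distance. A minimal sketch follows, with toy coordinates and an assumed rotation angle rather than patient data.

    ```python
    # Sketch of the 2D rigid-body superposition and error measure; coordinates
    # and the rotation angle are toy values, not patient data.
    import numpy as np

    def rigid_align(points, nipple_virtual, nipple_real, theta):
        """Rotate by theta about the virtual nipple, then translate onto the real one."""
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        return (points - nipple_virtual) @ R.T + nipple_real

    nipple_v, nipple_r = np.array([10.0, 20.0]), np.array([12.0, 19.0])
    tumour_v, tumour_r = np.array([[41.2, 63.5]]), np.array([[43.0, 61.8]])
    aligned = rigid_align(tumour_v, nipple_v, nipple_r, theta=0.05)
    error = np.linalg.norm(aligned - tumour_r)   # centroid-to-centroid distance
    ```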

  6. WE-B-BRC-00: Concepts in Risk-Based Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Prospective quality management techniques, long used by engineering and industry, have become a growing aspect of efforts to improve quality management and safety in healthcare. These techniques are of particular interest to medical physics as scope and complexity of clinical practice continue to grow, thus making the prescriptive methods we have used harder to apply and potentially less effective for our interconnected and highly complex healthcare enterprise, especially in imaging and radiation oncology. An essential part of most prospective methods is the need to assess the various risks associated with problems, failures, errors, and design flaws in our systems. We therefore begin with an overview of risk assessment methodologies used in healthcare and industry and discuss their strengths and weaknesses. The rationale for use of process mapping, failure modes and effects analysis (FMEA) and fault tree analysis (FTA) by TG-100 will be described, as well as suggestions for the way forward. This is followed by discussion of radiation oncology specific risk assessment strategies and issues, including the TG-100 effort to evaluate IMRT and other ways to think about risk in the context of radiotherapy. Incident learning systems, local as well as the ASTRO/AAPM ROILS system, can also be useful in the risk assessment process. Finally, risk in the context of medical imaging will be discussed. Radiation (and other) safety considerations, as well as lack of quality and certainty all contribute to the potential risks associated with suboptimal imaging. The goal of this session is to summarize a wide variety of risk analysis methods and issues to give the medical physicist access to tools which can better define risks (and their importance) which we work to mitigate with both prescriptive and prospective risk-based quality management methods. Learning Objectives: Description of risk assessment methodologies used in healthcare and industry Discussion of radiation oncology-specific risk assessment strategies and issues Evaluation of risk in the context of medical imaging and image quality E. Samei: Research grants from Siemens and GE.
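
    As a concrete example of the FMEA step mentioned above, TG-100-style analyses score each failure mode for occurrence (O), severity (S), and lack of detectability (D) on 1-10 scales and rank by the risk priority number RPN = O × S × D. The failure modes and scores below are invented for illustration only.

    ```python
    # Toy FMEA ranking: RPN = O * S * D orders mitigation effort.
    # Steps, modes, and scores are invented for illustration.
    failure_modes = [
        {"step": "contouring",    "mode": "wrong target volume", "O": 4, "S": 9,  "D": 6},
        {"step": "plan transfer", "mode": "stale plan exported", "O": 2, "S": 8,  "D": 3},
        {"step": "imaging",       "mode": "wrong patient scan",  "O": 1, "S": 10, "D": 2},
    ]
    for fm in failure_modes:
        fm["RPN"] = fm["O"] * fm["S"] * fm["D"]
    for fm in sorted(failure_modes, key=lambda f: -f["RPN"]):
        print(f"{fm['RPN']:4d}  {fm['step']:13s} {fm['mode']}")
    ```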

  7. WE-B-BRC-02: Risk Analysis and Incident Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fraass, B.

    Prospective quality management techniques, long used by engineering and industry, have become a growing aspect of efforts to improve quality management and safety in healthcare. These techniques are of particular interest to medical physics as scope and complexity of clinical practice continue to grow, thus making the prescriptive methods we have used harder to apply and potentially less effective for our interconnected and highly complex healthcare enterprise, especially in imaging and radiation oncology. An essential part of most prospective methods is the need to assess the various risks associated with problems, failures, errors, and design flaws in our systems. We therefore begin with an overview of risk assessment methodologies used in healthcare and industry and discuss their strengths and weaknesses. The rationale for use of process mapping, failure modes and effects analysis (FMEA) and fault tree analysis (FTA) by TG-100 will be described, as well as suggestions for the way forward. This is followed by discussion of radiation oncology specific risk assessment strategies and issues, including the TG-100 effort to evaluate IMRT and other ways to think about risk in the context of radiotherapy. Incident learning systems, local as well as the ASTRO/AAPM ROILS system, can also be useful in the risk assessment process. Finally, risk in the context of medical imaging will be discussed. Radiation (and other) safety considerations, as well as lack of quality and certainty all contribute to the potential risks associated with suboptimal imaging. The goal of this session is to summarize a wide variety of risk analysis methods and issues to give the medical physicist access to tools which can better define risks (and their importance) which we work to mitigate with both prescriptive and prospective risk-based quality management methods. Learning Objectives: Description of risk assessment methodologies used in healthcare and industry Discussion of radiation oncology-specific risk assessment strategies and issues Evaluation of risk in the context of medical imaging and image quality E. Samei: Research grants from Siemens and GE.

  8. The challenges of studying visual expertise in medical image diagnosis.

    PubMed

    Gegenfurtner, Andreas; Kok, Ellen; van Geel, Koos; de Bruin, Anique; Jarodzka, Halszka; Szulewski, Adam; van Merriënboer, Jeroen Jg

    2017-01-01

    Visual expertise is the superior visual skill shown when executing domain-specific visual tasks. Understanding visual expertise is important in order to understand how the interpretation of medical images may be best learned and taught. In the context of this article, we focus on the visual skill of medical image diagnosis and, more specifically, on the methodological set-ups routinely used in visual expertise research. We offer a critique of commonly used methods and propose three challenges for future research to open up new avenues for studying characteristics of visual expertise in medical image diagnosis. The first challenge addresses theory development. Novel prospects in modelling visual expertise can emerge when we reflect on cognitive and socio-cultural epistemologies in visual expertise research, when we engage in statistical validations of existing theoretical assumptions and when we include social and socio-cultural processes in expertise development. The second challenge addresses the recording and analysis of longitudinal data. If we assume that the development of expertise is a long-term phenomenon, then it follows that future research can engage in advanced statistical modelling of longitudinal expertise data that extends the routine use of cross-sectional material through, for example, animations and dynamic visualisations of developmental data. The third challenge addresses the combination of methods. Alternatives to current practices can integrate qualitative and quantitative approaches in mixed-method designs, embrace relevant yet underused data sources and understand the need for multidisciplinary research teams. Embracing alternative epistemological and methodological approaches for studying visual expertise can lead to a more balanced and robust future for understanding superior visual skills in medical image diagnosis as well as other medical fields. © 2016 John Wiley & Sons Ltd and The Association for the Study of Medical Education.

  9. Evaluation of a 3D local multiresolution algorithm for the correction of partial volume effects in positron emission tomography.

    PubMed

    Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris

    2011-09-01

    Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography leading to underestimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multiresolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low-resolution PET image using wavelet decompositions. Although this method allows creating PVE corrected images, it is based on a 2D global correlation model, which may introduce artifacts in regions where no significant correlation exists between anatomical and functional details. A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present, the new model outperformed the 2D global approach, avoiding artifacts and significantly improving quality of the corrected images and their quantitative accuracy. A new 3D local model was proposed for a voxel-wise PVE correction based on the original mutual multiresolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information.
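
    The detail-transfer idea can be sketched with a 3D wavelet decomposition, assuming PyWavelets and an anatomical volume already co-registered and resampled to the PET grid. Note that this crude version uses a single global scaling weight, whereas the paper's contribution is precisely a voxel-wise local model.

    ```python
    # Crude sketch of wavelet detail transfer for PVE correction; assumes the
    # anatomical volume is co-registered and resampled to the PET grid. The
    # real MMA method uses a voxel-wise local model; alpha here is global.
    import numpy as np
    import pywt

    def mma_sketch(pet, anat, wavelet="db2", level=1):
        cp = pywt.wavedecn(pet, wavelet, level=level)    # [approx, details...]
        ca = pywt.wavedecn(anat, wavelet, level=level)
        # single global scaling between modalities, fit on the approximation bands
        alpha = np.vdot(cp[0], ca[0]) / np.vdot(ca[0], ca[0])
        # inject scaled anatomical detail into the finest PET detail bands
        cp[-1] = {k: alpha * v for k, v in ca[-1].items()}
        return pywt.waverecn(cp, wavelet)
    ```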

  10. Evaluation of a 3D local multiresolution algorithm for the correction of partial volume effects in positron emission tomography

    PubMed Central

    Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E.; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris

    2011-01-01

    Purpose Partial volume effects (PVE) are consequences of the limited spatial resolution in emission tomography leading to under-estimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multi-resolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low resolution PET image using wavelet decompositions. Although this method allows creating PVE corrected images, it is based on a 2D global correlation model which may introduce artefacts in regions where no significant correlation exists between anatomical and functional details. Methods A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Results Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present the new model outperformed the 2D global approach, avoiding artefacts and significantly improving quality of the corrected images and their quantitative accuracy. Conclusions A new 3D local model was proposed for a voxel-wise PVE correction based on the original mutual multi-resolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information. PMID:21978037

  11. The use of morphological characteristics and texture analysis in the identification of tissue composition in prostatic neoplasia.

    PubMed

    Diamond, James; Anderson, Neil H; Bartels, Peter H; Montironi, Rodolfo; Hamilton, Peter W

    2004-09-01

    Quantitative examination of prostate histology offers clues in the diagnostic classification of lesions and in the prediction of response to treatment and prognosis. To facilitate the collection of quantitative data, the development of machine vision systems is necessary. This study explored the use of imaging for identifying tissue abnormalities in prostate histology. Medium-power histological scenes were recorded from whole-mount radical prostatectomy sections at ×40 objective magnification and assessed by a pathologist as exhibiting stroma, normal tissue (nonneoplastic epithelial component), or prostatic carcinoma (PCa). A machine vision system was developed that divided the scenes into subregions of 100 × 100 pixels and subjected each to image-processing techniques. Analysis of morphological characteristics allowed the identification of normal tissue. Analysis of image texture demonstrated that Haralick feature 4 was the most suitable for discriminating stroma from PCa. Using these morphological and texture measurements, it was possible to define a classification scheme for each subregion. The machine vision system is designed to integrate these classification rules and generate digital maps of tissue composition from the classification of subregions; 79.3% of subregions were correctly classified. Established classification rates have demonstrated the validity of the methodology on small scenes; a logical extension was to apply the methodology to whole slide images via scanning technology. The machine vision system is capable of classifying these images. The machine vision system developed in this project facilitates the exploration of morphological and texture characteristics in quantifying tissue composition. It also illustrates the potential of quantitative methods to provide highly discriminatory information in the automated identification of prostatic lesions using computer vision.
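
    A sketch of the texture step, assuming mahotas and an 8-bit grey-level scene: grey-level co-occurrence (Haralick) features are computed per 100 × 100 subregion, and index 3 of the classic 13-feature vector (sum-of-squares variance) is taken as "Haralick feature 4". The tiling and feature indexing are assumptions, not the study's code.

    ```python
    # Sketch of per-subregion Haralick texture features with mahotas; assumes
    # an 8-bit grey-level scene; taking index 3 (sum-of-squares variance) as
    # "Haralick feature 4" is an assumption about the paper's numbering.
    import numpy as np
    import mahotas

    def subregion_feature4(scene, size=100):
        feats = {}
        for i in range(0, scene.shape[0] - size + 1, size):
            for j in range(0, scene.shape[1] - size + 1, size):
                tile = scene[i:i + size, j:j + size].astype(np.uint8)
                har = mahotas.features.haralick(tile).mean(axis=0)  # avg. 4 directions
                feats[(i, j)] = har[3]
        return feats
    ```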

  12. A Reliable Methodology for Determining Seed Viability by Using Hyperspectral Data from Two Sides of Wheat Seeds.

    PubMed

    Zhang, Tingting; Wei, Wensong; Zhao, Bin; Wang, Ranran; Li, Mingliu; Yang, Liming; Wang, Jianhua; Sun, Qun

    2018-03-08

    This study investigated the possibility of using visible and near-infrared (VIS/NIR) hyperspectral imaging techniques to discriminate viable and non-viable wheat seeds. Both sides of individual seeds were subjected to hyperspectral imaging (400-1000 nm) to acquire reflectance spectral data. Four spectral datasets, including the ventral groove side, reverse side, mean (the mean of two sides' spectra of every seed), and mixture datasets (two sides' spectra of every seed), were used to construct the models. Classification models, partial least squares discriminant analysis (PLS-DA), and support vector machines (SVM), coupled with some pre-processing methods and successive projections algorithm (SPA), were built for the identification of viable and non-viable seeds. Our results showed that the standard normal variate (SNV)-SPA-PLS-DA model had high classification accuracy for whole seeds (>85.2%) and for viable seeds (>89.5%) in the prediction set based on the mixed spectral dataset, using only 16 wavebands. After screening with this model, the final germination of the seed lot could be higher than 89.5%. Here, we develop a reliable methodology for predicting the viability of wheat seeds, showing that VIS/NIR hyperspectral imaging is an accurate technique for the classification of viable and non-viable wheat seeds in a non-destructive manner.
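
    The preprocessing and classification chain can be sketched as follows, assuming scikit-learn; PLS regression on 0/1 labels with a 0.5 threshold is a common PLS-DA stand-in rather than the authors' exact implementation, and the SPA waveband-selection step is omitted.

    ```python
    # Sketch of SNV preprocessing plus a PLS-DA stand-in (PLS regression on
    # 0/1 labels, 0.5 threshold); the data below are random placeholders.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    def snv(X):
        """Standard normal variate: centre and scale each spectrum individually."""
        return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

    X, y = np.random.rand(100, 16), np.random.randint(0, 2, 100)  # 16 wavebands
    pls = PLSRegression(n_components=5).fit(snv(X), y)
    viable = (pls.predict(snv(X)).ravel() > 0.5).astype(int)      # 1 = viable
    ```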

  13. A Reliable Methodology for Determining Seed Viability by Using Hyperspectral Data from Two Sides of Wheat Seeds

    PubMed Central

    Zhang, Tingting; Wei, Wensong; Zhao, Bin; Wang, Ranran; Li, Mingliu; Yang, Liming; Wang, Jianhua; Sun, Qun

    2018-01-01

    This study investigated the possibility of using visible and near-infrared (VIS/NIR) hyperspectral imaging techniques to discriminate viable and non-viable wheat seeds. Both sides of individual seeds were subjected to hyperspectral imaging (400–1000 nm) to acquire reflectance spectral data. Four spectral datasets, including the ventral groove side, reverse side, mean (the mean of two sides’ spectra of every seed), and mixture datasets (two sides’ spectra of every seed), were used to construct the models. Classification models, partial least squares discriminant analysis (PLS-DA), and support vector machines (SVM), coupled with some pre-processing methods and successive projections algorithm (SPA), were built for the identification of viable and non-viable seeds. Our results showed that the standard normal variate (SNV)-SPA-PLS-DA model had high classification accuracy for whole seeds (>85.2%) and for viable seeds (>89.5%) in the prediction set based on the mixed spectral dataset, using only 16 wavebands. After screening with this model, the final germination of the seed lot could be higher than 89.5%. Here, we develop a reliable methodology for predicting the viability of wheat seeds, showing that VIS/NIR hyperspectral imaging is an accurate technique for the classification of viable and non-viable wheat seeds in a non-destructive manner. PMID:29517991

  14. Simulating Colour Vision Deficiency from a Spectral Image.

    PubMed

    Shrestha, Raju

    2016-01-01

    People with colour vision deficiency (CVD) have difficulty seeing full colour contrast and can miss some of the features in a scene. As part of universal design, researchers have been working on how to modify and enhance the colour of images so that such viewers can see the scene with good contrast. For this, it is important to know how the original colour image is seen by different individuals with CVD. This paper proposes a methodology to simulate accurate colour-deficient images from a spectral image using the cone sensitivities of different cases of deficiency. As the method enables the generation of accurate colour-deficient images, it should help to better understand the limitations of colour vision deficiency, which in turn can inform the design and development of more effective imaging technologies for better and wider accessibility in the context of universal design.
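
    The core projection can be sketched as follows: each spectral pixel is integrated against cone sensitivity functions to obtain LMS responses, and a dichromat is simulated by rebuilding the missing cone's response from the remaining two. The replacement coefficients below are naive placeholders (practical simulations use, e.g., Brettel-style piecewise mappings), not the paper's method.

    ```python
    # Naive dichromacy simulation from a spectral image: project bands onto
    # cone sensitivities, then rebuild the L response from M and S. The
    # 1.05/-0.05 coefficients are invented placeholders, not the paper's model.
    import numpy as np

    def simulate_protanopia(spectral_img, lms_sens):
        """spectral_img: (H, W, B) radiance per band; lms_sens: (B, 3)."""
        lms = spectral_img @ lms_sens          # (H, W, 3) cone responses
        lms[..., 0] = 1.05 * lms[..., 1] - 0.05 * lms[..., 2]   # missing L cone
        return lms                             # map back to display RGB as needed
    ```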

  15. Image Representation and Interactivity: An Exploration of Utility Values, Information-Needs and Image Interactivity

    ERIC Educational Resources Information Center

    Lewis, Elise C.

    2011-01-01

    This study was designed to explore the relationships between users and interactive images. Three factors were identified and provided different perspectives on how users interact with images: image utility, information-need, and images with varying levels of interactivity. The study used a mixed methodology to gain a more comprehensive…

  16. Language systems in normal and aphasic human subjects: functional imaging studies and inferences from animal studies.

    PubMed

    Wise, Richard J S

    2003-01-01

    The old neurological model of language, based on the writings of Broca, Wernicke and Lichtheim in the 19th century, is now undergoing major modifications. Observations on the anatomy and physiology of auditory processing in non-human primates are giving strong indicators as to how speech perception is organised in the human brain. In the light of this knowledge, functional activation studies with positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) are achieving a new level of precision in the investigation of language organisation in the human brain, in a manner not possible with observations on patients with aphasic stroke. Although the use of functional imaging to inform methods of improving aphasia rehabilitation remains underdeveloped, there are strong indicators that this methodology will provide the means to research a very imperfectly developed area of therapy.

  17. Single-shot full resolution region-of-interest (ROI) reconstruction in image plane digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Singh, Mandeep; Khare, Kedar

    2018-05-01

    We describe a numerical processing technique that allows single-shot region-of-interest (ROI) reconstruction in image plane digital holographic microscopy with full pixel resolution. The ROI reconstruction is modelled as an optimization problem where the cost function to be minimized consists of an L2-norm-squared data-fitting term and a modified Huber penalty term, which are minimized alternately in an adaptive fashion. The technique can provide full-pixel-resolution complex-valued images of the selected ROI, which is not possible with the commonly used Fourier transform method. The technique can facilitate holographic reconstruction of individual cells of interest from large field-of-view digital holographic microscopy data. The complementary phase information, in addition to the usual absorption information already available from bright-field microscopy, makes the methodology attractive to the biomedical user community.
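
    In toy form, the cost function described above can be minimized by gradient descent on ||Ax - b||² + λ·huber(Dx); A, D, the step size, and the plain (rather than adaptive alternating) scheme are placeholders for the paper's actual operators and algorithm.

    ```python
    # Toy minimisation of ||A x - b||^2 + lam * huber(D x) by plain gradient
    # descent; operators and step size are placeholders, not the paper's scheme.
    import numpy as np

    def huber_grad(t, delta=1.0):
        return np.where(np.abs(t) <= delta, t, delta * np.sign(t))

    def reconstruct(A, b, D, lam=0.1, step=1e-3, iters=500):
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = 2 * A.T @ (A @ x - b) + lam * D.T @ huber_grad(D @ x)
            x -= step * grad
        return x
    ```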

  18. CT and MRI slice separation evaluation by LabView developed software.

    PubMed

    Acri, Giuseppe; Testagrossa, Barbara; Sestito, Angela; Bonanno, Lilla; Vermiglio, Giuseppe

    2018-02-01

    The efficient use of Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) equipment necessitates establishing adequate quality-control (QC) procedures. In particular, verifying the accuracy of slice separation during multislice acquisition requires scan exploration of phantoms containing test objects. To simplify such procedures, a novel phantom and a computerised LabView-based procedure have been devised, enabling determination of the midpoint of the full width at half maximum (FWHM) in real time, while the distance between the profile midpoints of two successive images is evaluated and measured. The results were compared with those obtained by processing the same phantom images with commercial software. To validate the proposed methodology, the Fisher test was conducted on the resulting data sets. In all cases, there was no statistically significant variation between the commercial procedure and the LabView procedure, which can be used on any CT or MRI diagnostic device. Copyright © 2017. Published by Elsevier GmbH.
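
    The underlying measurement can be sketched in a few lines (here in Python rather than LabView): locate the two half-maximum crossings of a slice profile by linear interpolation, take their midpoint, and report the distance between the midpoints of two successive images as the slice separation. Edge handling is omitted.

    ```python
    # Sketch of the FWHM-midpoint measurement (Python stand-in for the LabView
    # routine); assumes a single peak well inside the profile, no edge handling.
    import numpy as np

    def fwhm_midpoint(x, profile):
        half = profile.max() / 2.0
        above = np.where(profile >= half)[0]
        i, j = above[0], above[-1]
        # linear interpolation at the rising and falling half-maximum crossings
        x_left = np.interp(half, [profile[i - 1], profile[i]], [x[i - 1], x[i]])
        x_right = np.interp(half, [profile[j + 1], profile[j]], [x[j + 1], x[j]])
        return 0.5 * (x_left + x_right)

    # slice separation = |fwhm_midpoint(x, profile_2) - fwhm_midpoint(x, profile_1)|
    ```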

  19. A comparison between one year of daily global irradiation from ground-based measurements versus meteosat images from seven locations in Tunisia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Djemaa, A.B.; Delorme, C.

    1992-01-01

    Three numerical images per day from METEOSAT B2 were processed over a period of 12 months, from October 1985 to September 1986, to estimate the daily values of available solar radiation in Tunisia. The methodology used, GISTEL, applied to images from the 'visible' channel of METEOSAT, is described. Results are compared with measured radiation values from seven stations of the Institut de la Meteorologie de Tunisie. Among more than 2,200 measured-estimated daily pairs, a high percentage, 89%, show a relative error within ±10%. Many figures concerning Sidi-Bou-Said, Kairouan, Thala, and Gafsa are presented to show the capability of GISTEL to map the daily available solar radiation with sufficient spatial resolution in countries where radiation measurements are scarce.
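
    For clarity, the validation statistic quoted above is simply the share of measured-estimated pairs whose relative error falls within ±10%; a two-line sketch:

    ```python
    # Share of measured-estimated daily pairs with relative error within +/-10%.
    import numpy as np

    def frac_within_10pct(measured, estimated):
        rel_err = (estimated - measured) / measured
        return np.mean(np.abs(rel_err) <= 0.10)   # ~0.89 reported for these data
    ```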

  20. Measurement of Galactic Logarithmic Spiral Arm Pitch Angle Using Two-Dimensional Fast Fourier Transform Decomposition

    NASA Astrophysics Data System (ADS)

    Davis, Benjamin L.; Berrier, J. C.; Shields, D. W.; Kennefick, J.; Kennefick, D.; Seigar, M. S.; Lacy, C. H. S.; Puerari, I.

    2012-01-01

    A logarithmic spiral is a prominent feature appearing in a majority of observed galaxies. This feature has long been associated with the traditional Hubble classification scheme, but historical quotes of pitch angle of spiral galaxies have been almost exclusively qualitative. We have developed a methodology, utilizing Two-Dimensional Fast Fourier Transformations of images of spiral galaxies, in order to isolate and measure the pitch angles of their spiral arms. Our technique provides a quantitative way to measure this morphological feature. This will allow the precise comparison of spiral galaxy evolution to other galactic parameters and test spiral arm genesis theories. In this work, we detail our image processing and analysis of spiral galaxy images and discuss the robustness of our analysis techniques. The authors gratefully acknowledge support for this work from NASA Grant NNX08AW03A.
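
    The measurement pipeline can be sketched as follows, under common assumptions for this family of methods: resample the (deprojected) galaxy image onto (ln r, θ) coordinates, take a 2D FFT, and convert the dominant frequency of each harmonic mode m to a pitch angle via P = arctan(-m/p_max). Grid sizes, interpolation, and the mode search below are illustrative, not the authors' exact code.

    ```python
    # Sketch of 2D-FFT pitch-angle measurement: resample onto (ln r, theta),
    # FFT, and convert each mode's dominant frequency to a pitch angle.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def pitch_angles(img, m_max=6, n_u=256, n_t=256):
        cy, cx = (np.array(img.shape) - 1) / 2.0
        u = np.linspace(0.0, np.log(min(cy, cx)), n_u)        # u = ln r
        t = np.linspace(0.0, 2 * np.pi, n_t, endpoint=False)
        rr, tt = np.meshgrid(np.exp(u), t, indexing="ij")
        polar = map_coordinates(img, [cy + rr * np.sin(tt), cx + rr * np.cos(tt)],
                                order=1)
        A = np.fft.fft2(polar)                                # axes: (p over u, m over theta)
        p = 2 * np.pi * np.fft.fftfreq(n_u, d=u[1] - u[0])
        out = {}
        for m in range(1, m_max + 1):                         # harmonic modes m = 1..6
            p_max = p[np.argmax(np.abs(A[:, m]))]
            out[m] = np.degrees(np.arctan2(-m, p_max))
        return out   # pitch angle per mode; the dominant mode is used in practice
    ```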
