Sample records for complex image processing

  1. Measuring the complexity of design in real-time imaging software

    NASA Astrophysics Data System (ADS)

    Sangwan, Raghvinder S.; Vercellone-Smith, Pamela; Laplante, Phillip A.

    2007-02-01

    Due to the intricacies of the algorithms involved, the design of imaging software is considered to be more complex than that of non-image processing software (Sangwan et al., 2005). A recent investigation (Larsson and Laplante, 2006) examined the complexity of several image processing and non-image processing software packages along a wide variety of metrics, including those postulated by McCabe (1976), Chidamber and Kemerer (1994), and Martin (2003). This work found that it was not always possible to quantitatively compare the complexity between imaging applications and non-image processing systems. Newer research and an accompanying tool (Structure 101, 2006), however, provide a greatly simplified approach to measuring software complexity. Therefore, it may be possible to definitively quantify the complexity differences between imaging and non-imaging software, between imaging and real-time imaging software, and between software programs of the same application type. In this paper, we review prior results and describe the methodology for measuring complexity in imaging systems. We then apply a new complexity measurement methodology to several sets of imaging and non-imaging code in order to compare the complexity differences between the two types of applications. The benefit of such quantification is far-reaching, for example, leading to more easily measured performance improvement and quality in real-time imaging code.
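
    The metrics surveyed in this record are concrete formulas. McCabe's cyclomatic complexity, for instance, is V(G) = E - N + 2P for a control-flow graph with E edges, N nodes, and P connected components. A minimal sketch, using a hypothetical control-flow graph (an if/else inside a loop) that is illustrative rather than taken from the paper:

```python
def cyclomatic_complexity(edges, num_nodes, num_components=1):
    """McCabe (1976): V(G) = E - N + 2P for a control-flow graph."""
    return len(edges) - num_nodes + 2 * num_components

# Hypothetical control-flow graph: an if/else inside a loop.
# Nodes: 0 entry, 1 loop test, 2/3 branches, 4 join, 5 loop end, 6 exit.
edges = [(0, 1), (1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (5, 1), (5, 6)]

vg = cyclomatic_complexity(edges, num_nodes=7)  # 8 - 7 + 2 = 3
```

    A straight-line program (two nodes, one edge) scores the minimum of 1; each added decision point raises V(G) by one.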

  2. Direct-to-digital holography reduction of reference hologram noise and fourier space smearing

    DOEpatents

    Voelkl, Edgar

    2006-06-27

    Systems and methods are described for reduction of reference hologram noise and reduction of Fourier space smearing, especially in the context of direct-to-digital holography (off-axis interferometry). A method of reducing reference hologram noise includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference image waves; and transforming the corresponding plurality of reference image waves into a reduced noise reference image wave. A method of reducing smearing in Fourier space includes: recording a plurality of reference holograms; processing the plurality of reference holograms into a corresponding plurality of reference complex image waves; transforming the corresponding plurality of reference image waves into a reduced noise reference complex image wave; recording a hologram of an object; processing the hologram of the object into an object complex image wave; and dividing the complex image wave of the object by the reduced noise reference complex image wave to obtain a reduced smearing object complex image wave.
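
    As a rough numerical sketch of the steps described above, the fragment below averages several noisy reference complex image waves into a reduced-noise reference and divides a simulated object complex image wave by it. The wave values, noise level, and array sizes are illustrative assumptions, not the patent's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reference complex image waves: average several noisy
# copies into one reduced-noise reference wave.
true_ref = np.exp(1j * 0.3) * np.ones((8, 8))
refs = [true_ref + 0.05 * (rng.standard_normal((8, 8))
                           + 1j * rng.standard_normal((8, 8)))
        for _ in range(50)]
reduced_noise_ref = np.mean(refs, axis=0)

# The object wave carries the same systematic reference phase; dividing
# by the reduced-noise reference removes it (the reduced-smearing step).
obj = np.exp(1j * 1.0) * true_ref
corrected = obj / reduced_noise_ref
```

    Averaging k reference waves shrinks the reference noise by roughly sqrt(k), which is why the division recovers the object phase (here, 1.0 radian) nearly intact.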

  3. Complex Event Processing for Content-Based Text, Image, and Video Retrieval

    DTIC Science & Technology

    2016-06-01

    ARL-TR-7705, June 2016, US Army Research Laboratory: Complex Event Processing for Content-Based Text, Image, and Video Retrieval.

  4. Local wavelet transform: a cost-efficient custom processor for space image compression

    NASA Astrophysics Data System (ADS)

    Masschelein, Bart; Bormans, Jan G.; Lafruit, Gauthier

    2002-11-01

    Thanks to its intrinsic scalability features, the wavelet transform has become increasingly popular as a decorrelator in image compression applications. Throughput, memory requirements and complexity are important parameters when developing hardware image compression modules. An implementation of the classical, global wavelet transform requires large memory sizes and implies a large latency between the availability of the input image and the production of minimal data entities for entropy coding. Image tiling methods, as proposed by JPEG2000, reduce the memory sizes and the latency, but inevitably introduce image artefacts. The Local Wavelet Transform (LWT), presented in this paper, is a low-complexity wavelet transform architecture using block-based processing that results in the same transformed images as those obtained by the global wavelet transform. The architecture minimizes the processing latency with a limited amount of memory. Moreover, as the LWT is an instruction-based custom processor, it can be programmed for specific tasks, such as push-broom processing of infinite-length satellite images. The features of the LWT make it appropriate for use in space image compression, where high throughput, low memory sizes, low complexity, low power and push-broom processing are important requirements.
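
    For orientation, a single level of a 2D Haar transform, the simplest wavelet decorrelator, can be sketched as below. This is only a stand-in for the wavelet decomposition the record discusses, not the paper's block-based LWT hardware architecture:

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar wavelet transform: split the image into
    an average band (ll) and three detail bands (lh, hl, hh)."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # horizontal average
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # horizontal detail
    ll = (a[0::2, :] + a[1::2, :]) / 2.0      # average of averages
    lh = (a[0::2, :] - a[1::2, :]) / 2.0
    hl = (d[0::2, :] + d[1::2, :]) / 2.0
    hh = (d[0::2, :] - d[1::2, :]) / 2.0
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar2d(img)
```

    On a smooth ramp image like this one, almost all energy lands in the ll band and the hh band vanishes, which is the decorrelation property that makes the transform useful before entropy coding.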

  5. Content standards for medical image metadata

    NASA Astrophysics Data System (ADS)

    d'Ornellas, Marcos C.; da Rocha, Rafael P.

    2003-12-01

    Medical images are at the heart of healthcare diagnostic procedures. They have provided not only a noninvasive means to view anatomical cross-sections of internal organs but also a means for physicians to evaluate the patient's diagnosis and monitor the effects of the treatment. For a medical center, the emphasis may shift from the generation of images to post-processing and data management, since the medical staff may generate even more processed images and other data from the original image after various analyses and post-processing. A medical image data repository for health care information systems is becoming a critical need. This data repository would contain comprehensive patient records, including information such as clinical data, related diagnostic images, and post-processed images. Due to the large volume and complexity of the data as well as the diversified user access requirements, the implementation of the medical image archive system will be a complex and challenging task. This paper discusses content standards for medical image metadata. It also focuses on image metadata content evaluation and metadata quality management.

  6. Image processing mini manual

    NASA Technical Reports Server (NTRS)

    Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill

    1992-01-01

    The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.

  7. Automated synthesis of image processing procedures using AI planning techniques

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Mortensen, Helen

    1994-01-01

    This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) (Chien 1994) system, which uses artificial intelligence planning techniques (Iwasaki & Friedland, 1985; Pemberthy & Weld, 1992; Stefik, 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). The MVP system allows the user to specify the image processing requirements in terms of the various types of correction required. Given this information, MVP derives the unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an image processing program which is then executed to fill the processing request.

  8. Steganalysis based on reducing the differences of image statistical characteristics

    NASA Astrophysics Data System (ADS)

    Wang, Ran; Niu, Shaozhang; Ping, Xijian; Zhang, Tao

    2018-04-01

    Compared with the embedding process, image content makes a more significant impact on the differences in image statistical characteristics. This makes image steganalysis a classification problem with larger within-class scatter distances and smaller between-class scatter distances; as a result, the steganalysis features may become inseparable owing to the differences in image statistical characteristics. In this paper, a new steganalysis framework that can reduce the differences in image statistical characteristics caused by varied content and processing methods is proposed. The given images are segmented into several sub-images according to texture complexity. Steganalysis features are extracted separately from each subset with the same or similar texture complexity to build a classifier, and the final steganalysis result is obtained through a weighted fusion process. Theoretical analysis and experimental results demonstrate the validity of the framework.
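
    A toy rendering of the framework's segment-then-fuse pipeline, with variance standing in for the paper's (unspecified here) texture-complexity measure and a placeholder classifier output per subset; both are labeled assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def texture_complexity(block):
    # Local variance as a crude texture-complexity measure (assumption;
    # the paper's actual measure is not given in this record).
    return block.var()

img = rng.integers(0, 256, size=(64, 64)).astype(float)
blocks = [img[i:i + 16, j:j + 16] for i in range(0, 64, 16)
          for j in range(0, 64, 16)]

# Group blocks into low/high texture-complexity subsets.
scores = np.array([texture_complexity(b) for b in blocks])
median = np.median(scores)
subsets = {"low": [b for b, s in zip(blocks, scores) if s <= median],
           "high": [b for b, s in zip(blocks, scores) if s > median]}

# Hypothetical per-subset classifier outputs, fused by subset weight.
probs = {k: 0.5 for k in subsets}                     # placeholder scores
weights = {k: len(v) / len(blocks) for k, v in subsets.items()}
fused = sum(weights[k] * probs[k] for k in subsets)   # weighted fusion
```

    In the real framework each subset gets its own trained classifier; only the weighted-fusion arithmetic is shown here.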

  9. Towards an Intelligent Planning Knowledge Base Development Environment

    NASA Technical Reports Server (NTRS)

    Chien, S.

    1994-01-01

    This abstract describes work in developing knowledge base editing and debugging tools for the Multimission VICAR Planner (MVP) system. MVP uses artificial intelligence planning techniques to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory.

  10. Healthcare provider and patient perspectives on diagnostic imaging investigations.

    PubMed

    Makanjee, Chandra R; Bergh, Anne-Marie; Hoffmann, Willem A

    2015-05-20

    Much has been written about the patient-centred approach in doctor-patient consultations, but little is known about interactions and communication processes regarding healthcare providers' and patients' perspectives on expectations and experiences of diagnostic imaging investigations within the medical encounter. Patients journey through the health system from the point of referral to the imaging investigation itself and then to the post-imaging consultation. The aim of this study was to explore healthcare provider and patient perspectives on interaction and communication processes during diagnostic imaging investigations as part of the clinical journey through a healthcare complex. A qualitative study was conducted, with two phases of data collection. Twenty-four patients were selected through convenience sampling at a public district hospital complex and were followed throughout their journey in the hospital system, from admission to discharge. The second phase entailed focus group interviews conducted with providers in the district hospital and the adjacent academic hospital (medical officers and family physicians, nurses, radiographers, radiology consultants and registrars). Two main themes guided our analysis: (1) provider perspectives; and (2) patient dispositions and reactions. Golden threads that cut across these themes are interactions and communication processes in the context of expectations, experiences of the imaging investigations and the outcomes thereof. Insights from this study provide a better understanding of the complexity of the processes and interactions between providers and patients during imaging investigations conducted as part of the clinical pathway. The interactions and communication processes are provider-patient centred when a referral for a diagnostic imaging investigation is included.

  11. The application of a unique flow modeling technique to complex combustion systems

    NASA Astrophysics Data System (ADS)

    Waslo, J.; Hasegawa, T.; Hilt, M. B.

    1986-06-01

    This paper describes the application of a unique three-dimensional water flow modeling technique to the study of complex fluid flow patterns within an advanced gas turbine combustor. The visualization technique uses light scattering, coupled with real-time image processing, to determine flow fields. Additional image processing is used to make concentration measurements within the combustor.

  12. Amplitude image processing by diffractive optics.

    PubMed

    Cagigal, Manuel P; Valle, Pedro J; Canales, V F

    2016-02-22

    In contrast to standard digital image processing, which operates on the detected image intensity, we propose to perform amplitude image processing. Amplitude processing, such as low-pass or high-pass filtering, is carried out using diffractive optical elements (DOEs), since they allow operating on the complex field amplitude before it has been detected. We show the procedure for designing the DOE that corresponds to each operation. Furthermore, we analyze the performance of amplitude image processing. In particular, a DOE Laplacian filter is applied to simulated astronomical images for detecting two stars one Airy ring apart. We also verify by numerical simulations that the use of a Laplacian amplitude filter produces less noisy images than standard digital image processing.
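
    A numerical sketch of the distinction, assuming the DOE acts as a Laplacian transfer function in the Fourier plane: the same filter is applied once to the complex field amplitude (as the optical element would, before detection) and once to the detected intensity. The Gaussian test field and grid size are illustrative:

```python
import numpy as np

n = 64
y, x = np.mgrid[:n, :n]
field = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 50.0).astype(complex)

# Laplacian transfer function: multiplication by -4*pi^2*(fx^2 + fy^2)
# in the Fourier domain equals applying the spatial Laplacian.
fx = np.fft.fftfreq(n)
fy = np.fft.fftfreq(n)
lap = -4 * np.pi ** 2 * (fx[None, :] ** 2 + fy[:, None] ** 2)

# Amplitude processing: filter the complex field before "detection".
amp_filtered = np.fft.ifft2(lap * np.fft.fft2(field))
# Standard digital processing: filter the detected intensity.
int_filtered = np.fft.ifft2(lap * np.fft.fft2(np.abs(field) ** 2))
```

    The two outputs differ because intensity detection discards the field's phase and squares its magnitude before the filter ever acts.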

  13. Image-Processing Software For A Hypercube Computer

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.

    1992-01-01

    The Concurrent Image Processing Executive (CIPE) is a software system intended for developing and using image-processing application programs in a concurrent computing environment. Designed to shield the programmer from the complexities of concurrent-system architecture, it provides an interactive image-processing environment for the end user. CIPE utilizes the architectural characteristics of a particular concurrent system to maximize efficiency while preserving architectural independence for the user and programmer. CIPE runs on a Mark-IIIfp 8-node hypercube computer and an associated SUN-4 host computer.

  14. Complex noise suppression using a sparse representation and 3D filtering of images

    NASA Astrophysics Data System (ADS)

    Kravchenko, V. F.; Ponomaryov, V. I.; Pustovoit, V. I.; Palacios-Enriquez, A.

    2017-08-01

    A novel method for the filtering of images corrupted by complex noise composed of randomly distributed impulses and additive Gaussian noise has been substantiated for the first time. The method consists of three main stages: the detection and filtering of pixels corrupted by impulsive noise; subsequent image processing to suppress the additive noise, based on 3D filtering and a sparse representation of signals in a wavelet basis; and a concluding image processing procedure to clean the final image of errors that emerged at the previous stages. A physical interpretation of the filtering method under complex noise conditions is given, and a filtering block diagram has been developed in accordance with the novel approach. Simulations of the novel image filtering method have shown an advantage of the proposed filtering scheme in terms of generally recognized criteria, such as the structural similarity index measure and the peak signal-to-noise ratio, and when visually comparing the filtered images.
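
    A very reduced sketch of the first two stages, with a plain median filter standing in for both the impulse-repair step and the paper's far more elaborate 3D/sparse-wavelet Gaussian-suppression stage; noise levels and the detection threshold are illustrative assumptions, and the third cleanup stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((32, 32), 100.0)
noisy = clean + 5 * rng.standard_normal((32, 32))       # additive Gaussian
impulses = rng.random((32, 32)) < 0.05                  # 5% impulse sites
noisy[impulses] = rng.choice([0.0, 255.0], impulses.sum())

def median3(img):
    """3x3 median filter via stacked shifted views."""
    padded = np.pad(img, 1, mode="edge")
    stack = [padded[i:i + 32, j:j + 32] for i in range(3) for j in range(3)]
    return np.median(stack, axis=0)

# Stage 1: detect impulse-corrupted pixels (large deviation from the
# local median) and replace only those with the median value.
med = median3(noisy)
detected = np.abs(noisy - med) > 50
stage1 = np.where(detected, med, noisy)

# Stage 2: mild smoothing for the remaining Gaussian component
# (stand-in for the 3D filtering / sparse-wavelet stage).
stage2 = median3(stage1)
```

    Selective replacement in stage 1 matters: filtering every pixel would blur detail that the detector correctly left untouched.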

  15. Sample preparation for SFM imaging of DNA, proteins, and DNA-protein complexes.

    PubMed

    Ristic, Dejan; Sanchez, Humberto; Wyman, Claire

    2011-01-01

    Direct imaging is invaluable for understanding the mechanism of complex genome transactions where proteins work together to organize, transcribe, replicate, and repair DNA. Scanning (or atomic) force microscopy is an ideal tool for this, providing 3D information on molecular structure at nanometer resolution from defined components. This is a convenient and practical addition to in vitro studies as readily obtainable amounts of purified proteins and DNA are required. The images reveal structural details on the size and location of DNA-bound proteins as well as protein-induced arrangement of the DNA, which are directly correlated in the same complexes. In addition, even from static images, the different forms observed and their relative distributions can be used to deduce the variety and stability of different complexes that are necessarily involved in dynamic processes. Recently available instruments that combine fluorescence with topographic imaging allow the identification of specific molecular components in complex assemblies, which broadens the applications and increases the information obtained from direct imaging of molecular complexes. We describe here basic methods for preparing samples of proteins, DNA, and complexes of the two for topographic imaging and quantitative analysis. We also describe special considerations for combined fluorescence and topographic imaging of molecular complexes.

  16. Symmetrical group theory for mathematical complexity reduction of digital holograms

    NASA Astrophysics Data System (ADS)

    Perez-Ramirez, A.; Guerrero-Juk, J.; Sanchez-Lara, R.; Perez-Ramirez, M.; Rodriguez-Blanco, M. A.; May-Alarcon, M.

    2017-10-01

    This work presents the use of mathematical group theory, through an algorithm, to reduce the multiplicative computational complexity in the process of creating digital holograms. An object is considered as a set of point sources, using the mathematical symmetry properties of both the core in the Fresnel integral and the image, where the image is modeled using group theory. The algorithm has a multiplicative complexity of zero and an additive complexity of (k - 1) × N for the case of sparse matrices and binary images, where k is the number of nonzero pixels and N is the total number of points in the image.
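
    The operation count can be illustrated in a few lines: for a binary image, the hologram is a sum of k precomputed point-source kernels over N samples, costing (k - 1) × N complex additions and no multiplications. The toy Fresnel-like kernel and pixel positions below are illustrative, not the paper's construction:

```python
import numpy as np

N = 256                                      # samples in the hologram plane
u = np.arange(N)
kernel = np.exp(1j * np.pi * (u ** 2) / N)   # core, precomputed once

binary_pixels = [3, 17, 42]                  # positions of the k nonzero pixels
k = len(binary_pixels)

# Each binary pixel contributes one shifted copy of the kernel, so the
# hologram is built purely by accumulation -- no per-pixel multiplies.
holo = np.zeros(N, dtype=complex)
for p in binary_pixels:
    holo += np.roll(kernel, p)

additions = (k - 1) * N                      # additive complexity from the paper
```

    The multiplicative cost is zero precisely because the binary pixel values are 1: scaling each kernel copy is unnecessary.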

  17. Cone beam volume tomography: an imaging option for diagnosis of complex mandibular third molar anatomical relationships.

    PubMed

    Danforth, Robert A; Peck, Jerry; Hall, Paul

    2003-11-01

    Complex impacted third molars present potential treatment complications and possible patient morbidity. Objectives of diagnostic imaging are to facilitate diagnosis, decision making, and enhance treatment outcomes. As cases become more complex, advanced multiplane imaging methods allowing for a 3-D view are more likely to meet these objectives than traditional 2-D radiography. Until recently, advanced imaging options were somewhat limited to standard film tomography or medical CT, but development of cone beam volume tomography (CBVT) multiplane 3-D imaging systems specifically for dental use now provides an alternative imaging option. Two cases were utilized to compare the role of CBVT to these other imaging options and to illustrate how multiplane visualization can assist the pretreatment evaluation and decision-making process for complex impacted mandibular third molar cases.

  18. A FUNCTIONAL NEUROIMAGING INVESTIGATION OF THE ROLES OF STRUCTURAL COMPLEXITY AND TASK-DEMAND DURING AUDITORY SENTENCE PROCESSING

    PubMed Central

    Love, Tracy; Haist, Frank; Nicol, Janet; Swinney, David

    2009-01-01

    Using functional magnetic resonance imaging (fMRI), this study directly examined an issue that bridges the potential language processing and multi-modal views of the role of Broca's area: the effects of task demands in language comprehension studies. We presented syntactically simple and complex sentences for auditory comprehension under three different (differentially complex) task-demand conditions: passive listening, probe verification, and theme judgment. Contrary to many language imaging findings, we found that both simple and complex syntactic structures activated left inferior frontal cortex (L-IFC). Critically, we found that activation in these frontal regions increased together with increased task demands. Specifically, tasks that required greater manipulation and comparison of linguistic material recruited L-IFC more strongly, independent of syntactic structure complexity. We argue that much of the presumed syntactic effect previously found in sentence imaging studies of L-IFC may, among other things, reflect the tasks employed in those studies, and that L-IFC is a region underlying mnemonic and other integrative functions on which much language processing may rely. PMID:16881268

  19. Imaging of DNA and Protein by SFM and Combined SFM-TIRF Microscopy.

    PubMed

    Grosbart, Małgorzata; Ristić, Dejan; Sánchez, Humberto; Wyman, Claire

    2018-01-01

    Direct imaging is invaluable for understanding the mechanism of complex genome transactions where proteins work together to organize, transcribe, replicate and repair DNA. Scanning (or atomic) force microscopy is an ideal tool for this, providing 3D information on molecular structure at nm resolution from defined components. This is a convenient and practical addition to in vitro studies as readily obtainable amounts of purified proteins and DNA are required. The images reveal structural details on the size and location of DNA bound proteins as well as protein-induced arrangement of the DNA, which are directly correlated in the same complexes. In addition, even from static images, the different forms observed and their relative distributions can be used to deduce the variety and stability of different complexes that are necessarily involved in dynamic processes. Recently available instruments that combine fluorescence with topographic imaging allow the identification of specific molecular components in complex assemblies, which broadens the applications and increases the information obtained from direct imaging of molecular complexes. We describe here basic methods for preparing samples of proteins, DNA and complexes of the two for topographic imaging and quantitative analysis. We also describe special considerations for combined fluorescence and topographic imaging of molecular complexes.

  20. An image overall complexity evaluation method based on LSD line detection

    NASA Astrophysics Data System (ADS)

    Li, Jianan; Duan, Jin; Yang, Xu; Xiao, Bo

    2017-04-01

    Man-made scenes, from urban road networks to engineered structures, contain many linear features, so the image complexity of linear information has become an important research direction in digital image processing. This paper detects the straight-line information in an image and uses the detected lines as parameter indices to establish a quantitative, accurate mathematical relationship for complexity. We use the LSD line detection algorithm, which has good straight-line detection performance, to extract the lines, and partition the detected lines using an expert-consultation strategy. A neural network is then trained to obtain the weight coefficients of the indices, and the image complexity is calculated with the resulting complexity model. Experimental results show that the proposed method is effective: the number of straight lines in the image and their degree of dispersion and uniformity all affect the complexity of the image.
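
    A toy version of such a line-based complexity score, with the LSD detector and the trained network replaced by hard-coded segments and weights (both are illustrative assumptions, not the paper's values):

```python
import numpy as np

# Detected line segments as (x1, y1, x2, y2) rows; in the real method
# these would come from the LSD detector.
segments = np.array([[0, 0, 10, 0],
                     [5, 5, 5, 20],
                     [2, 8, 12, 18]], dtype=float)

lengths = np.hypot(segments[:, 2] - segments[:, 0],
                   segments[:, 3] - segments[:, 1])

# Indices in the spirit of the paper: line count, dispersion of
# lengths, and a crude uniformity proxy.
indices = np.array([len(segments), lengths.std(), lengths.mean()])

weights = np.array([0.5, 0.3, 0.2])       # would come from NN training
complexity = float(weights @ indices)     # weighted complexity score
```

    The weighted sum is the whole "complexity calculation model" here; the paper's contribution lies in choosing the indices and training the weights.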

  1. Digital imaging technology assessment: Digital document storage project

    NASA Technical Reports Server (NTRS)

    1989-01-01

    An ongoing technical assessment and requirements definition project is examining the potential role of digital imaging technology at NASA's STI facility. The focus is on the basic components of imaging technology in today's marketplace as well as the components anticipated in the near future. Presented is a requirement specification for a prototype project, an initial examination of current image processing at the STI facility, and an initial summary of image processing projects at other sites. Operational imaging systems incorporate scanners, optical storage, high resolution monitors, processing nodes, magnetic storage, jukeboxes, specialized boards, optical character recognition gear, pixel addressable printers, communications, and complex software processes.

  2. Auditory-musical processing in autism spectrum disorders: a review of behavioral and brain imaging studies.

    PubMed

    Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L

    2012-04-01

    Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions. © 2012 New York Academy of Sciences.

  3. Medical image processing using neural networks based on multivalued and universal binary neurons

    NASA Astrophysics Data System (ADS)

    Aizenberg, Igor N.; Aizenberg, Naum N.; Gotko, Eugen S.; Sochka, Vladimir A.

    1998-06-01

    Cellular neural networks (CNNs) have become a very effective means of solving various kinds of image processing problems. CNNs based on multi-valued neurons (CNN-MVN) and on universal binary neurons (CNN-UBN) are specific kinds of CNN. MVN and UBN are neurons with complex-valued weights and complex internal arithmetic; their main feature is the ability to implement an arbitrary mapping between inputs and output (MVN), and an arbitrary, not only threshold, Boolean function (UBN). A great advantage of the CNN is the ability to implement any linear and many nonlinear filters in the spatial domain. Together with noise removal, CNNs can implement filters that amplify high and medium frequencies. These filters are well suited to image enhancement and to extracting details against a complex background, so CNNs make it possible to organize the entire processing chain from filtering through extraction of the important details. The organization of this process for medical image processing is considered in the paper. Attention is concentrated on the processing of X-ray and ultrasound images corresponding to different oncological (or related) pathologies. Additionally, we consider a new neural network structure for the differential diagnosis of breast cancer.

  4. Satellite on-board real-time SAR processor prototype

    NASA Astrophysics Data System (ADS)

    Bergeron, Alain; Doucet, Michel; Harnisch, Bernd; Suess, Martin; Marchese, Linda; Bourqui, Pascal; Desnoyers, Nicholas; Legros, Mathieu; Guillot, Ludovic; Mercier, Luc; Châteauneuf, François

    2017-11-01

    A compact real-time optronic SAR processor has been successfully developed and tested up to Technology Readiness Level 4 (TRL4), breadboard validation in a laboratory environment. SAR, or synthetic aperture radar, is an active system allowing day-and-night imaging independent of the cloud coverage of the planet. SAR raw data is a set of complex data in range and azimuth that cannot be compressed; specifically, for planetary missions and unmanned aerial vehicle (UAV) systems with limited communication data rates, this is a clear disadvantage. SAR images are typically processed electronically by applying dedicated Fourier transformations. This, however, can also be performed optically in real time; indeed, the first SAR images were processed optically. The optical Fourier processor architecture provides inherent parallel computing capabilities, allowing real-time SAR data processing and thus the ability for compression and strongly reduced communication bandwidth requirements for the satellite. SAR signal return data are in general complex: both amplitude and phase must be combined optically in the SAR processor for each range and azimuth pixel. Amplitude and phase are generated by dedicated spatial light modulators and superimposed by an optical relay set-up. The spatial light modulators display the full complex raw data over a two-dimensional format, one for the azimuth and one for the range. Since the entire signal history is displayed at once, the processor operates in parallel, yielding real-time performance without a resulting bottleneck. Processing of both azimuth and range information is performed in a single pass. This paper focuses on the onboard capabilities of the compact optical SAR processor prototype, which allows in-orbit processing of SAR images. Examples of processed ENVISAT ASAR images are presented, and various SAR processor parameters such as processing capabilities, image quality (point target analysis), weight and size are reviewed.
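
    The Fourier operation the optical processor parallelizes is, digitally, a matched filter. A minimal range-compression sketch on one simulated point target, with arbitrary chirp rate, array size and delay (none taken from the paper):

```python
import numpy as np

n = 512
t = np.linspace(-1, 1, n)
chirp = np.exp(1j * np.pi * 50 * t ** 2)   # transmitted chirp replica

delay = 100
raw = np.roll(chirp, delay)                # raw echo from one point target

# Matched filtering: multiply the raw-data spectrum by the conjugate
# chirp spectrum and inverse-transform; the optical processor performs
# the equivalent Fourier operations in parallel with light.
compressed = np.fft.ifft(np.fft.fft(raw) * np.conj(np.fft.fft(chirp)))
peak = int(np.argmax(np.abs(compressed)))  # target appears at its delay
```

    The uncompressible spread-out chirp collapses to a sharp peak at the target's delay, which is why focusing on board (optically or otherwise) enables compression before downlink.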

  5. Semi-automated Neuron Boundary Detection and Nonbranching Process Segmentation in Electron Microscopy Images

    PubMed Central

    Jurrus, Elizabeth; Watanabe, Shigeki; Giuly, Richard J.; Paiva, Antonio R. C.; Ellisman, Mark H.; Jorgensen, Erik M.; Tasdizen, Tolga

    2013-01-01

    Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of these data make human interpretation a labor-intensive task, so new segmentation techniques for identifying neurons in these feature-rich datasets are required to aid the analysis. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images, and for visualizing them in three dimensions. It combines automated segmentation techniques with a graphical user interface for correcting mistakes in the automated process. The automated process first uses machine learning and image processing techniques to identify the neuron membranes that delineate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose enables users to quickly segment cellular processes in large volumes. PMID:22644867

  6. [Optimizing histological image data for 3-D reconstruction using an image equalizer].

    PubMed

    Roth, A; Melzer, K; Annacker, K; Lipinski, H G; Wiemann, M; Bingmann, D

    2002-01-01

    Bone cells form a wired network within the extracellular bone matrix. To analyse this complex 3D structure, we employed a confocal fluorescence imaging procedure to visualize live bone cells within their native surroundings. By means of newly developed image processing software, the "Image Equalizer", we aimed to enhance the contrast and eliminate artefacts in such a way that cell bodies as well as fine interconnecting processes were visible.

  7. Image wavelet decomposition and applications

    NASA Technical Reports Server (NTRS)

    Treil, N.; Mallat, S.; Bajcsy, R.

    1989-01-01

    The general problem of computer vision has been investigated for more than 20 years and is still one of the most challenging fields in artificial intelligence. Indeed, a look at the human visual system gives an idea of the complexity of any solution to the problem of visual recognition. This general task can be decomposed into a whole hierarchy of problems, ranging from pixel processing to high-level segmentation and complex object recognition. Contrasting an image at different representations provides useful information such as edges. An example of low-level signal and image processing using the theory of wavelets is introduced, which provides the basis for multiresolution representation. Like the human brain, we use a multiorientation process which detects features independently in different orientation sectors, so images of the same orientation but of different resolutions are contrasted to gather information about an image. An interesting image representation using energy zero-crossings is developed. This representation is shown to be experimentally complete and leads to some higher-level applications such as edge and corner finding, which in turn provide two basic steps toward image segmentation. The possibilities of feedback between different levels of processing are also discussed.

  8. False Memories for Shape Activate the Lateral Occipital Complex

    ERIC Educational Resources Information Center

    Karanian, Jessica M.; Slotnick, Scott D.

    2017-01-01

    Previous functional magnetic resonance imaging evidence has shown that false memories arise from higher-level conscious processing regions rather than lower-level sensory processing regions. In the present study, we assessed whether the lateral occipital complex (LOC)--a lower-level conscious shape processing region--was associated with false…

  9. Imaging complex objects using learning tomography

    NASA Astrophysics Data System (ADS)

    Lim, JooWon; Goy, Alexandre; Shoreh, Morteza Hasani; Unser, Michael; Psaltis, Demetri

    2018-02-01

    Optical diffraction tomography (ODT) can be described as a scattering process through an inhomogeneous medium. Because of multiple scattering, an inherent nonlinearity relates the scattering medium to the scattered field. Multiple scattering is often assumed to be negligible in weakly scattering media, but this assumption becomes invalid as the sample grows more complex, resulting in distorted image reconstructions. Multiple scattering can be simulated using the beam propagation method (BPM) as the forward model of ODT, combined with an iterative reconstruction scheme. The iterative error-reduction scheme and the multi-layer structure of BPM are similar to neural networks; we therefore refer to our imaging method as learning tomography (LT). To fairly assess the performance of LT in imaging complex samples, we compared LT with the conventional iterative linear scheme using Mie theory, which provides the ground truth. We also demonstrate the capacity of LT to image complex samples using experimental data from a biological cell.
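    The BPM forward model can be sketched as a split-step loop: diffract in the spatial-frequency domain, then apply a phase screen for the local refractive-index contrast. A minimal 1D paraxial sketch, with an illustrative function name and parameters (not the authors' implementation):

```python
import numpy as np

def bpm_propagate(field, n, dz, dx, k0):
    """One split-step of the beam propagation method.

    First applies the paraxial diffraction operator in the
    spatial-frequency domain, then a phase screen for the local
    refractive-index contrast n(x) - 1 over a slice of thickness dz.
    """
    kx = 2 * np.pi * np.fft.fftfreq(len(field), dx)
    diffraction = np.exp(-1j * kx**2 * dz / (2 * k0))       # paraxial propagator
    field = np.fft.ifft(np.fft.fft(field) * diffraction)
    return field * np.exp(1j * k0 * (n - 1) * dz)           # refraction phase

# A uniform plane wave in a homogeneous medium (n = 1) propagates unchanged.
field = np.ones(64, dtype=complex)
out = bpm_propagate(field, n=np.ones(64), dz=1.0, dx=0.5, k0=2 * np.pi / 0.5)
```

    Chaining such steps over many slices gives the multi-layer structure whose resemblance to a neural network motivates the LT framing.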

  10. ART AND SCIENCE OF IMAGE MAPS.

    USGS Publications Warehouse

    Kidwell, Richard D.; McSweeney, Joseph A.

    1985-01-01

    The visual image of reflected light is influenced by the complex interplay of human color discrimination, spatial relationships, surface texture, and the spectral purity of light, dyes, and pigments. Scientific theories of image processing may not always achieve acceptable results because the many factors involved, some psychological, are partly unpredictable. Tonal relationships that affect digital image processing, and the transfer functions used to transform the continuous-tone source image into a lithographic image, can be examined for insight into where art and science fuse in the production process. The application of art and science in image map production at the U.S. Geological Survey is illustrated and discussed.

  11. Semi-Automated Neuron Boundary Detection and Nonbranching Process Segmentation in Electron Microscopy Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jurrus, Elizabeth R.; Watanabe, Shigeki; Giuly, Richard J.

    2013-01-01

    Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of this data makes human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature-rich datasets are required. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images, and for visualizing them in three dimensions. It combines automated segmentation techniques with a graphical user interface for correcting mistakes in the automated process. The automated process first uses machine learning and image processing techniques to identify neuron membranes that delineate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose enables users to quickly segment cellular processes in large volumes.
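    The section-to-section connection step can be sketched as overlap-based linking of labeled regions; `link_sections` and its overlap criterion are illustrative assumptions, not the paper's exact correlation procedure:

```python
import numpy as np

def link_sections(labels_a, labels_b, min_overlap=0.5):
    """Link 2D cell regions in adjacent sections by pixel overlap.

    Returns pairs (label_in_a, label_in_b) whose intersection covers at
    least `min_overlap` of the smaller region - a simple stand-in for
    the correlation step used to assemble nonbranching 3D processes.
    Label 0 is treated as background.
    """
    links = []
    for la in np.unique(labels_a):
        if la == 0:
            continue
        mask_a = labels_a == la
        for lb in np.unique(labels_b[mask_a]):
            if lb == 0:
                continue
            mask_b = labels_b == lb
            inter = np.logical_and(mask_a, mask_b).sum()
            if inter / min(mask_a.sum(), mask_b.sum()) >= min_overlap:
                links.append((int(la), int(lb)))
    return links

# Two toy sections: region 1 in section A overlaps region 7 in section B,
# while region 2 has no counterpart.
a = np.zeros((6, 6), dtype=int); a[1:3, 1:3] = 1; a[4:6, 4:6] = 2
b = np.zeros((6, 6), dtype=int); b[1:3, 1:4] = 7
links = link_sections(a, b)
```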

  12. Image-Based Grouping during Binocular Rivalry Is Dictated by Eye-Of-Origin

    PubMed Central

    Stuit, Sjoerd M.; Paffen, Chris L. E.; van der Smagt, Maarten J.; Verstraten, Frans A. J.

    2014-01-01

    Prolonged viewing of dichoptically presented images with different content results in perceptual alternations known as binocular rivalry. This phenomenon is thought to be the result of competition at a local level, where local rivalry zones interact to give rise to a single, global dominant percept. Certain perceived combinations that result from this local competition are known to last longer than others, which is referred to as grouping during binocular rivalry. In recent years, the phenomenon has been suggested to be the result of competition at both eye- and image-based processing levels, although the exact contribution from each level remains elusive. Here we use a paradigm designed specifically to quantify the contribution of eye- and image-based processing to grouping during rivalry. In this paradigm we used sine-wave gratings as well as upright and inverted faces, with and without binocular disparity-based occlusion. These stimuli and conditions were used because they are known to result in processing at different stages throughout the visual processing hierarchy. Specifically, more complex images were included in order to maximize the potential contribution of image-based grouping. In spite of this, our results show that increasing image complexity did not lead to an increase in the contribution of image-based processing to grouping during rivalry. In fact, the results show that grouping was primarily affected by the eye-of-origin of the image parts, irrespective of stimulus type. We suggest that image content affects grouping during binocular rivalry at low-level processing stages, where it is intertwined with eye-of-origin information. PMID:24987847

  13. Autonomous control systems: applications to remote sensing and image processing

    NASA Astrophysics Data System (ADS)

    Jamshidi, Mohammad

    2001-11-01

    One of the main challenges of any control (or image processing) paradigm is handling complex systems under unforeseen uncertainties. A system may be called complex here if its dimension (order) is too high, its model (if available) is nonlinear and interconnected, and information on the system is so uncertain that classical techniques cannot easily handle the problem. Examples of complex systems are power networks, space robotic colonies, the national air traffic control system, integrated manufacturing plants, the Hubble Telescope, and the International Space Station. Soft computing, a consortium of methodologies such as fuzzy logic, neuro-computing, genetic algorithms, and genetic programming, has proven to be a powerful tool for adding autonomy and semi-autonomy to many complex systems. For such systems the size of the soft computing control architecture will be nearly infinite. In this paper new paradigms using soft computing approaches are utilized to design autonomous controllers and image enhancers for a number of application areas. These applications are satellite array formations for synthetic aperture radar interferometry (InSAR) and enhancement of analog and digital images.

  14. A Protocol for Using Förster Resonance Energy Transfer (FRET)-force Biosensors to Measure Mechanical Forces across the Nuclear LINC Complex.

    PubMed

    Arsenovic, Paul T; Bathula, Kranthidhar; Conway, Daniel E

    2017-04-11

    The LINC complex has been hypothesized to be the critical structure that mediates the transfer of mechanical forces from the cytoskeleton to the nucleus. Nesprin-2G is a key component of the LINC complex that connects the actin cytoskeleton to membrane proteins (SUN domain proteins) in the perinuclear space. These membrane proteins connect to lamins inside the nucleus. Recently, a Förster Resonance Energy Transfer (FRET)-force probe was cloned into mini-Nesprin-2G (Nesprin-TS (tension sensor)) and used to measure tension across Nesprin-2G in live NIH3T3 fibroblasts. This paper describes the process of using Nesprin-TS to measure LINC complex forces in NIH3T3 fibroblasts. To extract FRET information from Nesprin-TS, an outline of how to spectrally unmix raw spectral images into acceptor and donor fluorescent channels is also presented. Using open-source software (ImageJ), images are pre-processed and transformed into ratiometric images. Finally, FRET data of Nesprin-TS is presented, along with strategies for how to compare data across different experimental groups.
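    The ratiometric-image step can be sketched outside ImageJ as an acceptor/donor division with background subtraction and a donor mask; `fret_ratio` and its parameters are illustrative, not the protocol's exact workflow:

```python
import numpy as np

def fret_ratio(acceptor, donor, background=0.0, min_donor=1e-3):
    """Pixel-wise FRET ratio image (acceptor / donor).

    Background is subtracted from both channels first; pixels whose
    donor signal falls below `min_donor` are masked out (set to NaN)
    to avoid dividing by noise.
    """
    a = acceptor.astype(float) - background
    d = donor.astype(float) - background
    ratio = np.full_like(d, np.nan)
    valid = d > min_donor
    ratio[valid] = a[valid] / d[valid]
    return ratio

# Toy 2x2 channels: one pixel has essentially no donor signal and is masked.
donor = np.array([[10.0, 0.0], [5.0, 20.0]])
acceptor = np.array([[20.0, 1.0], [5.0, 10.0]])
r = fret_ratio(acceptor, donor)
```

    In the actual protocol the acceptor and donor channels come from spectral unmixing of the raw spectral images before this division is applied.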

  15. Medical image segmentation based on SLIC superpixels model

    NASA Astrophysics Data System (ADS)

    Chen, Xiang-ting; Zhang, Fan; Zhang, Ruo-ya

    2017-01-01

    Medical imaging is widely used in clinical practice and is an important basis for medical experts to diagnose disease. However, medical images suffer from many unstable factors: the imaging mechanism is complex, target displacement causes reconstruction defects, and the partial volume effect leads to error and equipment wear, all of which greatly increase the complexity of subsequent image processing. A segmentation algorithm based on SLIC (Simple Linear Iterative Clustering) superpixels is used in the preprocessing stage to eliminate the influence of reconstruction defects and noise by means of feature similarity. At the same time, the strong clustering behavior greatly reduces the complexity of the algorithm, providing an effective basis for rapid diagnosis by experts.
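    The core of SLIC is k-means clustering in a joint intensity/position feature space with grid-seeded centers. A simplified grayscale sketch (without the full SLIC search-window optimization; names and parameters are illustrative):

```python
import numpy as np

def slic_like(img, n_seg=4, compactness=0.1, n_iter=5):
    """Minimal SLIC-style superpixel sketch for a grayscale image.

    Cluster centers are seeded on a regular grid (n_seg should be a
    perfect square here) and updated by k-means in a joint
    (intensity, row, col) space; `compactness` weights spatial
    proximity against intensity similarity.
    """
    h, w = img.shape
    rows, cols = np.mgrid[0:h, 0:w]
    feats = np.stack([img.ravel(),
                      compactness * rows.ravel(),
                      compactness * cols.ravel()], axis=1)
    side = int(np.sqrt(n_seg))
    rs = np.linspace(0, h - 1, side + 2)[1:-1]
    cs = np.linspace(0, w - 1, side + 2)[1:-1]
    centers = np.array([[img[int(r), int(c)], compactness * r, compactness * c]
                        for r in rs for c in cs])
    for _ in range(n_iter):
        d2 = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = feats[labels == k].mean(0)
    return labels.reshape(h, w)

# Four homogeneous quadrants should each collapse into one superpixel.
img = np.zeros((8, 8))
img[:4, 4:] = 1.0
img[4:, :4] = 2.0
img[4:, 4:] = 3.0
labels = slic_like(img, n_seg=4)
```

    The real SLIC restricts each center's assignment search to a local window, which is what makes it linear in the number of pixels.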

  16. Using Storyboarding to Model Gene Expression

    ERIC Educational Resources Information Center

    Korb, Michele; Colton, Shannon; Vogt, Gina

    2015-01-01

    Students often find it challenging to create images of complex, abstract biological processes. Using modified storyboards, which contain predrawn images, students can visualize the process and anchor ideas from activities, labs, and lectures. Storyboards are useful in assessing students' understanding of content in larger contexts. They enable…

  17. Low-complexity video encoding method for wireless image transmission in capsule endoscope.

    PubMed

    Takizawa, Kenichi; Hamaguchi, Kiyoshi

    2010-01-01

    This paper presents a low-complexity video encoding method applicable to wireless image transmission in capsule endoscopes. The encoding method is based on Wyner-Ziv theory, in which information available at the transmitter is exploited only as side information at the receiver. Complex processes in video encoding, such as motion vector estimation, are therefore moved to the receiver side, which has a larger-capacity battery. As a result, the encoding process reduces to decimating the channel-coded original data. We provide a performance evaluation of a low-density parity-check (LDPC) coding method in the AWGN channel.
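    The Wyner-Ziv idea, sending only parity information and letting the receiver's correlated side information do the rest, can be illustrated with a toy Hamming(7,4) syndrome coder standing in for the paper's LDPC codes:

```python
import numpy as np

# Parity-check matrix of the Hamming(7,4) code: column j is the binary
# representation of j (1..7), so a single-bit error's syndrome names its position.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def encode_syndrome(x):
    """Transmitter: send only the 3-bit syndrome of the 7-bit block."""
    return H @ x % 2

def decode_with_side_info(syndrome, y):
    """Receiver: correct the side information y (at most 1 bit off from
    the original x) using only the transmitted syndrome."""
    s = (H @ y % 2) ^ syndrome          # syndrome of the error pattern x ^ y
    pos = s[0] * 4 + s[1] * 2 + s[2]    # error position; 0 means "no error"
    x_hat = y.copy()
    if pos:
        x_hat[pos - 1] ^= 1
    return x_hat

x = np.array([1, 0, 1, 1, 0, 0, 1])     # original frame bits
y = x.copy(); y[4] ^= 1                 # receiver's correlated estimate
x_hat = decode_with_side_info(encode_syndrome(x), y)
```

    Only 3 of 7 bits cross the channel, yet the receiver recovers the block exactly, which is the compression mechanism the capsule-side encoder exploits.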

  18. Unsupervised background-constrained tank segmentation of infrared images in complex background based on the Otsu method.

    PubMed

    Zhou, Yulong; Gao, Min; Fang, Dan; Zhang, Baoquan

    2016-01-01

    In an effort to implement fast and effective tank segmentation from infrared images in complex background, the threshold of the maximum between-class variance method (i.e., the Otsu method) is analyzed and its working mechanism is discussed. A fast and effective method for tank segmentation from infrared images in complex background is then proposed, based on the Otsu method, via constraining the complex background of the image. Considering the complexity of the background, the original image is first divided into three classes (target region, middle background, and lower background) by maximizing the sum of their between-class variances. Then, an unsupervised background constraint is applied based on the within-class variance of the target region, simplifying the original image. Finally, the Otsu method is applied to the simplified image for threshold selection. Experimental results on a variety of tank infrared images (880 × 480 pixels) in complex background demonstrate that the proposed method achieves better segmentation performance and is even comparable with manual segmentation. In addition, its average running time is only 9.22 ms, indicating good real-time performance.

  19. Image processing for optical mapping.

    PubMed

    Ravindran, Prabu; Gupta, Aditya

    2015-01-01

    Optical Mapping is an established single-molecule, whole-genome analysis system, which has been used to gain a comprehensive understanding of genomic structure and to study structural variation of complex genomes. A critical component of the Optical Mapping system is the image processing module, which extracts single-molecule restriction maps from image datasets of immobilized, restriction-digested and fluorescently stained large DNA molecules. In this review, we describe robust and efficient image processing techniques to process these massive datasets and extract accurate restriction maps in the presence of noise, ambiguity and confounding artifacts. We also highlight a few applications of the Optical Mapping system.

  20. Segregating the core computational faculty of human language from working memory.

    PubMed

    Makuuchi, Michiru; Bahlmann, Jörg; Anwander, Alfred; Friederici, Angela D

    2009-05-19

    In contrast to simple structures in animal vocal behavior, hierarchical structures such as center-embedded sentences manifest the core computational faculty of human language. Previous artificial grammar learning studies found that the left pars opercularis (LPO) subserves the processing of hierarchical structures. However, it is not clear whether this area is activated by the structural complexity per se or by the increased memory load entailed in processing hierarchical structures. To dissociate the effect of structural complexity from the effect of memory cost, we conducted a functional magnetic resonance imaging study of German sentence processing with a 2-way factorial design tapping structural complexity (with/without hierarchical structure, i.e., center-embedding of clauses) and working memory load (long/short distance between syntactically dependent elements; i.e., subject nouns and their respective verbs). Functional imaging data revealed that the processes for structure and memory operate separately but co-operatively in the left inferior frontal gyrus; activities in the LPO increased as a function of structural complexity, whereas activities in the left inferior frontal sulcus (LIFS) were modulated by the distance over which the syntactic information had to be transferred. Diffusion tensor imaging showed that these 2 regions were interconnected through white matter fibers. Moreover, functional coupling between the 2 regions was found to increase during the processing of complex, hierarchically structured sentences. These results suggest a neuroanatomical segregation of syntax-related aspects represented in the LPO from memory-related aspects reflected in the LIFS, which are, however, highly interconnected functionally and anatomically.

  1. Imaging of the meninges and the extra-axial spaces.

    PubMed

    Kirmi, Olga; Sheerin, Fintan; Patel, Neel

    2009-12-01

    The separate meningeal layers and extra-axial spaces are complex and can only be differentiated on imaging by pathologic processes. The location of such processes can be differentiated using different imaging modalities. In this pictorial review we address the imaging techniques, enhancement and location patterns, and disease spread that promote accurate localization of the pathology, thus improving diagnostic accuracy. Typical and unusual magnetic resonance (MR), computed tomography (CT), and ultrasound imaging findings of many conditions affecting these layers and spaces are described.

  2. Feature evaluation of complex hysteresis smoothing and its practical applications to noisy SEM images.

    PubMed

    Suzuki, Kazuhiko; Oho, Eisaku

    2013-01-01

    The quality of a scanning electron microscopy (SEM) image is strongly influenced by noise, a fundamental drawback of the SEM instrument. Complex hysteresis smoothing (CHS) was previously developed for noise removal from SEM images; the noise removal is performed by monitoring and properly processing the amplitude of the SEM signal. As it stands, CHS is not widely utilized, though it has several advantages for SEM. For example, the resolution of an image processed by CHS is essentially equal to that of the original image. In order to find wide application of the CHS method in microscopy, the features of CHS, which until now have not been well clarified, are evaluated correctly. As an application of the evaluation results, the cursor width (CW), the sole processing parameter of CHS, is determined more properly using the standard deviation of the noise, Nσ. In addition, the disadvantage that CHS cannot remove noise with excessively large amplitude is remedied by a postprocessing step. CHS is successfully applicable to SEM images with various noise amplitudes. © Wiley Periodicals, Inc.
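    The principle of hysteresis smoothing can be sketched in 1D: the output holds its value until the input leaves a dead band of the cursor width, so noise smaller than the cursor width is removed while genuine edges survive (an illustrative simplification, not the authors' CHS implementation):

```python
import numpy as np

def hysteresis_smooth(signal, cw):
    """1D hysteresis smoothing sketch.

    The output y stays put while the input remains inside a dead band
    of width `cw` centred on y; when the input escapes the band, y is
    dragged along at the band edge. Low-amplitude noise is flattened
    without blurring large steps.
    """
    out = np.empty_like(signal, dtype=float)
    y = float(signal[0])
    for i, x in enumerate(signal):
        if x > y + cw / 2:
            y = x - cw / 2
        elif x < y - cw / 2:
            y = x + cw / 2
        out[i] = y
    return out

# Noise of amplitude below cw/2 is removed; the 0 -> 10 step is preserved.
sig = np.array([0.0, 0.4, -0.4, 0.3, 10.0, 10.3, 9.7, 10.2])
sm = hysteresis_smooth(sig, cw=1.0)
```

    Tying `cw` to the noise standard deviation, as the abstract describes with Nσ, is what makes the single parameter easy to set in practice.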

  3. BioImageXD: an open, general-purpose and high-throughput image-processing platform.

    PubMed

    Kankaanpää, Pasi; Paavolainen, Lassi; Tiitta, Silja; Karjalainen, Mikko; Päivärinne, Joacim; Nieminen, Jonna; Marjomäki, Varpu; Heino, Jyrki; White, Daniel J

    2012-06-28

    BioImageXD puts open-source computer science tools for three-dimensional visualization and analysis into the hands of all researchers, through a user-friendly graphical interface tuned to the needs of biologists. BioImageXD has no restrictive licenses or undisclosed algorithms and enables publication of precise, reproducible and modifiable workflows. It allows simple construction of processing pipelines and should enable biologists to perform challenging analyses of complex processes. We demonstrate its performance in a study of integrin clustering in response to selected inhibitors.

  4. An image processing and analysis tool for identifying and analysing complex plant root systems in 3D soil using non-destructive analysis: Root1.

    PubMed

    Flavel, Richard J; Guppy, Chris N; Rabbi, Sheikh M R; Young, Iain M

    2017-01-01

    The objective of this study was to develop a flexible and free image processing and analysis solution, based on the public-domain ImageJ platform, for the segmentation and analysis of complex biological plant root systems in soil from 3D x-ray tomography images. Contrasting root architectures from wheat, barley and chickpea root systems were grown in soil and scanned using a high-resolution micro-tomography system. A macro (Root1) was developed that reliably identified complex root systems with good to high accuracy (10% overestimation for chickpea, 1% underestimation for wheat, 8% underestimation for barley) and provided analysis of root length and angle. In-built flexibility allowed the user to (a) amend any aspect of the macro to account for specific user preferences, and (b) take account of computational limitations of the platform. The platform is free, flexible and accurate in analysing root system metrics.

  5. Predictive images of postoperative levator resection outcome using image processing software.

    PubMed

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. The analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of the levator aponeurosis and Müller's muscle complex (levator resection). Predictive images were prepared from preoperative photographs using image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearance, and 55 patients (84.8%) responded positively regarding the usefulness of processed images for predicting postoperative appearance. Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery.

  6. Predictive images of postoperative levator resection outcome using image processing software

    PubMed Central

    Mawatari, Yuki; Fukushima, Mikiko

    2016-01-01

    Purpose This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Methods Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller’s muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Results Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Conclusion Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery. PMID:27757008

  7. An improved method for polarimetric image restoration in interferometry

    NASA Astrophysics Data System (ADS)

    Pratley, Luke; Johnston-Hollitt, Melanie

    2016-11-01

    Interferometric radio astronomy data require the effects of limited coverage in the Fourier plane to be accounted for via a deconvolution process. For the last 40 years this process, known as `cleaning', has been performed almost exclusively on all Stokes parameters individually, as if they were independent scalar images. However, here we demonstrate that, for the linear polarization P, this approach fails to properly account for the complex vector nature of the emission, resulting in a process that depends on the axes under which the deconvolution is performed. We present an improved method, `Generalized Complex CLEAN', which properly accounts for the complex vector nature of polarized emission and is invariant under rotations of the deconvolution axes. We use two Australia Telescope Compact Array data sets to test standard and complex CLEAN versions of the Högbom and SDI (Steer-Dewdney-Ito) CLEAN algorithms. We show that in general the complex CLEAN version of each algorithm produces more accurate clean components with fewer spurious detections and, owing to fewer iterations, lower computation cost than the current methods. In particular, we find that the complex SDI CLEAN produces the best results for diffuse polarized sources as compared with standard CLEAN algorithms and other complex CLEAN algorithms. Given the move to wide-field, high-resolution polarimetric imaging with future telescopes such as the Square Kilometre Array, we suggest that Generalized Complex CLEAN should be adopted as the deconvolution method for all future polarimetric surveys and, in particular, that the complex version of an SDI CLEAN should be used.
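    The difference between scalar and complex cleaning can be sketched with a minimal 1D Högbom-style loop that treats P = Q + iU as a single complex image (an illustrative toy, not the Generalized Complex CLEAN implementation):

```python
import numpy as np

def complex_hogbom(dirty, beam, gain=0.1, n_iter=500, threshold=1e-3):
    """Minimal 1D complex Högbom CLEAN sketch for P = Q + iU.

    The peak is located on |P|, which is invariant under rotations of
    the Q/U axes, and a scaled copy of the dirty beam is subtracted
    from the complex residual. Cleaning Q and U as independent scalar
    images would instead depend on the chosen axes.
    """
    res = np.asarray(dirty, dtype=complex).copy()
    model = np.zeros_like(res)
    c = len(beam) // 2
    for _ in range(n_iter):
        k = int(np.argmax(np.abs(res)))
        if np.abs(res[k]) < threshold:
            break
        comp = gain * res[k]
        model[k] += comp
        for j in range(len(res)):       # subtract the beam centred on k
            idx = c + (j - k)
            if 0 <= idx < len(beam):
                res[j] -= comp * beam[idx]
    return model, res

# Point source with polarization Q + iU = 1 + 2j seen through a
# three-tap dirty beam; CLEAN should recover the complex flux at its pixel.
true = np.zeros(16, dtype=complex); true[8] = 1 + 2j
beam = np.array([0.5, 1.0, 0.5])
dirty = np.convolve(true, beam, mode="same")
model, residual = complex_hogbom(dirty, beam)
```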

  8. Flash trajectory imaging of target 3D motion

    NASA Astrophysics Data System (ADS)

    Wang, Xinwei; Zhou, Yan; Fan, Songtao; He, Jun; Liu, Yuliang

    2011-03-01

    We present a flash trajectory imaging technique which can directly obtain target trajectories and realize non-contact measurement of motion parameters by range-gated imaging and time delay integration. Range-gated imaging gives the range of targets and realizes silhouette detection, which can directly extract targets from complex backgrounds and decrease the complexity of moving-target image processing. Time delay integration increases the information in a single frame so that one can directly obtain the moving trajectory. In this paper, we study the algorithm behind flash trajectory imaging and report initial experiments which successfully obtained the trajectory of a falling shuttlecock. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectories and can give the motion parameters of moving targets.

  9. Optimized design of embedded DSP system hardware supporting complex algorithms

    NASA Astrophysics Data System (ADS)

    Li, Yanhua; Wang, Xiangjun; Zhou, Xinling

    2003-09-01

    The paper presents an optimized design method for a flexible and economical embedded DSP system that can implement complex processing algorithms such as biometric recognition and real-time image processing. It consists of a floating-point DSP, 512 Kbytes of data RAM, 1 Mbytes of FLASH program memory, a CPLD for flexible logic control of the input channel, and an RS-485 transceiver for local network communication. Because the design employs a DSP with a high performance-price ratio, the TMS320C6712, and a large FLASH, the system permits loading and performing complex algorithms with little algorithm optimization or code reduction. The CPLD provides flexible logic control for the whole DSP board, especially the input channel, and allows convenient interfacing between different sensors and the DSP system. The transceiver circuit transfers data between the DSP and a host computer. The paper also introduces some key technologies that make the whole system work efficiently. Owing to the characteristics referred to above, the hardware is a perfect platform for multi-channel data collection, image processing, and other signal processing with high performance and adaptability. The application section of this paper presents how this hardware is adapted for a biometric identification system with high identification precision. The results reveal that this hardware easily interfaces with a CMOS imager and is capable of carrying out complex biometric identification algorithms, which require real-time processing.

  10. Research on pre-processing of QR Code

    NASA Astrophysics Data System (ADS)

    Sun, Haixing; Xia, Haojie; Dong, Ning

    2013-10-01

    QR code encodes many kinds of information thanks to its advantages: large storage capacity, high reliability, omnidirectional ultra-high-speed reading, small printing size and highly efficient representation of Chinese characters, etc. In order to obtain a clearer binarized image from a complex background and improve the recognition rate of QR code (Quick Response Code), this paper researches pre-processing methods for QR code and presents algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by modifying Sauvola's adaptive text binarization method. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
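    Sauvola's adaptive thresholding, the method modified here, computes a per-pixel threshold t = m·(1 + k·(s/R − 1)) from the local mean m and standard deviation s. A direct (unoptimized) sketch; real implementations use integral images for the local statistics:

```python
import numpy as np

def sauvola_binarize(img, window=15, k=0.2, R=128.0):
    """Sauvola's adaptive binarization.

    For each pixel, the threshold t = m * (1 + k * (s / R - 1)) is
    computed from the mean m and standard deviation s of a
    `window` x `window` neighbourhood, so dark text stays dark even
    under an uneven background.
    """
    h, w = img.shape
    half = window // 2
    padded = np.pad(img.astype(float), half, mode="reflect")
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + window, j:j + window]
            m, s = patch.mean(), patch.std()
            t = m * (1 + k * (s / R - 1))
            out[i, j] = img[i, j] > t
    return out

# Bright background with a dark "ink" block: background maps to True,
# ink to False.
img = np.full((20, 20), 200.0)
img[8:12, 8:12] = 10.0
binary = sauvola_binarize(img, window=7)
```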

  11. Global high-frequency source imaging accounting for complexity in Green's functions

    NASA Astrophysics Data System (ADS)

    Lambert, V.; Zhan, Z.

    2017-12-01

    The general characterization of earthquake source processes at long periods has seen great success via seismic finite fault inversion/modeling. Complementary techniques, such as seismic back-projection, extend the capabilities of source imaging to higher frequencies and reveal finer details of the rupture process. However, such high frequency methods are limited by the implicit assumption of simple Green's functions, which restricts the use of global arrays and introduces artifacts (e.g., sweeping effects, depth/water phases) that require careful attention. This motivates the implementation of an imaging technique that considers the potential complexity of Green's functions at high frequencies. We propose an alternative inversion approach based on the modest assumption that the path effects contributing to signals within high-coherency subarrays share a similar form. Under this assumption, we develop a method that can combine multiple high-coherency subarrays to invert for a sparse set of subevents. By accounting for potential variability in the Green's functions among subarrays, our method allows for the utilization of heterogeneous global networks for robust high resolution imaging of the complex rupture process. The approach also provides a consistent framework for examining frequency-dependent radiation across a broad frequency spectrum.

  12. Numerical image manipulation and display in solar astronomy

    NASA Technical Reports Server (NTRS)

    Levine, R. H.; Flagg, J. C.

    1977-01-01

    The paper describes the system configuration and data manipulation capabilities of a solar image display system which allows interactive analysis of visual images and on-line manipulation of digital data. Image processing features include smoothing or filtering of images stored in the display, contrast enhancement, and blinking or flickering images. A computer with a core memory of 28,672 words provides the capacity to perform complex calculations based on stored images, including computing histograms, selecting subsets of images for further analysis, combining portions of images to produce images with physical meaning, and constructing mathematical models of features in an image. Some of the processing modes are illustrated by some image sequences from solar observations.

  13. Analysis and Recognition of Curve Type as The Basis of Object Recognition in Image

    NASA Astrophysics Data System (ADS)

    Nugraha, Nurma; Madenda, Sarifuddin; Indarti, Dina; Dewi Agushinta, R.; Ernastuti

    2016-06-01

    An object in an image, when analyzed further, shows characteristics that distinguish it from other objects in the image. Characteristics used for object recognition in an image can be color, shape, pattern, texture and spatial information representing objects in the digital image. A method has recently been developed for image feature extraction that analyzes object characteristics as simple curves and searches object features using chain codes. This study develops an algorithm for the analysis and recognition of curve type as the basis for object recognition in images, proposing the addition of complex-curve characteristics, with a maximum of four branches, to be used in the object recognition process. A complex curve is defined as a curve that has a point of intersection. Using several edge-detected images, the algorithm was able to analyze and recognize complex curve shapes well.
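    Chain-code-based curve description can be sketched with the Freeman 8-direction code, together with the paper's notion that a complex curve contains a self-intersection (function names are illustrative):

```python
# Freeman 8-direction chain code: 0 = east, numbered counter-clockwise.
DIRS = {(0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
        (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7}

def chain_code(points):
    """Freeman chain code of a curve given as successive (row, col) points."""
    return [DIRS[(r2 - r1, c2 - c1)]
            for (r1, c1), (r2, c2) in zip(points, points[1:])]

def is_complex_curve(points):
    """A complex curve, per the definition above, revisits a point
    (i.e., has a point of intersection)."""
    return len(points) != len(set(points))

# A simple curve walking east, then north-east, then north:
pts = [(5, 0), (5, 1), (4, 2), (3, 2)]
code = chain_code(pts)
```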

  14. Hierarchical storage of large volumes of multidetector CT data using distributed servers

    NASA Astrophysics Data System (ADS)

    Ratib, Osman; Rosset, Antoine; Heuberger, Joris; Bandon, David

    2006-03-01

    Multidetector scanners and hybrid multimodality scanners can generate large numbers of high-resolution images, resulting in very large data sets. In most cases, these datasets are generated for the sole purpose of producing secondary processed and 3D-rendered images, as well as oblique and curved multiplanar reformatted images. It is therefore not essential to archive the original images after they have been processed. We have developed an architecture of distributed archive servers for temporary storage of large image datasets for 3D rendering and image processing, without the need for long-term storage in a PACS archive. With the relatively low cost of storage devices it is possible to configure these servers to hold several months or even years of data, long enough to allow subsequent re-processing if required by specific clinical situations. We tested the latest generation of RAID servers provided by Apple with a capacity of 5 TBytes. We implemented peer-to-peer data-access software based on our open-source image management software OsiriX, allowing remote workstations to directly access DICOM image files located on the server through a technology called Bonjour. This architecture offers seamless integration of multiple servers and workstations without the need for a central database or complex workflow management tools. It allows efficient access to image data from multiple workstations for image analysis and visualization without the need for image data transfer. It provides a convenient alternative to a centralized PACS architecture while avoiding complex and time-consuming data transfer and storage.

  15. Using AI Planning Techniques to Automatically Generate Image Processing Procedures: A Preliminary Report

    NASA Technical Reports Server (NTRS)

    Chien, S.

    1994-01-01

    This paper describes work on the Multimission VICAR Planner (MVP) system to automatically construct executable image processing procedures for custom image processing requests at the JPL Multimission Image Processing Lab (MIPL). This paper focuses on two issues. First, the large search spaces caused by complex plans required the use of hand-encoded control information. To address this in a manner similar to that used by human experts, MVP uses a decomposition-based planner to implement hierarchical/skeletal planning at the higher level, and then uses a classical operator-based planner to solve subproblems in contexts defined by the high-level decomposition.

  16. UCXp camera imaging principle and key technologies of data post-processing

    NASA Astrophysics Data System (ADS)

    Yuan, Fangyan; Li, Guoqing; Zuo, Zhengli; Liu, Jianmin; Wu, Liang; Yu, Xiaoping; Zhao, Haitao

    2014-03-01

    The large-format digital aerial camera UCXp was introduced into the Chinese market in 2008. Its images consist of 17310 columns and 11310 rows, with a pixel size of 6 μm. The UCXp has many advantages over other cameras of its generation: its multiple lenses are exposed almost simultaneously, and it has no oblique lens. The camera has a complex imaging process, whose principle is detailed in this paper. The UCXp image post-processing method, including data pre-processing and orthophoto production, is also emphasized. Based on data from new Beichuan County, this paper describes the data processing and its effects.

  17. Functional Magnetic Resonance Imaging

    ERIC Educational Resources Information Center

    Voos, Avery; Pelphrey, Kevin

    2013-01-01

    Functional magnetic resonance imaging (fMRI), with its excellent spatial resolution and ability to visualize networks of neuroanatomical structures involved in complex information processing, has become the dominant technique for the study of brain function and its development. The accessibility of in-vivo pediatric brain-imaging techniques…

  18. Applications of digital image processing techniques to problems of data registration and correlation

    NASA Technical Reports Server (NTRS)

    Green, W. B.

    1978-01-01

    An overview is presented of the evolution of the computer configuration at JPL's Image Processing Laboratory (IPL). The development of techniques for the geometric transformation of digital imagery is discussed and consideration is given to automated and semiautomated image registration, and the registration of imaging and nonimaging data. The increasing complexity of image processing tasks at IPL is illustrated with examples of various applications from the planetary program and earth resources activities. It is noted that the registration of existing geocoded data bases with Landsat imagery will continue to be important if the Landsat data is to be of genuine use to the user community.

  19. Noise-Coupled Image Rejection Architecture of Complex Bandpass ΔΣAD Modulator

    NASA Astrophysics Data System (ADS)

    San, Hao; Kobayashi, Haruo

    This paper proposes a new technique for realizing an image-rejection function through a noise-coupling architecture, used in a complex bandpass ΔΣAD modulator. The complex bandpass ΔΣAD modulator processes just the input I and Q signals, not image signals, so AD conversion can be realized with low power dissipation. It realizes an asymmetric noise-shaped spectrum, which is desirable for low-IF receiver applications. However, the performance of the complex bandpass ΔΣAD modulator suffers from mismatch between the internal analog I and Q paths. I/Q path mismatch causes an image signal, and the quantization noise of the mirror image band aliases into the desired signal band, which degrades the SQNDR (Signal to Quantization Noise and Distortion Ratio) of the modulator. In our proposed modulator architecture, an extra notch for image rejection is realized by a noise-coupled topology. We add only passive capacitors and switches to the modulator; the additional integrator circuit, composed of an operational amplifier in the conventional image-rejection realization, is not necessary. Therefore, the performance of the complex modulator can be raised without additional power dissipation. We have performed simulations in MATLAB to confirm the validity of the proposed architecture. The simulation results show that the proposed architecture effectively realizes image rejection and improves the SQNDR of the complex bandpass ΔΣAD modulator.
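For intuition about the underlying ΔΣ principle, a real-valued first-order lowpass toy (much simpler than the paper's complex bandpass modulator, and not its architecture) shows how a 1-bit quantizer in a feedback loop tracks the input on average while pushing quantization noise out of band:

```python
import numpy as np

def first_order_dsm(x):
    """First-order delta-sigma modulator with a 1-bit quantizer.

    The integrator accumulates the error between the input and the
    fed-back output, so the +/-1 bitstream tracks the input on average
    and the quantization noise is shaped toward high frequencies.
    """
    v, y = 0.0, 0.0
    out = []
    for s in x:
        v += s - y                     # integrate the error
        y = 1.0 if v >= 0 else -1.0    # 1-bit quantizer
        out.append(y)
    return np.array(out)

# a DC input is recovered as the mean of the bitstream
bits = first_order_dsm(np.full(10000, 0.3))
```

The noise-coupling and image-rejection notch of the paper operate on the same error-feedback idea, but in the complex (I/Q) domain.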

  20. Segregating the core computational faculty of human language from working memory

    PubMed Central

    Makuuchi, Michiru; Bahlmann, Jörg; Anwander, Alfred; Friederici, Angela D.

    2009-01-01

    In contrast to simple structures in animal vocal behavior, hierarchical structures such as center-embedded sentences manifest the core computational faculty of human language. Previous artificial grammar learning studies found that the left pars opercularis (LPO) subserves the processing of hierarchical structures. However, it is not clear whether this area is activated by the structural complexity per se or by the increased memory load entailed in processing hierarchical structures. To dissociate the effect of structural complexity from the effect of memory cost, we conducted a functional magnetic resonance imaging study of German sentence processing with a 2-way factorial design tapping structural complexity (with/without hierarchical structure, i.e., center-embedding of clauses) and working memory load (long/short distance between syntactically dependent elements; i.e., subject nouns and their respective verbs). Functional imaging data revealed that the processes for structure and memory operate separately but co-operatively in the left inferior frontal gyrus; activities in the LPO increased as a function of structural complexity, whereas activities in the left inferior frontal sulcus (LIFS) were modulated by the distance over which the syntactic information had to be transferred. Diffusion tensor imaging showed that these 2 regions were interconnected through white matter fibers. Moreover, functional coupling between the 2 regions was found to increase during the processing of complex, hierarchically structured sentences. These results suggest a neuroanatomical segregation of syntax-related aspects represented in the LPO from memory-related aspects reflected in the LIFS, which are, however, highly interconnected functionally and anatomically. PMID:19416819

  1. Research and implementation of the algorithm for unwrapped and distortion correction basing on CORDIC for panoramic image

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenhai; Li, Kejie; Wu, Xiaobing; Zhang, Shujiang

    2008-03-01

    An unwrapping and distortion-correction algorithm based on the Coordinate Rotation Digital Computer (CORDIC) and bilinear interpolation is presented in this paper, for processing dynamic panoramic annular images. An original annular panoramic image captured by a panoramic annular lens (PAL) can be unwrapped and corrected to a conventional rectangular image without distortion, which is much more consistent with human vision. The algorithm is modeled in VHDL and implemented on an FPGA. The experimental results show that the proposed unwrapping and distortion-correction algorithm has low computational complexity, and that the architecture for dynamic panoramic image processing has low hardware cost and power consumption, validating the proposed algorithm.
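The unwrapping step can be illustrated in a few lines (a plain floating-point sketch, with `np.cos`/`np.sin` standing in for the CORDIC rotations of the FPGA implementation; `r_in` and `r_out` are assumed inner and outer radii of the annulus):

```python
import numpy as np

def unwrap_panorama(annular, r_in, r_out, out_w, out_h):
    """Unwrap an annular panoramic image to a rectangular one.

    Each output column maps to an angle and each output row to a radius
    between r_in and r_out; the annular image is sampled at the
    corresponding Cartesian point with bilinear interpolation.
    """
    h, w = annular.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = np.zeros((out_h, out_w))
    for v in range(out_h):
        r = r_in + (r_out - r_in) * v / (out_h - 1)
        for u in range(out_w):
            theta = 2 * np.pi * u / out_w
            x = cx + r * np.cos(theta)
            y = cy + r * np.sin(theta)
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            fx, fy = x - x0, y - y0
            # bilinear interpolation between the four neighbours
            out[v, u] = ((1-fy)*((1-fx)*annular[y0, x0] + fx*annular[y0, x0+1])
                         + fy*((1-fx)*annular[y0+1, x0] + fx*annular[y0+1, x0+1]))
    return out

# sanity check: an annulus whose value equals the radius unwraps to
# rows of (approximately) constant value
yy, xx = np.mgrid[0:64, 0:64]
annulus = np.hypot(yy - 31.5, xx - 31.5)
flat = unwrap_panorama(annulus, r_in=10, r_out=25, out_w=32, out_h=8)
```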

  2. Comparative performance evaluation of transform coding in image pre-processing

    NASA Astrophysics Data System (ADS)

    Menon, Vignesh V.; NB, Harikrishnan; Narayanan, Gayathri; CK, Niveditha

    2017-07-01

    We are in the midst of a communication transformation that drives the development and dissemination of pioneering communication systems with ever-increasing fidelity and resolution. Research in image processing techniques has been driven by a growing need for faster and easier encoding, storage and transmission of visual information. In this paper, the researchers examine techniques that can be used at the transmitter end to ease the transmission and reconstruction of images. The researchers investigate the performance of different image transform coding schemes used in pre-processing: their comparison and effectiveness, the necessary and sufficient conditions, and their properties and implementation complexity. Building on prior advancements in image processing techniques, the researchers compare the performance of several contemporary image pre-processing frameworks: compressed sensing, singular value decomposition, and the integer wavelet transform. The paper shows the potential of the integer wavelet transform as an efficient pre-processing scheme.

  3. Imaging proteins at the single-molecule level.

    PubMed

    Longchamp, Jean-Nicolas; Rauschenbach, Stephan; Abb, Sabine; Escher, Conrad; Latychevskaia, Tatiana; Kern, Klaus; Fink, Hans-Werner

    2017-02-14

    Imaging single proteins has been a long-standing ambition for advancing various fields in natural science, as for instance structural biology, biophysics, and molecular nanotechnology. In particular, revealing the distinct conformations of an individual protein is of utmost importance. Here, we show the imaging of individual proteins and protein complexes by low-energy electron holography. Samples of individual proteins and protein complexes on ultraclean freestanding graphene were prepared by soft-landing electrospray ion beam deposition, which allows chemical- and conformational-specific selection and gentle deposition. Low-energy electrons do not induce radiation damage, which enables acquiring subnanometer resolution images of individual proteins (cytochrome C and BSA) as well as of protein complexes (hemoglobin), which are not the result of an averaging process.

  4. Imaging proteins at the single-molecule level

    PubMed Central

    Longchamp, Jean-Nicolas; Rauschenbach, Stephan; Abb, Sabine; Escher, Conrad; Latychevskaia, Tatiana; Kern, Klaus; Fink, Hans-Werner

    2017-01-01

    Imaging single proteins has been a long-standing ambition for advancing various fields in natural science, as for instance structural biology, biophysics, and molecular nanotechnology. In particular, revealing the distinct conformations of an individual protein is of utmost importance. Here, we show the imaging of individual proteins and protein complexes by low-energy electron holography. Samples of individual proteins and protein complexes on ultraclean freestanding graphene were prepared by soft-landing electrospray ion beam deposition, which allows chemical- and conformational-specific selection and gentle deposition. Low-energy electrons do not induce radiation damage, which enables acquiring subnanometer resolution images of individual proteins (cytochrome C and BSA) as well as of protein complexes (hemoglobin), which are not the result of an averaging process. PMID:28087691

  5. Referential processing: reciprocity and correlates of naming and imaging.

    PubMed

    Paivio, A; Clark, J M; Digdon, N; Bons, T

    1989-03-01

    To shed light on the referential processes that underlie mental translation between representations of objects and words, we studied the reciprocity and determinants of naming and imaging reaction times (RT). Ninety-six subjects pressed a key when they had covertly named 248 pictures or imaged to their names. Mean naming and imagery RTs for each item were correlated with one another, and with properties of names, images, and their interconnections suggested by prior research and dual coding theory. Imagery RTs correlated .56 (df = 246) with manual naming RTs and .58 with voicekey naming RTs from prior studies. A factor analysis of the RTs and of 31 item characteristics revealed 7 dimensions. Imagery and naming RTs loaded on a common referential factor that included variables related to both directions of processing (e.g., missing names and missing images). Naming RTs also loaded on a nonverbal-to-verbal factor that included such variables as number of different names, whereas imagery RTs loaded on a verbal-to-nonverbal factor that included such variables as rated consistency of imagery. The other factors were verbal familiarity, verbal complexity, nonverbal familiarity, and nonverbal complexity. The findings confirm the reciprocity of imaging and naming, and their relation to constructs associated with distinct phases of referential processing.

  6. a Comparative Case Study of Reflection Seismic Imaging Method

    NASA Astrophysics Data System (ADS)

    Alamooti, M.; Aydin, A.

    2017-12-01

    Seismic imaging is the most common means of gathering information about subsurface structural features. The accuracy of seismic images may be highly variable, depending on the complexity of the subsurface and on how the seismic data are processed. One of the crucial steps in this process, especially in layered sequences with complicated structure, is the time and/or depth migration of the seismic data. The primary purpose of migration is to increase the spatial resolution of seismic images by repositioning the recorded seismic signal back to its original point of reflection in time/space, which enhances information about complex structure. In this study, our objective is to process a seismic data set (courtesy of the University of South Carolina) to generate an image on which the Magruder fault near Allendale, SC can be clearly distinguished and its attitude accurately depicted. The data were gathered by the common mid-point method with 60 geophones equally spaced along an approximately 550 m long traverse over nearly flat ground. The results obtained from the application of different migration algorithms (including finite-difference and Kirchhoff) are compared in the time and depth domains to investigate the efficiency of each algorithm in reducing processing time and improving the accuracy of seismic images in reflecting the correct position of the Magruder fault.

  7. Advances in interpretation of subsurface processes with time-lapse electrical imaging

    USGS Publications Warehouse

    Singha, Kamini; Day-Lewis, Frederick D.; Johnson, Timothy C.; Slater, Lee D.

    2015-01-01

    Electrical geophysical methods, including electrical resistivity, time-domain induced polarization, and complex resistivity, have become commonly used to image the near subsurface. Here, we outline their utility for time-lapse imaging of hydrological, geochemical, and biogeochemical processes, focusing on new instrumentation, processing, and analysis techniques specific to monitoring. We review data collection procedures, parameters measured, and petrophysical relationships and then outline the state of the science with respect to inversion methodologies, including coupled inversion. We conclude by highlighting recent research focused on innovative applications of time-lapse imaging in hydrology, biology, ecology, and geochemistry, among other areas of interest.

  8. Advances in interpretation of subsurface processes with time-lapse electrical imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singha, Kamini; Day-Lewis, Frederick D.; Johnson, Timothy C.

    2015-03-15

    Electrical geophysical methods, including electrical resistivity, time-domain induced polarization, and complex resistivity, have become commonly used to image the near subsurface. Here, we outline their utility for time-lapse imaging of hydrological, geochemical, and biogeochemical processes, focusing on new instrumentation, processing, and analysis techniques specific to monitoring. We review data collection procedures, parameters measured, and petrophysical relationships and then outline the state of the science with respect to inversion methodologies, including coupled inversion. We conclude by highlighting recent research focused on innovative applications of time-lapse imaging in hydrology, biology, ecology, and geochemistry, among other areas of interest.

  9. Applications of Fractal Analytical Techniques in the Estimation of Operational Scale

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.; Quattrochi, Dale A.

    2000-01-01

    The observational scale and the resolution of remotely sensed imagery are essential considerations in the interpretation process. Many atmospheric, hydrologic, and other natural and human-influenced spatial phenomena are inherently scale dependent and are governed by different physical processes at different spatial domains. This spatial and operational heterogeneity constrains the ability to compare interpretations of phenomena and processes observed in higher spatial resolution imagery to similar interpretations obtained from lower resolution imagery. This is a particularly acute problem, since long-term global change investigations will require high spatial resolution Earth Observing System (EOS), Landsat 7, or commercial satellite data to be combined with lower resolution imagery from older sensors such as Landsat TM and MSS. Fractal analysis is a useful technique for identifying the effects of scale changes on remotely sensed imagery. The fractal dimension of an image is a non-integer value between two and three which indicates the degree of complexity in the texture and shapes depicted in the image. A true fractal surface exhibits self-similarity, a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution, and the slope of the fractal dimension-resolution relationship would be zero. Most geographical phenomena, however, are not self-similar at all scales, but they can be modeled by a stochastic fractal in which the scaling properties of the image exhibit patterns that can be described by statistics such as area-perimeter ratios and autocovariances.
Stochastic fractal sets relax the self-similarity assumption and measure many scales and resolutions to represent the varying form of a phenomenon as the pixel size is increased in a convolution process. We have observed that for images of homogeneous land covers, the fractal dimension varies linearly with changes in resolution or pixel size over the range of past, current, and planned space-borne sensors. This relationship differs significantly in images of agricultural, urban, and forest land covers, with urban areas retaining the same level of complexity, forested areas growing smoother, and agricultural areas growing more complex as small pixels are aggregated into larger, mixed pixels. Images of scenes having a mixture of land covers have fractal dimensions that exhibit a non-linear, complex relationship to pixel size. Measuring the fractal dimension of a difference image derived from two images of the same area obtained on different dates showed that the fractal dimension increased steadily, then exhibited a sharp decrease at increasing levels of pixel aggregation. This breakpoint of the fractal dimension/resolution plot is related to the spatial domain or operational scale of the phenomenon exhibiting the predominant visible difference between the two images (in this case, mountain snow cover). The degree to which an image departs from a theoretical ideal fractal surface provides clues as to how much information is altered or lost in the processes of rescaling and rectification. The measured fractal dimension of complex, composite land covers such as urban areas also provides a useful textural index that can assist image classification of complex scenes.
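The fractal dimensions discussed above are commonly estimated by box counting: count the boxes that contain part of the pattern at several box sizes and fit the log-log slope. A minimal sketch for binary images (not the authors' estimator, which also uses area-perimeter ratios and autocovariances):

```python
import numpy as np

def box_count_dimension(mask):
    """Estimate the box-counting dimension of a square binary image.

    Count occupied boxes at a series of box sizes and fit the slope of
    log(count) against log(1/size).
    """
    n = mask.shape[0]
    sizes = [s for s in (1, 2, 4, 8, 16, 32) if s < n]
    counts = []
    for s in sizes:
        # reduce each s-by-s box to "occupied or not"
        view = mask[:n - n % s, :n - n % s].reshape(n // s, s, n // s, s)
        counts.append(view.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# a filled square is 2-dimensional; a straight line is 1-dimensional
n = 64
square = np.ones((n, n), bool)
line = np.zeros((n, n), bool)
line[n // 2] = True
d_square = box_count_dimension(square)
d_line = box_count_dimension(line)
```

For real imagery the slope is read from the same fit, with values between the two extremes indicating intermediate textural complexity.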

  10. In-vivo gingival sulcus imaging using full-range, complex-conjugate-free, endoscopic spectral domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Huang, Yong; Zhang, Kang; Yi, WonJin; Kang, Jin U.

    2012-01-01

    Frequent monitoring of the gingival sulcus can provide valuable information for judging the presence and severity of periodontal disease. Optical coherence tomography, as a 3D high-resolution, high-speed imaging modality, is able to provide information on pocket depth, gum contour, gum texture and gum recession simultaneously. A handheld, forward-viewing miniature resonant fiber-scanning probe was developed for in-vivo gingival sulcus imaging. The fiber cantilever, driven by magnetic force, vibrates at its resonant frequency. A synchronized linear phase modulation was applied in the reference arm by the galvanometer-driven reference mirror. Full-range, complex-conjugate-free, real-time endoscopic SD-OCT was achieved by accelerating the data processing on a graphics processing unit. Preliminary results showed real-time in-vivo imaging at 33 fps with an imaging range of 2 mm (lateral) by 3 mm (depth). The gap between the tooth and the gum was clearly visualized. Further quantitative analysis of the gingival sulcus will be performed on the acquired images.

  11. A programmable computational image sensor for high-speed vision

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian

    2013-08-01

    In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array and a RISC core. The pixel-parallel PE array is responsible for transferring, storing and processing raw image data in SIMD fashion, with its own programming language. The RP array is a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE and RP arrays can complete a great amount of computation in few instruction cycles and therefore satisfy low- and mid-level high-speed image processing requirements. The RISC core controls the whole system operation and executes some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect the major components. A programming language and corresponding tool chain for this computational image sensor have also been developed.

  12. More Olympica Fossae

    NASA Image and Video Library

    2016-02-22

    This image from NASA's 2001 Mars Odyssey spacecraft shows a different part of Olympica Fossae. In this region, lava channels dominate. The complex interaction of volcanic and tectonic processes is illustrated by the central feature in this image.

  13. Strength and coherence of binocular rivalry depends on shared stimulus complexity.

    PubMed

    Alais, David; Melcher, David

    2007-01-01

    Presenting incompatible images to the eyes results in alternations of conscious perception, a phenomenon known as binocular rivalry. We examined rivalry using either simple stimuli (oriented gratings) or coherent visual objects (faces, houses etc). Two rivalry characteristics were measured: Depth of rivalry suppression and coherence of alternations. Rivalry between coherent visual objects exhibits deep suppression and coherent rivalry, whereas rivalry between gratings exhibits shallow suppression and piecemeal rivalry. Interestingly, rivalry between a simple and a complex stimulus displays the same characteristics (shallow and piecemeal) as rivalry between two simple stimuli. Thus, complex stimuli fail to rival globally unless the fellow stimulus is also global. We also conducted a face adaptation experiment. Adaptation to rivaling faces improved subsequent face discrimination (as expected), but adaptation to a rivaling face/grating pair did not. To explain this, we suggest rivalry must be an early and local process (at least initially), instigated by the failure of binocular fusion, which can then become globally organized by feedback from higher-level areas when both rivalry stimuli are global, so that rivalry tends to oscillate coherently. These globally assembled images then flow through object processing areas, with the dominant image gaining in relative strength in a form of 'biased competition', therefore accounting for the deeper suppression of global images. In contrast, when only one eye receives a global image, local piecemeal suppression from the fellow eye overrides the organizing effects of global feedback to prevent coherent image formation. This indicates the primacy of local over global processes in rivalry.

  14. Visualizing medium and biodistribution in complex cell culture bioreactors using in vivo imaging.

    PubMed

    Ratcliffe, E; Thomas, R J; Stacey, A J

    2014-01-01

    There is a dearth of technology and methods to aid process characterization, control and scale-up of complex culture platforms that provide niche micro-environments for some stem cell-based products. We have demonstrated a novel use of 3D in vivo imaging systems to visualize medium flow and cell distribution within a complex culture platform (hollow fiber bioreactor) to aid characterization of potential spatial heterogeneity and identify potential routes of bioreactor failure or sources of variability. This can then aid process characterization and control of such systems with a view to scale-up. Two potential sources of variation were observed with multiple bioreactors repeatedly imaged using two different imaging systems: shortcutting of medium between adjacent inlet and outlet ports, with the potential to create medium gradients within the bioreactor, and localization of bioluminescent murine 4T1-luc2 cells upon inoculation, with the potential to create variable seeding densities at different points within the cell growth chamber. The ability of the imaging technique to identify these key operational bioreactor characteristics demonstrates an emerging technique in troubleshooting and engineering optimization of bioreactor performance. © 2013 American Institute of Chemical Engineers.

  15. Neural Correlates in the Processing of Phoneme-Level Complexity in Vowel Production

    ERIC Educational Resources Information Center

    Park, Haeil; Iverson, Gregory K.; Park, Hae-Jeong

    2011-01-01

    We investigated how articulatory complexity at the phoneme level is manifested neurobiologically in an overt production task. fMRI images were acquired from young Korean-speaking adults as they pronounced bisyllabic pseudowords in which we manipulated phonological complexity defined in terms of vowel duration and instability (viz., COMPLEX:…

  16. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance and low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrate that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, the cost-effective hardware core is developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
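Of the DSP stages listed, white balance is the simplest to sketch. A common low-complexity baseline (not necessarily the authors' algorithm) is the gray-world assumption: scale each color channel so that all channel means become equal.

```python
import numpy as np

def gray_world_white_balance(img):
    """Gray-world white balance for an H x W x 3 image.

    Each channel is scaled so that the three channel means coincide,
    on the assumption that the scene averages to gray.
    """
    means = img.reshape(-1, 3).mean(axis=0)
    gain = means.mean() / means          # per-channel gain
    return np.clip(img * gain, 0, 255)   # keep values in display range

# a red-tinted flat image: channel means 80 / 40 / 20 are equalized
img = np.zeros((4, 4, 3))
img[..., 0], img[..., 1], img[..., 2] = 80.0, 40.0, 20.0
balanced = gray_world_white_balance(img)
```

The per-channel multiply-and-clip structure is what makes such stages cheap to realize in dedicated hardware.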

  17. Uncooled infrared imaging using bimaterial microcantilever arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grbovic, Dragoslav; Lavrik, Nickolay V; Rajic, Slobodan

    2006-01-01

    We report on the fabrication and characterization of a microcantilever-based uncooled focal plane array (FPA) for infrared imaging. By combining a streamlined design of microcantilever thermal transducers with a highly efficient optical readout, we minimized fabrication complexity while achieving a competitive level of imaging performance. The microcantilever FPAs were fabricated using a straightforward process that involved only three photolithographic steps (i.e., three masks). A designed and constructed prototype IR imager employed a simple optical readout based on a noncoherent low-power light source. The main figures of merit of the IR imager were found to be comparable to those of uncooled MEMS infrared detectors with a substantially higher degree of fabrication complexity. In particular, the NETD and the response time of the implemented MEMS IR detector were measured to be as low as 0.5 K and 6 ms, respectively. The potential of the implemented designs can also be concluded from the fact that the constructed prototype enabled IR imaging of close-to-room-temperature objects without the use of any advanced data processing. The most unique and practically valuable feature of the implemented FPAs, however, is their scalability to high-resolution formats, such as 2000×2000, without progressively growing device complexity and cost.

  18. Removal of intensity bias in magnitude spin-echo MRI images by nonlinear diffusion filtering

    NASA Astrophysics Data System (ADS)

    Samsonov, Alexei A.; Johnson, Chris R.

    2004-05-01

    MRI data analysis is routinely done on the magnitude part of complex images. While both the real and imaginary image channels contain Gaussian noise, magnitude MRI data follow a Rician distribution. However, conventional filtering methods often assume image noise to be zero-mean and Gaussian distributed, so estimating the underlying image from magnitude data produces a biased result. The bias may lead to significant image errors, especially in areas of low signal-to-noise ratio (SNR). Incorporating the Rician PDF into a noise-filtering procedure can significantly complicate the method both algorithmically and computationally. In this paper, we demonstrate that the inherent phase smoothness of spin-echo MRI images can be utilized for separate filtering of the real and imaginary complex image channels to achieve unbiased image denoising. The concept is demonstrated with a novel nonlinear diffusion filtering scheme developed for complex image filtering. In our proposed method, the separate diffusion processes are coupled through combined diffusion coefficients determined from the image magnitude. The new method has been validated with simulated and real MRI data, providing efficient denoising and bias removal in conventional and black-blood angiography MRI images obtained using fast spin-echo acquisition protocols.
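The bias the authors describe is easy to reproduce numerically: in a low-SNR region, smoothing the magnitude image (plain averaging stands in here for any zero-mean-noise filter) overestimates the true signal, whereas smoothing the real and imaginary channels first does not. A sketch with simulated data (the signal level and noise level are made-up values):

```python
import numpy as np

rng = np.random.default_rng(0)

# true complex signal of magnitude 1, heavily corrupted (SNR = 1)
true_amplitude = 1.0
n = 100_000
noise_sd = 1.0
re = true_amplitude + rng.normal(0, noise_sd, n)
im = rng.normal(0, noise_sd, n)

# filtering the magnitude: Rician noise has a positive mean, so the
# estimate is biased upward
mag_estimate = np.mean(np.hypot(re, im))

# filtering real and imaginary channels separately, then taking the
# magnitude: the zero-mean Gaussian noise averages away without bias
complex_estimate = np.hypot(np.mean(re), np.mean(im))
```

The paper's contribution is doing this channel-wise filtering with coupled nonlinear diffusion rather than plain averaging, which requires the smooth-phase property of spin-echo images.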

  19. Asymmetric multiple information cryptosystem based on chaotic spiral phase mask and random spectrum decomposition

    NASA Astrophysics Data System (ADS)

    Rafiq Abuturab, Muhammad

    2018-01-01

    A new asymmetric multiple-information cryptosystem based on a chaotic spiral phase mask (CSPM) and random spectrum decomposition is put forward. In the proposed system, each channel of the secret color image is first modulated with a CSPM and then gyrator transformed. The gyrator spectrum is randomly divided into two complex-valued masks. The same procedure is applied to the other secret images to obtain their corresponding first and second complex-valued masks. Finally, the first and second masks of each channel are independently added to produce the first and second complex ciphertexts, respectively. The main feature of the proposed method is that the different secret images are encrypted by different CSPMs with different parameters, which serve as sensitive private decryption keys completely unknown to unauthorized users. Consequently, the proposed system is resistant to potential attacks. Moreover, the CSPMs are easy to position in the decoding process owing to their centering mark on the axial focal ring. The retrieved secret images are free from cross-talk noise effects. The decryption process can be implemented by optical experiment. Numerical simulation results demonstrate the viability and security of the proposed method.
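The random spectrum decomposition step can be sketched as follows (with an FFT standing in for the gyrator transform, which NumPy does not provide): one share is drawn at random and the other is the remainder, so the two complex-valued masks sum exactly to the spectrum while neither alone determines it.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_decompose(spectrum):
    """Split a complex spectrum into two random complex-valued masks.

    mask1 is drawn at random; mask2 is the remainder, so each share
    alone looks like noise but mask1 + mask2 reconstructs the
    spectrum exactly.
    """
    mask1 = rng.normal(size=spectrum.shape) + 1j * rng.normal(size=spectrum.shape)
    return mask1, spectrum - mask1

# stand-in for a gyrator spectrum of one color channel
spec = np.fft.fft2(rng.random((8, 8)))
m1, m2 = random_decompose(spec)
```

In the cryptosystem the shares from several secret images are then summed into the two ciphertexts, and the CSPM parameters act as the private keys needed to invert the modulation.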

  20. Realization of the FPGA-based reconfigurable computing environment by the example of morphological processing of a grayscale image

    NASA Astrophysics Data System (ADS)

    Shatravin, V.; Shashev, D. V.

    2018-05-01

    Currently, robots are increasingly being used in every industry. One of the most high-tech areas is the creation of completely autonomous robotic devices, including vehicles. Results of research worldwide demonstrate the efficiency of vision systems in autonomous robotic devices. However, the use of these systems is limited by the computational and energy resources available on the robotic device. The paper describes the results of applying an original approach to image processing on reconfigurable computing environments, using the example of morphological operations over grayscale images. This approach is promising for realizing complex image processing algorithms and real-time image analysis in autonomous robotic devices.
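
    For reference, the grayscale morphological operations the paper maps onto reconfigurable hardware reduce to running maxima and minima over a structuring element. A minimal numpy sketch with a flat 3×3 element (a software stand-in, not the FPGA realization):

```python
import numpy as np

def gray_morph(img, size=3, op=np.maximum):
    """Grayscale morphology over a flat size x size structuring element:
    np.maximum gives dilation, np.minimum gives erosion."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    shifted = [padded[dy:dy + h, dx:dx + w]
               for dy in range(size) for dx in range(size)]
    return op.reduce(np.stack(shifted))

img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 255                                   # single bright pixel
print(int(gray_morph(img, op=np.maximum)[1, 1]))  # 255: dilation spreads it
print(int(gray_morph(img, op=np.minimum)[2, 2]))  # 0: erosion removes it
```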

  1. Image processing for cryogenic transmission electron microscopy of symmetry-mismatched complexes.

    PubMed

    Huiskonen, Juha T

    2018-02-08

    Cryogenic transmission electron microscopy (cryo-TEM) is a high-resolution biological imaging method, whereby biological samples, such as purified proteins, macromolecular complexes, viral particles, organelles and cells, are embedded in vitreous ice preserving their native structures. Due to the sensitivity of biological materials to the electron beam of the microscope, only relatively low electron doses can be applied during imaging. As a result, the signal arising from the structure of interest is overpowered by noise in the images. To increase the signal-to-noise ratio, different image processing-based strategies that aim at coherent averaging of signal have been devised. In such strategies, images are generally assumed to arise from multiple identical copies of the structure. Prior to averaging, the images must be grouped according to the view of the structure they represent, and images representing the same view must be simultaneously aligned relative to each other. For computational reconstruction of the three-dimensional structure, images must contain different views of the original structure. Structures with multiple symmetry-related substructures are advantageous in averaging approaches because each image provides multiple views of the substructures. However, the symmetry assumption may be valid for only parts of the structure, leading to incoherent averaging of the other parts. Several image processing approaches have been adapted to tackle symmetry-mismatched substructures with increasing success. Such structures are ubiquitous in nature, and further computational method development is needed to understand their biological functions. ©2018 The Author(s).
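
    The coherent-averaging idea behind these strategies can be illustrated numerically: averaging N aligned, identically oriented noisy copies reduces the noise by roughly √N. A toy 1-D sketch, with a sine curve standing in for a projection of the structure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "projection" of a structure; each low-dose copy adds heavy noise.
signal = np.sin(np.linspace(0, 4 * np.pi, 256))
sigma = 5.0
n_copies = 400
copies = signal + sigma * rng.standard_normal((n_copies, signal.size))

def rms_error(estimate):
    return float(np.sqrt(np.mean((estimate - signal) ** 2)))

single = rms_error(copies[0])              # about sigma
averaged = rms_error(copies.mean(axis=0))  # about sigma / sqrt(n_copies)

print(f"noise reduction ~ {single / averaged:.0f}x "
      f"(theory: {np.sqrt(n_copies):.0f}x)")
```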

  2. Local spatio-temporal analysis in vision systems

    NASA Astrophysics Data System (ADS)

    Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David

    1994-07-01

    The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which is local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations; (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion of the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.

  3. The Goddard Profiling Algorithm (GPROF): Description and Current Applications

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Yang, Song; Stout, John E.; Grecu, Mircea

    2004-01-01

    Atmospheric scientists use different methods for interpreting satellite data. In the early days of satellite meteorology, the analysis of cloud pictures from satellites was primarily subjective. As computer technology improved, satellite pictures could be processed digitally, and mathematical algorithms were developed and applied to the digital images in different wavelength bands to extract information about the atmosphere in an objective way. The kind of mathematical algorithm one applies to satellite data may depend on the complexity of the physical processes that lead to the observed image, and how much information is contained in the satellite images both spatially and at different wavelengths. Imagery from satellite-borne passive microwave radiometers has limited horizontal resolution, and the observed microwave radiances are the result of complex physical processes that are not easily modeled. For this reason, a type of algorithm called a Bayesian estimation method is utilized to interpret passive microwave imagery in an objective, yet computationally efficient manner.
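
    A Bayesian retrieval of the kind GPROF uses can be sketched as a likelihood-weighted average over an a priori database of simulated profiles. Everything below (the two-channel linear "radiative model", the database size, the noise levels) is invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented a priori database: rain rates and the two-channel brightness
# temperatures a toy linear "radiative model" would produce for them.
db_rain = rng.gamma(2.0, 2.0, size=500)               # mm/h
db_tb = np.column_stack([250.0 - 8.0 * db_rain,
                         220.0 + 5.0 * db_rain])
db_tb += rng.standard_normal(db_tb.shape)             # modelling noise

def bayes_retrieve(obs_tb, sigma=2.0):
    """Bayesian estimate: database rain rates weighted by the Gaussian
    likelihood of the observed brightness temperatures."""
    d2 = np.sum((db_tb - obs_tb) ** 2, axis=1)
    weights = np.exp(-0.5 * d2 / sigma ** 2)
    return float(np.sum(weights * db_rain) / np.sum(weights))

# An observation consistent with about 4 mm/h under the toy model.
estimate = bayes_retrieve(np.array([250.0 - 8.0 * 4.0, 220.0 + 5.0 * 4.0]))
print(round(estimate, 1))  # close to 4
```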

  4. Skin cancer texture analysis of OCT images based on Haralick, fractal dimension and the complex directional field features

    NASA Astrophysics Data System (ADS)

    Raupov, Dmitry S.; Myakinin, Oleg O.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Zakharov, Valery P.; Khramov, Alexander G.

    2016-04-01

    Optical coherence tomography (OCT) is usually employed for the measurement of tumor topology, which reflects structural changes of a tissue. We investigated the capability of OCT to detect such changes using a computer texture analysis method based on Haralick texture features, fractal dimension and the complex directional field method applied to different tissues. These features were used to identify spatial characteristics that differentiate healthy tissue from various skin cancers in cross-section OCT images (B-scans). Speckle reduction is an important pre-processing stage for OCT image processing; in this paper, an interval type-II fuzzy anisotropic diffusion algorithm for speckle noise reduction in OCT images was used. The Haralick texture feature set includes contrast, correlation, energy, and homogeneity evaluated in different directions. A box-counting method is applied to compute the fractal dimension of the investigated tissues. Additionally, we used the complex directional field calculated by the local gradient methodology to increase the assessment quality of the diagnostic method. The complex directional field (as well as the "classical" directional field) can help describe an image as a set of directions. Given that malignant tissue grows anisotropically, principal grooves may be observed on dermoscopic images, implying the possible existence of principal directions in OCT images. Our results suggest that the described texture features may provide useful information to differentiate pathological from healthy patients. The problem of distinguishing melanoma from nevi is addressed in this work thanks to the large quantity of experimental data (143 OCT images including tumors such as basal cell carcinoma (BCC), malignant melanoma (MM) and nevi). We achieved a sensitivity of about 90% and a specificity of about 85%. Further research is warranted to determine how this approach may be used to select regions of interest automatically.
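
    Of the three feature families, the box-counting fractal dimension is the simplest to sketch: count occupied boxes at dyadic scales on a binary mask and fit a line in log-log space. A minimal version (dyadic box sizes, numpy only):

```python
import numpy as np

def box_counting_dimension(mask):
    """Box-counting estimate of fractal dimension for a square binary
    image: count occupied boxes N(s) at dyadic box sizes s and fit
    log N(s) = -D log s + c."""
    size = mask.shape[0]
    sizes, counts = [], []
    s = size // 2
    while s >= 1:
        occupied = sum(mask[i:i + s, j:j + s].any()
                       for i in range(0, size, s)
                       for j in range(0, size, s))
        sizes.append(s)
        counts.append(occupied)
        s //= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity checks: a filled square is 2-D, a straight line is 1-D.
square = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
dim_square = box_counting_dimension(square)
dim_line = box_counting_dimension(line)
print(round(dim_square, 1), round(dim_line, 1))  # 2.0 1.0
```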

  5. Advanced Secure Optical Image Processing for Communications

    NASA Astrophysics Data System (ADS)

    Al Falou, Ayman

    2018-04-01

    New image processing tools and data-processing network systems have considerably increased the volume of transmitted information, such as 2D and 3D images with high resolution. Thus, more complex networks and longer processing times become necessary, while high image quality and transmission speeds are demanded for an increasing number of applications. To satisfy these two demands, several solutions, either numerical or optical, have been offered separately. This book explores both alternatives and describes research works that are converging towards optical/numerical hybrid solutions for high-volume signal and image processing and transmission. Without being limited to hybrid approaches, the book particularly investigates them with the aim of combining the advantages of both techniques. Additionally, purely numerical or optical solutions are also considered, since they emphasize the advantages of each of the two approaches separately.

  6. Real-time blind image deconvolution based on coordinated framework of FPGA and DSP

    NASA Astrophysics Data System (ADS)

    Wang, Ze; Li, Hang; Zhou, Hua; Liu, Hongjun

    2015-10-01

    Image restoration takes a crucial place in several important application domains. As algorithms become more complex and their computational requirements grow, there has been a significant rise in the need for accelerated implementations. In this paper, we focus on an efficient real-time image processing system for blind iterative deconvolution by means of the Richardson-Lucy (R-L) algorithm. We study the characteristics of the algorithm, and an image restoration processing system based on a coordinated framework of FPGA and DSP (CoFD) is presented. Single-precision floating-point processing units with small-scale cascades and special FFT/IFFT processing modules are adopted to guarantee the accuracy of the processing. Finally, comparative experiments are carried out. The system can process a blurred image of 128×128 pixels within 32 milliseconds, and is up to three or four times faster than traditional multi-DSP systems.
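
    The R-L iteration itself is compact; the paper's contribution is its hardware mapping. A minimal FFT-based software sketch of the (non-blind) iteration, assuming a known, origin-centred PSF and circular boundary conditions:

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=100):
    """Richardson-Lucy iteration with circular convolutions via FFTs:
        est <- est * correlate(psf, observed / convolve(psf, est))"""
    H = np.fft.rfft2(psf)
    est = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        conv = np.fft.irfft2(H * np.fft.rfft2(est), s=observed.shape)
        ratio = observed / np.maximum(conv, 1e-12)
        est = est * np.fft.irfft2(np.conj(H) * np.fft.rfft2(ratio),
                                  s=observed.shape)
    return est

# Toy problem: blur a bright point with a Gaussian PSF, then restore it.
n = 32
y, x = np.mgrid[:n, :n]
true = np.zeros((n, n))
true[16, 16] = 1.0
kernel = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / (2.0 * 2.0 ** 2))
psf = np.fft.ifftshift(kernel / kernel.sum())   # origin-centred, unit sum

blurred = np.fft.irfft2(np.fft.rfft2(psf) * np.fft.rfft2(true), s=true.shape)
restored = richardson_lucy(blurred, psf)

# The restored image is closer to the original than the blurred one.
print(bool(np.abs(restored - true).max() < np.abs(blurred - true).max()))  # True
```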

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Donald F.; Schulz, Carl; Konijnenburg, Marco

    High-resolution Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometry imaging enables the spatial mapping and identification of biomolecules from complex surfaces. The need for long time-domain transients, and thus large raw file sizes, results in a large amount of raw data (“big data”) that must be processed efficiently and rapidly. This can be compounded by large-area imaging and/or high spatial resolution imaging. For FT-ICR, data processing and data reduction must not compromise the high mass resolution afforded by the mass spectrometer. The continuous mode “Mosaic Datacube” approach allows high mass resolution visualization (0.001 Da) of mass spectrometry imaging data, but requires additional processing as compared to feature-based processing. We describe the use of distributed computing for processing of FT-ICR MS imaging datasets with generation of continuous mode Mosaic Datacubes for high mass resolution visualization. An eight-fold improvement in processing time is demonstrated using a Dutch nationally available cloud service.

  8. Semi-automatic image analysis methodology for the segmentation of bubbles and drops in complex dispersions occurring in bioreactors

    NASA Astrophysics Data System (ADS)

    Taboada, B.; Vega-Alvarado, L.; Córdova-Aguilar, M. S.; Galindo, E.; Corkidi, G.

    2006-09-01

    Characterization of multiphase systems occurring in fermentation processes is a time-consuming and tedious process when manual methods are used. This work describes a new semi-automatic methodology for the on-line assessment of the diameters of oil drops and air bubbles occurring in a complex simulated fermentation broth. High-quality digital images were obtained from the interior of a mechanically stirred tank. These images were pre-processed to find segments of edges belonging to the objects of interest. The contours of air bubbles and oil drops were then reconstructed using an improved Hough transform algorithm, which was tested in two-, three- and four-phase simulated fermentation model systems. The results were compared against those obtained manually by a trained observer, showing no significant statistical differences. The method reduced the total processing time for the measurements of bubbles and drops in different systems by 21-50% and the manual intervention time for the segmentation procedure by 80-100%.
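
    The Hough reconstruction step can be illustrated with its simplest circular variant: with a known radius, every edge point votes for all centres at that distance, and the accumulator maximum recovers the bubble centre. A toy sketch (the paper's improved algorithm is more elaborate):

```python
import numpy as np

def hough_circle_center(edge_points, radius, shape):
    """Accumulate votes for circle centres given edge points and a known
    radius; the accumulator maximum is the detected centre."""
    acc = np.zeros(shape)
    thetas = np.deg2rad(np.arange(360))
    for py, px in edge_points:
        cy = np.round(py - radius * np.sin(thetas)).astype(int)
        cx = np.round(px - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    cy, cx = np.unravel_index(acc.argmax(), acc.shape)
    return int(cy), int(cx)

# Synthetic "bubble": edge points on a circle of radius 10 centred at (30, 40).
angles = np.deg2rad(np.arange(0, 360, 12))
points = [(30 + 10 * np.sin(a), 40 + 10 * np.cos(a)) for a in angles]
print(hough_circle_center(points, radius=10, shape=(64, 64)))  # (30, 40)
```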

  9. Atypical Brain Activation during Simple & Complex Levels of Processing in Adult ADHD: An fMRI Study

    ERIC Educational Resources Information Center

    Hale, T. Sigi; Bookheimer, Susan; McGough, James J.; Phillips, Joseph M.; McCracken, James T.

    2007-01-01

    Objective: Executive dysfunction in ADHD is well supported. However, recent studies suggest that more fundamental impairments may be contributing. We assessed brain function in adults with ADHD during simple and complex forms of processing. Method: We used functional magnetic resonance imaging with forward and backward digit spans to investigate…

  10. The approximate entropy concept extended to three dimensions for calibrated, single parameter structural complexity interrogation of volumetric images.

    PubMed

    Moore, Christopher; Marchant, Thomas

    2017-07-12

    Reconstructive volumetric imaging permeates medical practice because of its apparently clear depiction of anatomy. However, the tell-tale signs of abnormality and its delineation for treatment demand that experts work at the threshold of visibility for hints of structure. Hitherto, a suitable assistive metric that chimes with clinical experience has been absent. This paper develops the complexity measure approximate entropy (ApEn) from its 1D physiological origin into a three-dimensional (3D) algorithm to fill this gap. The first 3D algorithm for this is presented in detail. Validation results for known test arrays are followed by a comparison of fan-beam and cone-beam x-ray computed tomography image volumes used in image-guided radiotherapy for cancer. Results show the structural detail down to individual voxel level, the strength of which is calibrated by the ApEn process itself. The potential for application in machine-assisted manual interaction and automated image processing and interrogation, including radiomics associated with predictive outcome modeling, is discussed.
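
    For orientation, the 1-D ApEn that the 3-D algorithm generalizes can be written in a few lines: compare the regularity of length-m and length-(m+1) embedding windows under a Chebyshev tolerance r. A plain numpy sketch (self-matches included, as in the original formulation):

```python
import numpy as np

def approx_entropy(u, m=2, r=0.2):
    """1-D approximate entropy ApEn(m, r): regularity of length-m windows
    versus length-(m+1) windows under Chebyshev tolerance r."""
    u = np.asarray(u, dtype=float)

    def phi(m):
        n = len(u) - m + 1
        windows = np.array([u[i:i + m] for i in range(n)])
        # Chebyshev distance between every pair of windows.
        dist = np.max(np.abs(windows[:, None, :] - windows[None, :, :]), axis=2)
        match_frac = np.mean(dist <= r, axis=1)   # self-matches included
        return np.mean(np.log(match_frac))

    return phi(m) - phi(m + 1)

# A perfectly regular alternating series scores near 0; noise scores higher.
regular = np.tile([0.0, 1.0], 100)
noisy = np.random.default_rng(3).random(200)
apen_regular = approx_entropy(regular)
apen_noisy = approx_entropy(noisy)
print(bool(apen_regular < apen_noisy))  # True
```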

  11. Block-based scalable wavelet image codec

    NASA Astrophysics Data System (ADS)

    Bao, Yiliang; Kuo, C.-C. Jay

    1999-10-01

    This paper presents a high-performance block-based wavelet image coder designed to have very low implementational complexity yet rich features. In this image coder, the Dual-Sliding Wavelet Transform (DSWT) is first applied to image data to generate wavelet coefficients in fixed-size blocks, where a block consists only of wavelet coefficients from a single subband. The coefficient blocks are directly coded with the Low Complexity Binary Description (LCBiD) coefficient coding algorithm. Each block is encoded using binary context-based bitplane coding. No parent-child correlation is exploited in the coding process, and no intermediate buffering is needed between DSWT and LCBiD. The compressed bit stream generated by the proposed coder is both SNR and resolution scalable, as well as highly resilient to transmission errors. Both DSWT and LCBiD process the data in blocks whose size is independent of the size of the original image, which gives more flexibility in the implementation. The codec achieves very good coding performance even when the block size is as small as 16×16.
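
    The bitplane idea behind this style of block coding can be illustrated without the context modelling: coefficient magnitudes are transmitted most-significant plane first, so truncating the stream bounds the reconstruction error by a power of two, which is what makes the stream SNR scalable. A toy sketch on a 4×4 block of invented 6-bit coefficients:

```python
import numpy as np

rng = np.random.default_rng(5)
block = rng.integers(0, 64, size=(4, 4))   # toy 6-bit coefficient magnitudes

# Decompose into bitplanes, most significant first.
planes = [(block >> b) & 1 for b in range(5, -1, -1)]

def reconstruct(planes, kept):
    """Rebuild coefficients from the first `kept` bitplanes, zeroing the rest."""
    out = np.zeros((4, 4), dtype=int)
    for i, plane in enumerate(planes[:kept]):
        out |= plane << (5 - i)
    return out

full = reconstruct(planes, 6)     # all planes: lossless
coarse = reconstruct(planes, 3)   # truncated stream: error bounded by 2^3
print(bool(np.array_equal(full, block)))         # True
print(bool(np.all(np.abs(coarse - block) < 8)))  # True
```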

  12. The approximate entropy concept extended to three dimensions for calibrated, single parameter structural complexity interrogation of volumetric images

    NASA Astrophysics Data System (ADS)

    Moore, Christopher; Marchant, Thomas

    2017-08-01

    Reconstructive volumetric imaging permeates medical practice because of its apparently clear depiction of anatomy. However, the tell-tale signs of abnormality and its delineation for treatment demand that experts work at the threshold of visibility for hints of structure. Hitherto, a suitable assistive metric that chimes with clinical experience has been absent. This paper develops the complexity measure approximate entropy (ApEn) from its 1D physiological origin into a three-dimensional (3D) algorithm to fill this gap. The first 3D algorithm for this is presented in detail. Validation results for known test arrays are followed by a comparison of fan-beam and cone-beam x-ray computed tomography image volumes used in image-guided radiotherapy for cancer. Results show the structural detail down to individual voxel level, the strength of which is calibrated by the ApEn process itself. The potential for application in machine-assisted manual interaction and automated image processing and interrogation, including radiomics associated with predictive outcome modeling, is discussed.

  13. [Research applications in digital radiology. Big data and co].

    PubMed

    Müller, H; Hanbury, A

    2016-02-01

    Medical imaging produces increasingly complex images (e.g. thinner slices and higher resolution) with more protocols, so that image reading has also become much more complex. More information needs to be processed and usually the number of radiologists available for these tasks has not increased to the same extent. The objective of this article is to present current research results from projects on the use of image data for clinical decision support. An infrastructure that can allow large volumes of data to be accessed is presented. In this way the best performing tools can be identified without the medical data having to leave secure servers. The text presents the results of the VISCERAL and Khresmoi EU-funded projects, which allow the analysis of previous cases from institutional archives to support decision-making and for process automation. The results also represent a secure evaluation environment for medical image analysis. This allows the use of data extracted from past cases to solve information needs occurring when diagnosing new cases. The presented research prototypes allow direct extraction of knowledge from the visual data of the images and to use this for decision support or process automation. Real clinical use has not been tested but several subjective user tests showed the effectiveness and efficiency of the process. The future in radiology will clearly depend on better use of the important knowledge in clinical image archives to automate processes and aid decision-making via big data analysis. This can help concentrate the work of radiologists towards the most important parts of diagnostics.

  14. Second Iteration of Photogrammetric Pipeline to Enhance the Accuracy of Image Pose Estimation

    NASA Astrophysics Data System (ADS)

    Nguyen, T. G.; Pierrot-Deseilligny, M.; Muller, J.-M.; Thom, C.

    2017-05-01

    In the classical photogrammetric processing pipeline, automatic tie point extraction plays a key role in the quality of the achieved results. The image tie points are crucial to pose estimation and have a significant influence on the precision of the calculated orientation parameters. Therefore, both the relative and absolute orientations of the 3D model can be affected. By improving the precision of image tie point measurement, one can enhance the quality of image orientation. The quality of image tie points is influenced by several factors, such as their multiplicity, measurement precision and distribution in 2D images as well as in 3D scenes. In complex acquisition scenarios such as indoor applications and oblique aerial images, tie point extraction is limited when only image information can be exploited. Hence, we propose a method which improves the precision of pose estimation in complex scenarios by adding a second iteration to the classical processing pipeline. The result of the first iteration is used as a priori information to guide the extraction of new tie points with better quality. Evaluated on multiple case studies, the proposed method shows its validity and its high potential for precision improvement.

  15. Looking back to inform the future: The role of cognition in forest disturbance characterization from remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Bianchetti, Raechel Anne

    Remotely sensed images have become a ubiquitous part of our daily lives. From novice users aiding in search and rescue missions with tools such as TomNod, to trained analysts synthesizing disparate data to address complex problems like climate change, imagery has become central to geospatial problem solving. Expert image analysts are continually faced with rapidly developing sensor technologies and software systems. In response to these cognitively demanding environments, expert analysts develop specialized knowledge and analytic skills to address increasingly complex problems. This study identifies the knowledge, skills, and analytic goals of expert image analysts tasked with the identification of land cover and land use change. Analysts participating in this research are currently working as part of a national-level analysis of land use change, and are well versed in the use of TimeSync, forest science, and image analysis. The results of this study benefit current analysts by improving their awareness of the mental processes they use during image interpretation. The study can also be generalized to understand the types of knowledge and visual cues that analysts use when reasoning with imagery for purposes beyond land use change studies. Here a Cognitive Task Analysis framework is used to organize evidence from qualitative knowledge elicitation methods for characterizing the cognitive aspects of the TimeSync image analysis process. Using a combination of content analysis, diagramming, semi-structured interviews, and observation, the study highlights the perceptual and cognitive elements of expert remote sensing interpretation. Results show that image analysts perform several standard cognitive processes, but flexibly employ these processes in response to various contextual cues. Expert image analysts' ability to think flexibly during their analysis process was directly related to their amount of image analysis experience. 
Additionally, results show that the basic Image Interpretation Elements continue to be important despite technological augmentation of the interpretation process. These results are used to derive a set of design guidelines for developing geovisual analytic tools and training to support image analysis.

  16. The human immunodeficiency virus type 1 long terminal repeat specifies two different transcription complexes, only one of which is regulated by Tat.

    PubMed Central

    Lu, X; Welsh, T M; Peterlin, B M

    1993-01-01

    The human immunodeficiency virus type 1 long terminal repeat sets up two different transcription complexes, which have been called processive and nonprocessive complexes. By mutating and substituting cis-acting sequences, we mapped elements of the human immunodeficiency virus long terminal repeat that are responsible for creating each transcription complex. Whereas processive complexes are efficiently assembled by upstream promoter elements in the absence of the TATA box, nonprocessive complexes absolutely require the TATA box. Moreover, the TATA box alone can set up these nonprocessive complexes, and nonprocessive but not processive complexes are trans activated by Tat. Finally, a strong DNA-binding site between the TATA box and trans-activation-responsive region interferes with either the assembly or movement of these nonprocessive complexes and diminishes the effects of Tat. Thus, Tat affects a critical step in the formation of elongation-competent transcription complexes. PMID:8445708

  17. Application of AIS Technology to Forest Mapping

    NASA Technical Reports Server (NTRS)

    Yool, S. R.; Star, J. L.

    1985-01-01

    Concerns about the environmental effects of large-scale deforestation have prompted efforts to map forests over large areas using various remote sensing data and image processing techniques. Basic research on the spectral characteristics of forest vegetation is required to form a basis for the development of new techniques and for image interpretation. Examination of LANDSAT data and image processing algorithms over a portion of boreal forest has demonstrated the complexity of the relations between the various expressions of forest canopies, environmental variability, and the relative capacities of different image processing algorithms to achieve high classification accuracies under these conditions. Airborne Imaging Spectrometer (AIS) data may in part provide the means to interpret the responses of standard data and techniques to the vegetation, based on its relatively high spectral resolution.

  18. Researching on the process of remote sensing video imagery

    NASA Astrophysics Data System (ADS)

    Wang, He-rao; Zheng, Xin-qi; Sun, Yi-bo; Jia, Zong-ren; Wang, He-zhan

    Unmanned air vehicle remotely-sensed imagery at low altitude has the advantages of higher resolution, easy acquisition, real-time access, etc. It has been widely used in mapping, target identification, and other fields in recent years. However, because of the limitations of the acquisition conditions, the video images are unstable, the targets move fast, and the shooting background is complex, which makes the video images difficult to process. In other fields, especially computer vision, research on video images is more extensive and is very helpful for processing low-altitude remotely-sensed imagery. Based on this, this paper analyzes and summarizes a large body of video image processing achievements in different fields, including research purposes, data sources, and the pros and cons of the technologies. Meanwhile, the paper explores the technical methods most suitable for low-altitude remote sensing video image processing.

  19. Terahertz reflection imaging using Kirchhoff migration.

    PubMed

    Dorney, T D; Johnson, J L; Van Rudd, J; Baraniuk, R G; Symes, W W; Mittleman, D M

    2001-10-01

    We describe a new imaging method that uses single-cycle pulses of terahertz (THz) radiation. This technique emulates data-collection and image-processing procedures developed for geophysical prospecting and is made possible by the availability of fiber-coupled THz receiver antennas. We use a simple migration procedure to solve the inverse problem; this permits us to reconstruct the location and shape of targets. These results demonstrate the feasibility of the THz system as a test-bed for the exploration of new seismic processing methods involving complex model systems.

  20. Automated pathologies detection in retina digital images based on complex continuous wavelet transform phase angles.

    PubMed

    Lahmiri, Salim; Gargour, Christian S; Gabrea, Marcel

    2014-10-01

    An automated diagnosis system that uses the complex continuous wavelet transform (CWT) to process retina digital images and support vector machines (SVMs) for classification is presented. In particular, each retina image is transformed into two one-dimensional signals by concatenating image rows and columns separately. The mathematical norm of the phase angles found in each one-dimensional signal at each level of CWT decomposition is relied on to characterise the texture of normal images against abnormal images affected by exudates, drusen and microaneurysms. The leave-one-out cross-validation method was adopted to conduct experiments, and the results from the SVM show that the proposed approach gives better results than those obtained by other methods in terms of correct classification rate, sensitivity and specificity.
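
    The feature-extraction idea can be sketched end to end: flatten the image into a 1-D signal, convolve it with complex Morlet-style wavelets at several scales (a hand-rolled stand-in for the CWT machinery used in the paper), and take the norm of the phase angles at each scale:

```python
import numpy as np

def phase_angle_norms(signal, scales):
    """Convolve the signal with a complex Morlet-style wavelet at each
    scale and return the norm of the resulting phase angles."""
    t = np.arange(-32, 33)
    norms = []
    for s in scales:
        wavelet = np.exp(2j * np.pi * t / s) * np.exp(-t ** 2 / (2.0 * s ** 2))
        coeffs = np.convolve(signal, wavelet, mode="same")
        norms.append(np.linalg.norm(np.angle(coeffs)))
    return np.array(norms)

# One-dimensional signal formed by concatenating image rows, as in the paper.
image = np.random.default_rng(9).random((16, 16))
features = phase_angle_norms(image.reshape(-1), scales=[2, 4, 8])
print(features.shape)  # (3,)
```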

  1. Coherent multiscale image processing using dual-tree quaternion wavelets.

    PubMed

    Chan, Wai Lam; Choi, Hyeokho; Baraniuk, Richard G

    2008-07-01

    The dual-tree quaternion wavelet transform (QWT) is a new multiscale analysis tool for geometric image features. The QWT is a near shift-invariant tight frame representation whose coefficients sport a magnitude and three phases: two phases encode local image shifts while the third contains image texture information. The QWT is based on an alternative theory for the 2-D Hilbert transform and can be computed using a dual-tree filter bank with linear computational complexity. To demonstrate the properties of the QWT's coherent magnitude/phase representation, we develop an efficient and accurate procedure for estimating the local geometrical structure of an image. We also develop a new multiscale algorithm for estimating the disparity between a pair of images that is promising for image registration and flow estimation applications. The algorithm features multiscale phase unwrapping, linear complexity, and sub-pixel estimation accuracy.

  2. Can Spectro-Temporal Complexity Explain the Autistic Pattern of Performance on Auditory Tasks?

    ERIC Educational Resources Information Center

    Samson, Fabienne; Mottron, Laurent; Jemel, Boutheina; Belin, Pascal; Ciocca, Valter

    2006-01-01

    To test the hypothesis that the level of neural complexity explains the relative level of performance and brain activity in autistic individuals, available behavioural, ERP and imaging findings related to the perception of increasingly complex auditory material under various processing tasks in autism were reviewed. Tasks involving simple material…

  3. Division in a Binary Representation for Complex Numbers

    ERIC Educational Resources Information Center

    Blest, David C.; Jamil, Tariq

    2003-01-01

    Computer operations involving complex numbers, essential in such applications as Fourier transforms or image processing, are normally performed in a "divide-and-conquer" approach dealing separately with real and imaginary parts. A number of proposals have treated complex numbers as a single unit but all have foundered on the problem of the…
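
    The conventional "divide-and-conquer" treatment the abstract refers to is the textbook formula that computes a complex quotient using only real arithmetic:

```python
def complex_divide(a, b, c, d):
    """(a + bi) / (c + di) computed with real arithmetic only:
    multiply by the conjugate and divide by |c + di|^2."""
    denom = c * c + d * d
    return (a * c + b * d) / denom, (b * c - a * d) / denom

re, im = complex_divide(3.0, 2.0, 1.0, -1.0)
print(re, im)  # 0.5 2.5, i.e. (3+2i)/(1-i) = 0.5 + 2.5i
```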

  4. Opaque for the Reader but Transparent for the Brain: Neural Signatures of Morphological Complexity

    ERIC Educational Resources Information Center

    Meinzer, Marcus; Lahiri, Aditi; Flaisch, Tobias; Hannemann, Ronny; Eulitz, Carsten

    2009-01-01

    Within linguistics, words with a complex internal structure are commonly assumed to be decomposed into their constituent morphemes (e.g., un-help-ful). Nevertheless, an ongoing debate concerns the brain structures that subserve this process. Using functional magnetic resonance imaging, the present study varied the internal complexity of derived…

  5. An integral design strategy combining optical system and image processing to obtain high resolution images

    NASA Astrophysics Data System (ADS)

    Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun

    2016-05-01

    In this paper, an integral design that combines the optical system with image processing is introduced to obtain high resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, resulting in a failure of efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function of optical design with the constraint conditions of image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals that are indispensable for image processing, while the ultimate goal is to obtain high resolution images from the final system. In order to optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. A Wiener filter algorithm is then adopted to process the simulated images, and the mean squared error (MSE) is taken as the evaluation criterion. The results show that, although the optical figures of merit of the optical imaging system are not the best, it can provide image signals that are more suitable for image processing. In conclusion, the integral design of optical system and image processing can find the overall optimal solution that is missed by traditional design methods. Especially when designing complex optical systems, this integral design strategy has obvious advantages in simplifying structure and reducing cost while simultaneously obtaining high resolution images, giving it a promising perspective for industrial application.
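
    The digital half of the chain, Wiener filtering with MSE as the criterion, can be sketched in a few lines of numpy. The Gaussian PSF and noise-to-signal ratio below are illustrative stand-ins for the designed optics:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=0.01):
    """Frequency-domain Wiener filter W = conj(H) / (|H|^2 + NSR)."""
    H = np.fft.rfft2(psf)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.fft.irfft2(W * np.fft.rfft2(blurred), s=blurred.shape)

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# Toy low-resolution chain: blur a random scene with a Gaussian PSF.
rng = np.random.default_rng(11)
n = 64
scene = rng.random((n, n))
y, x = np.mgrid[:n, :n]
kernel = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2.0 * 1.5 ** 2))
psf = np.fft.ifftshift(kernel / kernel.sum())   # origin-centred, unit sum

blurred = np.fft.irfft2(np.fft.rfft2(psf) * np.fft.rfft2(scene), s=scene.shape)
restored = wiener_deconvolve(blurred, psf)

# Restoration lowers the MSE against the original scene.
print(bool(mse(restored, scene) < mse(blurred, scene)))  # True
```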

  6. An architecture for a brain-image database

    NASA Technical Reports Server (NTRS)

    Herskovits, E. H.

    2000-01-01

    The widespread availability of methods for noninvasive assessment of brain structure has enabled researchers to investigate neuroimaging correlates of normal aging, cerebrovascular disease, and other processes; we designate such studies as image-based clinical trials (IBCTs). We propose an architecture for a brain-image database, which integrates image processing and statistical operators, and thus supports the implementation and analysis of IBCTs. The implementation of this architecture is described and results from the analysis of image and clinical data from two IBCTs are presented. We expect that systems such as this will play a central role in the management and analysis of complex research data sets.

  7. Dual-Tree Complex Wavelet Transform and Image Block Residual-Based Multi-Focus Image Fusion in Visual Sensor Networks

    PubMed Central

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-01-01

    This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale fusion methods can often obtain fused images with good visual effect. However, because of defects in the fusion rules, it is almost impossible to completely avoid the loss of useful information in the resulting fused images. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composite image is obtained. In the final fusion process, the image block residual technique and consistency verification are used to detect the focused areas, and a decision map is obtained. The map then guides the construction of the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including non-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs. PMID:25587878
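
The SML focus measure at the heart of the fusion rules can be sketched as follows (a simplified spatial-domain version; the window size and normalization are illustrative, not the paper's exact parameters):

```python
import numpy as np

def modified_laplacian(img):
    """Modified Laplacian: |2I - I_left - I_right| + |2I - I_up - I_down|."""
    img = np.asarray(img, dtype=float)
    ml = np.zeros_like(img)
    ml[1:-1, 1:-1] = (
        np.abs(2 * img[1:-1, 1:-1] - img[1:-1, :-2] - img[1:-1, 2:])
        + np.abs(2 * img[1:-1, 1:-1] - img[:-2, 1:-1] - img[2:, 1:-1])
    )
    return ml

def sml(img, radius=1):
    """Sum-Modified-Laplacian: sum the modified Laplacian over a small
    window around each pixel, giving a per-pixel focus measure."""
    ml = modified_laplacian(img)
    padded = np.pad(ml, radius)
    win = 2 * radius + 1
    h, w = ml.shape
    return sum(
        padded[dy : dy + h, dx : dx + w]
        for dy in range(win)
        for dx in range(win)
    )
```

A fusion rule can then keep, at each location, the coefficient from whichever source image has the larger SML, since in-focus regions produce larger second differences.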

  8. Dual-tree complex wavelet transform and image block residual-based multi-focus image fusion in visual sensor networks.

    PubMed

    Yang, Yong; Tong, Song; Huang, Shuying; Lin, Pan

    2014-11-26

    This paper presents a novel framework for the fusion of multi-focus images explicitly designed for visual sensor network (VSN) environments. Multi-scale fusion methods can often obtain fused images with good visual effect. However, because of defects in the fusion rules, it is almost impossible to completely avoid the loss of useful information in the resulting fused images. The proposed fusion scheme can be divided into two processes: initial fusion and final fusion. The initial fusion is based on a dual-tree complex wavelet transform (DTCWT). The Sum-Modified-Laplacian (SML)-based visual contrast and SML are employed to fuse the low- and high-frequency coefficients, respectively, and an initial composite image is obtained. In the final fusion process, the image block residual technique and consistency verification are used to detect the focused areas, and a decision map is obtained. The map then guides the construction of the final fused image. The performance of the proposed method was extensively tested on a number of multi-focus images, including non-referenced images, referenced images, and images with different noise levels. The experimental results clearly indicate that the proposed method outperformed various state-of-the-art fusion methods, in terms of both subjective and objective evaluations, and is more suitable for VSNs.

  9. Use of One Time Pad Algorithm for Bit Plane Security Improvement

    NASA Astrophysics Data System (ADS)

    Suhardi; Suwilo, Saib; Budhiarti Nababan, Erna

    2017-12-01

    BPCS (Bit-Plane Complexity Segmentation) is a steganography technique that exploits a characteristic of human vision: changes in the binary patterns of an image's noise-like regions go unnoticed. The technique inserts a message by replacing high-complexity bit-plane blocks, or noise-like regions, with bits of the secret message. Because the message bits were previously stored verbatim, the message can be extracted easily by rearranging the characters stored in the noise-like regions of the image, so the secret message is easily discovered by others. In this research, the process of replacing bit-plane blocks with message bits is modified using the One Time Pad cryptographic technique, which aims to increase the security of the bit planes. In the tests performed, combining the One Time Pad cryptographic algorithm with the BPCS steganography technique works well for inserting messages into the vessel image, although insertion into low-dimensional images performs poorly. The original image and the stego-image look identical, and the result is a good quality image with a mean PSNR above 30 dB when a large-dimensional image is used as the cover.
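
The modification, XOR-ing the message bits with a one-time pad before they replace a noisy bit-plane block, can be sketched as follows (function names are illustrative, not from the paper):

```python
import secrets

def generate_pad(n_bits):
    """A truly random pad the same length as the message; as OTP
    requires, it must be used once and shared secretly."""
    return [secrets.randbits(1) for _ in range(n_bits)]

def otp_xor(bits, pad):
    """One Time Pad: XOR each message bit with the pad bit. The same
    call decrypts, since (m ^ p) ^ p == m. The randomized output is
    what would then be embedded into a high-complexity bit-plane block."""
    if len(bits) != len(pad):
        raise ValueError("pad must match message length")
    return [b ^ p for b, p in zip(bits, pad)]
```

Even if an attacker recovers the embedded bits from the noise-like regions, without the pad the recovered stream reveals nothing about the message.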

  10. Astigmatism compensation in digital holographic microscopy using complex-amplitude correlation

    NASA Astrophysics Data System (ADS)

    Tamrin, Khairul Fikri; Rahmatullah, Bahbibi; Samuri, Suzani Mohamad

    2015-07-01

    Digital holographic microscopy (DHM) is a promising tool for three-dimensional imaging of microscopic particles. It offers the possibility of wavefront processing by manipulating the amplitude and phase of the recorded digital holograms. With a view to compensating for aberrations in the reconstructed particle images, this paper discusses a new approach to aberration compensation based on complex-amplitude correlation and the use of a priori information. The approach is applied to holograms of microscopic particles flowing inside a cylindrical micro-channel, recorded using an off-axis digital holographic microscope. The approach results in improvements in image and signal quality.

  11. InSAR Deformation Time Series Processed On-Demand in the Cloud

    NASA Astrophysics Data System (ADS)

    Horn, W. B.; Weeden, R.; Dimarchi, H.; Arko, S. A.; Hogenson, K.

    2017-12-01

    During this past year, ASF has developed a cloud-based on-demand processing system known as HyP3 (http://hyp3.asf.alaska.edu/), the Hybrid Pluggable Processing Pipeline, for Synthetic Aperture Radar (SAR) data. The system makes it easy for a user who doesn't have the time or inclination to install and use complex SAR processing software to leverage SAR data in their research or operations. One such processing algorithm is the generation of a deformation time series product, a series of images representing ground displacements over time, which can be computed from a time series of interferometric SAR (InSAR) products. The set of software tools necessary to generate this useful product is difficult to install, configure, and use. Moreover, for a long time series with many images, processing just the interferograms can take days. Principally built by three undergraduate students at the ASF DAAC, the deformation time series processing relies on the new Amazon Batch service, which enables processing of jobs with complex interconnected dependencies in a straightforward and efficient manner. In the case of generating a deformation time series product from a stack of single-look complex SAR images, the system uses Batch to serialize the up-front processing, interferogram generation, optional tropospheric correction, and deformation time series generation. The most time consuming portion is interferogram generation, because even for a fairly small stack of images many interferograms need to be processed. By using AWS Batch, the interferograms are all generated in parallel; the entire process completes in hours rather than days. Additionally, the individual interferograms are saved in Amazon's cloud storage, so that when new data is acquired in the stack, an updated time series product can be generated with minimal additional processing.
This presentation will focus on the development techniques and enabling technologies that were used in developing the time series processing in the ASF HyP3 system. Data and process flow from job submission through to order completion will be shown, highlighting the benefits of the cloud for each step.
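
The fan-out/fan-in shape of that pipeline can be sketched in miniature with a thread pool standing in for AWS Batch (the `make_interferogram` stub and the scene names are illustrative, not the HyP3 API):

```python
from concurrent.futures import ThreadPoolExecutor

def make_interferogram(pair):
    # Stand-in for one expensive InSAR pair job; in HyP3 each of
    # these runs as an independent AWS Batch job.
    ref, sec = pair
    return f"ifg_{ref}_{sec}"

def run_stack(scenes):
    """Serial setup, parallel interferogram fan-out, then the serial
    time-series stage once every pair has finished."""
    pairs = [(scenes[i], scenes[i + 1]) for i in range(len(scenes) - 1)]
    with ThreadPoolExecutor() as pool:  # Batch runs these concurrently
        ifgs = list(pool.map(make_interferogram, pairs))
    return ifgs  # input to the deformation time-series generation step
```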

  12. Research of flaw image collecting and processing technology based on multi-baseline stereo imaging

    NASA Astrophysics Data System (ADS)

    Yao, Yong; Zhao, Jiguang; Pang, Xiaoyan

    2008-03-01

    Aiming at practical issues in gun bore flaw image collection, such as accurate optical design, complex algorithms, and precise technical demands, this paper presents the design framework of a 3-D image collecting and processing system based on multi-baseline stereo imaging. The system mainly comprises a computer, an electrical control box, a stepping motor, and a CCD camera, and it realizes image collection, stereo matching, 3-D information reconstruction, and post-processing. Theoretical analysis and experimental results show that images collected by this system are precise and that it can efficiently resolve the ambiguity produced by uniform or repeated textures. At the same time, the system offers faster measurement speed and higher measurement precision.

  13. A new machine classification method applied to human peripheral blood leukocytes

    NASA Technical Reports Server (NTRS)

    Rorvig, Mark E.; Fitzpatrick, Steven J.; Vitthal, Sanjay; Ladoulis, Charles T.

    1994-01-01

    Human beings judge images by complex mental processes, whereas computing machines extract features. By reducing scaled human judgments and machine extracted features to a common metric space and fitting them by regression, the judgments of human experts rendered on a sample of images may be imposed on an image population to provide automatic classification.
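
The mapping step — fitting machine-extracted features to scaled human judgments by regression, then imposing the fit on unjudged images — can be sketched as follows (a plain least-squares stand-in for the paper's common-metric-space reduction; names are illustrative):

```python
import numpy as np

def fit_judgment_map(features, judgments):
    """Least-squares fit from machine-extracted feature vectors to
    scaled human judgments on a judged sample of images."""
    X = np.column_stack([features, np.ones(len(features))])  # add intercept
    coef, *_ = np.linalg.lstsq(X, judgments, rcond=None)
    return coef

def predict_judgment(features, coef):
    """Impose the fitted expert judgment on new, unjudged images."""
    X = np.column_stack([features, np.ones(len(features))])
    return X @ coef
```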

  14. New Processing of Spaceborne Imaging Radar-C (SIR-C) Data

    NASA Astrophysics Data System (ADS)

    Meyer, F. J.; Gracheva, V.; Arko, S. A.; Labelle-Hamer, A. L.

    2017-12-01

    The Spaceborne Imaging Radar-C (SIR-C) was a radar system that successfully operated on two separate shuttle missions in April and October 1994. During these two missions, a total of 143 hours of radar data were recorded. SIR-C was the first multifrequency, polarimetric spaceborne radar system, operating at dual frequency (L- and C-band) and with quad-polarization. SIR-C had a variety of operating modes, which are innovative even from today's point of view. Depending on the mode, it was possible to acquire data with different polarizations and carrier frequency combinations. Additionally, different swaths and bandwidths could be used during data collection, and it was possible to receive data with two antennas in the along-track direction. The United States Geological Survey (USGS) distributes the synthetic aperture radar (SAR) images as single-look complex (SLC) and multi-look complex (MLC) products. Unfortunately, since June 2005 the SIR-C processor has been inoperable and not repairable. All acquired SLC and MLC images were processed at a coarse resolution of 100 m with the goal of generating quick looks. These images are, however, not well suited for scientific analysis. Only a small percentage of the acquired data has been processed as full resolution SAR images, and the unprocessed high resolution data cannot currently be processed at all. At the Alaska Satellite Facility (ASF), a new processor was developed to process binary SIR-C data to full resolution SAR images. ASF is planning to process the entire recoverable SIR-C archive to full resolution SLCs, MLCs, and high resolution geocoded image products. ASF will make these products available to the science community through its existing data archiving and distribution system. The final paper will describe the new processor and analyze the challenges of reprocessing the SIR-C data.

  15. Binary video codec for data reduction in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Ahmad, Naeem; Imran, Muhammad; O'Nils, Mattias

    2013-02-01

    Wireless Visual Sensor Networks (WVSNs) are formed by deploying many Visual Sensor Nodes (VSNs) in the field. Typical applications of WVSNs include environmental monitoring, health care, industrial process monitoring, and stadium/airport monitoring for security reasons, among many others. The energy budget in outdoor applications of WVSNs is limited to batteries, and frequent replacement of batteries is usually not desirable, so both the processing and the communication energy consumption of the VSN need to be optimized so that the network remains functional for a longer duration. The images captured by a VSN contain a huge amount of data and require efficient computational resources for processing and wide communication bandwidth for transmitting the results. Image processing algorithms must be designed and developed so that they are computationally less complex and provide a high compression rate. For some applications of WVSNs, the captured images can be segmented into bi-level images, and hence bi-level image coding methods can efficiently reduce the information amount in these segmented images. But the compression rate of bi-level image coding methods is limited by the underlying compression algorithm. Hence there is a need to design other intelligent and efficient algorithms that are computationally less complex and provide a better compression rate than bi-level image coding methods. Change coding is one such algorithm: it is computationally less complex (requiring only exclusive-OR operations) and provides better compression efficiency than image coding, but it is effective only for applications with slight changes between adjacent frames of the video. Detecting and coding the Regions of Interest (ROIs) in the change frame efficiently reduces the information amount in the change frame.
    But if the number of objects in the change frames rises above a certain level, the compression efficiency of both change coding and ROI coding becomes worse than that of image coding. This paper explores the compression efficiency of the Binary Video Codec (BVC) for data reduction in WVSNs. We propose to implement all three compression techniques, i.e. image coding, change coding, and ROI coding, at the VSN and then select the smallest bit stream among the results of the three. In this way the compression performance of the BVC will never be worse than that of image coding. We conclude that the compression efficiency of BVC is always better than that of change coding and is always better than or equal to that of ROI coding and image coding.
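
The select-the-smallest-bit-stream idea can be sketched with two of the three branches (a toy run-length coder stands in for the bi-level image coder, and ROI coding is omitted; only the XOR-based change coding is literally as described):

```python
import numpy as np

def rle(bits):
    """Toy run-length coder standing in for the bi-level image coder
    (code length here is simply the number of runs)."""
    runs, prev, count = [], bits[0], 1
    for b in bits[1:]:
        if b == prev:
            count += 1
        else:
            runs.append(count)
            prev, count = b, 1
    runs.append(count)
    return runs

def bvc_encode(prev_frame, cur_frame):
    """Code the frame both ways and keep the smaller bit stream, so
    the result is never worse than image coding alone."""
    image_code = rle(cur_frame.ravel())
    change = np.bitwise_xor(prev_frame, cur_frame)  # change coding: XOR only
    change_code = rle(change.ravel())
    if len(change_code) <= len(image_code):
        return "change", change_code
    return "image", image_code
```

When adjacent frames differ in only a few pixels, the XOR frame is almost all zeros and change coding wins; when the frames differ wildly, the selector falls back to image coding.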

  16. Multidimensional, mapping-based complex wavelet transforms.

    PubMed

    Fernandes, Felix C A; van Spaendonck, Rutger L C; Burrus, C Sidney

    2005-01-01

    Although the discrete wavelet transform (DWT) is a powerful tool for signal and image processing, it has three serious disadvantages: shift sensitivity, poor directionality, and lack of phase information. To overcome these disadvantages, we introduce multidimensional, mapping-based, complex wavelet transforms that consist of a mapping onto a complex function space followed by a DWT of the complex mapping. Unlike other popular transforms that also mitigate DWT shortcomings, the decoupled implementation of our transforms has two important advantages. First, the controllable redundancy of the mapping stage offers a balance between degree of shift sensitivity and transform redundancy. This allows us to create a directional, nonredundant, complex wavelet transform with potential benefits for image coding systems. To the best of our knowledge, no other complex wavelet transform is simultaneously directional and nonredundant. The second advantage of our approach is the flexibility to use any DWT in the transform implementation. As an example, we exploit this flexibility to create the complex double-density DWT: a shift-insensitive, directional, complex wavelet transform with a low redundancy of (3M - 1)/(2M - 1) in M dimensions. No other transform achieves all these properties at a lower redundancy, to the best of our knowledge. By exploiting the advantages of our multidimensional, mapping-based complex wavelet transforms in seismic signal-processing applications, we have demonstrated state-of-the-art results.

  17. Optical coherence tomography for embryonic imaging: a review

    PubMed Central

    Raghunathan, Raksha; Singh, Manmohan; Dickinson, Mary E.; Larin, Kirill V.

    2016-01-01

    Embryogenesis is a highly complex and dynamic process, and its visualization is crucial for understanding basic physiological processes during development and for identifying and assessing possible defects, malformations, and diseases. While traditional imaging modalities, such as ultrasound biomicroscopy, micro-magnetic resonance imaging, and micro-computed tomography, have long been adapted for embryonic imaging, these techniques generally have limitations in their speed, spatial resolution, and contrast to capture processes such as cardiodynamics during embryogenesis. Optical coherence tomography (OCT) is a noninvasive imaging modality with micrometer-scale spatial resolution and imaging depth up to a few millimeters in tissue. OCT has bridged the gap between ultrahigh resolution imaging techniques with limited imaging depth like confocal microscopy and modalities, such as ultrasound sonography, which have deeper penetration but poorer spatial resolution. Moreover, the noninvasive nature of OCT has enabled live imaging of embryos without any external contrast agents. We review how OCT has been utilized to study developing embryos and also discuss advances in techniques used in conjunction with OCT to understand embryonic development. PMID:27228503

  18. Contextual analysis of immunological response through whole-organ fluorescent imaging.

    PubMed

    Woodruff, Matthew C; Herndon, Caroline N; Heesters, B A; Carroll, Michael C

    2013-09-01

    As fluorescent microscopy has developed, significant insights have been gained into the establishment of immune response within secondary lymphoid organs, particularly in draining lymph nodes. While established techniques such as confocal imaging and intravital multi-photon microscopy have proven invaluable, they provide limited insight into the architectural and structural context in which these responses occur. To interrogate the role of the lymph node environment in immune response effectively, a new set of imaging tools taking into account broader architectural context must be implemented into emerging immunological questions. Using two different methods of whole-organ imaging, optical clearing and three-dimensional reconstruction of serially sectioned lymph nodes, fluorescent representations of whole lymph nodes can be acquired at cellular resolution. Using freely available post-processing tools, images of unlimited size and depth can be assembled into cohesive, contextual snapshots of immunological response. Through the implementation of robust iterative analysis techniques, these highly complex three-dimensional images can be objectified into sortable object data sets. These data can then be used to interrogate complex questions at the cellular level within the broader context of lymph node biology. By combining existing imaging technology with complex methods of sample preparation and capture, we have developed efficient systems for contextualizing immunological phenomena within lymphatic architecture. In combination with robust approaches to image analysis, these advances provide a path to integrating scientific understanding of basic lymphatic biology into the complex nature of immunological response.

  19. Affective facilitation of early visual cortex during rapid picture presentation at 6 and 15 Hz

    PubMed Central

    Bekhtereva, Valeria

    2015-01-01

    The steady-state visual evoked potential (SSVEP), a neurophysiological marker of attentional resource allocation with its generators in early visual cortex, exhibits enhanced amplitude for emotional compared to neutral complex pictures. Emotional cue extraction for complex images is linked to the N1-EPN complex with a peak latency of ∼140–160 ms. We tested whether neural facilitation in early visual cortex with affective pictures requires emotional cue extraction of individual images, even when a stream of images of the same valence category is presented. Images were shown at either 6 Hz (167 ms, allowing for extraction) or 15 Hz (67 ms per image, causing disruption of processing by the following image). Results showed SSVEP amplitude enhancement for emotional compared to neutral images at a presentation rate of 6 Hz but no differences at 15 Hz. This was not due to featural differences between the two valence categories. Results strongly suggest that individual images need to be displayed for sufficient time allowing for emotional cue extraction to drive affective neural modulation in early visual cortex. PMID:25971598

  20. Quantification of differences between nailfold capillaroscopy images with a scleroderma pattern and normal pattern using measures of geometric and algorithmic complexity.

    PubMed

    Urwin, Samuel George; Griffiths, Bridget; Allen, John

    2017-02-01

    This study aimed to quantify and investigate differences in the geometric and algorithmic complexity of the microvasculature in nailfold capillaroscopy (NFC) images displaying a scleroderma pattern and those displaying a 'normal' pattern. 11 NFC images were qualitatively classified by a capillary specialist as indicative of 'clear microangiopathy' (CM), i.e. a scleroderma pattern, and 11 as 'not clear microangiopathy' (NCM), i.e. a 'normal' pattern. Pre-processing was performed, and fractal dimension (FD) and Kolmogorov complexity (KC) were calculated following image binarisation. FD and KC were compared between groups, and a k-means cluster analysis (n = 2) on all images was performed, without prior knowledge of the group assigned to them (i.e. CM or NCM), using FD and KC as inputs. CM images had significantly reduced FD and KC compared to NCM images, and the cluster analysis displayed promising results that quantitative classification of images into CM and NCM groups is possible using the mathematical measures of FD and KC. The analysis techniques used show promise for quantitative microvascular investigation in patients with systemic sclerosis.
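
The two complexity measures can be sketched on a binarised image as follows (box-counting for FD, and a compressed-size proxy for KC, since Kolmogorov complexity itself is uncomputable; box sizes and the compressor choice are illustrative, not the paper's exact settings):

```python
import zlib
import numpy as np

def box_counting_dimension(binary, sizes=(1, 2, 4, 8)):
    """Fractal dimension estimate: slope of log N(s) vs. log(1/s),
    where N(s) counts boxes of side s containing any foreground pixel."""
    counts = []
    for s in sizes:
        h, w = binary.shape
        trimmed = binary[: h - h % s, : w - w % s]
        th, tw = trimmed.shape
        boxes = trimmed.reshape(th // s, s, tw // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

def kolmogorov_proxy(binary):
    """Compressed size of the packed binary image, the usual practical
    stand-in for (uncomputable) Kolmogorov complexity."""
    return len(zlib.compress(np.packbits(binary.astype(np.uint8)).tobytes()))
```

A k-means run (n = 2) over the resulting (FD, KC) pairs would then attempt to recover the CM/NCM split without labels.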

  1. Using a Smartphone Camera for Nanosatellite Attitude Determination

    NASA Astrophysics Data System (ADS)

    Shimmin, R.

    2014-09-01

    The PhoneSat project at NASA Ames Research Center has repeatedly flown a commercial cellphone in space. As this project continues, additional utility is being extracted from the cell phone hardware to enable more complex missions. The camera in particular shows great potential as an instrument for position and attitude determination, but this requires complex image processing. This paper outlines progress towards that image processing capability. Initial tests on a small collection of sample images have demonstrated the determination of a Moon vector from an image by automatic thresholding and centroiding, allowing the calibration of existing attitude control systems. Work has been undertaken on a further set of sample images towards horizon detection using a variety of techniques including thresholding, edge detection, applying a Hough transform, and circle fitting. Ultimately it is hoped this will allow calculation of an Earth vector for attitude determination and an approximate altitude. A quick discussion of work towards using the camera as a star tracker is then presented, followed by an introduction to further applications of the camera on space missions.
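
The threshold-and-centroid step for the Moon vector can be sketched as follows (the threshold value is illustrative, and turning the pixel offset into a pointing vector additionally needs the camera's focal-plane geometry, which is omitted here):

```python
import numpy as np

def moon_centroid(image, threshold=128):
    """Binarise the frame and centroid the bright pixels. The offset
    of the centroid from the image centre, combined with the camera
    model, yields a Moon vector in the camera frame."""
    ys, xs = np.nonzero(image >= threshold)
    if xs.size == 0:
        return None  # Moon not in frame
    return float(xs.mean()), float(ys.mean())
```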

  2. Laser beam complex amplitude measurement by phase diversity.

    PubMed

    Védrenne, Nicolas; Mugnier, Laurent M; Michau, Vincent; Velluet, Marie-Thérèse; Bierent, Rudolph

    2014-02-24

    The control of the optical quality of a laser beam requires a complex amplitude measurement able to deal with strong modulus variations and potentially highly perturbed wavefronts. The method proposed here consists in an extension of phase diversity to complex amplitude measurements that is effective for highly perturbed beams. Named camelot for Complex Amplitude MEasurement by a Likelihood Optimization Tool, it relies on the acquisition and processing of few images of the beam section taken along the optical path. The complex amplitude of the beam is retrieved from the images by the minimization of a Maximum a Posteriori error metric between the images and a model of the beam propagation. The analytical formalism of the method and its experimental validation are presented. The modulus of the beam is compared to a measurement of the beam profile, the phase of the beam is compared to a conventional phase diversity estimate. The precision of the experimental measurements is investigated by numerical simulations.

  3. A global "imaging'' view on systems approaches in immunology.

    PubMed

    Ludewig, Burkhard; Stein, Jens V; Sharpe, James; Cervantes-Barragan, Luisa; Thiel, Volker; Bocharov, Gennady

    2012-12-01

    The immune system exhibits an enormous complexity. High throughput methods such as the "-omic'' technologies generate vast amounts of data that facilitate dissection of immunological processes at ever finer resolution. Using high-resolution data-driven systems analysis, causal relationships between complex molecular processes and particular immunological phenotypes can be constructed. However, processes in tissues, organs, and the organism itself (so-called higher level processes) also control and regulate the molecular (lower level) processes. Reverse systems engineering approaches, which focus on the examination of the structure, dynamics and control of the immune system, can help to understand the construction principles of the immune system. Such integrative mechanistic models can properly describe, explain, and predict the behavior of the immune system in health and disease by combining both higher and lower level processes. Moving from molecular and cellular levels to a multiscale systems understanding requires the development of methodologies that integrate data from different biological levels into multiscale mechanistic models. In particular, 3D imaging techniques and 4D modeling of the spatiotemporal dynamics of immune processes within lymphoid tissues are central for such integrative approaches. Both dynamic and global organ imaging technologies will be instrumental in facilitating comprehensive multiscale systems immunology analyses as discussed in this review. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Process perspective on image quality evaluation

    NASA Astrophysics Data System (ADS)

    Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte

    2008-01-01

    The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation includes several mental processes, and ignoring these processes and using only a few test images can lead to biased results. Using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects. Three other images were also used: photographs of a woman, a cityscape, and a countryside. In addition to the pair-wise comparisons, observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in observers' evaluations for the test image content, but not for the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention. The lack of easily recognizable content in the test image may have contributed to this inconsistency. To obtain reliable knowledge of the effect of image content or attention on subjective image quality, a qualitative methodology is needed.

  5. Higher resolution satellite remote sensing and the impact on image mapping

    USGS Publications Warehouse

    Watkins, Allen H.; Thormodsgard, June M.

    1987-01-01

    Recent advances in spatial, spectral, and temporal resolution of civil land remote sensing satellite data are presenting new opportunities for image mapping applications. The U.S. Geological Survey's experimental satellite image mapping program is evolving toward larger scale image map products with increased information content as a result of improved image processing techniques and increased resolution. Thematic mapper data are being used to produce experimental image maps at 1:100,000 scale that meet established U.S. and European map accuracy standards. Availability of high quality, cloud-free, 30-meter ground resolution multispectral data from the Landsat thematic mapper sensor, along with 10-meter ground resolution panchromatic and 20-meter ground resolution multispectral data from the recently launched French SPOT satellite, presents new cartographic and image processing challenges. The need to fully exploit these higher resolution data increases the complexity of processing the images into large-scale image maps. The removal of radiometric artifacts and noise prior to geometric correction can be accomplished by using a variety of image processing filters and transforms. Sensor modeling and image restoration techniques allow maximum retention of spatial and radiometric information. An optimum combination of spectral information and spatial resolution can be obtained by merging different sensor types. These processing techniques are discussed and examples are presented.
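
One common way to merge the sensor types mentioned above (e.g. 10 m SPOT panchromatic with upsampled multispectral bands) is a Brovey-style ratio merge; a minimal sketch follows (the paper does not prescribe this particular method, and the inputs are assumed already co-registered and resampled to the pan grid):

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """Brovey-style merge: scale each (already upsampled) multispectral
    band by the ratio of the panchromatic band to the band mean,
    injecting the pan image's spatial detail into the spectral bands."""
    ms = ms.astype(float)                    # shape: (bands, rows, cols)
    intensity = ms.mean(axis=0) + eps        # eps avoids division by zero
    return ms * (pan / intensity)[None, :, :]
```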

  6. Blurred image recognition by legendre moment invariants

    PubMed Central

    Zhang, Hui; Shu, Huazhong; Han, Guo-Niu; Coatrieux, Gouenou; Luo, Limin; Coatrieux, Jean-Louis

    2010-01-01

    Processing blurred images is a key problem in many image applications. Existing methods to obtain blur invariants which are invariant with respect to centrally symmetric blur are based on geometric moments or complex moments. In this paper, we propose a new method to construct a set of blur invariants using the orthogonal Legendre moments. Some important properties of Legendre moments for the blurred image are presented and proved. The performance of the proposed descriptors is evaluated with various point-spread functions and different image noises. The comparison of the present approach with previous methods in terms of pattern recognition accuracy is also provided. The experimental results show that the proposed descriptors are more robust to noise and have better discriminative power than the methods based on geometric or complex moments. PMID:19933003
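
A discrete Legendre moment of order (p, q) on the unit square can be sketched as follows (a plain Riemann-sum approximation; the paper's blur-invariant construction built on these moments is not reproduced here):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def legendre_moment(img, p, q):
    """lambda_pq ~= (2p+1)(2q+1)/(N*M) * sum_ij P_p(x_j) P_q(y_i) f(i, j),
    with the image sampled on [-1, 1] x [-1, 1]."""
    n, m = img.shape
    x = np.linspace(-1.0, 1.0, m)
    y = np.linspace(-1.0, 1.0, n)
    Pp = Legendre.basis(p)(x)  # P_p evaluated along columns
    Pq = Legendre.basis(q)(y)  # P_q evaluated along rows
    norm = (2 * p + 1) * (2 * q + 1) / (n * m)
    return norm * float((Pq[:, None] * Pp[None, :] * img).sum())
```

By orthogonality, an image equal to a single Legendre polynomial has unit moment at that order and (near-)zero moments elsewhere, which is what makes the basis convenient for building invariants.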

  7. Granular computing with multiple granular layers for brain big data processing.

    PubMed

    Wang, Guoyin; Xu, Ji

    2014-12-01

    Big data is the term for a collection of datasets so huge and complex that they become difficult to process using existing theoretical models and technical tools. Brain big data is one of the most typical and important kinds of big data, collected using powerful equipment such as functional magnetic resonance imaging, multichannel electroencephalography, magnetoencephalography, positron emission tomography, near-infrared spectroscopic imaging, and various other devices. Granular computing with multiple granular layers, referred to as multi-granular computing (MGrC) for short hereafter, is an emerging computing paradigm of information processing, which simulates the multi-granular intelligent thinking model of the human brain. It concerns the processing of complex information entities called information granules, which arise in the process of data abstraction and derivation of information and even knowledge from data. This paper analyzes three basic mechanisms of MGrC, namely granularity optimization, granularity conversion, and multi-granularity joint computation, and discusses the potential of introducing MGrC into intelligent processing of brain big data.

  8. High performance embedded system for real-time pattern matching

    NASA Astrophysics Data System (ADS)

    Sotiropoulou, C.-L.; Luciano, P.; Gkaitatzis, S.; Citraro, S.; Giannetti, P.; Dell'Orso, M.

    2017-02-01

    In this paper we present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton-proton collisions in hadron collider experiments. A miniaturized version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom designed Associative Memory chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post processing algorithms (e.g. pixel clustering) are also implemented on the FPGA. The pattern matching can be executed on a 2D or 3D space, on black and white or grayscale images, depending on the application, thus increasing exponentially the processing requirements of the system. We present the firmware implementation of the training and pattern matching algorithms, along with performance results on a latest-generation Xilinx Kintex UltraScale FPGA device.

  9. VoxelMages: a general-purpose graphical interface for designing geometries and processing DICOM images for PENELOPE.

    PubMed

    Giménez-Alventosa, V; Ballester, F; Vijande, J

    2016-12-01

    The design and construction of geometries for Monte Carlo calculations is an error-prone, time-consuming, and complex step in simulations describing particle interactions and transport in the field of medical physics. The software VoxelMages has been developed to help the user in this task. It allows the user to design complex geometries and to process DICOM image files for simulations with the general-purpose Monte Carlo code PENELOPE in an easy and straightforward way. VoxelMages can also import DICOM-RT structure contour information as delivered by a treatment planning system. Its main characteristics, usage and performance benchmarking are described in detail. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    PubMed Central

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually operates on a satellite, where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (such as the 3D DWT or 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged into the Slepian-Wolf (SW) DSC strategy, based on QC-LDPC codes, in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches. PMID:25110741
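The bit plane encoder (BPE) idea behind CCSDS-IDC can be illustrated classically: integer wavelet coefficients are split into bit planes and transmitted most significant plane first, so a prefix of the stream already yields a coarse reconstruction. The sketch below is a deliberate simplification (unsigned coefficients only; the real BPE adds sign handling, block structure, and entropy coding, and the DSC/Slepian-Wolf stage is omitted entirely).

```python
import numpy as np

def to_bitplanes(coeffs, n_planes):
    """Split non-negative integer coefficients into bit planes,
    most significant plane first."""
    planes = [(coeffs >> k) & 1 for k in range(n_planes - 1, -1, -1)]
    return np.stack(planes)

def from_bitplanes(planes):
    """Progressive reconstruction: each additional plane refines the values."""
    n = planes.shape[0]
    weights = 1 << np.arange(n - 1, -1, -1)
    return np.tensordot(weights, planes, axes=1)

coeffs = np.array([[5, 12], [0, 7]])
planes = to_bitplanes(coeffs, 4)
full = from_bitplanes(planes)              # all 4 planes: exact reconstruction
coarse = from_bitplanes(planes[:2]) << 2   # top 2 planes only: coarse values
```

Decoding only the top planes gives a quantized approximation, which is what makes the stream progressive: truncating it anywhere still yields a usable image.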

  11. Multispectral image compression based on DSC combined with CCSDS-IDC.

    PubMed

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

    A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually operates on a satellite, where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (such as the 3D DWT or 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged into the Slepian-Wolf (SW) DSC strategy, based on QC-LDPC codes, in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC combined with CCSDS-IDC (DSC-CCSDS)-based algorithm has better compression performance than traditional compression approaches.

  12. Advances in molecular labeling, high throughput imaging and machine intelligence portend powerful functional cellular biochemistry tools.

    PubMed

    Price, Jeffrey H; Goodacre, Angela; Hahn, Klaus; Hodgson, Louis; Hunter, Edward A; Krajewski, Stanislaw; Murphy, Robert F; Rabinovich, Andrew; Reed, John C; Heynen, Susanne

    2002-01-01

    Cellular behavior is complex. Successfully understanding systems at ever-increasing complexity is fundamental to advances in modern science, and unraveling the functional details of cellular behavior is no exception. We present a collection of perspectives to provide a glimpse of the techniques that will aid in collecting, managing and utilizing information on complex cellular processes via molecular imaging tools. These include: 1) visualizing intracellular protein activity with fluorescent markers, 2) high throughput (and automated) imaging of multilabeled cells in statistically significant numbers, and 3) machine intelligence to analyze subcellular image localization and pattern. Although not addressed here, the importance of combining cell-image-based information with detailed molecular structure and ligand-receptor binding models cannot be overlooked. Advanced molecular imaging techniques have the potential to impact cellular diagnostics for cancer screening, clinical correlations of tissue molecular patterns for cancer biology, and cellular molecular interactions for accelerating drug discovery. The goal of finally understanding all cellular components and behaviors will be achieved by advances in both instrumentation engineering (software and hardware) and molecular biochemistry. Copyright 2002 Wiley-Liss, Inc.

  13. Bio-inspired approach to multistage image processing

    NASA Astrophysics Data System (ADS)

    Timchenko, Leonid I.; Pavlov, Sergii V.; Kokryatskaya, Natalia I.; Poplavska, Anna A.; Kobylyanska, Iryna M.; Burdenyuk, Iryna I.; Wójcik, Waldemar; Uvaysova, Svetlana; Orazbekov, Zhassulan; Kashaganova, Gulzhan

    2017-08-01

    Multistage integration of visual information in the brain allows people to respond quickly to the most significant stimuli while preserving the ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing described in this paper comprises the main types of cortical multistage convergence. One of these types occurs within each visual pathway and the other between the pathways. This approach maps input images into a flexible hierarchy which reflects the complexity of the image data. The procedures of temporal image decomposition and hierarchy formation are described in mathematical terms. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image that encapsulates, in a compact manner, structure on different hierarchical levels of the image. At each processing stage a single output result is computed to allow a very quick response from the system. The result is represented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match.

  14. HIV-1 Tat protein promotes formation of more-processive elongation complexes.

    PubMed Central

    Marciniak, R A; Sharp, P A

    1991-01-01

    The Tat protein of HIV-1 trans-activates transcription in vitro in a cell-free extract of HeLa nuclei. Quantitative analysis of the efficiency of elongation revealed that a majority of the elongation complexes generated by the HIV-1 promoter were not highly processive and terminated within the first 500 nucleotides. Tat trans-activation of transcription from the HIV-1 promoter resulted from an increase in processive character of the elongation complexes. More specifically, the analysis suggests that there exist two classes of elongation complexes initiating from the HIV promoter: a less-processive form and a more-processive form. Addition of purified Tat protein was found to increase the abundance of the more-processive class of elongation complex. The purine nucleoside analog, 5,6-dichloro-1-beta-D-ribofuranosylbenzimidazole (DRB) inhibits transcription in this reaction by decreasing the efficiency of elongation. Surprisingly, stimulation of transcription elongation by Tat was preferentially inhibited by the addition of DRB. PMID:1756726

  15. Sparsity Aware Adaptive Radar Sensor Imaging in Complex Scattering Environments

    DTIC Science & Technology

    2015-06-15

    The report addresses waveform design while meeting the requirement on the peak-to-average power ratio, and studies the impact of waveform encoding on nonlinear electromagnetic tomographic imaging. Associated publications: Yuanwei Jin, Chengdon Dong, Enyue Lu, "Time Domain Electromagnetic Tomography Using Propagation and Backpropagation Method," IEEE International Conference on Image Processing; Yuanwei Jin, Chengdon Dong, Enyue Lu, "Waveform Encoding for Nonlinear Electromagnetic Tomographic Imaging," IEEE Global...

  16. Visualisation of urban airborne laser scanning data with occlusion images

    NASA Astrophysics Data System (ADS)

    Hinks, Tommy; Carr, Hamish; Gharibi, Hamid; Laefer, Debra F.

    2015-06-01

    Airborne Laser Scanning (ALS) was introduced to provide rapid, high resolution scans of landforms for computational processing. More recently, ALS has been adapted for scanning urban areas. The greater complexity of urban scenes necessitates the development of novel methods to exploit urban ALS to best advantage. This paper presents occlusion images: a novel technique that exploits the geometric complexity of the urban environment to improve visualisation of small details for better feature recognition. The algorithm is based on an inversion of traditional occlusion techniques.

  17. White blood cell segmentation by circle detection using electromagnetism-like optimization.

    PubMed

    Cuevas, Erik; Oliva, Diego; Díaz, Margarita; Zaldivar, Daniel; Pérez-Cisneros, Marco; Pajares, Gonzalo

    2013-01-01

    Medical imaging is a relevant field of application of image processing algorithms. In particular, the analysis of white blood cell (WBC) images has engaged researchers from fields of medicine and computer vision alike. Since WBCs can be approximated by a quasicircular form, a circular detector algorithm may be successfully applied. This paper presents an algorithm for the automatic detection of white blood cells embedded into complicated and cluttered smear images that considers the complete process as a circle detection problem. The approach is based on a nature-inspired technique called the electromagnetism-like optimization (EMO) algorithm which is a heuristic method that follows electromagnetism principles for solving complex optimization problems. The proposed approach uses an objective function which measures the resemblance of a candidate circle to an actual WBC. Guided by the values of such objective function, the set of encoded candidate circles are evolved by using EMO, so that they can fit into the actual blood cells contained in the edge map of the image. Experimental results from blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique regarding detection, robustness, and stability.
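The objective function described above, which measures how well a candidate circle fits the edge map, can be sketched as the fraction of circumference samples landing on edge pixels. The EMO population search itself is omitted, and all names below are illustrative rather than the paper's.

```python
import numpy as np

def circle_objective(edge_map, cx, cy, r, n_samples=64):
    """Score a candidate circle (cx, cy, r) against a binary edge map:
    the fraction of sampled circumference points that hit edge pixels.
    EMO would evolve a population of such candidates to maximize this."""
    t = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    xs = np.round(cx + r * np.cos(t)).astype(int)
    ys = np.round(cy + r * np.sin(t)).astype(int)
    h, w = edge_map.shape
    ok = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)  # discard out-of-bounds
    return edge_map[ys[ok], xs[ok]].sum() / n_samples

# Synthetic edge map: a rasterised circle of radius 10 centred at (20, 20).
edge = np.zeros((40, 40), dtype=int)
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
edge[np.round(20 + 10 * np.sin(t)).astype(int),
     np.round(20 + 10 * np.cos(t)).astype(int)] = 1

good = circle_objective(edge, 20, 20, 10)  # candidate matching the drawn circle
bad = circle_objective(edge, 20, 20, 5)    # candidate with the wrong radius
```

A matching candidate scores 1.0 here, while a circle of the wrong radius scores 0, which is the gradient-free signal the optimizer follows.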

  18. White Blood Cell Segmentation by Circle Detection Using Electromagnetism-Like Optimization

    PubMed Central

    Oliva, Diego; Díaz, Margarita; Zaldivar, Daniel; Pérez-Cisneros, Marco; Pajares, Gonzalo

    2013-01-01

    Medical imaging is a relevant field of application of image processing algorithms. In particular, the analysis of white blood cell (WBC) images has engaged researchers from fields of medicine and computer vision alike. Since WBCs can be approximated by a quasicircular form, a circular detector algorithm may be successfully applied. This paper presents an algorithm for the automatic detection of white blood cells embedded into complicated and cluttered smear images that considers the complete process as a circle detection problem. The approach is based on a nature-inspired technique called the electromagnetism-like optimization (EMO) algorithm which is a heuristic method that follows electromagnetism principles for solving complex optimization problems. The proposed approach uses an objective function which measures the resemblance of a candidate circle to an actual WBC. Guided by the values of such objective function, the set of encoded candidate circles are evolved by using EMO, so that they can fit into the actual blood cells contained in the edge map of the image. Experimental results from blood cell images with a varying range of complexity are included to validate the efficiency of the proposed technique regarding detection, robustness, and stability. PMID:23476713

  19. Multiple hypothesis tracking for cluttered biological image sequences.

    PubMed

    Chenouard, Nicolas; Bloch, Isabelle; Olivo-Marin, Jean-Christophe

    2013-11-01

    In this paper, we present a method for simultaneously tracking thousands of targets in biological image sequences, which is of major importance in modern biology. The complexity and inherent randomness of the problem lead us to propose a unified probabilistic framework for tracking biological particles in microscope images. The framework includes realistic models of particle motion and existence and of fluorescence image features. For the track extraction process per se, the very cluttered conditions motivate the adoption of a multiframe approach that enforces tracking decision robustness to poor imaging conditions and to random target movements. We tackle the large-scale nature of the problem by adapting the multiple hypothesis tracking algorithm to the proposed framework, resulting in a method with a favorable tradeoff between the model complexity and the computational cost of the tracking procedure. When compared to the state-of-the-art tracking techniques for bioimaging, the proposed algorithm is shown to be the only method providing high-quality results despite the critically poor imaging conditions and the dense target presence. We thus demonstrate the benefits of advanced Bayesian tracking techniques for the accurate computational modeling of dynamical biological processes, which is promising for further developments in this domain.
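The multiple hypothesis tracker of the paper defers association decisions across frames; that search is far too involved for a short sketch. As a contrast, here is a single-frame greedy gated nearest-neighbour association, the simple baseline that MHT improves on; all names are illustrative.

```python
import numpy as np

def greedy_associate(tracks, detections, gate=5.0):
    """Greedy gated nearest-neighbour association between track positions
    (N x 2) and detections (M x 2). This commits to one assignment per
    frame, unlike MHT, which keeps several association hypotheses alive
    over a window of frames before deciding."""
    pairs = []
    used = set()
    # pairwise Euclidean distance matrix, tracks x detections
    d = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
    for ti in np.argsort(d.min(axis=1)):       # most confident tracks first
        for di in np.argsort(d[ti]):           # nearest detection first
            if di not in used and d[ti, di] <= gate:
                pairs.append((int(ti), int(di)))
                used.add(di)
                break                          # track assigned; move on
    return pairs

tracks = np.array([[0.0, 0.0], [10.0, 10.0]])
dets = np.array([[9.5, 10.2], [0.3, -0.1]])
pairs = greedy_associate(tracks, dets)
```

In clutter, this greedy scheme makes early mistakes it cannot undo, which is exactly the failure mode the multiframe hypothesis tree is designed to avoid.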

  20. Proposed imaging of the ultrafast electronic motion in samples using x-ray phase contrast.

    PubMed

    Dixit, Gopal; Slowik, Jan Malte; Santra, Robin

    2013-03-29

    Tracing the motion of electrons has enormous relevance to understanding ubiquitous phenomena in ultrafast science, such as the dynamical evolution of the electron density during complex chemical and biological processes. Scattering of ultrashort x-ray pulses from an electronic wave packet would appear to be the most obvious approach to image the electronic motion in real time and real space with the notion that such scattering patterns, in the far-field regime, encode the instantaneous electron density of the wave packet. However, recent results by Dixit et al. [Proc. Natl. Acad. Sci. U.S.A. 109, 11636 (2012)] have put this notion into question and have shown that the scattering in the far-field regime probes spatiotemporal density-density correlations. Here, we propose a possible way to image the instantaneous electron density of the wave packet via ultrafast x-ray phase contrast imaging. Moreover, we show that inelastic scattering processes, which plague ultrafast scattering in the far-field regime, do not contribute in ultrafast x-ray phase contrast imaging as a consequence of an interference effect. We illustrate our general findings by means of a wave packet that lies in the time and energy range of the dynamics of valence electrons in complex molecular and biological systems. This present work offers a potential to image not only instantaneous snapshots of nonstationary electron dynamics, but also the Laplacian of these snapshots which provide information about the complex bonding and topology of the charge distributions in the systems.

  1. Proposed Imaging of the Ultrafast Electronic Motion in Samples using X-Ray Phase Contrast

    NASA Astrophysics Data System (ADS)

    Dixit, Gopal; Slowik, Jan Malte; Santra, Robin

    2013-03-01

    Tracing the motion of electrons has enormous relevance to understanding ubiquitous phenomena in ultrafast science, such as the dynamical evolution of the electron density during complex chemical and biological processes. Scattering of ultrashort x-ray pulses from an electronic wave packet would appear to be the most obvious approach to image the electronic motion in real time and real space with the notion that such scattering patterns, in the far-field regime, encode the instantaneous electron density of the wave packet. However, recent results by Dixit et al. [Proc. Natl. Acad. Sci. U.S.A. 109, 11 636 (2012)] have put this notion into question and have shown that the scattering in the far-field regime probes spatiotemporal density-density correlations. Here, we propose a possible way to image the instantaneous electron density of the wave packet via ultrafast x-ray phase contrast imaging. Moreover, we show that inelastic scattering processes, which plague ultrafast scattering in the far-field regime, do not contribute in ultrafast x-ray phase contrast imaging as a consequence of an interference effect. We illustrate our general findings by means of a wave packet that lies in the time and energy range of the dynamics of valence electrons in complex molecular and biological systems. This present work offers a potential to image not only instantaneous snapshots of nonstationary electron dynamics, but also the Laplacian of these snapshots which provide information about the complex bonding and topology of the charge distributions in the systems.

  2. Image processing of metal surface with structured light

    NASA Astrophysics Data System (ADS)

    Luo, Cong; Feng, Chang; Wang, Congzheng

    2014-09-01

    In a structured light vision measurement system, the ideal image of the structured light stripe contains, apart from the black background, only the gray-level information at the position of the stripe. The actual image, however, contains image noise, complex background, and other content that does not belong to the stripe, which interferes with the useful information. To extract the stripe center on a metal surface accurately, a new processing method is presented. Adaptive median filtering preliminarily removes the noise, and the noise introduced by the CCD camera and the measurement environment is further removed with a difference-image method. To highlight fine details and enhance the blurred regions between the stripe and the noise, a sharpening algorithm is used that combines the best features of the Laplacian and Sobel operators. Morphological opening and closing operations are used to compensate for the loss of information. Experimental results show that this method is effective in the image processing: it not only suppresses noise but also heightens contrast, which benefits the subsequent processing.
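The denoise-then-sharpen pipeline above can be illustrated with a plain 3x3 median filter followed by Sobel gradients. This is a deliberately reduced sketch: the adaptive median filter, the difference-image step, and the morphological operations of the actual method are not reproduced.

```python
import numpy as np

def median3(img):
    """Plain 3x3 median filter (border pixels kept as-is); a simple
    stand-in for the adaptive median filtering described above."""
    out = img.copy()
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = np.median(img[i-1:i+2, j-1:j+2])
    return out

def sobel_magnitude(img):
    """Gradient magnitude with Sobel kernels, highlighting stripe edges."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i-1:i+2, j-1:j+2]
            gx[i, j] = (patch * kx).sum()    # horizontal gradient
            gy[i, j] = (patch * kx.T).sum()  # vertical gradient
    return np.hypot(gx, gy)

noisy = np.zeros((5, 5), dtype=int)
noisy[2, 2] = 255                 # isolated salt-noise pixel
clean = median3(noisy)            # the outlier is removed

step = np.zeros((5, 5))
step[:, 3:] = 1.0                 # vertical intensity step
mag = sobel_magnitude(step)       # responds along the step edge
```

The median filter removes the isolated outlier because eight of the nine neighbourhood values are background, while the Sobel magnitude peaks only along the intensity step.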

  3. Research of processes of reception and analysis of dynamic digital medical images in hardware/software complexes used for diagnostics and treatment of cardiovascular diseases

    NASA Astrophysics Data System (ADS)

    Karmazikov, Y. V.; Fainberg, E. M.

    2005-06-01

    Work with DICOM-compatible equipment integrated into hardware and software systems for medical purposes is considered. The structure of the data reception and transformation process is illustrated by the example of the digital radiography and angiography systems included in the hardware-software complex DIMOL-IK. Algorithms for data reception and analysis are proposed, and questions of the further processing and storage of the received data are considered.

  4. Stochastic simulation by image quilting of process-based geological models

    NASA Astrophysics Data System (ADS)

    Hoffimann, Júlio; Scheidt, Céline; Barfod, Adrian; Caers, Jef

    2017-09-01

    Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time condition them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad-hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse, a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.

  5. Advanced metrology by offline SEM data processing

    NASA Astrophysics Data System (ADS)

    Lakcher, Amine; Schneider, Loïc.; Le-Gratiet, Bertrand; Ducoté, Julien; Farys, Vincent; Besacier, Maxime

    2017-06-01

    Today's technology nodes contain more and more complex designs bringing increasing challenges to chip manufacturing process steps. It is necessary to have an efficient metrology to assess process variability of these complex patterns and thus extract relevant data to generate process-aware design rules and to improve OPC models. Today, process variability is mostly addressed through the analysis of in-line monitoring features which are often designed to support robust measurements and as a consequence are not always very representative of critical design rules. CD-SEM is the main CD metrology technique used in the chip manufacturing process, but it is challenged when it comes to measuring metrics like tip to tip, tip to line, areas or necking in high quantity and with robustness. CD-SEM images contain a lot of information that is not always used in metrology. Suppliers have provided tools that allow engineers to extract the SEM contours of their features and to convert them into a GDS. Contours can be seen as the signature of the shape, as they contain all the dimensional data. Thus the methodology is to use the CD-SEM to take high-quality images, then generate SEM contours and create a database out of them. Contours are used to feed an offline metrology tool that will process them to extract different metrics. It was shown in two previous papers that it is possible to perform complex measurements on hotspots at different process steps (lithography, etch, copper CMP) by using SEM contours with an in-house offline metrology tool. In the current paper, the methodology presented previously will be expanded to improve its robustness and combined with the use of phylogeny to classify the SEM images according to their geometrical proximities.
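As a toy illustration of the kind of metric computed offline from contour or mask data, a tip-to-tip gap can be read off as the largest background run separating feature runs along a cut line. This is far simpler than the contour-based tool described above, and every name here is illustrative.

```python
import numpy as np

def tip_to_tip(mask, row):
    """Tip-to-tip gap (in pixels) between features along one row of a
    binary mask: the widest run of background pixels separating runs of
    foreground. Returns None if fewer than two foreground pixels exist."""
    cols = np.flatnonzero(mask[row])     # column indices of feature pixels
    if cols.size < 2:
        return None
    gaps = np.diff(cols) - 1             # background pixels between neighbours
    return int(gaps.max())

# Two horizontal line ends separated by a 6-pixel gap on row 2.
mask = np.zeros((5, 20), dtype=int)
mask[2, 2:8] = 1     # left line: columns 2..7
mask[2, 14:18] = 1   # right line: columns 14..17
gap = tip_to_tip(mask, 2)
```

Real SEM-contour metrology works on sub-pixel polygon contours rather than a binary raster, but the measurement concept (distance between opposing feature boundaries) is the same.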

  6. Image sequence analysis workstation for multipoint motion analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-08-01

    This paper describes an application-specific engineering workstation designed and developed to analyze the motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing and display techniques. In addition to automation and increased throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Gray-level image processing and spatial/temporal adaptation of the processing parameters are used for location and tracking of more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie loop playback, freeze frame display, and digital image enhancement; 3) multiple leading edge tracking, in addition to object centroids, at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.
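The per-field centroid measurement mentioned in feature 3) can be sketched as an intensity-weighted mean of thresholded pixels. This is a hypothetical minimal version, not the workstation's actual algorithm.

```python
import numpy as np

def centroid(frame, threshold):
    """Intensity-weighted centroid (row, col) of pixels above threshold;
    the basic per-field measurement a tracking workstation automates.
    Returns None when no pixel exceeds the threshold."""
    w = frame * (frame > threshold)      # keep only bright pixels
    total = w.sum()
    if total == 0:
        return None
    ys, xs = np.indices(frame.shape)
    return (float((ys * w).sum() / total), float((xs * w).sum() / total))

frame = np.zeros((8, 8))
frame[3:5, 4:6] = 10.0                   # a small bright blob
pos = centroid(frame, 1.0)
```

Tracking then reduces to computing this position on every field and linking positions over time; subpixel accuracy comes from the intensity weighting.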

  7. CryoEM and image sorting for flexible protein/DNA complexes.

    PubMed

    Villarreal, Seth A; Stewart, Phoebe L

    2014-07-01

    Intrinsically disordered regions of proteins and conformational flexibility within complexes can be critical for biological function. However, disorder, flexibility, and heterogeneity often hinder structural analyses. CryoEM and single particle image processing techniques offer the possibility of imaging samples with significant flexibility. Division of particle images into more homogenous subsets after data acquisition can help compensate for heterogeneity within the sample. We present the utility of an eigenimage sorting analysis for examining two protein/DNA complexes with significant conformational flexibility and heterogeneity. These complexes are integral to the non-homologous end joining pathway, and are involved in the repair of double strand breaks of DNA. Both complexes include the DNA-dependent protein kinase catalytic subunit (DNA-PKcs) and biotinylated DNA with bound streptavidin, with one complex containing the Ku heterodimer. Initial 3D reconstructions of the two DNA-PKcs complexes resembled a cryoEM structure of uncomplexed DNA-PKcs without additional density clearly attributable to the remaining components. Application of eigenimage sorting allowed division of the DNA-PKcs complex datasets into more homogeneous subsets. This led to visualization of density near the base of the DNA-PKcs that can be attributed to DNA, streptavidin, and Ku. However, comparison of projections of the subset structures with 2D class averages indicated that a significant level of heterogeneity remained within each subset. In summary, image sorting methods allowed visualization of extra density near the base of DNA-PKcs, suggesting that DNA binds in the vicinity of the base of the molecule and potentially to a flexible region of DNA-PKcs. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Mirion--a software package for automatic processing of mass spectrometric images.

    PubMed

    Paschke, C; Leisner, A; Hester, A; Maass, K; Guenther, S; Bouschen, W; Spengler, B

    2013-08-01

    Mass spectrometric imaging (MSI) techniques are of growing interest for the Life Sciences. In recent years, the development of new instruments employing ion sources that are tailored for spatial scanning allowed the acquisition of large data sets. A subsequent data processing, however, is still a bottleneck in the analytical process, as a manual data interpretation is impossible within a reasonable time frame. The transformation of mass spectrometric data into spatial distribution images of detected compounds turned out to be the most appropriate method to visualize the results of such scans, as humans are able to interpret images faster and easier than plain numbers. Image generation, thus, is a time-consuming and complex yet very efficient task. The free software package "Mirion," presented in this paper, allows the handling and analysis of data sets acquired by mass spectrometry imaging. Mirion can be used for image processing of MSI data obtained from many different sources, as it uses the HUPO-PSI-based standard data format imzML, which is implemented in the proprietary software of most of the mass spectrometer companies. Different graphical representations of the recorded data are available. Furthermore, automatic calculation and overlay of mass spectrometric images promotes direct comparison of different analytes for data evaluation. The program also includes tools for image processing and image analysis.
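The core transformation described, from mass spectrometric records to a spatial distribution image for one compound, can be sketched as binning intensities by pixel for a chosen m/z window. The names and data layout below are illustrative assumptions, not Mirion's API or the imzML format.

```python
import numpy as np

def ion_image(pixels, mz_list, intensities, shape, mz_target, tol=0.05):
    """Build a spatial distribution image for one m/z channel from flat
    per-record lists: pixel coordinates (y, x), m/z values, and
    intensities. Records within `tol` of `mz_target` are accumulated."""
    img = np.zeros(shape)
    sel = np.abs(np.asarray(mz_list) - mz_target) <= tol
    for (y, x), inten in zip(np.asarray(pixels)[sel],
                             np.asarray(intensities)[sel]):
        img[y, x] += inten
    return img

# Three records; only the first two fall in the 500.0 +/- 0.05 window.
img = ion_image(pixels=[(0, 0), (0, 1), (1, 1)],
                mz_list=[500.00, 500.02, 600.00],
                intensities=[5.0, 7.0, 9.0],
                shape=(2, 2), mz_target=500.0)
```

Overlaying several such channel images (one per analyte) is what enables the direct visual comparison of compound distributions the abstract describes.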

  9. Fuzzy Matching Based on Gray-scale Difference for Quantum Images

    NASA Astrophysics Data System (ADS)

    Luo, GaoFeng; Zhou, Ri-Gui; Liu, XingAo; Hu, WenWen; Luo, Jia

    2018-05-01

    Quantum image processing has recently emerged as an essential problem in practical tasks, e.g. real-time image matching. Previous studies have shown that the superposition and entanglement of quantum states can greatly improve the efficiency of complex image processing. In this paper, a fuzzy quantum image matching scheme based on gray-scale difference is proposed to find the target region in a reference image that is most similar to the template image. Firstly, we employ the novel enhanced quantum representation (NEQR) to store digital images. Then certain quantum operations are used to evaluate the gray-scale difference between two quantum images by thresholding. If all of the obtained gray-scale differences are not greater than the threshold value, it indicates a successful fuzzy matching of quantum images. Theoretical analysis and experiments show that the proposed scheme performs fuzzy matching at a low cost and also enables exponentially significant speedup via quantum parallel computation.
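The matching criterion itself has a simple classical analogue: a window of the reference matches the template when every absolute gray-scale difference is within the threshold. The quantum scheme evaluates these differences in superposition; the sketch below scans windows one by one and is purely illustrative.

```python
import numpy as np

def fuzzy_match(reference, template, threshold):
    """Return (row, col) positions where the reference window matches the
    template in the fuzzy sense: max absolute gray-scale difference
    <= threshold. Cast to int first to avoid uint8 wraparound."""
    th, tw = template.shape
    H, W = reference.shape
    hits = []
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            window = reference[y:y+th, x:x+tw].astype(int)
            if np.abs(window - template.astype(int)).max() <= threshold:
                hits.append((y, x))
    return hits

ref = np.array([[10, 10, 10, 10],
                [10, 50, 52, 10],
                [10, 49, 51, 10],
                [10, 10, 10, 10]], dtype=np.uint8)
tmpl = np.array([[50, 50], [50, 50]], dtype=np.uint8)
matches = fuzzy_match(ref, tmpl, threshold=2)
```

Tightening the threshold from 2 to 1 rejects the single match here, which is exactly the tolerance knob the fuzzy criterion provides.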

  10. Segmentation-free image processing and analysis of precipitate shapes in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Bales, Ben; Pollock, Tresa; Petzold, Linda

    2017-06-01

    Segmentation-based image analysis techniques are routinely employed for quantitative analysis of complex microstructures containing two or more phases. The primary advantage of these approaches is that spatial information on the distribution of phases is retained, enabling subjective judgements of the quality of the segmentation and of the subsequent analysis. The downside is that computing micrograph segmentations from morphologically complex microstructures imaged with error-prone detectors is challenging and, if no special care is taken, segmentation artifacts will make any subsequent analysis and conclusions uncertain. In this paper we demonstrate, using a two-phase nickel-base superalloy microstructure as a model system, a new methodology for analyzing precipitate shapes with a segmentation-free approach based on the histogram of oriented gradients (HOG) feature descriptor, a classic tool in image analysis. The benefits of this methodology for microstructure analysis in two and three dimensions are demonstrated.
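    The basic ingredient of the approach, the histogram of oriented gradients, can be sketched for a single cell (a simplified, hard-binning version; the full HOG descriptor adds block normalization and interpolation):

    ```python
    import numpy as np

    def hog_cell(patch, n_bins=9):
        """Unsigned-orientation gradient histogram for one cell, the basic
        building block of the HOG descriptor."""
        gy, gx = np.gradient(patch.astype(float))
        mag = np.hypot(gx, gy)
        ang = np.degrees(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
        bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
        hist = np.zeros(n_bins)
        np.add.at(hist, bins.ravel(), mag.ravel())     # magnitude-weighted votes
        return hist / (hist.sum() + 1e-9)              # L1-normalised

    # A vertical edge produces horizontal gradients, so the energy
    # falls into the 0-degree orientation bin.
    patch = np.tile([0, 0, 10, 10], (4, 1))
    h = hog_cell(patch)
    print(h.argmax())  # 0
    ```

    Because no segmentation is computed, descriptors like this can be gathered directly from raw micrographs, which is the point of the paper's methodology.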

  11. Virtual environments from panoramic images

    NASA Astrophysics Data System (ADS)

    Chapman, David P.; Deacon, Andrew

    1998-12-01

    A number of recent projects have demonstrated the utility of Internet-enabled image databases for documenting the complex, inaccessible and potentially hazardous environments typically encountered in the petrochemical and nuclear industries. Unfortunately, machine vision and image processing techniques have not, to date, enabled the automatic extraction of geometrical data from such images, so 3D CAD modeling remains an expensive and laborious manual activity. Recent developments in panoramic image capture and presentation offer an alternative intermediate deliverable which, in turn, offers some of the benefits of a 3D model at a fraction of the cost. Panoramic image display tools such as Apple's QuickTime VR (QTVR) and Live Spaces RealVR provide compelling and accessible digital representations of the real world and justifiably claim to 'put the reality in Virtual Reality.' This paper demonstrates how such technologies can be customized, extended and linked to facility management systems delivered over a corporate intranet, enabling end users to become familiar with remote sites and extract simple dimensional data. In addition, strategies for integrating such images with documents gathered from 2D or 3D CAD and Process and Instrumentation Diagrams (P&IDs) are described, as are techniques for precise 'as-built' modeling using the calibrated images from which the panoramas were derived, and the use of textures from these images to increase the realism of rendered scenes. A number of case studies from both the nuclear and process engineering sectors demonstrate the extent to which such solutions scale to the very large volumes of image data required to fully document the large, complex facilities typical of these industries.

  12. Research on image retrieval using deep convolutional neural network combining L1 regularization and PRelu activation function

    NASA Astrophysics Data System (ADS)

    QingJie, Wei; WenBin, Wang

    2017-06-01

    In this paper, image retrieval using a deep convolutional neural network combined with L1 regularization and the PReLU activation function is studied in order to improve retrieval accuracy. A deep convolutional neural network can not only simulate the way the human brain receives and transmits information, but also contains convolution operations that are very well suited to processing images; using one for image retrieval is better than directly extracting hand-crafted visual features. However, the structure of a deep convolutional neural network is complex, making it prone to over-fitting, which reduces retrieval accuracy. In this paper, we combine L1 regularization and the PReLU activation function to construct a deep convolutional neural network that prevents over-fitting and improves the accuracy of image retrieval.
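    The two ingredients the paper combines can be sketched in isolation; the slope `a` and penalty weight `lam` below are illustrative hyperparameters, not values from the paper:

    ```python
    import numpy as np

    def prelu(x, a):
        """PReLU activation: identity for x > 0, learnable slope a for x <= 0."""
        return np.where(x > 0, x, a * x)

    def l1_penalty(weights, lam):
        """L1 regularization term added to the loss; it drives small weights
        toward zero and so discourages over-fitting."""
        return lam * sum(np.abs(w).sum() for w in weights)

    x = np.array([-2.0, -0.5, 0.0, 3.0])
    print(prelu(x, a=0.25))                     # [-0.5   -0.125  0.     3.   ]
    w = [np.array([[1.0, -2.0]]), np.array([0.5])]
    print(round(l1_penalty(w, lam=0.1), 6))     # 0.35
    ```

    In a real network the PReLU slope is learned per channel and the L1 term is summed over all convolutional and fully connected weights.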

  13. Kernel Regression Estimation of Fiber Orientation Mixtures in Diffusion MRI

    PubMed Central

    Cabeen, Ryan P.; Bastin, Mark E.; Laidlaw, David H.

    2016-01-01

    We present and evaluate a method for kernel regression estimation of fiber orientations and associated volume fractions for diffusion MR tractography and population-based atlas construction in clinical imaging studies of brain white matter. This is a model-based image processing technique in which representative fiber models are estimated from collections of component fiber models in model-valued image data. This extends prior work in nonparametric image processing and multi-compartment processing to provide computational tools for image interpolation, smoothing, and fusion with fiber orientation mixtures. In contrast to related work on multi-compartment processing, this approach is based on directional measures of divergence and includes data-adaptive extensions for model selection and bilateral filtering. This is useful for reconstructing complex anatomical features in clinical datasets analyzed with the ball-and-sticks model, and our framework’s data-adaptive extensions are potentially useful for general multi-compartment image processing. We experimentally evaluate our approach with both synthetic data from computational phantoms and in vivo clinical data from human subjects. With synthetic data experiments, we evaluate performance based on errors in fiber orientation, volume fraction, compartment count, and tractography-based connectivity. With in vivo data experiments, we first show improved scan-rescan reproducibility and reliability of quantitative fiber bundle metrics, including mean length, volume, streamline count, and mean volume fraction. We then demonstrate the creation of a multi-fiber tractography atlas from a population of 80 human subjects. In comparison to single tensor atlasing, our multi-fiber atlas shows more complete features of known fiber bundles and includes reconstructions of the lateral projections of the corpus callosum and complex fronto-parietal connections of the superior longitudinal fasciculus I, II, and III. PMID:26691524
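    The idea of averaging fiber orientations while respecting their sign ambiguity (a fiber direction and its negation are the same orientation) can be sketched as follows; this simplified weighted mean stands in for the paper's divergence-based kernel regression:

    ```python
    import numpy as np

    def kernel_mean_orientation(dirs, weights):
        """Weighted mean of unit fiber orientations with antipodal symmetry:
        flip each direction to align with the first, then average the
        (kernel-)weighted vectors and renormalise. A simplified stand-in for
        model-valued kernel regression."""
        ref = dirs[0]
        aligned = np.array([d if d @ ref >= 0 else -d for d in dirs])
        m = (weights[:, None] * aligned).sum(axis=0)
        return m / np.linalg.norm(m)

    dirs = np.array([[1.0, 0.0, 0.0],
                     [-0.98, -0.2, 0.0],   # same fiber, opposite sign convention
                     [0.9, 0.1, 0.0]])
    w = np.array([0.5, 0.3, 0.2])          # e.g. spatial kernel weights
    v = kernel_mean_orientation(dirs, w)
    print(np.round(v, 3))
    ```

    The actual method additionally regresses volume fractions, selects the compartment count adaptively, and uses directional divergence measures rather than simple sign alignment.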

  14. F3D Image Processing and Analysis for Many - and Multi-core Platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    F3D is written in OpenCL, so it achieves platform-portable parallelism on modern multi-core CPUs and many-core GPUs. The interface and mechanisms to access the F3D core are written in Java as a plugin for Fiji/ImageJ, delivering several key image-processing algorithms needed to remove artifacts from micro-tomography data. The algorithms consist of data-parallel-aware filters that use resources efficiently, work on out-of-core datasets, and scale well across multiple accelerators. Optimizing for data-parallel filters, streaming of out-of-core datasets, and efficient resource, memory and data management over complex execution sequences of filters greatly expedites any scientific workflow with image processing requirements. F3D performs several different types of 3D image processing operations, such as non-linear filtering using bilateral filtering, median filtering and/or morphological operators (MM). F3D gray-level MM operators are one-pass, constant-time methods that perform morphological transformations with a line structuring element oriented in discrete directions. Additionally, MM operators can be applied to gray-scale images and consist of two parts: (a) a reference shape, or structuring element, which is translated over the image, and (b) a mechanism, or operation, that defines the comparisons to be performed between the image and the structuring element. This tool provides a critical component of many complex pipelines, such as those for automated segmentation of image stacks. F3D is a descendant of Quant-CT, another software package we developed previously; the two modules are to be integrated in a future version. Further details were reported in: D.M. Ushizima, T. Perciano, H. Krishnan, B. Loring, H. Bale, D. Parkinson, and J. Sethian. Structure recognition from high-resolution images of ceramic composites. IEEE International Conference on Big Data, October 2014.
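    A gray-level dilation with a line structuring element, the kind of MM operator described above, can be sketched in NumPy. This is a naive per-column loop over a 2D slice, not F3D's one-pass constant-time OpenCL implementation:

    ```python
    import numpy as np

    def dilate_line(img, length):
        """Gray-level dilation with a horizontal line structuring element:
        each pixel becomes the max over a centred window of `length` pixels
        (edge-replicated at the borders)."""
        pad = length // 2
        padded = np.pad(img, ((0, 0), (pad, pad)), mode='edge')
        out = np.empty_like(img)
        for c in range(img.shape[1]):
            out[:, c] = padded[:, c:c + length].max(axis=1)
        return out

    img = np.array([[0, 0, 5, 0, 0],
                    [1, 0, 0, 0, 2]])
    print(dilate_line(img, 3))
    # [[0 5 5 5 0]
    #  [1 1 0 2 2]]
    ```

    Erosion is the dual operation (min instead of max); orienting the line element in several discrete directions gives the directional transforms the abstract mentions.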

  15. Measuring glomerular number from kidney MRI images

    NASA Astrophysics Data System (ADS)

    Thiagarajan, Jayaraman J.; Natesan Ramamurthy, Karthikeyan; Kanberoglu, Berkay; Frakes, David; Bennett, Kevin; Spanias, Andreas

    2016-03-01

    Measuring the glomerular number in the entire, intact kidney using non-destructive techniques is of immense importance in studying several renal and systemic diseases. Commonly used approaches either require destruction of the entire kidney or extrapolate from measurements obtained on a few isolated sections. A recent magnetic resonance imaging (MRI) method, based on the injection of a contrast agent (cationic ferritin), has been used to effectively identify glomerular regions in the kidney. In this work, we propose a robust, accurate, and low-complexity method for estimating the number of glomeruli from such kidney MRI images. The proposed technique has a training phase and a low-complexity testing phase. In the training phase, organ segmentation is performed on a few expert-marked training images, and glomerular and non-glomerular image patches are extracted. Using non-local sparse coding to compute similarity and dissimilarity graphs between the patches, the subspace in which the glomerular regions can be discriminated from the rest is estimated. For novel test images, the image patches extracted after pre-processing are embedded using the discriminative subspace projections. The testing phase is of low computational complexity since it involves only matrix multiplications, clustering, and simple morphological operations. Preliminary results with MRI data obtained from five rat kidneys show that the proposed non-invasive, low-complexity approach performs comparably to conventional approaches such as acid maceration and stereology.
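    The low-complexity testing phase (one matrix multiplication followed by a nearest-centroid assignment) can be sketched as follows; all shapes, values and the two-centroid clustering are illustrative, not the paper's learned quantities:

    ```python
    import numpy as np

    def count_glomerular_patches(patches, projection, centroids):
        """Testing-phase sketch: embed patches with one matrix multiplication,
        then assign each embedding to its nearest centroid; centroid 0 stands
        for the 'glomerular' class."""
        z = patches @ projection                                   # embed
        d = ((z[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)                                  # cluster assignment
        return int((labels == 0).sum())

    # Toy "learned" projection: keep the first two of four patch features.
    projection = np.array([[1.0, 0.0],
                           [0.0, 1.0],
                           [0.0, 0.0],
                           [0.0, 0.0]])
    centroids = np.array([[5.0, 5.0],       # glomerular
                          [-5.0, -5.0]])    # non-glomerular
    patches = np.array([[5.0, 5.0, 0.3, 0.1],
                        [5.0, 6.0, 1.2, 0.4],
                        [-5.0, -5.0, 0.1, 0.2]])
    count = count_glomerular_patches(patches, projection, centroids)
    print(count)  # 2
    ```

    The paper's pipeline also applies morphological operations to clean up the detected regions before the final count.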

  16. Science, Technical Innovation and Applications in Bioacoustics: Summary of a Workshop

    DTIC Science & Technology

    2004-07-01

    ...binaural processing have been neglected. From a signal-processing standpoint, we should avoid complex computational methods and instead use massively... design and/or build transducers or arrays with anywhere near the performance and, most importantly, environmental adaptability of animal binaural...

  17. Computational Burden Resulting from Image Recognition of High Resolution Radar Sensors

    PubMed Central

    López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L.; Rufo, Elena

    2013-01-01

    This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. Starting from actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition are burdensome and time-consuming processes, so analysis of their computational complexity is of great interest in determining the most suitable implementation platform. To this end, and since target identification must be completed in real time, the computational burden of each process, image generation and database comparison, is examined separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation. PMID:23609804
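    The ISAR formation chain (range compression per pulse, then a Doppler transform across pulses) can be sketched for an idealised point scatterer; the echo model and FFT conventions below are illustrative, and motion compensation is omitted:

    ```python
    import numpy as np

    def isar_image(echo):
        """ISAR formation sketch: an FFT over the frequency axis compresses
        each pulse into a range profile; an FFT over the pulse axis then
        resolves cross-range (Doppler)."""
        profiles = np.fft.fft(echo, axis=1)          # range compression
        return np.abs(np.fft.fft(profiles, axis=0))  # Doppler processing

    # Idealised echo of one point scatterer: linear phase in both the
    # frequency-step index f and the pulse index p.
    n_pulses, n_freqs = 8, 16
    p, f = np.meshgrid(np.arange(n_pulses), np.arange(n_freqs), indexing='ij')
    echo = np.exp(2j * np.pi * (3 * f / n_freqs + 2 * p / n_pulses))
    img = isar_image(echo)
    peak = np.unravel_index(img.argmax(), img.shape)
    print(int(peak[0]), int(peak[1]))  # 2 3
    ```

    The scatterer focuses at Doppler bin 2 and range bin 3, matching the two linear-phase slopes in the synthetic echo; real data would first require the motion compensation step discussed in the paper.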

  18. Computational burden resulting from image recognition of high resolution radar sensors.

    PubMed

    López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L; Rufo, Elena

    2013-04-22

    This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. Starting from actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition are burdensome and time-consuming processes, so analysis of their computational complexity is of great interest in determining the most suitable implementation platform. To this end, and since target identification must be completed in real time, the computational burden of each process, image generation and database comparison, is examined separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation.

  19. Photoelectron imaging of doped helium nanodroplets

    NASA Astrophysics Data System (ADS)

    Neumark, Daniel

    2008-03-01

    Photoelectron images of helium nanodroplets doped with Kr and Ne atoms are reported. The images and resulting photoelectron spectra were obtained using tunable synchrotron radiation to ionize the droplets. Droplets were excited at 21.6 eV, corresponding to a strong droplet electronic excitation. The rare gas dopant is then ionized via a Penning excitation transfer process. The electron kinetic energy distributions reflect complex ionization and electron escape dynamics.

  20. Implementation of Canny and Isotropic Operator with Power Law Transformation to Identify Cervical Cancer

    NASA Astrophysics Data System (ADS)

    Amalia, A.; Rachmawati, D.; Lestari, I. A.; Mourisa, C.

    2018-03-01

    Colposcopy is used primarily to diagnose pre-cancerous and cancerous lesions because the procedure gives a magnified view of the tissues of the vagina and cervix. The poor quality of colposcopy images, however, sometimes makes it challenging for physicians to recognize and analyze them. Implementations of image processing to identify cervical cancer generally rely on complex classification or clustering methods. In this study, we wanted to show that cervical cancer can be identified by applying edge detection alone to the colposcopy image. We implement and compare two edge-detection operators, the isotropic operator and the Canny operator. The research methodology comprises image processing, training, and testing stages. In the image processing stage, the colposcopy image is transformed by an nth-root power-law transformation to improve detection, followed by edge detection. Training is the process of labelling every dataset image with its cervical cancer stage; a pathologist was involved as an expert, providing reference diagnoses of the colposcopy images. Testing decides the cancer-stage classification by comparing the similarity of a test image with the images from the training process. We used 30 images as a dataset. Both operators achieve the same accuracy of 80%. The average running time is 0.3619206 ms for the Canny operator and 1.49136262 ms for the isotropic operator. The results show that the Canny operator is preferable, as it generates more precise edges in less time.
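    The image processing stage (power-law transformation followed by edge detection) can be sketched as follows; the plain gradient-magnitude operator below stands in for the Canny and isotropic operators actually compared, and the test image is synthetic:

    ```python
    import numpy as np

    def power_law(img, gamma):
        """Power-law (gamma) transform s = 255 * (r/255)**gamma; a fractional
        gamma (an nth root) brightens dark regions before edge detection."""
        return 255.0 * (img / 255.0) ** gamma

    def edge_magnitude(img):
        """Gradient-magnitude edge map, a simple stand-in for the Canny and
        isotropic operators compared in the paper."""
        gy, gx = np.gradient(img.astype(float))
        return np.hypot(gx, gy)

    img = np.array([[16.0, 16.0, 64.0, 64.0]] * 4)   # dark edge in a dark image
    bright = power_law(img, gamma=0.5)               # square-root transform
    # The edge between the two regions responds more strongly after the transform.
    print(edge_magnitude(bright).max() > edge_magnitude(img).max())  # True
    ```

    This is why the paper applies the nth-root transform first: it stretches low-intensity contrast so that edges in dark colposcopy regions become easier to detect.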

  1. ImSyn: photonic image synthesis applied to synthetic aperture radar, microscopy, and ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Turpin, Terry M.; Lafuse, James L.

    1993-02-01

    ImSyn™ is an image synthesis technology, developed and patented by Essex Corporation. ImSyn™ can provide compact, low cost, and low power solutions to some of the most difficult image synthesis problems existing today. The inherent simplicity of ImSyn™ enables the manufacture of low cost and reliable photonic systems for imaging applications ranging from airborne reconnaissance to doctor's office ultrasound. The initial application of ImSyn™ technology has been to SAR processing; however, it has a wide range of applications, such as image correlation, image compression, acoustic imaging, x-ray tomography (CAT, PET, SPECT), magnetic resonance imaging (MRI), microscopy, and range-Doppler mapping (extended TDOA/FDOA). This paper describes ImSyn™ in terms of synthetic aperture microscopy and then shows how the technology can be extended to ultrasound and synthetic aperture radar. The synthetic aperture microscope (SAM) enables high resolution three-dimensional microscopy with greater dynamic range than real aperture microscopes. SAM produces complex image data, enabling the use of coherent image processing techniques. Most importantly, SAM produces the image data in a form that is easily manipulated by a digital image processing workstation.

  2. Design and implementation of non-linear image processing functions for CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Musa, Purnawarman; Sudiro, Sunny A.; Wibowo, Eri P.; Harmanto, Suryadi; Paindavoine, Michel

    2012-11-01

    Today, solid state image sensors are used in many applications, such as mobile phones, video surveillance systems, embedded medical imaging and industrial vision systems. These image sensors require the integration, in or near the focal plane, of complex image processing algorithms. Such devices must meet constraints related to the quality of acquired images, the speed and performance of embedded processing, and low power consumption. To achieve these objectives, low-level analog processing allows the useful information in the scene to be extracted directly. For example, an edge detection step followed by local maxima extraction facilitates high-level processing such as object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (such as local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64 pixel image sensor built in a standard 0.35 μm CMOS technology including non-linear image processing. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of an analog Minima/Maxima Unit (MMU). This MMU calculates the minimum and maximum values (non-linear functions), in real time, over a 2×2 pixel neighbourhood. Each MMU needs 52 transistors and the pixel pitch is 40×40 μm. The total area of the 64×64 pixel array is 12.5 mm². Our tests have shown the validity of the main functions of our new image sensor, such as fast image acquisition (10K frames per second) and minima/maxima calculation in less than one ms.
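    The MMU's behaviour is easy to model in software. The sketch below computes minima and maxima over non-overlapping 2×2 neighbourhoods; whether the hardware windows overlap is an assumption here, since the article excerpt does not say:

    ```python
    import numpy as np

    def mmu(img):
        """Software model of the sensor's Minima/Maxima Unit: min and max
        over each non-overlapping 2x2 pixel neighbourhood.
        Requires even image dimensions."""
        h, w = img.shape
        blocks = img.reshape(h // 2, 2, w // 2, 2)
        return blocks.min(axis=(1, 3)), blocks.max(axis=(1, 3))

    img = np.array([[1, 2, 8, 9],
                    [3, 4, 7, 6]])
    mins, maxs = mmu(img)
    print(mins)  # [[1 6]]
    print(maxs)  # [[4 9]]
    ```

    In the chip this reduction is done in the analog domain by the 52-transistor MMU, in parallel with acquisition, rather than digitally after readout.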

  3. Simultaneous realization of Hg2+ sensing, magnetic resonance imaging and upconversion luminescence in vitro and in vivo bioimaging based on hollow mesoporous silica coated UCNPs and ruthenium complex

    NASA Astrophysics Data System (ADS)

    Ge, Xiaoqian; Sun, Lining; Ma, Binbin; Jin, Di; Dong, Liang; Shi, Liyi; Li, Nan; Chen, Haige; Huang, Wei

    2015-08-01

    We have constructed a multifunctional nanoprobe with sensing and imaging properties by using hollow mesoporous silica coated upconversion nanoparticles (UCNPs) and a Hg2+-responsive ruthenium (Ru) complex. The Ru complex was loaded into the hollow mesoporous silica, and the UCNPs acted as an energy donor, transferring luminescence energy to the Ru complex. Furthermore, polyethylenimine (PEI) was assembled on the surface of the mesoporous silica to achieve better hydrophilicity and biocompatibility. Upon addition of Hg2+, a blue shift of the absorption peak of the Ru complex is observed and the energy transfer process between the UCNPs and the Ru complex is blocked, resulting in an increase of the green emission intensity of the UCNPs. The unchanged 801 nm emission of the nanoprobe was used as an internal standard reference, and the detection limit of Hg2+ was determined to be 0.16 μM for this nanoprobe in aqueous solution. In addition, given the low cytotoxicity found in a CCK-8 assay, the nanoprobe was successfully applied to cell imaging and small-animal imaging. Furthermore, when doped with Gd3+ ions, the nanoprobe was successfully applied to in vivo magnetic resonance imaging (MRI) of Kunming mice, which demonstrates its potential as a positive MRI contrast agent. Therefore, the method and results may provide exciting opportunities for nanoprobes with multimodal bioimaging and multifunctional applications.

  4. Radiography for intensive care: participatory process analysis in a PACS-equipped and film/screen environment

    NASA Astrophysics Data System (ADS)

    Peer, Regina; Peer, Siegfried; Sander, Heike; Marsolek, Ingo; Koller, Wolfgang; Pappert, Dirk; Hierholzer, Johannes

    2002-05-01

    If new technology is introduced into medical practice, it must prove to make a difference. However, traditional approaches to outcome analysis have failed to show a direct benefit of PACS on patient care, and the economic benefits are still in debate. A participatory process analysis was performed to compare workflow in a film-based hospital and a PACS environment. This included direct observation of work processes, interviews with involved staff, structural analysis, and discussion of observations with staff members. After definition of common structures, strong and weak workflow steps were evaluated. With a common workflow structure in both hospitals, benefits of PACS were revealed in workflow steps related to image reporting, with simultaneous image access for ICU physicians and radiologists, archiving of images, and image and report distribution. However, PACS alone is not able to cover the complete process of 'radiography for intensive care' from the ordering of an image to the provision of the final product (image + report). Interference of electronic workflow with analogue process steps, such as paper-based ordering, reduces the potential benefits of PACS. In this regard, workflow modeling proved to be very helpful for the evaluation of complex work processes linking radiology and the ICU.

  5. [Quantitative data analysis for live imaging of bone].

    PubMed

    Seno, Shigeto

    Because bone is a hard tissue, it has long been difficult to observe the interior of living bone. With recent progress in microscopy and fluorescent probe technology, it has become possible to observe the various activities of the many cell types that make up bone tissue. On the other hand, the quantitative increase in data and the diversification and complexity of the images make quantitative analysis by visual inspection difficult, so methodologies for microscopic image processing and data analysis have been awaited. In this article, we introduce the research field of bioimage informatics, which lies at the boundary of biology and information science, and then outline basic image processing techniques for quantitative analysis of live imaging data of bone.

  6. Quantum image median filtering in the spatial domain

    NASA Astrophysics Data System (ADS)

    Li, Panchi; Liu, Xiande; Xiao, Hong

    2018-03-01

    Spatial filtering is a principal tool used in image processing for a broad spectrum of applications. Median filtering has become a prominent form of spatial filtering because of its excellent performance in noise reduction. Although filtering of quantum images in the frequency domain has been described in the literature, and there is a one-to-one correspondence between linear spatial filters and filters in the frequency domain, median filtering is a nonlinear process that cannot be achieved in the frequency domain. We therefore investigated spatial filtering of quantum images, focusing on the design of a quantum median filter and its application to image de-noising. To this end, we first present the quantum circuits for three basic modules (Cycle Shift, Comparator, and Swap), and then design two composite modules (Sort and Median Calculation). We next construct a complete quantum circuit that implements the median filtering task and present the results of several simulation experiments on grayscale images with different noise patterns. Although the experimental results show that the proposed scheme has almost the same noise suppression capacity as its classical counterpart, the complexity analysis shows that the proposed scheme reduces the computational complexity of the classical median filter from an exponential function of the image size n to a second-order polynomial function of n, so that the classical method can be sped up.
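    The classical counterpart that the quantum circuit reproduces is the ordinary median filter; a 3×3 NumPy sketch:

    ```python
    import numpy as np

    def median_filter(img):
        """Classical 3x3 median filter (edge-replicated borders): replace
        each pixel by the median of its neighbourhood, which suppresses
        salt-and-pepper noise while preserving edges better than averaging."""
        padded = np.pad(img, 1, mode='edge')
        out = np.empty_like(img)
        for r in range(img.shape[0]):
            for c in range(img.shape[1]):
                out[r, c] = np.median(padded[r:r + 3, c:c + 3])
        return out

    noisy = np.array([[10, 10, 10],
                      [10, 255, 10],   # isolated "salt" pixel
                      [10, 10, 10]])
    print(median_filter(noisy))
    # [[10 10 10]
    #  [10 10 10]
    #  [10 10 10]]
    ```

    The quantum Sort and Median Calculation modules perform this same per-window ranking, but over pixel values stored in superposition.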

  7. Comparing an FPGA to a Cell for an Image Processing Application

    NASA Astrophysics Data System (ADS)

    Rakvic, Ryan N.; Ngo, Hau; Broussard, Randy P.; Ives, Robert W.

    2010-12-01

    Modern advancements in configurable hardware, most notably Field-Programmable Gate Arrays (FPGAs), have provided an exciting opportunity to exploit the parallel nature of modern image processing algorithms. On the other hand, PlayStation 3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms with high performance. In this research project, our aim is to study the differences in performance of a modern image processing algorithm on these two hardware platforms. In particular, iris recognition systems have recently become an attractive identification method because of their extremely high accuracy. Iris matching, a repeatedly executed portion of a modern iris recognition algorithm, is parallelized on an FPGA system and on a Cell processor. We demonstrate a 2.5 times speedup of the parallelized algorithm on the FPGA system when compared to the Cell processor-based version.

  8. Generation of synthetic image sequences for the verification of matching and tracking algorithms for deformation analysis

    NASA Astrophysics Data System (ADS)

    Bethmann, F.; Jepping, C.; Luhmann, T.

    2013-04-01

    This paper reports on a method for generating synthetic image data for almost arbitrary static or dynamic 3D scenarios. Image data generation is based on pre-defined 3D objects, object textures, camera orientation data and imaging properties. The procedure does not aim at photo-realistic images based on the complex imaging and reflection models used by common computer graphics programs. In contrast, the method is designed with the main emphasis on geometrically correct synthetic images without radiometric effects. The calculation process includes photogrammetric distortion models, hence cameras with arbitrary geometric imaging characteristics can be applied. Consequently, image sets can be created that are consistent with mathematical photogrammetric models, to be used as sub-pixel accurate data for the assessment of high-precision photogrammetric processing methods. The paper first describes the process of image simulation, covering colour value interpolation, MTF/PSF and so on. Subsequently, the geometric quality of the synthetic images is evaluated with ellipse operators. Finally, simulated image sets are used to investigate matching and tracking algorithms as developed at IAPG for deformation measurement in car safety testing.

  9. Systems Biology-Driven Hypotheses Tested In Vivo: The Need to Advancing Molecular Imaging Tools.

    PubMed

    Verma, Garima; Palombo, Alessandro; Grigioni, Mauro; La Monaca, Morena; D'Avenio, Giuseppe

    2018-01-01

    Processing and interpretation of biological images may provide invaluable insights into complex, living systems, because images capture the overall dynamics as a "whole." Therefore, extraction of key quantitative morphological parameters could be, at least in principle, helpful in building a reliable systems biology approach to understanding living objects. Molecular imaging tools for systems biology models have attained widespread usage in modern experimental laboratories. Here, we provide an overview of advances in computational technology and the different instrumentation focused on molecular image processing and analysis. Quantitative data analysis through various open source software packages and algorithmic protocols provides a novel approach for modeling an experimental research program. Besides this, we also highlight foreseeable future trends regarding methods for automatically analyzing biological data. Such tools will be very useful for understanding detailed biological and mathematical expressions in in-silico systems biology processes with modeling properties.

  10. A Critical Review of Automated Photogrammetric Processing of Large Datasets

    NASA Astrophysics Data System (ADS)

    Remondino, F.; Nocerino, E.; Toschi, I.; Menna, F.

    2017-08-01

    The paper reports some comparisons between commercial software able to automatically process image datasets for 3D reconstruction purposes. The main aspects investigated in the work are the capability to correctly orient large sets of images of complex environments, the metric quality of the results, replicability and redundancy. Different datasets are employed, each one featuring a diverse number of images, GSDs at cm and mm resolutions, and ground truth information to perform statistical analyses of the 3D results. A summary of (photogrammetric) terms is also provided, in order to establish rigorous terms of reference for comparisons and critical analyses.

  11. The image enhancement and region of interest extraction of lobster-eye X-ray dangerous material inspection system

    NASA Astrophysics Data System (ADS)

    Zhan, Qi; Wang, Xin; Mu, Baozhong; Xu, Jie; Xie, Qing; Li, Yaran; Chen, Yifan; He, Yanan

    2016-10-01

    Dangerous materials inspection is an important technique for confirming dangerous materials crimes. It has a significant impact on the prohibition of dangerous materials-related crimes and the spread of dangerous materials. The Lobster-Eye Optical Imaging System is a dangerous materials detection device that mainly takes advantage of backscattered X-rays. The strength of the system is its applicability when only one side of an object is accessible, and its ability to detect dangerous materials without disturbing the surroundings of the target material. The device uses Compton-scattered X-rays to create computerized outlines of suspected objects during the security detection process. Because of the grid structure of the bionic objective glass, which imitates the eye of a lobster, grids contribute the main image noise during the imaging process. At the same time, when the system is used to inspect structured or dense materials, the image is plagued by superposition artifacts and limited by attenuation and noise. With the goal of achieving high-quality images that can be used for dangerous materials detection and further analysis, we developed effective image processing methods for the system. The first stage is denoising and edge-contrast enhancement, in which we apply a deconvolution algorithm to remove the grids and other noise, yielding a high signal-to-noise-ratio image. The second stage reconstructs images acquired under low-dose X-ray exposure conditions, for which we developed an interpolation method. The last stage is region of interest (ROI) extraction, which helps identify dangerous materials mixed with complex backgrounds. The methods demonstrated in the paper have the potential to improve the sensitivity and image quality of X-ray backscatter systems.
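
    The abstract does not specify which deconvolution algorithm the authors used. As a hedged illustration of how grid noise with a known point-spread function could be removed, the following is a minimal frequency-domain Wiener deconvolution sketch in NumPy; the function name, the regularization constant `k`, and the circular-convolution assumption are all illustrative, not taken from the paper.

```python
import numpy as np

def wiener_deconvolve(image, psf, k=0.01):
    """Frequency-domain Wiener deconvolution.

    image: 2-D array degraded by (circular) convolution with `psf`.
    psf:   point-spread function, smaller than the image.
    k:     noise-to-signal power ratio (regularization constant).
    """
    # Pad the PSF to the image size and centre it at the origin.
    psf_pad = np.zeros_like(image, dtype=float)
    ph, pw = psf.shape
    psf_pad[:ph, :pw] = psf
    psf_pad = np.roll(psf_pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))

    H = np.fft.fft2(psf_pad)
    G = np.fft.fft2(image)
    # Wiener filter: H* / (|H|^2 + k) applied to the degraded spectrum.
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G
    return np.real(np.fft.ifft2(F))
```

    A larger `k` trades sharpness for robustness to noise; with a well-characterized grid PSF and low noise, a small `k` recovers the underlying image closely.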

  12. Cone beam tomographic imaging anatomy of the maxillofacial region.

    PubMed

    Angelopoulos, Christos

    2008-10-01

    Multiplanar imaging is a fairly new concept in diagnostic imaging available with a number of contemporary imaging modalities such as CT, MR imaging, diagnostic ultrasound, and others. This modality allows reconstruction of images in different planes (flat or curved) from a volume of data acquired previously. This concept makes the diagnostic process more interactive, and proper use may increase diagnostic potential. At the same time, the complexity of the anatomical structures of the maxillofacial region may make these images harder to interpret. This article reviews the anatomy of maxillofacial structures in planar imaging, and more specifically in cone-beam CT images.

  13. Multiparametric Imaging of Organ System Interfaces

    PubMed Central

    Vandoorne, Katrien; Nahrendorf, Matthias

    2017-01-01

    Cardiovascular diseases are a consequence of genetic and environmental risk factors that together generate arterial wall and cardiac pathologies. Blood vessels connect multiple systems throughout the entire body and allow organs to interact via circulating messengers. These same interactions facilitate nervous and metabolic system influence on cardiovascular health. Multiparametric imaging offers the opportunity to study these interfacing systems’ distinct processes, to quantify their interactions and to explore how these contribute to cardiovascular disease. Noninvasive multiparametric imaging techniques are emerging tools that can further our understanding of this complex and dynamic interplay. PET/MRI and multichannel optical imaging are particularly promising because they can simultaneously sample multiple biomarkers. Preclinical multiparametric diagnostics could help discover clinically relevant biomarker combinations pivotal for understanding cardiovascular disease. Interfacing systems important to cardiovascular disease include the immune, nervous and hematopoietic systems. These systems connect with ‘classical’ cardiovascular organs, like the heart and vasculature, and with the brain. The dynamic interplay between these systems and organs enables processes such as hemostasis, inflammation, angiogenesis, matrix remodeling, metabolism and fibrosis. As the opportunities provided by imaging expand, mapping interconnected systems will help us decipher the complexity of cardiovascular disease and monitor novel therapeutic strategies. PMID:28360260

  14. Architecture of the parallel hierarchical network for fast image recognition

    NASA Astrophysics Data System (ADS)

    Timchenko, Leonid; Wójcik, Waldemar; Kokriatskaia, Natalia; Kutaev, Yuriy; Ivasyuk, Igor; Kotyra, Andrzej; Smailova, Saule

    2016-09-01

    Multistage integration of visual information in the brain allows humans to respond quickly to most significant stimuli while maintaining their ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing includes the main types of cortical multistage convergence. The input images are mapped into a flexible hierarchy that reflects the complexity of the image data. Procedures of the temporal image decomposition and hierarchy formation are described in mathematical expressions. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image that encapsulates structure at different hierarchical levels of the image. At each processing stage a single output result is computed to allow a quick response of the system. The result is presented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match. The idea of the forecasting method is as follows: in the results synchronization block, network-processed data arrive at a database, where a sample of the most correlated data is drawn using service parameters of the parallel-hierarchical network.

  15. Noninvasive imaging of protein-protein interactions in living organisms.

    PubMed

    Haberkorn, Uwe; Altmann, Annette

    2003-06-01

    Genomic research is expected to generate new types of complex observational data, changing the types of experiments as well as our understanding of biological processes. The investigation and definition of relationships among proteins is essential for understanding the function of each gene and the mechanisms of biological processes that specific genes are involved in. Recently, a study by Paulmurugan et al. demonstrated a tool for in vivo noninvasive imaging of protein-protein interactions and intracellular networks.

  16. New solutions and applications of 3D computer tomography image processing

    NASA Astrophysics Data System (ADS)

    Effenberger, Ira; Kroll, Julia; Verl, Alexander

    2008-02-01

    As industry today aims at fast, high-quality product development and manufacturing processes, modern and efficient quality inspection is essential. Compared to conventional measurement technologies, industrial computer tomography (CT) is a non-destructive technology for 3D image data acquisition which overcomes their disadvantages by offering the possibility to scan complex parts with all outer and inner geometric features. In this paper new and optimized methods for 3D image processing, including innovative approaches to surface reconstruction and automatic geometric feature detection of complex components, are presented, in particular our work on smart online data processing and data handling methods with integrated intelligent online mesh reduction, which guarantees the processing of huge, high-resolution data sets. In addition, new approaches for surface reconstruction and segmentation based on statistical methods are demonstrated. On the extracted 3D point cloud or surface triangulation, automated and precise algorithms for geometric inspection are deployed. All algorithms are applied to different real data sets generated by computer tomography in order to demonstrate the capabilities of the new tools. Since CT is an emerging technology for non-destructive testing and inspection, more and more industrial application fields will use and profit from it.

  17. Analysis and improvement of the quantum image matching

    NASA Astrophysics Data System (ADS)

    Dang, Yijie; Jiang, Nan; Hu, Hao; Zhang, Wenyin

    2017-11-01

    We investigate the quantum image matching algorithm proposed by Jiang et al. (Quantum Inf Process 15(9):3543-3572, 2016). Although the complexity of this algorithm is much better than that of the classical exhaustive algorithm, there may be an error in it: after matching the area between two images, only the pixel at the upper left corner of the matched area plays a part in the following steps. That is to say, the paper only matched one pixel, instead of an area. If more than one pixel in the big image is the same as the one at the upper left corner of the small image, the algorithm will randomly measure one of them, which causes the error. In this paper, an improved version is presented which takes full advantage of the whole matched area to locate a small image in a big image. The theoretical analysis indicates that the network complexity is higher than that of the previous algorithm, but it is still far lower than that of the classical algorithm. Hence, the algorithm is still efficient.

  18. 3D printed optical phantoms and deep tissue imaging for in vivo applications including oral surgery

    NASA Astrophysics Data System (ADS)

    Bentz, Brian Z.; Costas, Alfonso; Gaind, Vaibhav; Garcia, Jose M.; Webb, Kevin J.

    2017-03-01

    Progress in developing optical imaging for biomedical applications requires customizable and often complex objects known as "phantoms" for testing, evaluation, and calibration. This work demonstrates that 3D printing is an ideal method for fabricating such objects, allowing intricate inhomogeneities to be placed at exact locations in complex or anatomically realistic geometries, a process that is difficult or impossible using molds. We show printed mouse phantoms we have fabricated for developing deep tissue fluorescence imaging methods, and measurements of both their optical and mechanical properties. Additionally, we present a printed phantom of the human mouth that we use to develop an artery localization method to assist in oral surgery.

  19. Three-dimensional Bragg coherent diffraction imaging of an extended ZnO crystal.

    PubMed

    Huang, Xiaojing; Harder, Ross; Leake, Steven; Clark, Jesse; Robinson, Ian

    2012-08-01

    A complex three-dimensional quantitative image of an extended zinc oxide (ZnO) crystal has been obtained using Bragg coherent diffraction imaging integrated with ptychography. By scanning a 2.5 µm-long arm of a ZnO tetrapod across a 1.3 µm X-ray beam with fine step sizes while measuring a three-dimensional diffraction pattern at each scan spot, the three-dimensional electron density and projected displacement field of the entire crystal were recovered. The simultaneously reconstructed complex wavefront of the illumination combined with its coherence properties determined by a partial coherence analysis implemented in the reconstruction process provide a comprehensive characterization of the incident X-ray beam.

  20. Quantitative real-time analysis of collective cancer invasion and dissemination

    NASA Astrophysics Data System (ADS)

    Ewald, Andrew J.

    2015-05-01

    A grand challenge in biology is to understand the cellular and molecular basis of tissue and organ level function in mammals. The ultimate goals of such efforts are to explain how organs arise in development from the coordinated actions of their constituent cells and to determine how molecularly regulated changes in cell behavior alter the structure and function of organs during disease processes. Two major barriers stand in the way of achieving these goals: the relative inaccessibility of cellular processes in mammals and the daunting complexity of the signaling environment inside an intact organ in vivo. To overcome these barriers, we have developed a suite of tissue isolation, three dimensional (3D) culture, genetic manipulation, nanobiomaterials, imaging, and molecular analysis techniques to enable the real-time study of cell biology within intact tissues in physiologically relevant 3D environments. This manuscript introduces the rationale for 3D culture, reviews challenges to optical imaging in these cultures, and identifies current limitations in the analysis of complex experimental designs that could be overcome with improved imaging, imaging analysis, and automated classification of the results of experimental interventions.

  1. Novel algorithm by low complexity filter on retinal vessel segmentation

    NASA Astrophysics Data System (ADS)

    Rostampour, Samad

    2011-10-01

    This article presents a new method to detect blood vessels in the retina from digital images. Retinal vessel segmentation is important for detecting side effects of diabetes, because diabetes can form new capillaries which are very brittle. The research was done in two phases: preprocessing and processing. The preprocessing phase applies a new filter that produces a suitable output: it shows vessels in dark color on a white background and creates good contrast between vessels and background. Its complexity is very low and extraneous image content is eliminated. The second phase, processing, uses a supervised Bayesian classification method, which uses the mean and variance of pixel intensities to calculate class probabilities. Finally, the pixels of the image are divided into two classes: vessels and background. The images used are from the DRIVE database. After performing this operation, the calculation gives an average accuracy of 95 percent. The method was also applied to a retinopathy sample from outside the DRIVE database, and excellent results were obtained.
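
    The supervised Bayesian step described above can be sketched as follows, assuming Gaussian class-conditional intensity distributions (the abstract only states that class means and variances are used; the function names and the log-likelihood formulation here are illustrative, not the paper's implementation):

```python
import numpy as np

def fit_gaussian(pixels):
    """Return (mean, variance) of a 1-D training sample of pixel intensities."""
    return float(np.mean(pixels)), float(np.var(pixels) + 1e-12)

def classify_pixels(image, vessel_stats, background_stats, priors=(0.5, 0.5)):
    """Label each pixel vessel (True) or background (False) by the larger
    Gaussian class-conditional log-likelihood plus log-prior."""
    def log_posterior(x, stats, prior):
        mu, var = stats
        return (-0.5 * np.log(2 * np.pi * var)
                - (x - mu) ** 2 / (2 * var)
                + np.log(prior))

    x = image.astype(float)
    ll_vessel = log_posterior(x, vessel_stats, priors[0])
    ll_back = log_posterior(x, background_stats, priors[1])
    return ll_vessel > ll_back
```

    Training pixels for each class would come from labeled DRIVE images; the priors could likewise be estimated from the labeled data rather than fixed at 0.5.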

  2. Semi-automatic mapping for identifying complex geobodies in seismic images

    NASA Astrophysics Data System (ADS)

    Domínguez-C, Raymundo; Romero-Salcedo, Manuel; Velasquillo-Martínez, Luis G.; Shemeretov, Leonid

    2017-03-01

    Seismic images are composed of positive and negative seismic wave traces with different amplitudes (Robein 2010 Seismic Imaging: A Review of the Techniques, their Principles, Merits and Limitations (Houten: EAGE)). The association of these amplitudes together with a color palette forms complex visual patterns. The color intensity of such patterns is directly related to impedance contrasts: the higher the contrast, the higher the color intensity. Generally speaking, low impedance contrasts are depicted with low tone colors, creating zones with different patterns whose features are not evident to the automated 3D mapping options available in commercial software. In this work, a workflow for a semi-automatic mapping of seismic images focused on those areas with low-intensity colored zones that may be associated with geobodies of petroleum interest is proposed. The CIE L*A*B* color space was used to perform the seismic image processing, which helped find small but significant differences between pixel tones. This process generated binary masks that bound color regions to low-intensity colors. The three-dimensional projection of these masks allowed the construction of 3D structures for such zones (geobodies). The proposed method was applied to a set of digital images from a seismic cube and tested on four representative study cases. The obtained results are encouraging because interesting geobodies are obtained with a minimum of information.
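
    The masking step could look roughly like the following NumPy sketch, which converts sRGB pixels to CIE L*a*b* with the standard D65 formulas and thresholds the lightness channel. The threshold value and function names are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert an sRGB image (floats in [0, 1]) to CIE L*a*b* (D65 white)."""
    rgb = np.asarray(rgb, dtype=float)
    # Inverse sRGB gamma (linearize).
    linear = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    # Linear RGB -> XYZ (standard sRGB matrix, D65).
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = linear @ m.T
    # Normalize by the D65 reference white.
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    # XYZ -> Lab with the piecewise cube-root function.
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def low_intensity_mask(rgb, l_threshold=40.0):
    """Binary mask of pixels whose Lab lightness falls below `l_threshold`."""
    return rgb_to_lab(rgb)[..., 0] < l_threshold
```

    Stacking the per-slice masks through the seismic cube would then give the 3D volume from which geobody structures are built.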

  3. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python.

    PubMed

    Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri

    2014-01-01

    In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
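
    In the spirit of the server-side script described above (parallel execution of per-channel steps plus logging of every step), a heavily simplified sketch might look like this. The channel names and the `process_channel` placeholder are hypothetical; the real pipeline dispatches to the C++ FARSIGHT modules.

```python
import logging
from concurrent.futures import ThreadPoolExecutor

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

def process_channel(name):
    """Placeholder for one per-channel step (mosaicking, artifact
    correction, segmentation); real work would call the C++ modules."""
    logging.info("processing channel %s", name)
    return name, "done"

def run_pipeline(channels, workers=4):
    """Run per-channel steps in parallel, logging each one."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(process_channel, channels))
```

    For CPU-bound native code that releases the GIL (as compiled ITK/VTK filters typically do), threads suffice; otherwise `ProcessPoolExecutor` would be the natural substitute.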

  4. Singlet oxygen Triplet Energy Transfer based imaging technology for mapping protein-protein proximity in intact cells

    PubMed Central

    To, Tsz-Leung; Fadul, Michael J.; Shu, Xiaokun

    2014-01-01

    Many cellular processes are carried out by large protein complexes that can span several tens of nanometers. Whereas Förster resonance energy transfer has a detection range of <10 nm, here we report the theoretical development and experimental demonstration of a new fluorescence imaging technology with a detection range of up to several tens of nanometers: singlet oxygen triplet energy transfer. We demonstrate that our method confirms the topology of a large protein complex in intact cells, which spans from the endoplasmic reticulum to the outer mitochondrial membrane and the matrix. This new method is thus suited for mapping protein proximity in large protein complexes. PMID:24905026

  5. Automated segmentation of three-dimensional MR brain images

    NASA Astrophysics Data System (ADS)

    Park, Jonggeun; Baek, Byungjun; Ahn, Choong-Il; Ku, Kyo Bum; Jeong, Dong Kyun; Lee, Chulhee

    2006-03-01

    Brain segmentation is a challenging problem due to the complexity of the brain. In this paper, we propose an automated brain segmentation method for 3D magnetic resonance (MR) brain images, which are represented as sequences of 2D brain images. The proposed method consists of three steps: pre-processing, removal of non-brain regions (e.g., the skull, meninges, and other organs), and spinal cord restoration. In pre-processing, we perform adaptive thresholding which takes into account the variable intensities of MR brain images acquired under various conditions. In the segmentation process, we iteratively apply 2D morphological operations and masking to the sequences of 2D sagittal, coronal, and axial planes in order to remove non-brain tissues. Next, the final 3D brain regions are obtained by applying an OR operation to the segmentation results of the three planes. Finally, we reconstruct the spinal cord truncated during the previous steps. Experiments were performed with fifteen 8-bit gray-scale 3D MR brain image sets. The results show that the proposed algorithm is fast and provides robust, satisfactory results.
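
    The slice-wise morphology and the OR combination across the three planes might be sketched as follows, using SciPy's `ndimage` morphology. The fixed threshold and single-iteration opening/closing are illustrative simplifications; the paper uses adaptive thresholding and iterative masking.

```python
import numpy as np
from scipy import ndimage

def segment_volume(volume, threshold):
    """Slice-wise 2-D morphological cleanup combined across the three
    orthogonal planes, following the sagittal/coronal/axial OR scheme."""
    binary = volume > threshold
    masks = []
    for axis in range(3):  # sagittal, coronal, axial stacks
        cleaned = np.zeros_like(binary)
        for i in range(binary.shape[axis]):
            sl = [slice(None)] * 3
            sl[axis] = i
            plane = binary[tuple(sl)]
            # Opening removes small non-brain specks and thin connections;
            # closing fills small holes inside the brain region.
            plane = ndimage.binary_opening(plane, iterations=1)
            plane = ndimage.binary_closing(plane, iterations=1)
            cleaned[tuple(sl)] = plane
        masks.append(cleaned)
    # Final 3-D region: union (OR) of the three per-plane results.
    return masks[0] | masks[1] | masks[2]
```

    The OR combination recovers voxels that survive cleanup in at least one orientation, which is why thin structures cut in one plane can still be kept.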

  6. Near Real-Time Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Denker, C.; Yang, G.; Wang, H.

    2001-08-01

    In recent years, post-facto image-processing algorithms have been developed to achieve diffraction-limited observations of the solar surface. We present a combination of frame selection, speckle-masking imaging, and parallel computing which provides real-time, diffraction-limited, 256×256 pixel images at a 1-minute cadence. Our approach to achieving diffraction-limited observations is complementary to adaptive optics (AO). At the moment, AO is limited by the fact that it corrects wavefront aberrations only for a field of view comparable to the isoplanatic patch. This limitation does not apply to speckle-masking imaging. However, speckle-masking imaging relies on short-exposure images, which limits its spectroscopic applications. The parallel processing of the data is performed on a Beowulf-class computer which utilizes off-the-shelf, mass-market technologies to provide high computational performance for scientific calculations and applications at low cost. Beowulf computers have great potential, not only for image reconstruction, but for any kind of complex data reduction. Immediate access to high-level data products and direct visualization of dynamic processes on the Sun are two of the advantages to be gained.

  7. Swept-frequency feedback interferometry using terahertz frequency QCLs: a method for imaging and materials analysis.

    PubMed

    Rakić, Aleksandar D; Taimre, Thomas; Bertling, Karl; Lim, Yah Leng; Dean, Paul; Indjin, Dragan; Ikonić, Zoran; Harrison, Paul; Valavanis, Alexander; Khanna, Suraj P; Lachab, Mohammad; Wilson, Stephen J; Linfield, Edmund H; Davies, A Giles

    2013-09-23

    The terahertz (THz) frequency quantum cascade laser (QCL) is a compact source of high-power radiation with a narrow intrinsic linewidth. As such, THz QCLs are extremely promising sources for applications including high-resolution spectroscopy, heterodyne detection, and coherent imaging. We exploit the remarkable phase-stability of THz QCLs to create a coherent swept-frequency delayed self-homodyning method for both imaging and materials analysis, using laser feedback interferometry. Using our scheme we obtain amplitude-like and phase-like images with minimal signal processing. We determine the physical relationship between the operating parameters of the laser under feedback and the complex refractive index of the target and demonstrate that this coherent detection method enables extraction of complex refractive indices with high accuracy. This establishes an ultimately compact and easy-to-implement THz imaging and materials analysis system, in which the local oscillator, mixer, and detector are all combined into a single laser.

  8. Automation of Hessian-Based Tubularity Measure Response Function in 3D Biomedical Images.

    PubMed

    Dzyubak, Oleksandr P; Ritman, Erik L

    2011-01-01

    The blood vessels and nerve trees consist of tubular objects interconnected into a complex tree- or web-like structure, with structural scales ranging from 5 μm diameter capillaries to the 3 cm aorta. This large scale range presents two major problems: one is just making the measurements, and the other is the exponential increase in the number of components with decreasing scale. With the remarkable increase in the volume imaged by, and resolution of, modern 3D imagers, manual tracking of the complex multiscale parameters in such large image data sets is almost impossible. In addition, manual tracking is quite subjective and unreliable. We propose a solution for automating an adaptive, non-supervised system for tracking tubular objects based on a multiscale framework and a Hessian-based object shape detector incorporating the National Library of Medicine Insight Segmentation and Registration Toolkit (ITK) image processing libraries.
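
    As a rough illustration of the Hessian-based shape detection idea (in 2-D, with NumPy/SciPy rather than the ITK pipeline the authors describe), a Frangi-style tubularity response can be computed from the per-pixel Hessian eigenvalues. The parameter values below are conventional defaults, not the authors', and the sketch assumes bright tubes on a dark background.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_vesselness_2d(image, sigma=2.0, beta=0.5, c=15.0):
    """Frangi-style tubularity response from Hessian eigenvalues (2-D sketch)."""
    # Smooth at the scale of interest, then take second derivatives.
    smoothed = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    hxx = np.gradient(gx, axis=1)
    hxy = np.gradient(gx, axis=0)
    hyy = np.gradient(gy, axis=0)
    # Eigenvalues of the symmetric 2x2 Hessian at each pixel.
    trace_half = (hxx + hyy) / 2.0
    diff = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
    l1, l2 = trace_half - diff, trace_half + diff
    # Order so that |l1| <= |l2|.
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)   # blob-vs-line measure
    s = np.sqrt(l1 ** 2 + l2 ** 2)           # second-order structure strength
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
    v[l2 > 0] = 0.0  # bright tubes on a dark background have l2 < 0
    return v
```

    A multiscale detector would evaluate this response over a range of `sigma` values and keep the per-pixel maximum, matching tube diameter to scale.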

  9. Small-Animal Imaging Using Diffuse Fluorescence Tomography.

    PubMed

    Davis, Scott C; Tichauer, Kenneth M

    2016-01-01

    Diffuse fluorescence tomography (DFT) has been developed to image the spatial distribution of fluorescence-tagged tracers in living tissue. This capability facilitates the recovery of any number of functional parameters, including enzymatic activity, receptor density, blood flow, and gene expression. However, deploying DFT effectively is complex and often requires years of know-how, especially for newer multimodal systems that combine DFT with conventional imaging systems. In this chapter, we step through the process of MRI-DFT imaging of a receptor-targeted tracer in small animals.

  10. Noise parameter estimation for poisson corrupted images using variance stabilization transforms.

    PubMed

    Jin, Xiaodan; Xu, Zhenyu; Hirakawa, Keigo

    2014-03-01

    Noise is present in all images captured by real-world image sensors. The Poisson distribution models the stochastic nature of the photon arrival process and agrees with the distribution of measured pixel values. We propose a method for estimating unknown noise parameters from Poisson corrupted images using properties of variance stabilization. With a significantly lower computational complexity and improved stability, the proposed estimation technique yields noise parameters that are comparable in accuracy to state-of-the-art methods.
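
    A minimal sketch of the underlying idea, assuming a scaled-Poisson model in which local variance grows linearly with local mean: the Anscombe transform approximately stabilizes Poisson variance to 1, and a gain parameter can be read off as the slope of variance against mean over image patches. This estimator (patch statistics plus a least-squares slope) is illustrative and much simpler than the paper's method.

```python
import numpy as np

def anscombe(x):
    """Anscombe transform: approximately stabilizes Poisson variance to 1."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def estimate_gain(noisy, patch=8):
    """Estimate the gain a in the model variance = a * mean from patch
    statistics, via a least-squares slope through the origin."""
    h, w = noisy.shape
    means, variances = [], []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            block = noisy[i:i + patch, j:j + patch].astype(float)
            means.append(block.mean())
            variances.append(block.var(ddof=1))
    means, variances = np.array(means), np.array(variances)
    return float(np.sum(means * variances) / np.sum(means ** 2))
```

    The sketch ignores edges and textured patches, which inflate the local variance; a robust estimator would reject such patches before fitting.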

  11. High-performance technology for indexing of high volumes of Earth remote sensing data

    NASA Astrophysics Data System (ADS)

    Strotov, Valery V.; Taganov, Alexander I.; Kolesenkov, Aleksandr N.; Kostrov, Boris V.

    2017-10-01

    This paper suggests a technology for the search, indexing, cataloging and distribution of aerospace images on the basis of a geoinformation approach and cluster and spectral analysis, and considers the information and algorithmic support of the system. A functional circuit of the system and the structure of the geographical database have been developed on the basis of the geographical online portal technology. Taking into account the heterogeneity of information obtained from various sources, it is reasonable to apply a geoinformation platform that allows analyzing the spatial location of objects and territories and executing complex processing of information. The geoinformation platform is based on cartographic fundamentals with a uniform coordinate system, a geographical database, and a set of algorithms and program modules for the execution of various tasks. A technology is also suggested for letting individual users and companies add images, taken by means of professional and amateur devices and processed by various software tools, to the system's archive. Combined use of visual and instrumental approaches allows significantly expanding the application area of Earth remote sensing data. The development and implementation of new algorithms based on the combined use of new methods for processing high volumes of structured and unstructured data will increase the periodicity and rate of data updating. The paper shows that the application of original algorithms for the search, indexing and cataloging of aerospace images will provide easy access to information spread by hundreds of suppliers and allow increasing the access rate to aerospace images by up to 5 times in comparison with current analogues.

  12. Causes of cine image quality deterioration in cardiac catheterization laboratories.

    PubMed

    Levin, D C; Dunham, L R; Stueve, R

    1983-10-01

    Deterioration of cineangiographic image quality can result from malfunctions or technical errors at a number of points along the cine imaging chain: generator and automatic brightness control, x-ray tube, x-ray beam geometry, image intensifier, optics, cine camera, cine film, film processing, and cine projector. Such malfunctions or errors can result in loss of image contrast, loss of spatial resolution, improper control of film optical density (brightness), or some combination thereof. While the electronic and photographic technology involved is complex, physicians who perform cardiac catheterization should be conversant with the problems and what can be done to solve them. Catheterization laboratory personnel have control over a number of factors that directly affect image quality, including radiation dose rate per cine frame, kilovoltage or pulse width (depending on type of automatic brightness control), cine run time, selection of small or large focal spot, proper object-intensifier distance and beam collimation, aperture of the cine camera lens, selection of cine film, processing temperature, processing immersion time, and selection of developer.

  13. Unmanned Aerial Vehicle Systems for Remote Estimation of Flooded Areas Based on Complex Image Processing.

    PubMed

    Popescu, Dan; Ichim, Loretta; Stoican, Florin

    2017-02-23

    Floods are the natural disasters that cause the most economic damage at the global level. Therefore, flood monitoring and damage estimation are very important for the population, authorities and insurance companies. The paper proposes an original solution, based on a hybrid network and complex image processing, to this problem. As a first novelty, a multilevel system with two components, terrestrial and aerial, was proposed and designed by the authors as support for image acquisition from a delimited region. The terrestrial component contains a Ground Control Station, as a coordinator at distance, which communicates via the internet with several Ground Data Terminals, acting as a fixed-node network for data acquisition and communication. The aerial component contains mobile nodes: fixed-wing UAVs. In order to evaluate flood damage, two tasks must be accomplished by the network: area coverage and image processing. The second novelty of the paper consists of texture analysis in a deep neural network, taking into account new criteria for feature selection and patch classification. Color and spatial information extracted from the chromatic co-occurrence matrix and the mass fractal dimension were used as well. Finally, the experimental results from a real mission demonstrate the validity of the proposed methodologies and the performance of the algorithms.

  14. Unmanned Aerial Vehicle Systems for Remote Estimation of Flooded Areas Based on Complex Image Processing

    PubMed Central

    Popescu, Dan; Ichim, Loretta; Stoican, Florin

    2017-01-01

    Floods are the natural disasters that cause the most economic damage at the global level. Flood monitoring and damage estimation are therefore very important for the population, authorities and insurance companies. The paper proposes an original solution to this problem based on a hybrid network and complex image processing. As a first novelty, a multilevel system with two components, terrestrial and aerial, was proposed and designed by the authors as support for image acquisition from a delimited region. The terrestrial component contains a Ground Control Station, acting as a remote coordinator, which communicates via the internet with several Ground Data Terminals forming a fixed-node network for data acquisition and communication. The aerial component contains mobile nodes: fixed-wing UAVs. In order to evaluate flood damage, two tasks must be accomplished by the network: area coverage and image processing. The second novelty of the paper consists of texture analysis in a deep neural network, taking into account new criteria for feature selection and patch classification. Color and spatial information extracted from the chromatic co-occurrence matrix and the mass fractal dimension are used as well. Finally, the experimental results from a real mission demonstrate the validity of the proposed methodologies and the performance of the algorithms. PMID:28241479
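
    As a rough illustration of the texture features mentioned above, the sketch below computes a single-channel co-occurrence matrix with NumPy. This is an assumption-laden simplification: the paper's chromatic co-occurrence matrix generalizes this idea to pairs of color channels, and the mass fractal dimension is not reproduced here.

```python
import numpy as np

def cooccurrence_matrix(channel, levels=8, dx=1, dy=0):
    """Co-occurrence matrix of one quantized image channel for an offset (dx, dy)."""
    q = np.clip((channel.astype(np.int64) * levels) // 256, 0, levels - 1)
    h, w = q.shape
    # pair each pixel with its neighbor at the given offset
    a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    m = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    return m
```

    Statistics of this matrix (contrast, energy, homogeneity, …) would then serve as texture features for patch classification.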

  15. An infrared-visible image fusion scheme based on NSCT and compressed sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Qiong; Maldague, Xavier

    2015-05-01

    Image fusion, currently a research focus in the field of infrared computer vision, has been developed using a variety of methods. Traditional image fusion algorithms tend to introduce problems such as heavy data-storage requirements and increased computational complexity. Compressed sensing (CS) uses sparse sampling, without requiring prior knowledge, to reconstruct the image, which reduces the cost and complexity of image processing. In this paper, an advanced compressed sensing image fusion algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. NSCT provides better sparsity than the wavelet transform in image representation. Through the NSCT decomposition, the low-frequency and high-frequency coefficients are obtained separately. For the fusion of the low-frequency coefficients of the infrared and visible images, the adaptive regional energy weighting rule is utilized; thus only the high-frequency coefficients are compressively measured. We use sparse representation and random projection to obtain the measurements of the high-frequency coefficients; afterwards, the coefficients of each image block are fused via the absolute-maximum selection rule and/or the regional standard deviation rule. In the reconstruction of the compressive sampling results, a gradient-based iterative algorithm and the total variation (TV) method are employed to recover the high-frequency coefficients. Finally, the fused image is recovered by the inverse NSCT. Both the visual effects and the numerical results of the experiments indicate that the presented approach achieves higher-quality image fusion, accelerates the calculations, enhances various targets and extracts more useful information.
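
    The two fusion rules named above (adaptive regional energy weighting for the low-frequency subband, absolute-maximum selection for the details) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the NSCT decomposition itself is assumed to be provided elsewhere.

```python
import numpy as np

def regional_energy(coeffs, win=3):
    """Sum of squared coefficients over a win x win neighborhood (zero-padded)."""
    pad = win // 2
    sq = np.pad(np.asarray(coeffs, dtype=np.float64) ** 2, pad)
    h, w = coeffs.shape
    energy = np.zeros((h, w))
    for dy in range(win):
        for dx in range(win):
            energy += sq[dy:dy + h, dx:dx + w]
    return energy

def fuse_lowpass(ca, cb, win=3, eps=1e-12):
    """Adaptive regional-energy weighting of two low-frequency subbands."""
    ea, eb = regional_energy(ca, win), regional_energy(cb, win)
    wa = ea / (ea + eb + eps)
    return wa * ca + (1.0 - wa) * cb

def fuse_highpass(da, db):
    """Absolute-maximum selection rule for two detail subbands."""
    return np.where(np.abs(da) >= np.abs(db), da, db)
```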

  16. A new hyperspectral image compression paradigm based on fusion

    NASA Astrophysics Data System (ADS)

    Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite that carries the hyperspectral sensor; hence, the process must run on space-qualified hardware with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade it to obtain a high-resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they must be fused, using a fusion algorithm for hyperspectral and multispectral images, in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on-board, becomes very simple, with the fusion process used to reconstruct the image being the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image, and the results corroborate the benefits of the proposed methodology.
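
    A minimal sketch of the two on-board degradation steps, assuming an (H, W, B) image cube and an illustrative spectral response matrix; the ground-side fusion algorithm is the complex part and is not reproduced here.

```python
import numpy as np

def spatial_degrade(cube, factor=4):
    """Block-average each band: (H, W, B) -> (H // factor, W // factor, B)."""
    h, w, b = cube.shape
    h2, w2 = h // factor, w // factor
    c = cube[:h2 * factor, :w2 * factor].astype(np.float64)
    return c.reshape(h2, factor, w2, factor, b).mean(axis=(1, 3))

def spectral_degrade(cube, srf):
    """Mix B narrow bands into M broad bands with an (M, B) spectral response matrix."""
    return np.tensordot(cube.astype(np.float64), srf, axes=([2], [1]))
```

    Both operations are cheap elementwise/matrix reductions, which is the point of the methodology: the heavy lifting moves to the fusion step on the ground.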

  17. A comparative study of deep learning models for medical image classification

    NASA Astrophysics Data System (ADS)

    Dutta, Suvajit; Manideep, B. C. S.; Rai, Shalva; Vijayarajan, V.

    2017-11-01

    Deep learning (DL) techniques are overtaking the prevailing traditional neural network approaches when it comes to huge datasets and applications requiring complex functions with higher accuracy and lower time complexity. Neuroscience has already exploited DL techniques and has thus portrayed itself as an inspirational source for researchers exploring machine learning. DL work covers vision, speech recognition, motion planning and NLP, moving back and forth among these fields, and concerns building models that can successfully solve a variety of tasks requiring intelligence and distributed representation. Access to faster CPUs, the introduction of GPUs performing complex vector and matrix computations, agile network connectivity, and enhanced software infrastructures for distributed computing have all strengthened the case for DL methodologies. This paper compares DL procedures with traditional approaches, which are performed manually, for classifying medical images. The medical images used for the study are Diabetic Retinopathy (DR) and computed tomography (CT) emphysema data; diagnosing both is a difficult task for standard image classification methods. The initial work was carried out with basic image processing along with K-means clustering for identification of image severity levels. After determining severity levels, an ANN was applied to the data to obtain a baseline classification result, which was then compared with the result of DNNs (deep neural networks); DNNs performed efficiently because their multiple hidden layers increase accuracy, but the vanishing-gradient problem in DNNs led us to consider convolutional neural networks (CNNs) as well. The CNNs were found to provide better outcomes than the other learning models for image classification, and are favoured because they provide better visual processing models that successfully classify noisy data as well. The work centres on the detection of Diabetic Retinopathy (loss of vision) and the recognition of CT emphysema data, measuring the severity levels in both cases, and explores how various machine learning algorithms can be implemented in a supervised approach to obtain accurate results with the least complexity possible.

  18. Bio-inspired color image enhancement

    NASA Astrophysics Data System (ADS)

    Meylan, Laurence; Susstrunk, Sabine

    2004-06-01

    Capturing and rendering an image that fulfills the observer's expectations is a difficult task. This is due to the fact that the signal reaching the eye is processed by a complex mechanism before forming a percept, whereas a capturing device only retains the physical value of light intensities. It is especially difficult to render complex scenes with highly varying luminances. For example, a picture taken inside a room where objects are visible through the windows will not be rendered correctly by a global technique: either details in the dim room will be hidden in shadow or the objects viewed through the window will be too bright. The image has to be treated locally to resemble more closely what the observer remembers. The purpose of this work is to develop a technique for rendering images based on human local adaptation. We take inspiration from a model of color vision called Retinex, which determines the perceived color given the spatial relationships of the captured signals and has been used as a computational model for image rendering. In this article, we propose a new solution inspired by Retinex that is based on a single filter applied to the luminance channel. All parameters are image-dependent, so the process requires no parameter tuning, which makes the method more flexible than other existing ones. The presented results show that our method suitably enhances high dynamic range images.
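
    A simplified single-scale Retinex sketch on the luminance channel (log of the center minus log of a blurred surround), with a box filter standing in for the surround filter. This is a generic illustration of the Retinex idea, not the authors' filter or its image-dependent parameters.

```python
import numpy as np

def box_blur(lum, radius=8):
    """Edge-padded box blur standing in for the surround filter."""
    h, w = lum.shape
    p = np.pad(np.asarray(lum, dtype=np.float64), radius, mode='edge')
    k = 2 * radius + 1
    out = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + h, dx:dx + w]
    return out / (k * k)

def retinex_luminance(lum, radius=8, eps=1.0):
    """Single-scale Retinex: log(center) - log(surround), rescaled to [0, 1]."""
    r = np.log(lum + eps) - np.log(box_blur(lum, radius) + eps)
    return (r - r.min()) / (r.max() - r.min() + 1e-12)
```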

  19. Design and analysis of x-ray vision systems for high-speed detection of foreign body contamination in food

    NASA Astrophysics Data System (ADS)

    Graves, Mark; Smith, Alexander; Batchelor, Bruce G.; Palmer, Stephen C.

    1994-10-01

    In the food industry there is an ever increasing need to control and monitor food quality. In recent years fully automated x-ray inspection systems have been used to inspect food on-line for foreign body contamination. These systems involve a complex integration of x-ray imaging components with state-of-the-art high-speed image processing. The quality of the x-ray image obtained by such systems is very poor compared with images obtained from other inspection processes; this makes reliable detection of very small, low-contrast defects extremely difficult. It is therefore extremely important to optimize the x-ray imaging components to give the best image possible. In this paper we present a method of analyzing the x-ray imaging system in order to assess the contrast obtained when viewing small defects.

  20. A New Test Method of Circuit Breaker Spring Telescopic Characteristics Based on Image Processing

    NASA Astrophysics Data System (ADS)

    Huang, Huimin; Wang, Feifeng; Lu, Yufeng; Xia, Xiaofei; Su, Yi

    2018-06-01

    This paper applies computer vision technology to the fatigue-condition monitoring of springs: a new telescopic-characteristics test method, based on image processing, is proposed for the operating-mechanism spring of a circuit breaker. A high-speed camera captures image sequences of the spring's movement when the high-voltage circuit breaker operates. An image-matching method is then used to obtain the deformation-time and speed-time curves, from which the spring's expansion and deformation parameters are extracted, laying a foundation for subsequent spring force analysis and matching-state evaluation. Simulation tests at the experimental site show that this image-analysis method avoids the complex installation problems of traditional mechanical sensors and supports online monitoring and status assessment of the circuit-breaker spring.
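
    The image-matching step can be illustrated with brute-force normalized cross-correlation: tracking a marker template across frames and differencing the matched positions yields the deformation-time curve. The function below is an illustrative sketch, not the paper's implementation.

```python
import numpy as np

def match_template(frame, template):
    """Brute-force normalized cross-correlation; returns (row, col) of the best match."""
    fh, fw = frame.shape
    th, tw = template.shape
    t = template.astype(np.float64) - template.mean()
    tn = np.sqrt((t ** 2).sum()) + 1e-12
    best_score, best_pos = -2.0, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            win = frame[y:y + th, x:x + tw].astype(np.float64)
            wz = win - win.mean()
            score = (wz * t).sum() / (np.sqrt((wz ** 2).sum()) * tn + 1e-12)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos
```

    Differentiating the resulting displacement-time sequence with respect to the frame interval gives the speed-time curve.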

  1. Image processing to optimize wave energy converters

    NASA Astrophysics Data System (ADS)

    Bailey, Kyle Marc-Anthony

    The world is turning to renewable energies as a means of ensuring the planet's future and well-being. There have been a few attempts in the past to utilize wave power as a means of generating electricity through the use of Wave Energy Converters (WECs), but only recently have they become a focal point in the renewable energy field, and over the past few years there has been a global drive to advance their efficiency. Wave power is produced by placing a device either onshore or offshore that captures the energy in ocean surface waves and uses it to drive a mechanical generator. This paper seeks to provide a novel and innovative way to estimate ocean wave frequency through the use of image processing, achieved by applying a complex modulated lapped orthogonal transform filter bank to satellite images of ocean waves. The complex modulated lapped orthogonal transform filter bank provides an equal subband decomposition of the Nyquist-bounded discrete-time Fourier transform spectrum. The maximum energy of the 2D complex modulated lapped transform subband is used to determine the horizontal and vertical frequencies, from which the wave frequency in the direction of the WEC follows by a simple trigonometric scaling. The robustness of the proposed method is demonstrated by applications to simulated and real satellite images where the frequency is known.
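
    As a simplified stand-in for the complex modulated lapped orthogonal transform filter bank, the sketch below locates the maximum-energy bin of a plain 2D FFT to recover the horizontal and vertical frequencies of a wave-like image:

```python
import numpy as np

def dominant_wave_frequency(img, d=1.0):
    """Dominant (fy, fx) spatial frequency of a wave image via the 2D FFT peak."""
    mag = np.abs(np.fft.fft2(img - img.mean()))
    # zero the negative-fy rows so only one of the two conjugate peaks survives
    mag[img.shape[0] // 2:, :] = 0.0
    iy, ix = np.unravel_index(np.argmax(mag), mag.shape)
    fy = np.fft.fftfreq(img.shape[0], d=d)[iy]
    fx = np.fft.fftfreq(img.shape[1], d=d)[ix]
    return fy, fx
```

    The wave frequency along the direction of the WEC then follows from (fy, fx) by the trigonometric scaling mentioned in the abstract.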

  2. Automatic Sea Bird Detection from High Resolution Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Mader, S.; Grenzdörffer, G. J.

    2016-06-01

    Great efforts are presently being taken in the scientific community to develop computerized and (fully) automated image processing methods allowing for efficient and automatic monitoring of sea birds and marine mammals in ever-growing amounts of aerial imagery. Currently, however, the major part of the processing is still conducted by specially trained professionals, who visually examine the images and detect and classify the requested subjects. This is a very tedious task, particularly when the rate of void images regularly exceeds 90%. In this contribution we present our work aiming to support the processing of aerial images with modern methods from the field of image processing. We especially focus on the combination of local, region-based feature detection and piecewise global image segmentation for the automatic detection of different sea bird species. The large image dimensions resulting from the use of medium- and large-format digital cameras in aerial surveys inhibit the applicability of image processing methods based on global operations. In order to handle those image sizes efficiently and nevertheless take advantage of globally operating segmentation algorithms, we describe the combined usage of a simple, fast feature detector based on local operations on the original image with a complex global segmentation algorithm operating on extracted sub-images. The resulting exact segmentation of possible candidates then serves as a basis for determining feature vectors for subsequent elimination of false candidates and for classification tasks.

  3. A service protocol for post-processing of medical images on the mobile device

    NASA Astrophysics Data System (ADS)

    He, Longjun; Ming, Xing; Xu, Lang; Liu, Qian

    2014-03-01

    With computing capability and display size growing, the mobile device has become a tool that helps clinicians view patient information and medical images anywhere and anytime. It is difficult and time-consuming to transfer medical images with large data sizes from a picture archiving and communication system to a mobile client, since the wireless network is unstable and limited by bandwidth. Besides, limited by computing capability, memory and power endurance, it is hard to provide a satisfactory quality of experience for radiologists handling complex post-processing of medical images on the mobile device, such as real-time interactive three-dimensional visualization. In this work, remote rendering technology is employed to implement the post-processing of medical images instead of local rendering, and a service protocol is developed to standardize the communication between the render server and the mobile client. In order to let mobile devices with different platforms access post-processing of medical images, the Extensible Markup Language is used to describe this protocol, which contains four main parts: user authentication, medical image query/retrieval, 2D post-processing (e.g. window leveling, pixel-value readout) and 3D post-processing (e.g. maximum intensity projection, multi-planar reconstruction, curved planar reformation and direct volume rendering). An instance was then implemented to verify the protocol; it supports mobile access to post-processing services on the render server via a client application or a web page.

  4. Image space subdivision for fast ray tracing

    NASA Astrophysics Data System (ADS)

    Yu, Billy T.; Yu, William W.

    1999-09-01

    Ray tracing is notorious for its computational requirements. A number of techniques exist to speed up the process; however, a well-known statistic indicates that ray-object intersections occupy over 95% of the total image-generation time, so it is most beneficial to work on this bottleneck. Existing ray-object intersection reduction techniques fall into three major categories: bounding volume hierarchies, space subdivision, and directional subdivision. This paper introduces a technique in the third category. To further speed up the process, it takes advantage of hierarchy by adopting an MX-CIF quadtree in image space; this special kind of quadtree provides simple object allocation and ease of implementation. The text also includes a theoretical proof of the expected performance: for ray-polygon comparison, the technique reduces the order of complexity from linear to square root, O(n) → O(2√n). Experiments with various shapes, sizes and complexities were conducted to verify this expectation. Results showed that the computational improvement grew with the complexity of the scenery; the experimental improvement was more than 90% and agreed with the theoretical value when the number of polygons exceeded 3000. The more complex the scene, the more efficient the acceleration. The algorithm described was implemented at the polygon level; however, it could easily be extended to the object or higher levels.

  5. Quantitative imaging of mammalian transcriptional dynamics: from single cells to whole embryos.

    PubMed

    Zhao, Ziqing W; White, Melanie D; Bissiere, Stephanie; Levi, Valeria; Plachta, Nicolas

    2016-12-23

    Probing dynamic processes occurring within the cell nucleus at the quantitative level has long been a challenge in mammalian biology. Advances in bio-imaging techniques over the past decade have enabled us to directly visualize nuclear processes in situ with unprecedented spatial and temporal resolution and single-molecule sensitivity. Here, using transcription as our primary focus, we survey recent imaging studies that specifically emphasize the quantitative understanding of nuclear dynamics in both time and space. These analyses not only inform on previously hidden physical parameters and mechanistic details, but also reveal a hierarchical organizational landscape for coordinating a wide range of transcriptional processes shared by mammalian systems of varying complexity, from single cells to whole embryos.

  6. Parallel workflow tools to facilitate human brain MRI post-processing

    PubMed Central

    Cui, Zaixu; Zhao, Chenxi; Gong, Gaolang

    2015-01-01

    Multi-modal magnetic resonance imaging (MRI) techniques are widely applied in human brain studies. To obtain specific brain measures of interest from MRI datasets, a number of complex image post-processing steps are typically required. Parallel workflow tools have recently been developed, concatenating individual processing steps and enabling fully automated processing of raw MRI data to obtain the final results. These workflow tools are also designed to make optimal use of available computational resources and to support the parallel processing of different subjects or of independent processing steps for a single subject. Automated, parallel MRI post-processing tools can greatly facilitate relevant brain investigations and are being increasingly applied. In this review, we briefly summarize these parallel workflow tools and discuss relevant issues. PMID:26029043

  7. MIiSR: Molecular Interactions in Super-Resolution Imaging Enables the Analysis of Protein Interactions, Dynamics and Formation of Multi-protein Structures.

    PubMed

    Caetano, Fabiana A; Dirk, Brennan S; Tam, Joshua H K; Cavanagh, P Craig; Goiko, Maria; Ferguson, Stephen S G; Pasternak, Stephen H; Dikeakos, Jimmy D; de Bruyn, John R; Heit, Bryan

    2015-12-01

    Our current understanding of the molecular mechanisms which regulate cellular processes such as vesicular trafficking has been enabled by conventional biochemical and microscopy techniques. However, these methods often obscure the heterogeneity of the cellular environment, thus precluding a quantitative assessment of the molecular interactions regulating these processes. Herein, we present Molecular Interactions in Super Resolution (MIiSR) software which provides quantitative analysis tools for use with super-resolution images. MIiSR combines multiple tools for analyzing intermolecular interactions, molecular clustering and image segmentation. These tools enable quantification, in the native environment of the cell, of molecular interactions and the formation of higher-order molecular complexes. The capabilities and limitations of these analytical tools are demonstrated using both modeled data and examples derived from the vesicular trafficking system, thereby providing an established and validated experimental workflow capable of quantitatively assessing molecular interactions and molecular complex formation within the heterogeneous environment of the cell.

  8. Quantitative multiplex immunohistochemistry reveals myeloid-inflamed tumor-immune complexity associated with poor prognosis

    PubMed Central

    Tsujikawa, Takahiro; Kumar, Sushil; Borkar, Rohan N.; Azimi, Vahid; Thibault, Guillaume; Chang, Young Hwan; Balter, Ariel; Kawashima, Rie; Choe, Gina; Sauer, David; El Rassi, Edward; Clayburgh, Daniel R.; Kulesz-Martin, Molly F.; Lutz, Eric R.; Zheng, Lei; Jaffee, Elizabeth M.; Leyshock, Patrick; Margolin, Adam A.; Mori, Motomi; Gray, Joe W.; Flint, Paul W.; Coussens, Lisa M.

    2017-01-01

    Here we describe a multiplexed immunohistochemical platform, with computational image processing workflows including image cytometry, enabling simultaneous evaluation of 12 biomarkers in one formalin-fixed paraffin-embedded tissue section. To validate this platform, we used tissue microarrays containing 38 archival head and neck squamous cell carcinomas, and revealed differential immune profiles based on lymphoid and myeloid cell densities, correlating with human papilloma virus status and prognosis. Based on these results, we investigated 24 pancreatic ductal adenocarcinomas from patients who received neoadjuvant GVAX vaccination, and revealed that response to therapy correlated with degree of mono-myelocytic cell density, and percentages of CD8+ T cells expressing T cell exhaustion markers. These data highlight the utility of in situ immune monitoring for patient stratification, and provide digital image processing pipelines (https://github.com/multiplexIHC/cppipe) to the community for examining immune complexity in precious tissue sections, where phenotype and tissue architecture are preserved to thus improve biomarker discovery and assessment. PMID:28380359

  9. Research on image complexity evaluation method based on color information

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Duan, Jin; Han, Xue-hui; Xiao, Bo

    2017-11-01

    In order to evaluate the complexity of a color image more effectively, and to find the connection between image complexity and image information, this paper presents a method to compute image complexity based on color information. The theoretical analysis first divides complexity at the subjective level into three classes: low, medium and high complexity. Image features are then extracted, and finally a function is established between the complexity value and the color-characteristic model. The experimental results show that this evaluation method can objectively recover the complexity of an image from its image features, and that the results agree well with human visual perception of complexity, so the color-image complexity measure has practical reference value.
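
    The abstract does not give the concrete color-feature model, so the sketch below is one plausible realization only: it scores complexity by the mean entropy of the quantized color histograms and maps the score onto the three subjective classes. The thresholds are hypothetical.

```python
import numpy as np

def color_complexity(img, bins=16):
    """Mean Shannon entropy (bits) of the quantized R, G and B histograms."""
    entropies = []
    for c in range(3):
        hist, _ = np.histogram(img[..., c], bins=bins, range=(0, 256))
        p = hist / max(hist.sum(), 1)
        p = p[p > 0]
        entropies.append(float(-(p * np.log2(p)).sum()))
    return float(np.mean(entropies))

def complexity_level(value, t_low=2.0, t_high=3.5):
    """Map the score onto three subjective classes (thresholds are illustrative)."""
    return 'low' if value < t_low else ('high' if value > t_high else 'medium')
```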

  10. Learning Boolean Networks in HepG2 cells using ToxCast High-Content Imaging Data (SOT annual meeting)

    EPA Science Inventory

    Cells adapt to their environment via homeostatic processes that are regulated by complex molecular networks. Our objective was to learn key elements of these networks in HepG2 cells using ToxCast High-content imaging (HCI) measurements taken over three time points (1, 24, and 72h...

  11. Dual Language Use in Sign-Speech Bimodal Bilinguals: fNIRS Brain-Imaging Evidence

    ERIC Educational Resources Information Center

    Kovelman, Ioulia; Shalinsky, Mark H.; White, Katherine S.; Schmitt, Shawn N.; Berens, Melody S.; Paymer, Nora; Petitto, Laura-Ann

    2009-01-01

    The brain basis of bilinguals' ability to use two languages at the same time has been a hotly debated topic. On the one hand, behavioral research has suggested that bilingual dual language use involves complex and highly principled linguistic processes. On the other hand, brain-imaging research has revealed that bilingual language switching…

  12. Teaching Anatomy and Physiology Using Computer-Based, Stereoscopic Images

    ERIC Educational Resources Information Center

    Perry, Jamie; Kuehn, David; Langlois, Rick

    2007-01-01

    Learning real three-dimensional (3D) anatomy for the first time can be challenging. Two-dimensional drawings and plastic models tend to over-simplify the complexity of anatomy. The approach described uses stereoscopy to create 3D images of the process of cadaver dissection and to demonstrate the underlying anatomy related to the speech mechanisms.…

  13. Fire detection system using random forest classification for image sequences of complex background

    NASA Astrophysics Data System (ADS)

    Kim, Onecue; Kang, Dong-Joong

    2013-06-01

    We present a fire alarm system based on image processing that detects fire accidents in various environments. To reduce false alarms that frequently appeared in earlier systems, we combined image features including color, motion, and blinking information. We specifically define the color conditions of fires in hue, saturation and value, and RGB color space. Fire features are represented as intensity variation, color mean and variance, motion, and image differences. Moreover, blinking fire features are modeled by using crossing patches. We propose an algorithm that classifies patches into fire or nonfire areas by using random forest supervised learning. We design an embedded surveillance device made with acrylonitrile butadiene styrene housing for stable fire detection in outdoor environments. The experimental results show that our algorithm works robustly in complex environments and is able to detect fires in real time.
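
    A hedged sketch of the patch-feature stage: the RGB flame condition and its thresholds below are illustrative assumptions, not the paper's exact definitions (which also use hue, saturation and value).

```python
import numpy as np

def fire_color_mask(rgb):
    """Illustrative RGB flame rule: red dominant and bright (thresholds are assumptions)."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 180) & (r >= g) & (g > b)

def patch_features(patch_prev, patch_curr):
    """Per-patch features: intensity mean/variance, fire-color ratio, frame difference."""
    diff = np.abs(patch_curr.astype(float) - patch_prev.astype(float)).mean()
    return np.array([patch_curr.mean(), patch_curr.var(),
                     fire_color_mask(patch_curr).mean(), diff])
```

    Such feature vectors would then be fed to a random-forest classifier (e.g. scikit-learn's RandomForestClassifier) trained on labeled fire and non-fire patches; blinking can be captured by stacking features over successive frames.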

  14. Automated seamline detection along skeleton for remote sensing image mosaicking

    NASA Astrophysics Data System (ADS)

    Zhang, Hansong; Chen, Jianyu; Liu, Xin

    2015-08-01

    The automatic generation of a seamline along the skeleton of the overlap region is a key problem in the mosaicking of Remote Sensing (RS) images. As RS image resolution improves, it is necessary to ensure rapid and accurate processing under complex conditions. An automated seamline detection method for RS image mosaicking, based on image objects and overlap-region contour contraction, is therefore introduced; it ensures both the generality and the efficiency of mosaicking. The experiments show that this method can select seamlines in RS images with great speed and high accuracy over arbitrary overlap regions, enabling rapid RS image mosaicking in surveying and mapping production.

  15. Research of real-time video processing system based on 6678 multi-core DSP

    NASA Astrophysics Data System (ADS)

    Li, Xiangzhen; Xie, Xiaodan; Yin, Xiaoqiang

    2017-10-01

    In the information age, the rapid development of intelligent video processing and its complex algorithms pose a powerful challenge to processor performance. In this article, an FPGA + TMS320C6678 architecture integrates image defogging, image fusion and image stabilization/enhancement into one system with good real-time behavior and superior performance. It overcomes the defects of traditional video processing systems, whose functions are simple and single-purpose, and addresses video applications such as security monitoring, giving full play to the effectiveness of video surveillance and improving economic benefits.

  16. Feedforward object-vision models only tolerate small image variations compared to human

    PubMed Central

    Ghodrati, Masoud; Farzmahdi, Amirhossein; Rajaei, Karim; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi

    2014-01-01

    Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanism has constantly been under intense investigation. Computational modeling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well in image categorization under more complex image variations. Studies have shown that making sparse representations of objects, by extracting more informative visual features through a feedforward sweep, can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that only under low-level image variations do the models perform similarly to humans in categorization tasks. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modeling. We show that this approach is of no significant help in solving the computational crux of object recognition (i.e., invariant object recognition) when the identity-preserving image variations become more complex. PMID:25100986

  17. Imaging system design and image interpolation based on CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Li, Yu-feng; Liang, Fei; Guo, Rui

    2009-11-01

    An image acquisition system is introduced, consisting of a color CMOS image sensor (OV9620), SRAM (CY62148), a CPLD (EPM7128AE) and a DSP (TMS320VC5509A). The CPLD implements the logic and timing control of the system, the SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed, the imaging part and the high-speed image-data memory unit are designed, and the hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use an edge-oriented adaptive interpolation algorithm for the edge pixels and a bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, decreases the computational complexity, and effectively preserves image edges.
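
    The bilinear half of the interpolation scheme can be sketched as follows for the green channel, assuming an RGGB mosaic layout (the layout and function are illustrative, not taken from the paper):

```python
import numpy as np

def interpolate_green_bilinear(bayer):
    """Fill green at the red/blue sites of an RGGB mosaic by 4-neighbor averaging."""
    h, w = bayer.shape
    p = np.pad(bayer.astype(np.float64), 1, mode='edge')
    up = p[0:h, 1:w + 1]
    down = p[2:h + 2, 1:w + 1]
    left = p[1:h + 1, 0:w]
    right = p[1:h + 1, 2:w + 2]
    avg = (up + down + left + right) / 4.0
    yy, xx = np.mgrid[0:h, 0:w]
    non_green = (yy + xx) % 2 == 0          # RGGB: green sits where row + col is odd
    green = bayer.astype(np.float64).copy()
    green[non_green] = avg[non_green]
    return green
```

    The edge-oriented variant would additionally compare horizontal and vertical gradients at each edge pixel and average only along the lower-gradient direction, which better preserves edges.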

  18. PSF estimation for defocus blurred image based on quantum back-propagation neural network

    NASA Astrophysics Data System (ADS)

    Gao, Kun; Zhang, Yan; Shao, Xiao-guang; Liu, Ying-hui; Ni, Guoqiang

    2010-11-01

    Images obtained by an otherwise aberration-free system are defocus-blurred due to motion in depth and/or zooming. The precondition for restoring the degraded image is to estimate the point spread function (PSF) of the imaging system as precisely as possible, but it is difficult to identify an analytic model of the PSF precisely because of the complexity of the degradation process. Inspired by the similarity between quantum processes and the imaging process in the fields of probability and statistics, a reformed multilayer quantum neural network (QNN) is proposed to estimate the PSF of a defocus-blurred image. Unlike a conventional artificial neural network (ANN), an improved quantum neuron model is used in the hidden layer, which introduces a 2-bit controlled-NOT quantum gate to control the output and adopts two texture and edge features as the input vectors. The supervised back-propagation learning rule is adopted to train the network on training sets drawn from historical images. Test results show that this method offers high precision and strong generalization ability.
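    For context, the simplest analytic model of a defocus PSF, which learned estimators such as the QNN above aim to improve upon, is a uniform disk whose radius grows with the defocus amount. The sketch below is illustrative only; `defocus_psf` is a hypothetical helper, not from the paper.

```python
def defocus_psf(radius, size):
    """Uniform-disk approximation of a defocus PSF: blur energy spread
    evenly over a circle of the given radius inside a size x size
    kernel, normalized so the kernel sums to 1 (energy preserving)."""
    c = size // 2  # kernel center
    disk = [[1.0 if (i - c) ** 2 + (j - c) ** 2 <= radius ** 2 else 0.0
             for j in range(size)] for i in range(size)]
    total = sum(map(sum, disk))
    return [[v / total for v in row] for row in disk]
```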

  19. Adaptive nonlinear L2 and L3 filters for speckled image processing

    NASA Astrophysics Data System (ADS)

    Lukin, Vladimir V.; Melnik, Vladimir P.; Chemerovsky, Victor I.; Astola, Jaakko T.

    1997-04-01

    Here we propose adaptive nonlinear filters based on the calculation and analysis of two or three order statistics in a scanning window. They are designed for processing images corrupted by severe speckle noise with non-symmetric (Rayleigh or one-sided exponential) distribution laws; impulsive noise can also be present. The proposed filtering algorithms provide a trade-off between efficient speckle noise suppression, robustness, good edge/detail preservation, low computational complexity, and preservation of the average level in homogeneous regions of images. Quantitative evaluations of the characteristics of the proposed filters are presented, as well as the results of their application to real synthetic aperture radar and ultrasound medical images.
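    A filter built from two order statistics of the scanning window, in the spirit the abstract describes, can be illustrated as follows. The function name, the chosen ranks, and the weights are illustrative assumptions; the actual L2/L3 filters adapt these choices to the speckle statistics.

```python
def l_filter(window, idx=(2, 4), weights=(0.5, 0.5)):
    """L2-type filter sketch: sort the scanning-window samples and
    combine two selected order statistics. Picking ranks away from the
    extremes rejects impulses, and weighting a pair of mid-ranks can
    compensate the bias of a non-symmetric (e.g. Rayleigh) speckle
    distribution better than the median alone."""
    s = sorted(window)
    return sum(w * s[i] for i, w in zip(idx, weights))
```

With a 3x3 window (9 samples), ranks 2 and 4 ignore both the smallest and largest samples, so a single impulsive outlier cannot reach the output.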

  20. Multiscale Image Processing of Solar Image Data

    NASA Astrophysics Data System (ADS)

    Young, C.; Myers, D. C.

    2001-12-01

    It is often said that the blessing and curse of solar physics is too much data. Solar missions such as Yohkoh, SOHO and TRACE have shown us the Sun with amazing clarity but have also increased the amount of highly complex data. We have improved our view of the Sun, yet we have not improved our analysis techniques. The standard techniques used for analysis of solar images generally consist of observing the evolution of features in a sequence of byte-scaled images or a sequence of byte-scaled difference images. The determination of features and structures in the images is done qualitatively by the observer; there is little quantitative and objective analysis done with these images. Many advances in image processing techniques have occurred in the past decade, and many of these methods are potentially well suited for solar image analysis. Multiscale/multiresolution methods are perhaps the most promising. These methods have been used to model the human ability to view and comprehend phenomena on different scales, so they could be used to quantify the image processing performed by the observer's eyes and brain. In this work we present several applications of multiscale techniques applied to solar image data. Specifically, we discuss uses of the wavelet, curvelet, and related transforms to define a multiresolution support for EIT, LASCO and TRACE images.
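    As a rough illustration of a multiresolution support, the 1D sketch below follows the usual starlet-style recipe: smooth, subtract to obtain a detail layer, and flag samples whose detail exceeds k standard deviations of that scale's noise. Everything here (the box smoothing, the noise estimate, the name `multiscale_support`) is an assumption for illustration, not the authors' pipeline.

```python
import statistics

def multiscale_support(signal, n_scales=3, k=3.0):
    """Build a 1D multiresolution support: at each scale, smooth the
    signal with a 3-sample box filter, take the detail (residual)
    layer, and mark samples whose |detail| exceeds k times that
    scale's noise estimate as significant."""
    support = []
    current = list(signal)
    n = len(current)
    for _ in range(n_scales):
        smooth = [(current[max(i - 1, 0)] + current[i]
                   + current[min(i + 1, n - 1)]) / 3 for i in range(n)]
        detail = [c - s for c, s in zip(current, smooth)]
        sigma = statistics.pstdev(detail) or 1e-12  # per-scale noise level
        support.append([abs(d) > k * sigma for d in detail])
        current = smooth  # coarser version feeds the next scale
    return support
```

In the 2D solar case, the same significance test on wavelet or curvelet coefficients gives an objective replacement for the byte-scaled, by-eye feature detection the abstract criticizes.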

  1. Specific-Token Effects in Screening Tasks: Possible Implications for Aviation Security

    ERIC Educational Resources Information Center

    Smith, J. David; Redford, Joshua S.; Washburn, David A.; Taglialatela, Lauren A.

    2005-01-01

    Screeners at airport security checkpoints perform an important categorization task in which they search for threat items in complex x-ray images. But little is known about how the processes of categorization stand up to visual complexity. The authors filled this research gap with screening tasks in which participants searched for members of target…

  2. Simulation of noise involved in synthetic aperture radar

    NASA Astrophysics Data System (ADS)

    Grandchamp, Myriam; Cavassilas, Jean-Francois

    1996-08-01

    The synthetic aperture radar (SAR) returns from a linear distribution of scatterers are simulated and processed in order to estimate the reflectivity coefficients of the ground. An original expression of this estimate is given, which establishes the relation between the signal and noise terms, and the two are compared. One application of this formulation consists of detecting a surface-ship wake in a complex SAR image. A smoothing is first performed on the complex image; the choice of the integration area is determined by the preceding mathematical formulation. Then a differential filter is applied, and results are shown for two parts of the wake.

  3. Three-color single molecule imaging shows WASP detachment from Arp2/3 complex triggers actin filament branch formation.

    PubMed

    Smith, Benjamin A; Padrick, Shae B; Doolittle, Lynda K; Daugherty-Clarke, Karen; Corrêa, Ivan R; Xu, Ming-Qun; Goode, Bruce L; Rosen, Michael K; Gelles, Jeff

    2013-09-03

    During cell locomotion and endocytosis, membrane-tethered WASP proteins stimulate actin filament nucleation by the Arp2/3 complex. This process generates highly branched arrays of filaments that grow toward the membrane to which they are tethered, a conflict that seemingly would restrict filament growth. Using three-color single-molecule imaging in vitro we revealed how the dynamic associations of Arp2/3 complex with mother filament and WASP are temporally coordinated with initiation of daughter filament growth. We found that WASP proteins dissociated from filament-bound Arp2/3 complex prior to new filament growth. Further, mutations that accelerated release of WASP from filament-bound Arp2/3 complex proportionally accelerated branch formation. These data suggest that while WASP promotes formation of pre-nucleation complexes, filament growth cannot occur until it is triggered by WASP release. This provides a mechanism by which membrane-bound WASP proteins can stimulate network growth without restraining it. DOI:http://dx.doi.org/10.7554/eLife.01008.001.

  4. Neural Correlates of Metonymy Resolution

    ERIC Educational Resources Information Center

    Rapp, Alexander M.; Erb, Michael; Grodd, Wolfgang; Bartels, Mathias; Markert, Katja

    2011-01-01

    Metonymies are exemplary models for complex semantic association processes at the sentence level. We investigated processing of metonymies using event-related functional magnetic resonance imaging (fMRI). During an 1.5 Tesla fMRI scan, 14 healthy subjects (12 female) read 124 short German sentences with either literal (like "Africa is arid"),…

  5. Quantum realization of the bilinear interpolation method for NEQR.

    PubMed

    Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Ian, Hou

    2017-05-31

    In recent years, quantum image processing has become one of the most active fields in quantum computation and quantum information. Image scaling, a kind of geometric image transformation, has been widely studied and applied in classical image processing; however, a quantum version has been lacking. This paper is concerned with the feasibility of classical bilinear interpolation based on the novel enhanced quantum image representation (NEQR). Firstly, the feasibility of bilinear interpolation for NEQR is proven. Then the concrete quantum circuits of bilinear interpolation for NEQR, including scaling up and scaling down, are given using the multi-controlled NOT operation, a special add-one operation, the reverse parallel adder, the parallel subtractor, and multiplier and division operations. Finally, the complexity of the quantum network circuit is analyzed in terms of basic quantum gates. Simulation results show that the scaled-up image produced by bilinear interpolation is clearer and less distorted than that produced by nearest-neighbor interpolation.
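    The classical bilinear scaling that the quantum circuits reproduce can be sketched directly. This is a generic textbook version, not code from the paper; the names and edge handling are my assumptions, and it expects source images of at least 2x2 pixels.

```python
def bilinear_scale(img, new_h, new_w):
    """Scale a 2D image (nested lists, at least 2x2) with classical
    bilinear interpolation: each target pixel is a distance-weighted
    average of the four nearest source pixels."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(new_h):
        y = i * (h - 1) / max(new_h - 1, 1)   # source row coordinate
        y0 = min(int(y), h - 2)               # top neighbor row
        fy = y - y0                           # vertical weight
        row = []
        for j in range(new_w):
            x = j * (w - 1) / max(new_w - 1, 1)  # source column coordinate
            x0 = min(int(x), w - 2)              # left neighbor column
            fx = x - x0                          # horizontal weight
            row.append((1 - fy) * (1 - fx) * img[y0][x0]
                       + (1 - fy) * fx * img[y0][x0 + 1]
                       + fy * (1 - fx) * img[y0 + 1][x0]
                       + fy * fx * img[y0 + 1][x0 + 1])
        out.append(row)
    return out
```

The four multiplications and three additions per pixel are exactly the arithmetic that the paper's adder, subtractor, multiplier and division circuits carry out on NEQR-encoded pixel values.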

  6. Chemical sporulation and germination: cytoprotective nanocoating of individual mammalian cells with a degradable tannic acid-FeIII complex

    NASA Astrophysics Data System (ADS)

    Lee, Juno; Cho, Hyeoncheol; Choi, Jinsu; Kim, Doyeon; Hong, Daewha; Park, Ji Hun; Yang, Sung Ho; Choi, Insung S.

    2015-11-01

    Individual mammalian cells were coated with cytoprotective and degradable films by cytocompatible processes maintaining the cell viability. Three types of mammalian cells (HeLa, NIH 3T3, and Jurkat cells) were coated with a metal-organic complex of tannic acid (TA) and ferric ion, and the TA-FeIII nanocoat effectively protected the coated mammalian cells against UV-C irradiation and a toxic compound. More importantly, the cell proliferation was controlled by programmed formation and degradation of the TA-FeIII nanocoat, mimicking the sporulation and germination processes found in nature. Electronic supplementary information (ESI) available: Experimental details, LSCM images, and SEM and TEM images. See DOI: 10.1039/c5nr05573c

  7. Application of failure mode and effect analysis in a radiology department.

    PubMed

    Thornton, Eavan; Brook, Olga R; Mendiratta-Lala, Mishal; Hallett, Donna T; Kruskal, Jonathan B

    2011-01-01

    With increasing deployment, complexity, and sophistication of equipment and related processes within the clinical imaging environment, system failures are more likely to occur. These failures may have varying effects on the patient, ranging from no harm to devastating harm. Failure mode and effect analysis (FMEA) is a tool that permits the proactive identification of possible failures in complex processes and provides a basis for continuous improvement. This overview of the basic principles and methodology of FMEA provides an explanation of how FMEA can be applied to clinical operations in a radiology department to reduce, predict, or prevent errors. The six sequential steps in the FMEA process are explained, and clinical magnetic resonance imaging services are used as an example for which FMEA is particularly applicable. A modified version of traditional FMEA called Healthcare Failure Mode and Effect Analysis, which was introduced by the U.S. Department of Veterans Affairs National Center for Patient Safety, is briefly reviewed. In conclusion, FMEA is an effective and reliable method to proactively examine complex processes in the radiology department. FMEA can be used to highlight the high-risk subprocesses and allows these to be targeted to minimize the future occurrence of failures, thus improving patient safety and streamlining the efficiency of the radiology department. RSNA, 2010
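    The abstract does not spell out the scoring step, but classic FMEA ranks failure modes by a risk priority number (RPN), the product of severity, occurrence, and detection ratings; the high-RPN subprocesses are the ones targeted for improvement. A minimal sketch, with the conventional 1-10 scales assumed:

```python
def risk_priority_number(severity, occurrence, detection):
    """Classic FMEA scoring: RPN = severity x occurrence x detection,
    each rated on a 1-10 scale. Higher RPN flags the failure modes
    (e.g. MRI scheduling or contrast-administration steps) to target
    first for mitigation."""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("FMEA ratings are on a 1-10 scale")
    return severity * occurrence * detection
```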

  8. Implementation of an RBF neural network on embedded systems: real-time face tracking and identity verification.

    PubMed

    Yang, Fan; Paindavoine, M

    2003-01-01

    This paper describes a real-time vision system that allows us to localize faces in video sequences and verify their identity. These processes are image processing techniques based on the radial basis function (RBF) neural network approach. The robustness of this system has been evaluated quantitatively on eight video sequences. We have adapted our model for an application of face recognition using the Olivetti Research Laboratory (ORL), Cambridge, UK, database so as to compare the performance against other systems. We also describe three hardware implementations of our model on embedded systems based on the field programmable gate array (FPGA), zero instruction set computer (ZISC) chips, and the digital signal processor (DSP) TMS320C62, respectively. We analyze the algorithm complexity and present results of the hardware implementations in terms of the resources used and processing speed. The success rates of face tracking and identity verification are 92% (FPGA), 85% (ZISC), and 98.2% (DSP), respectively. For the three embedded systems, the processing speeds for images of size 288 × 352 are 14 images/s, 25 images/s, and 4.8 images/s, respectively.

  9. Hybrid Core-Shell (HyCoS) Nanoparticles produced by Complex Coacervation for Multimodal Applications

    NASA Astrophysics Data System (ADS)

    Vecchione, D.; Grimaldi, A. M.; Forte, E.; Bevilacqua, Paolo; Netti, P. A.; Torino, E.

    2017-03-01

    Multimodal imaging probes can provide diagnostic information combining different imaging modalities. Nanoparticles (NPs) can contain two or more imaging tracers that allow several diagnostic techniques to be used simultaneously. In this work, a complex coacervation process to produce completely biocompatible core-shell polymeric nanoparticles (HyCoS) for multimodal imaging applications is described. Innovations over the traditional coacervation process lie in the control of the reaction temperature, which speeds up the reaction itself, and in the production of a double-crosslinked system that improves the stability of the nanostructures in the presence of a clinically relevant contrast agent for MRI (Gd-DTPA). Through control of the crosslinking behavior, an increase of up to 6 times in the relaxometric properties of Gd-DTPA is achieved. Furthermore, HyCoS can be loaded with a high amount of a dye such as ATTO 633 or conjugated with a model dye such as FITC for in vivo optical imaging. The results show stable core-shell polymeric nanoparticles that can be used both for MRI and for optical applications, allowing detection free from harmful radiation. Additionally, preliminary results on the possibility of triggering drug release through a pH effect are reported.

  10. Stent deployment protocol for optimized real-time visualization during endovascular neurosurgery.

    PubMed

    Silva, Michael A; See, Alfred P; Dasenbrock, Hormuzdiyar H; Ashour, Ramsey; Khandelwal, Priyank; Patel, Nirav J; Frerichs, Kai U; Aziz-Sultan, Mohammad A

    2017-05-01

    Successful application of endovascular neurosurgery depends on high-quality imaging to define the pathology and the devices as they are being deployed. This is especially challenging in the treatment of complex cases, particularly in proximity to the skull base or in patients who have undergone prior endovascular treatment. The authors sought to optimize real-time image guidance using a simple algorithm that can be applied to any existing fluoroscopy system. Exposure management (exposure level, pulse management) and image post-processing parameters (edge enhancement) were modified from traditional fluoroscopy to improve visualization of device position and material density during deployment. Examples include the deployment of coils in small aneurysms, coils in giant aneurysms, the Pipeline embolization device (PED), the Woven EndoBridge (WEB) device, and carotid artery stents. The authors report on the development of the protocol and their experience using representative cases. The stent deployment protocol is an image capture and post-processing algorithm that can be applied to existing fluoroscopy systems to improve real-time visualization of device deployment without hardware modifications. Improved image guidance facilitates aneurysm coil packing and proper positioning and deployment of carotid artery stents, flow diverters, and the WEB device, especially in the context of complex anatomy and an obscured field of view.

  11. Plenoptic mapping for imaging and retrieval of the complex field amplitude of a laser beam.

    PubMed

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher C

    2016-12-26

    The plenoptic sensor has been developed to sample complicated beam distortions produced by turbulence in the low atmosphere (deep turbulence or strong turbulence) with high density data samples. In contrast with the conventional Shack-Hartmann wavefront sensor, which utilizes all the pixels under each lenslet of a micro-lens array (MLA) to obtain one data sample indicating sub-aperture phase gradient and photon intensity, the plenoptic sensor uses each illuminated pixel (with significant pixel value) under each MLA lenslet as a data point for local phase gradient and intensity. To characterize the working principle of a plenoptic sensor, we propose the concept of plenoptic mapping and its inverse mapping to describe the imaging and reconstruction process respectively. As a result, we show that the plenoptic mapping is an efficient method to image and reconstruct the complex field amplitude of an incident beam with just one image. With a proof of concept experiment, we show that adaptive optics (AO) phase correction can be instantaneously achieved without going through a phase reconstruction process under the concept of plenoptic mapping. The plenoptic mapping technology has high potential for applications in imaging, free space optical (FSO) communication and directed energy (DE) where atmospheric turbulence distortion needs to be compensated.

  12. The formation of Charon's red poles from seasonally cold-trapped volatiles

    NASA Astrophysics Data System (ADS)

    Grundy, W. M.; Cruikshank, D. P.; Gladstone, G. R.; Howett, C. J. A.; Lauer, T. R.; Spencer, J. R.; Summers, M. E.; Buie, M. W.; Earle, A. M.; Ennico, K.; Parker, J. Wm.; Porter, S. B.; Singer, K. N.; Stern, S. A.; Verbiscer, A. J.; Beyer, R. A.; Binzel, R. P.; Buratti, B. J.; Cook, J. C.; Dalle Ore, C. M.; Olkin, C. B.; Parker, A. H.; Protopapa, S.; Quirico, E.; Retherford, K. D.; Robbins, S. J.; Schmitt, B.; Stansberry, J. A.; Umurhan, O. M.; Weaver, H. A.; Young, L. A.; Zangari, A. M.; Bray, V. J.; Cheng, A. F.; McKinnon, W. B.; McNutt, R. L.; Moore, J. M.; Nimmo, F.; Reuter, D. C.; Schenk, P. M.; New Horizons Science Team; Stern, S. A.; Bagenal, F.; Ennico, K.; Gladstone, G. R.; Grundy, W. M.; McKinnon, W. B.; Moore, J. M.; Olkin, C. B.; Spencer, J. R.; Weaver, H. A.; Young, L. A.; Andert, T.; Barnouin, O.; Beyer, R. A.; Binzel, R. P.; Bird, M.; Bray, V. J.; Brozovic, M.; Buie, M. W.; Buratti, B. J.; Cheng, A. F.; Cook, J. C.; Cruikshank, D. P.; Dalle Ore, C. M.; Earle, A. M.; Elliott, H. A.; Greathouse, T. K.; Hahn, M.; Hamilton, D. P.; Hill, M. E.; Hinson, D. P.; Hofgartner, J.; Horányi, M.; Howard, A. D.; Howett, C. J. A.; Jennings, D. E.; Kammer, J. A.; Kollmann, P.; Lauer, T. R.; Lavvas, P.; Linscott, I. R.; Lisse, C. M.; Lunsford, A. W.; McComas, D. J.; McNutt, R. L., Jr.; Mutchler, M.; Nimmo, F.; Nunez, J. I.; Paetzold, M.; Parker, A. H.; Parker, J. Wm.; Philippe, S.; Piquette, M.; Porter, S. B.; Protopapa, S.; Quirico, E.; Reitsema, H. J.; Reuter, D. C.; Robbins, S. J.; Roberts, J. H.; Runyon, K.; Schenk, P. M.; Schindhelm, E.; Schmitt, B.; Showalter, M. R.; Singer, K. N.; Stansberry, J. A.; Steffl, A. J.; Strobel, D. F.; Stryk, T.; Summers, M. E.; Szalay, J. R.; Throop, H. B.; Tsang, C. C. C.; Tyler, G. L.; Umurhan, O. M.; Verbiscer, A. J.; Versteeg, M. H.; Weigle, G. E., II; White, O. L.; Woods, W. W.; Young, E. F.; Zangari, A. M.

    2016-11-01

    A unique feature of Pluto's large satellite Charon is its dark red northern polar cap. Similar colours on Pluto's surface have been attributed to tholin-like organic macromolecules produced by energetic radiation processing of hydrocarbons. The polar location on Charon implicates the temperature extremes that result from Charon's high obliquity and long seasons in the production of this material. The escape of Pluto's atmosphere provides a potential feedstock for a complex chemistry. Gas from Pluto that is transiently cold-trapped and processed at Charon's winter pole was proposed as an explanation for the dark coloration on the basis of an image of Charon's northern hemisphere, but not modelled quantitatively. Here we report images of the southern hemisphere illuminated by Pluto-shine and also images taken during the approach phase that show the northern polar cap over a range of longitudes. We model the surface thermal environment on Charon and the supply and temporary cold-trapping of material escaping from Pluto, as well as the photolytic processing of this material into more complex and less volatile molecules while cold-trapped. The model results are consistent with the proposed mechanism for producing the observed colour pattern on Charon.

  13. Mental visualization of objects from cross-sectional images

    PubMed Central

    Wu, Bing; Klatzky, Roberta L.; Stetten, George D.

    2011-01-01

    We extended the classic anorthoscopic viewing procedure to test a model of visualization of 3D structures from 2D cross-sections. Four experiments were conducted to examine key processes described in the model, localizing cross-sections within a common frame of reference and spatiotemporal integration of cross sections into a hierarchical object representation. Participants used a hand-held device to reveal a hidden object as a sequence of cross-sectional images. The process of localization was manipulated by contrasting two displays, in-situ vs. ex-situ, which differed in whether cross sections were presented at their source locations or displaced to a remote screen. The process of integration was manipulated by varying the structural complexity of target objects and their components. Experiments 1 and 2 demonstrated visualization of 2D and 3D line-segment objects and verified predictions about display and complexity effects. In Experiments 3 and 4, the visualized forms were familiar letters and numbers. Errors and orientation effects showed that displacing cross-sectional images to a remote display (ex-situ viewing) impeded the ability to determine spatial relationships among pattern components, a failure of integration at the object level. PMID:22217386

  14. The Formation of Charon's Red Poles from Seasonally Cold-Trapped Volatiles

    NASA Technical Reports Server (NTRS)

    Grundy, W. M.; Cruikshank, D. P.; Gladstone, G. R.; Howett, C. J. A.; Lauer, T. R.; Spencer, J. R.; Summers, M. E.; Buie, M. W.; Earle, A. M.; Ennico, K.; et al.

    2016-01-01

    A unique feature of Pluto's large satellite Charon is its dark red northern polar cap. Similar colours on Pluto's surface have been attributed to tholin-like organic macromolecules produced by energetic radiation processing of hydrocarbons. The polar location on Charon implicates the temperature extremes that result from Charon's high obliquity and long seasons in the production of this material. The escape of Pluto's atmosphere provides a potential feedstock for a complex chemistry. Gas from Pluto that is transiently cold-trapped and processed at Charon's winter pole was proposed as an explanation for the dark coloration on the basis of an image of Charon's northern hemisphere, but not modelled quantitatively. Here we report images of the southern hemisphere illuminated by Pluto-shine and also images taken during the approach phase that show the northern polar cap over a range of longitudes. We model the surface thermal environment on Charon and the supply and temporary cold-trapping of material escaping from Pluto, as well as the photolytic processing of this material into more complex and less volatile molecules while cold-trapped. The model results are consistent with the proposed mechanism for producing the observed colour pattern on Charon.

  15. The formation of Charon's red poles from seasonally cold-trapped volatiles.

    PubMed

    Grundy, W M; Cruikshank, D P; Gladstone, G R; Howett, C J A; Lauer, T R; Spencer, J R; Summers, M E; Buie, M W; Earle, A M; Ennico, K; Parker, J Wm; Porter, S B; Singer, K N; Stern, S A; Verbiscer, A J; Beyer, R A; Binzel, R P; Buratti, B J; Cook, J C; Dalle Ore, C M; Olkin, C B; Parker, A H; Protopapa, S; Quirico, E; Retherford, K D; Robbins, S J; Schmitt, B; Stansberry, J A; Umurhan, O M; Weaver, H A; Young, L A; Zangari, A M; Bray, V J; Cheng, A F; McKinnon, W B; McNutt, R L; Moore, J M; Nimmo, F; Reuter, D C; Schenk, P M

    2016-11-03

    A unique feature of Pluto's large satellite Charon is its dark red northern polar cap. Similar colours on Pluto's surface have been attributed to tholin-like organic macromolecules produced by energetic radiation processing of hydrocarbons. The polar location on Charon implicates the temperature extremes that result from Charon's high obliquity and long seasons in the production of this material. The escape of Pluto's atmosphere provides a potential feedstock for a complex chemistry. Gas from Pluto that is transiently cold-trapped and processed at Charon's winter pole was proposed as an explanation for the dark coloration on the basis of an image of Charon's northern hemisphere, but not modelled quantitatively. Here we report images of the southern hemisphere illuminated by Pluto-shine and also images taken during the approach phase that show the northern polar cap over a range of longitudes. We model the surface thermal environment on Charon and the supply and temporary cold-trapping of material escaping from Pluto, as well as the photolytic processing of this material into more complex and less volatile molecules while cold-trapped. The model results are consistent with the proposed mechanism for producing the observed colour pattern on Charon.

  16. Distorted images of one's own body activates the prefrontal cortex and limbic/paralimbic system in young women: a functional magnetic resonance imaging study.

    PubMed

    Kurosaki, Mitsuhaya; Shirao, Naoko; Yamashita, Hidehisa; Okamoto, Yasumasa; Yamawaki, Shigeto

    2006-02-15

    Our aim was to study the gender differences in brain activation upon viewing visual stimuli of distorted images of one's own body. We performed functional magnetic resonance imaging on 11 healthy young men and 11 healthy young women using the "body image tasks" which consisted of fat, real, and thin shapes of the subject's own body. Comparison of the brain activation upon performing the fat-image task versus real-image task showed significant activation of the bilateral prefrontal cortex and left parahippocampal area including the amygdala in the women, and significant activation of the right occipital lobe including the primary and secondary visual cortices in the men. Comparison of brain activation upon performing the thin-image task versus real-image task showed significant activation of the left prefrontal cortex, left limbic area including the cingulate gyrus and paralimbic area including the insula in women, and significant activation of the occipital lobe including the left primary and secondary visual cortices in men. These results suggest that women tend to perceive distorted images of their own bodies by complex cognitive processing of emotion, whereas men tend to perceive distorted images of their own bodies by object visual processing and spatial visual processing.

  17. Pixel-based speckle adjustment for noise reduction in Fourier-domain OCT images.

    PubMed

    Zhang, Anqi; Xi, Jiefeng; Sun, Jitao; Li, Xingde

    2017-03-01

    Speckle resides in OCT signals and inevitably affects OCT image quality. In this work, we present a novel method for speckle noise reduction in Fourier-domain OCT images that utilizes the phase information of complex OCT data. In this method, the speckle area is first delineated pixelwise by a phase-domain processing method and then adjusted using the results of wavelet shrinkage of the original image. A coefficient shrinkage method, such as wavelet or contourlet shrinkage, is applied afterwards to further suppress the speckle noise. Compared with conventional methods without speckle adjustment, the proposed method demonstrates significant improvement in image quality.
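    Wavelet coefficient shrinkage of the kind mentioned above can be illustrated with a one-level Haar transform and soft thresholding in 1D. This is a generic sketch, not the paper's pixelwise phase-based method; the name `haar_soft_denoise` and the even-length-input assumption are mine.

```python
import math

def haar_soft_denoise(signal, thresh):
    """One-level Haar wavelet shrinkage: transform an even-length
    signal into approximation and detail coefficients, soft-threshold
    the details (shrink toward zero by `thresh`), then invert the
    transform. Small, noise-like details are zeroed; large,
    structure-bearing ones survive, shrunk."""
    r2 = math.sqrt(2.0)
    half = len(signal) // 2
    approx = [(signal[2 * i] + signal[2 * i + 1]) / r2 for i in range(half)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / r2 for i in range(half)]
    shrunk = [math.copysign(max(abs(d) - thresh, 0.0), d) for d in detail]
    out = []
    for a, d in zip(approx, shrunk):  # inverse Haar transform
        out += [(a + d) / r2, (a - d) / r2]
    return out
```

Contourlet shrinkage follows the same threshold-and-invert pattern with a directional transform better matched to 2D image edges.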

  18. Comparison of mandibular first molar mesial root canal morphology using micro-computed tomography and clearing technique.

    PubMed

    Kim, Yeun; Perinpanayagam, Hiran; Lee, Jong-Ki; Yoo, Yeon-Jee; Oh, Soram; Gu, Yu; Lee, Seung-Pyo; Chang, Seok Woo; Lee, Woocheol; Baek, Seung-Ho; Zhu, Qiang; Kum, Kee-Yeon

    2015-08-01

    Micro-computed tomography (MCT) with alternative image reformatting techniques shows complex and detailed root canal anatomy. This study compared two-dimensional (2D) and 3D MCT image reformatting with standard tooth clearing for studying mandibular first molar mesial root canal morphology. Extracted human mandibular first molar mesial roots (n=31) were scanned by MCT (Skyscan 1172). 2D thin-slab minimum intensity projection (TS-MinIP) and 3D volume rendered images were constructed. The same teeth were then processed by clearing and staining. For each root, images obtained from clearing, 2D, 3D and combined 2D and 3D techniques were examined independently by four endodontists and categorized according to Vertucci's classification. Fine anatomical structures such as accessory canals, intercanal communications and loops were also identified. Agreement among the four techniques for Vertucci's classification was 45.2% (14/31). The most frequent were Vertucci's type IV and then type II, although many had complex configurations that were non-classifiable. Generally, complex canal systems were more clearly visible in MCT images than with standard clearing and staining. Fine anatomical structures such as intercanal communications, accessory canals and loops were mostly detected with a combination of 2D TS-MinIP and 3D volume-rendering MCT images. Canal configurations and fine anatomic structures were more clearly observed in the combined 2D and 3D MCT images than the clearing technique. The frequency of non-classifiable configurations demonstrated the complexity of mandibular first molar mesial root canal anatomy.

  19. Non-covalent pomegranate (Punica granatum) hydrolyzable tannin-protein complexes modulate antigen uptake, processing and presentation by a T-cell hybridoma line co-cultured with murine peritoneal macrophages.

    PubMed

    Madrigal-Carballo, Sergio; Haas, Linda; Vestling, Martha; Krueger, Christian G; Reed, Jess D

    2016-12-01

    In this work we characterize the interaction of pomegranate hydrolyzable tannins (HT) with hen egg-white lysozyme (HEL) and determine the effects of non-covalent tannin-protein complexes on macrophage endocytosis, processing and presentation of antigen. We isolated HT from pomegranate and complexed them to HEL; the resulting non-covalent tannin-protein complex was characterized by gel electrophoresis and MALDI-TOF MS. Finally, cell culture studies and confocal microscopy imaging were conducted on the non-covalent pomegranate HT-HEL protein complexes to evaluate their effect on macrophage antigen uptake, processing and presentation to T-cell hybridomas. Our results indicate that non-covalent pomegranate HT-HEL protein complexes modulate uptake, processing and antigen presentation by mouse peritoneal macrophages. After 4 h of pre-incubation, only trace amounts of IL-2 were detected in the co-cultures treated with HEL alone, whereas a non-covalent pomegranate HT-HEL complex had already reached maximum IL-2 expression. Pomegranate HT may increase the rate of endocytosis of HEL and the subsequent expression of IL-2 by the T-cell hybridomas.

  20. Adaptive Microwave Staring Correlated Imaging for Targets Appearing in Discrete Clusters.

    PubMed

    Tian, Chao; Jiang, Zheng; Chen, Weidong; Wang, Dongjin

    2017-10-21

    Microwave staring correlated imaging (MSCI) can achieve ultra-high resolution in real-aperture staring radar imaging using the correlated imaging process (CIP) under all-weather and all-day circumstances. The CIP must combine the received echo signal with the temporal-spatial stochastic radiation field. However, a precondition of the CIP is that the continuous imaging region must be discretized to a fine grid and the measurement matrix accurately computed, which makes the imaging process highly complex when the MSCI system observes a wide area. This paper proposes an adaptive imaging approach for targets in discrete clusters to reduce the complexity of the CIP. The approach is divided into two main stages. First, as discrete clustered targets are distributed in different range strips of the imaging region, the transmitters of the MSCI emit narrow-pulse waveforms to separate the echoes of targets in different strips in the time domain; using spectral entropy, a modified method robust against noise is put forward to detect the echoes of the discrete clustered targets, based on which the strips with targets can be adaptively located. Second, in a strip with targets, a matched-filter reconstruction algorithm is used to locate the regions with targets, and only the regions of interest are discretized to a fine grid; sparse recovery is then applied, with band exclusion used to maintain the non-correlation of the dictionary. Simulation results demonstrate that the proposed approach can accurately and adaptively locate the regions with targets and obtain high-quality reconstructed images.
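
    The strip-detection stage above relies on spectral entropy to distinguish structured target echoes from noise-only windows. The following is a rough, illustrative sketch of that idea only (not the authors' modified method; the function names and the detection threshold are assumptions):

```python
import numpy as np

def spectral_entropy(signal):
    """Normalized spectral entropy in [0, 1]: low for structured echoes,
    close to 1 for noise-like windows."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    p = spectrum / spectrum.sum()
    p = p[p > 0]                       # drop empty bins to avoid log(0)
    h = -np.sum(p * np.log2(p))
    return h / np.log2(spectrum.size)  # normalize by the maximum possible entropy

def detect_target_strips(strip_echoes, threshold=0.5):
    """Flag range strips whose echoes look structured (entropy below threshold)."""
    return [i for i, e in enumerate(strip_echoes) if spectral_entropy(e) < threshold]
```

    A pure sinusoidal echo concentrates its energy in one frequency bin (entropy near 0), while white noise spreads energy across all bins (entropy near 1), which is what makes the measure usable as a detector.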

  1. Cryo-EM of dynamic protein complexes in eukaryotic DNA replication.

    PubMed

    Sun, Jingchuan; Yuan, Zuanning; Bai, Lin; Li, Huilin

    2017-01-01

    DNA replication in eukaryotes is a highly dynamic process that involves several dozen proteins. Some of these proteins form stable complexes that are amenable to high-resolution structure determination by cryo-EM, thanks to the recent advent of direct electron detectors and powerful image analysis algorithms. But many of these proteins associate only transiently and flexibly, precluding traditional biochemical purification. We found that direct mixing of the component proteins followed by 2D and 3D image sorting can capture some very weakly interacting complexes. Even at the level of 2D averages and at low resolution, EM images of these flexible complexes can provide important biological insights. It is often necessary to positively identify the feature-of-interest in a low-resolution EM structure. We found that systematically fusing or inserting maltose binding protein (MBP) into selected proteins is highly effective in these situations. In this chapter, we describe the EM studies of several protein complexes involved in eukaryotic DNA replication over the past decade or so. We suggest that some of the approaches used in these studies may be applicable to structural analysis of other biological systems. © 2016 The Protein Society.

  2. Multiplexed aberration measurement for deep tissue imaging in vivo

    PubMed Central

    Wang, Chen; Liu, Rui; Milkie, Daniel E.; Sun, Wenzhi; Tan, Zhongchao; Kerlin, Aaron; Chen, Tsai-Wen; Kim, Douglas S.; Ji, Na

    2014-01-01

    We describe a multiplexed aberration measurement method that modulates the intensity or phase of light rays at multiple pupil segments in parallel to determine their phase gradients. Applicable to fluorescent-protein-labeled structures of arbitrary complexity, it allows us to obtain diffraction-limited resolution in various samples in vivo. For the strongly scattering mouse brain, a single aberration correction improves structural and functional imaging of fine neuronal processes over a large imaging volume. PMID:25128976

  3. Selection of bi-level image compression method for reduction of communication energy in wireless visual sensor networks

    NASA Astrophysics Data System (ADS)

    Khursheed, Khursheed; Imran, Muhammad; Ahmad, Naeem; O'Nils, Mattias

    2012-06-01

    A Wireless Visual Sensor Network (WVSN) is an emerging field which combines an image sensor, an on-board computation unit, a communication component and an energy source. Compared to a traditional wireless sensor network, which operates on one-dimensional data such as temperature and pressure values, a WVSN operates on two-dimensional data (images), which requires higher processing power and communication bandwidth. Normally, WVSNs are deployed in areas where installation of wired solutions is not feasible. The energy budget in these networks is limited to batteries because of the wireless nature of the application. Due to the limited availability of energy, processing at the Visual Sensor Nodes (VSN) and communication from VSN to server should consume as little energy as possible. Transmitting raw images wirelessly consumes a great deal of energy and requires high communication bandwidth. Data compression methods reduce data efficiently and hence are effective in reducing communication cost in a WVSN. In this paper, we have compared the compression efficiency and complexity of six well-known bi-level image compression methods. The focus is to determine which compression algorithms can efficiently compress bi-level images with a computational complexity suitable for the platforms used in WVSNs. These results can be used as a road map for the selection of compression methods for different sets of constraints in WVSNs.
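
    As a point of reference for what bi-level compression involves, run-length encoding is one of the simplest schemes a resource-constrained sensor node could use. This is a hedged illustration of the general idea, not one of the six methods the paper actually benchmarks:

```python
def rle_encode(bits):
    """Run-length encode a flat sequence of 0/1 pixels as (value, run) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([b, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Expand (value, run) pairs back to the original pixel sequence."""
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out
```

    Bi-level images with large uniform regions compress well under such schemes, and both encoding and decoding are linear-time with constant memory per run, which suits low-power platforms.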

  4. A multimodal imaging workflow to visualize metal mixtures in the human placenta and explore colocalization with biological response markers.

    PubMed

    Niedzwiecki, Megan M; Austin, Christine; Remark, Romain; Merad, Miriam; Gnjatic, Sacha; Estrada-Gutierrez, Guadalupe; Espejel-Nuñez, Aurora; Borboa-Olivares, Hector; Guzman-Huerta, Mario; Wright, Rosalind J; Wright, Robert O; Arora, Manish

    2016-04-01

    Fetal exposure to essential and toxic metals can influence life-long health trajectories. The placenta regulates chemical transmission from maternal circulation to the fetus and itself exhibits a complex response to environmental stressors. The placenta can thus be a useful matrix to monitor metal exposures and stress responses in utero, but strategies to explore the biologic effects of metal mixtures in this organ are not well-developed. In this proof-of-concept study, we used laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) to measure the distributions of multiple metals in placental tissue from a low-birth-weight pregnancy, and we developed an approach to identify the components of metal mixtures that colocalized with biological response markers. Our novel workflow, which includes custom-developed software tools and algorithms for spatial outlier identification and background subtraction in multidimensional elemental image stacks, enables rapid image processing and seamless integration of data from elemental imaging and immunohistochemistry. Using quantitative spatial statistics, we identified distinct patterns of metal accumulation at sites of inflammation. Broadly, our multiplexed approach can be used to explore the mechanisms mediating complex metal exposures and biologic responses within placentae and other tissue types. Our LA-ICP-MS image processing workflow can be accessed through our interactive R Shiny application 'shinyImaging', which is available at or through our laboratory's website, .
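
    The spatial-outlier-identification and background-subtraction steps of such a workflow can be sketched roughly as follows. This is an illustrative approximation only; the window size, robust threshold and percentile are assumptions, not the shinyImaging implementation:

```python
import numpy as np

def remove_spatial_outliers(img, z=5.0):
    """Replace pixels more than z robust sigmas from their 3x3 local median
    with that median (median absolute deviation scaled to sigma)."""
    padded = np.pad(img, 1, mode='edge')
    out = img.astype(float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            med = np.median(win)
            mad = np.median(np.abs(win - med)) + 1e-12
            if abs(img[i, j] - med) > z * 1.4826 * mad:
                out[i, j] = med
    return out

def subtract_background(img, percentile=10):
    """Subtract a constant background estimated as a low percentile; clip at zero."""
    return np.clip(img - np.percentile(img, percentile), 0, None)
```

    Applying the outlier filter before background subtraction keeps isolated ablation spikes from biasing the background estimate.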

  5. Three-dimensional image signals: processing methods

    NASA Astrophysics Data System (ADS)

    Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru

    2010-11-01

    Over the years extensive studies have been carried out to apply coherent optics methods in real-time processing, communications and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity of information exchange. We describe the results of a literature investigation of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured by an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms": holograms that can be stored on a computer and transmitted over conventional networks. We present some methods to process "digital holograms" for Internet transmission, together with results.

  6. Partitioning medical image databases for content-based queries on a Grid.

    PubMed

    Montagnat, J; Breton, V; Magnin, I E

    2005-01-01

    In this paper we study the impact of executing a medical image database query application on the grid. For lowering the total computation time, the image database is partitioned into subsets to be processed on different grid nodes. A theoretical model of the application complexity and estimates of the grid execution overhead are used to efficiently partition the database. We show results demonstrating that smart partitioning of the database can lead to significant improvements in terms of total computation time. Grids are promising for content-based image retrieval in medical databases.
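
    The trade-off behind partitioning can be illustrated with a toy cost model: if each subset costs s to submit sequentially and the subsets then execute in parallel, the total time T(n) = n·s + C/n is minimized near n = sqrt(C/s). A hedged sketch under those assumptions (this model and the function names are illustrative, not the paper's exact formulation):

```python
import math

def best_partition_count(total_cost, submit_overhead, max_nodes):
    """Pick the number of database subsets n minimizing
    T(n) = n * submit_overhead + total_cost / n, capped at max_nodes.
    The continuous optimum is sqrt(total_cost / submit_overhead);
    we check its integer neighbours."""
    T = lambda n: n * submit_overhead + total_cost / n
    n_star = max(1, min(max_nodes, round(math.sqrt(total_cost / submit_overhead))))
    candidates = {max(1, n_star - 1), n_star, min(max_nodes, n_star + 1)}
    return min(candidates, key=T)
```

    With a total processing cost of 10000 s and a 100 s per-subset grid overhead, the model favours about 10 subsets; too few subsets under-use the grid, too many drown in submission overhead.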

  7. Artifact removal algorithms for stroke detection using a multistatic MIST beamforming algorithm.

    PubMed

    Ricci, E; Di Domenico, S; Cianca, E; Rossi, T

    2015-01-01

    Microwave imaging (MWI) has recently been shown to be a promising imaging modality for low-complexity, low-cost and fast brain imaging tools, which could play a fundamental role in efficiently managing emergencies related to stroke and hemorrhages. This paper focuses on the UWB radar imaging approach and in particular on the processing algorithms for the backscattered signals. Assuming the use of the multistatic version of the MIST (Microwave Imaging Space-Time) beamforming algorithm, developed by Hagness et al. for the early detection of breast cancer, the paper proposes and compares two artifact removal algorithms. Artifact removal is an essential step in any UWB radar imaging system, and currently considered artifact removal algorithms have been shown not to be effective in the specific scenario of brain imaging. First, the paper proposes modifications of a known artifact removal algorithm; these modifications are shown to be effective in achieving good localization accuracy and fewer false positives. The main contribution, however, is an artifact removal algorithm based on statistical methods, which achieves even better performance at much lower computational complexity.
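
    For context, the classic baseline that such artifact removal algorithms improve upon is average subtraction: the early-time artifact (e.g. the skin/skull reflection) is nearly identical across receive channels, so subtracting the across-channel mean suppresses it. A minimal sketch of that baseline (not the statistical method the paper proposes):

```python
import numpy as np

def average_subtraction(signals):
    """Remove the component common to all channels by subtracting the
    across-channel mean waveform from each channel.
    signals: array-like of shape (n_channels, n_samples)."""
    signals = np.asarray(signals, dtype=float)
    return signals - signals.mean(axis=0, keepdims=True)
```

    The known weakness, which motivates more refined methods, is that any target response shared across channels is attenuated along with the artifact.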

  8. A bioaccumulative cyclometalated platinum(II) complex with two-photon-induced emission for live cell imaging.

    PubMed

    Koo, Chi-Kin; Wong, Ka-Leung; Man, Cornelia Wing-Yin; Lam, Yun-Wah; So, Leo King-Yan; Tam, Hoi-Lam; Tsao, Sai-Wah; Cheah, Kok-Wai; Lau, Kai-Chung; Yang, Yang-Yi; Chen, Jin-Can; Lam, Michael Hon-Wah

    2009-02-02

    The cyclometalated platinum(II) complex [Pt(L)Cl], where HL is a new cyclometalating ligand 2-phenyl-6-(1H-pyrazol-3-yl)pyridine containing C(phenyl), N(pyridyl), and N(pyrazolyl) donor moieties, was found to possess two-photon-induced luminescent properties. The two-photon-absorption cross section of the complex in N,N-dimethylformamide at room temperature was measured to be 20.8 GM. Upon two-photon excitation at 730 nm from a Ti:sapphire laser, bright-green emission was observed. Besides its two-photon-induced luminescent properties, [Pt(L)Cl] was able to be rapidly accumulated in live HeLa and NIH3T3 cells. The two-photon-induced luminescence of the complex was retained after live cell internalization and can be observed by two-photon confocal microscopy. Its bioaccumulation properties enabled time-lapse imaging of the internalization process of the dye into living cells. Cytotoxicity of [Pt(L)Cl] to both tested cell lines was low, according to MTT assays, even at loadings as high as 20 times the dose concentration for imaging for 6 h.

  9. Image Size Variation Influence on Corrupted and Non-viewable BMP Image

    NASA Astrophysics Data System (ADS)

    Azmi, Tengku Norsuhaila T.; Azma Abdullah, Nurul; Rahman, Nurul Hidayah Ab; Hamid, Isredza Rahmi A.; Chai Wen, Chuah

    2017-08-01

    Images are one of the evidence components sought in digital forensics. The Joint Photographic Experts Group (JPEG) format is the most popular on the Internet because JPEG compression is lossy and produces small files, which speeds up transmission. However, corrupted JPEG images are hard to recover due to the complexity of determining the corruption point. Nowadays Bitmap (BMP) images are preferred in image processing compared to other formats because a BMP image contains all the image information in a simple format. Therefore, in order to investigate the corruption point in a JPEG, the file is required to be converted into BMP format. Nevertheless, many factors can influence the corruption of a BMP image, such as changes of image size that make the file non-viewable. In this paper, the experiment indicates that the size of a BMP file influences changes in the image itself through three conditions: deleting, replacing and insertion. From the experiment, we learnt that by correcting the file size, a viewable file, though partial, can be produced. It can then be investigated further to identify the corruption point.
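
    The "correcting the file size" step can be illustrated on the BMP header itself: in the BMP file format, bytes 2-5 of the header hold the total file size as a little-endian 32-bit integer, so rewriting that field to match the actual byte length is a minimal repair. A sketch under that assumption (a toy repair, not the paper's full procedure):

```python
import struct

def fix_bmp_file_size(data: bytes) -> bytes:
    """Rewrite the file-size field of a BMP header (bytes 2-5, little-endian
    uint32) so it matches the actual length of the byte string."""
    if len(data) < 6 or data[:2] != b'BM':
        raise ValueError("not a BMP file")
    return data[:2] + struct.pack('<I', len(data)) + data[6:]
```

    After truncation or insertion the stored size no longer matches the real length; patching it is often enough to make a viewer render the intact portion of the image.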

  10. Colour application on mammography image segmentation

    NASA Astrophysics Data System (ADS)

    Embong, R.; Aziz, N. M. Nik Ab.; Karim, A. H. Abd; Ibrahim, M. R.

    2017-09-01

    The segmentation process is one of the most important steps in image processing and computer vision since it is vital in the initial stage of image analysis. Segmentation of medical images involves complex structures and requires precise segmentation results, which are necessary for clinical diagnosis such as the detection of tumour, oedema, and necrotic tissues. Since mammography images are grayscale, researchers are looking at the effect of colour in the segmentation process of medical images. Colour is known to play a significant role in the perception of object boundaries in non-medical colour images. Processing colour images requires handling more data, hence providing a richer description of objects in the scene. Colour images contain ten percent (10%) additional edge information compared to their grayscale counterparts. Nevertheless, edge detection in colour images is more challenging than in grayscale images, as a colour space is a vector space. In this study, we applied red, green, yellow, and blue colour maps to grayscale mammography images with the purpose of testing the effect of colours on the segmentation of abnormality regions. We applied the segmentation process using the Fuzzy C-means algorithm and evaluated the percentage of average relative error of area for each colour type. The results showed that segmentation with each colour map can be done successfully, even for blurred and noisy images. The size of the segmented abnormality region is also reduced compared to segmentation without a colour map. The green colour map produced the smallest percentage of average relative error (10.009%) while the yellow colour map gave the largest (11.367%).
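
    The segmentation step uses the Fuzzy C-means algorithm, which alternates between updating soft memberships and cluster centres until convergence. A minimal 1-D sketch of that standard algorithm (the fuzzifier m, iteration count and initialization are assumptions, not the study's exact setup):

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=50, seed=0):
    """Minimal 1-D Fuzzy C-means.
    Returns (centres, u): cluster centres and a (c, n) membership matrix
    whose columns sum to 1."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # random valid memberships
    for _ in range(iters):
        um = u ** m
        centres = um @ x / um.sum(axis=1)    # fuzzy-weighted means
        d = np.abs(x[None, :] - centres[:, None]) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))     # inverse-distance memberships
        u /= u.sum(axis=0)
    return centres, u
```

    On mammography intensities the same update runs over flattened pixel values; hard labels are obtained afterwards by taking the maximum-membership cluster per pixel.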

  11. Visual pattern image sequence coding

    NASA Technical Reports Server (NTRS)

    Silsbee, Peter; Bovik, Alan C.; Chen, Dapang

    1990-01-01

    The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compressions which (at 30-40:1) exceed all other technologies. These capabilities are associated with unprecedented coding efficiencies; coding and decoding operations are entirely linear with respect to image size and entail computation that is 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding considered here exploits all the advantages of static VPIC while additionally reducing information along the temporal dimension, to achieve unprecedented image sequence coding performance.

  12. A New Color Image Encryption Scheme Using CML and a Fractional-Order Chaotic System

    PubMed Central

    Wu, Xiangjun; Li, Yang; Kurths, Jürgen

    2015-01-01

    The chaos-based image cryptosystems have been widely investigated in recent years to provide real-time encryption and transmission. In this paper, a novel color image encryption algorithm using coupled-map lattices (CML) and a fractional-order chaotic system is proposed to enhance the security and robustness of encryption algorithms with a permutation-diffusion structure. To make the encryption procedure more confusing and complex, an image division-shuffling process is put forward, where the plain-image is first divided into four sub-images, and then the positions of the pixels in the whole image are shuffled. In order to generate the initial conditions and parameters of the two chaotic systems, a 280-bit long external secret key is employed. Key space analysis, various statistical analyses, information entropy analysis, differential analysis and key sensitivity analysis are introduced to test the security of the new image encryption algorithm. The cryptosystem speed is analyzed and tested as well. Experimental results confirm that, in comparison to other image encryption schemes, the new algorithm has higher security and is fast enough for practical image encryption. Moreover, an extensive tolerance analysis of some common image processing operations, such as noise adding, cropping, JPEG compression, rotation, brightening and darkening, has been performed on the proposed image encryption technique. The corresponding results reveal that the proposed image encryption method has good robustness against some image processing operations and geometric attacks. PMID:25826602
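
    The permutation (confusion) stage of such a scheme can be sketched as a key-driven shuffle of pixel positions. This is purely illustrative: a real cryptosystem like the one proposed derives its permutation from chaotic maps and a 280-bit key, not from a seeded general-purpose PRNG as below:

```python
import numpy as np

def shuffle_pixels(img, key):
    """Permute all pixel positions with a key-derived permutation.
    Returns the shuffled image and the permutation needed to invert it."""
    img = np.asarray(img)
    flat = img.ravel()
    perm = np.random.default_rng(key).permutation(flat.size)
    return flat[perm].reshape(img.shape), perm

def unshuffle_pixels(shuffled, perm):
    """Invert shuffle_pixels by scattering pixels back to their positions."""
    flat = np.asarray(shuffled).ravel()
    out = np.empty_like(flat)
    out[perm] = flat
    return out.reshape(np.asarray(shuffled).shape)
```

    Permutation alone only relocates pixel values, which is why such schemes pair it with a diffusion stage that also changes the values.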

  13. Impact of negative cognitions about body image on inflammatory status in relation to health.

    PubMed

    Černelič-Bizjak, Maša; Jenko-Pražnikar, Zala

    2014-01-01

    Evidence suggests that body dissatisfaction may relate to biological processes and that negative cognitions can influence physical health through the complex pathways linking psychological and biological factors. The present study investigates the relationships between body image satisfaction, inflammation (cytokine levels), aerobic fitness level and obesity in 96 middle-aged men and women (48 normal-weight and 48 overweight). All participants underwent measurements of body satisfaction and body composition, serological measurements of inflammation and assessment of aerobic capabilities. Body image dissatisfaction uniquely predicted the inflammation biomarkers C-reactive protein and tumour necrosis factor-α, even when controlled for obesity indicators. Thus, body image dissatisfaction is strongly linked to inflammation processes and may promote the increase in cytokines, representing a relative metabolic risk independent of most traditional risk factors, such as gender, body mass index and intra-abdominal (waist-to-hip ratio) adiposity. The results highlight the fact that a person's negative cognitions need to be considered in psychologically based interventions and strategies in the treatment of obesity, including strategies for health promotion. The results contribute to the knowledge base of the complex pathways in the association between psychological factors and physical illness, and some important attempts were made to explain the psychological pathways linking cognitions with inflammation.

  14. Monitoring/Imaging and Regenerative Agents for Enhancing Tissue Engineering Characterization and Therapies.

    PubMed

    Santiesteban, Daniela Y; Kubelick, Kelsey; Dhada, Kabir S; Dumani, Diego; Suggs, Laura; Emelianov, Stanislav

    2016-03-01

    The past three decades have seen numerous advances in tissue engineering and regenerative medicine (TERM) therapies. However, despite the successes there is still much to be done before TERM therapies become commonplace in the clinic. One of the main obstacles is the lack of knowledge regarding complex tissue engineering processes. Imaging strategies, in conjunction with exogenous contrast agents, can aid in this endeavor by assessing therapeutic progress in vivo. The ability to uncover real-time treatment progress will help shed light on complex tissue engineering processes and lead to the development of improved, adaptive treatments. More importantly, the utilized exogenous contrast agents can double as therapeutic agents. Proper use of these Monitoring/Imaging and Regenerative Agents (MIRAs) can help increase TERM therapy successes and allow for clinical translation. While other fields have exploited similar particles for combining diagnostics and therapy, MIRA research is still in its beginning stages, with much of the current research focused on imaging or therapeutic applications separately. Advancing MIRA research will have numerous impacts on achieving clinical translation of TERM therapies. Therefore, it is our goal to highlight current MIRA progress and suggest future research that can lead to effective TERM treatments.

  15. Stroke-model-based character extraction from gray-level document images.

    PubMed

    Ye, X; Cheriet, M; Suen, C Y

    2001-01-01

    Global gray-level thresholding techniques such as Otsu's method, and local gray-level thresholding techniques such as edge-based segmentation or the adaptive thresholding method are powerful in extracting character objects from simple or slowly varying backgrounds. However, they are found to be insufficient when the backgrounds include sharply varying contours or fonts in different sizes. A stroke-model is proposed to depict the local features of character objects as double-edges in a predefined size. This model enables us to detect thin connected components selectively, while ignoring relatively large backgrounds that appear complex. Meanwhile, since the stroke width restriction is fully factored in, the proposed technique can be used to extract characters in predefined font sizes. To process large volumes of documents efficiently, a hybrid method is proposed for character extraction from various backgrounds. Using the measurement of class separability to differentiate images with simple backgrounds from those with complex backgrounds, the hybrid method can process documents with different backgrounds by applying the appropriate methods. Experiments on extracting handwriting from a check image, as well as machine-printed characters from scene images demonstrate the effectiveness of the proposed model.
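
    For reference, the global Otsu baseline mentioned above chooses the threshold that maximizes the between-class variance of the gray-level histogram. A compact sketch of that standard method (histogram binning details are assumptions):

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's global threshold over a 256-bin gray-level histogram:
    pick t maximizing the between-class variance w0*w1*(mu0 - mu1)^2."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    cum_p = np.cumsum(p)                      # class-0 probability up to t
    cum_mean = np.cumsum(p * np.arange(256))  # cumulative first moment
    mu_total = cum_mean[-1]
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0, w1 = cum_p[t], 1.0 - cum_p[t]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t] / w0
        mu1 = (mu_total - cum_mean[t]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

    Such a global threshold works when foreground and background form two separable histogram modes, which is exactly the assumption that sharply varying backgrounds break, motivating the stroke model.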

  16. 3D image restoration for confocal microscopy: toward a wavelet deconvolution for the study of complex biological structures

    NASA Astrophysics Data System (ADS)

    Boutet de Monvel, Jacques; Le Calvez, Sophie; Ulfendahl, Mats

    2000-05-01

    Image restoration algorithms provide efficient tools for recovering part of the information lost in the imaging process of a microscope. We describe recent progress in the application of deconvolution to confocal microscopy. The point spread function of a Biorad-MRC1024 confocal microscope was measured under various imaging conditions, and used to process 3D-confocal images acquired in an intact preparation of the inner ear developed at Karolinska Institutet. Using these experiments we investigate the application of denoising methods based on wavelet analysis as a natural regularization of the deconvolution process. Within the Bayesian approach to image restoration, we compare wavelet denoising with the use of a maximum entropy constraint as another natural regularization method. Numerical experiments performed with test images show a clear advantage of the wavelet denoising approach, allowing us to `cool down' the image with respect to the signal, while suppressing much of the fine-scale artifacts appearing during deconvolution due to the presence of noise, incomplete knowledge of the point spread function, or undersampling problems. We further describe a natural development of this approach, which consists of performing the Bayesian inference directly in the wavelet domain.
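
    The wavelet-denoising regularization can be illustrated with one-level Haar soft thresholding: shrink the detail coefficients toward zero so that fine-scale noise is suppressed while coarse structure survives. A 1-D sketch of that principle (the actual study uses more elaborate wavelet analysis on 3D image stacks):

```python
import numpy as np

def haar_forward(x):
    """One-level orthonormal Haar transform: (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_inverse(a, d):
    """Invert haar_forward exactly."""
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft_threshold_denoise(x, thresh):
    """Shrink Haar detail coefficients toward zero by thresh (soft threshold),
    keeping the approximation coefficients untouched."""
    a, d = haar_forward(x)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    return haar_inverse(a, d)
```

    With a large threshold the details vanish entirely and each sample pair is replaced by its mean, the crudest form of the 'cooling' effect described above.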

  17. The whole mesh deformation model: a fast image segmentation method suitable for effective parallelization

    NASA Astrophysics Data System (ADS)

    Lenkiewicz, Przemyslaw; Pereira, Manuela; Freire, Mário M.; Fernandes, José

    2013-12-01

    In this article, we propose a novel image segmentation method called the whole mesh deformation (WMD) model, which aims at addressing the problems of modern medical imaging. Such problems have arisen from the combination of several factors: (1) significant growth of medical image volume sizes due to the increasing capabilities of medical acquisition devices; (2) the desire to increase the complexity of image processing algorithms in order to explore new functionality; (3) a change in processor development towards multiple processing units instead of growing bus speeds and the number of operations per second of a single processing unit. Our solution is based on the concept of deformable models and is characterized by a very effective and precise segmentation capability. The proposed WMD model uses a volumetric mesh instead of a contour or a surface to represent the segmented shapes of interest, which allows exploiting more information in the image and obtaining results in shorter times, independently of image contents. The model also offers a good ability for topology changes and allows effective parallelization of workflow, which makes it a very good choice for large datasets. We present a precise model description, followed by experiments on artificial images and real medical data.

  18. An Integrative Object-Based Image Analysis Workflow for UAV Images

    NASA Astrophysics Data System (ADS)

    Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong

    2016-06-01

    In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of geometric and radiometric corrections, subsequent panoramic mosaicking and hierarchical image segmentation for later Object Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm after the geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the local difference binary descriptor and locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts from an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the super-pixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of the proposed method.

  19. Automated measurement of pressure injury through image processing.

    PubMed

    Li, Dan; Mathews, Carol

    2017-11-01

    To develop an image processing algorithm to automatically measure pressure injuries using electronic pressure injury images stored in nursing documentation. Photographing pressure injuries and storing the images in the electronic health record is standard practice in many hospitals. However, the manual measurement of pressure injuries is time-consuming, challenging and subject to intra/inter-reader variability, given the complexities of the pressure injury and the clinical environment. A cross-sectional algorithm development study. A set of 32 pressure injury images was obtained from a western Pennsylvania hospital. First, we transformed the images from an RGB (i.e. red, green and blue) colour space to a YCbCr colour space to eliminate interference from varying light conditions and skin colours. Second, a probability map, generated by a skin colour Gaussian model, guided the pressure injury segmentation process using a Support Vector Machine classifier. Third, after segmentation, the reference ruler included in each of the images enabled perspective transformation and determination of pressure injury size. Finally, two nurses independently measured those 32 pressure injury images, and the intraclass correlation coefficient was calculated. An image processing algorithm was developed to automatically measure the size of pressure injuries. Both inter- and intra-rater analyses achieved a good level of reliability. Validation of the size measurement of the pressure injury (1) demonstrates that our image processing algorithm is a reliable approach to monitoring pressure injury progress through clinical pressure injury images and (2) offers new insight into pressure injury evaluation and documentation. Once our algorithm is further developed, clinicians can be provided with an objective, reliable and efficient computational tool for segmentation and measurement of pressure injuries. With this, clinicians will be able to more effectively monitor the healing process of pressure injuries. © 2017 John Wiley & Sons Ltd.
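
    The first step, moving from RGB to YCbCr so that chrominance is decoupled from illumination-sensitive luminance, can be sketched with the standard BT.601 full-range conversion (the coefficients below are the standard ones; the study's exact constants are not given in the abstract):

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion.
    rgb: array-like of shape (..., 3) with values in [0, 255].
    Returns floats: Y in [0, 255], Cb and Cr centred on 128."""
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)
```

    Brightness changes move mostly along the Y axis while the Cb/Cr chrominance of skin stays comparatively stable, which is what makes this space convenient for skin-colour modelling.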

  20. Artificial intelligence and signal processing for infrastructure assessment

    NASA Astrophysics Data System (ADS)

    Assaleh, Khaled; Shanableh, Tamer; Yehia, Sherif

    2015-04-01

    The Ground Penetrating Radar (GPR) is being recognized as an effective nondestructive evaluation technique to improve the inspection process. However, data interpretation and the complexity of the results impose some limitations on the practicality of using this technique, mainly because a trained, experienced person is needed to interpret images obtained by the GPR system. In this paper, an algorithm to classify and assess the condition of infrastructure utilizing image processing and pattern recognition techniques is discussed. Features extracted from a dataset of images of defected and healthy slabs are used to train a computer-vision-based system, while another dataset is used to evaluate the proposed algorithm. Initial results show that the proposed algorithm is able to detect the existence of defects with about a 77% success rate.

  1. Fusion of GFP and phase contrast images with complex shearlet transform and Haar wavelet-based energy rule.

    PubMed

    Qiu, Chenhui; Wang, Yuanyuan; Guo, Yanen; Xia, Shunren

    2018-03-14

    Image fusion techniques can integrate the information from different imaging modalities to get a composite image which is more suitable for human visual perception and further image processing tasks. Fusing green fluorescent protein (GFP) and phase contrast images is very important for subcellular localization, functional analysis of protein and genome expression. The fusion method of GFP and phase contrast images based on complex shearlet transform (CST) is proposed in this paper. Firstly the GFP image is converted to IHS model and its intensity component is obtained. Secondly the CST is performed on the intensity component and the phase contrast image to acquire the low-frequency subbands and the high-frequency subbands. Then the high-frequency subbands are merged by the absolute-maximum rule while the low-frequency subbands are merged by the proposed Haar wavelet-based energy (HWE) rule. Finally the fused image is obtained by performing the inverse CST on the merged subbands and conducting IHS-to-RGB conversion. The proposed fusion method is tested on a number of GFP and phase contrast images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method can provide better fusion results in terms of subjective quality and objective evaluation. © 2018 Wiley Periodicals, Inc.
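
    The two merging rules can be sketched as follows: an absolute-maximum rule for the high-frequency subbands, and, as a simplified stand-in for the proposed Haar wavelet-based energy (HWE) rule, a per-pixel choice of whichever low-frequency source has higher local energy (the window size and the plain squared-sum energy measure are assumptions):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view  # numpy >= 1.20

def fuse_high(h1, h2):
    """Absolute-maximum rule for high-frequency subbands: keep the
    coefficient with the larger magnitude at each position."""
    h1, h2 = np.asarray(h1, dtype=float), np.asarray(h2, dtype=float)
    return np.where(np.abs(h1) >= np.abs(h2), h1, h2)

def fuse_low_by_energy(l1, l2, win=3):
    """Energy-based low-frequency rule (simplified): per pixel, take the
    source whose local win x win window has the higher energy."""
    def local_energy(a):
        p = np.pad(np.asarray(a, dtype=float), win // 2, mode='edge')
        return (sliding_window_view(p, (win, win)) ** 2).sum(axis=(2, 3))
    return np.where(local_energy(l1) >= local_energy(l2), l1, l2)
```

    In the paper's pipeline these merged subbands would then be passed through the inverse CST and the IHS-to-RGB conversion to produce the fused image.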

  2. Visual Processing in Rapid-Chase Systems: Image Processing, Attention, and Awareness

    PubMed Central

    Schmidt, Thomas; Haberkamp, Anke; Veltkamp, G. Marina; Weber, Andreas; Seydell-Greenwald, Anna; Schmidt, Filipp

    2011-01-01

    Visual stimuli can be classified so rapidly that their analysis may be based on a single sweep of feedforward processing through the visuomotor system. Behavioral criteria for feedforward processing can be evaluated in response priming tasks where speeded pointing or keypress responses are performed toward target stimuli which are preceded by prime stimuli. We apply this method to several classes of complex stimuli. (1) When participants classify natural images into animals or non-animals, the time course of their pointing responses indicates that prime and target signals remain strictly sequential throughout all processing stages, meeting stringent behavioral criteria for feedforward processing (rapid-chase criteria). (2) Such priming effects are boosted by selective visual attention for positions, shapes, and colors, in a way consistent with bottom-up enhancement of visuomotor processing, even when primes cannot be consciously identified. (3) Speeded processing of phobic images is observed in participants specifically fearful of spiders or snakes, suggesting enhancement of feedforward processing by long-term perceptual learning. (4) When the perceived brightness of primes in complex displays is altered by means of illumination or transparency illusions, priming effects in speeded keypress responses can systematically contradict subjective brightness judgments, such that one prime appears brighter than the other but activates motor responses as if it were darker. We propose that response priming captures the output of the first feedforward pass of visual signals through the visuomotor system, and that this output lacks some characteristic features of more elaborate, recurrent processing. This way, visuomotor measures may become dissociated from several aspects of conscious vision. We argue that “fast” visuomotor measures predominantly driven by feedforward processing should supplement “slow” psychophysical measures predominantly based on visual awareness.
PMID:21811484

  3. Enhanced Fluorescence Imaging of Live Cells by Effective Cytosolic Delivery of Probes

    PubMed Central

    Massignani, Marzia; Canton, Irene; Sun, Tao; Hearnden, Vanessa; MacNeil, Sheila; Blanazs, Adam; Armes, Steven P.; Lewis, Andrew; Battaglia, Giuseppe

    2010-01-01

    Background Microscopic techniques enable real-space imaging of complex biological events and processes. They have become an essential tool to confirm and complement hypotheses made by biomedical scientists and also allow the re-examination of existing models, hence influencing future investigations. Imaging live cells, in particular, is crucial for an improved understanding of dynamic biological processes; however, live cell imaging has so far been limited by the necessity to introduce probes within a cell without altering its physiological and structural integrity. We demonstrate herein that this hurdle can be overcome by effective cytosolic delivery. Principal Findings We show delivery within several types of mammalian cells using nanometre-sized biomimetic polymer vesicles (a.k.a. polymersomes) that offer both highly efficient cellular uptake and endolysosomal escape capability without any effect on cellular metabolic activity. Such biocompatible polymersomes can encapsulate various types of probes, including cell membrane probes and nucleic acid probes, as well as labelled nucleic acids, antibodies and quantum dots. Significance We show the delivery of sufficient quantities of probes to the cytosol, allowing sustained functional imaging of live cells over time periods of days to weeks. Finally, the combination of such effective staining with three-dimensional imaging by confocal laser scanning microscopy allows cell imaging in complex three-dimensional environments under both mono-culture and co-culture conditions. Thus cell migration and proliferation can be studied in models that are much closer to the in vivo situation. PMID:20454666

  4. Special Software for Planetary Image Processing and Research

    NASA Astrophysics Data System (ADS)

    Zubarev, A. E.; Nadezhdina, I. E.; Kozlova, N. A.; Brusnikin, E. S.; Karachevtseva, I. P.

    2016-06-01

    Special modules for photogrammetric processing of remote sensing data were developed that provide the opportunity to effectively organize and optimize planetary studies. The commercial software package PHOTOMOD™ is used as the base application. Special modules were created to perform various types of data processing: calculation of preliminary navigation parameters, calculation of shape parameters of a celestial body, global-view image orthorectification, and estimation of Sun illumination and Earth visibility from the planetary surface. Different types of data have been used for photogrammetric processing, including images of the Moon, Mars, Mercury, Phobos, the Galilean satellites and Enceladus obtained by frame or push-broom cameras. We used modern planetary data and images acquired over many years from orbital flight paths, with various illumination and resolution, as well as images obtained by planetary rovers from the surface. Planetary image processing is a complex task that can usually take from a few months to years. We present an efficient pipeline procedure that provides the possibility of obtaining different data products and covers the long path from planetary images to celestial-body maps. The obtained data - new three-dimensional control point networks, elevation models, orthomosaics - enabled accurate map production: a new Phobos atlas (Karachevtseva et al., 2015) and various thematic maps derived from studies of the planetary surface (Karachevtseva et al., 2016a).

  5. Neural plasticity explored by correlative two-photon and electron/SPIM microscopy

    NASA Astrophysics Data System (ADS)

    Allegra Mascaro, A. L.; Silvestri, L.; Costantini, I.; Sacconi, L.; Maco, B.; Knott, G. W.; Pavone, F. S.

    2013-06-01

    Plasticity of the central nervous system is a complex process which involves the remodeling of neuronal processes and synaptic contacts. However, a single imaging technique can reveal only a small part of this complex machinery. To obtain a more complete view, complementary approaches should be combined. Two-photon fluorescence microscopy, combined with multi-photon laser nanosurgery, allows following the real-time dynamics of single neuronal processes in the cerebral cortex of living mice. The structural rearrangement elicited by this highly confined paradigm of injury can be imaged in vivo first; the same neuron can then be retrieved ex vivo, and the ultrastructural features of the damaged neuronal branch characterized by means of electron microscopy. We then describe a method to integrate data from in vivo two-photon fluorescence imaging and ex vivo light sheet microscopy, based on the use of major blood vessels as a reference chart. We show how the apical dendritic arbor of a single cortical pyramidal neuron imaged in living mice can be found in the large-scale brain reconstruction obtained with light sheet microscopy. Starting from its apical portion, the whole pyramidal neuron can then be segmented and located in the correct cortical layer. With the correlative approach presented here, researchers will be able to place in a three-dimensional anatomic context the neurons whose dynamics have been observed with high detail in vivo.

  6. CellCognition: time-resolved phenotype annotation in high-throughput live cell imaging.

    PubMed

    Held, Michael; Schmitz, Michael H A; Fischer, Bernd; Walter, Thomas; Neumann, Beate; Olma, Michael H; Peter, Matthias; Ellenberg, Jan; Gerlich, Daniel W

    2010-09-01

    Fluorescence time-lapse imaging has become a powerful tool to investigate complex dynamic processes such as cell division or intracellular trafficking. Automated microscopes generate time-resolved imaging data at high throughput, yet tools for quantification of large-scale movie data are largely missing. Here we present CellCognition, a computational framework to annotate complex cellular dynamics. We developed a machine-learning method that combines state-of-the-art classification with hidden Markov modeling for annotation of the progression through morphologically distinct biological states. Incorporation of time information into the annotation scheme was essential to suppress classification noise at state transitions and confusion between different functional states with similar morphology. We demonstrate generic applicability in different assays and perturbation conditions, including a candidate-based RNA interference screen for regulators of mitotic exit in human cells. CellCognition is published as open source software, enabling live-cell imaging-based screening with assays that directly score cellular dynamics.
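
    The combination of per-frame classification with hidden Markov modeling can be sketched as Viterbi decoding over classifier log-probabilities. The two-state setup and the sticky transition matrix below are hypothetical, chosen only to show how time information suppresses a single-frame classification glitch; CellCognition's actual states and probabilities differ.

```python
import math

def viterbi(obs_logprob, trans_logprob, init_logprob):
    """Most likely state sequence given per-frame class log-likelihoods.
    obs_logprob[t][s]: classifier log-probability of state s at frame t.
    trans_logprob[p][s]: log-probability of moving from state p to s."""
    n_states = len(init_logprob)
    score = [init_logprob[s] + obs_logprob[0][s] for s in range(n_states)]
    back = []
    for t in range(1, len(obs_logprob)):
        prev, ptr, new_score = score, [], []
        for s in range(n_states):
            best = max(range(n_states), key=lambda p: prev[p] + trans_logprob[p][s])
            new_score.append(prev[best] + trans_logprob[best][s] + obs_logprob[t][s])
            ptr.append(best)
        score = new_score
        back.append(ptr)
    state = max(range(n_states), key=lambda s: score[s])
    path = [state]
    for ptr in reversed(back):  # backtrack through stored pointers
        state = ptr[state]
        path.append(state)
    return path[::-1]

# Two hypothetical states (0 = interphase, 1 = mitosis); a sticky transition
# matrix suppresses the single-frame classifier glitch at t = 2.
L = math.log
obs = [[L(0.9), L(0.1)], [L(0.8), L(0.2)], [L(0.4), L(0.6)], [L(0.9), L(0.1)]]
trans = [[L(0.95), L(0.05)], [L(0.05), L(0.95)]]
init = [L(0.5), L(0.5)]
smoothed = viterbi(obs, trans, init)
print(smoothed)  # [0, 0, 0, 0]: the glitch at t = 2 is smoothed away
```

    Without the temporal model, frame t = 2 would be labeled state 1; the HMM trades that single-frame evidence against the improbability of two state changes.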

  7. Compression of hyper-spectral images using an accelerated nonnegative tensor decomposition

    NASA Astrophysics Data System (ADS)

    Li, Jin; Liu, Zilong

    2017-12-01

    Nonnegative tensor Tucker decomposition (NTD) in a transform domain (e.g., the 2D-DWT) has been used in the compression of hyper-spectral images because it can remove redundancies between spectral bands and also exploit the spatial correlations of each band. However, the NTD has a very high computational cost. In this paper, we propose a low-complexity NTD-based compression method for hyper-spectral images. The method is based on a pair-wise multilevel grouping approach for the NTD that overcomes its high computational cost, achieving low complexity with only a slight decrease in coding performance compared to the conventional NTD. Experiments confirm that the proposed method requires less processing time and maintains better coding performance than compression without the NTD. The proposed approach has potential application in the lossy compression of hyper-spectral or multi-spectral images.
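
    The compression principle behind a Tucker decomposition can be sketched with a plain truncated higher-order SVD on a toy hyper-spectral cube. This is only the underlying low-multilinear-rank idea, assuming NumPy; the nonnegative variant and the paper's pair-wise grouping scheme are not reproduced here.

```python
import numpy as np

def unfold(t, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def truncated_hosvd(t, ranks):
    """Truncated higher-order SVD: one factor matrix per mode, plus a small
    core tensor. (Plain HOSVD, not the nonnegative variant.)"""
    factors = []
    for mode, r in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(t, mode), full_matrices=False)
        factors.append(u[:, :r])
    core = t
    for mode, u in enumerate(factors):  # project each mode onto its factor
        core = np.moveaxis(np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def reconstruct(core, factors):
    t = core
    for mode, u in enumerate(factors):
        t = np.moveaxis(np.tensordot(u, np.moveaxis(t, mode, 0), axes=1), 0, mode)
    return t

rng = np.random.default_rng(0)
# Toy "hyper-spectral cube": 8x8 spatial, 16 bands, built from 2 spectral
# patterns so a rank-2 spectral mode captures it almost exactly.
a = rng.random((8, 8, 2))
b = rng.random((2, 16))
cube = a @ b  # shape (8, 8, 16)
core, factors = truncated_hosvd(cube, (8, 8, 2))
approx = reconstruct(core, factors)
err = np.linalg.norm(cube - approx) / np.linalg.norm(cube)
stored = core.size + sum(f.size for f in factors)
print(stored, cube.size)  # far fewer stored values than the original cube
```

    The core plus factors (288 values here) replace the 1024-value cube; real hyper-spectral data needs rank selection per mode and, for NTD, nonnegativity-constrained updates.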

  8. Adaptive segmentation of nuclei in H&E stained tendon microscopy

    NASA Astrophysics Data System (ADS)

    Chuang, Bo-I.; Wu, Po-Ting; Hsu, Jian-Han; Jou, I.-Ming; Su, Fong-Chin; Sun, Yung-Nien

    2015-12-01

    Tendinopathy has become a common clinical issue in recent years. In most cases, such as trigger finger or tennis elbow, the pathological change can be observed under H and E stained tendon microscopy. However, qualitative analysis is too subjective, so the results depend heavily on the observers. We develop an automatic segmentation procedure which segments and counts the nuclei in H and E stained tendon microscopy quickly and precisely. This procedure first determines the complexity of an image and then segments its nuclei. For complex images, the proposed method adopts sampling-based thresholding to segment the nuclei, while for simple images, Laplacian-based thresholding is employed to re-segment the nuclei more accurately. In the experiments, the proposed method is compared with results outlined by experts. The nuclei count produced by the proposed method is close to the experts' count, and its processing time is much shorter.
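
    As a generic baseline for the thresholding step (the paper's sampling-based and Laplacian-based schemes are not specified in the abstract), a global Otsu threshold separates dark nuclei from a bright background:

```python
def otsu_threshold(pixels, levels=256):
    """Global threshold maximizing between-class variance (Otsu's method)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(levels - 1):
        w0 += hist[t]
        sum0 += t * hist[t]
        if w0 == 0 or w0 == total:
            continue
        m0 = sum0 / w0                        # mean of the dark class
        m1 = (total_sum - sum0) / (total - w0)  # mean of the bright class
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal toy image: dark nuclei pixels (~30) against bright background (~200).
pixels = [28, 30, 32, 29, 31] * 20 + [198, 200, 202, 199, 201] * 80
t = otsu_threshold(pixels)
nuclei = [p for p in pixels if p <= t]
print(t, len(nuclei))  # threshold falls between the two modes
```

    An adaptive scheme like the paper's would choose between thresholding variants per image rather than applying one global rule.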

  9. A three-dimensional image processing program for accurate, rapid, and semi-automated segmentation of neuronal somata with dense neurite outgrowth

    PubMed Central

    Ross, James D.; Cullen, D. Kacy; Harris, James P.; LaPlaca, Michelle C.; DeWeerth, Stephen P.

    2015-01-01

    Three-dimensional (3-D) image analysis techniques provide a powerful means to rapidly and accurately assess complex morphological and functional interactions between neural cells. Current software-based identification methods of neural cells generally fall into two applications: (1) segmentation of cell nuclei in high-density constructs or (2) tracing of cell neurites in single cell investigations. We have developed novel methodologies to permit the systematic identification of populations of neuronal somata possessing rich morphological detail and dense neurite arborization throughout thick tissue or 3-D in vitro constructs. The image analysis incorporates several novel automated features for the discrimination of neurites and somata by initially classifying features in 2-D and merging these classifications into 3-D objects; the 3-D reconstructions automatically identify and adjust for over and under segmentation errors. Additionally, the platform provides for software-assisted error corrections to further minimize error. These features attain very accurate cell boundary identifications to handle a wide range of morphological complexities. We validated these tools using confocal z-stacks from thick 3-D neural constructs where neuronal somata had varying degrees of neurite arborization and complexity, achieving an accuracy of ≥95%. We demonstrated the robustness of these algorithms in a more complex arena through the automated segmentation of neural cells in ex vivo brain slices. These novel methods surpass previous techniques by improving the robustness and accuracy by: (1) the ability to process neurites and somata, (2) bidirectional segmentation correction, and (3) validation via software-assisted user input. This 3-D image analysis platform provides valuable tools for the unbiased analysis of neural tissue or tissue surrogates within a 3-D context, appropriate for the study of multi-dimensional cell-cell and cell-extracellular matrix interactions. 
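
    The idea of classifying in 2-D and merging the classifications into 3-D objects can be sketched by linking per-slice labels that overlap between consecutive slices; union-find then groups linked labels into 3-D objects. The data layout below is hypothetical, not the platform's actual format.

```python
def merge_2d_to_3d(slices):
    """slices[z] maps a 2-D label to its set of (x, y) pixels. Labels in
    consecutive slices that overlap in (x, y) are merged into one 3-D object.
    Returns the number of distinct 3-D objects."""
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for z, labels in enumerate(slices):
        for lbl in labels:
            parent[(z, lbl)] = (z, lbl)
    for z in range(len(slices) - 1):
        for la, pa in slices[z].items():
            for lb, pb in slices[z + 1].items():
                if pa & pb:  # any shared (x, y) pixel links the two labels
                    union((z, la), (z + 1, lb))
    return len({find(k) for k in parent})

# Two 3-D somata: one spans slices 0-1, the other appears only in slice 1.
slices = [
    {1: {(0, 0), (0, 1)}},
    {1: {(0, 1), (1, 1)}, 2: {(5, 5)}},
]
n_objects = merge_2d_to_3d(slices)
print(n_objects)  # 2
```

    The described platform additionally detects over- and under-segmentation during this merge; the sketch covers only the basic overlap linking.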
PMID:26257609

  10. Image segmentation of pyramid style identifier based on Support Vector Machine for colorectal endoscopic images.

    PubMed

    Okamoto, Takumi; Koide, Tetsushi; Sugi, Koki; Shimizu, Tatsuya; Anh-Tuan Hoang; Tamaki, Toru; Raytchev, Bisser; Kaneda, Kazufumi; Kominami, Yoko; Yoshida, Shigeto; Mieno, Hiroshi; Tanaka, Shinji

    2015-08-01

    With the increase in colorectal cancer patients in recent years, the need for quantitative evaluation of colorectal cancer has grown, and computer-aided diagnosis (CAD) systems which support the doctor's diagnosis are essential. In this paper, a hardware design of the type identification module in a CAD system for colorectal endoscopic images with narrow band imaging (NBI) magnification is proposed for real-time processing of full high definition images (1920 × 1080 pixels). A pyramid style image segmentation with SVMs for multi-size scan windows, which can be implemented on an FPGA with a small circuit area and achieve high accuracy, is proposed for actual complex colorectal endoscopic images.
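
    A multi-size scan-window sweep of a full-HD frame can be sketched as follows; the window sizes and stride ratio are hypothetical, and each yielded window would then be scored by the SVM classifier:

```python
def pyramid_windows(width, height, sizes, stride_ratio=0.5):
    """Yield (x, y, size) scan windows at several scales, coarse to fine.
    A per-window classifier (e.g., an SVM) would then score each window."""
    for size in sorted(sizes, reverse=True):
        stride = max(1, int(size * stride_ratio))
        for y in range(0, height - size + 1, stride):
            for x in range(0, width - size + 1, stride):
                yield (x, y, size)

# Full-HD frame scanned with two hypothetical window sizes.
wins = list(pyramid_windows(1920, 1080, sizes=(120, 60)))
print(len(wins), wins[0])
```

    The window count grows quickly with finer scales, which is why a streaming FPGA implementation with small per-window circuits is attractive for real-time rates.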

  11. Serial femtosecond crystallography datasets from G protein-coupled receptors

    PubMed Central

    White, Thomas A.; Barty, Anton; Liu, Wei; Ishchenko, Andrii; Zhang, Haitao; Gati, Cornelius; Zatsepin, Nadia A.; Basu, Shibom; Oberthür, Dominik; Metz, Markus; Beyerlein, Kenneth R.; Yoon, Chun Hong; Yefanov, Oleksandr M.; James, Daniel; Wang, Dingjie; Messerschmidt, Marc; Koglin, Jason E.; Boutet, Sébastien; Weierstall, Uwe; Cherezov, Vadim

    2016-01-01

    We describe the deposition of four datasets consisting of X-ray diffraction images acquired using serial femtosecond crystallography experiments on microcrystals of human G protein-coupled receptors, grown and delivered in lipidic cubic phase, at the Linac Coherent Light Source. The receptors are: the human serotonin receptor 2B in complex with an agonist ergotamine, the human δ-opioid receptor in complex with a bi-functional peptide ligand DIPP-NH2, the human smoothened receptor in complex with an antagonist cyclopamine, and finally the human angiotensin II type 1 receptor in complex with the selective antagonist ZD7155. All four datasets have been deposited, with minimal processing, in an HDF5-based file format, which can be used directly for crystallographic processing with CrystFEL or other software. We have provided processing scripts and supporting files for recent versions of CrystFEL, which can be used to validate the data. PMID:27479354

  12. Serial femtosecond crystallography datasets from G protein-coupled receptors.

    PubMed

    White, Thomas A; Barty, Anton; Liu, Wei; Ishchenko, Andrii; Zhang, Haitao; Gati, Cornelius; Zatsepin, Nadia A; Basu, Shibom; Oberthür, Dominik; Metz, Markus; Beyerlein, Kenneth R; Yoon, Chun Hong; Yefanov, Oleksandr M; James, Daniel; Wang, Dingjie; Messerschmidt, Marc; Koglin, Jason E; Boutet, Sébastien; Weierstall, Uwe; Cherezov, Vadim

    2016-08-01

    We describe the deposition of four datasets consisting of X-ray diffraction images acquired using serial femtosecond crystallography experiments on microcrystals of human G protein-coupled receptors, grown and delivered in lipidic cubic phase, at the Linac Coherent Light Source. The receptors are: the human serotonin receptor 2B in complex with an agonist ergotamine, the human δ-opioid receptor in complex with a bi-functional peptide ligand DIPP-NH2, the human smoothened receptor in complex with an antagonist cyclopamine, and finally the human angiotensin II type 1 receptor in complex with the selective antagonist ZD7155. All four datasets have been deposited, with minimal processing, in an HDF5-based file format, which can be used directly for crystallographic processing with CrystFEL or other software. We have provided processing scripts and supporting files for recent versions of CrystFEL, which can be used to validate the data.

  13. 3D correlative light and electron microscopy of cultured cells using serial blockface scanning electron microscopy

    PubMed Central

    Lerner, Thomas R.; Burden, Jemima J.; Nkwe, David O.; Pelchen-Matthews, Annegret; Domart, Marie-Charlotte; Durgan, Joanne; Weston, Anne; Jones, Martin L.; Peddie, Christopher J.; Carzaniga, Raffaella; Florey, Oliver; Marsh, Mark; Gutierrez, Maximiliano G.

    2017-01-01

    ABSTRACT The processes of life take place in multiple dimensions, but imaging these processes in even three dimensions is challenging. Here, we describe a workflow for 3D correlative light and electron microscopy (CLEM) of cell monolayers using fluorescence microscopy to identify and follow biological events, combined with serial blockface scanning electron microscopy to analyse the underlying ultrastructure. The workflow encompasses all steps from cell culture to sample processing, imaging strategy, and 3D image processing and analysis. We demonstrate successful application of the workflow to three studies, each aiming to better understand complex and dynamic biological processes, including bacterial and viral infections of cultured cells and formation of entotic cell-in-cell structures commonly observed in tumours. Our workflow revealed new insight into the replicative niche of Mycobacterium tuberculosis in primary human lymphatic endothelial cells, HIV-1 in human monocyte-derived macrophages, and the composition of the entotic vacuole. The broad application of this 3D CLEM technique will make it a useful addition to the correlative imaging toolbox for biomedical research. PMID:27445312

  14. Named Entity Recognition in a Hungarian NL Based QA System

    NASA Astrophysics Data System (ADS)

    Tikkl, Domonkos; Szidarovszky, P. Ferenc; Kardkovacs, Zsolt T.; Magyar, Gábor

    In the WoW project our purpose is to create a complex search interface with the following features: search in the deep web content of contracted partners' databases, processing Hungarian natural language (NL) questions and transforming them into SQL queries for database access, and image search supported by a visual thesaurus that describes the visual content of images in a structural form (also in Hungarian). This paper primarily focuses on a particular problem of the question processing task: entity recognition. Before going into details we give a short overview of the project's aims.

  15. Automated image processing of Landsat II digital data for watershed runoff prediction

    NASA Technical Reports Server (NTRS)

    Sasso, R. R.; Jensen, J. R.; Estes, J. E.

    1977-01-01

    Digital image processing of Landsat data from a 230 sq km area was examined as a possible means of generating soil cover information for use in the watershed runoff prediction of Kern County, California. The soil cover information included data on brush, grass, pasture lands and forests. A classification accuracy of 94% for the Landsat-based soil cover survey suggested that the technique could be applied to the watershed runoff estimate. However, problems involving the survey of complex mountainous environments may require further attention.

  16. CAD/CAM guided surgery in implant dentistry. A review of software packages and step-by-step protocols for planning surgical guides.

    PubMed

    Scherer, Michael D; Kattadiyil, Mathew T; Parciak, Ewa; Puri, Shweta

    2014-01-01

    Three-dimensional radiographic imaging for dental implant treatment planning is gaining widespread interest and popularity. However, application of the data from 3D imaging can initially be a complex and daunting process. The purpose of this article is to describe features of three software packages and the respective computerized guided surgical templates (GST) fabricated from them. A step-by-step method of interpreting and ordering a GST to simplify the process of surgical planning and implant placement is discussed.

  17. Multilevel depth and image fusion for human activity detection.

    PubMed

    Ni, Bingbing; Pei, Yong; Moulin, Pierre; Yan, Shuicheng

    2013-10-01

    Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods rely only on the visual features extracted from 2-D images, and therefore often lead to unreliable salient visual feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. At the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections. At the next level of interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also utilized to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate the information from multiple levels of video processing for activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database that contains complex interactions between human-human, human-object, and human-surroundings demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme. Higher recognition and localization accuracies are obtained relative to the previous methods.

  18. Slide Set: Reproducible image analysis and batch processing with ImageJ.

    PubMed

    Nanes, Benjamin A

    2015-11-01

    Most imaging studies in the biological sciences rely on analyses that are relatively simple. However, manual repetition of analysis tasks across multiple regions in many images can complicate even the simplest analysis, making record keeping difficult, increasing the potential for error, and limiting reproducibility. While fully automated solutions are necessary for very large data sets, they are sometimes impractical for the small- and medium-sized data sets common in biology. Here we present the Slide Set plugin for ImageJ, which provides a framework for reproducible image analysis and batch processing. Slide Set organizes data into tables, associating image files with regions of interest and other relevant information. Analysis commands are automatically repeated over each image in the data set, and multiple commands can be chained together for more complex analysis tasks. All analysis parameters are saved, ensuring transparency and reproducibility. Slide Set includes a variety of built-in analysis commands and can be easily extended to automate other ImageJ plugins, reducing the manual repetition of image analysis without the set-up effort or programming expertise required for a fully automated solution.
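
    The core batch idea, a table of rows driving a chain of analysis commands with every input and result recorded, can be sketched in a few lines (a hypothetical illustration, not the actual Slide Set API):

```python
def batch_apply(rows, commands):
    """Run each named analysis command on every row of the data table.
    Later commands can read earlier results; everything is recorded,
    which keeps the run transparent and reproducible."""
    results = []
    for row in rows:
        record = dict(row)  # keep inputs alongside outputs
        for name, command in commands:
            record[name] = command(record)
        results.append(record)
    return results

def roi_area(rec):
    x0, y0, x1, y1 = rec["roi"]
    return (x1 - x0) * (y1 - y0)

def area_fraction(rec):
    return rec["area"] / (64 * 64)  # hypothetical 64x64 image size

# Hypothetical data table: image files associated with regions of interest.
rows = [{"image": "cell_01.tif", "roi": (0, 0, 50, 50)},
        {"image": "cell_02.tif", "roi": (10, 10, 40, 40)}]
out = batch_apply(rows, [("area", roi_area), ("fraction", area_fraction)])
print(out[0]["area"], out[1]["area"])  # 2500 900
```

    In Slide Set itself, the commands would be ImageJ plugins or macros rather than Python functions, and the table would also store file paths and analysis parameters.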

  19. The Vector, Signal, and Image Processing Library (VSIPL): an Open Standard for Astronomical Data Processing

    NASA Astrophysics Data System (ADS)

    Kepner, J. V.; Janka, R. S.; Lebak, J.; Richards, M. A.

    1999-12-01

    The Vector/Signal/Image Processing Library (VSIPL) is a DARPA-initiated effort made up of industry, government and academic representatives who have defined an industry-standard API for vector, signal, and image processing primitives for real-time signal processing on high-performance systems. VSIPL supports a wide range of data types (int, float, complex, ...) and layouts (vectors, matrices and tensors) and is ideal for astronomical data processing. The VSIPL API is intended to serve as an open, vendor-neutral, industry-standard interface. The object-based VSIPL API abstracts the memory architecture of the underlying machine by using the concept of memory blocks and views. Early experiments with VSIPL code conversions have been carried out by the High Performance Computing Program team at UCSD. Commercially, several major vendors of signal processors are actively developing implementations. VSIPL has also been explicitly required as part of a recent Rome Labs teraflop procurement. This poster presents the VSIPL API, its functionality and the status of various implementations.

  20. Smart cloud system with image processing server in diagnosing brain diseases dedicated for hospitals with limited resources.

    PubMed

    Fahmi, Fahmi; Nasution, Tigor H; Anggreiny, Anggreiny

    2017-01-01

    The use of medical imaging in diagnosing brain disease is growing. The challenges relate to the large size of the data and the complexity of the image processing, which demand high-standard hardware and software that can only be provided in big hospitals. Our purpose was to provide a smart cloud system to help diagnose brain diseases in hospitals with limited infrastructure. The expertise of neurologists was first implanted in a cloud server to conduct automatic diagnosis in real time, using an image processing technique developed on the ITK library and a web service. Users upload images through a website and the result, in this case the size of the tumor, is sent back immediately. A specific image compression technique was developed for this purpose. The smart cloud system was able to measure the area and location of tumors, with an average size of 19.91 ± 2.38 cm2 and an average response time of 7.0 ± 0.3 s. The capability of the server decreased when multiple clients accessed the system simultaneously: 14 ± 0 s (5 parallel clients) and 27 ± 0.2 s (10 parallel clients). The cloud system was successfully developed to process and analyze medical images for diagnosing brain diseases, in this case tumors.

  1. Dependence of quantitative accuracy of CT perfusion imaging on system parameters

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2017-03-01

    Deconvolution is a popular method to calculate parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is squeezed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities confound the understanding of the deconvolution-based CTP imaging system and how its quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need to answer this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly during an emergent clinical situation (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology for better quantification accuracy and lower radiation dose.
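
    The deconvolution step can be sketched with a truncated-SVD regularized solve, one common approach in perfusion analysis: the arterial input function (AIF) defines a lower-triangular convolution matrix, and small singular values are discarded to control noise amplification (the regularization strength discussed above). The curves and threshold below are toy values, assuming NumPy; this is not the paper's specific algorithm.

```python
import numpy as np

def tsvd_deconvolve(aif, tissue, rel_threshold=0.1):
    """Deconvolve a tissue curve by the AIF using truncated SVD.
    Singular values below rel_threshold * s_max are discarded."""
    n = len(aif)
    # Lower-triangular discrete convolution matrix: A[i, j] = aif[i - j].
    A = np.zeros((n, n))
    for i in range(n):
        A[i, : i + 1] = aif[i::-1]
    u, s, vt = np.linalg.svd(A)
    s_inv = np.where(s > rel_threshold * s[0], 1.0 / s, 0.0)
    return vt.T @ (s_inv * (u.T @ tissue))

# Toy example: a known flow-scaled residue function convolved with a
# (simplistic, purely decaying) AIF, then recovered by deconvolution.
t = np.arange(20, dtype=float)
aif = np.exp(-t / 2.0)
residue = 0.5 * np.exp(-t / 5.0)  # flow x residue function
tissue = np.convolve(aif, residue)[:20]
recovered = tsvd_deconvolve(aif, tissue, rel_threshold=1e-6)
err = float(np.max(np.abs(recovered - residue)))
print(err)  # near zero in this noise-free toy case
```

    With noisy source images, raising the truncation threshold trades bias for noise suppression, which is exactly the accuracy dependence the cascaded systems analysis quantifies.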

  2. A Novel BA Complex Network Model on Color Template Matching

    PubMed Central

    Han, Risheng; Yue, Guangxue; Ding, Hui

    2014-01-01

    A novel BA complex network model of color space is proposed based on the two fundamental rules of the BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing the evolving process of the template's color distribution. The template's BA complex network model can then be used to select important color pixels, which have much larger effects than other color pixels in the matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD-based and SAD-based matching. Experiments show that the performance of color template matching can be improved with the proposed algorithm. To the best of our knowledge, this is the first study of how to model the color space of images using a proper complex network model and apply that model to template matching. PMID:25243235

  3. A novel BA complex network model on color template matching.

    PubMed

    Han, Risheng; Shen, Shigen; Yue, Guangxue; Ding, Hui

    2014-01-01

    A novel BA complex network model of color space is proposed based on the two fundamental rules of the BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing the evolving process of the template's color distribution. The template's BA complex network model can then be used to select important color pixels, which have much larger effects than other color pixels in the matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD-based and SAD-based matching. Experiments show that the performance of color template matching can be improved with the proposed algorithm. To the best of our knowledge, this is the first study of how to model the color space of images using a proper complex network model and apply that model to template matching.
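
    The two BA rules, growth and preferential attachment, can be sketched directly: sampling uniformly from a list that repeats each node once per incident edge is equivalent to degree-proportional selection. The network size and parameters below are arbitrary illustration values.

```python
import random

def ba_network(n_nodes, m=2, seed=42):
    """Grow a BA scale-free network: each new node attaches to m distinct
    existing nodes chosen with probability proportional to current degree."""
    random.seed(seed)
    # Start from a small fully connected seed of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # Each endpoint appears once per incident edge, so uniform sampling from
    # this list implements preferential (degree-proportional) attachment.
    endpoints = [v for e in edges for v in e]
    for new in range(m + 1, n_nodes):
        targets = set()
        while len(targets) < m:
            targets.add(random.choice(endpoints))
        for tgt in targets:
            edges.append((new, tgt))
            endpoints.extend((new, tgt))
    return edges

edges = ba_network(200, m=2)
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
# Hubs emerge: a few nodes collect far more than the minimum m links.
print(len(edges), max(degree.values()))
```

    In the paper's setting the "nodes" are color-space entries and high-degree nodes mark the important color pixels to emphasize during matching.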

  4. iSBatch: a batch-processing platform for data analysis and exploration of live-cell single-molecule microscopy images and other hierarchical datasets.

    PubMed

    Caldas, Victor E A; Punter, Christiaan M; Ghodke, Harshad; Robinson, Andrew; van Oijen, Antoine M

    2015-10-01

    Recent technical advances have made it possible to visualize single molecules inside live cells. Microscopes with single-molecule sensitivity enable the imaging of low-abundance proteins, allowing for a quantitative characterization of molecular properties. Such data sets contain information on a wide spectrum of important molecular properties, with different aspects highlighted in different imaging strategies. The time-lapsed acquisition of images provides information on protein dynamics over long time scales, giving insight into expression dynamics and localization properties. Rapid burst imaging reveals properties of individual molecules in real-time, informing on their diffusion characteristics, binding dynamics and stoichiometries within complexes. This richness of information, however, adds significant complexity to analysis protocols. In general, large datasets of images must be collected and processed in order to produce statistically robust results and identify rare events. More importantly, as live-cell single-molecule measurements remain on the cutting edge of imaging, few protocols for analysis have been established and thus analysis strategies often need to be explored for each individual scenario. Existing analysis packages are geared towards either single-cell imaging data or in vitro single-molecule data and typically operate with highly specific algorithms developed for particular situations. Our tool, iSBatch, instead allows users to exploit the inherent flexibility of the popular open-source package ImageJ, providing a hierarchical framework in which existing plugins or custom macros may be executed over entire datasets or portions thereof. This strategy affords users freedom to explore new analysis protocols within large imaging datasets, while maintaining hierarchical relationships between experiments, samples, fields of view, cells, and individual molecules.

  5. Research on the electro-optical assistant landing system based on the dual camera photogrammetry algorithm

    NASA Astrophysics Data System (ADS)

    Mi, Yuhe; Huang, Yifan; Li, Lin

    2015-08-01

Based on beacon photogrammetry location techniques, a Dual Camera Photogrammetry (DCP) algorithm was used to assist helicopters in landing on ships. In this paper, ZEMAX was used to simulate two Charge Coupled Device (CCD) cameras imaging four beacons on both sides of the helicopter, and the images were output to MATLAB. Target coordinate systems, image pixel coordinate systems, world coordinate systems and camera coordinate systems were established. According to the ideal pin-hole imaging model, the rotation matrix and translation vector between the target and camera coordinate systems could be obtained by using MATLAB to process the image information and solve the resulting linear equations. On this basis, the ambient temperature and the positions of the beacons and cameras were varied in ZEMAX to test the accuracy of the DCP algorithm in complex sea states. The numerical simulation shows that, in complex sea states, the position measurement accuracy can meet the requirements of the project.
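The pin-hole pose recovery described in this record can be sketched numerically. The following is a minimal illustration, not the paper's implementation: the beacon layout, camera intrinsics and pose are made-up values, the rotation is assumed known (identity), and only the translation is recovered from the linear constraint that each beacon's viewing ray must pass through its projected pixel.

```python
import numpy as np

# Hypothetical beacon layout, intrinsics and pose (illustrative values only).
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])          # camera intrinsic matrix
R = np.eye(3)                          # assume camera axes aligned with target
t_true = np.array([0.5, -0.2, 10.0])   # true translation (metres)

# Four beacons in the target (helicopter) coordinate system.
beacons = np.array([[-1., -1., 0.],
                    [ 1., -1., 0.],
                    [ 1.,  1., 0.],
                    [-1.,  1., 0.]])

def project(X, K, R, t):
    """Ideal pin-hole model: x ~ K (R X + t)."""
    cam = (R @ X.T).T + t
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

pixels = project(beacons, K, R, t_true)

# Each observation gives cross(K^-1 x_h, R X + t) = 0, i.e. a linear system
# [d]_x t = -[d]_x R X in the unknown translation t, solved by least squares.
Kinv = np.linalg.inv(K)
A_rows, b_rows = [], []
for X, uv in zip(beacons, pixels):
    d = Kinv @ np.array([uv[0], uv[1], 1.0])   # viewing-ray direction
    dx = np.array([[0, -d[2], d[1]],
                   [d[2], 0, -d[0]],
                   [-d[1], d[0], 0]])          # cross-product matrix [d]_x
    A_rows.append(dx)
    b_rows.append(-dx @ (R @ X))
A = np.vstack(A_rows)
b = np.hstack(b_rows)
t_est, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With noise-free synthetic pixels the recovered translation matches the true one; in the paper's setting the same linear structure would be solved from two cameras' measurements with the rotation also unknown.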

  6. THREE-DIMENSIONAL RANDOM ACCESS MULTIPHOTON MICROSCOPY FOR FAST FUNCTIONAL IMAGING OF NEURONAL ACTIVITY

    PubMed Central

    Reddy, Gaddum Duemani; Kelleher, Keith; Fink, Rudy; Saggau, Peter

    2009-01-01

    The dynamic ability of neuronal dendrites to shape and integrate synaptic responses is the hallmark of information processing in the brain. Effectively studying this phenomenon requires concurrent measurements at multiple sites on live neurons. Significant progress has been made by optical imaging systems which combine confocal and multiphoton microscopy with inertia-free laser scanning. However, all systems developed to date restrict fast imaging to two dimensions. This severely limits the extent to which neurons can be studied, since they represent complex three-dimensional (3D) structures. Here we present a novel imaging system that utilizes a unique arrangement of acousto-optic deflectors to steer a focused ultra-fast laser beam to arbitrary locations in 3D space without moving the objective lens. As we demonstrate, this highly versatile random-access multiphoton microscope supports functional imaging of complex 3D cellular structures such as neuronal dendrites or neural populations at acquisition rates on the order of tens of kilohertz. PMID:18432198

  7. Acousto-optic time- and space-integrating spotlight-mode SAR processor

    NASA Astrophysics Data System (ADS)

    Haney, Michael W.; Levy, James J.; Michael, Robert R., Jr.

    1993-09-01

The technical approach and recent experimental results for the acousto-optic time- and space-integrating real-time SAR image formation processor program are reported. The concept overcomes the size and power consumption limitations of electronic approaches by using compact, rugged, and low-power analog optical signal processing techniques for the most computationally taxing portions of the SAR imaging problem. Flexibility and performance are maintained by the use of digital electronics for the critical low-complexity filter generation and output image processing functions. The results include a demonstration of the processor's ability to perform high-resolution spotlight-mode SAR imaging by simultaneously compensating for range migration and range/azimuth coupling in the analog optical domain, thereby avoiding the highly power-consuming digital interpolation or reformatting operation usually required in all-electronic approaches.

  8. Low-complexity image processing for real-time detection of neonatal clonic seizures.

    PubMed

    Ntonfo, Guy Mathurin Kouamou; Ferrari, Gianluigi; Raheli, Riccardo; Pisani, Francesco

    2012-05-01

    In this paper, we consider a novel low-complexity real-time image-processing-based approach to the detection of neonatal clonic seizures. Our approach is based on the extraction, from a video of a newborn, of an average luminance signal representative of the body movements. Since clonic seizures are characterized by periodic movements of parts of the body (e.g., the limbs), by evaluating the periodicity of the extracted average luminance signal it is possible to detect the presence of a clonic seizure. The periodicity is investigated, through a hybrid autocorrelation-Yin estimation technique, on a per-window basis, where a time window is defined as a sequence of consecutive video frames. While processing is first carried out on a single window basis, we extend our approach to interlaced windows. The performance of the proposed detection algorithm is investigated, in terms of sensitivity and specificity, through receiver operating characteristic curves, considering video recordings of newborns affected by neonatal seizures.
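The detection principle in this record, extracting an average luminance signal from the video and scoring its periodicity per window, can be sketched with a plain autocorrelation criterion. This is a simplified stand-in for the paper's hybrid autocorrelation-Yin estimator; the function names and threshold behavior are illustrative.

```python
import numpy as np

def avg_luminance(frames):
    """Average luminance per frame; frames is a (T, H, W) grayscale video."""
    return frames.reshape(frames.shape[0], -1).mean(axis=1)

def periodicity_score(sig, min_lag=2):
    """Peak of the normalized autocorrelation at a nonzero lag: close to 1
    for strongly periodic movement, small for aperiodic activity."""
    sig = sig - sig.mean()
    denom = np.dot(sig, sig)
    if denom == 0:
        return 0.0
    ac = np.correlate(sig, sig, mode='full')[len(sig) - 1:] / denom
    return float(ac[min_lag:len(sig) // 2].max())

# Synthetic windows: periodic 'clonic' movement vs. random motion.
t = np.arange(100)
rng = np.random.default_rng(0)
periodic = np.sin(2 * np.pi * t / 10)      # limb movement with 10-frame period
random_sig = rng.standard_normal(100)      # aperiodic body movement
score_periodic = periodicity_score(periodic)
score_random = periodicity_score(random_sig)
```

Thresholding such a score per time window (and across interlaced windows) is the kind of decision the receiver operating characteristic curves in the paper would evaluate.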

  9. Real-time FPGA architectures for computer vision

    NASA Astrophysics Data System (ADS)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar

    2000-03-01

This paper presents an architecture for real-time generic convolution of a mask with an image. The architecture is intended for fast low-level image processing. The FPGA-based architecture takes advantage of the availability of registers in FPGAs to implement an efficient and compact module to process the convolutions. The architecture is designed to minimize the number of accesses to the image memory and is based on parallel modules with internal pipeline operation in order to improve performance. The architecture is prototyped on an FPGA, but it can be implemented in dedicated VLSI to reach higher clock frequencies. Complexity issues, FPGA resource utilization, FPGA limitations, and real-time performance are discussed. Some results are presented and discussed.
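The register-based scheme the abstract alludes to, reading each pixel from memory once and sliding a small register window across buffered rows, can be emulated in software. This is a conceptual sketch of the dataflow, not the paper's hardware design; it computes a windowed correlation (convolution without kernel flip, as is common in image processing) with zero padding.

```python
import numpy as np

def convolve3x3_linebuffer(img, kernel):
    """Emulate a line-buffer + shift-register 3x3 window: each image pixel
    is fetched once per row triple; one new column enters the window per
    'clock cycle' and a multiply-accumulate produces one output pixel."""
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    padded = np.pad(img.astype(float), 1)          # zero padding at borders
    for r in range(H):
        rows = padded[r:r + 3]                     # the 3-row line buffer
        window = np.zeros((3, 3))                  # the register window
        for c in range(W + 2):
            window[:, :2] = window[:, 1:]          # shift registers left
            window[:, 2] = rows[:, c]              # load one new column
            if c >= 2:                             # window is full
                out[r, c - 2] = np.sum(window * kernel)
    return out

img = np.arange(16.).reshape(4, 4)
box = np.ones((3, 3))
smoothed = convolve3x3_linebuffer(img, box)
```

In hardware the inner multiply-accumulate would be fully parallel, which is exactly where the FPGA registers pay off.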

  10. Delineation of karst terranes in complex environments: Application of modern developments in the wavelet theory and data mining

    NASA Astrophysics Data System (ADS)

    Alperovich, Leonid; Averbuch, Amir; Eppelbaum, Lev; Zheludev, Valery

    2013-04-01

Karst areas occupy about 14% of the world's land. Karst terranes of different origins create difficult conditions for construction, industrial activity and tourism, and are a source of heightened danger to the environment. Mapping of karst (sinkhole) hazards will clearly be one of the most significant problems of engineering geophysics in the twenty-first century. Taking into account the complexity of the geological media, unfavourable environments and the known ambiguity of geophysical data analysis, examination by a single geophysical method may be insufficient. Wavelet methodology as a whole has a significant impact on cardinal problems of geophysical signal processing, such as denoising, signal enhancement, distinguishing signals with closely related characteristics, and the integrated analysis of different geophysical fields (satellite, airborne, surface or underground observations). We developed a three-phase approach to the integrated geophysical localization of subsurface karsts (the same approach could be used for subsequent monitoring of karst dynamics). The first phase consists of modeling to compute the various geophysical effects characterizing karst phenomena. The second phase develops signal processing approaches for analyzing profile or areal geophysical observations. Finally, the third phase integrates these methods to create a new method for the combined interpretation of different geophysical data. Our combined geophysical analysis builds on modern developments in wavelet techniques for signal and image processing. The development of this integrated methodology of geophysical field examination will enable recognition of karst terranes even at low signal-to-noise ratios in complex geological environments. To analyze the geophysical data, we used a technique based on an algorithm that characterizes a geophysical image by a limited number of parameters. This set of parameters serves as a signature of the image and is used to discriminate images containing a karst cavity (K) from images containing no karst (N). The constructed algorithm consists of the following main phases: (a) collection of the database, (b) characterization of geophysical images, and (c) dimensionality reduction. Each image is then characterized by the histogram of its coherency directions. As a result of the previous steps, we obtain two sets, K and N, of signature vectors for images from sections containing karst cavities and non-karst subsurface, respectively.
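The signature idea described above, summarizing an image by a histogram of local directions, can be sketched with a gradient-orientation histogram. This is an illustrative simplification (the paper's coherency-direction computation is not specified here); the synthetic "layered" images and function name are made up.

```python
import numpy as np

def direction_histogram(img, nbins=8):
    """Image signature: magnitude-weighted histogram of local gradient
    orientations, folded to [0, pi) so opposite directions coincide."""
    gy, gx = np.gradient(img.astype(float))
    theta = np.arctan2(gy, gx) % np.pi
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(theta, bins=nbins, range=(0, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total else hist

# Two synthetic 'geophysical images': horizontal vs. vertical layering.
stripes = np.tile([0., 0., 1., 1.], 8)        # period-4 banding
img_h = np.outer(stripes, np.ones(32))        # horizontal layers
img_v = img_h.T                               # vertical layers
sig_h = direction_histogram(img_h)
sig_v = direction_histogram(img_v)
```

Signature vectors of this kind, one per image, are what a subsequent dimensionality-reduction and classification stage would separate into the K and N sets.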

  11. High-performance computing in image registration

    NASA Astrophysics Data System (ADS)

    Zanin, Michele; Remondino, Fabio; Dalla Mura, Mauro

    2012-10-01

Thanks to recent technological advances, a large variety of image data is at our disposal with variable geometric, radiometric and temporal resolution. In many applications the processing of such images requires high-performance computing techniques in order to deliver timely responses, e.g. for rapid decisions or real-time actions. Thus, parallel or distributed computing methods, Digital Signal Processor (DSP) architectures, Graphical Processing Unit (GPU) programming and Field-Programmable Gate Array (FPGA) devices have become essential tools for the challenging task of processing large amounts of geo-data. The article focuses on the processing and registration of large datasets of terrestrial and aerial images for 3D reconstruction, diagnostic purposes and monitoring of the environment. For the image alignment procedure, sets of corresponding feature points need to be extracted automatically in order to subsequently compute the geometric transformation that aligns the data. Feature extraction and matching are among the most computationally demanding operations in the processing chain; thus, a great degree of automation and speed is mandatory. The details of the implemented operations (named LARES), which exploit parallel architectures and the GPU, are presented. The innovative aspects of the implementation are (i) its effectiveness on a large variety of unorganized and complex datasets, (ii) its capability to work with high-resolution images and (iii) the speed of the computations. Examples and comparisons with standard CPU processing are also reported and commented on.

  12. ARPA surveillance technology for detection of targets hidden in foliage

    NASA Astrophysics Data System (ADS)

    Hoff, Lawrence E.; Stotts, Larry B.

    1994-02-01

    The processing of large quantities of synthetic aperture radar data in real time is a complex problem. Even the image formation process taxes today's most advanced computers. The use of complex algorithms with multiple channels adds another dimension to the computational problem. Advanced Research Projects Agency (ARPA) is currently planning on using the Paragon parallel processor for this task. The Paragon is small enough to allow its use in a sensor aircraft. Candidate algorithms will be implemented on the Paragon for evaluation for real time processing. In this paper ARPA technology developments for detecting targets hidden in foliage are reviewed and examples of signal processing techniques on field collected data are presented.

  13. A Youthful Crater in the Cydonia Colles Region

    NASA Image and Video Library

    2015-11-27

    The central portion of this image from NASA's Mars Reconnaissance Orbiter is dominated by a sharp-rimmed crater that is roughly 5 kilometers in diameter. On its slopes, gullies show young (i.e., geologically recent) headward erosion, which is the lengthening of the gully in the upslope direction. This crater is also remarkable for another reason. This image is part of a stereo pair, and the anaglyph of these images shows that the bottom of the crater contains a small mound. This mound hints at a possible complex crater, with the mound being a central uplift. Complex craters as small as this one are uncommon and such examples may provide clues to the lithology of the rocks underground and possibly to the impact process itself. http://photojournal.jpl.nasa.gov/catalog/PIA20158

  14. A novel encryption scheme for high-contrast image data in the Fresnelet domain

    PubMed Central

    Bibi, Nargis; Farwa, Shabieh; Jahngir, Adnan; Usman, Muhammad

    2018-01-01

In this paper, a unique and more distinctive encryption algorithm is proposed, based on the complexity of a highly nonlinear S-box in the Fresnelet domain. The nonlinear pattern is transformed further to enhance the confusion in the dummy data using the Fresnelet technique. The security level of the encrypted image is boosted using the algebra of the Galois field in the Fresnelet domain. At the first level, the Fresnelet transform is used to propagate the given information with the desired wavelength at a specified distance, decomposing the given secret data into four complex subbands. These complex subbands are separated into two components of real subband data and imaginary subband data. At the second level, the net subband data produced at the first level is diffused into a nonlinear pattern using a unique S-box defined on the Galois field F(2^8). In the diffusion process, the permuted image is substituted via dynamic algebraic S-box substitution. We prove through various analysis techniques that the proposed scheme extensively enhances the cipher security level. PMID:29608609
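An S-box over F(2^8) of the kind this record relies on can be sketched from first principles. The paper's exact field polynomial and S-box construction are not given here, so the sketch below uses the common irreducible polynomial x^8 + x^4 + x^3 + x + 1 and the multiplicative-inverse map (with 0 mapped to 0) as a representative highly nonlinear S-box; it is illustrative, not the paper's table.

```python
def gf_mul(a, b, poly=0x11B):
    """Multiply two bytes in GF(2^8) modulo an irreducible polynomial
    (here x^8 + x^4 + x^3 + x + 1; the paper's field may differ)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:       # reduce when degree reaches 8
            a ^= poly
        b >>= 1
    return r

def gf_inv(a):
    """Multiplicative inverse via a^254 = a^(-1) in GF(2^8)*."""
    r = 1
    for _ in range(254):
        r = gf_mul(r, a)
    return r

# Inverse-map S-box: a bijection on bytes with high nonlinearity.
sbox = [0] + [gf_inv(a) for a in range(1, 256)]

# Byte-wise substitution of (already permuted) data, as in the diffusion step.
ciphertext = bytes(sbox[b] for b in b"secret image data")
```

Because the S-box is a permutation of the 256 byte values, the substitution is exactly invertible with the inverse table, which is what makes decryption possible.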

  15. Noise reduction with complex bilateral filter.

    PubMed

    Matsumoto, Mitsuharu

    2017-12-01

    This study introduces a noise reduction technique that uses a complex bilateral filter. A bilateral filter is a nonlinear filter originally developed for images that can reduce noise while preserving edge information. It is an attractive filter and has been used in many applications in image processing. When it is applied to an acoustical signal, small-amplitude noise is reduced while the speech signal is preserved. However, a bilateral filter cannot handle noise with relatively large amplitudes owing to its innate characteristics. In this study, the noisy signal is transformed into the time-frequency domain and the filter is improved to handle complex spectra. The high-amplitude noise is reduced in the time-frequency domain via the proposed filter. The features and the potential of the proposed filter are also confirmed through experiments.
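The edge-preserving behavior of the bilateral filter described above is easy to demonstrate in one dimension before extending it to complex spectra. This is a generic real-valued sketch, not the paper's complex-domain filter; the parameters are illustrative.

```python
import numpy as np

def bilateral_filter_1d(sig, radius=5, sigma_s=2.0, sigma_r=0.5):
    """Bilateral filter: each output sample is a weighted mean of its
    neighbours, where the weight combines spatial closeness (sigma_s)
    and value similarity (sigma_r), so large jumps (edges) are preserved."""
    n = len(sig)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        idx = np.arange(lo, hi)
        w = np.exp(-((idx - i) ** 2) / (2 * sigma_s ** 2)
                   - ((sig[idx] - sig[i]) ** 2) / (2 * sigma_r ** 2))
        out[i] = np.sum(w * sig[idx]) / np.sum(w)
    return out

# Noisy step edge: small-amplitude noise is smoothed, the step survives.
rng = np.random.default_rng(2)
edge = np.r_[np.zeros(20), np.ones(20)] + 0.05 * rng.standard_normal(40)
smoothed = bilateral_filter_1d(edge)
```

The paper's extension applies the same weighting idea to complex time-frequency coefficients, so that higher-amplitude noise can be attacked in the spectral domain.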

  16. Robust algebraic image enhancement for intelligent control systems

    NASA Technical Reports Server (NTRS)

    Lerner, Bao-Ting; Morrelli, Michael

    1993-01-01

Robust vision capability for intelligent control systems has been an elusive goal in image processing. The computationally intensive techniques necessary for conventional image processing make real-time applications, such as object tracking and collision avoidance, difficult. In order to endow an intelligent control system with the needed vision robustness, an adequate image enhancement subsystem, capable of compensating for the wide variety of real-world degradations, must exist between the image capturing and object recognition subsystems. This enhancement stage must be adaptive and must operate consistently in the presence of both statistical and shape-based noise. To deal with this problem, we have developed an innovative algebraic approach which provides a sound mathematical framework for image representation and manipulation. Our image model provides a natural platform from which to pursue dynamic scene analysis, and its incorporation into a vision system would serve as the front end to an intelligent control system. We have developed a unique polynomial representation of gray-level imagery and applied this representation to develop polynomial operators on complex gray-level scenes. This approach is highly advantageous since polynomials can be manipulated very easily and are readily understood, thus providing a very convenient environment for image processing. Our model presents a highly structured and compact algebraic representation of gray-level images which can be viewed as fuzzy sets.

  17. Image re-sampling detection through a novel interpolation kernel.

    PubMed

    Hilal, Alaa

    2018-06-01

Image re-sampling, involved in re-size and rotation transformations, is an essential building block in typical digital image alteration. Fortunately, traces left by such processes are detectable, proving that an image has undergone a re-sampling transformation. Within this context, we present two original contributions in this paper. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the interpolation kernels most frequently used in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The process involves minimizing an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.
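The detectable traces mentioned in this record come from the periodic correlations interpolation introduces. A minimal one-dimensional illustration, not the paper's kernel model: after 2x linear-interpolation upsampling, every second sample is exactly predictable from its neighbours, so a neighbour-average residual exposes the re-sampling.

```python
import numpy as np

rng = np.random.default_rng(1)
orig = rng.standard_normal(64)          # stand-in for an image scan line

# Upsample by 2 with linear interpolation (the 'tampering' step).
up = np.empty(2 * len(orig) - 1)
up[0::2] = orig
up[1::2] = 0.5 * (orig[:-1] + orig[1:])

# Residual from the neighbour-average predictor.
res = np.abs(up[1:-1] - 0.5 * (up[:-2] + up[2:]))

# Interpolated samples are perfectly predicted (zero residual, every
# other position); original samples are not -- a periodic fingerprint.
```

Real detectors (including the one in this paper) generalize this idea to arbitrary interpolation kernels and estimate the correlation coefficients instead of assuming them.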

  18. Imaging of human differentiated 3D neural aggregates using light sheet fluorescence microscopy.

    PubMed

    Gualda, Emilio J; Simão, Daniel; Pinto, Catarina; Alves, Paula M; Brito, Catarina

    2014-01-01

The development of three-dimensional (3D) cell cultures represents a big step toward a better understanding of cell behavior and disease in a more natural-like environment, providing not only single but multiple cell-type interactions in a complex 3D matrix that highly resembles physiological conditions. Light sheet fluorescence microscopy (LSFM) is becoming an excellent tool for fast imaging of such 3D biological structures. We demonstrate the potential of this technique for the imaging of human differentiated 3D neural aggregates in fixed and live samples, namely calcium imaging and cell death processes, showing the power of this imaging modality compared with traditional microscopy. The combination of light sheet microscopy and 3D neural cultures will open the door to more challenging experiments involving large-scale drug testing, as well as a better understanding of relevant biological processes in a more realistic environment.

  19. Imaging of human differentiated 3D neural aggregates using light sheet fluorescence microscopy

    PubMed Central

    Gualda, Emilio J.; Simão, Daniel; Pinto, Catarina; Alves, Paula M.; Brito, Catarina

    2014-01-01

The development of three-dimensional (3D) cell cultures represents a big step toward a better understanding of cell behavior and disease in a more natural-like environment, providing not only single but multiple cell-type interactions in a complex 3D matrix that highly resembles physiological conditions. Light sheet fluorescence microscopy (LSFM) is becoming an excellent tool for fast imaging of such 3D biological structures. We demonstrate the potential of this technique for the imaging of human differentiated 3D neural aggregates in fixed and live samples, namely calcium imaging and cell death processes, showing the power of this imaging modality compared with traditional microscopy. The combination of light sheet microscopy and 3D neural cultures will open the door to more challenging experiments involving large-scale drug testing, as well as a better understanding of relevant biological processes in a more realistic environment. PMID:25161607

  20. Spaceborne synthetic aperture radar signal processing using FPGAs

    NASA Astrophysics Data System (ADS)

    Sugimoto, Yohei; Ozawa, Satoru; Inaba, Noriyasu

    2017-10-01

Synthetic Aperture Radar (SAR) imagery requires image reproduction through successive signal processing of the received data before images can be browsed and information extracted. The received signal data records of the ALOS-2/PALSAR-2 are stored in the onboard mission data storage and transmitted to the ground. In order to balance storage usage against the capacity of the mission data communication networks, the operation duty of the PALSAR-2 is limited; this balance relies strongly on network availability. The observation operations of present spaceborne SAR systems are rigorously planned by simulating the mission data balance, given conflicting user demands. This problem should be solved so that we do not have to compromise the operations and potential of next-generation spaceborne SAR systems. One solution is to compress the SAR data through onboard image reproduction and information extraction from the reproduced images. This is also beneficial for fast delivery of information products and for event-driven observations by constellations. The Emergence Studio (Sōhatsu kōbō in Japanese), together with the Japan Aerospace Exploration Agency, is developing evaluation models of an FPGA-based signal processing system for onboard SAR image reproduction. The model developed in 2016, the "Fast L1 Processor (FLIP)", can reproduce a 10 m resolution single-look complex image (Level 1.1) from ALOS/PALSAR raw signal data (Level 1.0). Running at 200 MHz, the FLIP is twice as fast as CPU-based computing at 3.7 GHz, and the image it processes is in no way inferior to the image processed with 32-bit computing in MATLAB.
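The first step of the image reproduction this record describes is range (pulse) compression of the raw echoes. The sketch below is a generic FFT matched-filter illustration with made-up chirp parameters and point targets; it is not the FLIP design, which performs this and the azimuth processing in FPGA logic.

```python
import numpy as np

# Made-up linear-FM pulse: 1 MHz sampling, 1 ms pulse, 200 kHz bandwidth.
fs, T, B = 1e6, 1e-3, 2e5
t = np.arange(int(fs * T)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)

# Raw echo line: two point targets at different range delays (in samples).
echo = np.zeros(4096, dtype=complex)
for delay in (500, 1500):
    echo[delay:delay + len(chirp)] += chirp

# Range compression = cross-correlation with the replica, done via FFT.
n = len(echo) + len(chirp) - 1
spectrum = np.fft.fft(echo, n) * np.conj(np.fft.fft(chirp, n))
compressed = np.abs(np.fft.ifft(spectrum))
# The long chirps collapse into narrow peaks at the target delays.
```

On the FPGA the same multiply-in-frequency-domain structure maps naturally onto pipelined FFT cores, which is where the reported speed-up over a CPU comes from.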

  1. Animal Detection in Natural Images: Effects of Color and Image Database

    PubMed Central

    Zhu, Weina; Drewes, Jan; Gegenfurtner, Karl R.

    2013-01-01

The visual system has a remarkable ability to extract categorical information from complex natural scenes. In order to elucidate the role of low-level image features for the recognition of objects in natural scenes, we recorded saccadic eye movements and event-related potentials (ERPs) in two experiments, in which human subjects had to detect animals in previously unseen natural images. We used a new natural image database (ANID) that is free of some of the potential artifacts that have plagued the widely used COREL images. Color and grayscale images picked from the ANID and COREL databases were used. In all experiments, color images induced a greater N1 EEG component at earlier time points than grayscale images. We suggest that this influence of color in animal detection may be masked by later processes when measuring reaction times. The ERP results of the go/nogo and forced choice tasks were similar to those reported earlier. The non-animal stimuli induced a larger N1 than animal stimuli in both the COREL and ANID databases. This result indicates that ultra-fast processing of animal images is possible irrespective of the particular database. With the ANID images, the difference between color and grayscale images is more pronounced than with the COREL images. The earlier use of the COREL images might have led to an underestimation of the contribution of color. Therefore, we conclude that the ANID image database is better suited for the investigation of the processing of natural scenes than other commonly used databases. PMID:24130744

  2. A novel imaging method for photonic crystal fiber fusion splicer

    NASA Astrophysics Data System (ADS)

    Bi, Weihong; Fu, Guangwei; Guo, Xuan

    2007-01-01

The structure of Photonic Crystal Fiber (PCF) is very complex, and it is very difficult for a traditional fiber fusion splicer to obtain the optical axial information of a PCF. Therefore, a brand-new optical imaging method must be sought to obtain cross-section information of the fiber. Based on the complex characteristics of PCF, a novel high-precision optical imaging system is presented in this article. The system uses a thinned electron-bombarded CCD (EBCCD) image sensor as the imaging element; the thinned EBCCD offers low-light-level performance superior to conventional image-intensifier-coupled CCD approaches, providing high contrast and high resolution in low-light-level surveillance imaging. In order to realize precise focusing of the image, an ultra-high-precision stepper motor is used to adjust the position of the imaging lens. In this way, clear cross-section information of the PCF can be obtained, which can be analyzed further using digital image processing technology. Using this cross-section information, different sorts of PCF can be distinguished and parameters such as the size of the PCF air holes and the cladding structure can be computed, providing the analysis data necessary for PCF fixation, adjustment, regulation, fusion and cutting systems.

  3. Salient contour extraction from complex natural scene in night vision image

    NASA Astrophysics Data System (ADS)

    Han, Jing; Yue, Jiang; Zhang, Yi; Bai, Lian-fa

    2014-03-01

The theory of center-surround interaction in the non-classical receptive field can be applied to night vision information processing. In this work, an optimized compound receptive field modulation method is proposed to extract salient contours from complex natural scenes in low-light-level (LLL) and infrared images. The core idea is that multi-feature analysis can recognize the inhomogeneity in modulatory coverage more accurately, and that a center and surround whose grouping structure satisfies the Gestalt rule deserve a high connection probability. Computationally, a multi-feature contrast weighted inhibition model is presented to suppress background and lower mutual inhibition among contour elements; a fuzzy connection facilitation model is proposed to achieve enhancement of the contour response, connection of discontinuous contours, and further elimination of randomly distributed noise and texture; and a multi-scale iterative attention method is designed to accomplish the dynamic modulation process and extract contours of targets of multiple sizes. This work provides a series of high-performance, biologically motivated computational visual models for contour detection from cluttered scenes in night vision images.

  4. An innovative and shared methodology for event reconstruction using images in forensic science.

    PubMed

    Milliet, Quentin; Jendly, Manon; Delémont, Olivier

    2015-09-01

    This study presents an innovative methodology for forensic science image analysis for event reconstruction. The methodology is based on experiences from real cases. It provides real added value to technical guidelines such as standard operating procedures (SOPs) and enriches the community of practices at stake in this field. This bottom-up solution outlines the many facets of analysis and the complexity of the decision-making process. Additionally, the methodology provides a backbone for articulating more detailed and technical procedures and SOPs. It emerged from a grounded theory approach; data from individual and collective interviews with eight Swiss and nine European forensic image analysis experts were collected and interpreted in a continuous, circular and reflexive manner. Throughout the process of conducting interviews and panel discussions, similarities and discrepancies were discussed in detail to provide a comprehensive picture of practices and points of view and to ultimately formalise shared know-how. Our contribution sheds light on the complexity of the choices, actions and interactions along the path of data collection and analysis, enhancing both the researchers' and participants' reflexivity. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  5. Document Image Parsing and Understanding using Neuromorphic Architecture

    DTIC Science & Technology

    2015-03-01

…developed to reduce the processing time at different layers. In the pattern matching layer, the computing power of multicore processors is explored… cortex, where the complex data is reduced to abstract representations. The abstract representation is compared to stored patterns in massively parallel…

  6. Imaging Complex Protein Metabolism in Live Organisms by Stimulated Raman Scattering Microscopy with Isotope Labeling

    PubMed Central

    2016-01-01

Protein metabolism, consisting of both synthesis and degradation, is highly complex, playing an indispensable regulatory role throughout physiological and pathological processes. Over recent decades, extensive efforts, using approaches such as autoradiography, mass spectrometry, and fluorescence microscopy, have been devoted to the study of protein metabolism. However, noninvasive and global visualization of protein metabolism has proven to be highly challenging, especially in live systems. Recently, stimulated Raman scattering (SRS) microscopy coupled with metabolic labeling of deuterated amino acids (D-AAs) was demonstrated for use in imaging newly synthesized proteins in cultured cell lines. Herein, we significantly generalize this notion to develop a comprehensive labeling and imaging platform for live visualization of complex protein metabolism, including synthesis, degradation, and pulse–chase analysis of two temporally defined populations. First, the deuterium labeling efficiency was optimized, allowing time-lapse imaging of protein synthesis dynamics within individual live cells with high spatial–temporal resolution. Second, by tracking the methyl group (CH3) distribution attributed to pre-existing proteins, this platform also enables us to map protein degradation inside live cells. Third, using two subsets of structurally and spectroscopically distinct D-AAs, we achieved two-color pulse–chase imaging, as demonstrated by observing aggregate formation of mutant huntingtin proteins. Finally, going beyond simple cell lines, we demonstrated the imaging ability of protein synthesis in brain tissues, zebrafish, and mice in vivo. Hence, the presented labeling and imaging platform would be a valuable tool to study complex protein metabolism with high sensitivity, resolution, and biocompatibility for a broad spectrum of systems ranging from cells to model animals and possibly to humans. PMID:25560305

  7. Pixel-based speckle adjustment for noise reduction in Fourier-domain OCT images

    PubMed Central

    Zhang, Anqi; Xi, Jiefeng; Sun, Jitao; Li, Xingde

    2017-01-01

Speckle resides in OCT signals and inevitably affects OCT image quality. In this work, we present a novel method for speckle noise reduction in Fourier-domain OCT images, which utilizes the phase information of complex OCT data. In this method, the speckle area is pre-delineated pixel-wise based on a phase-domain processing method and then adjusted by the results of wavelet shrinkage of the original image. A coefficient shrinkage method such as wavelet or contourlet is applied afterwards to further suppress the speckle noise. Compared with conventional methods without speckle adjustment, the proposed method demonstrates significant improvement of image quality. PMID:28663860
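The coefficient-shrinkage step this record relies on can be illustrated with the simplest wavelet, a one-level Haar transform with soft thresholding. This is a generic sketch of wavelet shrinkage, not the paper's phase-guided pipeline; the threshold value is illustrative.

```python
import numpy as np

def haar_shrink(sig, thresh):
    """One-level Haar transform, soft-threshold the detail coefficients,
    then invert -- the basic coefficient-shrinkage denoising step."""
    a = (sig[0::2] + sig[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (sig[0::2] - sig[1::2]) / np.sqrt(2)   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0)   # soft threshold
    out = np.empty_like(sig)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

# Noisy A-line stand-in: small detail coefficients are pure noise and are
# removed, so the denoised signal is closer to the underlying truth.
rng = np.random.default_rng(3)
truth = np.ones(64)
noisy = truth + 0.1 * rng.standard_normal(64)
denoised = haar_shrink(noisy, thresh=0.3)
```

Multi-level and contourlet variants differ only in the transform; the threshold-and-invert structure is the same.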

  8. Bayesian demosaicing using Gaussian scale mixture priors with local adaptivity in the dual tree complex wavelet packet transform domain

    NASA Astrophysics Data System (ADS)

    Goossens, Bart; Aelterman, Jan; Luong, Hiep; Pizurica, Aleksandra; Philips, Wilfried

    2013-02-01

In digital cameras and mobile phones, there is an ongoing trend to increase the image resolution, decrease the sensor size and use lower exposure times. Because smaller sensors inherently lead to more noise and worse spatial resolution, digital post-processing techniques are required to resolve many of the artifacts. Color filter arrays (CFAs), which use alternating patterns of color filters, are very popular for price and power consumption reasons. However, color filter arrays require the use of a post-processing technique such as demosaicing to recover full-resolution RGB images. Recently, there has been some interest in techniques that perform the demosaicing and denoising jointly. This has the advantage that the demosaicing and denoising can be performed optimally (e.g. in the MSE sense) for the considered noise model, while avoiding the artifacts introduced when demosaicing and denoising are applied sequentially. In this paper, we continue the research line of wavelet-based demosaicing techniques. These approaches are computationally simple and very suited for combination with denoising. We therefore derive Bayesian Minimum Mean Squared Error (MMSE) joint demosaicing and denoising rules in the complex wavelet packet domain, taking local adaptivity into account. As an image model, we use Gaussian Scale Mixtures, thereby taking advantage of the directionality of the complex wavelets. Our results show that this technique is well capable of reconstructing fine details in the image, while removing all of the noise, at a relatively low computational cost. In particular, the complete reconstruction (including color correction, white balancing etc.) of a 12 megapixel RAW image takes 3.5 sec on a recent mid-range GPU.

  9. Collective Autoionization in Multiply-Excited Systems: A novel ionization process observed in Helium Nanodroplets

    PubMed Central

    LaForge, A. C.; Drabbels, M.; Brauer, N. B.; Coreno, M.; Devetta, M.; Di Fraia, M.; Finetti, P.; Grazioli, C.; Katzy, R.; Lyamayev, V.; Mazza, T.; Mudrich, M.; O'Keeffe, P.; Ovcharenko, Y.; Piseri, P.; Plekan, O.; Prince, K. C.; Richter, R.; Stranges, S.; Callegari, C.; Möller, T.; Stienkemeier, F.

    2014-01-01

    Free electron lasers (FELs) offer the unprecedented capability to study reaction dynamics and image the structure of complex systems. When multiple photons are absorbed in a complex system, a plasma-like state is formed in which many atoms are ionized on a femtosecond timescale. If multiphoton absorption is resonantly enhanced, the system becomes electronically excited prior to plasma formation, with subsequent decay paths that have been scarcely investigated to date. Here, using helium nanodroplets as an example, we show that these systems can decay by a new type of process, termed collective autoionization. In addition, we show that this process is surprisingly efficient, leading to ion abundances much greater than that of direct single-photon ionization. This novel collective ionization process is expected to be important in many other complex systems, e.g. macromolecules and nanoparticles, exposed to high-intensity radiation fields. PMID:24406316

  10. IQM: An Extensible and Portable Open Source Application for Image and Signal Analysis in Java

    PubMed Central

    Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut

    2015-01-01

    Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, and some are supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending their functionality when no plugin interface or similar option is available. However, neither variant can cover all possible use cases, and custom development is sometimes unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out of the box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along with the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is supported by a Groovy script interface to the JVM. We demonstrate IQM’s image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and aims to complement rather than compete with existing open source software. Machine learning can also be integrated into more complex algorithms via the WEKA software package, enabling the development of transparent and robust methods for image and signal analysis. PMID:25612319

  11. IQM: an extensible and portable open source application for image and signal analysis in Java.

    PubMed

    Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut

    2015-01-01

    Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, and some are supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending their functionality when no plugin interface or similar option is available. However, neither variant can cover all possible use cases, and custom development is sometimes unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out of the box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along with the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is supported by a Groovy script interface to the JVM. We demonstrate IQM's image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and aims to complement rather than compete with existing open source software. Machine learning can also be integrated into more complex algorithms via the WEKA software package, enabling the development of transparent and robust methods for image and signal analysis.

  12. Image, imagination, and reality: on effectiveness of introductory work with vocalists.

    PubMed

    Gullaer, Irene; Walker, Robert; Badin, Pierre; Lamalle, Laurent

    2006-01-01

    Fifty-four sung tokens, each consisting of eight images, were generated with the help of the magnetic resonance imaging (MRI) technique to demonstrate the work of the intrapharyngeal muscles during singing and speaking, and to aid the educational process. The MRI images can be used as part of a visualization feedback method in vocal education and contribute to the creation of proper mental images. The use of visualization (pictures, drafts, graphs, spectra, MRI images, etc.), along with mental images, simplifies and accelerates the process of understanding and learning how to master the basics of vocal technique, especially in the initial period of study. It is shown that work on muscle development and the use of imagination should progress in close interaction with each other. For higher effectiveness and tangible results, the mental images used by a vocal pedagogue should correspond to the technical and emotional level of the student. Therefore, mental images have to undergo the same evolution as articulation technique: from simplified and comprehensible to complex and abstract. Our integrated approach suggests continuing the work on muscle development and use of imagination in singing classes, drawing on the experience of voice-speech teachers. Their exercises are modified using the empirical method and other techniques developed creatively by singing teachers. In this method, sensitivity to the state of the tissues becomes increasingly refined; students acquire conscious control over the muscle work and gain full awareness of both sensation and muscle activity. As a result, a complex of professional conditioned reflexes develops. A case study was conducted in New Zealand with groups of Maori and European students. Unique properties and trends in the voices of Maori people are discussed.

  13. Physics-based approach to color image enhancement in poor visibility conditions.

    PubMed

    Tan, K K; Oakley, J P

    2001-10-01

    Degradation of images by the atmosphere is a familiar problem. For example, when terrain is imaged from a forward-looking airborne camera, atmospheric degradation causes a loss of both contrast and color information. Enhancement of such images is a difficult task because of the complexity of restoring both the luminance and the chrominance while maintaining good color fidelity. One particular problem is that the level of contrast loss depends strongly on wavelength. A novel method is presented for the enhancement of color images. This method is based on the underlying physics of the degradation process, and the parameters required for enhancement are estimated from the image itself.
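    The standard physical model behind this kind of enhancement can be written channel-wise as I = J·t + A·(1 − t), where J is the scene radiance, A the airlight and t the wavelength-dependent transmission. The paper estimates these parameters from the image itself; the sketch below simply assumes they are known, to show how the model is inverted:

    ```python
    import numpy as np

    def degrade(scene, airlight, transmission):
        """Atmospheric degradation model: each color channel is attenuated
        by a (wavelength-dependent) transmission t and mixed with scattered
        airlight A, i.e. I = J*t + A*(1 - t)."""
        return scene * transmission + airlight * (1.0 - transmission)

    def enhance(image, airlight, transmission):
        """Invert the model to recover scene radiance: J = (I - A*(1 - t)) / t."""
        return (image - airlight * (1.0 - transmission)) / transmission
    ```

    Because transmission is allowed to differ per channel, the inversion restores contrast without introducing the color casts that a single gray-level stretch would.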

  14. Lithological discrimination of accretionary complex (Sivas, northern Turkey) using novel hybrid color composites and field data

    NASA Astrophysics Data System (ADS)

    Özkan, Mutlu; Çelik, Ömer Faruk; Özyavaş, Aziz

    2018-02-01

    One of the most appropriate approaches to better understand and interpret the geologic evolution of an accretionary complex is to make a detailed geologic map. Because ophiolite sequences consist of various rock types, a unique image processing method may be required to map each ophiolite body. The accretionary complex in the study area is composed mainly of ophiolitic and metamorphic rocks along with epi-ophiolitic sedimentary rocks. This paper attempts to map the Late Cretaceous accretionary complex in detail in northern Sivas (within the İzmir-Ankara-Erzincan Suture Zone in Turkey) by the analysis of all of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) bands and field study. The two new hybrid color composite images yield satisfactory results in delineating the peridotite, gabbro, basalt, and epi-ophiolitic sedimentary rocks of the accretionary complex in the study area. While the first hybrid color composite image consists of one principal component (PC) and two band ratios (PC1, 3/4, 4/6 in the RGB), the second was generated by assigning the PC5, the original ASTER band 4 and the 3/4 band ratio images to the RGB colors. In addition, the spectral indices derived from the ASTER thermal infrared (TIR) bands clearly discriminate ultramafic, siliceous, and carbonate rocks from adjacent lithologies at a regional scale. Peridotites with varying degrees of serpentinization, shown as a single color, were best identified in the spectral indices map. Furthermore, the boundaries of ophiolitic rocks mapped in the field were outlined in detail in some parts of the study area by superimposing the resultant ASTER maps on Google Earth images of finer spatial resolution. Eventually, the encouraging geologic map generated by the image analysis of ASTER data correlates strongly with the lithological boundaries from the field survey.
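    The building blocks of such hybrid composites are band ratios, principal components, and a display stretch before assignment to RGB. A minimal sketch assuming the bands are given as NumPy arrays (the specific bands and the composite assignment below are illustrative, not the paper's exact recipe):

    ```python
    import numpy as np

    def band_ratio(b_num, b_den, eps=1e-6):
        """Per-pixel band ratio, a standard lithological-mapping transform."""
        return b_num / (b_den + eps)

    def first_pc(bands):
        """First principal component of a stack of bands, computed from the
        eigenvectors of the band covariance matrix (pixels flattened)."""
        flat = np.stack([b.ravel() for b in bands])      # (n_bands, n_pixels)
        flat = flat - flat.mean(axis=1, keepdims=True)
        vals, vecs = np.linalg.eigh(np.cov(flat))
        pc1 = vecs[:, -1] @ flat                         # largest eigenvalue is last
        return pc1.reshape(bands[0].shape)

    def stretch(img):
        """Linear stretch to [0, 1] for display in an RGB composite."""
        return (img - img.min()) / (img.max() - img.min() + 1e-12)
    ```

    A hybrid composite is then assembled with something like `np.dstack([stretch(pc1), stretch(r34), stretch(r46)])`, placing each transform in one display channel.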

  15. Three-dimensional imaging of porous media using confocal laser scanning microscopy.

    PubMed

    Shah, S M; Crawshaw, J P; Boek, E S

    2017-02-01

    In the last decade, imaging techniques capable of reconstructing three-dimensional (3-D) pore-scale models have played a pivotal role in the study of fluid flow through complex porous media. In this study, we present advances in the application of confocal laser scanning microscopy (CLSM) to image, reconstruct and characterize complex porous geological materials with hydrocarbon reservoir and CO2 storage potential. CLSM has a unique capability of producing 3-D thin optical sections of a material, with a wide field of view and submicron resolution in the lateral and axial planes. However, CLSM is limited in the depth (z-dimension) that can be imaged in porous materials. In this study, we introduce a 'grind and slice' technique to overcome this limitation. We discuss the practical and technical aspects of the confocal imaging technique with application to complex rock samples including Mt. Gambier and Ketton carbonates. We then describe the complete image processing workflow, from filtering to segmenting the raw 3-D confocal volumetric data into pores and grains. Finally, we use the resulting 3-D pore-scale binarized confocal data to quantitatively determine petrophysical pore-scale properties such as total porosity, macro- and microporosity, and single-phase permeability using lattice Boltzmann (LB) simulations, validated by experiments. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
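    Once the confocal volume has been segmented, the porosity measures reduce to voxel counting. A minimal sketch assuming pore voxels are labelled 1 in the binarized volume; the threshold values in the three-phase split are purely illustrative, since real workflows pick them from the intensity histogram:

    ```python
    import numpy as np

    def porosity(binary_volume):
        """Total porosity of a segmented volume: the fraction of voxels
        labelled as pore (value 1 = pore, 0 = grain)."""
        return binary_volume.mean()

    def macro_micro_split(gray_volume, pore_thresh, micro_thresh):
        """Crude three-phase split of a grayscale confocal volume:
        voxels below pore_thresh count as macropores, voxels between the
        two thresholds as microporous, and the rest as solid grain."""
        macro = gray_volume < pore_thresh
        micro = (gray_volume >= pore_thresh) & (gray_volume < micro_thresh)
        return macro.mean(), micro.mean()
    ```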

  16. Mapping wildland fuels for fire management across multiple scales: integrating remote sensing, GIS, and biophysical modeling

    USGS Publications Warehouse

    Keane, Robert E.; Burgan, Robert E.; Van Wagtendonk, Jan W.

    2001-01-01

    Fuel maps are essential for computing spatial fire hazard and risk and simulating fire growth and intensity across a landscape. However, fuel mapping is an extremely difficult and complex process requiring expertise in remotely sensed image classification, fire behavior, fuels modeling, ecology, and geographical information systems (GIS). This paper first presents the challenges of mapping fuels: canopy concealment, fuelbed complexity, fuel type diversity, fuel variability, and fuel model generalization. Then, four approaches to mapping fuels are discussed with examples provided from the literature: (1) field reconnaissance; (2) direct mapping methods; (3) indirect mapping methods; and (4) gradient modeling. A fuel mapping method is proposed that uses current remote sensing and image processing technology. Future fuel mapping needs are also discussed, including better field data and fuel models, accurate GIS reference layers, improved satellite imagery, and comprehensive ecosystem models.

  17. Insights into the failure of the potential, neutral myocardial imaging agent TcN-NOET: physicochemical identification of by-products and degradation species.

    PubMed

    Tisato, Francesco; Bolzati, Cristina; Refosco, Fiorenzo; Porchia, Marina; Seraglia, Roberta; Carta, Davide; Pasqualini, Roberto

    2012-04-01

    The neutral complex [(99m)Tc(N)(NOEt)(2)], often referred to as TcN-NOET [NOEt=N-ethoxy,N-ethyldithiocarbamate(1-)], was proposed several years ago as a myocardial imaging agent. Despite some favorable clinical properties evidenced during phase I and phase II studies, the overall results of the European and American phase III clinical studies have been judged insufficient for a successful approval process by the regulatory agencies. Non-carrier-added and carrier-added experiments using short-lived (99m)Tc and long-lived (99g)Tc have been utilized to prepare a series of bis-substituted [Tc(N)(DTC)(2)] complexes [DTC=dithiocarbamate(1-)]. They have been purified by means of chromatographic techniques (high-performance liquid chromatography and thin-layer chromatography) and identified via double detection (UV-vis and radiometry) by comparison with authenticated samples of (99g)Tc compounds prepared by conventional coordination chemistry procedures. The molecular structure of the lipophilic, neutral complex cis-[Tc(N)(NOEt)(2)] has been assigned by comparison with similar nitrido-Tc(V) complexes already reported in the literature. Novel bis-substituted nitrido-Tc complexes containing hydrolyzed portions of coordinated NOEt, namely, N-ethyldithiocarbamate [NHEt(1-)] and N-hydroxy, N-ethyldithiocarbamate [NOHEt(1-)], have been prepared and characterized by means of multinuclear nuclear magnetic resonance spectroscopy and mass spectrometry. Despite the identification of these "hydrolyzed" species, it is still unclear whether the failure to reach the clinical goal of the perfusion tracer [(99m)Tc(N)(NOEt)(2)] is related to the degradation processes evidenced in this study or is the result of the mediocre imaging properties of the tracer. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. Vivaldi: A Domain-Specific Language for Volume Processing and Visualization on Distributed Heterogeneous Systems.

    PubMed

    Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki

    2014-12-01

    As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.

  19. Processing and analysis of cardiac optical mapping data obtained with potentiometric dyes

    PubMed Central

    Laughner, Jacob I.; Ng, Fu Siong; Sulkin, Matthew S.; Arthur, R. Martin

    2012-01-01

    Optical mapping has become an increasingly important tool to study cardiac electrophysiology in the past 20 years. Multiple methods are used to process and analyze cardiac optical mapping data, and no consensus currently exists regarding the optimum methods. The specific methods chosen to process optical mapping data are important because inappropriate data processing can affect the content of the data and thus alter the conclusions of the studies. Details of the different steps in processing optical imaging data, including image segmentation, spatial filtering, temporal filtering, and baseline drift removal, are provided in this review. We also provide descriptions of the common analyses performed on data obtained from cardiac optical imaging, including activation mapping, action potential duration mapping, repolarization mapping, conduction velocity measurements, and optical action potential upstroke analysis. Optical mapping is often used to study complex arrhythmias, and we also discuss dominant frequency analysis and phase mapping techniques used for the analysis of cardiac fibrillation. PMID:22821993
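    Two of the processing steps reviewed, temporal filtering and baseline drift removal, can be sketched as follows. The polynomial-fit approach to drift removal is one common choice among the several the review compares, not the single recommended method:

    ```python
    import numpy as np

    def remove_baseline_drift(signal, order=2):
        """Remove slow baseline drift from an optical action-potential
        trace by fitting a low-order polynomial to the whole trace and
        subtracting it."""
        t = np.arange(len(signal))
        coeffs = np.polyfit(t, signal, order)
        return signal - np.polyval(coeffs, t)

    def moving_average(signal, win=5):
        """Simple temporal filter: centered moving average."""
        kernel = np.ones(win) / win
        return np.convolve(signal, kernel, mode="same")
    ```

    As the review stresses, choices like the filter window and polynomial order change the measured action potential duration, so they must be reported alongside the results.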

  20. Validation of a rapid, semiautomatic image analysis tool for measurement of gastric accommodation and emptying by magnetic resonance imaging

    PubMed Central

    Dixit, Sudeepa; Fox, Mark; Pal, Anupam

    2014-01-01

    Magnetic resonance imaging (MRI) has advantages for the assessment of gastrointestinal structures and functions; however, processing MRI data is time consuming, and this has limited uptake to a few specialist centers. This study introduces a semiautomatic image processing system for rapid analysis of gastrointestinal MRI. For assessment of simpler regions of interest (ROI) such as the stomach, the system generates virtual images along arbitrary planes that intersect the ROI edges in the original images. This generates seed points that are joined automatically to form contours on each adjacent two-dimensional image and reconstructed in three dimensions (3D). An alternative thresholding approach is available for rapid assessment of complex structures like the small intestine. For assessment of dynamic gastrointestinal function, such as gastric accommodation and emptying, the initial 3D reconstruction is used as a reference to process adjacent image stacks automatically. This generates four-dimensional (4D) reconstructions of dynamic volume change over time. Compared with manual processing, this semiautomatic system reduced the user input required to analyze an MRI gastric emptying study (an estimated 100 vs. 10,000 mouse clicks). The analysis was also free of the variation in volume measurements seen between three human observers. In conclusion, the image processing platform processed large volumes of MRI data, such as that produced by gastric accommodation and emptying studies, with minimal user input. 3D and 4D reconstructions of the stomach and, potentially, other gastrointestinal organs are produced faster and more accurately than with manual methods. This system will facilitate the application of MRI in gastrointestinal research and clinical practice. PMID:25540229
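    The thresholding alternative for complex structures, applied frame by frame to yield a volume-versus-time curve such as a gastric emptying profile, can be sketched as follows (the threshold value and voxel size are assumptions, not values from the study):

    ```python
    import numpy as np

    def threshold_segment(image, thresh):
        """Global thresholding: voxels above `thresh` are labelled as the
        region of interest."""
        return image > thresh

    def volume_over_time(stacks, thresh, voxel_volume=1.0):
        """Apply the same segmentation to each time frame of a 4-D series
        and return the segmented volume per frame (e.g. gastric content
        volume during an emptying study)."""
        return [threshold_segment(s, thresh).sum() * voxel_volume for s in stacks]
    ```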

  1. Relationship between diffusivity of water molecules inside hydrating tablets and their drug release behavior elucidated by magnetic resonance imaging.

    PubMed

    Kikuchi, Shingo; Onuki, Yoshinori; Kuribayashi, Hideto; Takayama, Kozo

    2012-01-01

    We reported previously that sustained release matrix tablets showed zero-order drug release without being affected by pH change. To understand the drug release mechanisms more fully, we monitored the swelling and erosion of hydrating tablets using magnetic resonance imaging (MRI). Three different types of tablets, composed of polyion complex-forming materials and a hydroxypropyl methylcellulose (HPMC), were used. Proton density- and diffusion-weighted images of the hydrating tablets were acquired at intervals. Furthermore, apparent self-diffusion coefficient maps were generated from the diffusion-weighted imaging to evaluate the state of the hydrating tablets. Our findings indicated that water penetration into polyion complex tablets was faster than that into HPMC matrix tablets. In polyion complex tablets, water molecules were dispersed homogeneously and their diffusivity was relatively high, whereas in HPMC matrix tablets, water molecule movement was tightly restricted within the gel. An optimal tablet formulation determined in a previous study showed water penetration and diffusivity properties intermediate to those of the polyion complex and HPMC matrix tablets: water molecules penetrated throughout the tablet with relatively high diffusivity, similar to the polyion complex tablet, while, like the HPMC matrix tablet, it was well swollen. This study succeeded in characterizing the tablet hydration process. MRI provides profound insight into the state of water molecules in hydrating tablets; thus, it is a useful tool for understanding drug release mechanisms at the molecular level.

  2. Mapping the Complex Morphology of Cell Interactions with Nanowire Substrates Using FIB-SEM

    PubMed Central

    Jensen, Mikkel R. B.; Łopacińska, Joanna; Schmidt, Michael S.; Skolimowski, Maciej; Abeille, Fabien; Qvortrup, Klaus; Mølhave, Kristian

    2013-01-01

    Using high resolution focused ion beam scanning electron microscopy (FIB-SEM) we study the details of cell-nanostructure interactions using serial block face imaging. 3T3 fibroblast cellular monolayers are cultured on flat glass as a control surface and on two types of nanostructured scaffold substrates made from silicon black (Nanograss) with low and high nanowire density. After culturing for 72 hours, the cells were fixed, heavy metal stained, embedded in resin, and processed with FIB-SEM block face imaging without removing the substrate. The sample preparation procedure, image acquisition and image post-processing were specifically optimised for cellular monolayers cultured on nanostructured substrates. Cells display a wide range of interactions with the nanostructures depending on the surface morphology, but also vary greatly from one cell to another on the same substrate, illustrating a wide phenotypic variability. Depending on the substrate and cell, we observe that cells could, for instance, break the nanowires and engulf them, flatten the nanowires, or simply reside on top of them. Given the complexity of interactions, we have categorised our observations and created an overview map. The results demonstrate that detailed nanoscale resolution images are required to begin understanding the wide variety of individual cells’ interactions with a structured substrate. The map will provide a framework for light microscopy studies of such interactions, indicating what modes of interactions must be considered. PMID:23326412

  3. Optimization of incremental structure from motion combining a random k-d forest and pHash for unordered images in a complex scene

    NASA Astrophysics Data System (ADS)

    Zhan, Zongqian; Wang, Chendong; Wang, Xin; Liu, Yi

    2018-01-01

    On the basis of today's popular virtual reality and scientific visualization, three-dimensional (3-D) reconstruction is widely used in disaster relief, virtual shopping, reconstruction of cultural relics, etc. In the traditional incremental structure from motion (incremental SFM) method, the time cost of the matching is one of the main factors restricting the popularization of this method. To make the whole matching process more efficient, we propose a preprocessing method before the matching process: (1) we first construct a random k-d forest with the large-scale scale-invariant feature transform features in the images and combine this with the pHash method to obtain a value of relatedness, (2) we then construct a connected weighted graph based on the relatedness value, and (3) we finally obtain a planned sequence of adding images according to the principle of the minimum spanning tree. On this basis, we attempt to thin the minimum spanning tree to reduce the number of matchings and ensure that the images are well distributed. The experimental results show a great reduction in the number of matchings with enough object points, with only a small influence on the inner stability, which proves that this method can quickly and reliably improve the efficiency of the SFM method with unordered multiview images in complex scenes.
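    Step (3) above, ordering the images along a minimum spanning tree of the connected weighted graph, can be sketched with Prim's algorithm. Here the edge weight is assumed to be a matching cost such as 1 − relatedness (the relatedness value coming from the k-d forest / pHash step); the dense-matrix representation is an illustrative simplification:

    ```python
    import heapq

    def minimum_spanning_tree(n, weights):
        """Prim's algorithm on a dense graph.  `weights[i][j]` is the cost
        of matching images i and j; the MST edges give both the planned
        order for adding images and the pairs that need full feature
        matching."""
        in_tree = [False] * n
        in_tree[0] = True
        edges = []
        heap = [(weights[0][j], 0, j) for j in range(1, n)]
        heapq.heapify(heap)
        while heap and len(edges) < n - 1:
            w, i, j = heapq.heappop(heap)
            if in_tree[j]:
                continue                      # stale entry: j already added
            in_tree[j] = True
            edges.append((i, j))
            for k in range(n):
                if not in_tree[k]:
                    heapq.heappush(heap, (weights[j][k], j, k))
        return edges
    ```

    With n images, matching only the MST pairs (plus any thinning-controlled extras) replaces the O(n²) exhaustive pairwise matching that dominates incremental SFM runtime.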

  4. Three-color single molecule imaging shows WASP detachment from Arp2/3 complex triggers actin filament branch formation

    PubMed Central

    Smith, Benjamin A; Padrick, Shae B; Doolittle, Lynda K; Daugherty-Clarke, Karen; Corrêa, Ivan R; Xu, Ming-Qun; Goode, Bruce L; Rosen, Michael K; Gelles, Jeff

    2013-01-01

    During cell locomotion and endocytosis, membrane-tethered WASP proteins stimulate actin filament nucleation by the Arp2/3 complex. This process generates highly branched arrays of filaments that grow toward the membrane to which they are tethered, a conflict that seemingly would restrict filament growth. Using three-color single-molecule imaging in vitro we revealed how the dynamic associations of Arp2/3 complex with mother filament and WASP are temporally coordinated with initiation of daughter filament growth. We found that WASP proteins dissociated from filament-bound Arp2/3 complex prior to new filament growth. Further, mutations that accelerated release of WASP from filament-bound Arp2/3 complex proportionally accelerated branch formation. These data suggest that while WASP promotes formation of pre-nucleation complexes, filament growth cannot occur until it is triggered by WASP release. This provides a mechanism by which membrane-bound WASP proteins can stimulate network growth without restraining it. DOI: http://dx.doi.org/10.7554/eLife.01008.001 PMID:24015360

  5. On-line measurement of diameter of hot-rolled steel tube

    NASA Astrophysics Data System (ADS)

    Zhu, Xueliang; Zhao, Huiying; Tian, Ailing; Li, Bin

    2015-02-01

    This work designs an online diameter measurement system for a hot-rolled seamless steel tube production line. On one hand, it can stimulate domestic tube measuring techniques; on the other hand, it can give domestic hot-rolled seamless steel tube enterprises strong product competitiveness at low cost. After analyzing and comparing various detection methods and techniques, a CCD camera-based online caliper system design was chosen. The system mainly includes a hardware measurement portion and an image processing section; combining software control technology with image processing technology, it completes online measurement of hot tube diameter. Taking into account the complexity of the actual job site, a relatively simple and reasonable layout was chosen. The image processing section mainly addresses camera calibration and the application of Matlab functions, so that the diameter is computed by the algorithm and displayed directly from the image. In the last design phase, a simulation platform was built and images were successfully collected for processing, proving the feasibility and rationality of the design, with an error of less than 2%. The design successfully uses photoelectric detection technology to solve real work problems.
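    A minimal sketch of the diameter computation from one calibrated image row, assuming the hot tube appears as a bright band against a dark background; the threshold and the pixel pitch below are hypothetical values, and the calibration that yields mm-per-pixel is the part the paper solves via camera calibration:

    ```python
    import numpy as np

    def tube_diameter(row_profile, thresh, mm_per_pixel):
        """Estimate tube diameter from one image row: the tube shows up as
        a bright band, so the diameter is the distance between the first
        and last pixel above threshold, scaled by the calibrated pixel
        size."""
        idx = np.nonzero(row_profile > thresh)[0]
        if idx.size < 2:
            return 0.0                      # no tube visible in this row
        return (idx[-1] - idx[0]) * mm_per_pixel
    ```

    Averaging the estimate over many rows (and frames) is the usual way to push the measurement error down toward the reported 2% level.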

  6. Continuous monitoring of enzymatic activity within native electrophoresis gels: Application to mitochondrial oxidative phosphorylation complexes

    PubMed Central

    Covian, Raul; Chess, David; Balaban, Robert S.

    2012-01-01

    Native gel electrophoresis allows the separation of very small amounts of protein complexes while retaining aspects of their activity. In-gel enzymatic assays are usually performed by using reaction-dependent deposition of chromophores or light scattering precipitates quantified at fixed time points after gel removal and fixation, limiting the ability to analyze enzyme reaction kinetics. Herein, we describe a custom reaction chamber with reaction media recirculation and filtering and an imaging system that permits the continuous monitoring of in-gel enzymatic activity even in the presence of turbidity. Images were continuously collected using time-lapse high resolution digital imaging, and processing routines were developed to obtain kinetic traces of the in-gel activities and analyze reaction time courses. This system also permitted the evaluation of enzymatic activity topology within the protein bands of the gel. This approach was used to analyze the reaction kinetics of two mitochondrial complexes in native gels. Complex IV kinetics showed a short initial linear phase where catalytic rates could be calculated, whereas Complex V activity revealed a significant lag phase followed by two linear phases. The utility of monitoring the entire kinetic behavior of these reactions in native gels, as well as the general application of this approach, is discussed. PMID:22975200

  7. Continuous monitoring of enzymatic activity within native electrophoresis gels: application to mitochondrial oxidative phosphorylation complexes.

    PubMed

    Covian, Raul; Chess, David; Balaban, Robert S

    2012-12-01

    Native gel electrophoresis allows the separation of very small amounts of protein complexes while retaining aspects of their activity. In-gel enzymatic assays are usually performed by using reaction-dependent deposition of chromophores or light-scattering precipitates quantified at fixed time points after gel removal and fixation, limiting the ability to analyze the enzyme reaction kinetics. Herein, we describe a custom reaction chamber with reaction medium recirculation and filtering and an imaging system that permits the continuous monitoring of in-gel enzymatic activity even in the presence of turbidity. Images were continuously collected using time-lapse high-resolution digital imaging, and processing routines were developed to obtain kinetic traces of the in-gel activities and analyze reaction time courses. This system also permitted the evaluation of enzymatic activity topology within the protein bands of the gel. This approach was used to analyze the reaction kinetics of two mitochondrial complexes in native gels. Complex IV kinetics showed a short initial linear phase in which catalytic rates could be calculated, whereas Complex V activity revealed a significant lag phase followed by two linear phases. The utility of monitoring the entire kinetic behavior of these reactions in native gels, as well as the general application of this approach, is discussed. Published by Elsevier Inc.

  8. Laser-assisted nanomaterial deposition, nanomanufacturing, in situ monitoring and associated apparatus

    DOEpatents

    Mao, Samuel S; Grigoropoulos, Costas P; Hwang, David J; Minor, Andrew M

    2013-11-12

    Laser-assisted apparatus and methods for performing nanoscale material processing, including nanodeposition of materials, can be controlled very precisely to yield both simple and complex structures with sizes less than 100 nm. Optical or thermal energy in the near field of a photon (laser) pulse is used to fabricate submicron and nanometer structures on a substrate. A wide variety of laser material processing techniques can be adapted for use, including subtractive (e.g., ablation, machining or chemical etching), additive (e.g., chemical vapor deposition, selective self-assembly), and modification (e.g., phase transformation, doping) processes. Additionally, the apparatus can be integrated into imaging instruments, such as SEM and TEM, to allow real-time imaging of the material processing.

  9. Fusion of visible and near-infrared images based on luminance estimation by weighted luminance algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Zhun; Cheng, Feiyan; Shi, Junsheng; Huang, Xiaoqiao

    2018-01-01

    In a low-light scene, color images must be captured at a high-gain or long-exposure setting to avoid a visible flash. However, such settings lead to color images with serious noise or motion blur. Several methods have been proposed to improve a noisy color image using an invisible near-infrared (NIR) flash image. One such method estimates the luminance component and the chroma component of the improved color image from different image sources [1]: the luminance component is estimated mainly from the NIR image via spectral estimation, and the chroma component is estimated from the noisy color image by denoising. However, estimating the luminance component this way is challenging: it requires generating learning data pairs, its processing and algorithm are complex, and it is difficult to apply in practice. To reduce the complexity of the luminance estimation, an improved luminance estimation algorithm is presented in this paper, which weights the NIR image and the denoised color image using coefficients derived from the mean value and standard deviation of both images. Experimental results show that the proposed method achieves the same fusion effect as the earlier method in terms of color fidelity and texture quality, while the algorithm is simpler and more practical.
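The weighted luminance estimation described in the abstract can be sketched as follows. This is a minimal illustration only: the contrast-proportional weight and the mean-matching step are assumptions for the sketch, not the paper's exact coefficients, which are derived from the mean and standard deviation of both images.

```python
import numpy as np

def fuse_luminance(nir, y_denoised):
    """Illustrative weighted fusion of an NIR image and the luminance
    channel of a denoised color image. The weight here is proportional
    to each image's standard deviation (its global contrast); this is
    an assumed scheme, not the paper's exact formula."""
    nir = nir.astype(np.float64)
    y = y_denoised.astype(np.float64)
    s_nir, s_y = nir.std(), y.std()
    w = s_nir / (s_nir + s_y + 1e-12)        # contrast-based weight (assumed)
    fused = w * nir + (1.0 - w) * y
    # match the mean brightness of the visible luminance channel
    fused += y.mean() - fused.mean()
    return np.clip(fused, 0.0, 255.0)

nir = np.full((4, 4), 200.0)
nir[1:3, 1:3] = 50.0                          # textured NIR image
y = np.full((4, 4), 120.0)                    # flat, denoised luminance
out = fuse_luminance(nir, y)                  # NIR texture at visible brightness
```

The fused result keeps the NIR image's texture while inheriting the denoised color image's overall brightness.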

  10. Automatic detection of surface changes on Mars - a status report

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, Panagiotis; Muller, Jan-Peter

    2016-10-01

    Orbiter missions have acquired approximately 500,000 high-resolution visible images of the Martian surface, covering an area approximately 6 times larger than the overall area of Mars. This data abundance allows the scientific community to examine the Martian surface thoroughly and potentially make exciting new discoveries. However, the increased data volume, as well as its complexity, generates problems at the data processing stages, mainly related to a number of unresolved issues in batch-mode planetary data processing. The scientific community is currently struggling to scale the common paradigm ("one-at-a-time" processing of incoming products by expert scientists) to tackle the large volumes of input data. Moreover, expert scientists are more or less forced to use complex software to extract the input information for their research from raw data, even though they are not data scientists themselves. Our work within the STFC and EU FP7 iMars projects aims at developing automated software that will process all of the acquired data, leaving domain-expert planetary scientists free to focus on final analysis and interpretation. Having completed a fully automated pipeline for co-registering high-resolution NASA images to the ESA/DLR HRSC baseline, our main goal has shifted to the automated detection of surface changes on Mars. In particular, we are developing a pipeline that takes multi-instrument image pairs as input and processes them automatically to identify changes correlated with dynamic phenomena on the Martian surface.
    The pipeline has currently been tested in anger on 8,000 co-registered images, and by the time of DPS/EPSC we expect to have processed many tens of thousands of image pairs, producing a set of change-detection results, a subset of which will be shown in the presentation. The research leading to these results has received funding from the STFC MSSL Consolidated Grant "Planetary Surface Data Mining" ST/K000977/1 and partial support from the European Union's Seventh Framework Programme (FP7/2007-2013) under iMars grant agreement number 607379.

  11. Interferometric inverse synthetic aperture radar imaging for space targets based on wideband direct sampling using two antennas

    NASA Astrophysics Data System (ADS)

    Tian, Biao; Liu, Yang; Xu, Shiyou; Chen, Zengping

    2014-01-01

    Interferometric inverse synthetic aperture radar (InISAR) imaging provides complementary information to monostatic inverse synthetic aperture radar (ISAR) imaging. This paper proposes a new InISAR imaging system for space targets based on wideband direct sampling using two antennas. The system is easier to engineer than a three-receiver configuration, since the motion trajectory of space targets can be known in advance. In the preprocessing step, high-speed motion compensation is carried out with an adaptive matched filter whose speed parameter is obtained from narrowband measurements. Then, coherent processing and the keystone transform for ISAR imaging are applied so as to preserve the phase history at each antenna. Through appropriate collocation of the system, image registration and phase unwrapping can be avoided. For cases in which this condition is not satisfied, the influence of baseline variation is analyzed and a compensation method is adopted. The target's cross-range dimensions are then obtained by interferometric processing of the two complex ISAR images. Experimental results prove the validity of the analysis and of the three-dimensional imaging algorithm.
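The core interferometric step, extracting a cross-baseline coordinate from the phase difference of the two complex ISAR images, can be sketched as below. All numerical parameters (wavelength, baseline, reference range) and the linear phase-to-coordinate relation are illustrative assumptions; the sketch also assumes perfectly registered images and phase within (-pi, pi], consistent with the paper's claim that appropriate system collocation avoids phase unwrapping.

```python
import numpy as np

WAVELENGTH = 0.03    # radar wavelength in metres (X-band, assumed)
BASELINE = 1.0       # antenna separation in metres (assumed)
RANGE_REF = 1.0e5    # reference range to the target in metres (assumed)

def interferometric_coordinate(img_a, img_b):
    """Cross-baseline coordinate of each scatterer from the phase
    difference of two registered complex ISAR images, using a common
    small-angle linear relation (an assumption of this sketch)."""
    phase = np.angle(img_a * np.conj(img_b))   # interferometric phase, per pixel
    return WAVELENGTH * RANGE_REF * phase / (2.0 * np.pi * BASELINE)

# a single ideal scatterer whose known offset induces a phase difference
true_coord = 10.0    # metres
dphi = 2.0 * np.pi * BASELINE * true_coord / (WAVELENGTH * RANGE_REF)
a = np.array([[1.0 + 0.0j]])
b = a * np.exp(-1j * dphi)
est = interferometric_coordinate(a, b)[0, 0]   # recovers ~10.0 m
```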

  12. Automated image analysis for quantification of reactive oxygen species in plant leaves.

    PubMed

    Sekulska-Nalewajko, Joanna; Gocławski, Jarosław; Chojak-Koźniewska, Joanna; Kuźniak, Elżbieta

    2016-10-15

    The paper presents an image processing method for the quantitative assessment of ROS accumulation areas in leaves stained with DAB or NBT for H2O2 and O2− detection, respectively. Three types of images, determined by the combination of staining method and background color, are considered. The method is based on supervised machine learning, with manually labeled image patterns used for training. The method's algorithm is developed as a JavaScript macro in the public-domain Fiji (ImageJ) environment. It selects the stained regions of ROS-mediated histochemical reactions, subsequently fractionated into weak, medium and intense staining intensity and thus ROS accumulation, and also evaluates total leaf blade area. The precision of ROS accumulation area detection is validated with the Dice Similarity Coefficient against the manually labeled patterns. The proposed framework reduces computational complexity, requires less image-processing expertise than competing methods once prepared, and provides a routine quantitative imaging assay for general histochemical image classification. Copyright © 2016 Elsevier Inc. All rights reserved.
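The Dice Similarity Coefficient used above for validation has a standard definition, DSC = 2|A ∩ B| / (|A| + |B|), which can be computed directly on binary masks. The masks and the empty-mask convention below are illustrative:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|). Used, as in the paper, to compare an
    automatically detected staining area with a manual reference."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0   # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

auto = np.array([[1, 1, 0], [0, 1, 0]])    # detected staining area
manual = np.array([[1, 1, 0], [0, 0, 1]])  # manually labeled pattern
score = dice_coefficient(auto, manual)     # 2*2 / (3+3) ≈ 0.667
```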

  13. Physics and psychophysics of color reproduction

    NASA Astrophysics Data System (ADS)

    Giorgianni, Edward J.

    1991-08-01

    The successful design of a color-imaging system requires knowledge of the factors used to produce and control color. This knowledge can be derived, in part, from measurements of the physical properties of the imaging system. Color itself, however, is a perceptual response and cannot be directly measured. Though the visual process begins with physics, as radiant energy reaching the eyes, it is in the mind of the observer that the stimuli produced from this radiant energy are interpreted and organized to form meaningful perceptions, including the perception of color. A comprehensive understanding of color reproduction, therefore, requires not only a knowledge of the physical properties of color-imaging systems but also an understanding of the physics, psychophysics, and psychology of the human observer. The human visual process is quite complex; in many ways the physical properties of color-imaging systems are easier to understand.

  14. Potential of PET-MRI for imaging of non-oncologic musculoskeletal disease.

    PubMed

    Kogan, Feliks; Fan, Audrey P; Gold, Garry E

    2016-12-01

    Early detection of musculoskeletal disease leads to improved therapies and patient outcomes, and would benefit greatly from imaging at the cellular and molecular level. As it becomes clear that assessment of multiple tissues and functional processes is often necessary to study the complex pathogenesis of musculoskeletal disorders, the role of multi-modality molecular imaging becomes increasingly important. New positron emission tomography-magnetic resonance imaging (PET-MRI) systems combine high-resolution MRI with simultaneous molecular information from PET to study the multifaceted processes involved in numerous musculoskeletal disorders. In this article, we aim to outline the potential clinical utility of hybrid PET-MRI for non-oncologic musculoskeletal diseases. We summarize current applications of PET molecular imaging in osteoarthritis (OA), rheumatoid arthritis (RA), metabolic bone diseases and neuropathic peripheral pain. Advanced MRI approaches that reveal biochemical and functional information offer complementary assessment in soft tissues. Additionally, we discuss technical considerations for hybrid PET-MR imaging including MR attenuation correction, workflow, radiation dose, and quantification.

  15. The infection algorithm: an artificial epidemic approach for dense stereo correspondence.

    PubMed

    Olague, Gustavo; Fernández, Francisco; Pérez, Cynthia B; Lutton, Evelyne

    2006-01-01

    We present a new bio-inspired approach to the problem of stereo image matching, based on an artificial epidemic process that we call the infection algorithm. The problem at hand is a basic one in computer vision for 3D scene reconstruction; it has many complex aspects and is known to be extremely difficult. The aim is to match the contents of two images in order to obtain 3D information that allows the generation of simulated projections from a viewpoint different from those of the initial photographs. This process is known as view synthesis. The algorithm we propose exploits the image contents in order to produce only the necessary 3D depth information, while saving computational time. It is based on a set of distributed rules, which propagate like an artificial epidemic over the images. Experiments on a pair of real images are presented, and realistic reprojected images have been generated.

  16. Smartphones as image processing systems for prosthetic vision.

    PubMed

    Zapf, Marc P; Matteucci, Paul B; Lovell, Nigel H; Suaning, Gregg J

    2013-01-01

    The feasibility of implants for prosthetic vision has been demonstrated by research and commercial organizations. In most devices, an essential forerunner to the internal stimulation circuit is an external electronics solution for capturing, processing and relaying image information as well as extracting useful features from the scene surrounding the patient. The capabilities and multitude of image processing algorithms that can be performed by the device in real time play a major part in the final quality of the prosthetic vision. It is therefore optimal to use powerful hardware while avoiding bulky, straining solutions. Recent publications have reported portable single-board computers fast enough for computationally intensive image processing. Following the rapid evolution of commercial, ultra-portable ARM (Advanced RISC Machine) mobile devices, the authors investigated the feasibility of modern smartphones running complex face detection as external processing devices for vision implants. The role of dedicated graphics processors in speeding up computation was evaluated while performing a demanding noise-reduction algorithm (image denoising). The time required for face detection was found to decrease by 95% from 2.5-year-old to recent devices. In denoising, graphics acceleration played a major role, speeding up denoising by a factor of 18. These results demonstrate that the technology has matured sufficiently to be considered a valid external electronics platform for visual prosthetic research.

  17. Deformation-induced speckle-pattern evolution and feasibility of correlational speckle tracking in optical coherence elastography.

    PubMed

    Zaitsev, Vladimir Y; Matveyev, Alexandr L; Matveev, Lev A; Gelikonov, Grigory V; Gelikonov, Valentin M; Vitkin, Alex

    2015-07-01

    Feasibility of speckle tracking in optical coherence tomography (OCT) based on digital image correlation (DIC) is discussed in the context of elastography problems. Specifics of applying DIC methods to OCT, compared to processing of photographic images in mechanical engineering applications, are emphasized and main complications are pointed out. Analytical arguments are augmented by accurate numerical simulations of OCT speckle patterns. In contrast to DIC processing for displacement and strain estimation in photographic images, the accuracy of correlational speckle tracking in deformed OCT images is strongly affected by the coherent nature of speckles, for which strain-induced complications of speckle “blinking” and “boiling” are typical. The tracking accuracy is further compromised by the usually more pronounced pixelated structure of OCT scans compared with digital photographic images in classical DIC applications. Processing of complex-valued OCT data (comprising both amplitude and phase) compared to intensity-only scans mitigates these deleterious effects to some degree. Criteria of the attainable speckle tracking accuracy and its dependence on the key OCT system parameters are established.

  18. An embedded processor for real-time atmospheric compensation

    NASA Astrophysics Data System (ADS)

    Bodnar, Michael R.; Curt, Petersen F.; Ortiz, Fernando E.; Carrano, Carmen J.; Kelmelis, Eric J.

    2009-05-01

    Imaging over long distances is crucial to a number of defense and security applications, such as homeland security and launch tracking. However, the image quality obtained from current long-range optical systems can be severely degraded by the turbulent atmosphere in the path between the region under observation and the imager. While this obscured image information can be recovered using post-processing techniques, the computational complexity of such approaches has prohibited deployment in real-time scenarios. To overcome this limitation, we have coupled a state-of-the-art atmospheric compensation algorithm, the average-bispectrum speckle method, with a powerful FPGA-based embedded processing board. The end result is a lightweight, low-power image processing system that improves the quality of long-range imagery in real time, and uses modular video I/O to provide a flexible interface to most common digital and analog video transport methods. By leveraging the custom, reconfigurable nature of the FPGA, a 20x speed increase over a modern desktop PC was achieved in a form factor that is compact, low-power, and field-deployable.

  19. Information theoretic methods for image processing algorithm optimization

    NASA Astrophysics Data System (ADS)

    Prokushkin, Sergey F.; Galil, Erez

    2015-01-01

    Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable by manual calibration; thus an automated approach is a must. We discuss an information theory based metric for evaluating algorithm adaptive characteristics (an "adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into the "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).

  20. A De-Identification Pipeline for Ultrasound Medical Images in DICOM Format.

    PubMed

    Monteiro, Eriksson; Costa, Carlos; Oliveira, José Luís

    2017-05-01

    Clinical data sharing between healthcare institutions, and between practitioners, is often hindered by privacy protection requirements. This problem is critical in collaborative scenarios where data sharing is fundamental for establishing a workflow among parties. The anonymization of patient information burned into DICOM images requires elaborate processes, somewhat more complex than simple de-identification of textual information. Usually, before sharing, specific areas of the images containing sensitive information must be removed manually. In this paper, we present a pipeline for ultrasound medical image de-identification, provided as a free anonymization REST service for medical image applications, and a Software-as-a-Service to streamline automatic de-identification of medical images, which is freely available for end-users. The proposed approach applies image processing functions and machine-learning models to build an automatic system for anonymizing medical images. To perform character recognition, we evaluated several machine-learning models, with Convolutional Neural Networks (CNNs) selected as the best approach. For assessing system quality, 500 processed images were manually inspected, showing an anonymization rate of 89.2%. The tool can be accessed at https://bioinformatics.ua.pt/dicom/anonymizer and is available for the most recent versions of Google Chrome, Mozilla Firefox and Safari. A Docker image containing the proposed service is also publicly available for the community.

  1. Automated processing for proton spectroscopic imaging using water reference deconvolution.

    PubMed

    Maudsley, A A; Wu, Z; Meyerhoff, D J; Weiner, M W

    1994-06-01

    Automated formation of MR spectroscopic images (MRSI) is necessary before routine application of these methods is possible for in vivo studies; however, this task is complicated by the presence of spatially dependent instrumental distortions and the complex nature of the MR spectrum. A data processing method is presented for completely automated formation of in vivo proton spectroscopic images, and applied to the analysis of human brain metabolites. This procedure uses the water reference deconvolution method (G. A. Morris, J. Magn. Reson. 80, 547 (1988)) to correct for line shape distortions caused by instrumental and sample characteristics, followed by parametric spectral analysis. Results for automated image formation were found to compare favorably with operator-dependent spectral integration methods. While the water reference deconvolution processing was found to provide good correction of spatially dependent resonance frequency shifts, it was found to be susceptible to errors in the correction of line shape distortions, which occur due to differences between the water reference and the metabolite distributions.
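The water reference deconvolution idea, dividing the metabolite time-domain signal by the water-reference signal so that distortions common to both cancel, then multiplying by an ideal target lineshape, can be sketched as follows. The Lorentzian target, the 2 Hz linewidth, the 2 kHz sampling rate, and the synthetic distortion are all illustrative assumptions, not values from the paper.

```python
import numpy as np

def water_reference_deconvolution(metab_fid, water_fid, t, lw_hz=2.0):
    """Minimal sketch of water-reference deconvolution (Morris, 1988).
    Dividing the metabolite FID by the water-reference FID removes
    lineshape distortions common to both; multiplying by an ideal
    decaying exponential restores a clean Lorentzian lineshape.
    The target lineshape and linewidth are assumptions of this sketch."""
    ideal = np.exp(-np.pi * lw_hz * t)                 # ideal Lorentzian decay
    eps = 1e-12 * np.max(np.abs(water_fid))            # guard against division by ~0
    corrected_fid = metab_fid / (water_fid + eps) * ideal
    return np.fft.fftshift(np.fft.fft(corrected_fid))  # corrected spectrum

# both signals share the same (unknown) instrumental distortion d(t)
t = np.arange(1024) / 2000.0                   # 2 kHz sampling, assumed
d = np.exp(-30.0 * t**2)                       # arbitrary Gaussian distortion
water = d * np.exp(-np.pi * 5.0 * t)           # water reference FID
metab = d * np.exp(2j * np.pi * 300.0 * t) * np.exp(-np.pi * 5.0 * t)
spec = water_reference_deconvolution(metab, water, t)
# the corrected spectrum shows a clean peak at the metabolite's 300 Hz offset
```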

  2. The general mitochondrial processing peptidase from potato is an integral part of cytochrome c reductase of the respiratory chain.

    PubMed Central

    Braun, H P; Emmermann, M; Kruft, V; Schmitz, U K

    1992-01-01

    The major mitochondrial processing activity removing presequences from nuclear encoded precursor proteins is present in the soluble fraction of fungal and mammalian mitochondria. We found that in potato, this activity resides in the inner mitochondrial membrane. Surprisingly, the proteolytic activity co-purifies with cytochrome c reductase, a protein complex of the respiratory chain. The purified complex is bifunctional, as it has the ability to transfer electrons from ubiquinol to cytochrome c and to cleave off the presequences of mitochondrial precursor proteins. In contrast to the nine subunit fungal complex, cytochrome c reductase from potato comprises 10 polypeptides. Protein sequencing of peptides from individual subunits and analysis of corresponding cDNA clones reveals that subunit III of cytochrome c reductase (51 kDa) represents the general mitochondrial processing peptidase. PMID:1324169

  3. Compact time- and space-integrating SAR processor: design and development status

    NASA Astrophysics Data System (ADS)

    Haney, Michael W.; Levy, James J.; Christensen, Marc P.; Michael, Robert R., Jr.; Mock, Michael M.

    1994-06-01

    Progress toward a flight demonstration of the acousto-optic time- and space-integrating real-time SAR image formation processor program is reported. The concept overcomes the size and power consumption limitations of electronic approaches by using compact, rugged, and low-power analog optical signal processing techniques for the most computationally taxing portions of the SAR imaging problem. Flexibility and performance are maintained by the use of digital electronics for the critical low-complexity filter generation and output image processing functions. The results reported include tests of a laboratory version of the concept, a description of the compact optical design that will be implemented, and an overview of the electronic interface and controller modules of the flight-test system.

  4. Comments on `Area and power efficient DCT architecture for image compression' by Dhandapani and Ramachandran

    NASA Astrophysics Data System (ADS)

    Cintra, Renato J.; Bayer, Fábio M.

    2017-12-01

    In [Dhandapani and Ramachandran, "Area and power efficient DCT architecture for image compression", EURASIP Journal on Advances in Signal Processing 2014, 2014:180] the authors claim to have introduced an approximation for the discrete cosine transform capable of outperforming several well-known approximations in literature in terms of additive complexity. We could not verify the above results and we offer corrections for their work.

  5. Automated campaign system

    NASA Astrophysics Data System (ADS)

    Vondran, Gary; Chao, Hui; Lin, Xiaofan; Beyer, Dirk; Joshi, Parag; Atkins, Brian; Obrador, Pere

    2006-02-01

    Running a targeted campaign involves coordination and management across numerous organizations and complex process flows. Everything from market analytics on customer databases, to acquiring content and images, composing the materials, meeting the sponsoring enterprise's brand standards, driving through production and fulfillment, and evaluating results is currently performed by experienced, highly trained staff. Presented is a developed solution that not only brings together technologies that automate each process, but also automates the entire flow so that a novice user could easily run a successful campaign from their desktop. This paper presents the technologies, structure, and process flows used to bring this system together. Highlighted is how the complexity of running a targeted campaign is hidden from the user through technology, all while providing the benefits of a professionally managed campaign.

  6. Low-complex energy-aware image communication in visual sensor networks

    NASA Astrophysics Data System (ADS)

    Phamila, Yesudhas Asnath Victy; Amutha, Ramachandran

    2013-10-01

    A low-complexity, low-bit-rate, energy-efficient image compression algorithm explicitly designed for resource-constrained visual sensor networks (applied to surveillance, battlefield and habitat monitoring, etc.) is presented, where voluminous image data must be communicated over a bandwidth-limited wireless medium. The proposed method overcomes the energy limitation of individual nodes and is evaluated in terms of image quality, entropy, processing time, overall energy consumption, and system lifetime. The algorithm is highly energy-efficient and extremely fast, since it applies an energy-aware zonal binary discrete cosine transform (DCT) that computes only the few required significant coefficients and codes them using an enhanced complementary Golomb-Rice code without any floating-point operations. Experiments are performed using the Atmel ATmega128 and MSP430 processors to measure the resultant energy savings. Simulation results show that the proposed energy-aware fast zonal transform consumes only 0.3% of the energy needed by the conventional DCT, and only 6% of the energy needed by the Independent JPEG Group (fast) version, which suits embedded systems requiring low power consumption. The proposed scheme is unique in that it significantly enhances the lifetime of the camera sensor node and the network without the distributed processing traditionally required by existing algorithms.
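The zonal-DCT idea, computing only a small low-frequency zone of coefficients instead of the full transform, can be sketched as below. The square 2x2 zone and the floating-point orthonormal DCT are simplifications for illustration; the paper's version uses an integer-friendly binary DCT approximation to avoid floating-point operations entirely.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    m = np.cos(np.pi * (2 * np.arange(n) + 1) * k / (2 * n))
    m[0] *= np.sqrt(1.0 / n)
    m[1:] *= np.sqrt(2.0 / n)
    return m

def zonal_dct(block, zone=2):
    """Zonal 2-D DCT of a square block: only the top-left zone x zone
    low-frequency coefficients are computed; the rest are treated as
    zero. This mirrors the paper's energy-saving idea of computing only
    the few significant coefficients (the square zone shape is an
    assumption of this sketch)."""
    n = block.shape[0]
    c = dct_matrix(n)[:zone]      # keep only the first `zone` basis rows
    return c @ block @ c.T        # zone x zone coefficient block

block = np.full((8, 8), 100.0)    # flat block: all energy in the DC term
coeffs = zonal_dct(block, zone=2)
# for the orthonormal DCT, the DC coefficient of a flat 8x8 block of
# value 100 is 8 * 100 = 800, and all AC coefficients are 0
```

Because natural-image blocks concentrate energy in low frequencies, discarding the rest of the zone costs little quality while skipping most of the arithmetic.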

  7. Adaptive multifocus image fusion using block compressed sensing with smoothed projected Landweber integration in the wavelet domain.

    PubMed

    V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S

    2016-12-01

    The need for image fusion in current image processing systems is increasing, mainly due to the growing number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that is more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing, integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain, is used to obtain the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. The discrete wavelet transform and the dual-tree complex wavelet transform are used as the sparsifying bases for the proposed fusion. The main finding is that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model is fitted in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads to the proper selection of the coefficients of the fused image. Compared with the fusion scheme that omits the projected Landweber (PL) step and with other existing CS-based fusion approaches, the proposed method performs better even with fewer samples.

  8. Recent advances in imaging technologies in dentistry.

    PubMed

    Shah, Naseem; Bansal, Nikhil; Logani, Ajay

    2014-10-28

    Dentistry has witnessed tremendous advances in all its branches over the past three decades. With these advances, the need for more precise diagnostic tools, especially imaging methods, has become mandatory. Beyond simple intra-oral periapical X-rays, advanced imaging techniques like computed tomography, cone beam computed tomography, magnetic resonance imaging and ultrasound have also found a place in modern dentistry. Changing from analogue to digital radiography has not only made the process simpler and faster but also made image storage, manipulation (brightness/contrast, image cropping, etc.) and retrieval easier. Three-dimensional imaging has made the complex cranio-facial structures more accessible for examination and allows early and accurate diagnosis of deep-seated lesions. This paper reviews current advances in imaging technology and their uses in different disciplines of dentistry.

  9. Recent advances in imaging technologies in dentistry

    PubMed Central

    Shah, Naseem; Bansal, Nikhil; Logani, Ajay

    2014-01-01

    Dentistry has witnessed tremendous advances in all its branches over the past three decades. With these advances, the need for more precise diagnostic tools, especially imaging methods, has become mandatory. Beyond simple intra-oral periapical X-rays, advanced imaging techniques like computed tomography, cone beam computed tomography, magnetic resonance imaging and ultrasound have also found a place in modern dentistry. Changing from analogue to digital radiography has not only made the process simpler and faster but also made image storage, manipulation (brightness/contrast, image cropping, etc.) and retrieval easier. Three-dimensional imaging has made the complex cranio-facial structures more accessible for examination and allows early and accurate diagnosis of deep-seated lesions. This paper reviews current advances in imaging technology and their uses in different disciplines of dentistry. PMID:25349663

  10. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    NASA Astrophysics Data System (ADS)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons. Nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a proposed hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote-sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process employing the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.

  11. Computational Failure Modeling of Lower Extremities

    DTIC Science & Technology

    2012-01-01

    bone fracture, ligament tear, and muscle rupture. While these injuries may seem well-defined through medical imaging, the process of injury and the...to vehicles from improvised explosives cause severe injuries to the lower extremities, including bone fracture, ligament tear, and muscle rupture...modeling offers a powerful tool to explore the insult-to-injury process with high resolution. When studying a complex dynamic process such as this, it is

  12. Apparent Brecciation Gradient, Mount Desert Island, Maine

    NASA Astrophysics Data System (ADS)

    Hawkins, A. T.; Johnson, S. E.

    2004-05-01

    Mount Desert Island, Maine, comprises a shallow-level, Siluro-Devonian igneous complex surrounded by a distinctive breccia zone ("shatter zone" of Gilman and Chapman, 1988). The zone is very well exposed on the southern and eastern shores of the island and provides a unique opportunity to examine subvolcanic processes. The breccia of the shatter zone shows wide variation in matrix and clast percentages, and may represent a spatial and temporal gradient in breccia formation due to a single eruptive or other catastrophic volcanic event. The shatter zone was divided into five developmental stages based on the extent of brecciation: Bar Harbor Formation, Sols Cliffs breccia, Seeley Road breccia, Dubois breccia, and Great Head breccia. A digital camera was employed to capture scale images of representative outcrops using a 0.5 m square Plexiglas frame. Individual images were joined in Adobe Photoshop to create a composite image of each outcrop. The composite photo was then exported to Adobe Illustrator, which was used to outline the clasts and produce a digital map of the outcrop for analysis. The fractal dimension (Fd) of each clast was calculated using NIH Image and a Euclidean distance mapping method described by Bérubé and Jébrak (1999) to quantify the morphology of the fragments, i.e. the complexity of the outline. The more complex the fragment outline, the higher the fractal dimension, indicating that the fragment is less "mature" or has had less exposure to erosional processes, such as the injection of an igneous matrix. Sols Cliffs breccia has an average Fd of 1.125, whereas Great Head breccia has an average Fd of 1.040, with the stages between having intermediate values. The more complex clasts of the Sols Cliffs breccia, with a small amount (26.38%) of matrix material, suggest that it is the first stage in a sequence of brecciation ending at the more mature, matrix-supported (71.37%) breccia of Great Head.
    The results of this study will be used to guide isotopic and geochemical analysis of the igneous matrix material in an attempt to better understand the dynamic processes that occur in subvolcanic environments and the mechanisms involved in breccia formation.
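
The clast-outline analysis above reduces shape complexity to a fractal dimension. The study used the Euclidean distance mapping method of Bérubé and Jébrak; as an illustrative stand-in, the sketch below estimates a fractal dimension by plain box counting, a simpler standard technique (function name and box sizes are hypothetical).

```python
import numpy as np

def box_counting_dimension(outline, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary outline image by box
    counting: count occupied boxes N(s) at each box size s, then fit
    log N(s) = -D log s + c and report D."""
    counts = []
    for s in sizes:
        h, w = outline.shape
        # trim so an s-by-s grid tiles the image exactly
        sub = outline[:h - h % s, :w - w % s]
        boxes = sub.reshape(sub.shape[0] // s, s, sub.shape[1] // s, s)
        occupied = boxes.any(axis=(1, 3)).sum()
        counts.append(max(occupied, 1))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# sanity check: a straight line has dimension 1
img = np.zeros((64, 64), dtype=bool)
img[32, :] = True
print(round(box_counting_dimension(img), 2))  # → 1.0
```

A smooth outline gives a value near 1; a space-filling region approaches 2, matching the intuition that higher Fd means a more convoluted, less "mature" fragment boundary.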

  13. Superresolution intrinsic fluorescence imaging of chromatin utilizing native, unmodified nucleic acids for contrast

    PubMed Central

    Dong, Biqin; Almassalha, Luay M.; Stypula-Cyrus, Yolanda; Urban, Ben E.; Chandler, John E.; Nguyen, The-Quyen; Sun, Cheng; Zhang, Hao F.; Backman, Vadim

    2016-01-01

    Visualizing the nanoscale intracellular structures formed by nucleic acids, such as chromatin, in nonperturbed, structurally and dynamically complex cellular systems, will help expand our understanding of biological processes and open the next frontier for biological discovery. Traditional superresolution techniques to visualize subdiffractional macromolecular structures formed by nucleic acids require exogenous labels that may perturb cell function and change the very molecular processes they intend to study, especially at the extremely high label densities required for superresolution. However, despite tremendous interest and demonstrated need, label-free optical superresolution imaging of nucleotide topology under native nonperturbing conditions has never been possible. Here we investigate a photoswitching process of native nucleotides and present a demonstration of subdiffraction-resolution imaging of cellular structures using intrinsic contrast from unmodified DNA, based on the principle of single-molecule photon localization microscopy (PLM). Using DNA-PLM, we achieved nanoscopic imaging of interphase nuclei and mitotic chromosomes, allowing a quantitative analysis of the DNA occupancy level and a subdiffractional analysis of the chromosomal organization. This study may pave the way for label-free superresolution nanoscopic imaging of macromolecular structures with nucleotide topologies and could contribute to the development of new DNA-based contrast agents for superresolution imaging. PMID:27535934

  14. Uncooled Micro-Cantilever Infrared Imager Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panagiotis, Datskos G.

    2008-02-05

    We report on the development, fabrication and characterization of a microcantilever-based uncooled focal plane array (FPA) for infrared imaging. By combining a streamlined design of microcantilever thermal transducers with a highly efficient optical readout, we minimized fabrication complexity while achieving a competitive level of imaging performance. The microcantilever FPAs were fabricated using a straightforward fabrication process that involved only three photolithographic steps (i.e. three masks). The designed and constructed prototype IR imager employed a simple optical readout based on a noncoherent low-power light source. The main figures of merit of the IR imager were found to be comparable to those of uncooled MEMS infrared detectors with a substantially higher degree of fabrication complexity. In particular, the NETD and the response time of the implemented MEMS IR detector were measured to be as low as 0.5 K and 6 ms, respectively. The potential of the implemented designs can also be seen in the fact that the constructed prototype enabled IR imaging of objects close to room temperature without the use of any advanced data processing. The most unique and practically valuable feature of the implemented FPAs, however, is their scalability to high-resolution formats, such as 2000 x 2000, without progressively growing device complexity and cost. The overall technical objective of the proposed work was to develop uncooled infrared arrays based on micromechanical sensors. Currently used miniature sensors employ a number of different readout techniques to accomplish the sensing. The use of optical readout techniques requires the deposition of thin coatings on the surface of micromechanical thermal detectors. Oak Ridge National Laboratory (ORNL) is uniquely qualified to perform the required research and development (R&D) services that will assist our ongoing activities. Over the past decade ORNL has developed a number of unique methods and techniques that have led to improved sensors using a number of different approaches.

  15. Improvement of Speckle Contrast Image Processing by an Efficient Algorithm.

    PubMed

    Steimers, A; Farnung, W; Kohl-Bareis, M

    2016-01-01

    We demonstrate an efficient algorithm for the temporally and spatially based calculation of speckle contrast for imaging blood flow by laser speckle contrast analysis (LASCA). It reduces the numerical complexity of the necessary calculations, facilitates multi-core and many-core implementations of the speckle analysis, and makes the SNR independent of the temporal or spatial resolution. The new algorithm was evaluated for both spatially and temporally based analysis of speckle patterns with different image sizes and numbers of recruited pixels, as sequential, multi-core and many-core code.
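
Spatial speckle contrast is defined as K = σ/⟨I⟩ over a local window. The abstract does not give the authors' algorithm; the sketch below shows one common way to make the per-window calculation efficient, using integral images so the cost per pixel is independent of window size. It is a generic illustration (assuming strictly positive intensities), not the paper's implementation.

```python
import numpy as np

def spatial_speckle_contrast(img, w=7):
    """Local speckle contrast K = sigma/mean over each w x w window,
    computed with integral images (summed-area tables)."""
    img = img.astype(np.float64)
    ones = np.ones_like(img)

    def box_sum(a):
        # cumulative sums give every window sum from four table lookups
        s = np.cumsum(np.cumsum(a, axis=0), axis=1)
        s = np.pad(s, ((1, 0), (1, 0)))
        return s[w:, w:] - s[:-w, w:] - s[w:, :-w] + s[:-w, :-w]

    n = box_sum(ones)
    mean = box_sum(img) / n
    var = box_sum(img ** 2) / n - mean ** 2
    return np.sqrt(np.clip(var, 0, None)) / mean
```

The temporal variant is analogous, with the window running over frames at a fixed pixel instead of over neighbouring pixels.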

  16. Neural correlates in the processing of phoneme-level complexity in vowel production.

    PubMed

    Park, Haeil; Iverson, Gregory K; Park, Hae-Jeong

    2011-12-01

    We investigated how articulatory complexity at the phoneme level is manifested neurobiologically in an overt production task. fMRI images were acquired from young Korean-speaking adults as they pronounced bisyllabic pseudowords in which we manipulated phonological complexity, defined in terms of vowel duration and instability (viz., COMPLEX: /tiɯi/ > MID-COMPLEX: /tiye/ > SIMPLE: /tii/). Increased activity in the left inferior frontal gyrus (Brodmann Areas (BA) 44 and 47), supplementary motor area and anterior insula was observed for the articulation of COMPLEX sequences relative to MID-COMPLEX; the same held for the articulation of MID-COMPLEX relative to SIMPLE, except that the pars orbitalis (BA 47) was the dominant region identified within Broca's area. This differentiation indicates that phonological complexity is reflected in the neural processing of distinct phonemic representations, both by recruiting brain regions associated with retrieval of phonological information from memory and via articulatory rehearsal for the production of COMPLEX vowels. In addition, the finding that increased complexity engages greater areas of the brain suggests that brain activation can serve as a neurobiological measure of articulo-phonological complexity, complementing, if not substituting for, biomechanical measurements of speech motor activity. 2011 Elsevier Inc. All rights reserved.

  17. Fast processing of microscopic images using object-based extended depth of field.

    PubMed

    Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Pannarut, Montri; Shaw, Philip J; Tongsima, Sissades

    2016-12-22

    Microscopic analysis requires that foreground objects of interest, e.g. cells, are in focus. In a typical microscopic specimen, the foreground objects may lie at different depths of field, necessitating capture of multiple images taken at different focal planes. The extended depth of field (EDoF) technique is a computational method for merging images from different depths of field into a composite image with all foreground objects in focus. Composite images generated by EDoF can be applied in automated image processing and pattern recognition systems. However, current algorithms for EDoF are computationally intensive and impractical, especially for applications such as medical diagnosis where rapid sample turnaround is important. Since foreground objects typically constitute a minor part of an image, the EDoF technique could be made to work much faster if only foreground regions are processed to make the composite image. We propose a novel algorithm called object-based extended depth of field (OEDoF) to address this issue. The OEDoF algorithm consists of four major modules: 1) color conversion, 2) object region identification, 3) good contrast pixel identification and 4) detail merging. First, the algorithm employs color conversion to enhance contrast, followed by identification of foreground pixels. A composite image is constructed using only these foreground pixels, which dramatically reduces the computational time. We used 250 images obtained from 45 specimens of confirmed malaria infections to test our proposed algorithm. The proposed OEDoF algorithm produced composite images with all foreground objects in focus. We measured the performance of OEDoF in terms of image clarity (quality) and processing time. The features of interest selected by the OEDoF algorithm are comparable in quality to equivalent regions in images processed by the state-of-the-art complex wavelet EDoF algorithm; however, OEDoF required only one-quarter of the processing time.
This work presents a modification of the extended depth of field approach for efficiently enhancing microscopic images. This selective object processing scheme used in OEDoF can significantly reduce the overall processing time while maintaining the clarity of important image features. The empirical results from parasite-infected red cell images revealed that our proposed method efficiently and effectively produced in-focus composite images. With the speed improvement of OEDoF, this proposed algorithm is suitable for processing large numbers of microscope images, e.g., as required for medical diagnosis.
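
The selective-merging idea above can be sketched in a few lines: compute a per-pixel focus measure across the stack, then take best-focus pixels only inside the foreground mask. This is a minimal sketch under assumed simplifications (a Laplacian focus measure and a precomputed boolean foreground mask); it is not the authors' four-module implementation.

```python
import numpy as np

def focus_measure(img):
    """Per-pixel sharpness: absolute response of a discrete Laplacian."""
    p = np.pad(img, 1, mode='edge')
    lap = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * img
    return np.abs(lap)

def oedof_merge(stack, foreground):
    """Merge a focal stack: inside the foreground mask, take each pixel from
    the frame where it is sharpest; elsewhere copy the first frame unchanged,
    so background pixels cost nothing to process."""
    stack = np.asarray(stack, dtype=np.float64)
    sharp = np.stack([focus_measure(f) for f in stack])
    best = np.argmax(sharp, axis=0)        # per-pixel index of sharpest frame
    composite = stack[0].copy()
    rows, cols = np.nonzero(foreground)
    composite[rows, cols] = stack[best[rows, cols], rows, cols]
    return composite
```

Restricting the argmax lookup to foreground pixels is what buys the speedup when, as in blood smears, objects cover a small fraction of the field.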

  18. The study of cognitive processes in the brain EEG during the perception of bistable images using wavelet skeleton

    NASA Astrophysics Data System (ADS)

    Runnova, Anastasiya E.; Zhuravlev, Maksim O.; Pysarchik, Alexander N.; Khramova, Marina V.; Grubov, Vadim V.

    2017-03-01

    In this paper we study the appearance of complex patterns in human EEG data during a psychophysiological experiment in which cognitive activity is stimulated by the perception of ambiguous objects. A new method based on the calculation of the maximum-energy components of the continuous wavelet transform (skeletons) is proposed. Skeleton analysis allows us to identify specific patterns in the EEG data set that appear during the perception of ambiguous objects. Thus, it becomes possible to diagnose certain cognitive processes associated with the concentration of attention and the recognition of complex visual objects. The article presents the processing results of experimental data for 6 male volunteers.
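
A wavelet skeleton of the kind described (the ridge of maximum energy in the continuous wavelet transform) can be sketched as follows, assuming a complex Morlet wavelet and a simple per-time-point argmax over frequencies; the authors' method differs in its details, and all names here are hypothetical.

```python
import numpy as np

def morlet_cwt(signal, freqs, fs, w0=6.0):
    """Continuous wavelet transform with a complex Morlet wavelet."""
    out = np.empty((len(freqs), len(signal)), dtype=complex)
    for i, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)                 # scale with centre frequency f
        tau = np.arange(-4 * s, 4 * s, 1.0 / fs)
        wavelet = np.exp(1j * w0 * tau / s) * np.exp(-tau ** 2 / (2 * s ** 2))
        wavelet /= np.sqrt(s)                    # keep scales comparable
        out[i] = np.convolve(signal, np.conj(wavelet[::-1]), mode='same')
    return out

def skeleton(cwt_coeffs):
    """Skeleton: index of the maximum-energy frequency at each time point."""
    return np.argmax(np.abs(cwt_coeffs), axis=0)
```

For a clean 10 Hz oscillation the skeleton sits on the 10 Hz row; for EEG it traces how the dominant rhythm moves during the perception task.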

  19. In situ investigation of complex BaSO4 fiber generation in the presence of sodium polyacrylate. 1. Kinetics and solution analysis.

    PubMed

    Wang, Tongxin; Cölfen, Helmut

    2006-10-10

    Simple solution analysis of the formation mechanism of complex BaSO(4) fiber bundles in the presence of polyacrylate sodium salt, via a bioinspired approach, is reported. Titration of the polyacrylate solution with Ba(2+) revealed complex formation and the optimum ratio of Ba(2+) to polyacrylate for a slow polymer-controlled mineralization process. This is a much simpler and faster method to determine the appropriate additive/mineral concentration pairs as opposed to more common crystallization experiments in which the additive/mineral concentration is varied. Time-dependent pH measurements were carried out to determine the concentration of solution species from which BaSO(4) supersaturation throughout the fiber formation process can be calculated and the second-order kinetics of the Ba(2+) concentration in solution can be identified. Conductivity measurements, pH measurements, and analytical ultracentrifugation revealed the first formed species to be Ba-polyacrylate complexes. A combination of the solution analysis results and optical microscopic images allows a detailed picture of the complex precipitation and self-organization process, a particle-mediated process involving mesoscopic transformations, to be revealed.

  20. Imaging of Posttraumatic Arthritis, Avascular Necrosis, Septic Arthritis, Complex Regional Pain Syndrome, and Cancer Mimicking Arthritis.

    PubMed

    Rupasov, Andrey; Cain, Usa; Montoya, Simone; Blickman, Johan G

    2017-09-01

    This article focuses on the imaging of 5 discrete entities with a common end result of disability: posttraumatic arthritis, a common form of secondary osteoarthritis that results from a prior insult to the joint; avascular necrosis, a disease of impaired osseous blood flow, leading to cellular death and subsequent osseous collapse; septic arthritis, an infectious process leading to destructive changes within the joint; complex regional pain syndrome, a chronic limb-confined painful condition arising after injury; and cases of cancer mimicking arthritis, in which the initial findings seem to represent arthritis, despite a more insidious cause. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Rocket launcher mechanism of collaborative actin assembly defined by single-molecule imaging.

    PubMed

    Breitsprecher, Dennis; Jaiswal, Richa; Bombardier, Jeffrey P; Gould, Christopher J; Gelles, Jeff; Goode, Bruce L

    2012-06-01

    Interacting sets of actin assembly factors work together in cells, but the underlying mechanisms have remained obscure. We used triple-color single-molecule fluorescence microscopy to image the tumor suppressor adenomatous polyposis coli (APC) and the formin mDia1 during filament assembly. Complexes consisting of APC, mDia1, and actin monomers initiated actin filament formation, overcoming inhibition by capping protein and profilin. Upon filament polymerization, the complexes separated, with mDia1 moving processively on growing barbed ends while APC remained at the site of nucleation. Thus, the two assembly factors directly interact to initiate filament assembly and then separate but retain independent associations with either end of the growing filament.

  2. Rocket launcher mechanism of collaborative actin assembly defined by single-molecule imaging

    PubMed Central

    Breitsprecher, Dennis; Jaiswal, Richa; Bombardier, Jeffrey P.; Gould, Christopher J.; Gelles, Jeff; Goode, Bruce L.

    2013-01-01

    Interacting sets of actin assembly factors work together in cells, but the underlying mechanisms have remained obscure. We used triple-color single-molecule fluorescence microscopy to image the tumor suppressor adenomatous polyposis coli (APC) and the formin mDia1 during filament assembly. Complexes consisting of APC, mDia1, and actin monomers initiated actin filament formation, overcoming inhibition by capping protein and profilin. Upon filament polymerization, the complexes separated, with mDia1 moving processively on growing barbed ends while APC remained at the site of nucleation. Thus, the two assembly factors directly interact to initiate filament assembly, and then separate but retain independent associations with either end of the growing filament. PMID:22654058

  3. Inter-image matching

    NASA Technical Reports Server (NTRS)

    Wolfe, R. H., Jr.; Juday, R. D.

    1982-01-01

    Interimage matching is the process of determining the geometric transformation required to conform spatially one image to another. In principle, the parameters of that transformation are varied until some measure of the difference between the two images is minimized or some measure of sameness (e.g., cross-correlation) is maximized. The number of such parameters to vary is fairly large (six for merely an affine transformation), and it is customary either to attempt an a priori transformation reducing the complexity of the residual transformation, or to subdivide the image into match zones (control points or patches) small enough that a simple transformation (e.g., pure translation) is applicable, yet large enough to facilitate matching. In the latter case, a complex mapping function is fit to the results (e.g., translation offsets) in all the patches. The methods reviewed have all chosen one or both of the above options, ranging from a priori along-line correction for line-dependent effects (the high-frequency correction) to a full sensor-to-geobase transformation with subsequent subdivision into a grid of match points.
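
For the pure-translation match zones described above, a standard way to maximize cross-correlation is phase correlation in the Fourier domain. The sketch below is a generic illustration of that idea (integer shifts only), not a method taken from the review.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer translation taking patch a onto patch b by
    locating the peak of the normalized cross-power spectrum."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(fa) * fb
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase, drop magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the patch into negative offsets
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

Fitting a global mapping function to the per-patch offsets returned by such a matcher is the second stage described in the abstract.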

  4. Infrared vehicle recognition using unsupervised feature learning based on K-feature

    NASA Astrophysics Data System (ADS)

    Lin, Jin; Tan, Yihua; Xia, Haijiao; Tian, Jinwen

    2018-02-01

    Owing to the complexity of the battlefield environment, it is difficult to establish a complete knowledge base in practical applications of vehicle recognition algorithms. Infrared vehicle recognition, which plays an important role in remote sensing, therefore remains difficult and challenging. In this paper we propose a new unsupervised feature learning method based on the K-feature to recognize vehicles in infrared images. First, we apply a saliency-based target detection algorithm to the initial image. Then, unsupervised feature learning based on the K-feature, generated by a K-means clustering algorithm that learns a visual dictionary from a large number of unlabeled samples, is applied to suppress false alarms and improve accuracy. Finally, the vehicle recognition image is produced by some post-processing. Extensive experiments demonstrate that the proposed method achieves satisfactory recognition effectiveness and robustness for vehicle recognition in infrared images under complex backgrounds, and also improves reliability.
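
The dictionary-learning step described above (K-means over unlabeled patches, yielding the "K-feature") can be sketched generically as follows; the names and the distance-based encoding are illustrative assumptions, not the authors' code.

```python
import numpy as np

def learn_dictionary(patches, k, iters=20, seed=0):
    """Unsupervised visual dictionary: cluster unlabeled patch vectors with
    k-means; the k centroids become the dictionary atoms."""
    rng = np.random.default_rng(seed)
    centroids = patches[rng.choice(len(patches), k, replace=False)]
    for _ in range(iters):
        # assign each patch to its nearest centroid
        d = ((patches[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = patches[labels == j].mean(axis=0)
    return centroids

def encode(patch, centroids):
    """Feature vector for a patch: negative distances to every atom."""
    return -np.sqrt(((centroids - patch) ** 2).sum(axis=1))
```

A classifier trained on such encodings can then separate vehicle patches from clutter without ever needing labeled dictionary training data.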

  5. Research on the generation of the background with sea and sky in infrared scene

    NASA Astrophysics Data System (ADS)

    Dong, Yan-zhi; Han, Yan-li; Lou, Shu-li

    2008-03-01

    In the simulation of anti-ship infrared imaging guidance, it is important that scene generation preserve the texture of infrared images. We studied the fractal method and applied it to infrared scene generation. We adopted the method of horizontal-vertical (HV) partition to encode the original image. Based on the properties of infrared images with a sea-sky background, we took advantage of a Local Iterated Function System (LIFS) to decrease the computational complexity and increase the processing rate. Representative results are presented. They show that the fractal method preserves the texture of infrared images well and can be widely used for infrared scene generation in the future.
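
The abstract's fractal coding pipeline (HV partition plus LIFS) is not reproduced here, but the general idea of synthesizing texture with fractal statistics can be illustrated with the standard midpoint-displacement (diamond-square) construction. This is a stand-in technique for illustration only, not the LIFS method itself.

```python
import numpy as np

def midpoint_displacement(n_levels, roughness=0.6, seed=0):
    """Generate a fractal height field by recursive midpoint displacement
    (diamond-square): halve the step and the random amplitude each level."""
    rng = np.random.default_rng(seed)
    size = 2 ** n_levels + 1
    g = np.zeros((size, size))
    step, amp = size - 1, 1.0
    while step > 1:
        half = step // 2
        # diamond step: centre of each square from its four corners
        for y in range(half, size, step):
            for x in range(half, size, step):
                g[y, x] = (g[y - half, x - half] + g[y - half, x + half] +
                           g[y + half, x - half] + g[y + half, x + half]) / 4 \
                          + rng.uniform(-amp, amp)
        # square step: edge midpoints from their available neighbours
        for y in range(0, size, half):
            for x in range((y + half) % step, size, step):
                nbrs = [g[y2, x2] for y2, x2 in
                        ((y - half, x), (y + half, x), (y, x - half), (y, x + half))
                        if 0 <= y2 < size and 0 <= x2 < size]
                g[y, x] = sum(nbrs) / len(nbrs) + rng.uniform(-amp, amp)
        step, amp = half, amp * roughness
    return g
```

The `roughness` parameter plays the role of the contraction factor in fractal coding: smaller values give smoother, sea-like texture, larger values rougher clutter.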

  6. Fuzzy logic particle tracking velocimetry

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.

    1993-01-01

    Fuzzy logic has proven to be a simple and robust method for process control. Instead of requiring a complex model of the system, a user-defined rule base is used to control the process. In this paper the principles of fuzzy logic control are applied to Particle Tracking Velocimetry (PTV). Two frames of digitally recorded, single-exposure particle imagery are used as input. The fuzzy processor uses the local particle displacement information to determine the correct particle tracks. Fuzzy PTV is an improvement over traditional PTV techniques, which typically require a sequence (greater than 2) of image frames to track particles accurately. The fuzzy processor executes in software on a PC without the use of specialized array or fuzzy logic processors. A pair of sample input images with roughly 300 particle images each results in more than 200 velocity vectors in under 8 seconds of processing time.
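
A fuzzy grading of candidate particle pairings, in the spirit of the two-frame approach above, might look like the following sketch: each candidate displacement is scored by a triangular membership on its disagreement with a locally expected shift. Here the expected shift is passed in explicitly; in the actual method it would come from neighbouring particle displacements. All names are hypothetical.

```python
import numpy as np

def tri_membership(d, tol):
    """Triangular membership: 1 at zero disagreement, 0 at or beyond tol."""
    return np.maximum(0.0, 1.0 - d / tol)

def fuzzy_track(p1, p2, expected_shift, tol=3.0):
    """Match particles in frame 1 to frame 2: each candidate pairing is
    graded by how well its displacement agrees with the expected shift,
    and the best-scoring partner (if any) is kept."""
    matches = []
    for i, p in enumerate(p1):
        disp = p2 - p                                   # candidate displacements
        disagreement = np.linalg.norm(disp - expected_shift, axis=1)
        scores = tri_membership(disagreement, tol)
        j = int(np.argmax(scores))
        if scores[j] > 0:
            matches.append((i, j))
    return matches
```

With per-particle expected shifts and several competing membership criteria combined by fuzzy rules, this grows into the full fuzzy processor the abstract describes.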

  7. Fuzzy logic and image processing techniques for the interpretation of seismic data

    NASA Astrophysics Data System (ADS)

    Orozco-del-Castillo, M. G.; Ortiz-Alemán, C.; Urrutia-Fucugauchi, J.; Rodríguez-Castellanos, A.

    2011-06-01

    Since interpretation of seismic data is usually a tedious and repetitive task, the ability to do so automatically or semi-automatically has become an important objective of recent research. We believe that the vagueness and uncertainty in the interpretation process make fuzzy logic an appropriate tool for dealing with seismic data. In this work we developed a semi-automated fuzzy inference system to detect the internal architecture of a mass transport complex (MTC) in seismic images. We propose that the observed characteristics of an MTC can be expressed as fuzzy if-then rules consisting of linguistic values associated with fuzzy membership functions. The construction of the fuzzy inference system and the various image processing techniques employed are presented. We conclude that this is a well-suited problem for fuzzy logic, since the application of the proposed methodology yields a semi-automatically interpreted MTC which closely resembles the MTC from expert manual interpretation.
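
A fuzzy if-then rule base of the kind described can be sketched minimally as follows. The attributes, membership functions and rule consequents are invented for illustration, and a Sugeno-style weighted average stands in for the authors' inference system.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def mtc_likelihood(amplitude, continuity):
    """Toy inference: two if-then rules map seismic attributes (normalized
    to [0, 1]) to a crisp 'chaotic facies' likelihood."""
    low_amp = tri(amplitude, -0.5, 0.0, 0.6)
    high_amp = tri(amplitude, 0.4, 1.0, 1.5)
    low_cont = tri(continuity, -0.5, 0.0, 0.6)
    high_cont = tri(continuity, 0.4, 1.0, 1.5)
    # Rule 1: IF amplitude is low AND continuity is low THEN likelihood = 0.9
    w1 = min(low_amp, low_cont)
    # Rule 2: IF amplitude is high AND continuity is high THEN likelihood = 0.1
    w2 = min(high_amp, high_cont)
    return (w1 * 0.9 + w2 * 0.1) / (w1 + w2 + 1e-12)
```

Applied pixel by pixel to attribute maps, such a rule base produces the semi-automatic MTC delineation the abstract reports.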

  8. CINCH (confocal incoherent correlation holography) super resolution fluorescence microscopy based upon FINCH (Fresnel incoherent correlation holography).

    PubMed

    Siegel, Nisan; Storrie, Brian; Bruce, Marc; Brooker, Gary

    2015-02-07

    FINCH holographic fluorescence microscopy creates high resolution super-resolved images with enhanced depth of focus. The simple addition of a real-time Nipkow disk confocal image scanner in a conjugate plane of this incoherent holographic system is shown to reduce the depth of focus, and the combination of both techniques provides a simple way to enhance the axial resolution of FINCH in a combined method called "CINCH". An important feature of the combined system allows for the simultaneous real-time image capture of widefield and holographic images or confocal and confocal holographic images for ready comparison of each method on the exact same field of view. Additional GPU based complex deconvolution processing of the images further enhances resolution.

  9. A FPGA-based architecture for real-time image matching

    NASA Astrophysics Data System (ADS)

    Wang, Jianhui; Zhong, Sheng; Xu, Wenhui; Zhang, Weijun; Cao, Zhiguo

    2013-10-01

    Image matching is a fundamental task in computer vision. It is used to establish correspondence between two images taken at different viewpoints or different times of the same scene. However, its large computational complexity has been a challenge to most embedded systems. This paper proposes a single-FPGA image matching system, which consists of SIFT feature detection, BRIEF descriptor extraction and BRIEF matching. It optimizes the FPGA architecture for SIFT feature detection to reduce FPGA resource utilization. Moreover, we also implement BRIEF description and matching on the FPGA. The proposed system can perform image matching at 30 fps (frames per second) for 1280x720 images. Its processing speed can meet the demands of most real-life computer vision applications.
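
The BRIEF stage of the pipeline is easy to sketch in software (the paper implements it in FPGA logic): a descriptor is a bit string of pairwise intensity comparisons around a keypoint, and matching reduces to Hamming distance. A minimal software illustration, with a hypothetical 256-pair sampling pattern:

```python
import numpy as np

rng = np.random.default_rng(0)
# a fixed, randomly chosen sampling pattern shared by all descriptors
PAIRS = rng.integers(-8, 9, size=(256, 4))

def brief_descriptor(img, y, x):
    """256-bit BRIEF descriptor: compare pixel pairs sampled around (y, x),
    packed into 32 bytes."""
    bits = [img[y + dy1, x + dx1] < img[y + dy2, x + dx2]
            for dy1, dx1, dy2, dx2 in PAIRS]
    return np.packbits(bits)

def hamming_distance(d1, d2):
    """Number of differing bits; on FPGA this is XOR plus a popcount."""
    return int(np.unpackbits(d1 ^ d2).sum())
```

The XOR-and-popcount structure of the match step is exactly why BRIEF maps so cheaply onto FPGA fabric.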

  10. GPU-Accelerated Hybrid Algorithm for 3D Localization of Fluorescent Emitters in Dense Clusters

    NASA Astrophysics Data System (ADS)

    Jung, Yoon; Barsic, Anthony; Piestun, Rafael; Fakhri, Nikta

    In stochastic switching-based super-resolution imaging, a random subset of fluorescent emitters is imaged and localized in each frame to construct a single high-resolution image. However, the condition of non-overlapping point spread functions (PSFs) imposes constraints on experimental parameters. Recent developments in post-processing methods, such as dictionary-based sparse support recovery using compressive sensing, have shown up to an order of magnitude higher recall rates than single-emitter fitting methods. However, the computational complexity of this approach scales poorly with the grid size and requires long runtimes. Here, we introduce a fast and accurate compressive sensing algorithm for localizing fluorescent emitters at high density in 3D, namely sparse support recovery using Orthogonal Matching Pursuit (OMP) and the L1-Homotopy algorithm for reconstructing STORM images (SOLAR STORM). SOLAR STORM combines OMP with L1-Homotopy to reduce computational complexity, which is further accelerated by parallel implementation on GPUs. This method can be used in a variety of experimental conditions for both in vitro and live-cell fluorescence imaging.
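
The OMP building block named above can be sketched in a few lines of dense linear algebra (the GPU-parallel combination with L1-Homotopy in SOLAR STORM is far more involved): greedily select the dictionary atom most correlated with the residual, then re-fit all selected atoms by least squares.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: recover a k-sparse coefficient vector x
    with y ≈ A x, for a dictionary A with columns as atoms (PSF candidates)."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the atom most correlated with what is left unexplained
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # re-fit all selected atoms jointly by least squares
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x
```

In the imaging setting, each atom is a candidate PSF centred on a grid position, and the recovered support gives the emitter locations.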

  11. LONI visualization environment.

    PubMed

    Dinov, Ivo D; Valentino, Daniel; Shin, Bae Cheol; Konstantinidis, Fotios; Hu, Guogang; MacKenzie-Graham, Allan; Lee, Erh-Fang; Shattuck, David; Ma, Jeff; Schwartz, Craig; Toga, Arthur W

    2006-06-01

    Over the past decade, the use of informatics to solve complex neuroscientific problems has increased dramatically. Many of these research endeavors involve examining large amounts of imaging, behavioral, genetic, neurobiological, and neuropsychiatric data. Superimposing, processing, visualizing, or interpreting such a complex cohort of datasets frequently becomes a challenge. We developed a new software environment that allows investigators to integrate multimodal imaging data, hierarchical brain ontology systems, on-line genetic and phylogenic databases, and 3D virtual data reconstruction models. The Laboratory of Neuro Imaging visualization environment (LONI Viz) consists of the following components: a sectional viewer for imaging data, an interactive 3D display for surface and volume rendering of imaging data, a brain ontology viewer, and an external database query system. The synchronization of all components according to stereotaxic coordinates, region name, hierarchical ontology, and genetic labels is achieved via a comprehensive BrainMapper functionality, which directly maps between position, structure name, database, and functional connectivity information. This environment is freely available, portable, and extensible, and may prove very useful for neurobiologists, neurogeneticists, brain mappers, and other clinical, pedagogical, and research endeavors.

  12. A Novel Image Encryption Scheme Based on Intertwining Chaotic Maps and RC4 Stream Cipher

    NASA Astrophysics Data System (ADS)

    Kumari, Manju; Gupta, Shailender

    2018-03-01

    As modern systems enable us to transmit large chunks of data, both as text and as images, there is a need to explore algorithms which can provide higher security without significantly increasing the time complexity. This paper proposes an image encryption scheme which uses intertwining chaotic maps and the RC4 stream cipher to encrypt/decrypt images. The scheme employs the chaotic maps for the confusion stage and for generating the key for the RC4 cipher. The RC4 cipher uses this key to generate random sequences which are used to implement an efficient diffusion process. The algorithm is implemented in MATLAB-2016b and various performance metrics are used to evaluate its efficacy. The proposed scheme produces highly scrambled encrypted images and can resist statistical, differential and brute-force search attacks. The peak signal-to-noise ratio values are quite similar to those of other schemes, and the entropy values are close to ideal. In addition, the scheme is very practical, having the lowest time complexity among its counterparts.
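
The RC4 half of the scheme is standard and easy to sketch; the intertwining chaotic maps used for confusion and key generation are omitted here. (This shows the textbook RC4 building block only; note that RC4 is considered cryptographically broken and is shown purely to illustrate the diffusion stage.)

```python
def rc4_keystream(key, n):
    """Textbook RC4: key-scheduling (KSA) then n bytes of keystream (PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                         # KSA: key-dependent shuffle
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = bytearray()
    for _ in range(n):                           # PRGA: emit keystream bytes
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def rc4_xor(key, data):
    """Encryption and decryption are the same XOR with the keystream."""
    ks = rc4_keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```

In the proposed scheme, the key bytes fed to the KSA would come from iterating the chaotic maps rather than from a fixed passphrase, and the keystream XOR supplies the diffusion over pixel values.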

  13. Rapid processing of PET list-mode data for efficient uncertainty estimation and data analysis

    NASA Astrophysics Data System (ADS)

    Markiewicz, P. J.; Thielemans, K.; Schott, J. M.; Atkinson, D.; Arridge, S. R.; Hutton, B. F.; Ourselin, S.

    2016-07-01

    In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution together with data transfers between disk and CPU memory as well as CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11 and (3) variance estimates based on multiple bootstrap realisations of (1) and (2) assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre of mass plots of the radiodistribution for motion detection; (3) video of dynamic projection views for fast visual list-mode skimming and inspection; (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standard uptake value ratio) calculation for a single PET scan of 18F-florbetapir using the Siemens Biograph mMR scanner.
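
The bootstrap at the heart of this workflow can be illustrated generically: resample the recorded events with replacement, recompute the statistic of interest, and read the spread of the realisations as the uncertainty. A minimal CPU sketch with synthetic events (the paper's GPU list-mode implementation is far more elaborate, and all names here are hypothetical):

```python
import numpy as np

def bootstrap_uncertainty(events, statistic, n_boot=200, seed=0):
    """Nonparametric bootstrap over list-mode events: resample events with
    replacement, recompute the statistic, report its mean and standard error."""
    rng = np.random.default_rng(seed)
    n = len(events)
    estimates = np.array([statistic(events[rng.integers(0, n, n)])
                          for _ in range(n_boot)])
    return estimates.mean(), estimates.std(ddof=1)

# e.g. uncertainty of a mean value over simulated count-like events
events = np.random.default_rng(1).poisson(10.0, size=5000).astype(float)
mean, se = bootstrap_uncertainty(events, np.mean)
```

Replacing `np.mean` with a full reconstruct-and-measure pipeline gives the per-realisation image statistics (such as regional SUVR) whose uncertainties the paper estimates.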

  14. Ariel at Voyager Closest Approach

    NASA Image and Video Library

    2000-06-02

    This picture is part of NASA's Voyager 2 imaging sequence of Ariel, a moon of Uranus, taken on January 24, 1986. The complexity of Ariel's surface indicates that a variety of geologic processes have occurred. http://photojournal.jpl.nasa.gov/catalog/PIA00037

  15. Modeling Coupled Physical and Chemical Erosional Processes Using Structure from Motion Reconstruction and Multiphysics Simulation: Applications to Knickpoints in Bedrock Streams in Limestone Caves and on Earth's Surface

    NASA Astrophysics Data System (ADS)

    Bosch, R.; Ward, D.

    2017-12-01

    Investigation of erosion rates and processes at knickpoints in surface bedrock streams is an active area of research, involving complex feedbacks in the coupled relationships between dissolution, abrasion, and plucking that have not been sufficiently addressed. Even less research has addressed how these processes operate to propagate knickpoints through cave passages in layered sedimentary rocks, despite these features being common along subsurface streams. In both settings, there is evidence for mechanical and chemical erosion, but in cave passages the different hydrologic and hydraulic regimes, combined with an important role for the dissolution process, affect the relative roles and coupled interactions between these processes, and distinguish them from surface stream knickpoints. Using a novel approach of imaging cave passages with Structure from Motion (SFM), we create 3D geometry meshes to explore these systems using multiphysics simulation, and compare the processes as they occur in caves with those in surface streams. Here we focus on four field sites with actively eroding streambeds that include knickpoints: Upper River Acheron and Devil's Cooling Tub in Mammoth Cave, Kentucky; and two surface streams in Clermont County, Ohio, Avey's Run and Fox Run. SFM 3D reconstructions are built from images exported from 4K video shot at each field location. We demonstrate that SFM is a viable imaging approach for reconstructing cave passages with complex morphologies. We then use these reconstructions to create meshes upon which to run multiphysics simulations using STAR-CCM+. Our approach incorporates multiphase free-surface computational fluid dynamics simulations with sediment transport modeled using discrete element method grains. Physical and chemical properties of the water, bedrock, and sediment enable computation of shear stress, sediment impact forces, and chemical kinetic conditions at the bed surface. 
Preliminary results demonstrate the efficacy of commercially available multiphysics simulation software for modeling various flow conditions, erosional processes, and their complex coupled interactions in cave passages and surface stream channels, expanding knowledge and understanding of overall cave system development and river profile erosion.

  16. Pathway of actin filament branch formation by Arp2/3 complex revealed by single-molecule imaging

    PubMed Central

    Smith, Benjamin A.; Daugherty-Clarke, Karen; Goode, Bruce L.; Gelles, Jeff

    2013-01-01

    Actin filament nucleation by actin-related protein (Arp) 2/3 complex is a critical process in cell motility and endocytosis, yet key aspects of its mechanism are unknown due to a lack of real-time observations of Arp2/3 complex through the nucleation process. Triggered by the verprolin homology, central, and acidic (VCA) region of proteins in the Wiskott-Aldrich syndrome protein (WASp) family, Arp2/3 complex produces new (daughter) filaments as branches from the sides of preexisting (mother) filaments. We visualized individual fluorescently labeled Arp2/3 complexes dynamically interacting with and producing branches on growing actin filaments in vitro. Branch formation was strikingly inefficient, even in the presence of VCA: only ∼1% of filament-bound Arp2/3 complexes yielded a daughter filament. VCA acted at multiple steps, increasing both the association rate of Arp2/3 complexes with mother filament and the fraction of filament-bound complexes that nucleated a daughter. The results lead to a quantitative kinetic mechanism for branched actin assembly, revealing the steps that can be stimulated by additional cellular factors. PMID:23292935

  17. Emotional Complexity and the Neural Representation of Emotion in Motion

    PubMed Central

    Barnard, Philip J.; Lawrence, Andrew D.

    2011-01-01

    According to theories of emotional complexity, individuals low in emotional complexity encode and represent emotions in visceral or action-oriented terms, whereas individuals high in emotional complexity encode and represent emotions in a differentiated way, using multiple emotion concepts. During functional magnetic resonance imaging, participants viewed valenced animated scenarios of simple ball-like figures attending either to social or spatial aspects of the interactions. Participant’s emotional complexity was assessed using the Levels of Emotional Awareness Scale. We found a distributed set of brain regions previously implicated in processing emotion from facial, vocal and bodily cues, in processing social intentions, and in emotional response, were sensitive to emotion conveyed by motion alone. Attention to social meaning amplified the influence of emotion in a subset of these regions. Critically, increased emotional complexity correlated with enhanced processing in a left temporal polar region implicated in detailed semantic knowledge; with a diminished effect of social attention; and with increased differentiation of brain activity between films of differing valence. Decreased emotional complexity was associated with increased activity in regions of pre-motor cortex. Thus, neural coding of emotion in semantic vs action systems varies as a function of emotional complexity, helping reconcile puzzling inconsistencies in neuropsychological investigations of emotion recognition. PMID:20207691

  18. Three-dimensional-printed cardiac prototypes aid surgical decision-making and preoperative planning in selected cases of complex congenital heart diseases: Early experience and proof of concept in a resource-limited environment.

    PubMed

    Kappanayil, Mahesh; Koneti, Nageshwara Rao; Kannan, Rajesh R; Kottayil, Brijesh P; Kumar, Krishna

    2017-01-01

    Three-dimensional (3D) printing is an innovative manufacturing process that allows computer-assisted conversion of 3D imaging data into physical "printouts." Healthcare applications are currently in evolution. The objective of this study was to explore the feasibility and impact of using patient-specific 3D-printed cardiac prototypes derived from high-resolution medical imaging data (cardiac magnetic resonance imaging/computed tomography [MRI/CT]) on surgical decision-making and preoperative planning in selected cases of complex congenital heart diseases (CHDs). Five patients with complex CHD with previously unresolved management decisions were chosen. These included two patients with complex double-outlet right ventricle, two patients with criss-cross atrioventricular connections, and one patient with congenitally corrected transposition of the great arteries with pulmonary atresia. Cardiac MRI was done for all patients, and cardiac CT for one; specific surgical challenges were identified. Volumetric data were used to generate patient-specific 3D models. All cases were reviewed along with their 3D models, and the impact on surgical decision-making and preoperative planning was assessed. Accurate life-sized 3D cardiac prototypes were successfully created for all patients. The models enabled radically improved 3D understanding of anatomy, identification of specific technical challenges, and precise surgical planning. Augmentation of existing clinical and imaging data by 3D prototypes allowed successful execution of complex surgeries for all five patients, in accordance with the preoperative planning. 3D-printed cardiac prototypes can radically assist decision-making, planning, and safe execution of complex congenital heart surgery by improving understanding of 3D anatomy and allowing anticipation of technical challenges.

  19. The PREP pipeline: standardized preprocessing for large-scale EEG analysis.

    PubMed

    Bigdely-Shamlo, Nima; Mullen, Tim; Kothe, Christian; Su, Kyung-Min; Robbins, Kay A

    2015-01-01

    The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP) and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode.
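    The noisy channel-reference interaction described above can be sketched as an iterative loop: estimate a reference from the channels currently judged good, re-reference, re-detect noisy channels, and repeat until the set stabilizes. The following is a minimal sketch in that spirit only (not the actual PREP code; the single robust z-score noisiness criterion here is a stand-in for PREP's full battery of noisy-channel detectors, and all data are synthetic):

```python
import numpy as np

def robust_average_reference(eeg, z_thresh=5.0, max_iter=4):
    """Iteratively estimate an average reference from channels not flagged
    as noisy, in the spirit of PREP's robust referencing (sketch only).
    eeg: (n_channels, n_samples). Returns (re-referenced data, good mask)."""
    good = np.ones(eeg.shape[0], dtype=bool)
    ref = eeg[good].mean(axis=0)
    for _ in range(max_iter):
        referenced = eeg - ref                  # re-reference every channel
        noise = referenced.std(axis=1)          # crude per-channel noisiness
        med = np.median(noise)
        mad = np.median(np.abs(noise - med)) + 1e-12
        z = (noise - med) / (1.4826 * mad)      # robust z-score of noisiness
        new_good = z < z_thresh                 # flag extreme channels as bad
        if np.array_equal(new_good, good):
            break                               # channel set has stabilized
        good = new_good
        ref = eeg[good].mean(axis=0)            # reference from good channels
    return eeg - ref, good

# synthetic example: eight channels, of which channel 0 is made very noisy
rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 1.0, size=(8, 1000))
eeg[0] += rng.normal(0.0, 20.0, size=1000)
referenced, good = robust_average_reference(eeg)
```

    The point of the iteration is the one the paper makes: the reference and the noisy-channel decisions depend on each other, so neither can be computed soundly in a single pass.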

  20. Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.

    PubMed

    Tao, Liang; Kwan, Hon Keung

    2012-07-01

    Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the existing fastest algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, which make them attractive for real-time image processing.
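    The DHT at the heart of the RDGT is attractive for real-valued data because it maps real inputs to real outputs and is, up to a scale factor, its own inverse. A minimal 1-D illustration of these two properties, using the standard identity relating the DHT to the FFT (this is not the authors' 2-D multirate convolver-bank implementation):

```python
import numpy as np

def dht(x):
    """1-D discrete Hartley transform, H[k] = sum_n x[n] * cas(2*pi*k*n/N)
    with cas = cos + sin, via the identity DHT(x) = Re(FFT(x)) - Im(FFT(x))."""
    X = np.fft.fft(x)
    return X.real - X.imag

def idht(H):
    """The DHT is involutory up to scaling: applying it twice returns N*x."""
    return dht(H) / len(H)

x = np.array([1.0, 2.0, 0.0, -1.0])
roundtrip = idht(dht(x))   # recovers x
```

    Because forward and inverse transforms share one real-valued kernel, the analysis and synthesis convolver banks can reuse the same fast DHT routine, which is part of why the unified channel structure described above is possible.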

  1. Advanced imaging in COPD: insights into pulmonary pathophysiology

    PubMed Central

    Milne, Stephen

    2014-01-01

    Chronic obstructive pulmonary disease (COPD) involves a complex interaction of structural and functional abnormalities. The two have long been studied in isolation. However, advanced imaging techniques allow us to simultaneously assess pathological processes and their physiological consequences. This review gives a comprehensive account of the various advanced imaging modalities used to study COPD, including computed tomography (CT), magnetic resonance imaging (MRI), and the nuclear medicine techniques positron emission tomography (PET) and single-photon emission computed tomography (SPECT). Some more recent developments in imaging technology, including micro-CT, synchrotron imaging, optical coherence tomography (OCT) and electrical impedance tomography (EIT), are also described. The authors identify the pathophysiological insights gained from these techniques, and speculate on the future role of advanced imaging in both clinical and research settings. PMID:25478198

  2. Image processing analysis of traditional Gestalt vision experiments

    NASA Astrophysics Data System (ADS)

    McCann, John J.

    2002-06-01

    In the late 19th century, Gestalt psychology rebelled against the popular new science of psychophysics. The Gestalt revolution used many fascinating visual examples to illustrate that the whole is greater than the sum of its parts. Color constancy was an important example. The physical interpretation of sensations and their quantification by JNDs and Weber fractions were met with innumerable examples in which two 'identical' physical stimuli did not look the same. The fact that large changes in the color of the illumination failed to change color appearance in real scenes demanded something more than quantifying the psychophysical response of a single pixel. The debate continues today between proponents of physical, pixel-based colorimetry and perceptual, image-based cognitive interpretations. Modern instrumentation has made colorimetric pixel measurement universal. As well, new examples of unconscious inference continue to be reported in the literature. Image processing provides a new way of analyzing familiar Gestalt displays. Since the pioneering experiments by Fergus Campbell and Land, we know that human vision has independent spatial channels and independent color channels. Color-matching data from color constancy experiments agree with spatial comparison analysis. In this analysis, simple spatial processes can explain the different appearances of 'identical' stimuli by analyzing the multiresolution spatial properties of their surrounds. Benary's Cross, White's Effect, the Checkerboard Illusion, and the Dungeon Illusion can all be understood by the analysis of their low-spatial-frequency components. Just as with color constancy, these Gestalt images are most simply described by the analysis of spatial components. Simple spatial mechanisms account for the appearance of 'identical' stimuli in complex scenes; complex cognitive processes are not required to calculate appearances in familiar Gestalt experiments.

  3. Contrasting landform perception with varied radar illumination geometries and at simulated resolutions of Venera and Magellan

    NASA Technical Reports Server (NTRS)

    Ford, J. P.; Arvidson, R. E.

    1989-01-01

    The high sensitivity of imaging radars to slope at moderate to low incidence angles enhances the perception of linear topography on images. It reveals broad spatial patterns that are essential to landform mapping and interpretation. Because radar responses are strongly directional, the ability to discriminate linear features on images varies with their orientation. Landforms that appear prominent on images where they are transverse to the illumination may be obscure to indistinguishable on images where they are parallel to it. Landform detection is also influenced by the spatial resolution of radar images. Seasat radar images of the Gran Desierto Dunes complex, Sonora, Mexico; the Appalachian Valley and Ridge Province; and accreted terranes in eastern interior Alaska were processed to simulate both Venera 15 and 16 images (1000 to 3000 m resolution) and the image data expected from the Magellan mission (120 to 300 m resolution). The Gran Desierto Dunes are not discernible in the Venera simulation, whereas the higher-resolution Magellan simulation shows the dominant dune patterns produced from differential erosion of the rocks. The Magellan simulation also shows that fluvial processes have dominated erosion and exposure of the folds.

  4. The PICWidget

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey; Fox, Jason; Rabe, Kenneth; Shu, I-Hsiang; Powell, Mark

    2007-01-01

    The Plug-in Image Component Widget (PICWidget) is a software component for building digital imaging applications. The component is part of a methodology described in GIS Methodology for Planning Planetary-Rover Operations (NPO-41812), which appears elsewhere in this issue of NASA Tech Briefs. Planetary rover missions return a large number and wide variety of image data products that vary in complexity in many ways. Supported by a powerful, flexible image-data-processing pipeline, the PICWidget can process and render many types of imagery, including (but not limited to) thumbnail, subframed, downsampled, stereoscopic, and mosaic images; images coregistered with orbital data; and synthetic red/green/blue images. The PICWidget is capable of efficiently rendering images from data representing many more pixels than are available at the computer workstation where the images are to be displayed. The PICWidget is implemented as an Eclipse plug-in using the Standard Widget Toolkit, which provides a straightforward interface for re-use of the PICWidget in any number of application programs built upon the Eclipse application framework. Because the PICWidget is tile-based and performs aggressive tile caching, it has the flexibility to perform faster or slower, depending on whether more or less memory is available.

  5. 3D Imaging of Microbial Biofilms: Integration of Synchrotron Imaging and an Interactive Visualization Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Mathew; Marshall, Matthew J.; Miller, Erin A.

    2014-08-26

    Understanding the interactions of structured communities known as "biofilms" and other complex matrices is possible through X-ray micro-tomography imaging of the biofilms. Feature detection and image processing for this type of data focus on efficiently identifying and segmenting biofilms and bacteria in the datasets. The datasets are very large and often require manual intervention due to low contrast between objects and high noise levels. Thus, new software is required for effective interpretation and analysis of the data. This work describes the development and application of software to analyze and visualize high-resolution X-ray micro-tomography datasets.

  6. Image-based tracking of the suturing needle during laparoscopic interventions

    NASA Astrophysics Data System (ADS)

    Speidel, S.; Kroehnert, A.; Bodenstedt, S.; Kenngott, H.; Müller-Stich, B.; Dillmann, R.

    2015-03-01

    One of the most complex and difficult tasks for surgeons during minimally invasive interventions is suturing. A prerequisite to assist the suturing process is the tracking of the needle. The endoscopic images provide a rich source of information which can be used for needle tracking. In this paper, we present an image-based method for markerless needle tracking. The method uses a color-based and geometry-based segmentation to detect the needle. Once an initial needle detection is obtained, a region of interest enclosing the extracted needle contour is passed on to a reduced segmentation. It is evaluated with in vivo images from da Vinci interventions.

  7. Automated Selection of Regions of Interest for Intensity-based FRET Analysis of Transferrin Endocytic Trafficking in Normal vs. Cancer Cells

    PubMed Central

    Talati, Ronak; Vanderpoel, Andrew; Eladdadi, Amina; Anderson, Kate; Abe, Ken; Barroso, Margarida

    2013-01-01

    The overexpression of certain membrane-bound receptors is a hallmark of cancer progression, and it has been suggested to affect the organization, activation, recycling, and down-regulation of receptor-ligand complexes in human cancer cells. Thus, comparing receptor trafficking pathways in normal vs. cancer cells requires the ability to image cells expressing dramatically different receptor expression levels. Here, we present a significant technical advance in the analysis and processing of images collected using intensity-based Förster resonance energy transfer (FRET) confocal microscopy. An automated ImageJ macro was developed to select regions of interest (ROIs) based on intensity- and statistics-based thresholds within cellular images with reduced FRET signal. Furthermore, SSMD (strictly standardized mean difference), a statistical signal-to-noise ratio (SNR) evaluation parameter, was used to validate the quality of the FRET analysis, in particular of ROI database selection. The ImageJ ROI selection macro, together with SSMD as an evaluation parameter of SNR levels, was used to investigate the endocytic recycling of Tfn-TFR complexes at nanometer-range resolution in human normal vs. breast cancer cells expressing significantly different levels of endogenous TFR. Here, the FRET-based assay demonstrates that Tfn-TFR complexes in normal epithelial vs. breast cancer cells show significantly different E% behavior during their endocytic recycling pathway. Since E% is a relative measure of distance, we propose that these changes in E% levels represent conformational changes in Tfn-TFR complexes during the endocytic pathway. Thus, our results indicate that Tfn-TFR complexes undergo different conformational changes in normal vs. cancer cells, indicating that the organization of Tfn-TFR complexes at the nanometer range is significantly altered during the endocytic recycling pathway in cancer cells. 
In summary, improvements in the automated selection of FRET ROI datasets allowed us to detect significant changes in E% with potential biological significance in human normal vs. cancer cells. PMID:23994873
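    The SSMD parameter used above has a simple closed form: for two groups with means mu1, mu2 and variances s1^2, s2^2, beta = (mu1 - mu2) / sqrt(s1^2 + s2^2). A small sketch of the computation (the numbers are invented for illustration; this is not the authors' ImageJ macro):

```python
import numpy as np

def ssmd(sample, control):
    """Strictly standardized mean difference between two groups
    (e.g. FRET E% inside candidate ROIs vs. background regions):
    beta = (mu1 - mu2) / sqrt(var1 + var2), using sample variances."""
    return (np.mean(sample) - np.mean(control)) / np.sqrt(
        np.var(sample, ddof=1) + np.var(control, ddof=1))

# invented measurements: strong ROI signal vs. low background
signal = np.array([5.1, 4.9, 5.3, 5.0, 5.2])
background = np.array([1.0, 1.2, 0.9, 1.1, 0.8])
beta = ssmd(signal, background)
```

    Unlike a plain t-statistic, SSMD does not grow with sample size, which is what makes it usable as a fixed quality threshold for accepting or rejecting ROI datasets.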

  8. Intelligence and cortical thickness in children with complex partial seizures.

    PubMed

    Tosun, Duygu; Caplan, Rochelle; Siddarth, Prabha; Seidenberg, Michael; Gurbani, Suresh; Toga, Arthur W; Hermann, Bruce

    2011-07-15

    Prior studies on healthy children have demonstrated regional variations and a complex and dynamic relationship between intelligence and cerebral tissue. Yet, there is little information regarding the neuroanatomical correlates of general intelligence in children with epilepsy compared to healthy controls. In vivo imaging techniques, combined with methods for advanced image processing and analysis, offer the potential to examine quantitative mapping of brain development and its abnormalities in childhood epilepsy. A surface-based, computational high resolution 3-D magnetic resonance image analytic technique was used to compare the relationship of cortical thickness with age and intelligence quotient (IQ) in 65 children and adolescents with complex partial seizures (CPS) and 58 healthy controls, aged 6-18 years. Children were grouped according to health status (epilepsy; controls) and IQ level (average and above; below average) and compared on age-related patterns of cortical thickness. Our cross-sectional findings suggest that disruption in normal age-related cortical thickness expression is associated with intelligence in pediatric CPS patients both with average and below average IQ scores.

  9. A modeling analysis program for the JPL Table Mountain Io sodium cloud data

    NASA Technical Reports Server (NTRS)

    Smyth, W. H.; Goldberg, B. A.

    1986-01-01

    Progress and achievements in the second year are discussed in three main areas: (1) data quality review of the 1981 Region B/C images; (2) data processing activities; and (3) modeling activities. The data quality review revealed that almost all 1981 Region B/C images are of sufficient quality to be valuable in analyses of the JPL data set. In the second area, the major milestone reached was the successful development and application of the complex image-processing software required to render the original image data suitable for modeling analysis studies. In the third area, the description of the lifetime of sodium atoms in the planet's magnetosphere was improved in the model to include the offset-dipole nature of the magnetic field as well as an east-west electric field. These improvements are important in properly representing the basic morphology as well as the east-west asymmetries of the sodium cloud.

  10. Further optimization of SeDDaRA blind image deconvolution algorithm and its DSP implementation

    NASA Astrophysics Data System (ADS)

    Wen, Bo; Zhang, Qiheng; Zhang, Jianlin

    2011-11-01

    Efficient algorithms for blind image deconvolution and their high-speed implementation are of great value in practice. A further optimization of SeDDaRA is developed, from algorithm structure to numerical calculation methods. The main optimizations are: modularization of the structure for good implementation feasibility; reduction of the data computation and dependency of the 2D FFT/IFFT; and acceleration of the power operation by a segmented look-up table. The resulting Fast SeDDaRA is proposed and specialized for low complexity. As the final implementation, a hardware image-restoration system is built using multi-DSP parallel processing. Experimental results show that the processing time and memory demand of Fast SeDDaRA decrease by at least 50%, and that the data throughput of the image-restoration system is over 7.8 Msps. The optimization is shown to be efficient and feasible, and Fast SeDDaRA is able to support real-time applications.
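    The segmented look-up table mentioned for accelerating the power operation can be illustrated generically: precompute x**p on a coarse grid once, then answer per-pixel queries by piecewise-linear interpolation, trading a small approximation error for the cost of repeated pow calls. This is a sketch under assumed parameters (exponent, range, and segment count are invented), not the paper's DSP code:

```python
import numpy as np

def make_pow_lut(exponent, lo, hi, n_segments=1024):
    """Precompute y = x**exponent on a uniform grid over [lo, hi]."""
    grid = np.linspace(lo, hi, n_segments + 1)
    return grid, grid ** exponent

def lut_pow(x, grid, table):
    """Approximate x**exponent by linear interpolation between grid points."""
    return np.interp(x, grid, table)

# build a table for x**0.6 on [0, 1] and measure the worst-case error
grid, table = make_pow_lut(0.6, 0.0, 1.0)
x = np.linspace(0.0, 1.0, 1000)
err = np.max(np.abs(lut_pow(x, grid, table) - x ** 0.6))
```

    On fixed-point DSP hardware the same idea applies with integer indexing into the table; the segment count controls the accuracy/memory trade-off.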

  11. In vivo PET imaging of neuroinflammation in Alzheimer's disease.

    PubMed

    Lagarde, Julien; Sarazin, Marie; Bottlaender, Michel

    2018-05-01

    Increasing evidence suggests that neuroinflammation contributes to the pathophysiology of many neurodegenerative diseases, especially Alzheimer's disease (AD). Molecular imaging by PET may be a useful tool to assess neuroinflammation in vivo, thus helping to decipher the complex role of inflammatory processes in the pathophysiology of neurodegenerative diseases and providing a potential means of monitoring the effect of new therapeutic approaches. For this objective, the main target of PET studies is the 18 kDa translocator protein (TSPO), as it is overexpressed by activated microglia. In the present review, we describe the most widely used PET tracers targeting the TSPO, the methodological issues in tracer quantification and summarize the results obtained by TSPO PET imaging in AD, as well as in neurodegenerative disorders associated with AD, in psychiatric disorders and ageing. We also briefly describe alternative PET targets and imaging modalities to study neuroinflammation. Lastly, we question the meaning of PET imaging data in the context of a highly complex and multifaceted role of neuroinflammation in neurodegenerative diseases. This overview leads to the conclusion that PET imaging of neuroinflammation is a promising way of deciphering the enigma of the pathophysiology of AD and of monitoring the effect of new therapies.

  12. Effect of fringe-artifact correction on sub-tomogram averaging from Zernike phase-plate cryo-TEM

    PubMed Central

    Kishchenko, Gregory P.; Danev, Radostin; Fisher, Rebecca; He, Jie; Hsieh, Chyongere; Marko, Michael; Sui, Haixin

    2015-01-01

    Zernike phase-plate (ZPP) imaging greatly increases contrast in cryo-electron microscopy, however fringe artifacts appear in the images. A computational de-fringing method has been proposed, but it has not been widely employed, perhaps because the importance of de-fringing has not been clearly demonstrated. For testing purposes, we employed Zernike phase-plate imaging in a cryo-electron tomographic study of radial-spoke complexes attached to microtubule doublets. We found that the contrast enhancement by ZPP imaging made nonlinear denoising insensitive to the filtering parameters, such that simple low-frequency band-pass filtering made the same improvement in map quality. We employed sub-tomogram averaging, which compensates for the effect of the “missing wedge” and considerably improves map quality. We found that fringes (caused by the abrupt cut-on of the central hole in the phase plate) can lead to incorrect representation of a structure that is well-known from the literature. The expected structure was restored by amplitude scaling, as proposed in the literature. Our results show that de-fringing is an important part of image-processing for cryo-electron tomography of macromolecular complexes with ZPP imaging. PMID:26210582

  13. Open-source software platform for medical image segmentation applications

    NASA Astrophysics Data System (ADS)

    Namías, R.; D'Amato, J. P.; del Fresno, M.

    2017-11-01

    Segmenting 2D and 3D images is a crucial and challenging problem in medical image analysis. Although several image segmentation algorithms have been proposed for different applications, no universal method currently exists. Moreover, their use is usually limited when detection of complex and multiple adjacent objects of interest is needed. In addition, the continually increasing volumes of medical imaging scans require more efficient segmentation software design and highly usable applications. In this context, we present an extension of our previous segmentation framework which allows the combination of existing explicit deformable models in an efficient and transparent way, handling different segmentation strategies simultaneously and interacting with a graphical user interface (GUI). We present the object-oriented design and the general architecture, which consists of two layers: the GUI at the top layer, and the processing core filters at the bottom layer. We apply the framework to segmenting different real-case medical image scenarios on publicly available datasets, including bladder and prostate segmentation from 2D MRI, and heart segmentation in 3D CT. Our experiments on these concrete problems show that this framework facilitates complex and multi-object segmentation goals while providing a fast-prototyping open-source segmentation tool.

  14. Nanostructures formed by cyclodextrin covered procainamide through supramolecular self assembly - Spectral and molecular modeling study

    NASA Astrophysics Data System (ADS)

    Rajendiran, N.; Mohandoss, T.; Sankaranarayanan, R. K.

    2015-02-01

    The inclusion complexation behavior of procainamide (PCA) with two cyclodextrins (α-CD and β-CD) was analyzed by absorption and fluorescence spectroscopy, scanning electron microscopy (SEM), transmission electron microscopy (TEM), Raman imaging, FT-IR, differential scanning calorimetry (DSC), powder X-ray diffraction (XRD), and 1H NMR. A blue shift was observed in β-CD, whereas no significant spectral shift was observed in α-CD. The inclusion complex formation results suggest that water molecules are also present inside the CD cavity. The present study revealed that the phenyl ring of the PCA drug is entrapped in the CD cavity. The cyclodextrin studies show that PCA forms 1:2 inclusion complexes with α-CD and β-CD. The PCA:α-CD complex forms nano-sized particles (46 nm), whereas the PCA:β-CD complex self-assembles into micro-sized tubular structures. The shape-shifting of 2D nanosheets into 1D microtubes by a simple rolling mechanism was analyzed by micro-Raman and TEM images. Thermodynamic parameters (ΔH, ΔG and ΔS) of the inclusion process were determined from semiempirical PM3 calculations.

  15. Visual Perception-Based Statistical Modeling of Complex Grain Image for Product Quality Monitoring and Supervision on Assembly Production Line

    PubMed Central

    Chen, Qing; Xu, Pengfei; Liu, Wenzhong

    2016-01-01

    Computer vision, as a fast, low-cost, noncontact, and online monitoring technology, has become an important tool for inspecting product quality, particularly on large-scale assembly production lines. However, current industrial vision systems are far from satisfactory in the intelligent perception of complex grain images, which comprise a large number of locally homogeneous fragmentations or patches without distinct foreground and background. We attempt to solve this problem through statistical modeling of the spatial structures of grain images. We first present a physical explanation indicating that the spatial structures of complex grain images follow a representative Weibull distribution, according to the theory of sequential fragmentation, which is well known in the continued comminution of ore grinding. To delineate the spatial structure of the grain image, we present a method of multiscale and omnidirectional Gaussian derivative filtering. Then, a product quality classifier based on a sparse multikernel least squares support vector machine is proposed to solve the low-confidence classification problem of imbalanced data distribution. The proposed method is applied on the assembly line of a food-processing enterprise to automatically classify the production quality of rice. Experiments on this real application case, compared with commonly used methods, illustrate the validity of our method. PMID:26986726
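    The Weibull model of sequential fragmentation referred to above can be exercised on filter responses with a simple moment-matching fit: the coefficient of variation of a Weibull(k, lam) sample depends only on the shape k, so k can be recovered by bisection and lam from the sample mean. A sketch with synthetic data standing in for the Gaussian-derivative filter responses (the parameters are invented; this is not the authors' estimator):

```python
import numpy as np
from math import gamma

def fit_weibull_moments(samples):
    """Moment-matching fit of a Weibull(shape k, scale lam): the coefficient
    of variation cv = std/mean depends only on k, so solve for k by
    bisection, then recover lam from the sample mean."""
    cv = np.std(samples) / np.mean(samples)
    def cv_of(k):
        g1, g2 = gamma(1 + 1 / k), gamma(1 + 2 / k)
        return (g2 / g1 ** 2 - 1) ** 0.5
    lo, hi = 0.1, 20.0                     # cv_of is decreasing in k
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if cv_of(mid) > cv:
            lo = mid                       # cv too high -> need larger shape
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    lam = np.mean(samples) / gamma(1 + 1 / k)
    return k, lam

# synthetic stand-in for Gaussian-derivative responses of a grain image
rng = np.random.default_rng(0)
samples = rng.weibull(1.5, size=20_000) * 2.0   # true shape 1.5, scale 2.0
k, lam = fit_weibull_moments(samples)
```

    In the paper's setting, the fitted (k, lam) per image would then serve as the texture features fed to the quality classifier.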

  16. Neural basis of processing threatening voices in a crowded auditory world

    PubMed Central

    Mothes-Lasch, Martin; Becker, Michael P. I.; Miltner, Wolfgang H. R.

    2016-01-01

    In real world situations, we typically listen to voice prosody against a background crowded with auditory stimuli. Voices and background can both contain behaviorally relevant features and both can be selectively in the focus of attention. Adequate responses to threat-related voices under such conditions require that the brain unmixes reciprocally masked features depending on variable cognitive resources. It is unknown which brain systems instantiate the extraction of behaviorally relevant prosodic features under varying combinations of prosody valence, auditory background complexity and attentional focus. Here, we used event-related functional magnetic resonance imaging to investigate the effects of high background sound complexity and attentional focus on brain activation to angry and neutral prosody in humans. Results show that prosody effects in mid superior temporal cortex were gated by background complexity but not attention, while prosody effects in the amygdala and anterior superior temporal cortex were gated by attention but not background complexity, suggesting distinct emotional prosody processing limitations in different regions. Crucially, if attention was focused on the highly complex background, the differential processing of emotional prosody was prevented in all brain regions, suggesting that in a distracting, complex auditory world even threatening voices may go unnoticed. PMID:26884543

  17. A comparison of neural network and fuzzy clustering techniques in segmenting magnetic resonance images of the brain.

    PubMed

    Hall, L O; Bensaid, A M; Clarke, L P; Velthuizen, R P; Silbiger, M S; Bezdek, J C

    1992-01-01

    Magnetic resonance (MR) brain section images are segmented and then synthetically colored to give visual representations of the original data with three approaches: the literal and approximate fuzzy c-means unsupervised clustering algorithms, and a supervised computational neural network. Initial clinical results are presented on normal volunteers and selected patients with brain tumors surrounded by edema. Supervised and unsupervised segmentation techniques provide broadly similar results. Unsupervised fuzzy algorithms were visually observed to show better segmentation when compared with raw image data for volunteer studies. For a more complex segmentation problem with tumor/edema or cerebrospinal fluid boundary, where the tissues have similar MR relaxation behavior, inconsistency in rating among experts was observed, with fuzzy c-means approaches being slightly preferred over feedforward cascade correlation results. Various facets of both approaches, such as supervised versus unsupervised learning, time complexity, and utility for the diagnostic process, are compared.
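    The literal fuzzy c-means algorithm the entry compares has compact update equations; the sketch below is illustrative (variable names, the fuzzifier m=2, and the toy 1-D "intensity" data are ours, not from the paper):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Basic (literal) fuzzy c-means: returns memberships U (c x N) and centers V."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    U = rng.random((c, N))
    U /= U.sum(axis=0)                      # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        V = (Um @ X) / Um.sum(axis=1, keepdims=True)        # weighted centers
        D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2)  # c x N
        D = np.maximum(D, 1e-12)            # guard against zero distance
        inv = D ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0)           # standard FCM membership update
    return U, V

# Two well-separated 1-D intensity clusters
X = np.array([[0.1], [0.0], [0.2], [5.0], [5.1], [4.9]])
U, V = fuzzy_c_means(X)
labels = U.argmax(axis=0)
```

    Thresholding the soft memberships (here via argmax) yields the hard segmentation that would be colored for display.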

  18. Clustered functional MRI of overt speech production.

    PubMed

    Sörös, Peter; Sokoloff, Lisa Guttman; Bose, Arpita; McIntosh, Anthony R; Graham, Simon J; Stuss, Donald T

    2006-08-01

    To investigate the neural network of overt speech production, event-related fMRI was performed in 9 young healthy adult volunteers. A clustered image acquisition technique was chosen to minimize speech-related movement artifacts. Functional images were acquired during the production of oral movements and of speech of increasing complexity (isolated vowel as well as monosyllabic and trisyllabic utterances). This imaging technique and behavioral task enabled depiction of the articulo-phonologic network of speech production from the supplementary motor area at the cranial end to the red nucleus at the caudal end. Speaking a single vowel and performing simple oral movements involved very similar activation of the cortical and subcortical motor systems. More complex, polysyllabic utterances were associated with additional activation in the bilateral cerebellum, reflecting increased demand on speech motor control, and additional activation in the bilateral temporal cortex, reflecting the stronger involvement of phonologic processing.

  19. TASI: A software tool for spatial-temporal quantification of tumor spheroid dynamics.

    PubMed

    Hou, Yue; Konen, Jessica; Brat, Daniel J; Marcus, Adam I; Cooper, Lee A D

    2018-05-08

    Spheroid cultures derived from explanted cancer specimens are an increasingly utilized resource for studying complex biological processes like tumor cell invasion and metastasis, representing an important bridge between the simplicity and practicality of 2-dimensional monolayer cultures and the complexity and realism of in vivo animal models. Temporal imaging of spheroids can capture the dynamics of cell behaviors and microenvironments, and when combined with quantitative image analysis methods, enables deep interrogation of biological mechanisms. This paper presents a comprehensive open-source software framework for Temporal Analysis of Spheroid Imaging (TASI) that allows investigators to objectively characterize spheroid growth and invasion dynamics. TASI performs spatiotemporal segmentation of spheroid cultures, extraction of features describing spheroid morpho-phenotypes, mathematical modeling of spheroid dynamics, and statistical comparisons of experimental conditions. We demonstrate the utility of this tool in an analysis of non-small cell lung cancer spheroids that exhibit variability in metastatic and proliferative behaviors.

  20. Empirical validation of the triple-code model of numerical processing for complex math operations using functional MRI and group Independent Component Analysis of the mental addition and subtraction of fractions.

    PubMed

    Schmithorst, Vincent J; Brown, Rhonda Douglas

    2004-07-01

    The suitability of a previously hypothesized triple-code model of numerical processing, involving analog magnitude, auditory verbal, and visual Arabic codes of representation, was investigated for the complex mathematical task of the mental addition and subtraction of fractions. Functional magnetic resonance imaging (fMRI) data from 15 normal adult subjects were processed using exploratory group Independent Component Analysis (ICA). Separate task-related components were found with activation in bilateral inferior parietal, left perisylvian, and ventral occipitotemporal areas. These results support the hypothesized triple-code model corresponding to the activated regions found in the individual components and indicate that the triple-code model may be a suitable framework for analyzing the neuropsychological bases of the performance of complex mathematical tasks. Copyright 2004 Elsevier Inc.

  1. Multispectral mapping of the lunar surface using groundbased telescopes

    NASA Technical Reports Server (NTRS)

    Mccord, T. B.; Pieters, C.; Feirberg, M. A.

    1976-01-01

    Images of the lunar surface were obtained at several wavelengths using a silicon vidicon imaging system and groundbased telescopes. These images were recorded and processed in digital form so that quantitative information is preserved. The photometric precision of the images is shown to be better than 1 percent. Ratio images calculated by dividing images obtained at two wavelengths (0.40/0.56 and 0.95/0.56 micrometer) are presented for about 50 percent of the lunar frontside. Spatial resolution is about 2 km at the sub-earth point. A complex of distinct units is evident in the images. Earlier work with the reflectance spectrum of lunar materials indicates that for the most part these units are compositionally distinct. Digital images of this precision are extremely useful to lunar geologists in disentangling the history of the lunar surface.
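    The ratio-image construction is simple to express; a hedged sketch (band names and the epsilon guard are ours), useful because dividing co-registered bands suppresses shared albedo and illumination variations while preserving spectral differences:

```python
import numpy as np

def ratio_image(band_a, band_b, eps=1e-6):
    """Pixelwise ratio of two co-registered spectral bands, guarding against
    division by zero in dark pixels."""
    a = band_a.astype(float)
    b = np.maximum(band_b.astype(float), eps)
    return a / b
```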

  2. The semiotics of medical image Segmentation.

    PubMed

    Baxter, John S H; Gibson, Eli; Eagleson, Roy; Peters, Terry M

    2018-02-01

    As the interaction between clinicians and computational processes increases in complexity, more nuanced mechanisms are required to describe how their communication is mediated. Medical image segmentation in particular affords a large number of distinct loci for interaction, which can act on a deep, knowledge-driven level, complicating the naive interpretation of the computer as a symbol-processing machine. Using the perspective of the computer as dialogue partner, we can motivate the semiotic understanding of medical image segmentation. Taking advantage of Peircean semiotic traditions and new philosophical inquiry into the structure and quality of metaphors, we can construct a unified framework for the interpretation of medical image segmentation as a sign exchange in which each sign acts as an interface metaphor. This allows for a notion of finite semiosis, described through a schematic medium, that can rigorously describe how clinicians and computers interpret the signs mediating their interaction. Altogether, this framework provides a unified approach to the understanding and development of medical image segmentation interfaces. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. MuLoG, or How to Apply Gaussian Denoisers to Multi-Channel SAR Speckle Reduction?

    PubMed

    Deledalle, Charles-Alban; Denis, Loic; Tabti, Sonia; Tupin, Florence

    2017-09-01

    Speckle reduction is a longstanding topic in synthetic aperture radar (SAR) imaging. Since most current and planned SAR imaging satellites operate in polarimetric, interferometric, or tomographic modes, SAR images are multi-channel and speckle reduction techniques must jointly process all channels to recover polarimetric and interferometric information. The distinctive nature of the SAR signal (complex-valued, corrupted by multiplicative fluctuations) calls for the development of specialized methods for speckle reduction. Image denoising is a very active topic in image processing with a wide variety of approaches and many denoising algorithms available, almost always designed for additive Gaussian noise suppression. This paper proposes a general scheme, called MuLoG (MUlti-channel LOgarithm with Gaussian denoising), to include such Gaussian denoisers within a multi-channel SAR speckle reduction technique. A new family of speckle reduction algorithms can thus be obtained, benefiting from the ongoing progress in Gaussian denoising and offering several speckle reduction results whose method-specific artifacts can be identified, and discounted, by comparing results across denoisers.
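    The core idea — a log transform turns multiplicative speckle into approximately additive noise, so any Gaussian denoiser can be plugged in — can be sketched for a single real-valued channel. This simplifies MuLoG considerably (the actual method handles complex multi-channel covariance matrices and debiasing); the box-filter "denoiser" is a stand-in for any Gaussian denoiser:

```python
import numpy as np

def despeckle_log_domain(intensity, denoiser):
    """Log transform -> Gaussian-style denoiser -> exponentiate back.
    Single-channel sketch of the MuLoG plug-in principle."""
    log_img = np.log(np.maximum(intensity, 1e-12))
    return np.exp(denoiser(log_img))

def mean_filter(img, r=1):
    """Stand-in denoiser: simple box average with edge padding."""
    p = np.pad(img, r, mode='edge')
    out = np.zeros_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = p[i:i + 2*r + 1, j:j + 2*r + 1].mean()
    return out
```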

  4. The Effect of Mental Rotation on Surgical Pathological Diagnosis.

    PubMed

    Park, Heejung; Kim, Hyun Soo; Cha, Yoon Jin; Choi, Junjeong; Minn, Yangki; Kim, Kyung Sik; Kim, Se Hoon

    2018-05-01

    Pathological diagnosis involves very delicate and complex consequent processing that is conducted by a pathologist. The recognition of false patterns might be an important cause of misdiagnosis in the field of surgical pathology. In this study, we evaluated the influence of visual and cognitive bias in surgical pathologic diagnosis, focusing on the influence of "mental rotation." We designed three sets of the same images of biopsied uterine cervix specimens (original, left-to-right mirror images, and 180-degree rotated images), and recruited 32 pathologists to diagnose the 3 set items individually. First, the items were found to be adequate for analysis by classical test theory, generalizability theory, and item response theory. The results showed no statistically significant differences in difficulty, discrimination indices, or response duration time between the image sets. Mental rotation did not influence the pathologists' diagnosis in practice. Interestingly, outliers were more frequent in rotated image sets, suggesting that the mental rotation process may influence the pathological diagnoses of a few individual pathologists. © Copyright: Yonsei University College of Medicine 2018.

  5. Navigable points estimation for mobile robots using binary image skeletonization

    NASA Astrophysics Data System (ADS)

    Martinez S., Fernando; Jacinto G., Edwar; Montiel A., Holman

    2017-02-01

    This paper describes the use of image skeletonization to estimate all the navigable points inside a mobile robot navigation scene. Those points are used for computing a valid navigation path with standard methods. The main idea is to find the middle and extreme points of the obstacles in the scene, taking into account the robot size, and to create a map of navigable points in order to reduce the amount of information passed to the planning algorithm. Those points are located by means of the skeletonization of a binary image of the obstacles and the scene background, along with some other digital image processing algorithms. The proposed algorithm automatically gives a variable number of navigable points per obstacle, depending on the complexity of its shape. We also show how some of the algorithm's parameters can be adjusted to change the final number of resulting key points. The results shown here were obtained by applying different kinds of digital image processing algorithms to static scenes.
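    As a simplified stand-in for the skeleton-based estimation (full binary-image skeletonization by morphological thinning is more involved), the sketch below takes the midpoint of each sufficiently wide free-space run in every row of an occupancy grid; the function name and the min_gap parameter (a proxy for robot size) are assumptions:

```python
import numpy as np

def navigable_midpoints(occupancy, min_gap=2):
    """Midpoints of free-space runs per row of an occupancy grid.
    occupancy: 2-D array, 1 = obstacle, 0 = free space."""
    points = []
    for i, row in enumerate(occupancy):
        j, W = 0, len(row)
        while j < W:
            if row[j] == 0:
                start = j
                while j < W and row[j] == 0:   # scan the free-space run
                    j += 1
                if j - start >= min_gap:       # wide enough for the robot
                    points.append((i, (start + j - 1) // 2))
            else:
                j += 1
    return points
```

    For a straight corridor the midpoints line up along its center, which is the kind of reduced waypoint map a planner would consume.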

  6. Fast segmentation of stained nuclei in terabyte-scale, time resolved 3D microscopy image stacks.

    PubMed

    Stegmaier, Johannes; Otte, Jens C; Kobitski, Andrei; Bartschat, Andreas; Garcia, Ariel; Nienhaus, G Ulrich; Strähle, Uwe; Mikut, Ralf

    2014-01-01

    Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in life science. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for a large amount of data due to their high complexity. In this contribution we present a fast parallelized segmentation method that is especially suited for the extraction of stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image based on gradient and normal directions in the proximity of detected seed points such that it can be handled by straightforward global thresholding like Otsu's method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm's superior performance compared to other state-of-the-art algorithms. We achieve an up to ten-fold decrease in processing times, allowing us to process large data sets while still providing reasonable segmentation results.
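    The global thresholding step the authors reduce their transformed images to — Otsu's method — picks the level that maximizes between-class variance; a self-contained sketch of that standard formula (not the paper's code):

```python
import numpy as np

def otsu_threshold(img, n_bins=256):
    """Global Otsu threshold: the level maximizing between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=n_bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                    # class-0 probability up to each level
    mu = np.cumsum(p * centers)          # cumulative mean
    mu_T = mu[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_T * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b = np.nan_to_num(sigma_b)     # degenerate splits get zero variance
    return centers[np.argmax(sigma_b)]
```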

  7. Application of process tomography in gas-solid fluidised beds in different scales and structures

    NASA Astrophysics Data System (ADS)

    Wang, H. G.; Che, H. Q.; Ye, J. M.; Tu, Q. Y.; Wu, Z. P.; Yang, W. Q.; Ocone, R.

    2018-04-01

    Gas-solid fluidised beds are commonly used in particle-related processes, e.g. for coal combustion and gasification in the power industry, and the coating and granulation process in the pharmaceutical industry. Because the operation efficiency depends on the gas-solid flow characteristics, it is necessary to investigate the flow behaviour. This paper is about the application of process tomography, including electrical capacitance tomography (ECT) and microwave tomography (MWT), in multi-scale gas-solid fluidisation processes in the pharmaceutical and power industries. This is the first time that both ECT and MWT have been applied for this purpose in multi-scale and complex structure. To evaluate the sensor design and image reconstruction and to investigate the effects of sensor structure and dimension on the image quality, a normalised sensitivity coefficient is introduced. In the meantime, computational fluid dynamic (CFD) analysis based on a computational particle fluid dynamic (CPFD) model and a two-phase fluid model (TFM) is used. Part of the CPFD-TFM simulation results are compared and validated by experimental results from ECT and/or MWT. By both simulation and experiment, the complex flow hydrodynamic behaviour in different scales is analysed. Time-series capacitance data are analysed both in time and frequency domains to reveal the flow characteristics.

  8. A Review of Tensors and Tensor Signal Processing

    NASA Astrophysics Data System (ADS)

    Cammoun, L.; Castaño-Moraga, C. A.; Muñoz-Moreno, E.; Sosa-Cabrera, D.; Acar, B.; Rodriguez-Florido, M. A.; Brun, A.; Knutsson, H.; Thiran, J. P.

    Tensors have been broadly used in mathematics and physics, since they are a generalization of scalars or vectors and allow more complex properties to be represented. In this chapter we present an overview of some tensor applications, especially those focused on the image processing field. From a mathematical point of view, a great deal of work has been done on tensor calculus, which obviously is more complex than scalar or vectorial calculus. Moreover, tensors can represent the metric of a vector space, which is very useful in the field of differential geometry. In physics, tensors have been used to describe several magnitudes, such as the strain or stress of materials. In solid mechanics, tensors are used to define the generalized Hooke’s law, where a fourth-order tensor relates the strain and stress tensors. In fluid dynamics, the velocity gradient tensor provides information about the vorticity and the strain of the fluids. An electromagnetic tensor is also defined, which simplifies the notation of the Maxwell equations. But tensors are not constrained to physics and mathematics. They have been used, for instance, in medical imaging, where we can highlight two applications: the diffusion tensor image, which represents how molecules diffuse inside the tissues and is broadly used for brain imaging; and tensorial elastography, which computes the strain and vorticity tensors to analyze tissue properties. Tensors have also been used in computer vision to provide information about the local structure or to define anisotropic image filters.
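    As a concrete example of extracting a scalar property from a medical-imaging tensor, the fractional anisotropy (FA) of a diffusion tensor is computed from its eigenvalues; this standard formula is shown as an illustration (it is not code from the chapter):

```python
import numpy as np

def fractional_anisotropy(D):
    """Fractional anisotropy of a 3x3 symmetric diffusion tensor:
    0 for isotropic diffusion, approaching 1 for diffusion along one axis."""
    lam = np.linalg.eigvalsh(D)          # eigenvalues of the symmetric tensor
    md = lam.mean()                      # mean diffusivity
    num = np.sqrt(((lam - md) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    if den == 0:
        return 0.0
    return np.sqrt(1.5) * num / den
```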

  9. High-cadence Imaging and Imaging Spectroscopy at the GREGOR Solar Telescope—A Collaborative Research Environment for High-resolution Solar Physics

    NASA Astrophysics Data System (ADS)

    Denker, Carsten; Kuckein, Christoph; Verma, Meetu; González Manrique, Sergio J.; Diercke, Andrea; Enke, Harry; Klar, Jochen; Balthasar, Horst; Louis, Rohan E.; Dineva, Ekaterina

    2018-05-01

    In high-resolution solar physics, the volume and complexity of photometric, spectroscopic, and polarimetric ground-based data significantly increased in the last decade, reaching data acquisition rates of terabytes per hour. This is driven by the desire to capture fast processes on the Sun and the necessity for short exposure times “freezing” the atmospheric seeing, thus enabling ex post facto image restoration. Consequently, large-format and high-cadence detectors are nowadays used in solar observations to facilitate image restoration. Based on our experience during the “early science” phase with the 1.5 m GREGOR solar telescope (2014–2015) and the subsequent transition to routine observations in 2016, we describe data collection and data management tailored toward image restoration and imaging spectroscopy. We outline our approaches regarding data processing, analysis, and archiving for two of GREGOR’s post-focus instruments (see http://gregor.aip.de), i.e., the GREGOR Fabry–Pérot Interferometer (GFPI) and the newly installed High-Resolution Fast Imager (HiFI). The heterogeneous and complex nature of multidimensional data arising from high-resolution solar observations provides an intriguing but also a challenging example for “big data” in astronomy. The big data challenge has two aspects: (1) establishing a workflow for publishing the data for the whole community and beyond and (2) creating a collaborative research environment (CRE), where computationally intense data and postprocessing tools are colocated and collaborative work is enabled for scientists of multiple institutes. This requires either collaboration with a data center or frameworks and databases capable of dealing with huge data sets based on virtual observatory (VO) and other community standards and procedures.

  10. Medical Image Encryption: An Application for Improved Padding Based GGH Encryption Algorithm

    PubMed Central

    Sokouti, Massoud; Zakerolhosseini, Ali; Sokouti, Babak

    2016-01-01

    Medical images are regarded as important and sensitive data in the medical informatics systems. For transferring medical images over an insecure network, developing a secure encryption algorithm is necessary. Among the three main properties of security services (i.e., confidentiality, integrity, and availability), the confidentiality is the most essential feature for exchanging medical images among physicians. The Goldreich Goldwasser Halevi (GGH) algorithm can be a good choice for encrypting medical images as both the algorithm and sensitive data are represented by numeric matrices. Additionally, the GGH algorithm does not increase the size of the image and hence, its complexity will remain as simple as O(n²). However, one of the disadvantages of using the GGH algorithm is the Chosen Cipher Text attack. In our strategy, this shortcoming of the GGH algorithm has been taken into consideration and has been improved by applying the padding (i.e., snail tour XORing) before the GGH encryption process. For evaluating their performances, three measurement criteria are considered including (i) Number of Pixels Change Rate (NPCR), (ii) Unified Average Changing Intensity (UACI), and (iii) Avalanche effect. The results on three different sizes of images showed that the padding GGH approach has improved UACI, NPCR, and Avalanche by almost 100%, 35%, and 45%, respectively, in comparison to the standard GGH algorithm. Also, the outcomes will make the padding GGH resist cipher-text, chosen-cipher-text, and statistical attacks. Furthermore, increasing the avalanche effect by more than 50% is a promising achievement in comparison to the increased complexities of the proposed method in terms of encryption and decryption processes. PMID:27857824
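    Two of the evaluation criteria, NPCR and UACI, have standard definitions that are easy to state; a sketch for 8-bit images (the max_val of 255 is an assumption about the pixel depth):

```python
import numpy as np

def npcr(c1, c2):
    """Number of Pixels Change Rate between two cipher images (percent):
    the fraction of pixel positions whose values differ."""
    return 100.0 * np.mean(c1 != c2)

def uaci(c1, c2, max_val=255.0):
    """Unified Average Changing Intensity (percent): mean absolute pixel
    difference normalized by the maximum intensity."""
    return 100.0 * np.mean(np.abs(c1.astype(float) - c2.astype(float)) / max_val)
```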

  11. Real-time field programmable gate array architecture for computer vision

    NASA Astrophysics Data System (ADS)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar

    2001-01-01

    This paper presents an architecture for real-time generic convolution of a mask and an image. The architecture is intended for fast low-level image processing. The field programmable gate array (FPGA)-based architecture takes advantage of the availability of registers in FPGAs to implement an efficient and compact module to process the convolutions. The architecture is designed to minimize the number of accesses to the image memory and it is based on parallel modules with internal pipeline operation in order to improve its performance. The architecture is prototyped in an FPGA, but it can be implemented on dedicated very-large-scale integration (VLSI) devices to reach higher clock frequencies. Complexity issues, FPGA resources utilization, FPGA limitations, and real-time performance are discussed. Some results are presented and discussed.
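    A software model of the generic mask convolution such an architecture accelerates; in hardware, shift-register line buffers let each pixel be fetched from image memory once, which this naive sketch does not model (and it computes the correlation form — flip the mask for strict convolution):

```python
import numpy as np

def convolve2d(img, mask):
    """Generic windowed mask operation: each output pixel is the weighted sum
    of the window under the mask (zero padding at the border)."""
    kh, kw = mask.shape
    rh, rw = kh // 2, kw // 2
    p = np.pad(img.astype(float), ((rh, rh), (rw, rw)), mode='constant')
    H, W = img.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(p[i:i + kh, j:j + kw] * mask)
    return out
```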

  12. Biological applications of phase-contrast electron microscopy.

    PubMed

    Nagayama, Kuniaki

    2014-01-01

    Here, I review the principles and applications of phase-contrast electron microscopy using phase plates. First, I develop the principle of phase contrast based on a minimal model of microscopy, introducing a double Fourier-transform process to mathematically formulate the image formation. Next, I explain four phase-contrast (PC) schemes, defocus PC, Zernike PC, Hilbert differential contrast, and schlieren optics, as image-filtering processes in the context of the minimal model, with particular emphases on the Zernike PC and corresponding Zernike phase plates. Finally, I review applications of Zernike PC cryo-electron microscopy to biological systems such as protein molecules, virus particles, and cells, including single-particle analysis to delineate three-dimensional (3D) structures of protein and virus particles and cryo-electron tomography to reconstruct 3D images of complex protein systems and cells.
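    The double Fourier-transform formulation of Zernike phase contrast can be simulated numerically: transform the exit wave, phase-shift the unscattered beam by about π/2, and transform back. This toy model shifts only the zero-frequency component and ignores aberrations and the finite phase-plate size, so it is an illustration of the principle rather than a microscope simulation:

```python
import numpy as np

def zernike_phase_contrast(phase, shift=np.pi / 2):
    """Image intensity of a pure phase object under an idealized Zernike
    phase plate acting on the central (unscattered) beam only."""
    wave = np.exp(1j * phase)               # unit-amplitude exit wave
    F = np.fft.fft2(wave)
    F[0, 0] *= np.exp(1j * shift)           # phase plate shifts the central beam
    img_wave = np.fft.ifft2(F)              # second Fourier transform
    return np.abs(img_wave) ** 2

# Weak phase object: a small Gaussian phase bump
y, x = np.mgrid[-16:16, -16:16]
phase = 0.1 * np.exp(-(x**2 + y**2) / 20.0)
i_zernike = zernike_phase_contrast(phase)
i_bright = zernike_phase_contrast(phase, shift=0.0)  # no phase plate
```

    Without the plate, a pure phase object produces an essentially uniform intensity; with the plate, the phase distribution becomes visible as intensity contrast.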

  13. Modern Techniques in Acoustical Signal and Image Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candy, J V

    2002-04-04

    Acoustical signal processing problems can lead to some complex and intricate techniques to extract the desired information from noisy, sometimes inadequate, measurements. The challenge is to formulate a meaningful strategy that is aimed at performing the processing required even in the face of uncertainties. This strategy can be as simple as a transformation of the measured data to another domain for analysis or as complex as embedding a full-scale propagation model into the processor. The aims of both approaches are the same: to extract the desired information and reject the extraneous, that is, develop a signal processing scheme to achieve this goal. In this paper, we briefly discuss this underlying philosophy from a "bottom-up" approach, enabling the problem to dictate the solution rather than vice versa.

  14. Retinal Connectomics: Towards Complete, Accurate Networks

    PubMed Central

    Marc, Robert E.; Jones, Bryan W.; Watt, Carl B.; Anderson, James R.; Sigulinsky, Crystal; Lauritzen, Scott

    2013-01-01

    Connectomics is a strategy for mapping complex neural networks based on high-speed automated electron optical imaging, computational assembly of neural data volumes, web-based navigational tools to explore 1012–1015 byte (terabyte to petabyte) image volumes, and annotation and markup tools to convert images into rich networks with cellular metadata. These collections of network data and associated metadata, analyzed using tools from graph theory and classification theory, can be merged with classical systems theory, giving a more completely parameterized view of how biologic information processing systems are implemented in retina and brain. Networks have two separable features: topology and connection attributes. The first findings from connectomics strongly validate the idea that the topologies of complete retinal networks are far more complex than the simple schematics that emerged from classical anatomy. In particular, connectomics has permitted an aggressive refactoring of the retinal inner plexiform layer, demonstrating that network function cannot be simply inferred from stratification; exposing the complex geometric rules for inserting different cells into a shared network; revealing unexpected bidirectional signaling pathways between mammalian rod and cone systems; documenting selective feedforward systems, novel candidate signaling architectures, new coupling motifs, and the highly complex architecture of the mammalian AII amacrine cell. This is but the beginning, as the underlying principles of connectomics are readily transferrable to non-neural cell complexes and provide new contexts for assessing intercellular communication. PMID:24016532

  15. Examples of Current and Future Uses of Neural-Net Image Processing for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.

    2004-01-01

    Feed forward artificial neural networks are very convenient for performing correlated interpolation of pairs of complex noisy data sets as well as detecting small changes in image data. Image-to-image, image-to-variable and image-to-index applications have been tested at Glenn. Early demonstration applications are summarized including image-directed alignment of optics, tomography, flow-visualization control of wind-tunnel operations and structural-model-trained neural networks. A practical application is reviewed that employs neural-net detection of structural damage from interference fringe patterns. Both sensor-based and optics-only calibration procedures are available for this technique. These accomplishments have generated the knowledge necessary to suggest some other applications for NASA and Government programs. A tomography application is discussed to support Glenn's Icing Research tomography effort. The self-regularizing capability of a neural net is shown to predict the expected performance of the tomography geometry and to augment fast data processing. Other potential applications involve the quantum technologies. It may be possible to use a neural net as an image-to-image controller of an optical tweezers being used for diagnostics of isolated nano structures. The image-to-image transformation properties also offer the potential for simulating quantum computing. Computer resources are detailed for implementing the black box calibration features of the neural nets.

  16. Translational research of optical molecular imaging for personalized medicine.

    PubMed

    Qin, C; Ma, X; Tian, J

    2013-12-01

    In the medical imaging field, molecular imaging is a rapidly developing discipline that comprises many imaging modalities, providing us with effective tools to visualize, characterize, and measure molecular and cellular mechanisms in complex biological processes of living organisms, which can deepen our understanding of biology and accelerate preclinical research including cancer study and medicine discovery. Among many molecular imaging modalities, although the penetration depth of optical imaging and the approved optical probes used in the clinic are limited, it has evolved considerably and has seen spectacular advances in basic biomedical research and new drug development. With the completion of human genome sequencing and the emergence of personalized medicine, the specific drug should be matched not only to the right disease but also to the right person, and optical molecular imaging should serve as a strong adjunct to develop personalized medicine by finding the optimal drug based on an individual's proteome and genome. In this process, the computational methodology and imaging system as well as the biomedical application of optical molecular imaging will play a crucial role. This review focuses on recent representative translational studies of optical molecular imaging for personalized medicine, preceded by a concise introduction. Finally, the current challenges and future development of optical molecular imaging are discussed from the authors' perspective to conclude the review.

  17. Three-Dimensional Terahertz Coded-Aperture Imaging Based on Single Input Multiple Output Technology.

    PubMed

    Chen, Shuo; Luo, Chenggao; Deng, Bin; Wang, Hongqiang; Cheng, Yongqiang; Zhuang, Zhaowen

    2018-01-19

    As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, and staring imaging by producing spatiotemporally independent signals with coded apertures. In this paper, we propose a three-dimensional (3D) TCAI architecture based on single input multiple output (SIMO) technology, which can reduce the coding and sampling times sharply. The coded aperture applied in the proposed TCAI architecture loads either a purposive or a random phase modulation factor. In the transmitting process, the purposive phase modulation factor drives the terahertz beam to scan the divided 3D imaging cells. In the receiving process, the random phase modulation factor is adopted to modulate the terahertz wave to be spatiotemporally independent for high resolution. Considering human-scale targets, images of each 3D imaging cell are reconstructed one by one to decompose the global computational complexity, and then are synthesized together to obtain the complete high-resolution image. As for each imaging cell, the multi-resolution imaging method helps to reduce the computational burden on a large-scale reference-signal matrix. The experimental results demonstrate that the proposed architecture can achieve high-resolution imaging with much less time for 3D targets and has great potential in applications such as security screening, nondestructive detection, medical diagnosis, etc.
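    The role of the random phase modulation factor can be illustrated with a toy measurement model y = Ax, where each row of A is one spatiotemporally independent coding of the scene cells; with more codings than cells, a least-squares solve recovers the scene. All sizes, names, and the noiseless assumption are ours, not the paper's:

```python
import numpy as np

def simulate_measurements(A, x, noise=0.0, seed=0):
    """y = A x: each measurement is the scene projected through one random-phase coding."""
    rng = np.random.default_rng(seed)
    return A @ x + noise * (rng.normal(size=A.shape[0]) + 1j * rng.normal(size=A.shape[0]))

rng = np.random.default_rng(1)
n_cells, n_meas = 16, 32                    # more codings than imaging cells
A = np.exp(1j * 2 * np.pi * rng.random((n_meas, n_cells)))  # random phase modulation
x_true = np.zeros(n_cells)
x_true[[3, 7]] = 1.0                        # two point scatterers
y = simulate_measurements(A, x_true)
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

    Reconstructing each 3D cell separately, as the paper does, amounts to solving many such small systems instead of one huge one, which is where the computational savings come from.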

  18. HALO: a reconfigurable image enhancement and multisensor fusion system

    NASA Astrophysics Data System (ADS)

    Wu, F.; Hickman, D. L.; Parker, Steve J.

    2014-06-01

    Contemporary high definition (HD) cameras and affordable infrared (IR) imagers are set to dramatically improve the effectiveness of security, surveillance and military vision systems. However, the quality of imagery is often compromised by camera shake, or poor scene visibility due to inadequate illumination or bad atmospheric conditions. A versatile vision processing system called HALO™ is presented that can address these issues, by providing flexible image processing functionality on a low size, weight and power (SWaP) platform. Example processing functions include video distortion correction, stabilisation, multi-sensor fusion and image contrast enhancement (ICE). The system is based around an all-programmable system-on-a-chip (SoC), which combines the computational power of a field-programmable gate array (FPGA) with the flexibility of a CPU. The FPGA accelerates computationally intensive real-time processes, whereas the CPU provides management and decision making functions that can automatically reconfigure the platform based on user input and scene content. These capabilities enable a HALO™ equipped reconnaissance or surveillance system to operate in poor visibility, providing potentially critical operational advantages in visually complex and challenging usage scenarios. The choice of an FPGA based SoC is discussed, and the HALO™ architecture and its implementation are described. The capabilities of image distortion correction, stabilisation, fusion and ICE are illustrated using laboratory and trials data.

  19. A novel configurable VLSI architecture design of window-based image processing method

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Sang, Hongshi; Shen, Xubang

    2018-03-01

    Most window-based image processing architectures support only a specific class of algorithms, such as 2D convolution, and therefore lack flexibility and breadth of application. In addition, improper handling of the image boundary can cause loss of accuracy or consume extra logic resources. To address these problems, this paper proposes a new configurable VLSI architecture for window-based image processing operations that explicitly accounts for the image boundary. An efficient technique manages the image borders by overlapping and flushing phases at the end of each row and the end of each frame, introducing no additional delay and reducing overhead in real-time applications. Reuse of on-chip memory data is maximized in order to reduce hardware complexity and external bandwidth requirements. Different scalar function and reduction function operations can be performed in a pipeline, supporting a variety of window-based image processing applications. Compared with previously reported structures, the new structure performs comparably to some and outperforms others; in particular, it is approximately 12.9% faster than the systolic array processor CWP at the same clock frequency. The proposed parallel VLSI architecture was implemented in SMIC 0.18-μm CMOS technology; the maximum clock frequency, power consumption, and area are 125 MHz, 57 mW, and 104.8K gates, respectively, and the processing time is independent of the particular window-based algorithm mapped to the structure.
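
    The scalar-function/reduction-function decomposition of window-based operators can be sketched in software (a hypothetical Python model of the datapath, not the VLSI implementation; edge padding stands in for the overlap-and-flush border handling):

```python
import numpy as np

def window_op(img, k, scalar_fn, reduce_fn, pad_mode="edge"):
    """Generic window-based operator: apply scalar_fn to the pixels of
    each k x k window, then collapse the window with reduce_fn. Border
    pixels are handled by padding instead of shrinking the output."""
    r = k // 2
    padded = np.pad(img, r, mode=pad_mode)
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = scalar_fn(padded[i:i + k, j:j + k])
            out[i, j] = reduce_fn(win)
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
# 3x3 mean filter = identity scalar function + mean reduction.
blurred = window_op(img, 3, lambda w: w, np.mean)
# 3x3 erosion = identity scalar function + min reduction,
# reusing the same datapath with a different reduction.
eroded = window_op(img, 3, lambda w: w, np.min)
```

    Swapping only `scalar_fn` and `reduce_fn` changes the algorithm while the window/boundary machinery stays fixed, which mirrors the configurability claim.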

  20. Application of Laser Imaging for Bio/geophysical Studies

    NASA Technical Reports Server (NTRS)

    Hummel, J. R.; Goltz, S. M.; Depiero, N. L.; Degloria, D. P.; Pagliughi, F. M.

    1992-01-01

    SPARTA, Inc. has developed a low-cost, portable laser imager that, among other applications, can be used in bio/geophysical applications. In the application to be discussed here, the system was utilized as an imaging system for background features in a forested locale. The SPARTA mini-ladar system was used at the International Paper Northern Experimental Forest near Howland, Maine to assist in a project designed to study the thermal and radiometric phenomenology at forest edges. The imager was used to obtain data from three complex sites, a 'seed' orchard, a forest edge, and a building. The goal of the study was to demonstrate the usefulness of the laser imager as a tool to obtain geometric and internal structure data about complex 3-D objects in a natural background. The data from these images have been analyzed to obtain information about the distributions of the objects in a scene. A range detection algorithm has been used to identify individual objects in a laser image and an edge detection algorithm then applied to highlight the outlines of discrete objects. An example of an image processed in such a manner is shown. Described here are the results from the study. In addition, results are presented outlining how the laser imaging system could be used to obtain other important information about bio/geophysical systems, such as the distribution of woody material in forests.
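
    A minimal sketch of the two-stage idea, range detection followed by edge detection, on a synthetic range image (the actual algorithms used in the study are not described in detail, so both functions here are illustrative):

```python
import numpy as np

def segment_by_range(rng_img, max_range):
    """Objects return shorter ranges than the background, so a simple
    range gate separates them (a stand-in for the study's range
    detection algorithm)."""
    return rng_img < max_range

def outline(mask):
    """Mark edge pixels: foreground pixels with at least one
    background 4-neighbour (a minimal edge detector)."""
    padded = np.pad(mask, 1, mode="constant", constant_values=False)
    core = padded[1:-1, 1:-1]
    nb = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
          padded[1:-1, :-2] & padded[1:-1, 2:])
    return core & ~nb

ranges = np.full((5, 5), 50.0)      # background at 50 m
ranges[1:4, 1:4] = 12.0             # a discrete object at 12 m
mask = segment_by_range(ranges, 30.0)
edges = outline(mask)
```

    On the toy scene the object's 3x3 footprint is detected and only its 8 boundary pixels are highlighted, leaving the interior unmarked.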

  1. Adaptive optics imaging of geographic atrophy.

    PubMed

    Gocho, Kiyoko; Sarda, Valérie; Falah, Sabrina; Sahel, José-Alain; Sennlaub, Florian; Benchaboune, Mustapha; Ullern, Martine; Paques, Michel

    2013-05-01

    To report the findings of en face adaptive optics (AO) near infrared (NIR) reflectance fundus flood imaging in eyes with geographic atrophy (GA). Observational clinical study of AO NIR fundus imaging was performed in 12 eyes of nine patients with GA, and in seven controls, using a flood illumination camera operating at 840 nm, in addition to routine clinical examination. To document short term and midterm changes, AO imaging sessions were repeated in four patients (mean interval between sessions 21 days; median follow up 6 months). As compared with scanning laser ophthalmoscope imaging, AO NIR imaging improved the resolution of the changes affecting the RPE. Multiple hyporeflective clumps were seen within and around GA areas. Time-lapse imaging revealed micrometric-scale details of the emergence and progression of areas of atrophy as well as the complex kinetics of some hyporeflective clumps. Such dynamic changes were observed within as well as outside atrophic areas. In eyes affected by GA, AO NIR imaging allows high resolution documentation of the extent of RPE damage. It also revealed that a complex, dynamic process of redistribution of hyporeflective clumps throughout the posterior pole precedes and accompanies the emergence and progression of atrophy. These clumps are therefore probably also a biomarker of RPE damage. AO NIR imaging may, therefore, be of interest to detect the earliest stages, to document the retinal pathology, and to monitor the progression of GA. (ClinicalTrials.gov number, NCT01546181.)

  2. Applying a visual language for image processing as a graphical teaching tool in medical imaging

    NASA Astrophysics Data System (ADS)

    Birchman, James J.; Tanimoto, Steven L.; Rowberg, Alan H.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Typical user interaction in image processing is with command line entries, pull-down menus, or text menu selections from a list, and as such is not generally graphical in nature. Although applying these interactive methods to construct more sophisticated algorithms from a series of simple image processing steps may be clear to engineers and programmers, it may not be clear to clinicians. A solution to this problem is to implement a visual programming language using visual representations to express image processing algorithms. Visual representations promote a more natural and rapid understanding of image processing algorithms by providing more visual insight into what the algorithms do than the interactive methods mentioned above can provide. Individuals accustomed to dealing with images will be more likely to understand an algorithm that is represented visually. This is especially true of referring physicians, such as surgeons in an intensive care unit. With the increasing acceptance of picture archiving and communications system (PACS) workstations and the trend toward increasing clinical use of image processing, referring physicians will need to learn more sophisticated concepts than simply image access and display. If the procedures that they perform commonly, such as window width and window level adjustment and image enhancement using unsharp masking, are depicted visually in an interactive environment, it will be easier for them to learn and apply these concepts. The software described in this paper is a visual programming language for image processing which has been implemented on the NeXT computer using NeXTstep user interface development tools and other tools in an object-oriented environment. The concept is based upon the description of a visual language titled `Visualization of Vision Algorithms' (VIVA).
Iconic representations of simple image processing steps are placed into a workbench screen and connected together into a dataflow path by the user. As the user creates and edits a dataflow path, more complex algorithms can be built on the screen. Once the algorithm is built, it can be executed, its results can be reviewed, and operator parameters can be interactively adjusted until an optimized output is produced. The optimized algorithm can then be saved and added to the system as a new operator. This system has been evaluated as a graphical teaching tool for window width and window level adjustment, image enhancement using unsharp masking, and other techniques.
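
    The dataflow idea can be sketched as a tiny Python model (all class and function names here are illustrative, not VIVA's actual API): operators are chained on a "workbench", executed, parameters adjusted, and a tuned chain saved as a single new operator.

```python
import numpy as np

class Pipeline:
    """Minimal stand-in for a visual dataflow path of imaging operators."""
    def __init__(self):
        self.steps = []
    def add(self, name, fn, **params):
        self.steps.append((name, fn, params))
        return self
    def run(self, img):
        for _, fn, params in self.steps:
            img = fn(img, **params)
        return img
    def as_operator(self, name):
        # Freeze the whole tuned chain into one reusable operator.
        return (name, lambda img: self.run(img))

def window_level(img, level, width):
    lo, hi = level - width / 2, level + width / 2
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

def unsharp_mask(img, amount):
    # Crude 5-point blur via circular shifts; enough to show the step.
    blur = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
            + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0
    return img + amount * (img - blur)

p = Pipeline().add("window/level", window_level, level=100, width=200) \
              .add("unsharp", unsharp_mask, amount=0.5)
out = p.run(np.full((4, 4), 100.0))
```

    A clinician-facing tool would expose `add` as icon placement and wire-drawing, while `as_operator` corresponds to saving an optimized algorithm back into the system.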

  3. Quantitative imaging reveals real-time Pou5f3–Nanog complexes driving dorsoventral mesendoderm patterning in zebrafish

    PubMed Central

    Perez-Camps, Mireia; Tian, Jing; Chng, Serene C; Sem, Kai Pin; Sudhaharan, Thankiah; Teh, Cathleen; Wachsmuth, Malte; Korzh, Vladimir; Ahmed, Sohail; Reversade, Bruno

    2016-01-01

    Formation of the three embryonic germ layers is a fundamental developmental process that initiates differentiation. How the zebrafish pluripotency factor Pou5f3 (homologous to mammalian Oct4) drives lineage commitment is unclear. Here, we introduce fluorescence lifetime imaging microscopy and fluorescence correlation spectroscopy to assess the formation of Pou5f3 complexes with other transcription factors in real-time in gastrulating zebrafish embryos. We show, at single-cell resolution in vivo, that Pou5f3 complexes with Nanog to pattern mesendoderm differentiation at the blastula stage. Later, during gastrulation, Sox32 restricts Pou5f3–Nanog complexes to the ventrolateral mesendoderm by binding Pou5f3 or Nanog in prospective dorsal endoderm. In the ventrolateral endoderm, the Elabela / Aplnr pathway limits Sox32 levels, allowing the formation of Pou5f3–Nanog complexes and the activation of downstream BMP signaling. This quantitative model shows that a balance in the spatiotemporal distribution of Pou5f3–Nanog complexes, modulated by Sox32, regulates mesendoderm specification along the dorsoventral axis. DOI: http://dx.doi.org/10.7554/eLife.11475.001 PMID:27684073

  4. Fast X-ray imaging of cavitating flows

    DOE PAGES

    Khlifa, Ilyass; Vabre, Alexandre; Hočevar, Marko; ...

    2017-10-20

    A new method based on ultra-fast X-ray imaging was developed in this work for simultaneous investigation of the dynamics and structures of complex two-phase flows. In this paper, cavitation was created inside a millimetric 2D Venturi-type test section, while seeding particles were injected into the flow. Thanks to the phase-contrast enhancement technique provided by the APS (Advanced Photon Source) synchrotron beam, high definition X-ray images of the complex cavitating flows were obtained. These images contain valuable information about both the liquid and the gaseous phases. By means of image processing, the two phases were separated, and velocity fields of each phase were then calculated using image cross-correlations. The local vapour volume fractions were also obtained from the local intensity levels within the recorded images. These simultaneous measurements afford more insight into the structure and dynamics of two-phase flows, as well as the interactions between them, and thus help to improve our understanding of their behavior. In the case of cavitating flows inside a Venturi-type test section, the X-ray measurements demonstrate, for the first time, the presence of significant slip velocities between the phases within sheet cavities for both steady and unsteady flow configurations.
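
    The velocity-field step, estimating displacement between successive frames by image cross-correlation, can be sketched for a single interrogation window (FFT-based circular correlation; a real PIV pipeline would add windowing and sub-pixel peak fitting):

```python
import numpy as np

def displacement(frame_a, frame_b):
    """Estimate the integer-pixel shift between two interrogation
    windows via FFT cross-correlation."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the window to negative values.
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)

rng = np.random.default_rng(1)
frame_a = rng.random((32, 32))
# Particles moved 3 pixels down and 2 pixels left between exposures.
frame_b = np.roll(frame_a, (3, -2), axis=(0, 1))
dy, dx = displacement(frame_a, frame_b)
```

    Doing this for the particle-only and vapour-only images separately is what yields per-phase velocity fields, and hence the slip velocities reported above.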

  6. Geometric processing workflow for vertical and oblique hyperspectral frame images collected using UAV

    NASA Astrophysics Data System (ADS)

    Markelin, L.; Honkavaara, E.; Näsi, R.; Nurminen, K.; Hakala, T.

    2014-08-01

    Remote sensing based on unmanned airborne vehicles (UAVs) is a rapidly developing field of technology. UAVs enable accurate, flexible, low-cost and multiangular measurements of 3D geometric, radiometric, and temporal properties of land and vegetation using various sensors. In this paper we present a geometric processing chain for a multiangular measurement system that is designed for measuring object directional reflectance characteristics in a wavelength range of 400-900 nm. The technique is based on a novel, lightweight spectral camera designed for UAV use. The multiangular measurement is conducted by collecting vertical and oblique area-format spectral images. End products of the geometric processing are image exterior orientations, 3D point clouds and digital surface models (DSM). These data are needed for the radiometric processing chain that produces reflectance image mosaics and multiangular bidirectional reflectance factor (BRF) observations. The geometric processing workflow consists of the following three steps: (1) determining approximate image orientations using Visual Structure from Motion (VisualSFM) software, (2) optionally calculating improved orientations and sensor calibration using a method based on self-calibrating bundle block adjustment (standard photogrammetric software), and finally (3) creating dense 3D point clouds and DSMs using Photogrammetric Surface Reconstruction from Imagery (SURE) software, which is based on a semi-global matching algorithm and is capable of providing a point density corresponding to the pixel size of the image. We have tested the geometric processing workflow over various targets, including test fields, agricultural fields, lakes and complex 3D structures like forests.

  7. A physiology-based parametric imaging method for FDG-PET data

    NASA Astrophysics Data System (ADS)

    Scussolini, Mara; Garbarino, Sara; Sambuceti, Gianmario; Caviglia, Giacomo; Piana, Michele

    2017-12-01

    Parametric imaging is a compartmental approach that processes nuclear imaging data to estimate the spatial distribution of the kinetic parameters governing tracer flow. The present paper proposes a novel and efficient computational method for parametric imaging which is potentially applicable to several compartmental models of diverse complexity and which is effective in the determination of the parametric maps of all kinetic coefficients. We consider applications to [18 F]-fluorodeoxyglucose positron emission tomography (FDG-PET) data and analyze the two-compartment catenary model describing the standard FDG metabolization by a homogeneous tissue and the three-compartment non-catenary model representing the renal physiology. We show uniqueness theorems for both models. The proposed imaging method starts from the reconstructed FDG-PET images of tracer concentration and preliminarily applies image processing algorithms for noise reduction and image segmentation. The optimization procedure solves pixel-wise the non-linear inverse problem of determining the kinetic parameters from dynamic concentration data through a regularized Gauss-Newton iterative algorithm. The reliability of the method is validated against synthetic data, for the two-compartment system, and experimental real data of murine models, for the renal three-compartment system.
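
    The pixel-wise regularized Gauss-Newton idea can be sketched on a simple two-parameter kinetic curve (an illustrative model, not the paper's two- or three-compartment systems):

```python
import numpy as np

t = np.linspace(0.1, 10, 50)

def model(k):
    # Illustrative kinetic curve: amplitude k[0], washout rate k[1].
    return k[0] * t * np.exp(-k[1] * t)

def jacobian(k):
    e = np.exp(-k[1] * t)
    return np.column_stack([t * e, -k[0] * t * t * e])

def gauss_newton(y, k0, reg=1e-6, iters=50):
    """Regularized Gauss-Newton: each iteration solves
    (J^T J + reg*I) dk = J^T r, with backtracking so the step
    never increases the residual."""
    k = np.asarray(k0, dtype=float)
    for _ in range(iters):
        r = y - model(k)
        J = jacobian(k)
        dk = np.linalg.solve(J.T @ J + reg * np.eye(2), J.T @ r)
        step = 1.0
        while (np.linalg.norm(y - model(k + step * dk)) > np.linalg.norm(r)
               and step > 1e-6):
            step /= 2
        k = k + step * dk
    return k

k_true = np.array([2.0, 0.5])
y = model(k_true)            # noiseless synthetic time-activity curve
k_est = gauss_newton(y, [1.0, 1.0])
```

    Running this fit independently at every pixel of a dynamic image, after the denoising and segmentation pre-processing, is what produces the parametric maps.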

  8. 3-D interactive visualisation tools for Hi spectral line imaging

    NASA Astrophysics Data System (ADS)

    van der Hulst, J. M.; Punzo, D.; Roerdink, J. B. T. M.

    2017-06-01

    Upcoming HI surveys will deliver such large datasets that automated processing using the full 3-D information to find and characterize HI objects is unavoidable. Full 3-D visualization is an essential tool for enabling qualitative and quantitative inspection and analysis of the 3-D data, which is often complex in nature. Here we present SlicerAstro, an open-source extension of 3DSlicer, a multi-platform open source software package for visualization and medical image processing, which we developed for the inspection and analysis of HI spectral line data. We describe its initial capabilities, including 3-D filtering, 3-D selection and comparative modelling.

  9. Exploring the complementarity of THz pulse imaging and DCE-MRIs: Toward a unified multi-channel classification and a deep learning framework.

    PubMed

    Yin, X-X; Zhang, Y; Cao, J; Wu, J-L; Hadjiloucas, S

    2016-12-01

    We provide a comprehensive account of recent advances in biomedical image analysis and classification from two complementary imaging modalities: terahertz (THz) pulse imaging and dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The work aims to highlight underlining commonalities in both data structures so that a common multi-channel data fusion framework can be developed. Signal pre-processing in both datasets is discussed briefly taking into consideration advances in multi-resolution analysis and model based fractional order calculus system identification. Developments in statistical signal processing using principal component and independent component analysis are also considered. These algorithms have been developed independently by the THz-pulse imaging and DCE-MRI communities, and there is scope to place them in a common multi-channel framework to provide better software standardization at the pre-processing de-noising stage. A comprehensive discussion of feature selection strategies is also provided and the importance of preserving textural information is highlighted. Feature extraction and classification methods taking into consideration recent advances in support vector machine (SVM) and extreme learning machine (ELM) classifiers and their complex extensions are presented. An outlook on Clifford algebra classifiers and deep learning techniques suitable to both types of datasets is also provided. The work points toward the direction of developing a new unified multi-channel signal processing framework for biomedical image analysis that will explore synergies from both sensing modalities for inferring disease proliferation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
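
    As an example of the statistical pre-processing mentioned above, here is a minimal PCA-based de-noising sketch for multi-channel data (assuming the signal concentrates in a few principal components; real THz or DCE-MRI pipelines are considerably more involved):

```python
import numpy as np

def pca_denoise(X, n_keep):
    """Project multi-channel signals onto their leading principal
    components and reconstruct, discarding low-variance directions
    that are assumed to carry mostly noise."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # SVD of the centred data matrix gives principal directions in Vt.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    s[n_keep:] = 0.0
    return (U * s) @ Vt + mu

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)
# Eight channels sharing one underlying waveform, plus noise.
clean = np.outer(np.sin(2 * np.pi * 5 * t), np.ones(8))
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = pca_denoise(noisy, n_keep=1)

err_noisy = np.mean((noisy - clean) ** 2)
err_denoised = np.mean((denoised - clean) ** 2)
```

    Standardizing such a step across both modalities is exactly the kind of shared pre-processing de-noising stage the review argues for.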

  10. SU-E-J-15: Automatically Detect Patient Treatment Position and Orientation in KV Portal Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu, J; Yang, D

    2015-06-15

    Purpose: In the course of radiation therapy, the complex information processing workflow can result in errors such as incorrect or inaccurate patient setups. With automatic image checks and patient identification, such errors could be effectively reduced. For this purpose, we developed a simple and rapid image processing method to automatically detect the patient position and orientation in 2D portal images, allowing automatic checks of position and orientation for daily RT treatments. Methods: Based on the principle of portal image formation, a set of whole body DRR images was reconstructed from multiple whole body CT volume datasets and fused together to be used as the matching template. To identify the patient setup position and orientation shown in a 2D portal image, the portal image was preprocessed (contrast enhancement, down-sampling and couch table detection), then matched to the template image to identify the laterality (left or right), position, orientation and treatment site. Results: Five days' worth of clinically qualified portal images were gathered at random and processed by the automatic detection and matching method without any additional information. The detection results were visually checked by physicists. 182 of 200 kV portal images were correctly detected, a correct rate of 91%. Conclusion: The proposed method can detect patient setup and orientation quickly and automatically. It only requires the image intensity information in kV portal images. This method can be useful in the framework of Electronic Chart Check (ECCK) to reduce potential errors in the radiation therapy workflow and so improve patient safety. In addition, the auto-detection results, such as the patient treatment site, position and orientation, could be used to guide subsequent image processing procedures, e.g. verification of daily patient setup accuracy.
This work was partially supported by a research grant from Varian Medical Systems.
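
    The orientation-detection step can be sketched by scoring a portal image against rotated and flipped versions of a template with normalized cross-correlation (a simplified stand-in for matching against the fused whole-body DRR template; the function names are illustrative):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation score between two equally sized images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def detect_orientation(portal, template):
    """Try the four axis-aligned rotations and the left/right flip of
    the template; return the best-matching pose."""
    best = None
    for flip in (False, True):
        t = np.fliplr(template) if flip else template
        for k in range(4):
            score = ncc(portal, np.rot90(t, k))
            pose = ("flipped" if flip else "normal", k * 90)
            if best is None or score > best[0]:
                best = (score, pose)
    return best[1]

rng = np.random.default_rng(3)
template = rng.random((32, 32))
portal = np.rot90(template, 2)     # patient imaged rotated 180 degrees
pose = detect_orientation(portal, template)
```

    The winning pose directly gives the laterality and orientation flags that an Electronic Chart Check could compare against the prescription.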

  11. A unified framework of image latent feature learning on Sina microblog

    NASA Astrophysics Data System (ADS)

    Wei, Jinjin; Jin, Zhigang; Zhou, Yuan; Zhang, Rui

    2015-10-01

    Large-scale user-contributed images with texts are rapidly increasing on the social media websites, such as Sina microblog. However, the noise and incomplete correspondence between the images and the texts give rise to the difficulty in precise image retrieval and ranking. In this paper, a hypergraph-based learning framework is proposed for image ranking, which simultaneously utilizes visual feature, textual content and social link information to estimate the relevance between images. Representing each image as a vertex in the hypergraph, complex relationship between images can be reflected exactly. Then updating the weight of hyperedges throughout the hypergraph learning process, the effect of different edges can be adaptively modulated in the constructed hypergraph. Furthermore, the popularity degree of the image is employed to re-rank the retrieval results. Comparative experiments on a large-scale Sina microblog data-set demonstrate the effectiveness of the proposed approach.

  12. The fast iris image clarity evaluation based on Tenengrad and ROI selection

    NASA Astrophysics Data System (ADS)

    Gao, Shuqin; Han, Min; Cheng, Xu

    2018-04-01

    In an iris recognition system, the clarity of the iris image is an important factor that influences recognition performance. During recognition, a blurred image may be rejected by the automatic iris recognition system, leading to a failure of identification. It is therefore necessary to evaluate iris image definition before recognition. Considering the existing evaluation methods for iris image definition, we propose a fast algorithm to evaluate the definition of iris images in this paper. In our algorithm, the ROI (Region of Interest) is first extracted based on a reference point determined from the light spots within the pupil; the Tenengrad operator is then used to evaluate the definition of the iris image. Experimental results show that the proposed iris image definition algorithm can accurately distinguish iris images of different clarity, with the merits of low computational complexity and high effectiveness.
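
    The Tenengrad measure itself is simple to state: the mean (or sum) of squared Sobel gradient magnitudes over the region of interest. A minimal numpy sketch (the pupil-light-spot ROI selection is omitted; sharper images score higher):

```python
import numpy as np

def tenengrad(img):
    """Tenengrad focus measure: mean squared Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    H, W = img.shape
    gx = np.zeros((H - 2, W - 2))
    gy = np.zeros((H - 2, W - 2))
    # Accumulate the 3x3 correlation as nine shifted, weighted patches.
    for i in range(3):
        for j in range(3):
            patch = img[i:i + H - 2, j:j + W - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return float(np.mean(gx ** 2 + gy ** 2))

def box_blur(img):
    out = img.copy()
    for ax in (0, 1):
        out = (np.roll(out, 1, ax) + out + np.roll(out, -1, ax)) / 3.0
    return out

rng = np.random.default_rng(4)
sharp = (rng.random((64, 64)) > 0.5).astype(float)   # high-contrast texture
blurred = box_blur(box_blur(sharp))
score_sharp = tenengrad(sharp)
score_blur = tenengrad(blurred)
```

    Thresholding this score on the ROI is what lets a recognition system reject blurred captures before attempting identification.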

  13. Multidimensional Processing and Visual Rendering of Complex 3D Biomedical Images

    NASA Technical Reports Server (NTRS)

    Sams, Clarence F.

    2016-01-01

    The proposed technology uses advanced image analysis techniques to maximize the resolution and utility of medical imaging methods being used during spaceflight. We utilize COTS technology for medical imaging, but our applications require higher resolution assessment of the medical images than is routinely applied with nominal system software. By leveraging advanced data reduction and multidimensional imaging techniques utilized in analysis of Planetary Sciences and Cell Biology imaging, it is possible to significantly increase the information extracted from the onboard biomedical imaging systems. Year 1 focused on application of these techniques to the ocular images collected on ground test subjects and ISS crewmembers. Focus was on the choroidal vasculature and the structure of the optic disc. Methods allowed for increased resolution and quantitation of structural changes enabling detailed assessment of progression over time. These techniques enhance the monitoring and evaluation of crew vision issues during space flight.

  14. AMISS - Active and passive MIcrowaves for Security and Subsurface imaging

    NASA Astrophysics Data System (ADS)

    Soldovieri, Francesco; Slob, Evert; Turk, Ahmet Serdar; Crocco, Lorenzo; Catapano, Ilaria; Di Matteo, Francesca

    2013-04-01

    The FP7-IRSES project AMISS - Active and passive MIcrowaves for Security and Subsurface imaging is based on a well-combined network among research institutions of EU, Associate and Third Countries (National Research Council of Italy - Italy, Technische Universiteit Delft - The Netherlands, Yildiz Technical University - Turkey, Bauman Moscow State Technical University - Russia, Usikov Institute for Radio-physics and Electronics and State Research Centre of Superconductive Radioelectronics "Iceberg" - Ukraine and University of Sao Paulo - Brazil) with the aims of achieving scientific advances in the framework of microwave and millimeter imaging systems and techniques for security and safety social issues. In particular, the involved partners are leaders in the scientific areas of passive and active imaging and are sharing their complementary knowledge to address two main research lines. The first one regards the design, characterization and performance evaluation of new passive and active microwave devices, sensors and measurement set-ups able to mitigate clutter and increase information content. The second line faces the requirements to make State-of-the-Art processing tools compliant with the instrumentations developed in the first line, suitable to work in electromagnetically complex scenarios and able to exploit the unexplored possibilities offered by new instrumentations. 
The main goals of the project are: 1) Development/improvement and characterization of new sensors and systems for active and passive microwave imaging; 2) Set up, analysis and validation of state of art/novel data processing approach for GPR in critical infrastructure and subsurface imaging; 3) Integration of state of art and novel imaging hardware and characterization approaches to tackle realistic situations in security, safety and subsurface prospecting applications; 4) Development and feasibility study of bio-radar technology (system and data processing) for vital signs detection and detection/characterization of human beings in complex scenarios. These goals are planned to be reached following a plan of research activities and researchers secondments which cover a period of three years. ACKNOWLEDGMENTS This research has been performed in the framework of the "Active and Passive Microwaves for Security and Subsurface imaging (AMISS)" EU 7th Framework Marie Curie Actions IRSES project (PIRSES-GA-2010-269157).

  15. An image compression survey and algorithm switching based on scene activity

    NASA Technical Reports Server (NTRS)

    Hart, M. M.

    1985-01-01

    Data compression techniques are presented. A description of these techniques is provided along with a performance evaluation. The complexity of the hardware resulting from their implementation is also addressed. The compression effect on channel distortion and the applicability of these algorithms to real-time processing are presented. Also included is a proposed new direction for an adaptive compression technique for real-time processing.
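
    The adaptive switching idea can be sketched with a hypothetical activity metric and threshold (the paper's actual metric and candidate algorithms are not specified here):

```python
import numpy as np

def scene_activity(frame, prev_frame):
    """Activity metric: mean absolute inter-frame difference. High
    values indicate motion or detail that aggressive compression
    would visibly degrade."""
    return float(np.mean(np.abs(frame - prev_frame)))

def choose_algorithm(activity, threshold=5.0):
    # Low activity: aggressive (coarse) coding is safe; high activity:
    # switch to a gentler algorithm to limit channel distortion.
    return "coarse" if activity < threshold else "fine"

rng = np.random.default_rng(5)
static = np.full((16, 16), 100.0)
still = choose_algorithm(scene_activity(static, static))
busy = choose_algorithm(scene_activity(static + 20 * rng.random((16, 16)),
                                       static))
```

    Keeping the metric this cheap is what makes per-frame switching plausible in a real-time pipeline.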

  16. Using Atomic Force Microscopy to Characterize the Conformational Properties of Proteins and Protein-DNA Complexes That Carry Out DNA Repair.

    PubMed

    LeBlanc, Sharonda; Wilkins, Hunter; Li, Zimeng; Kaur, Parminder; Wang, Hong; Erie, Dorothy A

    2017-01-01

    Atomic force microscopy (AFM) is a scanning probe technique that allows visualization of single biomolecules and complexes deposited on a surface with nanometer resolution. AFM is a powerful tool for characterizing protein-protein and protein-DNA interactions. It can be used to capture snapshots of protein-DNA solution dynamics, which in turn, enables the characterization of the conformational properties of transient protein-protein and protein-DNA interactions. With AFM, it is possible to determine the stoichiometries and binding affinities of protein-protein and protein-DNA associations, the specificity of proteins binding to specific sites on DNA, and the conformations of the complexes. We describe methods to prepare and deposit samples, including surface treatments for optimal depositions, and how to quantitatively analyze images. We also discuss a new electrostatic force imaging technique called DREEM, which allows the visualization of the path of DNA within proteins in protein-DNA complexes. Collectively, these methods facilitate the development of comprehensive models of DNA repair and provide a broader understanding of all protein-protein and protein-nucleic acid interactions. The structural details gleaned from analysis of AFM images coupled with biochemistry provide vital information toward establishing the structure-function relationships that govern DNA repair processes. © 2017 Elsevier Inc. All rights reserved.

  17. Real-time embedded atmospheric compensation for long-range imaging using the average bispectrum speckle method

    NASA Astrophysics Data System (ADS)

    Curt, Petersen F.; Bodnar, Michael R.; Ortiz, Fernando E.; Carrano, Carmen J.; Kelmelis, Eric J.

    2009-02-01

    While imaging over long distances is critical to a number of security and defense applications, such as homeland security and launch tracking, current optical systems are limited in resolving power. This is largely a result of the turbulent atmosphere in the path between the region under observation and the imaging system, which can severely degrade captured imagery. There are a variety of post-processing techniques capable of recovering this obscured image information; however, the computational complexity of such approaches has prohibited real-time deployment and hampers the usability of these technologies in many scenarios. To overcome this limitation, we have designed and manufactured an embedded image processing system based on commodity hardware which can compensate for these atmospheric disturbances in real-time. Our system consists of a reformulation of the average bispectrum speckle method coupled with a high-end FPGA processing board, and employs modular I/O capable of interfacing with most common digital and analog video transport methods (composite, component, VGA, DVI, SDI, HD-SDI, etc.). By leveraging the custom, reconfigurable nature of the FPGA, we have achieved performance twenty times faster than a modern desktop PC, in a form-factor that is compact, low-power, and field-deployable.

  18. Use of micro computed-tomography and 3D printing for reverse engineering of mouse embryo nasal capsule

    NASA Astrophysics Data System (ADS)

    Tesařová, M.; Zikmund, T.; Kaucká, M.; Adameyko, I.; Jaroš, J.; Paloušek, D.; Škaroupka, D.; Kaiser, J.

    2016-03-01

    Imaging of increasingly complex cartilage in vertebrate embryos is one of the key tasks of developmental biology. This is especially important for studying the shape-organizing processes at work during initial skeletal formation and growth. Advanced imaging techniques that reflect biological needs give a powerful impulse to push the boundaries of biological visualization. Recently, techniques for contrasting tissues and organs have improved considerably, extending traditional 2D imaging approaches to 3D. X-ray micro computed tomography (μCT), which allows 3D imaging of biological objects including their internal structures with a resolution in the micrometer range, seems in combination with contrasting techniques to be the most suitable approach for non-destructive imaging of embryonic developing cartilage. Although there are many software-based ways to visualize 3D data sets, having a real solid model of the studied object might give novel opportunities to fully understand the shape-organizing processes in the developing body. In this feasibility study we demonstrate the full procedure of creating a real 3D object of the mouse embryo nasal capsule: staining, μCT scanning combined with advanced data processing, and 3D printing.

  19. Efficient processing of fluorescence images using directional multiscale representations.

    PubMed

    Labate, D; Laezza, F; Negi, P; Ozcan, B; Papadakis, M

    2014-01-01

    Recent advances in high-resolution fluorescence microscopy have enabled the systematic study of morphological changes in large populations of cells induced by chemical and genetic perturbations, facilitating the discovery of signaling pathways underlying diseases and the development of new pharmacological treatments. In these studies, though, due to the complexity of the data, quantification and analysis of morphological features are for the most part handled manually, significantly slowing data processing and often limiting the information gained to a descriptive level. Thus, there is an urgent need for highly efficient automated analysis and processing tools for fluorescence images. In this paper, we present the application of a method based on the shearlet representation for confocal image analysis of neurons. The shearlet representation is a recently introduced method designed to combine multiscale data analysis with superior directional sensitivity, making this approach particularly effective for the representation of objects defined over a wide range of scales and with highly anisotropic features. Here, we apply the shearlet representation to the problems of soma detection of neurons in culture and extraction of geometrical features of neuronal processes in brain tissue, and propose it as a new framework for large-scale fluorescence image analysis of biomedical data.
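The directional sensitivity that makes shearlet-type representations effective for anisotropic structures such as neurites can be hinted at with a far simpler sketch: oriented finite-difference filters respond strongly across an elongated feature and weakly along it. This toy is not a shearlet transform (which uses multiscale, sheared band-pass filters); it only conveys the orientation-selectivity idea on a synthetic image:

```python
# an 8x8 image containing a single vertical line at column 4
W = H = 8
img = [[1.0 if x == 4 else 0.0 for x in range(W)] for y in range(H)]

def energy(img, dx, dy):
    """Sum of squared finite differences taken along direction (dx, dy)."""
    e = 0.0
    for y in range(H - abs(dy)):
        for x in range(W - abs(dx)):
            d = img[y + dy][x + dx] - img[y][x]
            e += d * d
    return e

e_horiz = energy(img, 1, 0)   # derivative across the line: large response
e_vert  = energy(img, 0, 1)   # derivative along the line: zero response
print(e_horiz, e_vert)
```

A shearlet system generalizes this by combining many scales and shear angles into a single stable representation, but the ratio of across-feature to along-feature response is the quantity it exploits.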

  20. Super-Resolution for “Jilin-1” Satellite Video Imagery via a Convolutional Network

    PubMed Central

    Wang, Zhongyuan; Wang, Lei; Ren, Yexian

    2018-01-01

    Super-resolution for satellite video is of great significance for Earth observation accuracy, and the special imaging and transmission conditions on a video satellite pose great challenges to this task. Existing deep convolutional neural-network-based methods require pre-processing or post-processing to adapt to a high-resolution size or pixel format, leading to reduced performance and extra complexity. To this end, this paper proposes a five-layer end-to-end network structure without any pre-processing or post-processing, but imposes a reshape or deconvolution layer at the end of the network to retain the distribution of ground objects within the image. Meanwhile, we formulate a joint loss function by combining the output and high-dimensional features of a non-linear mapping network to precisely learn the desirable mapping relationship between low-resolution images and their high-resolution counterparts. Also, we use satellite video data itself as a training set, which favors consistency between training and testing images and promotes the method’s practicality. Experimental results on “Jilin-1” satellite video imagery show that this method demonstrates superior performance in terms of both visual effects and quantitative metrics over competing methods. PMID:29652838

  1. Efficient processing of fluorescence images using directional multiscale representations

    PubMed Central

    Labate, D.; Laezza, F.; Negi, P.; Ozcan, B.; Papadakis, M.

    2017-01-01

    Recent advances in high-resolution fluorescence microscopy have enabled the systematic study of morphological changes in large populations of cells induced by chemical and genetic perturbations, facilitating the discovery of signaling pathways underlying diseases and the development of new pharmacological treatments. In these studies, though, due to the complexity of the data, quantification and analysis of morphological features are for the most part handled manually, significantly slowing data processing and often limiting the information gained to a descriptive level. Thus, there is an urgent need for highly efficient automated analysis and processing tools for fluorescence images. In this paper, we present the application of a method based on the shearlet representation for confocal image analysis of neurons. The shearlet representation is a recently introduced method designed to combine multiscale data analysis with superior directional sensitivity, making this approach particularly effective for the representation of objects defined over a wide range of scales and with highly anisotropic features. Here, we apply the shearlet representation to the problems of soma detection of neurons in culture and extraction of geometrical features of neuronal processes in brain tissue, and propose it as a new framework for large-scale fluorescence image analysis of biomedical data. PMID:28804225

  2. Super-Resolution for "Jilin-1" Satellite Video Imagery via a Convolutional Network.

    PubMed

    Xiao, Aoran; Wang, Zhongyuan; Wang, Lei; Ren, Yexian

    2018-04-13

    Super-resolution for satellite video is of great significance for Earth observation accuracy, and the special imaging and transmission conditions on a video satellite pose great challenges to this task. Existing deep convolutional neural-network-based methods require pre-processing or post-processing to adapt to a high-resolution size or pixel format, leading to reduced performance and extra complexity. To this end, this paper proposes a five-layer end-to-end network structure without any pre-processing or post-processing, but imposes a reshape or deconvolution layer at the end of the network to retain the distribution of ground objects within the image. Meanwhile, we formulate a joint loss function by combining the output and high-dimensional features of a non-linear mapping network to precisely learn the desirable mapping relationship between low-resolution images and their high-resolution counterparts. Also, we use satellite video data itself as a training set, which favors consistency between training and testing images and promotes the method's practicality. Experimental results on "Jilin-1" satellite video imagery show that this method demonstrates superior performance in terms of both visual effects and quantitative metrics over competing methods.

  3. Monitoring system for the quality assessment in additive manufacturing

    NASA Astrophysics Data System (ADS)

    Carl, Volker

    2015-03-01

    Additive Manufacturing (AM) refers to a process by which a set of digital data representing a complex 3D design is used to grow the corresponding real 3D structure. For the powder-based EOS manufacturing process, a variety of plastic and metal materials can be used. AM is in many respects a very powerful tool, as it can help to overcome particular limitations of conventional manufacturing: it enables more freedom of design; complex, hollow and/or lightweight structures; and product individualisation and functional integration. As such it is a promising approach with respect to the future design and manufacturing of complex 3D structures. On the other hand, it certainly calls for new methods and standards for quality assessment. In particular, when utilizing AM for the design of complex parts used in aviation and aerospace technologies, appropriate monitoring systems are mandatory. In this respect, sustainable progress has recently been accomplished by joining the efforts of a manufacturer of Additive Manufacturing systems and respective materials (EOS), an operator of such systems (MTU Aero Engines) and experienced application engineers (Carl Metrology) with established know-how in optical and infrared methods for non-destructive examination (NDE). The newly developed technology is best described as a high-resolution layer-by-layer inspection technique, which allows for a 3D tomography-style analysis of the complex part at any time during the manufacturing process. Inspection costs are kept rather low by using smart image-processing methods as well as CMOS sensors instead of infrared detectors.
Moreover, results from conventional physical metallurgy may easily be correlated with the predictive results of the monitoring system, which not only allows for improvements of the AM monitoring system but ultimately leads to an optimisation of the quality and an assurance of the material integrity of the complex structure being manufactured. Both our poster and our oral presentation will explain the data flow between the parties mentioned above. A suitable monitoring system for Additive Manufacturing will be introduced, along with a presentation of the respective high-resolution data acquisition, as well as the image processing and data analysis allowing for precise control of the 3D growth process.
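Layer-by-layer inspection of the kind described can be caricatured as comparing each captured layer image against its reference slice and accumulating out-of-tolerance pixels into a per-layer defect count. The function, threshold, and data below are hypothetical, not the actual processing of the EOS/MTU system:

```python
# hypothetical layer-wise inspection: each captured layer image is compared
# against its reference slice; pixels deviating beyond a threshold are
# flagged and summed into a per-layer defect count
def inspect(reference, captured, threshold=0.2):
    defects = []
    for ref_layer, cap_layer in zip(reference, captured):
        flags = [[abs(c - r) > threshold for r, c in zip(rrow, crow)]
                 for rrow, crow in zip(ref_layer, cap_layer)]
        defects.append(sum(f for row in flags for f in row))
    return defects

ref = [[[0.5] * 4 for _ in range(4)] for _ in range(3)]   # 3 layers of 4x4
cap = [[row[:] for row in layer] for layer in ref]
cap[1][2][3] = 0.9                                        # inject one defect
print(inspect(ref, cap))
```

Stacking such per-layer flag maps over the whole build is what yields the tomography-style 3D view of the part described in the abstract.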

  4. Monitoring system for the quality assessment in additive manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carl, Volker, E-mail: carl@t-zfp.de

    Additive Manufacturing (AM) refers to a process by which a set of digital data representing a complex 3D design is used to grow the corresponding real 3D structure. For the powder-based EOS manufacturing process, a variety of plastic and metal materials can be used. AM is in many respects a very powerful tool, as it can help to overcome particular limitations of conventional manufacturing: it enables more freedom of design; complex, hollow and/or lightweight structures; and product individualisation and functional integration. As such it is a promising approach with respect to the future design and manufacturing of complex 3D structures. On the other hand, it certainly calls for new methods and standards for quality assessment. In particular, when utilizing AM for the design of complex parts used in aviation and aerospace technologies, appropriate monitoring systems are mandatory. In this respect, sustainable progress has recently been accomplished by joining the efforts of a manufacturer of Additive Manufacturing systems and respective materials (EOS), an operator of such systems (MTU Aero Engines) and experienced application engineers (Carl Metrology) with established know-how in optical and infrared methods for non-destructive examination (NDE). The newly developed technology is best described as a high-resolution layer-by-layer inspection technique, which allows for a 3D tomography-style analysis of the complex part at any time during the manufacturing process. Inspection costs are kept rather low by using smart image-processing methods as well as CMOS sensors instead of infrared detectors.
Moreover, results from conventional physical metallurgy may easily be correlated with the predictive results of the monitoring system, which not only allows for improvements of the AM monitoring system but ultimately leads to an optimisation of the quality and an assurance of the material integrity of the complex structure being manufactured. Both our poster and our oral presentation will explain the data flow between the parties mentioned above. A suitable monitoring system for Additive Manufacturing will be introduced, along with a presentation of the respective high-resolution data acquisition, as well as the image processing and data analysis allowing for precise control of the 3D growth process.

  5. Design, fabrication and actuation of a MEMS-based image stabilizer for photographic cell phone applications

    NASA Astrophysics Data System (ADS)

    Chiou, Jin-Chern; Hung, Chen-Chun; Lin, Chun-Ying

    2010-07-01

    This work presents a MEMS-based image stabilizer providing an anti-shake function for photographic cell phones. The proposed stabilizer is designed as a two-axis decoupling XY stage, 1.4 × 1.4 × 0.1 mm3 in size, and is strong enough to suspend an image sensor for anti-shake operation. The stabilizer is fabricated using complex processes, including inductively coupled plasma (ICP) etching and a flip-chip bonding technique. Thanks to the special designs of a hollow handle layer and a corresponding wire-bonding-assisted holder, electrical signals of the suspended image sensor can be successfully routed out over 32 signal springs without incurring damage during wire-bond packaging. The longest calculated traveling distance of the stabilizer is 25 µm, which is sufficient to compensate for shaking in a three-megapixel image sensor. The corresponding applied voltage for the 25 µm displacement is 38 V. Moreover, the resonant frequency of the actuated device carrying the image sensor is 1.123 kHz.

  6. Mobile and embedded fast high resolution image stitching for long length rectangular monochromatic objects with periodic structure

    NASA Astrophysics Data System (ADS)

    Limonova, Elena; Tropin, Daniil; Savelyev, Boris; Mamay, Igor; Nikolaev, Dmitry

    2018-04-01

    In this paper we describe a stitching protocol that produces high-resolution images of long monochromatic objects with periodic structure. The protocol can be used for long documents or for human-made objects in satellite images of remote regions such as the Arctic. The length of such objects can be considerable, while modern camera sensors have limited resolution and cannot provide a sufficiently detailed image of the whole object for further processing, e.g. for use in an OCR system. The idea of the proposed method is to acquire a video stream containing the full object in high resolution and to use image stitching. We expect the scanned object to have straight boundaries and a periodic structure, which allows us to introduce regularization into the stitching problem and to adapt the algorithm to the limited computational power of mobile and embedded CPUs. With the help of the detected boundaries and structure we estimate the homography between frames and use this information to reduce the complexity of stitching. We demonstrate our algorithm on a mobile device and show an image processing speed of 2 fps on a Samsung Exynos 5422 processor.
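The role of regularization when registering periodic content can be sketched in 1D: correlation between frames of a periodic signal is ambiguous up to the period, so the shift search must be bounded by prior knowledge (e.g. the maximum motion per frame). The signal, noise level, and window below are illustrative assumptions, not the paper's actual homography estimation:

```python
import random
random.seed(1)

def best_shift(a, b, max_shift):
    """Estimate the offset of b relative to a by maximizing correlation
    over a bounded search window (the bound resolves period ambiguity)."""
    best, best_score = 0, float("-inf")
    for s in range(max_shift + 1):
        overlap = min(len(a) - s, len(b))
        score = sum(a[s + i] * b[i] for i in range(overlap)) / overlap
        if score > best_score:
            best, best_score = s, score
    return best

# periodic "document" signal (period 10) plus noise; true inter-frame shift is 7
signal = [1.0 if (i % 10) < 5 else -1.0 for i in range(60)]
frame1 = [v + random.gauss(0, 0.05) for v in signal[:40]]
frame2 = [v + random.gauss(0, 0.05) for v in signal[7:47]]
print(best_shift(frame1, frame2, max_shift=9))
```

Without the `max_shift` bound, shifts of 7, 17, 27, … would all score nearly identically on this signal; the 2D method faces the same ambiguity and resolves it with the detected boundary and structure priors.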

  7. Non-stationary noise estimation using dictionary learning and Gaussian mixture models

    NASA Astrophysics Data System (ADS)

    Hughes, James M.; Rockmore, Daniel N.; Wang, Yang

    2014-02-01

    Stationarity of the noise distribution is a common assumption in image processing. This assumption greatly simplifies denoising estimators and other model parameters and consequently assuming stationarity is often a matter of convenience rather than an accurate model of noise characteristics. The problematic nature of this assumption is exacerbated in real-world contexts, where noise is often highly non-stationary and can possess time- and space-varying characteristics. Regardless of model complexity, estimating the parameters of noise distributions in digital images is a difficult task, and estimates are often based on heuristic assumptions. Recently, sparse Bayesian dictionary learning methods were shown to produce accurate estimates of the level of additive white Gaussian noise in images with minimal assumptions. We show that a similar model is capable of accurately modeling certain kinds of non-stationary noise processes, allowing for space-varying noise in images to be estimated, detected, and removed. We apply this modeling concept to several types of non-stationary noise and demonstrate the model's effectiveness on real-world problems, including denoising and segmentation of images according to noise characteristics, which has applications in image forensics.
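A far simpler estimator than the dictionary-learning model conveys what "space-varying noise estimation" means: compute a robust, MAD-based noise level per image tile, so different regions receive different sigma estimates. This is a standard baseline sketch, not the authors' sparse Bayesian method, and the tiles below are synthetic:

```python
import random, statistics
random.seed(2)

def local_sigma(tile):
    """Robust noise estimate for a flat tile: MAD of horizontal differences.
    For Gaussian noise the differences have std sigma*sqrt(2), and
    MAD ~= 0.6745 * std, hence the normalization below."""
    diffs = [row[x + 1] - row[x] for row in tile for x in range(len(row) - 1)]
    med = statistics.median(diffs)
    mad = statistics.median([abs(d - med) for d in diffs])
    return mad / (0.6745 * 2 ** 0.5)

size = 32   # two synthetic flat tiles with different noise levels
quiet = [[random.gauss(0, 0.1) for _ in range(size)] for _ in range(size)]
noisy = [[random.gauss(0, 0.5) for _ in range(size)] for _ in range(size)]
print(local_sigma(quiet), local_sigma(noisy))
```

Tiling an image and mapping `local_sigma` over the tiles already yields a coarse noise map; the paper's contribution is doing this accurately without the flat-tile assumption.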

  8. Applications of independent component analysis in SAR images

    NASA Astrophysics Data System (ADS)

    Huang, Shiqi; Cai, Xinhua; Hui, Weihua; Xu, Ping

    2009-07-01

    The detection of faint, small and hidden targets in synthetic aperture radar (SAR) images remains an open issue for automatic target recognition (ATR) systems. How to effectively separate such targets from the complex background is the aim of this paper. Independent component analysis (ICA) can enhance SAR image targets and improve the signal-to-clutter ratio (SCR), which benefits the detection and recognition of faint targets. This paper therefore proposes a new SAR image target detection algorithm based on ICA. In the experiments, the fast ICA (FastICA) algorithm is utilized. Finally, real SAR image data are used to test the method. The experimental results verify that the algorithm is feasible, that it can improve the SCR of a SAR image, and that it increases the detection rate for faint, small targets.
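The principle behind ICA, rotating whitened data until one direction is maximally non-Gaussian, can be sketched for two channels. This toy uses an exhaustive rotation search over a kurtosis contrast rather than the FastICA fixed-point iteration used in the paper, and the sources and mixing coefficients are made up:

```python
import math, random
random.seed(3)

n = 3000
s1 = [random.uniform(-1, 1) for _ in range(n)]                           # sub-Gaussian source
s2 = [random.expovariate(1) * random.choice((-1, 1)) for _ in range(n)]  # super-Gaussian source
x1 = [a + 0.5 * b for a, b in zip(s1, s2)]                               # observed mixtures
x2 = [0.3 * a + b for a, b in zip(s1, s2)]

def center(v):
    m = sum(v) / len(v)
    return [u - m for u in v]

x1, x2 = center(x1), center(x2)

# whitening: analytic eigendecomposition of the 2x2 covariance
c11 = sum(a * a for a in x1) / n
c22 = sum(b * b for b in x2) / n
c12 = sum(a * b for a, b in zip(x1, x2)) / n
mean_ev = (c11 + c22) / 2
gap = math.sqrt(((c11 - c22) / 2) ** 2 + c12 ** 2)
l1, l2 = mean_ev + gap, mean_ev - gap
phi = 0.5 * math.atan2(2 * c12, c11 - c22)          # principal axis angle
cp, sp = math.cos(phi), math.sin(phi)
w1 = [( cp * a + sp * b) / math.sqrt(l1) for a, b in zip(x1, x2)]
w2 = [(-sp * a + cp * b) / math.sqrt(l2) for a, b in zip(x1, x2)]

def kurt(v):                                         # excess kurtosis (unit variance)
    return sum(u ** 4 for u in v) / len(v) - 3.0

def component(t):                                    # whitened data rotated by t
    c, s = math.cos(t), math.sin(t)
    return [c * a + s * b for a, b in zip(w1, w2)]

# contrast: keep the rotation whose component is most non-Gaussian
best_t = max((math.radians(t) for t in range(0, 180, 2)),
             key=lambda t: abs(kurt(component(t))))
y = component(best_t)

def abs_corr(u, v):
    u, v = center(u), center(v)
    num = sum(a * b for a, b in zip(u, v))
    return abs(num) / math.sqrt(sum(a * a for a in u) * sum(b * b for b in v))

score = max(abs_corr(y, s1), abs_corr(y, s2))        # match against true sources
print(score)
```

In the SAR setting the "sources" are target and clutter components; separating them is what raises the SCR before detection.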

  9. CINCH (confocal incoherent correlation holography) super resolution fluorescence microscopy based upon FINCH (Fresnel incoherent correlation holography)

    PubMed Central

    Siegel, Nisan; Storrie, Brian; Bruce, Marc

    2016-01-01

    FINCH holographic fluorescence microscopy creates high resolution super-resolved images with enhanced depth of focus. The simple addition of a real-time Nipkow disk confocal image scanner in a conjugate plane of this incoherent holographic system is shown to reduce the depth of focus, and the combination of both techniques provides a simple way to enhance the axial resolution of FINCH in a combined method called “CINCH”. An important feature of the combined system allows for the simultaneous real-time image capture of widefield and holographic images or confocal and confocal holographic images for ready comparison of each method on the exact same field of view. Additional GPU based complex deconvolution processing of the images further enhances resolution. PMID:26839443

  10. Building an Open-source Simulation Platform of Acoustic Radiation Force-based Breast Elastography

    PubMed Central

    Wang, Yu; Peng, Bo; Jiang, Jingfeng

    2017-01-01

    Ultrasound-based elastography methods, including strain elastography (SE), acoustic radiation force impulse (ARFI) imaging, point shear wave elastography (pSWE) and supersonic shear imaging (SSI), have been used to differentiate breast tumors, among other clinical applications. The objective of this study is to extend a previously published virtual simulation platform built for quasi-static ultrasound breast elastography toward acoustic radiation force-based breast elastography. Consequently, the extended virtual breast elastography simulation platform can be used to validate image pixels against known underlying soft tissue properties (i.e. “ground truth”) in complex, heterogeneous media, enhancing confidence in elastographic image interpretation. The proposed virtual breast elastography system inherits four key components from the previously published virtual simulation platform: an ultrasound simulator (Field II), a mesh generator (Tetgen), a finite element solver (FEBio) and a visualization and data processing package (VTK). Using a simple message passing mechanism, its functionality has now been extended to acoustic radiation force-based elastography simulations. Examples involving three numerical breast models of increasing complexity (one uniform model, one simple inclusion model and one virtual complex breast model derived from magnetic resonance imaging data) were used to demonstrate the capabilities of this extended virtual platform. Overall, simulation results were compared with published results. In the uniform model, the estimated shear wave speed (SWS) values were within 4% of the predetermined SWS values. In the simple inclusion and complex breast models, the SWS values of all hard inclusions in soft backgrounds were slightly underestimated, similar to what has been reported.
The elastic contrast values and visual observation show that ARFI images have higher spatial resolution, while SSI images can provide higher inclusion-to-background contrast. In summary, our initial results were consistent with our expectations and with what has been reported in the literature. The proposed (open-source) simulation platform can serve as a single gateway for performing many elastographic simulations in a transparent manner, thereby promoting collaborative development. PMID:28075330
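Shear wave speed estimation of the kind validated here often reduces to time-of-flight: the wave's arrival delay between two lateral positions, found by cross-correlating displacement traces, gives speed as distance over delay. The sampling rate, beam spacing, and pulse shape below are assumptions for illustration, not parameters from the platform:

```python
import math

fs = 10000.0      # displacement tracking rate in Hz (assumed)
dx = 4e-3         # lateral spacing between tracking positions in m (assumed)
true_sws = 2.0    # ground-truth shear wave speed in m/s

def pulse(t0, n):
    """Gaussian displacement pulse centered at time t0 (seconds)."""
    return [math.exp(-((i / fs - t0) / 2e-4) ** 2) for i in range(n)]

n = 100
trace1 = pulse(2e-3, n)                        # near tracking position
trace2 = pulse(2e-3 + dx / true_sws, n)        # far position: arrives later

def delay_samples(a, b, max_lag):
    """Lag maximizing the cross-correlation of a against a delayed b."""
    return max(range(max_lag + 1),
               key=lambda L: sum(a[i] * b[i + L] for i in range(len(a) - L)))

dt = delay_samples(trace1, trace2, 50) / fs    # estimated delay in seconds
sws_est = dx / dt                              # estimated shear wave speed
print(sws_est)
```

In a virtual platform the FEM solver provides the displacement traces, so `sws_est` can be checked directly against the tissue model's prescribed stiffness.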

  11. Pixel-based OPC optimization based on conjugate gradients.

    PubMed

    Ma, Xu; Arce, Gonzalo R

    2011-01-31

    Optical proximity correction (OPC) methods are resolution enhancement techniques (RET) used extensively in the semiconductor industry to improve the resolution and pattern fidelity of optical lithography. In pixel-based OPC (PBOPC), the mask is divided into small pixels, each of which is modified during the optimization process. Two critical issues in PBOPC are the required computational complexity of the optimization process, and the manufacturability of the optimized mask. Most current OPC optimization methods apply the steepest descent (SD) algorithm to improve image fidelity augmented by regularization penalties to reduce the complexity of the mask. Although simple to implement, the SD algorithm converges slowly. The existing regularization penalties, however, fall short in meeting the mask rule check (MRC) requirements often used in semiconductor manufacturing. This paper focuses on developing OPC optimization algorithms based on the conjugate gradient (CG) method which exhibits much faster convergence than the SD algorithm. The imaging formation process is represented by the Fourier series expansion model which approximates the partially coherent system as a sum of coherent systems. In order to obtain more desirable manufacturability properties of the mask pattern, a MRC penalty is proposed to enlarge the linear size of the sub-resolution assistant features (SRAFs), as well as the distances between the SRAFs and the main body of the mask. Finally, a projection method is developed to further reduce the complexity of the optimized mask pattern.
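The convergence advantage of CG over SD that motivates the paper can be demonstrated on any ill-conditioned quadratic: on an n-dimensional SPD system, CG terminates in at most n steps while SD crawls along the narrow valley. A self-contained sketch (the matrix is arbitrary, not a lithography imaging model):

```python
# conjugate gradient vs steepest descent on 0.5*x'Ax - b'x
def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def axpy(alpha, x, y):               # alpha*x + y
    return [alpha * a + b for a, b in zip(x, y)]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 0.5], [0.0, 0.5, 100.0]]   # SPD, ill-conditioned
b = [1.0, 2.0, 3.0]

def cg(A, b, iters):
    x = [0.0] * len(b)
    r = list(b)                      # residual b - Ax (x starts at 0)
    p = list(r)
    for _ in range(iters):
        Ap = matvec(A, p)
        alpha = dot(r, r) / dot(p, Ap)
        x = axpy(alpha, p, x)
        r_new = axpy(-alpha, Ap, r)
        beta = dot(r_new, r_new) / dot(r, r)
        p = axpy(beta, p, r_new)     # new search direction, A-conjugate to old
        r = r_new
    return x

def sd(A, b, iters):
    x = [0.0] * len(b)
    for _ in range(iters):
        r = axpy(-1.0, matvec(A, x), b)          # r = b - Ax
        alpha = dot(r, r) / dot(r, matvec(A, r)) # exact line search
        x = axpy(alpha, r, x)
    return x

def resid(x):
    return max(abs(v) for v in axpy(-1.0, matvec(A, x), b))

print(resid(cg(A, b, 3)), resid(sd(A, b, 3)))    # CG is exact after n=3 steps
```

In PBOPC the quadratic is replaced by the image-fidelity cost plus penalties, but the same contrast in iteration counts is what makes the CG formulation attractive.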

  12. LLSURE: local linear SURE-based edge-preserving image filtering.

    PubMed

    Qiu, Tianshuang; Wang, Aiqi; Yu, Nannan; Song, Aimin

    2013-01-01

    In this paper, we propose a novel approach for performing high-quality edge-preserving image filtering. Based on a local linear model and using the principle of Stein's unbiased risk estimate as an estimator for the mean squared error from the noisy image only, we derive a simple explicit image filter which can filter out noise while preserving edges and fine-scale details. Moreover, this filter has a fast and exact linear-time algorithm whose computational complexity is independent of the filtering kernel size; thus, it can be applied to real time image processing tasks. The experimental results demonstrate the effectiveness of the new filter for various computer vision applications, including noise reduction, detail smoothing and enhancement, high dynamic range compression, and flash/no-flash denoising.
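The local linear model underlying this filter family can be sketched as follows: within each window the output is q = a·I + b with a = var/(var + eps), so a ≈ 1 near edges (pass the pixel through) and a ≈ 0 in flat regions (replace it by the local mean); integral images make the cost independent of window radius. This is a guided-filter-style sketch of the local linear model only, not the SURE-based parameter selection that LLSURE adds on top:

```python
def integral(img):
    """Summed-area table with a zero top row and left column."""
    h, w = len(img), len(img[0])
    S = [[0.0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            S[y+1][x+1] = img[y][x] + S[y][x+1] + S[y+1][x] - S[y][x]
    return S

def box_mean(S, x0, y0, x1, y1):
    area = (x1 - x0) * (y1 - y0)
    return (S[y1][x1] - S[y0][x1] - S[y1][x0] + S[y0][x0]) / area

def local_linear_filter(img, r=1, eps=0.01):
    h, w = len(img), len(img[0])
    S  = integral(img)
    S2 = integral([[v * v for v in row] for row in img])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            m = box_mean(S, x0, y0, x1, y1)
            var = box_mean(S2, x0, y0, x1, y1) - m * m
            a = var / (var + eps)          # ~1 at edges, ~0 in flat areas
            out[y][x] = a * img[y][x] + (1 - a) * m
    return out

step = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]   # a vertical step edge
out = local_linear_filter(step)
print(out[4][0], out[4][3], out[4][4])
```

On the step edge the output stays near 0 and 1 on either side instead of blurring to intermediate values as a plain box filter would; LLSURE's contribution is choosing the window parameters by minimizing Stein's unbiased risk estimate rather than fixing eps by hand.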

  13. Studies of superresolution range-Doppler imaging

    NASA Astrophysics Data System (ADS)

    Zhu, Zhaoda; Ye, Zhenru; Wu, Xiaoqing; Yin, Jun; She, Zhishun

    1993-02-01

    This paper presents three superresolution imaging methods: the linear prediction data extrapolation DFT (LPDEDFT), the dynamic optimization linear least squares (DOLLS), and the Hopfield neural network nonlinear least squares (HNNNLS). Live data of a metalized scale model B-52 aircraft, mounted on a rotating platform in a microwave anechoic chamber, have been processed in this way, as has a flying Boeing-727 aircraft. The imaging results indicate that, compared to the conventional Fourier method, these superresolution approaches yield either higher resolution for the same effective bandwidth of transmitted signals and total rotation angle, or images of equal quality from a smaller bandwidth and total rotation angle. Moreover, the methods are compared with respect to their resolution capability and computational complexity.
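The linear-prediction data-extrapolation idea behind LPDEDFT can be shown in its simplest noiseless form: a single complex exponential obeys a one-tap recursion x[n] = a·x[n-1], so a predictor fitted on the measured aperture extrapolates the data exactly, effectively widening the aperture before the DFT and sharpening the spectral peak. Real range-Doppler data require higher-order predictors (e.g. Burg) and noise handling; the signal below is a synthetic illustration:

```python
import cmath

w = 0.7                                          # scatterer frequency, rad/sample
x = [cmath.exp(1j * w * n) for n in range(16)]   # measured aperture

# least-squares fit of the one-tap predictor x[n] = a * x[n-1]
num = sum(x[n] * x[n - 1].conjugate() for n in range(1, 16))
den = sum(abs(x[n - 1]) ** 2 for n in range(1, 16))
a = num / den                                    # equals exp(1j*w) here

ext = list(x)
for n in range(16, 32):                          # extrapolate to double aperture
    ext.append(a * ext[-1])

true_tail = [cmath.exp(1j * w * n) for n in range(16, 32)]
err = max(abs(e - t) for e, t in zip(ext[16:], true_tail))
print(err)                                       # extrapolation error
```

Taking the DFT of `ext` instead of `x` halves the mainlobe width, which is exactly the resolution gain the extrapolation-based method trades against its extra computation.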

  14. Monovalent Streptavidin that Senses Oligonucleotides

    PubMed Central

    Wang, Jingxian; Kostic, Natasa; Stojanovic, Milan N.

    2013-01-01

    We report a straightforward chemical route to monovalent streptavidin, a valuable reagent for imaging. The one-step process is based on a (tris)biotinylated oligonucleotide blocking three of streptavidin’s four biotin binding sites. Further, the complex is highly sensitive to single-base differences: perfectly matched oligonucleotides trigger dissociation of the biotin-streptavidin interaction at higher rates than single-base mismatches do. These unique properties and the ease of synthesis open up broad opportunities for practical applications in imaging and biosensing. PMID:23606329

  15. Planetary Data Workshop, Part 2

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Technical aspects of the Planetary Data System (PDS) are addressed. Methods and tools for maintaining and accessing large, complex sets of data are discussed. The specific software and applications needed for processing imaging and non-imaging science data are reviewed. The need for specific software that provides users with information on the location and geometry of scientific observations is discussed. Computer networks and the user interface to the PDS are covered, along with the computer hardware available to this data system.

  16. CHARACTERIZING TRANSFER OF SURFACE RESIDUES TO SKIN USING A VIDEO-FLUORESCENT IMAGING TECHNIQUE

    EPA Science Inventory

    Surface-to-skin transfer of contaminants is a complex process. For children's residential exposure, transfer of chemicals from contaminated surfaces such as floors and furniture is potentially significant. Once on the skin, residues and contaminated particles can be transferred b...

  17. Pseudo-shading technique in the two-dimensional domain: a post-processing algorithm for enhancing the Z-buffer of a three-dimensional binary image.

    PubMed

    Tan, A C; Richards, R

    1989-01-01

    Three-dimensional (3D) medical graphics is becoming popular in clinical use on tomographic scanners. Research work in 3D reconstructive display of computerized tomography (CT) and magnetic resonance imaging (MRI) scans on conventional computers has produced many so-called pseudo-3D images. The quality of these images depends on the rendering algorithm, the coarseness of the digitized object, the number of grey levels and the image screen resolution. CT and MRI data are fundamentally voxel based and they produce images that are coarse because of the resolution of the data acquisition system. 3D images produced by the Z-buffer depth shading technique suffer loss of detail when complex objects with fine textural detail need to be displayed. Attempts have been made to improve the display of voxel objects, and existing techniques have shown the improvement possible using these post-processing algorithms. The improved rendering technique works on the Z-buffer image to generate a shaded image using a single light source in any direction. The effectiveness of the technique in generating a shaded image has been shown to be a useful means of presenting 3D information for clinical use.
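Post-shading a Z-buffer with a single movable light source, as described, can be sketched by deriving surface normals from the depth gradients and applying Lambertian shading. The gradient spacing, normal scaling, and light vector here are illustrative choices, not the paper's exact algorithm:

```python
import math

def shade(zbuf, light=(0.0, 0.0, 1.0)):
    """Lambertian shading of a Z-buffer: normals from central depth
    differences, intensity = max(0, n.l) with a normalized light vector."""
    h, w = len(zbuf), len(zbuf[0])
    lx, ly, lz = light
    norm = math.sqrt(lx * lx + ly * ly + lz * lz)
    lx, ly, lz = lx / norm, ly / norm, lz / norm
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dzdx = zbuf[y][min(x + 1, w - 1)] - zbuf[y][max(x - 1, 0)]
            dzdy = zbuf[min(y + 1, h - 1)][x] - zbuf[max(y - 1, 0)][x]
            nx, ny, nz = -dzdx, -dzdy, 2.0   # unnormalized surface normal
            n = math.sqrt(nx * nx + ny * ny + nz * nz)
            out[y][x] = max(0.0, (nx * lx + ny * ly + nz * lz) / n)
    return out

flat = [[5.0] * 4 for _ in range(4)]                     # flat surface
ramp = [[float(x) for x in range(4)] for _ in range(4)]  # 45-degree ramp
print(shade(flat)[1][1], shade(ramp)[1][1])
```

Because the shading is computed entirely in the 2D depth image, the light can be moved interactively without revisiting the voxel data, which is the appeal of this post-processing approach.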

  18. Micro-Spectroscopic Imaging of Lignin-Carbohydrate Complexes in Plant Cell Walls and Their Migration During Biomass Pretreatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Yining; Zhao, Shuai; Wei, Hui

    2015-04-27

    In lignocellulosic biomass, lignin is the second most abundant biopolymer. In plant cell walls, lignin is associated with polysaccharides to form lignin-carbohydrate complexes (LCC). LCC have been considered to be a major factor that negatively affects the process of deconstructing biomass to simple sugars by cellulosic enzymes. Here, we report a micro-spectroscopic approach that combines fluorescence lifetime imaging microscopy and Stimulated Raman Scattering microscopy to probe in situ lignin concentration and conformation at each cell wall layer. This technique does not require extensive sample preparation or any external labels. Using poplar as a feedstock, for example, we observe variation of LCC in untreated tracheid poplar cell walls. The redistribution of LCC at tracheid poplar cell wall layers is also investigated when the chemical linkages between lignin and hemicellulose are cleaved during pretreatment. Our study would provide new insights into further improvement of the biomass pretreatment process.

  19. A Virtual Reality Visualization Tool for Neuron Tracing

    PubMed Central

    Usher, Will; Klacansky, Pavol; Federer, Frederick; Bremer, Peer-Timo; Knoll, Aaron; Angelucci, Alessandra; Pascucci, Valerio

    2017-01-01

    Tracing neurons in large-scale microscopy data is crucial to establishing a wiring diagram of the brain, which is needed to understand how neural circuits in the brain process information and generate behavior. Automatic techniques often fail for large and complex datasets, and connectomics researchers may spend weeks or months manually tracing neurons using 2D image stacks. We present a design study of a new virtual reality (VR) system, developed in collaboration with trained neuroanatomists, to trace neurons in microscope scans of the visual cortex of primates. We hypothesize that using consumer-grade VR technology to interact with neurons directly in 3D will help neuroscientists better resolve complex cases and enable them to trace neurons faster and with less physical and mental strain. We discuss both the design process and technical challenges in developing an interactive system to navigate and manipulate terabyte-sized image volumes in VR. Using a number of different datasets, we demonstrate that, compared to widely used commercial software, consumer-grade VR presents a promising alternative for scientists. PMID:28866520

  20. A Virtual Reality Visualization Tool for Neuron Tracing.

    PubMed

    Usher, Will; Klacansky, Pavol; Federer, Frederick; Bremer, Peer-Timo; Knoll, Aaron; Yarch, Jeff; Angelucci, Alessandra; Pascucci, Valerio

    2018-01-01

    Tracing neurons in large-scale microscopy data is crucial to establishing a wiring diagram of the brain, which is needed to understand how neural circuits in the brain process information and generate behavior. Automatic techniques often fail for large and complex datasets, and connectomics researchers may spend weeks or months manually tracing neurons using 2D image stacks. We present a design study of a new virtual reality (VR) system, developed in collaboration with trained neuroanatomists, to trace neurons in microscope scans of the visual cortex of primates. We hypothesize that using consumer-grade VR technology to interact with neurons directly in 3D will help neuroscientists better resolve complex cases and enable them to trace neurons faster and with less physical and mental strain. We discuss both the design process and technical challenges in developing an interactive system to navigate and manipulate terabyte-sized image volumes in VR. Using a number of different datasets, we demonstrate that, compared to widely used commercial software, consumer-grade VR presents a promising alternative for scientists.

  1. Preparing images for publication: part 2.

    PubMed

    Bengel, Wolfgang; Devigus, Alessandro

    2006-08-01

    The transition from conventional to digital photography presents many advantages for authors and photographers in the field of dentistry, but also many complexities and potential problems. No uniform procedures for authors and publishers exist at present for producing high-quality dental photographs. This two-part article aims to provide guidelines for preparing images for publication and improving communication between these two parties. Part 1 provided information about basic color principles, factors that can affect color perception, and digital color management. Part 2 describes the camera setup, discusses how to take a photograph suitable for publication, and outlines steps for the image editing process.

  2. Results from the Mars Pathfinder camera.

    PubMed

    Smith, P H; Bell, J F; Bridges, N T; Britt, D T; Gaddis, L; Greeley, R; Keller, H U; Herkenhoff, K E; Jaumann, R; Johnson, J R; Kirk, R L; Lemmon, M; Maki, J N; Malin, M C; Murchie, S L; Oberst, J; Parker, T J; Reid, R J; Sablotny, R; Soderblom, L A; Stoker, C; Sullivan, R; Thomas, N; Tomasko, M G; Wegryn, E

    1997-12-05

    Images of the martian surface returned by the Imager for Mars Pathfinder (IMP) show a complex surface of ridges and troughs covered by rocks that have been transported and modified by fluvial, aeolian, and impact processes. Analysis of the spectral signatures in the scene (at 440- to 1000-nanometer wavelength) reveals three types of rock and four classes of soil. Upward-looking IMP images of the predawn sky show thin, bluish clouds that probably represent water ice forming on local atmospheric haze (opacity approximately 0.5). Haze particles are about 1 micrometer in radius and the water vapor column abundance is about 10 precipitable micrometers.

  3. [An integrated segmentation method for 3D ultrasound carotid artery].

    PubMed

    Yang, Xin; Wu, Huihui; Liu, Yang; Xu, Hongwei; Liang, Huageng; Cai, Wenjuan; Fang, Mengjie; Wang, Yujie

    2013-07-01

    An integrated segmentation method for the carotid artery in 3D ultrasound was proposed. The 3D ultrasound image was sliced into transverse, coronal and sagittal 2D images at the carotid bifurcation point. The three images were then processed respectively, and the carotid artery contours and thickness were obtained. This paper tries to overcome the disadvantages of current computer-aided diagnosis methods, such as high computational complexity and easily introduced subjective errors. The proposed method can obtain the overall carotid artery information rapidly, accurately and completely, and could be transferred to clinical usage for atherosclerosis diagnosis and prevention.

  4. 3-D surface reconstruction of patient specific anatomic data using a pre-specified number of polygons.

    PubMed

    Aharon, S; Robb, R A

    1997-01-01

    Virtual reality environments provide highly interactive, natural control of the visualization process, significantly enhancing the scientific value of the data produced by medical imaging systems. Due to the computational and real time display update requirements of virtual reality interfaces, however, the complexity of organ and tissue surfaces which can be displayed is limited. In this paper, we present a new algorithm for the production of a polygonal surface containing a pre-specified number of polygons from patient or subject specific volumetric image data. The advantage of this new algorithm is that it effectively tiles complex structures with a specified number of polygons selected to optimize the trade-off between surface detail and real-time display rates.

  5. Modeling Patient-Specific Deformable Mitral Valves.

    PubMed

    Ginty, Olivia; Moore, John; Peters, Terry; Bainbridge, Daniel

    2018-06-01

    Medical imaging has advanced enormously over the last few decades, revolutionizing patient diagnostics and care. At the same time, additive manufacturing has emerged as a means of reproducing physical shapes and models previously not possible. In combination, they have given rise to 3-dimensional (3D) modeling, an entirely new technology for physicians. In an era in which 3D imaging has become a standard for aiding in the diagnosis and treatment of cardiac disease, this visualization now can be taken further by bringing the patient's anatomy into physical reality as a model. The authors describe the generalized process of creating a model of cardiac anatomy from patient images and their experience creating patient-specific dynamic mitral valve models. This involves a combination of image processing software and 3D printing technology. In this article, the complexity of 3D modeling is described and the decision-making process for cardiac anesthesiologists is summarized. The management of cardiac disease has been altered with the emergence of 3D echocardiography, and 3D modeling represents the next paradigm shift. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Spectral-element simulations of wave propagation in complex exploration-industry models: Imaging and adjoint tomography

    NASA Astrophysics Data System (ADS)

    Luo, Y.; Nissen-Meyer, T.; Morency, C.; Tromp, J.

    2008-12-01

    Seismic imaging in the exploration industry is often based upon ray-theoretical migration techniques (e.g., Kirchhoff) or other ideas which neglect some fraction of the seismic wavefield (e.g., wavefield continuation for acoustic-wave first arrivals) in the inversion process. In a companion paper we discuss the possibility of solving the full physical forward problem (i.e., including visco- and poroelastic, anisotropic media) using the spectral-element method. With such a tool at hand, we can readily apply the adjoint method to tomographic inversions, i.e., iteratively improving an initial 3D background model to fit the data. In the context of this inversion process, we draw connections between kernels in adjoint tomography and basic imaging principles in migration. We show that the images obtained by migration are nothing but particular kinds of adjoint kernels (mainly density kernels). Migration is basically a first step in the iterative inversion process of adjoint tomography. We apply the approach to basic 2D problems involving layered structures, overthrusting faults, topography, salt domes, and poroelastic regions.

  7. Building Development Monitoring in Multitemporal Remotely Sensed Image Pairs with Stochastic Birth-Death Dynamics.

    PubMed

    Benedek, C; Descombes, X; Zerubia, J

    2012-01-01

    In this paper, we introduce a new probabilistic method which integrates building extraction with change detection in remotely sensed image pairs. A global optimization process attempts to find the optimal configuration of buildings, considering the observed data, prior knowledge, and interactions between the neighboring building parts. We present methodological contributions in three key issues: 1) We implement a novel object-change modeling approach based on Multitemporal Marked Point Processes, which simultaneously exploits low-level change information between the time layers and object-level building description to recognize and separate changed and unaltered buildings. 2) To address the challenges of data heterogeneity in aerial and satellite image repositories, we construct a flexible hierarchical framework which can create various building appearance models from different elementary feature-based modules. 3) To simultaneously ensure the convergence, optimality, and computational complexity constraints raised by the increased data quantity, we adopt the quick Multiple Birth and Death optimization technique for change detection purposes, and propose a novel nonuniform stochastic object birth process which generates relevant objects with higher probability based on low-level image features.

  8. Comparative study of different approaches for multivariate image analysis in HPTLC fingerprinting of natural products such as plant resin.

    PubMed

    Ristivojević, Petar; Trifković, Jelena; Vovk, Irena; Milojković-Opsenica, Dušanka

    2017-01-01

    With the introduction of phytochemical fingerprint analysis as a method of screening complex natural products for the presence of bioactive compounds, the use of chemometric classification methods, the application of powerful scanning, image capturing and processing devices and algorithms, and advances in the development of novel stationary phases and various separation modalities, high-performance thin-layer chromatography (HPTLC) fingerprinting is becoming an attractive and fruitful field of separation science. Multivariate image analysis is crucial for proper data acquisition. In the current study, different image processing procedures were studied and compared in detail on the example of HPTLC chromatograms of plant resins. Variables such as gray intensities of pixels along the solvent front, peak areas and mean peak values were used as input data and compared to obtain the best classification models. Important steps in image analysis (baseline removal, denoising, target peak alignment and normalization) were pointed out. A numerical data set based on the mean values of selected bands and the intensities of pixels along the solvent front proved to be the most convenient for planar-chromatographic profiling, although it requires at least basic knowledge of image processing methodology, and could be proposed for further investigation in HPTLC fingerprinting. Copyright © 2016 Elsevier B.V. All rights reserved.
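    The preprocessing steps this record highlights (baseline removal and normalization) can be illustrated on a 1D densitogram extracted from one chromatogram lane. The following is a minimal sketch, not the authors' pipeline: the rolling-minimum baseline estimate and the window size are assumptions chosen for illustration.

```python
def preprocess_lane(intensity, baseline_window=25):
    """Illustrative preprocessing of a 1D densitogram from an HPTLC image
    lane: rolling-minimum baseline removal followed by max-normalization.
    The window size is a hypothetical default, not a value from the paper."""
    pad = baseline_window // 2
    # Edge-pad so the rolling window is defined at both ends of the lane.
    padded = [intensity[0]] * pad + list(intensity) + [intensity[-1]] * pad
    baseline = [min(padded[i:i + baseline_window]) for i in range(len(intensity))]
    corrected = [v - b for v, b in zip(intensity, baseline)]
    # Normalize so lanes from different plates become comparable.
    peak = max(corrected)
    return [c / peak for c in corrected] if peak > 0 else corrected

signal = [10, 11, 30, 80, 32, 12, 11, 10, 25, 60, 24, 11]
profile = preprocess_lane(signal, baseline_window=3)
```

    Denoising and peak alignment, the other steps named above, would slot in between baseline removal and normalization in a fuller pipeline.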

  9. Underwater image enhancement based on the dark channel prior and attenuation compensation

    NASA Astrophysics Data System (ADS)

    Guo, Qingwen; Xue, Lulu; Tang, Ruichun; Guo, Lingrui

    2017-10-01

    To address two problems of underwater imaging, fog effect and color cast, an Improved Segmentation Dark Channel Prior (ISDCP) defogging method is proposed. Fog effects, caused by the mass refraction of light in water during imaging, lead to image blurring, while color cast results from the different degrees of attenuation that light of different wavelengths undergoes while traveling through water. The proposed method integrates ISDCP and quantitative histogram stretching techniques into the image enhancement procedure. First, a threshold value is set during the refinement of the transmission maps to identify the original mismatching and to conduct a differentiated defogging process. Second, a method of judging the propagation distance of light is adopted to estimate the attenuation of energy during underwater propagation. Finally, the image histogram is stretched quantitatively in the red, green and blue channels according to the degree of attenuation in each color channel. ISDCP reduces computational complexity and improves defogging efficiency to meet real-time requirements. Qualitative and quantitative comparisons for several different underwater scenes reveal that the proposed method significantly improves visibility compared with previous methods.
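    The final step described above, per-channel histogram stretching, can be sketched as follows. This is a simplified stand-in: it uses a plain min-max stretch rather than the paper's attenuation-weighted stretching, and the per-pixel dark channel shown alongside it uses a patch size of 1 instead of a local window.

```python
def stretch_channels(pixels):
    """Per-channel min-max histogram stretch of RGB tuples to [0, 255].
    A simplified stand-in for the attenuation-weighted stretching described
    above; the per-channel attenuation weights are not reproduced here."""
    channels = list(zip(*pixels))
    stretched = []
    for ch in channels:
        lo, hi = min(ch), max(ch)
        span = (hi - lo) or 1  # avoid division by zero on flat channels
        stretched.append([round((v - lo) * 255 / span) for v in ch])
    return [tuple(px) for px in zip(*stretched)]

def dark_channel(pixels):
    """Crude per-pixel dark channel (patch size 1): the minimum over R, G, B."""
    return [min(p) for p in pixels]

underwater = [(50, 100, 150), (100, 150, 200), (150, 200, 250)]
enhanced = stretch_channels(underwater)
```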

  10. Ingestible roasted barley for contrast-enhanced photoacoustic imaging in animal and human subjects.

    PubMed

    Wang, Depeng; Lee, Dong Hyeun; Huang, Haoyuan; Vu, Tri; Lim, Rachel Su Ann; Nyayapathi, Nikhila; Chitgupi, Upendra; Liu, Maggie; Geng, Jumin; Xia, Jun; Lovell, Jonathan F

    2018-08-01

    Photoacoustic computed tomography (PACT) is an emerging imaging modality. While many contrast agents have been developed for PACT, these typically cannot immediately be used in humans due to the lengthy regulatory process. We screened two hundred types of ingestible foodstuff samples for photoacoustic contrast with 1064 nm pulse laser excitation, and identified roasted barley as a promising candidate. Twenty brands of roasted barley were further screened to identify the one with the strongest contrast, presumably based on complex chemical modifications incurred during the roasting process. Individual roasted barley particles could be detected through 3.5 cm of chicken-breast tissue and through the whole hand of healthy human volunteers. With PACT, but not ultrasound imaging, a single grain of roasted barley was detected in a field of hundreds of non-roasted particles. Upon oral administration, roasted barley enabled imaging of the gut and peristalsis in mice. Prepared roasted barley tea could be detected through 2.5 cm chicken breast tissue. When barley tea was administered to humans, photoacoustic imaging visualized swallowing dynamics in healthy volunteers. Thus, roasted barley represents an edible foodstuff that should be considered for photoacoustic contrast imaging of swallowing and gut processes, with immediate potential for clinical translation. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Detecting tympanostomy tubes from otoscopic images via offline and online training.

    PubMed

    Wang, Xin; Valdez, Tulio A; Bi, Jinbo

    2015-06-01

    Tympanostomy tube placement is commonly used as a surgical treatment for otitis media. Following placement, regularly scheduled follow-ups to check the status of the tympanostomy tubes are an important part of treatment. The complexity of the follow-up care lies mainly in identifying the presence and patency of the tympanostomy tube. An automated tube detection program would largely reduce care costs and enhance the clinical efficiency of ear, nose and throat specialists and general practitioners. In this paper, we develop a computer vision system that automatically detects a tympanostomy tube in an otoscopic image of the eardrum. The system comprises an offline classifier training process followed by a real-time refinement stage performed at the point of care. The offline training process constructs a three-layer cascaded classifier, with each layer reflecting specific characteristics of the tube. The real-time refinement process enables end users to interact with and adjust the system over time based on their otoscopic images and patient care. The support vector machine (SVM) algorithm was applied to train all of the classifiers. Empirical evaluation of the proposed system on both high-quality hospital images and low-quality internet images demonstrates its effectiveness. The offline classifier, trained using 215 images, achieved 90% accuracy in classifying otoscopic images with and without a tympanostomy tube, and the real-time refinement process improved the classification accuracy by 3-5% based on an additional 20 images. Copyright © 2015 Elsevier Ltd. All rights reserved.
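    The early-rejecting structure of a cascaded classifier like the one described above can be sketched in a few lines. The layer tests here are hypothetical stand-ins; the paper trains each layer as an SVM on real otoscopic image features.

```python
def cascade_predict(features, layers):
    """Cascade in the spirit of the three-layer classifier described above:
    an image is labeled tube-present only if it passes every layer."""
    for layer in layers:
        if not layer(features):
            return False  # reject as soon as any layer fails
    return True

# Hypothetical stand-in layers (real ones would be trained SVMs):
layers = [
    lambda f: f["circularity"] > 0.6,    # tube lumen is roughly circular
    lambda f: f["edge_contrast"] > 0.3,  # visible rim against the eardrum
    lambda f: f["area_fraction"] < 0.5,  # tube occupies a bounded area
]

tube = {"circularity": 0.8, "edge_contrast": 0.5, "area_fraction": 0.1}
no_tube = {"circularity": 0.2, "edge_contrast": 0.5, "area_fraction": 0.1}
```

    The cascade design matters for efficiency: most negatives are rejected by the first, cheapest test, so later layers run only on promising candidates.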

  12. Eliminating chromatic aberration of lens and recognition of thermal images with artificial intelligence applications

    NASA Astrophysics Data System (ADS)

    Fang, Yi-Chin; Wu, Bo-Wen; Lin, Wei-Tang; Jon, Jen-Liung

    2007-11-01

    Resolution and color are two main criteria for assessing an optical digital image, but improving the overall image quality of an optical system is difficult because of the many constraints, such as size, materials and operating environment, that limit optical system design. It is therefore important to raise the capability of recognizing blurred images, degraded by aberrations and noise or by viewing conditions such as distant and small targets, using artificial intelligence techniques such as genetic algorithms and neural networks, while decreasing the chromatic aberration of the optical system and without adding complex calculations to the image processing. This study achieves the goal of improving recognition and classification of low-quality images from the optical system and its environment in an integrated, economical and effective manner.

  13. Rapid Disaster Damage Estimation

    NASA Astrophysics Data System (ADS)

    Vu, T. T.

    2012-07-01

    The experiences from recent disaster events have shown that detailed information derived from high-resolution satellite images can meet the requirements of damage analysts and disaster management practitioners. The richer information contained in such high-resolution images, however, increases the complexity of image analysis. As a result, few image analysis solutions can be used practically under time pressure in the context of post-disaster and emergency response. To fill this gap in the use of remote sensing for disaster response, this research develops a rapid high-resolution satellite mapping solution built upon a dual-scale contextual framework to support damage estimation after a catastrophe. The target objects are buildings (or building blocks) and their condition. At the coarse processing level, statistical region merging is deployed to group pixels into a number of coarse clusters. Based on a majority rule over vegetation, water and shadow indices, irrelevant clusters can be eliminated; the remaining clusters likely consist of building structures and other objects. At the fine processing level, within each remaining cluster, smaller objects are formed using morphological analysis, and numerous indicators including spectral, textural and shape indices are computed for use in rule-based object classification. The computation time of raster-based analysis depends strongly on the image size, in other words the number of processed pixels. Breaking the analysis into two processing levels reduces the number of pixels processed and the redundancy of processing irrelevant information, and it allows a data- and task-parallel implementation. The performance is demonstrated with QuickBird images of an area of Phanga, Thailand affected by the 2004 Indian Ocean tsunami.
The developed solution will be implemented in different platforms as well as a web processing service for operational uses.
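    The coarse-level majority-rule elimination described above can be sketched as a simple per-cluster test. This is an illustration only: the index names and thresholds below are assumptions, not values from the paper.

```python
def keep_cluster(ndvi, ndwi, veg_thresh=0.3, water_thresh=0.2):
    """Majority-rule filter: discard a cluster when the majority of its
    pixels look like vegetation or water. Thresholds are illustrative."""
    n = len(ndvi)
    vegetated = sum(v > veg_thresh for v in ndvi)  # pixels flagged as vegetation
    watery = sum(w > water_thresh for w in ndwi)   # pixels flagged as water
    # Keep only clusters where neither class holds a majority.
    return vegetated <= n / 2 and watery <= n / 2

# A likely building cluster (low vegetation/water indices) survives;
# a mostly vegetated cluster is eliminated before fine-level analysis.
building_cluster = keep_cluster([0.1, 0.2, 0.4, 0.1], [0.0, 0.1, 0.0, 0.1])
forest_cluster = keep_cluster([0.6, 0.7, 0.5, 0.2], [0.0, 0.1, 0.0, 0.1])
```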

  14. Comparison of binary mask defect printability analysis using virtual stepper system and aerial image microscope system

    NASA Astrophysics Data System (ADS)

    Phan, Khoi A.; Spence, Chris A.; Dakshina-Murthy, S.; Bala, Vidya; Williams, Alvina M.; Strener, Steve; Eandi, Richard D.; Li, Junling; Karklin, Linard

    1999-12-01

    As advanced process technologies in wafer fabs push patterning processes toward lower k1 factors for sub-wavelength resolution printing, reticles are required to use optical proximity correction (OPC) and phase-shifted masks (PSM) for resolution enhancement. For OPC/PSM mask technology, defect printability is one of the major concerns. Current reticle inspection tools available on the market are sometimes not capable of consistently differentiating between an OPC feature and a true random defect. Due to the process complexity and high cost associated with making OPC/PSM reticles, it is important for both mask shops and lithography engineers to understand the impact of different defect types and sizes on printability. The Aerial Image Measurement System (AIMS) has been used in mask shops for a number of years for reticle applications such as aerial image simulation and transmission measurement of repaired defects. The Virtual Stepper System (VSS) provides an alternative method for defect printability simulation and analysis using reticle images captured by an optical inspection or review system. In this paper, pre-programmed defects and repairs from a Defect Sensitivity Monitor (DSM) reticle with 200 nm minimum features (at 1x) are studied for printability. The resist lines simulated by AIMS and VSS are both compared to SEM images of resist wafers, qualitatively and quantitatively, using CD verification. Process window comparisons between unrepaired and repaired defects for both good and bad repair cases are shown, and the effect of mask repairs on resist pattern images for the binary mask case is discussed. AIMS simulation was done at International Sematech, Virtual Stepper simulation at Zygo, and resist wafers were processed at the AMD Submicron Development Center using a DUV lithographic process for 0.18 micrometer logic process technology.

  15. Geological mysteries on Ganymede

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This image shows some unusual features on the surface of Jupiter's moon, Ganymede. NASA's Galileo spacecraft imaged this region as it passed Ganymede during its second orbit through the Jovian system. The region is located at 31 degrees latitude, 186 degrees longitude in the north of Marius Regio, a region of ancient dark terrain, and is near the border of a large swathe of younger, heavily tectonised bright terrain known as Nippur Sulcus. Situated in the transitional region between these two terrain types, the area shown here contains many complex tectonic structures, and small fractures can be seen crisscrossing the image. North is to the top-left of the picture, and the sun illuminates the surface from the southeast. This image is centered on an unusual semicircular structure about 33 kilometers (20 miles) across. A 38 kilometer (24 miles) long, remarkably linear feature cuts across its northern extent, and a wide east-west fault system marks its southern boundary. The origin of these features is the subject of much debate among scientists analyzing the data. Was the arcuate structure part of a larger feature? Is the straight lineament the result of internal or external processes? Scientists continue to study this data in order to understand the surface processes occurring on this complex satellite.

    The image covers an area approximately 80 kilometers (50 miles) by 52 kilometers (32 miles) across. The resolution is 189 meters (630 feet) per picture element. The images were taken on September 6, 1996 at a range of 9,971 kilometers (6,232 miles) by the solid state imaging (CCD) system on NASA's Galileo spacecraft.

    The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech).

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov.

  16. Graphical user interface to optimize image contrast parameters used in object segmentation - biomed 2009.

    PubMed

    Anderson, Jeffrey R; Barrett, Steven F

    2009-01-01

    Image segmentation is the process of isolating distinct objects within an image. Computer algorithms have been developed to aid in the process of object segmentation, but a completely autonomous segmentation algorithm has yet to be developed [1], because computers cannot understand images and recognize complex objects within them. However, computer segmentation methods [2] requiring user input have been developed to quickly segment objects in serially sectioned images, such as magnetic resonance images (MRI) and confocal laser scanning microscope (CLSM) images. In these cases, the segmentation process becomes a powerful tool in visualizing the 3D nature of an object. The user input is an important part of improving the performance of many segmentation methods. A double-threshold segmentation method has been investigated [3] to separate objects in gray-scale images where the gray level of the object lies among the gray levels of the background. In order to best determine the threshold values for this segmentation method, the image must be manipulated for optimal contrast. The same is true of other segmentation and edge detection methods as well. Typically, the better the image contrast, the better the segmentation results. This paper describes a graphical user interface (GUI) that allows the user to easily change the image contrast parameters that will optimize the performance of subsequent object segmentation. This approach makes use of the fact that the human brain is extremely effective at object recognition and understanding. The GUI provides the user with the ability to define the gray-scale range of the object of interest. The lower and upper bounds of this range are used in a histogram stretching process to improve image contrast. Also, the user can interactively modify the gamma correction factor that provides a non-linear distribution of gray-scale values, while observing the corresponding changes to the image. 
This interactive approach gives the user the power to make optimal choices in the contrast enhancement parameters.
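    The contrast adjustment the GUI exposes, a histogram stretch between user-chosen bounds followed by gamma correction, can be sketched for 8-bit gray values. This is a minimal illustration of the standard operations, not the paper's code; parameter names are assumptions.

```python
def contrast_stretch(gray, lower, upper, gamma=1.0):
    """Stretch gray levels between user-chosen bounds [lower, upper] to
    [0, 255], then apply gamma correction for a non-linear redistribution
    of gray-scale values, mirroring the GUI workflow described above."""
    out = []
    for v in gray:
        t = (v - lower) / (upper - lower)
        t = min(max(t, 0.0), 1.0)  # clip values outside the chosen range
        t = t ** gamma             # gamma = 1 leaves the mapping linear
        out.append(round(t * 255))
    return out

stretched = contrast_stretch([0, 50, 100, 150, 200], lower=50, upper=150)
```

    In an interactive setting, `lower`, `upper`, and `gamma` would be bound to GUI sliders, with the displayed image recomputed on each change.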

  17. Improved accuracy of markerless motion tracking on bone suppression images: preliminary study for image-guided radiation therapy (IGRT)

    NASA Astrophysics Data System (ADS)

    Tanaka, Rie; Sanada, Shigeru; Sakuta, Keita; Kawashima, Hiroki

    2015-05-01

    The bone suppression technique, based on advanced image processing, can suppress the conspicuity of bones on chest radiographs, creating soft-tissue images comparable to those obtained with the dual-energy subtraction technique. This study was performed to evaluate the usefulness of bone suppression image processing in image-guided radiation therapy, and demonstrates the improved accuracy of markerless motion tracking on bone suppression images. Chest fluoroscopic images of nine patients with lung nodules during respiration were obtained using a flat-panel detector system (120 kV, 0.1 mAs/pulse, 5 fps). Commercial bone suppression image processing software was applied to the fluoroscopic images to create corresponding bone suppression images. Regions of interest were manually located on lung nodules and automatic target tracking was conducted based on the template matching technique. To evaluate the accuracy of target tracking, the maximum tracking error in the resulting images was compared with that in conventional fluoroscopic images. The tracking errors were halved in eight of nine cases. The average maximum tracking errors in bone suppression and conventional fluoroscopic images were 1.3 ± 1.0 and 3.3 ± 3.3 mm, respectively. The bone suppression technique was especially effective in the lower lung area, where pulmonary vessels, bronchi, and ribs show complex movements. The technique improved tracking accuracy without special equipment or implanted fiducial markers, and with only a small additional dose to the patient. Bone suppression fluoroscopy is thus a potential means of measuring respiratory displacement of the target. This paper was presented at RSNA 2013 and the work was carried out at Kanazawa University, Japan.
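    The template matching step at the core of the tracking described above can be sketched as an exhaustive normalized cross-correlation search. This is a toy version on small integer arrays, not the study's implementation; a real tracker would restrict the search to a region of interest around the previous nodule position.

```python
def match_template(image, template):
    """Find the (x, y) offset in `image` (list of rows) where `template`
    scores highest under normalized cross-correlation."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    t_flat = [v for row in template for v in row]
    t_mean = sum(t_flat) / len(t_flat)
    best, best_pos = -2.0, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            patch = [image[y + dy][x + dx] for dy in range(h) for dx in range(w)]
            p_mean = sum(patch) / len(patch)
            num = sum((p - p_mean) * (t - t_mean) for p, t in zip(patch, t_flat))
            den = (sum((p - p_mean) ** 2 for p in patch)
                   * sum((t - t_mean) ** 2 for t in t_flat)) ** 0.5
            score = num / den if den else 0.0  # flat patches score zero
            if score > best:
                best, best_pos = score, (x, y)
    return best_pos

# A distinctive 2x2 target embedded at offset (1, 2) in a 4x4 frame.
image = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [0, 9, 8, 0],
    [0, 7, 6, 0],
]
template = [[9, 8], [7, 6]]
position = match_template(image, template)
```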

  18. Composition of the cellular infiltrate in patients with simple and complex appendicitis.

    PubMed

    Gorter, Ramon R; Wassenaar, Emma C E; de Boer, Onno J; Bakx, Roel; Roelofs, Joris J T H; Bunders, Madeleine J; van Heurn, L W Ernst; Heij, Hugo A

    2017-06-15

    It is now well established that there are two types of appendicitis: simple (nonperforating) and complex (perforating). This study evaluates differences in the composition of the immune cellular infiltrate in children with simple and complex appendicitis. A total of 47 consecutive children undergoing appendectomy for acute appendicitis between January 2011 and December 2012 were included. Intraoperative criteria were used to identify patients with either simple or complex appendicitis and were confirmed histopathologically. Immune histochemical techniques were used to identify immune cell markers in the appendiceal specimens. Digital imaging analysis was performed using Image J. In the specimens of patients with complex appendicitis, significantly more myeloperoxidase positive cells (neutrophils) (8.7% versus 1.2%, P < 0.001) were detected compared to patients with a simple appendicitis. In contrast, fewer CD8+ T cells (0.4% versus 1.3%, P = 0.016), CD20 + cells (2.9% versus 9.0%, P = 0.027), and CD21 + cells (0.2% versus 0.6%, P = 0.028) were present in tissue from patients with complex compared to simple appendicitis. The increase in proinflammatory innate cells and decrease of adaptive cells in patients with complex appendicitis suggest potential aggravating processes in complex appendicitis. Further research into the underlying mechanisms may identify novel biomarkers to be able to differentiate simple and complex appendicitis. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. The Study of Residential Areas Extraction Based on GF-3 Texture Image Segmentation

    NASA Astrophysics Data System (ADS)

    Shao, G.; Luo, H.; Tao, X.; Ling, Z.; Huang, Y.

    2018-04-01

    This study uses standard stripe, dual-polarization SAR images from GF-3 as the basic data, and compares and analyzes processes and methods for residential area extraction based on texture segmentation of GF-3 images. Processing of the GF-3 images includes radiometric calibration, complex data conversion, multi-look processing and image filtering. A suitability analysis of different filtering methods showed that the Kuan filter is effective for extracting residential areas. Texture feature vectors were then calculated and analyzed using the GLCM (Gray Level Co-occurrence Matrix); the parameters include the moving window size, step size and angle, and the results show that a window size of 11*11, a step of 1 and an angle of 0° are optimal for residential area extraction. Using the FNEA (Fractal Net Evolution Approach), the GLCM texture images were segmented and the residential areas were extracted by threshold setting. The extraction result was verified and assessed with a confusion matrix: overall accuracy is 0.897 and kappa is 0.881. Residential areas were also extracted by SVM classification of the GF-3 images; the overall accuracy is 0.09 lower than that of the texture-segmentation-based extraction method. We conclude that residential area extraction based on multi-scale segmentation of GF-3 SAR texture images is simple and highly accurate. Since it is difficult to obtain multi-spectral remote sensing images in southern China, where the weather is cloudy and rainy throughout the year, this approach has practical reference value.
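    The GLCM computation the study tunes, with a step of 1 and an angle of 0° (horizontal neighbor pairs), can be sketched on a tiny quantized window. This is an illustration of the standard definition, not the study's code; a real pipeline would first quantize SAR amplitudes to `levels` gray levels and slide the window across the image.

```python
def glcm_contrast(window, levels=4):
    """Build the gray-level co-occurrence matrix for offset (step 1,
    angle 0°) over a quantized window, then return its contrast statistic
    sum((i - j)^2 * p(i, j))."""
    glcm = [[0] * levels for _ in range(levels)]
    for row in window:
        for a, b in zip(row, row[1:]):  # horizontal neighbor pairs
            glcm[a][b] += 1
    total = sum(map(sum, glcm)) or 1
    return sum((i - j) ** 2 * glcm[i][j] / total
               for i in range(levels) for j in range(levels))

# Quantized 2x3 window: co-occurring pairs are (0,0), (0,1), (2,2), (2,3).
contrast = glcm_contrast([[0, 0, 1], [2, 2, 3]])
```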

  20. A software platform for phase contrast x-ray breast imaging research.

    PubMed

    Bliznakova, K; Russo, P; Mettivier, G; Requardt, H; Popov, P; Bravin, A; Buliev, I

    2015-06-01

    To present and validate a computer-based simulation platform dedicated to phase contrast x-ray breast imaging research. The software platform, developed at the Technical University of Varna on the basis of a previously validated x-ray imaging software simulator, comprises modules for object creation and for x-ray image formation. These modules were updated to take the refractive index into account for phase contrast imaging, and to implement the Fresnel-Kirchhoff diffraction theory for the propagating x-ray waves. Projection images are generated in an in-line acquisition geometry. To test and validate the platform, several phantoms differing in their complexity were constructed and imaged at 25 keV and 60 keV at beamline ID17 of the European Synchrotron Radiation Facility. The software platform was used to design computational phantoms that mimic those used in the experimental study and to generate x-ray images in absorption and phase contrast modes. The visual and quantitative results of the validation process showed an overall good correlation between simulated and experimental images, and demonstrate the potential of this platform for research in phase contrast x-ray imaging of the breast. The application of the platform is demonstrated in a feasibility study in which phase contrast images of complex inhomogeneous and anthropomorphic breast phantoms are compared to x-ray images generated in absorption mode. The improved visibility of mammographic structures suggests further investigation and optimisation of phase contrast x-ray breast imaging, especially when abnormalities are present. The software platform can also be exploited for educational purposes. Copyright © 2015 Elsevier Ltd. All rights reserved.
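    The wave-propagation step such a simulator implements can be sketched with a direct 1D paraxial Fresnel-Kirchhoff summation. This toy version is not the platform's code: a production implementation would use FFT-based convolution over 2D fields, and every parameter value below is illustrative only.

```python
import cmath

def fresnel_1d(u0, dx, wavelength, z):
    """Propagate a sampled 1D field u0 a distance z by direct summation of
    the paraxial Fresnel kernel exp(i*k*(x - x')^2 / (2*z))."""
    k = 2 * cmath.pi / wavelength
    n = len(u0)
    out = []
    for j in range(n):
        acc = 0j
        for m, u in enumerate(u0):
            r2 = ((j - m) * dx) ** 2
            acc += u * cmath.exp(1j * k * r2 / (2 * z))
        out.append(acc * dx)  # discrete approximation of the integral
    return out

# Plane wave through a symmetric slit (illustrative x-ray-scale numbers):
# the diffracted field should remain symmetric about the slit center.
aperture = [0, 0, 1, 1, 1, 0, 0]
field = fresnel_1d(aperture, dx=1e-6, wavelength=5e-11, z=0.5)
```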

  1. Communication in diagnostic radiology: meeting the challenges of complexity.

    PubMed

    Larson, David B; Froehle, Craig M; Johnson, Neil D; Towbin, Alexander J

    2014-11-01

    As patients and information flow through the imaging process, value is added step-by-step when information is acquired, interpreted, and communicated back to the referring clinician. However, radiology information systems are often plagued with communication errors and delays. This article presents theories and recommends strategies to continuously improve communication in the complex environment of modern radiology. Communication theories, methods, and systems that have proven their effectiveness in other environments can serve as models for radiology.

  2. WHAMM links actin assembly via the Arp2/3 complex to autophagy.

    PubMed

    Kast, David J; Dominguez, Roberto

    2015-01-01

    Macroautophagy (hereafter autophagy) is the process by which cytosolic material destined for degradation is enclosed inside a double-membrane cisterna known as the autophagosome and processed for secretion and/or recycling. This process requires a large collection of proteins that converge on certain sites of the ER membrane to generate the autophagosome membrane. Recently, it was shown that actin accumulates around autophagosome precursors and could play a role in this process, but the mechanism and role of actin polymerization in autophagy were unknown. Here, we discuss our recent finding that the nucleation-promoting factor (NPF) WHAMM recruits and activates the Arp2/3 complex for actin assembly at sites of autophagosome formation on the ER. Using high-resolution, live-cell imaging, we showed that WHAMM forms dynamic puncta on the ER that comigrate with several autophagy markers, and propels the spiral movement of these puncta by an Arp2/3 complex-dependent actin comet tail mechanism. In starved cells, WHAMM accumulates at the interface between neighboring autophagosomes, whose number and size increases with WHAMM expression. Conversely, knocking down WHAMM, inhibiting the Arp2/3 complex or interfering with actin polymerization reduces the size and number of autophagosomes. These findings establish a link between Arp2/3 complex-mediated actin assembly and autophagy.

  3. Research on lossless compression of true color RGB image with low time and space complexity

    NASA Astrophysics Data System (ADS)

    Pan, ShuLin; Xie, ChengJun; Xu, Lin

    2008-12-01

    This paper eliminates correlated spatial and energy redundancy with a DWT lifting scheme and reduces image complexity with an algebraic transform among the RGB components. An improved Rice coding algorithm is proposed, which presents an enumerating DWT lifting scheme that fits images of any size through image renormalization. The algorithm codes and decodes the pixels of an image without backtracking. It supports LOCO-I and can also be applied to other coder/decoder designs. Simulation analysis indicates that the proposed method achieves high lossless image compression: compared with Lossless-JPG, PNG (Microsoft), PNG (Rene), PNG (Photoshop), PNG (Anix PicViewer), PNG (ACDSee), PNG (Ulead Photo Explorer), JPEG2000, PNG (KoDa Inc), SPIHT, and JPEG-LS, the lossless compression ratio improves by 45%, 29%, 25%, 21%, 19%, 17%, 16%, 15%, 11%, 10.5%, and 10%, respectively, on 24 RGB images provided by KoDa Inc. Running from main memory on a Pentium IV (2.20 GHz CPU, 256 MB RAM), the proposed coder is about 21 times faster than SPIHT with an efficiency gain of roughly 166%, and the decoder is about 17 times faster than SPIHT with an efficiency gain of roughly 128%.
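    The reversible lifting idea behind such schemes can be illustrated with the simplest case, an integer Haar (S-transform) lifting step. This is a generic sketch, not the paper's enumerating scheme: a predict step produces detail coefficients and an update step produces an integer approximation signal, and both steps undo exactly, which is what makes lifting suitable for lossless coding.

```python
import numpy as np

def haar_lift_forward(x):
    """One level of the integer Haar (S-transform) lifting scheme:
    predict (difference) then update (rounded mean), fully reversible."""
    even, odd = x[0::2].copy(), x[1::2].copy()
    d = odd - even                 # predict step: detail coefficients
    s = even + (d >> 1)            # update step: integer approximation
    return s, d

def haar_lift_inverse(s, d):
    """Exact inverse: undo the update, then undo the predict."""
    even = s - (d >> 1)
    odd = d + even
    out = np.empty(even.size + odd.size, dtype=even.dtype)
    out[0::2], out[1::2] = even, odd
    return out

rng = np.random.default_rng(1)
row = rng.integers(0, 256, size=64)       # one image row, 8-bit range
s, d = haar_lift_forward(row)
restored = haar_lift_inverse(s, d)
```

    Because every step uses only integer arithmetic applied symmetrically in forward and inverse order, reconstruction is bit-exact, with no backtracking over the pixel stream.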

  4. Food-pics: an image database for experimental research on eating and appetite.

    PubMed

    Blechert, Jens; Meule, Adrian; Busch, Niko A; Ohla, Kathrin

    2014-01-01

    Our current environment is characterized by the omnipresence of food cues. The sight and smell of real foods, but also graphical depictions of appetizing foods, can guide our eating behavior, for example, by eliciting food craving and influencing food choice. The influence of visual food cues on human information processing has been demonstrated by a growing body of studies employing food images across the disciplines of psychology, medicine, and neuroscience. However, currently used food image sets vary considerably across laboratories, and image characteristics (contrast, brightness, etc.) and food composition (calories, macronutrients, etc.) are often unspecified. These factors might have contributed to some of the inconsistencies of this research. To remedy this, we developed food-pics, a picture database comprising 568 food images and 315 non-food images along with detailed meta-data. A total of N = 1988 individuals with large variance in age and weight from German-speaking countries and North America provided normative ratings of valence, arousal, palatability, desire to eat, recognizability and visual complexity. Furthermore, data on macronutrients (g), energy density (kcal), and physical image characteristics (color composition, contrast, brightness, size, complexity) are provided. The food-pics image database is freely available under the Creative Commons license with the hope that the set will facilitate standardization and comparability across studies and advance experimental research on the determinants of eating behavior.

  5. Food-pics: an image database for experimental research on eating and appetite

    PubMed Central

    Blechert, Jens; Meule, Adrian; Busch, Niko A.; Ohla, Kathrin

    2014-01-01

    Our current environment is characterized by the omnipresence of food cues. The sight and smell of real foods, but also graphical depictions of appetizing foods, can guide our eating behavior, for example, by eliciting food craving and influencing food choice. The influence of visual food cues on human information processing has been demonstrated by a growing body of studies employing food images across the disciplines of psychology, medicine, and neuroscience. However, currently used food image sets vary considerably across laboratories, and image characteristics (contrast, brightness, etc.) and food composition (calories, macronutrients, etc.) are often unspecified. These factors might have contributed to some of the inconsistencies of this research. To remedy this, we developed food-pics, a picture database comprising 568 food images and 315 non-food images along with detailed meta-data. A total of N = 1988 individuals with large variance in age and weight from German-speaking countries and North America provided normative ratings of valence, arousal, palatability, desire to eat, recognizability and visual complexity. Furthermore, data on macronutrients (g), energy density (kcal), and physical image characteristics (color composition, contrast, brightness, size, complexity) are provided. The food-pics image database is freely available under the Creative Commons license with the hope that the set will facilitate standardization and comparability across studies and advance experimental research on the determinants of eating behavior. PMID:25009514

  6. Methodology for diagnosing of skin cancer on images of dermatologic spots by spectral analysis.

    PubMed

    Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué

    2015-10-01

    In this paper, a new methodology for diagnosing skin cancer from images of dermatologic spots using image processing is presented. Currently, skin cancer is one of the most frequent diseases in humans. The methodology is based on Fourier spectral analysis using filters such as the classic, inverse, and k-law nonlinear filters. The sample images were obtained by a medical specialist, and a new spectral technique is developed to obtain a quantitative measurement of the complex pattern found in cancerous skin spots. Finally, a spectral index is calculated to obtain a range of spectral indices defined for skin cancer. Our results show a confidence level of 95.4%.
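    A k-law nonlinear filter of the kind named above can be sketched directly from its usual definition: the Fourier modulus is raised to the power k while the phase is preserved. The function name and parameter values here are illustrative, not the authors' implementation.

```python
import numpy as np

def k_law_filter(img, k):
    """Nonlinear k-law filter: raise the Fourier modulus to the power k
    while keeping the phase. k = 1 is the identity; k < 1 compresses the
    spectrum's dynamic range (k = 0 is the phase-only limit)."""
    F = np.fft.fft2(img)
    modulus, phase = np.abs(F), np.angle(F)
    Fk = (modulus ** k) * np.exp(1j * phase)
    return np.real(np.fft.ifft2(Fk))

rng = np.random.default_rng(2)
spot = rng.random((32, 32))                # stand-in for a skin-spot image
identity = k_law_filter(spot, k=1.0)       # should reproduce the input
phase_only = k_law_filter(spot, k=0.0)     # edge/detail-emphasizing limit
```

    Intermediate values of k trade off between the original image and the phase-only result, which is what allows a spectral index to be tuned for discriminating the complex patterns of cancerous spots.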

  7. Methodology for diagnosing of skin cancer on images of dermatologic spots by spectral analysis

    PubMed Central

    Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué

    2015-01-01

    In this paper, a new methodology for diagnosing skin cancer from images of dermatologic spots using image processing is presented. Currently, skin cancer is one of the most frequent diseases in humans. The methodology is based on Fourier spectral analysis using filters such as the classic, inverse, and k-law nonlinear filters. The sample images were obtained by a medical specialist, and a new spectral technique is developed to obtain a quantitative measurement of the complex pattern found in cancerous skin spots. Finally, a spectral index is calculated to obtain a range of spectral indices defined for skin cancer. Our results show a confidence level of 95.4%. PMID:26504638

  8. Hypercomplex Fourier transforms of color images.

    PubMed

    Ell, Todd A; Sangwine, Stephen J

    2007-01-01

    Fourier transforms are a fundamental tool in signal and image processing, yet, until recently, there was no definition of a Fourier transform applicable to color images in a holistic manner. In this paper, hypercomplex numbers, specifically quaternions, are used to define a Fourier transform applicable to color images. The properties of the transform are developed, and it is shown that the transform may be computed using two standard complex fast Fourier transforms. The resulting spectrum is explained in terms of familiar phase and modulus concepts, and a new concept of hypercomplex axis. A method for visualizing the spectrum using color graphics is also presented. Finally, a convolution operational formula in the spectral domain is discussed.
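    The paper's key computational result, that the quaternion transform may be computed using two standard complex FFTs, can be sketched via the symplectic decomposition of an RGB image treated as a field of pure quaternions. The basis choice below (gray-axis mu1 plus an arbitrary perpendicular pair) is one common convention, not necessarily the paper's; the test checks only the basis-independent fact that the two complex spectra together carry exactly the image's energy.

```python
import numpy as np

# Orthonormal quaternion-axis basis: mu1 on the gray line, mu2 any
# perpendicular pure axis, mu3 completing the right-handed triple.
mu1 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
mu2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
mu3 = np.cross(mu1, mu2)

rng = np.random.default_rng(3)
rgb = rng.random((16, 16, 3))          # stand-in for a color image

# Per-pixel components of the pure quaternion R*i + G*j + B*k
# expressed in the (mu1, mu2, mu3) basis.
b = rgb @ mu1                          # "luminance"-like component
c = rgb @ mu2                          # chrominance component 1
d = rgb @ mu3                          # chrominance component 2

# Simplex and perplex parts as ordinary complex images ...
f1 = 1j * b                            # lives in the {1, mu1} plane
f2 = c + 1j * d                        # lives in the {mu2, mu3} plane

# ... and the two standard complex FFTs that together form the
# quaternion spectrum (here with an energy-preserving normalization).
F1 = np.fft.fft2(f1, norm="ortho")
F2 = np.fft.fft2(f2, norm="ortho")
```

    Because the basis is orthonormal and each FFT is unitary, the combined energy of the two complex spectra equals the energy of the RGB image, so no color information is lost in the decomposition.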

  9. Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement.

    PubMed

    Nguyen, N; Milanfar, P; Golub, G

    2001-01-01

    In many image restoration/resolution enhancement applications, the blurring process, i.e., point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problem from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation method (GCV). We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
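    The GCV criterion itself (before the paper's Lanczos and Gauss-quadrature acceleration) can be written down directly. This sketch applies it to a generic Tikhonov-regularized linear model to select the regularization parameter from the data alone; all names and values are illustrative, and a real deblurring problem would use a structured blur matrix in place of the random `X`.

```python
import numpy as np

def gcv_score(X, y, lam):
    """Generalized cross-validation score for Tikhonov-regularized least
    squares: n * ||(I - A)y||^2 / trace(I - A)^2, where
    A = X (X^T X + lam I)^-1 X^T is the influence matrix.
    (Scaling by n does not change the minimizer.)"""
    n, p = X.shape
    A = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    r = y - A @ y                      # residual of the regularized fit
    return n * (r @ r) / np.trace(np.eye(n) - A) ** 2

# Noisy linear model; GCV needs no held-out data or noise estimate.
rng = np.random.default_rng(4)
X = rng.standard_normal((80, 10))
y = X @ rng.standard_normal(10) + 0.5 * rng.standard_normal(80)
lams = np.logspace(-4, 4, 30)
scores = [gcv_score(X, y, lam) for lam in lams]
best_lam = lams[int(np.argmin(scores))]
```

    The paper's contribution is precisely that evaluating this score naively requires forming and tracing the n-by-n influence matrix, which Lanczos bidiagonalization and Gauss quadrature approximate cheaply for large images.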

  10. Construction of mammographic examination process ontology using bottom-up hierarchical task analysis.

    PubMed

    Yagahara, Ayako; Yokooka, Yuki; Jiang, Guoqian; Tsuji, Shintarou; Fukuda, Akihisa; Nishimoto, Naoki; Kurowarabi, Kunio; Ogasawara, Katsuhiko

    2018-03-01

    Describing complex mammography examination processes is important for improving the quality of mammograms. It is often difficult for experienced radiologic technologists to explain the process because their techniques depend on their experience and intuition. In our previous study, we analyzed the process using a new bottom-up hierarchical task analysis and identified key components of the process. Leveraging the results of the previous study, the purpose of this study was to construct a mammographic examination process ontology to formally describe the relationships between the process and image evaluation criteria to improve the quality of mammograms. First, we identified and created root classes: task, plan, and clinical image evaluation (CIE). Second, we described an "is-a" relation referring to the result of the previous study and the structure of the CIE. Third, the procedural steps in the ontology were described using the new properties: "isPerformedBefore," "isPerformedAfter," and "isPerformedAfterIfNecessary." Finally, the relationships between tasks and CIEs were described using the "isAffectedBy" property to represent the influence of the process on image quality. In total, there were 219 classes in the ontology. By introducing new properties related to the process flow, a sophisticated mammography examination process could be visualized. In relationships between tasks and CIEs, it became clear that the tasks affecting the evaluation criteria related to positioning were greater in number than those for image quality. We developed a mammographic examination process ontology that makes knowledge explicit for a comprehensive mammography process. Our research will support education and help promote knowledge sharing about mammography examination expertise.

  11. Milli-arcsecond images of the Herbig Ae star HD 163296

    NASA Astrophysics Data System (ADS)

    Renard, S.; Malbet, F.; Benisty, M.; Thiébaut, E.; Berger, J.-P.

    2010-09-01

    Context. The very close environments of young stars are the hosts of fundamental physical processes, such as planet formation, star-disk interactions, mass accretion, and ejection. The complex morphological structure of these environments has been confirmed by the now quite rich data sets obtained for a few objects by near-infrared long-baseline interferometry. Aims: We gathered numerous interferometric measurements for the young star HD 163296 with various interferometers (VLTI, IOTA, KeckI and CHARA), allowing for the first time an image independent of any a priori model to be reconstructed. Methods: Using the Multi-aperture image Reconstruction Algorithm (MiRA), we reconstruct images of HD 163296 in the H and K bands. We compare these images with reconstructed images obtained from simulated data using a physical model of the environment of HD 163296. Results: We obtain model-independent H and K-band images of the surroundings of HD 163296. The images detect several significant features that we can relate to an inclined asymmetric flared disk around HD 163296 with the strongest intensity at about 4-5 mas. Because of the incomplete spatial frequency coverage, we cannot state whether each of them individually is peculiar in any way. Conclusions: For the first time, milli-arcsecond images of the environment of a young star are produced. These images confirm that the morphology of the close environment of young stars is more complex than the simple models used in the literature so far.

  12. Venus - Lakshmi Planum

    NASA Technical Reports Server (NTRS)

    1990-01-01

    This image is a full-resolution mosaic of several Magellan images and is centered at 61 degrees north latitude and 341 degrees east longitude. The image is 250 kilometers wide (150 miles). The radar smooth region in the northern part of the image is Lakshmi Planum, a high plateau region roughly 3.5 kilometers (2.2 miles) above the mean planetary radius. Lakshmi Planum is ringed by intensely deformed terrain, some of which is shown in the southern portion of the image and is called Clotho Tessera. The 64-kilometer (40 mile) diameter circular feature in the image is a depression called Siddons and may be a volcanic caldera. This view is supported by the collapsed lava tubes surrounding the feature. By carefully studying this and other surrounding images scientists hope to discover what tectonic and volcanic processes formed this complex region. The solid black parts of the image represent data gaps that may be filled in by the Magellan extended mission.

  13. Guest Editorial Image Quality

    NASA Astrophysics Data System (ADS)

    Cheatham, Patrick S.

    1982-02-01

    The term image quality can, unfortunately, apply to anything from a public relations firm's discussion to a comparison between corner drugstores' film processing. If we narrow the discussion to optical systems, we clarify the problem somewhat, but only slightly. We are still faced with a multitude of image quality measures all different, and all couched in different terminology. Optical designers speak of MTF values, digital processors talk about summations of before and after image differences, pattern recognition engineers allude to correlation values, and radar imagers use side-lobe response values measured in decibels. Further complexity is introduced by terms such as information content, bandwidth, Strehl ratios, and, of course, limiting resolution. The problem is to compare these different yardsticks and try to establish some concrete ideas about evaluation of a final image. We need to establish the image attributes which are the most important to perception of the image in question and then begin to apply the different system parameters to those attributes.

  14. Optimizing MR imaging-guided navigation for focused ultrasound interventions in the brain

    NASA Astrophysics Data System (ADS)

    Werner, B.; Martin, E.; Bauer, R.; O'Gorman, R.

    2017-03-01

    MR imaging during transcranial MR imaging-guided Focused Ultrasound surgery (tcMRIgFUS) is challenging due to the complex ultrasound transducer setup and the water bolus used for acoustic coupling. Achievable image quality in the tcMRIgFUS setup using the standard body coil is significantly inferior to current neuroradiologic standards. As a consequence, MR image guidance for precise navigation in functional neurosurgical interventions using tcMRIgFUS is basically limited to the acquisition of MR coordinates of salient landmarks such as the anterior and posterior commissure for aligning a stereotactic atlas. Here, we show how improved MR image quality provided by a custom built MR coil and optimized MR imaging sequences can support imaging-guided navigation for functional tcMRIgFUS neurosurgery by visualizing anatomical landmarks that can be integrated into the navigation process to accommodate for patient specific anatomy.

  15. A Versatile Mounting Method for Long Term Imaging of Zebrafish Development.

    PubMed

    Hirsinger, Estelle; Steventon, Ben

    2017-01-26

    Zebrafish embryos offer an ideal experimental system to study complex morphogenetic processes due to their ease of accessibility and optical transparency. In particular, posterior body elongation is an essential process in embryonic development by which multiple tissue deformations act together to direct the formation of a large part of the body axis. In order to observe this process by long-term time-lapse imaging it is necessary to utilize a mounting technique that allows sufficient support to maintain samples in the correct orientation during transfer to the microscope and acquisition. In addition, the mounting must also provide sufficient freedom of movement for the outgrowth of the posterior body region without affecting its normal development. Finally, there must be a certain degree in versatility of the mounting method to allow imaging on diverse imaging set-ups. Here, we present a mounting technique for imaging the development of posterior body elongation in the zebrafish D. rerio. This technique involves mounting embryos such that the head and yolk sac regions are almost entirely included in agarose, while leaving out the posterior body region to elongate and develop normally. We will show how this can be adapted for upright, inverted and vertical light-sheet microscopy set-ups. While this protocol focuses on mounting embryos for imaging for the posterior body, it could easily be adapted for the live imaging of multiple aspects of zebrafish development.

  16. Multitemporal and Multiscaled Fractal Analysis of Landsat Satellite Data Using the Image Characterization and Modeling System (ICAMS)

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale A.; Emerson, Charles W.; Lam, Nina Siu-Ngan; Laymon, Charles A.

    1997-01-01

    The Image Characterization And Modeling System (ICAMS) is a public domain software package that is designed to provide scientists with innovative spatial analytical tools to visualize, measure, and characterize landscape patterns so that environmental conditions or processes can be assessed and monitored more effectively. In this study ICAMS has been used to evaluate how changes in fractal dimension, as a landscape characterization index, and resolution, are related to differences in Landsat images collected at different dates for the same area. Landsat Thematic Mapper (TM) data obtained in May and August 1993 over a portion of the Great Basin Desert in eastern Nevada were used for analysis. These data represent contrasting periods of peak "green-up" and "dry-down" for the study area. The TM data sets were converted into Normalized Difference Vegetation Index (NDVI) images to expedite analysis of differences in fractal dimension between the two dates. These NDVI images were also resampled to resolutions of 60, 120, 240, 480, and 960 meters from the original 30 meter pixel size, to permit an assessment of how fractal dimension varies with spatial resolution. Tests of fractal dimension for the two dates at various pixel resolutions show that the D values in the August image become increasingly complex as pixel size increases to 480 meters. The D values in the May image show an even more complex relationship to pixel size than that expressed in the August image. Fractal dimensions for a difference image computed for the May and August dates increase with pixel size up to a resolution of 120 meters, and then decline with increasing pixel size. This means that the greatest complexity in the difference images occurs around a resolution of 120 meters, which is analogous to the operational domain of changes in vegetation and snow cover that constitute differences between the two dates.
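    Fractal dimension estimates of the kind reported above are commonly computed by box counting. ICAMS's actual estimators (e.g. for continuous NDVI surfaces) differ in detail, so the following is only a generic sketch that recovers the expected dimensions for two trivial binary masks.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a binary mask: count the
    occupied boxes N(s) at several box sizes s and fit the slope of
    log N(s) against log(1/s)."""
    counts = []
    for s in sizes:
        # Reduce the mask to an occupancy grid of s-by-s boxes.
        h, w = mask.shape[0] // s, mask.shape[1] // s
        boxes = mask[: h * s, : w * s].reshape(h, s, w, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity checks: a filled square is a plain 2-D surface (D = 2) and a
# straight line is 1-D (D = 1); real landscapes fall in between.
filled = np.ones((256, 256), dtype=bool)
line = np.eye(256, dtype=bool)
d_filled = box_counting_dimension(filled)
d_line = box_counting_dimension(line)
```

    For remote sensing imagery, D values between these two extremes index how much spatial complexity the landscape pattern exhibits at the chosen resolution.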

  17. Live cell imaging of in vitro human trophoblast syncytialization.

    PubMed

    Wang, Rui; Dang, Yan-Li; Zheng, Ru; Li, Yue; Li, Weiwei; Lu, Xiaoyin; Wang, Li-Juan; Zhu, Cheng; Lin, Hai-Yan; Wang, Hongmei

    2014-06-01

    Human trophoblast syncytialization, a process of cell-cell fusion, is one of the most important yet least understood events during placental development. Investigating the fusion process in a placenta in vivo is very challenging given the complexity of this process. Application of primary cultured cytotrophoblast cells isolated from term placentas and BeWo cells derived from human choriocarcinoma provides a biphasic strategy for dissecting the mechanism of trophoblast cell fusion, as the former spontaneously fuse to form the multinucleated syncytium and the latter are capable of fusing upon treatment with forskolin (FSK). Live-cell imaging is a powerful tool that is widely used to investigate many physiological or pathological processes in various animal models or humans; however, to our knowledge, the mechanism of trophoblast cell fusion has not been studied using live-cell imaging. In this study, a live-cell imaging system was used to delineate the fusion process of primary term cytotrophoblast cells and BeWo cells. By using live staining with Hoechst 33342 or cytoplasmic dyes or by stably transfecting enhanced green fluorescent protein (EGFP) and DsRed2-Nuc reporter plasmids, we observed finger-like protrusions on the cell membranes of fusion partners before fusion and the exchange of cytoplasmic contents during fusion. In summary, this study provides the first video recording of the process of trophoblast syncytialization. Furthermore, the various live-cell imaging systems used in this study will help to yield molecular insights into the syncytialization process during placental development. © 2014 by the Society for the Study of Reproduction, Inc.

  18. A flexible software architecture for scalable real-time image and video processing applications

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty for reuse and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. A topic-based filtering in which messages are published to topics is used to route the messages from the publishers to the subscribers interested in a particular type of messages. The application layer provides a repository for reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
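    The topic-based publish/subscribe routing described for the messaging layer can be sketched in a few lines. This toy `MessageBus` (a name invented here) is shown in Python for brevity rather than in the authors' implementation language, and omits threading, queueing, and wildcard topics that a real-time system would need.

```python
from collections import defaultdict

class MessageBus:
    """Minimal topic-based publish/subscribe broker: publishers and
    subscribers never reference each other, and messages are routed
    purely by topic name."""

    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> handlers

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Only handlers subscribed to this exact topic see the message.
        for handler in self._subscribers[topic]:
            handler(message)

bus = MessageBus()
frames, stats = [], []
bus.subscribe("frames", frames.append)   # e.g. a visualization module
bus.subscribe("stats", stats.append)     # e.g. a logging module
bus.publish("frames", "frame-0001")
bus.publish("frames", "frame-0002")
bus.publish("stats", "mean=0.42")
```

    Because routing depends only on the topic string, acquisition, processing, and visualization modules can be added or replaced without touching each other, which is the decoupling the architecture relies on.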

  19. Neuroanatomical Correlates of Oral Reading in Acute Left Hemispheric Stroke

    ERIC Educational Resources Information Center

    Cloutman, Lauren L.; Newhart, Melisssa; Davis, Cameron L.; Heidler-Gary, Jennifer; Hillis, Argye E.

    2011-01-01

    Oral reading is a complex skill involving the interaction of orthographic, phonological, and semantic processes. Functional imaging studies with nonimpaired adult readers have identified a widely distributed network of frontal, inferior parietal, posterior temporal, and occipital brain regions involved in the task. However, while functional…

  20. Understanding the visual resource

    Treesearch

    Floyd L. Newby

    1971-01-01

    Understanding our visual resources involves a complex interweaving of motivational and cognitive processes; but, more important, it requires that we understand and can identify those characteristics of a landscape that influence the image formation process. From research conducted in Florida, three major variables were identified that appear to have significant effect...
