Sample records for imaging large scale

  1. Large-scale retrieval for medical image analytics: A comprehensive review.

    PubMed

    Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting

    2018-01-01

    Over the past decades, medical image analytics has been greatly facilitated by the explosion of digital imaging techniques, with huge amounts of medical images produced at ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of tackling the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at a large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to the major processes in the pipeline, including feature representation, feature indexing, and searching. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, with a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
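
    The pipeline stages this review names (feature representation, feature indexing, searching) can be illustrated with a minimal sketch; the random descriptor matrix and the scikit-learn index below are generic stand-ins, not systems from the survey:

    ```python
    # Minimal retrieval pipeline sketch: represent -> index -> search.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    features = rng.normal(size=(10_000, 128))  # stand-in for per-image descriptors

    # Indexing: a tree-based index supports sub-linear nearest-neighbor search.
    index = NearestNeighbors(n_neighbors=5, algorithm="ball_tree").fit(features)

    # Searching: retrieve the five database images closest to a query image.
    query = rng.normal(size=(1, 128))
    distances, neighbors = index.kneighbors(query)
    print(neighbors[0], distances[0])
    ```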

  2. Grid-Enabled Quantitative Analysis of Breast Cancer

    DTIC Science & Technology

    2010-10-01

    large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer...research, we designed a pilot study utilizing large-scale parallel Grid computing harnessing nationwide infrastructure for medical image analysis. Also

  3. Detection of large-scale concentric gravity waves from a Chinese airglow imager network

    NASA Astrophysics Data System (ADS)

    Lai, Chang; Yue, Jia; Xu, Jiyao; Yuan, Wei; Li, Qinzeng; Liu, Xiao

    2018-06-01

    Concentric gravity waves (CGWs) contain a broad spectrum of horizontal wavelengths and periods due to their instantaneous localized sources (e.g., deep convection, volcanic eruptions, or earthquakes). However, it is difficult to observe large-scale gravity waves of >100 km wavelength from the ground, owing to the limited field of view of a single camera and local bad weather. Previously, complete large-scale CGW imagery could only be captured by satellite observations. In the present study, we developed a novel method that assembles separate images and applies low-pass filtering to obtain temporal and spatial information about complete large-scale CGWs from a network of all-sky airglow imagers. Coordinated observations from five all-sky airglow imagers in Northern China were assembled and processed to study large-scale CGWs over a wide area (1800 km × 1400 km), focusing on the same two CGW events as Xu et al. (2015). Our algorithms yielded images of large-scale CGWs by filtering out the small-scale CGWs. The wavelengths, wave speeds, and periods of the CGWs were measured from a sequence of consecutive assembled images. Overall, the assembling and low-pass filtering algorithms can expand the airglow imager network to its full capacity regarding the detection of large-scale gravity waves.
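
    As a rough illustration of the filtering step described above, a Gaussian low-pass filter can suppress small-scale structure in an assembled mosaic so that only large-scale waves remain; the pixel spacing, cutoff, and scale-to-sigma conversion below are illustrative assumptions, not the paper's parameters:

    ```python
    # A minimal sketch, assuming the assembled mosaic is a 2-D array
    # with known horizontal pixel spacing.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def lowpass_largescale(mosaic, pixel_km=5.0, cutoff_km=100.0):
        """Attenuate structure with horizontal scales below ~cutoff_km."""
        sigma_px = cutoff_km / (2.0 * np.pi * pixel_km)  # rough scale-to-sigma rule
        return gaussian_filter(mosaic, sigma=sigma_px)

    mosaic = np.random.rand(280, 360)        # placeholder 1400 km x 1800 km mosaic
    largescale = lowpass_largescale(mosaic)  # small-scale CGWs filtered out
    ```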

  4. Grid-Enabled Quantitative Analysis of Breast Cancer

    DTIC Science & Technology

    2009-10-01

    large-scale, multi-modality computerized image analysis. The central hypothesis of this research is that large-scale image analysis for breast cancer...pilot study to utilize large-scale parallel Grid computing to harness the nationwide cluster infrastructure for optimization of medical image ... analysis parameters. Additionally, we investigated the use of cutting-edge data analysis/mining techniques as applied to Ultrasound, FFDM, and DCE-MRI Breast

  5. Large-Scale medical image analytics: Recent methodologies, applications and Future directions.

    PubMed

    Zhang, Shaoting; Metaxas, Dimitris

    2016-10-01

    Despite the ever-increasing amount and complexity of annotated medical image data, the development of large-scale medical image analysis algorithms has not kept pace with the need for methods that bridge the semantic gap between images and diagnoses. The goal of this position paper is to discuss and explore innovative and large-scale data science techniques in medical image analytics, which will benefit clinical decision-making and facilitate efficient medical data management. In particular, we advocate that the scale of image retrieval systems should be increased significantly, to the point at which interactive systems can be effective for knowledge discovery in potentially large databases of medical images. For clinical relevance, such systems should return results in real-time, incorporate expert feedback, and be able to cope with the size, quality, and variety of the medical images and their associated metadata for a particular domain. The design, development, and testing of such a framework can significantly impact interactive mining in medical image databases that are growing rapidly in size and complexity, and enable novel methods of analysis at much larger scales in an efficient, integrated fashion. Copyright © 2016. Published by Elsevier B.V.

  6. Cross-indexing of binary SIFT codes for large-scale image search.

    PubMed

    Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi

    2014-05-01

    In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost of storage. In addition, it benefits computational efficiency, since similarity can be measured efficiently by Hamming distance. In this paper, we propose a novel flexible scale invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised, and the generated binary codes are demonstrated to be distance-preserving. Besides, we propose a new search strategy to find target features based on cross-indexing in the binary SIFT space and the original SIFT space. We evaluate our approach on two publicly released data sets. Experiments on a large-scale partial-duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.
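
    The general idea (though not the exact FSB algorithm, whose magnitude-pattern details the abstract does not spell out) can be sketched as thresholding each descriptor against its own median and comparing the resulting codes by Hamming distance:

    ```python
    import numpy as np

    def binarize_sift(descriptors):
        """(n, 128) float SIFT descriptors -> (n, 128) binary codes."""
        med = np.median(descriptors, axis=1, keepdims=True)
        return (descriptors > med).astype(np.uint8)  # per-descriptor threshold

    def hamming(codes, query_code):
        """Hamming distance of one binary code against many."""
        return np.count_nonzero(codes != query_code, axis=1)

    db = binarize_sift(np.random.rand(100_000, 128))
    q = binarize_sift(np.random.rand(1, 128))[0]
    nearest = np.argsort(hamming(db, q))[:10]  # top-10 candidates by Hamming
    ```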

  7. Medical image classification based on multi-scale non-negative sparse coding.

    PubMed

    Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar

    2017-11-01

    With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. Firstly, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from different scale layers. Secondly, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain the discriminative sparse representation of medical images. Then, the obtained multi-scale non-negative sparse coding features are combined to form a multi-scale feature histogram as the final representation of a medical image. Finally, an SVM classifier is applied to conduct medical image classification. The experimental results demonstrate that our proposed algorithm can effectively utilize the multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree, and improve medical image classification performance. Copyright © 2017 Elsevier B.V. All rights reserved.
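
    The per-layer coding step can be sketched with projected ISTA, a standard solver for the non-negative sparse coding problem; the dictionary and sparsity weight below are illustrative stand-ins, and the paper's Fisher discriminative term is omitted:

    ```python
    import numpy as np

    def nn_sparse_code(x, D, lam=0.1, n_iter=200):
        """min_a 0.5*||x - D a||^2 + lam*||a||_1  subject to  a >= 0."""
        L = np.linalg.norm(D, ord=2) ** 2  # Lipschitz constant of the gradient
        a = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ a - x)
            a = np.maximum(a - (grad + lam) / L, 0.0)  # soft-threshold, clip at 0
        return a

    D = np.random.rand(64, 256)        # stand-in patch dictionary for one scale
    D /= np.linalg.norm(D, axis=0)     # unit-norm atoms
    code = nn_sparse_code(np.random.rand(64), D)
    ```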

  8. MANGO Imager Network Observations of Geomagnetic Storm Impact on Midlatitude 630 nm Airglow Emissions

    NASA Astrophysics Data System (ADS)

    Kendall, E. A.; Bhatt, A.

    2017-12-01

    The Midlatitude Allsky-imaging Network for GeoSpace Observations (MANGO) is a network of imagers filtered at 630 nm spread across the continental United States. MANGO is used to image large-scale airglow and aurora features and observes the generation, propagation, and dissipation of medium- and large-scale wave activity in the subauroral, mid- and low-latitude thermosphere. The network consists of seven all-sky imagers providing continuous coverage over the United States and extending south into Mexico. It sees high levels of medium- and large-scale wave activity due to both neutral and geomagnetic storm forcing. The geomagnetic storm observations largely fall into two categories: Stable Auroral Red (SAR) arcs and large-scale traveling ionospheric disturbances (LSTIDs). In addition, less often observed effects include anomalous airglow brightening, bright swirls, and frozen-in traveling structures. We will present an analysis of multiple events observed over four years of MANGO network operation. We will provide both statistics on the cumulative observations and a case study of the "Memorial Day Storm" of May 27, 2017.

  9. Inexpensive Tools To Quantify And Map Vegetative Cover For Large-Scale Research Or Management Decisions.

    USDA-ARS's Scientific Manuscript database

    Vegetative cover can be quantified quickly and consistently and often at lower cost with image analysis of color digital images than with visual assessments. Image-based mapping of vegetative cover for large-scale research and management decisions can now be considered with the accuracy of these met...

  10. Evaluation of nucleus segmentation in digital pathology images through large scale image synthesis

    NASA Astrophysics Data System (ADS)

    Zhou, Naiyun; Yu, Xiaxia; Zhao, Tianhao; Wen, Si; Wang, Fusheng; Zhu, Wei; Kurc, Tahsin; Tannenbaum, Allen; Saltz, Joel; Gao, Yi

    2017-03-01

    Digital histopathology images of more than 1 gigapixel are drawing more and more attention in the clinical, biomedical research, and computer vision fields. Among the multiple observable features spanning multiple scales in pathology images, nuclear morphology is one of the central criteria for diagnosis and grading. As a result, it is also the most studied target in image computing. A large number of research papers have been devoted to the problem of extracting nuclei from digital pathology images, which is the foundation of any further correlation study. However, the validation and evaluation of nucleus extraction have not yet been formulated rigorously and systematically. Some studies report human-verified segmentations with thousands of nuclei, whereas a single whole-slide image may contain up to a million. The main obstacle lies in the difficulty of obtaining such a large number of validated nuclei, which is essentially an impossible task for a pathologist. We propose a systematic validation and evaluation approach based on large-scale image synthesis. This could facilitate more quantitatively validated studies for the current and future histopathology image analysis field.
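
    Once ground truth is synthesized, scoring becomes fully automatic; below is a minimal sketch of one common overlap metric (the Dice coefficient), used here as an illustrative stand-in for the paper's full evaluation protocol:

    ```python
    import numpy as np

    def dice(gt_mask, seg_mask):
        """Dice overlap between two boolean masks."""
        inter = np.logical_and(gt_mask, seg_mask).sum()
        return 2.0 * inter / (gt_mask.sum() + seg_mask.sum())

    gt = np.zeros((512, 512), bool)
    gt[100:200, 100:200] = True    # synthetic nucleus with known extent
    seg = np.zeros((512, 512), bool)
    seg[110:210, 105:205] = True   # hypothetical algorithm output
    print(f"Dice = {dice(gt, seg):.3f}")
    ```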

  11. Large-scale image region documentation for fully automated image biomarker algorithm development and evaluation.

    PubMed

    Reeves, Anthony P; Xie, Yiting; Liu, Shuang

    2017-04-01

    With the advent of fully automated image analysis and modern machine learning methods, there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. This paper presents a method and implementation for facilitating such datasets that addresses the critical issue of size scaling for algorithm validation and evaluation; current evaluation methods that are usually used in academic studies do not scale to large datasets. This method includes protocols for the documentation of many regions in very large image datasets; the documentation may be incrementally updated by new image data and by improved algorithm outcomes. This method has been used for 5 years in the context of chest health biomarkers from low-dose chest CT images that are now being used with increasing frequency in lung cancer screening practice. The lung scans are segmented into over 100 different anatomical regions, and the method has been applied to a dataset of over 20,000 chest CT images. Using this framework, the computer algorithms have been developed to achieve over 90% acceptable image segmentation on the complete dataset.

  12. Imaging spectroscopy links aspen genotype with below-ground processes at landscape scales

    PubMed Central

    Madritch, Michael D.; Kingdon, Clayton C.; Singh, Aditya; Mock, Karen E.; Lindroth, Richard L.; Townsend, Philip A.

    2014-01-01

    Fine-scale biodiversity is increasingly recognized as important to ecosystem-level processes. Remote sensing technologies have great potential to estimate both biodiversity and ecosystem function over large spatial scales. Here, we demonstrate the capacity of imaging spectroscopy to discriminate among genotypes of Populus tremuloides (trembling aspen), one of the most genetically diverse and widespread forest species in North America. We combine imaging spectroscopy (AVIRIS) data with genetic, phytochemical, microbial and biogeochemical data to determine how intraspecific plant genetic variation influences below-ground processes at landscape scales. We demonstrate that both canopy chemistry and below-ground processes vary over large spatial scales (continental) according to aspen genotype. Imaging spectrometer data distinguish aspen genotypes through variation in canopy spectral signature. In addition, foliar spectral variation correlates well with variation in canopy chemistry, especially condensed tannins. Variation in aspen canopy chemistry, in turn, is correlated with variation in below-ground processes. Variation in spectra also correlates well with variation in soil traits. These findings indicate that forest tree species can create spatial mosaics of ecosystem functioning across large spatial scales and that these patterns can be quantified via remote sensing techniques. Moreover, they demonstrate the utility of using optical properties as proxies for fine-scale measurements of biodiversity over large spatial scales. PMID:24733949

  13. A Coarse-To-Fine Model for Airplane Detection from Large Remote Sensing Images Using Saliency Model and Deep Learning

    NASA Astrophysics Data System (ADS)

    Song, Z. N.; Sui, H. G.

    2018-04-01

    High-resolution remote sensing images carry important strategic information, especially for finding time-sensitive targets quickly, such as airplanes, ships, and cars. Often the first problem we face is how to rapidly judge whether a particular target is present in a large, arbitrary remote sensing image, rather than detecting it on a given image. Finding time-sensitive targets in a huge image is a great challenge: 1) complex backgrounds lead to high miss and false alarm rates in tiny-object detection in large-scale images; 2) unlike traditional image retrieval, the task is not just to compare the similarity of image blocks, but to quickly find specific targets in a huge image. In this paper, taking airplanes as an example, we present an effective method for searching for aircraft targets in large-scale optical remote sensing images. Firstly, we use an improved visual attention model that utilizes saliency detection and a line segment detector to quickly locate suspect regions in a large and complicated remote sensing image. Then, for each region, without a region proposal method, a single neural network that predicts bounding boxes and class probabilities directly from full images in one evaluation is adopted to search for small airplane objects. Unlike sliding window and region proposal-based techniques, the network sees the entire image (region) during training and test time, so it implicitly encodes contextual information about classes as well as their appearance. Experimental results show that the proposed method quickly identifies airplanes in large-scale images.
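
    For the coarse stage, one common fast saliency detector is the spectral residual method (Hou & Zhang, 2007); the sketch below shows that method only and omits the paper's line segment detector and improved attention model:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter, gaussian_filter

    def spectral_residual_saliency(gray):
        """gray: 2-D float array -> saliency map of the same shape."""
        F = np.fft.fft2(gray)
        log_amp = np.log1p(np.abs(F))                 # log amplitude spectrum
        residual = log_amp - uniform_filter(log_amp, size=3)
        F_sal = np.exp(residual) * np.exp(1j * np.angle(F))
        sal = np.abs(np.fft.ifft2(F_sal)) ** 2        # back to image domain
        return gaussian_filter(sal, sigma=3)          # smooth the saliency map

    saliency = spectral_residual_saliency(np.random.rand(1024, 1024))
    ```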

  14. Multi-scale approaches for high-speed imaging and analysis of large neural populations

    PubMed Central

    Ahrens, Misha B.; Yuste, Rafael; Peterka, Darcy S.; Paninski, Liam

    2017-01-01

    Progress in modern neuroscience critically depends on our ability to observe the activity of large neuronal populations with cellular spatial and high temporal resolution. However, two bottlenecks constrain efforts towards fast imaging of large populations. First, the resulting large video data is challenging to analyze. Second, there is an explicit tradeoff between imaging speed, signal-to-noise, and field of view: with current recording technology we cannot image very large neuronal populations with simultaneously high spatial and temporal resolution. Here we describe multi-scale approaches for alleviating both of these bottlenecks. First, we show that spatial and temporal decimation techniques based on simple local averaging provide order-of-magnitude speedups in spatiotemporally demixing calcium video data into estimates of single-cell neural activity. Second, once the shapes of individual neurons have been identified at fine scale (e.g., after an initial phase of conventional imaging with standard temporal and spatial resolution), we find that the spatial/temporal resolution tradeoff shifts dramatically: after demixing we can accurately recover denoised fluorescence traces and deconvolved neural activity of each individual neuron from coarse scale data that has been spatially decimated by an order of magnitude. This offers a cheap method for compressing this large video data, and also implies that it is possible to either speed up imaging significantly, or to “zoom out” by a corresponding factor to image order-of-magnitude larger neuronal populations with minimal loss in accuracy or temporal resolution. PMID:28771570
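
    The decimation step itself is simple block averaging; a minimal sketch for a (T, H, W) calcium video follows, with illustrative factors:

    ```python
    import numpy as np

    def decimate(video, t_factor=4, s_factor=4):
        """Downsample a (T, H, W) video by block means in time and space."""
        T, H, W = video.shape
        T2, H2, W2 = T // t_factor, H // s_factor, W // s_factor
        v = video[:T2 * t_factor, :H2 * s_factor, :W2 * s_factor]
        v = v.reshape(T2, t_factor, H2, s_factor, W2, s_factor)
        return v.mean(axis=(1, 3, 5))  # average within each space-time block

    small = decimate(np.random.rand(400, 256, 256))  # -> (100, 64, 64)
    ```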

  15. Multi-level discriminative dictionary learning with application to large scale image classification.

    PubMed

    Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua

    2015-10-01

    The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of the task (such as discrimination for classification tasks) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computation complexity when dealing with a large number of categories, making them less satisfactory in large-scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large-scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture the information of different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large-scale image classification.

  16. Weak gravitational lensing due to large-scale structure of the universe

    NASA Technical Reports Server (NTRS)

    Jaroszynski, Michal; Park, Changbom; Paczynski, Bohdan; Gott, J. Richard, III

    1990-01-01

    The effect of the large-scale structure of the universe on the propagation of light rays is studied. The development of the large-scale density fluctuations in the omega = 1 universe is calculated within the cold dark matter scenario using a smooth particle approximation. The propagation of about 10^6 random light rays between the redshift z = 5 and the observer was followed. It is found that the effect of shear is negligible, and the amplification of single images is dominated by the matter in the beam. The spread of amplifications is very small. Therefore, the filled-beam approximation is very good for studies of strong lensing by galaxies or clusters of galaxies. In the simulation, the column density was averaged over a comoving area of approximately (1/h Mpc)-squared. No case of strong gravitational lensing was found, i.e., no 'over-focused' image that would suggest that a few images might be present. Therefore, the large-scale structure of the universe as it is presently known does not produce multiple images with gravitational lensing on a scale larger than clusters of galaxies.

  17. Sub-Selective Quantization for Learning Binary Codes in Large-Scale Image Search.

    PubMed

    Li, Yeqing; Liu, Wei; Huang, Junzhou

    2018-06-01

    Recently, with the explosive growth of visual content on the Internet, large-scale image search has attracted intensive attention. It has been shown that mapping high-dimensional image descriptors to compact binary codes can lead to considerable efficiency gains in both storing images and computing their similarity. However, most existing methods still suffer from expensive training devoted to large-scale binary code learning. To address this issue, we propose a sub-selection based matrix manipulation algorithm, which can significantly reduce the computational cost of code learning. As case studies, we apply the sub-selection algorithm to several popular quantization techniques, including cases using linear and nonlinear mappings. Crucially, we can justify the resulting sub-selective quantization by proving its theoretic properties. Extensive experiments are carried out on three image benchmarks with up to one million samples, corroborating the efficacy of the sub-selective quantization method in terms of image retrieval.
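
    The core idea, as the abstract describes it, is to learn the quantizer from a sub-selection of rows rather than the full matrix; in this sketch a plain PCA-plus-sign quantizer stands in for the paper's linear and nonlinear quantization techniques:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100_000, 128))  # placeholder descriptor collection

    # Sub-selection: learn the quantizer from a small random subset only.
    sub = X[rng.choice(len(X), size=5_000, replace=False)]
    mean = sub.mean(axis=0)
    _, _, Vt = np.linalg.svd(sub - mean, full_matrices=False)
    P = Vt[:64].T  # 64-bit projection learned at a fraction of the full cost

    # Encoding: binarize the entire collection with the cheaply learned mapping.
    codes = (X - mean) @ P > 0
    ```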

  18. Large-scale image region documentation for fully automated image biomarker algorithm development and evaluation

    PubMed Central

    Reeves, Anthony P.; Xie, Yiting; Liu, Shuang

    2017-01-01

    With the advent of fully automated image analysis and modern machine learning methods, there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. This paper presents a method and implementation for facilitating such datasets that addresses the critical issue of size scaling for algorithm validation and evaluation; current evaluation methods that are usually used in academic studies do not scale to large datasets. This method includes protocols for the documentation of many regions in very large image datasets; the documentation may be incrementally updated by new image data and by improved algorithm outcomes. This method has been used for 5 years in the context of chest health biomarkers from low-dose chest CT images that are now being used with increasing frequency in lung cancer screening practice. The lung scans are segmented into over 100 different anatomical regions, and the method has been applied to a dataset of over 20,000 chest CT images. Using this framework, the computer algorithms have been developed to achieve over 90% acceptable image segmentation on the complete dataset. PMID:28612037

  19. Ship detection using STFT sea background statistical modeling for large-scale oceansat remote sensing image

    NASA Astrophysics Data System (ADS)

    Wang, Lixia; Pei, Jihong; Xie, Weixin; Liu, Jinyuan

    2018-03-01

    Large-scale oceansat remote sensing images cover a large area of sea surface, whose fluctuation can be considered a non-stationary process. The Short-Time Fourier Transform (STFT) is a suitable analysis tool for time-varying non-stationary signals. In this paper, a novel ship detection method using 2-D STFT sea-background statistical modeling for large-scale oceansat remote sensing images is proposed. First, the paper divides the large-scale oceansat remote sensing image into small sub-blocks, and 2-D STFT is applied to each sub-block individually. Second, the 2-D STFT spectra of the sub-blocks are studied, and a clear distinguishing characteristic between sea background and non-sea background is found. Finally, a statistical model for all valid frequency points in the STFT spectrum of the sea background is given, and a ship detection method based on 2-D STFT spectrum modeling is proposed. The experimental results show that the proposed algorithm can detect ship targets with a high recall rate and a low miss rate.
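
    A minimal sketch of the block-spectrum pipeline: split the image into sub-blocks, take per-block 2-D FFT magnitude spectra, and flag blocks whose spectra deviate from the sea-background statistics. The z-score test here is an illustrative stand-in for the paper's statistical model:

    ```python
    import numpy as np

    def block_spectra(img, bs=64):
        """Split img into bs x bs blocks and return their magnitude spectra."""
        H, W = (s // bs * bs for s in img.shape)
        blocks = img[:H, :W].reshape(H // bs, bs, W // bs, bs).swapaxes(1, 2)
        return np.abs(np.fft.fft2(blocks))  # FFT over each block's last two axes

    spectra = block_spectra(np.random.rand(4096, 4096))
    feat = spectra.reshape(-1, 64 * 64).mean(axis=1)  # one feature per block
    z = (feat - feat.mean()) / feat.std()             # background statistics
    candidates = np.flatnonzero(np.abs(z) > 3.0)      # blocks that may hold ships
    ```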

  20. DISRUPTION OF LARGE-SCALE NEURAL NETWORKS IN NON-FLUENT/AGRAMMATIC VARIANT PRIMARY PROGRESSIVE APHASIA ASSOCIATED WITH FRONTOTEMPORAL DEGENERATION PATHOLOGY

    PubMed Central

    Grossman, Murray; Powers, John; Ash, Sherry; McMillan, Corey; Burkholder, Lisa; Irwin, David; Trojanowski, John Q.

    2012-01-01

    Non-fluent/agrammatic primary progressive aphasia (naPPA) is a progressive neurodegenerative condition most prominently associated with slowed, effortful speech. A clinical imaging marker of naPPA is disease centered in the left inferior frontal lobe. We used multimodal imaging to assess large-scale neural networks underlying effortful expression in 15 patients with sporadic naPPA due to frontotemporal lobar degeneration (FTLD) spectrum pathology. Effortful speech in these patients is related in part to impaired grammatical processing, and to phonologic speech errors. Gray matter (GM) imaging shows frontal and anterior-superior temporal atrophy, most prominently in the left hemisphere. Diffusion tensor imaging reveals reduced fractional anisotropy in several white matter (WM) tracts mediating projections between left frontal and other GM regions. Regression analyses suggest disruption of three large-scale GM-WM neural networks in naPPA that support fluent, grammatical expression. These findings emphasize the role of large-scale neural networks in language, and demonstrate associated language deficits in naPPA. PMID:23218686

  1. Studying time of flight imaging through scattering media across multiple size scales (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Velten, Andreas

    2017-05-01

    Light scattering is a primary obstacle to optical imaging in a variety of different environments and across many size and time scales. Scattering complicates imaging on large scales when viewing through the atmosphere from airborne or spaceborne platforms, through marine fog, or through fog and dust in vehicle navigation, for example in self-driving cars. On smaller scales, scattering is the major obstacle when imaging through human tissue in biomedical applications. Despite the large variety of participating materials and size scales, light transport in all these environments is usually described with very similar scattering models that are defined by the same small set of parameters, including scattering and absorption length and phase function. We attempt a study of scattering, and of methods of imaging through scattering, across different scales and media, particularly with respect to the use of time-of-flight information. We can show that using time of flight, in addition to spatial information, provides distinct advantages in scattering environments. By performing a comparative study of scattering across scales and media, we are able to suggest scale models for scattering environments to aid lab research. We can also transfer knowledge and methodology between different fields.

  2. MOCC: A Fast and Robust Correlation-Based Method for Interest Point Matching under Large Scale Changes

    NASA Astrophysics Data System (ADS)

    Zhao, Feng; Huang, Qingming; Wang, Hao; Gao, Wen

    2010-12-01

    Similarity measures based on correlation have been used extensively for matching tasks. However, traditional correlation-based image matching methods are sensitive to rotation and scale changes. This paper presents a fast correlation-based method for matching two images with large rotation and significant scale changes. Multiscale oriented corner correlation (MOCC) is used to evaluate the degree of similarity between the feature points. The method is rotation invariant and capable of matching image pairs with scale changes up to a factor of 7. Moreover, MOCC is much faster in comparison with the state-of-the-art matching methods. Experimental results on real images show the robustness and effectiveness of the proposed method.

  3. Large-scale image-based profiling of single-cell phenotypes in arrayed CRISPR-Cas9 gene perturbation screens.

    PubMed

    de Groot, Reinoud; Lüthi, Joel; Lindsay, Helen; Holtackers, René; Pelkmans, Lucas

    2018-01-23

    High-content imaging using automated microscopy and computer vision allows multivariate profiling of single-cell phenotypes. Here, we present methods for the application of the CRISPR-Cas9 system in large-scale, image-based, gene perturbation experiments. We show that CRISPR-Cas9-mediated gene perturbation can be achieved in human tissue culture cells in a timeframe that is compatible with image-based phenotyping. We developed a pipeline to construct a large-scale arrayed library of 2,281 sequence-verified CRISPR-Cas9 targeting plasmids and profiled this library for genes affecting cellular morphology and the subcellular localization of components of the nuclear pore complex (NPC). We conceived a machine-learning method that harnesses genetic heterogeneity to score gene perturbations and identify phenotypically perturbed cells for in-depth characterization of gene perturbation effects. This approach enables genome-scale image-based multivariate gene perturbation profiling using CRISPR-Cas9. © 2018 The Authors. Published under the terms of the CC BY 4.0 license.

  4. Web tools for large-scale 3D biological images and atlases

    PubMed Central

    2012-01-01

    Background Large-scale volumetric biomedical image data of three or more dimensions are a significant challenge for distributed browsing and visualisation. Many images now exceed 10 GB, which for most users is too large to handle in terms of computer RAM and network bandwidth. This is aggravated when users need to access tens or hundreds of such images from an archive. Here we solve the problem for 2D section views through archive data, delivering compressed tiled images that enable users to browse through very large volume data in the context of a standard web browser. The system provides an interactive visualisation for grey-level and colour 3D images, including multiple image layers and spatial-data overlay. Results The standard Internet Imaging Protocol (IIP) has been extended to enable arbitrary 2D sectioning of 3D data as well as multi-layered images and indexed overlays. The extended protocol is termed IIP3D, and we have implemented a matching server to deliver the protocol and a series of Ajax/Javascript client codes that will run in an Internet browser. We have tested the server software on a low-cost linux-based server for image volumes up to 135GB and 64 simultaneous users. The section views are delivered with response times independent of scale and orientation. The exemplar client provided multi-layer image views with user-controlled colour-filtering and overlays. Conclusions Interactive browsing of arbitrary sections through large biomedical-image volumes is made possible by use of an extended internet protocol and efficient server-based image tiling. The tools open the possibility of enabling fast access to large image archives without the requirement of whole-image download and client computers with very large memory configurations. The system was demonstrated using a range of medical and biomedical image data extending up to 135GB for a single image volume. PMID:22676296

  5. Algorithm and Application of Gcp-Independent Block Adjustment for Super Large-Scale Domestic High Resolution Optical Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.

    2018-04-01

    The accurate positioning of optical satellite imagery without ground control is a precondition for remote sensing applications and small/medium-scale mapping in large areas abroad or with large-scale images. In this paper, aiming at the geometric features of optical satellite images, and based on a widely used optimization method for constrained problems called the Alternating Direction Method of Multipliers (ADMM) together with RFM least-squares block adjustment, we propose a GCP-independent block adjustment method for large-scale domestic high-resolution optical satellite imagery - GISIBA (GCP-Independent Satellite Imagery Block Adjustment), which is easy to parallelize and highly efficient. In this method, virtual "average" control points are built to solve the rank-defect problem and to support qualitative and quantitative analysis in block adjustment without control. The test results prove that the horizontal and vertical accuracies of multi-covered and multi-temporal satellite images are better than 10 m and 6 m, respectively. Meanwhile, the mosaicking problem of adjacent areas in large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, through experiments using GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy, and performance of our developed procedure are presented and studied in this paper.

  6. Generalized Chirp Scaling Combined with Baseband Azimuth Scaling Algorithm for Large Bandwidth Sliding Spotlight SAR Imaging

    PubMed Central

    Yi, Tianzhu; He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing

    2017-01-01

    This paper presents an efficient and precise imaging algorithm for large bandwidth sliding spotlight synthetic aperture radar (SAR). The existing sub-aperture processing method based on the baseband azimuth scaling (BAS) algorithm cannot cope with the high-order phase coupling along the range and azimuth dimensions. This coupling problem causes defocusing along the range and azimuth dimensions. This paper proposes a generalized chirp scaling (GCS)-BAS processing algorithm, which is based on the GCS algorithm. It successfully mitigates the defocusing along the range dimension of a sub-aperture of the large bandwidth sliding spotlight SAR, as well as the high-order phase coupling along the range and azimuth dimensions. Additionally, azimuth focusing can be achieved by this azimuth scaling method. Simulation results demonstrate the ability of the GCS-BAS algorithm to process large bandwidth sliding spotlight SAR data. It is proven that great improvements in the focus depth and imaging accuracy are obtained via the GCS-BAS algorithm. PMID:28555057

  7. X6.9-CLASS FLARE-INDUCED VERTICAL KINK OSCILLATIONS IN A LARGE-SCALE PLASMA CURTAIN AS OBSERVED BY THE SOLAR DYNAMICS OBSERVATORY/ATMOSPHERIC IMAGING ASSEMBLY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Srivastava, A. K.; Goossens, M.

    2013-11-01

    We present rare observational evidence of vertical kink oscillations in a laminar and diffused large-scale plasma curtain as observed by the Atmospheric Imaging Assembly on board the Solar Dynamics Observatory. The X6.9-class flare in active region 11263 on 2011 August 9 induces a global large-scale disturbance that propagates in a narrow lane above the plasma curtain and creates a low density region that appears as a dimming in the observational image data. This large-scale propagating disturbance acts as a non-periodic driver that interacts asymmetrically and obliquely with the top of the plasma curtain and triggers the observed oscillations. In the deeper layers of the curtain, we find evidence of vertical kink oscillations with two periods (795 s and 530 s). On the magnetic surface of the curtain where the density is inhomogeneous due to coronal dimming, non-decaying vertical oscillations are also observed (period ≈ 763-896 s). We infer that the global large-scale disturbance triggers vertical kink oscillations in the deeper layers as well as on the surface of the large-scale plasma curtain. The properties of the excited waves strongly depend on the local plasma and magnetic field conditions.

  8. Large scale particle image velocimetry with helium filled soap bubbles

    NASA Astrophysics Data System (ADS)

    Bosbach, Johannes; Kühn, Matthias; Wagner, Claus

    2009-03-01

    The application of Particle Image Velocimetry (PIV) to the measurement of flows on large scales is a challenging necessity, especially for the investigation of convective air flows. Combining helium-filled soap bubbles as tracer particles with high-power quality-switched solid-state lasers as light sources allows conducting PIV on scales of the order of several square meters. The technique was applied to mixed convection in a full-scale double-aisle aircraft cabin mock-up for validation of Computational Fluid Dynamics simulations.
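
    The PIV core, independent of the tracer choice, recovers displacement from the cross-correlation peak of corresponding interrogation windows between two frames; window size and data below are illustrative:

    ```python
    import numpy as np

    def window_displacement(win_a, win_b):
        """Integer-pixel displacement of win_a relative to win_b."""
        A = np.fft.fft2(win_a - win_a.mean())
        B = np.fft.fft2(win_b - win_b.mean())
        corr = np.fft.fftshift(np.real(np.fft.ifft2(A * np.conj(B))))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        return dy - win_a.shape[0] // 2, dx - win_a.shape[1] // 2

    rng = np.random.default_rng(1)
    frame_a = rng.random((32, 32))
    frame_b = np.roll(frame_a, (3, -2), axis=(0, 1))  # known shift for checking
    print(window_displacement(frame_a, frame_b))      # (-3, 2) in this convention
    ```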

  9. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    PubMed

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which led to software dependency issues during system upgrades or remote software installation. To address such issues, herein we describe recent innovations using containerization techniques with XNAT/DAX, which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from the system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.

  10. Enterprise PACS and image distribution.

    PubMed

    Huang, H K

    2003-01-01

    Around the world, because of the need to improve operational efficiency and deliver more cost-effective healthcare, many large-scale healthcare enterprises have been formed. Each of these enterprises groups hospitals, medical centers, and clinics together as one enterprise healthcare network. The management of these enterprises recognizes the importance of using PACS and image distribution as a key technology for cost-effective healthcare delivery at the enterprise level. As a result, many large-scale enterprise-level PACS/image distribution pilot studies, as well as full designs and implementations, are underway. The purpose of this paper is to provide readers with an overall view of the current status of enterprise PACS and image distribution. It reviews three large-scale enterprise PACS/image distribution systems in the USA, Germany, and South Korea. The concept of enterprise-level PACS/image distribution, its characteristics, and its ingredients are then discussed. Business models for enterprise-level implementation offered by the private medical imaging and system integration industry are highlighted. One current system under development, a healthcare enterprise-level chest tuberculosis (TB) screening system in Hong Kong, is described in detail. Copyright 2002 Elsevier Science Ltd.

  11. Full-color digitized holography for large-scale holographic 3D imaging of physical and nonphysical objects.

    PubMed

    Matsushima, Kyoji; Sonobe, Noriaki

    2018-01-01

    Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.

  12. Postprocessing Algorithm for Driving Conventional Scanning Tunneling Microscope at Fast Scan Rates.

    PubMed

    Zhang, Hao; Li, Xianqi; Chen, Yunmei; Park, Jewook; Li, An-Ping; Zhang, X-G

    2017-01-01

    We present an image postprocessing framework for Scanning Tunneling Microscope (STM) to reduce the strong spurious oscillations and scan line noise at fast scan rates and preserve the features, allowing an order of magnitude increase in the scan rate without upgrading the hardware. The proposed method consists of two steps for large scale images and four steps for atomic scale images. For large scale images, we first apply for each line an image registration method to align the forward and backward scans of the same line. In the second step we apply a "rubber band" model which is solved by a novel Constrained Adaptive and Iterative Filtering Algorithm (CIAFA). The numerical results on measurement from copper(111) surface indicate the processed images are comparable in accuracy to data obtained with a slow scan rate, but are free of the scan drift error commonly seen in slow scan data. For atomic scale images, an additional first step to remove line-by-line strong background fluctuations and a fourth step of replacing the postprocessed image by its ranking map as the final atomic resolution image are required. The resulting image restores the lattice image that is nearly undetectable in the original fast scan data.
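
    The first step for large-scale images can be sketched as a per-line shift search that maximizes the correlation between the forward scan line and its backward counterpart; this is a generic stand-in for the paper's image registration method:

    ```python
    import numpy as np

    def align_lines(fwd, bwd, max_shift=20):
        """Shift bwd so it best matches fwd (both 1-D scan lines)."""
        f = fwd - fwd.mean()
        b = bwd - bwd.mean()
        shifts = list(range(-max_shift, max_shift + 1))
        scores = [np.dot(f, np.roll(b, s)) for s in shifts]
        return np.roll(bwd, shifts[int(np.argmax(scores))])

    fwd = np.sin(np.linspace(0, 6, 256))
    bwd = np.roll(fwd, 7) + 0.01 * np.random.randn(256)  # lagged backward line
    aligned = align_lines(fwd, bwd)                      # shifted back by ~7
    ```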

  13. BSIFT: toward data-independent codebook for large scale image search.

    PubMed

    Zhou, Wengang; Li, Houqiang; Hong, Richang; Lu, Yijuan; Tian, Qi

    2015-03-01

    The Bag-of-Words (BoW) model based on the Scale Invariant Feature Transform (SIFT) has been widely used in large-scale image retrieval applications. Feature quantization by vector quantization plays a crucial role in the BoW model: it generates visual words from the high-dimensional SIFT features, so as to adapt to the inverted file structure for scalable retrieval. Traditional feature quantization approaches suffer from several issues, such as the necessity of visual codebook training, limited reliability, and update inefficiency. To avoid these problems, in this paper, a novel feature quantization scheme is proposed to efficiently quantize each SIFT descriptor to a descriptive and discriminative bit-vector, called binary SIFT (BSIFT). Our quantizer is independent of image collections. In addition, by taking the first 32 bits of the BSIFT as a code word, the generated BSIFT naturally lends itself to the classic inverted file structure for image indexing. Moreover, the quantization error is reduced by feature filtering, code word expansion, and query-sensitive mask shielding. Without any explicit codebook for quantization, our approach can be readily applied to image search in resource-limited scenarios. We evaluate the proposed algorithm for large-scale image search on two public image data sets. Experimental results demonstrate the index efficiency and retrieval accuracy of our approach.
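
    The abstract's indexing idea, using the first 32 bits as a code word for a classic inverted file and verifying candidates with full-code Hamming distance, can be sketched as follows (code length and data are placeholders):

    ```python
    import numpy as np
    from collections import defaultdict

    def pack_prefix(code_bits):
        """First 32 bits of a binary code -> one integer code word."""
        return int(np.packbits(code_bits[:32]).view(">u4")[0])

    codes = np.random.rand(100_000, 256) > 0.5  # stand-in binary SIFT codes
    inverted = defaultdict(list)
    for i, c in enumerate(codes):
        inverted[pack_prefix(c)].append(i)      # posting list per code word

    query = codes[42]
    candidates = inverted[pack_prefix(query)]
    dists = [(np.count_nonzero(codes[i] != query), i) for i in candidates]
    best = sorted(dists)[:10]                   # verify with full Hamming distance
    ```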

  14. Dynamics of Large-scale Coronal Structures as Imaged during the 2012 and 2013 Total Solar Eclipses

    NASA Astrophysics Data System (ADS)

    Alzate, Nathalia; Habbal, Shadia R.; Druckmüller, Miloslav; Emmanouilidis, Constantinos; Morgan, Huw

    2017-10-01

    White light images acquired at the peak of solar activity cycle 24, during the total solar eclipses of 2012 November 13 and 2013 November 3, serendipitously captured erupting prominences accompanied by CMEs. Application of state-of-the-art image processing techniques revealed the intricate details of two “atypical” large-scale structures, with strikingly sharp boundaries. By complementing the processed white light eclipse images with processed images from co-temporal Solar Dynamics Observatory/AIA and SOHO/LASCO observations, we show how the shape of these atypical structures matches the shape of faint CME shock fronts, which traversed the inner corona a few hours prior to the eclipse observations. The two events were not associated with any prominence eruption but were triggered by sudden brightening events on the solar surface accompanied by sprays and jets. The discovery of the indelible impact that frequent and innocuous transient events in the low corona can have on large-scale coronal structures was enabled by the radial span of the high-resolution white light eclipse images, starting from the solar surface out to several solar radii, currently unmatched by any coronagraphic instrumentation. These findings raise the interesting question as to whether large-scale coronal structures can ever be considered stationary. They also point to the existence of a much larger number of CMEs that goes undetected from the suite of instrumentation currently observing the Sun.

  15. Integrating concept ontology and multitask learning to achieve more effective classifier training for multilevel image annotation.

    PubMed

    Fan, Jianping; Gao, Yuli; Luo, Hangzai

    2008-03-01

    In this paper, we have developed a new scheme for automatically achieving multilevel annotations of large-scale images. To achieve a more complete representation of the various visual properties of the images, both global and local visual features are extracted for image content representation. To tackle the problem of huge intraconcept visual diversity, multiple types of kernels are integrated to characterize the diverse visual similarity relationships between the images more precisely, and a multiple kernel learning algorithm is developed for SVM image classifier training. To address the problem of huge interconcept visual similarity, a novel multitask learning algorithm is developed to learn the correlated classifiers for the sibling image concepts under the same parent concept and enhance their discrimination and adaptation power significantly. To tackle the problem of huge intraconcept visual diversity for the image concepts at the higher levels of the concept ontology, a novel hierarchical boosting algorithm is developed to learn their ensemble classifiers hierarchically. To assist users in selecting more effective hypotheses for image classifier training, we have developed a novel hyperbolic framework for large-scale image visualization and interactive hypothesis assessment. Our experiments on large-scale image collections have also obtained very positive results.

  16. Generating descriptive visual words and visual phrases for large-scale image applications.

    PubMed

    Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen

    2011-09-01

    Bag-of-visual Words (BoWs) representation has been applied to various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to text words. Notwithstanding its great success and wide adoption, the visual vocabulary created from single-image local descriptors is often shown to be not as effective as desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed of the visual words and their combinations which are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive to certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, more comparable with text words than the classic visual words. We apply the identified DVWs and DVPs in several applications, including large-scale near-duplicated image retrieval, image search re-ranking, and object recognition. The combination of DVW and DVP performs better than the state of the art in large-scale near-duplicated image retrieval in terms of accuracy, efficiency, and memory consumption. The proposed image search re-ranking algorithm, DWPRank, outperforms the state-of-the-art algorithm by 12.4% in mean average precision and is about 11 times faster.

  17. An efficient implementation of 3D high-resolution imaging for large-scale seismic data with GPU/CPU heterogeneous parallel computing

    NASA Astrophysics Data System (ADS)

    Xu, Jincheng; Liu, Wei; Wang, Jin; Liu, Linong; Zhang, Jianfeng

    2018-02-01

    De-absorption pre-stack time migration (QPSTM) compensates for the absorption and dispersion of seismic waves by introducing an effective Q parameter, thereby making it an effective tool for 3D, high-resolution imaging of seismic data. Although the optimal aperture obtained via stationary-phase migration reduces the computational cost of 3D QPSTM and yields 3D stationary-phase QPSTM, the associated computational efficiency is still the main problem in the processing of 3D, high-resolution images for real large-scale seismic data. In the current paper, we propose a division method for large-scale, 3D seismic data to optimize the performance of stationary-phase QPSTM on clusters of graphics processing units (GPU). Then, we designed an imaging point parallel strategy to achieve an optimal parallel computing performance. Afterward, we adopted an asynchronous double buffering scheme for multi-stream to perform the GPU/CPU parallel computing. Moreover, several key optimization strategies of computation and storage based on the compute unified device architecture (CUDA) were adopted to accelerate the 3D stationary-phase QPSTM algorithm. Compared with the initial GPU code, the implementation of the key optimization steps, including thread optimization, shared memory optimization, register optimization and special function units (SFU), greatly improved the efficiency. A numerical example employing real large-scale, 3D seismic data showed that our scheme is nearly 80 times faster than the CPU-QPSTM algorithm. Our GPU/CPU heterogeneous parallel computing framework significantly reduces the computational cost and facilitates 3D high-resolution imaging for large-scale seismic data.

  18. Detecting Multi-scale Structures in Chandra Images of Centaurus A

    NASA Astrophysics Data System (ADS)

    Karovska, M.; Fabbiano, G.; Elvis, M. S.; Evans, I. N.; Kim, D. W.; Prestwich, A. H.; Schwartz, D. A.; Murray, S. S.; Forman, W.; Jones, C.; Kraft, R. P.; Isobe, T.; Cui, W.; Schreier, E. J.

    1999-12-01

    Centaurus A (NGC 5128) is a giant early-type galaxy with a merger history, containing the nearest radio-bright AGN. Recent Chandra High Resolution Camera (HRC) observations of Cen A reveal X-ray multi-scale structures in this object with unprecedented detail and clarity. We show the results of an analysis of the Chandra data with smoothing and edge-enhancement techniques that allow us to enhance and quantify the multi-scale structures present in the HRC images. These techniques include an adaptive smoothing algorithm (Ebeling et al. 1999) and a multi-directional gradient detection algorithm (Karovska et al. 1994). The Ebeling et al. adaptive smoothing algorithm, which is incorporated in the CXC analysis software package, is a powerful tool for smoothing images containing complex structures at various spatial scales. The adaptively smoothed images of Centaurus A simultaneously show the high-angular-resolution bright structures at scales as small as an arcsecond and the extended faint structures as large as several arcminutes. The large-scale structures suggest complex symmetry, including a component possibly associated with the inner radio lobes (as suggested by the ROSAT HRI data, Dobereiner et al. 1996) and a separate component with an orthogonal symmetry that may be associated with the galaxy as a whole. The dust lane and the X-ray ridges are very clearly visible. The adaptively smoothed images and the edge-enhanced images also suggest several filamentary features, including a large filament-like structure extending as far as about 5 arcminutes to the north-west.

  19. Towards building high performance medical image management system for clinical trials

    NASA Astrophysics Data System (ADS)

    Wang, Fusheng; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel

    2011-03-01

    Medical image based biomarkers are being established for therapeutic cancer clinical trials, where image assessment is among the essential tasks. Large-scale image assessment is often performed by a large group of experts who retrieve images from a centralized image repository to workstations in order to mark up and annotate the images. In such an environment, it is critical to provide a high-performance image management system that supports efficient concurrent image retrieval in a distributed environment. There are several major challenges: high throughput of large-scale image data over the Internet from the server for multiple concurrent client users, efficient communication protocols for transporting data, and effective management of versioning of data for audit trails. We study the major bottlenecks of such a system, and propose and evaluate a solution that uses hybrid image storage with solid state drives and hard disk drives, RESTful Web Services based protocols for exchanging image data, and a database based versioning scheme for efficient archiving of image revision history. Our experiments show promising results for our methods, and our work provides a guideline for building enterprise-level high-performance medical image management systems.

  20. Image segmentation evaluation for very-large datasets

    NASA Astrophysics Data System (ADS)

    Reeves, Anthony P.; Liu, Shuang; Xie, Yiting

    2016-03-01

    With the advent of modern machine learning methods and fully automated image analysis there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual markings do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes are achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to fully automatically facilitate the measurement of a number of very important quantitative image biomarkers. The results indicate that we could achieve 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.

  1. Imaging detectors and electronics—a view of the future

    NASA Astrophysics Data System (ADS)

    Spieler, Helmuth

    2004-09-01

    Imaging sensors and readout electronics have made tremendous strides in the past two decades. The application of modern semiconductor fabrication techniques and the introduction of customized monolithic integrated circuits have made large-scale imaging systems routine in high-energy physics. This technology is now finding its way into other areas, such as space missions, synchrotron light sources, and medical imaging. I review current developments and discuss the promise and limits of new technologies. Several detector systems are described as examples of future trends. The discussion emphasizes semiconductor detector systems, but I also include recent developments for large-scale superconducting detector arrays.

  2. Large-scale weakly supervised object localization via latent category learning.

    PubMed

    Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve

    2015-04-01

    Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image condition, objects usually have large ambiguity with backgrounds. Besides, there is also a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) for large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with semantic object representation to learn the latent categories, which represent objects, object parts or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy that evaluates each category's discrimination. Finally, we propose online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Class (VOC) 2007 and the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2013 detection data sets shows that the method can improve the annotation precision by 10% over previous methods. More importantly, we achieve detection precision which outperforms previous results by a large margin and is competitive with the supervised deformable part model 5.0 baseline on both data sets.

  3. Postprocessing Algorithm for Driving Conventional Scanning Tunneling Microscope at Fast Scan Rates

    PubMed Central

    Zhang, Hao; Li, Xianqi; Park, Jewook; Li, An-Ping

    2017-01-01

    We present an image postprocessing framework for the Scanning Tunneling Microscope (STM) that reduces the strong spurious oscillations and scan line noise at fast scan rates while preserving features, allowing an order of magnitude increase in the scan rate without upgrading the hardware. The proposed method consists of two steps for large scale images and four steps for atomic scale images. For large scale images, we first apply, for each line, an image registration method to align the forward and backward scans of the same line. In the second step we apply a "rubber band" model which is solved by a novel Constrained Adaptive and Iterative Filtering Algorithm (CIAFA). The numerical results on measurements from a copper(111) surface indicate that the processed images are comparable in accuracy to data obtained with a slow scan rate, but are free of the scan drift error commonly seen in slow scan data. For atomic scale images, an additional first step to remove strong line-by-line background fluctuations and a fourth step of replacing the postprocessed image by its ranking map as the final atomic resolution image are required. The resulting image restores the lattice image that is nearly undetectable in the original fast scan data. PMID:29362664
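
    The first of these steps, aligning the forward and backward scans of each line, can be sketched in a few lines of Python. This is a minimal cross-correlation alignment assuming equal-length 1-D scan lines and a bounded lateral offset; it is not the CIAFA solver described in the paper, and the wrap-around of np.roll is acceptable only as a toy treatment of the line ends.

        import numpy as np

        def align_scan_line(forward, backward, max_shift=50):
            """Estimate the lateral offset between the forward scan and the
            re-reversed backward scan of one line via cross-correlation,
            then average the aligned pair to suppress oscillation noise."""
            b = backward[::-1]                 # backward scan runs right-to-left
            f = forward - forward.mean()
            g = b - b.mean()
            corr = np.correlate(f, g, mode="full")
            center = len(f) - 1                # zero-lag index for equal lengths
            window = corr[center - max_shift:center + max_shift + 1]
            shift = int(np.argmax(window)) - max_shift
            aligned = np.roll(b, shift)        # wraps at the ends (toy handling)
            return shift, 0.5 * (forward + aligned)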

  4. The observation of possible reconnection events in the boundary changes of solar coronal holes

    NASA Technical Reports Server (NTRS)

    Kahler, S. W.; Moses, J. Daniel

    1989-01-01

    Coronal holes are large scale regions of magnetically open fields which are easily observed in solar soft X-ray images. The boundaries of coronal holes are separatrices between large scale regions of open and closed magnetic fields where one might expect to observe evidence of solar magnetic reconnection. Previous studies by Nolte and colleagues using Skylab X-ray images established that large scale (≥ 9 × 10⁴ km) changes in coronal hole boundaries were due to coronal processes, i.e., magnetic reconnection, rather than to photospheric motions. Those studies were limited to time scales of about one day, and no conclusion could be drawn about the size and time scales of the reconnection process at hole boundaries. Sequences of appropriate Skylab X-ray images were used with a time resolution of about 90 min during times of the central meridian passages of the coronal hole labelled Coronal Hole 1 to search for hole boundary changes which can yield the spatial and temporal scales of coronal magnetic reconnection. It was found that 29 of 32 observed boundary changes could be associated with bright points. The appearance of the bright point may be the signature of reconnection between small scale and large scale magnetic fields. The observed boundary changes contributed to the quasi-rigid rotation of Coronal Hole 1.

  5. Advanced Image Processing Techniques for Maximum Information Recovery

    DTIC Science & Technology

    2006-11-01

    ...available information from an image. Some radio frequency and optical sensors collect large-scale sets of spatial imagery data whose content is often obscured by fog, clouds, and foliage.

  6. An automated, high-throughput plant phenotyping system using machine learning-based plant segmentation and image analysis.

    PubMed

    Lee, Unseok; Chang, Sungyul; Putra, Gian Anantrio; Kim, Hyoungseok; Kim, Dong Hwan

    2018-01-01

    A high-throughput plant phenotyping system automatically observes and grows many plant samples. Many plant sample images are acquired by the system to determine the characteristics of the plants (populations). Stable image acquisition and processing are essential for accurately determining those characteristics. However, hardware for acquiring plant images rapidly and stably, while minimizing plant stress, is lacking. Moreover, most software cannot adequately handle large-scale plant imaging. To address these problems, we developed a new, automated, high-throughput plant phenotyping system using simple and robust hardware, and an automated plant-image-analysis pipeline built on machine-learning-based plant segmentation. Our hardware acquires images reliably and quickly and minimizes plant stress. Furthermore, the images are processed automatically. In particular, large-scale plant-image datasets can be segmented precisely using a classifier developed with a superpixel-based machine-learning algorithm (Random Forest), and variations in plant parameters (such as area) over time can be assessed using the segmented images. We performed comparative evaluations to identify an appropriate learning algorithm for our proposed system, and tested three robust learning algorithms. We developed not only an automatic analysis pipeline but also a convenient means of plant-growth analysis that provides a learning data interface and visualization of plant growth trends. Thus, our system allows end-users such as plant biologists to easily analyze plant growth via large-scale plant image data.
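
    A minimal sketch of superpixel-based Random Forest segmentation of the kind described, assuming RGB images with known plant masks for training; the SLIC superpixels, mean-color features, and majority-vote labels are simplifications for illustration, not the authors' feature set.

        import numpy as np
        from skimage.segmentation import slic
        from sklearn.ensemble import RandomForestClassifier

        def superpixel_features(img, n_segments=400):
            """SLIC superpixels with mean RGB color as the feature vector."""
            labels = slic(img, n_segments=n_segments, start_label=0)
            feats = np.array([img[labels == i].mean(axis=0)
                              for i in range(labels.max() + 1)])
            return labels, feats

        def train_plant_classifier(images, masks):
            """Label a superpixel 'plant' if most of its pixels are plant,
            then fit a Random Forest on the per-superpixel features."""
            X, y = [], []
            for img, mask in zip(images, masks):
                labels, feats = superpixel_features(img)
                X.append(feats)
                y.append([mask[labels == i].mean() > 0.5
                          for i in range(labels.max() + 1)])
            return RandomForestClassifier(n_estimators=100).fit(
                np.vstack(X), np.concatenate(y))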

  7. Characteristics of medium- and large-scale TIDs over Japan derived from OI 630-nm nightglow observation

    NASA Astrophysics Data System (ADS)

    Kubota, M.; Fukunishi, H.; Okano, S.

    2001-07-01

    A new optical instrument for studying upper atmospheric dynamics, called the Multicolor All-sky Imaging System (MAIS), has been developed. The MAIS can obtain all-sky images of airglow emission at two different wavelengths simultaneously with a time resolution of several minutes. Since December 1991, imaging observations with the MAIS have been conducted at the Zao observatory (38.09°N, 140.56°E). From these observations, two interesting events with wave structures have been detected in OI 630-nm nightglow images. The first event was observed on the night of June 2/3, 1992 during a geomagnetically quiet period. Simultaneous ionospheric parameter data showed that the structures were caused by the propagation of a medium-scale traveling ionospheric disturbance (TID). The phase velocity and horizontal wavelength determined from the image data are 45-100 m/s and ~280 km, respectively, and the propagation direction is south-westward. The second event was observed on the night of February 27/28, 1992 during a geomagnetic storm. A large enhancement of OI 630-nm emission was found to be caused by the propagation of a large-scale TID. The meridional components of the phase velocities and wavelengths determined from ionospheric data are 305-695 m/s (southward) and 930-5250 km. The source of this large-scale TID appears to be auroral processes at high latitudes.

  8. Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.

    PubMed

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D

    2015-05-08

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.

  9. Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition

    PubMed Central

    Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P.; McDonald-Maier, Klaus D.

    2015-01-01

    A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences. PMID:26007714

  10. A comprehensive study on urban true orthorectification

    USGS Publications Warehouse

    Zhou, G.; Chen, W.; Kelmelis, J.A.; Zhang, Dongxiao

    2005-01-01

    To provide some advanced technical bases (algorithms and procedures) and experience needed for national large-scale digital orthophoto generation and revision of the Standards for National Large-Scale City Digital Orthophoto in the National Digital Orthophoto Program (NDOP), this paper presents a comprehensive study on theories, algorithms, and methods of large-scale urban orthoimage generation. The procedures of orthorectification for digital terrain model (DTM)-based and digital building model (DBM)-based orthoimage generation, and their merging for true orthoimage generation, are discussed in detail. A method of compensating for building occlusions using photogrammetric geometry is developed. The data structure needed to model urban buildings for accurately generating urban orthoimages is presented. Shadow detection and removal, seamline optimization for automatic mosaicking, and the radiometric balancing of neighboring images are discussed. Street visibility analysis, including the relationship between flight height, building height, street width, and the relative location of the street to the imaging center, is analyzed for complete true orthoimage generation. The experimental results demonstrated that our method can effectively and correctly orthorectify the displacements caused by terrain and buildings in urban large-scale aerial images. © 2005 IEEE.

  11. Registration of Aerial Optical Images with LiDAR Data Using the Closest Point Principle and Collinearity Equations.

    PubMed

    Huang, Rongyong; Zheng, Shunyi; Hu, Kun

    2018-06-01

    Registration of large-scale optical images with airborne LiDAR data is the basis of the integration of photogrammetry and LiDAR. However, geometric misalignments still exist between some aerial optical images and airborne LiDAR point clouds. To eliminate such misalignments, we extended a method for registering close-range optical images with terrestrial LiDAR data to a variety of large-scale aerial optical images and airborne LiDAR data. The fundamental principle is to minimize the distances from the photogrammetric matching points to the terrestrial LiDAR data surface. Besides the satisfactory efficiency of about 79 s per 6732 × 8984 image, the experimental results also show that the unit weighted root mean square (RMS) of the image points reaches a sub-pixel level (0.45 to 0.62 pixel), and that the actual horizontal and vertical accuracy can be greatly improved to a high level of 1/4-1/2 (0.17-0.27 m) and 1/8-1/4 (0.10-0.15 m) of the average LiDAR point distance, respectively. Finally, the method proves to be accurate, feasible, efficient, and practical for a variety of large-scale aerial optical images and LiDAR data.

  12. Large-scale automated image analysis for computational profiling of brain tissue surrounding implanted neuroprosthetic devices using Python.

    PubMed

    Rey-Villamizar, Nicolas; Somasundar, Vinay; Megjhani, Murad; Xu, Yan; Lu, Yanbin; Padmanabhan, Raghav; Trett, Kristen; Shain, William; Roysam, Badri

    2014-01-01

    In this article, we describe the use of Python for large-scale automated server-based bio-image analysis in FARSIGHT, a free and open-source toolkit of image analysis methods for quantitative studies of complex and dynamic tissue microenvironments imaged by modern optical microscopes, including confocal, multi-spectral, multi-photon, and time-lapse systems. The core FARSIGHT modules for image segmentation, feature extraction, tracking, and machine learning are written in C++, leveraging widely used libraries including ITK, VTK, Boost, and Qt. For solving complex image analysis tasks, these modules must be combined into scripts using Python. As a concrete example, we consider the problem of analyzing 3-D multi-spectral images of brain tissue surrounding implanted neuroprosthetic devices, acquired using high-throughput multi-spectral spinning disk step-and-repeat confocal microscopy. The resulting images typically contain 5 fluorescent channels. Each channel consists of 6000 × 10,000 × 500 voxels with 16 bits/voxel, implying image sizes exceeding 250 GB. These images must be mosaicked, pre-processed to overcome imaging artifacts, and segmented to enable cellular-scale feature extraction. The features are used to identify cell types, and perform large-scale analysis for identifying spatial distributions of specific cell types relative to the device. Python was used to build a server-based script (Dell 910 PowerEdge servers with 4 sockets/server with 10 cores each, 2 threads per core and 1TB of RAM running on Red Hat Enterprise Linux linked to a RAID 5 SAN) capable of routinely handling image datasets at this scale and performing all these processing steps in a collaborative multi-user multi-platform environment. Our Python script enables efficient data storage and movement between computers and storage servers, logs all the processing steps, and performs full multi-threaded execution of all codes, including open and closed-source third party libraries.
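
    The orchestration pattern described (many large datasets, multi-threaded execution, logging of every step) can be sketched with Python's standard library alone. The step functions below are stubs standing in for calls into the C++ FARSIGHT modules; the names, worker count, and dataset paths are illustrative assumptions, not the actual script.

        import logging
        from concurrent.futures import ThreadPoolExecutor, as_completed

        logging.basicConfig(level=logging.INFO)

        # Stubs standing in for calls into the FARSIGHT C++ modules.
        def mosaic(path):           logging.info("mosaicking %s", path)
        def preprocess(path):       logging.info("artifact correction %s", path)
        def segment(path):          logging.info("segmenting %s", path)
        def extract_features(path): logging.info("feature extraction %s", path)

        def process_volume(path):
            """Run the per-dataset pipeline steps in order."""
            for step in (mosaic, preprocess, segment, extract_features):
                step(path)
            return path

        def run_all(paths, workers=8):
            """Process many datasets concurrently on a shared server."""
            with ThreadPoolExecutor(max_workers=workers) as pool:
                futures = [pool.submit(process_volume, p) for p in paths]
                for fut in as_completed(futures):
                    logging.info("finished %s", fut.result())

        run_all(["dataset_%02d" % i for i in range(4)])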

  13. Automation of Hessian-Based Tubularity Measure Response Function in 3D Biomedical Images.

    PubMed

    Dzyubak, Oleksandr P; Ritman, Erik L

    2011-01-01

    Blood vessels and nerve trees consist of tubular objects interconnected into a complex tree- or web-like structure whose structural scales range from 5 μm diameter capillaries to the 3 cm aorta. This large scale range presents two major problems: one is simply making the measurements, and the other is the exponential increase in the number of components with decreasing scale. With the remarkable increase in the volume imaged by, and resolution of, modern day 3D imagers, manually tracking the complex multiscale parameters in such large image data sets is almost impossible. In addition, manual tracking is quite subjective and unreliable. We propose a solution for automating an adaptive, unsupervised system for tracking tubular objects based on a multiscale framework and a Hessian-based object shape detector incorporating the National Library of Medicine Insight Segmentation and Registration Toolkit (ITK) image processing libraries.
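
    Multiscale Hessian-based tubularity measures of this family are available off the shelf; the sketch below uses scikit-image's Frangi vesselness filter as a stand-in for the ITK-based detector described here. The scale range and threshold are illustrative, and the random volume is a placeholder for real image data.

        import numpy as np
        from skimage.filters import frangi

        # volume: 3-D array; sigmas should span the vessel radii of interest
        rng = np.random.default_rng(0)
        volume = rng.random((64, 64, 64))          # placeholder for a CT/MR volume

        tubularity = frangi(volume, sigmas=np.linspace(1, 8, 8),
                            black_ridges=False)    # bright tubes on dark background
        mask = tubularity > 0.05                   # illustrative threshold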

  14. Thermospheric Airglow Perturbations in the Upper Atmosphere Caused by Hurricane Harvey

    NASA Astrophysics Data System (ADS)

    Bhatt, A.; Kendall, E. A.

    2017-12-01

    The Midlatitude Allsky imaging Network for Geophysical Observations (MANGO) consists of seven allsky imagers distributed across the United States recording observations of large-scale airglow perturbations. The imagers are filtered at 630 nm, a forbidden oxygen line, in order to record the predominant source of airglow at 250 km altitude. While the ubiquitous airglow layer is challenging to observe under uniform conditions, waves in the upper atmosphere cause ripples in the airglow layer which can easily be imaged by appropriate instrumentation. MANGO is the first network to record perturbations in the airglow layer on a continent-size scale. Large- and mid-scale traveling ionospheric disturbances (LSTIDs and MSTIDs) caused by auroral forcing, mountain turbulence, and tidal variations have been recorded. On August 25, airglow perturbations centered on the path of Hurricane Harvey were observed by MANGO. These images and connections to complementary data sets such as GPS will be presented.

  15. Intelligent Interfaces for Mining Large-Scale RNAi-HCS Image Databases

    PubMed Central

    Lin, Chen; Mak, Wayne; Hong, Pengyu; Sepp, Katharine; Perrimon, Norbert

    2010-01-01

    Recently, high-content screening (HCS) has been combined with RNA interference (RNAi) to become an essential image-based high-throughput method for studying genes and biological networks through RNAi-induced cellular phenotype analyses. However, a genome-wide RNAi-HCS screen typically generates tens of thousands of images, most of which remain uncategorized due to the inadequacies of existing HCS image analysis tools. Until now, it has required highly trained scientists to browse a prohibitively large RNAi-HCS image database and produce only a handful of qualitative results regarding cellular morphological phenotypes. For this reason we have developed intelligent interfaces to facilitate the application of HCS technology in biomedical research. Our new interfaces empower biologists with the computational power not only to effectively and efficiently explore large-scale RNAi-HCS image databases, but also to apply their knowledge and experience to interactive mining of cellular phenotypes using Content-Based Image Retrieval (CBIR) with Relevance Feedback (RF) techniques. PMID:21278820

  16. Stream Flow Prediction by Remote Sensing and Genetic Programming

    NASA Technical Reports Server (NTRS)

    Chang, Ni-Bin

    2009-01-01

    A genetic programming (GP)-based, nonlinear modeling structure relates soil moisture to synthetic-aperture-radar (SAR) images to produce representative soil moisture estimates at the watershed scale. Surface soil moisture measurements are difficult to obtain over a large area due to the variety of soil permeability values and soil textures. Point measurements can be used over small areas, but it is impossible to acquire such information effectively in large-scale watersheds. This model exhibits the capacity to assimilate SAR images and relevant geoenvironmental parameters to estimate soil moisture.

  17. Investigating the Potential of Deep Neural Networks for Large-Scale Classification of Very High Resolution Satellite Images

    NASA Astrophysics Data System (ADS)

    Postadjian, T.; Le Bris, A.; Sahbi, H.; Mallet, C.

    2017-05-01

    Semantic classification is a core remote sensing task as it provides the fundamental input for land-cover map generation. The very recent literature has shown the superior performance of deep convolutional neural networks (DCNN) for many classification tasks, including the automatic analysis of Very High Spatial Resolution (VHR) geospatial images. Most recent initiatives have focused on very high discrimination capacity combined with accurate object boundary retrieval. Current architectures are therefore well tailored to urban scenes over restricted areas but are not designed for large-scale purposes. This paper presents an end-to-end automatic processing chain, based on DCNNs, that aims at performing large-scale classification of VHR satellite images (here SPOT 6/7). Since this work assesses, through various experiments, the potential of DCNNs for country-scale VHR land-cover map generation, a simple yet effective architecture is proposed, efficiently discriminating the main classes of interest (namely buildings, roads, water, crops, vegetated areas) by exploiting existing VHR land-cover maps for training.
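
    A minimal PyTorch sketch of the fully convolutional pattern involved, under assumed inputs (4 spectral bands, 5 classes, 128-pixel patches); the actual architecture in the paper differs, and this block only illustrates per-pixel classification with a 1x1 output convolution.

        import torch
        import torch.nn as nn

        class SmallFCN(nn.Module):
            """Tiny fully convolutional net: spectral bands in,
            per-pixel class scores out."""
            def __init__(self, bands=4, classes=5):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(bands, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(64, classes, 1),   # 1x1 conv -> class scores
                )

            def forward(self, x):
                return self.net(x)

        model = SmallFCN()
        patch = torch.randn(1, 4, 128, 128)      # one 4-band image patch
        scores = model(patch)                    # shape (1, 5, 128, 128)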

  18. Large-scale Scanning Transmission Electron Microscopy (Nanotomy) of Healthy and Injured Zebrafish Brain.

    PubMed

    Kuipers, Jeroen; Kalicharan, Ruby D; Wolters, Anouk H G; van Ham, Tjakko J; Giepmans, Ben N G

    2016-05-25

    Large-scale 2D electron microscopy (EM), or nanotomy, is the tissue-wide application of nanoscale resolution electron microscopy. Others and we previously applied large scale EM to human skin, pancreatic islets, tissue culture and whole zebrafish larvae (1-7). Here we describe a universally applicable method for tissue-scale scanning EM for unbiased detection of sub-cellular and molecular features. Nanotomy was applied to investigate the healthy and a neurodegenerative zebrafish brain. Our method is based on standardized EM sample preparation protocols: fixation with glutaraldehyde and osmium, followed by epoxy-resin embedding, ultrathin sectioning and mounting of ultrathin sections on one-hole grids, followed by post-staining with uranyl and lead. Large-scale 2D EM mosaic images are acquired using a scanning EM connected to an external large area scan generator using scanning transmission EM (STEM). Large scale EM images are typically ~5-50 gigapixels in size, and are best viewed using zoomable HTML files, which can be opened in any web browser, similar to online geographical HTML maps. This method can be applied to (human) tissue, cross sections of whole animals as well as tissue culture (1-5). Here, zebrafish brains were analyzed in a non-invasive neuronal ablation model. We visualize within a single dataset tissue, cellular and subcellular changes which can be quantified in various cell types including neurons and microglia, the brain's macrophages. In addition, nanotomy facilitates the correlation of EM with light microscopy (CLEM) (8) on the same tissue, as large surface areas previously imaged using fluorescence microscopy can subsequently be subjected to large area EM, resulting in the nano-anatomy (nanotomy) of tissues. In all, nanotomy allows unbiased detection of features at the EM level in a tissue-wide quantifiable manner.

  19. Large-scale Scanning Transmission Electron Microscopy (Nanotomy) of Healthy and Injured Zebrafish Brain

    PubMed Central

    Kuipers, Jeroen; Kalicharan, Ruby D.; Wolters, Anouk H. G.

    2016-01-01

    Large-scale 2D electron microscopy (EM), or nanotomy, is the tissue-wide application of nanoscale resolution electron microscopy. Others and we previously applied large scale EM to human skin, pancreatic islets, tissue culture and whole zebrafish larvae (1-7). Here we describe a universally applicable method for tissue-scale scanning EM for unbiased detection of sub-cellular and molecular features. Nanotomy was applied to investigate the healthy and a neurodegenerative zebrafish brain. Our method is based on standardized EM sample preparation protocols: fixation with glutaraldehyde and osmium, followed by epoxy-resin embedding, ultrathin sectioning and mounting of ultrathin sections on one-hole grids, followed by post-staining with uranyl and lead. Large-scale 2D EM mosaic images are acquired using a scanning EM connected to an external large area scan generator using scanning transmission EM (STEM). Large scale EM images are typically ~5-50 gigapixels in size, and are best viewed using zoomable HTML files, which can be opened in any web browser, similar to online geographical HTML maps. This method can be applied to (human) tissue, cross sections of whole animals as well as tissue culture (1-5). Here, zebrafish brains were analyzed in a non-invasive neuronal ablation model. We visualize within a single dataset tissue, cellular and subcellular changes which can be quantified in various cell types including neurons and microglia, the brain's macrophages. In addition, nanotomy facilitates the correlation of EM with light microscopy (CLEM) (8) on the same tissue, as large surface areas previously imaged using fluorescence microscopy can subsequently be subjected to large area EM, resulting in the nano-anatomy (nanotomy) of tissues. In all, nanotomy allows unbiased detection of features at the EM level in a tissue-wide quantifiable manner. PMID:27285162

  20. Closed Large Cell Clouds

    Atmospheric Science Data Center

    2013-04-19

    Article title: Closed Large Cell Clouds in the South Pacific. ... the Multi-angle Imaging SpectroRadiometer (MISR) provide an example of very large scale closed cells, and can be contrasted with the ... The MISR data were obtained from the NASA Langley Research Center Atmospheric Science Data Center in Hampton, VA.

  1. Measuring the Large-scale Solar Magnetic Field

    NASA Astrophysics Data System (ADS)

    Hoeksema, J. T.; Scherrer, P. H.; Peterson, E.; Svalgaard, L.

    2017-12-01

    The Sun's large-scale magnetic field is important for determining the global structure of the corona and for quantifying the evolution of the polar field, which is sometimes used for predicting the strength of the next solar cycle. Having confidence in the determination of the large-scale magnetic field of the Sun is difficult because the field is often near the detection limit, various observing methods all measure something a little different, and various systematic effects can be very important. We compare resolved and unresolved observations of the large-scale magnetic field from the Wilcox Solar Observatory, the Helioseismic and Magnetic Imager (HMI), the Michelson Doppler Imager (MDI), and SOLIS. Cross comparison does not enable us to establish an absolute calibration, but it does allow us to discover and compensate for instrument problems, such as the sensitivity decrease seen in the WSO measurements in late 2016 and early 2017.

  2. Imaging mouse cerebellum with serial optical coherence scanner (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Liu, Chao J.; Williams, Kristen; Orr, Harry; Taner, Akkin

    2017-02-01

    We present the serial optical coherence scanner (SOCS), which consists of a polarization-sensitive optical coherence tomography system and a vibratome with associated controls for serial imaging, to visualize the cerebellum and adjacent brainstem of the mouse. The cerebellar cortical layers and white matter are distinguished using intrinsic optical contrasts. Images from serial scans reveal the large-scale anatomy in detail and map the nerve fiber pathways in the cerebellum and adjacent brainstem. The optical system, which has 5.5 μm axial resolution, utilizes a scan lens or a water-immersion microscope objective, resulting in 10 μm or 4 μm lateral resolution, respectively. Large-scale brain imaging at high resolution requires an efficient way to collect large datasets, and it is important to improve the SOCS system to handle large-scale samples, and large numbers of them, in a reasonable time. The imaging and slicing procedure for a section took about 4 minutes, owing to the low vibratome blade speed needed to maintain slicing quality. SOCS has the potential to investigate pathological changes and monitor the effects of therapeutic drugs in cerebellar diseases such as spinocerebellar ataxia 1 (SCA1). SCA1 is a neurodegenerative disease characterized by atrophy and eventual loss of Purkinje cells from the cerebellar cortex, and the optical contrasts provided by SOCS are being evaluated as biomarkers of the disease.

  3. A unified framework of image latent feature learning on Sina microblog

    NASA Astrophysics Data System (ADS)

    Wei, Jinjin; Jin, Zhigang; Zhou, Yuan; Zhang, Rui

    2015-10-01

    Large-scale user-contributed images with texts are rapidly increasing on social media websites such as Sina microblog. However, noise and the incomplete correspondence between the images and the texts make precise image retrieval and ranking difficult. In this paper, a hypergraph-based learning framework is proposed for image ranking, which simultaneously utilizes visual features, textual content and social link information to estimate the relevance between images. By representing each image as a vertex in the hypergraph, complex relationships between images can be reflected exactly. By updating the weights of hyperedges throughout the hypergraph learning process, the effect of different edges can be adaptively modulated in the constructed hypergraph. Furthermore, the popularity degree of each image is employed to re-rank the retrieval results. Comparative experiments on a large-scale Sina microblog data-set demonstrate the effectiveness of the proposed approach.

  4. Bundle block adjustment of large-scale remote sensing data with Block-based Sparse Matrix Compression combined with Preconditioned Conjugate Gradient

    NASA Astrophysics Data System (ADS)

    Zheng, Maoteng; Zhang, Yongjun; Zhou, Shunping; Zhu, Junfeng; Xiong, Xiaodong

    2016-07-01

    In recent years, new platforms and sensors in the photogrammetry, remote sensing and computer vision areas have become available, such as Unmanned Aircraft Vehicles (UAV), oblique camera systems, common digital cameras and even mobile phone cameras. Images collected by all these kinds of sensors can be used as remote sensing data sources, and these sensors can obtain large-scale remote sensing data consisting of a great number of images. Bundle block adjustment of large-scale data with conventional algorithms is very time- and space- (memory-) consuming due to the super-large normal matrix arising from large-scale data. In this paper, an efficient Block-based Sparse Matrix Compression (BSMC) method combined with the Preconditioned Conjugate Gradient (PCG) algorithm is chosen to develop a stable and efficient bundle block adjustment system for dealing with large-scale remote sensing data. The main contribution of this work is the BSMC-based PCG algorithm, which is more efficient in time and memory than the traditional algorithm without compromising accuracy. In total, 8 real datasets are used to test the proposed method. Preliminary results show that the BSMC method can efficiently decrease the time and memory requirements of large-scale data.
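
    The numerical core of such a system, keeping the normal matrix sparse rather than densifying it and solving the normal equations with preconditioned conjugate gradients, can be sketched with SciPy. The random Jacobian, the sizes, and the Jacobi (diagonal) preconditioner are illustrative assumptions; the BSMC storage layout itself is not reproduced here.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import cg, LinearOperator

        # Sparse design matrix A (observations x parameters) and residuals r
        rng = np.random.default_rng(1)
        A = sp.random(20000, 5000, density=1e-3, format="csr", random_state=1)
        r = rng.standard_normal(20000)

        N = (A.T @ A).tocsr()        # sparse normal matrix, never densified
        b = A.T @ r

        # Jacobi preconditioner: multiply by the inverse diagonal of N
        d = N.diagonal()
        d[d == 0] = 1.0
        M = LinearOperator(N.shape, matvec=lambda x: x / d)

        dx, info = cg(N, b, M=M, maxiter=200)    # info == 0 on convergence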

  5. Combining points and lines in rectifying satellite images

    NASA Astrophysics Data System (ADS)

    Elaksher, Ahmed F.

    2017-09-01

    Rapid advances in remote sensing technologies have established the potential to gather accurate and reliable information about the Earth's surface using high resolution satellite images. Remote sensing satellite images with a pixel size of less than one meter are currently used in large-scale mapping. Rigorous photogrammetric equations are usually used to describe the relationship between image coordinates and ground coordinates. These equations require knowledge of the exterior and interior orientation parameters of the image, which might not be available. On the other hand, the parallel projection transformation can be used to represent the mathematical relationship between the image-space and object-space coordinate systems, and it provides the accuracy required for large-scale mapping using fewer ground control features. This article investigates the differences between point-based and line-based parallel projection transformation models in rectifying satellite images of different resolutions. The point-based parallel projection transformation model and its extended form are presented, and the corresponding line-based forms are developed. Results showed that the RMS values computed using the point- and line-based transformation models are equivalent and satisfy the requirements for large-scale mapping. The differences between the transformation parameters computed using the point- and line-based models are insignificant. The results also showed a high correlation between differences in ground elevation and the RMS.

  6. Image scale measurement with correlation filters in a volume holographic optical correlator

    NASA Astrophysics Data System (ADS)

    Zheng, Tianxiang; Cao, Liangcai; He, Qingsheng; Jin, Guofan

    2013-08-01

    A search engine containing various target images or different parts of a large scene area is of great use for many applications, including object detection, biometric recognition, and image registration. The input image, captured in real time, is compared with all the template images in the search engine. A volume holographic correlator is one type of such search engine: it performs thousands of comparisons among images at very high speed, with the correlation task accomplished mainly in optics. However, the input target image always contains some scale variation relative to the filtering template images, in which case the correlation values cannot properly reflect the similarity of the images. It is therefore essential to estimate and eliminate the scale variation of the input target image. Scale measurement can be performed in three domains: spatial, spectral and time. Most methods dealing with the scale factor are based on the spatial or spectral domains. In this paper, a time-domain method, called the time-sequential scaled method, is proposed to measure the scale factor of the input image. The method utilizes the relationship between scale variation and the correlation value of two images: it sends a few artificially scaled input images to be compared with the template images. The correlation value increases with the scale factor over the interval 0.8~1 and decreases over the interval 1~1.2. The original scale of the input image can therefore be measured by locating the largest correlation value obtained when correlating the artificially scaled input images with the template images. The measurement range for the scale is 0.8~4.8: scale factors beyond 1.2 are measured by scaling the input image by factors of 1/2, 1/3 and 1/4, correlating the artificially scaled input image with the template images, and estimating the new corresponding scale factor within 0.8~1.2.
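
    In digital form the search reduces to correlating rescaled copies of the input against a template and taking the factor with the highest score. The sketch below assumes 2-D grayscale arrays and a simple top-left crop/pad after zooming; it illustrates the time-sequential idea only, not the optical correlator.

        import numpy as np
        from scipy.ndimage import zoom

        def ncc(a, b):
            """Normalized correlation of two equal-sized images."""
            a = a - a.mean()
            b = b - b.mean()
            return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

        def rescale_to(img, factor, shape):
            """Zoom by `factor`, then crop or zero-pad at the top-left."""
            z = zoom(img, factor)
            out = np.zeros(shape)
            h, w = min(shape[0], z.shape[0]), min(shape[1], z.shape[1])
            out[:h, :w] = z[:h, :w]
            return out

        def estimate_scale(inp, template, factors=np.linspace(0.8, 1.2, 21)):
            """Return the candidate scale factor with the best correlation."""
            scores = [ncc(rescale_to(inp, f, template.shape), template)
                      for f in factors]
            return float(factors[int(np.argmax(scores))])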

  7. Quantitative nanoscopy: Tackling sampling limitations in (S)TEM imaging of polymers and composites.

    PubMed

    Gnanasekaran, Karthikeyan; Snel, Roderick; de With, Gijsbertus; Friedrich, Heiner

    2016-01-01

    Sampling limitations in electron microscopy raise the question of whether the analysis of a bulk material is representative, especially when analyzing hierarchical morphologies that extend over multiple length scales. We tackled this problem by automatically acquiring a large series of partially overlapping (S)TEM images at sufficient resolution, subsequently stitched together to generate a large-area map, using an in-house developed acquisition toolbox (TU/e Acquisition ToolBox) and stitching module (TU/e Stitcher). In addition, we show that quantitative image analysis of the large scale maps provides representative information that can be related to the synthesis and processing conditions of hierarchical materials, which moves electron microscopy analysis towards becoming a bulk characterization tool. We demonstrate the power of such an analysis by examining two different multi-phase materials that are structured over multiple length scales. Copyright © 2015 Elsevier B.V. All rights reserved.
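
    The core stitching step, estimating the translation between two overlapping tiles, is commonly done with phase correlation. A minimal scikit-image sketch (a generic stand-in, since the internals of the TU/e Stitcher are not described in the abstract):

        import numpy as np
        from skimage.registration import phase_cross_correlation

        def tile_offset(tile_a, tile_b):
            """Shift (row, col) that registers tile_b onto tile_a."""
            shift, error, _ = phase_cross_correlation(tile_a, tile_b)
            return shift, error

        # toy check: tile_b is tile_a rolled by (3, -5) plus a little noise
        rng = np.random.default_rng(2)
        a = rng.random((256, 256))
        b = np.roll(a, (3, -5), axis=(0, 1)) + 0.01 * rng.random((256, 256))
        print(tile_offset(a, b)[0])    # approximately [-3.  5.]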

  8. A 2D Fourier tool for the analysis of photo-elastic effect in large granular assemblies

    NASA Astrophysics Data System (ADS)

    Leśniewska, Danuta

    2017-06-01

    Fourier transforms are the basic tool for constructing different types of image filters, mainly those reducing optical noise. Some DIC or PIV software also uses frequency space to obtain displacement fields from a series of digital images of a deforming body. This paper presents a series of 2D Fourier transforms of photo-elastic transmission images representing a large pseudo-2D granular assembly deforming under varying boundary conditions. The images, relating to different scales, were acquired at the same image resolution but taken at different distances from the sample. Fourier transforms of images representing different stages of deformation reveal characteristic features at the three ('macro-', 'meso-' and 'micro-') scales, which can serve as data for studying the internal order-disorder transition within granular materials.
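
    Computing such a 2-D spectrum is short work with NumPy; the sketch below returns the centered log power spectrum, in which periodic photo-elastic fringes would show up as bright off-center structure (the random test image is only a placeholder):

        import numpy as np

        def log_power_spectrum(img):
            """Centered log power spectrum of a 2-D image."""
            F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
            return np.log1p(np.abs(F) ** 2)

        img = np.random.default_rng(3).random((512, 512))
        spec = log_power_spectrum(img)     # same shape as img, DC at the center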

  9. Advanced Connectivity Analysis (ACA): a Large Scale Functional Connectivity Data Mining Environment.

    PubMed

    Chen, Rong; Nixon, Erika; Herskovits, Edward

    2016-04-01

    Using resting-state functional magnetic resonance imaging (rs-fMRI) to study functional connectivity is of great importance for understanding normal development and function, as well as a host of neurological and psychiatric disorders. Seed-based analysis is one of the most widely used rs-fMRI analysis methods. Here we describe a freely available large-scale functional connectivity data mining software package called Advanced Connectivity Analysis (ACA). ACA enables large-scale seed-based analysis and brain-behavior analysis. It can seamlessly examine a large number of seed regions with minimal user input. ACA has a brain-behavior analysis component to delineate associations among imaging biomarkers and one or more behavioral variables. We demonstrate applications of ACA to rs-fMRI data sets from a study of autism.
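
    Seed-based analysis reduces to correlating the mean time series of a seed region with every other voxel. A minimal NumPy sketch, assuming the 4-D scan has already been masked and flattened to a (voxels x time) array; ACA's actual interface is not shown:

        import numpy as np

        def seed_correlation_map(data, seed_mask):
            """data: (voxels, time) rs-fMRI time series;
            seed_mask: boolean (voxels,) selecting the seed region.
            Returns Pearson r between the mean seed series and each voxel."""
            ts = data - data.mean(axis=1, keepdims=True)
            seed = ts[seed_mask].mean(axis=0)
            num = ts @ seed
            den = np.linalg.norm(ts, axis=1) * np.linalg.norm(seed)
            return num / np.maximum(den, 1e-12)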

  10. SOURCE EXPLORER: Towards Web Browser Based Tools for Astronomical Source Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Young, M. D.; Hayashi, S.; Gopu, A.

    2014-05-01

    As a new generation of large format, high-resolution imagers comes online (ODI, DECAM, LSST, etc.), we are faced with the daunting prospect of astronomical images containing upwards of hundreds of thousands of identifiable sources. Visualizing and interacting with such large datasets using traditional astronomical tools appears to be unfeasible, and a new approach is required. We present here a method for the display and analysis of arbitrarily large source datasets using dynamically scaling levels of detail, enabling scientists to rapidly move from large-scale spatial overviews down to the level of individual sources and everything in between. Based on the recognized standards of HTML5+JavaScript, we enable observers and archival users to interact with their images and sources from any modern computer without having to install specialized software. We demonstrate the ability to produce large-scale source lists from the images themselves, as well as to overlay data from publicly available sources (2MASS, GALEX, SDSS, etc.) or user-provided source lists. A high-availability cluster of computational nodes allows us to produce these source maps on demand, customized based on user input. User-generated source lists and maps are persistent across sessions and are available for further plotting, analysis, refinement, and culling.

  11. A detail enhancement and dynamic range adjustment algorithm for high dynamic range images

    NASA Astrophysics Data System (ADS)

    Xu, Bo; Wang, Huachuang; Liang, Mingtao; Yu, Cong; Hu, Jinlong; Cheng, Hua

    2014-08-01

    Although high dynamic range (HDR) images contain large amounts of information, they have weak texture and low contrast, and they are difficult to reproduce on low-dynamic-range display media. To recover more information when these images are displayed on PCs, specific transforms are needed: compressing the dynamic range, enhancing portions with little original contrast, and highlighting texture details while preserving the parts with large contrast. To this end, a multi-scale guided-filter enhancement algorithm, derived from the single-scale guided filter through analysis of a non-physical model, is proposed in this paper. The algorithm first decomposes the original HDR image into a base image and detail images at different scales, and then adaptively selects a transform function which acts on the enhanced detail images and the original image. Comparing the results on HDR images and low dynamic range (LDR) images of different scene features shows that this algorithm, while maintaining the hierarchy and texture details of images, not only improves contrast and enhances detail, but also adjusts the dynamic range well. It is thus well suited for human observation or machine processing.
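
    A compact NumPy implementation of the classic single-scale guided filter (He et al.), here self-guided to split a luminance channel into base and detail layers; the radius and regularization values are illustrative, and the multi-scale variant of the paper would repeat the decomposition at several radii:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def guided_filter(I, p, r=8, eps=1e-3):
            """Edge-preserving smoothing of p guided by I, using box
            windows of radius r (the He et al. guided filter)."""
            mean = lambda x: uniform_filter(x, 2 * r + 1)
            mI, mp = mean(I), mean(p)
            a = (mean(I * p) - mI * mp) / (mean(I * I) - mI * mI + eps)
            b = mp - a * mI
            return mean(a) * I + mean(b)

        # self-guided base/detail split of an HDR luminance channel
        L = np.random.default_rng(4).random((256, 256))   # placeholder image
        base = guided_filter(L, L)
        detail = L - base      # boost `detail`, then recombine with `base`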

  12. Cardiac Light-Sheet Fluorescent Microscopy for Multi-Scale and Rapid Imaging of Architecture and Function

    NASA Astrophysics Data System (ADS)

    Fei, Peng; Lee, Juhyun; Packard, René R. Sevag; Sereti, Konstantina-Ioanna; Xu, Hao; Ma, Jianguo; Ding, Yichen; Kang, Hanul; Chen, Harrison; Sung, Kevin; Kulkarni, Rajan; Ardehali, Reza; Kuo, C.-C. Jay; Xu, Xiaolei; Ho, Chih-Ming; Hsiai, Tzung K.

    2016-03-01

    Light Sheet Fluorescence Microscopy (LSFM) enables multi-dimensional and multi-scale imaging by illuminating specimens with a separate thin sheet of laser light. It allows rapid plane illumination with reduced photo-damage and superior axial resolution and contrast. We demonstrate cardiac LSFM (c-LSFM) imaging to assess the functional architecture of zebrafish embryos, with a retrospective cardiac synchronization algorithm for four-dimensional reconstruction (3-D space + time). By combining our approach with tissue clearing techniques, we reveal the entire cardiac structure and the hypertrabeculation of adult zebrafish hearts in response to doxorubicin treatment. By integrating a resolution enhancement technique with c-LSFM to increase the resolving power over a large field-of-view, we demonstrate the use of a low-power objective to resolve the entire architecture of large-scale neonatal mouse hearts, revealing the helical orientation of individual myocardial fibers. Our c-LSFM imaging approach therefore provides multi-scale visualization of architecture and function to drive cardiovascular research, with translational implications for congenital heart diseases.

  13. Multiscale properties of weighted total variation flow with applications to denoising and registration.

    PubMed

    Athavale, Prashant; Xu, Robert; Radau, Perry; Nachman, Adrian; Wright, Graham A

    2015-07-01

    Images consist of structures of varying scales: large scale structures such as flat regions, and small scale structures such as noise, textures, and rapidly oscillating patterns. In the hierarchical (BV, L²) image decomposition, Tadmor et al. (2004) start by extracting coarse scale structures from a given image, and successively extract finer structures from the residuals in each step of the iterative decomposition. We propose to begin instead by extracting the finest structures from the given image and then proceed to extract increasingly coarser structures. In most images, noise can be considered a fine scale structure; thus, starting the image decomposition with finer scales, rather than large scales, leads to fast denoising. We note that our approach turns out to be equivalent to the nonstationary regularization in Scherzer and Weickert (2000). The continuous limit of this procedure leads to a time-scaled version of total variation flow. Motivated by specific clinical applications, we introduce an image-dependent weight in the regularization functional, and study the corresponding weighted TV flow. We show that the edge-preserving property of the multiscale representation of an input image obtained with the weighted TV flow can be enhanced and localized by an appropriate choice of the weight. We use this in developing an efficient and edge-preserving denoising algorithm with control over speed and localization properties. We examine analytical properties of the weighted TV flow that give precise information about the denoising speed and the rate of change of energy of the images. An additional contribution of the paper is to use the images obtained at different scales for robust multiscale registration. We show that the inherently multiscale nature of the weighted TV flow improved performance for registration of noisy cardiac MRI images, compared to other methods such as bilateral or Gaussian filtering. A clinical application of the multiscale registration algorithm is also demonstrated by aligning viability assessment magnetic resonance (MR) images from 8 patients with previous myocardial infarctions. Copyright © 2015. Published by Elsevier B.V.
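
    For reference, a weighted TV flow of the kind discussed can be written schematically as the gradient descent of a weighted total variation energy; this is the standard form of such flows, and the paper's exact time scaling and weight placement may differ:

        % gradient flow of the weighted TV energy J_w, starting from the image f
        \begin{aligned}
          J_w(u) &= \int_\Omega w(x)\,\lvert\nabla u\rvert\,dx,\\
          \partial_t u &= \operatorname{div}\!\left( w(x)\,\frac{\nabla u}{\lvert\nabla u\rvert} \right),
          \qquad u(x,0) = f(x),
        \end{aligned}

    where w(x) ≥ 0 is the image-dependent weight; making w small near edges slows the smoothing there, which is what localizes the edge-preserving behavior.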

  14. Images from Galileo of the Venus cloud deck

    USGS Publications Warehouse

    Belton, M.J.S.; Gierasch, P.J.; Smith, M.D.; Helfenstein, P.; Schinder, P.J.; Pollack, James B.; Rages, K.A.; Ingersoll, A.P.; Klaasen, K.P.; Veverka, J.; Anger, C.D.; Carr, M.H.; Chapman, C.R.; Davies, M.E.; Fanale, F.P.; Greeley, R.; Greenberg, R.; Head, J. W.; Morrison, D.; Neukum, G.; Pilcher, C.B.

    1991-01-01

    Images of Venus taken at 418 (violet) and 986 [near-infrared (NIR)] nanometers show that the morphology and motions of large-scale features change with depth in the cloud deck. Poleward meridional velocities, seen in both spectral regions, are much reduced in the NIR. In the south polar region the markings in the two wavelength bands are strongly anticorrelated. The images follow the changing state of the upper cloud layer downwind of the subsolar point, and the zonal flow field shows a longitudinal periodicity that may be coupled to the formation of large-scale planetary waves. No optical lightning was detected.

  15. Recent developments in VSD imaging of small neuronal networks

    PubMed Central

    Hill, Evan S.; Bruno, Angela M.

    2014-01-01

    Voltage-sensitive dye (VSD) imaging is a powerful technique that can provide, in single experiments, a large-scale view of network activity unobtainable with traditional sharp electrode recording methods. Here we review recent work using VSDs to study small networks and highlight several results from this approach. Topics covered include circuit mapping, network multifunctionality, the network basis of decision making, and the presence of variably participating neurons in networks. Analytical tools being developed and applied to large-scale VSD imaging data sets are discussed, and the future prospects for this exciting field are considered. PMID:25225295

  16. A fast time-difference inverse solver for 3D EIT with application to lung imaging.

    PubMed

    Javaherian, Ashkan; Soleimani, Manuchehr; Moeller, Knut

    2016-08-01

    A class of sparse optimization techniques that require only matrix-vector products, rather than explicit access to the forward matrix and its transpose, has received much attention over the past decade for dealing with large-scale inverse problems. This study tailors the application of the so-called Gradient Projection for Sparse Reconstruction (GPSR) to large-scale time-difference three-dimensional electrical impedance tomography (3D EIT). 3D EIT typically suffers from the need for a large number of voxels to cover the whole domain, so its application to real-time imaging, for example monitoring of lung function, remains scarce, since the large number of degrees of freedom greatly increases storage space and reconstruction time. This study shows the great potential of GPSR for large-size time-difference 3D EIT. Further studies are needed to improve its accuracy for imaging small-size anomalies.
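
    GPSR targets the l1-regularized least-squares objective min_x 0.5*||y - A x||^2 + tau*||x||_1 using only products with A and its transpose. The sketch below solves the same objective with ISTA, a related matvec-only proximal-gradient method (not the GPSR update itself); the sizes and tau are illustrative assumptions.

        import numpy as np

        def ista(A, y, tau, iters=200):
            """Minimize 0.5*||y - A x||^2 + tau*||x||_1 with iterative
            soft-thresholding; only A @ x and A.T @ v are needed."""
            L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                g = A.T @ (A @ x - y)          # gradient of the smooth term
                z = x - g / L
                x = np.sign(z) * np.maximum(np.abs(z) - tau / L, 0.0)
            return x

        rng = np.random.default_rng(5)
        A = rng.standard_normal((128, 512))    # stand-in for an EIT Jacobian
        y = A @ (np.eye(512)[0] - np.eye(512)[3])  # sparse conductivity change
        x_hat = ista(A, y, tau=0.1)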

  17. Automatic Matching of Large Scale Images and Terrestrial LIDAR Based on App Synergy of Mobile Phone

    NASA Astrophysics Data System (ADS)

    Xia, G.; Hu, C.

    2018-04-01

    The digitization of cultural heritage based on ground laser scanning technology has been widely applied, with high-precision scanning and high-resolution photography of cultural relics as the main methods of data acquisition. Reconstruction with a complete point cloud and high-resolution images requires the matching of images and point clouds, the acquisition of homonym feature points, data registration, and so on. However, the one-to-one correspondence between an image and its point cloud currently depends on inefficient manual search. The effective classification and management of large numbers of images, and the matching of large images with their corresponding point clouds, are therefore the focus of this research. In this paper, we propose automatic matching of large scale images and terrestrial LiDAR based on the app synergy of a mobile phone. Firstly, we develop an Android app that takes pictures and records the related classification information. Secondly, all images are automatically grouped using the recorded information. Thirdly, a matching algorithm is used to match the global and local images. Based on the one-to-one correspondence between the global image and the point cloud reflection intensity image, the automatic matching of each image with its corresponding laser point cloud is realized. Finally, the mapping relationships among the global image, the local images and the intensity image are established from homonym feature points, enabling visual management and querying of the images.

  18. Human-Machine Cooperation in Large-Scale Multimedia Retrieval: A Survey

    ERIC Educational Resources Information Center

    Shirahama, Kimiaki; Grzegorzek, Marcin; Indurkhya, Bipin

    2015-01-01

    "Large-Scale Multimedia Retrieval" (LSMR) is the task to fast analyze a large amount of multimedia data like images or videos and accurately find the ones relevant to a certain semantic meaning. Although LSMR has been investigated for more than two decades in the fields of multimedia processing and computer vision, a more…

  19. Large-scale Activities Associated with the 2005 Sep. 7th Event

    NASA Astrophysics Data System (ADS)

    Zong, Weiguo

    We present a multi-wavelength study of large-scale activities associated with a significant solar event. On 2005 September 7, a flare classified as larger than X17 was observed. Combining Hα 6562.8 Å, He I 10830 Å and soft X-ray observations, three large-scale activities were found to propagate over a long distance on the solar surface. 1) The first large-scale activity emanated from the flare site, propagated westward around the solar equator, and appeared as sequential brightenings. With the MDI longitudinal magnetic field map, the activity was found to propagate along the magnetic network. 2) The second large-scale activity could be well identified both in He I 10830 Å images and in soft X-ray images, and appeared as a diffuse emission enhancement propagating away. This activity started later than the first one and was not centered on the flare site. Moreover, a rotation was found along with the bright front propagating away. 3) The third activity was ahead of the second one and was identified as a "winking" filament. The three activities have different origins and were seldom observed together in one event; this study is therefore useful for understanding the mechanism of large-scale activities on the solar surface.

  20. A Coarse-to-Fine Geometric Scale-Invariant Feature Transform for Large Size High Resolution Satellite Image Registration

    PubMed Central

    Chang, Xueli; Du, Siliang; Li, Yingying; Fang, Shenghui

    2018-01-01

    Large size high resolution (HR) satellite image matching is a challenging task due to local distortion, repetitive structures, intensity changes and low efficiency. In this paper, a novel matching approach is proposed for large size HR satellite image registration, based on a coarse-to-fine strategy and a geometric scale-invariant feature transform (SIFT). In the coarse matching step, a robust matching method, scale-restrict (SR) SIFT, is applied at a low resolution level. The matching results provide geometric constraints which are then used to guide block division and geometric SIFT in the fine matching step. The block matching method overcomes the memory problem. In geometric SIFT, area constraints help validate the candidate matches and decrease search complexity. To further improve matching efficiency, the proposed method is parallelized using OpenMP. Finally, the sensing image is rectified to the coordinate system of the reference image via a Triangulated Irregular Network (TIN) transformation. Experiments are designed to test the performance of the proposed matching method. The experimental results show that the proposed method decreases the matching time and increases the number of matching points while maintaining high registration accuracy. PMID:29702589
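
    The coarse stage can be sketched with OpenCV: match SIFT features at reduced resolution, fit a global transform with RANSAC, and use it to constrain the block-wise fine matching. The ratio-test threshold, the scale, and the homography model are illustrative assumptions, not the SR-SIFT or geometric-SIFT algorithms themselves.

        import cv2
        import numpy as np

        def coarse_match(img1, img2, scale=0.25):
            """Match SIFT at reduced resolution and return a homography
            (in full-resolution coordinates) to guide fine matching."""
            small1 = cv2.resize(img1, None, fx=scale, fy=scale)
            small2 = cv2.resize(img2, None, fx=scale, fy=scale)
            sift = cv2.SIFT_create()
            k1, d1 = sift.detectAndCompute(small1, None)
            k2, d2 = sift.detectAndCompute(small2, None)
            pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
            good = [m for m, n in pairs if m.distance < 0.7 * n.distance]
            src = np.float32([k1[m.queryIdx].pt for m in good]) / scale
            dst = np.float32([k2[m.trainIdx].pt for m in good]) / scale
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            return H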

  1. A Scalable Cyberinfrastructure for Interactive Visualization of Terascale Microscopy Data

    PubMed Central

    Venkat, A.; Christensen, C.; Gyulassy, A.; Summa, B.; Federer, F.; Angelucci, A.; Pascucci, V.

    2017-01-01

    The goal of the recently emerged field of connectomics is to generate a wiring diagram of the brain at different scales. To identify brain circuitry, neuroscientists use specialized microscopes to perform multichannel imaging of labeled neurons at a very high resolution. CLARITY tissue clearing allows imaging labeled circuits through entire tissue blocks, without the need for tissue sectioning and section-to-section alignment. Imaging the large and complex non-human primate brain with sufficient resolution to identify and disambiguate between axons, in particular, produces massive data, creating great computational challenges to the study of neural circuits. Researchers require novel software capabilities for compiling, stitching, and visualizing large imagery. In this work, we detail the image acquisition process and a hierarchical streaming platform, ViSUS, that enables interactive visualization of these massive multi-volume datasets using a standard desktop computer. The ViSUS visualization framework has previously been shown to be suitable for 3D combustion simulation, climate simulation and visualization of large scale panoramic images. The platform is organized around a hierarchical cache oblivious data layout, called the IDX file format, which enables interactive visualization and exploration in ViSUS, scaling to the largest 3D images. In this paper we showcase the VISUS framework used in an interactive setting with the microscopy data. PMID:28638896
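
    The abstract does not spell out the IDX layout itself, but the key property of a hierarchical, cache-oblivious ordering can be illustrated with the related Z-order (Morton) curve; the sketch below is an analogy, not the IDX format.

```python
def morton3d(x, y, z, bits=10):
    """Interleave the bits of (x, y, z) into a single Z-order index.

    Nearby voxels map to nearby indices at every power-of-two scale,
    which is what makes such layouts cache-oblivious and lets a viewer
    stream data coarse-to-fine.
    """
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (3 * i)
        code |= ((y >> i) & 1) << (3 * i + 1)
        code |= ((z >> i) & 1) << (3 * i + 2)
    return code

# Voxels sorted by Morton code can be streamed hierarchically: reading
# a prefix of the reordered array already yields a subsampled volume.
order = sorted(((morton3d(x, y, z), (x, y, z))
                for x in range(4) for y in range(4) for z in range(4)))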

  2. A Scalable Cyberinfrastructure for Interactive Visualization of Terascale Microscopy Data.

    PubMed

    Venkat, A; Christensen, C; Gyulassy, A; Summa, B; Federer, F; Angelucci, A; Pascucci, V

    2016-08-01

    The goal of the recently emerged field of connectomics is to generate a wiring diagram of the brain at different scales. To identify brain circuitry, neuroscientists use specialized microscopes to perform multichannel imaging of labeled neurons at a very high resolution. CLARITY tissue clearing allows imaging labeled circuits through entire tissue blocks, without the need for tissue sectioning and section-to-section alignment. Imaging the large and complex non-human primate brain with sufficient resolution to identify and disambiguate between axons, in particular, produces massive data, creating great computational challenges to the study of neural circuits. Researchers require novel software capabilities for compiling, stitching, and visualizing large imagery. In this work, we detail the image acquisition process and a hierarchical streaming platform, ViSUS, that enables interactive visualization of these massive multi-volume datasets using a standard desktop computer. The ViSUS visualization framework has previously been shown to be suitable for 3D combustion simulation, climate simulation and visualization of large scale panoramic images. The platform is organized around a hierarchical cache oblivious data layout, called the IDX file format, which enables interactive visualization and exploration in ViSUS, scaling to the largest 3D images. In this paper we showcase the VISUS framework used in an interactive setting with the microscopy data.

  3. 3-D imaging of large scale buried structure by 1-D inversion of very early time electromagnetic (VETEM) data

    USGS Publications Warehouse

    Aydmer, A.A.; Chew, W.C.; Cui, T.J.; Wright, D.L.; Smith, D.V.; Abraham, J.D.

    2001-01-01

    A simple and efficient method for large scale three-dimensional (3-D) subsurface imaging of inhomogeneous background is presented. One-dimensional (1-D) multifrequency distorted Born iterative method (DBIM) is employed in the inversion. Simulation results utilizing synthetic scattering data are given. Calibration of the very early time electromagnetic (VETEM) experimental waveforms is detailed along with major problems encountered in practice and their solutions. This discussion is followed by the results of a large scale application of the method to the experimental data provided by the VETEM system of the U.S. Geological Survey. The method is shown to have a computational complexity that is promising for on-site inversion.

  4. Large-scale block adjustment without use of ground control points based on the compensation of geometric calibration for ZY-3 images

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Wang, Mi; Xu, Wen; Li, Deren; Gong, Jianya; Pi, Yingdong

    2017-12-01

    The potential of large-scale block adjustment (BA) without ground control points (GCPs) has long been a concern among photogrammetric researchers, which is of effective guiding significance for global mapping. However, significant problems with the accuracy and efficiency of this method remain to be solved. In this study, we analyzed the effects of geometric errors on BA, and then developed a step-wise BA method to conduct integrated processing of large-scale ZY-3 satellite images without GCPs. We first pre-processed the BA data, by adopting a geometric calibration (GC) method based on the viewing-angle model to compensate for systematic errors, such that the BA input images were of good initial geometric quality. The second step was integrated BA without GCPs, in which a series of technical methods were used to solve bottleneck problems and ensure accuracy and efficiency. The BA model, based on virtual control points (VCPs), was constructed to address the rank deficiency problem caused by lack of absolute constraints. We then developed a parallel matching strategy to improve the efficiency of tie points (TPs) matching, and adopted a three-array data structure based on sparsity to relieve the storage and calculation burden of the high-order modified equation. Finally, we used the conjugate gradient method to improve the speed of solving the high-order equations. To evaluate the feasibility of the presented large-scale BA method, we conducted three experiments on real data collected by the ZY-3 satellite. The experimental results indicate that the presented method can effectively improve the geometric accuracies of ZY-3 satellite images. This study demonstrates the feasibility of large-scale mapping without GCPs.
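
    As a rough illustration of two numerical ingredients named above, a virtual-control-point constraint to remove the rank deficiency and a conjugate-gradient solver for the high-order normal equations, here is a hedged SciPy sketch. Treating the VCPs as a weighted regularization term is one common reading of the idea; the weighting and structure are assumptions, not the paper's exact formulation.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def solve_ba_step(A, r, vcp_weight=1e-2):
    """One Gauss-Newton step of a GCP-free block adjustment (sketch).

    A : sparse Jacobian of tie-point residuals w.r.t. orientation params
    r : residual vector
    The virtual-control-point constraint enters as a weighted identity
    (Tikhonov-like) term that removes the rank deficiency caused by the
    absence of absolute ground control.
    """
    n = A.shape[1]
    N = (A.T @ A + vcp_weight * sp.eye(n)).tocsr()  # sparse normal matrix
    b = A.T @ r
    dx, info = cg(N, b, atol=1e-8)                  # conjugate gradient
    assert info == 0, "CG did not converge"
    return dx
```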

  5. Learning Traffic as Images: A Deep Convolutional Neural Network for Large-Scale Transportation Network Speed Prediction.

    PubMed

    Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng

    2017-04-10

    This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long-short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.
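
    A minimal PyTorch sketch of the time-space-matrix idea: traffic speeds become a one-channel image (time on one axis, road segments on the other) fed to a small CNN that predicts a speed per segment. The architecture and sizes here are illustrative stand-ins, not the paper's network.

```python
import torch
import torch.nn as nn

class TrafficCNN(nn.Module):
    """Toy CNN over a time-space speed matrix (1 x T x S 'image')."""
    def __init__(self, t_steps=60, segments=100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * (t_steps // 4) * (segments // 4), segments)

    def forward(self, x):                 # x: (batch, 1, T, S)
        h = self.features(x)
        return self.head(h.flatten(1))    # predicted speed per segment

# speeds[t, s] = speed on segment s at time t, scaled to [0, 1]
speeds = torch.rand(8, 1, 60, 100)        # a fake mini-batch
pred = TrafficCNN()(speeds)               # (8, 100) network-wide prediction
```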

  6. Learning Traffic as Images: A Deep Convolutional Neural Network for Large-Scale Transportation Network Speed Prediction

    PubMed Central

    Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng

    2017-01-01

    This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long-short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks. PMID:28394270

  7. Using stroboscopic flow imaging to validate large-scale computational fluid dynamics simulations

    NASA Astrophysics Data System (ADS)

    Laurence, Ted A.; Ly, Sonny; Fong, Erika; Shusteff, Maxim; Randles, Amanda; Gounley, John; Draeger, Erik

    2017-02-01

    The utility and accuracy of computational modeling often require direct validation against experimental measurements. The work presented here is motivated by taking a combined experimental and computational approach to determine the ability of large-scale computational fluid dynamics (CFD) simulations to understand and predict the dynamics of circulating tumor cells in clinically relevant environments. We use stroboscopic light sheet fluorescence imaging to track the paths and measure the velocities of fluorescent microspheres throughout a human aorta model. Performed over complex, physiologically realistic 3D geometries, large data sets are acquired with microscopic resolution over macroscopic distances.

  8. Validating a Geographical Image Retrieval System.

    ERIC Educational Resources Information Center

    Zhu, Bin; Chen, Hsinchun

    2000-01-01

    Summarizes a prototype geographical image retrieval system that demonstrates how to integrate image processing and information analysis techniques to support large-scale content-based image retrieval. Describes an experiment to validate the performance of this image retrieval system against that of human subjects by examining similarity analysis…

  9. High-throughput image analysis of tumor spheroids: a user-friendly software application to measure the size of spheroids automatically and accurately.

    PubMed

    Chen, Wenjin; Wong, Chung; Vosburgh, Evan; Levine, Arnold J; Foran, David J; Xu, Eugenia Y

    2014-07-08

    The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use and free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application, SpheroidSizer, which measures the major and minor axial lengths of imaged 3D tumor spheroids automatically and accurately, calculates the volume of each individual 3D tumor spheroid, and then outputs the results in two different spreadsheet forms for easy manipulation in subsequent data analysis. The main advantage of this software is its powerful image analysis engine adapted for large numbers of images; it provides a high-throughput computation and quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core high-performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with uneven illumination and noisy background that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and images of diverse quality. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process. Implementing this software helps 3D tumor spheroids become a routine in vitro model for drug screens in industry and academia.
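
    The measurement step is standard once a spheroid outline exists: fit major/minor axes and apply the usual volume formula V = (π/6)·L·W². The sketch below substitutes a simple Otsu threshold for SpheroidSizer's active-contour segmentation, so it is an illustration of the measurement, not the tool's algorithm.

```python
import numpy as np
from skimage import filters, measure

def spheroid_size(gray):
    """Estimate spheroid axes and volume from one grayscale image.

    Thresholding is a stand-in for the active-contour (Snakes) step;
    the measurement follows the usual V = (pi/6) * L * W**2.
    """
    mask = gray < filters.threshold_otsu(gray)   # dark spheroid, light field
    labels = measure.label(mask)
    region = max(measure.regionprops(labels), key=lambda r: r.area)
    L = region.major_axis_length
    W = region.minor_axis_length
    volume = np.pi / 6.0 * L * W ** 2            # pixels^3; calibrate to um
    return L, W, volume
```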

  10. Photographic images captured while sampling for bald eagles near the Davis Pond freshwater diversion structure in Barataria Bay, Louisiana (2009-10)

    USGS Publications Warehouse

    Jenkins, Jill A.; Jeske, Clinton W.; Allain, Larry K.

    2011-01-01

    The implementation of freshwater diversions in large-scale coastal restoration schemes presents several scientific and management considerations. Large-scale environmental restructuring necessitates aquatic biomonitoring, and during such field studies, photographs that document animals and habitat may be captured. Among the biomonitoring studies performed in conjunction with the Davis Pond freshwater diversion structure south of New Orleans, Louisiana, only postdiversion study images are readily available, and these are presented here.

  11. Remote Imaging Applied to Schistosomiasis Control: The Anning River Project

    NASA Technical Reports Server (NTRS)

    Seto, Edmund Y. W.; Maszle, Don R.; Spear, Robert C.; Gong, Peng

    1997-01-01

    The use of satellite imaging to remotely detect areas of high risk for transmission of infectious disease is an appealing prospect for large-scale monitoring of these diseases. The detection of large-scale environmental determinants of disease risk, often called landscape epidemiology, has been motivated by several authors (Pavlovsky 1966; Meade et al. 1988). The basic notion is that large-scale factors such as population density, air temperature, hydrological conditions, soil type, and vegetation can determine in a coarse fashion the local conditions contributing to disease vector abundance and human contact with disease agents. These large-scale factors can often be remotely detected by sensors or cameras mounted on satellite or aircraft platforms and can thus be used in a predictive model to mark high risk areas of transmission and to target control or monitoring efforts. A review of satellite technologies for this purpose was recently presented by Washino and Wood (1994) and Hay (1997) and Hay et al. (1997).

  12. Supervised graph hashing for histopathology image retrieval and classification.

    PubMed

    Shi, Xiaoshuang; Xing, Fuyong; Xu, KaiDi; Xie, Yuanpu; Su, Hai; Yang, Lin

    2017-12-01

    In pathology image analysis, morphological characteristics of cells are critical to grade many diseases. With the development of cell detection and segmentation techniques, it is possible to extract cell-level information for further analysis in pathology images. However, it is challenging to conduct efficient analysis of cell-level information on a large-scale image dataset because each image usually contains hundreds or thousands of cells. In this paper, we propose a novel image retrieval based framework for large-scale pathology image analysis. For each image, we encode each cell into binary codes to generate image representation using a novel graph based hashing model and then conduct image retrieval by applying a group-to-group matching method to similarity measurement. In order to improve both computational efficiency and memory requirement, we further introduce matrix factorization into the hashing model for scalable image retrieval. The proposed framework is extensively validated with thousands of lung cancer images, and it achieves 97.98% classification accuracy and 97.50% retrieval precision with all cells of each query image used. Copyright © 2017 Elsevier B.V. All rights reserved.
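
    The retrieval step can be sketched as a group-to-group similarity between the per-cell binary codes of two images. The averaging rule below (mean of per-query-cell minimum Hamming distances) is one plausible instantiation for illustration, not the paper's exact matching method.

```python
import numpy as np

def hamming(a, b):
    """Pairwise Hamming distances between two sets of binary codes."""
    return (a[:, None, :] != b[None, :, :]).sum(axis=2)

def group_distance(cells_q, cells_db):
    """Group-to-group distance between two images' cell codes.

    A Hausdorff-like average: for each query cell, the distance to the
    closest database cell, averaged over all query cells.
    """
    d = hamming(cells_q, cells_db)
    return d.min(axis=1).mean()

# Each image is a set of per-cell 32-bit codes (rows of 0/1).
rng = np.random.default_rng(0)
img_a = rng.integers(0, 2, size=(300, 32))
img_b = rng.integers(0, 2, size=(450, 32))
print(group_distance(img_a, img_b))
```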

  13. Deep learning with non-medical training used for chest pathology identification

    NASA Astrophysics Data System (ADS)

    Bar, Yaniv; Diamant, Idit; Wolf, Lior; Greenspan, Hayit

    2015-03-01

    In this work, we examine the strength of deep learning approaches for pathology detection in chest radiograph data. Convolutional neural network (CNN) deep architecture classification approaches have gained popularity due to their ability to learn mid- and high-level image representations. We explore the ability of a CNN to identify different types of pathologies in chest x-ray images. Moreover, since very large training sets are generally not available in the medical domain, we explore the feasibility of using a deep learning approach based on non-medical learning. We tested our algorithm on a dataset of 93 images. We use a CNN that was trained with ImageNet, a well-known large-scale non-medical image database. The best performance was achieved using a combination of features extracted from the CNN and a set of low-level features. We obtained an area under the curve (AUC) of 0.93 for right pleural effusion detection, 0.89 for enlarged heart detection, and 0.79 for classification between healthy and abnormal chest x-rays, where all pathologies are combined into one large class. This is a first-of-its-kind experiment showing that deep learning with large-scale non-medical image databases may be sufficient for general medical image recognition tasks.
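
    A hedged sketch of the general recipe: extract features from an ImageNet-pretrained CNN and train a shallow classifier on the medical images. The ResNet-18 backbone and SVM here are stand-ins (the torchvision ≥0.13 weights API is assumed), not the network used in the paper.

```python
import torch
from torchvision import models, transforms
from sklearn.svm import SVC

# ImageNet-pretrained backbone with the classification head removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def cnn_features(pil_images):
    batch = torch.stack([prep(im) for im in pil_images])
    return backbone(batch).numpy()          # (N, 512) descriptors

# X = cnn_features(chest_xray_images); y = labels (e.g. effusion or not)
# clf = SVC(probability=True).fit(X, y)     # shallow classifier on features
```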

  14. Multi scales based sparse matrix spectral clustering image segmentation

    NASA Astrophysics Data System (ADS)

    Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin

    2018-04-01

    In image segmentation, spectral clustering algorithms have to adopt an appropriate scaling parameter to calculate the similarity matrix between pixels, which can have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm increase greatly. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and a sparse matrix. We first devise a new feature extraction method, then extract image features at different scales, and finally use the feature information to construct a sparse similarity matrix, which improves operational efficiency. Compared with the traditional spectral clustering algorithm, image segmentation experiments show that our algorithm has better accuracy and robustness.
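
    A compact scikit-learn sketch of the sparse-similarity idea: a k-nearest-neighbor graph keeps the affinity matrix sparse, which is the main trick for taming memory and runtime. The multi-scale feature construction is summarized here as a pre-stacked feature array, and the parameters are illustrative.

```python
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import SpectralClustering

def segment(features, n_segments=4, k=10):
    """Spectral clustering on a sparse k-NN affinity matrix.

    features : (n_pixels, n_dims) array, e.g. color and filter responses
               gathered at several scales and stacked per pixel.
    """
    affinity = kneighbors_graph(features, k, mode="connectivity",
                                include_self=True)
    affinity = 0.5 * (affinity + affinity.T)     # symmetrize the graph
    sc = SpectralClustering(n_clusters=n_segments, affinity="precomputed",
                            assign_labels="kmeans", random_state=0)
    return sc.fit_predict(affinity)
```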

  15. Investigation of multilayer domains in large-scale CVD monolayer graphene by optical imaging

    NASA Astrophysics Data System (ADS)

    Yu, Yuanfang; Li, Zhenzhen; Wang, Wenhui; Guo, Xitao; Jiang, Jie; Nan, Haiyan; Ni, Zhenhua

    2017-03-01

    CVD graphene is a promising candidate for optoelectronic applications due to its high quality and high yield. However, multi-layer domains could inevitably form at the nucleation centers during the growth. Here, we propose an optical imaging technique to precisely identify the multilayer domains and also the ratio of their coverage in large-scale CVD monolayer graphene. We have also shown that the stacking disorder in twisted bilayer graphene as well as the impurities on the graphene surface could be distinguished by optical imaging. Finally, we investigated the effects of bilayer domains on the optical and electrical properties of CVD graphene, and found that the carrier mobility of CVD graphene is seriously limited by scattering from bilayer domains. Our results could be useful for guiding future optoelectronic applications of large-scale CVD graphene. Project supported by the National Natural Science Foundation of China (Nos. 61422503, 61376104), the Open Research Funds of Key Laboratory of MEMS of Ministry of Education (SEU, China), and the Fundamental Research Funds for the Central Universities.

  16. Visual Systems for Interactive Exploration and Mining of Large-Scale Neuroimaging Data Archives

    PubMed Central

    Bowman, Ian; Joshi, Shantanu H.; Van Horn, John D.

    2012-01-01

    While technological advancements in neuroimaging scanner engineering have improved the efficiency of data acquisition, electronic data capture methods will likewise significantly expedite the population of large-scale neuroimaging databases. As these archives grow in size, a particular challenge lies in examining and interacting with the information they contain through compelling, user-driven approaches for data exploration and mining. In this article, we introduce the informatics visualization for neuroimaging (INVIZIAN) framework for graphically rendering, and dynamically interacting with, the contents of large-scale neuroimaging data sets. We describe the rationale behind INVIZIAN, detail its development, and demonstrate its usage in examining a collection of over 900 T1-anatomical magnetic resonance imaging (MRI) image volumes drawn from a diverse set of clinical neuroimaging studies in a leading neuroimaging database. Using a collection of cortical surface metrics and means for examining brain similarity, INVIZIAN graphically displays brain surfaces as points in a coordinate space, enabling classification of clusters of neuroanatomically similar MRI images and data mining. As an initial step toward addressing the need for such user-friendly tools, INVIZIAN provides a highly unique means of interacting with large quantities of electronic brain imaging archives in ways suitable for hypothesis generation and data mining. PMID:22536181

  17. Topography and Albedo Image of Grooved Terrain on Vesta

    NASA Image and Video Library

    2011-11-29

    These images from NASA's Dawn spacecraft show part of the grooved terrain in asteroid Vesta's Pinaria quadrangle, which is in the southern hemisphere. Large-scale grooves and depressions can be seen running diagonally across the image.

  18. Techniques for automatic large scale change analysis of temporal multispectral imagery

    NASA Astrophysics Data System (ADS)

    Mercovich, Ryan A.

    Change detection in remotely sensed imagery is a multi-faceted problem with a wide variety of desired solutions. Automatic change detection and analysis to assist in the coverage of large areas at high resolution is a popular area of research in the remote sensing community. Beyond basic change detection, the analysis of change is essential to provide results that positively impact an image analyst's job when examining potentially changed areas. Present change detection algorithms are geared toward low resolution imagery, and require analyst input to provide anything more than a simple pixel level map of the magnitude of change that has occurred. One major problem with this approach is that change occurs in such large volume at small spatial scales that a simple change map is no longer useful. This research strives to create an algorithm based on a set of metrics that performs a large area search for change in high resolution multispectral image sequences and utilizes a variety of methods to identify different types of change. Rather than simply mapping the magnitude of any change in the scene, the goal of this research is to create a useful display of the different types of change in the image. The techniques presented in this dissertation are used to interpret large area images and provide useful information to an analyst about small regions that have undergone specific types of change while retaining image context to make further manual interpretation easier. This analyst cueing to reduce information overload in a large area search environment will have an impact in the areas of disaster recovery, search and rescue situations, and land use surveys among others. By utilizing a feature based approach founded on applying existing statistical methods and new and existing topological methods to high resolution temporal multispectral imagery, a novel change detection methodology is produced that can automatically provide useful information about the change occurring in large area and high resolution image sequences. The change detection and analysis algorithm developed could be adapted to many potential image change scenarios to perform automatic large scale analysis of change.

  19. Learning Short Binary Codes for Large-scale Image Retrieval.

    PubMed

    Liu, Li; Yu, Mengyang; Shao, Ling

    2017-03-01

    Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms prove to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually the code length shorter than 100 b) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of data, MCR can generate one bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost-values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve comparative performance as the state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
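
    A toy NumPy sketch of the bit-ranking idea: each dimension contributes one bit (above/below its median), bits are ranked by a simple cost, and only the cheapest survive. The cost proxy below (balance plus a variance term) is an assumption for illustration; the actual MCR cost function is more elaborate.

```python
import numpy as np

def short_codes(X, n_bits=48):
    """Toy min-cost-ranking-style hashing (illustrative only)."""
    medians = np.median(X, axis=0)
    bits = (X > medians).astype(np.uint8)        # one bit per dimension
    balance = np.abs(bits.mean(axis=0) - 0.5)    # 0 = perfectly balanced bit
    spread = -X.std(axis=0)                      # prefer high-variance dims
    cost = balance + 0.01 * spread
    keep = np.argsort(cost)[:n_bits]             # minimum-cost bits win
    return bits[:, keep], keep

X = np.random.default_rng(1).normal(size=(1000, 512))  # e.g. CNN features
codes, selected = short_codes(X)
print(codes.shape)   # (1000, 48): short binary codes for fast retrieval
```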

  20. Feature hashing for fast image retrieval

    NASA Astrophysics Data System (ADS)

    Yan, Lingyu; Fu, Jiarun; Zhang, Hongxin; Yuan, Lu; Xu, Hui

    2018-03-01

    Current research on content-based image retrieval mainly focuses on robust feature extraction. However, due to the exponential growth of online images, it is necessary to consider searching among large-scale image collections, which is very time-consuming and unscalable. Hence, we need to pay close attention to the efficiency of image retrieval. In this paper, we propose a feature hashing method for image retrieval that not only generates a compact fingerprint for image representation, but also prevents large semantic loss during the hashing process. To generate the fingerprint, an objective function of semantic loss is constructed and minimized, combining the influence of both the neighborhood structure of the feature data and the mapping error. Since machine-learning-based hashing effectively preserves the neighborhood structure of the data, it yields visual words with strong discriminability. Furthermore, the generated binary codes keep image representation low-complexity, making retrieval efficient and scalable to large-scale databases. Experimental results show the good performance of our approach.

  1. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    PubMed Central

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917

  2. Preliminary investigation of Large Format Camera photography utility in soil mapping and related agricultural applications

    NASA Technical Reports Server (NTRS)

    Pelletier, R. E.; Hudnall, W. H.

    1987-01-01

    The use of Space Shuttle Large Format Camera (LFC) color, IR/color, and B&W images in large-scale soil mapping is discussed and illustrated with sample photographs from STS 41-6 (October 1984). Consideration is given to the characteristics of the film types used; the photographic scales available; geometric and stereoscopic factors; and image interpretation and classification for soil-type mapping (detecting both sharp and gradual boundaries), soil parent material topographic and hydrologic assessment, natural-resources inventory, crop-type identification, and stress analysis. It is suggested that LFC photography can play an important role, filling the gap between aerial and satellite remote sensing.

  3. Convolutional auto-encoder for image denoising of ultra-low-dose CT.

    PubMed

    Nishio, Mizuho; Nagashima, Chihiro; Hirabayashi, Saori; Ohnishi, Akinori; Sasaki, Kaori; Sagawa, Tomoyuki; Hamada, Masayuki; Yamashita, Tatsuo

    2017-08-01

    The purpose of this study was to validate a patch-based image denoising method for ultra-low-dose CT images. Neural network with convolutional auto-encoder and pairs of standard-dose CT and ultra-low-dose CT image patches were used for image denoising. The performance of the proposed method was measured by using a chest phantom. Standard-dose and ultra-low-dose CT images of the chest phantom were acquired. The tube currents for standard-dose and ultra-low-dose CT were 300 and 10 mA, respectively. Ultra-low-dose CT images were denoised with our proposed method using neural network, large-scale nonlocal mean, and block-matching and 3D filtering. Five radiologists and three technologists assessed the denoised ultra-low-dose CT images visually and recorded their subjective impressions of streak artifacts, noise other than streak artifacts, visualization of pulmonary vessels, and overall image quality. For the streak artifacts, noise other than streak artifacts, and visualization of pulmonary vessels, the results of our proposed method were statistically better than those of block-matching and 3D filtering (p-values < 0.05). On the other hand, the difference in the overall image quality between our proposed method and block-matching and 3D filtering was not statistically significant (p-value = 0.07272). The p-values obtained between our proposed method and large-scale nonlocal mean were all less than 0.05. Neural network with convolutional auto-encoder could be trained using pairs of standard-dose and ultra-low-dose CT image patches. According to the visual assessment by radiologists and technologists, the performance of our proposed method was superior to that of large-scale nonlocal mean and block-matching and 3D filtering.
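
    A minimal PyTorch sketch of a patch-based convolutional auto-encoder denoiser trained on aligned (ultra-low-dose, standard-dose) patch pairs; layer sizes, patch size, and training settings are illustrative, not the study's configuration.

```python
import torch
import torch.nn as nn

class DenoisingCAE(nn.Module):
    """Small convolutional auto-encoder for CT patch denoising."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 1, 2, stride=2),                # 32 -> 64
        )

    def forward(self, x):
        return self.dec(self.enc(x))

model = DenoisingCAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# noisy/clean: aligned (ultra-low-dose, standard-dose) patch pairs,
# shape (batch, 1, 64, 64), intensities scaled to [0, 1].
noisy = torch.rand(16, 1, 64, 64)
clean = noisy.clone()                # placeholder for real paired data
for _ in range(10):                  # training loop, heavily abbreviated
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    opt.step()
```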

  4. Demonstration of nanoimprinted hyperlens array for high-throughput sub-diffraction imaging

    NASA Astrophysics Data System (ADS)

    Byun, Minsueop; Lee, Dasol; Kim, Minkyung; Kim, Yangdoo; Kim, Kwan; Ok, Jong G.; Rho, Junsuk; Lee, Heon

    2017-04-01

    Overcoming the resolution limit of conventional optics is regarded as the most important issue in optical imaging science and technology. Although hyperlenses, super-resolution imaging devices based on highly anisotropic dispersion relations that allow access to high-wavevector components, have recently achieved far-field sub-diffraction imaging in real time, the previously demonstrated devices have suffered from extreme difficulties in both the fabrication process and the placement of non-artificial objects. This places restrictions on the practical applications of hyperlens devices. While implementing large-scale hyperlens arrays in conventional microscopy is desirable to solve such issues, it has not been feasible to fabricate such large-scale hyperlens arrays with the previously used nanofabrication methods. Here, we suggest a scalable and reliable fabrication process for a large-scale hyperlens device based on direct pattern transfer techniques. We fabricate a 5 cm × 5 cm hyperlens array and experimentally demonstrate that it can resolve sub-diffraction features down to 160 nm under 410 nm wavelength visible light. The array-based hyperlens device will provide a simple solution for much more practical far-field and real-time super-resolution imaging, which can be widely used in optics, biology, medical science, nanotechnology, and other closely related interdisciplinary fields.

  5. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images

    PubMed Central

    Afshar, Yaser; Sbalzarini, Ivo F.

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144
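
    The decomposition idea scales down to a single machine: split the image into overlapping tiles, segment the tiles in parallel, and stitch the tile interiors back together. The sketch below substitutes a simple Otsu threshold for the paper's distributed Region Competition algorithm, and uses Python multiprocessing in place of MPI.

```python
import numpy as np
from multiprocessing import Pool
from skimage.filters import threshold_otsu

TILE, HALO = 256, 16   # tile size and overlap so tile borders see context

def segment_tile(args):
    (y, x), tile = args
    mask = tile > threshold_otsu(tile)           # stand-in segmentation
    return (y, x), mask[HALO:-HALO, HALO:-HALO]  # keep interior only

def tiles(img):
    for y in range(HALO, img.shape[0] - HALO, TILE):
        for x in range(HALO, img.shape[1] - HALO, TILE):
            yield (y, x), img[y - HALO:y + TILE + HALO,
                              x - HALO:x + TILE + HALO]

if __name__ == "__main__":
    img = np.random.rand(2048, 2048)             # placeholder volume slice
    out = np.zeros(img.shape, dtype=bool)
    with Pool() as pool:                         # tiles processed in parallel
        for (y, x), mask in pool.imap_unordered(segment_tile, tiles(img)):
            out[y:y + mask.shape[0], x:x + mask.shape[1]] = mask
```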

  6. A Parallel Distributed-Memory Particle Method Enables Acquisition-Rate Segmentation of Large Fluorescence Microscopy Images.

    PubMed

    Afshar, Yaser; Sbalzarini, Ivo F

    2016-01-01

    Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rate. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments.

  7. Open source database of images DEIMOS: extension for large-scale subjective image quality assessment

    NASA Astrophysics Data System (ADS)

    Vítek, Stanislav

    2014-09-01

    DEIMOS (Database of Images: Open Source) is an open-source database of images and video sequences for testing, verification, and comparison of various image and/or video processing techniques such as compression, reconstruction, and enhancement. This paper deals with an extension of the database that allows performing large-scale web-based subjective image quality assessment. The extension implements both an administrative and a client interface. The proposed system is aimed mainly at mobile communication devices, taking advantage of HTML5 technology; participants don't need to install any application, and assessment can be performed using a web browser. The assessment campaign administrator can select images from the large database and then apply rules defined by various test procedure recommendations. The standard test procedures may be fully customized and saved as templates. Alternatively, the administrator can define a custom test using images from the pool and other components, such as evaluation forms and ongoing questionnaires. The image sequence is delivered to the online client, e.g., smartphone or tablet, as a fully automated assessment sequence, or the viewer can decide on the timing of the assessment if required. Environmental data and viewing conditions (e.g., illumination, vibrations, GPS coordinates, etc.) may be collected and subsequently analyzed.

  8. a Novel Ship Detection Method for Large-Scale Optical Satellite Images Based on Visual Lbp Feature and Visual Attention Model

    NASA Astrophysics Data System (ADS)

    Haigang, Sui; Zhina, Song

    2016-06-01

    Reliable ship detection in optical satellite images has wide application in both military and civil fields. However, this problem is very difficult in complex backgrounds, such as waves, clouds, and small islands. Aiming at these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets in terms of biologically-inspired visual features. This model first selects salient candidate regions across large-scale images by using a mechanism based on biologically-inspired visual features, combining a visual attention model with local binary patterns (CVLBP). Different from traditional studies, the proposed algorithm is fast and helps focus on suspected ship areas, avoiding a separate land-sea segmentation step. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using a visual attention model and detail signatures using LBP features, consistent with the sparseness of ship distribution in such images. These features are then employed to classify each chip as containing ship targets or not, using a support vector machine (SVM). After the suspicious areas are identified, some false alarms remain, such as sea waves and small ribbon clouds; simple shape and texture analysis are therefore adopted to distinguish between ships and non-ships in the suspicious areas. Experimental results show the proposed method is insensitive to waves, clouds, illumination, and ship size.
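
    The chip-classification stage can be sketched as an LBP histogram per chip followed by an SVM; the saliency-based candidate selection (CVLBP) is omitted here, and the parameters and random data are illustrative.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1   # LBP neighbors and radius

def chip_descriptor(chip):
    """Uniform-LBP histogram for one image chip (2-D grayscale array)."""
    lbp = local_binary_pattern(chip, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# chips: candidate regions cut from the large satellite image;
# labels: 1 = contains a ship, 0 = sea clutter / cloud / island.
rng = np.random.default_rng(0)
chips = [rng.random((64, 64)) for _ in range(20)]
labels = rng.integers(0, 2, size=20)
X = np.array([chip_descriptor(c) for c in chips])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))
```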

  9. Rotation invariant fast features for large-scale recognition

    NASA Astrophysics Data System (ADS)

    Takacs, Gabriel; Chandrasekhar, Vijay; Tsai, Sam; Chen, David; Grzeszczuk, Radek; Girod, Bernd

    2012-10-01

    We present an end-to-end feature description pipeline which uses a novel interest point detector and Rotation-Invariant Fast Feature (RIFF) descriptors. The proposed RIFF algorithm is 15× faster than SURF while producing large-scale retrieval results that are comparable to SIFT. Such high-speed features benefit a range of applications from Mobile Augmented Reality (MAR) to web-scale image retrieval and analysis.

  10. Fast large-scale object retrieval with binary quantization

    NASA Astrophysics Data System (ADS)

    Zhou, Shifu; Zeng, Dan; Shen, Wei; Zhang, Zhijiang; Tian, Qi

    2015-11-01

    The objective of large-scale object retrieval systems is to search for images that contain the target object in an image database. Where state-of-the-art approaches rely on global image representations to conduct searches, we consider many boxes per image as candidates to search locally in a picture. In this paper, a feature quantization algorithm called binary quantization is proposed. In binary quantization, a scale-invariant feature transform (SIFT) feature is quantized into a descriptive and discriminative bit-vector, which allows itself to adapt to the classic inverted file structure for box indexing. The inverted file, which stores the bit-vector and box ID where the SIFT feature is located inside, is compact and can be loaded into the main memory for efficient box indexing. We evaluate our approach on available object retrieval datasets. Experimental results demonstrate that the proposed approach is fast and achieves excellent search quality. Therefore, the proposed approach is an improvement over state-of-the-art approaches for object retrieval.

  11. Efficient estimation and large-scale evaluation of lateral chromatic aberration for digital image forensics

    NASA Astrophysics Data System (ADS)

    Gloe, Thomas; Borowka, Karsten; Winkler, Antje

    2010-01-01

    The analysis of lateral chromatic aberration forms another ingredient in the well-equipped toolbox of an image forensic investigator. Previous work proposed its application to forgery detection and image source identification. This paper takes a closer look at the current state-of-the-art method for analysing lateral chromatic aberration and presents a new approach to estimate lateral chromatic aberration in a runtime-efficient way. Employing a set of 11 different camera models comprising 43 devices, the characteristics of lateral chromatic aberration are investigated at large scale. The reported results point to general difficulties that have to be considered in real-world investigations.

  12. Automated microscopy for high-content RNAi screening

    PubMed Central

    2010-01-01

    Fluorescence microscopy is one of the most powerful tools to investigate complex cellular processes such as cell division, cell motility, or intracellular trafficking. The availability of RNA interference (RNAi) technology and automated microscopy has opened the possibility to perform cellular imaging in functional genomics and other large-scale applications. Although imaging often dramatically increases the content of a screening assay, it poses new challenges to achieve accurate quantitative annotation and therefore needs to be carefully adjusted to the specific needs of individual screening applications. In this review, we discuss principles of assay design, large-scale RNAi, microscope automation, and computational data analysis. We highlight strategies for imaging-based RNAi screening adapted to different library and assay designs. PMID:20176920

  13. Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop

    NASA Astrophysics Data System (ADS)

    Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin

    2014-06-01

    Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management, and accident discovery. With the increasing amount of WAMI collections and feature extraction from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied to large-scale and big-data processing. In this paper, MapReduce in Hadoop is investigated for large-scale feature extraction tasks for WAMI. Specifically, a large dataset of WAMI images is divided into several splits, each containing a small subset of WAMI images. The feature extraction for the WAMI images in each split is distributed to slave nodes in the Hadoop system, and feature extraction for each image is performed individually in the assigned slave node. Finally, the feature extraction results are sent to the Hadoop File System (HDFS) to aggregate the feature information over the collected imagery. Experiments of feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
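
    In the same spirit, a Hadoop Streaming-style mapper in Python: each input line is an image path, and each output line is an (id, feature) pair that downstream reducers can aggregate in HDFS. The ORB descriptor and the averaging are stand-ins for the actual WAMI feature extractor.

```python
#!/usr/bin/env python
"""Hadoop Streaming mapper: one line in = one image path,
one line out = image id TAB comma-separated feature vector."""
import sys
import cv2

def extract(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=100)        # any detector works here
    _, desc = orb.detectAndCompute(img, None)
    return desc.mean(axis=0) if desc is not None else []

for line in sys.stdin:
    path = line.strip()
    feats = extract(path)
    print("%s\t%s" % (path, ",".join("%.4f" % f for f in feats)))
```

    Such a mapper would typically be launched with the standard streaming jar, e.g. `hadoop jar hadoop-streaming.jar -mapper mapper.py -input paths.txt -output feats`; the exact flags depend on the Hadoop version and are not taken from the paper.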

  14. JunoCam's Images of Jupiter

    NASA Astrophysics Data System (ADS)

    Hansen, C. J.; Ravine, M. A.; Caplinger, M. A.; Orton, G. S.; Ingersoll, A. P.; Jensen, E.; Lipkaman, L.; Krysak, D.; Zimdar, R.; Bolton, S. J.

    2016-12-01

    JunoCam is a visible imager on the Juno spacecraft in orbit around Jupiter. It is a wide angle camera (58 deg field of view) with 4 color filters: red, green and blue (RGB) and methane at 889 nm, designed for optimal imaging of Jupiter's poles. Juno's elliptical polar orbit will offer unique views of Jupiter's polar regions with a spatial scale of 50 km/pixel. At closest approach the images will have a spatial scale of 3 km/pixel. As a push-frame imager on a rotating spacecraft, JunoCam uses time-delayed integration to take advantage of the spacecraft spin to extend integration time to increase signal. Images of Jupiter's poles reveal a largely uncharted region of Jupiter, as nearly all earlier spacecraft have orbited or flown by in the equatorial plane. Most of the images of Jupiter will be acquired in the +/-2 hours surrounding closest approach. The polar vortex, polar cloud morphology, and winds will be investigated. RGB color images of the aurora will be acquired if detectable. Stereo images and images taken with the methane filter will allow us to estimate cloud-top heights. Images of the cloud-tops will aid in understanding the data collected by other instruments on Juno that probe deeper in the atmosphere. During the two months that Jupiter is too close to the sun for ground-based observers to collect data, JunoCam will take images routinely to monitor large-scale features. Occasional, opportunistic images of the Galilean moons will be acquired.

  15. Measuring and correcting wobble in large-scale transmission radiography.

    PubMed

    Rogers, Thomas W; Ollier, James; Morton, Edward J; Griffin, Lewis D

    2017-01-01

    Large-scale transmission radiography scanners are used to image vehicles and cargo containers. Acquired images are inspected for threats by a human operator or a computer algorithm. To make accurate detections, it is important that image values are precise. However, due to the scale (∼5 m tall) of such systems, they can be mechanically unstable, causing the imaging array to wobble during a scan. This leads to an effective loss of precision in the captured image. We consider the measurement of wobble and the amelioration of the consequent loss of image precision. Following our previous work, we use Beam Position Detectors (BPDs) to measure the cross-sectional profile of the X-ray beam, allowing for estimation, and thus correction, of wobble. We propose: (i) a model of image formation with a wobbling detector array; (ii) a method of wobble correction derived from this model; (iii) methods for calibrating sensor sensitivities and relative offsets; (iv) a Random Regression Forest based method for instantaneous estimation of detector wobble; and (v) using these estimates to apply corrections to captured images of difficult scenes. We show that these methods are able to correct for 87% of the image error due to wobble, and when applied to difficult images, a significant visible improvement in intensity-windowed image quality is observed. The method improves the precision of wobble-affected images, which should help improve the detection of threats and the identification of different materials in the image.

  16. Spaceborne imaging radar - Geologic and oceanographic applications

    NASA Technical Reports Server (NTRS)

    Elachi, C.

    1980-01-01

    Synoptic, large-area radar images of the earth's land and ocean surface, obtained from the Seasat orbiting spacecraft, show the potential for geologic mapping and for monitoring of ocean surface patterns. Structural and topographic features such as lineaments, anticlines, folds and domes, drainage patterns, stratification, and roughness units can be mapped. Ocean surface waves, internal waves, current boundaries, and large-scale eddies have been observed in numerous images taken by the Seasat imaging radar. This article gives an illustrated overview of these applications.

  17. Large-scale Labeled Datasets to Fuel Earth Science Deep Learning Applications

    NASA Astrophysics Data System (ADS)

    Maskey, M.; Ramachandran, R.; Miller, J.

    2017-12-01

    Deep learning has revolutionized computer vision and natural language processing with various algorithms scaled using high-performance computing. However, generic large-scale labeled datasets such as the ImageNet are the fuel that drives the impressive accuracy of deep learning results. Large-scale labeled datasets already exist in domains such as medical science, but creating them in the Earth science domain is a challenge. While there are ways to apply deep learning using limited labeled datasets, there is a need in the Earth sciences for creating large-scale labeled datasets for benchmarking and scaling deep learning applications. At the NASA Marshall Space Flight Center, we are using deep learning for a variety of Earth science applications where we have encountered the need for large-scale labeled datasets. We will discuss our approaches for creating such datasets and why these datasets are just as valuable as deep learning algorithms. We will also describe successful usage of these large-scale labeled datasets with our deep learning based applications.

  18. Saliency image of feature building for image quality assessment

    NASA Astrophysics Data System (ADS)

    Ju, Xinuo; Sun, Jiyin; Wang, Peng

    2011-11-01

    The purpose and method of image quality assessment are quite different for automatic target recognition (ATR) than for traditional applications. Local invariant feature detectors, mainly including corner detectors, blob detectors, and region detectors, are widely applied for ATR. A saliency model of features is proposed in this paper to evaluate the feasibility of ATR. The first step consists of computing the first-order derivatives in the horizontal and vertical orientations, and computing DoG maps at different scales. Next, saliency images of features are built based on the auto-correlation matrix at each scale. Then, the saliency images of features at different scales are amalgamated. Experiments were performed on a large test set, including infrared images and optical images, and the results showed that the salient regions computed by this model were consistent with the real feature regions computed by most local invariant feature extraction algorithms.

  19. Reducing computational costs in large scale 3D EIT by using a sparse Jacobian matrix with block-wise CGLS reconstruction.

    PubMed

    Yang, C L; Wei, H Y; Adler, A; Soleimani, M

    2013-06-01

    Electrical impedance tomography (EIT) is a fast and cost-effective technique that provides a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time- and memory-efficient method for solving a large-scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurement data can produce a large Jacobian matrix, which causes difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining image quality. Firstly, a sparse matrix reduction technique is proposed that uses thresholding to set very small values of the Jacobian matrix to zero. By adjusting the Jacobian matrix into a sparse format, the zero elements are eliminated, which results in a saving of memory. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with a block-wise CG enables the large-scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction on the reconstruction results.
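
    A hedged SciPy sketch of the sparsification step: threshold small Jacobian entries to zero, store the result in CSR format, and solve the damped least-squares system with lsqr, a CGLS-type Krylov solver. The block-wise parallel scheme itself is not shown, and the threshold, damping, and random data are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def sparse_eit_step(J, dv, threshold=1e-6):
    """One linearized EIT reconstruction step with a sparsified Jacobian.

    J  : dense Jacobian (n_measurements x n_voxels)
    dv : measured voltage perturbation
    Entries below `threshold` are zeroed, so the CSR matrix stores only
    significant sensitivities -- the memory saving described above.
    """
    Jt = np.where(np.abs(J) < threshold, 0.0, J)
    Js = sp.csr_matrix(Jt)                               # sparse storage
    d_sigma = lsqr(Js, dv, damp=1e-3, iter_lim=200)[0]   # damped CGLS-type
    return d_sigma

rng = np.random.default_rng(0)
J = rng.normal(size=(208, 5000)) * (rng.random((208, 5000)) < 0.05)
dv = rng.normal(size=208)
print(sparse_eit_step(J, dv).shape)    # (5000,) conductivity update
```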

  20. Large-Scale Machine Learning for Classification and Search

    ERIC Educational Resources Information Center

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  1. Very Large Scale Aerial (VLSA) imagery for assessing postfire bitterbrush recovery

    Treesearch

    Corey A. Moffet; J. Bret Taylor; D. Terrance Booth

    2008-01-01

    Very large scale aerial (VLSA) imagery is an efficient tool for monitoring bare ground and cover on extensive rangelands. This study was conducted to determine whether VLSA images could be used to detect differences in antelope bitterbrush (Purshia tridentata Pursh DC) cover and density among similar ecological sites with varying postfire recovery...

  2. Multiscale infrared and visible image fusion using gradient domain guided image filtering

    NASA Astrophysics Data System (ADS)

    Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia

    2018-03-01

    For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, hybrid multiscale decomposition with guided image filtering and gradient domain guided image filtering of source images are first applied before the weight maps of each scale are obtained using a saliency detection technology and filtering means with three different fusion rules at different scales. The three types of fusion rules are for small-scale detail level, large-scale detail level, and base level. Finally, the target becomes more salient and can be more easily detected in the fusion result, with the detail information of the scene being fully displayed. After analyzing the experimental comparisons with state-of-the-art fusion methods, the HMSD-GDGF method has obvious advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Therefore, visual effects can be improved by using the proposed HMSD-GDGF method.

  3. Edge-SIFT: discriminative binary descriptor for scalable partial-duplicate mobile search.

    PubMed

    Zhang, Shiliang; Tian, Qi; Lu, Ke; Huang, Qingming; Gao, Wen

    2013-07-01

    As the basis of large-scale partial duplicate visual search on mobile devices, image local descriptor is expected to be discriminative, efficient, and compact. Our study shows that the popularly used histogram-based descriptors, such as scale invariant feature transform (SIFT) are not optimal for this task. This is mainly because histogram representation is relatively expensive to compute on mobile platforms and loses significant spatial clues, which are important for improving discriminative power and matching near-duplicate image patches. To address these issues, we propose to extract a novel binary local descriptor named Edge-SIFT from the binary edge maps of scale- and orientation-normalized image patches. By preserving both locations and orientations of edges and compressing the sparse binary edge maps with a boosting strategy, the final Edge-SIFT shows strong discriminative power with compact representation. Furthermore, we propose a fast similarity measurement and an indexing framework with flexible online verification. Hence, the Edge-SIFT allows an accurate and efficient image search and is ideal for computation sensitive scenarios such as a mobile image search. Experiments on a large-scale dataset manifest that the Edge-SIFT shows superior retrieval accuracy to Oriented BRIEF (ORB) and is superior to SIFT in the aspects of retrieval precision, efficiency, compactness, and transmission cost.

  4. Domain-Adapted Convolutional Networks for Satellite Image Classification: A Large-Scale Interactive Learning Workflow

    DOE PAGES

    Lunga, Dalton D.; Yang, Hsiuhan Lexie; Reith, Andrew E.; ...

    2018-02-06

    Satellite imagery often exhibits large spatial extent areas that encompass object classes with considerable variability. This often limits large-scale model generalization with machine learning algorithms. Notably, acquisition conditions, including dates, sensor position, lighting condition, and sensor types, often translate into class distribution shifts introducing complex nonlinear factors and hamper the potential impact of machine learning classifiers. Here, this article investigates the challenge of exploiting satellite images using convolutional neural networks (CNN) for settlement classification where the class distribution shifts are significant. We present a large-scale human settlement mapping workflow based-off multiple modules to adapt a pretrained CNN to address themore » negative impact of distribution shift on classification performance. To extend a locally trained classifier onto large spatial extents areas we introduce several submodules: First, a human-in-the-loop element for relabeling of misclassified target domain samples to generate representative examples for model adaptation; second, an efficient hashing module to minimize redundancy and noisy samples from the mass-selected examples; and third, a novel relevance ranking module to minimize the dominance of source example on the target domain. The workflow presents a novel and practical approach to achieve large-scale domain adaptation with binary classifiers that are based-off CNN features. Experimental evaluations are conducted on areas of interest that encompass various image characteristics, including multisensors, multitemporal, and multiangular conditions. Domain adaptation is assessed on source–target pairs through the transfer loss and transfer ratio metrics to illustrate the utility of the workflow.« less

  5. Domain-Adapted Convolutional Networks for Satellite Image Classification: A Large-Scale Interactive Learning Workflow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lunga, Dalton D.; Yang, Hsiuhan Lexie; Reith, Andrew E.

    Satellite imagery often exhibits large spatial extent areas that encompass object classes with considerable variability. This often limits large-scale model generalization with machine learning algorithms. Notably, acquisition conditions, including dates, sensor position, lighting condition, and sensor types, often translate into class distribution shifts introducing complex nonlinear factors and hamper the potential impact of machine learning classifiers. Here, this article investigates the challenge of exploiting satellite images using convolutional neural networks (CNN) for settlement classification where the class distribution shifts are significant. We present a large-scale human settlement mapping workflow based-off multiple modules to adapt a pretrained CNN to address themore » negative impact of distribution shift on classification performance. To extend a locally trained classifier onto large spatial extents areas we introduce several submodules: First, a human-in-the-loop element for relabeling of misclassified target domain samples to generate representative examples for model adaptation; second, an efficient hashing module to minimize redundancy and noisy samples from the mass-selected examples; and third, a novel relevance ranking module to minimize the dominance of source example on the target domain. The workflow presents a novel and practical approach to achieve large-scale domain adaptation with binary classifiers that are based-off CNN features. Experimental evaluations are conducted on areas of interest that encompass various image characteristics, including multisensors, multitemporal, and multiangular conditions. Domain adaptation is assessed on source–target pairs through the transfer loss and transfer ratio metrics to illustrate the utility of the workflow.« less

  6. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    PubMed

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyzephenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.

  7. Cross-Scale Molecular Analysis of Chemical Heterogeneity in Shale Rocks

    DOE PAGES

    Hao, Zhao; Bechtel, Hans A.; Kneafsey, Timothy; ...

    2018-02-07

    The organic and mineralogical heterogeneity in shale at micrometer and nanometer spatial scales contributes to the quality of gas reserves, gas flow mechanisms and gas production. Here, we demonstrate two molecular imaging approaches based on infrared spectroscopy to obtain mineral and kerogen information at these mesoscale spatial resolutions in large-sized shale rock samples. The first method is a modified microscopic attenuated total reflectance measurement that utilizes a large germanium hemisphere combined with a focal plane array detector to rapidly capture chemical images of shale rock surfaces spanning hundreds of micrometers with micrometer spatial resolution. The second method, synchrotron infrared nano-spectroscopy,more » utilizes a metallic atomic force microscope tip to obtain chemical images of micrometer dimensions but with nanometer spatial resolution. This chemically "deconvoluted" imaging at the nano-pore scale is then used to build a machine learning model to generate a molecular distribution map across scales with a spatial span of 1000 times, which enables high-throughput geochemical characterization in greater details across the nano-pore and micro-grain scales and allows us to identify co-localization of mineral phases with chemically distinct organics and even with gas phase sorbents. Finally, this characterization is fundamental to understand mineral and organic compositions affecting the behavior of shales.« less

  8. Cross-Scale Molecular Analysis of Chemical Heterogeneity in Shale Rocks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, Zhao; Bechtel, Hans A.; Kneafsey, Timothy

    The organic and mineralogical heterogeneity in shale at micrometer and nanometer spatial scales contributes to the quality of gas reserves, gas flow mechanisms and gas production. Here, we demonstrate two molecular imaging approaches based on infrared spectroscopy to obtain mineral and kerogen information at these mesoscale spatial resolutions in large-sized shale rock samples. The first method is a modified microscopic attenuated total reflectance measurement that utilizes a large germanium hemisphere combined with a focal plane array detector to rapidly capture chemical images of shale rock surfaces spanning hundreds of micrometers with micrometer spatial resolution. The second method, synchrotron infrared nano-spectroscopy,more » utilizes a metallic atomic force microscope tip to obtain chemical images of micrometer dimensions but with nanometer spatial resolution. This chemically "deconvoluted" imaging at the nano-pore scale is then used to build a machine learning model to generate a molecular distribution map across scales with a spatial span of 1000 times, which enables high-throughput geochemical characterization in greater details across the nano-pore and micro-grain scales and allows us to identify co-localization of mineral phases with chemically distinct organics and even with gas phase sorbents. Finally, this characterization is fundamental to understand mineral and organic compositions affecting the behavior of shales.« less

  9. Combining semi-automated image analysis techniques with machine learning algorithms to accelerate large-scale genetic studies.

    PubMed

    Atkinson, Jonathan A; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E; Griffiths, Marcus; Wells, Darren M

    2017-10-01

    Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The training was performed on a subset of the dataset, then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping. © The Authors 2017. Published by Oxford University Press.

  10. Combining semi-automated image analysis techniques with machine learning algorithms to accelerate large-scale genetic studies

    PubMed Central

    Atkinson, Jonathan A.; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E.; Griffiths, Marcus

    2017-01-01

    Abstract Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The training was performed on a subset of the dataset, then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping. PMID:29020748

  11. Quantitative Large-Scale Three-Dimensional Imaging of Human Kidney Biopsies: A Bridge to Precision Medicine in Kidney Disease.

    PubMed

    Winfree, Seth; Dagher, Pierre C; Dunn, Kenneth W; Eadon, Michael T; Ferkowicz, Michael; Barwinska, Daria; Kelly, Katherine J; Sutton, Timothy A; El-Achkar, Tarek M

    2018-06-05

    Kidney biopsy remains the gold standard for uncovering the pathogenesis of acute and chronic kidney diseases. However, the ability to perform high resolution, quantitative, molecular and cellular interrogation of this precious tissue is still at a developing stage compared to other fields such as oncology. Here, we discuss recent advances in performing large-scale, three-dimensional (3D), multi-fluorescence imaging of kidney biopsies and quantitative analysis referred to as 3D tissue cytometry. This approach allows the accurate measurement of specific cell types and their spatial distribution in a thick section spanning the entire length of the biopsy. By uncovering specific disease signatures, including rare occurrences, and linking them to the biology in situ, this approach will enhance our understanding of disease pathogenesis. Furthermore, by providing accurate quantitation of cellular events, 3D cytometry may improve the accuracy of prognosticating the clinical course and response to therapy. Therefore, large-scale 3D imaging and cytometry of kidney biopsy is poised to become a bridge towards personalized medicine for patients with kidney disease. © 2018 S. Karger AG, Basel.

  12. Large-scale two-photon imaging revealed super-sparse population codes in the V1 superficial layer of awake monkeys.

    PubMed

    Tang, Shiming; Zhang, Yimeng; Li, Zhihao; Li, Ming; Liu, Fang; Jiang, Hongfei; Lee, Tai Sing

    2018-04-26

    One general principle of sensory information processing is that the brain must optimize efficiency by reducing the number of neurons that process the same information. The sparseness of the sensory representations in a population of neurons reflects the efficiency of the neural code. Here, we employ large-scale two-photon calcium imaging to examine the responses of a large population of neurons within the superficial layers of area V1 with single-cell resolution, while simultaneously presenting a large set of natural visual stimuli, to provide the first direct measure of the population sparseness in awake primates. The results show that only 0.5% of neurons respond strongly to any given natural image - indicating a ten-fold increase in the inferred sparseness over previous measurements. These population activities are nevertheless necessary and sufficient to discriminate visual stimuli with high accuracy, suggesting that the neural code in the primary visual cortex is both super-sparse and highly efficient. © 2018, Tang et al.

  13. A fast image simulation algorithm for scanning transmission electron microscopy.

    PubMed

    Ophus, Colin

    2017-01-01

    Image simulation for scanning transmission electron microscopy at atomic resolution for samples with realistic dimensions can require very large computation times using existing simulation algorithms. We present a new algorithm named PRISM that combines features of the two most commonly used algorithms, namely the Bloch wave and multislice methods. PRISM uses a Fourier interpolation factor f that has typical values of 4-20 for atomic resolution simulations. We show that in many cases PRISM can provide a speedup that scales with f 4 compared to multislice simulations, with a negligible loss of accuracy. We demonstrate the usefulness of this method with large-scale scanning transmission electron microscopy image simulations of a crystalline nanoparticle on an amorphous carbon substrate.

  14. A fast image simulation algorithm for scanning transmission electron microscopy

    DOE PAGES

    Ophus, Colin

    2017-05-10

    Image simulation for scanning transmission electron microscopy at atomic resolution for samples with realistic dimensions can require very large computation times using existing simulation algorithms. Here, we present a new algorithm named PRISM that combines features of the two most commonly used algorithms, namely the Bloch wave and multislice methods. PRISM uses a Fourier interpolation factor f that has typical values of 4-20 for atomic resolution simulations. We show that in many cases PRISM can provide a speedup that scales with f 4 compared to multislice simulations, with a negligible loss of accuracy. We demonstrate the usefulness of this methodmore » with large-scale scanning transmission electron microscopy image simulations of a crystalline nanoparticle on an amorphous carbon substrate.« less

  15. Large-scale quantitative analysis of painting arts.

    PubMed

    Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong

    2014-12-11

    Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paints to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images - the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety of the medieval period. Interestingly, moreover, the increment of roughness exponent as painting techniques such as chiaroscuro and sfumato have advanced is consistent with historical circumstances.

  16. STOCHASTIC OPTICS: A SCATTERING MITIGATION FRAMEWORK FOR RADIO INTERFEROMETRIC IMAGING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Michael D., E-mail: mjohnson@cfa.harvard.edu

    2016-12-10

    Just as turbulence in the Earth’s atmosphere can severely limit the angular resolution of optical telescopes, turbulence in the ionized interstellar medium fundamentally limits the resolution of radio telescopes. We present a scattering mitigation framework for radio imaging with very long baseline interferometry (VLBI) that partially overcomes this limitation. Our framework, “stochastic optics,” derives from a simplification of strong interstellar scattering to separate small-scale (“diffractive”) effects from large-scale (“refractive”) effects, thereby separating deterministic and random contributions to the scattering. Stochastic optics extends traditional synthesis imaging by simultaneously reconstructing an unscattered image and its refractive perturbations. Its advantages over direct imagingmore » come from utilizing the many deterministic properties of the scattering—such as the time-averaged “blurring,” polarization independence, and the deterministic evolution in frequency and time—while still accounting for the stochastic image distortions on large scales. These distortions are identified in the image reconstructions through regularization by their time-averaged power spectrum. Using synthetic data, we show that this framework effectively removes the blurring from diffractive scattering while reducing the spurious image features from refractive scattering. Stochastic optics can provide significant improvements over existing scattering mitigation strategies and is especially promising for imaging the Galactic Center supermassive black hole, Sagittarius A*, with the Global mm-VLBI Array and with the Event Horizon Telescope.« less

  17. Path Searching Based Crease Detection for Large Scale Scanned Document Images

    NASA Astrophysics Data System (ADS)

    Zhang, Jifu; Li, Yi; Li, Shutao; Sun, Bin; Sun, Jun

    2017-12-01

    Since the large size documents are usually folded for preservation, creases will occur in the scanned images. In this paper, a crease detection method is proposed to locate the crease pixels for further processing. According to the imaging process of contactless scanners, the shading on both sides of the crease usually varies a lot. Based on this observation, a convex hull based algorithm is adopted to extract the shading information of the scanned image. Then, the possible crease path can be achieved by applying the vertical filter and morphological operations on the shading image. Finally, the accurate crease is detected via Dijkstra path searching. Experimental results on the dataset of real scanned newspapers demonstrate that the proposed method can obtain accurate locations of the creases in the large size document images.

  18. High Resolution Imaging of the Sun with CORONAS-1

    NASA Technical Reports Server (NTRS)

    Karovska, Margarita

    1998-01-01

    We applied several image restoration and enhancement techniques, to CORONAS-I images. We carried out the characterization of the Point Spread Function (PSF) using the unique capability of the Blind Iterative Deconvolution (BID) technique, which recovers the real PSF at a given location and time of observation, when limited a priori information is available on its characteristics. We also applied image enhancement technique to extract the small scale structure imbeded in bright large scale structures on the disk and on the limb. The results demonstrate the capability of the image post-processing to substantially increase the yield from the space observations by improving the resolution and reducing noise in the images.

  19. Three-dimensional estimates of tree canopies: Scaling from high-resolution UAV data to satellite observations

    NASA Astrophysics Data System (ADS)

    Sankey, T.; Donald, J.; McVay, J.

    2015-12-01

    High resolution remote sensing images and datasets are typically acquired at a large cost, which poses big a challenge for many scientists. Northern Arizona University recently acquired a custom-engineered, cutting-edge UAV and we can now generate our own images with the instrument. The UAV has a unique capability to carry a large payload including a hyperspectral sensor, which images the Earth surface in over 350 spectral bands at 5 cm resolution, and a lidar scanner, which images the land surface and vegetation in 3-dimensions. Both sensors represent the newest available technology with very high resolution, precision, and accuracy. Using the UAV sensors, we are monitoring the effects of regional forest restoration treatment efforts. Individual tree canopy width and height are measured in the field and via the UAV sensors. The high-resolution UAV images are then used to segment individual tree canopies and to derive 3-dimensional estimates. The UAV image-derived variables are then correlated to the field-based measurements and scaled to satellite-derived tree canopy measurements. The relationships between the field-based and UAV-derived estimates are then extrapolated to a larger area to scale the tree canopy dimensions and to estimate tree density within restored and control forest sites.

  20. Detection of Neuron Membranes in Electron Microscopy Images Using Multi-scale Context and Radon-Like Features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seyedhosseini, Mojtaba; Kumar, Ritwik; Jurrus, Elizabeth R.

    2011-10-01

    Automated neural circuit reconstruction through electron microscopy (EM) images is a challenging problem. In this paper, we present a novel method that exploits multi-scale contextual information together with Radon-like features (RLF) to learn a series of discriminative models. The main idea is to build a framework which is capable of extracting information about cell membranes from a large contextual area of an EM image in a computationally efficient way. Toward this goal, we extract RLF that can be computed efficiently from the input image and generate a scale-space representation of the context images that are obtained at the output ofmore » each discriminative model in the series. Compared to a single-scale model, the use of a multi-scale representation of the context image gives the subsequent classifiers access to a larger contextual area in an effective way. Our strategy is general and independent of the classifier and has the potential to be used in any context based framework. We demonstrate that our method outperforms the state-of-the-art algorithms in detection of neuron membranes in EM images.« less

  1. Automated geographic registration and radiometric correction for UAV-based mosaics

    USDA-ARS?s Scientific Manuscript database

    Texas A&M University has been operating a large-scale, UAV-based, agricultural remote-sensing research project since 2015. To use UAV-based images in agricultural production, many high-resolution images must be mosaicked together to create an image of an agricultural field. Two key difficulties to s...

  2. BigView Image Viewing on Tiled Displays

    NASA Technical Reports Server (NTRS)

    Sandstrom, Timothy

    2007-01-01

    BigView allows for interactive panning and zooming of images of arbitrary size on desktop PCs running Linux. Additionally, it can work in a multi-screen environment where multiple PCs cooperate to view a single, large image. Using this software, one can explore on relatively modest machines images such as the Mars Orbiter Camera mosaic [92,160 33,280 pixels]. The images must be first converted into paged format, where the image is stored in 256 256 pages to allow rapid movement of pixels into texture memory. The format contains an image pyramid : a set of scaled versions of the original image. Each scaled image is 1/2 the size of the previous, starting with the original down to the smallest, which fits into a single 256 x 256 page.

  3. A Study of NetCDF as an Approach for High Performance Medical Image Storage

    NASA Astrophysics Data System (ADS)

    Magnus, Marcone; Coelho Prado, Thiago; von Wangenhein, Aldo; de Macedo, Douglas D. J.; Dantas, M. A. R.

    2012-02-01

    The spread of telemedicine systems increases every day. The systems and PACS based on DICOM images has become common. This rise reflects the need to develop new storage systems, more efficient and with lower computational costs. With this in mind, this article discusses a study for application in NetCDF data format as the basic platform for storage of DICOM images. The study case comparison adopts an ordinary database, the HDF5 and the NetCDF to storage the medical images. Empirical results, using a real set of images, indicate that the time to retrieve images from the NetCDF for large scale images has a higher latency compared to the other two methods. In addition, the latency is proportional to the file size, which represents a drawback to a telemedicine system that is characterized by a large amount of large image files.

  4. NeuroCa: integrated framework for systematic analysis of spatiotemporal neuronal activity patterns from large-scale optical recording data

    PubMed Central

    Jang, Min Jee; Nam, Yoonkey

    2015-01-01

    Abstract. Optical recording facilitates monitoring the activity of a large neural network at the cellular scale, but the analysis and interpretation of the collected data remain challenging. Here, we present a MATLAB-based toolbox, named NeuroCa, for the automated processing and quantitative analysis of large-scale calcium imaging data. Our tool includes several computational algorithms to extract the calcium spike trains of individual neurons from the calcium imaging data in an automatic fashion. Two algorithms were developed to decompose the imaging data into the activity of individual cells and subsequently detect calcium spikes from each neuronal signal. Applying our method to dense networks in dissociated cultures, we were able to obtain the calcium spike trains of ∼1000 neurons in a few minutes. Further analyses using these data permitted the quantification of neuronal responses to chemical stimuli as well as functional mapping of spatiotemporal patterns in neuronal firing within the spontaneous, synchronous activity of a large network. These results demonstrate that our method not only automates time-consuming, labor-intensive tasks in the analysis of neural data obtained using optical recording techniques but also provides a systematic way to visualize and quantify the collective dynamics of a network in terms of its cellular elements. PMID:26229973

  5. Large scale topography of Io

    NASA Technical Reports Server (NTRS)

    Gaskell, R. W.; Synnott, S. P.

    1987-01-01

    To investigate the large scale topography of the Jovian satellite Io, both limb observations and stereographic techniques applied to landmarks are used. The raw data for this study consists of Voyager 1 images of Io, 800x800 arrays of picture elements each of which can take on 256 possible brightness values. In analyzing this data it was necessary to identify and locate landmarks and limb points on the raw images, remove the image distortions caused by the camera electronics and translate the corrected locations into positions relative to a reference geoid. Minimizing the uncertainty in the corrected locations is crucial to the success of this project. In the highest resolution frames, an error of a tenth of a pixel in image space location can lead to a 300 m error in true location. In the lowest resolution frames, the same error can lead to an uncertainty of several km.

  6. A Bayesian Nonparametric Approach to Image Super-Resolution.

    PubMed

    Polatkan, Gungor; Zhou, Mingyuan; Carin, Lawrence; Blei, David; Daubechies, Ingrid

    2015-02-01

    Super-resolution methods form high-resolution images from low-resolution images. In this paper, we develop a new Bayesian nonparametric model for super-resolution. Our method uses a beta-Bernoulli process to learn a set of recurring visual patterns, called dictionary elements, from the data. Because it is nonparametric, the number of elements found is also determined from the data. We test the results on both benchmark and natural images, comparing with several other models from the research literature. We perform large-scale human evaluation experiments to assess the visual quality of the results. In a first implementation, we use Gibbs sampling to approximate the posterior. However, this algorithm is not feasible for large-scale data. To circumvent this, we then develop an online variational Bayes (VB) algorithm. This algorithm finds high quality dictionaries in a fraction of the time needed by the Gibbs sampler.

  7. a Method for the Seamlines Network Automatic Selection Based on Building Vector

    NASA Astrophysics Data System (ADS)

    Li, P.; Dong, Y.; Hu, Y.; Li, X.; Tan, P.

    2018-04-01

    In order to improve the efficiency of large scale orthophoto production of city, this paper presents a method for automatic selection of seamlines network in large scale orthophoto based on the buildings' vector. Firstly, a simple model of the building is built by combining building's vector, height and DEM, and the imaging area of the building on single DOM is obtained. Then, the initial Voronoi network of the measurement area is automatically generated based on the positions of the bottom of all images. Finally, the final seamlines network is obtained by optimizing all nodes and seamlines in the network automatically based on the imaging areas of the buildings. The experimental results show that the proposed method can not only get around the building seamlines network quickly, but also remain the Voronoi network' characteristics of projection distortion minimum theory, which can solve the problem of automatic selection of orthophoto seamlines network in image mosaicking effectively.

  8. Large-Scale Overlays and Trends: Visually Mining, Panning and Zooming the Observable Universe.

    PubMed

    Luciani, Timothy Basil; Cherinka, Brian; Oliphant, Daniel; Myers, Sean; Wood-Vasey, W Michael; Labrinidis, Alexandros; Marai, G Elisabeta

    2014-07-01

    We introduce a web-based computing infrastructure to assist the visual integration, mining and interactive navigation of large-scale astronomy observations. Following an analysis of the application domain, we design a client-server architecture to fetch distributed image data and to partition local data into a spatial index structure that allows prefix-matching of spatial objects. In conjunction with hardware-accelerated pixel-based overlays and an online cross-registration pipeline, this approach allows the fetching, displaying, panning and zooming of gigabit panoramas of the sky in real time. To further facilitate the integration and mining of spatial and non-spatial data, we introduce interactive trend images-compact visual representations for identifying outlier objects and for studying trends within large collections of spatial objects of a given class. In a demonstration, images from three sky surveys (SDSS, FIRST and simulated LSST results) are cross-registered and integrated as overlays, allowing cross-spectrum analysis of astronomy observations. Trend images are interactively generated from catalog data and used to visually mine astronomy observations of similar type. The front-end of the infrastructure uses the web technologies WebGL and HTML5 to enable cross-platform, web-based functionality. Our approach attains interactive rendering framerates; its power and flexibility enables it to serve the needs of the astronomy community. Evaluation on three case studies, as well as feedback from domain experts emphasize the benefits of this visual approach to the observational astronomy field; and its potential benefits to large scale geospatial visualization in general.

  9. Classification of Large-Scale Remote Sensing Images for Automatic Identification of Health Hazards: Smoke Detection Using an Autologistic Regression Classifier.

    PubMed

    Wolters, Mark A; Dean, C B

    2017-01-01

    Remote sensing images from Earth-orbiting satellites are a potentially rich data source for monitoring and cataloguing atmospheric health hazards that cover large geographic regions. A method is proposed for classifying such images into hazard and nonhazard regions using the autologistic regression model, which may be viewed as a spatial extension of logistic regression. The method includes a novel and simple approach to parameter estimation that makes it well suited to handling the large and high-dimensional datasets arising from satellite-borne instruments. The methodology is demonstrated on both simulated images and a real application to the identification of forest fire smoke.

  10. Towards Large-area Field-scale Operational Evapotranspiration for Water Use Mapping

    NASA Astrophysics Data System (ADS)

    Senay, G. B.; Friedrichs, M.; Morton, C.; Huntington, J. L.; Verdin, J.

    2017-12-01

    Field-scale evapotranspiration (ET) estimates are needed for improving surface and groundwater use and water budget studies. Ideally, field-scale ET estimates would be at regional to national levels and cover long time periods. As a result of large data storage and computational requirements associated with processing field-scale satellite imagery such as Landsat, numerous challenges remain to develop operational ET estimates over large areas for detailed water use and availability studies. However, the combination of new science, data availability, and cloud computing technology is enabling unprecedented capabilities for ET mapping. To demonstrate this capability, we used Google's Earth Engine cloud computing platform to create nationwide annual ET estimates with 30-meter resolution Landsat ( 16,000 images) and gridded weather data using the Operational Simplified Surface Energy Balance (SSEBop) model in support of the National Water Census, a USGS research program designed to build decision support capacity for water management agencies and other natural resource managers. By leveraging Google's Earth Engine Application Programming Interface (API) and developing software in a collaborative, open-platform environment, we rapidly advance from research towards applications for large-area field-scale ET mapping. Cloud computing of the Landsat image archive combined with other satellite, climate, and weather data, is creating never imagined opportunities for assessing ET model behavior and uncertainty, and ultimately providing the ability for more robust operational monitoring and assessment of water use at field-scales.

  11. Universal Batch Steganalysis

    DTIC Science & Technology

    2014-06-30

    steganalysis) in large-scale datasets such as might be obtained by monitoring a corporate network or social network. Identifying guilty actors...guilty’ user (of steganalysis) in large-scale datasets such as might be obtained by monitoring a corporate network or social network. Identifying guilty...floating point operations (1 TFLOPs) for a 1 megapixel image. We designed a new implementation using Compute Unified Device Architecture (CUDA) on NVIDIA

  12. Projecting Images of the "Good" and the "Bad School": Top Scorers in Educational Large-Scale Assessments as Reference Societies

    ERIC Educational Resources Information Center

    Waldow, Florian

    2017-01-01

    Researchers interested in the global flow of educational ideas and programmes have long been interested in the role of so-called "reference societies." The article investigates how top scorers in large-scale assessments are framed as positive or negative reference societies in the education policy-making debate in German mass media and…

  13. The study of integration about measurable image and 4D production

    NASA Astrophysics Data System (ADS)

    Zhang, Chunsen; Hu, Pingbo; Niu, Weiyun

    2008-12-01

    In this paper, we create the geospatial data of three-dimensional (3D) modeling by the combination of digital photogrammetry and digital close-range photogrammetry. For large-scale geographical background, we make the establishment of DEM and DOM combination of three-dimensional landscape model based on the digital photogrammetry which uses aerial image data to make "4D" (DOM: Digital Orthophoto Map, DEM: Digital Elevation Model, DLG: Digital Line Graphic and DRG: Digital Raster Graphic) production. For the range of building and other artificial features which the users are interested in, we realize that the real features of the three-dimensional reconstruction adopting the method of the digital close-range photogrammetry can come true on the basis of following steps : non-metric cameras for data collection, the camera calibration, feature extraction, image matching, and other steps. At last, we combine three-dimensional background and local measurements real images of these large geographic data and realize the integration of measurable real image and the 4D production.The article discussed the way of the whole flow and technology, achieved the three-dimensional reconstruction and the integration of the large-scale threedimensional landscape and the metric building.

  14. A large-scale solar dynamics observatory image dataset for computer vision applications.

    PubMed

    Kucuk, Ahmet; Banda, Juan M; Angryk, Rafal A

    2017-01-01

    The National Aeronautics Space Agency (NASA) Solar Dynamics Observatory (SDO) mission has given us unprecedented insight into the Sun's activity. By capturing approximately 70,000 images a day, this mission has created one of the richest and biggest repositories of solar image data available to mankind. With such massive amounts of information, researchers have been able to produce great advances in detecting solar events. In this resource, we compile SDO solar data into a single repository in order to provide the computer vision community with a standardized and curated large-scale dataset of several hundred thousand solar events found on high resolution solar images. This publicly available resource, along with the generation source code, will accelerate computer vision research on NASA's solar image data by reducing the amount of time spent performing data acquisition and curation from the multiple sources we have compiled. By improving the quality of the data with thorough curation, we anticipate a wider adoption and interest from the computer vision to the solar physics community.

  15. Atomic-scale imaging of DNA using scanning tunnelling microscopy.

    PubMed

    Driscoll, R J; Youngquist, M G; Baldeschwieler, J D

    1990-07-19

    The scanning tunnelling microscope (STM) has been used to visualize DNA under water, under oil and in air. Images of single-stranded DNA have shown that submolecular resolution is possible. Here we describe atomic-resolution imaging of duplex DNA. Topographic STM images of uncoated duplex DNA on a graphite substrate obtained in ultra-high vacuum are presented that show double-helical structure, base pairs, and atomic-scale substructure. Experimental STM profiles show excellent correlation with atomic contours of the van der Waals surface of A-form DNA derived from X-ray crystallography. A comparison of variations in the barrier to quantum mechanical tunnelling (barrier-height) with atomic-scale topography shows correlation over the phosphate-sugar backbone but anticorrelation over the base pairs. This relationship may be due to the different chemical characteristics of parts of the molecule. Further investigation of this phenomenon should lead to a better understanding of the physics of imaging adsorbates with the STM and may prove useful in sequencing DNA. The improved resolution compared with previously published STM images of DNA may be attributable to ultra-high vacuum, high data-pixel density, slow scan rate, a fortuitously clean and sharp tip and/or a relatively dilute and extremely clean sample solution. This work demonstrates the potential of the STM for characterization of large biomolecular structures, but additional development will be required to make such high resolution imaging of DNA and other large molecules routine.

  16. NDE application of ultrasonic tomography to a full-scale concrete structure.

    PubMed

    Choi, Hajin; Popovics, John S

    2015-06-01

    Newly developed ultrasonic imaging technology for large concrete elements, based on tomographic reconstruction, is presented. The developed 3-D internal images (velocity tomograms) are used to detect internal defects (polystyrene foam and pre-cracked concrete prisms) that represent structural damage within a large steel reinforced concrete element. A hybrid air-coupled/contact transducer system is deployed. Electrostatic air-coupled transducers are used to generate ultrasonic energy and contact accelerometers are attached on the opposing side of the concrete element to detect the ultrasonic pulses. The developed hybrid testing setup enables collection of a large amount of high-quality, through-thickness ultrasonic data without surface preparation to the concrete. The algebraic reconstruction technique is used to reconstruct p-wave velocity tomograms from the obtained time signal data. A comparison with a one-sided ultrasonic imaging method is presented for the same specimen. Through-thickness tomography shows some benefit over one-sided imaging for highly reinforced concrete elements. The results demonstrate that the proposed through-thickness ultrasonic technique shows great potential for evaluation of full-scale concrete structures in the field.

  17. Determination of plasma parameters from soft X-ray images for coronal holes /open magnetic field configurations/ and coronal large-scale structures /extended closed-field configurations/

    NASA Technical Reports Server (NTRS)

    Maxson, C. W.; Vaiana, G. S.

    1977-01-01

    In connection with high-quality solar soft X-ray images the 'quiet' features of the inner corona have been separated into two sharply different components, including the strongly reduced emission areas or coronal holes (CH) and the extended regions of looplike emission features or large-scale structures (LSS). Particular central meridian passage observations of the prominent CH1 on August 21, 1973, are selected for a quantitative study. Histogram photographic density distributions for full-disk images at other central meridian passages of CH 1 are also presented, and the techniques of converting low photographic density data to deposited energy are discussed, with particular emphasis on the problems associated with the CH data.

  18. Large-Scale Quantitative Analysis of Painting Arts

    PubMed Central

    Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong

    2014-01-01

    Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paints to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images – the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety of the medieval period. Interestingly, moreover, the increment of roughness exponent as painting techniques such as chiaroscuro and sfumato have advanced is consistent with historical circumstances. PMID:25501877

  19. Theoretical and Empirical Comparison of Big Data Image Processing with Apache Hadoop and Sun Grid Engine.

    PubMed

    Bao, Shunxing; Weitendorf, Frederick D; Plassard, Andrew J; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A

    2017-02-11

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and non-relevant for medical imaging.

  20. Theoretical and empirical comparison of big data image processing with Apache Hadoop and Sun Grid Engine

    NASA Astrophysics Data System (ADS)

    Bao, Shunxing; Weitendorf, Frederick D.; Plassard, Andrew J.; Huo, Yuankai; Gokhale, Aniruddha; Landman, Bennett A.

    2017-03-01

    The field of big data is generally concerned with the scale of processing at which traditional computational paradigms break down. In medical imaging, traditional large scale processing uses a cluster computer that combines a group of workstation nodes into a functional unit that is controlled by a job scheduler. Typically, a shared-storage network file system (NFS) is used to host imaging data. However, data transfer from storage to processing nodes can saturate network bandwidth when data is frequently uploaded/retrieved from the NFS, e.g., "short" processing times and/or "large" datasets. Recently, an alternative approach using Hadoop and HBase was presented for medical imaging to enable co-location of data storage and computation while minimizing data transfer. The benefits of using such a framework must be formally evaluated against a traditional approach to characterize the point at which simply "large scale" processing transitions into "big data" and necessitates alternative computational frameworks. The proposed Hadoop system was implemented on a production lab-cluster alongside a standard Sun Grid Engine (SGE). Theoretical models for wall-clock time and resource time for both approaches are introduced and validated. To provide real example data, three T1 image archives were retrieved from a university secure, shared web database and used to empirically assess computational performance under three configurations of cluster hardware (using 72, 109, or 209 CPU cores) with differing job lengths. Empirical results match the theoretical models. Based on these data, a comparative analysis is presented for when the Hadoop framework will be relevant and nonrelevant for medical imaging.

  1. Molecular Imaging of Kerogen and Minerals in Shale Rocks across Micro- and Nano- Scales

    NASA Astrophysics Data System (ADS)

    Hao, Z.; Bechtel, H.; Sannibale, F.; Kneafsey, T. J.; Gilbert, B.; Nico, P. S.

    2016-12-01

    Fourier transform infrared (FTIR) spectroscopy is a reliable and non-destructive quantitative method to evaluate mineralogy and kerogen content / maturity of shale rocks, although it is traditionally difficult to assess the organic and mineralogical heterogeneity at micrometer and nanometer scales due to the diffraction limit of the infrared light. However, it is truly at these scales that the kerogen and mineral content and their formation in share rocks determines the quality of shale gas reserve, the gas flow mechanisms and the gas production. Therefore, it's necessary to develop new approaches which can image across both micro- and nano- scales. In this presentation, we will describe two new molecular imaging approaches to obtain kerogen and mineral information in shale rocks at the unprecedented high spatial resolution, and a cross-scale quantitative multivariate analysis method to provide rapid geochemical characterization of large size samples. The two imaging approaches are enhanced at nearfield respectively by a Ge-hemisphere (GE) and by a metallic scanning probe (SINS). The GE method is a modified microscopic attenuated total reflectance (ATR) method which rapidly captures a chemical image of the shale rock surface at 1 to 5 micrometer resolution with a large field of view of 600 X 600 micrometer, while the SINS probes the surface at 20 nm resolution which provides a chemically "deconvoluted" map at the nano-pore level. The detailed geochemical distribution at nanoscale is then used to build a machine learning model to generate self-calibrated chemical distribution map at micrometer scale with the input of the GE images. A number of geochemical contents across these two important scales are observed and analyzed, including the minerals (oxides, carbonates, sulphides), the organics (carbohydrates, aromatics), and the absorbed gases. These approaches are self-calibrated, optics friendly and non-destructive, so they hold the potential to monitor shale gas flow at real time inside the micro- or nano- pore network, which is of great interest for optimizing the shale gas extraction.

  2. A result about scale transformation families in approximation

    NASA Astrophysics Data System (ADS)

    Apprato, Dominique; Gout, Christian

    2000-06-01

    Scale transformations are common in approximation. In surface approximation from rapidly varying data, one wants to suppress, or at least dampen the oscillations of the approximation near steep gradients implied by the data. In that case, scale transformations can be used to give some control over overshoot when the surface has large variations of its gradient. Conversely, in image analysis, scale transformations are used in preprocessing to enhance some features present on the image or to increase jumps of grey levels before segmentation of the image. In this paper, we establish the convergence of an approximation method which allows some control over the behavior of the approximation. More precisely, we study the convergence of an approximation from a data set of , while using scale transformations on the values before and after classical approximation. In addition, the construction of scale transformations is also given. The algorithm is presented with some numerical examples.

  3. Multiscale solvers and systematic upscaling in computational physics

    NASA Astrophysics Data System (ADS)

    Brandt, A.

    2005-07-01

    Multiscale algorithms can overcome the scale-born bottlenecks that plague most computations in physics. These algorithms employ separate processing at each scale of the physical space, combined with interscale iterative interactions, in ways which use finer scales very sparingly. Having been developed first and well known as multigrid solvers for partial differential equations, highly efficient multiscale techniques have more recently been developed for many other types of computational tasks, including: inverse PDE problems; highly indefinite (e.g., standing wave) equations; Dirac equations in disordered gauge fields; fast computation and updating of large determinants (as needed in QCD); fast integral transforms; integral equations; astrophysics; molecular dynamics of macromolecules and fluids; many-atom electronic structures; global and discrete-state optimization; practical graph problems; image segmentation and recognition; tomography (medical imaging); fast Monte-Carlo sampling in statistical physics; and general, systematic methods of upscaling (accurate numerical derivation of large-scale equations from microscopic laws).

  4. Global Carbon Dioxide Transport from AIRS Data, July 2008

    NASA Image and Video Library

    2008-09-24

    This image was created with data acquired by JPLa Atmospheric Infrared Sounder during July 2008. The image shows large scale patterns of carbon dioxide concentrations that are transported around the Earth by the general circulation of the atmosphere.

  5. Ball-scale based hierarchical multi-object recognition in 3D medical images

    NASA Astrophysics Data System (ADS)

    Bağci, Ulas; Udupa, Jayaram K.; Chen, Xinjian

    2010-03-01

    This paper investigates, using prior shape models and the concept of ball scale (b-scale), ways of automatically recognizing objects in 3D images without performing elaborate searches or optimization. That is, the goal is to place the model in a single shot close to the right pose (position, orientation, and scale) in a given image so that the model boundaries fall in the close vicinity of object boundaries in the image. This is achieved via the following set of key ideas: (a) A semi-automatic way of constructing a multi-object shape model assembly. (b) A novel strategy of encoding, via b-scale, the pose relationship between objects in the training images and their intensity patterns captured in b-scale images. (c) A hierarchical mechanism of positioning the model, in a one-shot way, in a given image from a knowledge of the learnt pose relationship and the b-scale image of the given image to be segmented. The evaluation results on a set of 20 routine clinical abdominal female and male CT data sets indicate the following: (1) Incorporating a large number of objects improves the recognition accuracy dramatically. (2) The recognition algorithm can be thought as a hierarchical framework such that quick replacement of the model assembly is defined as coarse recognition and delineation itself is known as finest recognition. (3) Scale yields useful information about the relationship between the model assembly and any given image such that the recognition results in a placement of the model close to the actual pose without doing any elaborate searches or optimization. (4) Effective object recognition can make delineation most accurate.

  6. Hybrid multiphoton volumetric functional imaging of large-scale bioengineered neuronal networks

    NASA Astrophysics Data System (ADS)

    Dana, Hod; Marom, Anat; Paluch, Shir; Dvorkin, Roman; Brosh, Inbar; Shoham, Shy

    2014-06-01

    Planar neural networks and interfaces serve as versatile in vitro models of central nervous system physiology, but adaptations of related methods to three dimensions (3D) have met with limited success. Here, we demonstrate for the first time volumetric functional imaging in a bioengineered neural tissue growing in a transparent hydrogel with cortical cellular and synaptic densities, by introducing complementary new developments in nonlinear microscopy and neural tissue engineering. Our system uses a novel hybrid multiphoton microscope design combining a 3D scanning-line temporal-focusing subsystem and a conventional laser-scanning multiphoton microscope to provide functional and structural volumetric imaging capabilities: dense microscopic 3D sampling at tens of volumes per second of structures with mm-scale dimensions containing a network of over 1,000 developing cells with complex spontaneous activity patterns. These developments open new opportunities for large-scale neuronal interfacing and for applications of 3D engineered networks ranging from basic neuroscience to the screening of neuroactive substances.

  7. Lessons Learned From Large-Scale Evapotranspiration and Root Zone Soil Moisture Mapping Using Ground Measurements (meteorological, LAS, EC) and Remote Sensing (METRIC)

    NASA Astrophysics Data System (ADS)

    Hendrickx, J. M. H.; Allen, R. G.; Myint, S. W.; Ogden, F. L.

    2015-12-01

    Large-scale mapping of evapotranspiration and root zone soil moisture is only possible when satellite images are used. The spatial resolution of this imagery typically depends on its temporal resolution, i.e., the satellite overpass interval. For example, the Landsat satellite acquires images at 30 m resolution every 16 days while the MODIS satellite acquires images at 250 m resolution every day. In this study we deal with optical/thermal imagery, which is impacted by cloudiness, in contrast to radar imagery, which penetrates clouds. Due to cloudiness, the temporal resolution of Landsat drops from 16 days to about one clear-sky Landsat image per month in the southwestern USA and about one every ten years in the humid tropics of Panama. Only by launching additional satellites can the temporal resolution be improved. Since this is too costly, an alternative is to use ground measurements with high temporal resolution (from minutes to days) but poor spatial resolution. The challenge for large-scale evapotranspiration and root zone soil moisture mapping is to construct a layer stack consisting of N time layers covering the period of interest, each containing M pixels covering the region of interest. We present examples from the Phoenix Active Management Area in AZ (14,600 km²), the Green River Basin in WY (44,000 km²), the Kishwaukee Watershed in IL (3,150 km²), the area covered by Landsat Path 28/Row 35 in OK (30,000 km²), and the Agua Salud Watershed in Panama (200 km²). In these regions we used Landsat or MODIS imagery for mapping evapotranspiration and root zone soil moisture with the algorithm Mapping EvapoTranspiration at high Resolution with Internalized Calibration (METRIC), together with meteorological measurements and, in some cases, either Large Aperture Scintillometers (LAS) or Eddy Covariance (EC). We conclude with lessons learned for future large-scale hydrological studies.
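
    The layer-stack construction lends itself to a simple sketch: assemble the N ET maps into an (N, rows, cols) array with NaNs where clouds blocked the retrieval, then fill each pixel's missing dates by interpolation in time (in practice anchored by the high-temporal-resolution ground measurements). The variable names and the plain linear interpolation below are illustrative assumptions, not the METRIC procedure itself.

    ```python
    # Hedged sketch: assembling an N-layer ET stack over M pixels and filling
    # cloud-masked dates per pixel by linear interpolation in time.
    import numpy as np

    def fill_layer_stack(stack, days):
        """stack: (N, rows, cols) ET maps, np.nan where clouds block retrieval.
        days:  (N,) acquisition day-of-year for each layer."""
        filled = stack.copy()
        flat = filled.reshape(len(days), -1)      # (N, M) time-by-pixel view
        for m in range(flat.shape[1]):
            trace = flat[:, m]
            ok = ~np.isnan(trace)
            if ok.sum() >= 2:                     # need two clear dates to interpolate
                flat[~ok, m] = np.interp(days[~ok], days[ok], trace[ok])
        return filled

    days = np.array([10, 26, 42, 58])
    stack = np.random.rand(4, 2, 2)
    stack[1, :, 1] = np.nan                       # a cloud-contaminated layer
    print(fill_layer_stack(stack, days)[1])
    ```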

  8. Multi-color electron microscopy by element-guided identification of cells, organelles and molecules.

    PubMed

    Scotuzzi, Marijke; Kuipers, Jeroen; Wensveen, Dasha I; de Boer, Pascal; Hagen, Kees C W; Hoogenboom, Jacob P; Giepmans, Ben N G

    2017-04-07

    Cellular complexity is unraveled at nanometer resolution using electron microscopy (EM), but interpretation of macromolecular functionality is hampered by the difficulty in interpreting grey-scale images and the unidentified molecular content. We perform large-scale EM on mammalian tissue complemented with energy-dispersive X-ray analysis (EDX) to allow EM-data analysis based on elemental composition. Endogenous elements, labels (gold and cadmium-based nanoparticles) as well as stains are analyzed at ultrastructural resolution. This provides a wide palette of colors to paint the traditional grey-scale EM images for composition-based interpretation. Our proof-of-principle application of EM-EDX reveals that endocrine and exocrine vesicles exist in single cells in Islets of Langerhans. This highlights how elemental mapping reveals unbiased, biomedically relevant information. Broad application of EM-EDX will further allow experimental analysis on large-scale tissue using endogenous elements, multiple stains, and multiple markers and thus brings nanometer-scale 'color-EM' as a promising tool to unravel molecular (de)regulation in biomedicine.

  9. Multi-color electron microscopy by element-guided identification of cells, organelles and molecules

    PubMed Central

    Scotuzzi, Marijke; Kuipers, Jeroen; Wensveen, Dasha I.; de Boer, Pascal; Hagen, Kees (C.) W.; Hoogenboom, Jacob P.; Giepmans, Ben N. G.

    2017-01-01

    Cellular complexity is unraveled at nanometer resolution using electron microscopy (EM), but interpretation of macromolecular functionality is hampered by the difficulty in interpreting grey-scale images and the unidentified molecular content. We perform large-scale EM on mammalian tissue complemented with energy-dispersive X-ray analysis (EDX) to allow EM-data analysis based on elemental composition. Endogenous elements, labels (gold and cadmium-based nanoparticles) as well as stains are analyzed at ultrastructural resolution. This provides a wide palette of colors to paint the traditional grey-scale EM images for composition-based interpretation. Our proof-of-principle application of EM-EDX reveals that endocrine and exocrine vesicles exist in single cells in Islets of Langerhans. This highlights how elemental mapping reveals unbiased, biomedically relevant information. Broad application of EM-EDX will further allow experimental analysis on large-scale tissue using endogenous elements, multiple stains, and multiple markers and thus brings nanometer-scale 'color-EM' as a promising tool to unravel molecular (de)regulation in biomedicine. PMID:28387351

  10. Multiple Object Retrieval in Image Databases Using Hierarchical Segmentation Tree

    ERIC Educational Resources Information Center

    Chen, Wei-Bang

    2012-01-01

    The purpose of this research is to develop a new visual information analysis, representation, and retrieval framework for automatic discovery of salient objects of user's interest in large-scale image databases. In particular, this dissertation describes a content-based image retrieval framework which supports multiple-object retrieval. The…

  11. AutoBD: Automated Bi-Level Description for Scalable Fine-Grained Visual Categorization.

    PubMed

    Yao, Hantao; Zhang, Shiliang; Yan, Chenggang; Zhang, Yongdong; Li, Jintao; Tian, Qi

    Compared with traditional image classification, fine-grained visual categorization is a more challenging task, because it aims to classify objects belonging to the same species, e.g., hundreds of birds or cars. In the past several years, researchers have made many achievements on this topic. However, most approaches are heavily dependent on artificial annotations, e.g., bounding boxes, part annotations, and so on. The requirement of artificial annotations largely hinders scalability and application. Motivated to remove such dependence, this paper proposes a robust and discriminative visual description named Automated Bi-level Description (AutoBD). "Bi-level" denotes two complementary part-level and object-level visual descriptions, respectively. AutoBD is "automated" because it only requires the image-level labels of training images and does not need any annotations for testing images. Compared with part annotations labeled by humans, image-level labels can be easily acquired, which makes AutoBD suitable for large-scale visual categorization. Specifically, the part-level description is extracted by identifying the local region that saliently represents the visual distinctiveness. The object-level description is extracted from object bounding boxes generated with a co-localization algorithm. Although using only image-level labels, AutoBD outperforms recent studies on two public benchmarks, i.e., classification accuracy reaches 81.6% on CUB-200-2011 and 88.9% on Car-196, respectively. On the large-scale Birdsnap data set, AutoBD achieves an accuracy of 68%, which is, to the best of our knowledge, currently the best performance.

  12. Correlated Topic Vector for Scene Classification.

    PubMed

    Wei, Pengxu; Qin, Fei; Wan, Fang; Zhu, Yi; Jiao, Jianbin; Ye, Qixiang

    2017-07-01

    Scene images usually involve semantic correlations, particularly when considering large-scale image data sets. This paper proposes a novel generative image representation, the correlated topic vector, to model such semantic correlations. Originating from the correlated topic model, the correlated topic vector naturally utilizes the correlations among topics, which are seldom considered in conventional feature encoding, e.g., the Fisher vector, but do exist in scene images. It is expected that the involvement of correlations can increase the discriminative capability of the learned generative model and consequently improve recognition accuracy. Incorporated with the Fisher kernel method, the correlated topic vector inherits the advantages of the Fisher vector. The contributions of visual words to the topics are further employed, by incorporating the Fisher kernel framework, to indicate the differences among scenes. Combined with deep convolutional neural network (CNN) features and a Gibbs sampling solution, the correlated topic vector shows great potential when processing large-scale and complex scene image data sets. Experiments on two scene image data sets demonstrate that the correlated topic vector significantly improves on the deep CNN features and outperforms existing Fisher kernel-based features.

  13. Magnetic resonance imaging of convection in laser-polarized xenon

    NASA Technical Reports Server (NTRS)

    Mair, R. W.; Tseng, C. H.; Wong, G. P.; Cory, D. G.; Walsworth, R. L.

    2000-01-01

    We demonstrate nuclear magnetic resonance (NMR) imaging of the flow and diffusion of laser-polarized xenon (129Xe) gas undergoing convection above evaporating laser-polarized liquid xenon. The large xenon NMR signal provided by the laser-polarization technique allows more rapid imaging than one can achieve with thermally polarized gas-liquid systems, permitting shorter time-scale events such as rapid gas flow and gas-liquid dynamics to be observed. Two-dimensional velocity-encoded imaging shows convective gas flow above the evaporating liquid xenon, and also permits the measurement of enhanced gas diffusion near regions of large velocity variation.

  14. Method for large and rapid terahertz imaging

    DOEpatents

    Williams, Gwyn P.; Neil, George R.

    2013-01-29

    A method of large-scale active THz imaging using a combination of a compact high-power THz source (>1 watt), an optional optical system, and a camera for the detection of reflected or transmitted THz radiation, without the need for the burdensome power source or detector cooling systems required by similar prior-art devices. With such a system, one is able to image, for example, a whole person in seconds or less, whereas at present, using low-power sources and scanning techniques, it takes several minutes or even hours to image even a 1 cm × 1 cm area of skin.

  15. Large and small-scale structures in Saturn's rings

    NASA Astrophysics Data System (ADS)

    Albers, N.; Rehnberg, M. E.; Brown, Z. L.; Sremcevic, M.; Esposito, L. W.

    2017-09-01

    Observations made by the Cassini spacecraft have revealed both large and small scale structures in Saturn's rings in unprecedented detail. Analysis of high-resolution measurements by the Cassini Ultraviolet Spectrograph (UVIS) High Speed Photometer (HSP) and the Imaging Science Subsystem (ISS) show an abundance of intrinsic small-scale structures (or clumping) seen across the entire ring system. These include self-gravity wakes (50-100m), sub-km structure at the A and B ring edges, and "straw"/"ropy" structures (1-3km).

  16. Probing the brain with molecular fMRI.

    PubMed

    Ghosh, Souparno; Harvey, Peter; Simon, Jacob C; Jasanoff, Alan

    2018-06-01

    One of the greatest challenges of modern neuroscience is to incorporate our growing knowledge of molecular and cellular-scale physiology into integrated, organismic-scale models of brain function in behavior and cognition. Molecular-level functional magnetic resonance imaging (molecular fMRI) is a new technology that can help bridge these scales by mapping defined microscopic phenomena over large, optically inaccessible regions of the living brain. In this review, we explain how MRI-detectable imaging probes can be used to sensitize noninvasive imaging to mechanistically significant components of neural processing. We discuss how a combination of innovative probe design, advanced imaging methods, and strategies for brain delivery can make molecular fMRI an increasingly successful approach for spatiotemporally resolved studies of diverse neural phenomena, perhaps eventually in people. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. MIGHTEE: The MeerKAT International GHz Tiered Extragalactic Exploration

    NASA Astrophysics Data System (ADS)

    Taylor, A. Russ; Jarvis, Matt

    2017-05-01

    The MeerKAT telescope is the precursor of the Square Kilometre Array mid-frequency dish array to be deployed later this decade on the African continent. MIGHTEE is one of the MeerKAT large survey projects designed to pathfind SKA key science in cosmology and galaxy evolution. Through a tiered radio continuum deep imaging project, including several fields totaling 20 square degrees imaged to microJy sensitivities and an ultra-deep image of a single 1 square degree field of view, MIGHTEE will explore dark matter and large-scale structure; the evolution of galaxies, including AGN activity and star formation as a function of cosmic time and environment; the emergence and evolution of magnetic fields in galaxies; and the magnetic counterpart to the large-scale structure of the universe.

  18. Crowdsourcing scoring of immunohistochemistry images: Evaluating Performance of the Crowd and an Automated Computational Method

    NASA Astrophysics Data System (ADS)

    Irshad, Humayun; Oh, Eun-Yeong; Schmolze, Daniel; Quintana, Liza M.; Collins, Laura; Tamimi, Rulla M.; Beck, Andrew H.

    2017-02-01

    The assessment of protein expression in immunohistochemistry (IHC) images provides important diagnostic, prognostic and predictive information for guiding cancer diagnosis and therapy. Manual scoring of IHC images represents a logistical challenge, as the process is labor intensive and time consuming. Over the last decade, computational methods have been developed to enable the application of quantitative methods for the analysis and interpretation of protein expression in IHC images. These methods have not yet replaced manual scoring for the assessment of IHC in the majority of diagnostic laboratories and in many large-scale research studies. An alternative approach is crowdsourcing the quantification of IHC images to an undefined crowd. The aim of this study is to quantify IHC images for labeling of ER status with two different crowdsourcing approaches, image-labeling and nuclei-labeling, and compare their performance with automated methods. Crowdsourcing-derived scores obtained greater concordance with the pathologist interpretations for both the image-labeling and nuclei-labeling tasks (83% and 87%, respectively), compared to the concordance achieved by the automated method (81%), on 5,338 TMA images from 1,853 breast cancer patients. This analysis shows that crowdsourcing the scoring of protein expression in IHC images is a promising new approach for large-scale cancer molecular pathology studies.

  19. A New 4D Imaging Method for Three-Phase Analogue Experiments in Volcanology (and Other Three-Phase Systems)

    NASA Astrophysics Data System (ADS)

    Oppenheimer, J.; Patel, K. B.; Lev, E.; Hillman, E. M. C.

    2017-12-01

    Bubbles and crystals suspended in magmas interact with each other on a small scale, which affects large-scale volcanic processes. Studying these interactions on relevant scales of time and space is a long-standing challenge; therefore, the fundamental explanations for the behavior of bubble- and crystal-rich magmas are still largely speculative. Recent application of X-ray tomography to experiments with synthetic magmas has already improved our understanding of small-scale 4D (3D + time) phenomena. However, this technique has low imaging rates (<20 volumes per second (vps)) and does not work well with analogues, making experiments costly and slow. We demonstrate a novel methodology for imaging bubble-particle interactions in analogue suspensions by utilizing Swept Confocally Aligned Planar Excitation (SCAPE) microscopy. This laser-fluorescence-based method has been used to image live biological processes at high speed and in 3D. It allows imaging rates of up to several hundred vps and image volumes up to 1 × 1 × 0.5 mm³, with a trade-off between speed and spatial resolution. We ran two sets of experiments with silicone oil and soda-lime glass beads of <50 µm diameter, contained within a vertical glass casing of 50 × 5 × 4 mm³. We used two different bubble generation methods. In the first set of experiments, small air bubbles (<1 mm) were introduced through a hole at the bottom of the sample and allowed to rise through a suspension of low-viscosity oil. We successfully imaged bubble rise and particle movements around the bubbles. In the second set, bubbles were generated by mixing acetone into the suspension and decreasing the surface pressure to cause a phase change to gaseous acetone. This bubble generation method compared favorably with previous gum rosin-acetone experiments: it produced similar degassing behaviors, along with more control over suspension viscosity and optical properties better suited to laser transmission. Large volumes of suspended bubbles, however, interfered with the laser path. In this set, we were able to track bubble nucleation sites and nucleation rates in 4D. This promising technique allows the study of small-scale interactions in two- and three-phase systems at high imaging rates and at low cost.

  20. Recovering the fine structures in solar images

    NASA Technical Reports Server (NTRS)

    Karovska, Margarita; Habbal, S. R.; Golub, L.; Deluca, E.; Hudson, Hugh S.

    1994-01-01

    Several examples are presented of the capability of the blind iterative deconvolution (BID) technique to recover the real point spread function when only limited a priori information is available about its characteristics. To demonstrate the potential of image post-processing for probing the fine scale and temporal variability of the solar atmosphere, the BID technique is applied to different samples of solar observations from space. The BID technique was originally proposed for correcting the effects of atmospheric turbulence on optical images. The processed images provide a detailed view of the spatial structure of the solar atmosphere at different heights in regions with different large-scale magnetic field structures.
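
    In outline, blind iterative deconvolution alternates between estimating the object and the point spread function, each via a multiplicative Richardson-Lucy-style update, under the constraint that both stay non-negative. The toy sketch below (alternating updates on full-size arrays, with a unit-sum PSF) conveys the idea; it is a simplified stand-in, not the authors' implementation.

    ```python
    # Schematic blind iterative deconvolution: alternate Richardson-Lucy-style
    # updates of the object and the point spread function.
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from scipy.signal import fftconvolve

    def rl_step(estimate, other, image, eps=1e-12):
        """One multiplicative update of `estimate`, holding `other` fixed."""
        conv = fftconvolve(estimate, other, mode="same")
        ratio = image / (conv + eps)
        update = fftconvolve(ratio, other[::-1, ::-1], mode="same")
        return estimate * update

    def blind_deconvolve(image, iters=30):
        obj = np.full_like(image, image.mean())      # flat starting object
        psf = np.zeros_like(image)
        cy, cx = image.shape[0] // 2, image.shape[1] // 2
        psf[cy-2:cy+3, cx-2:cx+3] = 1.0 / 25         # small centered starting blob
        for _ in range(iters):
            psf = rl_step(psf, obj, image)
            psf /= psf.sum()                         # keep the PSF a unit-sum kernel
            obj = rl_step(obj, psf, image)
        return obj, psf

    # Toy test: blur a grid of points and try to recover it
    truth = np.zeros((64, 64))
    truth[::16, ::16] = 1.0
    blurred = gaussian_filter(truth, 2.0) + 1e-3     # small offset avoids zeros
    obj, psf = blind_deconvolve(blurred)
    ```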

  1. High-Throughput Screening Using iPSC-Derived Neuronal Progenitors to Identify Compounds Counteracting Epigenetic Gene Silencing in Fragile X Syndrome.

    PubMed

    Kaufmann, Markus; Schuffenhauer, Ansgar; Fruh, Isabelle; Klein, Jessica; Thiemeyer, Anke; Rigo, Pierre; Gomez-Mancilla, Baltazar; Heidinger-Millot, Valerie; Bouwmeester, Tewis; Schopfer, Ulrich; Mueller, Matthias; Fodor, Barna D; Cobos-Correa, Amanda

    2015-10-01

    Fragile X syndrome (FXS) is the most common form of inherited mental retardation, and it is caused in most cases by epigenetic silencing of the Fmr1 gene. Today, no specific therapy exists for FXS, and current treatments are only directed at improving behavioral symptoms. Neuronal progenitors derived from FXS patient induced pluripotent stem cells (iPSCs) represent a unique model to study the disease and to develop assays for large-scale drug discovery screens, since they conserve the silenced Fmr1 gene within the disease context. We have established a high-content imaging assay to run a large-scale phenotypic screen aimed at identifying compounds that reactivate the silenced Fmr1 gene. A set of 50,000 compounds was tested, including modulators of several epigenetic targets. We describe an integrated drug discovery model comprising iPSC generation, culture scale-up, quality control, and screening with a very sensitive high-content imaging assay assisted by single-cell image analysis and multiparametric data analysis based on machine learning algorithms. The screening identified several compounds that induced weak expression of fragile X mental retardation protein (FMRP) and thus sets the basis for further large-scale screens to find candidate drugs or targets tackling the underlying mechanism of FXS with potential for therapeutic intervention. © 2015 Society for Laboratory Automation and Screening.

  2. The Observations of Redshift Evolution in Large Scale Environments (ORELSE) Survey

    NASA Astrophysics Data System (ADS)

    Squires, Gordon K.; Lubin, L. M.; Gal, R. R.

    2007-05-01

    We present the motivation, design, and latest results from the Observations of Redshift Evolution in Large Scale Environments (ORELSE) Survey, a systematic search for structure on scales greater than 10 Mpc around 20 known galaxy clusters at z > 0.6. When complete, the survey will cover nearly 5 square degrees, all targeted at high-density regions, making it complementary and comparable to field surveys such as DEEP2, GOODS, and COSMOS. For the survey, we are using the Large Format Camera on the Palomar 5-m and SuPRIME-Cam on the Subaru 8-m to obtain optical/near-infrared imaging of an approximately 30 arcmin region around previously studied high-redshift clusters. Colors are used to identify likely member galaxies, which are targeted for follow-up spectroscopy with the DEep Imaging Multi-Object Spectrograph on the Keck 10-m. This technique has been used to successfully identify the Cl 1604 supercluster at z = 0.9, a large-scale structure containing at least eight clusters (Gal & Lubin 2004; Gal, Lubin & Squires 2005). We present the most recent structures to be photometrically and spectroscopically confirmed through this program, discuss the properties of the member galaxies as a function of environment, and describe our planned multi-wavelength (radio, mid-IR, and X-ray) observations of these systems. The goal of this survey is to identify and examine a statistical sample of large-scale structures during an active period in the assembly history of the most massive clusters. With such a sample, we can begin to constrain large-scale cluster dynamics and determine the effect of the larger environment on galaxy evolution.

  3. SfM with MRFs: discrete-continuous optimization for large-scale structure from motion.

    PubMed

    Crandall, David J; Owens, Andrew; Snavely, Noah; Huttenlocher, Daniel P

    2013-12-01

    Recent work in structure from motion (SfM) has built 3D models from large collections of images downloaded from the Internet. Many approaches to this problem use incremental algorithms that solve progressively larger bundle adjustment problems. These incremental techniques scale poorly as the image collection grows, and can suffer from drift or local minima. We present an alternative framework for SfM based on finding a coarse initial solution using hybrid discrete-continuous optimization and then improving that solution using bundle adjustment. The initial optimization step uses a discrete Markov random field (MRF) formulation, coupled with a continuous Levenberg-Marquardt refinement. The formulation naturally incorporates various sources of information about both the cameras and points, including noisy geotags and vanishing point (VP) estimates. We test our method on several large-scale photo collections, including one with measured camera positions, and show that it produces models that are similar to or better than those produced by incremental bundle adjustment, but more robustly and in a fraction of the time.
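
    The two-stage idea (a coarse discrete solution handed off to continuous refinement) can be miniaturized. Below, a toy 2D "bundle adjustment" refines camera and point positions from bearing measurements with a Levenberg-Marquardt solve, starting from a deliberately rough initial guess standing in for the MRF stage. The scene and measurement model are illustrative assumptions, not the paper's formulation.

    ```python
    # Toy continuous-refinement sketch: nonlinear least squares on bearing
    # measurements from cameras to points, with the first two cameras held
    # fixed to remove the gauge freedom.
    import numpy as np
    from scipy.optimize import least_squares

    cams_true = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
    pts_true = np.array([[1.0, 5.0], [3.0, 6.0], [5.0, 4.0], [0.0, 7.0]])

    def bearings(cams, pts):
        d = pts[None, :, :] - cams[:, None, :]        # camera-to-point vectors
        return np.arctan2(d[..., 1], d[..., 0]).ravel()

    obs = bearings(cams_true, pts_true) + 0.01 * np.random.randn(12)

    def residuals(x):
        cams = np.vstack([cams_true[:2], x[:2][None]])  # first two cameras fixed
        pts = x[2:].reshape(-1, 2)
        return bearings(cams, pts) - obs

    # Coarse initial guess, standing in for the discrete MRF stage
    x0 = np.concatenate([cams_true[2] + 0.5, (pts_true + 0.5).ravel()])
    sol = least_squares(residuals, x0, method="lm")   # Levenberg-Marquardt refinement
    print(sol.x[:2])                                  # refined third camera position
    ```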

  4. Deep learning-based fine-grained car make/model classification for visual surveillance

    NASA Astrophysics Data System (ADS)

    Gundogdu, Erhan; Parıldı, Enes Sinan; Solmaz, Berkan; Yücesoy, Veysel; Koç, Aykut

    2017-10-01

    Fine-grained object recognition is a challenging computer vision problem that has recently been addressed using deep Convolutional Neural Networks (CNNs). Nevertheless, the main disadvantage of classification methods relying on deep CNN models is their need for considerably large amounts of data. In addition, relatively little annotated data exists for real-world applications, such as the recognition of car models in a traffic surveillance system. To this end, we concentrate on the classification of fine-grained car makes and/or models for visual surveillance with the help of two different domains. First, a large-scale dataset including approximately 900K images is constructed from a website which includes fine-grained car models. According to their labels, a state-of-the-art CNN model is trained on the constructed dataset. The second domain is the set of images collected from a camera integrated into a traffic surveillance system. These images, numbering over 260K, are gathered by a special license plate detection method on top of a motion detection algorithm. An appropriately sized crop is taken from the region of interest provided by the detected license plate location. These sets of images and their labels for more than 30 classes are employed to fine-tune the CNN model already trained on the large-scale dataset described above. To fine-tune the network, the last two fully-connected layers are randomly initialized and the remaining layers are fine-tuned on the second dataset. In this work, the transfer of a model learned on a large dataset to a smaller one has been successfully performed by utilizing both the limited annotated data of the traffic field and a large-scale dataset with available annotations. Our experimental results on both the validation dataset and the real field show that the proposed methodology performs favorably against training the CNN model from scratch.
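
    The described fine-tuning recipe maps onto a few lines of PyTorch. The sketch below uses a VGG-16-style network as a stand-in for the pre-trained model, re-initializes the last two fully-connected layers, and fine-tunes the transferred layers at a smaller learning rate. The layer indices, class count, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

    ```python
    # Hedged sketch of the fine-tuning recipe: random re-init of the last two
    # FC layers, discriminative learning rates for old vs. new parameters.
    import torch
    import torch.nn as nn
    from torchvision import models

    num_classes = 30                               # surveillance-domain car models
    model = models.vgg16(weights="IMAGENET1K_V1")  # stand-in for the pre-trained CNN

    # Randomly re-initialize the last two fully-connected layers
    model.classifier[3] = nn.Linear(4096, 4096)
    model.classifier[6] = nn.Linear(4096, num_classes)

    # Fine-tune: smaller learning rate for transferred layers, larger for new ones
    new_params = list(model.classifier[3].parameters()) + \
                 list(model.classifier[6].parameters())
    new_ids = {id(p) for p in new_params}
    old_params = [p for p in model.parameters() if id(p) not in new_ids]
    optimizer = torch.optim.SGD(
        [{"params": old_params, "lr": 1e-4},
         {"params": new_params, "lr": 1e-3}],
        momentum=0.9,
    )
    criterion = nn.CrossEntropyLoss()
    ```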

  5. Evaluation of LANDSAT multispectral scanner images for mapping altered rocks in the east Tintic Mountains, Utah

    NASA Technical Reports Server (NTRS)

    Rowan, L. C.; Abrams, M. J. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. Positive findings of earlier evaluations of the color-ratio compositing (CRC) technique for mapping limonitic altered rocks in south-central Nevada are confirmed, but important limitations in the approach used are pointed out. These limitations arise from environmental, geologic, and image processing factors. The greater vegetation density in the East Tintic Mountains required several modifications of procedure to improve the overall mapping accuracy of the CRC approach. Large-format ratio images provide better internal registration of the diazo films and avoid the problems associated with the magnifications required in the original procedure. Use of the Linoscan 204 color-recognition scanner permits accurate, consistent extraction of the green pixels representing limonitic bedrock, yielding maps that can be used at large scales as well as for small-scale reconnaissance.

  6. Rotation-invariant convolutional neural networks for galaxy morphology prediction

    NASA Astrophysics Data System (ADS)

    Dieleman, Sander; Willett, Kyle W.; Dambre, Joni

    2015-06-01

    Measuring the morphological parameters of galaxies is a key requirement for studying their formation and evolution. Surveys such as the Sloan Digital Sky Survey have resulted in the availability of very large collections of images, which have permitted population-wide analyses of galaxy morphology. Morphological analysis has traditionally been carried out mostly via visual inspection by trained experts, which is time consuming and does not scale to large (≳10⁴) numbers of images. Although attempts have been made to build automated classification systems, these have not been able to achieve the desired level of accuracy. The Galaxy Zoo project successfully applied a crowdsourcing strategy, inviting online users to classify images by answering a series of questions. Unfortunately, even this approach does not scale well enough to keep up with the increasing availability of galaxy images. We present a deep neural network model for galaxy morphology classification which exploits translational and rotational symmetry. It was developed in the context of the Galaxy Challenge, an international competition to build the best model for morphology classification based on annotated images from the Galaxy Zoo project. For images with high agreement among the Galaxy Zoo participants, our model is able to reproduce their consensus with near-perfect accuracy (>99 per cent) for most questions. Confident model predictions are highly accurate, which makes the model suitable for filtering large collections of images and forwarding challenging images to experts for manual annotation. This approach greatly reduces the experts' workload without affecting accuracy. The application of these algorithms to larger sets of training data will be critical for analysing results from future surveys such as the Large Synoptic Survey Telescope.
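
    The rotational-symmetry idea can be conveyed with a much simpler stand-in than the paper's architecture: average a network's predictions over rotated and flipped copies of the input. Dieleman et al. share weights over such viewpoints inside the network; the test-time averaging below is only an illustrative approximation of that invariance.

    ```python
    # Minimal sketch: average class predictions over the 8 rotated/flipped
    # views of a galaxy image (a stand-in for in-network viewpoint sharing).
    import torch

    def rotation_averaged(model, image):
        """image: (C, H, W) tensor; model maps a batch to class probabilities."""
        views = []
        for flip in (False, True):
            img = torch.flip(image, dims=[2]) if flip else image
            for k in range(4):                       # 0, 90, 180, 270 degrees
                views.append(torch.rot90(img, k, dims=[1, 2]))
        batch = torch.stack(views)                   # (8, C, H, W) viewpoint batch
        with torch.no_grad():
            return model(batch).mean(dim=0)          # average over the 8 views
    ```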

  7. Sampling and Visualizing Creases with Scale-Space Particles

    PubMed Central

    Kindlmann, Gordon L.; Estépar, Raúl San José; Smith, Stephen M.; Westin, Carl-Fredrik

    2010-01-01

    Particle systems have gained importance as a methodology for sampling implicit surfaces and segmented objects to improve mesh generation and shape analysis. We propose that particle systems have a significantly more general role in sampling structure from unsegmented data. We describe a particle system that computes samplings of crease features (i.e. ridges and valleys, as lines or surfaces) that effectively represent many anatomical structures in scanned medical data. Because structure naturally exists at a range of sizes relative to the image resolution, computer vision has developed the theory of scale-space, which considers an n-D image as an (n + 1)-D stack of images at different blurring levels. Our scale-space particles move through continuous four-dimensional scale-space according to spatial constraints imposed by the crease features, a particle-image energy that draws particles towards scales of maximal feature strength, and an inter-particle energy that controls sampling density in space and scale. To make scale-space practical for large three-dimensional data, we present a spline-based interpolation across scale from a small number of pre-computed blurrings at optimally selected scales. The configuration of the particle system is visualized with tensor glyphs that display information about the local Hessian of the image, and the scale of the particle. We use scale-space particles to sample the complex three-dimensional branching structure of airways in lung CT, and the major white matter structures in brain DTI. PMID:19834216
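
    The scale-space stack that the particles move through is straightforward to build: blur the image at a ladder of Gaussian scales and interpolate between adjacent levels when a particle sits at an intermediate scale. The paper uses a spline across a few optimally selected scales; the plain linear interpolation below is an illustrative simplification.

    ```python
    # Illustrative discrete scale-space: a stack of progressively blurred
    # copies of an image, sampled at intermediate scales by interpolation.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def scale_space(image, sigmas):
        return np.stack([gaussian_filter(image, s) for s in sigmas])

    def sample(stack, sigmas, x, y, sigma):
        """Linearly interpolate the stack across scale at pixel (y, x)."""
        i = np.searchsorted(sigmas, sigma) - 1
        i = np.clip(i, 0, len(sigmas) - 2)
        t = (sigma - sigmas[i]) / (sigmas[i + 1] - sigmas[i])
        return (1 - t) * stack[i, y, x] + t * stack[i + 1, y, x]

    sigmas = np.array([1.0, 2.0, 4.0, 8.0])
    img = np.random.rand(128, 128)
    stack = scale_space(img, sigmas)
    print(sample(stack, sigmas, x=10, y=20, sigma=3.0))
    ```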

  8. Investigation of Electronic Generation of Visual Images for Air Force Technical Training. Interim Report for Period May 1974-October 1975.

    ERIC Educational Resources Information Center

    Filinger, Ronald H.; Hall, Paul W.

    Because large scale individualized learning systems place excessive demands on conventional means of producing audiovisual software, electronic image generation has been investigated as an alternative. A prototype, experimental device, Scanimate-500, was designed and built by the Computer Image Corporation. It uses photographic, television, and…

  9. Full-scale high-speed "Edgerton" retroreflective shadowgraphy of gunshots

    NASA Astrophysics Data System (ADS)

    Settles, Gary

    2005-11-01

    Almost half a century ago, H. E. "Doc" Edgerton demonstrated a simple and elegant direct-shadowgraph technique for imaging large-scale events like explosions and gunshots. Only a retroreflective screen, flashlamp illumination, and an ordinary view camera were required. Retroreflective shadowgraphy has seen occasional use since then, but its unique combination of large scale, simplicity, and portability has barely been tapped. It functions well in environments hostile to most optical diagnostics, such as full-scale outdoor daylight ballistics and explosives testing. Here, shadowgrams cast upon a 2.4 m square retroreflective screen are imaged by a Photron Fastcam APX-RS digital camera that is capable of megapixel image resolution at 3,000 frames/sec, and up to 250,000 frames/sec at lower resolution. Microsecond frame exposures are used to examine the external ballistics of several firearms, including a high-powered rifle, an AK-47, and several pistols and revolvers. Muzzle blast phenomena and the mechanism of gunpowder residue deposition on the shooter's hands are clearly visualized. In particular, observing the firing of a pistol with and without a silencer (suppressor) suggests that some of the muzzle blast energy is converted by the silencer into supersonic jet noise.

  10. Fluid Lensing, Applications to High-Resolution 3D Subaqueous Imaging & Automated Remote Biosphere Assessment from Airborne and Space-borne Platforms

    NASA Astrophysics Data System (ADS)

    Chirayath, V.

    2014-12-01

    Fluid Lensing is a theoretical model and algorithm I present for fluid-optical interactions in turbulent flows and at two-fluid surface boundaries that, when coupled with a unique computer vision and image-processing pipeline, may be used to significantly enhance the angular resolution of a remote sensing optical system, with applicability to high-resolution 3D imaging of subaqueous regions and imaging through turbulent fluid flows. This novel remote sensing technology has recently been implemented on a quadcopter-based UAS for imaging shallow benthic systems, creating the first dataset of a biosphere with unprecedented sub-cm-level imagery in 3D over areas as large as 15 square kilometers. Perturbed two-fluid boundaries with different refractive indices, such as the surface between the ocean and air, may be exploited for use as lensing elements for imaging targets on either side of the interface with enhanced angular resolution. I present theoretical developments behind Fluid Lensing and experimental results from its recent implementation for the Reactive Reefs project to image shallow reef ecosystems at cm scales. Preliminary results from petabyte-scale aerial survey efforts using Fluid Lensing to image at-risk coral reefs in American Samoa (August 2013) show broad applicability to large-scale automated species identification, morphology studies, and reef ecosystem characterization for shallow marine environments and terrestrial biospheres, of crucial importance to understanding climate change's impact on coastal zones, global oxygen production, and carbon sequestration.

  11. Detecting natural occlusion boundaries using local cues

    PubMed Central

    DiMattina, Christopher; Fox, Sean A.; Lewicki, Michael S.

    2012-01-01

    Occlusion boundaries and junctions provide important cues for inferring three-dimensional scene organization from two-dimensional images. Although several investigators in machine vision have developed algorithms for detecting occlusions and other edges in natural images, relatively few psychophysics or neurophysiology studies have investigated what features are used by the visual system to detect natural occlusions. In this study, we addressed this question using a psychophysical experiment where subjects discriminated image patches containing occlusions from patches containing surfaces. Image patches were drawn from a novel occlusion database containing labeled occlusion boundaries and textured surfaces in a variety of natural scenes. Consistent with related previous work, we found that relatively large image patches were needed to attain reliable performance, suggesting that human subjects integrate complex information over a large spatial region to detect natural occlusions. By defining machine observers using a set of previously studied features measured from natural occlusions and surfaces, we demonstrate that simple features defined at the spatial scale of the image patch are insufficient to account for human performance in the task. To define machine observers using a more biologically plausible multiscale feature set, we trained standard linear and neural network classifiers on the rectified outputs of a Gabor filter bank applied to the image patches. We found that simple linear classifiers could not match human performance, while a neural network classifier combining filter information across location and spatial scale compared well. These results demonstrate the importance of combining a variety of cues defined at multiple spatial scales for detecting natural occlusions. PMID:23255731
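
    The machine-observer pipeline described above (rectified Gabor filter-bank responses feeding a classifier) reduces to a few steps, sketched below with a plain linear classifier. The filter parameters, pooling choice, and data loading are illustrative assumptions rather than the study's exact settings.

    ```python
    # Hedged sketch: rectified, pooled Gabor filter-bank responses as patch
    # features for an occlusion-vs-surface linear classifier.
    import numpy as np
    from scipy.signal import fftconvolve
    from skimage.filters import gabor_kernel
    from sklearn.linear_model import LogisticRegression

    def gabor_features(patch, frequencies=(0.1, 0.2, 0.4), n_orient=4):
        feats = []
        for f in frequencies:
            for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
                k = np.real(gabor_kernel(f, theta=theta))
                resp = fftconvolve(patch, k, mode="same")
                feats.append(np.abs(resp).mean())    # rectified, pooled response
        return np.array(feats)

    # X_patches, y: hypothetical arrays of image patches and occlusion labels
    # X = np.array([gabor_features(p) for p in X_patches])
    # clf = LogisticRegression(max_iter=1000).fit(X, y)
    ```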

  12. Three Mars Years of Surface Albedo Changes Observed by the Mars Reconnaissance Orbiter MARCI Investigation

    NASA Astrophysics Data System (ADS)

    Bell, J. F.; Wellington, D. F.; Anderson, R. B.; Wolff, M. J.; Supulver, K. D.; Cantor, B. A.; Malin, M. C.

    2012-12-01

    The NASA Mars Reconnaissance Orbiter (MRO) spacecraft has been in its prime mapping orbit of the Red Planet since November 2006, a little over three Mars years. MRO's Mars Color Imager (MARCI) investigation has been acquiring wide-angle, approximately 1 km/pixel resolution multispectral images (from the UV to the short-wave near-IR) throughout the mission from the spacecraft's 300 km circular polar orbit. As of fall 2012, MARCI has acquired more than 25,000 image sequences, with its 180 degree field of view covering local solar times of approximately 15:00 +/- 2 hours at the equator. These images can be merged and map projected to provide near-global imaging coverage of Mars for almost every sol of the mission. These maps have been used to characterize and monitor changes in seasonal and interannual dust and water ice cloud opacity, growth and decay of local- to global-scale dust storms, and polar cap growth and recession. The data are also well-suited for studying small- to large-scale changes in surface albedo markings, important for understanding the nature of aeolian transport of dust and sand in the current Martian environment, as well as for modeling the radiative influence of the darker (warmer) or brighter (cooler) surface on local-scale atmospheric circulation and storm systems. We are using calibrated, map-projected, coregistered subsets of MARCI images to characterize and investigate surface albedo changes in a number of specific regions of interest, based on past Viking Orbiter, Hubble Space Telescope, and Mars Global Surveyor images of changing large-scale surface albedo patterns over recent decades, as well as recent surface missions that have characterized small-scale changes in surface albedo. Specific areas of study of large-scale changes include the dark areas Syrtis Major, Acidalia, Cimmeria, Sirenum, and Solis Lacus, and our initial focus areas for small-scale variations include regions in and around the landing sites of the Mars Exploration Rovers Spirit (Gusev crater) and Opportunity (Meridiani Planum), as well as Gale crater, the landing site for the Mars Science Laboratory rover Curiosity. Time-lapse animations of albedo changes in and around Gale crater, for example, reveal tens of km-scale changes in low albedo surface markings both within the crater (including near the rover's planned traverse path) as well as within the 500 km long low albedo wind streak south of the crater. Combined with morphologic, thermal inertia, and compositional/mineralogic constraints from other data sets, MARCI albedo variation measurements can help to constrain present rates of dust and sand transport in a variety of environments on Mars.

  13. Global Scale Solar Disturbances

    NASA Astrophysics Data System (ADS)

    Title, A. M.; Schrijver, C. J.; DeRosa, M. L.

    2013-12-01

    The combination of the STEREO and SDO missions has allowed, for the first time, imagery of the entire Sun. This, coupled with the high cadence, broad thermal coverage, and large dynamic range of the Atmospheric Imaging Assembly on SDO, has allowed the discovery of impulsive solar disturbances that can significantly affect a hemisphere or more of the solar volume. Such events are often, but not always, associated with M- and X-class flares; GOES C- and even B-class flares are also associated with these large-scale disturbances. Key to the recognition of the large-scale disturbances was the creation of log-difference movies: by taking the log of images before differencing, events in the corona become much more evident. Because such events cover so large a portion of the solar volume, their passage can affect the dynamics of the entire corona as it adjusts to and recovers from them. In some cases this may lead to another flare or filament ejection, but in general direct causal evidence of 'sympathetic' behavior is lacking. However, evidence is accumulating that these large-scale events create an environment that encourages other solar instabilities to occur. Understanding the source of these events and how the energy that drives them is built up, stored, and suddenly released is critical to understanding the origins of space weather. Example events and comments on their relevance will be presented.
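
    The log-difference construction itself is a one-liner worth making explicit: taking the logarithm before differencing turns the frame-to-frame change into a relative (percent-like) change, so faint fronts moving across the bright corona stand out.

    ```python
    # Minimal sketch of a log running-difference movie for coronal imagery.
    import numpy as np

    def log_difference_movie(frames, floor=1.0):
        """frames: (T, H, W) array of coronal images in detector counts."""
        logs = np.log(np.maximum(frames, floor))   # floor avoids log of zero
        return logs[1:] - logs[:-1]                # frame-to-frame log differences
    ```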

  14. Automatic initialization and quality control of large-scale cardiac MRI segmentations.

    PubMed

    Albà, Xènia; Lekadir, Karim; Pereañez, Marco; Medrano-Gracia, Pau; Young, Alistair A; Frangi, Alejandro F

    2018-01-01

    Continuous advances in imaging technologies enable ever more comprehensive phenotyping of human anatomy and physiology. Concomitant reduction of imaging costs has resulted in widespread use of imaging in large clinical trials and population imaging studies. Magnetic Resonance Imaging (MRI), in particular, offers one-stop-shop multidimensional biomarkers of cardiovascular physiology and pathology. A wide range of analysis methods offer sophisticated cardiac image assessment and quantification for clinical and research studies. However, most methods have only been evaluated on relatively small databases, often not accessible for open and fair benchmarking. Consequently, published performance indices are not directly comparable across studies, and their translation and scalability to large clinical trials or population imaging cohorts is uncertain. Most existing techniques still rely on considerable manual intervention for the initialization and quality control of the segmentation process, becoming prohibitive when dealing with thousands of images. The contributions of this paper are three-fold. First, we propose a fully automatic method for initializing cardiac MRI segmentation, by using image features and random forest regression to predict an initial position of the heart and key anatomical landmarks in an MRI volume. In processing a full imaging database, the technique predicts the optimal corrective displacements and positions relative to the initial rough intersections of the long- and short-axis images. Second, we introduce for the first time a quality control measure capable of identifying incorrect cardiac segmentations with no visual assessment. The method uses statistical, pattern and fractal descriptors in a random forest classifier to detect failures to be corrected or removed from subsequent statistical analysis. Finally, we validate these new techniques within a full pipeline for cardiac segmentation applicable to large-scale cardiac MRI databases. The results obtained based on over 1200 cases from the Cardiac Atlas Project show the promise of fully automatic initialization and quality control for population studies. Copyright © 2017 Elsevier B.V. All rights reserved.
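
    The initialization idea (regress rough landmark positions from global image features, then let the segmentation start from there) has a natural sklearn shape. The stand-in features and data below are placeholders, not the descriptors used in the paper.

    ```python
    # Hedged sketch: random forest regression from cheap global features to
    # rough landmark coordinates that seed the segmentation.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def global_features(volume, bins=32):
        """Cheap stand-in features: an intensity histogram per volume
        (assumes intensities normalized to [0, 1])."""
        h, _ = np.histogram(volume, bins=bins, range=(0.0, 1.0), density=True)
        return h

    # volumes: list of 3D arrays; landmarks: (n, 3) voxel coordinates (hypothetical)
    # X = np.array([global_features(v) for v in volumes])
    # model = RandomForestRegressor(n_estimators=200).fit(X, landmarks)
    # predicted = model.predict(X[:1])   # rough heart position for a new scan
    ```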

  15. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines

    PubMed Central

    Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.

    2017-01-01

    Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445

  16. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.

    PubMed

    Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H

    2017-04-01

    Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
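
    The tuning loop itself is simple to sketch: sample a small number of points from the huge parameter space, run the segmentation workflow at each, and keep the parameters with the best Dice overlap against reference masks. `run_segmentation` and the parameter space below are hypothetical stand-ins, not the paper's framework.

    ```python
    # Hedged sketch: random search over a segmentation parameter space,
    # scored by the Dice coefficient against reference masks.
    import numpy as np

    def dice(a, b):
        inter = np.logical_and(a, b).sum()
        return 2.0 * inter / (a.sum() + b.sum())

    def random_search(run_segmentation, reference, space, n_points=100, seed=0):
        rng = np.random.default_rng(seed)
        best_score, best_params = -1.0, None
        for _ in range(n_points):             # sample ~100 of billions of points
            params = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
            score = dice(run_segmentation(params), reference)
            if score > best_score:
                best_score, best_params = score, params
        return best_params, best_score

    # space = {"threshold": (0.2, 0.8), "min_size": (10, 500)}  # illustrative
    ```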

  17. A Feature-based Approach to Big Data Analysis of Medical Images

    PubMed Central

    Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M.

    2015-01-01

    This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct. PMID:26221685

  18. A Feature-Based Approach to Big Data Analysis of Medical Images.

    PubMed

    Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M

    2015-01-01

    This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic pulmonary obstructive disorder (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct.
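
    Conceptually, the indexing step amounts to a tree-based nearest-neighbor index over feature descriptors, whose query distances can then feed a KNN-style density estimate. The sketch below uses random placeholder descriptors and sklearn's kd-tree; it illustrates the shape of the computation, not the paper's indexing structure or estimator.

    ```python
    # Conceptual sketch: approximate NN feature matching over a large
    # descriptor database, plus a KNN-style density estimate from the
    # distance to the k-th neighbor.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    database = rng.standard_normal((100_000, 64))   # descriptors from N images
    index = NearestNeighbors(n_neighbors=5, algorithm="kd_tree").fit(database)

    query = rng.standard_normal((10, 64))           # features from a new scan
    dist, idx = index.kneighbors(query)             # fast tree-based lookups

    # KNN density: k / (n * volume of the ball reaching the k-th neighbor),
    # written here up to the constant unit-ball volume factor
    k, n, d = 5, len(database), database.shape[1]
    density = k / (n * dist[:, -1] ** d)
    print(idx[0], density[0])
    ```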

  19. The cosmic ray muon tomography facility based on large scale MRPC detectors

    NASA Astrophysics Data System (ADS)

    Wang, Xuewu; Zeng, Ming; Zeng, Zhi; Wang, Yi; Zhao, Ziran; Yue, Xiaoguang; Luo, Zhifei; Yi, Hengguan; Yu, Baihui; Cheng, Jianping

    2015-06-01

    Cosmic ray muon tomography is a novel technology for detecting high-Z material. A prototype of TUMUTY with 73.6 cm × 73.6 cm large-scale position-sensitive MRPC detectors has been developed and is introduced in this paper. Three test kits have been tested, and images were reconstructed using a maximum a posteriori (MAP) algorithm. The reconstruction results show that the prototype works well: objects with complex structure and small size (20 mm) can be imaged, and high-Z material is distinguishable from low-Z material. This prototype provides a good platform for our further studies of the physical characteristics and performance of cosmic ray muon tomography.

  20. Light sheet theta microscopy for rapid high-resolution imaging of large biological samples.

    PubMed

    Migliori, Bianca; Datta, Malika S; Dupre, Christophe; Apak, Mehmet C; Asano, Shoh; Gao, Ruixuan; Boyden, Edward S; Hermanson, Ola; Yuste, Rafael; Tomer, Raju

    2018-05-29

    Advances in tissue clearing and molecular labeling methods are enabling unprecedented optical access to large intact biological systems. These developments fuel the need for high-speed microscopy approaches to image large samples quantitatively and at high resolution. While light sheet microscopy (LSM), with its high planar imaging speed and low photo-bleaching, can be effective, scaling up to larger imaging volumes has been hindered by the use of orthogonal light sheet illumination. To address this fundamental limitation, we have developed light sheet theta microscopy (LSTM), which uniformly illuminates samples from the same side as the detection objective, thereby eliminating limits on lateral dimensions without sacrificing the imaging resolution, depth, and speed. We present a detailed characterization of LSTM, and demonstrate its complementary advantages over LSM for rapid high-resolution quantitative imaging of large intact samples with high uniform quality. The reported LSTM approach is a significant step for the rapid high-resolution quantitative mapping of the structure and function of very large biological systems, such as a clarified thick coronal slab of human brain and uniformly expanded tissues, and also for rapid volumetric calcium imaging of highly motile animals, such as Hydra, undergoing non-isomorphic body shape changes.

  1. Precision measurements from very-large scale aerial digital imagery.

    PubMed

    Booth, D Terrance; Cox, Samuel E; Berryman, Robert D

    2006-01-01

    Managers need measurements: resource managers need the length/width of a variety of items, including animals, logs, streams, plant canopies, man-made objects, riparian habitat, vegetation patches, and other things important in resource monitoring and land inspection. These types of measurements can now be easily and accurately obtained from very-large-scale aerial (VLSA) imagery having spatial resolutions as fine as 1 millimeter per pixel, using the three new software programs described here. VLSA images have small fields of view and are used for intermittent sampling across extensive landscapes. Pixel coverage among images is influenced by small changes in airplane altitude above ground level (AGL) and orientation relative to the ground, as well as by changes in topography. These factors affect the object-to-camera distance used for image-resolution calculations. 'ImageMeasurement' offers a user-friendly interface that accounts for pixel-coverage variation among images by utilizing a database. 'LaserLOG' records and displays airplane altitude AGL measured from a high-frequency laser rangefinder, and displays the vertical velocity. 'Merge' sorts through the large amounts of data generated by LaserLOG and matches precise airplane altitudes with camera trigger times for input to the ImageMeasurement database. We discuss applications of these tools, including error estimates. We found that measurements from aerial images (collection resolution: 5-26 mm/pixel as projected on the ground) made using ImageMeasurement, LaserLOG, and Merge were accurate to centimeters, with an error of less than 10%. We recommend these software packages as a means for expanding the utility of aerial image data.
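
    The resolution bookkeeping behind such measurements is a short, worked calculation: the ground sample distance (GSD) follows from camera geometry and the laser-measured altitude, and an object's length is its pixel extent times the GSD. The function name and numbers below are illustrative, not part of the described software.

    ```python
    # Worked sketch of the GSD arithmetic for a nadir-pointing camera.
    def ground_sample_distance(altitude_m, focal_length_mm, pixel_pitch_um):
        """Metres of ground covered by one pixel."""
        return altitude_m * (pixel_pitch_um * 1e-6) / (focal_length_mm * 1e-3)

    gsd = ground_sample_distance(altitude_m=100.0, focal_length_mm=100.0,
                                 pixel_pitch_um=9.0)   # ~9 mm per pixel
    log_length = 230 * gsd                             # a log spanning 230 pixels
    print(f"GSD = {gsd*1000:.1f} mm/pixel, log length = {log_length:.2f} m")
    ```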

  2. SamuROI, a Python-Based Software Tool for Visualization and Analysis of Dynamic Time Series Imaging at Multiple Spatial Scales.

    PubMed

    Rueckl, Martin; Lenzi, Stephen C; Moreno-Velasquez, Laura; Parthier, Daniel; Schmitz, Dietmar; Ruediger, Sten; Johenning, Friedrich W

    2017-01-01

    The measurement of activity in vivo and in vitro has shifted from electrical to optical methods. While the indicators for imaging activity have improved significantly over the last decade, tools for analysing optical data have not kept pace. Most available analysis tools are limited in their flexibility and applicability to datasets obtained at different spatial scales. Here, we present SamuROI (Structured analysis of multiple user-defined ROIs), an open source Python-based analysis environment for imaging data. SamuROI simplifies exploratory analysis and visualization of image series of fluorescence changes in complex structures over time and is readily applicable at different spatial scales. In this paper, we show the utility of SamuROI in Ca2+-imaging based applications at three spatial scales: the micro-scale (i.e., sub-cellular compartments including cell bodies, dendrites and spines); the meso-scale, (i.e., whole cell and population imaging with single-cell resolution); and the macro-scale (i.e., imaging of changes in bulk fluorescence in large brain areas, without cellular resolution). The software described here provides a graphical user interface for intuitive data exploration and region of interest (ROI) management that can be used interactively within Jupyter Notebook: a publicly available interactive Python platform that allows simple integration of our software with existing tools for automated ROI generation and post-processing, as well as custom analysis pipelines. SamuROI software, source code and installation instructions are publicly available on GitHub and documentation is available online. SamuROI reduces the energy barrier for manual exploration and semi-automated analysis of spatially complex Ca2+ imaging datasets, particularly when these have been acquired at different spatial scales.
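
    SamuROI itself supplies the GUI and ROI management; the trace extraction underlying its visualizations amounts to masked averaging plus ΔF/F normalization, sketched here in plain NumPy (array shapes and the percentile baseline are assumptions, not SamuROI's API):

    ```python
    import numpy as np

    def roi_traces(stack, masks, f0_percentile=20):
        """Extract dF/F0 traces from a (T, Y, X) stack for boolean ROI masks.

        F0 is taken as a low percentile of each ROI's raw trace, a common
        baseline choice; the percentile value here is an assumption."""
        traces = []
        for mask in masks:                   # each mask: (Y, X) boolean array
            f = stack[:, mask].mean(axis=1)  # mean fluorescence per frame
            f0 = np.percentile(f, f0_percentile)
            traces.append((f - f0) / f0)
        return np.array(traces)              # shape (n_rois, T)
    ```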

  3. SamuROI, a Python-Based Software Tool for Visualization and Analysis of Dynamic Time Series Imaging at Multiple Spatial Scales

    PubMed Central

    Rueckl, Martin; Lenzi, Stephen C.; Moreno-Velasquez, Laura; Parthier, Daniel; Schmitz, Dietmar; Ruediger, Sten; Johenning, Friedrich W.

    2017-01-01

    The measurement of activity in vivo and in vitro has shifted from electrical to optical methods. While the indicators for imaging activity have improved significantly over the last decade, tools for analysing optical data have not kept pace. Most available analysis tools are limited in their flexibility and applicability to datasets obtained at different spatial scales. Here, we present SamuROI (Structured analysis of multiple user-defined ROIs), an open source Python-based analysis environment for imaging data. SamuROI simplifies exploratory analysis and visualization of image series of fluorescence changes in complex structures over time and is readily applicable at different spatial scales. In this paper, we show the utility of SamuROI in Ca2+-imaging based applications at three spatial scales: the micro-scale (i.e., sub-cellular compartments including cell bodies, dendrites and spines); the meso-scale, (i.e., whole cell and population imaging with single-cell resolution); and the macro-scale (i.e., imaging of changes in bulk fluorescence in large brain areas, without cellular resolution). The software described here provides a graphical user interface for intuitive data exploration and region of interest (ROI) management that can be used interactively within Jupyter Notebook: a publicly available interactive Python platform that allows simple integration of our software with existing tools for automated ROI generation and post-processing, as well as custom analysis pipelines. SamuROI software, source code and installation instructions are publicly available on GitHub and documentation is available online. SamuROI reduces the energy barrier for manual exploration and semi-automated analysis of spatially complex Ca2+ imaging datasets, particularly when these have been acquired at different spatial scales. PMID:28706482

  4. Large-Scale Document Automation: The Systems Integration Issue.

    ERIC Educational Resources Information Center

    Kalthoff, Robert J.

    1985-01-01

    Reviews current technologies for electronic imaging and its recording and transmission, including digital recording, optical data disks, automated image-delivery micrographics, high-density-magnetic recording, and new developments in telecommunications and computers. The role of the document automation systems integrator, who will bring these…

  5. MINC 2.0: A Flexible Format for Multi-Modal Images.

    PubMed

    Vincent, Robert D; Neelin, Peter; Khalili-Mahani, Najmeh; Janke, Andrew L; Fonov, Vladimir S; Robbins, Steven M; Baghdadi, Leila; Lerch, Jason; Sled, John G; Adalat, Reza; MacDonald, David; Zijdenbos, Alex P; Collins, D Louis; Evans, Alan C

    2016-01-01

    It is often useful that an imaging data format can afford rich metadata, be flexible, scale to very large file sizes, support multi-modal data, and have strong inbuilt mechanisms for data provenance. Beginning in 1992, MINC was developed as a system for flexible, self-documenting representation of neuroscientific imaging data with arbitrary orientation and dimensionality. The MINC system incorporates three broad components: a file format specification, a programming library, and a growing set of tools. In the early 2000s the MINC developers created MINC 2.0, which added support for 64-bit file sizes, internal compression, and a number of other modern features. Because of its extensible design, it has been easy to incorporate details of provenance in the header metadata, including an explicit processing history, unique identifiers, and vendor-specific scanner settings. This makes MINC ideal for use in large scale imaging studies and databases. It also makes it easy to adapt to new scanning sequences and modalities.
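
    MINC 2.0 files are HDF5 containers, so generic HDF5 tooling can at least enumerate the header metadata that carries the provenance described above. A rough sketch with h5py (the canonical interface is the MINC programming library; this just walks whatever attributes the file exposes, with no assumptions about specific group names):

    ```python
    import h5py

    def dump_minc2_metadata(path):
        """Print every attribute stored anywhere in the HDF5 container of a
        MINC 2.0 file -- processing history, identifiers, scanner settings
        all live in such attributes."""
        def visit(name, obj):
            for key, val in obj.attrs.items():
                print(f"{name}: {key} = {val!r}")
        with h5py.File(path, "r") as f:
            f.visititems(visit)
    ```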

  6. Macro optical projection tomography for large scale 3D imaging of plant structures and gene activity

    PubMed Central

    Lee, Karen J. I.; Calder, Grant M.; Hindle, Christopher R.; Newman, Jacob L.; Robinson, Simon N.; Avondo, Jerome J. H. Y.

    2017-01-01

    Abstract Optical projection tomography (OPT) is a well-established method for visualising gene activity in plants and animals. However, a limitation of conventional OPT is that the specimen upper size limit precludes its application to larger structures. To address this problem we constructed a macro version called Macro OPT (M-OPT). We apply M-OPT to 3D live imaging of gene activity in growing whole plants and to visualise structural morphology in large optically cleared plant and insect specimens up to 60 mm tall and 45 mm deep. We also show how M-OPT can be used to image gene expression domains in 3D within fixed tissue and to visualise gene activity in 3D in clones of growing young whole Arabidopsis plants. A further application of M-OPT is to visualise plant-insect interactions. Thus M-OPT provides an effective 3D imaging platform that allows the study of gene activity, internal plant structures and plant-insect interactions at a macroscopic scale. PMID:28025317

  7. Advanced Cell Classifier: User-Friendly Machine-Learning-Based Software for Discovering Phenotypes in High-Content Imaging Data.

    PubMed

    Piccinini, Filippo; Balassa, Tamas; Szkalisity, Abel; Molnar, Csaba; Paavolainen, Lassi; Kujala, Kaisa; Buzas, Krisztina; Sarazova, Marie; Pietiainen, Vilja; Kutay, Ulrike; Smith, Kevin; Horvath, Peter

    2017-06-28

    High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org. Copyright © 2017 Elsevier Inc. All rights reserved.
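
    ACC is a packaged GUI, but the loop it expedites (annotate a few cells, train, mine the data for informative examples) is standard uncertainty sampling. A hedged scikit-learn sketch of that loop, with placeholder feature matrices rather than ACC's own data structures:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def mine_uncertain_cells(X_labeled, y_labeled, X_unlabeled, n_query=20):
        """Train on annotated cells, then return indices of the unlabeled
        cells the classifier is least sure about -- the examples whose
        manual annotation is most informative (uncertainty sampling)."""
        clf = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
        proba = np.sort(clf.predict_proba(X_unlabeled), axis=1)
        margin = proba[:, -1] - proba[:, -2]   # top-1 minus top-2 confidence
        return np.argsort(margin)[:n_query]    # smallest margin = most uncertain
    ```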

  8. A generalized approach for producing, quantifying, and validating citizen science data from wildlife images.

    PubMed

    Swanson, Alexandra; Kosmala, Margaret; Lintott, Chris; Packer, Craig

    2016-06-01

    Citizen science has the potential to expand the scope and scale of research in ecology and conservation, but many professional researchers remain skeptical of data produced by nonexperts. We devised an approach for producing accurate, reliable data from untrained, nonexpert volunteers. On the citizen science website www.snapshotserengeti.org, more than 28,000 volunteers classified 1.51 million images taken in a large-scale camera-trap survey in Serengeti National Park, Tanzania. Each image was circulated to, on average, 27 volunteers, and their classifications were aggregated using a simple plurality algorithm. We validated the aggregated answers against a data set of 3829 images verified by experts and calculated three certainty metrics: the level of agreement among classifications (evenness), the fraction of classifications supporting the aggregated answer (fraction support), and the fraction of classifiers who reported "nothing here" for an image that was ultimately classified as containing an animal (fraction blank). These metrics measure confidence that an aggregated answer is correct. Overall, aggregated volunteer answers agreed with the expert-verified data on 98% of images, but accuracy differed by species commonness such that rare species had higher rates of false positives and false negatives. Easily calculated analysis of variance and post-hoc Tukey tests indicated that the certainty metrics were significant indicators of whether each image was correctly classified or classifiable. Thus, the certainty metrics can be used to identify images for expert review. Bootstrapping analyses further indicated that 90% of images were correctly classified with just 5 volunteers per image. Species classifications based on the plurality vote of multiple citizen scientists can provide a reliable foundation for large-scale monitoring of African wildlife. © 2016 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
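
    The plurality aggregation and the three certainty metrics can be stated precisely in a few lines. A sketch assuming each image's classifications arrive as a list of species labels with None for "nothing here" (the evenness shown is Pielou's index; the paper's exact definition may differ):

    ```python
    import math
    from collections import Counter

    def aggregate(classifications):
        """Plurality answer plus the three certainty metrics."""
        counts = Counter(classifications)
        answer, top = counts.most_common(1)[0]
        n = len(classifications)
        if len(counts) > 1:  # Pielou evenness: 1 = maximal disagreement
            h = -sum((c / n) * math.log(c / n) for c in counts.values())
            evenness = h / math.log(len(counts))
        else:
            evenness = 0.0   # unanimous vote
        fraction_support = top / n
        fraction_blank = counts.get(None, 0) / n  # meaningful when answer is an animal
        return answer, evenness, fraction_support, fraction_blank

    votes = ["wildebeest"] * 22 + ["buffalo"] * 3 + [None] * 2
    print(aggregate(votes))   # high support, low evenness -> confident answer
    ```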

  9. Deep Adaptive Log-Demons: Diffeomorphic Image Registration with Very Large Deformations

    PubMed Central

    Jia, Kebin

    2015-01-01

    This paper proposes a new framework for capturing large and complex deformation in image registration. Traditionally, this challenging problem relies first on a preregistration, usually an affine transform comprising rotation, scale, and translation, and afterwards on a nonrigid transformation. In the preregistration step, the directly calculated affine matrix, obtained from limited pixel information, may misregister when large biases exist, thereby misleading the subsequent registration. To address this problem, for two-dimensional (2D) images, the two-layer deep adaptive registration framework proposed in this paper first accurately classifies the rotation parameter through multilayer convolutional neural networks (CNNs) and then identifies scale and translation parameters separately. For three-dimensional (3D) images, the affine matrix is located through feature correspondences by triplanar 2D CNNs. Then deformation removal is done iteratively through preregistration and demons registration. By comparison with state-of-the-art registration frameworks, our method achieves more accurate registration on both synthetic and real datasets. In addition, principal component analysis (PCA) is combined with correlation measures such as Pearson and Spearman to form new similarity criteria for 2D and 3D registration. Experimental results also show faster convergence. PMID:26120356

  10. Deep Adaptive Log-Demons: Diffeomorphic Image Registration with Very Large Deformations.

    PubMed

    Zhao, Liya; Jia, Kebin

    2015-01-01

    This paper proposes a new framework for capturing large and complex deformation in image registration. Traditionally, this challenging problem relies first on a preregistration, usually an affine transform comprising rotation, scale, and translation, and afterwards on a nonrigid transformation. In the preregistration step, the directly calculated affine matrix, obtained from limited pixel information, may misregister when large biases exist, thereby misleading the subsequent registration. To address this problem, for two-dimensional (2D) images, the two-layer deep adaptive registration framework proposed in this paper first accurately classifies the rotation parameter through multilayer convolutional neural networks (CNNs) and then identifies scale and translation parameters separately. For three-dimensional (3D) images, the affine matrix is located through feature correspondences by triplanar 2D CNNs. Then deformation removal is done iteratively through preregistration and demons registration. By comparison with state-of-the-art registration frameworks, our method achieves more accurate registration on both synthetic and real datasets. In addition, principal component analysis (PCA) is combined with correlation measures such as Pearson and Spearman to form new similarity criteria for 2D and 3D registration. Experimental results also show faster convergence.
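
    The paper's new similarity standard combines PCA with Pearson and Spearman correlation. A minimal sketch of scoring the agreement of two equally sized images this way (patch size and component count are assumptions, not the authors' settings):

    ```python
    import numpy as np
    from scipy.stats import pearsonr, spearmanr
    from sklearn.decomposition import PCA

    def pca_correlation_similarity(img_a, img_b, n_components=16, patch=8):
        """Project non-overlapping patches of two same-shaped images into a
        shared PCA space and correlate the embeddings; higher correlation
        suggests better alignment. A sketch of the idea, not the paper's
        exact pipeline."""
        def patches(img):
            h, w = (s - s % patch for s in img.shape)
            p = img[:h, :w].reshape(h // patch, patch, w // patch, patch)
            return p.transpose(0, 2, 1, 3).reshape(-1, patch * patch)
        pa, pb = patches(img_a), patches(img_b)
        pca = PCA(n_components=n_components).fit(np.vstack([pa, pb]))
        ea, eb = pca.transform(pa).ravel(), pca.transform(pb).ravel()
        return pearsonr(ea, eb)[0], spearmanr(ea, eb)[0]
    ```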

  11. Multidisciplinary geoscientific experiments in central Europe

    NASA Technical Reports Server (NTRS)

    Bannert, D. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. Studies were carried out in the fields of geology-pedology, coastal dynamics, geodesy-cartography, geography, and data processing. In geology-pedology, a comparison of ERTS image studies with extensive ground data led to a better understanding of the relationship between vegetation, soil, bedrock, and other geologic features. Findings in linear tectonics gave better insight into orogeny and ore-deposit development for prospecting. Coastal studies proved the value of ERTS images for updating nautical charts as well as small-scale topographic maps. A plotter for large-scale, high-speed image generation from CCT was developed.

  12. Regression-Based Identification of Behavior-Encoding Neurons During Large-Scale Optical Imaging of Neural Activity at Cellular Resolution

    PubMed Central

    Miri, Andrew; Daie, Kayvon; Burdine, Rebecca D.; Aksay, Emre

    2011-01-01

    The advent of methods for optical imaging of large-scale neural activity at cellular resolution in behaving animals presents the problem of identifying behavior-encoding cells within the resulting image time series. Rapid and precise identification of cells with particular neural encoding would facilitate targeted activity measurements and perturbations useful in characterizing the operating principles of neural circuits. Here we report a regression-based approach to semiautomatically identify neurons that is based on the correlation of fluorescence time series with quantitative measurements of behavior. The approach is illustrated with a novel preparation allowing synchronous eye tracking and two-photon laser scanning fluorescence imaging of calcium changes in populations of hindbrain neurons during spontaneous eye movement in the larval zebrafish. Putative velocity-to-position oculomotor integrator neurons were identified that showed a broad spatial distribution and diversity of encoding. Optical identification of integrator neurons was confirmed with targeted loose-patch electrical recording and laser ablation. The general regression-based approach we demonstrate should be widely applicable to calcium imaging time series in behaving animals. PMID:21084686
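
    The regression step is conceptually simple: fit each cell's fluorescence trace against behavioral regressors (e.g., eye position and velocity) and keep the well-explained cells. A sketch under assumed array shapes:

    ```python
    import numpy as np

    def behavior_encoding_scores(traces, behavior):
        """traces: (n_cells, T) dF/F; behavior: (T, n_regressors), e.g. eye
        position and velocity. Returns per-cell R^2 from least squares."""
        X = np.column_stack([behavior, np.ones(len(behavior))])  # add intercept
        coeffs, *_ = np.linalg.lstsq(X, traces.T, rcond=None)
        resid = traces.T - X @ coeffs
        ss_res = (resid ** 2).sum(axis=0)
        ss_tot = ((traces.T - traces.T.mean(axis=0)) ** 2).sum(axis=0)
        return 1.0 - ss_res / ss_tot   # threshold R^2 to select candidate cells
    ```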

  13. NetVLAD: CNN Architecture for Weakly Supervised Place Recognition.

    PubMed

    Arandjelovic, Relja; Gronat, Petr; Torii, Akihiko; Pajdla, Tomas; Sivic, Josef

    2018-06-01

    We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the "Vector of Locally Aggregated Descriptors" image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks.
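
    NetVLAD makes the classical VLAD aggregation differentiable and trainable; the hard-assignment original that inspired the layer fits in a few lines of NumPy (descriptor and centroid shapes are assumptions):

    ```python
    import numpy as np

    def vlad(descriptors, centroids):
        """Classical VLAD: sum residuals to the nearest visual word, then
        intra-normalize and globally L2-normalize. NetVLAD replaces the
        hard nearest-word assignment with a learnable soft assignment."""
        d2 = ((descriptors[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)                 # nearest centroid per descriptor
        k, dim = centroids.shape
        v = np.zeros((k, dim))
        for j in range(k):
            sel = descriptors[assign == j]
            if len(sel):
                v[j] = (sel - centroids[j]).sum(axis=0)
        v /= np.linalg.norm(v, axis=1, keepdims=True) + 1e-12   # intra-norm
        v = v.ravel()
        return v / (np.linalg.norm(v) + 1e-12)     # global L2 norm
    ```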

  14. Observations of thunderstorm-related 630 nm airglow depletions

    NASA Astrophysics Data System (ADS)

    Kendall, E. A.; Bhatt, A.

    2015-12-01

    The Midlatitude All-sky imaging Network for Geophysical Observations (MANGO) is an NSF-funded network of 630 nm all-sky imagers in the continental United States. MANGO will be used to observe the generation, propagation, and dissipation of medium- and large-scale wave activity in the subauroral, mid- and low-latitude thermosphere. This network is actively being deployed and will ultimately consist of nine all-sky imagers. These imagers form a network providing continuous coverage over the western United States, including California, Oregon, Washington, Utah, Arizona and Texas, extending south into Mexico. This network sees high levels of both medium- and large-scale wave activity. Apart from the widely reported northeast-to-southwest propagating wave fronts resulting from the so-called Perkins mechanism, this network observes wave fronts propagating to the west, north and northeast. At least three of these anomalous events have been associated with thunderstorm activity. Imager data have been correlated with both GPS data and data from the AIRS (Atmospheric Infrared Sounder) instrument on board NASA's Earth Observing System Aqua satellite. We will present a comprehensive analysis of these events and discuss the potential thunderstorm source mechanism.

  15. Imaging and identification of waterborne parasites using a chip-scale microscope.

    PubMed

    Lee, Seung Ah; Erath, Jessey; Zheng, Guoan; Ou, Xiaoze; Willems, Phil; Eichinger, Daniel; Rodriguez, Ana; Yang, Changhuei

    2014-01-01

    We demonstrate a compact portable imaging system for the detection of waterborne parasites in resource-limited settings. The previously demonstrated sub-pixel sweeping microscopy (SPSM) technique is a lens-less imaging scheme that can achieve high-resolution (<1 µm) bright-field imaging over a large field-of-view (5.7 mm×4.3 mm). A chip-scale microscope system, based on the SPSM technique, can be used for automated and high-throughput imaging of protozoan parasite cysts for the effective diagnosis of waterborne enteric parasite infection. We successfully imaged and identified three major types of enteric parasite cysts, Giardia, Cryptosporidium, and Entamoeba, which can be found in fecal samples from infected patients. We believe that this compact imaging system can serve well as a diagnostic device in challenging environments, such as rural settings or emergency outbreaks.

  16. Accuracy Validation of Large-scale Block Adjustment without Control of ZY3 Images over China

    NASA Astrophysics Data System (ADS)

    Yang, Bo

    2016-06-01

    Mapping from optical satellite images without ground control is one of the goals of photogrammetry. Using 8802 three-linear-array stereo images (a total of 26406 images) of ZY3 over China, we propose a large-scale block adjustment method for optical satellite images without ground control, based on the RPC model, in which a single image is treated as the adjustment unit. To overcome the block distortion caused by unstable adjustment without ground control and the excessive accumulation of errors, we use virtual control points created from the initial RPC models of the images as weighted observations and add them to the adjustment model to refine the adjustment. We use 8000 uniformly distributed high-precision check points to evaluate the geometric accuracy of the DOM (Digital Ortho Model) and DSM (Digital Surface Model) products, for which the standard deviations in plane and elevation are 3.6 m and 4.2 m, respectively. The geometric accuracy is consistent across the whole block and the mosaic accuracy of neighboring DOMs is within a pixel, so seamless mosaicking is possible. This method achieves mapping accuracy better than 5 m for the whole of China from ZY3 satellite images without ground control.
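
    The stabilizing idea, virtual control points derived from the initial RPC model entering as weighted observations, is ordinary least squares with weighted pseudo-observations. A schematic sketch (the real adjustment linearizes the RPC model; here A, b, and the weights are placeholders):

    ```python
    import numpy as np

    def adjust_with_virtual_control(A, b, x0, w_obs=1.0, w_virtual=0.01):
        """Solve the adjustment A x ~= b while softly tying the parameters x
        to the initial RPC-derived values x0. The virtual control points
        enter as extra weighted rows (x = x0), which prevents the block
        from drifting when no ground control is available."""
        n = len(x0)
        A_aug = np.vstack([np.sqrt(w_obs) * A, np.sqrt(w_virtual) * np.eye(n)])
        b_aug = np.concatenate([np.sqrt(w_obs) * b, np.sqrt(w_virtual) * x0])
        x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
        return x
    ```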

  17. PIRATE: pediatric imaging response assessment and targeting environment

    NASA Astrophysics Data System (ADS)

    Glenn, Russell; Zhang, Yong; Krasin, Matthew; Hua, Chiaho

    2010-02-01

    By combining the strengths of various imaging modalities, the multimodality imaging approach has potential to improve tumor staging, delineation of tumor boundaries, chemo-radiotherapy regime design, and treatment response assessment in cancer management. To address the urgent needs for efficient tools to analyze large-scale clinical trial data, we have developed an integrated multimodality, functional and anatomical imaging analysis software package for target definition and therapy response assessment in pediatric radiotherapy (RT) patients. Our software provides quantitative tools for automated image segmentation, region-of-interest (ROI) histogram analysis, spatial volume-of-interest (VOI) analysis, and voxel-wise correlation across modalities. To demonstrate the clinical applicability of this software, histogram analyses were performed on baseline and follow-up 18F-fluorodeoxyglucose (18F-FDG) PET images of nine patients with rhabdomyosarcoma enrolled in an institutional clinical trial at St. Jude Children's Research Hospital. In addition, we combined 18F-FDG PET, dynamic-contrast-enhanced (DCE) MR, and anatomical MR data to visualize the heterogeneity in tumor pathophysiology with the ultimate goal of adaptive targeting of regions with high tumor burden. Our software is able to simultaneously analyze multimodality images across multiple time points, which could greatly speed up the analysis of large-scale clinical trial data and validation of potential imaging biomarkers.

  18. SWAP OBSERVATIONS OF THE LONG-TERM, LARGE-SCALE EVOLUTION OF THE EXTREME-ULTRAVIOLET SOLAR CORONA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seaton, Daniel B.; De Groof, Anik; Berghmans, David

    The Sun Watcher with Active Pixels and Image Processing (SWAP) EUV solar telescope on board the Project for On-Board Autonomy 2 spacecraft has been regularly observing the solar corona in a bandpass near 17.4 nm since 2010 February. With a field of view of 54 × 54 arcmin, SWAP provides the widest-field images of the EUV corona available from the perspective of the Earth. By carefully processing and combining multiple SWAP images, it is possible to produce low-noise composites that reveal the structure of the EUV corona to relatively large heights. A particularly important step in this processing was to remove instrumental stray light from the images by determining and deconvolving SWAP's point-spread function from the observations. In this paper, we use the resulting images to conduct the first-ever study of the evolution of the large-scale structure of the corona observed in the EUV over a three year period that includes the complete rise phase of solar cycle 24. Of particular note is the persistence over many solar rotations of bright, diffuse features composed of open magnetic fields that overlie polar crown filaments and extend to large heights above the solar surface. These features appear to be related to coronal fans, which have previously been observed in white-light coronagraph images and, at low heights, in the EUV. We also discuss the evolution of the corona at different heights above the solar surface and the evolution of the corona over the course of the solar cycle by hemisphere.
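
    Stray-light removal by PSF deconvolution can be illustrated with scikit-image's Richardson-Lucy routine; note this stands in for whatever deconvolution scheme the SWAP team actually applied, and the PSF is an input that must be estimated separately:

    ```python
    from skimage import restoration

    def remove_stray_light(image, psf, n_iter=30):
        """Deconvolve an estimated instrumental PSF from a coronal image.

        `psf` should be normalized to sum to 1; the input is shifted to be
        non-negative because Richardson-Lucy assumes Poisson-like data."""
        img = image.astype(float)
        img -= img.min()
        return restoration.richardson_lucy(img, psf, n_iter, clip=False)
    ```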

  19. Image retrieval by information fusion based on scalable vocabulary tree and robust Hausdorff distance

    NASA Astrophysics Data System (ADS)

    Che, Chang; Yu, Xiaoyang; Sun, Xiaoming; Yu, Boyang

    2017-12-01

    In recent years, Scalable Vocabulary Tree (SVT) has been shown to be effective in image retrieval. However, for general images where the foreground is the object to be recognized while the background is cluttered, the performance of the current SVT framework is restricted. In this paper, a new image retrieval framework that incorporates a robust distance metric and information fusion is proposed, which improves the retrieval performance relative to the baseline SVT approach. First, the visual words that represent the background are diminished by using a robust Hausdorff distance between different images. Second, image matching results based on three image signature representations are fused, which enhances the retrieval precision. We conducted intensive experiments on small-scale to large-scale image datasets: Corel-9, Corel-48, and PKU-198, where the proposed Hausdorff metric and information fusion outperforms the state-of-the-art methods by about 13, 15, and 15%, respectively.
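
    A robust Hausdorff distance typically replaces the max in the classical definition with a high quantile, so that a fraction of unmatched background points cannot dominate the score. A sketch with a KD-tree (the paper's exact robustification may differ):

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def robust_hausdorff(pts_a, pts_b, quantile=0.9):
        """Quantile-based symmetric Hausdorff distance between point sets.

        Classical Hausdorff takes the max of nearest-neighbor distances;
        a high quantile instead tolerates a fraction of outlier points
        such as cluttered background features."""
        d_ab, _ = cKDTree(pts_b).query(pts_a)  # NN distance of each a-point in b
        d_ba, _ = cKDTree(pts_a).query(pts_b)
        return max(np.quantile(d_ab, quantile), np.quantile(d_ba, quantile))
    ```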

  20. Logarithmic profile mapping multi-scale Retinex for restoration of low illumination images

    NASA Astrophysics Data System (ADS)

    Shi, Haiyan; Kwok, Ngaiming; Wu, Hongkun; Li, Ruowei; Liu, Shilong; Lin, Ching-Feng; Wong, Chin Yeow

    2018-04-01

    Images are valuable information sources for many scientific and engineering applications. However, images captured in poor illumination conditions have a large portion of dark regions that can heavily degrade image quality. In order to improve the quality of such images, a restoration algorithm is developed here that transforms the low input brightness to a higher value using a modified Multi-Scale Retinex approach. The algorithm is further improved by an entropy-based weighting of the input and the processed results to refine the necessary amplification in regions of low brightness. Moreover, fine details in the image are preserved by applying the Retinex principles to extract and then re-insert object edges to obtain an enhanced image. Results from experiments using low and normal illumination images show satisfactory performance with regard to the improvement in information content and the mitigation of viewing artifacts.
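
    The core of multi-scale Retinex is an average, over several Gaussian scales, of the log-ratio between the image and its smoothed illumination estimate. A minimal single-channel sketch (the scale set and output stretch are assumptions, and the paper's logarithmic profile mapping and entropy-based weighting are omitted):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def multi_scale_retinex(image, sigmas=(15, 80, 250)):
        """Basic MSR: subtract, in log space, the illumination estimated by
        Gaussian smoothing at several scales, then average the scales."""
        img = image.astype(float) + 1.0     # avoid log(0)
        msr = np.zeros_like(img)
        for s in sigmas:
            msr += np.log(img) - np.log(gaussian_filter(img, s) + 1.0)
        msr /= len(sigmas)
        # stretch to [0, 1] for display
        return (msr - msr.min()) / (msr.max() - msr.min() + 1e-12)
    ```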

  1. Scaling Analysis of Ocean Surface Turbulent Heterogeneities from Satellite Remote Sensing: Use of 2D Structure Functions.

    PubMed

    Renosh, P R; Schmitt, Francois G; Loisel, Hubert

    2015-01-01

    Satellite remote sensing observations allow the ocean surface to be sampled synoptically over large spatio-temporal scales. The images provided from visible and thermal infrared satellite observations are widely used in physical, biological, and ecological oceanography. The present work proposes a method to understand the multi-scaling properties of satellite products such as Chlorophyll-a (Chl-a) and Sea Surface Temperature (SST), which are rarely studied. The specific objective of this study is to show how the small-scale heterogeneities of satellite images can be characterised using tools borrowed from the field of turbulence. For that purpose, we show how the structure function, which is classically used in scaling time-series analysis, can also be used in 2D. The main advantage of this method is that it can be applied to images that have missing data. Based on both simulated and real images, we demonstrate that coarse-graining (CG) of a gradient-modulus transform of the original image does not provide correct scaling exponents. We show, using a 2D fractional Brownian simulation, that the structure function (SF) can be used with randomly sampled point pairs, and verify that one million point pairs provide sufficient statistics.
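
    The paper's central point, that structure functions can be estimated from randomly sampled point pairs even when pixels are missing, translates directly into code. A sketch with an assumed log-spaced binning:

    ```python
    import numpy as np

    def structure_function_2d(field, q=2, n_pairs=1_000_000, n_bins=20, rng=None):
        """Estimate S_q(r) = <|f(x+r) - f(x)|^q> from random point pairs,
        skipping pairs that hit missing data (NaNs), as in satellite images."""
        rng = rng or np.random.default_rng(0)
        h, w = field.shape
        p1 = rng.integers(0, (h, w), size=(n_pairs, 2))
        p2 = rng.integers(0, (h, w), size=(n_pairs, 2))
        f1 = field[p1[:, 0], p1[:, 1]]
        f2 = field[p2[:, 0], p2[:, 1]]
        ok = ~(np.isnan(f1) | np.isnan(f2))            # drop missing-data pairs
        r = np.hypot(*(p1 - p2)[ok].T)                 # pair separation
        dq = np.abs(f1[ok] - f2[ok]) ** q
        bins = np.logspace(0, np.log10(r.max() + 1), n_bins + 1)
        idx = np.digitize(r, bins)
        sq = [dq[idx == i].mean() for i in range(1, n_bins + 1) if (idx == i).any()]
        rc = [r[idx == i].mean() for i in range(1, n_bins + 1) if (idx == i).any()]
        return np.array(rc), np.array(sq)  # fit the log-log slope for zeta(q)
    ```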

  2. Satellite-based characterization of climatic conditions before large-scale general flowering events in Peninsular Malaysia

    PubMed Central

    Azmy, Muna Maryam; Hashim, Mazlan; Numata, Shinya; Hosaka, Tetsuro; Noor, Nur Supardi Md.; Fletcher, Christine

    2016-01-01

    General flowering (GF) is a unique phenomenon wherein, at irregular intervals, taxonomically diverse trees in Southeast Asian dipterocarp forests synchronize their reproduction at the community level. Triggers of GF, including drought and low minimum temperatures a few months previously, have been observed only to a limited extent across large regional scales due to the lack of meteorological stations. Here, we aim to identify the climatic conditions that trigger large-scale GF in Peninsular Malaysia using satellite sensors, the Tropical Rainfall Measuring Mission (TRMM) and the Moderate Resolution Imaging Spectroradiometer (MODIS), to evaluate the climatic conditions of focal forests. We observed antecedent drought, low temperature and high photosynthetic radiation conditions before large-scale GF events, suggesting that large-scale GF events could be triggered by these factors. In contrast, we found higher-magnitude GF in forests where lower precipitation preceded large-scale GF events. GF magnitude was also negatively influenced by land surface temperature (LST) for a large-scale GF event. Therefore, we suggest that the spatial extent of drought may be related to that of GF forests, and that the spatial pattern of LST may be related to that of GF occurrence. With significant new findings and other results that were consistent with previous research, we clarified complicated environmental correlates of the GF phenomenon. PMID:27561887

  3. Satellite-based characterization of climatic conditions before large-scale general flowering events in Peninsular Malaysia.

    PubMed

    Azmy, Muna Maryam; Hashim, Mazlan; Numata, Shinya; Hosaka, Tetsuro; Noor, Nur Supardi Md; Fletcher, Christine

    2016-08-26

    General flowering (GF) is a unique phenomenon wherein, at irregular intervals, taxonomically diverse trees in Southeast Asian dipterocarp forests synchronize their reproduction at the community level. Triggers of GF, including drought and low minimum temperatures a few months previously, have been observed only to a limited extent across large regional scales due to the lack of meteorological stations. Here, we aim to identify the climatic conditions that trigger large-scale GF in Peninsular Malaysia using satellite sensors, the Tropical Rainfall Measuring Mission (TRMM) and the Moderate Resolution Imaging Spectroradiometer (MODIS), to evaluate the climatic conditions of focal forests. We observed antecedent drought, low temperature and high photosynthetic radiation conditions before large-scale GF events, suggesting that large-scale GF events could be triggered by these factors. In contrast, we found higher-magnitude GF in forests where lower precipitation preceded large-scale GF events. GF magnitude was also negatively influenced by land surface temperature (LST) for a large-scale GF event. Therefore, we suggest that the spatial extent of drought may be related to that of GF forests, and that the spatial pattern of LST may be related to that of GF occurrence. With significant new findings and other results that were consistent with previous research, we clarified complicated environmental correlates of the GF phenomenon.

  4. Satellite measurements of large-scale air pollution - Methods

    NASA Technical Reports Server (NTRS)

    Kaufman, Yoram J.; Ferrare, Richard A.; Fraser, Robert S.

    1990-01-01

    A technique for deriving large-scale pollution parameters from NIR and visible satellite remote-sensing images obtained over land or water is described and demonstrated on AVHRR images. The method is based on comparison of the upward radiances on clear and hazy days and permits simultaneous determination of aerosol optical thickness with error Δτ_a = 0.08-0.15, particle size with error ±100-200 nm, and single-scattering albedo with error ±0.03 (for albedos near 1), all assuming accurate and stable satellite calibration and stable surface reflectance between the clear and hazy days. In the analysis of AVHRR images of smoke from a forest fire, good agreement was obtained between satellite and ground-based (sun-photometer) measurements of aerosol optical thickness, but the satellite particle sizes were systematically greater than those measured from the ground. The AVHRR single-scattering albedo agreed well with a Landsat albedo for the same smoke.

  5. Fuzzy-based propagation of prior knowledge to improve large-scale image analysis pipelines

    PubMed Central

    Mikut, Ralf

    2017-01-01

    Many automatically analyzable scientific questions are well-posed and a variety of information about expected outcomes is available a priori. Although often neglected, this prior knowledge can be systematically exploited to make automated analysis operations sensitive to a desired phenomenon or to evaluate extracted content with respect to this prior knowledge. For instance, the performance of processing operators can be greatly enhanced by a more focused detection strategy and by direct information about the ambiguity inherent in the extracted data. We present a new concept that increases the result quality awareness of image analysis operators by estimating and distributing the degree of uncertainty involved in their output based on prior knowledge. This allows the use of simple processing operators that are suitable for analyzing large-scale spatiotemporal (3D+t) microscopy images without compromising result quality. On the foundation of fuzzy set theory, we transform available prior knowledge into a mathematical representation and extensively use it to enhance the result quality of various processing operators. These concepts are illustrated on a typical bioimage analysis pipeline comprised of seed point detection, segmentation, multiview fusion and tracking. The functionality of the proposed approach is further validated on a comprehensive simulated 3D+t benchmark data set that mimics embryonic development and on large-scale light-sheet microscopy data of a zebrafish embryo. The general concept introduced in this contribution represents a new approach to efficiently exploit prior knowledge to improve the result quality of image analysis pipelines. The generality of the concept makes it applicable to practically any field with processing strategies that are arranged as linear pipelines. The automated analysis of terabyte-scale microscopy data will especially benefit from sophisticated and efficient algorithms that enable a quantitative and fast readout. PMID:29095927

  6. Image-based optimization of coronal magnetic field models for improved space weather forecasting

    NASA Astrophysics Data System (ADS)

    Uritsky, V. M.; Davila, J. M.; Jones, S. I.; MacNeice, P. J.

    2017-12-01

    The existing space weather forecasting frameworks show a significant dependence on the accuracy of the photospheric magnetograms and the extrapolation models used to reconstruct the magnetic field in the solar corona. Minor uncertainties in the magnetic field magnitude and direction near the Sun, when propagated through the heliosphere, can lead to unacceptable prediction errors at 1 AU. We argue that ground-based and satellite coronagraph images can provide valid geometric constraints that could be used for improving coronal magnetic field extrapolation results, enabling more reliable forecasts of extreme space weather events such as major CMEs. In contrast to the previously developed loop segmentation codes designed for detecting compact closed-field structures above solar active regions, we focus on the large-scale geometry of the open-field coronal regions up to 1-2 solar radii above the photosphere. By applying the developed image processing techniques to high-resolution Mauna Loa Solar Observatory images, we perform an optimized 3D B-line tracing for a full Carrington rotation using the magnetic field extrapolation code developed by S. Jones et al. (ApJ, 2016, 2017). Our tracing results are in good qualitative agreement with the large-scale configuration of the optical corona, and lead to a more consistent reconstruction of the large-scale coronal magnetic field geometry and potentially more accurate global heliospheric simulation results. Several upcoming data products for the space weather forecasting community will also be discussed.

  7. Ontology-guided organ detection to retrieve web images of disease manifestation: towards the construction of a consumer-based health image library.

    PubMed

    Chen, Yang; Ren, Xiaofeng; Zhang, Guo-Qiang; Xu, Rong

    2013-01-01

    Visual information is a crucial aspect of medical knowledge. Building a comprehensive medical image base, in the spirit of the Unified Medical Language System (UMLS), would greatly benefit patient education and self-care. However, collection and annotation of such a large-scale image base is challenging. Our aim was to combine visual object detection techniques with medical ontology to automatically mine web photos and retrieve a large number of disease manifestation images with minimal manual labeling effort. As a proof of concept, we first learnt five organ detectors on three detection scales for eyes, ears, lips, hands, and feet. Given a disease, we used information from the UMLS to select affected body parts, ran the pretrained organ detectors on web images, and combined the detection outputs to retrieve disease images. Compared with a supervised image retrieval approach that requires training images for every disease, our ontology-guided approach exploits shared visual information of body parts across diseases. In retrieving 2220 web images of 32 diseases, we reduced manual labeling effort to 15.6% while improving the average precision by 3.9% from 77.7% to 81.6%. For 40.6% of the diseases, we improved the precision by 10%. The results confirm the concept that the web is a feasible source for automatic disease image retrieval for health image database construction. Our approach requires a small amount of manual effort to collect complex disease images, and to annotate them by standard medical ontology terms.

  8. Fast, large-scale hologram calculation in wavelet domain

    NASA Astrophysics Data System (ADS)

    Shimobaba, Tomoyoshi; Matsushima, Kyoji; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Ito, Tomoyoshi

    2018-04-01

    We propose a large-scale hologram calculation using WAvelet ShrinkAge-Based superpositIon (WASABI), a wavelet transform-based algorithm. An image-type hologram calculated using the WASABI method is printed on a glass substrate with a resolution of 65,536 × 65,536 pixels and a pixel pitch of 1 μm. The hologram calculation time amounts to approximately 354 s on a commercial CPU, which is approximately 30 times faster than conventional methods.
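
    The WASABI algorithm itself is beyond an abstract, but the wavelet-shrinkage primitive it builds on (decompose, discard small coefficients, reconstruct) looks like this with PyWavelets; the wavelet choice and keep ratio are illustrative assumptions:

    ```python
    import numpy as np
    import pywt

    def wavelet_shrink(field, wavelet="db4", level=3, keep_ratio=0.05):
        """Sparsify a 2-D field by keeping only the largest wavelet
        coefficients; superposing such shrunken elementary patterns is the
        kind of trick that makes large hologram superposition cheap."""
        coeffs = pywt.wavedec2(field, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thresh = np.quantile(np.abs(arr), 1.0 - keep_ratio)
        arr = pywt.threshold(arr, thresh, mode="hard")
        coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
        return pywt.waverec2(coeffs, wavelet)
    ```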

  9. A practical overview and comparison of certain commercial forensic software tools for processing large-scale digital investigations

    NASA Astrophysics Data System (ADS)

    Kröger, Knut; Creutzburg, Reiner

    2013-05-01

    The aim of this paper is to show the usefulness of modern forensic software tools for processing large-scale digital investigations. In particular, we focus on the new version of Nuix 4.2 and compare it with AccessData FTK 4.2, X-Ways Forensics 16.9 and Guidance Encase Forensic 7 regarding its performance, functionality, usability and capability. We will show how these software tools work with large forensic images and how capable they are in examining complex and big data scenarios.

  10. Mpc-scale diffuse radio emission in two massive cool-core clusters of galaxies

    NASA Astrophysics Data System (ADS)

    Sommer, Martin W.; Basu, Kaustuv; Intema, Huib; Pacaud, Florian; Bonafede, Annalisa; Babul, Arif; Bertoldi, Frank

    2017-04-01

    Radio haloes are diffuse synchrotron sources on scales of ˜1 Mpc that are found in merging clusters of galaxies, and are believed to be powered by electrons re-accelerated by merger-driven turbulence. We present measurements of extended radio emission on similarly large scales in two clusters of galaxies hosting cool cores: Abell 2390 and Abell 2261. The analysis is based on interferometric imaging with the Karl G. Jansky Very Large Array, Very Large Array and Giant Metrewave Radio Telescope. We present detailed radio images of the targets, subtract the compact emission components and measure the spectral indices for the diffuse components. The radio emission in A2390 extends beyond a known sloshing-like brightness discontinuity, and has a very steep in-band spectral slope at 1.5 GHz that is similar to some known ultrasteep spectrum radio haloes. The diffuse signal in A2261 is more extended than in A2390 but has lower luminosity. X-ray morphological indicators, derived from XMM-Newton X-ray data, place these clusters in the category of relaxed or regular systems, although some asymmetric features that can indicate past minor mergers are seen in the X-ray brightness images. If these two Mpc-scale radio sources are categorized as giant radio haloes, they question the common assumption of radio haloes occurring exclusively in clusters undergoing violent merging activity, in addition to commonly used criteria for distinguishing between radio haloes and minihaloes.

  11. Miniaturized integration of a fluorescence microscope

    PubMed Central

    Ghosh, Kunal K.; Burns, Laurie D.; Cocker, Eric D.; Nimmerjahn, Axel; Ziv, Yaniv; Gamal, Abbas El; Schnitzer, Mark J.

    2013-01-01

    The light microscope is traditionally an instrument of substantial size and expense. Its miniaturized integration would enable many new applications based on mass-producible, tiny microscopes. Key prospective usages include brain imaging in behaving animals towards relating cellular dynamics to animal behavior. Here we introduce a miniature (1.9 g) integrated fluorescence microscope made from mass-producible parts, including semiconductor light source and sensor. This device enables high-speed cellular-level imaging across ∼0.5 mm2 areas in active mice. This capability allowed concurrent tracking of Ca2+ spiking in >200 Purkinje neurons across nine cerebellar microzones. During mouse locomotion, individual microzones exhibited large-scale, synchronized Ca2+ spiking. This is a mesoscopic neural dynamic missed by prior techniques for studying the brain at other length scales. Overall, the integrated microscope is a potentially transformative technology that permits distribution to many animals and enables diverse usages, such as portable diagnostics or microscope arrays for large-scale screens. PMID:21909102

  12. Miniaturized integration of a fluorescence microscope.

    PubMed

    Ghosh, Kunal K; Burns, Laurie D; Cocker, Eric D; Nimmerjahn, Axel; Ziv, Yaniv; Gamal, Abbas El; Schnitzer, Mark J

    2011-09-11

    The light microscope is traditionally an instrument of substantial size and expense. Its miniaturized integration would enable many new applications based on mass-producible, tiny microscopes. Key prospective usages include brain imaging in behaving animals for relating cellular dynamics to animal behavior. Here we introduce a miniature (1.9 g) integrated fluorescence microscope made from mass-producible parts, including a semiconductor light source and sensor. This device enables high-speed cellular imaging across ∼0.5 mm2 areas in active mice. This capability allowed concurrent tracking of Ca2+ spiking in >200 Purkinje neurons across nine cerebellar microzones. During mouse locomotion, individual microzones exhibited large-scale, synchronized Ca2+ spiking. This is a mesoscopic neural dynamic missed by prior techniques for studying the brain at other length scales. Overall, the integrated microscope is a potentially transformative technology that permits distribution to many animals and enables diverse usages, such as portable diagnostics or microscope arrays for large-scale screens.

  13. Accuracy improvement in laser stripe extraction for large-scale triangulation scanning measurement system

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Liu, Wei; Li, Xiaodong; Yang, Fan; Gao, Peng; Jia, Zhenyuan

    2015-10-01

    Large-scale triangulation scanning measurement systems are widely used to measure the three-dimensional profile of large-scale components and parts. The accuracy and speed of laser stripe center extraction are essential for guaranteeing the accuracy and efficiency of the measuring system. However, in the process of large-scale measurement, multiple factors can cause deviation of the laser stripe center, including the spatial light intensity distribution, material reflectivity characteristics, and spatial transmission characteristics. A center extraction method is proposed to improve the accuracy of laser stripe center extraction, based on image evaluation of Gaussian-fitting structural similarity and analysis of the multiple source factors. First, according to the features of the gray distribution of the laser stripe, the Gaussian-fitting structural similarity is evaluated to provide a threshold value for center compensation. Then, using the relationships between the gray distribution of the laser stripe and the multiple source factors, a compensation method for center extraction is presented. Finally, measurement experiments for a large-scale aviation composite component are carried out. The experimental results verify the feasibility of the proposed center extraction method and the improved accuracy for large-scale triangulation scanning measurements.
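
    Sub-pixel stripe centers are conventionally obtained by fitting a Gaussian to each cross-section of the stripe; the paper's contribution layers a structural-similarity evaluation and multi-factor compensation on top of this baseline. A sketch of the baseline fit with SciPy:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, amp, mu, sigma, offset):
        return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

    def stripe_centers(image):
        """Fit a Gaussian to every column of a stripe image; returns the
        sub-pixel row coordinate of the stripe center per column (NaN where
        the fit fails)."""
        rows = np.arange(image.shape[0])
        centers = np.full(image.shape[1], np.nan)
        for col in range(image.shape[1]):
            profile = image[:, col].astype(float)
            p0 = (np.ptp(profile), profile.argmax(), 2.0, profile.min())
            try:
                popt, _ = curve_fit(gaussian, rows, profile, p0=p0, maxfev=2000)
                centers[col] = popt[1]          # mu = sub-pixel center
            except RuntimeError:
                pass                            # leave NaN where fitting diverges
        return centers
    ```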

  14. Radar imaging of volcanic fields and sand dune fields: Implications for VOIR

    NASA Technical Reports Server (NTRS)

    Elachi, C.; Blom, R.; Daily, M.; Farr, T.; Saunders, R. S.

    1980-01-01

    A number of volcanic fields and sand dune fields in the western part of North America were studied using aircraft and Seasat synthetic aperture radar images and LANDSAT images. The capability of radars with different characteristics (i.e., frequency, polarization, and look angle) was assessed to identify and map different volcanic features, lava flows and sand dune types. It was concluded that: (1) volcanic features which have a relatively large topographic expression (i.e., cinder cones, collapse craters, calderas, etc.) are easily identified; (2) lava flows of different ages can be identified, particularly on the L-band images; and (3) sand dunes are clearly observed and their extent and large scale geometric characteristics determined, provided the proper imaging geometry exists.

  15. Application of AIS Technology to Forest Mapping

    NASA Technical Reports Server (NTRS)

    Yool, S. R.; Star, J. L.

    1985-01-01

    Concerns about environmental effects of large scale deforestation have prompted efforts to map forests over large areas using various remote sensing data and image processing techniques. Basic research on the spectral characteristics of forest vegetation are required to form a basis for development of new techniques, and for image interpretation. Examination of LANDSAT data and image processing algorithms over a portion of boreal forest have demonstrated the complexity of relations between the various expressions of forest canopies, environmental variability, and the relative capacities of different image processing algorithms to achieve high classification accuracies under these conditions. Airborne Imaging Spectrometer (AIS) data may in part provide the means to interpret the responses of standard data and techniques to the vegetation based on its relatively high spectral resolution.

  16. ARES I AND ARES V CONCEPT IMAGE

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This concept image shows NASA's next generation launch vehicle systems standing side by side. Ares I, left, is the crew launch vehicle that will carry the Orion crew exploration vehicle to space. Ares V is the cargo launch vehicle that will deliver large scale hardware, including the lunar lander, to space.

  17. Large-scale imaging of cortical network activity with calcium indicators.

    PubMed

    Ikegaya, Yuji; Le Bon-Jego, Morgane; Yuste, Rafael

    2005-06-01

    Bulk loading of calcium indicators has provided a unique opportunity to reconstruct the activity of cortical networks with single-cell resolution. Here we describe the detailed methods of bulk loading of AM dyes we developed and have been improving for imaging with a spinning disk confocal microscope.

  18. Image stack alignment in full-field X-ray absorption spectroscopy using SIFT_PyOCL.

    PubMed

    Paleo, Pierre; Pouyet, Emeline; Kieffer, Jérôme

    2014-03-01

    Full-field X-ray absorption spectroscopy experiments allow the acquisition of millions of spectra within minutes. However, the construction of the hyperspectral image requires an image alignment procedure with sub-pixel precision. While an image correlation algorithm was originally used for image re-alignment via translations, the Scale Invariant Feature Transform (SIFT) algorithm (which is by design robust to rotation, illumination change, translation and scaling) presents an additional advantage: the alignment can be limited to a region of interest of any arbitrary shape. In this context, a Python module, named SIFT_PyOCL, has been developed. It implements a parallel version of the SIFT algorithm in OpenCL, providing high-speed image registration and alignment both on processors and graphics cards. The performance of the algorithm allows online processing of large datasets.
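
    SIFT_PyOCL is the OpenCL implementation the paper describes; the alignment it performs can be sketched for illustration with OpenCV's CPU SIFT (this is not the SIFT_PyOCL API, and the ratio-test threshold is an assumption):

    ```python
    import cv2
    import numpy as np

    def sift_align(moving, reference):
        """Estimate a rigid transform (rotation, translation, scale) mapping
        `moving` onto `reference` from matched SIFT keypoints, then warp.
        Inputs are 8-bit grayscale arrays."""
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(moving, None)
        kp2, des2 = sift.detectAndCompute(reference, None)
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # ratio test
        src = np.float32([kp1[m.queryIdx].pt for m in good])
        dst = np.float32([kp2[m.trainIdx].pt for m in good])
        M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        h, w = reference.shape[:2]
        return cv2.warpAffine(moving, M, (w, h))
    ```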

  19. Studying the Sky/Planets Can Drown You in Images: Machine Learning Solutions at JPL/Caltech

    NASA Technical Reports Server (NTRS)

    Fayyad, U. M.

    1995-01-01

    JPL is working to develop a domain-independent system capable of small-scale object recognition in large image databases for science analysis. Two applications discussed are the cataloging of three billion sky objects in the Sky Image Cataloging and Analysis Tool (SKICAT) and the detection of possibly one million small volcanoes visible in the Magellan synthetic aperture radar images of Venus (JPL Adaptive Recognition Tool, JARTool).

  20. Soft X-ray Emission from Large-Scale Galactic Outflows in Seyfert Galaxies

    NASA Astrophysics Data System (ADS)

    Colbert, E. J. M.; Baum, S.; O'Dea, C.; Veilleux, S.

    1998-01-01

    Kiloparsec-scale soft X-ray nebulae extend along the galaxy minor axes in several Seyfert galaxies, including NGC 2992, NGC 4388 and NGC 5506. In these three galaxies, the extended X-ray emission observed in ROSAT HRI images has 0.2-2.4 keV X-ray luminosities of 0.4-3.5 × 10^40 erg s^-1. The X-ray nebulae are roughly co-spatial with the large-scale radio emission, suggesting that both are produced by large-scale galactic outflows. Assuming pressure balance between the radio and X-ray plasmas, the X-ray filling factor is ≳10^4 times as large as the radio plasma filling factor, suggesting that large-scale outflows in Seyfert galaxies are predominantly winds of thermal X-ray emitting gas. We favor an interpretation in which large-scale outflows originate as AGN-driven jets that entrain and heat gas on kpc scales as they make their way out of the galaxy. AGN- and starburst-driven winds are also possible explanations if the winds are oriented along the rotation axis of the galaxy disk. Since large-scale outflows are present in at least 50 percent of Seyfert galaxies, the soft X-ray emission from the outflowing gas may, in many cases, explain the "soft excess" X-ray feature observed below 2 keV in X-ray spectra of many Seyfert 2 galaxies.

  1. The Future of Stellar Populations Studies in the Milky Way and the Local Group

    NASA Astrophysics Data System (ADS)

    Majewski, Steven R.

    2010-04-01

    The last decade has seen enormous progress in understanding the structure of the Milky Way and neighboring galaxies via the production of large-scale digital surveys of the sky like 2MASS and SDSS, as well as specialized, counterpart imaging surveys of other Local Group systems. Apart from providing snapshots of galaxy structure, these “cartographic” surveys lend insights into the formation and evolution of galaxies when supplemented with additional data (e.g., spectroscopy, astrometry) and when referenced to theoretical models and simulations of galaxy evolution. These increasingly sophisticated simulations are making ever more specific predictions about the detailed chemistry and dynamics of stellar populations in galaxies. To fully exploit, test and constrain these theoretical ventures demands similar commitments of observational effort as has been applied to the previous imaging surveys, to fill out other dimensions of parameter space with statistically significant intensity. Fortunately the future of large-scale stellar population studies is bright with a number of grand projects on the horizon that collectively will contribute a breathtaking volume of information on individual stars in Local Group galaxies. These projects include: (1) additional imaging surveys, such as Pan-STARRS, SkyMapper and LSST, which, apart from providing deep, multicolor imaging, yield time series data useful for revealing variable stars (including critical standard candles, like RR Lyrae variables) and creating large-scale, deep proper motion catalogs; (2) higher accuracy, space-based astrometric missions, such as Gaia and SIM-Lite, which stand to provide critical, high precision dynamical data on stars in the Milky Way and its satellites; and (3) large-scale spectroscopic surveys provided by RAVE, APOGEE, HERMES, LAMOST, and the Gaia spectrometer, which will yield not only enormous numbers of stellar radial velocities, but extremely comprehensive views of the chemistry of stellar populations. Meanwhile, previously dust-obscured regions of the Milky Way will continue to be systematically exposed via large infrared surveys underway or on the way, such as the various GLIMPSE surveys from Spitzer's IRAC instrument, UKIDSS, APOGEE, JASMINE and WISE.

  2. Implementation of large-scale routine diagnostics using whole slide imaging in Sweden: Digital pathology experiences 2006-2013

    PubMed Central

    Thorstenson, Sten; Molin, Jesper; Lundström, Claes

    2014-01-01

    Recent technological advances have improved the whole slide imaging (WSI) scanner quality and reduced the cost of storage, thereby enabling the deployment of digital pathology for routine diagnostics. In this paper we present the experiences from two Swedish sites having deployed routine large-scale WSI for primary review. At Kalmar County Hospital, the digitization process started in 2006 to reduce the time spent at the microscope in order to improve the ergonomics. Since 2008, more than 500,000 glass slides have been scanned in the routine operations of Kalmar and the neighboring Linköping University Hospital. All glass slides are digitally scanned yet they are also physically delivered to the consulting pathologist who can choose to review the slides on screen, in the microscope, or both. The digital operations include regular remote case reporting by a few hospital pathologists, as well as around 150 cases per week where primary review is outsourced to a private clinic. To investigate how the pathologists choose to use the digital slides, a web-based questionnaire was designed and sent out to the pathologists in Kalmar and Linköping. The responses showed that almost all pathologists think that ergonomics have improved and that image quality is sufficient for most histopathologic diagnostic work. On average, 38 ± 28% of the cases were diagnosed digitally, but the survey also revealed that the pathologists commonly switch back and forth between digital and conventional microscopy within the same case. The fact that two full-scale digital systems have been implemented and that a large portion of the primary reporting is voluntarily performed digitally shows that large-scale digitization is possible today. PMID:24843825

  3. Large-Scale Outflows in Seyfert Galaxies

    NASA Astrophysics Data System (ADS)

    Colbert, E. J. M.; Baum, S. A.

    1995-12-01

    Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (≳1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in ≳1/4 of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble the radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or a circumnuclear starburst.

  4. Identifying Coherent Structures in a 3-Stream Supersonic Jet Flow using Time-Resolved Schlieren Imaging

    NASA Astrophysics Data System (ADS)

    Tenney, Andrew; Coleman, Thomas; Berry, Matthew; Magstadt, Andy; Gogineni, Sivaram; Kiel, Barry

    2015-11-01

    Shock cells and large-scale structures present in a three-stream non-axisymmetric jet are studied both qualitatively and quantitatively. Large Eddy Simulation is utilized first to gain an understanding of the underlying physics of the flow and to direct the focus of the physical experiment. The flow in the experiment is visualized using long-exposure Schlieren photography, with time-resolved Schlieren photography also a possibility. Velocity derivative diagnostics calculated from the grey-scale Schlieren images are analyzed using continuous wavelet transforms. Pressure signals are also captured in the near field of the jet to correlate with the velocity derivative diagnostics and assist in unraveling this complex flow. We acknowledge the support of AFRL through an SBIR grant.

  5. The Newport Button: The Large Scale Replication Of Combined Three-And Two-Dimensional Holographic Images

    NASA Astrophysics Data System (ADS)

    Cowan, James J.

    1984-05-01

    A unique type of holographic imagery and its large scale replication are described. The "Newport Button", which was designed as an advertising premium item for the Newport Corporation, incorporates a complex overlay of holographic diffraction gratings surrounding a three-dimensional holographic image of a real object. The combined pattern is recorded onto a photosensitive medium from which a metal master is made. The master is subsequently used to repeatedly emboss the pattern into a thin plastic sheet. Individual patterns are then die cut from the metallized plastic and mounted onto buttons. A discussion is given of the diffraction efficiencies of holograms made in this particular fashion and of the special requirements of the replication process.

  6. Architectures and algorithms for digital image processing; Proceedings of the Meeting, Cannes, France, December 5, 6, 1985

    NASA Technical Reports Server (NTRS)

    Duff, Michael J. B. (Editor); Siegel, Howard J. (Editor); Corbett, Francis J. (Editor)

    1986-01-01

    The conference presents papers on the architectures, algorithms, and applications of image processing. Particular attention is given to a very large scale integration system for image reconstruction from projections, a prebuffer algorithm for instant display of volume data, and an adaptive image sequence filtering scheme based on motion detection. Papers are also presented on a simple, direct practical method of sensing local motion and analyzing local optical flow, image matching techniques, and an automated biological dosimetry system.

  7. Test of the CLAS12 RICH large-scale prototype in the direct proximity focusing configuration

    DOE PAGES

    Anefalos Pereira, S.; Baltzell, N.; Barion, L.; ...

    2016-02-11

    A large area ring-imaging Cherenkov detector has been designed to provide clean hadron identification capability in the momentum range from 3 GeV/c up to 8 GeV/c for the CLAS12 experiments at the upgraded 12 GeV continuous electron beam accelerator facility of Jefferson Laboratory. The adopted solution foresees a novel hybrid optics design based on an aerogel radiator, composite mirrors and highly packed, highly segmented photon detectors. Cherenkov light will either be imaged directly (forward tracks) or after two mirror reflections (large angle tracks). We report here the results of the tests of a large-scale prototype of the RICH detector performed with the hadron beam of the CERN T9 experimental hall for the direct detection configuration. The tests demonstrated that the proposed design provides the required pion-to-kaon rejection factor of 1:500 in the whole momentum range.

  8. Handling Big Data in Medical Imaging: Iterative Reconstruction with Large-Scale Automated Parallel Computation

    PubMed Central

    Lee, Jae H.; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T.; Seo, Youngho

    2014-01-01

    The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum likelihood expectation maximization (MLEM) as used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge of computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal of eventually making it usable in a clinical setting. PMID:27081299
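    As a concrete illustration of the reconstruction algorithm being parallelized, the sketch below implements the standard MLEM multiplicative update in plain NumPy on a toy dense system matrix. It is a minimal single-machine sketch, not the paper's Spark/GraphX implementation; the matrix A, data y and all names are illustrative assumptions.

        import numpy as np

        def mlem(A, y, n_iter=50, eps=1e-12):
            # A: system matrix (detector bins x image voxels), y: measured counts.
            x = np.ones(A.shape[1])             # flat initial image estimate
            sens = A.sum(axis=0) + eps          # sensitivity image, A^T 1
            for _ in range(n_iter):
                proj = A @ x + eps              # forward projection
                x *= (A.T @ (y / proj)) / sens  # multiplicative MLEM update
            return x

        # toy example: 2 detector bins, 3 voxels
        A = np.array([[1.0, 0.5, 0.0],
                      [0.0, 0.5, 1.0]])
        y = A @ np.array([2.0, 1.0, 3.0])
        print(mlem(A, y, n_iter=200))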

  9. Handling Big Data in Medical Imaging: Iterative Reconstruction with Large-Scale Automated Parallel Computation.

    PubMed

    Lee, Jae H; Yao, Yushu; Shrestha, Uttam; Gullberg, Grant T; Seo, Youngho

    2014-11-01

    The primary goal of this project is to implement the iterative statistical image reconstruction algorithm, in this case maximum likelihood expectation maximization (MLEM) as used for dynamic cardiac single photon emission computed tomography, on Spark/GraphX. This involves porting the algorithm to run on large-scale parallel computing systems. Spark is an easy-to-program software platform that can handle large amounts of data in parallel. GraphX is a graph analytic system running on top of Spark to handle graph and sparse linear algebra operations in parallel. The main advantage of implementing the MLEM algorithm in Spark/GraphX is that it allows users to parallelize such computation without any expertise in parallel computing or prior knowledge of computer science. In this paper we demonstrate a successful implementation of MLEM in Spark/GraphX and present the performance gains, with the goal of eventually making it usable in a clinical setting.

  10. Dual-wavelength hybrid optoacoustic-ultrasound biomicroscopy for functional imaging of large-scale cerebral vascular networks.

    PubMed

    Rebling, Johannes; Estrada, Héctor; Gottschalk, Sven; Sela, Gali; Zwack, Michael; Wissmeyer, Georg; Ntziachristos, Vasilis; Razansky, Daniel

    2018-04-19

    A critical link exists between pathological changes of cerebral vasculature and diseases affecting brain function. Microscopic techniques have played an indispensable role in the study of neurovascular anatomy and function. Yet, investigations are often hindered by suboptimal trade-offs between the spatiotemporal resolution, field-of-view (FOV) and type of contrast offered by existing optical microscopy techniques. We present a hybrid dual-wavelength optoacoustic (OA) biomicroscope capable of rapid transcranial visualization of large-scale cerebral vascular networks. The system offers 3-dimensional views of the morphology and oxygenation status of the cerebral vasculature with single-capillary resolution and a FOV exceeding 6 × 8 mm², thus covering the entire cortical vasculature in mice. The large-scale OA imaging capacity is complemented by simultaneously acquired pulse-echo ultrasound (US) biomicroscopy scans of the mouse skull. The new approach holds great potential to provide better insights into cerebrovascular function and facilitate efficient studies into neurological and vascular abnormalities of the brain. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Higher resolution satellite remote sensing and the impact on image mapping

    USGS Publications Warehouse

    Watkins, Allen H.; Thormodsgard, June M.

    1987-01-01

    Recent advances in spatial, spectral, and temporal resolution of civil land remote sensing satellite data are presenting new opportunities for image mapping applications. The U.S. Geological Survey's experimental satellite image mapping program is evolving toward larger scale image map products with increased information content as a result of improved image processing techniques and increased resolution. Thematic mapper data are being used to produce experimental image maps at 1:100,000 scale that meet established U.S. and European map accuracy standards. Availability of high quality, cloud-free, 30-meter ground resolution multispectral data from the Landsat thematic mapper sensor, along with 10-meter ground resolution panchromatic and 20-meter ground resolution multispectral data from the recently launched French SPOT satellite, presents new cartographic and image processing challenges. The need to fully exploit these higher resolution data increases the complexity of processing the images into large-scale image maps. The removal of radiometric artifacts and noise prior to geometric correction can be accomplished by using a variety of image processing filters and transforms. Sensor modeling and image restoration techniques allow maximum retention of spatial and radiometric information. An optimum combination of spectral information and spatial resolution can be obtained by merging different sensor types. These processing techniques are discussed and examples are presented.

  12. Evaluation of Existing Image Matching Methods for Deriving Glacier Surface Displacements Globally from Optical Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Heid, T.; Kääb, A.

    2011-12-01

    Automatic matching of images from two different times is a method that is often used to derive glacier surface velocity. Nearly global repeat coverage of the Earth's surface by optical satellite sensors now opens the possibility for global-scale mapping and monitoring of glacier flow with a number of applications in, for example, glacier physics, glacier-related climate change and impact assessment, and glacier hazard management. The purpose of this study is to compare and evaluate different existing image matching methods for glacier flow determination over large scales. The study compares six different matching methods: normalized cross-correlation (NCC), the phase correlation algorithm used in the COSI-Corr software, and four other Fourier methods with different normalizations. We compare the methods over five regions of the world with different representative glacier characteristics: Karakoram, the European Alps, Alaska, Pine Island (Antarctica) and southwest Greenland. Landsat images are chosen for matching because they extend back to 1972, they cover large areas, and their spatial resolution is as good as 15 m for images after 1999 (ETM+ pan). Cross-correlation on orientation images (CCF-O) outperforms the three similar Fourier methods, both in areas with high and low visual contrast. NCC experiences problems in areas with low visual contrast, areas with thin clouds, or changing snow conditions between the images. CCF-O has problems on narrow outlet glaciers where small window sizes (about 16 pixels by 16 pixels or smaller) are needed, and it also obtains fewer correct matches than COSI-Corr in areas with low visual contrast. COSI-Corr has problems on narrow outlet glaciers, and it obtains fewer correct matches compared to CCF-O when thin clouds cover the surface or if one of the images contains snow dunes. In total, we consider CCF-O and COSI-Corr to be the two most robust matching methods for global-scale mapping and monitoring of glacier velocities. By combining CCF-O with locally adaptive template sizes and automatically filtering the matching results through comparison of the displacement matrix to its low-pass-filtered version, the matching process can be automated to a large degree. This allows the derivation of glacier velocities with minimal (but not without!) user interaction and hence also opens up the possibility of global-scale mapping and monitoring of glacier flow.
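    To make the distinction between plain NCC and matching on orientation images concrete, the sketch below builds a gradient-orientation image (unit complex numbers encoding local gradient direction) and locates the correlation peak with FFTs, in the spirit of CCF-O. It is a simplified NumPy sketch, not the authors' code; window extraction, sub-pixel peak fitting and the filtering steps described above are omitted.

        import numpy as np

        def orientation_image(img):
            # Unit-magnitude complex numbers encoding local gradient direction;
            # zero where the gradient vanishes, so contrast differences drop out.
            gy, gx = np.gradient(img.astype(float))
            g = gx + 1j * gy
            mag = np.abs(g)
            return np.where(mag > 0, g / np.maximum(mag, 1e-12), 0)

        def match_offset(ref, cand):
            # FFT cross-correlation of two equal-sized orientation images;
            # the peak location gives the integer-pixel displacement.
            cc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(cand)))
            dy, dx = np.unravel_index(np.argmax(np.abs(cc)), cc.shape)
            if dy > ref.shape[0] // 2: dy -= ref.shape[0]   # wrap to +/- half range
            if dx > ref.shape[1] // 2: dx -= ref.shape[1]
            return -dy, -dx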

  13. Stacked Metal Silicide/Silicon Far-Infrared Detectors

    NASA Technical Reports Server (NTRS)

    Maserjian, Joseph

    1988-01-01

    Selective doping of silicon in the proposed metal silicide/silicon Schottky-barrier infrared photodetector increases the maximum detectable wavelength. Stacking layers to form multiple Schottky barriers increases the quantum efficiency of the detector. Detectors of the new type enhance the capabilities of far-infrared imaging arrays. The structure is grown by molecular-beam epitaxy on silicon wafers containing very-large-scale integrated circuits, so imaging arrays of detectors can be made in monolithic units with image-preprocessing circuitry.

  14. Improving crop condition monitoring at field scale by using optimal Landsat and MODIS images

    USDA-ARS?s Scientific Manuscript database

    Satellite remote sensing data at coarse resolution (kilometers) have been widely used in monitoring crop condition for decades. However, crop condition monitoring at field scale requires high resolution data in both time and space. Although a large number of remote sensing instruments with different...

  15. Toward giga-pixel nanoscopy on a chip: a computational wide-field look at the nano-scale without the use of lenses

    PubMed Central

    McLeod, Euan; Luo, Wei; Mudanyali, Onur; Greenbaum, Alon

    2013-01-01

    The development of lensfree on-chip microscopy in the past decade has opened up various new possibilities for biomedical imaging across ultra-large fields of view using compact, portable, and cost-effective devices. However, until recently, its ability to resolve fine features and detect ultra-small particles has not rivalled the capabilities of the more expensive and bulky laboratory-grade optical microscopes. In this Frontier Review, we highlight the developments over the last two years that have enabled computational lensfree holographic on-chip microscopy to compete with and, in some cases, surpass conventional bright-field microscopy in its ability to image nano-scale objects across large fields of view, yielding giga-pixel phase and amplitude images. Lensfree microscopy has now achieved a numerical aperture as high as 0.92, with a spatial resolution as small as 225 nm across a large field of view (e.g., >20 mm²). Furthermore, the combination of lensfree microscopy with self-assembled nanolenses, forming nano-catenoid minimal surfaces around individual nanoparticles, has boosted the image contrast to levels high enough to permit bright-field imaging of individual particles smaller than 100 nm. These capabilities support a number of new applications, including, for example, the detection and sizing of individual virus particles using field-portable computational on-chip microscopes. PMID:23592185

  16. Toward giga-pixel nanoscopy on a chip: a computational wide-field look at the nano-scale without the use of lenses.

    PubMed

    McLeod, Euan; Luo, Wei; Mudanyali, Onur; Greenbaum, Alon; Ozcan, Aydogan

    2013-06-07

    The development of lensfree on-chip microscopy in the past decade has opened up various new possibilities for biomedical imaging across ultra-large fields of view using compact, portable, and cost-effective devices. However, until recently, its ability to resolve fine features and detect ultra-small particles has not rivalled the capabilities of the more expensive and bulky laboratory-grade optical microscopes. In this Frontier Review, we highlight the developments over the last two years that have enabled computational lensfree holographic on-chip microscopy to compete with and, in some cases, surpass conventional bright-field microscopy in its ability to image nano-scale objects across large fields of view, yielding giga-pixel phase and amplitude images. Lensfree microscopy has now achieved a numerical aperture as high as 0.92, with a spatial resolution as small as 225 nm across a large field of view (e.g., >20 mm²). Furthermore, the combination of lensfree microscopy with self-assembled nanolenses, forming nano-catenoid minimal surfaces around individual nanoparticles, has boosted the image contrast to levels high enough to permit bright-field imaging of individual particles smaller than 100 nm. These capabilities support a number of new applications, including, for example, the detection and sizing of individual virus particles using field-portable computational on-chip microscopes.

  17. Multiscale approach to contour fitting for MR images

    NASA Astrophysics Data System (ADS)

    Rueckert, Daniel; Burger, Peter

    1996-04-01

    We present a new multiscale contour fitting process which combines information about the image and the contour of the object at different levels of scale. The algorithm is based on energy-minimizing deformable models but avoids some of the problems associated with these models. The segmentation algorithm starts by constructing a linear scale-space of an image through convolution of the original image with a Gaussian kernel at different levels of scale, where the scale corresponds to the standard deviation of the Gaussian kernel. At high levels of scale, large-scale features of the objects are preserved while small-scale features, such as object details as well as noise, are suppressed. In order to maximize the accuracy of the segmentation, the contour of the object of interest is then tracked in scale-space from coarse to fine scales. We propose a hybrid multi-temperature simulated annealing (SA) optimization to minimize the energy of the deformable model. At high levels of scale the SA optimization is started at high temperatures, enabling it to find a globally optimal solution. At lower levels of scale the SA optimization is started at lower temperatures (at the lowest level the temperature is close to 0). This enforces more deterministic behavior of the SA optimization at lower scales and leads to an increasingly local optimization, as high energy barriers cannot be crossed. The performance and robustness of the algorithm have been tested on spin-echo MR images of the cardiovascular system. The task was to segment the ascending and descending aorta in 15 datasets from different individuals in order to measure regional aortic compliance. The results show that the algorithm is able to provide more accurate segmentation results than the classic contour fitting process while at the same time being very robust to noise and initialization.
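    The linear scale-space construction described above is straightforward to reproduce; the SciPy sketch below (with illustrative sigma values) builds the stack of smoothed images over which such a coarse-to-fine contour fit would be tracked. The simulated annealing stage is not shown.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def linear_scale_space(img, sigmas=(8.0, 4.0, 2.0, 1.0)):
            # One smoothed copy of the image per scale; large sigma suppresses
            # object detail and noise, small sigma preserves fine structure.
            # Contour fitting would proceed through this list coarse-to-fine.
            return [(s, gaussian_filter(img.astype(float), sigma=s)) for s in sigmas]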

  18. SAD5 Stereo Correlation Line-Striping in an FPGA

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos Y.; Morfopoulos, Arin C.

    2011-01-01

    High precision SAD5 stereo computations can be performed in an FPGA (field-programmable gate array) at much higher speeds than possible in a conventional CPU (central processing unit), but this uses large amounts of FPGA resources that scale with image size. Of the two key resources in an FPGA, Slices and BRAM (block RAM), Slices scale linearly in the new algorithm with image size, and BRAM scales quadratically with image size. An approach was developed to trade latency for BRAM by sub-windowing the image vertically into overlapping strips and stitching the outputs together to create a single continuous disparity output. In stereo, the general rule of thumb is that the disparity search range must be 1/10 the image size. In the new algorithm, BRAM usage scales linearly with disparity search range and scales again linearly with line width. So a doubling of image size, say from 640 to 1,280, would in the previous design mean an effective 4× increase in BRAM usage: 2× for line width, 2× again for disparity search range. The minimum strip size is twice the search range, and will produce an output strip width equal to the disparity search range. So, assuming a disparity search range of 1/10 image width, 10 sequential runs of the minimum strip size would produce a full output image. This approach allowed the innovators to fit 1280 × 960 SAD5 stereo disparity in less than 80 BRAM and 52k Slices on a Virtex 5LX330T, 25% and 24% of resources, respectively. Using a 100-MHz clock, this build would perform stereo at 39 Hz. Of particular interest to JPL is that there is a flight-qualified version of the Virtex 5: this could produce stereo results even for very large image sizes at 3 orders of magnitude faster than could be computed on the PowerPC 750 flight computer. The work covered in the report allows the stereo algorithm to run on much larger images than before, using much less BRAM. This opens up choices for a smaller flight FPGA (which saves power and space), or for other algorithms in addition to SAD5 to be run on the same FPGA.
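    The strip-decomposition arithmetic in the report (minimum strip width of twice the search range, output strips one search range wide) can be sketched in a few lines; the Python below computes illustrative strip boundaries and is an assumption-laden paraphrase, not the FPGA implementation.

        def strips(image_width, disparity_range):
            # Each input strip is 2x the search range wide and yields an
            # output strip one search range wide; stitched outputs re-create
            # a full-width disparity image at the cost of repeated passes.
            in_w, out_w = 2 * disparity_range, disparity_range
            spans, x = [], 0
            while x < image_width:
                lo = min(x, image_width - in_w)   # clamp the final strip
                spans.append((lo, lo + in_w, x, min(x + out_w, image_width)))
                x += out_w
            return spans

        # 1280-wide image, search range 128 (1/10 of width) -> 10 passes
        for s in strips(1280, 128):
            print("input cols %4d-%4d -> output cols %4d-%4d" % s)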

  19. Large-Scale Coronal Heating from the Solar Magnetic Network

    NASA Technical Reports Server (NTRS)

    Falconer, David A.; Moore, Ronald L.; Porter, Jason G.; Hathaway, David H.

    1999-01-01

    In Fe XII images from SOHO/EIT, the quiet solar corona shows structure on scales ranging from sub-supergranular (i.e., bright points and coronal network) to multi-supergranular. In Falconer et al. 1998 (ApJ, 501, 386) we suppressed the large-scale background and found that the network-scale features are predominantly rooted in the magnetic network lanes at the boundaries of the supergranules. The emission of the coronal network and bright points contributes only about 5% of the entire quiet solar coronal Fe XII emission. Here we investigate the large-scale corona, the supergranular and larger-scale structure that we had previously treated as a background, and that emits 95% of the total Fe XII emission. We compare the dim and bright halves of the large-scale corona and find that the bright half is 1.5 times brighter than the dim half, has an order of magnitude greater area of bright point coverage, has three times brighter coronal network, and has about 1.5 times more magnetic flux than the dim half. These results suggest that the brightness of the large-scale corona is more closely related to the large-scale total magnetic flux than to bright point activity. We conclude that in the quiet Sun: (1) magnetic flux is modulated (concentrated/diluted) on size scales larger than supergranules; (2) the large-scale enhanced magnetic flux gives an enhanced, more active, magnetic network and an increased incidence of network bright point formation; (3) the heating of the large-scale corona is dominated by more widespread, but weaker, network activity than that which heats the bright points. This work was funded by the Solar Physics Branch of NASA's Office of Space Science through the SR&T Program and the SEC Guest Investigator Program.

  20. Portable Fluorescence Imaging System for Hypersonic Flow Facilities

    NASA Technical Reports Server (NTRS)

    Wilkes, J. A.; Alderfer, D. W.; Jones, S. B.; Danehy, P. M.

    2003-01-01

    A portable fluorescence imaging system has been developed for use in NASA Langley's hypersonic wind tunnels. The system has been applied to a small-scale free jet flow. Two-dimensional images were taken of the flow out of a nozzle into a low-pressure test section using the portable planar laser-induced fluorescence system. Images were taken from the center of the jet at various test section pressures, showing the formation of a barrel shock at low pressures, transitioning to a turbulent jet at high pressures. A spanwise scan through the jet at constant pressure reveals the three-dimensional structure of the flow. Future capabilities of the system for making measurements in large-scale hypersonic wind tunnel facilities are discussed.

  1. An Assessment of Stream Confluence Flow Dynamics using Large Scale Particle Image Velocimetry Captured from Unmanned Aerial Systems

    NASA Astrophysics Data System (ADS)

    Lewis, Q. W.; Rhoads, B. L.

    2017-12-01

    The merging of rivers at confluences results in complex three-dimensional flow patterns that influence sediment transport, bed morphology, downstream mixing, and physical habitat conditions. The capacity to comprehensively characterize flow at confluences using traditional sensors, such as acoustic Doppler velocimeters and profilers, is limited by the restricted spatial resolution of these sensors and difficulties in measuring velocities simultaneously at many locations within a confluence. This study assesses two-dimensional surficial patterns of flow structure at a small stream confluence in Illinois, USA, using large scale particle image velocimetry (LSPIV) derived from videos captured by unmanned aerial systems (UAS). The method captures surface velocity patterns at high spatial and temporal resolution over multiple scales, ranging from the entire confluence to details of flow within the confluence mixing interface. Flow patterns at high momentum ratio are compared to flow patterns when the two incoming flows have nearly equal momentum flux. Mean surface flow patterns during the two types of events provide details on mean patterns of surface flow in different hydrodynamic regions of the confluence and on changes in these patterns with changing momentum flux ratio. LSPIV data derived from the highest resolution imagery also reveal general characteristics of large-scale vortices that form along the shear layer between the flows during the high-momentum-ratio event. The results indicate that the use of LSPIV and UAS is well suited for capturing in detail mean surface patterns of flow at small confluences, but that characterization of evolving turbulent structures is limited by scale considerations related to structure size, image resolution, and camera instability. Complementary methods, including camera platforms mounted at fixed positions close to the water surface, provide opportunities to accurately characterize evolving turbulent flow structures in confluences.
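    The core LSPIV computation is a windowed cross-correlation between successive video frames. The brute-force NumPy sketch below (illustrative window and search sizes) returns the pixel displacement of one interrogation window; dividing by the frame interval and multiplying by the ground sampling distance would give a surface velocity.

        import numpy as np

        def piv_displacement(frame_a, frame_b, y, x, win=32, search=16):
            # Normalized cross-correlation of one interrogation window over a
            # +/- search region in the next frame; returns (dy, dx) in pixels.
            a = frame_a[y:y + win, x:x + win].astype(float)
            a = (a - a.mean()) / (a.std() + 1e-12)
            best, best_dyx = -np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0:
                        continue                  # window would leave the frame
                    b = frame_b[yy:yy + win, xx:xx + win].astype(float)
                    if b.shape != a.shape:
                        continue
                    b = (b - b.mean()) / (b.std() + 1e-12)
                    score = float((a * b).mean())
                    if score > best:
                        best, best_dyx = score, (dy, dx)
            return best_dyx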

  2. Coronal mass ejection and solar flare initiation processes without appreciable

    NASA Astrophysics Data System (ADS)

    Veselovsky, I.

    TRACE and SOHO/EIT movies clearly show cases of coronal mass ejection and solar flare initiation without noticeable large-scale topology modifications in the observed features. Instead, the appearance of new intermediate scales is omnipresent in the erupting region structures while the overall configuration is preserved. Examples of this kind are presented and discussed in the light of the existing magnetic field reconnection paradigms. It is demonstrated that spurious large-scale reconnections and detachments are often produced by projection effects in poorly resolved images of twisted loops and sheared arcades, especially when deformed parts of them are underexposed and not seen in the images for this reason alone. Other parts, which are normally exposed or overexposed, can create the illusion of "islands" or detached elements in these situations, though in reality they preserve the initial magnetic connectivity. Spurious "islands" of this kind could be wrongly interpreted as signatures of topological transitions in the large-scale magnetic fields in many instances described in the vast past literature based mainly on fuzzy YOHKOH images, which resulted in the myth of universal solar flare models and the scenario of detached magnetic island formation with new null points in the large-scale magnetic field. Better visualization, with higher resolution and sensitivity limits, has made it possible to clarify this confusion and avoid this unjustified interpretation. It is concluded that topological changes obviously can happen in coronal magnetic fields, but these changes are not always necessary ingredients of coronal mass ejections and solar flares. The scenario of magnetic field opening is not universal for all ejections. Alternatively, expanding ejections with closed magnetic configurations can be produced by fast E × B drifts in strong inductive electric fields, which appear due to the emergence of new magnetic flux. Corresponding theoretical models are presented and discussed.

  3. Pushing CHARA to its Limit: A Pathway Toward 80X80 Pixel Images of Stellar Surfaces

    NASA Astrophysics Data System (ADS)

    Norris, Ryan

    2018-04-01

    Imagine a future with 80x80 pixel images of stellar surfaces. With a maximum baseline of 330 m, the CHARA Array is already capable of achieving 0.5 mas resolution, sufficient for imaging the red supergiant Betelgeuse (d = 42.3 mas) at such a scale. However, several issues have hampered attempts to image the largest stars at CHARA, including a lack of baselines shorter than 34 m and instrument sensitivities unable to measure the faintest fringes. Here we discuss what is needed to achieve imaging of large stars at CHARA. We will present suggestions for future telescope placement, describing the advantages of a short baseline, while also considering the needs of other imaging targets that might benefit from additional baselines. We will also present developments in image reconstruction methods that can improve the resolution of images today, albeit of smaller targets and at a lesser scale. Of course, there will be example images, created using simulated OIFITS data and state-of-the-art reconstruction techniques!

  4. State-of-the-art for large area high resolution gray scale and full color AC plasma flat panel displays

    NASA Technical Reports Server (NTRS)

    Stoller, Ray A.; Wedding, Donald K.; Friedman, Peter S.

    1993-01-01

    A development status evaluation is presented for gas plasma display technology, noting how tradeoffs among the parameters of size, resolution, speed, portability, color, and image quality can yield cost-effective solutions for medical imaging, CAD, teleconferencing, multimedia, and both civil and military applications. Attention is given to plasma-based large-area displays' suitability for radar, sonar, and IR, due to their lack of EM susceptibility. Both monochrome and color displays are available.

  5. Compact Representation of High-Dimensional Feature Vectors for Large-Scale Image Recognition and Retrieval.

    PubMed

    Zhang, Yu; Wu, Jianxin; Cai, Jianfei

    2016-05-01

    In large-scale visual recognition and image retrieval tasks, feature vectors, such as the Fisher vector (FV) or the vector of locally aggregated descriptors (VLAD), have achieved state-of-the-art results. However, the combination of large numbers of examples and high-dimensional vectors necessitates dimensionality reduction, in order to reduce storage and CPU costs to a reasonable range. In spite of the popularity of various feature compression methods, this paper shows that feature (dimension) selection is a better choice for high-dimensional FV/VLAD than feature (dimension) compression methods, e.g., product quantization. We show that strong correlation among the feature dimensions in the FV and the VLAD may not exist, which renders feature selection a natural choice. We also show that many dimensions in FV/VLAD are noise; throwing them away using feature selection is better than compressing them together with the useful dimensions using feature compression methods. To choose features, we propose an efficient importance sorting algorithm considering both the supervised and unsupervised cases, for visual recognition and image retrieval, respectively. Combined with 1-bit quantization, feature selection has achieved both higher accuracy and lower computational cost than feature compression methods, such as product quantization, on the FV and the VLAD image representations.
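    The pipeline the authors argue for (rank dimensions by importance, keep the top ones, then 1-bit quantize) can be sketched as below. The variance and label-correlation scores are illustrative stand-ins, not the paper's exact importance-sorting criterion.

        import numpy as np

        def select_dims(X, k, labels=None):
            # Rank FV/VLAD dimensions and keep the k most important ones.
            if labels is None:                       # unsupervised (retrieval)
                score = X.var(axis=0)
            else:                                    # supervised (recognition)
                y = labels - labels.mean()
                score = np.abs((X - X.mean(axis=0)).T @ y)
            return np.argsort(score)[::-1][:k]

        def one_bit(X, dims):
            # 1-bit quantization of the selected dimensions: keep only the sign.
            return (X[:, dims] > 0).astype(np.uint8)

        X = np.random.randn(100, 512)                # toy "FV" vectors
        codes = one_bit(X, select_dims(X, k=64))     # 100 compact binary codes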

  6. Mosaic construction, processing, and review of very large electron micrograph composites

    NASA Astrophysics Data System (ADS)

    Vogt, Robert C., III; Trenkle, John M.; Harmon, Laurel A.

    1996-11-01

    A system of programs is described for the acquisition, mosaicking, cueing and interactive review of large-scale transmission electron micrograph composite images. This work was carried out as part of a final-phase clinical analysis study of a drug for the treatment of diabetic peripheral neuropathy. More than 500 nerve biopsy samples were prepared, digitally imaged, processed, and reviewed. For a given sample, typically 1000 or more 1.5-megabyte frames were acquired, for a total of between 1 and 2 gigabytes of data per sample. These frames were then automatically registered and mosaicked together into a single virtual image composite, which was subsequently used to perform automatic cueing of axons and axon clusters, as well as review and marking by qualified neuroanatomists. Statistics derived from the review process were used to evaluate the efficacy of the drug in promoting regeneration of myelinated nerve fibers. This effort demonstrates a new, entirely digital capability for doing large-scale electron micrograph studies, in which all of the relevant specimen data can be included at high magnification, as opposed to simply taking a random sample of discrete locations. It opens up the possibility of a new era in electron microscopy, one which broadens the scope of questions that this imaging modality can be used to answer.

  7. Full-wave Characterization of Rough Terrain Surface Effects for Forward-looking Radar Applications: A Scattering and Imaging Study from the Electromagnetic Perspective

    DTIC Science & Technology

    2011-09-01

    ... and Imaging Framework: First, the parallelized 3-D FDTD algorithm is applied to simulate composite scattering from targets in a rough ground ... solver as pertinent to forward-looking radar sensing, the effects of surface clutter on multistatic target imaging are illustrated with large-scale ...

  8. A Scalable Framework For Segmenting Magnetic Resonance Images

    PubMed Central

    Hore, Prodip; Goldgof, Dmitry B.; Gu, Yuhua; Maudsley, Andrew A.; Darkazanli, Ammar

    2009-01-01

    A fast, accurate and fully automatic method of segmenting magnetic resonance images of the human brain is introduced. The approach scales well, allowing fast segmentations of fine-resolution images. The approach is based on modifications of the soft clustering algorithm, fuzzy c-means, that enable it to scale to large data sets. Two types of modifications to create incremental versions of fuzzy c-means are discussed. They are much faster than fuzzy c-means for medium to extremely large data sets because they work on successive subsets of the data, and they are comparable in quality to applying fuzzy c-means to all of the data. The clustering algorithms, coupled with inhomogeneity correction and smoothing, are used to create a framework for automatically segmenting magnetic resonance images of the human brain. The framework is applied to a set of normal human brain volumes acquired from different magnetic resonance scanners using different head coils, acquisition parameters and field strengths. Results are compared to those from two widely used magnetic resonance image segmentation programs, Statistical Parametric Mapping and the FMRIB Software Library (FSL). The results are comparable to FSL while providing significant speed-up and better scalability to larger volumes of data. PMID:20046893
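    For reference, the base algorithm being made incremental is ordinary fuzzy c-means; the NumPy sketch below shows one chunk-sized run. The incremental variants discussed in the paper would run this on successive subsets and carry the centroids forward; the details here are illustrative, not the authors' implementation.

        import numpy as np

        def fcm(X, c=3, m=2.0, n_iter=100, seed=0):
            # Standard fuzzy c-means: soft memberships u and centroids are
            # alternately updated for a fixed number of iterations.
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), c, replace=False)]
            for _ in range(n_iter):
                d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
                u = d ** (-2.0 / (m - 1.0))          # membership ~ inverse distance
                u /= u.sum(axis=1, keepdims=True)
                w = u ** m
                centers = (w.T @ X) / w.sum(axis=0)[:, None]
            return centers, u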

  9. Error simulation of paired-comparison-based scaling methods

    NASA Astrophysics Data System (ADS)

    Cui, Chengwu

    2000-12-01

    Subjective image quality measurement usually resorts to psychophysical scaling. However, it is difficult to evaluate the inherent precision of these scaling methods. Without knowing the potential errors of the measurement, subsequent use of the data can be misleading. In this paper, the errors on scaled values derived from paired-comparison-based scaling methods are simulated with a randomly introduced proportion of choice errors that follow the binomial distribution. Simulation results are given for various combinations of the number of stimuli and the sampling size. The errors are presented in the form of the average standard deviation of the scaled values and can be fitted reasonably well with an empirical equation that can be used for scaling error estimation and measurement design. The simulation shows that paired-comparison-based scaling methods can have large errors on the derived scaled values when the sampling size and the number of stimuli are small. Examples are also given to show the potential errors on actually scaled values of color image prints as measured by the method of paired comparison.
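    A minimal version of the simulation described above is easy to reproduce: model the choice proportions as binomial draws around Thurstone Case V probabilities, recover scale values from mean z-scores, and average the error over trials. The SciPy sketch below follows that outline; it is an assumption-laden reconstruction, not the author's code.

        import numpy as np
        from scipy.stats import norm

        def scaling_error(true_vals, n_judgments, n_trials=500, seed=0):
            rng = np.random.default_rng(seed)
            # Case V model probability of preferring stimulus i over j.
            p_model = norm.cdf(true_vals[:, None] - true_vals[None, :])
            errs = []
            for _ in range(n_trials):
                p = rng.binomial(n_judgments, p_model) / n_judgments
                p = np.clip(p, 0.5 / n_judgments, 1 - 0.5 / n_judgments)
                est = norm.ppf(p).mean(axis=1)       # Case V: row means of z
                est -= est.mean()
                errs.append(np.std(est - (true_vals - true_vals.mean())))
            return float(np.mean(errs))              # avg std dev of scaled values

        print(scaling_error(np.linspace(0.0, 2.0, 5), n_judgments=20))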

  10. "Baby-Cam" and Researching with Infants: Viewer, Image and (Not) Knowing

    ERIC Educational Resources Information Center

    Elwick, Sheena

    2015-01-01

    This article offers a methodological reflection on how "baby-cam" enhanced ethically reflective attitudes in a large-scale research project that set out to research with infants in Australian early childhood education and care settings. By juxtaposing digital images produced by two different digital-camera technologies and drawing on…

  11. Recent Developments in VSD Imaging of Small Neuronal Networks

    ERIC Educational Resources Information Center

    Hill, Evan S.; Bruno, Angela M.; Frost, William N.

    2014-01-01

    Voltage-sensitive dye (VSD) imaging is a powerful technique that can provide, in single experiments, a large-scale view of network activity unobtainable with traditional sharp electrode recording methods. Here we review recent work using VSDs to study small networks and highlight several results from this approach. Topics covered include circuit…

  12. Network Access to Visual Information: A Study of Costs and Uses.

    ERIC Educational Resources Information Center

    Besser, Howard

    This paper summarizes a subset of the findings of a study of digital image distribution that focused on the Museum Educational Site Licensing (MESL) project--the first large-scale multi-institutional project to explore digital delivery of art images and accompanying text/metadata from disparate sources. This Mellon Foundation-sponsored study…

  13. Schiaparelli Hemisphere

    NASA Image and Video Library

    1996-06-03

    This mosaic is composed of about 100 red- and violet- filter Viking Orbiter images, digitally mosaicked in an orthographic projection at a scale of 1 km/pixel. The images were acquired in 1980 during mid northern summer on Mars (Ls = 89 degrees). The center of the image is near the impact crater Schiaparelli (latitude -3 degrees, longitude 343 degrees). The limits of this mosaic are approximately latitude -60 to 60 degrees and longitude 280 to 30 degrees. The color variations have been enhanced by a factor of two, and the large-scale brightness variations (mostly due to sun-angle variations) have been normalized by large-scale filtering. The large circular area with a bright yellow color (in this rendition) is known as Arabia. The boundary between the ancient, heavily-cratered southern highlands and the younger northern plains occurs far to the north (latitude 40 degrees) on this side of the planet, just north of Arabia. The dark streaks with bright margins emanating from craters in the Oxia Palus region (to the left of Arabia) are caused by erosion and/or deposition by the wind. The dark blue area on the far right, called Syrtis Major Planum, is a low-relief volcanic shield of probable basaltic composition. Bright white areas to the south, including the Hellas impact basin at the lower right, are covered by carbon dioxide frost. http://photojournal.jpl.nasa.gov/catalog/PIA00004

  14. Video-rate volumetric neuronal imaging using 3D targeted illumination.

    PubMed

    Xiao, Sheng; Tseng, Hua-An; Gritton, Howard; Han, Xue; Mertz, Jerome

    2018-05-21

    Fast volumetric microscopy is required to monitor large-scale neural ensembles with high spatio-temporal resolution. Widefield fluorescence microscopy can image large 2D fields of view at high resolution and speed while remaining simple and cost-effective. A focal sweep add-on can further extend the capacity of widefield microscopy by enabling extended-depth-of-field (EDOF) imaging, but suffers from an inability to reject out-of-focus fluorescence background. Here, by using a digital micromirror device to target only in-focus sample features, we perform EDOF imaging with greatly enhanced contrast and signal-to-noise ratio, while reducing the light dosage delivered to the sample. Image quality is further improved by the application of a robust deconvolution algorithm. We demonstrate the advantages of our technique for in vivo calcium imaging in the mouse brain.

  15. Separation and imaging diffractions by a sparsity-promoting model and subspace trust-region algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Caixia; Zhao, Jingtao; Wang, Yanfei; Wang, Chengxiang; Geng, Weifeng

    2017-03-01

    Small-scale geologic inhomogeneities or discontinuities, such as tiny faults, cavities or fractures, generally have spatial scales comparable to or even smaller than the seismic wavelength. The seismic responses of these objects are therefore encoded in diffractions, and high-resolution imaging becomes possible if we can appropriately image them. As the amplitudes of reflections can be several orders of magnitude larger than those of diffractions, one of the key problems of diffraction imaging is to suppress reflections while preserving diffractions. A sparsity-promoting method for separating diffractions in the common-offset domain is proposed that uses the Kirchhoff integral formula to enforce the sparsity of diffractions and the linear Radon transform to model reflections. A subspace trust-region algorithm that provides globally convergent solutions is employed to solve this large-scale computational problem. The method not only allows the separation of diffractions in the case of interfering events but also ensures high fidelity of the separated diffractions. A numerical experiment and a field application demonstrate the good performance of the proposed method in imaging small-scale geological features related to the migration channels and storage spaces of carbonate reservoirs.
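    The separation above is posed as a sparsity-promoting inverse problem solved with a subspace trust-region algorithm. As a generic illustration of that objective class (not the authors' solver), the sketch below applies iterative soft-thresholding to min_x 0.5*||A x - d||^2 + lam*||x||_1, where A would stand in for the Kirchhoff/Radon modeling operator.

        import numpy as np

        def ista(A, d, lam, n_iter=300):
            # Iterative soft-thresholding for 0.5*||A x - d||^2 + lam*||x||_1.
            L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of grad
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                x = x - (A.T @ (A @ x - d)) / L      # gradient step on data term
                x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # shrink
            return x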

  16. Current challenges in quantifying preferential flow through the vadose zone

    NASA Astrophysics Data System (ADS)

    Koestel, John; Larsbo, Mats; Jarvis, Nick

    2017-04-01

    In this presentation, we give an overview of current challenges in quantifying preferential flow through the vadose zone. A review of the literature suggests that current generation models do not fully reflect the present state of process understanding and empirical knowledge of preferential flow. We believe that the development of improved models will be stimulated by the increasingly widespread application of novel imaging technologies as well as future advances in computational power and numerical techniques. One of the main challenges in this respect is to bridge the large gap between the scales at which preferential flow occurs (pore to Darcy scales) and the scale of interest for management (fields, catchments, regions). Studies at the pore scale are being supported by the development of 3-D non-invasive imaging and numerical simulation techniques. These studies are leading to a better understanding of how macropore network topology and initial/boundary conditions control key state variables like matric potential and thus the strength of preferential flow. Extrapolation of this knowledge to larger scales would require support from theoretical frameworks such as key concepts from percolation and network theory, since we lack measurement technologies to quantify macropore networks at these large scales. Linked hydro-geophysical measurement techniques that produce highly spatially and temporally resolved data enable investigation of the larger-scale heterogeneities that can generate preferential flow patterns at pedon, hillslope and field scales. At larger regional and global scales, improved methods of data-mining and analyses of large datasets (machine learning) may help in parameterizing models as well as lead to new insights into the relationships between soil susceptibility to preferential flow and site attributes (climate, land uses, soil types).

  17. Large Scale Hierarchical K-Means Based Image Retrieval With MapReduce

    DTIC Science & Technology

    2014-03-27

    ... million images running on 20 virtual machines are shown. Subject terms: Image Retrieval, MapReduce, Hierarchical K-Means, Big Data, Hadoop.

  18. Satellite-based peatland mapping: potential of the MODIS sensor.

    Treesearch

    D. Pflugmacher; O.N. Krankina; W.B. Cohen

    2006-01-01

    Peatlands play a major role in the global carbon cycle but are largely overlooked in current large-scale vegetation mapping efforts. In this study, we investigated the potential of the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor to capture extent and distribution of peatlands in the St. Petersburg region of Russia.

  19. Modal Analysis of an Aircraft Fuselage Panel Using Experimental and Finite-Element Techniques

    NASA Technical Reports Server (NTRS)

    Fleming, Gary A.; Buehrle, Ralph D.; Storaasli, Olaf L.

    1998-01-01

    The application of Electro-Optic Holography (EOH) for measuring the center-bay vibration modes of an aircraft fuselage panel under forced excitation is presented. The requirement of free-free panel boundary conditions made the acquisition of quantitative EOH data challenging, since large-scale rigid-body motions corrupted measurements of the high-frequency vibrations of interest. Image processing routines designed to minimize the effects of large-scale motions were applied to successfully resurrect quantitative EOH vibrational amplitude measurements.

  20. Characterizing stroke lesions using digital templates and lesion quantification tools in a web-based imaging informatics system for a large-scale stroke rehabilitation clinical trial

    NASA Astrophysics Data System (ADS)

    Wang, Ximing; Edwardson, Matthew; Dromerick, Alexander; Winstein, Carolee; Wang, Jing; Liu, Brent

    2015-03-01

    Previously, we presented an Interdisciplinary Comprehensive Arm Rehabilitation Evaluation (ICARE) imaging informatics system that supports a large-scale phase III stroke rehabilitation trial. The ePR system is capable of displaying anonymized patient imaging studies and reports, and the system is accessible to multiple clinical trial sites and users across the United States via the web. However, prior multicenter stroke rehabilitation trials lacked any significant neuroimaging analysis infrastructure. In stroke-related clinical trials, identification of stroke lesion characteristics can be meaningful, as recent research shows that lesion characteristics are related to stroke scale and functional recovery after stroke. To facilitate stroke clinical trials, we hope to gain insight into specific lesion characteristics, such as vascular territory, for patients enrolled in large stroke rehabilitation trials. To enhance the system's capability for data analysis and reporting, we have integrated new features into the system: a digital brain template display, a lesion quantification tool and a digital case report form. The digital brain templates are compiled from published vascular territory templates at each of 5 angles of incidence. These templates were updated to include territories in the brainstem using a vascular territory atlas and the Medical Image Processing, Analysis and Visualization (MIPAV) tool. The digital templates are displayed for side-by-side comparisons and transparent template overlay onto patients' images in the image viewer. The lesion quantification tool quantifies planimetric lesion area from a user-defined contour. The digital case report form stores user input into a database, then displays the contents in the interface to allow reviewing, editing, and new inputs. In sum, the newly integrated system features provide the user with readily accessible web-based tools to identify the vascular territory involved, estimate lesion area, and store these results in a web-based digital format.
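    One plausible implementation of the planimetric quantification described above is the shoelace formula over the user-drawn contour vertices; the sketch below is an assumption about how such a tool might compute area, with the pixel spacing supplied by the image header.

        def contour_area(points, pixel_spacing_mm=1.0):
            # Shoelace formula over a closed polygon of (x, y) pixel vertices;
            # scaled by the in-plane pixel spacing squared to give mm^2.
            area = 0.0
            n = len(points)
            for i in range(n):
                x1, y1 = points[i]
                x2, y2 = points[(i + 1) % n]
                area += x1 * y2 - x2 * y1
            return abs(area) / 2.0 * pixel_spacing_mm ** 2

        print(contour_area([(0, 0), (10, 0), (10, 5), (0, 5)]))  # -> 50.0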

  1. Constraining Large-Scale Solar Magnetic Field Models with Optical Coronal Observations

    NASA Astrophysics Data System (ADS)

    Uritsky, V. M.; Davila, J. M.; Jones, S. I.

    2015-12-01

    Scientific success of the Solar Probe Plus (SPP) and Solar Orbiter (SO) missions will depend to a large extent on the accuracy of the available coronal magnetic field models describing the connectivity of plasma disturbances in the inner heliosphere with their source regions. We argue that ground-based and satellite coronagraph images can provide robust geometric constraints for the next generation of improved coronal magnetic field extrapolation models. In contrast to the previously proposed loop segmentation codes designed for detecting compact closed-field structures above solar active regions, we focus on the large-scale geometry of the open-field coronal regions located at significant radial distances from the solar surface. Details of the new feature detection algorithms will be presented. By applying the developed image processing methodology to high-resolution Mauna Loa Solar Observatory images, we perform an optimized 3D B-line tracing for a full Carrington rotation using the magnetic field extrapolation code presented in a companion talk by S. Jones et al. Tracing results are shown to be in good qualitative agreement with the large-scale configuration of the optical corona. Subsequent phases of the project, the related data products for the SPP and SO missions, and the supporting global heliospheric simulations will be discussed.

  2. The Hyper Suprime-Cam software pipeline

    NASA Astrophysics Data System (ADS)

    Bosch, James; Armstrong, Robert; Bickerton, Steven; Furusawa, Hisanori; Ikeda, Hiroyuki; Koike, Michitaro; Lupton, Robert; Mineo, Sogo; Price, Paul; Takata, Tadafumi; Tanaka, Masayuki; Yasuda, Naoki; AlSayyad, Yusra; Becker, Andrew C.; Coulton, William; Coupon, Jean; Garmilla, Jose; Huang, Song; Krughoff, K. Simon; Lang, Dustin; Leauthaud, Alexie; Lim, Kian-Tat; Lust, Nate B.; MacArthur, Lauren A.; Mandelbaum, Rachel; Miyatake, Hironao; Miyazaki, Satoshi; Murata, Ryoma; More, Surhud; Okura, Yuki; Owen, Russell; Swinbank, John D.; Strauss, Michael A.; Yamada, Yoshihiko; Yamanoi, Hitomi

    2018-01-01

    In this paper, we describe the optical imaging data processing pipeline developed for the Subaru Telescope's Hyper Suprime-Cam (HSC) instrument. The HSC Pipeline builds on the prototype pipeline being developed by the Large Synoptic Survey Telescope's Data Management system, adding customizations for HSC, large-scale processing capabilities, and novel algorithms that have since been reincorporated into the LSST codebase. While designed primarily to reduce HSC Subaru Strategic Program (SSP) data, it is also the recommended pipeline for reducing general-observer HSC data. The HSC pipeline includes high-level processing steps that generate coadded images and science-ready catalogs as well as low-level detrending and image characterizations.

  3. Direct evidence for Lyα depletion in the protocluster core

    NASA Astrophysics Data System (ADS)

    Shimakawa, Rhythm; Kodama, Tadayuki; Hayashi, Masao; Tanaka, Ichi; Matsuda, Yuichi; Kashikawa, Nobunari; Shibuya, Takatoshi; Tadaki, Ken-ichi; Koyama, Yusei; Suzuki, Tomoko L.; Yamamoto, Moegi

    2017-06-01

    We have carried out panoramic Lyα narrow-band imaging with Suprime-Cam on Subaru towards the known protocluster USS1558-003 at z = 2.53. Our previous narrow-band imaging in the near-infrared identified multiple dense groups of Hα emitters (HAEs) within the protocluster. We have now identified the large-scale structures across a ~50 comoving Mpc scale traced by Lyα emitters (LAEs) in which the protocluster traced by the HAEs is embedded. On a smaller scale, however, there are remarkably few LAEs in the regions of HAE overdensities. Moreover, stacking analyses of the images show that HAEs in higher-density regions have systematically lower escape fractions of Lyα photons than HAEs in lower-density regions. These phenomena may be driven by the extra depletion of Lyα emission lines along our line of sight by more intervening cold circumgalactic/intergalactic medium and/or dust in the dense core. We also caution that all previous high-z protocluster surveys using LAEs as tracers would have largely missed galaxies in the very dense cores of the protoclusters, where we would expect to see any early environmental effects.

  4. GenomeDiagram: a python package for the visualization of large-scale genomic data.

    PubMed

    Pritchard, Leighton; White, Jennifer A; Birch, Paul R J; Toth, Ian K

    2006-03-01

    We present GenomeDiagram, a flexible, open-source Python module for the visualization of large-scale genomic, comparative genomic and other data with reference to a single chromosome or other biological sequence. GenomeDiagram may be used to generate publication-quality vector graphics, rastered images and in-line streamed graphics for webpages. The package integrates with datatypes from the BioPython project, and is available for Windows, Linux and Mac OS X systems. GenomeDiagram is freely available as source code (under GNU Public License) at http://bioinf.scri.ac.uk/lp/programs.html, and requires Python 2.3 or higher, and recent versions of the ReportLab and BioPython packages. A user manual, example code and images are available at http://bioinf.scri.ac.uk/lp/programs.html.
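    GenomeDiagram's API (later folded into Biopython as Bio.Graphics.GenomeDiagram) follows a track/feature-set pattern; the sketch below shows typical usage against a GenBank record. The input file name is hypothetical, and minor API details may differ between versions.

        from Bio import SeqIO
        from Bio.Graphics import GenomeDiagram
        from reportlab.lib import colors

        record = SeqIO.read("example.gbk", "genbank")    # hypothetical input file
        diagram = GenomeDiagram.Diagram("Example genome")
        track = diagram.new_track(1, name="CDS features", greytrack=True)
        feature_set = track.new_set()
        for feat in record.features:
            if feat.type == "CDS":
                feature_set.add_feature(feat, color=colors.lightblue, label=True)
        diagram.draw(format="linear", pagesize="A4", fragments=4,
                     start=0, end=len(record))
        diagram.write("example_diagram.pdf", "PDF")      # vector output, as noted above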

  5. Signature detection and matching for document image retrieval.

    PubMed

    Zhu, Guangyu; Zheng, Yefeng; Doermann, David; Jaeger, Stefan

    2009-11-01

    As one of the most pervasive methods of individual identification and document authentication, signatures present convincing evidence and provide an important form of indexing for effective document image processing and retrieval in a broad range of applications. However, the detection and segmentation of free-form objects such as signatures from cluttered backgrounds is currently an open document analysis problem. In this paper, we focus on two fundamental problems in signature-based document image retrieval. First, we propose a novel multiscale approach to jointly detecting and segmenting signatures from document images. Rather than focusing on local features that typically have large variations, our approach captures the structural saliency using a signature production model and computes the dynamic curvature of 2D contour fragments over multiple scales. This detection framework is general and computationally tractable. Second, we treat the problem of signature retrieval in the unconstrained setting of translation-, scale-, and rotation-invariant nonrigid shape matching. We propose two novel measures of shape dissimilarity based on anisotropic scaling and registration residual error, and present a supervised learning framework for combining complementary shape information from different dissimilarity metrics using LDA. We quantitatively study state-of-the-art shape representations, shape matching algorithms, measures of dissimilarity, and the use of multiple instances as query in document image retrieval. We further demonstrate our matching techniques in offline signature verification. Extensive experiments using large real-world collections of English and Arabic machine-printed and handwritten documents demonstrate the excellent performance of our approaches.
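    The final fusion step, combining complementary dissimilarity measures with LDA, reduces to fitting a linear discriminant on a two-column feature of (anisotropic-scaling dissimilarity, registration residual error) per candidate pair. The scikit-learn sketch below uses toy numbers purely for illustration.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        # One row per candidate pair: [scaling dissimilarity, residual error];
        # label 1 = same signer, 0 = different signer (toy values).
        D = np.array([[0.12, 0.30], [0.90, 1.10], [0.15, 0.25],
                      [0.80, 0.95], [0.20, 0.40], [1.00, 1.20]])
        y = np.array([1, 0, 1, 0, 1, 0])

        lda = LinearDiscriminantAnalysis().fit(D, y)
        print(lda.decision_function(D))   # combined matching score per pair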

  6. HIGH-EFFICIENCY AUTONOMOUS LASER ADAPTIVE OPTICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baranec, Christoph; Riddle, Reed; Tendulkar, Shriharsh

    2014-07-20

    As new large-scale astronomical surveys greatly increase the number of objects targeted and discoveries made, the requirement for efficient follow-up observations is crucial. Adaptive optics imaging, which compensates for the image-blurring effects of Earth's turbulent atmosphere, is essential for these surveys, but the scarcity, complexity and high demand of current systems limit their availability for following up large numbers of targets. To address this need, we have engineered and implemented Robo-AO, a fully autonomous laser adaptive optics and imaging system that routinely images over 200 objects per night with an acuity 10 times sharper at visible wavelengths than typically possible from the ground. By greatly improving the angular resolution, sensitivity, and efficiency of 1-3 m class telescopes, we have eliminated a major obstacle in the follow-up of the discoveries from current and future large astronomical surveys.

  7. Enabling Interactive Measurements from Large Coverage Microscopy

    PubMed Central

    Bajcsy, Peter; Vandecreme, Antoine; Amelot, Julien; Chalfoun, Joe; Majurski, Michael; Brady, Mary

    2017-01-01

    Microscopy could be an important tool for characterizing stem cell products if quantitative measurements could be collected over multiple spatial and temporal scales. Cells change state over time and are several orders of magnitude smaller than the cell product as a whole, and modern microscopes are already capable of imaging large spatial areas, repeating imaging over time, and acquiring images over several spectra. However, characterizing stem cell products from such large image collections is challenging because of data size, required computations, and the lack of interactive quantitative measurements needed to determine release criteria. We present a measurement web system consisting of available algorithms, extensions to a client-server framework using Deep Zoom, and the configuration know-how to provide the information needed for inspecting the quality of a cell product. The cell and other data sets are accessible via the prototype web-based system at http://isg.nist.gov/deepzoomweb. PMID:28663600

  8. Event management for large scale event-driven digital hardware spiking neural networks.

    PubMed

    Caron, Louis-Charles; D'Haene, Michiel; Mailhot, Frédéric; Schrauwen, Benjamin; Rouat, Jean

    2013-09-01

    The interest in brain-like computation has led to the design of a plethora of innovative neuromorphic systems. Individually, spiking neural networks (SNNs), event-driven simulation and digital hardware neuromorphic systems get a lot of attention. Despite the popularity of event-driven SNNs in software, very few digital hardware architectures are found. This is because existing hardware solutions for event management scale badly with the number of events. This paper introduces the structured heap queue, a pipelined digital hardware data structure, and demonstrates its suitability for event management. The structured heap queue scales gracefully with the number of events, allowing the efficient implementation of large scale digital hardware event-driven SNNs. The scaling is linear for memory, logarithmic for logic resources and constant for processing time. The use of the structured heap queue is demonstrated on a field-programmable gate array (FPGA) with an image segmentation experiment and a SNN of 65,536 neurons and 513,184 synapses. Events can be processed at the rate of 1 every 7 clock cycles and a 406×158 pixel image is segmented in 200 ms. Copyright © 2013 Elsevier Ltd. All rights reserved.
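    The event-ordering behavior (though not the pipelined hardware structure or its scaling) can be mimicked in software; a toy Python analogue using the standard-library heap is sketched below under assumed event semantics.

        # Sketch: a priority queue of (time, neuron) spike events, popped
        # in temporal order as an event-driven SNN simulator would.
        import heapq

        events = []
        heapq.heappush(events, (0.7, 42))
        heapq.heappush(events, (0.2, 7))
        heapq.heappush(events, (0.5, 13))

        SYNAPTIC_DELAY = 0.1
        while events:
            t, neuron = heapq.heappop(events)   # earliest event first
            print(f"t={t:.2f}: neuron {neuron} spikes")
            # A real simulator would push spike events for the neuron's
            # targets here; we propagate a single illustrative event.
            if neuron == 7:
                heapq.heappush(events, (t + SYNAPTIC_DELAY, 8))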

  9. Robust Optical Recognition of Cursive Pashto Script Using Scale, Rotation and Location Invariant Approach

    PubMed Central

    Ahmad, Riaz; Naz, Saeeda; Afzal, Muhammad Zeshan; Amin, Sayed Hassan; Breuel, Thomas

    2015-01-01

    The presence of a large number of unique shapes called ligatures in cursive languages, along with variations due to scaling, orientation and location, poses one of the most challenging pattern recognition problems. Recognition of the large number of ligatures is often a complicated task in oriental languages such as Pashto, Urdu, Persian and Arabic. Research on cursive script recognition often ignores the fact that scaling, orientation, location and font variations are common in printed cursive text. Therefore, these variations are not included in image databases and in experimental evaluations. This research uncovers challenges faced by Arabic cursive script recognition in a holistic framework by considering Pashto as a test case, because the Pashto language has a larger alphabet set than Arabic, Persian and Urdu. A database containing 8000 images of 1000 unique ligatures with scaling, orientation and location variations is introduced. In this article, a feature space based on the scale invariant feature transform (SIFT), along with a segmentation framework, is proposed for overcoming the above-mentioned challenges. The experimental results show a significantly improved performance of the proposed scheme over traditional feature extraction techniques such as principal component analysis (PCA). PMID:26368566
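    The SIFT-based matching idea can be sketched with OpenCV as below; this is a generic illustration of scale/rotation-invariant keypoint matching, not the paper's segmentation framework, and the file names are placeholders.

        # Sketch: match a ligature image against a reference with SIFT.
        import cv2

        query = cv2.imread("ligature_query.png", cv2.IMREAD_GRAYSCALE)
        ref = cv2.imread("ligature_ref.png", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kq, dq = sift.detectAndCompute(query, None)
        kr, dr = sift.detectAndCompute(ref, None)

        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(dq, dr, k=2)

        # Lowe's ratio test keeps only distinctive correspondences.
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        print(f"{len(good)} good matches")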

  10. Images of Bottomside Irregularities Observed at Topside Altitudes

    NASA Technical Reports Server (NTRS)

    Burke, William J.; Gentile, Louise C.; Shomo, Shannon R.; Roddy, Patrick A.; Pfaff, Robert F.

    2012-01-01

    We analyzed plasma and field measurements acquired by the Communication/ Navigation Outage Forecasting System (C/NOFS) satellite during an eight-hour period on 13-14 January 2010 when strong to moderate 250 MHz scintillation activity was observed at nearby Scintillation Network Decision Aid (SCINDA) ground stations. C/NOFS consistently detected relatively small-scale density and electric field irregularities embedded within large-scale (approx 100 km) structures at topside altitudes. Significant spectral power measured at the Fresnel (approx 1 km) scale size suggests that C/NOFS was magnetically conjugate to bottomside irregularities similar to those directly responsible for the observed scintillations. Simultaneous ion drift and plasma density measurements indicate three distinct types of large-scale irregularities: (1) upward moving depletions, (2) downward moving depletions, and (3) upward moving density enhancements. The first type has the characteristics of equatorial plasma bubbles; the second and third do not. The data suggest that both downward moving depletions and upward moving density enhancements and the embedded small-scale irregularities may be regarded as Alfvenic images of bottomside irregularities. This interpretation is consistent with predictions of previously reported theoretical modeling and with satellite observations of upward-directed Poynting flux in the low-latitude ionosphere.

  11. [Research on non-rigid registration of multi-modal medical image based on Demons algorithm].

    PubMed

    Hao, Peibo; Chen, Zhen; Jiang, Shaofeng; Wang, Yang

    2014-02-01

    Non-rigid medical image registration is a popular subject in medical image research and has important clinical value. In this paper we put forward an improved Demons algorithm that combines a gray-level conservation model with a local structure tensor conservation model to construct a new energy function for the multi-modal registration problem. We then applied the L-BFGS algorithm to optimize the energy function and solve the complex three-dimensional optimization problem. Finally, we used multi-scale hierarchical refinement to handle large-deformation registration. The experimental results showed that the proposed algorithm had good effects for large-deformation and multi-modal three-dimensional medical image registration.
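    For context, a baseline (classic, single-modality) Demons registration can be run with SimpleITK as sketched below; the paper's gray-level/structure-tensor energy and L-BFGS optimization are not part of this stock filter, and the file names are placeholders.

        # Sketch: classic Demons registration with SimpleITK.
        import SimpleITK as sitk

        fixed = sitk.ReadImage("fixed.nii", sitk.sitkFloat32)
        moving = sitk.ReadImage("moving.nii", sitk.sitkFloat32)

        demons = sitk.DemonsRegistrationFilter()
        demons.SetNumberOfIterations(100)
        demons.SetStandardDeviations(1.5)   # Gaussian smoothing of the field

        displacement = demons.Execute(fixed, moving)
        warped = sitk.Warp(moving, displacement)
        sitk.WriteImage(warped, "registered.nii")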

  12. Design and realization of retina-like three-dimensional imaging based on a MOEMS mirror

    NASA Astrophysics Data System (ADS)

    Cao, Jie; Hao, Qun; Xia, Wenze; Peng, Yuxin; Cheng, Yang; Mu, Jiaxing; Wang, Peng

    2016-07-01

    To balance the conflicting demands of high-resolution, large-field-of-view and real-time imaging, a retina-like imaging method based on time-of-flight (TOF) is proposed. Mathematical models of 3D imaging based on a MOEMS mirror are developed. Based on this method, we perform simulations of retina-like scanning properties, including compression of redundant information and rotation and scaling invariance. To validate the theory, we developed a prototype and conducted relevant experiments. The preliminary results agree well with the simulations.
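    The "retina-like" sampling pattern such scanners emulate can be illustrated with a log-polar grid, as in the sketch below; the ring and sector counts are illustrative choices, not the paper's MOEMS parameters.

        # Sketch: a log-polar scan grid, dense at the fovea, sparse at rim.
        import numpy as np

        n_rings, n_sectors = 16, 32
        r = 0.01 * 1.5 ** np.arange(n_rings)      # geometrically growing radii
        theta = np.linspace(0, 2 * np.pi, n_sectors, endpoint=False)
        R, T = np.meshgrid(r, theta)
        x, y = R * np.cos(T), R * np.sin(T)       # mirror scan positions
        print(x.shape)                            # (32, 16) sample points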

  13. Scalable subsurface inverse modeling of huge data sets with an application to tracer concentration breakthrough data from magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.; Werth, Charles J.; Valocchi, Albert J.

    2016-07-01

    Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydrogeophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with "big data" processing and numerous large-scale numerical simulations. To tackle such difficulties, the principal component geostatistical approach (PCGA) has been proposed as a "Jacobian-free" inversion method that requires far fewer forward simulation runs per iteration than the numbers of unknown parameters and measurements needed in traditional inversion methods. PCGA can be conveniently linked to any multiphysics simulation software with independent parallel executions. In this paper, we extend PCGA to handle a large number of measurements (e.g., 10^6 or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation has little information on the K distribution, the data were compressed by the zeroth temporal moment of breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Only about 2000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method.
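    The moment-based data compression mentioned above amounts to integrating each breakthrough curve; a minimal numpy illustration on a synthetic curve follows (the paper equates its zeroth moment with the mean travel time under its experimental normalization; the usual definition of mean travel time is the ratio m1/m0).

        # Sketch: temporal moments of a synthetic breakthrough curve.
        import numpy as np

        t = np.linspace(0.0, 100.0, 501)                   # time axis
        c = np.exp(-((t - 40.0) ** 2) / (2 * 8.0 ** 2))    # concentration

        m0 = np.trapz(c, t)          # zeroth moment: area under the curve
        m1 = np.trapz(t * c, t)      # first moment
        print(m0, m1 / m0)           # m1/m0 ~ 40, the mean travel time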

  14. Self-Assembly of Large-Scale Shape-Controlled DNA Nano-Structures

    DTIC Science & Technology

    2014-12-16

    TEM imaging (fragmentary methods text, cleaned): annealed samples (1-5 nM) were adsorbed for 2-4 min onto glow-discharged, carbon-coated TEM grids and then stained for about 1 min with a 2% aqueous uranyl formate solution containing 25 mM NaOH; incubations were performed at room temperature for 3 h in the dark.

  15. 3D fast adaptive correlation imaging for large-scale gravity data based on GPU computation

    NASA Astrophysics Data System (ADS)

    Chen, Z.; Meng, X.; Guo, L.; Liu, G.

    2011-12-01

    In recent years, large-scale gravity data sets have been collected and employed to enhance the gravity problem-solving abilities of tectonics studies in China. Aiming at such large-scale data and the requirement of rapid interpretation, previous authors have carried out a lot of work, including fast gradient-module inversion and Euler deconvolution depth inversion, 3-D physical property inversion using stochastic subspaces and equivalent storage, and fast inversion using wavelet transforms and a logarithmic barrier method. It can therefore be said that 3-D gravity inversion has greatly improved in the last decade. Many authors have added different kinds of a priori information and constraints to deal with nonuniqueness, using models composed of a large number of contiguous cells of unknown property, and obtained good results. However, due to long computation times, instability and other shortcomings, 3-D physical property inversion has not yet been widely applied to large-scale data. In order to achieve 3-D interpretation with high efficiency and precision for geological and ore bodies and obtain their subsurface distribution, there is an urgent need for a fast and efficient inversion method for large-scale gravity data. As a relatively new geophysical inversion method, 3D correlation imaging has developed rapidly thanks to the advantages of requiring no a priori information and demanding a small amount of computer memory. This method was proposed to image the distribution of equivalent excess masses of anomalous geological bodies with high resolution both longitudinally and transversely. In order to transform the equivalent excess masses into real density contrasts, we adopt adaptive correlation imaging for gravity data. After each 3D correlation imaging step, we convert the equivalent masses into density contrasts according to the linear relationship, and then carry out a forward gravity calculation for each rectangular cell. Next, we compare the forward gravity data with the real data, and continue to perform 3D correlation imaging on the residual gravity data. After several iterations, we obtain a satisfactory result. Newly developed general-purpose computing technology for Nvidia GPUs (Graphics Processing Units) has been put into practice and received widespread attention in many areas. Based on the GPU programming model and its two levels of parallelism, five CPU loops for the main computation of 3D correlation imaging are converted into three loops in GPU kernel functions, thus achieving GPU/CPU collaborative computing. The two inner loops are mapped to the dimensions of blocks and the three outer loops to the dimensions of threads, realizing the double-loop block calculation. Tests on theoretical and real gravity data show that the results are reliable and the computing time is greatly reduced. Acknowledgments: We acknowledge the financial support of the Sinoprobe project (201011039 and 201011049-03), the Fundamental Research Funds for the Central Universities (2010ZY26 and 2011PY0183), the National Natural Science Foundation of China (41074095) and the Open Project of the State Key Laboratory of Geological Processes and Mineral Resources (GPMR0945).
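    The iterate-and-correct loop described above can be outlined as follows; correlation_image() and forward_gravity() are hypothetical stand-ins for the correlation imaging and forward modelling steps, so this shows control flow only.

        # Skeleton of the adaptive correlation imaging iteration.
        import numpy as np

        def correlation_image(g_res, shape=(10, 10, 10)):
            # Placeholder: residual gravity -> density-contrast update.
            return np.full(shape, g_res.mean() * 1e-3)

        def forward_gravity(density, n_obs=100):
            # Placeholder: forward gravity of the cell model.
            return np.full(n_obs, density.sum() * 1e-6)

        g_res = np.random.rand(100)         # observed gravity anomaly
        density = np.zeros((10, 10, 10))    # density-contrast model
        for _ in range(5):
            update = correlation_image(g_res)
            density += update
            g_res = g_res - forward_gravity(update)   # new residual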

  16. Iterative quantization: a Procrustean approach to learning binary codes for large-scale image retrieval.

    PubMed

    Gong, Yunchao; Lazebnik, Svetlana; Gordo, Albert; Perronnin, Florent

    2013-12-01

    This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or "classemes" on the ImageNet data set.
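    The core alternation is compact enough to sketch in numpy, assuming X is already zero-centered and PCA-projected (a sketch of the published algorithm, not the authors' code):

        # Sketch: ITQ's alternating minimization of quantization error.
        import numpy as np

        def itq(X, n_iter=50, seed=0):
            rng = np.random.default_rng(seed)
            d = X.shape[1]
            R, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random rotation
            for _ in range(n_iter):
                B = np.sign(X @ R)                 # fix R, update codes
                U, _, Vt = np.linalg.svd(X.T @ B)  # fix B: Procrustes
                R = U @ Vt
            return np.sign(X @ R), R

        X = np.random.default_rng(1).standard_normal((1000, 32))
        X -= X.mean(axis=0)           # zero-centered (post-PCA) data
        codes, R = itq(X)             # binary codes in {-1, +1}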

  17. Identifying the Source of Large-Scale Atmospheric Variability in Jupiter

    NASA Astrophysics Data System (ADS)

    Orton, Glenn

    2011-01-01

    We propose to use the unique mid-infrared filtered imaging and spectroscopic capabilities of the Subaru COMICS instrument to determine the mechanisms associated with recent unusual rapid albedo and color transformations of several of Jupiter's bands, particularly its South Equatorial Belt (SEB), as a means to understand the coupling between its dynamics and chemistry. These observations will characterize the temperature, degree of cloud cover, and distribution of minor gases that serve as indirect tracers of vertical motions in regions that will be undergoing unusual large-scale changes in dynamics and chemistry: the SEB, as well as regions near the equator and Jupiter's North Temperate Belt. COMICS is ideal for this investigation because of its efficiency in doing both imaging and spectroscopy, its 24.5-μm filter that is unique to 8-meter-class telescopes, its wide field of view that allows imaging of nearly all of Jupiter's disk, coupled with a high diffraction-limited angular resolution and optimal mid-infrared atmospheric transparency.

  18. Saturnian atmospheric storm

    NASA Technical Reports Server (NTRS)

    1981-01-01

    A vortex, or large atmospheric storm, is visible at 74° north latitude in this color composite of Voyager 2 Saturn images obtained Aug. 25 from a range of 1 million kilometers (620,000 miles). Three wide-angle-camera images taken through green, orange and blue filters were used. This particular storm system seems to be one of the few large-scale structures in Saturn's polar region, which otherwise is dominated by much smaller-scale features suggesting convection. The darker, bluish structure (upper right) oriented east to west strongly suggests the presence of a jet stream at these high latitudes. The appearance of a strong east-west flow in the polar region could have a major influence on models of Saturn's atmospheric circulation, if the existence of such a flow can be substantiated in time sequences of Voyager images. The smallest features visible in this photograph are about 20 km (12 mi.) across. The Voyager project is managed for NASA by the Jet Propulsion Laboratory, Pasadena, Calif.

  19. Characterization of string cavitation in large-scale Diesel nozzles with tapered holes

    NASA Astrophysics Data System (ADS)

    Gavaises, M.; Andriotis, A.; Papoulias, D.; Mitroglou, N.; Theodorakakos, A.

    2009-05-01

    The cavitation structures formed inside enlarged transparent replicas of tapered Diesel valve-covered orifice nozzles have been characterized using high-speed imaging. Cavitation images obtained at fixed needle lift and flow rate conditions have revealed that although the conical shape of the converging tapered holes suppresses the formation of geometric cavitation, forming at the entry to the cylindrical injection hole, string cavitation has been found to prevail, particularly at low needle lifts. Computational fluid dynamics simulations have shown that cavitation strings appear in areas where large-scale vortices develop. The vortical structures are mainly formed upstream of the injection holes due to the nonuniform flow distribution and persist also inside them. Cavitation strings have been frequently observed to link adjacent holes, while inspection of identical real-size injectors has revealed cavitation erosion sites in the area of string cavitation development. Image postprocessing has allowed estimation of their frequency of appearance, lifetime, and size along the injection hole length, as a function of cavitation and Reynolds numbers and needle lift.

  20. An ultrahigh vacuum fast-scanning and variable temperature scanning tunneling microscope for large scale imaging.

    PubMed

    Diaconescu, Bogdan; Nenchev, Georgi; de la Figuera, Juan; Pohl, Karsten

    2007-10-01

    We describe the design and performance of a fast-scanning, variable temperature scanning tunneling microscope (STM) operating from 80 to 700 K in ultrahigh vacuum (UHV), which routinely achieves large scale atomically resolved imaging of compact metallic surfaces. An efficient in-vacuum vibration isolation and cryogenic system allows for no external vibration isolation of the UHV chamber. The design of the sample holder and STM head permits imaging of the same nanometer-size area of the sample before and after sample preparation outside the STM base. Refractory metal samples are frequently annealed up to 2000 K and their cooldown time from room temperature to 80 K is 15 min. The vertical resolution of the instrument was found to be about 2 pm at room temperature. The coarse motor design allows both translation and rotation of the scanner tube. The total scanning area is about 8 × 8 μm^2. The sample temperature can be adjusted by a few tens of degrees while scanning over the same sample area.

  1. Cytology of DNA Replication Reveals Dynamic Plasticity of Large-Scale Chromatin Fibers.

    PubMed

    Deng, Xiang; Zhironkina, Oxana A; Cherepanynets, Varvara D; Strelkova, Olga S; Kireev, Igor I; Belmont, Andrew S

    2016-09-26

    In higher eukaryotic interphase nuclei, the 100- to >1,000-fold linear compaction of chromatin is difficult to reconcile with its function as a template for transcription, replication, and repair. It is challenging to imagine how DNA and RNA polymerases with their associated molecular machinery would move along the DNA template without transient decondensation of observed large-scale chromatin "chromonema" fibers [1]. Transcription or "replication factory" models [2], in which polymerases remain fixed while DNA is reeled through, are similarly difficult to conceptualize without transient decondensation of these chromonema fibers. Here, we show how a dynamic plasticity of chromatin folding within large-scale chromatin fibers allows DNA replication to take place without significant changes in the global large-scale chromatin compaction or shape of these large-scale chromatin fibers. Time-lapse imaging of lac-operator-tagged chromosome regions shows no major change in the overall compaction of these chromosome regions during their DNA replication. Improved pulse-chase labeling of endogenous interphase chromosomes yields a model in which the global compaction and shape of large-Mbp chromatin domains remains largely invariant during DNA replication, with DNA within these domains undergoing significant movements and redistribution as they move into and then out of adjacent replication foci. In contrast to hierarchical folding models, this dynamic plasticity of large-scale chromatin organization explains how localized changes in DNA topology allow DNA replication to take place without an accompanying global unfolding of large-scale chromatin fibers while suggesting a possible mechanism for maintaining epigenetic programming of large-scale chromatin domains throughout DNA replication. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Hubble Space Telescope Imaging of the Circumnuclear Environments of the CfA Seyfert Galaxies: Nuclear Spirals and Fueling

    NASA Technical Reports Server (NTRS)

    Pogge, Richard W.; Martini, Paul

    2002-01-01

    We present archival Hubble Space Telescope (HST) images of the nuclear regions of 43 of the 46 Seyfert galaxies found in the volume-limited, spectroscopically complete CfA Redshift Survey sample. Using an improved method of image contrast enhancement, we created detailed high-quality "structure maps" that allow us to study the distributions of dust, star clusters, and emission-line gas in the circumnuclear regions (100-1000 pc scales) and in the associated host galaxy. Essentially all of these Seyfert galaxies have circumnuclear dust structures with morphologies ranging from grand-design two-armed spirals to chaotic dusty disks. In most Seyfert galaxies there is a clear physical connection between the nuclear dust spirals on hundreds-of-parsec scales and large-scale bars and spiral arms in the host galaxies proper. These connections are particularly striking in the interacting and barred galaxies. Such structures are predicted by numerical simulations of gas flows in barred and interacting galaxies and may be related to the fueling of active galactic nuclei by matter inflow from the host galaxy disks. We see no significant differences in the circumnuclear dust morphologies of Seyfert 1s and 2s, and very few Seyfert 2 nuclei are obscured by large-scale dust structures in the host galaxies. If Seyfert 2s are obscured Seyfert 1s, then the obscuration must occur on smaller scales than those probed by HST.

  3. Multiple Point Statistics algorithm based on direct sampling and multi-resolution images

    NASA Astrophysics Data System (ADS)

    Julien, S.; Renard, P.; Chugunova, T.

    2017-12-01

    Multiple Point Statistics (MPS) has been popular for more than a decade in the Earth Sciences, because these methods can generate random fields that reproduce the highly complex spatial features given in a conceptual model, the training image, whereas classical geostatistics techniques based on two-point statistics (covariance or variogram) fail to generate realistic models. Among MPS methods, direct sampling consists in borrowing patterns from the training image to populate a simulation grid. The grid is filled sequentially by visiting its nodes in a random order; the patterns, whose number of nodes is fixed, become spatially narrower as the simulation proceeds and the grid becomes more densely informed. Hence, large-scale structures are captured at the beginning of the simulation and small-scale ones at the end. However, MPS may mix spatial characteristics that are distinguishable at different scales in the training image, and thereby lose the spatial arrangement of different structures. To overcome this limitation, we propose to perform MPS simulation using a decomposition of the training image into a set of images at multiple resolutions. Applying a Gaussian kernel to the training image (convolution) yields a lower-resolution image; iterating this process builds a pyramid of images depicting fewer details at each level, as is done in image processing, for example, to reduce the storage size of a photograph. Direct sampling is then employed to simulate the lowest-resolution level, and then each level in turn, up to the finest resolution, conditioned on the level one rank coarser. This scheme helps reproduce the spatial structures at every scale of the training image and thus generate more realistic models. We illustrate the method with aerial photographs (satellite images) and natural textures. Indeed, these kinds of images often display typical structures at different scales and are well suited for MPS simulation techniques.
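    The multi-resolution decomposition itself is a standard Gaussian pyramid; a short sketch follows (the per-level direct-sampling simulation is not shown, and the training image here is a random stand-in):

        # Sketch: Gaussian pyramid of a training image for coarse-to-fine MPS.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def gaussian_pyramid(ti, levels=3, sigma=1.0):
            pyramid = [ti]
            for _ in range(levels):
                smoothed = gaussian_filter(pyramid[-1], sigma)
                pyramid.append(smoothed[::2, ::2])   # halve the resolution
            return pyramid                           # last = coarsest

        ti = np.random.rand(256, 256)
        for level, img in enumerate(gaussian_pyramid(ti)):
            print(level, img.shape)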

  4. Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data.

    PubMed

    Aji, Ablimit; Wang, Fusheng; Saltz, Joel H

    2012-11-06

    Support of high performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven not only by geospatial problems in numerous fields, but also by emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where examination of high resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging has high potential to support image-based computer-aided diagnosis. One major requirement for this is effective querying of such an enormous amount of data with fast response, which faces two major challenges: the "big data" challenge and high computational complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on-demand index building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for micro-anatomic objects. To reduce query response time, we propose cost-based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce.
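    The partition-merge idea behind such pipelines can be illustrated in plain Python: map each object to a spatial tile, then join within (and around) tiles. Real systems add spatial indexing and skew-aware optimization; this toy distance join only shows the partitioning logic.

        # Sketch: tile-partitioned distance join of two point sets.
        from collections import defaultdict

        def tile_of(x, y, tile=10.0):
            return (int(x // tile), int(y // tile))

        def spatial_join(points_a, points_b, radius=1.0):
            buckets = defaultdict(list)
            for p in points_b:                     # "map": partition B
                buckets[tile_of(*p)].append(p)
            pairs = []
            for a in points_a:                     # "reduce": join per tile
                tx, ty = tile_of(*a)
                for dx in (-1, 0, 1):              # probe neighbor tiles
                    for dy in (-1, 0, 1):
                        for b in buckets.get((tx + dx, ty + dy), []):
                            if (a[0]-b[0])**2 + (a[1]-b[1])**2 <= radius**2:
                                pairs.append((a, b))
            return pairs

        print(spatial_join([(1.0, 1.0)], [(1.5, 1.2), (30.0, 30.0)]))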

  5. Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data

    PubMed Central

    Aji, Ablimit; Wang, Fusheng; Saltz, Joel H.

    2013-01-01

    Support of high performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven not only by geospatial problems in numerous fields, but also by emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where examination of high resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging has high potential to support image-based computer-aided diagnosis. One major requirement for this is effective querying of such an enormous amount of data with fast response, which faces two major challenges: the "big data" challenge and high computational complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on-demand index building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for micro-anatomic objects. To reduce query response time, we propose cost-based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce. PMID:24501719

  6. Processing of Fine-Scale Piezoelectric Ceramic/Polymer Composites for Sensors and Actuators

    NASA Technical Reports Server (NTRS)

    Janas, V. F.; Safari, A.

    1996-01-01

    The objective of the research effort at Rutgers is the development of lead zirconate titanate (PZT) ceramic/polymer composites with different designs for transducer applications including hydrophones, biomedical imaging, non-destructive testing, and air imaging. In this review, methods for processing both large area and multifunctional ceramic/polymer composites for acoustic transducers were discussed.

  7. Berkeley Lab Scientists to Play Role in New Space Telescope

    Science.gov Websites

    Berkeley Lab scientists will contribute to the Wide Field Infrared Survey Telescope (WFIRST), which will search for planets circling distant suns, among other science aims. Its imager will cover a far larger field of view than the Hubble Space Telescope's Wide Field Camera 3 infrared imager: a Hubble large-scale mapping survey of the M31 galaxy required 432 "pointings" of its imager, while only two would be needed by WFIRST.

  8. Background derivation and image flattening: getimages

    NASA Astrophysics Data System (ADS)

    Men'shchikov, A.

    2017-11-01

    Modern high-resolution images obtained with space observatories display extremely strong intensity variations across images on all spatial scales. Source extraction in such images with methods based on global thresholding may bring unacceptably large numbers of spurious sources in bright areas while failing to detect sources in low-background or low-noise areas. It would be highly beneficial to subtract background and equalize the levels of small-scale fluctuations in the images before extracting sources or filaments. This paper describes getimages, a new method of background derivation and image flattening. It is based on median filtering with sliding windows that correspond to a range of spatial scales from the observational beam size up to a maximum structure width $X_{\lambda}$. The latter is the single free parameter of getimages and can be evaluated manually from the observed image $I_{\lambda}$. The median filtering algorithm provides a background image $\tilde{B}_{\lambda}$ for structures of all widths below $X_{\lambda}$. The same median filtering procedure applied to an image of standard deviations $D_{\lambda}$, derived from the background-subtracted image $\tilde{S}_{\lambda}$, results in a flattening image $\tilde{F}_{\lambda}$. Finally, a flattened detection image $I_{\lambda\mathrm{D}} = \tilde{S}_{\lambda}/\tilde{F}_{\lambda}$ is computed, whose standard deviations are uniform outside sources and filaments. Detecting sources in such greatly simplified images results in much cleaner extractions that are more complete and reliable. As a bonus, getimages reduces various observational and map-making artifacts and equalizes noise levels between independent tiles of mosaicked images.
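    The pipeline can be imitated with off-the-shelf filters, as sketched below; the window size and the random stand-in image are assumptions, and this omits getimages' multi-scale iteration and masking details.

        # Sketch: median background, subtraction, and flattening.
        import numpy as np
        from scipy.ndimage import generic_filter, median_filter

        image = np.random.rand(128, 128)     # stand-in observed image
        window = 15                          # ~ maximum structure width (px)

        background = median_filter(image, size=window)
        subtracted = image - background

        local_std = generic_filter(subtracted, np.std, size=window)
        flattening = median_filter(local_std, size=window)

        detection = subtracted / np.maximum(flattening, 1e-12)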

  9. Automated Detection of Microaneurysms Using Scale-Adapted Blob Analysis and Semi-Supervised Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adal, Kedir M.; Sidebe, Desire; Ali, Sharib

    2014-01-07

    Despite several attempts, automated detection of microaneurysms (MAs) from digital fundus images remains an open issue. This is due to the subtle nature of MAs against the surrounding tissues. In this paper, the microaneurysm detection problem is modeled as finding interest regions or blobs in an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are then introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated learning examples, is also proposed to train a classifier to detect true MAs. The developed system is built using only a few manually labeled and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques as well as the applicability of the proposed features to analyze fundus images.
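    A multiscale blob stage of this kind can be sketched with scikit-image's Laplacian-of-Gaussian detector; the scale range and threshold below are illustrative, and the paper's descriptors and semi-supervised classifier are not shown.

        # Sketch: blob candidates for dark microaneurysms in a fundus image.
        import numpy as np
        from skimage.feature import blob_log

        green = np.random.rand(256, 256)    # stand-in green channel
        # MAs appear dark, so detect bright blobs on the inverted image.
        blobs = blob_log(1.0 - green, min_sigma=1, max_sigma=4,
                         num_sigma=4, threshold=0.1)
        for y, x, sigma in blobs:
            print(f"candidate at ({x:.0f}, {y:.0f}), scale {sigma:.1f}")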

  10. Multiscale Imaging of the Mouse Cortex Using Two-Photon Microscopy and Wide-Field Illumination

    NASA Astrophysics Data System (ADS)

    Bumstead, Jonathan R.

    The mouse brain can be studied over vast spatial scales ranging from microscopic imaging of single neurons to macroscopic measurements of hemodynamics acquired over the majority of the mouse cortex. However, most neuroimaging modalities are limited by a fundamental trade-off between the spatial resolution and the field-of-view (FOV) over which the brain can be imaged, making it difficult to fully understand the functional and structural architecture of the healthy mouse brain and its disruption in disease. My dissertation has focused on developing multiscale optical systems capable of imaging the mouse brain at both microscopic and mesoscopic spatial scales, specifically addressing the difference in spatial scales imaged with two-photon microscopy (TPM) and optical intrinsic signal imaging (OISI). Central to this work has been the formulation of a principled design strategy for extending the FOV of the two-photon microscope. Using this design approach, we constructed a TPM system with subcellular resolution and a FOV area 100 times greater than a conventional two-photon microscope. To image the ellipsoidal shape of the mouse cortex, we also developed the microscope to image arbitrary surfaces within a single frame using an electrically tunable lens. Finally, to address the speed limitations of the TPM systems developed during my dissertation, I also conducted research in large-scale neural phenomena occurring in the mouse brain imaged with high-speed OISI. The work conducted during my dissertation addresses some of the fundamental principles in designing and applying optical systems for multiscale imaging of the mouse brain.

  11. A psychophysical comparison of two methods for adaptive histogram equalization.

    PubMed

    Zimmerman, J B; Cousins, S B; Hartzell, K M; Frisse, M E; Kahn, M G

    1989-05-01

    Adaptive histogram equalization (AHE) is a method for adaptive contrast enhancement of digital images. It is an automatic, reproducible method for the simultaneous viewing of contrast within a digital image with a large dynamic range. Recent experiments have shown that in specific cases, there is no significant difference in the ability of AHE and linear intensity windowing to display gray-scale contrast. More recently, a variant of AHE which limits the allowed contrast enhancement of the image has been proposed. This contrast-limited adaptive histogram equalization (CLAHE) produces images in which the noise content of an image is not excessively enhanced, but in which sufficient contrast is provided for the visualization of structures within the image. Images processed with CLAHE have a more natural appearance and facilitate the comparison of different areas of an image. However, the reduced contrast enhancement of CLAHE may hinder the ability of an observer to detect the presence of some significant gray-scale contrast. In this report, a psychophysical observer experiment was performed to determine if there is a significant difference in the ability of AHE and CLAHE to depict gray-scale contrast. Observers were presented with computed tomography (CT) images of the chest processed with AHE and CLAHE. Subtle artificial lesions were introduced into some images. The observers were asked to rate their confidence regarding the presence of the lesions; this rating-scale data was analyzed using receiver operating characteristic (ROC) curve techniques. These ROC curves were compared for significant differences in the observers' performances. In this report, no difference was found in the abilities of AHE and CLAHE to depict contrast information.
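    For reference, contrast-limited AHE is directly available in common imaging libraries; the OpenCV sketch below uses typical defaults (a placeholder file name, not the study's images or parameters).

        # Sketch: global equalization vs. CLAHE on a grayscale CT slice.
        import cv2

        ct_slice = cv2.imread("chest_ct_slice.png", cv2.IMREAD_GRAYSCALE)

        equalized = cv2.equalizeHist(ct_slice)          # global, not adaptive

        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        limited = clahe.apply(ct_slice)                 # contrast-limited AHE

        cv2.imwrite("ct_clahe.png", limited)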

  12. Making methane visible

    NASA Astrophysics Data System (ADS)

    Gålfalk, Magnus; Olofsson, Göran; Crill, Patrick; Bastviken, David

    2016-04-01

    Methane (CH4) is one of the most important greenhouse gases, and an important energy carrier in biogas and natural gas. Its large scale emission patterns have been unpredictable and the source and sink distributions are poorly constrained. Remote assessment of CH4 with high sensitivity at m^2 spatial resolution would allow detailed mapping of near ground distribution and anthropogenic sources and sinks in landscapes but has hitherto not been possible. Here we show that CH4 gradients can be imaged on

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, G.A.; Commer, M.

    Three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. The imaging technology employs controlled source electromagnetic (CSEM) and magnetotelluric (MT) fields and treats geological media exhibiting transverse anisotropy. Moreover when combined with established seismic methods, direct imaging of reservoir fluids is possible. Because of the size of the 3D conductivity imaging problem, strategies are required exploiting computational parallelism and optimal meshing. The algorithm thus developed has been shown to scale to tens of thousands of processors. In one imaging experiment, 32,768 tasks/processors on the IBM Watson Research Blue Gene/L supercomputer were successfully utilized. Over a 24 hour period we were able to image a large scale field data set that previously required over four months of processing time on distributed clusters based on Intel or AMD processors utilizing 1024 tasks on an InfiniBand fabric. Electrical conductivity imaging using massively parallel computational resources produces results that cannot be obtained otherwise and are consistent with timeframes required for practical exploration problems.

  14. IMAGE EXPLORER: Astronomical Image Analysis on an HTML5-based Web Application

    NASA Astrophysics Data System (ADS)

    Gopu, A.; Hayashi, S.; Young, M. D.

    2014-05-01

    Large datasets produced by recent astronomical imagers mean that the traditional paradigm for basic visual analysis - typically downloading one's entire image dataset and using desktop clients like DS9, Aladin, etc. - no longer scales, despite advances in desktop computing power and storage. This paper describes Image Explorer, a web framework that offers several of the basic visualization and analysis functions commonly provided by tools like DS9, on any HTML5-capable web browser on various platforms. It uses a combination of the modern HTML5 canvas, JavaScript, and several layers of lossless PNG tiles produced from the FITS image data. Astronomers are able to rapidly and simultaneously open several images in their web browser, adjust the intensity min/max cutoff or its scaling function and zoom level, apply color-maps, view position and FITS header information, execute commonly used data reduction codes on the corresponding FITS data using the FRIAA framework, and overlay tiles for source catalog objects, etc.

  15. Anomaly detection for medical images based on a one-class classification

    NASA Astrophysics Data System (ADS)

    Wei, Qi; Ren, Yinhao; Hou, Rui; Shi, Bibo; Lo, Joseph Y.; Carin, Lawrence

    2018-02-01

    Detecting an anomaly such as a malignant tumor or a nodule from medical images including mammogram, CT or PET images is still an ongoing research problem drawing a lot of attention with applications in medical diagnosis. A conventional way to address this is to learn a discriminative model using training datasets of negative and positive samples. The learned model can be used to classify a testing sample into a positive or negative class. However, in medical applications, the high imbalance between negative and positive samples poses a difficulty for learning algorithms, as they will be biased towards the majority group, i.e., the negative one. To address this imbalanced data issue as well as leverage the huge amount of negative samples, i.e., normal medical images, we propose to learn an unsupervised model to characterize the negative class. To make the learned model more flexible and extendable for medical images of different scales, we have designed an autoencoder based on a deep neural network to characterize the negative patches decomposed from large medical images. A testing image is decomposed into patches and then fed into the learned autoencoder to reconstruct these patches themselves. The reconstruction error of one patch is used to classify this patch into a binary class, i.e., a positive or a negative one, leading to a one-class classifier. The positive patches highlight the suspicious areas containing anomalies in a large medical image. The proposed method has been tested on the InBreast dataset and achieves an AUC of 0.84. The main contribution of our work can be summarized as follows. 1) The proposed one-class learning requires only data from one class, i.e., the negative data; 2) The patch-based learning makes the proposed method scalable to images of different sizes and helps avoid the large scale problem for medical images; 3) The training of the proposed deep convolutional neural network (DCNN) based auto-encoder is fast and stable.
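    The patch-reconstruction idea can be sketched with a small fully connected autoencoder; the architecture, training loop and threshold below are illustrative choices, not the paper's DCNN or its tuned values.

        # Sketch: one-class scoring via autoencoder reconstruction error.
        import torch
        import torch.nn as nn

        class PatchAE(nn.Module):
            def __init__(self, dim=64 * 64):
                super().__init__()
                self.enc = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                         nn.Linear(256, 32))
                self.dec = nn.Sequential(nn.Linear(32, 256), nn.ReLU(),
                                         nn.Linear(256, dim), nn.Sigmoid())

            def forward(self, x):
                return self.dec(self.enc(x))

        model = PatchAE()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        normal_patches = torch.rand(512, 64 * 64)   # negative-class patches

        for _ in range(10):                         # train on normals only
            loss = nn.functional.mse_loss(model(normal_patches), normal_patches)
            opt.zero_grad(); loss.backward(); opt.step()

        test_patch = torch.rand(1, 64 * 64)
        err = nn.functional.mse_loss(model(test_patch), test_patch).item()
        print("anomalous" if err > 0.05 else "normal")   # toy threshold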

  16. Variable Grid Traveltime Tomography for Near-surface Seismic Imaging

    NASA Astrophysics Data System (ADS)

    Cai, A.; Zhang, J.

    2017-12-01

    We present a new algorithm for traveltime tomography that images the subsurface with grids that vary automatically with the geological structure. Nonlinear traveltime tomography with Tikhonov regularization, solved by the conjugate gradient method, is a conventional approach to near-surface imaging. However, regularization on a regular, even grid assumes uniform resolution. From a geophysical point of view, long-wavelength, large-scale structures can be reliably resolved, while details along geological boundaries are difficult to resolve. Therefore, we solve a traveltime tomography problem that automatically identifies large-scale structures and aggregates grid cells within those structures for inversion. As a result, the number of velocity unknowns is reduced significantly, and the inversion concentrates on resolving small-scale structures and the boundaries of large-scale ones. The approach is demonstrated by tests on both synthetic and field data. One synthetic model is a buried-basalt model with one horizontal layer. Using the variable grid traveltime tomography, the resulting model is more accurate in the top-layer velocity and the basalt blocks, with far fewer grid cells. The field data were collected in an oil field in China, in a survey area where the subsurface structures were predominantly layered. The data set includes 476 shots at a 10 meter spacing and 1735 receivers at a 10 meter spacing. The first-arrival traveltimes of the seismograms were picked for tomography; the reciprocal errors of most shots are between 2 ms and 6 ms. Normal tomography produces fluctuations in the layers and some artifacts in the velocity model. In comparison, the new method with a proper threshold provides a blocky model with a resolved flat layer and fewer artifacts. Moreover, the number of grid cells is reduced from 205,656 to 4,930, and the inversion achieves higher resolution owing to fewer unknowns and relatively fine grids within small structures. Variable grid traveltime tomography provides an alternative imaging solution for blocky structures in the subsurface and builds a good starting model for waveform inversion and statics.

  17. In-situ device integration of large-area patterned organic nanowire arrays for high-performance optical sensors

    PubMed Central

    Wu, Yiming; Zhang, Xiujuan; Pan, Huanhuan; Deng, Wei; Zhang, Xiaohong; Zhang, Xiwei; Jie, Jiansheng

    2013-01-01

    Single-crystalline organic nanowires (NWs) are important building blocks for future low-cost and efficient nano-optoelectronic devices due to their extraordinary properties. However, it remains a critical challenge to achieve large-scale organic NW array assembly and device integration. Herein, we demonstrate a feasible one-step method for large-area patterned growth of cross-aligned single-crystalline organic NW arrays and their in-situ device integration for optical image sensors. The integrated image sensor circuitry contained a 10 × 10 pixel array in an area of 1.3 × 1.3 mm^2, showing high spatial resolution, excellent stability and reproducibility. More importantly, 100% of the pixels successfully operated at a high response speed and relatively small pixel-to-pixel variation. The high yield and high spatial resolution of the operational pixels, along with the high integration level of the device, clearly demonstrate the great potential of the one-step organic NW array growth and device construction approach for large-scale optoelectronic device integration. PMID:24287887

  18. A gravitationally lensed quasar with quadruple images separated by 14.62 arcseconds.

    PubMed

    Inada, Naohisa; Oguri, Masamune; Pindor, Bartosz; Hennawi, Joseph F; Chiu, Kuenley; Zheng, Wei; Ichikawa, Shin-Ichi; Gregg, Michael D; Becker, Robert H; Suto, Yasushi; Strauss, Michael A; Turner, Edwin L; Keeton, Charles R; Annis, James; Castander, Francisco J; Eisenstein, Daniel J; Frieman, Joshua A; Fukugita, Masataka; Gunn, James E; Johnston, David E; Kent, Stephen M; Nichol, Robert C; Richards, Gordon T; Rix, Hans-Walter; Sheldon, Erin Scott; Bahcall, Neta A; Brinkmann, J; Ivezić, Zeljko; Lamb, Don Q; McKay, Timothy A; Schneider, Donald P; York, Donald G

    2003-12-18

    Gravitational lensing is a powerful tool for the study of the distribution of dark matter in the Universe. The cold-dark-matter model of the formation of large-scale structures (that is, clusters of galaxies and even larger assemblies) predicts the existence of quasars gravitationally lensed by concentrations of dark matter so massive that the quasar images would be split by over 7 arcsec. Numerous searches for large-separation lensed quasars have, however, been unsuccessful. All of the roughly 70 lensed quasars known, including the first lensed quasar discovered, have smaller separations that can be explained in terms of galaxy-scale concentrations of baryonic matter. Although gravitationally lensed galaxies with large separations are known, quasars are more useful cosmological probes because of the simplicity of the resulting lens systems. Here we report the discovery of a lensed quasar, SDSS J1004 + 4112, which has a maximum separation between the components of 14.62 arcsec. Such a large separation means that the lensing object must be dominated by dark matter. Our results are fully consistent with theoretical expectations based on the cold-dark-matter model.

  19. Challenges for Future UV Imaging of the Earth's Ionosphere and High Latitude Regions

    NASA Technical Reports Server (NTRS)

    Spann, James

    2006-01-01

    Large scale imaging of Geospace has played a significant role in the recent advances in the comprehension of the coupled Solar-Terrestrial system. The Earth's ionospheric far ultraviolet emissions provide a rich tapestry of observations that play a key role in sorting out the dominant mechanisms and phenomena associated with the coupling of the ionosphere and magnetosphere (MI). MI coupling is an integral part of the Solar-Terrestrial system, and as such, future observations in this region should focus on understanding the coupling and the impact of solar variability. This talk will focus on the outstanding problems associated with the coupled Solar-Terrestrial system that can best be addressed using far ultraviolet imaging of the Earth's ionosphere. Challenges of global-scale imaging and high-resolution imaging will be discussed, along with how these are driven by unresolved compelling science questions of magnetospheric configuration and auroral dynamics.

  20. Strong-lensing analysis of A2744 with MUSE and Hubble Frontier Fields images

    NASA Astrophysics Data System (ADS)

    Mahler, G.; Richard, J.; Clément, B.; Lagattuta, D.; Schmidt, K.; Patrício, V.; Soucail, G.; Bacon, R.; Pello, R.; Bouwens, R.; Maseda, M.; Martinez, J.; Carollo, M.; Inami, H.; Leclercq, F.; Wisotzki, L.

    2018-01-01

    We present an analysis of Multi Unit Spectroscopic Explorer (MUSE) observations obtained on the massive Frontier Fields (FFs) cluster A2744. This new data set covers the entire multiply imaged region around the cluster core. The combined catalogue consists of 514 spectroscopic redshifts (with 414 new identifications). We use this redshift information to perform a strong-lensing analysis revising multiple images previously found in the deep FF images, and add three new MUSE-detected multiply imaged systems with no obvious Hubble Space Telescope counterpart. The combined strong-lensing constraints include a total of 60 systems producing 188 images altogether, out of which 29 systems and 83 images are spectroscopically confirmed, making A2744 one of the most well-constrained clusters to date. Thanks to the large number of spectroscopic redshifts, we model the influence of substructures at larger radii, using a parametrization that includes two cluster-scale components in the cluster core and several group-scale components in the outskirts. The resulting model accurately reproduces all the spectroscopic multiple systems, reaching an rms of 0.67 arcsec in the image plane. The large number of MUSE spectroscopic redshifts gives us a robust model, which we estimate reduces the systematic uncertainty on the 2D mass distribution by up to ∼2.5 times the statistical uncertainty in the cluster core. In addition, from a combination of the parametrization and the set of constraints, we estimate the relative systematic uncertainty to be up to 9 per cent at 200 kpc.

  1. Coronal Heating and the Magnetic Flux Content of the Network

    NASA Technical Reports Server (NTRS)

    Falconer, D. A.; Moore, R. L.; Porter, J. G.; Hathaway, D. H.; Rose, M. Franklin (Technical Monitor)

    2001-01-01

    Previously, from analysis of SOHO coronal images in combination with Kitt Peak magnetograms, we found that the quiet corona is the sum of two components: the large-scale corona and the coronal network. The large-scale corona consists of all coronal-temperature (T approximately 10(exp 6) K) structures larger than supergranules (greater than approximately 30,000 kilometers). The coronal network (1) consists of all coronal-temperature structures smaller than supergranules, (2) is rooted in and loosely traces the photospheric magnetic network, (3) has its brightest features seated on polarity dividing lines (neutral lines) in the network magnetic flux, and (4) produces only about 5% of the total coronal emission in quiet regions. The heating of the coronal network is apparently magnetic in origin. Here, from analysis of EIT coronal images of quiet regions in combination with magnetograms of the same quiet regions from SOHO/MDI and from Kitt Peak, we examine the other 95% of the quiet corona and its relation to the underlying magnetic network. We find: (1) Dividing the large-scale corona into its bright and dim halves divides the area into bright "continents" and dark "oceans" having spans of 2-4 supergranules. (2) These patterns are also present in the photospheric magnetograms: the network is stronger under the bright half and weaker under the dim half. (3) The radiation from the large-scale corona increases roughly as the cube root of the magnetic flux content of the underlying magnetic network. In contrast, the coronal radiation from an active region increases roughly linearly with the magnetic flux content of the active region. We assume, as is widely held, that nearly all of the large-scale corona is magnetically rooted in the network. Our results suggest that either the coronal heating in quiet regions has a large non-magnetic component, or, if the heating is predominantly produced via the magnetic field, the mechanism is significantly different than in active regions.

  2. The Origin of Clusters and Large-Scale Structures: Panoramic View of the High-z Universe

    NASA Astrophysics Data System (ADS)

    Ouchi, Masami

    We will report results of our on-going survey for proto-clusters and large-scale structures at z=3-6. We carried out very wide and deep optical imaging down to i=27 for a 1 deg^2 field of the Subaru/XMM Deep Field with the 8.2m Subaru Telescope. We obtain maps of the Universe traced by ~1,000 Lyα galaxies at z=3, 4, and 6 and by ~10,000 Lyman break galaxies at z=3-6. These cosmic maps have a transverse dimension of ~150 Mpc × 150 Mpc in comoving units at these redshifts, and provide us, for the first time, a panoramic view of the high-z Universe from the scales of galaxies and clusters to large-scale structures. Major results and implications will be presented in our talk. (Part of this work is subject to press embargo.)

  3. Large-Scale Coronal Heating, Clustering of Coronal Bright Points, and Concentration of Magnetic Flux

    NASA Technical Reports Server (NTRS)

    Falconer, D. A.; Moore, R. L.; Porter, J. G.; Hathaway, D. H.

    1998-01-01

    By combining quiet-region Fe XII coronal images from SOHO/EIT with magnetograms from NSO/Kitt Peak and from SOHO/MDI, we show that on scales larger than a supergranule the population of network coronal bright points and the magnetic flux content of the network are both markedly greater under the bright half of the quiet corona than under the dim half. These results (1) support the view that the heating of the entire corona in quiet regions and coronal holes is driven by fine-scale magnetic activity (microflares, explosive events, spicules) seated low in the magnetic network, and (2) suggest that this large-scale modulation of the magnetic flux and coronal heating is a signature of giant convection cells.

  4. Design and implementation of a distributed large-scale spatial database system based on J2EE

    NASA Astrophysics Data System (ADS)

    Gong, Jianya; Chen, Nengcheng; Zhu, Xinyan; Zhang, Xia

    2003-03-01

    With the increasing maturity of distributed object technology, CORBA, .NET and EJB are widely used in the traditional IT field. However, the theory and practice of distributed spatial databases need further improvement, owing to the tension between large-scale spatial data and limited network bandwidth, and between short-lived sessions and long transaction processing. Differences and trends among CORBA, .NET and EJB are discussed in detail; afterwards, the concept, architecture and characteristics of a distributed large-scale seamless spatial database system based on J2EE are presented, comprising a GIS client application, a web server, a GIS application server and a spatial data server. The design and implementation of the components are then explained: the GIS client application based on JavaBeans, the GIS engine based on servlets, and the GIS application server based on GIS enterprise JavaBeans (session beans and entity beans). In addition, experiments relating the volume of spatial data to response time under different conditions were conducted, demonstrating that a distributed spatial database system based on J2EE can be used to manage, distribute and share large-scale spatial data on the Internet. Lastly, a distributed large-scale seamless image database based on the Internet is presented.

  5. From Large-scale to Protostellar Disk Fragmentation into Close Binary Stars

    NASA Astrophysics Data System (ADS)

    Sigalotti, Leonardo Di G.; Cruz, Fidel; Gabbasov, Ruslan; Klapp, Jaime; Ramírez-Velasquez, José

    2018-04-01

    Recent observations of young stellar systems with the Atacama Large Millimeter/submillimeter Array (ALMA) and the Karl G. Jansky Very Large Array are helping to cement the idea that close companion stars form via fragmentation of a gravitationally unstable disk around a protostar early in the star formation process. As the disk grows in mass, it eventually becomes gravitationally unstable and fragments, forming one or more new protostars in orbit with the first at mean separations of 100 au or even less. Here, we report direct numerical calculations down to scales as small as ∼0.1 au, using a consistent Smoothed Particle Hydrodynamics code, that show the large-scale fragmentation of a cloud core into two protostars accompanied by small-scale fragmentation of their circumstellar disks. Our results demonstrate the two dominant mechanisms of star formation, where the disk forming around a protostar (which in turn results from the large-scale fragmentation of the cloud core) undergoes eccentric (m = 1) fragmentation to produce a close binary. We generate two-dimensional emission maps and simulated ALMA 1.3 mm continuum images of the structure and fragmentation of the disks that can help explain the dynamical processes occurring within collapsing cloud cores.

  6. Detection and location of fouling on photovoltaic panels using a drone-mounted infrared thermography system

    NASA Astrophysics Data System (ADS)

    Zhang, Peng; Zhang, Lifu; Wu, Taixia; Zhang, Hongming; Sun, Xuejian

    2017-01-01

    Due to weathering and external forces, solar panels are subject to fouling and defects after a certain time in service. Fouling and defects have direct adverse consequences, such as reduced power efficiency. Because solar power plants usually deploy photovoltaic (PV) panels on a large scale, fast detection and location of fouling and defects across large PV areas are imperative. A drone-mounted infrared thermography system was designed and developed, and its ability to rapidly detect fouling on large-scale PV panel systems was investigated. The infrared images were preprocessed using a K-neighbor mean filter, and the single PV module on each image was recognized and extracted. Combining local and global detection methods, suspicious sites were located precisely. The results showed the flexible drone-mounted infrared thermography system to have a strong ability to detect the presence and determine the position of PV fouling. Drone-mounted infrared thermography also has good technical feasibility and practical value for PV fouling detection.
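
    A minimal sketch of the detection flow described above, assuming a k-neighbor mean filter followed by a combined global/local temperature test on each extracted module image; the function names and thresholds are illustrative assumptions, not the authors' parameters:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def detect_hotspots(ir_module, k=3, global_sigma=2.0, local_margin=1.5):
        """Flag suspicious pixels on one extracted PV-module IR image."""
        smoothed = uniform_filter(ir_module.astype(float), size=k)  # k-neighbor mean
        # Global test: pixels far above the module-wide mean temperature.
        g_mask = smoothed > smoothed.mean() + global_sigma * smoothed.std()
        # Local test: pixels well above their immediate neighborhood.
        neighborhood = uniform_filter(smoothed, size=15)
        l_mask = smoothed > neighborhood + local_margin
        return g_mask & l_mask  # suspicious sites satisfy both criteria
    ```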

  7. OMERO and Bio-Formats 5: flexible access to large bioimaging datasets at scale

    NASA Astrophysics Data System (ADS)

    Moore, Josh; Linkert, Melissa; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Li, Simon; Lindner, Dominik; Moore, William J.; Patterson, Andrew J.; Pindelski, Blazej; Ramalingam, Balaji; Rozbicki, Emil; Tarkowska, Aleksandra; Walczysko, Petr; Allan, Chris; Burel, Jean-Marie; Swedlow, Jason

    2015-03-01

    The Open Microscopy Environment (OME) has built and released, under open-source licenses, Bio-Formats, a Java-based tool for converting proprietary file formats, and OMERO, an enterprise data-management platform. In this report, we describe new versions of Bio-Formats and OMERO that are specifically designed to support large, multi-gigabyte or terabyte-scale datasets that are routinely collected across most domains of biological and biomedical research. Bio-Formats reads image data directly from native proprietary formats, bypassing the need for conversion into a standard format. It implements the concept of a file set, a container that defines the contents of multi-dimensional data comprised of many files. OMERO uses Bio-Formats to read files natively, and provides a flexible access mechanism that supports several different storage and access strategies. These new capabilities make OMERO and Bio-Formats especially useful in imaging applications like digital pathology, high-content screening and light-sheet microscopy, which routinely create large datasets that must be managed and analyzed.

  8. Streamflow Observations From Cameras: Large-Scale Particle Image Velocimetry or Particle Tracking Velocimetry?

    NASA Astrophysics Data System (ADS)

    Tauro, F.; Piscopia, R.; Grimaldi, S.

    2017-12-01

    Image-based methodologies, such as large scale particle image velocimetry (LSPIV) and particle tracking velocimetry (PTV), have increased our ability to noninvasively conduct streamflow measurements by affording spatially distributed observations at high temporal resolution. However, progress in optical methodologies has not been paralleled by the implementation of image-based approaches in environmental monitoring practice. We attribute this fact to the sensitivity of LSPIV, by far the most frequently adopted algorithm, to visibility conditions and to the occurrence of visible surface features. In this work, we test both LSPIV and PTV on a data set of 12 videos captured in a natural stream wherein artificial floaters are homogeneously and continuously deployed. Further, we apply both algorithms to a video of a high flow event on the Tiber River, Rome, Italy. In our application, we propose a modified PTV approach that only takes into account realistic trajectories. Based on our findings, LSPIV largely underestimates surface velocities with respect to PTV in both favorable (12 videos in a natural stream) and adverse (high flow event in the Tiber River) conditions. On the other hand, PTV is in closer agreement than LSPIV with benchmark velocities in both experimental settings. In addition, the accuracy of PTV estimations can be directly related to the transit of physical objects in the field of view, thus providing tangible data for uncertainty evaluation.
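
    The plausibility filtering that distinguishes the modified PTV can be sketched as a nearest-neighbor tracker that only accepts links implying a realistic surface velocity; all names and the velocity bound below are illustrative assumptions, not the authors' implementation:

    ```python
    import numpy as np

    def link_particles(frames, dt, v_max=3.0):
        """frames: list of (N_i, 2) arrays of tracer centroids per video frame.
        Returns trajectories as lists of (frame_index, xy) tuples."""
        trajectories = [[(0, p)] for p in frames[0]]
        for t in range(1, len(frames)):
            detections = list(frames[t])
            for traj in trajectories:
                last_t, last_p = traj[-1]
                if last_t != t - 1 or not detections:
                    continue  # trajectory already lost, or nothing left to link
                dists = [np.linalg.norm(p - last_p) for p in detections]
                j = int(np.argmin(dists))
                if dists[j] / dt <= v_max:  # keep only realistic velocities
                    traj.append((t, detections.pop(j)))
        return [tr for tr in trajectories if len(tr) > 2]  # persistent tracks only
    ```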

  9. Large-scale oscillatory calcium waves in the immature cortex.

    PubMed

    Garaschuk, O; Linn, J; Eilers, J; Konnerth, A

    2000-05-01

    Two-photon imaging of large neuronal networks in cortical slices of newborn rats revealed synchronized oscillations in intracellular Ca2+ concentration. These spontaneous Ca2+ waves usually started in the posterior cortex and propagated slowly (2.1 mm per second) toward its anterior end. Ca2+ waves were associated with field-potential changes and required activation of AMPA and NMDA receptors. Although GABAA receptors were not involved in wave initiation, the developmental transition of GABAergic transmission from depolarizing to hyperpolarizing (around postnatal day 7) stopped the oscillatory activity. Thus we identified a type of large-scale Ca2+ wave that may regulate long-distance wiring in the immature cortex.

  10. Large-scale Density Structures in Magneto-rotational Disk Turbulence

    NASA Astrophysics Data System (ADS)

    Youdin, Andrew; Johansen, A.; Klahr, H.

    2009-01-01

    Turbulence generated by the magneto-rotational instability (MRI) is a strong candidate to drive accretion flows in disks, including sufficiently ionized regions of protoplanetary disks. The MRI is often studied in local shearing boxes, which model a small section of the disk at high resolution. I will present simulations of large, stratified shearing boxes which extend up to 10 gas scale-heights across. These simulations are a useful bridge to fully global disk simulations. We find that MRI turbulence produces large-scale, axisymmetric density perturbations. These structures are part of a zonal flow --- analogous to the banded flow in Jupiter's atmosphere --- which survives in near geostrophic balance for tens of orbits. The launching mechanism is large-scale magnetic tension generated by an inverse cascade. We demonstrate the robustness of these results by careful study of various box sizes, grid resolutions, and microscopic diffusion parameterizations. These gas structures can trap solid material (in the form of large dust or ice particles) with important implications for planet formation. Resolved disk images at mm-wavelengths (e.g. from ALMA) will verify or constrain the existence of these structures.

  11. Making methane visible

    NASA Astrophysics Data System (ADS)

    Gålfalk, Magnus; Olofsson, Göran; Crill, Patrick; Bastviken, David

    2016-04-01

    Methane (CH4) is one of the most important greenhouse gases, and an important energy carrier in biogas and natural gas. Its large-scale emission patterns have been unpredictable and the source and sink distributions are poorly constrained. Remote assessment of CH4 with high sensitivity at a m2 spatial resolution would allow detailed mapping of the near-ground distribution and anthropogenic sources in landscapes but has hitherto not been possible. Here we show that CH4 gradients can be imaged on the

  12. The Hyper Suprime-Cam software pipeline

    DOE PAGES

    Bosch, James; Armstrong, Robert; Bickerton, Steven; ...

    2017-10-12

    In this article, we describe the optical imaging data processing pipeline developed for the Subaru Telescope's Hyper Suprime-Cam (HSC) instrument. The HSC Pipeline builds on the prototype pipeline being developed by the Large Synoptic Survey Telescope's Data Management system, adding customizations for HSC, large-scale processing capabilities, and novel algorithms that have since been reincorporated into the LSST codebase. While designed primarily to reduce HSC Subaru Strategic Program (SSP) data, it is also the recommended pipeline for reducing general-observer HSC data. The HSC pipeline includes high-level processing steps that generate coadded images and science-ready catalogs as well as low-level detrending and image characterizations.

  14. Quantification of changes in language-related brain areas in autism spectrum disorders using large-scale network analysis.

    PubMed

    Goch, Caspar J; Stieltjes, Bram; Henze, Romy; Hering, Jan; Poustka, Luise; Meinzer, Hans-Peter; Maier-Hein, Klaus H

    2014-05-01

    Diagnosis of autism spectrum disorders (ASD) is difficult, as symptoms vary greatly and are difficult to quantify objectively. Recent work has focused on the assessment of non-invasive diffusion tensor imaging-based biomarkers that reflect the microstructural characteristics of neuronal pathways in the brain. While tractography-based approaches typically analyze specific structures of interest, a graph-based large-scale network analysis of the connectome can yield comprehensive measures of larger-scale architectural patterns in the brain. Commonly applied global network indices, however, do not provide any specificity with respect to functional areas or anatomical structures. The aim of this work was to assess the concept of network centrality as a tool to perform locally specific analysis without disregarding the global network architecture, and to compare it to other popular network indices. We create connectome networks from fiber tractographies and parcellations of the human brain and compute global network indices as well as local indices for Wernicke's Area, Broca's Area and the Motor Cortex. Our approach was evaluated on 18 children with ASD and 18 typically developing controls using magnetic resonance imaging-based cortical parcellations in combination with diffusion tensor imaging tractography. We show that the network centrality of Wernicke's area is significantly (p<0.001) reduced in ASD, while the motor cortex, which was used as a control region, did not show significant alterations. This could reflect the reduced capacity for comprehension of language in ASD. The betweenness centrality could potentially be an important metric in the development of future diagnostic tools in the clinical context of ASD diagnosis. Our results further demonstrate the applicability of large-scale network analysis tools in the domain of region-specific analysis with a potential application in many different psychological disorders.
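
    As a toy illustration of the region-specific centrality analysis, the sketch below computes betweenness centrality on a miniature connectome graph; the region names, edge weights and the inverse-weight distance convention are assumptions for illustration only:

    ```python
    import networkx as nx

    # Nodes are parcellation regions; weights stand in for streamline counts.
    G = nx.Graph()
    G.add_weighted_edges_from([
        ("wernicke", "broca", 120), ("wernicke", "motor", 40),
        ("broca", "motor", 80), ("motor", "parietal", 60),
        ("wernicke", "parietal", 30),
    ])
    # Betweenness interprets edge weights as distances, so convert streamline
    # counts (stronger = closer) into inverse-weight lengths first.
    for u, v, data in G.edges(data=True):
        data["length"] = 1.0 / data["weight"]
    centrality = nx.betweenness_centrality(G, weight="length", normalized=True)
    print(centrality["wernicke"])  # compare this value between groups
    ```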

  15. Subsurface Monitoring of CO2 Sequestration - A Review and Look Forward

    NASA Astrophysics Data System (ADS)

    Daley, T. M.

    2012-12-01

    The injection of CO2 into subsurface formations is at least 50 years old, with large-scale utilization of CO2 for enhanced oil recovery (CO2-EOR) beginning in the 1970s. Early monitoring efforts had limited measurements in available boreholes. With growing interest in CO2 sequestration beginning in the 1990s, along with growth in geophysical reservoir monitoring, small to mid-size sequestration monitoring projects began to appear. The overall goals of a subsurface monitoring plan are to provide measurement of CO2-induced changes in subsurface properties at a range of spatial and temporal scales. The range of spatial scales allows tracking of the location and saturation of the plume with varying detail, while finer temporal sampling (up to continuous) allows better understanding of dynamic processes (e.g. multi-phase flow) and constraining of reservoir models. Early monitoring of small-scale pilots associated with CO2-EOR (e.g., the McElroy field and the Lost Hills field) developed many of the methodologies, including tomographic imaging and multi-physics measurements. Large (reservoir) scale sequestration monitoring began with the Sleipner and Weyburn projects. Typically, large-scale monitoring, such as 4D surface seismic, has limited temporal sampling due to costs. Smaller-scale pilots can allow more frequent measurements, either as individual time-lapse 'snapshots' or as continuous monitoring. Pilot monitoring examples include the Frio, Nagaoka and Otway pilots using repeated well logging, crosswell imaging, vertical seismic profiles and CASSM (continuous active-source seismic monitoring). For saline reservoir sequestration projects, characterization and monitoring are typically integrated, since the sites are not pre-characterized resource developments (oil or gas), which reinforces the need for multi-scale measurements. As we move beyond pilot sites, we need to quantify CO2 plume and reservoir properties (e.g. pressure) over large scales while still obtaining high resolution. Typically the high-resolution (spatial and temporal) tools are deployed in permanent or semi-permanent borehole installations, where special well design may be necessary, such as non-conductive casing for electrical surveys. Effective utilization of monitoring wells requires an approach of modular borehole monitoring (MBM), where multiple measurements can be made. An example is recent work at the Citronelle pilot injection site, where an MBM package with seismic, fluid-sampling and distributed fiber sensing was deployed. For future large-scale sequestration monitoring, an adaptive borehole-monitoring program is proposed.

  16. A rapid extraction of landslide disaster information research based on GF-1 image

    NASA Astrophysics Data System (ADS)

    Wang, Sai; Xu, Suning; Peng, Ling; Wang, Zhiyi; Wang, Na

    2015-08-01

    In recent years, landslide disasters have occurred frequently because of seismic activity, causing great harm to people's lives and property and drawing wide attention from the state and society. In the field of geological disasters, landslide information extraction based on remote sensing has been controversial, but high-resolution remote sensing imagery, with its rich texture and geometric information, can effectively improve extraction accuracy. It is therefore feasible to extract information on earthquake-triggered landslides that cause serious surface damage on a large scale. Taking Wenchuan County as the study area, this paper uses a multi-scale segmentation method to extract landslide image objects from domestic GF-1 images and DEM data, using the estimation of scale parameter tool to determine the optimal segmentation scale. After comprehensively analyzing the characteristics of landslides in high-resolution imagery and selecting spectral, textural, geometric and landform features, extraction rules are established to extract landslide disaster information. The extraction results identify 20 landslides with a total area of 521279.31. Compared with visual interpretation results, the extraction accuracy is 72.22%. This study indicates that it is efficient and feasible to extract earthquake landslide disaster information from high-resolution remote sensing imagery, providing important technical support for post-disaster emergency investigation and disaster assessment.

  17. Large-Scale periodic solar velocities: An observational study

    NASA Technical Reports Server (NTRS)

    Dittmer, P. H.

    1977-01-01

    Observations of large-scale solar velocities were made using the mean field telescope and Babcock magnetograph of the Stanford Solar Observatory. Observations were made in the magnetically insensitive ion line at 5124 A, with light from the center (limb) of the disk right (left) circularly polarized, so that the magnetograph measures the difference in wavelength between center and limb. Computer calculations are made of the wavelength difference produced by global pulsations for spherical harmonics up to second order and of the signal produced by displacing the solar image relative to polarizing optics or diffraction grating.

  18. STEREOMATRIX 3-D display system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whiteside, Stephen Earl

    1973-08-01

    STEREOMATRIX is a large-screen interactive 3-D laser display system which presents computer-generated wire figures stereoscopically. The presented image can be rotated, translated, and scaled by the system user and the perspective of the image is changed according to the position of the user. A cursor may be positioned in three dimensions to identify points and allows communication with the computer.

  19. Assessing change in large-scale forest area by visually interpreting Landsat images

    Treesearch

    Jerry D. Greer; Frederick P. Weber; Raymond L. Czaplewski

    2000-01-01

    As part of the Forest Resources Assessment 1990, the Food and Agriculture Organization of the United Nations visually interpreted a stratified random sample of 117 Landsat scenes to estimate global status and change in tropical forest area. Images from 1980 and 1990 were interpreted by a group of widely experienced technical people in many different tropical countries...

  20. Large Scale Textured Mesh Reconstruction from Mobile Mapping Images and LIDAR Scans

    NASA Astrophysics Data System (ADS)

    Boussaha, M.; Vallet, B.; Rives, P.

    2018-05-01

    The representation of 3D geometric and photometric information of the real world is one of the most challenging and extensively studied research topics in the photogrammetry and robotics communities. In this paper, we present a fully automatic framework for high-quality, large-scale 3D urban texture mapping using oriented images and LiDAR scans acquired by a terrestrial Mobile Mapping System (MMS). First, the acquired points and images are sliced into temporal chunks, ensuring a reasonable size and time consistency between geometry (points) and photometry (images). Then, a simple, fast and scalable 3D surface reconstruction relying on the sensor space topology is performed on each chunk after an isotropic sampling of the point cloud obtained from the raw LiDAR scans. Finally, the algorithm proposed in (Waechter et al., 2014) is adapted to texture the reconstructed surface with the images acquired simultaneously, ensuring a high-quality texture with no seams and global color adjustment. We evaluate our full pipeline on a dataset of 17 km of acquisition in Rouen, France, comprising nearly 2 billion points and 40,000 full-HD images. We are able to reconstruct and texture the whole acquisition in less than 30 computing hours; the entire process is highly parallel because each chunk can be processed independently in a separate thread or computer.

  1. Very large scale heterogeneous integration (VLSHI) and wafer-level vacuum packaging for infrared bolometer focal plane arrays

    NASA Astrophysics Data System (ADS)

    Forsberg, Fredrik; Roxhed, Niclas; Fischer, Andreas C.; Samel, Björn; Ericsson, Per; Hoivik, Nils; Lapadatu, Adriana; Bring, Martin; Kittilsland, Gjermund; Stemme, Göran; Niklaus, Frank

    2013-09-01

    Imaging in the long wavelength infrared (LWIR) range from 8 to 14 μm is an extremely useful tool for non-contact measurement and imaging of temperature in many industrial, automotive and security applications. However, the cost of the infrared (IR) imaging components has to be significantly reduced to make IR imaging a viable technology for many cost-sensitive applications. This paper demonstrates new and improved fabrication and packaging technologies for next-generation IR imaging detectors based on uncooled IR bolometer focal plane arrays. The proposed technologies include very large scale heterogeneous integration for combining high-performance SiGe quantum-well bolometers with electronic integrated read-out circuits, and CMOS compatible wafer-level vacuum packaging. The fabrication and characterization of bolometers with a pitch of 25 μm × 25 μm that are arranged on read-out wafers in arrays with 320 × 240 pixels are presented. The bolometers contain a multi-layer quantum-well SiGe thermistor with a temperature coefficient of resistance of -3.0%/K. The proposed CMOS compatible wafer-level vacuum packaging technology uses Cu-Sn solid-liquid interdiffusion (SLID) bonding. The presented technologies are suitable for implementation in cost-efficient fabless business models, with the potential to bring about the cost reduction needed to enable low-cost IR imaging products for industrial, security and automotive applications.

  2. Efficient processing of fluorescence images using directional multiscale representations.

    PubMed

    Labate, D; Laezza, F; Negi, P; Ozcan, B; Papadakis, M

    2014-01-01

    Recent advances in high-resolution fluorescence microscopy have enabled the systematic study of morphological changes in large populations of cells induced by chemical and genetic perturbations, facilitating the discovery of signaling pathways underlying diseases and the development of new pharmacological treatments. In these studies, though, due to the complexity of the data, quantification and analysis of morphological features are for the most part handled manually, significantly slowing data processing and often limiting the information gained to a descriptive level. Thus, there is an urgent need for highly efficient automated analysis and processing tools for fluorescence images. In this paper, we present the application of a method based on the shearlet representation to confocal image analysis of neurons. The shearlet representation is a newly emerged method designed to combine multiscale data analysis with superior directional sensitivity, making this approach particularly effective for the representation of objects defined over a wide range of scales and with highly anisotropic features. Here, we apply the shearlet representation to problems of soma detection of neurons in culture and extraction of geometrical features of neuronal processes in brain tissue, and propose it as a new framework for large-scale fluorescence image analysis of biomedical data.

  4. Imaging through ground-level turbulence by Fourier telescopy: Simulations and preliminary experiments

    NASA Astrophysics Data System (ADS)

    Randunu Pathirannehelage, Nishantha

    Fourier telescopy imaging is a recently-developed imaging method that relies on active structured-light illumination of the object. Reflected/scattered light is measured by a large "light bucket" detector; processing of the detected signal yields the magnitude and phase of spatial frequency components of the object reflectance or transmittance function. An inverse Fourier transform results in the image. In 2012 a novel method, known as time-average Fourier telescopy (TAFT), was introduced by William T. Rhodes as a means for diffraction-limited imaging through ground-level atmospheric turbulence. This method, which can be applied to long horizontal-path terrestrial imaging, addresses a need that is not solved by the adaptive optics methods being used in astronomical imaging. Field-experiment verification of the TAFT concept requires instrumentation that is not available at Florida Atlantic University. The objective of this doctoral research program is thus to demonstrate, in the absence of full-scale experimentation, the feasibility of time-average Fourier telescopy through (a) the design, construction, and testing of small-scale laboratory instrumentation capable of exploring basic Fourier telescopy data-gathering operations, and (b) the development of MATLAB-based software capable of demonstrating the effect of kilometer-scale passage of laser beams through ground-level turbulence in a numerical simulation of TAFT.

  5. Large-scale precipitation estimation using Kalpana-1 IR measurements and its validation using GPCP and GPCC data

    NASA Astrophysics Data System (ADS)

    Prakash, Satya; Mahesh, C.; Gairola, Rakesh M.

    2011-12-01

    Large-scale precipitation estimation is very important for climate science because precipitation is a major component of the earth's water and energy cycles. In the present study, the GOES precipitation index (GPI) technique has been applied to three-hourly Kalpana-1 satellite infrared (IR) images (0000, 0300, 0600, ..., 2100 UTC) for rainfall estimation, in preparation for INSAT-3D. Once the brightness temperatures of all pixels in a grid box are known, they are binned into a three-hourly 24-class histogram of IR (10.5-12.5 μm) brightness temperatures for each 1.0° × 1.0° latitude/longitude box. Daily, monthly, and seasonal rainfall have been estimated from these three-hourly rain estimates for the entire south-west monsoon period of 2009. To investigate the potential of these rainfall estimates, the monthly and seasonal estimates were validated against Global Precipitation Climatology Project and Global Precipitation Climatology Centre data. The validation results show that the present technique works very well for large-scale precipitation estimation, both qualitatively and quantitatively. The results also suggest that this simple IR-based technique can be used to estimate rainfall over tropical areas at larger temporal scales for climatological applications.
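
    For context, the classic GPI computation underlying this technique can be sketched in a few lines; the 235 K threshold and 3 mm/h coefficient are the standard GPI constants, and the Kalpana-1 operational details may differ:

    ```python
    import numpy as np

    def gpi_rainfall(tb_kelvin, hours=3.0, tb_thresh=235.0, rate=3.0):
        """tb_kelvin: 10.5-12.5 um brightness temperatures (K) in one
        1.0 x 1.0 degree box; returns accumulated rain (mm) for the slot."""
        cold_fraction = np.mean(tb_kelvin < tb_thresh)  # fraction of cold pixels
        return rate * cold_fraction * hours

    # Daily accumulation from the eight 3-hourly slots:
    # daily_mm = sum(gpi_rainfall(tb) for tb in three_hourly_grids)
    ```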

  6. Fast algorithm of low power image reformation for OLED display

    NASA Astrophysics Data System (ADS)

    Lee, Myungwoo; Kim, Taewhan

    2014-04-01

    We propose a fast algorithm for low-power image reformation for organic light-emitting diode (OLED) displays. The proposed algorithm scales the image histogram so as to reduce power consumption on an OLED display, remapping the gray levels of the pixels based on a fast analysis of the histogram of the input image while maintaining image contrast. The key idea is that a large number of gray levels are never used in an image, and these unused levels can be effectively exploited to reduce power consumption. To maintain image contrast, the gray-level remapping takes into account the size of the objects in the image to which each gray level is applied, remapping only slightly the gray levels that occur in large objects. Experiments with 24 Kodak images show that the proposed algorithm reduces power consumption by 10% while achieving a 9% contrast enhancement. The algorithm runs in linear time, so it can also be applied to high-resolution moving pictures.
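
    A minimal sketch of the core remapping idea, assuming OLED power grows with the emitted gray level: levels that never occur are squeezed out of the mapping so that bright pixels shift to lower levels while ordering is preserved. The paper's object-size-aware contrast protection is omitted here:

    ```python
    import numpy as np

    def remap_gray_levels(img, levels=256):
        """img: uint8 grayscale image; returns a power-reduced remapping."""
        hist = np.bincount(img.ravel(), minlength=levels)
        used = np.flatnonzero(hist)              # gray levels actually present
        # Spread the used levels evenly between 0 and the brightest used
        # level, so unused gaps are removed and mid levels move downward.
        lut = np.zeros(levels, dtype=np.uint8)
        lut[used] = np.linspace(0, used.max(), num=len(used)).astype(np.uint8)
        return lut[img]                          # lower mean level -> lower power
    ```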

  7. The cartography of Venus with Magellan data

    NASA Technical Reports Server (NTRS)

    Kirk, R. L.; Morgan, H. F.; Russell, J. F.

    1993-01-01

    Maps of Venus based on Magellan data are being compiled at 1:50,000,000, 1:5,000,000 and 1:1,500,000 scales. Topographic contour lines based on radar altimetry data are overprinted on the image maps, along with feature nomenclature. Map controls are based on existing knowledge of the spacecraft orbit; photogrammetric triangulation, a traditional basis for geodetic control for bodies where framing cameras were used, is not feasible with the radar images of Venus. Preliminary synthetic aperture radar (SAR) image maps have some data gaps and cosmetic inconsistencies, which will be corrected on final compilations. Eventual revision of geodetic controls and of the adopted Venusian spin-axis location will result in geometric adjustments, particularly on large-scale maps.

  8. Large field of view, fast and low dose multimodal phase-contrast imaging at high x-ray energy.

    PubMed

    Astolfo, Alberto; Endrizzi, Marco; Vittoria, Fabio A; Diemoz, Paul C; Price, Benjamin; Haig, Ian; Olivo, Alessandro

    2017-05-19

    X-ray phase contrast imaging (XPCI) is an innovative imaging technique which extends the contrast capabilities of 'conventional' absorption based x-ray systems. However, so far all XPCI implementations have suffered from one or more of the following limitations: low x-ray energies, small field of view (FOV) and long acquisition times. Those limitations relegated XPCI to a 'research-only' technique with an uncertain future in terms of large scale, high impact applications. We recently succeeded in designing, realizing and testing an XPCI system, which achieves significant steps toward simultaneously overcoming these limitations. Our system combines, for the first time, large FOV, high energy and fast scanning. Importantly, it is capable of providing high image quality at low x-ray doses, compatible with or even below those currently used in medical imaging. This extends the use of XPCI to areas which were unpractical or even inaccessible to previous XPCI solutions. We expect this will enable a long overdue translation into application fields such as security screening, industrial inspections and large FOV medical radiography - all with the inherent advantages of the XPCI multimodality.

  9. Large-area super-resolution optical imaging by using core-shell microfibers

    NASA Astrophysics Data System (ADS)

    Liu, Cheng-Yang; Lo, Wei-Chieh

    2017-09-01

    We report, numerically and experimentally for the first time, large-area super-resolution optical imaging achieved by using core-shell microfibers. The particular spatial electromagnetic waves for different core-shell microfibers are studied using finite-difference time-domain and ray-tracing calculations. The focusing properties of photonic nanojets are evaluated in terms of intensity profile and full width at half-maximum along the propagation and transverse directions. In the experiment, a standard optical fiber is chemically etched down to a 6 μm diameter and coated with different metallic thin films by glancing-angle deposition. Direct imaging of the photonic nanojets for different core-shell microfibers is performed with a scanning optical microscope system. We show that the intensity distribution of a photonic nanojet depends strongly on the metallic shell due to surface plasmon polaritons. Furthermore, large-area super-resolution optical imaging is performed using different core-shell microfibers placed over a nano-scale grating with 150 nm line width. The core-shell microfiber-assisted imaging achieves super-resolution with hundreds of times the field of view of microspheres. Possible applications of these core-shell optical microfibers include real-time large-area micro-fluidics and nano-structure inspection.

  10. JunoCam's Imaging of Jupiter

    NASA Astrophysics Data System (ADS)

    Orton, Glenn; Hansen, Candice; Momary, Thomas; Caplinger, Michael; Ravine, Michael; Atreya, Sushil; Ingersoll, Andrew; Bolton, Scott; Rogers, John; Eichstaedt, Gerald

    2017-04-01

    Juno's visible imager, JunoCam, is a wide-angle camera (58° field of view) with 4 color filters: red, green and blue (RGB) and methane at 889 nm, designed for optimal imaging of Jupiter's poles. Juno's elliptical polar orbit offers unique views of Jupiter's polar regions with spatial scales as good as 50 km/pixel. At closest approach ("perijove") the images have spatial scale down to ~3 km/pixel. As a push-frame imager on a rotating spacecraft, JunoCam uses time-delayed integration to take advantage of the spacecraft spin to extend integration time to increase signal. Images of Jupiter's poles reveal a largely uncharted region of Jupiter, as nearly all earlier spacecraft except Pioneer 11 have orbited or flown by close to the equatorial plane. Poleward of 64-68° planetocentric latitude, Jupiter's familiar east-west banded structure breaks down. Several types of discrete features appear on a darker, bluish-cast background. Clusters of circular cyclonic spirals are found immediately around the north and south poles. Oval-shaped features are also present, ranging in size down to JunoCam's resolution limits. The largest and brightest features usually have chaotic shapes; animations over ~1 hour can reveal cyclonic motion in them. Narrow linear features traverse tens of degrees of longitude and are not confined in latitude. JunoCam also detected optically thin clouds or hazes that are illuminated beyond the nightside ~1-bar terminator; one of these detected at Perijove lay some 3 scale heights above the main cloud deck. Tests have been made to detect the aurora and lightning. Most close-up images of Jupiter have been acquired at lower latitudes within 2 hours of closest approach. These images aid in understanding the data collected by other instruments on Juno that probe deeper in the atmosphere. When Jupiter was too close to the sun for ground-based observers to collect data between perijoves 1 and 2, JunoCam took a sequence of routine images to monitor large-scale features, which fortuitously yielded the earliest images of a very energetic outbreak on the rapid jet at 24°N. Images taken around perijove 3 (PJ3) allow a closer inspection of the outbreak features in a later state of evolution. Methane band images covering both polar regions within about four hours, around PJ3, show the shape and extent of the polar-haze features from favorable vantage points. Occasional, opportunistic images of the Galilean moons and the ring system were also acquired.

  11. Distributed multimodal data fusion for large scale wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Ertin, Emre

    2006-05-01

    Sensor network technology has enabled new surveillance systems in which sensor nodes equipped with processing and communication capabilities can collaboratively detect, classify and track targets of interest over a large surveillance area. In this paper we study distributed fusion of multimodal sensor data for extracting target information from a large-scale sensor network. Optimal tracking, classification, and reporting of threat events require joint consideration of multiple sensor modalities. Multiple sensor modalities improve tracking by reducing the uncertainty in the track estimates as well as by resolving track-sensor data association problems. Our approach to solving the fusion problem with a large number of multimodal sensors is the construction of likelihood maps. The likelihood maps provide summary data for the solution of the detection, tracking and classification problem, present the sensory information in a format that is easy for decision makers to interpret, and are well suited to fusion with spatial prior information such as maps and data from stand-off imaging sensors. We follow a statistical approach to combine sensor data at different levels of uncertainty and resolution: each sensor data stream is transformed into a spatio-temporal likelihood map ideally suited for fusion with imaging sensor outputs and prior geographic information about the scene. We also discuss distributed computation of the likelihood map using a gossip-based algorithm and present simulation results.
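
    The flavor of the gossip-based aggregation can be shown with randomized pairwise averaging, where every node holds a local log-likelihood map over the surveillance grid and repeated exchanges drive all nodes toward the network-wide average; the topology, grid size and iteration count below are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_nodes, grid = 20, (32, 32)
    maps = [rng.normal(size=grid) for _ in range(n_nodes)]  # local log-likelihood maps

    for _ in range(2000):                   # randomized pairwise gossip rounds
        i, j = rng.choice(n_nodes, size=2, replace=False)
        avg = 0.5 * (maps[i] + maps[j])
        maps[i] = maps[j] = avg             # both nodes adopt the pair average

    # After enough exchanges every node approximates the global average map,
    # which here stands in for the fused likelihood map.
    consensus = maps[0]
    ```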

  12. Image interpolation used in three-dimensional range data compression.

    PubMed

    Zhang, Shaoze; Zhang, Jianqi; Huang, Xi; Liu, Delian

    2016-05-20

    Advances in the field of three-dimensional (3D) scanning have made the acquisition of 3D range data increasingly easy. However, with the large size of 3D range data comes the challenge of storing and transmitting it. To address this challenge, this paper presents a framework to further compress 3D range data using image interpolation. We first use a virtual fringe-projection system to store 3D range data as images, and then apply an interpolation algorithm to the images to reduce their resolution and thereby the data size. When the 3D range data are needed, the low-resolution image is scaled up to its original resolution by applying the interpolation algorithm, the scaled-up image is decoded, and the 3D range data are recovered from the decoded result. Experimental results show that the proposed method can further reduce the data size while maintaining a low error rate.
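
    The resolution-reduction step can be sketched with a generic resize standing in for the paper's interpolation algorithm; the scale factor and OpenCV calls below are assumptions for illustration:

    ```python
    import cv2

    def compress(fringe_img, factor=0.5):
        """Downsample the fringe-encoded range image for storage."""
        h, w = fringe_img.shape[:2]
        return cv2.resize(fringe_img, (int(w * factor), int(h * factor)),
                          interpolation=cv2.INTER_AREA)

    def restore(small_img, original_shape):
        """Interpolate back to full resolution before phase decoding
        (the decoding that recovers the 3D range data is not shown)."""
        h, w = original_shape[:2]
        return cv2.resize(small_img, (w, h), interpolation=cv2.INTER_CUBIC)
    ```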

  13. Images as drivers of progress in cardiac computational modelling

    PubMed Central

    Lamata, Pablo; Casero, Ramón; Carapella, Valentina; Niederer, Steve A.; Bishop, Martin J.; Schneider, Jürgen E.; Kohl, Peter; Grau, Vicente

    2014-01-01

    Computational models have become a fundamental tool in cardiac research. Models are evolving to cover multiple scales and physical mechanisms. They are moving towards mechanistic descriptions of personalised structure and function, including effects of natural variability. These developments are underpinned to a large extent by advances in imaging technologies. This article reviews how novel imaging technologies, or the innovative use and extension of established ones, integrate with computational models and drive novel insights into cardiac biophysics. In terms of structural characterization, we discuss how imaging is allowing a wide range of scales to be considered, from cellular levels to whole organs. We analyse how the evolution from structural to functional imaging is opening new avenues for computational models, and in this respect we review methods for measurement of electrical activity, mechanics and flow. Finally, we consider ways in which combined imaging and modelling research is likely to continue advancing cardiac research, and identify some of the main challenges that remain to be solved. PMID:25117497

  14. Cross-Domain Shoe Retrieval with a Semantic Hierarchy of Attribute Classification Network.

    PubMed

    Zhan, Huijing; Shi, Boxin; Kot, Alex C

    2017-08-04

    Cross-domain shoe image retrieval is a challenging problem, because the query photo from the street domain (daily life scenarios) and the reference photo in the online domain (online shop images) have significant visual differences due to viewpoint and scale variation, self-occlusion, and cluttered backgrounds. This paper proposes the Semantic Hierarchy Of attributE Convolutional Neural Network (SHOE-CNN) with a three-level feature representation for discriminative shoe feature expression and efficient retrieval. The SHOE-CNN, with its newly designed loss function, systematically merges semantic attributes of closer visual appearance to prevent shoe images with obvious visual differences from being confused with each other; the features extracted at the image, region, and part levels effectively match shoe images across different domains. We collected a large-scale shoe dataset composed of 14341 street-domain and 12652 corresponding online-domain images with fine-grained attributes to train our network and evaluate our system. The top-20 retrieval accuracy improves significantly over a solution using pre-trained CNN features.

  15. Edge detection based on adaptive threshold b-spline wavelet for optical sub-aperture measuring

    NASA Astrophysics Data System (ADS)

    Zhang, Shiqi; Hui, Mei; Liu, Ming; Zhao, Zhu; Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin

    2015-08-01

    In research on optical synthetic-aperture imaging systems, phase congruency is the main problem, and the sub-aperture phase must be detected. The edge of a sub-aperture system is more complex than in a traditional optical imaging system. Owing to the steep slopes of large-aperture optical components, interference fringes may be quite dense in interferometric imaging, and deep phase gradients may cause a loss of phase information. An efficient edge-detection method is therefore urgently needed. Wavelet analysis is a powerful tool widely used in image processing. Owing to its multi-scale transform properties, edge regions are detected with high precision at small scales, while at larger scales noise is increasingly suppressed. In addition, an adaptive threshold method, which sets different thresholds in different regions, can separate edge points from noise. First, the fringe pattern is obtained and a cubic b-spline wavelet is adopted as the smoothing function. After multi-scale wavelet decomposition of the whole image, the local modulus maxima along gradient directions are computed. Because these maxima still contain noise, the adaptive threshold method is used to select among them: points whose modulus exceeds the threshold are taken as boundary points. Finally, morphological erosion and dilation are applied to the resulting image to obtain consecutive image boundaries.
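
    A single-scale sketch of the modulus/adaptive-threshold idea using PyWavelets: the horizontal and vertical detail bands of a biorthogonal (b-spline family) wavelet act as smoothed gradients, and a per-block threshold selects edge candidates. The wavelet choice, block size and threshold rule are assumptions for illustration:

    ```python
    import numpy as np
    import pywt

    def wavelet_edges(img, block=32):
        """Return a boolean edge map at the first decomposition scale."""
        _, (cH, cV, _) = pywt.dwt2(img.astype(float), "bior3.3")
        modulus = np.hypot(cH, cV)          # gradient magnitude per coefficient
        edges = np.zeros(modulus.shape, dtype=bool)
        for r in range(0, modulus.shape[0], block):
            for c in range(0, modulus.shape[1], block):
                tile = modulus[r:r + block, c:c + block]
                thresh = tile.mean() + tile.std()   # region-local threshold
                edges[r:r + block, c:c + block] = tile > thresh
        return edges
    ```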

  16. Remote focusing for programmable multi-layer differential multiphoton microscopy

    PubMed Central

    Hoover, Erich E.; Young, Michael D.; Chandler, Eric V.; Luo, Anding; Field, Jeffrey J.; Sheetz, Kraig E.; Sylvester, Anne W.; Squier, Jeff A.

    2010-01-01

    We present the application of remote focusing to multiphoton laser scanning microscopy and utilize this technology to demonstrate simultaneous, programmable multi-layer imaging. Remote focusing is used to independently control the axial location of multiple focal planes that can be simultaneously imaged with single element detection. This facilitates volumetric multiphoton imaging in scattering specimens and can be practically scaled to a large number of focal planes. Further, it is demonstrated that the remote focusing control can be synchronized with the lateral scan directions, enabling imaging in orthogonal scan planes. PMID:21326641

  17. Smart sensors II; Proceedings of the Seminar, San Diego, CA, July 31, August 1, 1980

    NASA Astrophysics Data System (ADS)

    Barbe, D. F.

    1980-01-01

    Topics discussed include technology for smart sensors, smart sensors for tracking and surveillance, and techniques and algorithms for smart sensors. Papers are presented on the application of very large scale integrated circuits to smart sensors, imaging charge-coupled devices for deep-space surveillance, ultra-precise star tracking using charge coupled devices, and automatic target identification of blurred images with super-resolution features. Attention is also given to smart sensors for terminal homing, algorithms for estimating image position, and the computational efficiency of multiple image registration algorithms.

  18. LFNet: A Novel Bidirectional Recurrent Convolutional Neural Network for Light-Field Image Super-Resolution.

    PubMed

    Wang, Yunlong; Liu, Fei; Zhang, Kunbo; Hou, Guangqi; Sun, Zhenan; Tan, Tieniu

    2018-09-01

    The low spatial resolution of light-field images poses significant difficulties in exploiting their advantages. To mitigate the dependency on accurate depth or disparity information as priors for light-field image super-resolution, we propose an implicitly multi-scale fusion scheme to accumulate contextual information from multiple scales for super-resolution reconstruction. The implicitly multi-scale fusion scheme is then incorporated into a bidirectional recurrent convolutional neural network, which aims to iteratively model spatial relations between horizontally or vertically adjacent sub-aperture images of light-field data. Within the network, the recurrent convolutions are modified to be more effective and flexible in modeling the spatial correlations between neighboring views. A horizontal sub-network and a vertical sub-network of the same structure are ensembled for the final output via stacked generalization. Experimental results on synthetic and real-world data sets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in peak signal-to-noise ratio and gray-scale structural similarity indexes, and also achieves superior quality for the human visual system. Furthermore, the proposed method can enhance the performance of light-field applications such as depth estimation.

  19. Performance of a novel wafer scale CMOS active pixel sensor for bio-medical imaging.

    PubMed

    Esposito, M; Anaxagoras, T; Konstantinidis, A C; Zheng, Y; Speller, R D; Evans, P M; Allinson, N M; Wells, K

    2014-07-07

    Recently CMOS active pixel sensors (APSs) have become a valuable alternative to amorphous silicon and selenium flat panel imagers (FPIs) in bio-medical imaging applications. CMOS APSs can now be scaled up to the standard 20 cm diameter wafer size by means of a reticle stitching block process. However, despite wafer-scale CMOS APSs being monolithic, sources of non-uniformity of response and regional variations can persist, representing a significant challenge for wafer-scale sensor response. Non-uniformity of stitched sensors can arise from a number of factors related to the manufacturing process, including variation of amplification, variation between readout components, wafer defects and process variations across the wafer. This paper reports on an investigation into the spatial non-uniformity and regional variations of a wafer-scale stitched CMOS APS. For the first time, a per-pixel analysis of the electro-optical performance of a wafer CMOS APS is presented, to address inhomogeneity issues arising from the stitching techniques used to manufacture wafer-scale sensors. A complete model of the signal generation in the pixel array is provided and proves capable of accounting for noise and gain variations across the pixel array. This analysis allows readout noise and conversion gain to be evaluated at the pixel level, at the stitching-block level and in regions of interest, resulting in a coefficient of variation ⩽1.9%. The uniformity of the image quality performance was further investigated in a typical x-ray application, mammography, showing a uniformity in terms of CNR among the highest when compared with mammography detectors commonly used in clinical practice. Finally, in order to compare the detection capability of this novel APS with the technology currently in use (i.e. FPIs), a theoretical evaluation of the detective quantum efficiency (DQE) at zero frequency was performed, resulting in a higher DQE for this detector compared to FPIs. Optical characterization, x-ray contrast measurements and theoretical DQE evaluation suggest that a trade-off can be found between the need for a large imaging area and the requirement of uniform imaging performance, making the DynAMITe large-area CMOS APS suitable for a range of bio-medical applications.
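
    For reference, the zero-frequency detective quantum efficiency evaluated above follows the standard signal-to-noise definition:

    ```latex
    \mathrm{DQE}(0) = \frac{\mathrm{SNR}_{\mathrm{out}}^{2}}{\mathrm{SNR}_{\mathrm{in}}^{2}}
    ```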

  20. Scalable subsurface inverse modeling of huge data sets with an application to tracer concentration breakthrough data from magnetic resonance imaging

    DOE PAGES

    Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.; ...

    2016-06-09

    Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydro-geophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with "big data" processing and numerous large-scale numerical simulations. To tackle these difficulties, the Principal Component Geostatistical Approach (PCGA) has been proposed as a "Jacobian-free" inversion method that requires far fewer forward simulation runs per iteration than the number of unknown parameters and measurements needed in traditional inversion methods. PCGA can be conveniently linked to any multi-physics simulation software with independent parallel executions. In this paper, we extend PCGA to handle a large number of measurements (e.g., 10^6 or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sandbox using about 6 million transient tracer concentration measurements obtained by magnetic resonance imaging. Since each individual observation carries little information on the K distribution, the data were compressed to the zeroth temporal moment of the breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Only about 2,000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method.
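
    The moment-based compression reduces each voxel's breakthrough curve to one or two numbers; a minimal sketch follows (the normalization shown is one common convention and may differ from the study's exact definition):

    ```python
    import numpy as np

    def temporal_moments(t, c):
        """t: sample times; c: tracer concentration at one voxel."""
        m0 = np.trapz(c, t)            # zeroth temporal moment
        m1 = np.trapz(t * c, t)        # first temporal moment
        mean_travel_time = m1 / m0     # normalized first moment
        return m0, mean_travel_time
    ```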

  1. Constrained Deep Weak Supervision for Histopathology Image Segmentation.

    PubMed

    Jia, Zhipeng; Huang, Xingyi; Chang, Eric I-Chao; Xu, Yan

    2017-11-01

    In this paper, we develop a new weakly supervised learning algorithm to learn to segment cancerous regions in histopathology images. The approach operates within a multiple instance learning (MIL) framework with a new formulation, deep weak supervision (DWS); we also propose an effective way to introduce constraints into our neural networks to assist the learning process. The contributions of our algorithm are threefold: 1) we build an end-to-end learning system that segments cancerous regions with fully convolutional networks (FCNs) in which image-to-image weakly supervised learning is performed; 2) we develop a DWS formulation to exploit multi-scale learning under weak supervision within FCNs; and 3) constraints on positive instances are introduced to effectively exploit additional weakly supervised information that is easy to obtain, yielding a significant boost to the learning process. The proposed algorithm, abbreviated as DWS-MIL, is easy to implement and can be trained efficiently. Our system demonstrates state-of-the-art results on large-scale histopathology image data sets and can be applied to various applications in medical imaging beyond histopathology images, such as MRI, CT, and ultrasound images.

  2. Massively parallel electrical conductivity imaging of the subsurface: Applications to hydrocarbon exploration

    NASA Astrophysics Data System (ADS)

    Newman, Gregory A.; Commer, Michael

    2009-07-01

    Three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. The imaging technology employs controlled source electromagnetic (CSEM) and magnetotelluric (MT) fields and treats geological media exhibiting transverse anisotropy. Moreover when combined with established seismic methods, direct imaging of reservoir fluids is possible. Because of the size of the 3D conductivity imaging problem, strategies are required exploiting computational parallelism and optimal meshing. The algorithm thus developed has been shown to scale to tens of thousands of processors. In one imaging experiment, 32,768 tasks/processors on the IBM Watson Research Blue Gene/L supercomputer were successfully utilized. Over a 24 hour period we were able to image a large scale field data set that previously required over four months of processing time on distributed clusters based on Intel or AMD processors utilizing 1024 tasks on an InfiniBand fabric. Electrical conductivity imaging using massively parallel computational resources produces results that cannot be obtained otherwise and are consistent with timeframes required for practical exploration problems.

  3. IDEAL: Images Across Domains, Experiments, Algorithms and Learning

    NASA Astrophysics Data System (ADS)

    Ushizima, Daniela M.; Bale, Hrishikesh A.; Bethel, E. Wes; Ercius, Peter; Helms, Brett A.; Krishnan, Harinarayan; Grinberg, Lea T.; Haranczyk, Maciej; Macdowell, Alastair A.; Odziomek, Katarzyna; Parkinson, Dilworth Y.; Perciano, Talita; Ritchie, Robert O.; Yang, Chao

    2016-11-01

    Research across science domains is increasingly reliant on image-centric data. Software tools are in high demand to uncover relevant, but hidden, information in digital images, such as those coming from faster next generation high-throughput imaging platforms. The challenge is to analyze the data torrent generated by the advanced instruments efficiently, and provide insights such as measurements for decision-making. In this paper, we overview work performed by an interdisciplinary team of computational and materials scientists, aimed at designing software applications and coordinating research efforts connecting (1) emerging algorithms for dealing with large and complex datasets; (2) data analysis methods with emphasis in pattern recognition and machine learning; and (3) advances in evolving computer architectures. Engineering tools around these efforts accelerate the analyses of image-based recordings, improve reusability and reproducibility, scale scientific procedures by reducing time between experiments, increase efficiency, and open opportunities for more users of the imaging facilities. This paper describes our algorithms and software tools, showing results across image scales, demonstrating how our framework plays a role in improving image understanding for quality control of existent materials and discovery of new compounds.

  4. An intergrated image matching algorithm and its application in the production of lunar map based on Chang'E-2 images

    NASA Astrophysics Data System (ADS)

    Wang, F.; Ren, X.; Liu, J.; Li, C.

    2012-12-01

    An accurate topographic map is a requisite for nearly every phase of research on lunar surface, as well as an essential tool for spacecraft mission planning and operating. Automatic image matching is a key component in this process that could ensure both quality and efficiency in the production of digital topographic map for the whole lunar coverage. It also provides the basis for lunar photographic surveying block adjustment. Image matching is relatively easy when encountered with good image texture conditions. However, on lunar images with characteristics such as constantly changing lighting conditions, large rotation angle, few or homogeneous texture and low image contrasts, it becomes a difficult and challenging job. Thus, we require a robust algorithm that is capable of dealing with light effect and image deformation to fulfill this task. In order to obtain a comprehensive review of currently dominated feature point extraction operators and test whether they are suitable for lunar images, we applied several operators, such as Harris, Forstner, Moravec, SIFT, to images from Chang'E-2 spacecraft. We found that SITF (Scale Invariant Feature Transform) is a scale invariant interest point detector that can provide robustness against errors caused by image distortions from scale, orientation or illumination condition changes. Meanwhile, its capability in detecting blob-like interest points satisfies the image characteristics of Chang'E-2. However, the uneven distributed and low accurate matching results cannot meet the practical requirements in lunar photogrammetry. In contrast, some high-precision corner detectors, such as Harris, Forstner, Moravec, are limited in their sensitivities to geometric rotation. Therefore, this paper proposed a least square matching algorithm that combines the advantages of both local feature detector and corner detector. We experiment this novel method in several sites. The accuracy assessment shows that the overall matching error is within 0.3 pixel and the matching reliability can reach 98%, which proves its robustness. This method had been successfully applied to over 700 scenes of lunar images that cover the entire moon, in finding corresponding pixels in a pair of images from adjacent tracks and aiding the automatic lunar image mosaicing. The completion of the 7 meter resolution lunar map shows the promise of this least square matching algorithm in applications with a large quantity of images to be processed.

  5. Multi-Scale Fractal Analysis of Image Texture and Pattern

    NASA Technical Reports Server (NTRS)

    Emerson, Charles W.

    1998-01-01

    Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely independent of scale. Self-similarity is defined as a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. An ideal fractal (or monofractal) curve or surface has a constant dimension over all scales, although it may not be an integer value. This is in contrast to Euclidean or topological dimensions, where discrete one, two, and three dimensions describe curves, planes, and volumes. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution. However, most geographical phenomena are not strictly self-similar at all scales, but they can often be modeled by a stochastic fractal in which the scaling and self-similarity properties of the fractal have inexact patterns that can be described by statistics. Stochastic fractal sets relax the monofractal self-similarity assumption and measure many scales and resolutions in order to represent the varying form of a phenomenon as a function of local variables across space. In image interpretation, pattern is defined as the overall spatial form of related features, and the repetition of certain forms is a characteristic pattern found in many cultural objects and some natural features. Texture is the visual impression of coarseness or smoothness caused by the variability or uniformity of image tone or color. A potential use of fractals concerns the analysis of image texture. In these situations it is commonly observed that the degree of roughness or inexactness in an image or surface is a function of scale and not of experimental technique. The fractal dimension of remote sensing data could yield quantitative insight on the spatial complexity and information content contained within these data. A software package known as the Image Characterization and Modeling System (ICAMS) was used to explore how fractal dimension is related to surface texture and pattern. The ICAMS software was verified using simulated images of ideal fractal surfaces with specified dimensions. The fractal dimension for areas of homogeneous land cover in the vicinity of Huntsville, Alabama was measured to investigate the relationship between texture and resolution for different land covers.

  6. Linking Sediment Transport to Coherent Flow Structures: First Results Using 2-Phase PIV and Considerations of the Origin of Large-Scale Turbulence

    NASA Astrophysics Data System (ADS)

    Best, J.

    2004-05-01

    The origin and scaling of large-scale coherent flow structures has been of central interest in furthering understanding of the nature of turbulent boundary layers, and recent work has shown the presence of large-scale turbulent flow structures that may extend through the whole flow depth. Such structures may dominate the entrainment of bedload sediment and advection of fine sediment in suspension. However, we still know remarkably little of the interactions between the dynamics of coherent flow structures and sediment transport, and its implications for ecosystem dynamics. This paper will discuss the first results of two-phase particle imaging velocimetry (PIV) that has been used to visualize large-scale turbulent flow structures moving over a flat bed in a water channel, and the motion of sand particles within these flows. The talk will outline the methodology, involving the fluorescent tagging of sediment and its discrimination from the fluid phase, and show results that illustrate the key role of these large-scale structures in the transport of sediment. Additionally, the presence of these structures will be discussed in relation to the origin of vorticity within flat-bed boundary layers and recent models that envisage these large-scale motions as being linked to whole-flow field structures. Discussion will focus on if these recent models simply reflect the organization of turbulent boundary layer structure and vortex packets, some of which are amply visualised at the laminar-turbulent transition.

  7. Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.

    2016-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This e ort was undertaken to not only assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but to also begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.

  8. Rotation and scale change invariant point pattern relaxation matching by the Hopfield neural network

    NASA Astrophysics Data System (ADS)

    Sang, Nong; Zhang, Tianxu

    1997-12-01

    Relaxation matching is one of the most relevant methods for image matching. The original relaxation matching technique using point patterns is sensitive to rotations and scale changes. We improve the original point pattern relaxation matching technique to be invariant to rotations and scale changes. A method that makes the Hopfield neural network perform this matching process is discussed. An advantage of this is that the relaxation matching process can be performed in real time with the neural network's massively parallel capability to process information. Experimental results with large simulated images demonstrate the effectiveness and feasibility of the method to perform point patten relaxation matching invariant to rotations and scale changes and the method to perform this matching by the Hopfield neural network. In addition, we show that the method presented can be tolerant to small random error.

  9. IIPImage: Large-image visualization

    NASA Astrophysics Data System (ADS)

    Pillay, Ruven

    2014-08-01

    IIPImage is an advanced high-performance feature-rich image server system that enables online access to full resolution floating point (as well as other bit depth) images at terabyte scales. Paired with the VisiOmatic (ascl:1408.010) celestial image viewer, the system can comfortably handle gigapixel size images as well as advanced image features such as both 8, 16 and 32 bit depths, CIELAB colorimetric images and scientific imagery such as multispectral images. Streaming is tile-based, which enables viewing, navigating and zooming in real-time around gigapixel size images. Source images can be in either TIFF or JPEG2000 format. Whole images or regions within images can also be rapidly and dynamically resized and exported by the server from a single source image without the need to store multiple files in various sizes.

  10. Sparse imaging for fast electron microscopy

    NASA Astrophysics Data System (ADS)

    Anderson, Hyrum S.; Ilic-Helms, Jovana; Rohrer, Brandon; Wheeler, Jason; Larson, Kurt

    2013-02-01

    Scanning electron microscopes (SEMs) are used in neuroscience and materials science to image centimeters of sample area at nanometer scales. Since imaging rates are in large part SNR-limited, large collections can lead to weeks of around-the-clock imaging time. To increase data collection speed, we propose and demonstrate on an operational SEM a fast method to sparsely sample and reconstruct smooth images. To accurately localize the electron probe position at fast scan rates, we model the dynamics of the scan coils, and use the model to rapidly and accurately visit a randomly selected subset of pixel locations. Images are reconstructed from the undersampled data by compressed sensing inversion using image smoothness as a prior. We report image fidelity as a function of acquisition speed by comparing traditional raster to sparse imaging modes. Our approach is equally applicable to other domains of nanometer microscopy in which the time to position a probe is a limiting factor (e.g., atomic force microscopy), or in which excessive electron doses might otherwise alter the sample being observed (e.g., scanning transmission electron microscopy).

  11. Sun-induced fluorescence - a new probe of photosynthesis: First maps from the imaging spectrometer HyPlant.

    PubMed

    Rascher, U; Alonso, L; Burkart, A; Cilia, C; Cogliati, S; Colombo, R; Damm, A; Drusch, M; Guanter, L; Hanus, J; Hyvärinen, T; Julitta, T; Jussila, J; Kataja, K; Kokkalis, P; Kraft, S; Kraska, T; Matveeva, M; Moreno, J; Muller, O; Panigada, C; Pikl, M; Pinto, F; Prey, L; Pude, R; Rossini, M; Schickling, A; Schurr, U; Schüttemeyer, D; Verrelst, J; Zemek, F

    2015-12-01

    Variations in photosynthesis still cause substantial uncertainties in predicting photosynthetic CO2 uptake rates and monitoring plant stress. Changes in actual photosynthesis that are not related to greenness of vegetation are difficult to measure by reflectance based optical remote sensing techniques. Several activities are underway to evaluate the sun-induced fluorescence signal on the ground and on a coarse spatial scale using space-borne imaging spectrometers. Intermediate-scale observations using airborne-based imaging spectroscopy, which are critical to bridge the existing gap between small-scale field studies and global observations, are still insufficient. Here we present the first validated maps of sun-induced fluorescence in that critical, intermediate spatial resolution, employing the novel airborne imaging spectrometer HyPlant. HyPlant has an unprecedented spectral resolution, which allows for the first time quantifying sun-induced fluorescence fluxes in physical units according to the Fraunhofer Line Depth Principle that exploits solar and atmospheric absorption bands. Maps of sun-induced fluorescence show a large spatial variability between different vegetation types, which complement classical remote sensing approaches. Different crop types largely differ in emitting fluorescence that additionally changes within the seasonal cycle and thus may be related to the seasonal activation and deactivation of the photosynthetic machinery. We argue that sun-induced fluorescence emission is related to two processes: (i) the total absorbed radiation by photosynthetically active chlorophyll; and (ii) the functional status of actual photosynthesis and vegetation stress. © 2015 John Wiley & Sons Ltd.

  12. Scintillometer networks for calibration and validation of energy balance and soil moisture remote sensing algorithms

    NASA Astrophysics Data System (ADS)

    Hendrickx, Jan M. H.; Kleissl, Jan; Gómez Vélez, Jesús D.; Hong, Sung-ho; Fábrega Duque, José R.; Vega, David; Moreno Ramírez, Hernán A.; Ogden, Fred L.

    2007-04-01

    Accurate estimation of sensible and latent heat fluxes as well as soil moisture from remotely sensed satellite images poses a great challenge. Yet, it is critical to face this challenge since the estimation of spatial and temporal distributions of these parameters over large areas is impossible using only ground measurements. A major difficulty for the calibration and validation of operational remote sensing methods such as SEBAL, METRIC, and ALEXI is the ground measurement of sensible heat fluxes at a scale similar to the spatial resolution of the remote sensing image. While the spatial length scale of remote sensing images covers a range from 30 m (LandSat) to 1000 m (MODIS) direct methods to measure sensible heat fluxes such as eddy covariance (EC) only provide point measurements at a scale that may be considerably smaller than the estimate obtained from a remote sensing method. The Large Aperture scintillometer (LAS) flux footprint area is larger (up to 5000 m long) and its spatial extent better constraint than that of EC systems. Therefore, scintillometers offer the unique possibility of measuring the vertical flux of sensible heat averaged over areas comparable with several pixels of a satellite image (up to about 40 Landsat thermal pixels or about 5 MODIS thermal pixels). The objective of this paper is to present our experiences with an existing network of seven scintillometers in New Mexico and a planned network of three scintillometers in the humid tropics of Panama and Colombia.

  13. HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.

    PubMed

    Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye

    2017-02-09

    In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to achieve more effective accomplishment of the coarseto- fine tasks for hierarchical visual recognition. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it can provide an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and more discriminative tree classifier (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to the new training images and the new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.

  14. Automated AFM for small-scale and large-scale surface profiling in CMP applications

    NASA Astrophysics Data System (ADS)

    Zandiatashbar, Ardavan; Kim, Byong; Yoo, Young-kook; Lee, Keibock; Jo, Ahjin; Lee, Ju Suk; Cho, Sang-Joon; Park, Sang-il

    2018-03-01

    As the feature size is shrinking in the foundries, the need for inline high resolution surface profiling with versatile capabilities is increasing. One of the important areas of this need is chemical mechanical planarization (CMP) process. We introduce a new generation of atomic force profiler (AFP) using decoupled scanners design. The system is capable of providing small-scale profiling using XY scanner and large-scale profiling using sliding stage. Decoupled scanners design enables enhanced vision which helps minimizing the positioning error for locations of interest in case of highly polished dies. Non-Contact mode imaging is another feature of interest in this system which is used for surface roughness measurement, automatic defect review, and deep trench measurement. Examples of the measurements performed using the atomic force profiler are demonstrated.

  15. An evaluation of multi-probe locality sensitive hashing for computing similarities over web-scale query logs.

    PubMed

    Cormode, Graham; Dasgupta, Anirban; Goyal, Amit; Lee, Chi Hoon

    2018-01-01

    Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a, LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate our variants of LSH achieve the robust performance with better recall compared with "vanilla" LSH, even when using the same amount of space.

  16. Scientific Accomplishments for ARL Brain Structure-Function Couplings Research on Large-Scale Brain Networks from FY11-FY13 (DSI Final Report)

    DTIC Science & Technology

    2014-03-01

    streamlines) from two types of diffusion weighted imaging scans, diffusion tensor imaging ( DTI ) and diffusion spectrum imaging (DSI). We examined...individuals. Importantly, the results also showed that this effect was greater for the DTI method than the DSI method. This suggested that DTI can better...compared to level surface walking. This project combines experimental EEG data and electromyography (EMG) data recorded from seven muscles of the leg

  17. Semantic classification of urban buildings combining VHR image and GIS data: An improved random forest approach

    NASA Astrophysics Data System (ADS)

    Du, Shihong; Zhang, Fangli; Zhang, Xiuyuan

    2015-07-01

    While most existing studies have focused on extracting geometric information on buildings, only a few have concentrated on semantic information. The lack of semantic information cannot satisfy many demands on resolving environmental and social issues. This study presents an approach to semantically classify buildings into much finer categories than those of existing studies by learning random forest (RF) classifier from a large number of imbalanced samples with high-dimensional features. First, a two-level segmentation mechanism combining GIS and VHR image produces single image objects at a large scale and intra-object components at a small scale. Second, a semi-supervised method chooses a large number of unbiased samples by considering the spatial proximity and intra-cluster similarity of buildings. Third, two important improvements in RF classifier are made: a voting-distribution ranked rule for reducing the influences of imbalanced samples on classification accuracy and a feature importance measurement for evaluating each feature's contribution to the recognition of each category. Fourth, the semantic classification of urban buildings is practically conducted in Beijing city, and the results demonstrate that the proposed approach is effective and accurate. The seven categories used in the study are finer than those in existing work and more helpful to studying many environmental and social problems.

  18. Centralized automated quality assurance for large scale health care systems. A pilot method for some aspects of dental radiography.

    PubMed

    Benn, D K; Minden, N J; Pettigrew, J C; Shim, M

    1994-08-01

    President Clinton's Health Security Act proposes the formation of large scale health plans with improved quality assurance. Dental radiography consumes 4% ($1.2 billion in 1990) of total dental expenditure yet regular systematic office quality assurance is not performed. A pilot automated method is described for assessing density of exposed film and fogging of unexposed processed film. A workstation and camera were used to input intraoral radiographs. Test images were produced from a phantom jaw with increasing exposure times. Two radiologists subjectively classified the images as too light, acceptable, or too dark. A computer program automatically classified global grey level histograms from the test images as too light, acceptable, or too dark. The program correctly classified 95% of 88 clinical films. Optical density of unexposed film in the range 0.15 to 0.52 measured by computer was reliable to better than 0.01. Further work is needed to see if comprehensive centralized automated radiographic quality assurance systems with feedback to dentists are feasible, are able to improve quality, and are significantly cheaper than conventional clerical methods.

  19. Task-driven dictionary learning.

    PubMed

    Mairal, Julien; Bach, Francis; Ponce, Jean

    2012-04-01

    Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.

  20. Real-Time Three-Dimensional Cell Segmentation in Large-Scale Microscopy Data of Developing Embryos.

    PubMed

    Stegmaier, Johannes; Amat, Fernando; Lemon, William C; McDole, Katie; Wan, Yinan; Teodoro, George; Mikut, Ralf; Keller, Philipp J

    2016-01-25

    We present the Real-time Accurate Cell-shape Extractor (RACE), a high-throughput image analysis framework for automated three-dimensional cell segmentation in large-scale images. RACE is 55-330 times faster and 2-5 times more accurate than state-of-the-art methods. We demonstrate the generality of RACE by extracting cell-shape information from entire Drosophila, zebrafish, and mouse embryos imaged with confocal and light-sheet microscopes. Using RACE, we automatically reconstructed cellular-resolution tissue anisotropy maps across developing Drosophila embryos and quantified differences in cell-shape dynamics in wild-type and mutant embryos. We furthermore integrated RACE with our framework for automated cell lineaging and performed joint segmentation and cell tracking in entire Drosophila embryos. RACE processed these terabyte-sized datasets on a single computer within 1.4 days. RACE is easy to use, as it requires adjustment of only three parameters, takes full advantage of state-of-the-art multi-core processors and graphics cards, and is available as open-source software for Windows, Linux, and Mac OS. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. A Multiscale Surface Water Temperature Data Acquisition Platform: Tests on Lake Geneva, Switzerland

    NASA Astrophysics Data System (ADS)

    Barry, D. A.; Irani Rahaghi, A.; Lemmin, U.; Riffler, M.; Wunderle, S.

    2015-12-01

    An improved understanding of surface transport processes is necessary to predict sediment, pollutant and phytoplankton patterns in large lakes. Lake surface water temperature (LSWT), which varies in space and time, reflects meteorological and climatological forcing more than any other physical lake parameter. There are different data sources for LSWT mapping, including remote sensing and in situ measurements. Satellite data can be suitable for detecting large-scale thermal patterns, but not meso- or small scale processes. Lake surface thermography, investigated in this study, has finer resolution compared to satellite images. Thermography at the meso-scale provides the ability to ground-truth satellite imagery over scales of one to several satellite image pixels. On the other hand, thermography data can be used as a control in schemes to upscale local measurements that account for surface energy fluxes and the vertical energy budget. Independently, since such data can be collected at high frequency, they can be also useful in capturing changes in the surface signatures of meso-scale eddies and thus to quantify mixing processes. In the present study, we report results from a Balloon Launched Imaging and Monitoring Platform (BLIMP), which was developed in order to measure the LSWT at meso-scale. The BLIMP consists of a small balloon that is tethered to a boat and equipped with thermal and RGB cameras, as well as other instrumentation for location and communication. Several deployments were carried out on Lake Geneva. In a typical deployment, the BLIMP is towed by a boat, and collects high frequency data from different heights (i.e., spatial resolutions) and locations. Simultaneous ground-truthing of the BLIMP data is achieved using an autonomous craft that collects a variety of data, including in situ surface/near surface temperatures, radiation and meteorological data in the area covered by the BLIMP images. With suitable scaling, our results show good consistency between in situ, BLIMP and concurrent satellite data. In addition, the BLIMP thermography reveals (hydrodynamically-driven) structures in the LSWT - an obvious example being mixing of river discharges.

  2. What Actually Happens When Granular Materials Deform Under Shear: A Look Within

    NASA Astrophysics Data System (ADS)

    Viggiani, C.

    2012-12-01

    We all know that geomaterials (soil and rock) are composed of particles. However, when dealing with them, we often use continuum models, which ignore particles and make use of abstract variables such stress and strain. Continuum mechanics is the classical tool that geotechnical engineers have always used for their everyday calculations: estimating settlements of an embankment, the deformation of a sheet pile wall, the stability of a dam or a foundation, etc. History tells us that, in general, this works fine. While we are happily ignoring particles, they will at times come back to haunt us. This happens when deformation is localized in regions so small that the detail of the soil's (or rock's) particular structure cannot safely be ignored. Failure is the perfect example of this. Researchers in geomechanics (and more generally in solid mechanics) have long since known that all classical continuum models typically break down when trying to model failure. All sorts of numerical troubles ensue - all of them pointing to a fundamental deficiency of the model: the lack of microstructure. N.B.: the term microstructure doesn't prescribe a dimension (e.g., microns), but rather a scale - the scale of the mechanisms responsible for failure. A possible remedy to this deficiency is represented by the so-called "double scale" models, in which the small scale (the microstructure) is explicitly taken into account. Typically, two numerical problems are defined and solved - one at the large (continuum) scale, and the other at the small scale. This sort of approach requires a link between the two scales, to complete the picture. Imagine we are solving at the small scale a simulation of an assembly of a few grains, for example using the Discrete Element Method, whose results are in turn fed back to the large scale Finite Element simulation. The key feature of a double scale model is that one can inject the relevant physics at the appropriate scale. The success of such a model crucially depends on the quality of the physics one injects: ideally, this comes directly from experiments. In Grenoble, this is what we do, combining various advanced experimental techniques. We are able to image, in three dimensions and at small scales, the deformation processes accompanying failure in geomaterials. This allows us to understand these processes and subsequently to define models at a pertinently small scale. I will present a few examples of the kind of experimental results which could inform a micro scale model. X-ray micro tomography imaging is the key measurement tool. This is used during loading, providing complete 3D images of a sand specimen at several stages throughout a triaxial compression test. Images from x-rays are then analyzed either in a continuum sense (using 3D Digital Image Correlation) or looking at the individual particle kinematics (Particle Tracking). I will show some of our most recent results, in which individual sand grains are followed with a technique combining very recent developments in image correlation and particle tracking. These advanced techniques offer us a look at what actually happens when a granular material deforms and eventually fails.

  3. Radiologic image communication and archive service: a secure, scalable, shared approach

    NASA Astrophysics Data System (ADS)

    Fellingham, Linda L.; Kohli, Jagdish C.

    1995-11-01

    The Radiologic Image Communication and Archive (RICA) service is designed to provide a shared archive for medical images to the widest possible audience of customers. Images are acquired from a number of different modalities, each available from many different vendors. Images are acquired digitally from those modalities which support direct digital output and by digitizing films for projection x-ray exams. The RICA Central Archive receives standard DICOM 3.0 messages and data streams from the medical imaging devices at customer institutions over the public telecommunication network. RICA represents a completely scalable resource. The user pays only for what he is using today with the full assurance that as the volume of image data that he wishes to send to the archive increases, the capacity will be there to accept it. To provide this seamless scalability imposes several requirements on the RICA architecture: (1) RICA must support the full array of transport services. (2) The Archive Interface must scale cost-effectively to support local networks that range from the very small (one x-ray digitizer in a medical clinic) to the very large and complex (a large hospital with several CTs, MRs, Nuclear medicine devices, ultrasound machines, CRs, and x-ray digitizers). (3) The Archive Server must scale cost-effectively to support rapidly increasing demands for service providing storage for and access to millions of patients and hundreds of millions of images. The architecture must support the incorporation of improved technology as it becomes available to maintain performance and remain cost-effective as demand rises.

  4. Multiscale pore structure and constitutive models of fine-grained rocks

    NASA Astrophysics Data System (ADS)

    Heath, J. E.; Dewers, T. A.; Shields, E. A.; Yoon, H.; Milliken, K. L.

    2017-12-01

    A foundational concept of continuum poromechanics is the representative elementary volume or REV: an amount of material large enough that pore- or grain-scale fluctuations in relevant properties are dissipated to a definable mean, but smaller than length scales of heterogeneity. We determine 2D-equivalent representative elementary areas (REAs) of pore areal fraction of three major types of mudrocks by applying multi-beam scanning electron microscopy (mSEM) to obtain terapixel image mosaics. Image analysis obtains pore areal fraction and pore size and shape as a function of progressively larger measurement areas. Using backscattering imaging and mSEM data, pores are identified by the components within which they occur, such as in organics or the clastic matrix. We correlate pore areal fraction with nano-indentation, micropillar compression, and axysimmetic testing at multiple length scales on a terrigenous-argillaceous mudrock sample. The combined data set is used to: investigate representative elementary volumes (and areas for the 2D images); determine if scale separation occurs; and determine if transport and mechanical properties at a given length scale can be statistically defined. Clear scale separation occurs between REAs and observable heterogeneity in two of the samples. A highly-laminated sample exhibits fine-scale heterogeneity and an overlapping in scales, in which case typical continuum assumptions on statistical variability may break down. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.

  5. Imaging of Brain Slices with a Genetically Encoded Voltage Indicator.

    PubMed

    Quicke, Peter; Barnes, Samuel J; Knöpfel, Thomas

    2017-01-01

    Functional fluorescence microscopy of brain slices using voltage sensitive fluorescent proteins (VSFPs) allows large scale electrophysiological monitoring of neuronal excitation and inhibition. We describe the equipment and techniques needed to successfully record functional responses optical voltage signals from cells expressing a voltage indicator such as VSFP Butterfly 1.2. We also discuss the advantages of voltage imaging and the challenges it presents.

  6. Simulation and thermal imaging of the 2006 Esperanza Wildfire in southern California: application of a coupled weather-wildland fire model

    Treesearch

    Janice L. Coen; Philip J Riggan

    2014-01-01

    The 2006 Esperanza Fire in Riverside County, California, was simulated with the Coupled Atmosphere-Wildland Fire Environment (CAWFE) model to examine how dynamic interactions of the atmosphere with large-scale fire spread and energy release may affect observed patterns of fire behavior as mapped using the FireMapper thermal imaging radiometer. CAWFE simulated the...

  7. Measurement of Device Parameters Using Image Recovery Techniques in Large-Scale IC Devices

    NASA Technical Reports Server (NTRS)

    Scheick, Leif; Edmonds, Larry

    2004-01-01

    Devices that respond to radiation on a cell level will produce histograms showing the relative frequency of cell damage as a function of damage. The measured distribution is the convolution of distributions from radiation responses, measurement noise, and manufacturing parameters. A method of extracting device characteristics and parameters from measured distributions via mathematical and image subtraction techniques is described.

  8. Improved Calibration Shows Images True Colors

    NASA Technical Reports Server (NTRS)

    2015-01-01

    Innovative Imaging and Research, located at Stennis Space Center, used a single SBIR contract with the center to build a large-scale integrating sphere, capable of calibrating a whole array of cameras simultaneously, at a fraction of the usual cost for such a device. Through the use of LEDs, the company also made the sphere far more efficient than existing products and able to mimic sunlight.

  9. Global Carbon Dioxide Transport from AIRS Data, July 2009

    NASA Image and Video Library

    2009-11-09

    Created with data acquired by JPL Atmospheric Infrared Sounder instrument during July 2009 this image shows large-scale patterns of carbon dioxide concentrations that are transported around Earth by the general circulation of the atmosphere.

  10. Web-based scoring of the dicentric assay, a collaborative biodosimetric scoring strategy for population triage in large scale radiation accidents.

    PubMed

    Romm, H; Ainsbury, E; Bajinskis, A; Barnard, S; Barquinero, J F; Barrios, L; Beinke, C; Puig-Casanovas, R; Deperas-Kaminska, M; Gregoire, E; Oestreicher, U; Lindholm, C; Moquet, J; Rothkamm, K; Sommer, S; Thierens, H; Vral, A; Vandersickel, V; Wojcik, A

    2014-05-01

    In the case of a large scale radiation accident high throughput methods of biological dosimetry for population triage are needed to identify individuals requiring clinical treatment. The dicentric assay performed in web-based scoring mode may be a very suitable technique. Within the MULTIBIODOSE EU FP7 project a network is being established of 8 laboratories with expertise in dose estimations based on the dicentric assay. Here, the manual dicentric assay was tested in a web-based scoring mode. More than 23,000 high resolution images of metaphase spreads (only first mitosis) were captured by four laboratories and established as image galleries on the internet (cloud). The galleries included images of a complete dose effect curve (0-5.0 Gy) and three types of irradiation scenarios simulating acute whole body, partial body and protracted exposure. The blood samples had been irradiated in vitro with gamma rays at the University of Ghent, Belgium. Two laboratories provided image galleries from Fluorescence plus Giemsa stained slides (3 h colcemid) and the image galleries from the other two laboratories contained images from Giemsa stained preparations (24 h colcemid). Each of the 8 participating laboratories analysed 3 dose points of the dose effect curve (scoring 100 cells for each point) and 3 unknown dose points (50 cells) for each of the 3 simulated irradiation scenarios. At first all analyses were performed in a QuickScan Mode without scoring individual chromosomes, followed by conventional scoring (only complete cells, 46 centromeres). The calibration curves obtained using these two scoring methods were very similar, with no significant difference in the linear-quadratic curve coefficients. Analysis of variance showed a significant effect of dose on the yield of dicentrics, but no significant effect of the laboratories, different methods of slide preparation or different incubation times used for colcemid. The results obtained to date within the MULTIBIODOSE project by a network of 8 collaborating laboratories throughout Europe are very promising. The dicentric assay in the web based scoring mode as a high throughput scoring strategy is a useful application for biodosimetry in the case of a large scale radiation accident.

  11. Block Adjustment and Image Matching of WORLDVIEW-3 Stereo Pairs and Accuracy Evaluation

    NASA Astrophysics Data System (ADS)

    Zuo, C.; Xiao, X.; Hou, Q.; Li, B.

    2018-05-01

    WorldView-3, as a high-resolution commercial earth observation satellite, which is launched by Digital Global, provides panchromatic imagery of 0.31 m resolution. The positioning accuracy is less than 3.5 meter CE90 without ground control, which can use for large scale topographic mapping. This paper presented the block adjustment for WorldView-3 based on RPC model and achieved the accuracy of 1 : 2000 scale topographic mapping with few control points. On the base of stereo orientation result, this paper applied two kinds of image matching algorithm for DSM extraction: LQM and SGM. Finally, this paper compared the accuracy of the point cloud generated by the two image matching methods with the reference data which was acquired by an airborne laser scanner. The results showed that the RPC adjustment model of WorldView-3 image with small number of GCPs could satisfy the requirement of Chinese Surveying and Mapping regulations for 1 : 2000 scale topographic maps. And the point cloud result obtained through WorldView-3 stereo image matching had higher elevation accuracy, the RMS error of elevation for bare ground area is 0.45 m, while for buildings the accuracy can almost reach 1 meter.

  12. Land use mapping and modelling for the Phoenix Quadrangle

    NASA Technical Reports Server (NTRS)

    Place, J. L. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. Changes in the land use in the Phoenix (1:250,000 scale) Quadrangle in Arizona have been mapped using only the images from ERTS-1, tending to verify the utility of a land use classification system proposed for use with ERTS images. Seasonal changes were studied on successive ERTS-1 images, particularly large scale color composite transparencies for August, October, February, and May, and this seasonal variation aided delineation of land use boundaries. Types of equipment used to aid interpretation included color additive viewer, a twenty-power magnifier, a density slicer, and a diazo copy machine. A Zoom Transfer Scope was used for scale and photogrammetric adjustments. Types of changes detected have been: (1) cropland or rangeland developed as new residential areas; (2) rangeland converted to new cropland or to new reservoirs; and (3) possibly new activity by the mining industries. A map of land use previously compiled from air photos was updated in this manner. ERTS-1 images complemented air photos: the photos gave detail on a one-shot basis; the ERTS-1 images provided currency and revealed seasonal variation in vegetation which aided interpretation of land use.

  13. Sloan Digital Sky Survey III photometric quasar clustering: Probing the initial conditions of the Universe

    DOE PAGES

    Ho, Shirley; Agarwal, Nishant; Myers, Adam D.; ...

    2015-05-22

    Here, the Sloan Digital Sky Survey has surveyed 14,555 square degrees of the sky, and delivered over a trillion pixels of imaging data. We present the large-scale clustering of 1.6 million quasars between z=0.5 and z=2.5 that have been classified from this imaging, representing the highest density of quasars ever studied for clustering measurements. This data set spans 0~ 11,00 square degrees and probes a volume of 80 h –3 Gpc 3. In principle, such a large volume and medium density of tracers should facilitate high-precision cosmological constraints. We measure the angular clustering of photometrically classified quasars using an optimalmore » quadratic estimator in four redshift slices with an accuracy of ~ 25% over a bin width of δ l ~ 10–15 on scales corresponding to matter-radiation equality and larger (0ℓ ~ 2–3).« less

  14. Accelerating Large Scale Image Analyses on Parallel, CPU-GPU Equipped Systems

    PubMed Central

    Teodoro, George; Kurc, Tahsin M.; Pan, Tony; Cooper, Lee A.D.; Kong, Jun; Widener, Patrick; Saltz, Joel H.

    2014-01-01

    The past decade has witnessed a major paradigm shift in high performance computing with the introduction of accelerators as general purpose processors. These computing devices make available very high parallel computing power at low cost and power consumption, transforming current high performance platforms into heterogeneous CPU-GPU equipped systems. Although the theoretical performance achieved by these hybrid systems is impressive, taking practical advantage of this computing power remains a very challenging problem. Most applications are still deployed to either GPU or CPU, leaving the other resource under- or un-utilized. In this paper, we propose, implement, and evaluate a performance aware scheduling technique along with optimizations to make efficient collaborative use of CPUs and GPUs on a parallel system. In the context of feature computations in large scale image analysis applications, our evaluations show that intelligently co-scheduling CPUs and GPUs can significantly improve performance over GPU-only or multi-core CPU-only approaches. PMID:25419545

  15. SureChEMBL: a large-scale, chemically annotated patent document database.

    PubMed

    Papadatos, George; Davies, Mark; Dedman, Nathan; Chambers, Jon; Gaulton, Anna; Siddle, James; Koks, Richard; Irvine, Sean A; Pettersson, Joe; Goncharoff, Nicko; Hersey, Anne; Overington, John P

    2016-01-04

    SureChEMBL is a publicly available large-scale resource containing compounds extracted from the full text, images and attachments of patent documents. The data are extracted from the patent literature according to an automated text and image-mining pipeline on a daily basis. SureChEMBL provides access to a previously unavailable, open and timely set of annotated compound-patent associations, complemented with sophisticated combined structure and keyword-based search capabilities against the compound repository and patent document corpus; given the wealth of knowledge hidden in patent documents, analysis of SureChEMBL data has immediate applications in drug discovery, medicinal chemistry and other commercial areas of chemical science. Currently, the database contains 17 million compounds extracted from 14 million patent documents. Access is available through a dedicated web-based interface and data downloads at: https://www.surechembl.org/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. Low Pressure Seeder Development for PIV in Large Scale Open Loop Wind Tunnels

    NASA Astrophysics Data System (ADS)

    Schmit, Ryan

    2010-11-01

    A low pressure seeding techniques have been developed for Particle Image Velocimetry (PIV) in large scale wind tunnel facilities was performed at the Subsonic Aerodynamic Research Laboratory (SARL) facility at Wright-Patterson Air Force Base. The SARL facility is an open loop tunnel with a 7 by 10 foot octagonal test section that has 56% optical access and the Mach number varies from 0.2 to 0.5. A low pressure seeder sprayer was designed and tested in the inlet of the wind tunnel. The seeder sprayer was designed to produce an even and uniform distribution of seed while reducing the seeders influence in the test section. ViCount Compact 5000 using Smoke Oil 180 was using as the seeding material. The results show that this low pressure seeder does produce streaky seeding but excellent PIV images are produced.

  17. Variations of mesoscale and large-scale sea ice morphology in the 1984 Marginal Ice Zone Experiment as observed by microwave remote sensing

    NASA Technical Reports Server (NTRS)

    Campbell, W. J.; Josberger, E. G.; Gloersen, P.; Johannessen, O. M.; Guest, P. S.

    1987-01-01

    The data acquired during the summer 1984 Marginal Ice Zone Experiment in the Fram Strait-Greenland Sea marginal ice zone, using airborne active and passive microwave sensors and the Nimbus 7 SMMR, were analyzed to compile a sequential description of the mesoscale and large-scale ice morphology variations during the period of June 6 - July 16, 1984. Throughout the experiment, the long ice edge between northwest Svalbard and central Greenland meandered; eddies were repeatedly formed, moved, and disappeared but the ice edge remained within a 100-km-wide zone. The ice pack behind this alternately diffuse and compact edge underwent rapid and pronounced variations in ice concentration over a 200-km-wide zone. The high-resolution ice concentration distributions obtained in the aircraft images agree well with the low-resolution distributions of SMMR images.

  18. SureChEMBL: a large-scale, chemically annotated patent document database

    PubMed Central

    Papadatos, George; Davies, Mark; Dedman, Nathan; Chambers, Jon; Gaulton, Anna; Siddle, James; Koks, Richard; Irvine, Sean A.; Pettersson, Joe; Goncharoff, Nicko; Hersey, Anne; Overington, John P.

    2016-01-01

    SureChEMBL is a publicly available large-scale resource containing compounds extracted from the full text, images and attachments of patent documents. The data are extracted from the patent literature according to an automated text and image-mining pipeline on a daily basis. SureChEMBL provides access to a previously unavailable, open and timely set of annotated compound-patent associations, complemented with sophisticated combined structure and keyword-based search capabilities against the compound repository and patent document corpus; given the wealth of knowledge hidden in patent documents, analysis of SureChEMBL data has immediate applications in drug discovery, medicinal chemistry and other commercial areas of chemical science. Currently, the database contains 17 million compounds extracted from 14 million patent documents. Access is available through a dedicated web-based interface and data downloads at: https://www.surechembl.org/. PMID:26582922

  19. Synchrotron radiation x-ray topography and defect selective etching analysis of threading dislocations in GaN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sintonen, Sakari, E-mail: sakari.sintonen@aalto.fi; Suihkonen, Sami; Jussila, Henri

    2014-08-28

    The crystal quality of bulk GaN crystals is continuously improving due to advances in GaN growth techniques. Defect characterization of the GaN substrates by conventional methods is impeded by the very low dislocation density and a large scale defect analysis method is needed. White beam synchrotron radiation x-ray topography (SR-XRT) is a rapid and non-destructive technique for dislocation analysis on a large scale. In this study, the defect structure of an ammonothermal c-plane GaN substrate was recorded using SR-XRT and the image contrast caused by the dislocation induced microstrain was simulated. The simulations and experimental observations agree excellently and themore » SR-XRT image contrasts of mixed and screw dislocations were determined. Apart from a few exceptions, defect selective etching measurements were shown to correspond one to one with the SR-XRT results.« less

  20. Large-Scale Image Analytics Using Deep Learning

    NASA Astrophysics Data System (ADS)

    Ganguly, S.; Nemani, R. R.; Basu, S.; Mukhopadhyay, S.; Michaelis, A.; Votava, P.

    2014-12-01

    High resolution land cover classification maps are needed to increase the accuracy of current Land ecosystem and climate model outputs. Limited studies are in place that demonstrates the state-of-the-art in deriving very high resolution (VHR) land cover products. In addition, most methods heavily rely on commercial softwares that are difficult to scale given the region of study (e.g. continents to globe). Complexities in present approaches relate to (a) scalability of the algorithm, (b) large image data processing (compute and memory intensive), (c) computational cost, (d) massively parallel architecture, and (e) machine learning automation. In addition, VHR satellite datasets are of the order of terabytes and features extracted from these datasets are of the order of petabytes. In our present study, we have acquired the National Agricultural Imaging Program (NAIP) dataset for the Continental United States at a spatial resolution of 1-m. This data comes as image tiles (a total of quarter million image scenes with ~60 million pixels) and has a total size of ~100 terabytes for a single acquisition. Features extracted from the entire dataset would amount to ~8-10 petabytes. In our proposed approach, we have implemented a novel semi-automated machine learning algorithm rooted on the principles of "deep learning" to delineate the percentage of tree cover. In order to perform image analytics in such a granular system, it is mandatory to devise an intelligent archiving and query system for image retrieval, file structuring, metadata processing and filtering of all available image scenes. Using the Open NASA Earth Exchange (NEX) initiative, which is a partnership with Amazon Web Services (AWS), we have developed an end-to-end architecture for designing the database and the deep belief network (following the distbelief computing model) to solve a grand challenge of scaling this process across quarter million NAIP tiles that cover the entire Continental United States. The AWS core components that we use to solve this problem are DynamoDB along with S3 for database query and storage, ElastiCache shared memory architecture for image segmentation, Elastic Map Reduce (EMR) for image feature extraction, and the memory optimized Elastic Cloud Compute (EC2) for the learning algorithm.

  1. Statistical processing of large image sequences.

    PubMed

    Khellah, F; Fieguth, P; Murray, M J; Allen, M

    2005-01-01

    The dynamic estimation of large-scale stochastic image sequences, as frequently encountered in remote sensing, is important in a variety of scientific applications. However, the size of such images makes conventional dynamic estimation methods, for example, the Kalman and related filters, impractical. In this paper, we present an approach that emulates the Kalman filter, but with considerably reduced computational and storage requirements. Our approach is illustrated in the context of a 512 x 512 image sequence of ocean surface temperature. The static estimation step, the primary contribution here, uses a mixture of stationary models to accurately mimic the effect of a nonstationary prior, simplifying both computational complexity and modeling. Our approach provides an efficient, stable, positive-definite model which is consistent with the given correlation structure. Thus, the methods of this paper may find application in modeling and single-frame estimation.

  2. Mid-latitude response to geomagnetic storms observed in 630nm airglow over continental United States

    NASA Astrophysics Data System (ADS)

    Bhatt, A.; Kendall, E. A.

    2016-12-01

    We present an analysis of the mid-latitude response to geomagnetic storms observed with the MANGO network of all-sky cameras imaging the 630nm emission over the continental United States. The response falls largely into two categories: Stable Auroral Red (SAR) arcs and large-scale traveling ionospheric disturbances (LSTIDs). Outside of these phenomena, however, less frequently observed responses include anomalous airglow brightening, bright swirls, and frozen-in traveling structures. We will present an analysis of various events observed over 3 years of MANGO network operation, which started with two imagers in the western US and added new imagers in the last year. We will also present unusual north- and northeastward-propagating waves often observed in conjunction with diffuse aurora. Wherever possible, we will compare with observations from Boston University imagers located in Massachusetts and Texas.

  3. Automated detection of microaneurysms using scale-adapted blob analysis and semi-supervised learning.

    PubMed

    Adal, Kedir M; Sidibé, Désiré; Ali, Sharib; Chaum, Edward; Karnowski, Thomas P; Mériaudeau, Fabrice

    2014-04-01

    Despite several attempts, automated detection of microaneurysms (MAs) from digital fundus images remains an open issue, owing to the subtle appearance of MAs against the surrounding tissue. In this paper, the microaneurysm detection problem is modeled as finding interest regions, or blobs, in an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated learning examples, is also proposed to train a classifier that can detect true MAs. The developed system is built using only a few manually labeled and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques, as well as the applicability of the proposed features to analyzing fundus images. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
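
    A minimal sketch of the blob-detection step, using the multi-scale Laplacian-of-Gaussian detector from scikit-image as a stand-in for the paper's scale-adapted blob analysis; the file name and parameter values are illustrative assumptions.

      import numpy as np
      from skimage import io
      from skimage.feature import blob_log

      img = io.imread("fundus.png")                  # hypothetical fundus image
      green = img[..., 1].astype(float) / 255.0      # green channel: best MA contrast
      inverted = 1.0 - green                         # MAs are dark; LoG finds bright blobs

      # detect candidate blobs over a small range of scales (MAs are tiny)
      blobs = blob_log(inverted, min_sigma=1, max_sigma=5, num_sigma=5,
                       threshold=0.1)
      for r, c, sigma in blobs:                      # blob radius ~ sigma * sqrt(2)
          print(f"candidate MA at ({r:.0f}, {c:.0f}), scale sigma={sigma:.1f}")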

  4. Cassini UVIS Auroral Observations in 2016 and 2017

    NASA Astrophysics Data System (ADS)

    Pryor, Wayne R.; Esposito, Larry W.; Jouchoux, Alain; Radioti, Aikaterini; Grodent, Denis; Gustin, Jacques; Gerard, Jean-Claude; Lamy, Laurent; Badman, Sarah; Dyudina, Ulyana A.; Cassini UVIS Team, Cassini VIMS Team, Cassini ISS Team, HST Saturn Auroral Team

    2017-10-01

    In 2016 and 2017, the Cassini Saturn orbiter executed a final series of high-inclination, low-periapsis orbits ideal for studies of Saturn's polar regions. The Cassini Ultraviolet Imaging Spectrograph (UVIS) obtained an extensive set of auroral images, some at the highest spatial resolution obtained during Cassini's long orbital mission (2004-2017). In some cases, two or three spacecraft slews at right angles to the long slit of the spectrograph were required to cover the entire auroral region to form auroral images. We will present selected images from this set showing narrow arcs of emission, more diffuse auroral emissions, multiple auroral arcs in a single image, discrete spots of emission, small-scale vortices, large-scale spiral forms, and parallel linear features that appear to cross in places like twisted wires. Some shorter features are transverse to the main auroral arcs, like barbs on a wire. UVIS observations were in some cases simultaneous with auroral observations from the Cassini Imaging Science Subsystem (ISS), the Cassini Visual and Infrared Mapping Spectrometer (VIMS), and the Hubble Space Telescope's Space Telescope Imaging Spectrograph (STIS) that will also be presented.

  5. Neutron Tomography at the Los Alamos Neutron Science Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, William Riley

    Neutron imaging is an incredibly powerful tool for non-destructive sample characterization and materials science. Neutron tomography is one technique that results in a three-dimensional model of the sample, representing the interaction of the neutrons with the sample. This relies both on reliable data acquisition and on image processing after acquisition. Over the course of the project, the focus has changed from the former to the latter, culminating in a large-scale reconstruction of a meter-long fossilized skull. The full reconstruction is not yet complete, though tools have been developed to improve the speed and accuracy of the reconstruction. This project helps to improve the capabilities of LANSCE and LANL with regard to imaging large or unwieldy objects.

  6. MREG V1.1 : a multi-scale image registration algorithm for SAR applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eichel, Paul H.

    2013-08-01

    MREG V1.1 is the sixth-generation SAR image registration algorithm developed by the Signal Processing & Technology Department for Synthetic Aperture Radar applications. Like its predecessor algorithm REGI, it employs a powerful iterative multi-scale paradigm to achieve the competing goals of sub-pixel registration accuracy and the ability to handle large initial offsets. Since it is not model based, it allows for high-fidelity tracking of spatially varying terrain-induced misregistration. Since it does not rely on image domain phase, it is equally adept at coherent and noncoherent image registration. This document provides a brief history of the registration processors developed by Dept. 5962 leading up to MREG V1.1, a full description of the signal processing steps involved in the algorithm, and a user's manual with application-specific recommendations for CCD, TwoColor MultiView, and SAR stereoscopy.

  7. Cosmic Infrared Background Sources Clustered Around Quasars

    NASA Astrophysics Data System (ADS)

    Hall, Kirsten R.; Zakamska, Nadia; Marriage, Tobias; Crichton, Devin; Gralla, Megan

    2017-06-01

    Powerful quasars can be seen out to large distances. As they reside in massive dark matter halos, they provide a useful tracer of large-scale structure. We stack Herschel-SPIRE images at 250, 350, and 500 microns at the locations of 13,000 quasars in redshift bins spanning 0.5 < z < 3.5. While the detected signal is dominated on instrumental beam scales by the unresolved dust emission of the quasar and its host galaxy, at z ~ 2 the extended emission is clearly spatially resolved on Mpc scales. This emission is due to star-forming galaxies clustered around the dark matter halos hosting quasars. We measure radial surface brightness profiles of the stacked images to compute the angular correlation function of dusty star-forming galaxies correlated with quasars. We generate a halo occupation distribution model in order to determine the masses of the dark matter halos in which dusty star-forming galaxies reside. We probe potential changes in the halo mass most efficient at hosting star-forming galaxies, and assess any evidence that this halo mass evolved with redshift in the context of "cosmic downsizing".

  8. Large-scale deep learning for robotically gathered imagery for science

    NASA Astrophysics Data System (ADS)

    Skinner, K.; Johnson-Roberson, M.; Li, J.; Iscar, E.

    2016-12-01

    With the explosion of computing power, the intelligence and capability of mobile robotics have dramatically increased over the last two decades. Today, we can deploy autonomous robots to make observations in a variety of environments ripe for scientific exploration. These platforms are capable of gathering a volume of data previously unimaginable. Additionally, optical cameras, driven by mobile phones and consumer photography, have rapidly improved in size, power consumption, and quality, making their deployment cheaper and easier. Finally, in parallel we have seen the rise of large-scale machine learning approaches, particularly deep neural networks (DNNs), increasing the quality of the semantic understanding that can be automatically extracted from optical imagery. In concert, these advances enable new science through a combination of machine learning and robotics. This work will discuss the application of new low-cost, high-performance computing approaches and the associated software frameworks to enable scientists to rapidly extract useful science data from millions of robotically gathered images. The automated analysis of imagery on this scale opens up new avenues of inquiry unavailable using more traditional manual or semi-automated approaches. We will use a large archive of millions of benthic images gathered with an autonomous underwater vehicle to demonstrate how these tools enable new scientific questions to be posed.

  9. Robust multi-site MR data processing: iterative optimization of bias correction, tissue classification, and registration.

    PubMed

    Young Kim, Eun; Johnson, Hans J

    2013-01-01

    A robust multi-modal tool for automated registration, bias correction, and tissue classification has been implemented for large-scale heterogeneous multi-site longitudinal MR data analysis. This work focused on improving an iterative optimization framework between bias correction, registration, and tissue classification, inspired by previous work. The primary contributions are robustness improvements from the incorporation of the following four elements: (1) use of multi-modal and repeated scans, (2) incorporation of highly deformable registration, (3) use of an extended set of tissue definitions, and (4) use of multi-modal aware intensity-context priors. The benefits of these enhancements were investigated in a series of experiments with both a simulated brain data set (BrainWeb) and highly heterogeneous data from a 32-site imaging study, with quality assessed through expert visual inspection. The implementation of this tool is tailored for, but not limited to, large-scale data processing with great data variation, with a flexible interface. In this paper, we describe enhancements to a joint registration, bias correction, and tissue classification framework that improve the generalizability and robustness of processing multi-modal longitudinal MR scans collected at multiple sites. The tool was evaluated using both simulated and human subject MRI images. With these enhancements, the results showed improved robustness for large-scale heterogeneous MRI processing.

  10. The Effect of Large Scale Climate Oscillations on the Land Surface Phenology of the Northern Polar Regions and Central Asia

    NASA Astrophysics Data System (ADS)

    de Beurs, K.; Henebry, G. M.; Owsley, B.; Sokolik, I. N.

    2016-12-01

    Land surface phenology metrics allow for the summarization of long image time series into a set of annual observations that describe the vegetated growing season. These metrics have been shown to respond to both large scale climatic and anthropogenic impacts. In this study we assemble a time series (2001 - 2014) of Moderate Resolution Imaging Spectroradiometer (MODIS) Nadir BRDF-Adjusted Reflectance data and land surface temperature data at 0.05° spatial resolution. We then derive land surface phenology metrics focusing on the peak of the growing season by fitting quadratic regression models using NDVI and Accumulated Growing Degree-Days (AGDD) derived from land surface temperature. We link the annual information on the peak timing, the thermal time to peak and the maximum of the growing season with five of the most important large scale climate oscillations: NAO, AO, PDO, PNA and ENSO. We demonstrate several significant correlations between the climate oscillations and the land surface phenology peak metrics for a range of different bioclimatic regions in both dryland Central Asia and the northern Polar Regions. We will then link the correlation results with trends derived by the seasonal Mann-Kendall trend detection method applied to several satellite derived vegetation and albedo datasets.
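
    The peak metrics follow directly from the quadratic fit: with NDVI modeled as a*AGDD^2 + b*AGDD + c (a < 0), the thermal time to peak is the vertex -b/(2a). A minimal sketch with synthetic stand-in data:

      import numpy as np

      # synthetic stand-ins for one pixel's seasonal observations
      agdd = np.linspace(0, 2000, 40)          # accumulated growing degree-days
      ndvi = -2e-7 * (agdd - 1200) ** 2 + 0.7 + np.random.normal(0, 0.01, agdd.size)

      a, b, c = np.polyfit(agdd, ndvi, 2)      # quadratic fit
      peak_agdd = -b / (2 * a)                 # thermal time to peak
      peak_ndvi = np.polyval([a, b, c], peak_agdd)   # seasonal maximum
      print(f"thermal time to peak: {peak_agdd:.0f} AGDD, peak NDVI: {peak_ndvi:.2f}")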

  11. Dynamic displacement measurement of large-scale structures based on the Lucas-Kanade template tracking algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Jie; Zhu, Chang'an

    2016-01-01

    The development of optics and computer technologies enables the application of vision-based techniques, using digital cameras, to the displacement measurement of large-scale structures. Compared with traditional contact measurements, the vision-based technique allows remote measurement, is non-intrusive, and adds no mass to the structure. In this study, a high-speed camera system is developed to perform displacement measurement in real time. The system consists of a high-speed camera and a notebook computer. The high-speed camera can capture images at a rate of hundreds of frames per second. To process the captured images on the computer, the Lucas-Kanade template tracking algorithm from the field of computer vision is introduced. Additionally, a modified inverse compositional algorithm is proposed to reduce the computing time of the original algorithm and further improve efficiency. The modified algorithm can accomplish one displacement extraction within 1 ms, without requiring any pre-designed target panel to be installed on the structure in advance. The accuracy and efficiency of the system in the remote measurement of dynamic displacement are demonstrated in experiments on a motion platform and on a sound barrier on a suspension viaduct. Experimental results show that the proposed algorithm can extract accurate displacement signals and accomplish vibration measurement of large-scale structures.
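
    For orientation, the sketch below tracks a single natural-texture point with OpenCV's pyramidal Lucas-Kanade tracker. This is the classic formulation, not the modified inverse compositional variant proposed in the paper, and the video path and point coordinates are hypothetical.

      import cv2
      import numpy as np

      cap = cv2.VideoCapture("viaduct.mp4")          # hypothetical recording
      ok, prev = cap.read()
      prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

      p0 = np.array([[[640.0, 360.0]]], dtype=np.float32)  # point on the structure
      lk = dict(winSize=(31, 31), maxLevel=3,
                criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

      displacement = []
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          p1, st, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None, **lk)
          if st[0, 0] == 1:
              displacement.append((p1 - p0).ravel())  # per-frame pixel displacement
          prev_gray, p0 = gray, p1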

  12. Determination of atomic-scale chemical composition at semiconductor heteroepitaxial interfaces by high-resolution transmission electron microscopy.

    PubMed

    Wen, C; Ma, Y J

    2018-03-01

    The determination of atomic structures and further quantitative information such as chemical compositions at atomic scale for semiconductor defects or heteroepitaxial interfaces can provide direct evidence to understand their formation, modification, and/or effects on the properties of semiconductor films. The commonly used method, high-resolution transmission electron microscopy (HRTEM), suffers from difficulty in acquiring images that correctly show the crystal structure at atomic resolution, because of the limitation in microscope resolution or deviation from the Scherzer-defocus conditions. In this study, an image processing method, image deconvolution, was used to achieve atomic-resolution (∼1.0 Å) structure images of small lattice-mismatch (∼1.0%) AlN/6H-SiC (0001) and large lattice-mismatch (∼8.5%) AlSb/GaAs (001) heteroepitaxial interfaces using simulated HRTEM images of a conventional 300-kV field-emission-gun transmission electron microscope under non-Scherzer-defocus conditions. Then, atomic-scale chemical compositions at the interface were determined for the atomic intermixing and Lomer dislocation with an atomic step by analyzing the deconvoluted image contrast. Furthermore, the effect of dynamical scattering on contrast analysis was also evaluated for differently weighted atomic columns in the compositions. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. US National Large-scale City Orthoimage Standard Initiative

    USGS Publications Warehouse

    Zhou, G.; Song, C.; Benjamin, S.; Schickler, W.

    2003-01-01

    The early procedures and algorithms for national digital orthophoto generation in the National Digital Orthophoto Program (NDOP) were based on earlier USGS mapping operations, such as field control, aerotriangulation (derived in the early 1920's), the quarter-quadrangle-centered format (3.75 minutes of longitude and latitude in geographic extent), 1:40,000 aerial photographs, and 2.5D digital elevation models. However, large-scale city orthophotos produced with these early procedures have disclosed many shortcomings, e.g., ghost images, occlusions, and shadows. Thus, providing the technical base (algorithms, procedures) and experience needed for large-scale city digital orthophoto creation is essential for the near-future national large-scale digital orthophoto deployment and for the revision of the Standards for National Large-scale City Digital Orthophoto in the NDOP. This paper reports our initial research results as follows: (1) high-precision 3D city DSM generation through LIDAR data processing; (2) spatial object/feature extraction using surface material information and high-accuracy 3D DSM data; (3) 3D city model development; (4) algorithm development for the generation of DTM-based and DBM-based orthophotos; (5) true orthophoto generation by merging DBM-based and DTM-based orthophotos; and (6) automatic mosaicking by optimizing and combining imagery from many perspectives.

  14. Network analysis of mesoscale optical recordings to assess regional, functional connectivity.

    PubMed

    Lim, Diana H; LeDue, Jeffrey M; Murphy, Timothy H

    2015-10-01

    With modern optical imaging methods, it is possible to map structural and functional connectivity. Optical imaging studies that aim to describe large-scale neural connectivity often need to handle large and complex datasets. In order to interpret these datasets, new methods for analyzing structural and functional connectivity are being developed. Recently, network analysis, based on graph theory, has been used to describe and quantify brain connectivity in both experimental and clinical studies. We outline how to apply regional, functional network analysis to mesoscale optical imaging using voltage-sensitive-dye imaging and channelrhodopsin-2 stimulation in a mouse model. We include links to sample datasets and an analysis script. The analyses we employ can be applied to other types of fluorescence wide-field imaging, including genetically encoded calcium indicators, to assess network properties. We discuss the benefits and limitations of using network analysis for interpreting optical imaging data and define network properties that may be used to compare across preparations or other manipulations such as animal models of disease.
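
    A toy example of the kind of regional network analysis described, assuming one has an ROI-by-ROI functional-connectivity (correlation) matrix from the imaging data; the threshold and sizes are arbitrary choices for illustration.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(0)
      # stand-in for correlations between 12 cortical ROIs over 200 time points
      corr = np.corrcoef(rng.normal(size=(12, 200)))
      np.fill_diagonal(corr, 0)

      G = nx.from_numpy_array((corr > 0.3).astype(int))  # binarize at chosen threshold
      print("node degrees:", dict(G.degree()))
      print("mean clustering:", nx.average_clustering(G))
      if nx.is_connected(G):
          print("characteristic path length:",
                nx.average_shortest_path_length(G))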

  15. Part-based deep representation for product tagging and search

    NASA Astrophysics Data System (ADS)

    Chen, Keqing

    2017-06-01

    Despite previous studies, tagging and indexing product images remain challenging due to the large inner-class variation of the products. In traditional methods, quantized hand-crafted features such as SIFT are extracted as the representation of the product images, which are not discriminative enough to handle the inner-class variation. For discriminative image representation, this paper first presents a novel deep convolutional neural network (DCNN) architecture pre-trained on a large-scale general image dataset. Compared to traditional features, our DCNN representation has more discriminative power with fewer dimensions. Moreover, we incorporate a part-based model into the framework to overcome the negative effects of bad alignment and cluttered backgrounds, further enhancing the descriptive ability of the deep representation. Finally, we collect and contribute a well-labeled shoe image database, i.e., TBShoes, on which we apply the part-based deep representation for product image tagging and search, respectively. The experimental results highlight the advantages of the proposed part-based deep representation.

  16. Line segment extraction for large scale unorganized point clouds

    NASA Astrophysics Data System (ADS)

    Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan

    2015-04-01

    Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.
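
    The geometric core of plane-intersection line extraction can be sketched in a few lines: given two fitted planes n·x + d = 0 (in practice estimated from the point cloud, e.g. by RANSAC), the line direction is the cross product of the normals, and a point on the line solves the two plane equations plus a gauge constraint. This is a minimal sketch, not the paper's full LSHP pipeline.

      import numpy as np

      def plane_intersection(n1, d1, n2, d2):
          """Intersection line of planes n1.x + d1 = 0 and n2.x + d2 = 0."""
          direction = np.cross(n1, n2)          # perpendicular to both normals
          # pick the line point satisfying direction . x = 0 as a gauge
          A = np.vstack([n1, n2, direction])
          b = np.array([-d1, -d2, 0.0])
          point = np.linalg.solve(A, b)
          return point, direction / np.linalg.norm(direction)

      # e.g. a wall (x = 1) meeting a floor (z = 0): line x = 1, z = 0
      p, v = plane_intersection(np.array([1.0, 0, 0]), -1.0,
                                np.array([0.0, 0, 1.0]), 0.0)
      print(p, v)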

  17. Multi-scale Observation of Biological Interactions of Nanocarriers: from Nano to Macro

    PubMed Central

    Jin, Su-Eon; Bae, Jin Woo; Hong, Seungpyo

    2010-01-01

    Microscopic observations have played a key role in recent advancements in nanotechnology-based biomedical sciences. In particular, multi-scale observation is necessary to fully understand the nano-bio interfaces where a large amount of unprecedented phenomena have been reported. This review describes how to address the physicochemical and biological interactions of nanocarriers within the biological environments using microscopic tools. The imaging techniques are categorized based on the size scale of detection. For observation of the nano-scale biological interactions of nanocarriers, we discuss atomic force microscopy (AFM), scanning electron microscopy (SEM), and transmission electron microscopy (TEM). For the micro to macro-scale (in vitro and in vivo) observation, we focus on confocal laser scanning microscopy (CLSM) as well as in vivo imaging systems such as magnetic resonance imaging (MRI), superconducting quantum interference devices (SQUIDs), and IVIS®. Additionally, recently developed combined techniques such as AFM-CLSM, correlative Light and Electron Microscopy (CLEM), and SEM-spectroscopy are also discussed. In this review, we describe how each technique helps elucidate certain physicochemical and biological activities of nanocarriers such as dendrimers, polymers, liposomes, and polymeric/inorganic nanoparticles, thus providing a toolbox for bioengineers, pharmaceutical scientists, biologists, and research clinicians. PMID:20232368

  18. Automatic DNA Diagnosis for 1D Gel Electrophoresis Images using Bio-image Processing Technique.

    PubMed

    Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Shaw, Philip J; Ukosakit, Kittipat; Tragoonrung, Somvong; Tongsima, Sissades

    2015-01-01

    DNA gel electrophoresis is a molecular biology technique for separating different sizes of DNA fragments. Applications of DNA gel electrophoresis include DNA fingerprinting (genetic diagnosis), size estimation of DNA, and DNA separation for Southern blotting. Accurate interpretation of DNA banding patterns from electrophoretic images can be laborious and error prone when a large number of bands are interrogated manually. Although many bio-imaging techniques have been proposed, none of them can fully automate the typing of DNA owing to the complexities of migration patterns typically obtained. We developed an image-processing tool that automatically calls genotypes from DNA gel electrophoresis images. The image processing workflow comprises three main steps: 1) lane segmentation, 2) extraction of DNA bands and 3) band genotyping classification. The tool was originally intended to facilitate large-scale genotyping analysis of sugarcane cultivars. We tested the proposed tool on 10 gel images (433 cultivars) obtained from polyacrylamide gel electrophoresis (PAGE) of PCR amplicons for detecting intron length polymorphisms (ILP) on one locus of the sugarcanes. These gel images demonstrated many challenges in automated lane/band segmentation in image processing including lane distortion, band deformity, high degree of noise in the background, and bands that are very close together (doublets). Using the proposed bio-imaging workflow, lanes and DNA bands contained within are properly segmented, even for adjacent bands with aberrant migration that cannot be separated by conventional techniques. The software, called GELect, automatically performs genotype calling on each lane by comparing with an all-banding reference, which was created by clustering the existing bands into the non-redundant set of reference bands. The automated genotype calling results were verified by independent manual typing by molecular biologists. This work presents an automated genotyping tool from DNA gel electrophoresis images, called GELect, which was written in Java and made available through the imageJ framework. With a novel automated image processing workflow, the tool can accurately segment lanes from a gel matrix, intelligently extract distorted and even doublet bands that are difficult to identify by existing image processing tools. Consequently, genotyping from DNA gel electrophoresis can be performed automatically allowing users to efficiently conduct large scale DNA fingerprinting via DNA gel electrophoresis. The software is freely available from http://www.biotec.or.th/gi/tools/gelect.
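
    The first two workflow steps can be approximated with simple intensity profiles: lane centres appear as valleys in the column-wise mean intensity, and bands as valleys in the row profile within a lane. The sketch below uses SciPy peak detection on a hypothetical gel image; it ignores the distortion and doublet handling that GELect adds on top.

      import numpy as np
      from skimage import io
      from scipy.signal import find_peaks

      gel = io.imread("gel.png", as_gray=True)   # hypothetical PAGE image, dark bands
      col_profile = gel.mean(axis=0)
      # lanes are darker than background: find valleys in the column profile
      lanes, _ = find_peaks(-col_profile, distance=20, prominence=0.02)

      # within the first lane, band positions are valleys in the row profile
      lane_strip = gel[:, max(lanes[0] - 10, 0): lanes[0] + 10].mean(axis=1)
      bands, _ = find_peaks(-lane_strip, prominence=0.05)
      print("lane centres:", lanes, "bands in lane 1:", bands)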

  19. Fast Updating National Geo-Spatial Databases with High Resolution Imagery: China's Methodology and Experience

    NASA Astrophysics Data System (ADS)

    Chen, J.; Wang, D.; Zhao, R. L.; Zhang, H.; Liao, A.; Jiu, J.

    2014-04-01

    Geospatial databases are an irreplaceable national treasure of immense importance. Their up-to-dateness, i.e., their consistency with respect to the real world, plays a critical role in their value and applications. The continuous updating of map databases at 1:50,000 scale is a massive and difficult task for large countries covering more than several million square kilometers. This paper presents the research and technological development supporting national map updating at 1:50,000 scale in China, including the development of updating models and methods, production tools and systems for large-scale and rapid updating, and the design and implementation of a continuous updating workflow. The use of many data sources, and the integration of these data to form a high-accuracy, quality-checked product, was required. This in turn required up-to-date techniques for image matching, semantic integration, generalization, database management, and conflict resolution. Specific software tools and packages were designed and developed to support large-scale updating production with high-resolution imagery and large-scale data generalization, such as map generalization, GIS-supported change interpretation from imagery, DEM interpolation, image-matching-based orthophoto generation, and data control at different levels. A national 1:50,000 database updating strategy and its production workflow were designed, including a full-coverage updating pattern characterized by all-element topographic data modeling, change detection in all related areas, and whole-process data quality control; a series of technical production specifications; and a network of updating production units in different geographic places in the country.

  20. Large scale track analysis for wide area motion imagery surveillance

    NASA Astrophysics Data System (ADS)

    van Leeuwen, C. J.; van Huis, J. R.; Baan, J.

    2016-10-01

    Wide Area Motion Imagery (WAMI) enables image based surveillance of areas that can cover multiple square kilometers. Interpreting and analyzing information from such sources, becomes increasingly time consuming as more data is added from newly developed methods for information extraction. Captured from a moving Unmanned Aerial Vehicle (UAV), the high-resolution images allow detection and tracking of moving vehicles, but this is a highly challenging task. By using a chain of computer vision detectors and machine learning techniques, we are capable of producing high quality track information of more than 40 thousand vehicles per five minutes. When faced with such a vast number of vehicular tracks, it is useful for analysts to be able to quickly query information based on region of interest, color, maneuvers or other high-level types of information, to gain insight and find relevant activities in the flood of information. In this paper we propose a set of tools, combined in a graphical user interface, which allows data analysts to survey vehicles in a large observed area. In order to retrieve (parts of) images from the high-resolution data, we developed a multi-scale tile-based video file format that allows to quickly obtain only a part, or a sub-sampling of the original high resolution image. By storing tiles of a still image according to a predefined order, we can quickly retrieve a particular region of the image at any relevant scale, by skipping to the correct frames and reconstructing the image. Location based queries allow a user to select tracks around a particular region of interest such as landmark, building or street. By using an integrated search engine, users can quickly select tracks that are in the vicinity of locations of interest. Another time-reducing method when searching for a particular vehicle, is to filter on color or color intensity. Automatic maneuver detection adds information to the tracks that can be used to find vehicles based on their behavior.
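
    The tile-based format amounts to simple index arithmetic: at pyramid level L the image is subsampled by 2^L, so a region of interest maps to a small set of (level, row, column) tiles that can be read directly, skipping the rest of the frame. A minimal sketch of that mapping (tile size and coordinates are illustrative):

      def tiles_for_roi(x0, y0, x1, y1, tile=256, level=0):
          """(level, row, col) indices of the tiles covering a pixel ROI.

          At pyramid level L, tile (r, c) covers full-resolution pixels
          [c*tile*2**L, (c+1)*tile*2**L) x [r*tile*2**L, (r+1)*tile*2**L).
          """
          step = tile * (2 ** level)
          cols = range(x0 // step, x1 // step + 1)
          rows = range(y0 // step, y1 // step + 1)
          return [(level, r, c) for r in rows for c in cols]

      # tiles needed for a 1000 x 1000 pixel window of a huge WAMI frame
      needed = tiles_for_roi(120000, 45000, 121000, 46000)
      print(len(needed), "tiles instead of the whole frame")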

  1. DrishtiCare: a telescreening platform for diabetic retinopathy powered with fundus image analysis.

    PubMed

    Joshi, Gopal Datt; Sivaswamy, Jayanthi

    2011-01-01

    Diabetic retinopathy is the leading cause of blindness in urban populations. Early diagnosis through regular screening and timely treatment has been shown to prevent visual loss and blindness. It is very difficult to cater to this vast set of diabetes patients, primarily because of the high costs of reaching out to patients and a scarcity of skilled personnel. Telescreening offers a cost-effective solution to reach out to patients but is still inadequate due to an insufficient number of experts who serve the diabetes population. Developments in fundus image analysis have shown promise in addressing the scarcity of skilled personnel for large-scale screening. This article aims at addressing the underlying issues in traditional telescreening in order to develop a solution that leverages the developments in fundus image analysis. We propose a novel Web-based telescreening solution (called DrishtiCare) integrating various value-added fundus image analysis components. A Web-based platform on the software-as-a-service (SaaS) delivery model is chosen to make the service cost-effective, easy to use, and scalable. A server-based prescreening system is employed to scrutinize the fundus images of patients and to refer them to the experts. An automatic quality assessment module ensures transfer of fundus images that meet grading standards. An easy-to-use interface, enabled with new visualization features, is designed for case examination by experts. Three local primary eye hospitals have participated in and used DrishtiCare's telescreening service. A preliminary evaluation of the proposed platform was performed on a set of 119 patients, of which 23% were identified with sight-threatening retinopathy. Currently, evaluation at a larger scale is in progress, and a total of 450 patients have been enrolled. The proposed approach provides an innovative way of integrating automated fundus image analysis into the telescreening framework to address well-known challenges in large-scale disease screening. It offers a low-cost, effective, and easily adoptable screening solution to primary care providers. © 2010 Diabetes Technology Society.

  2. Single-trabecula building block for large-scale finite element models of cancellous bone.

    PubMed

    Dagan, D; Be'ery, M; Gefen, A

    2004-07-01

    Recent development of high-resolution imaging of cancellous bone allows finite element (FE) analysis of bone tissue stresses and strains in individual trabeculae. However, specimen-specific stress/strain analyses can include effects of anatomical variations and local damage that can bias the interpretation of the results from individual specimens with respect to large populations. This study developed a standard (generic) 'building-block' of a trabecula for large-scale FE models. Being parametric and based on statistics of dimensions of ovine trabeculae, this building block can be scaled for trabecular thickness and length and be used in commercial or custom-made FE codes to construct generic, large-scale FE models of bone, using less computer power than that currently required to reproduce the accurate micro-architecture of trabecular bone. Orthogonal lattices constructed with this building block, after it was scaled to trabeculae of the human proximal femur, provided apparent elastic moduli of approximately 150 MPa, in good agreement with experimental data for the stiffness of cancellous bone from this site. Likewise, lattices with thinner, osteoporotic-like trabeculae could predict a reduction of approximately 30% in the apparent elastic modulus, as reported in experimental studies of osteoporotic femora. Based on these comparisons, it is concluded that the single-trabecula element developed in the present study is well-suited for representing cancellous bone in large-scale generic FE simulations.

  3. Low-energy transmission electron diffraction and imaging of large-area graphene

    PubMed Central

    Zhao, Wei; Xia, Bingyu; Lin, Li; Xiao, Xiaoyang; Liu, Peng; Lin, Xiaoyang; Peng, Hailin; Zhu, Yuanmin; Yu, Rong; Lei, Peng; Wang, Jiangtao; Zhang, Lina; Xu, Yong; Zhao, Mingwen; Peng, Lianmao; Li, Qunqing; Duan, Wenhui; Liu, Zhongfan; Fan, Shoushan; Jiang, Kaili

    2017-01-01

    Two-dimensional (2D) materials have attracted interest because of their excellent properties and potential applications. A key step in realizing industrial applications is to synthesize wafer-scale single-crystal samples. Until now, single-crystal samples, such as graphene domains up to the centimeter scale, have been synthesized. However, a new challenge is to efficiently characterize large-area samples. Currently, the crystalline characterization of these samples still relies on selected-area electron diffraction (SAED) or low-energy electron diffraction (LEED), which is more suitable for characterizing very small local regions. This paper presents a highly efficient characterization technique that adopts a low-energy electrostatically focused electron gun and a super-aligned carbon nanotube (SACNT) film sample support. It allows rapid crystalline characterization of large-area graphene through a single photograph of a transmission-diffracted image at a large beam size. Additionally, the low-energy electron beam enables the observation of a unique diffraction pattern of adsorbates on the suspended graphene at room temperature. This work presents a simple and convenient method for characterizing the macroscopic structures of 2D materials, and the instrument we constructed allows the study of the weak interaction with 2D materials. PMID:28879233

  4. Low-energy transmission electron diffraction and imaging of large-area graphene.

    PubMed

    Zhao, Wei; Xia, Bingyu; Lin, Li; Xiao, Xiaoyang; Liu, Peng; Lin, Xiaoyang; Peng, Hailin; Zhu, Yuanmin; Yu, Rong; Lei, Peng; Wang, Jiangtao; Zhang, Lina; Xu, Yong; Zhao, Mingwen; Peng, Lianmao; Li, Qunqing; Duan, Wenhui; Liu, Zhongfan; Fan, Shoushan; Jiang, Kaili

    2017-09-01

    Two-dimensional (2D) materials have attracted interest because of their excellent properties and potential applications. A key step in realizing industrial applications is to synthesize wafer-scale single-crystal samples. Until now, single-crystal samples, such as graphene domains up to the centimeter scale, have been synthesized. However, a new challenge is to efficiently characterize large-area samples. Currently, the crystalline characterization of these samples still relies on selected-area electron diffraction (SAED) or low-energy electron diffraction (LEED), which is more suitable for characterizing very small local regions. This paper presents a highly efficient characterization technique that adopts a low-energy electrostatically focused electron gun and a super-aligned carbon nanotube (SACNT) film sample support. It allows rapid crystalline characterization of large-area graphene through a single photograph of a transmission-diffracted image at a large beam size. Additionally, the low-energy electron beam enables the observation of a unique diffraction pattern of adsorbates on the suspended graphene at room temperature. This work presents a simple and convenient method for characterizing the macroscopic structures of 2D materials, and the instrument we constructed allows the study of the weak interaction with 2D materials.

  5. Large-scale topology and the default mode network in the mouse connectome

    PubMed Central

    Stafford, James M.; Jarrett, Benjamin R.; Miranda-Dominguez, Oscar; Mills, Brian D.; Cain, Nicholas; Mihalas, Stefan; Lahvis, Garet P.; Lattal, K. Matthew; Mitchell, Suzanne H.; David, Stephen V.; Fryer, John D.; Nigg, Joel T.; Fair, Damien A.

    2014-01-01

    Noninvasive functional imaging holds great promise for serving as a translational bridge between human and animal models of various neurological and psychiatric disorders. However, despite a depth of knowledge of the cellular and molecular underpinnings of atypical processes in mouse models, little is known about the large-scale functional architecture measured by functional brain imaging, limiting translation to human conditions. Here, we provide a robust processing pipeline to generate high-resolution, whole-brain resting-state functional connectivity MRI (rs-fcMRI) images in the mouse. Using a mesoscale structural connectome (i.e., an anterograde tracer mapping of axonal projections across the mouse CNS), we show that rs-fcMRI in the mouse has strong structural underpinnings, validating our procedures. We next directly show that large-scale network properties previously identified in primates are present in rodents, although they differ in several ways. Last, we examine the existence of the so-called default mode network (DMN)—a distributed functional brain system identified in primates as being highly important for social cognition and overall brain function and atypically functionally connected across a multitude of disorders. We show the presence of a potential DMN in the mouse brain both structurally and functionally. Together, these studies confirm the presence of basic network properties and functional networks of high translational importance in structural and functional systems in the mouse brain. This work clears the way for an important bridge measurement between human and rodent models, enabling us to make stronger conclusions about how regionally specific cellular and molecular manipulations in mice relate back to humans. PMID:25512496

  6. An Automated Blur Detection Method for Histological Whole Slide Imaging

    PubMed Central

    Moles Lopez, Xavier; D'Andrea, Etienne; Barbot, Paul; Bridoux, Anne-Sophie; Rorive, Sandrine; Salmon, Isabelle; Debeir, Olivier; Decaestecker, Christine

    2013-01-01

    Whole slide scanners are novel devices that enable high-resolution imaging of an entire histological slide. Furthermore, the imaging is achieved in only a few minutes, which enables image rendering of large-scale studies involving multiple immunohistochemistry biomarkers. Although whole slide imaging has improved considerably, locally poor focusing causes blurred regions of the image. These artifacts may strongly affect the quality of subsequent analyses, making a slide review process mandatory. This tedious and time-consuming task requires the scanner operator to carefully assess the virtual slide and to manually select new focus points. We propose a statistical learning method that provides early image quality feedback and automatically identifies regions of the image that require additional focus points. PMID:24349343

  7. Australian Soil Moisture Field Experiments in Support of Soil Moisture Satellite Observations

    NASA Technical Reports Server (NTRS)

    Kim, Edward; Walker, Jeff; Rudiger, Christopher; Panciera, Rocco

    2010-01-01

    Large-scale field campaigns provide the critical link between our understanding of retrieval algorithms developed at the point scale and algorithms suitable for satellite applications at vastly larger pixel scales. Retrievals of land parameters must deal with the substantial sub-pixel heterogeneity that is present in most regions. This is particularly the case for soil moisture remote sensing, because of the long microwave wavelengths (L-band) that are optimal. Yet airborne L-band imagers have generally been large and heavy, and have required heavy-lift aircraft resources that are expensive and difficult to schedule. Indeed, US soil moisture campaigns have been constrained by these factors, and European campaigns have used non-imagers due to instrument and aircraft size constraints. Despite these factors, these campaigns established that large-scale soil moisture remote sensing was possible, laying the groundwork for satellite missions. Starting in 2005, a series of airborne field campaigns have been conducted in Australia to improve our understanding of soil moisture remote sensing at large scales over heterogeneous areas. These field data have been used to test and refine retrieval algorithms for soil moisture satellite missions and, most recently, with the launch of the European Space Agency's Soil Moisture Ocean Salinity (SMOS) mission, to provide validation measurements over a multi-pixel area. The campaigns to date have included a preparatory campaign in 2005, two National Airborne Field Experiments (NAFE, 2005 and 2006), two campaigns to the Simpson Desert (2008 and 2009), and one Australian Airborne Cal/val Experiment for SMOS (AACES), just concluded in the austral spring of 2010. The primary airborne sensor for each campaign has been the Polarimetric L-band Microwave Radiometer (PLMR), a 6-beam pushbroom imager that is small enough to be compatible with light aircraft, which has greatly facilitated the execution of the series of campaigns and been a key to their success. An L-band imaging radar is being added to the complement to provide simultaneous active-passive L-band observations for algorithm development activities in support of NASA's upcoming Soil Moisture Active Passive (SMAP) mission. This paper will describe the campaigns, their objectives, their datasets, and some of the unique advantages of working with small/light sensors and aircraft. We will also review the main scientific findings, including improvements to the SMOS retrieval algorithm enabled by NAFE observations and the evaluation of the Simpson Desert as a calibration target for L-band satellite missions. Plans for upcoming campaigns will also be discussed.

  8. Full-color large-scaled computer-generated holograms using RGB color filters.

    PubMed

    Tsuchiyama, Yasuhiro; Matsushima, Kyoji

    2017-02-06

    A technique using RGB color filters is proposed for creating high-quality full-color computer-generated holograms (CGHs). The fringe of these CGHs is composed of more than a billion pixels. The CGHs reconstruct full-parallax three-dimensional color images with a deep sensation of depth caused by natural motion parallax. The simulation technique as well as the principle and challenges of high-quality full-color reconstruction are presented to address the design of filter properties suitable for large-scaled CGHs. Optical reconstructions of actual fabricated full-color CGHs are demonstrated in order to verify the proposed techniques.

  9. Remote sensing of the biological dynamics of large-scale salt evaporation ponds

    NASA Technical Reports Server (NTRS)

    Richardson, Laurie L.; Bachoon, Dave; Ingram-Willey, Vebbra; Chow, Colin C.; Weinstock, Kenneth

    1992-01-01

    Optical properties of salt evaporation ponds associated with Exportadora de Sal, a salt production company in Baja California Sur, Mexico, were analyzed using a combination of spectroradiometer and extracted pigment data, and Landsat-5 Thematic Mapper imagery. The optical characteristics of each pond are determined by the biota, which consists of dense populations of algae and photosynthetic bacteria containing a wide variety of photosynthetic and photoprotective pigments. Analysis has shown that spectral and image data can differentiate between taxonomic groups of the microbiota, detect changes in population distributions, and reveal large-scale seasonal dynamics.

  10. Thermochemical Processes | Bioenergy | NREL

    Science.gov Websites

    [Web page record; the page imagery shows model catalysts, wood chips, liquid gasoline, a gas tanker truck, and pilot-scale equipment. Section heading: Integration, Scale-Up, and Piloting.]

  11. Vehicle Detection of Aerial Image Using TV-L1 Texture Decomposition

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Wang, G.; Li, Y.; Huang, Y.

    2016-06-01

    Vehicle detection from high-resolution aerial images facilitates the study of public travel behavior on a large scale. In the context of roads, a simple and effective algorithm is proposed to extract texture-salient vehicles from the pavement surface. In terms of texture, most of the pavement surface varies little, except in the neighborhood of vehicles and edges. Within a certain distance from a given vector of the road network, the aerial image is decomposed into a smoothly varying cartoon part and the oscillatory details of a textural part. The variational model with a total variation regularization term and an L1 fidelity term (TV-L1) is adopted to obtain the salient texture of vehicles and the cartoon surface of the pavement. To eliminate the noise of the texture decomposition, regions of pavement surface are refined by seed growing and morphological operations. Based on the shape saliency analysis of the central objects in those regions, vehicles are detected as objects of rectangular shape saliency. The proposed algorithm is tested with a diverse set of aerial images acquired at various resolutions and scenarios around China. Experimental results demonstrate that the proposed algorithm detects vehicles at a rate of 71.5% with a false alarm rate of 21.5%, and that processing takes 39.13 seconds for a 4656 x 3496 aerial image. It is promising for large-scale transportation management and planning.
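
    The cartoon/texture split can be imitated with off-the-shelf TV denoising: the denoised image is the piecewise-smooth cartoon part, and the residual is the oscillatory texture where vehicles stand out. Note that scikit-image's Chambolle solver minimizes the TV-L2 (ROF) model, used here only as a stand-in for the paper's TV-L1 model; the file name and weights are assumptions.

      import numpy as np
      from skimage import io
      from skimage.restoration import denoise_tv_chambolle

      img = io.imread("aerial.png", as_gray=True).astype(float)  # hypothetical image
      cartoon = denoise_tv_chambolle(img, weight=0.2)   # smooth pavement surface
      texture = img - cartoon                           # vehicles, edges, markings

      # crude saliency mask: strong oscillations relative to the texture spread
      salient = np.abs(texture) > 3 * texture.std()
      print("salient pixel fraction:", salient.mean())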

  12. QuickEval: a web application for psychometric scaling experiments

    NASA Astrophysics Data System (ADS)

    Van Ngo, Khai; Storvik, Jehans J.; Dokkeberg, Christopher A.; Farup, Ivar; Pedersen, Marius

    2015-01-01

    QuickEval is a web application for carrying out psychometric scaling experiments. It offers the possibility of running controlled experiments in a laboratory, or large-scale experiments over the web for people all over the world. It is a one-of-a-kind web application that fills a software need in the image quality field. It is also, to the best of our knowledge, the first software to support the three most common scaling methods: paired comparison, rank order, and category judgement. Hopefully, a side effect of this newly created software will be to lower the threshold for performing psychometric experiments, improve the quality of the experiments carried out, make it easier to reproduce experiments, and increase research on image quality in both academia and industry. The web application is available at www.colourlab.no/quickeval.
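
    To make the paired-comparison method concrete, the sketch below converts a matrix of preference proportions into interval scale values with Thurstone's Case V model (z-transform the proportions, then average per stimulus). This is a standard analysis for such data, not code from QuickEval itself; the proportions are invented.

      import numpy as np
      from scipy.stats import norm

      # p[i, j]: fraction of observers preferring image j over image i (invented)
      p = np.array([[0.50, 0.70, 0.90],
                    [0.30, 0.50, 0.80],
                    [0.10, 0.20, 0.50]])

      z = norm.ppf(np.clip(p, 0.01, 0.99))   # clip to avoid infinite z-scores
      scale = z.mean(axis=0)                 # Case V scale value per stimulus
      print(scale - scale.min())             # anchor the lowest stimulus at 0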

  13. CImbinator: a web-based tool for drug synergy analysis in small- and large-scale datasets.

    PubMed

    Flobak, Åsmund; Vazquez, Miguel; Lægreid, Astrid; Valencia, Alfonso

    2017-08-01

    Drug synergy analyses seek to identify drug combinations that are particularly beneficial. User-friendly software solutions that can assist the analysis of large-scale datasets are required. CImbinator is a web service that can aid in batch-wise and in-depth analyses of data from small-scale and large-scale drug combination screens. CImbinator can quantify drug combination effects using both the commonly employed median-effect equation and advanced experimental mathematical models describing dose-response relationships. CImbinator is written in Ruby and R. It uses the R package drc for advanced drug response modeling. CImbinator is available at http://cimbinator.bioinfo.cnio.es , the source code is open and available at https://github.com/Rbbt-Workflows/combination_index . A Docker image is also available at https://hub.docker.com/r/mikisvaz/rbbt-ci_mbinator/ . asmund.flobak@ntnu.no or miguel.vazquez@cnio.es. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
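
    For reference, the median-effect calculation offered by such tools can be written down compactly: Chou's equation gives the dose producing affected fraction fa as Dm*(fa/(1-fa))^(1/m), and the combination index is CI = d1/Dx1 + d2/Dx2, with CI < 1 suggesting synergy. A sketch with invented fit parameters, not CImbinator's own code:

      def median_effect_dose(fa, dm, m):
          """Dose giving affected fraction fa (Chou's median-effect equation)."""
          return dm * (fa / (1.0 - fa)) ** (1.0 / m)

      # hypothetical single-drug parameters fitted from dose-response curves
      dm1, m1 = 1.0, 1.2          # drug 1: median-effect dose, slope
      dm2, m2 = 5.0, 0.9          # drug 2
      d1, d2, fa = 0.4, 1.5, 0.5  # combination doses and observed effect

      ci = (d1 / median_effect_dose(fa, dm1, m1) +
            d2 / median_effect_dose(fa, dm2, m2))
      print(f"combination index = {ci:.2f}  (CI < 1 suggests synergy)")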

  14. Multi-atlas learner fusion: An efficient segmentation approach for large-scale data.

    PubMed

    Asman, Andrew J; Huo, Yuankai; Plassard, Andrew J; Landman, Bennett A

    2015-12-01

    We propose multi-atlas learner fusion (MLF), a framework for rapidly and accurately replicating the highly accurate, yet computationally expensive, multi-atlas segmentation framework based on fusing local learners. In the largest whole-brain multi-atlas study yet reported, multi-atlas segmentations are estimated for a training set of 3464 MR brain images. Using these multi-atlas estimates we (1) estimate a low-dimensional representation for selecting locally appropriate example images, and (2) build AdaBoost learners that map a weak initial segmentation to the multi-atlas segmentation result. Thus, to segment a new target image we project the image into the low-dimensional space, construct a weak initial segmentation, and fuse the trained, locally selected, learners. The MLF framework cuts the runtime on a modern computer from 36 h down to 3-8 min - a 270× speedup - by completely bypassing the need for deformable atlas-target registrations. Additionally, we (1) describe a technique for optimizing the weak initial segmentation and the AdaBoost learning parameters, (2) quantify the ability to replicate the multi-atlas result with mean accuracies approaching the multi-atlas intra-subject reproducibility on a testing set of 380 images, (3) demonstrate significant increases in the reproducibility of intra-subject segmentations when compared to a state-of-the-art multi-atlas framework on a separate reproducibility dataset, (4) show that under the MLF framework the large-scale data model significantly improves segmentation over the small-scale model, and (5) indicate that the MLF framework has comparable performance to state-of-the-art multi-atlas segmentation algorithms without using non-local information. Copyright © 2015 Elsevier B.V. All rights reserved.
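
    The essence of the speedup is replacing deformable registration with supervised learners that map cheap per-voxel features to the multi-atlas labels. A toy sketch with scikit-learn, using random stand-in features; in the full framework a low-dimensional projection also selects locally appropriate learners, which is omitted here.

      import numpy as np
      from sklearn.ensemble import AdaBoostClassifier

      rng = np.random.default_rng(0)
      # stand-ins: per-voxel features (intensity, weak initial segmentation, ...)
      X_train = rng.normal(size=(5000, 10))
      # labels that the expensive multi-atlas fusion would have produced
      y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)

      clf = AdaBoostClassifier(n_estimators=50).fit(X_train, y_train)

      X_target = rng.normal(size=(100, 10))   # voxels of a new target image
      labels = clf.predict(X_target)          # fast stand-in for multi-atlas fusion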

  15. A statistical parts-based appearance model of inter-subject variability.

    PubMed

    Toews, Matthew; Collins, D Louis; Arbel, Tal

    2006-01-01

    In this article, we present a general statistical parts-based model for representing the appearance of an image set, applied to the problem of inter-subject MR brain image matching. In contrast with global image representations such as active appearance models, the parts-based model consists of a collection of localized image parts whose appearance, geometry and occurrence frequency are quantified statistically. The parts-based approach explicitly addresses the case where one-to-one correspondence does not exist between subjects due to anatomical differences, as parts are not expected to occur in all subjects. The model can be learned automatically, discovering structures that appear with statistical regularity in a large set of subject images, and can be robustly fit to new images, all in the presence of significant inter-subject variability. As parts are derived from generic scale-invariant features, the framework can be applied in a wide variety of image contexts, in order to study the commonality of anatomical parts or to group subjects according to the parts they share. Experimentation shows that a parts-based model can be learned from a large set of MR brain images, and used to determine parts that are common within the group of subjects. Preliminary results indicate that the model can be used to automatically identify distinctive features for inter-subject image registration despite large changes in appearance.

  16. Helium Ion Secondary Electron Mode Microscopy For Interconnect Material Imaging

    NASA Astrophysics Data System (ADS)

    Ogawa, Shinichi; Thompson, William; Stern, Lewis; Scipioni, Larry; Notte, John; Farkas, Lou; Barriss, Louise

    2010-04-01

    The recently developed helium ion microscope (HIM) is now capable of 0.35 nm secondary electron (SE) mode image resolution. When low-k dielectrics or copper interconnects in ultra-large-scale integration (ULSI) interconnect structures were imaged in this mode, it was found that unique pattern dimension and fidelity information at sub-nanometer resolution was available for the first time. This paper will discuss the helium ion microscope architecture and the SE imaging techniques that make the HIM observation method of particular value to low-k dielectric and dual damascene copper interconnect technologies.

  17. Single myelin fiber imaging in living rodents without labeling by deep optical coherence microscopy.

    PubMed

    Ben Arous, Juliette; Binding, Jonas; Léger, Jean-François; Casado, Mariano; Topilko, Piotr; Gigan, Sylvain; Boccara, A Claude; Bourdieu, Laurent

    2011-11-01

    Myelin sheath disruption is responsible for multiple neuropathies in the central and peripheral nervous system. Myelin imaging has thus become an important diagnosis tool. However, in vivo imaging has been limited to either low-resolution techniques unable to resolve individual fibers or to low-penetration imaging of single fibers, which cannot provide quantitative information about large volumes of tissue, as required for diagnostic purposes. Here, we perform myelin imaging without labeling and at micron-scale resolution with >300-μm penetration depth on living rodents. This was achieved with a prototype [termed deep optical coherence microscopy (deep-OCM)] of a high-numerical aperture infrared full-field optical coherence microscope, which includes aberration correction for the compensation of refractive index mismatch and high-frame-rate interferometric measurements. We were able to measure the density of individual myelinated fibers in the rat cortex over a large volume of gray matter. In the peripheral nervous system, deep-OCM allows, after minor surgery, in situ imaging of single myelinated fibers over a large fraction of the sciatic nerve. This allows quantitative comparison of normal and Krox20 mutant mice, in which myelination in the peripheral nervous system is impaired. This opens promising perspectives for myelin chronic imaging in demyelinating diseases and for minimally invasive medical diagnosis.

  18. Single myelin fiber imaging in living rodents without labeling by deep optical coherence microscopy

    NASA Astrophysics Data System (ADS)

    Ben Arous, Juliette; Binding, Jonas; Léger, Jean-François; Casado, Mariano; Topilko, Piotr; Gigan, Sylvain; Claude Boccara, A.; Bourdieu, Laurent

    2011-11-01

    Myelin sheath disruption is responsible for multiple neuropathies in the central and peripheral nervous system. Myelin imaging has thus become an important diagnosis tool. However, in vivo imaging has been limited to either low-resolution techniques unable to resolve individual fibers or to low-penetration imaging of single fibers, which cannot provide quantitative information about large volumes of tissue, as required for diagnostic purposes. Here, we perform myelin imaging without labeling and at micron-scale resolution with >300-μm penetration depth on living rodents. This was achieved with a prototype [termed deep optical coherence microscopy (deep-OCM)] of a high-numerical aperture infrared full-field optical coherence microscope, which includes aberration correction for the compensation of refractive index mismatch and high-frame-rate interferometric measurements. We were able to measure the density of individual myelinated fibers in the rat cortex over a large volume of gray matter. In the peripheral nervous system, deep-OCM allows, after minor surgery, in situ imaging of single myelinated fibers over a large fraction of the sciatic nerve. This allows quantitative comparison of normal and Krox20 mutant mice, in which myelination in the peripheral nervous system is impaired. This opens promising perspectives for myelin chronic imaging in demyelinating diseases and for minimally invasive medical diagnosis.

  19. Discriminative Hierarchical K-Means Tree for Large-Scale Image Classification.

    PubMed

    Chen, Shizhi; Yang, Xiaodong; Tian, Yingli

    2015-09-01

    A key challenge in large-scale image classification is how to achieve efficiency in terms of both computation and memory without compromising classification accuracy. The learning-based classifiers achieve state-of-the-art accuracies, but have been criticized for computational complexity that grows linearly with the number of classes. The nonparametric nearest neighbor (NN)-based classifiers naturally handle large numbers of categories, but incur prohibitively expensive computation and memory costs. In this brief, we present a novel classification scheme, i.e., discriminative hierarchical K-means tree (D-HKTree), which combines the advantages of both learning-based and NN-based classifiers. The complexity of the D-HKTree only grows sublinearly with the number of categories, which is much better than the recent hierarchical support vector machine-based methods. The memory requirement is an order of magnitude less than the recent Naïve Bayesian NN-based approaches. The proposed D-HKTree classification scheme is evaluated on several challenging benchmark databases and achieves state-of-the-art accuracies with significantly lower computation cost and memory requirements.
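
    The data structure underlying this family of methods can be sketched as a plain (non-discriminative) hierarchical k-means tree: queries descend greedily from the root, so search cost grows with tree depth rather than with the number of stored items or classes. The sketch below is illustrative Python, not the authors' code, and omits the discriminative node weighting that distinguishes the D-HKTree.

      import numpy as np
      from sklearn.cluster import KMeans

      class HKTree:
          """Approximate nearest-neighbour search via a k-means tree."""
          def __init__(self, branch=4, leaf_size=32):  # leaf_size >= branch
              self.branch, self.leaf_size = branch, leaf_size

          def fit(self, X):
              self.X = X
              self.root = self._build(X, np.arange(len(X)))
              return self

          def _build(self, X, idx):
              if len(idx) <= self.leaf_size:
                  return {"leaf": idx}
              km = KMeans(n_clusters=self.branch, n_init=4).fit(X[idx])
              return {"centers": km.cluster_centers_,
                      "children": [self._build(X, idx[km.labels_ == c])
                                   for c in range(self.branch)]}

          def query(self, q):
              node = self.root
              while "leaf" not in node:    # greedy descent: cost ~ depth
                  c = np.argmin(np.linalg.norm(node["centers"] - q, axis=1))
                  node = node["children"][c]
              leaf = node["leaf"]
              if len(leaf) == 0:
                  return None
              return leaf[np.argmin(np.linalg.norm(self.X[leaf] - q, axis=1))]

    Because the descent is greedy, the returned index is an approximate nearest neighbour; the accuracy/speed trade-off is controlled through branch and leaf_size.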

  20. The PREP pipeline: standardized preprocessing for large-scale EEG analysis.

    PubMed

    Bigdely-Shamlo, Nima; Mullen, Tim; Kothe, Christian; Su, Kyung-Min; Robbins, Kay A

    2015-01-01

    The technology to collect brain imaging and physiological measures has become portable and ubiquitous, opening the possibility of large-scale analysis of real-world human imaging. By its nature, such data is large and complex, making automated processing essential. This paper shows how lack of attention to the very early stages of an EEG preprocessing pipeline can reduce the signal-to-noise ratio and introduce unwanted artifacts into the data, particularly for computations done in single precision. We demonstrate that ordinary average referencing improves the signal-to-noise ratio, but that noisy channels can contaminate the results. We also show that identification of noisy channels depends on the reference and examine the complex interaction of filtering, noisy channel identification, and referencing. We introduce a multi-stage robust referencing scheme to deal with the noisy channel-reference interaction. We propose a standardized early-stage EEG processing pipeline (PREP) and discuss the application of the pipeline to more than 600 EEG datasets. The pipeline includes an automatically generated report for each dataset processed. Users can download the PREP pipeline as a freely available MATLAB library from http://eegstudy.org/prepcode.
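
    The robust-referencing idea can be sketched independently of the PREP code (which also uses correlation and RANSAC criteria and interpolates bad channels): re-estimate the average reference only from channels not currently flagged as noisy, so a few bad channels cannot drag the reference. A minimal sketch with a deviation-only noise criterion:

      import numpy as np

      def robust_average_reference(data, z_thresh=5.0, max_iter=4):
          """data: (n_channels, n_samples) EEG array."""
          good = np.ones(data.shape[0], dtype=bool)
          ref = np.zeros(data.shape[1])
          for _ in range(max_iter):
              ref = data[good].mean(axis=0)     # reference from good channels
              dev = np.std(data - ref, axis=1)  # per-channel deviation
              med = np.median(dev)
              mad = np.median(np.abs(dev - med))
              z = 0.6745 * (dev - med) / max(mad, 1e-12)
              new_good = z < z_thresh           # robust z-score flagging
              if np.array_equal(new_good, good):
                  break
              good = new_good
          return data - ref, good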

  1. Large-Scale Sentinel-1 Processing for Solid Earth Science and Urgent Response using Cloud Computing and Machine Learning

    NASA Astrophysics Data System (ADS)

    Hua, H.; Owen, S. E.; Yun, S. H.; Agram, P. S.; Manipon, G.; Starch, M.; Sacco, G. F.; Bue, B. D.; Dang, L. B.; Linick, J. P.; Malarout, N.; Rosen, P. A.; Fielding, E. J.; Lundgren, P.; Moore, A. W.; Liu, Z.; Farr, T.; Webb, F.; Simons, M.; Gurrola, E. M.

    2017-12-01

    With the increased availability of open SAR data (e.g. Sentinel-1 A/B), new challenges are being faced with processing and analyzing the voluminous SAR datasets to make geodetic measurements. Upcoming SAR missions such as NISAR are expected to generate close to 100TB per day. The Advanced Rapid Imaging and Analysis (ARIA) project can now generate geocoded unwrapped phase and coherence products from Sentinel-1 TOPS mode data in an automated fashion, using the ISCE software. This capability is currently being exercised on various study sites across the United States and around the globe, including Hawaii, Central California, Iceland and South America. The automated and large-scale SAR data processing and analysis capabilities use cloud computing techniques to speed the computations and provide scalable processing power and storage. Aspects such as how to process these voluminous SLCs and interferograms at global scales, how to keep up with the large daily SAR data volumes, and how to handle the high data rates are being explored. Scene-partitioning approaches in the processing pipeline help in handling global-scale processing up to unwrapped interferograms with stitching done at a late stage. We have built an advanced science data system with rapid search functions to enable access to the derived data products. Rapid image processing of Sentinel-1 data to interferograms and time series is already being applied to natural hazards including earthquakes, floods, volcanic eruptions, and land subsidence due to fluid withdrawal. We will present the status of the ARIA science data system for generating science-ready data products and challenges that arise from being able to process SAR datasets to derived time series data products at large scales. For example, how do we perform large-scale data quality screening on interferograms? What approaches can be used to minimize compute, storage, and data movement costs for time series analysis in the cloud? We will also present some of our findings from applying machine learning and data analytics on the processed SAR data streams, as well as lessons learned on how to ease the SAR community into interfacing with these cloud-based SAR science data systems.

  2. Fast Open-World Person Re-Identification.

    PubMed

    Zhu, Xiatian; Wu, Botong; Huang, Dongcheng; Zheng, Wei-Shi

    2018-05-01

    Existing person re-identification (re-id) methods typically assume that: 1) any probe person is guaranteed to appear in the gallery target population during deployment (i.e., closed-world) and 2) the probe set contains only a limited number of people (i.e., small search scale). Both assumptions are artificial and breached in real-world applications, since the probe population in target people search can be extremely vast in practice due to the ambiguity of the probe search space boundary. It is therefore unrealistic to assume that every probe person is a target person, and a large-scale search over person images is inherently demanded. In this paper, we introduce a new person re-id search setting, called large-scale open-world (LSOW) re-id, characterized by a huge probe image set and an open person population in search, and thus closer to practical deployments. Under LSOW, the under-studied problem of person re-id efficiency is essential in addition to the commonly studied re-id accuracy. We therefore develop a novel fast person re-id method, called Cross-view Identity Correlation and vErification (X-ICE) hashing, for joint learning of cross-view identity representation binarisation and discrimination in a unified manner. Extensive comparative experiments on three large-scale benchmarks have been conducted to validate the superiority and advantages of the proposed X-ICE method over a wide range of state-of-the-art hashing models, person re-id methods, and their combinations.

  3. Micron-scale coherence in interphase chromatin dynamics

    PubMed Central

    Zidovska, Alexandra; Weitz, David A.; Mitchison, Timothy J.

    2013-01-01

    Chromatin structure and dynamics control all aspects of DNA biology yet are poorly understood, especially at large length scales. We developed an approach, displacement correlation spectroscopy based on time-resolved image correlation analysis, to map chromatin dynamics simultaneously across the whole nucleus in cultured human cells. This method revealed that chromatin movement was coherent across large regions (4–5 µm) for several seconds. Regions of coherent motion extended beyond the boundaries of single-chromosome territories, suggesting elastic coupling of motion over length scales much larger than those of genes. These large-scale, coupled motions were ATP dependent and unidirectional for several seconds, perhaps accounting for ATP-dependent directed movement of single genes. Perturbation of major nuclear ATPases such as DNA polymerase, RNA polymerase II, and topoisomerase II eliminated micron-scale coherence, while causing rapid, local movement to increase; i.e., local motions accelerated but became uncoupled from their neighbors. We observe similar trends in chromatin dynamics upon inducing direct DNA damage; thus we hypothesize that this may be due to DNA damage responses that physically relax chromatin and block long-distance communication of forces. PMID:24019504

  4. Serial grouping of 2D-image regions with object-based attention in humans.

    PubMed

    Jeurissen, Danique; Self, Matthew W; Roelfsema, Pieter R

    2016-06-13

    After an initial stage of local analysis within the retina and early visual pathways, the human visual system creates a structured representation of the visual scene by co-selecting image elements that are part of behaviorally relevant objects. The mechanisms underlying this perceptual organization process are only partially understood. We here investigate the time-course of perceptual grouping of two-dimensional image-regions by measuring the reaction times of human participants and report that it is associated with the gradual spread of object-based attention. Attention spreads fastest over large and homogeneous areas and is slowed down at locations that require small-scale processing. We find that the time-course of the object-based selection process is well explained by a 'growth-cone' model, which selects surface elements in an incremental, scale-dependent manner. We discuss how the visual cortical hierarchy can implement this scale-dependent spread of object-based attention, leveraging the different receptive field sizes in distinct cortical areas.

  5. Exploring the brain on multiple scales with correlative two-photon and light sheet microscopy

    NASA Astrophysics Data System (ADS)

    Silvestri, Ludovico; Allegra Mascaro, Anna Letizia; Costantini, Irene; Sacconi, Leonardo; Pavone, Francesco S.

    2014-02-01

    One of the unique features of the brain is that its activity cannot be framed in a single spatio-temporal scale, but rather spans many orders of magnitude both in space and time. A single imaging technique can reveal only a small part of this complex machinery. To obtain a more comprehensive view of brain functionality, complementary approaches should be combined into a correlative framework. Here, we describe a method to integrate data from in vivo two-photon fluorescence imaging and ex vivo light sheet microscopy, taking advantage of blood vessels as a reference chart. We show how the apical dendritic arbor of a single cortical pyramidal neuron imaged in living thy1-GFP-M mice can be found in the large-scale brain reconstruction obtained with light sheet microscopy. Starting from the apical portion, the whole pyramidal neuron can then be segmented. The correlative approach presented here allows contextualizing, within a three-dimensional anatomic framework, the neurons whose dynamics have been observed in high detail in vivo.

  6. Imaging the Chicxulub central crater zone from large scale seismic acoustic wave propagation and gravity modeling

    NASA Astrophysics Data System (ADS)

    Fucugauchi, J. U.; Ortiz-Aleman, C.; Martin, R.

    2017-12-01

    Large complex craters are characterized by central uplifts that represent large-scale differential movement of deep basement from the transient cavity. Here we investigate the central sector of the large multiring Chicxulub crater, which has been surveyed by an array of marine, aerial and land-borne geophysical methods. Despite high contrasts in physical properties, contrasting results for the central uplift have been obtained, with seismic reflection surveys showing a lack of resolution in the central zone. We develop an integrated seismic and gravity model for the main structural elements, imaging the central basement uplift and melt and breccia units. The 3-D velocity model built from interpolation of seismic data is validated using perfectly matched layer seismic acoustic wave propagation modeling, optimized at grazing incidence using a shift in the frequency domain. Modeling shows a significant lack of illumination in the central sector, masking the presence of the central uplift. Seismic energy remains trapped in an upper low-velocity zone corresponding to the sedimentary infill, melt/breccias and surrounding faulted blocks. After conversion of seismic velocities into a volume of density values, we use massively parallel forward gravity modeling to constrain the size and shape of the central uplift that lies at 4.5 km depth, providing a high-resolution image of crater structure. The Bouguer anomaly and gravity response of modeled units show asymmetries corresponding to the crater structure and the distribution of post-impact carbonates, breccias, melt and target sediments.

  7. High-Resolution Large Field-of-View FUV Compact Camera

    NASA Technical Reports Server (NTRS)

    Spann, James F.

    2006-01-01

    The need for a high-resolution camera with a large field of view, capable of imaging dim emissions in the far-ultraviolet, is driven by the widely varying intensities of FUV emissions and the spatial/temporal scales of phenomena of interest in the Earth's ionosphere. In this paper, the concept of a camera is presented that is designed to achieve these goals in a lightweight package with sufficient visible-light rejection to be useful for dayside and nightside emissions. The camera employs the concept of self-filtering to achieve good spectral resolution tuned to specific wavelengths. The large field of view is sufficient to image the Earth's disk at geosynchronous altitudes and capable of a spatial resolution of >20 km. The optics and filters are emphasized.

  8. The XMM Large Scale Structure Survey

    NASA Astrophysics Data System (ADS)

    Pierre, Marguerite

    2005-10-01

    We propose to complete, by an additional 5 deg², the XMM-LSS Survey region overlying the Spitzer/SWIRE field. This field already has CFHTLS and Integral coverage, and will encompass about 10 deg². The resulting multi-wavelength medium-depth survey, which complements XMM and Chandra deep surveys, will provide a unique view of large-scale structure over a wide range of redshift, and will show active galaxies in the full range of environments. The complete coverage by optical and IR surveys provides high-quality photometric redshifts, so that cosmological results can quickly be extracted. In the spirit of a Legacy survey, we will make the raw X-ray data immediately public. Multi-band catalogues and images will also be made available on short time scales.

  9. Large-Scale Partial-Duplicate Image Retrieval and Its Applications

    DTIC Science & Technology

    2016-04-23

    The explosive growth of Internet media (partial-duplicate/similar images, 3D objects, 3D models, etc.) sheds bright light on many promising applications in forensics, surveillance, 3D animation, mobile visual search, and 3D model/object search. Compared with the ... and stable spatial configuration. Compared with the general 2D objects, 3D models/objects consist of 3D data information (typically a list of ...

  10. ARES V CONCEPT IMAGE

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This concept image shows the Ares V cargo launch vehicle. The heavy-lifting Ares V is NASA's primary vehicle for safe and reliable delivery of large-scale hardware to space. This includes the lunar lander, materials for establishing a permanent Moon base, and the vehicles and hardware needed to extend a human presence beyond Earth orbit. Ares V can carry approximately 290,000 pounds to low Earth orbit and 144,000 pounds to lunar orbit.

  11. Wavelet Analysis for RADARSAT Exploitation: Demonstration of Algorithms for Maritime Surveillance

    DTIC Science & Technology

    2007-02-01

    In this study, we demonstrate wavelet analysis for exploitation of RADARSAT ocean imagery, including wind direction estimation, oceanic and atmospheric ... of image striations that can arise as a texture pattern caused by turbulent coherent structures in the marine atmospheric boundary layer. The image ... associated change in the pattern texture (i.e., the nature of the turbulent atmospheric structures) across the front. Due to the large spatial scale of ...

  12. The Tomographic Ionized-Carbon Mapping Experiment (TIME) CII Imaging Spectrometer

    NASA Astrophysics Data System (ADS)

    Staniszewski, Z.; Bock, J. J.; Bradford, C. M.; Brevik, J.; Cooray, A.; Gong, Y.; Hailey-Dunsheath, S.; O'Brient, R.; Santos, M.; Shirokoff, E.; Silva, M.; Zemcov, M.

    2014-09-01

    The Tomographic Ionized-Carbon Mapping Experiment (TIME) and TIME-Pilot are proposed imaging spectrometers to measure reionization and large-scale structure at redshifts 5-9. We seek to exploit the 158 μm rest-frame emission of [CII], which becomes measurable at 200-300 GHz at reionization redshifts. Here we describe the scientific motivation, give an overview of the proposed instrument, and highlight key technological developments underway to enable these measurements.

  13. The fusion of large scale classified side-scan sonar image mosaics.

    PubMed

    Reed, Scott; Tena Ruiz, Ioseba; Capus, Chris; Petillot, Yvan

    2006-07-01

    This paper presents a unified framework for the creation of classified maps of the seafloor from sonar imagery. Significant challenges in photometric correction, classification, navigation and registration, and image fusion are addressed. The techniques described are directly applicable to a range of remote sensing problems. Recent advances in side-scan data correction are incorporated to compensate for the sonar beam pattern and motion of the acquisition platform. The corrected images are segmented using pixel-based textural features and standard classifiers. In parallel, the navigation of the sonar device is processed using Kalman filtering techniques. A simultaneous localization and mapping framework is adopted to improve the navigation accuracy and produce georeferenced mosaics of the segmented side-scan data. These are fused within a Markovian framework and two fusion models are presented. The first uses a voting scheme regularized by an isotropic Markov random field and is applicable when the reliability of each information source is unknown. The Markov model is also used to inpaint regions where no final classification decision can be reached using pixel level fusion. The second model formally introduces the reliability of each information source into a probabilistic model. Evaluation of the two models using both synthetic images and real data from a large scale survey shows significant quantitative and qualitative improvement using the fusion approach.
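
    The first (voting) fusion model can be illustrated schematically: combine the classified mosaics by pixel-level voting, then regularize with an isotropic Potts-style smoothness term optimized by iterated conditional modes. This sketch is a loose stand-in for the paper's Markov-random-field formulation (the reliability-weighted second model is not shown); loops are kept simple for clarity.

      import numpy as np

      def fuse_maps(label_maps, beta=0.8, n_iter=5):
          """label_maps: (K, H, W) integer class maps from K sources."""
          K, H, W = label_maps.shape
          n_classes = int(label_maps.max()) + 1
          votes = np.stack([(label_maps == c).sum(axis=0)
                            for c in range(n_classes)])   # (C, H, W)
          labels = votes.argmax(axis=0)
          for _ in range(n_iter):                         # ICM sweeps
              for y in range(H):
                  for x in range(W):
                      nb = labels[max(0, y-1):y+2, max(0, x-1):x+2]
                      score = [votes[c, y, x]
                               + beta * (np.sum(nb == c) - (labels[y, x] == c))
                               for c in range(n_classes)]
                      labels[y, x] = int(np.argmax(score))
          return labels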

  14. An efficient photogrammetric stereo matching method for high-resolution images

    NASA Astrophysics Data System (ADS)

    Li, Yingsong; Zheng, Shunyi; Wang, Xiaonan; Ma, Hao

    2016-12-01

    Stereo matching of high-resolution images is a great challenge in photogrammetry. The main difficulty is the enormous processing workload that involves substantial computing time and memory consumption. In recent years, the semi-global matching (SGM) method has been a promising approach for solving stereo problems in different data sets. However, the time complexity and memory demand of SGM are proportional to the scale of the images involved, which leads to very high consumption when dealing with large images. To solve this problem, this paper presents an efficient hierarchical matching strategy based on the SGM algorithm, using single-instruction-multiple-data (SIMD) instructions and structured parallelism in the central processing unit. The proposed method significantly reduces the computational time and memory required for large-scale stereo matching. The three-dimensional (3D) surface is reconstructed by triangulating and fusing redundant reconstruction information from multi-view matching results. Finally, three high-resolution aerial data sets are used to evaluate our improvement; precise airborne laser scanner data for one data set are used to measure the accuracy of our reconstruction. Experimental results demonstrate that our method delivers remarkable time and memory savings while maintaining the density and precision of the derived 3D point cloud.
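
    For reference, the core SGM recurrence along a single aggregation direction is a small dynamic program: each pixel's path cost adds the cheapest of staying at the same disparity, shifting by one disparity (penalty P1), or jumping (penalty P2). A full implementation repeats this over 8 or 16 directions, sums the path costs, and takes the per-pixel argmin; the paper's SIMD and hierarchical optimizations are omitted in this sketch.

      import numpy as np

      def aggregate_left_to_right(cost, P1=10.0, P2=120.0):
          """cost: (H, W, D) matching cost volume; path costs along +x."""
          H, W, D = cost.shape
          L = np.empty((H, W, D))
          L[:, 0] = cost[:, 0]
          for x in range(1, W):
              prev = L[:, x - 1]                        # (H, D)
              best = prev.min(axis=1, keepdims=True)    # (H, 1)
              up = np.pad(prev, ((0, 0), (1, 0)),
                          constant_values=np.inf)[:, :D] + P1   # from d-1
              down = np.pad(prev, ((0, 0), (0, 1)),
                            constant_values=np.inf)[:, 1:] + P1  # from d+1
              m = np.minimum(np.minimum(prev, up),
                             np.minimum(down, best + P2))
              L[:, x] = cost[:, x] + m - best           # keep costs bounded
          return L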

  15. A novel class sensitive hashing technique for large-scale content-based remote sensing image retrieval

    NASA Astrophysics Data System (ADS)

    Reato, Thomas; Demir, Begüm; Bruzzone, Lorenzo

    2017-10-01

    This paper presents a novel class sensitive hashing technique in the framework of large-scale content-based remote sensing (RS) image retrieval. The proposed technique aims at representing each image with multi-hash codes, each of which corresponds to a primitive (i.e., land cover class) present in the image. To this end, the proposed method consists of a three-step algorithm. The first step is devoted to characterizing each image by primitive class descriptors. These descriptors are obtained through a supervised approach, which initially extracts the image regions and their descriptors that are then associated with primitives present in the images. This step requires a set of annotated training regions to define primitive classes. A correspondence between the regions of an image and the primitive classes is built based on the probability of each primitive class to be present at each region. All the regions belonging to the specific primitive class with a probability higher than a given threshold are highly representative of that class. Thus, the average value of the descriptors of these regions is used to characterize that primitive. In the second step, the descriptors of primitive classes are transformed into multi-hash codes to represent each image. This is achieved by adapting the kernel-based supervised locality sensitive hashing method to multi-code hashing problems. The first two steps of the proposed technique, unlike the standard hashing methods, allow one to represent each image by a set of primitive class sensitive descriptors and their hash codes. Then, in the last step, the images in the archive that are very similar to a query image are retrieved based on a multi-hash-code-matching scheme. Experimental results obtained on an archive of aerial images confirm the effectiveness of the proposed technique in terms of retrieval accuracy when compared to the standard hashing methods.
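
    One plausible form of the multi-hash-code-matching step (the paper's exact scheme may differ) is sketched below, assuming each image is stored as a set of binary codes, one per primitive class present in it: the distance from a query to an archive image averages, over the query's codes, the best Hamming match among the image's codes. Function names are illustrative.

      import numpy as np

      def multi_code_distance(query_codes, image_codes):
          """Both arguments: lists of equal-length binary numpy arrays."""
          return np.mean([min(np.count_nonzero(q != c) for c in image_codes)
                          for q in query_codes])

      def retrieve(query_codes, archive, k=10):
          """archive: list of per-image code sets; returns top-k indices."""
          d = [multi_code_distance(query_codes, codes) for codes in archive]
          return np.argsort(d)[:k]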

  16. Measuring Cosmic Expansion and Large Scale Structure with Destiny

    NASA Technical Reports Server (NTRS)

    Benford, Dominic J.; Lauer, Tod R.

    2007-01-01

    Destiny is a simple, direct, low-cost mission to determine the properties of dark energy by obtaining a cosmologically deep supernova (SN) type Ia Hubble diagram and by measuring the large-scale mass power spectrum over time. Its science instrument is a 1.65m space telescope, featuring a near-infrared survey camera/spectrometer with a large field of view. During its first two years, Destiny will detect, observe, and characterize ~3000 SN Ia events over the redshift interval 0.4 < z < 1.7. It will then survey >1000 square degrees to measure the large-scale mass power spectrum. The combination of surveys is much more powerful than either technique on its own, and will have over an order of magnitude greater sensitivity than will be provided by ongoing ground-based projects.

  17. An innovative experimental setup for Large Scale Particle Image Velocimetry measurements in riverine environments

    NASA Astrophysics Data System (ADS)

    Tauro, Flavia; Olivieri, Giorgio; Porfiri, Maurizio; Grimaldi, Salvatore

    2014-05-01

    Large Scale Particle Image Velocimetry (LSPIV) is a powerful methodology to nonintrusively monitor surface flows. Its use has been beneficial to the development of rating curves in riverine environments and to map geomorphic features in natural waterways. Typical LSPIV experimental setups rely on the use of mast-mounted cameras for the acquisition of natural stream reaches. Such cameras are installed on stream banks and are angled with respect to the water surface to capture large scale fields of view. Despite its promise and the simplicity of the setup, the practical implementation of LSPIV is affected by several challenges, including the acquisition of ground reference points for image calibration and time-consuming and highly user-assisted procedures to orthorectify images. In this work, we perform LSPIV studies on stream sections in the Aniene and Tiber basins, Italy. To alleviate the limitations of traditional LSPIV implementations, we propose an improved video acquisition setup comprising a telescopic mast, an inexpensive GoPro Hero 3 video camera, and a system of two lasers. The setup allows for maintaining the camera axis perpendicular to the water surface, thus mitigating uncertainties related to image orthorectification. Further, the mast encases a laser system for remote image calibration, thus allowing for nonintrusively calibrating videos without acquiring ground reference points. We conduct measurements on two different water bodies to outline the performance of the methodology in case of varying flow regimes, illumination conditions, and distribution of surface tracers. Specifically, the Aniene river is characterized by high surface flow velocity, the presence of abundant, homogeneously distributed ripples and water reflections, and a meagre number of buoyant tracers. On the other hand, the Tiber river presents lower surface flows, isolated reflections, and several floating objects. Videos are processed through image-based analyses to correct for lens distortions and analyzed with a commercially available PIV software. Surface flow velocity estimates are compared to supervised measurements performed by visually tracking objects floating on the stream surface and to rating curves developed by the Ufficio Idrografico e Mareografico (UIM) at Regione Lazio, Italy. Experimental findings demonstrate that the presence of tracers is crucial for surface flow velocity estimates. Further, considering surface ripples and patterns may lead to underestimations in LSPIV analyses.
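
    The measurement core of any LSPIV pipeline is a windowed cross-correlation between consecutive frames; a minimal sketch follows, assuming already orthorectified grayscale frames (the function is illustrative, not the authors' processing code). The final conversion reflects this setup's laser calibration: two dots a known distance apart fix the metres-per-pixel scale.

      import numpy as np
      from scipy.signal import fftconvolve

      def piv_displacement(win_a, win_b):
          """Pixel displacement (dy, dx) of win_b relative to win_a."""
          a = win_a - win_a.mean()                 # zero-mean windows
          b = win_b - win_b.mean()
          corr = fftconvolve(b, a[::-1, ::-1])     # cross-correlation
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          return (peak[0] - win_a.shape[0] + 1,
                  peak[1] - win_a.shape[1] + 1)

      # velocity = displacement * metres_per_pixel / frame_interval_seconds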

  18. Monitoring Local Changes in Granite Rock Under Biaxial Test: A Spatiotemporal Imaging Application With Diffuse Waves

    NASA Astrophysics Data System (ADS)

    Xie, Fan; Ren, Yaqiong; Zhou, Yongsheng; Larose, Eric; Baillet, Laurent

    2018-03-01

    Diffuse acoustic or seismic waves are highly sensitive to detect changes of mechanical properties in heterogeneous geological materials. In particular, thanks to acoustoelasticity, we can quantify stress changes by tracking acoustic or seismic relative velocity changes in the material at test. In this paper, we report on a small-scale laboratory application of an innovative time-lapse tomography technique named Locadiff to image spatiotemporal mechanical changes on a granite sample under biaxial loading, using diffuse waves at ultrasonic frequencies (300 kHz to 900 kHz). We demonstrate the ability of the method to image reversible stress evolution and deformation process, together with the development of reversible and irreversible localized microdamage in the specimen at an early stage. Using full-field infrared thermography, we visualize stress-induced temperature changes and validate stress images obtained from diffuse ultrasound. We demonstrate that the inversion with a good resolution can be achieved with only a limited number of receivers distributed around a single source, all located at the free surface of the specimen. This small-scale experiment is a proof of concept for frictional earthquake-like failure (e.g., stick-slip) research at laboratory scale as well as large-scale seismic applications, potentially including active fault monitoring.

  19. Three-Dimensional Terahertz Coded-Aperture Imaging Based on Single Input Multiple Output Technology.

    PubMed

    Chen, Shuo; Luo, Chenggao; Deng, Bin; Wang, Hongqiang; Cheng, Yongqiang; Zhuang, Zhaowen

    2018-01-19

    As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, and staring imaging by producing spatiotemporal independent signals with coded apertures. In this paper, we propose a three-dimensional (3D) TCAI architecture based on single input multiple output (SIMO) technology, which can reduce the coding and sampling times sharply. The coded aperture applied in the proposed TCAI architecture loads either purposive or random phase modulation factor. In the transmitting process, the purposive phase modulation factor drives the terahertz beam to scan the divided 3D imaging cells. In the receiving process, the random phase modulation factor is adopted to modulate the terahertz wave to be spatiotemporally independent for high resolution. Considering human-scale targets, images of each 3D imaging cell are reconstructed one by one to decompose the global computational complexity, and then are synthesized together to obtain the complete high-resolution image. As for each imaging cell, the multi-resolution imaging method helps to reduce the computational burden on a large-scale reference-signal matrix. The experimental results demonstrate that the proposed architecture can achieve high-resolution imaging with much less time for 3D targets and has great potential in applications such as security screening, nondestructive detection, medical diagnosis, etc.

  20. A multiparametric automatic method to monitor long-term reproducibility in digital mammography: results from a regional screening programme.

    PubMed

    Gennaro, G; Ballaminut, A; Contento, G

    2017-09-01

    This study aims to illustrate a multiparametric automatic method for monitoring long-term reproducibility of digital mammography systems, and its application on a large scale. Twenty-five digital mammography systems employed within a regional screening programme were controlled weekly using the same type of phantom, whose images were analysed by an automatic software tool. To assess system reproducibility levels, 15 image quality indices (IQIs) were extracted and compared with the corresponding indices previously determined by a baseline procedure. The coefficients of variation (COVs) of the IQIs were used to assess the overall variability. A total of 2553 phantom images were collected from the 25 digital mammography systems from March 2013 to December 2014. Most of the systems showed excellent image quality reproducibility over the surveillance interval, with mean variability below 5%. Variability of each IQI was below 5%, with the exception of one index associated with the smallest phantom objects (0.25 mm), which was below 10%. The method applied for reproducibility tests (multi-detail phantoms, a cloud-based automatic software tool measuring multiple image quality indices, and statistical process control) was proven to be effective and applicable on a large scale and to any type of digital mammography system. • Reproducibility of mammography image quality should be monitored by appropriate quality controls. • Use of automatic software tools allows image quality evaluation by multiple indices. • System reproducibility can be assessed by comparing current index values with baseline data. • Overall system reproducibility of modern digital mammography systems is excellent. • The method proposed and applied is cost-effective and easily scalable.

  1. An evaluation of multi-probe locality sensitive hashing for computing similarities over web-scale query logs

    PubMed Central

    2018-01-01

    Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of data involved (e.g., users’ queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with “vanilla” LSH, even when using the same amount of space. PMID:29346410
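
    For concreteness, here is a minimal signed-random-projection LSH index for cosine similarity, with a one-bit-flip multi-probe step (probing neighbouring buckets boosts recall without extra tables). This is a sketch of the general technique, not the paper's Hadoop implementation, and the class name is illustrative.

      import numpy as np
      from collections import defaultdict

      class CosineLSH:
          def __init__(self, dim, n_bits=16, n_tables=8, seed=0):
              rng = np.random.default_rng(seed)
              self.planes = rng.standard_normal((n_tables, n_bits, dim))
              self.tables = [defaultdict(list) for _ in range(n_tables)]

          def _keys(self, v):
              bits = (self.planes @ v) > 0    # sign of random projections
              return [tuple(int(b) for b in row) for row in bits]

          def add(self, item_id, v):
              for table, key in zip(self.tables, self._keys(v)):
                  table[key].append(item_id)

          def query(self, v, multiprobe=True):
              out = set()
              for table, key in zip(self.tables, self._keys(v)):
                  out.update(table.get(key, ()))
                  if multiprobe:              # probe 1-bit-flip buckets too
                      for i in range(len(key)):
                          k2 = key[:i] + (1 - key[i],) + key[i + 1:]
                          out.update(table.get(k2, ()))
              return out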

  2. A new large area scintillator screen for X-ray imaging

    NASA Astrophysics Data System (ADS)

    Nagarkar, V. V.; Miller, S. R.; Tipnis, S. V.; Lempicki, A.; Brecher, C.; Lingertat, H.

    2004-01-01

    We report on the development of a new, large area, powdered scintillator screen based on Lu2O3(Eu). As reported earlier, the transparent ceramic form of this material has a very high density of 9.4 g/cm³, a high light output comparable to that of CsI(Tl), and emits in a narrow spectral band centered at about 610 nm. Research into fabrication of this ceramic scintillator in a large area format is currently underway; however, the process is not yet practical for large-scale production. Here we have explored fabrication of large area screens using the precursor powders from which the ceramics are fabricated. To date we have produced screens of up to 16 × 16 cm² area with thickness in the range of 18 mg/cm². This paper outlines the screen fabrication technique and presents its imaging performance in comparison with a commercial Gd2O2S:Tb (GOS) screen.

  3. Strong-lensing analysis of MACS J0717.5+3745 from Hubble Frontier Fields observations: How well can the mass distribution be constrained?

    NASA Astrophysics Data System (ADS)

    Limousin, M.; Richard, J.; Jullo, E.; Jauzac, M.; Ebeling, H.; Bonamigo, M.; Alavi, A.; Clément, B.; Giocoli, C.; Kneib, J.-P.; Verdugo, T.; Natarajan, P.; Siana, B.; Atek, H.; Rexroth, M.

    2016-04-01

    We present a strong-lensing analysis of MACSJ0717.5+3745 (hereafter MACS J0717), based on the full depth of the Hubble Frontier Field (HFF) observations, which brings the number of multiply imaged systems to 61, ten of which have been spectroscopically confirmed. The total number of images comprised in these systems rises to 165, compared to 48 images in 16 systems before the HFF observations. Our analysis uses a parametric mass reconstruction technique, as implemented in the Lenstool software, and the subset of the 132 most secure multiple images to constrain a mass distribution composed of four large-scale mass components (spatially aligned with the four main light concentrations) and a multitude of galaxy-scale perturbers. We find a superposition of cored isothermal mass components to provide a good fit to the observational constraints, resulting in a very shallow mass distribution for the smooth (large-scale) component. Given the implications of such a flat mass profile, we investigate whether a model composed of "peaky" non-cored mass components can also reproduce the observational constraints. We find that such a non-cored mass model reproduces the observational constraints equally well, in the sense that both models give comparable total rms. Although the total (smooth dark matter component plus galaxy-scale perturbers) mass distributions of both models are consistent, as are the integrated two-dimensional mass profiles, we find that the smooth and the galaxy-scale components are very different. We conclude that, even in the HFF era, the generic degeneracy between smooth and galaxy-scale components is not broken, in particular in such a complex galaxy cluster. Consequently, insights into the mass distribution of MACS J0717 remain limited, emphasizing the need for additional probes beyond strong lensing. Our findings also have implications for estimates of the lensing magnification. We show that the amplification difference between the two models is larger than the error associated with either model, and that this additional systematic uncertainty is approximately the difference in magnification obtained by the different groups of modelers using pre-HFF data. This uncertainty decreases the area of the image plane where we can reliably study the high-redshift Universe by 50 to 70%.

  4. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper, a novel fully automated 3D reconstruction approach based on images from low-altitude unmanned aerial vehicle systems (UAVs) is presented, which requires neither prior camera calibration nor any other external knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by the UAV; the image topology map significantly reduces the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information in large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
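
    The image-topology idea reduces to a very small computation, sketched below under the assumption that camera GPS positions have already been converted to a local metric frame (the function name and threshold are illustrative): feature matching is attempted only between images whose cameras were close in space, replacing exhaustive O(n²) pairing.

      import numpy as np
      from itertools import combinations

      def candidate_pairs(cam_xyz, radius=60.0):
          """cam_xyz: (n, 3) camera positions in metres."""
          return [(i, j)
                  for i, j in combinations(range(len(cam_xyz)), 2)
                  if np.linalg.norm(cam_xyz[i] - cam_xyz[j]) < radius]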

  5. Imaging of mesoscopic-scale organisms using selective-plane optoacoustic tomography.

    PubMed

    Razansky, Daniel; Vinegoni, Claudio; Ntziachristos, Vasilis

    2009-05-07

    Mesoscopic-scale living organisms (i.e. 1 mm to 1 cm sized) remain largely inaccessible by current optical imaging methods due to intensive light scattering in tissues. Therefore, imaging of many important model organisms, such as insects, fishes, worms and similarly sized biological specimens, is currently limited to embryonic or other transparent stages of development. This makes it difficult to relate embryonic cellular and molecular mechanisms to consequences in organ function and animal behavior in more advanced stages and adults. Herein, we have developed a selective-plane illumination optoacoustic tomography technique for in vivo imaging of optically diffusive organisms and tissues. The method is capable of whole-body imaging at depths from the sub-millimeter up to centimeter range with a scalable spatial resolution on the order of a few tens of microns. In contrast to pure optical methods, the spatial resolution here is not determined nor limited by light diffusion; therefore, such performance cannot be achieved by any other optical imaging technology developed so far. The utility of the method is demonstrated on several whole-body models and small-animal extremities.

  6. An improved active contour model for glacial lake extraction

    NASA Astrophysics Data System (ADS)

    Zhao, H.; Chen, F.; Zhang, M.

    2017-12-01

    The active contour model is a widely used method in visual tracking and image segmentation. Driven by an objective function, the initial curve defined in the model evolves to a stable condition, the desired result in a given image. As a typical region-based active contour model, the C-V model detects weak boundaries well and is robust to noise, which shows great potential for glacial lake extraction. Glacial lakes are sensitive indicators of global climate change, so accurately delineating glacial lake boundaries is essential to evaluating the hydrologic and living environment. However, the current methods for glacial lake extraction, mainly water index methods and recognition/classification methods, are difficult to apply directly to large-scale glacial lake extraction due to the diversity of glacial lakes and the many confounding factors in the imagery, such as image noise, shadows, snow and ice. Given these advantages of the C-V model and the difficulties of glacial lake extraction, we introduce the signed pressure force function to improve the C-V model and adapt it to glacial lake extraction. To inspect the extraction results, three typical glacial lake development sites were selected, including the Altai Mountains, the central Himalayas and south-eastern Tibet; Landsat 8 OLI imagery was used as the experimental data source, with Google Earth imagery as reference data for verifying the results. The experiments suggest that the improved active contour model we propose can effectively discriminate glacial lakes from complex backgrounds with a higher Kappa coefficient (0.895), especially for small glacial lakes, which constitute weak information in the image. Our findings provide a new approach to improved accuracy when small glacial lakes make up a large proportion of the lakes present, and open the possibility of automated glacial lake mapping over large areas.
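
    A minimal sketch of one published SPF-style simplification of the C-V model is given below (the signed pressure force, built from the two region means, replaces the edge-stopping function, and Gaussian filtering of the level set replaces explicit regularization terms); the exact formulation used in this work may differ, and the parameters are illustrative.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def spf_evolve(img, phi, n_iter=200, dt=1.0, sigma=1.5):
          """img: grayscale band; phi: initial level set; returns mask."""
          img = img.astype(float)
          for _ in range(n_iter):
              inside, outside = phi > 0, phi <= 0
              c1 = img[inside].mean() if inside.any() else 0.0
              c2 = img[outside].mean() if outside.any() else 0.0
              spf = img - (c1 + c2) / 2.0          # signed pressure force
              spf /= np.abs(spf).max() + 1e-12
              gy, gx = np.gradient(phi)
              phi = phi + dt * spf * np.hypot(gx, gy)
              phi = gaussian_filter(phi, sigma)    # smoothness regularizer
          return phi > 0                           # extracted lake mask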

  7. SEGMENTATION OF MITOCHONDRIA IN ELECTRON MICROSCOPY IMAGES USING ALGEBRAIC CURVES.

    PubMed

    Seyedhosseini, Mojtaba; Ellisman, Mark H; Tasdizen, Tolga

    2013-01-01

    High-resolution microscopy techniques have been used to generate large volumes of data with enough details for understanding the complex structure of the nervous system. However, automatic techniques are required to segment cells and intracellular structures in these multi-terabyte datasets and make anatomical analysis possible on a large scale. We propose a fully automated method that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy (EM) images. The main idea is to use algebraic curves to extract shape features together with texture features from image patches. Then, these powerful features are used to learn a random forest classifier, which can predict mitochondria locations precisely. Finally, the algebraic curves together with regional information are used to segment the mitochondria at the predicted locations. We demonstrate that our method outperforms the state-of-the-art algorithms in segmentation of mitochondria in EM images.
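
    Schematically, the prediction stage pairs per-location features with a random forest. In the sketch below the paper's algebraic-curve shape features and texture features are replaced by raw patch intensities purely for illustration, and centers are assumed to lie at least `half` pixels from the image border.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      def patch_features(img, centers, half=12):
          """Stack flattened patches around the given (r, c) centers."""
          return np.array([img[r - half:r + half, c - half:c + half].ravel()
                           for r, c in centers], dtype=float)

      # X = patch_features(train_img, labeled_centers)  # y: 1 = mitochondrion
      # clf = RandomForestClassifier(n_estimators=200).fit(X, y)
      # p = clf.predict_proba(patch_features(test_img, test_centers))[:, 1]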

  8. CellCognition: time-resolved phenotype annotation in high-throughput live cell imaging.

    PubMed

    Held, Michael; Schmitz, Michael H A; Fischer, Bernd; Walter, Thomas; Neumann, Beate; Olma, Michael H; Peter, Matthias; Ellenberg, Jan; Gerlich, Daniel W

    2010-09-01

    Fluorescence time-lapse imaging has become a powerful tool to investigate complex dynamic processes such as cell division or intracellular trafficking. Automated microscopes generate time-resolved imaging data at high throughput, yet tools for quantification of large-scale movie data are largely missing. Here we present CellCognition, a computational framework to annotate complex cellular dynamics. We developed a machine-learning method that combines state-of-the-art classification with hidden Markov modeling for annotation of the progression through morphologically distinct biological states. Incorporation of time information into the annotation scheme was essential to suppress classification noise at state transitions and confusion between different functional states with similar morphology. We demonstrate generic applicability in different assays and perturbation conditions, including a candidate-based RNA interference screen for regulators of mitotic exit in human cells. CellCognition is published as open source software, enabling live-cell imaging-based screening with assays that directly score cellular dynamics.
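
    The temporal-annotation step can be illustrated with a standard Viterbi decoder over per-frame classifier outputs: implausible state transitions receive low probability, which suppresses classification noise at state boundaries. This is a generic HMM sketch, not the CellCognition code.

      import numpy as np

      def viterbi(log_emission, log_transition, log_prior):
          """log_emission: (T, S) frame log-likelihoods; returns state path."""
          T, S = log_emission.shape
          delta = log_prior + log_emission[0]
          back = np.zeros((T, S), dtype=int)
          for t in range(1, T):
              scores = delta[:, None] + log_transition     # (S, S)
              back[t] = scores.argmax(axis=0)
              delta = scores.max(axis=0) + log_emission[t]
          path = [int(delta.argmax())]
          for t in range(T - 1, 0, -1):                    # backtrack
              path.append(int(back[t, path[-1]]))
          return path[::-1]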

  9. A quantum spin-probe molecular microscope

    NASA Astrophysics Data System (ADS)

    Perunicic, V. S.; Hill, C. D.; Hall, L. T.; Hollenberg, L. C. L.

    2016-10-01

    Imaging the atomic structure of a single biomolecule is an important challenge in the physical biosciences. Whilst existing techniques all rely on averaging over large ensembles of molecules, the single-molecule realm remains unsolved. Here we present a protocol for 3D magnetic resonance imaging of a single molecule using a quantum spin probe acting simultaneously as the magnetic resonance sensor and source of magnetic field gradient. Signals corresponding to specific regions of the molecule's nuclear spin density are encoded on the quantum state of the probe, which is used to produce a 3D image of the molecular structure. Quantum simulations of the protocol applied to the rapamycin molecule (C51H79NO13) show that the hydrogen and carbon substructure can be imaged at the angstrom level using current spin-probe technology. With prospects for scaling to large molecules and/or fast dynamic conformation mapping using spin labels, this method provides a realistic pathway for single-molecule microscopy.

  10. Detectability of large-scale power suppression in the galaxy distribution

    NASA Astrophysics Data System (ADS)

    Gibelyou, Cameron; Huterer, Dragan; Fang, Wenjuan

    2010-12-01

    Suppression in primordial power on the Universe’s largest observable scales has been invoked as a possible explanation for large-angle observations in the cosmic microwave background, and is allowed or predicted by some inflationary models. Here we investigate the extent to which such a suppression could be confirmed by the upcoming large-volume redshift surveys. For definiteness, we study a simple parametric model of suppression that improves the fit of the vanilla ΛCDM model to the angular correlation function measured by WMAP in cut-sky maps, and at the same time improves the fit to the angular power spectrum inferred from the maximum likelihood analysis presented by the WMAP team. We find that the missing power at large scales, favored by WMAP observations within the context of this model, will be difficult but not impossible to rule out with a galaxy redshift survey with large-volume (˜100Gpc3). A key requirement for success in ruling out power suppression will be having redshifts of most galaxies detected in the imaging survey.

  11. Three-Dimensional Terahertz Coded-Aperture Imaging Based on Matched Filtering and Convolutional Neural Network.

    PubMed

    Chen, Shuo; Luo, Chenggao; Wang, Hongqiang; Deng, Bin; Cheng, Yongqiang; Zhuang, Zhaowen

    2018-04-26

    As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, and staring imaging by producing spatiotemporal independent signals with coded apertures. However, there are still two problems in three-dimensional (3D) TCAI. Firstly, the large-scale reference-signal matrix based on meshing the 3D imaging area creates a heavy computational burden, thus leading to unsatisfactory efficiency. Secondly, it is difficult to resolve the target under low signal-to-noise ratio (SNR). In this paper, we propose a 3D imaging method based on matched filtering (MF) and a convolutional neural network (CNN), which can reduce the computational burden and achieve high-resolution imaging for low-SNR targets. In terms of the frequency-hopping (FH) signal, the original echo is processed with MF. By extracting the processed echo in different spike pulses separately, targets in different imaging planes are reconstructed simultaneously to decompose the global computational complexity, and then are synthesized together to reconstruct the 3D target. Based on the conventional TCAI model, we deduce and build a new TCAI model based on MF. Furthermore, a convolutional neural network (CNN) is designed to teach the MF-TCAI to reconstruct low-SNR targets better. The experimental results demonstrate that MF-TCAI achieves impressive imaging performance and efficiency under low SNR. Moreover, MF-TCAI has learned to better resolve low-SNR 3D targets with the help of the CNN. In summary, the proposed 3D TCAI can achieve: (1) low-SNR high-resolution imaging by using MF; (2) efficient 3D imaging by downsizing the large-scale reference-signal matrix; and (3) intelligent imaging with CNN. Therefore, TCAI based on MF and CNN has great potential in applications such as security screening, nondestructive detection, medical diagnosis, etc.
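
    The MF step itself is the classical operation sketched below (correlation of the received echo with the known transmitted waveform, i.e., convolution with its conjugated time reverse); everything specific to MF-TCAI, such as per-plane spike-pulse extraction and the CNN, sits on top of it.

      import numpy as np

      def matched_filter(echo, reference):
          """Peaks where the reference waveform occurs, raising output SNR."""
          return np.convolve(echo, np.conj(reference[::-1]), mode="full")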

  12. A robust real-time abnormal region detection framework from capsule endoscopy images

    NASA Astrophysics Data System (ADS)

    Cheng, Yanfen; Liu, Xu; Li, Huiping

    2009-02-01

    In this paper we present a novel method to detect abnormal regions from capsule endoscopy images. Wireless Capsule Endoscopy (WCE) is a recent technology in which a capsule with an embedded camera is swallowed by the patient to visualize the gastrointestinal tract. One challenge is that a single diagnostic procedure produces over 50,000 images, making the physicians' review process expensive. The review involves identifying images containing abnormal regions (tumor, bleeding, etc.) within this large image sequence. In this paper we construct a novel framework for robust and real-time abnormal region detection from large amounts of capsule endoscopy images. Detected potential abnormal regions can be labeled automatically for physicians to review further, thereby shortening the overall review process. Our abnormal region detection framework has the following advantages: 1) Trainable. Users can define and label any type of abnormal region they want to find; abnormal regions such as tumor and bleeding can be pre-defined and labeled using the graphical user interface tool we provide. 2) Efficient. Given the large amount of image data, detection speed is very important, and our system detects very efficiently at different scales thanks to the integral image features we use. 3) Robust. After feature selection, we use a cascade of classifiers to further improve detection accuracy.
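
    Integral-image features are what make multi-scale detection cheap here: after one O(HW) pass over the image, the sum of any rectangular region, at any scale, costs four array lookups. A minimal sketch:

      import numpy as np

      def integral_image(img):
          """Summed-area table with a zero top row and left column."""
          ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
          ii[1:, 1:] = np.asarray(img, dtype=float).cumsum(0).cumsum(1)
          return ii

      def box_sum(ii, r0, c0, r1, c1):
          """Sum of img[r0:r1, c0:c1] in O(1), at any window scale."""
          return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]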

  13. Integrated Analysis Platform: An Open-Source Information System for High-Throughput Plant Phenotyping

    PubMed Central

    Klukas, Christian; Chen, Dijun; Pape, Jean-Michel

    2014-01-01

    High-throughput phenotyping is emerging as an important technology to dissect phenotypic components in plants. Efficient image processing and feature extraction are prerequisites to quantifying plant growth and performance based on phenotypic traits. Issues include data management, image analysis, and result visualization of large-scale phenotypic data sets. Here, we present the Integrated Analysis Platform (IAP), an open-source framework for high-throughput plant phenotyping. IAP provides user-friendly interfaces, and its core functions are highly adaptable. Our system supports image data transfer from different acquisition environments and large-scale image analysis for different plant species based on real-time imaging data obtained from different spectra. Due to the huge amount of data to manage, we utilized a common data structure for efficient storage and organization of both input data and result data. We implemented a block-based method for automated image processing to extract a representative list of plant phenotypic traits. We also provide tools for built-in data plotting and result export. For validation of IAP, we performed an example experiment with 33 maize (Zea mays ‘Fernandez’) plants, which were grown for 9 weeks in an automated greenhouse with nondestructive imaging. Subsequently, the image data were subjected to automated analysis with the maize pipeline implemented in our system. We found that the computed digital volume and number of leaves correlate with our manually measured data with high accuracy, up to 0.98 and 0.95, respectively. In summary, IAP provides a rich set of functionalities for import/export, management, and automated analysis of high-throughput plant phenotyping data, and its analysis results are highly reliable. PMID:24760818

  14. A Mobile System for Measuring Water Surface Velocities Using Unmanned Aerial Vehicle and Large-Scale Particle Image Velocimetry

    NASA Astrophysics Data System (ADS)

    Chen, Y. L.

    2015-12-01

    Measurement technologies for river flow velocity are divided into intrusive and nonintrusive methods. Intrusive methods require in-field operations; their measuring processes are time consuming and likely to cause injury to the operator or damage to the instrument. Nonintrusive methods require fewer operators and reduce instrument damage by not attaching directly to the flow. Nonintrusive measurements may use radar or image velocimetry to measure velocities at the water surface. Image velocimetry, such as large-scale particle image velocimetry (LSPIV), accesses not only point velocities but the flow velocities over an area simultaneously; such areal flow properties hold the promise of providing spatial information on flow fields. This study constructs a mobile system, UAV-LSPIV, that uses an unmanned aerial vehicle (UAV) with LSPIV to measure flows in the field. The mobile system consists of a six-rotor UAV helicopter, a Sony NEX-5T camera, a gimbal, an image transfer device, a ground station and a remote control device. The actuated gimbal helps maintain the camera lens orthogonal to the water surface and reduces image distortion. The image transfer device can monitor the captured images instantly. The operator controls the UAV with the remote control device through the ground station and can access flight data such as flying height and the GPS coordinates of the UAV. The mobile system was then applied to field experiments. The deviation between velocities measured by UAV-LSPIV in the field experiments and by a handheld acoustic Doppler velocimeter (ADV) is under 8%. The results of the field experiments suggest that UAV-LSPIV can be effectively applied to surface flow studies.

  15. Large-scale imaging in small brains

    PubMed Central

    Ahrens, Misha B.; Engert, Florian

    2016-01-01

    The dense connectivity in the brain and arrangements of cells into circuits means that one neuron’s activity can influence many others. To observe this interconnected system comprehensively, an aspiration within neuroscience is to record from as many neurons as possible at the same time. There are two useful routes toward this goal: one is to expand the spatial extent of functional imaging techniques, and the second is to use animals with small brains. Here we review recent progress toward imaging many neurons and complete populations of identified neurons in small vertebrates and invertebrates. PMID:25636154

  16. Large-scale imaging in small brains.

    PubMed

    Ahrens, Misha B; Engert, Florian

    2015-06-01

    The dense connectivity in the brain means that one neuron's activity can influence many others. To observe this interconnected system comprehensively, an aspiration within neuroscience is to record from as many neurons as possible at the same time. There are two useful routes toward this goal: one is to expand the spatial extent of functional imaging techniques, and the second is to use animals with small brains. Here we review recent progress toward imaging many neurons and complete populations of identified neurons in small vertebrates and invertebrates. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. AutoCNet: A Python library for sparse multi-image correspondence identification for planetary data

    NASA Astrophysics Data System (ADS)

    Laura, Jason; Rodriguez, Kelvin; Paquette, Adam C.; Dunn, Evin

    2018-01-01

    In this work we describe the AutoCNet library, written in Python, to support the application of computer vision techniques for n-image correspondence identification in remotely sensed planetary images and subsequent bundle adjustment. The library is designed to support exploratory data analysis, algorithm and processing pipeline development, and application at scale in High Performance Computing (HPC) environments for processing large data sets and generating foundational data products. We also present a brief case study illustrating high level usage for the Apollo 15 Metric camera.
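
    The sparse correspondence step can be sketched generically. The example below is not AutoCNet's API: it uses OpenCV's ORB detector, brute-force descriptor matching, and a RANSAC fundamental-matrix filter as a stand-in for the two-image case of the n-image matching the library performs; the image file names are placeholders.

        # Hedged sketch of sparse two-image correspondence identification.
        import cv2
        import numpy as np

        img1 = cv2.imread("metric_cam_a.png", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("metric_cam_b.png", cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(nfeatures=5000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(d1, d2)

        # Keep only geometrically consistent matches via RANSAC.
        src = np.float32([k1[m.queryIdx].pt for m in matches])
        dst = np.float32([k2[m.trainIdx].pt for m in matches])
        F, inliers = cv2.findFundamentalMat(src, dst, cv2.FM_RANSAC, 3.0, 0.99)
        print(f"{int(inliers.sum())} inlier correspondences of {len(matches)}")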

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Jiangye

    Up-to-date maps of installed solar photovoltaic panels are a critical input for policy and financial assessment of solar distributed generation. However, such maps for large areas are not available. With high coverage and low cost, aerial images enable large-scale mapping, but it is highly difficult to automatically identify solar panels from images, as they are small objects with varying appearances dispersed in complex scenes. We introduce a new approach based on deep convolutional networks, which effectively learns to delineate solar panels in aerial scenes. The approach has successfully mapped solar panels in imagery covering 200 square kilometers in two cities, using only 12 square kilometers of manually labeled training data.
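
    As a rough illustration of the model family the record describes (not the author's network), the sketch below defines a small fully convolutional network in PyTorch that emits a per-pixel solar-panel logit for 3-band aerial tiles; the layer sizes, tile shape, and training target are assumptions.

        # Hedged sketch of per-pixel solar-panel delineation with a tiny FCN.
        import torch
        import torch.nn as nn

        class PanelSegNet(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 1),   # one panel/no-panel logit per pixel
                )

            def forward(self, x):
                return self.net(x)

        model = PanelSegNet()
        tiles = torch.rand(4, 3, 256, 256)       # toy batch of aerial tiles
        target = torch.zeros(4, 1, 256, 256)     # toy labels (no panels)
        loss = nn.BCEWithLogitsLoss()(model(tiles), target)
        loss.backward()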

  19. Discrimination between weaned and unweaned Atlantic cod (Gadus morhua) in capture-based aquaculture (CBA) by X-ray imaging and radio-frequency metal detector.

    PubMed

    Misimi, Ekrem; Martinsen, Svein; Mathiassen, John Reidar; Erikson, Ulf

    2014-01-01

    The aim of this study was to investigate the feasibility of two detection methods for use in discrimination and sorting of adult Atlantic cod (about 2 kg) in small-scale capture-based aquaculture (CBA). Presently, there is no established method for discriminating weaned from unweaned cod in CBA. Generally, 60-70% of the wild-caught cod in CBA are weaned onto commercial dry feed. To increase profitability for the fish farmers, unweaned cod must be separated from the stock, meaning the fish must be sorted into two groups: unweaned and weaned from moist feed. The challenges of handling large numbers of fish in cages defined the limits of the applied technology. As a result, a working model was established, focusing on implementing different marking materials added to the fish feed and different technologies for detecting the presence of the feed in the fish gut. X-ray imaging in two modes (planar and dual energy band) and sensitive radio-frequency metal detection were chosen as the detection methods for the investigations. Both methods were tested under laboratory conditions using dead fish with marked feed inserted into the gut cavity. In particular, the sensitive radio-frequency metal detection method with carbonyl powder showed very promising results in detecting marked feed. The results also show that dual-energy-band X-ray imaging may have potential for prediction of fat content in the feed. Based on the investigations, it can be concluded that both X-ray imaging and sensitive radio-frequency metal detector technology have the potential for detecting cod that have consumed marked feed. These are all technologies that may be adapted to large-scale handling of fish from fish cages. Thus, it may be possible to discriminate between unweaned and weaned cod in a large-scale grading situation. Based on the results of this study, a concept for an in-situ sorting system is suggested for evaluation.

  20. Fast segmentation of stained nuclei in terabyte-scale, time resolved 3D microscopy image stacks.

    PubMed

    Stegmaier, Johannes; Otte, Jens C; Kobitski, Andrei; Bartschat, Andreas; Garcia, Ariel; Nienhaus, G Ulrich; Strähle, Uwe; Mikut, Ralf

    2014-01-01

    Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in life science. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for a large amount of data due to their high complexity. In this contribution we present a fast parallelized segmentation method that is especially suited for the extraction of stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image based on gradient and normal directions in the proximity of detected seed points such that it can be handled by straightforward global thresholding like Otsu's method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm's superior performance compared to other state-of-the-art algorithms. We achieve an up to ten-fold decrease in processing times, allowing us to process large data sets while still providing reasonable segmentation results.
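
    A simplified sketch of the pipeline idea, seed detection followed by straightforward global Otsu thresholding, is given below. It omits the paper's gradient/normal-direction image transform and parallelization, and the input file name is a placeholder.

        # Hedged sketch: seed detection plus global Otsu thresholding for nuclei.
        import numpy as np
        from imageio.v3 import imread
        from scipy import ndimage as ndi
        from skimage.feature import peak_local_max
        from skimage.filters import threshold_otsu

        img = imread("nuclei_z0042.png").astype(float)
        if img.ndim == 3:
            img = img.mean(axis=-1)
        smooth = ndi.gaussian_filter(img, sigma=2)

        seeds = peak_local_max(smooth, min_distance=10)  # candidate nuclei centers
        mask = smooth > threshold_otsu(smooth)           # global threshold
        labels, n = ndi.label(mask)
        print(f"{len(seeds)} seed points, {n} segmented regions")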

  1. Puzzle Imaging: Using Large-Scale Dimensionality Reduction Algorithms for Localization.

    PubMed

    Glaser, Joshua I; Zamft, Bradley M; Church, George M; Kording, Konrad P

    2015-01-01

    Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, "puzzle imaging," that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples.
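
    Scenario (2) can be illustrated with a toy spectral embedding: given only a local-connectivity matrix, the low-order eigenvectors of the graph Laplacian recover relative positions. This is a generic Laplacian-eigenmaps sketch on a synthetic ring graph, not the authors' algorithm.

        # Hedged sketch: recover geometry from connectivity alone.
        import numpy as np

        n = 200
        W = np.zeros((n, n))
        for i in range(n):               # neurons connect only to near neighbors
            for j in (i - 1, i + 1):
                W[i, j % n] = 1.0

        D = np.diag(W.sum(axis=1))
        L = D - W                        # unnormalized graph Laplacian
        vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
        coords = vecs[:, 1:3]            # first nontrivial eigenvectors
        print(coords.shape)              # (200, 2): points lie on a circle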

  2. Imager for Mars Pathfinder (IMP)

    NASA Technical Reports Server (NTRS)

    Smith, Peter H.

    1994-01-01

    The IMP camera is a near-surface sensing experiment with many capabilities beyond those normally associated with an imager. It is fully pointable in both elevation and azimuth, with a protected, stowed position looking straight down. Stereo separation is provided with two optical paths; each has a 12-position filter wheel. The primary function of the camera, strongly tied to mission success, is to take a color panorama of the surrounding terrain. IMP requires approximately 120 images to give a complete downward hemisphere from the deployed position. IMP provides the geologist, and everyone else, a view of the local morphology with millimeter-to-meter-scale resolution over a broad area. In addition to the general morphology of the scene, IMP has a large complement of specially chosen filters to aid in both the identification of mineral types and their degree of weathering.

  3. A post-processing system for automated rectification and registration of spaceborne SAR imagery

    NASA Technical Reports Server (NTRS)

    Curlander, John C.; Kwok, Ronald; Pang, Shirley S.

    1987-01-01

    An automated post-processing system has been developed that interfaces with the raw image output of the operational digital SAR correlator. This system is designed for optimal efficiency by using advanced signal processing hardware and an algorithm that requires no operator interaction, such as the determination of ground control points. The standard output is a geocoded image product (i.e. resampled to a specified map projection). The system is capable of producing multiframe mosaics for large-scale mapping by combining images in both the along-track direction and adjacent cross-track swaths from ascending and descending passes over the same target area. The output products have absolute location uncertainty of less than 50 m and relative distortion (scale factor and skew) of less than 0.1 per cent relative to local variations from the assumed geoid.

  4. Web-based visualization of very large scientific astronomy imagery

    NASA Astrophysics Data System (ADS)

    Bertin, E.; Pillay, R.; Marmo, C.

    2015-04-01

    Visualizing and navigating through large astronomy images from a remote location with current astronomy display tools can be a frustrating experience in terms of speed and ergonomics, especially on mobile devices. In this paper, we present a high-performance, versatile, and robust client-server system for remote visualization and analysis of extremely large scientific images. Applications of this work include survey image quality control, interactive data query and exploration, citizen science, as well as public outreach. The proposed software is entirely open source and is designed to be generic and applicable to a variety of datasets. It provides access to floating-point data at terabyte scales, with the ability to precisely adjust image settings in real time. The proposed clients are lightweight, platform-independent web applications built on standard HTML5 web technologies and compatible with both touch and mouse-based devices. We assess the performance of the system and show that a single server can comfortably handle more than a hundred simultaneous users accessing full-precision 32-bit astronomy data.

  5. A k-space method for large-scale models of wave propagation in tissue.

    PubMed

    Mast, T D; Souriau, L P; Liu, D L; Tabei, M; Nachman, A I; Waag, R C

    2001-03-01

    Large-scale simulation of ultrasonic pulse propagation in inhomogeneous tissue is important for the study of ultrasound-tissue interaction as well as for development of new imaging methods. Typical scales of interest span hundreds of wavelengths; most current two-dimensional methods, such as finite-difference and finite-element methods, are unable to compute propagation on this scale with the efficiency needed for imaging studies. Furthermore, for most available methods of simulating ultrasonic propagation, large-scale, three-dimensional computations of ultrasonic scattering are infeasible. Some of these difficulties have been overcome by previous pseudospectral and k-space methods, which allow substantial portions of the necessary computations to be executed using fast Fourier transforms. This paper presents a simplified derivation of the k-space method for a medium of variable sound speed and density; the derivation clearly shows the relationship of this k-space method to both past k-space methods and pseudospectral methods. In the present method, the spatial differential equations are solved by a simple Fourier transform method, and temporal iteration is performed using a k-t space propagator. The temporal iteration procedure is shown to be exact for homogeneous media, unconditionally stable for "slow" (c(x) ≤ c0) media, and highly accurate for general weakly scattering media. The applicability of the k-space method to large-scale soft tissue modeling is shown by simulating two-dimensional propagation of an incident plane wave through several tissue-mimicking cylinders as well as a model chest wall cross section. A three-dimensional implementation of the k-space method is also employed for the example problem of propagation through a tissue-mimicking sphere. Numerical results indicate that the k-space method is accurate for large-scale soft tissue computations with much greater efficiency than that of an analogous leapfrog pseudospectral method or a 2-4 finite difference time-domain method. However, numerical results also indicate that the k-space method is less accurate than the finite-difference method for a high contrast scatterer with bone-like properties, although qualitative results can still be obtained by the k-space method with high efficiency. Possible extensions to the method, including representation of absorption effects, absorbing boundary conditions, elastic-wave propagation, and acoustic nonlinearity, are discussed.
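
    The k-t space temporal iteration can be sketched for the homogeneous case, where the per-mode propagator 2 cos(c|k|Δt) is exact; the variable sound-speed and density corrections of the paper are omitted, and the grid and pulse parameters below are arbitrary.

        # Hedged sketch of an exact k-space time step for a homogeneous medium.
        import numpy as np

        n, dx, c, dt = 256, 1e-3, 1500.0, 1e-7
        kx = 2 * np.pi * np.fft.fftfreq(n, dx)
        KX, KY = np.meshgrid(kx, kx, indexing="ij")
        prop = 2 * np.cos(c * np.sqrt(KX**2 + KY**2) * dt)

        x = (np.arange(n) - n / 2) * dx
        X, Y = np.meshgrid(x, x, indexing="ij")
        p_prev = p = np.exp(-(X**2 + Y**2) / (2 * (5 * dx) ** 2))  # initial pulse

        for _ in range(100):             # p(t+dt) = IFFT(prop * FFT(p)) - p(t-dt)
            p_next = np.real(np.fft.ifft2(prop * np.fft.fft2(p))) - p_prev
            p_prev, p = p, p_next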

  6. Advanced astigmatism-corrected tandem Wadsworth mounting for small-scale spectral broadband imaging spectrometer.

    PubMed

    Lei, Yu; Lin, Guan-yu

    2013-01-01

    Tandem gratings in a double-dispersion mount make it possible to design an imaging spectrometer for weak-light observation with high spatial resolution, high spectral resolution, and high optical transmission efficiency. The traditional tandem Wadsworth mounting was originally designed to match a coaxial telescope and a large-scale imaging spectrometer. When it is used to connect an off-axis telescope, such as an off-axis parabolic mirror, it yields lower imaging quality than with a coaxial telescope. It may also introduce interference among the detector and the optical elements when applied to a short-focal-length, small-scale spectrometer in the close volume of a satellite. An advanced tandem Wadsworth mounting has been investigated to deal with this situation. The Wadsworth astigmatism-corrected mounting condition, expressed as the distance between the second concave grating and the imaging plane, is calculated. Then the optimum arrangement for the first plane grating and the second concave grating, which makes the anterior Wadsworth condition hold for each wavelength, is analyzed by geometric and first-order differential calculation. These two arrangements comprise the advanced Wadsworth mounting condition. The spectral resolution has also been calculated from these conditions. A design example based on the optimum theory shows that the advanced tandem Wadsworth mounting performs excellently over a broad spectral band.

  7. A body image and disordered eating intervention for women in midlife: a randomized controlled trial.

    PubMed

    McLean, Siân A; Paxton, Susan J; Wertheim, Eleanor H

    2011-12-01

    This study examined the outcome of a body image and disordered eating intervention for midlife women. The intervention was specifically designed to address risk factors that are pertinent in midlife. Participants were 61 women aged 30 to 60 years (M = 43.92, SD = 8.22) randomly assigned to intervention (n = 32) or (delayed treatment) control (n = 29) groups. Following an 8-session facilitated group cognitive behavioral therapy-based intervention, outcomes from the Body Shape Questionnaire; Eating Disorder Examination Questionnaire; Body Image Avoidance Questionnaire; Physical Appearance Comparison Scale; Sociocultural Attitudes Towards Appearance Scale, Internalization subscale; measures of appearance importance, cognitive reappraisal, and self-care; Dutch Eating Behavior Questionnaire; and Kessler Psychological Distress Scale were compared for statistical and clinical significance from baseline to posttest and 6-month follow-up. Following the intent-to-treat principle, mixed-model analyses with a mixed within-between design demonstrated that the intervention group had large improvements that were statistically significantly different from the control group in body image, disordered eating, and risk factor variables and that were maintained at 6-month follow-up. Furthermore, the improvements were also of clinical importance. This study provides support for the efficacy of an intervention to reduce body image and eating concerns in midlife women. Further research into interventions tailored for this population is warranted.

  8. Extreme 3D reconstruction of the final ROSETTA/PHILAE landing site

    NASA Astrophysics Data System (ADS)

    Capanna, Claire; Jorda, Laurent; Lamy, Philippe; Gesquiere, Gilles; Delmas, Cédric; Durand, Joelle; Garmier, Romain; Gaudon, Philippe; Jurado, Eric

    2016-04-01

    The Philae lander aboard the Rosetta spacecraft successfully landed on the surface of comet 67P/Churyumov-Gerasimenko (hereafter 67P/C-G) after two rebounds on November 12, 2014. The final landing site, now known as "Abydos", has been identified in images acquired by the OSIRIS imaging system onboard the Rosetta orbiter [1]. The available images of Abydos are very limited in number and reveal a very extreme topography containing cliffs and overhangs. Furthermore, the surface is only observed under very high incidence angles of 60° on average, which implies that the images also exhibit many cast shadows. This makes it very difficult to reconstruct the 3D topography with standard methods such as photogrammetry or standard clinometry. We apply a new method called "Multiresolution PhotoClinometry by Deformation" (MPCD, [2]) to retrieve the 3D topography of the area around Abydos. The method works in two main steps: (i) a DTM of this region is extracted from a low-resolution MPCD global shape model of comet 67P/C-G, and (ii) the resulting triangular mesh is progressively deformed at increasing spatial sampling, down to 0.25 m, in order to match a set of 14 images of Abydos with projected pixel scales between 1 and 8 m. The image matching is performed with a quasi-Newton nonlinear optimization method called L-BFGS-B [3], especially suited to large-scale problems. Finally, we also checked the compatibility of the final MPCD digital terrain model with a set of five panoramic images obtained by the CIVA-P instrument aboard Philae [4]. [1] Lamy et al., 2016, submitted. [2] Capanna et al., Three-dimensional reconstruction using multiresolution photoclinometry by deformation, The Visual Computer, v. 29(6-8), pp. 825-835, 2013. [3] Morales et al., Remark on "Algorithm 778: L-BFGS-B: Fortran subroutines for large-scale bound constrained optimization", ACM Trans. Math. Softw., v. 38(1), pp. 1-4, 2011. [4] Bibring et al., 67P/Churyumov-Gerasimenko surface properties as derived from CIVA panoramic images, Science, v. 349(6247), 2015.
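
    The optimization step can be illustrated with SciPy's L-BFGS-B interface, the solver cited in [3]. The quadratic objective below is a toy stand-in for the image-mismatch cost over mesh vertex parameters, not the MPCD cost function; the problem size and bounds are arbitrary.

        # Hedged sketch of bound-constrained L-BFGS-B optimization.
        import numpy as np
        from scipy.optimize import minimize

        observed = np.random.default_rng(0).normal(size=500)  # synthetic data

        def cost(z):
            r = z - observed          # residual between model and observation
            return 0.5 * r @ r, r     # objective value and its gradient

        res = minimize(cost, np.zeros(500), jac=True, method="L-BFGS-B",
                       bounds=[(-2.0, 2.0)] * 500)
        print(res.success, res.fun)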

  9. Optical Disk Technology.

    ERIC Educational Resources Information Center

    Abbott, George L.; And Others

    1987-01-01

    This special feature focuses on recent developments in optical disk technology. Nine articles discuss current trends, large scale image processing, data structures for optical disks, the use of computer simulators to create optical disks, videodisk use in training, interactive audio video systems, impacts on federal information policy, and…

  10. Small-Scale Spectral and Color Analysis of Ritchey Crater Impact Materials

    NASA Astrophysics Data System (ADS)

    Bray, Veronica; Chojnacki, Matthew; McEwen, Alfred; Heyd, Rodney

    2014-11-01

    Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) analysis of Ritchey crater on Mars has allowed identification of the minerals uplifted from depth within its central peak as well as the dominant spectral signature of the crater-fill materials which surround it. However, the 18 m/pixel resolution of CRISM prevents full analysis of the nature of small-scale dykes, megabreccia blocks, and finer-scale crater-fill units. We extend our existing CRISM-based compositional mapping of the Ritchey crater interior to sub-CRISM-pixel scales with the use of High Resolution Imaging Science Experiment (HiRISE) Color Ratio Products (CRPs). These CRPs are then compared to CRISM images; the correlation between color ratio and CRISM spectral signature for a large bedrock unit is defined and used to suggest a similar composition for a smaller unit with the same color ratio. Megabreccia deposits, angular fragments of rock in excess of 1 meter in diameter within a finer-grained matrix, are common at Ritchey. The dominant spectral signature from each megabreccia unit varies with location around Ritchey and appears to reflect the matrix composition (based on texture and albedo similarities to surrounding rocks) rather than the clast composition. In cases where the breccia block size is large enough for CRISM analysis, many different mineral compositions are noted (low-calcium pyroxene (LCP), olivine (OL), alteration products) depending on the location. All block compositions (as inferred from CRPs) are observed down to the limit of HiRISE resolution. We have found a variety of dyke compositions within our mapping area. Correlation between CRP color and CRISM spectra in this area suggests that large (10 m wide) dykes within LCP-bearing bedrock close to the crater center tend to have a similar composition to the host rock. Smaller dykes running non-parallel to the larger dykes are inferred to be OL-rich, suggesting multiple phases of dyke formation within the Ritchey crater and its bedrock.

  11. Classification of brain MRI with big data and deep 3D convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Wegmayr, Viktor; Aitharaju, Sai; Buhmann, Joachim

    2018-02-01

    Our ever-aging society faces the growing problem of neurodegenerative diseases, in particular dementia. Magnetic resonance imaging provides a unique tool for non-invasive investigation of these brain diseases. However, it is extremely difficult for neurologists to identify complex disease patterns from large amounts of three-dimensional images. In contrast, machine learning excels at automatic pattern recognition from large amounts of data. In particular, deep learning has achieved impressive results in image classification. Unfortunately, its application to medical image classification remains difficult. We consider two reasons for this difficulty: first, volumetric medical image data is considerably scarcer than natural images; second, the complexity of 3D medical images is much higher compared to common 2D images. To address the problem of small data set size, we assemble the largest dataset ever used for training a deep 3D convolutional neural network to classify brain images as healthy (HC), mild cognitive impairment (MCI), or Alzheimer's disease (AD). We use more than 20,000 images from subjects of these three classes, which is almost 9x the size of the previously largest data set. The problem of high dimensionality is addressed by using a deep 3D convolutional neural network, which is state-of-the-art in large-scale image classification. We exploit its ability to process the images directly, with only standard preprocessing and without the need for elaborate feature engineering. Compared to other work, our workflow is considerably simpler, which increases clinical applicability. Accuracy is measured on the ADNI+AIBL data sets and on the independent CADDementia benchmark.
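
    A schematic of the model family (assuming PyTorch) is given below; the layer counts and input size are illustrative, not the authors' architecture.

        # Hedged sketch of a 3D convolutional classifier for HC/MCI/AD.
        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, 3),            # logits for HC / MCI / AD
        )
        volumes = torch.rand(2, 1, 96, 96, 96)   # toy batch of preprocessed scans
        print(model(volumes).shape)              # torch.Size([2, 3])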

  12. SIRT-FILTER v1.0.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    PELT, DANIEL

    2017-04-21

    Small Python package to compute tomographic reconstructions using a reconstruction method published in: Pelt, D.M., & De Andrade, V. (2017). Improved tomographic reconstruction of large-scale real-world data by filter optimization. Advanced Structural and Chemical Imaging 2: 17; and Pelt, D. M., & Batenburg, K. J. (2015). Accurately approximating algebraic tomographic reconstruction by filtered backprojection. In Proceedings of The 13th International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine (pp. 158-161).
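
    For context, the filtered-backprojection baseline that the filter-optimization method builds on can be reproduced in a few lines with scikit-image. This sketch is not the SIRT-FILTER package itself, and the filter_name argument assumes scikit-image >= 0.19.

        # Hedged sketch of the FBP baseline on a synthetic phantom.
        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon

        phantom = shepp_logan_phantom()
        angles = np.linspace(0.0, 180.0, 180, endpoint=False)
        sino = radon(phantom, theta=angles)                   # forward projection
        recon = iradon(sino, theta=angles, filter_name="ramp")
        print("mean absolute error:", np.abs(recon - phantom).mean())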

  13. Integrated analysis of remote sensing products from basic geological surveys. [Brazil

    NASA Technical Reports Server (NTRS)

    Dasilvafagundesfilho, E. (Principal Investigator)

    1984-01-01

    Recent advances in remote sensing have led to the development of several techniques for obtaining image information. These techniques are analyzed as effective tools in geological mapping. A strategy for optimizing the use of images in basic geological surveying is presented. It embraces an integrated analysis of spatial, spectral, and temporal data through photoptic (color additive viewer) and computer processing at different scales, allowing large areas to be surveyed in a fast, precise, and low-cost manner.

  14. Merging Surface Reconstructions of Terrestrial and Airborne LIDAR Range Data

    DTIC Science & Technology

    2009-05-19

    Mangan, A., and R. Whitaker, "Partitioning 3D surface meshes using watershed segmentation," IEEE Trans. on Visualization and Computer Graphics, 5(4), pp. …; … Jain, and A. Zakhor, "Data Processing Algorithms for Generating Textured 3D Building Facade Meshes from Laser Scans and Camera Images," International …; "… acquired set of overlapping range images into a single mesh [2,9,10]. However, due to the volume of data involved in large-scale urban modeling, data …"

  15. Onset of a Large Ejective Solar Eruption from a Typical Coronal-jet-base Field Configuration

    NASA Astrophysics Data System (ADS)

    Joshi, Navin Chandra; Sterling, Alphonse C.; Moore, Ronald L.; Magara, Tetsuya; Moon, Yong-Jae

    2017-08-01

    Utilizing multiwavelength observations and magnetic field data from the Solar Dynamics Observatory (SDO)/Atmospheric Imaging Assembly (AIA), SDO/Helioseismic and Magnetic Imager (HMI), the Geostationary Operational Environmental Satellite (GOES), and RHESSI, we investigate a large-scale ejective solar eruption of 2014 December 18 from active region NOAA 12241. This event produced a distinctive “three-ribbon” flare, having two parallel ribbons corresponding to the ribbons of a standard two-ribbon flare, and a larger-scale third quasi-circular ribbon offset from the other two. There are two components to this eruptive event. First, a flux rope forms above a strong-field polarity inversion line and erupts and grows as the parallel ribbons turn on, grow, and spread apart from that polarity inversion line; this evolution is consistent with the mechanism of tether-cutting reconnection for eruptions. Second, the eruption of the arcade that has the erupting flux rope in its core undergoes magnetic reconnection at the null point of a fan dome that envelops the erupting arcade, resulting in formation of the quasi-circular ribbon; this is consistent with the breakout reconnection mechanism for eruptions. We find that the parallel ribbons begin well before (~12 minutes) the onset of the circular ribbon, indicating that tether-cutting reconnection (or a non-ideal MHD instability) initiated this event, rather than breakout reconnection. The overall setup for this large-scale eruption (diameter of the circular ribbon ~10^5 km) is analogous to that of coronal jets (base size ~10^4 km), many of which, according to recent findings, result from eruptions of small-scale “minifilaments.” Thus these findings confirm that eruptions of sheared-core magnetic arcades seated in fan-spine null-point magnetic topology happen on a wide range of size scales on the Sun.

  16. Ten Years of ENA Imaging from Cassini

    NASA Astrophysics Data System (ADS)

    Brandt, Pontus; Mitchell, Donald; Westlake, Joseph; Carbary, James; Paranicas, Christopher; Mauk, Barry; Krimigis, Stamatios

    2014-05-01

    In this presentation we will provide a detailed review of the science highlights of the ENA observations obtained by the Ion Neutral Camera (INCA) on board Cassini. Since the launch of Cassini, INCA has unveiled an invisible world of hot plasma and neutral gas around the two biggest objects of our solar system: the giant magnetospheres of Jupiter and Saturn. More than ten years ago, INCA captured the first ENA images of the Jovian system, revealing magnetospheric dynamics and an asymmetric Europa neutral gas torus. Approaching Saturn, INCA observed variability of Saturn's magnetospheric activity in response to changes in solar wind dynamic pressure, which was contrary to expectations and current theories. In orbit around Saturn, INCA continued the surprises, including the first imaging and global characterization of Titan's exosphere, extended out to its gravitational Hill sphere; recurring injections correlating with periodic Saturn Kilometric Radiation (SKR) bursts and magnetic field perturbations; and the discovery of energetic ionospheric outflow. Perhaps most significant, and the focal point of this presentation, is INCA's contribution to the understanding of global magnetospheric particle acceleration and transport, where the combination of ENA imaging and in-situ measurements has demonstrated that transport and acceleration of plasma is likely to occur in a two-step process. First, large-scale injections in the post-midnight sector accelerate and transport plasma in to about 12 RS, up to energies of several hundreds of keV. Second, centrifugal interchange acts on the plasma inside of this region and provides further heating and transport in to about 6 RS. We discuss this finding in the context of the two fundamental types of injections (or ENA intensifications) that INCA has revealed during its ten years of imaging. The first type is large-scale injections appearing beyond 12 RS in the post-midnight sector that have in many cases had an inward component of propagation. The second type is apparently local injections inside of about 12 RS, and as far in as 6 RS, in the pre-midnight sector, with a recurrence period around 11 h that, interestingly, appear to precede the large-scale injections.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schollmeier, Marius S.; Geissel, Matthias; Shores, Jonathon E.

    We present calculations for the field of view (FOV), image fluence, image monochromaticity, spectral acceptance, and image aberrations for spherical crystal microscopes, which are used as self-emission imaging or backlighter systems at large-scale high-energy-density physics facilities. Our analytic results are benchmarked with ray-tracing calculations as well as with experimental measurements from the 6.151 keV backlighter system at Sandia National Laboratories. Furthermore, the analytic expressions can be used for x-ray source positions anywhere between the Rowland circle and the object plane. We discovered that this enables quick optimization of the performance of proposed but untested bent-crystal microscope systems to find the best compromise between FOV, image fluence, and spatial resolution for a particular application.

  18. Leveraging the crowd for annotation of retinal images.

    PubMed

    Leifman, George; Swedish, Tristan; Roesch, Karin; Raskar, Ramesh

    2015-01-01

    Medical data presents a number of challenges. It tends to be unstructured, noisy, and protected. To train algorithms to understand medical images, doctors can label the condition associated with a particular image, but obtaining enough labels can be difficult. We propose an annotation approach which starts with a small pool of expertly annotated images and uses that expertise to rate the performance of crowd-sourced annotations. In this paper we demonstrate how to apply our approach to the annotation of large-scale datasets of retinal images. We introduce a novel data validation procedure which is designed to cope with noisy ground-truth data and with inconsistent input from both experts and crowd-workers.
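
    The validation idea, rating each crowd worker against the small expert-labeled pool and weighting votes accordingly, can be sketched as follows. The labels and per-worker error rates are synthetic, and the weighting rule is an assumption, not the authors' procedure.

        # Hedged sketch: expert-anchored weighting of crowd annotations.
        import numpy as np

        rng = np.random.default_rng(0)
        expert = rng.integers(0, 2, size=50)                   # gold labels
        flip = rng.random((10, 50)) < np.linspace(0.1, 0.45, 10)[:, None]
        workers = np.where(flip, 1 - expert, expert)           # noisy crowd labels

        accuracy = (workers == expert).mean(axis=1)            # per-worker score
        weights = np.clip(accuracy - 0.5, 0, None)             # drop below-chance raters
        votes = weights @ workers / weights.sum()
        consensus = (votes > 0.5).astype(int)
        print("agreement with experts:", (consensus == expert).mean())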

  19. Estimating Patient Dose from X-ray Tube Output Metrics: Automated Measurement of Patient Size from CT Images Enables Large-scale Size-specific Dose Estimates

    PubMed Central

    Ikuta, Ichiro; Warden, Graham I.; Andriole, Katherine P.; Khorasani, Ramin

    2014-01-01

    Purpose To test the hypothesis that patient size can be accurately calculated from axial computed tomographic (CT) images, including correction for the effects of anatomy truncation that occur in routine clinical CT image reconstruction. Materials and Methods Institutional review board approval was obtained for this HIPAA-compliant study, with waiver of informed consent. Water-equivalent diameter (D_W) was computed from the attenuation-area product of each image within 50 adult CT scans of the thorax and of the abdomen and pelvis and was also measured for maximal field of view (FOV) reconstructions. Linear regression models were created to compare D_W with the effective diameter (D_eff) used to select size-specific volume CT dose index (CTDI_vol) conversion factors as defined in report 204 of the American Association of Physicists in Medicine. Linear regression models relating reductions in measured D_W to a metric of anatomy truncation were used to compensate for the effects of clinical image truncation. Results In the thorax, D_W versus D_eff had an R^2 of 0.51 (n = 200, 50 patients at four anatomic locations); in the abdomen and pelvis, R^2 was 0.90 (n = 150, 50 patients at three anatomic locations). By correcting for image truncation, the proportion of clinically reconstructed images with an extracted D_W within ±5% of the maximal-FOV D_W increased from 54% to 90% in the thorax (n = 3602 images) and from 95% to 100% in the abdomen and pelvis (n = 6181 images). Conclusion The D_W extracted from axial CT images is a reliable measure of patient size, and varying degrees of clinical image truncation can be readily corrected. Automated measurement of patient size combined with CT radiation exposure metrics may enable patient-specific dose estimation on a large scale. © RSNA, 2013 PMID:24086075
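
    The D_W computation from the attenuation-area product can be sketched directly. The formula below follows the standard water-equivalent-diameter definition, D_W = 2*sqrt((mean HU / 1000 + 1) * A / pi); the body-mask threshold and the toy HU array are assumptions.

        # Hedged sketch of water-equivalent diameter from one axial CT slice.
        import numpy as np

        def water_equivalent_diameter(hu, pixel_area_mm2):
            """D_W from the mean HU and area of the patient cross-section."""
            body = hu > -250                      # crude exclusion of air
            area = body.sum() * pixel_area_mm2    # cross-sectional area (mm^2)
            mean_hu = hu[body].mean()
            return 2.0 * np.sqrt((mean_hu / 1000.0 + 1.0) * area / np.pi)

        hu = np.random.default_rng(1).normal(0.0, 50.0, size=(512, 512))  # toy slice
        print(f"D_W = {water_equivalent_diameter(hu, 0.7 ** 2):.1f} mm")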

  20. Large-Scale Coronal Heating from "Cool" Activity in the Solar Magnetic Network

    NASA Technical Reports Server (NTRS)

    Falconer, D. A.; Moore, R. L.; Porter, J. G.; Hathaway, D. H.

    1999-01-01

    In Fe XII images from SOHO/EIT, the quiet solar corona shows structure on scales ranging from sub-supergranular (i.e., bright points and coronal network) to multi-supergranular (large-scale corona). In Falconer et al. 1998 (ApJ, 501, 386) we suppressed the large-scale background and found that the network-scale features are predominantly rooted in the magnetic network lanes at the boundaries of the supergranules. Taken together, the coronal network emission and bright point emission are only about 5% of the entire quiet solar coronal Fe XII emission. Here we investigate the relationship between the large-scale corona and the network as seen in three different EIT filters (He II, Fe IX-X, and Fe XII). Using the median-brightness contour, we divide the large-scale Fe XII corona into dim and bright halves, and find that the bright-half/dim-half brightness ratio is about 1.5. We also find that the bright half, relative to the dim half, has 10 times greater total bright point Fe XII emission, 3 times greater Fe XII network emission, 2 times greater Fe IX-X network emission, 1.3 times greater He II network emission, and 1.5 times more magnetic flux. Also, the cooler network (He II) radiates an order of magnitude more energy than the hotter coronal network (Fe IX-X and Fe XII). From these results we infer that: 1) the heating of the network and the heating of the large-scale corona each increase roughly linearly with the underlying magnetic flux; 2) the production of network coronal bright points and heating of the coronal network each increase nonlinearly with the magnetic flux; 3) the heating of the large-scale corona is driven by widespread cooler network activity rather than by the exceptional network activity that produces the network coronal bright points and the coronal network; 4) the large-scale corona is heated by a nonthermal process, since the driver of its heating is cooler than it is. This work was funded by the Solar Physics Branch of NASA's Office of Space Science through the SR&T Program and the SEC Guest Investigator Program.
