Sample records for deep field image

  1. Overview of deep learning in medical imaging.

    PubMed

    Suzuki, Kenji

    2017-09-01

    The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. In our experience, MTANNs were substantially more efficient in their development, had a higher performance, and required a lesser number of training cases than did CNNs. "Deep learning", or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades.

  2. Deep Learning in Nuclear Medicine and Molecular Imaging: Current Perspectives and Future Directions.

    PubMed

    Choi, Hongyoon

    2018-04-01

    Recent advances in deep learning have impacted various scientific and industrial fields. Due to the rapid application of deep learning to biomedical data, molecular imaging has also started to adopt this technique. In this regard, it is expected that deep learning may affect the roles of molecular imaging experts as well as clinical decision making. This review first offers a basic overview of deep learning, particularly for image data analysis, to acquaint nuclear medicine physicians and researchers with the technique. Because of the unique characteristics and distinctive aims of various types of molecular imaging, deep learning applications can differ from those in other fields. In this context, the review deals with current perspectives on deep learning in molecular imaging, particularly in terms of the development of biomarkers. Finally, future challenges of deep learning applications for molecular imaging and the future roles of experts in molecular imaging are discussed.

  3. [Advantages and Application Prospects of Deep Learning in Image Recognition and Bone Age Assessment].

    PubMed

    Hu, T H; Wan, L; Liu, T A; Wang, M W; Chen, T; Wang, Y H

    2017-12-01

    Deep learning and neural network models have been new research directions and hot issues in the fields of machine learning and artificial intelligence in recent years. Deep learning has made breakthroughs in image and speech recognition, and has also been used extensively in face recognition and information retrieval because of its particular strengths. Bone X-ray images exhibit variations in black-white-gray gradations, with image features of black-and-white contrast and gray-level differences. Based on these advantages of deep learning in image recognition, we combine it with research on bone age assessment to provide basic data for constructing a forensic automatic system of bone age assessment. This paper reviews the basic concepts and network architectures of deep learning, describes its recent research progress on image recognition in different research fields at home and abroad, and explores its advantages and application prospects in bone age assessment. Copyright© by the Editorial Department of Journal of Forensic Medicine.

  4. WFIRST: Science from Deep Field Surveys

    NASA Astrophysics Data System (ADS)

    Koekemoer, Anton M.; Foley, Ryan; WFIRST Deep Field Working Group

    2018-06-01

    WFIRST will enable deep field imaging across much larger areas than those previously obtained with Hubble, opening up completely new areas of parameter space for extragalactic deep fields, including cosmology, supernova, and galaxy evolution science. The instantaneous field of view of the Wide Field Instrument (WFI) is about 0.3 square degrees, which would, for example, yield an Ultra Deep Field (UDF) reaching similar depths at visible and near-infrared wavelengths to that obtained with Hubble, over an area about 100-200 times larger, for a comparable investment in time. Moreover, wider fields on scales of 10-20 square degrees could achieve depths comparable to large HST surveys at medium depths such as GOODS and CANDELS, and would enable multi-epoch supernova science that could be matched in area to the LSST Deep Drilling fields or other large survey areas. Such fields may benefit from being placed at locations on the sky that have ancillary multi-band imaging or spectroscopy from other facilities, from the ground or in space. The WFIRST Deep Fields Working Group has been examining the science considerations for various types of deep fields that may be obtained with WFIRST; we present here a summary of the properties of different locations on the sky that may be considered for future deep fields with WFIRST.

  6. Deepest X-Rays Ever Reveal Universe Teeming With Black Holes

    NASA Astrophysics Data System (ADS)

    2001-03-01

    For the first time, astronomers believe they have proof that black holes of all sizes once ruled the universe. NASA's Chandra X-ray Observatory provided the deepest X-ray images ever recorded, and those pictures deliver a novel look at the past 12 billion years of black holes. Two independent teams of astronomers today presented images that contain the faintest X-ray sources ever detected, which include an abundance of active supermassive black holes. "The Chandra data show us that giant black holes were much more active in the past than at present," said Riccardo Giacconi, of Johns Hopkins University and Associated Universities, Inc., Washington, DC. The exposure is known as the "Chandra Deep Field South" since it is located in the Southern Hemisphere constellation of Fornax. "In this million-second image, we also detect relatively faint X-ray emission from galaxies, groups, and clusters of galaxies." The images, known as the Chandra Deep Fields, were obtained during many long exposures over the course of more than a year. Data from the Chandra Deep Field South will be placed in a public archive for scientists beginning today. "For the first time, we are able to use X-rays to look back to a time when normal galaxies were several billion years younger," said Ann Hornschemeier, Pennsylvania State University, University Park. The group's 500,000-second exposure included the Hubble Deep Field North, allowing scientists the opportunity to combine the power of Chandra and the Hubble Space Telescope, two of NASA's Great Observatories. The Penn State team recently acquired an additional 500,000 seconds of data, creating another one-million-second Chandra Deep Field, located in the constellation of Ursa Major. The images are called Chandra Deep Fields because they are comparable to the famous Hubble Deep Field in being able to see further and fainter objects than any image of the universe taken at X-ray wavelengths. Both Chandra Deep Fields are comparable in observation time to the Hubble Deep Fields, but cover a much larger area of the sky. "In essence, it is like seeing galaxies similar to our own Milky Way at much earlier times in their lives," Hornschemeier added. "These data will help scientists better understand star formation and how stellar-sized black holes evolve." Combining infrared and X-ray observations, the Penn State team also found that veils of dust and gas are common around young black holes. Another discovery to emerge from the Chandra Deep Field South is the detection of an extremely distant X-ray quasar, shrouded in gas and dust. "The discovery of this object, some 12 billion light years away, is key to understanding how dense clouds of gas form galaxies, with massive black holes at their centers," said Colin Norman of Johns Hopkins University. The Chandra Deep Field South results were complemented by the extensive use of deep optical observations from the European Southern Observatory's Very Large Telescope in Chile. The Penn State team obtained optical spectroscopy and imaging using the Hobby-Eberly Telescope in Ft. Davis, TX, and the Keck Observatory atop Mauna Kea, HI. Chandra's Advanced CCD Imaging Spectrometer was developed for NASA by Penn State and the Massachusetts Institute of Technology under the leadership of Penn State Professor Gordon Garmire. NASA's Marshall Space Flight Center, Huntsville, AL, manages the Chandra program for the Office of Space Science, Washington, DC. TRW, Inc., Redondo Beach, California, is the prime contractor for the spacecraft. The Smithsonian's Chandra X-ray Center controls science and flight operations from Cambridge, MA. More information is available on the Internet at: http://chandra.harvard.edu and http://chandra.nasa.gov

  7. [Deep learning and neural networks in ophthalmology: Applications in the field of optical coherence tomography].

    PubMed

    Treder, M; Eter, N

    2018-04-19

    Deep learning is increasingly becoming the focus of various imaging methods in medicine. Due to the large number of different imaging modalities, ophthalmology is particularly suitable for this field of application. This article gives a general overview on the topic of deep learning and its current applications in the field of optical coherence tomography. For the benefit of the reader it focuses on the clinical rather than the technical aspects.

  8. Resolution enhancement of wide-field interferometric microscopy by coupled deep autoencoders.

    PubMed

    Işil, Çağatay; Yorulmaz, Mustafa; Solmaz, Berkan; Turhan, Adil Burak; Yurdakul, Celalettin; Ünlü, Selim; Ozbay, Ekmel; Koç, Aykut

    2018-04-01

    Wide-field interferometric microscopy is a highly sensitive, label-free, and low-cost biosensing imaging technique capable of visualizing individual biological nanoparticles such as viral pathogens and exosomes. However, further resolution enhancement is necessary to increase the detection and classification accuracy of subdiffraction-limited nanoparticles. In this study, we propose a deep-learning approach, based on coupled deep autoencoders, to improve the resolution of images of L-shaped nanostructures. During training, our method utilizes microscope image patches and their corresponding ground-truth image patches in order to learn the transformation between them. Following training, the designed network reconstructs denoised and resolution-enhanced image patches for unseen input.
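
    To make the coupled-autoencoder idea concrete, the following is a minimal sketch of coupling two autoencoders through their latent codes, assuming dense (rather than convolutional) layers, 8x8 patches, and a 16-dimensional code; the paper's actual architecture, sizes, and training procedure are not reproduced here, and all names below are illustrative.

    ```python
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def dense_autoencoder(patch_dim, code_dim):
        """Return (autoencoder, encoder) models sharing the same layers."""
        inp = layers.Input(shape=(patch_dim,))
        code = layers.Dense(code_dim, activation="relu")(inp)
        out = layers.Dense(patch_dim, activation="sigmoid")(code)
        return models.Model(inp, out), models.Model(inp, code)

    # One autoencoder for low-resolution microscope patches, one for the
    # ground-truth patches; a small mapper couples their latent codes.
    lr_ae, lr_encoder = dense_autoencoder(patch_dim=8 * 8, code_dim=16)
    hr_ae, hr_encoder = dense_autoencoder(patch_dim=8 * 8, code_dim=16)
    mapper = models.Sequential([layers.Input(shape=(16,)),
                                layers.Dense(16, activation="relu")])
    # Inference on a new patch: encode with lr_encoder, map the code, then
    # decode with the high-resolution decoder (training itself is omitted).
    ```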

  9. Hello World Deep Learning in Medical Imaging.

    PubMed

    Lakhani, Paras; Gray, Daniel L; Pett, Carl R; Nagy, Paul; Shih, George

    2018-05-03

    There is recent popularity in applying machine learning to medical imaging, notably deep learning, which has achieved state-of-the-art performance in image analysis and processing. The rapid adoption of deep learning may be attributed to the availability of machine learning frameworks and libraries to simplify their use. In this tutorial, we provide a high-level overview of how to build a deep neural network for medical image classification, and provide code that can help those new to the field begin their informatics projects.
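
    In the spirit of the tutorial described above, here is a minimal, hedged sketch of a small convolutional classifier in Keras; the paper ships its own code, and the input size, layer sizes, and two-class setup below are illustrative assumptions rather than the authors' implementation.

    ```python
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_classifier(input_shape=(128, 128, 1), num_classes=2):
        """A small CNN for, e.g., normal-vs-abnormal radiograph classification."""
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(64, activation="relu"),
            layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    model = build_classifier()
    # model.fit(train_images, train_labels, epochs=10)  # data loading not shown
    ```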

  10. Accurate segmentation of lung fields on chest radiographs using deep convolutional networks

    NASA Astrophysics Data System (ADS)

    Arbabshirani, Mohammad R.; Dallal, Ahmed H.; Agarwal, Chirag; Patel, Aalpan; Moore, Gregory

    2017-02-01

    Accurate segmentation of lung fields on chest radiographs is the primary step for computer-aided detection of various conditions such as lung cancer and tuberculosis. The size, shape, and texture of the lung fields are key parameters for chest X-ray (CXR) based lung disease diagnosis, in which lung field segmentation is a significant primary step. Although many methods have been proposed for this problem, lung field segmentation remains a challenge. In recent years, deep learning has shown state-of-the-art performance in many visual tasks such as object detection, image classification, and semantic image segmentation. In this study, we propose a deep convolutional neural network (CNN) framework for segmentation of lung fields. The algorithm was developed and tested on 167 clinical posterior-anterior (PA) CXR images collected retrospectively from the picture archiving and communication system (PACS) of Geisinger Health System. The proposed multi-scale network is composed of five convolutional and two fully connected layers. The framework achieved an IOU (intersection over union) of 0.96 on the testing dataset as compared to manual segmentation. The suggested framework outperforms state-of-the-art registration-based segmentation by a significant margin. To our knowledge, this is the first deep learning based study of lung field segmentation on CXR images developed on a heterogeneous clinical dataset. The results suggest that convolutional neural networks could be employed reliably for lung field segmentation.
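
    The reported IOU metric compares a predicted mask against a manual segmentation. A minimal sketch of that computation, assuming binary NumPy masks (the study's evaluation pipeline itself is not specified here):

    ```python
    import numpy as np

    def iou(pred_mask, true_mask):
        """Intersection over union of two binary segmentation masks."""
        pred = np.asarray(pred_mask, dtype=bool)
        true = np.asarray(true_mask, dtype=bool)
        union = np.logical_or(pred, true).sum()
        if union == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return np.logical_and(pred, true).sum() / union
    ```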

  11. Hubble Team Unveils Most Colorful View of Universe Captured by Space Telescope

    NASA Image and Video Library

    2014-06-04

    Astronomers using NASA's Hubble Space Telescope have assembled a comprehensive picture of the evolving universe – among the most colorful deep space images ever captured by the 24-year-old telescope. Researchers say the image, in a new study called the Ultraviolet Coverage of the Hubble Ultra Deep Field, provides the missing link in star formation. The Hubble Ultra Deep Field 2014 image is a composite of separate exposures taken from 2003 to 2012 with Hubble's Advanced Camera for Surveys and Wide Field Camera 3. Credit: NASA/ESA

  12. Deep Keck u-Band Imaging of the Hubble Ultra Deep Field: A Catalog of z ~ 3 Lyman Break Galaxies

    NASA Astrophysics Data System (ADS)

    Rafelski, Marc; Wolfe, Arthur M.; Cooke, Jeff; Chen, Hsiao-Wen; Armandroff, Taft E.; Wirth, Gregory D.

    2009-10-01

    We present a sample of 407 z ~ 3 Lyman break galaxies (LBGs) to a limiting isophotal u-band magnitude of 27.6 mag in the Hubble Ultra Deep Field. The LBGs are selected using a combination of photometric redshifts and the u-band drop-out technique enabled by the introduction of an extremely deep u-band image obtained with the Keck I telescope and the blue channel of the Low Resolution Imaging Spectrometer. The Keck u-band image, totaling 9 hr of integration time, has a 1σ depth of 30.7 mag arcsec^-2, making it one of the most sensitive u-band images ever obtained. The u-band image also substantially improves the accuracy of photometric redshift measurements of ~50% of the z ~ 3 LBGs, significantly reducing the traditional degeneracy of colors between z ~ 3 and z ~ 0.2 galaxies. This sample provides the most sensitive, high-resolution multi-filter imaging of reliably identified z ~ 3 LBGs for morphological studies of galaxy formation and evolution and the star formation efficiency of gas at high redshift.

  13. Wide Field Imaging of the Hubble Deep Field-South Region III: Catalog

    NASA Technical Reports Server (NTRS)

    Palunas, Povilas; Collins, Nicholas R.; Gardner, Jonathan P.; Hill, Robert S.; Malumuth, Eliot M.; Rhodes, Jason; Teplitz, Harry I.; Woodgate, Bruce E.

    2002-01-01

    We present 1/2 square degree uBVRI imaging around the Hubble Deep Field - South. These data have been used in earlier papers to examine the QSO population and the evolution of the correlation function in the region around the HDF-S. The images were obtained with the Big Throughput Camera at CTIO in September 1998. The images reach 5 sigma limits of u ≈ 24.4, B ≈ 25.6, V ≈ 25.3, R ≈ 24.9, and I ≈ 23.9. We present a catalog of ~22,000 galaxies. We also present number-magnitude counts and a comparison with other observations of the same field. The data presented here are available over the World Wide Web.

  14. Shaping field for deep tissue microscopy

    NASA Astrophysics Data System (ADS)

    Colon, J.; Lim, H.

    2015-05-01

    Information capacity of a lossless image-forming system is a conserved property determined by two imaging parameters - the resolution and the field of view (FOV). Adaptive optics improves the former by manipulating the phase, or wavefront, in the pupil plane. Here we describe a homologous approach, namely adaptive field microscopy, which aims to enhance the FOV by controlling the phase, or defocus, in the focal plane. In deep tissue imaging, the useful FOV can be severely limited if the region of interest is buried in a thick sample and not perpendicular to the optic axis. One must acquire many z-scans and reconstruct by post-processing, which exposes tissue to excessive radiation and is also time consuming. We demonstrate that the effective FOV can be substantially enhanced by dynamic control of the image plane. Specifically, the tilt of the image plane is continuously adjusted in situ to match the oblique orientation of the sample plane within tissue. The utility of adaptive field microscopy is tested for imaging tissue with non-planar morphology. Ocular tissue of small animals was imaged by two-photon excited fluorescence. Our results show that adaptive field microscopy can utilize the full FOV. The freedom to adjust the image plane to account for the geometrical variations of a sample could be extremely useful for 3D biological imaging. Furthermore, it could facilitate rapid surveillance of cellular features within deep tissue while avoiding photodamage, making it suitable for in vivo imaging.

  15. Using deep learning in image hyper spectral segmentation, classification, and detection

    NASA Astrophysics Data System (ADS)

    Zhao, Xiuying; Su, Zhenyu

    2018-02-01

    Recent years have shown that deep learning neural networks are a valuable tool in the field of computer vision. Deep learning methods can be used in remote sensing applications such as land cover classification, detection of vehicles in satellite images, and hyperspectral image classification. This paper addresses the use of deep learning artificial neural networks in satellite image segmentation. Image segmentation plays an important role in image processing. The hue of a remote sensing image often varies widely, which results in poor display of the images in a VR environment. Image segmentation is a preprocessing technique applied to the original images that splits an image into many parts of different hue in order to unify the color. Several computational models based on supervised, unsupervised, parametric, and probabilistic region-based image segmentation techniques have been proposed. Recently, a machine learning technique known as deep learning with convolutional neural networks has been widely used to develop efficient and automatic image segmentation models. In this paper, we focus on the study of deep convolutional neural networks and their variants for automatic image segmentation rather than traditional image segmentation strategies.

  16. The Great Easter Egg Hunt: The Void's Incredible Richness

    NASA Astrophysics Data System (ADS)

    2006-04-01

    An image made of about 300 million pixels is being released by ESO, based on more than 64 hours of observations with the Wide-Field Camera on the 2.2-m telescope at La Silla (Chile). The image covers an 'empty' region of the sky five times the size of the full moon, opening an exceptionally clear view towards the most distant part of our universe. It reveals objects that are 100 million times fainter than what the unaided eye can see. Easter is in many countries a time of great excitement for children who are on the big hunt for chocolate eggs, hidden all about the place. Astronomers, however, do not need to wait for this special day to get such excitement: it is indeed daily that they look for faraway objects concealed in deep images of the sky. And as with chocolate eggs, deep sky objects, such as galaxies, quasars or gravitational lenses, come in the wildest variety of colours and shapes. [ESO PR Photo 14a/06: The Deep 3 'Empty' Field] The image presented here is one such very deep image of the sky. It is the combination of 714 frames for a total exposure time of 64.5 hours obtained through four different filters (B, V, R, and I). It consists of four adjacent Wide-Field Camera pointings (each 33x34 arcmin), covering a total area larger than one square degree. Yet, if you were to look at this large portion of the firmament with the unaided eye, you would just see... nothing. The area, named Deep 3, was indeed chosen to be a random but empty, high galactic latitude field, positioned in such a way that it can be observed from the La Silla observatory all year round. Together with two other regions, Deep 1 and Deep 2, Deep 3 is part of the Deep Public Survey (DPS), based on ideas submitted by the ESO community and covering a total sky area of 3 square degrees. Deep 1 and Deep 2 were selected because they overlapped with regions of other scientific interest. For instance, Deep 1 was chosen to complement the deep ATESP radio survey carried out with the Australia Telescope Compact Array (ATCA) covering the region surveyed by the ESO Slice Project, while Deep 2 included the CDF-S field. Each region is observed in the optical, with the WFI, and in the near-infrared, with SOFI on the 3.5-m New Technology Telescope, also at La Silla. Deep 3 is located in Crater ('The Cup'), a southern constellation of little visual interest (its brightest star is of fourth magnitude, i.e. only a factor of six brighter than what a keen observer can see with the unaided eye), in between the Virgo, Corvus and Hydra constellations. Such comparatively empty fields provide an unusually clear view towards the distant regions of the Universe and thus open a window towards the earliest cosmic times. The deep imaging data can, for example, be used to pre-select objects by colour for follow-up spectroscopy with ESO's Very Large Telescope instruments. [ESO PR Photo 14b/06: Galaxy ESO 570-19 and Variable Star UW Crateris] But being empty is only a relative notion. True, on the whole image, the SIMBAD astronomical database references fewer than 50 objects, clearly a tiny number compared to the myriad of anonymous stars and galaxies that can be seen in the deep image obtained by the survey! Among the objects catalogued is the galaxy visible in the top middle right and named ESO 570-19. Located 60 million light-years away, this spiral galaxy is the largest in the image. It is located not so far - on the image! - from the brightest star in the field, UW Crateris. This red giant is a variable star that is about 8 times fainter than what the unaided eye can see. The second and third brightest stars in this image are visible in the lower far right and in the lower middle left. The first is a star slightly more massive than the Sun, HD 98081, while the other is another red giant, HD 98507. [ESO PR Photo 14c/06: The DPS Deep 3 Field (Detail)] In the image, a vast number of stars and galaxies are to be studied and compared. They come in a variety of colours, and the stars form amazing asterisms (a group of stars forming a pattern), while the galaxies, which are to be counted by the tens of thousands, come in different shapes, and some even interact or form part of a cluster. The image and the other associated data will certainly provide a plethora of new results in the years to come. In the meantime, why don't you explore the image with the zoom-in facility and start your own journey into infinity? Just be careful not to get lost. And remember: don't eat too many of those chocolate eggs!

  17. Underwater Inherent Optical Properties Estimation Using a Depth Aided Deep Neural Network.

    PubMed

    Yu, Zhibin; Wang, Yubo; Zheng, Bing; Zheng, Haiyong; Wang, Nan; Gu, Zhaorui

    2017-01-01

    Underwater inherent optical properties (IOPs) are fundamental clues for many research fields such as marine optics, marine biology, and underwater vision. Currently, beam transmissometers and optical sensors are considered the ideal IOP measuring methods, but these methods are inflexible and expensive to deploy. To overcome this problem, we aim to develop a novel measuring method using only a single underwater image with the help of a deep artificial neural network. The power of artificial neural networks has been proven in image processing and computer vision with deep learning technology. However, image-based IOP estimation is a quite different and challenging task. Unlike traditional applications such as image classification or localization, IOP estimation looks at the transparency of the water between the camera and the target objects to estimate multiple optical properties simultaneously. In this paper, we propose a novel Depth Aided (DA) deep neural network structure for IOP estimation based on a single RGB image that may even be noisy. The imaging depth information is used as an aided input to help our model make better decisions.
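
    A minimal sketch of the depth-aided idea: a convolutional branch for the RGB image concatenated with a scalar depth input before a regression head that predicts several IOPs at once. The layer sizes, input size, and the three predicted quantities are illustrative assumptions, not the paper's architecture.

    ```python
    import tensorflow as tf
    from tensorflow.keras import layers, models

    image_in = layers.Input(shape=(64, 64, 3))   # single RGB underwater image
    depth_in = layers.Input(shape=(1,))          # imaging depth as aided input
    x = layers.Conv2D(16, 3, activation="relu")(image_in)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Concatenate()([x, depth_in])      # inject the depth information
    iops_out = layers.Dense(3)(x)                # e.g., absorption, scattering, attenuation
    model = models.Model([image_in, depth_in], iops_out)
    model.compile(optimizer="adam", loss="mse")  # regression over the IOP values
    ```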

  18. FRONTIER FIELDS: HIGH-REDSHIFT PREDICTIONS AND EARLY RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coe, Dan; Bradley, Larry; Zitrin, Adi, E-mail: DCoe@STScI.edu

    2015-02-20

    The Frontier Fields program is obtaining deep Hubble and Spitzer Space Telescope images of new "blank" fields and nearby fields gravitationally lensed by massive galaxy clusters. The Hubble images of the lensed fields are revealing nJy sources (AB mag > 31), the faintest galaxies yet observed. The full program will transform our understanding of galaxy evolution in the first 600 million years (z > 9). Previous programs have yielded a dozen or so z > 9 candidates, including perhaps fewer than expected in the Ultra Deep Field and more than expected in shallower Hubble images. In this paper, we present high-redshift (z > 6) number count predictions for the Frontier Fields and candidates in three of the first Hubble images. We show the full Frontier Fields program may yield up to ∼70 z > 9 candidates (∼6 per field). We base this estimate on an extrapolation of luminosity functions observed between 4 < z < 8 and gravitational lensing models submitted by the community. However, in the first two deep infrared Hubble images obtained to date, we find z ∼ 8 candidates but no strong candidates at z > 9. We defer quantitative analysis of the z > 9 deficit (including detection completeness estimates) to future work including additional data. At these redshifts, cosmic variance (field-to-field variation) is expected to be significant (greater than ±50%) and include clustering of early galaxies formed in overdensities. The full Frontier Fields program will significantly mitigate this uncertainty by observing six independent sightlines, each with a lensing cluster and nearby blank field.

  19. VizieR Online Data Catalog: Galaxy samples rest-frame ultraviolet structure (Bond+, 2014)

    NASA Astrophysics Data System (ADS)

    Bond, N. A.; Gardner, J. P.; de Mello, D. F.; Teplitz, H. I.; Rafelski, M.; Koekemoer, A. M.; Coe, D.; Grogin, N.; Gawiser, E.; Ravindranath, S.; Scarlata, C.

    2017-03-01

    In this paper, we use data taken as part of a program (GO 11563, PI: Teplitz) to obtain UV imaging of the Hubble Ultra Deep Field (hereafter UVUDF) and study intermediate-redshift galaxy structure in the F336W, F275W, and F225W filters, complementing existing optical and near-IR measurements from the 2012 Hubble Ultra Deep Field (HUDF12; Ellis et al. 2013ApJ...763L...7E) survey. We use AB magnitudes throughout and assume a concordance cosmology with H0=71 km/s/Mpc, Ωm=0.27, and ΩΛ=0.73 (Spergel et al. 2007ApJS..170..377S). The UVUDF data and the optical Hubble Ultra Deep Field (UDF; Beckwith et al. 2006, J/AJ/132/1729) are both contained within a single deep field in the Great Observatories Origins Deep Survey South. The new UVUDF data include imaging in three filters (F336W, F275W, and F225W), obtained in 10 visits, for a total of 30 orbits per filter. In addition, from the UDF, we make use of deep drizzled images taken in the observed optical with the F435W, F606W, and F775W filters. (1 data file).

  20. The Great Observatories Origins Deep Survey (GOODS): Overview and Status

    NASA Astrophysics Data System (ADS)

    Hook, R. N.; GOODS Team

    2002-12-01

    GOODS is a very large project to gather deep imaging data and spectroscopic followup of two fields, the Hubble Deep Field North (HDF-N) and the Chandra Deep Field South (CDF-S), with both space and ground-based instruments to create an extensive multiwavelength public data set for community research on the distant Universe. GOODS includes a SIRTF Legacy Program (PI: Mark Dickinson) and a Hubble Treasury Program of ACS imaging (PI: Mauro Giavalisco). The ACS imaging was also optimized for the detection of high-z supernovae which are being followed up by a further target of opportunity Hubble GO Program (PI: Adam Riess). The bulk of the CDF-S ground-based data presently available comes from an ESO Large Programme (PI: Catherine Cesarsky) which includes both deep imaging and multi-object followup spectroscopy. This is currently complemented in the South by additional CTIO imaging. Currently available HDF-N ground-based data forming part of GOODS includes NOAO imaging. Although the SIRTF part of the survey will not begin until later in the year the ACS imaging is well advanced and there is also a huge body of complementary ground-based imaging and some follow-up spectroscopy which is already publicly available. We summarize the current status of GOODS and give an overview of the data products currently available and present the timescales for the future. Many early science results from the survey are presented in other GOODS papers at this meeting. Support for the HST GOODS program presented here and in companion abstracts was provided by NASA thorugh grant number GO-9425 from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.

  1. Near Infrared Imaging of the Hubble Deep Field with Keck Telescope

    NASA Technical Reports Server (NTRS)

    Hogg, David W.; Neugebauer, G.; Armus, Lee; Matthews, K.; Pahre, Michael A.; Soifer, B. T.; Weinberger, A. J.

    1997-01-01

    Two deep K-band (2.2 micrometer) images, with point-source detection limits of K = 25.2 mag (one sigma), taken with the Keck Telescope in subfields of the Hubble Deep Field, are presented and analyzed. A sample of objects to K = 24 mag is constructed, and V_606 - I_814 and I_814 - K colors are measured. By stacking visually selected objects, mean I_814 - K colors can be measured to very faint levels; the mean I_814 - K color is constant with apparent magnitude down to V_606 = 28 mag.

  2. Deep Learning and Its Applications in Biomedicine.

    PubMed

    Cao, Chensi; Liu, Feng; Tan, Hai; Song, Deshou; Shu, Wenjie; Li, Weizhong; Zhou, Yiming; Bo, Xiaochen; Xie, Zhi

    2018-02-01

    Advances in biological and medical technologies have been providing us with explosive volumes of biological and physiological data, such as medical images, electroencephalography, and genomic and protein sequences. Learning from these data facilitates the understanding of human health and disease. Developed from artificial neural networks, deep learning-based algorithms show great promise in extracting features and learning patterns from complex data. The aim of this paper is to provide an overview of deep learning techniques and some of the state-of-the-art applications in the biomedical field. We first introduce the development of artificial neural networks and deep learning. We then describe two main components of deep learning, i.e., deep learning architectures and model optimization. Subsequently, some examples are demonstrated for deep learning applications, including medical image classification, genomic sequence analysis, as well as protein structure classification and prediction. Finally, we offer our perspectives on future directions in the field of deep learning. Copyright © 2018. Production and hosting by Elsevier B.V.

  3. Deep Learning in Medical Image Analysis.

    PubMed

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2017-06-21

    This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.

  4. Deep ultraviolet scanning near-field optical microscopy for the structural analysis of organic and biological materials

    NASA Astrophysics Data System (ADS)

    Aoki, Hiroyuki; Hamamatsu, Toyohiro; Ito, Shinzaburo

    2004-01-01

    Scanning near-field optical microscopy (SNOM) using a deep ultraviolet (DUV) light source was developed for in situ imaging of a variety of chemical species without staining. Numerous kinds of chemical species have a carbon-carbon double bond or aromatic group in their chemical structure, which can be excited at the wavelength below 300 nm. In this study, the wavelength range available for SNOM imaging was extended to the DUV region. DUV-SNOM allowed the direct imaging of polymer thin films with high detection sensitivity and spatial resolution of several tens of nanometers. In addition to the polymer materials, we demonstrated the near-field imaging of a cell without using a fluorescence label.

  5. Deep-turbulence wavefront sensing using digital holography in the on-axis phase shifting recording geometry

    NASA Astrophysics Data System (ADS)

    Thornton, Douglas E.; Spencer, Mark F.; Perram, Glen P.

    2017-09-01

    The effects of deep turbulence in long-range imaging applications present unique challenges for properly measuring and correcting the aberrations incurred along the atmospheric path. In practice, digital holography can detect the path-integrated wavefront distortions caused by deep turbulence, and different recording geometries offer different benefits depending on the application of interest. Previous studies have evaluated the performance of the off-axis image and pupil plane recording geometries for deep-turbulence sensing. This study models digital holography in the on-axis phase shifting recording geometry using wave optics simulations. In particular, the analysis models spherical-wave propagation through varying deep-turbulence conditions to estimate the complex optical field, and performance is evaluated by calculating the field-estimated Strehl ratio and RMS wavefront error. Altogether, the results show that digital holography in the on-axis phase shifting recording geometry is an effective wavefront-sensing method in the presence of deep turbulence.
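
    The two performance metrics are linked: for small residual aberrations, the Strehl ratio can be approximated from the RMS wavefront error via the extended Marechal approximation. A hedged sketch of that standard relation (not necessarily the estimator used in the study):

    ```python
    import numpy as np

    def strehl_marechal(rms_wavefront_error, wavelength):
        """Extended Marechal approximation: S ~ exp(-(2*pi*sigma/lambda)^2).

        rms_wavefront_error and wavelength must share units (e.g., microns).
        """
        sigma_rad = 2.0 * np.pi * rms_wavefront_error / wavelength
        return float(np.exp(-sigma_rad ** 2))

    print(strehl_marechal(rms_wavefront_error=0.1, wavelength=1.0))  # ~0.67
    ```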

  6. A survey on deep learning in medical image analysis.

    PubMed

    Litjens, Geert; Kooi, Thijs; Bejnordi, Babak Ehteshami; Setio, Arnaud Arindra Adiyoso; Ciompi, Francesco; Ghafoorian, Mohsen; van der Laak, Jeroen A W M; van Ginneken, Bram; Sánchez, Clara I

    2017-12-01

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. VizieR Online Data Catalog: Improved multi-band photometry from SERVS (Nyland+, 2017)

    NASA Astrophysics Data System (ADS)

    Nyland, K.; Lacy, M.; Sajina, A.; Pforr, J.; Farrah, D.; Wilson, G.; Surace, J.; Haussler, B.; Vaccari, M.; Jarvis, M.

    2017-07-01

    The Spitzer Extragalactic Representative Volume Survey (SERVS) sky footprint includes five well-studied astronomical deep fields with abundant multi-wavelength data spanning an area of ~18deg2 and a co-moving volume of ~0.8Gpc3. The five deep fields included in SERVS are the XMM-LSS field, Lockman Hole (LH), ELAIS-N1 (EN1), ELAIS-S1 (ES1), and Chandra Deep Field South (CDFS). SERVS provides NIR, post-cryogenic imaging in the 3.6 and 4.5um Spitzer/IRAC bands to a depth of ~2uJy. IRAC dual-band source catalogs generated using traditional catalog extraction methods are described in Mauduit+ (2012PASP..124..714M). The Spitzer IRAC data are complemented by ground-based NIR observations from the VISTA Deep Extragalactic Observations (VIDEO; Jarvis+ 2013MNRAS.428.1281J) survey in the south in the Z, Y, J, H, and Ks bands and UKIRT Infrared Deep Sky Survey (UKIDSS; Lawrence+ 2007, see II/319) in the north in the J and K bands. SERVS also provides substantial overlap with infrared data from SWIRE (Lonsdale+ 2003PASP..115..897L) and the Herschel Multitiered Extragalactic Survey (HerMES; Oliver+ 2012, VIII/95). As shown in Figure 1, one square degree of the XMM-LSS field overlaps with ground-based optical data from the Canada-France-Hawaii Telescope Legacy Survey Deep field 1 (CFHTLS-D1). The CFHTLS-D1 region is centered at RAJ2000=02:25:59, DEJ2000=-04:29:40 and includes imaging through the filter set u', g', r', i', and z'. Thus, in combination with the NIR data from SERVS and VIDEO that overlap with the CFHTLS-D1 region, multi-band imaging over a total of 12 bands is available. (2 data files).

  8. THE MULTIWAVELENGTH SURVEY BY YALE-CHILE (MUSYC): DEEP MEDIUM-BAND OPTICAL IMAGING AND HIGH-QUALITY 32-BAND PHOTOMETRIC REDSHIFTS IN THE ECDF-S

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cardamone, Carolin N.; Van Dokkum, Pieter G.; Urry, C. Megan

    2010-08-15

    We present deep optical 18-medium-band photometry from the Subaru telescope over the ~30' x 30' Extended Chandra Deep Field-South, as part of the Multiwavelength Survey by Yale-Chile (MUSYC). This field has a wealth of ground- and space-based ancillary data, and contains the GOODS-South field and the Hubble Ultra Deep Field. We combine the Subaru imaging with existing UBVRIzJHK and Spitzer IRAC images to create a uniform catalog. Detecting sources in the MUSYC 'BVR' image we find ~40,000 galaxies with R_AB < 25.3, the median 5σ limit of the 18 medium bands. Photometric redshifts are determined using the EAzY code and compared to ~2000 spectroscopic redshifts in this field. The medium-band filters provide very accurate redshifts for the (bright) subset of galaxies with spectroscopic redshifts, particularly at 0.1 < z < 1.2 and at z ≳ 3.5. For 0.1 < z < 1.2, we find a 1σ scatter in Δz/(1 + z) of 0.007, similar to results obtained with a similar filter set in the COSMOS field. As a demonstration of the data quality, we show that the red sequence and blue cloud can be cleanly identified in rest-frame color-magnitude diagrams at 0.1 < z < 1.2. We find that ~20% of the red sequence galaxies show evidence of dust emission at longer rest-frame wavelengths. The reduced images, photometric catalog, and photometric redshifts are provided through the public MUSYC Web site.
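
    The quoted 1σ scatter in Δz/(1 + z) is computed from matched photometric and spectroscopic samples; a minimal sketch using the normalized median absolute deviation, a common robust estimator for this statistic (the paper's exact estimator may differ):

    ```python
    import numpy as np

    def photz_scatter(z_phot, z_spec):
        """Robust 1-sigma scatter of dz/(1+z) via the normalized MAD."""
        z_phot = np.asarray(z_phot, dtype=float)
        z_spec = np.asarray(z_spec, dtype=float)
        dz = (z_phot - z_spec) / (1.0 + z_spec)
        return 1.4826 * np.median(np.abs(dz - np.median(dz)))
    ```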

  9. A warm Spitzer survey of the LSST/DES 'Deep drilling' fields

    NASA Astrophysics Data System (ADS)

    Lacy, Mark; Farrah, Duncan; Brandt, Niel; Sako, Masao; Richards, Gordon; Norris, Ray; Ridgway, Susan; Afonso, Jose; Brunner, Robert; Clements, Dave; Cooray, Asantha; Covone, Giovanni; D'Andrea, Chris; Dickinson, Mark; Ferguson, Harry; Frieman, Joshua; Gupta, Ravi; Hatziminaoglou, Evanthia; Jarvis, Matt; Kimball, Amy; Lubin, Lori; Mao, Minnie; Marchetti, Lucia; Mauduit, Jean-Christophe; Mei, Simona; Newman, Jeffrey; Nichol, Robert; Oliver, Seb; Perez-Fournon, Ismael; Pierre, Marguerite; Rottgering, Huub; Seymour, Nick; Smail, Ian; Surace, Jason; Thorman, Paul; Vaccari, Mattia; Verma, Aprajita; Wilson, Gillian; Wood-Vasey, Michael; Cane, Rachel; Wechsler, Risa; Martini, Paul; Evrard, August; McMahon, Richard; Borne, Kirk; Capozzi, Diego; Huang, Jiashang; Lagos, Claudia; Lidman, Chris; Maraston, Claudia; Pforr, Janine; Sajina, Anna; Somerville, Rachel; Strauss, Michael; Jones, Kristen; Barkhouse, Wayne; Cooper, Michael; Ballantyne, David; Jagannathan, Preshanth; Murphy, Eric; Pradoni, Isabella; Suntzeff, Nicholas; Covarrubias, Ricardo; Spitler, Lee

    2014-12-01

    We propose a warm Spitzer survey to microJy depth of the four predefined Deep Drilling Fields (DDFs) for the Large Synoptic Survey Telescope (LSST) (three of which are also deep drilling fields for the Dark Energy Survey (DES)). Imaging these fields with warm Spitzer is a key component of the overall success of these projects, that address the 'Physics of the Universe' theme of the Astro2010 decadal survey. With deep, accurate, near-infrared photometry from Spitzer in the DDFs, we will generate photometric redshift distributions to apply to the surveys as a whole. The DDFs are also the areas where the supernova searches of DES and LSST are concentrated, and deep Spitzer data is essential to obtain photometric redshifts, stellar masses and constraints on ages and metallicities for the >10000 supernova host galaxies these surveys will find. This 'DEEPDRILL' survey will also address the 'Cosmic Dawn' goal of Astro2010 through being deep enough to find all the >10^11 solar mass galaxies within the survey area out to z~6. DEEPDRILL will complete the final 24.4 square degrees of imaging in the DDFs, which, when added to the 14 square degrees already imaged to this depth, will map a volume of 1-Gpc^3 at z>2. It will find ~100 > 10^11 solar mass galaxies at z~5 and ~40 protoclusters at z>2, providing targets for JWST that can be found in no other way. The Spitzer data, in conjunction with the multiwavelength surveys in these fields, ranging from X-ray through far-infrared and cm-radio, will comprise a unique legacy dataset for studies of galaxy evolution.

  10. Quick acquisition and recognition method for the beacon in deep space optical communications.

    PubMed

    Wang, Qiang; Liu, Yuefei; Ma, Jing; Tan, Liying; Yu, Siyuan; Li, Changjiang

    2016-12-01

    In deep space optical communications, it is very difficult to acquire the beacon given the long communication distance. Acquisition efficiency is essential for establishing and holding the optical communication link. Here we propose a quick acquisition and recognition method for the beacon in deep space optical communications based on the characteristics of the deep space optical link. To identify the beacon from the background light efficiently, we utilized the maximum similarity between the collected image and the reference image for accurate recognition and acquisition of the beacon in the area of uncertainty. First, the collected image and the reference image were processed by the Fourier-Mellin transform. Second, image sampling and image matching were applied for accurate positioning of the beacon. Finally, a field programmable gate array (FPGA)-based system was used to verify and realize this method. The experimental results showed that the acquisition time for the beacon was as fast as 8.1 s. Future application of this method in the system design of deep space optical communication will be beneficial.
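
    One building block of such a pipeline is translation registration by phase correlation in the Fourier domain (a full Fourier-Mellin approach also recovers rotation and scale, and the paper's FPGA implementation is not reproduced here). A minimal sketch using OpenCV:

    ```python
    import cv2
    import numpy as np

    def estimate_shift(reference, collected):
        """Sub-pixel (dx, dy) shift between two grayscale images, plus a
        peak-response confidence score, via cv2.phaseCorrelate."""
        ref = np.float32(reference)
        col = np.float32(collected)
        (dx, dy), response = cv2.phaseCorrelate(ref, col)
        return dx, dy, response
    ```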

  11. Near-UV Sources in the Hubble Ultra Deep Field: The Catalog

    NASA Technical Reports Server (NTRS)

    Gardner, Jonathan P.; Voyrer, Elysse; de Mello, Duilia F.; Siana, Brian; Quirk, Cori; Teplitz, Harry I.

    2009-01-01

    The catalog from the first high resolution U-band image of the Hubble Ultra Deep Field, taken with Hubble's Wide Field Planetary Camera 2 through the F300W filter, is presented. We detect 96 U-band objects and compare and combine this catalog with a Great Observatories Origins Deep Survey (GOODS) B-selected catalog that provides B, V, i, and z photometry, spectral types, and photometric redshifts. We have also obtained Far-Ultraviolet (FUV, 1614 Angstroms) data with Hubble's Advanced Camera for Surveys Solar Blind Channel (ACS/SBC) and with the Galaxy Evolution Explorer (GALEX). We detected 31 sources with ACS/SBC, 28 with GALEX/FUV, and 45 with GALEX/NUV. The methods of observations, image processing, object identification, catalog preparation, and catalog matching are presented.

  12. Image inpainting and super-resolution using non-local recursive deep convolutional network with skip connections

    NASA Astrophysics Data System (ADS)

    Liu, Miaofeng

    2017-07-01

    In recent years, deep convolutional neural networks have come into use for image inpainting and super-resolution in many fields. Unlike most earlier methods, which require knowing beforehand the locations of the corrupted pixels, we propose a 20-layer fully convolutional network that learns an end-to-end mapping from a dataset of damaged/ground-truth subimage pairs, realizing non-local blind inpainting and super-resolution. Because existing approaches perform poorly on images with large corruptions, or when inpainting low-resolution images, we also share parameters within local groups of layers to achieve spatial recursion and enlarge the receptive field. To ease the training of this deep neural network, skip connections between symmetric convolutional layers are designed. Experimental results show that the proposed method outperforms state-of-the-art methods under diverse corruption and low-resolution conditions, and it works excellently when performing super-resolution and image inpainting simultaneously.
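
    A hedged sketch of the skip-connection idea between symmetric convolutional and deconvolutional layers; the depth, channel counts, and kernel sizes below are illustrative assumptions, not the paper's 20-layer network.

    ```python
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_skip_net(input_shape=(None, None, 1), depth=5, channels=64):
        """Symmetric conv/deconv stacks with additive skip connections."""
        inputs = layers.Input(shape=input_shape)
        x = inputs
        encoder_outputs = []
        for _ in range(depth):
            x = layers.Conv2D(channels, 3, padding="same", activation="relu")(x)
            encoder_outputs.append(x)
        for skip in reversed(encoder_outputs):
            x = layers.Conv2DTranspose(channels, 3, padding="same",
                                       activation="relu")(x)
            x = layers.Add()([x, skip])  # skip between symmetric layers
        outputs = layers.Conv2D(1, 3, padding="same")(x)  # restored image
        return models.Model(inputs, outputs)

    model = build_skip_net()
    ```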

  13. Deep erosions of the palmar aspect of the navicular bone diagnosed by standing magnetic resonance imaging.

    PubMed

    Sherlock, C; Mair, T; Blunden, T

    2008-11-01

    Erosion of the palmar (flexor) aspect of the navicular bone is difficult to diagnose with conventional imaging techniques. To review the clinical, magnetic resonance (MR) and pathological features of deep erosions of the palmar aspect of the navicular bone. Cases of deep erosions of the palmar aspect of the navicular bone, diagnosed by standing low field MR imaging, were selected. Clinical details, results of diagnostic procedures, MR features and pathological findings were reviewed. Deep erosions of the palmar aspect of the navicular bone were diagnosed in 16 mature horses, 6 of which were bilaterally lame. Sudden onset of lameness was recorded in 63%. Radiography prior to MR imaging showed equivocal changes in 7 horses. The MR features consisted of focal areas of intermediate or high signal intensity on T1-, T2*- and T2-weighted images and STIR images affecting the dorsal aspect of the deep digital flexor tendon, the fibrocartilage of the palmar aspect, subchondral compact bone and medulla of the navicular bone. On follow-up, 7/16 horses (44%) had been subjected to euthanasia and only one was being worked at its previous level. Erosions of the palmar aspect of the navicular bone were confirmed post mortem in 2 horses. Histologically, the lesions were characterised by localised degeneration of fibrocartilage with underlying focal osteonecrosis and fibroplasia. The adjacent deep digital flexor tendon showed fibril formation and fibrocartilaginous metaplasia. Deep erosions of the palmar aspect of the navicular bone are more easily diagnosed by standing low field MR imaging than by conventional radiography. The lesions involve degeneration of the palmar fibrocartilage with underlying osteonecrosis and fibroplasia affecting the subchondral compact bone and medulla, and carry a poor prognosis for return to performance. Diagnosis of shallow erosive lesions of the palmar fibrocartilage may allow therapeutic intervention earlier in the disease process, thereby preventing progression to deep erosive lesions.

  14. Deep Learning in Medical Image Analysis

    PubMed Central

    Shen, Dinggang; Wu, Guorong; Suk, Heung-Il

    2016-01-01

    Computer-assisted analysis for better interpretation of images has been a longstanding issue in the medical imaging field. On the image-understanding front, recent advances in machine learning, especially in the way of deep learning, have made a big leap to help identify, classify, and quantify patterns in medical images. Specifically, exploiting hierarchical feature representations learned solely from data, instead of handcrafted features mostly designed based on domain-specific knowledge, lies at the core of the advances. In that way, deep learning is rapidly proving to be the state-of-the-art foundation, achieving enhanced performances in various medical applications. In this article, we introduce the fundamentals of deep learning methods; review their successes in image registration, anatomical/cell structure detection, tissue segmentation, computer-aided disease diagnosis or prognosis, and so on. We conclude by raising research issues and suggesting future directions for further improvements. PMID:28301734

  15. Hubble Sees a Legion of Galaxies

    NASA Image and Video Library

    2017-12-08

    Peering deep into the early universe, this picturesque parallel field observation from the NASA/ESA Hubble Space Telescope reveals thousands of colorful galaxies swimming in the inky blackness of space. A few foreground stars from our own galaxy, the Milky Way, are also visible. In October 2013 Hubble’s Wide Field Camera 3 (WFC3) and Advanced Camera for Surveys (ACS) began observing this portion of sky as part of the Frontier Fields program. This spectacular skyscape was captured during the study of the giant galaxy cluster Abell 2744, otherwise known as Pandora’s Box. While one of Hubble’s cameras concentrated on Abell 2744, the other camera viewed this adjacent patch of sky near the cluster. Containing countless galaxies of various ages, shapes and sizes, this parallel field observation is nearly as deep as the Hubble Ultra-Deep Field. In addition to showcasing the stunning beauty of the deep universe in incredible detail, this parallel field — when compared to other deep fields — will help astronomers understand how similar the universe looks in different directions. Image credit: NASA, ESA and the HST Frontier Fields team (STScI)

  16. Deep multi-scale convolutional neural network for hyperspectral image classification

    NASA Astrophysics Data System (ADS)

    Zhang, Feng-zhe; Yang, Xia

    2018-04-01

    In this paper, we propose a multi-scale convolutional neural network for the hyperspectral image classification task. Firstly, compared with conventional convolutions, we utilize multi-scale convolutions, which possess larger receptive fields, to extract the spectral features of the hyperspectral image. We design a deep neural network with a multi-scale convolution layer that contains 3 different convolution kernel sizes. Secondly, to avoid overfitting of the deep neural network, dropout is utilized, which randomly deactivates neurons and contributes to a modest improvement in classification accuracy. In addition, techniques from deep learning such as the ReLU activation are utilized in this paper. We conduct experiments on the University of Pavia and Salinas datasets and obtain better classification accuracy compared with other methods.
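
    A hedged sketch of a multi-scale convolution block with three parallel kernel sizes, concatenation, and dropout, as the abstract describes; the kernel sizes, channel counts, and band count below are illustrative assumptions.

    ```python
    import tensorflow as tf
    from tensorflow.keras import layers

    def multi_scale_block(x, filters=32, dropout_rate=0.5):
        """Three parallel convolutions with different kernel sizes, concatenated."""
        branches = [
            layers.Conv2D(filters, k, padding="same", activation="relu")(x)
            for k in (1, 3, 5)  # three different convolution kernel sizes
        ]
        x = layers.Concatenate()(branches)
        return layers.Dropout(dropout_rate)(x)  # regularization against overfitting

    inputs = layers.Input(shape=(None, None, 103))  # e.g., 103 spectral bands
    features = multi_scale_block(inputs)
    ```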

  17. Fifty years of computer analysis in chest imaging: rule-based, machine learning, deep learning.

    PubMed

    van Ginneken, Bram

    2017-03-01

    Half a century ago, the term "computer-aided diagnosis" (CAD) was introduced in the scientific literature. Pulmonary imaging, with chest radiography and computed tomography, has always been one of the focus areas in this field. In this study, I describe how machine learning became the dominant technology for tackling CAD in the lungs, generally producing better results than do classical rule-based approaches, and how the field is now rapidly changing: in the last few years, we have seen how even better results can be obtained with deep learning. The key differences among rule-based processing, machine learning, and deep learning are summarized and illustrated for various applications of CAD in the chest.

  18. Ultraviolet Raman Wide-Field Hyperspectral Imaging Spectrometer for Standoff Trace Explosive Detection.

    PubMed

    Hufziger, Kyle T; Bykov, Sergei V; Asher, Sanford A

    2017-02-01

    We constructed the first deep ultraviolet (UV) Raman standoff wide-field imaging spectrometer. Our novel deep UV imaging spectrometer utilizes a photonic crystal to select Raman spectral regions for detection. The photonic crystal is composed of highly charged, monodisperse 35.5 ± 2.9 nm silica nanoparticles that self-assemble in solution to produce a face-centered cubic crystalline colloidal array that Bragg-diffracts a narrow ∼1.0 nm full width at half-maximum (FWHM) UV spectral region. We utilize this photonic crystal to select and image two different spectral regions containing resonance Raman bands of pentaerythritol tetranitrate (PETN) and NH4NO3 (AN). These two diffracted deep UV Raman spectral regions were selected by angle tuning the photonic crystal. We utilized this imaging spectrometer to measure 229 nm excited UV Raman images containing ∼10-1000 µg/cm² samples of solid PETN and AN on aluminum surfaces at 2.3 m standoff distances. We estimate detection limits of ∼1 µg/cm² for PETN and AN films under these experimental conditions.
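
    The angle tuning described above follows the Bragg condition for the crystalline colloidal array; in a commonly used form (stated here for orientation, with symbols assumed rather than quoted from the paper):

        \lambda_{\mathrm{diff}} = 2\, n_{\mathrm{eff}}\, d \sin\theta

    where λ_diff is the diffracted wavelength, n_eff the effective refractive index of the array, d the interplanar spacing of the diffracting planes, and θ the glancing angle; rotating the crystal changes θ and therefore which ~1.0 nm band is passed to the imaging camera.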

  19. VizieR Online Data Catalog: Merging galaxies with tidal tails in COSMOS to z=1 (Wen+, 2016)

    NASA Astrophysics Data System (ADS)

    Wen, Z. Z.; Zheng, X. Z.

    2017-02-01

    Our study utilizes the public data and catalogs from multi-band deep surveys of the COSMOS field. The UltraVISTA survey (McCracken+ 2012, J/A+A/544/A156) provides ultra-deep near-IR imaging observations of this field in the Y, J, H, and Ks bands, as well as a narrow band (NB118). The HST/ACS I-band imaging data are publicly available, allowing us to measure morphologies in the rest-frame optical for galaxies at z<=1. The HST/ACS I-band images reach a 5σ depth of 27.2 mag for point sources. (1 data file).

  20. Deep-subwavelength imaging of both electric and magnetic localized optical fields by plasmonic campanile nanoantenna

    DOE PAGES

    Caselli, Niccolò; La China, Federico; Bao, Wei; ...

    2015-06-05

    Tailoring the electromagnetic field at the nanoscale has led to artificial materials exhibiting fascinating optical properties unavailable in naturally occurring substances. Besides having fundamental implications for classical and quantum optics, nanoscale metamaterials provide a platform for developing disruptive novel technologies, in which a combination of both the electric and magnetic radiation field components at optical frequencies is relevant to engineer the light-matter interaction. Thus, an experimental investigation of the spatial distribution of the photonic states at the nanoscale for both field components is of crucial importance. Here we experimentally demonstrate a concomitant deep-subwavelength near-field imaging of the electric and magnetic intensities of the optical modes localized in a photonic crystal nanocavity. We take advantage of the “campanile tip”, a plasmonic near-field probe that efficiently combines broadband field enhancement with strong far-field to near-field coupling. In conclusion, by exploiting the electric and magnetic polarizability components of the campanile tip along with the perturbation imaging method, we are able to map in a single measurement both the electric and magnetic localized near-field distributions.

  1. Depth Reconstruction from Single Images Using a Convolutional Neural Network and a Conditional Random Field Model.

    PubMed

    Liu, Dan; Liu, Xuejun; Wu, Yiguang

    2018-04-24

    This paper presents an effective approach for depth reconstruction from a single image through the incorporation of semantic information and local details from the image. A unified framework for depth acquisition is constructed by joining a deep Convolutional Neural Network (CNN) and a continuous pairwise Conditional Random Field (CRF) model. Semantic information and relative depth trends of local regions inside the image are integrated into the framework. A deep CNN is first used to automatically learn a hierarchical feature representation of the image. To capture more local details in the image, the relative depth trends of local regions are incorporated into the network. Combined with semantic information from the image, a continuous pairwise CRF is then established and used as the loss function of the unified model. Experiments on real scenes demonstrate that the proposed approach is effective and obtains satisfactory results.
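
    A continuous pairwise CRF of the kind described is typically written as an energy over the per-pixel depths; a generic form (an illustration, not the paper's exact formulation) is

        E(\mathbf{d}) = \sum_i \big(d_i - z_i\big)^2 \;+\; \lambda \sum_{(i,j)\in\mathcal{N}} w_{ij}\,\big(d_i - d_j\big)^2

    where z_i is the CNN's unary depth prediction at pixel i, N is the set of neighboring pixel pairs, and the weights w_ij encode semantic and local-appearance similarity; using E as the loss encourages depth maps that track the CNN output while remaining smooth within coherent regions.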

  2. DEEP U BAND AND R IMAGING OF GOODS-SOUTH: OBSERVATIONS, DATA REDUCTION AND FIRST RESULTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nonino, M.; Cristiani, S.; Vanzella, E.

    2009-08-01

    We present deep imaging in the U band covering an area of 630 arcmin² centered on the southern field of the Great Observatories Origins Deep Survey (GOODS). The data were obtained with the VIMOS instrument at the European Southern Observatory (ESO) Very Large Telescope. The final images reach a magnitude limit U_lim ≈ 29.8 (AB, 1σ, in a 1″ radius aperture), and have good image quality, with full width at half-maximum ≈0.″8. They are significantly deeper than previous U-band images available for the GOODS fields, and better match the sensitivity of other multiwavelength GOODS photometry. The deeper U-band data yield significantly improved photometric redshifts, especially in key redshift ranges such as 2 < z < 4, and deeper color-selected galaxy samples, e.g., Lyman break galaxies at z ≈ 3. We also present the co-addition of archival ESO VIMOS R-band data, with R_lim ≈ 29 (AB, 1σ, 1″ radius aperture), and image quality ≈0.″75. We discuss the strategies for the observations and data reduction, and present the first results from the analysis of the co-added images.

  3. DeepNeuron: an open deep learning toolbox for neuron tracing.

    PubMed

    Zhou, Zhi; Kuo, Hsien-Chi; Peng, Hanchuan; Long, Fuhui

    2018-06-06

    Reconstructing three-dimensional (3D) morphology of neurons is essential for understanding brain structures and functions. Over the past decades, a number of neuron tracing tools, including manual, semiautomatic, and fully automatic approaches, have been developed to extract and analyze 3D neuronal structures. Nevertheless, most of them were developed by coding certain rules to extract and connect the structural components of a neuron, showing limited performance on complicated neuron morphology. Recently, deep learning has outperformed many other machine learning methods in a wide range of image analysis and computer vision tasks. Here we developed a new open-source toolbox, DeepNeuron, which uses deep learning networks to learn features and rules from data and trace neuron morphology in light microscopy images. DeepNeuron provides a family of modules to solve basic yet challenging problems in neuron tracing. These problems include, but are not limited to: (1) detecting neuron signal under different image conditions, (2) connecting neuronal signals into tree(s), (3) pruning and refining tree morphology, (4) quantifying the quality of morphology, and (5) classifying dendrites and axons in real time. We have tested DeepNeuron using light microscopy images, including bright-field and confocal images of human and mouse brain, on which DeepNeuron demonstrates robustness and accuracy in neuron tracing.

  4. The Herschel Lensing Survey (HLS): HST Frontier Field Coverage

    NASA Astrophysics Data System (ADS)

    Egami, Eiichi

    2015-08-01

    The Herschel Lensing Survey (HLS; PI: Egami) is a large Far-IR/Submm imaging survey of massive galaxy clusters using the Herschel Space Observatory. Its main goal is to detect and study IR/Submm galaxies that are below the nominal confusion limit of Herschel by taking advantage of the strong gravitational lensing power of massive galaxy clusters. HLS has obtained deep PACS (100/160 um) and SPIRE (250/350/500 um) images for 54 cluster fields (HLS-deep) as well as shallower but nearly confusion-limited SPIRE-only images for 527 cluster fields (HLS-snapshot) with a total observing time of ~420 hours. Extensive multi-wavelength follow-up studies are currently on-going with a variety of observing facilities including ALMA.Here, I will focus on the analysis of the deep Herschel PACS/SPIRE images obtained for the 6 HST Frontier Fields (5 observed by HLS-deep; 1 observed by the Herschel GT programs). The Herschel/SPIRE maps are wide enough to cover the Frontier-Field parallel pointings, and we have detected a total of ~180 sources, some of which are strongly lensed. I will present the sample and discuss the properties of these Herschel-detected dusty star-forming galaxies (DSFGs) identified in the Frontier Fields. Although the majority of these Herschel sources are at moderate redshift (z<3), a small number of extremely high-redshift (z>6) candidates can be identified as "Herschel dropouts" when combined with longer-wavelength data. We have also identified ~40 sources as likely cluster members, which will allow us to study the properties of DSFGs in the dense cluster environment.A great legacy of our HLS project will be the extensive multi-wavelength database that incorporates most of the currently available data/information for the fields of the Frontier-Field, CLASH, and other HLS clusters (e.g., HST/Spitzer/Herschel images, spectroscopic/photometric redshifts, lensing models, best-fit SED models etc.). Provided with a user-friendly GUI and a flexible search engine, this database should serve as a powerful tool for a variety of projects including those with ALMA and JWST in the future. I will conclude by introducing this HLS database system.

  5. Hubble Space Telescope Medium Deep Survey. 2: Deconvolution of Wide Field Camera field galaxy images in the 13 hour + 43 deg field

    NASA Technical Reports Server (NTRS)

    Windhorst, R. A.; Schmidtke, P. C.; Pascarelle, S. M.; Gordon, J. M.; Griffiths, R. E.; Ratnatunga, K. U.; Neuschaefer, L. W.; Ellis, R. S.; Gilmore, G.; Glazebrook, K.

    1994-01-01

    We present isophotal profiles of six faint field galaxies from some of the first deep images taken for the Hubble Space Telescope (HST) Medium Deep Survey (MDS). These have redshifts in the range z = 0.126 to 0.402. The images were taken with the Wide Field Camera (WFC) in `parallel mode' and deconvolved with the Lucy method using as the point-spread function nearby stars in the image stack. The WFC deconvolutions have a dynamic range of 16 to 20 dB (4 to 5 mag) and an effective resolution ≲0.2″ (FWHM). The multiorbit HST images allow us to trace the morphology, light profiles, and color gradients of faint field galaxies down to V ≈ 22-23 mag at sub-kpc resolution, since the redshift range covered is z = 0.1-0.4. The goals of the MDS are to study the sub-kpc scale morphology, light profiles, and color gradients for a large sample of faint field galaxies down to V ≈ 23 mag, and to trace the fraction of early- to late-type galaxies as a function of cosmic time. In this paper we study the brighter MDS galaxies in the 13 hour + 43 deg MDS field in detail, and investigate to what extent model fits with pure exponential disks or r^(1/4) bulges are justified at V ≲ 22 mag. Four of the six field galaxies have light profiles that indicate (small) inner bulges following r^(1/4) laws down to 0.2″ resolution, plus a dominant surrounding exponential disk with little or no color gradients. Two occur in a group at z = 0.401, two are barred spiral galaxies at z = 0.179 and z = 0.302, and two are rather subluminous (and edge-on) disk galaxies at z = 0.126 and z = 0.179. Our deep MDS images can detect galaxies down to V, I ≲ 25-26 mag, and demonstrate the impressive potential of HST--even with its pre-refurbished optics--to resolve morphological details in galaxies at cosmologically significant distances (V ≲ 23 mag). Since the median redshift of these galaxies is ≲0.4, the HST resolution allows us to study sub-kpc size scales at the galaxy, which cannot be done with stable images over wide fields from the best ground-based sites.
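
    The Lucy deconvolution used here (the Richardson-Lucy algorithm) is available in standard libraries; a minimal sketch with scikit-image follows, using a star cut-out as the point-spread function. The file names and iteration count are placeholders, and in recent scikit-image releases the iteration argument is named num_iter.

        from skimage import io
        from skimage.restoration import richardson_lucy

        image = io.imread("wfc_stack.tif").astype(float)      # placeholder file
        psf = io.imread("psf_star_cutout.tif").astype(float)  # nearby-star PSF
        psf /= psf.sum()                                      # PSF must sum to 1

        # Iterative Richardson-Lucy restoration; more iterations sharpen the
        # image but amplify noise. clip=False preserves astronomical flux values.
        deconvolved = richardson_lucy(image, psf, num_iter=30, clip=False)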

  6. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.

    PubMed

    Chen, Liang-Chieh; Papandreou, George; Kokkinos, Iasonas; Murphy, Kevin; Yuille, Alan L

    2018-04-01

    In this work we address the task of semantic image segmentation with deep learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields of view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but takes a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state of the art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU on the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.
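
    Atrous convolution and ASPP as described reduce to a few lines of PyTorch; the dilation rates below are illustrative (DeepLab's published rates vary by variant), and the padding is chosen so all branches stay spatially aligned.

        import torch
        import torch.nn as nn

        class ASPP(nn.Module):
            """Atrous spatial pyramid pooling: parallel dilated 3x3 convolutions.
            Dilation enlarges the field of view without adding parameters."""
            def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
                super().__init__()
                self.branches = nn.ModuleList([
                    nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r)
                    for r in rates
                ])
                self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

            def forward(self, x):
                y = torch.cat([b(x) for b in self.branches], dim=1)
                return self.project(y)

        out = ASPP(256, 64)(torch.randn(1, 256, 33, 33))  # -> (1, 64, 33, 33)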

  7. The Chandra Deepest Fields in the Infrared: Making the Connection between Normal Galaxies and AGN

    NASA Astrophysics Data System (ADS)

    Grogin, N. A.; Ferguson, H. C.; Dickinson, M. E.; Giavalisco, M.; Mobasher, B.; Padovani, P.; Williams, R. E.; Chary, R.; Gilli, R.; Heckman, T. M.; Stern, D.; Winge, C.

    2001-12-01

    Within each of the two Chandra Deepest Fields (CDFs), there are ~10'×15' regions targeted for non-proprietary, deep SIRTF 3.6-24 μm imaging as part of the Great Observatories Origins Deep Survey (GOODS) Legacy program. In advance of the SIRTF observations, the GOODS team has recently begun obtaining non-proprietary, deep ground-based optical and near-IR imaging and spectroscopy over these regions, which contain virtually all of the current ≈1 Msec CXO coverage in the CDF North and much of the ≈1 Msec coverage in the CDF South. In particular, the planned depth of the near-IR imaging (J_AB ~ 25.3; H_AB ~ 24.8; K_AB ~ 24.4) combined with the deep Chandra data can allow us to trace the evolutionary connection between normal galaxies, starbursts, and AGN out to z ~ 1 and beyond. We describe our CDF Archival program, which is integrating these GOODS-supporting observations together with the CDF archival data and other publicly-available datasets in these regions to create a multi-wavelength deep imaging and spectroscopic database available to the entire community. We highlight progress toward near-term science goals of this program, including: (a) pushing constraints on the redshift distribution and spectral-energy distributions of the faintest X-ray sources to the deepest possible levels via photometric redshifts; and (b) better characterizing the heavily-obscured and the high-redshift populations via both a near-IR search for optically-undetected CDF X-ray sources and also X-ray stacking analyses on the CXO-undetected EROs in these fields.

  8. Deep CFHT Y-band Imaging of VVDS-F22 Field. II. Quasar Selection and Quasar Luminosity Function

    NASA Astrophysics Data System (ADS)

    Yang, Jinyi; Wu, Xue-Bing; Liu, Dezi; Fan, Xiaohui; Yang, Qian; Wang, Feige; McGreer, Ian D.; Fan, Zuhui; Yuan, Shuo; Shan, Huanyuan

    2018-03-01

    We report the results of a faint quasar survey in a one-square-degree field. The aim is to test the Y-K/g-z and J-K/i-Y color selection criteria for quasars at faint magnitudes, to obtain a complete sample of quasars based on deep optical and near-infrared color-color selection, and to measure the faint end of the quasar luminosity function (QLF) over a wide redshift range. We carried out a quasar survey based on the Y-K/g-z and J-K/i-Y quasar selection criteria, using the deep Y-band data obtained from our CFHT/WIRCam Y-band images in a two-degree field within the F22 field of the VIMOS VLT deep survey, optical co-added data from Sloan Digital Sky Survey Stripe 82, and deep near-infrared data from the UKIDSS Deep Extragalactic Survey in the same field. We discovered 25 new quasars at 0.5 < z < 4.5 and i < 22.5 mag within the one-square-degree field. The survey significantly increases the number of faint quasars in this field, especially at z ~ 2-3. It confirms that our color selections are highly complete over a wide redshift range (z < 4.5), especially over the quasar number density peak at z ~ 2-3, even for faint quasars. Combining all previously known quasars and new discoveries, we construct a sample with 109 quasars and measure the binned QLF and parametric QLF. Although the sample is small, our results agree with pure luminosity evolution at lower redshift and a luminosity evolution and density evolution model at redshift z > 2.5.
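
    Color criteria of the Y-K/g-z type reduce to simple cuts in a color-color plane; a schematic numpy version follows, in which the slope and intercept are placeholders rather than the paper's calibrated values.

        import numpy as np

        def select_quasar_candidates(g, z, Y, K):
            """Keep sources above an illustrative line in the Y-K vs. g-z plane,
            where quasars separate from the stellar locus; the coefficients
            below are placeholders, not the published criteria."""
            return (Y - K) > 0.5 * (g - z) + 0.8

        g, z, Y, K = (np.random.uniform(18, 23, 1000) for _ in range(4))
        candidates = select_quasar_candidates(g, z, Y, K)  # boolean mask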

  9. THE ALLEN TELESCOPE ARRAY Pi GHz SKY SURVEY. III. THE ELAIS-N1, COMA, AND LOCKMAN HOLE FIELDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croft, Steve; Bower, Geoffrey C.; Whysong, David

    2013-01-10

    We present results from a total of 459 repeated 3.1 GHz radio continuum observations (of which 379 were used in a search for transient sources) of the ELAIS-N1, Coma, Lockman Hole, and NOAO Deep Wide Field Survey fields as part of the Pi GHz Sky Survey. The observations were taken approximately once per day between 2009 May and 2011 April. Each image covers 11.8 square degrees and has 100″ FWHM resolution. Deep images for each of the four fields have rms noise between 180 and 310 μJy, and the corresponding catalogs contain ~200 sources in each field. Typically 40-50 of these sources are detected in each single-epoch image. This represents one of the shortest cadence, largest area, multi-epoch surveys undertaken at these frequencies. We compare the catalogs generated from the combined images to those from individual epochs, and from monthly averages, as well as to legacy surveys. We undertake a search for transients, with particular emphasis on excluding false positive sources. We find no confirmed transients, defined here as sources that can be shown to have varied by at least a factor of 10. However, we find one source that brightened in a single-epoch image to at least six times the upper limit from the corresponding deep image. We also find a source associated with a z = 0.6 quasar which appears to have brightened by a factor ~3 in one of our deep images, when compared to catalogs from legacy surveys. We place new upper limits on the number of transients brighter than 10 mJy: fewer than 0.08 transients deg⁻² with characteristic timescales of months to years; fewer than 0.02 deg⁻² with timescales of months; and fewer than 0.009 deg⁻² with timescales of days. We also plot upper limits as a function of flux density for transients on the same timescales.

  10. Deep learning methods to guide CT image reconstruction and reduce metal artifacts

    NASA Astrophysics Data System (ADS)

    Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Zhou, Ye; Zhang, Junping; Wang, Ge

    2017-03-01

    The rapidly-rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.
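
    The first task, the learned stopping rule, amounts to a reconstruction loop in which a trained network scores each intermediate image and iteration halts when the score plateaus. A schematic sketch follows; quality_net, one_update, initial_estimate, and the tolerance are hypothetical stand-ins, not the paper's implementation.

        # Schematic: stop iterating when a learned quality score plateaus,
        # instead of using a fixed error threshold or iteration count.
        def reconstruct(sinogram, quality_net, one_update, initial_estimate,
                        max_iters=200, tol=1e-4):
            image = initial_estimate(sinogram)       # e.g., an FBP seed
            prev_score = quality_net(image)          # learned numerical observer
            for _ in range(max_iters):
                image = one_update(image, sinogram)  # one iterative-recon sweep
                score = quality_net(image)
                if score - prev_score < tol:         # no further gain: stop
                    break
                prev_score = score
            return image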

  11. ARC-2012-ACD12-0020-001

    NASA Image and Video Library

    2012-02-02

    Stein_Sun: Visualization of the complex magnetic field produced as magnetic flux rises toward the Sun's surface from the deep convection zone. The image shows a snapshot of how the magnetic field has evolved two days from the time uniform, untwisted, horizontal magnetic field started to be advected by inflows at the bottom (20 megameters deep). Axes are in megameters, and the color scale shows the log of the magnetic field strength. Credit: Robert Stein, Michigan State University; Tim Sandstrom, NASA/Ames

  12. Accuracy of deep learning, a machine-learning technology, using ultra-wide-field fundus ophthalmoscopy for detecting rhegmatogenous retinal detachment.

    PubMed

    Ohsugi, Hideharu; Tabuchi, Hitoshi; Enno, Hiroki; Ishitobi, Naofumi

    2017-08-25

    Rhegmatogenous retinal detachment (RRD) is a serious condition that can lead to blindness; however, it is highly treatable with timely and appropriate treatment. Thus, early diagnosis and treatment of RRD is crucial. In this study, we applied deep learning, a machine-learning technology, to detect RRD using ultra-wide-field fundus images and investigated its performance. In total, 411 images (329 for training and 82 for grading) from 407 RRD patients and 420 images (336 for training and 84 for grading) from 238 non-RRD patients were used in this study. The deep learning model demonstrated a high sensitivity of 97.6% [95% confidence interval (CI), 94.2-100%] and a high specificity of 96.5% (95% CI, 90.2-100%), and the area under the curve was 0.988 (95% CI, 0.981-0.995). This model can improve medical care in remote areas where eye clinics are not available by using ultra-wide-field fundus ophthalmoscopy for the accurate diagnosis of RRD. Early diagnosis of RRD can prevent blindness.
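
    For reference, the reported figures are standard binary-classification metrics and are reproducible with scikit-learn; the labels and scores below are dummy values, not the study's data.

        import numpy as np
        from sklearn.metrics import confusion_matrix, roc_auc_score

        y_true = np.array([1, 1, 0, 0, 1, 0])   # 1 = RRD, 0 = non-RRD (dummy)
        y_score = np.array([0.9, 0.8, 0.3, 0.2, 0.7, 0.4])  # model outputs

        auc = roc_auc_score(y_true, y_score)     # area under the ROC curve
        y_pred = (y_score > 0.5).astype(int)     # threshold at 0.5
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
        sensitivity = tp / (tp + fn)             # true-positive rate
        specificity = tn / (tn + fp)             # true-negative rate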

  13. Corpus callosum segmentation using deep neural networks with prior information from multi-atlas images

    NASA Astrophysics Data System (ADS)

    Park, Gilsoon; Hong, Jinwoo; Lee, Jong-Min

    2018-03-01

    In the human brain, the Corpus Callosum (CC) is the largest white matter structure, connecting the right and left hemispheres. Structural features such as the shape and size of the CC in the midsagittal plane are of great significance for analyzing various neurological diseases, for example Alzheimer's disease, autism, and epilepsy. For quantitative and qualitative studies of the CC in brain MR images, robust segmentation of the CC is important. In this paper, we present a novel method for CC segmentation. Our approach is based on deep neural networks and prior information generated from multi-atlas images. Deep neural networks have recently shown good performance in various image processing fields; in particular, convolutional neural networks (CNN) have shown outstanding performance for classification and segmentation in medical imaging. We used convolutional neural networks for CC segmentation. Multi-atlas-based segmentation models have been widely used in medical image segmentation because an atlas, consisting of MR images and the corresponding manual segmentations of the target structure, carries powerful information about the structure we want to segment. We incorporated prior information, such as the location and intensity distribution of the target structure (i.e., the CC), derived from multi-atlas images into the CNN training process to further improve training. The CNN with prior information showed better segmentation performance than the CNN without it.
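
    One common way to inject such a multi-atlas prior into CNN training, consistent with the description above, is to stack the prior probability map as an extra input channel; the paper does not spell out its exact mechanism, so the sketch below is an assumption.

        import torch
        import torch.nn as nn

        # mri:   (batch, 1, H, W) midsagittal MR slice
        # prior: (batch, 1, H, W) CC location/intensity prior from registered atlases
        mri = torch.randn(2, 1, 128, 128)
        prior = torch.rand(2, 1, 128, 128)

        x = torch.cat([mri, prior], dim=1)            # two-channel network input
        first_conv = nn.Conv2d(2, 16, 3, padding=1)   # in_channels=2 admits the prior
        features = first_conv(x)                      # rest of the CNN follows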

  14. Detection of Thermal Erosion Gullies from High-Resolution Images Using Deep Learning

    NASA Astrophysics Data System (ADS)

    Huang, L.; Liu, L.; Jiang, L.; Zhang, T.; Sun, Y.

    2017-12-01

    Thermal erosion gullies, one type of thermokarst landform, develop due to thawing of ice-rich permafrost. Mapping the location and extent of thermal erosion gullies can help us understand the spatial distribution of thermokarst landforms and their temporal evolution. Remote sensing images provide an effective way of mapping thermokarst landforms, especially thermokarst lakes. However, thermal erosion gullies are challenging to map from remote sensing images due to their small sizes and significant variations in geometric/radiometric properties. It is feasible to identify these features manually, as a few previous studies have done, but manual methods are labor-intensive and therefore cannot be used for a large study area. In this work, we conduct automatic mapping of thermal erosion gullies from high-resolution images by using deep learning. Our study area is located in Eboling Mountain (Qinghai, China), where, within a 6 km² peatland area underlain by ice-rich permafrost, at least 20 thermal erosion gullies are well developed. The image used is a 15-cm-resolution Digital Orthophoto Map (DOM) generated in July 2016. First, we extracted 14 gully patches and ten non-gully patches as training data and performed image augmentation. Next, we fine-tuned the pre-trained model of DeepLab, a deep-learning algorithm for semantic image segmentation based on Deep Convolutional Neural Networks. Then, we performed inference on the whole DOM and obtained intermediate results in the form of polygons for all identified gullies. At last, we removed misidentified polygons based on a few pre-set criteria on the size and shape of each polygon. Our final results include 42 polygons. Validated against field measurements using GPS, most of the gullies are detected correctly. There are 20 false detections due to the small number and low quality of training images. We also found three new gullies that were missed in the field observations. This study shows that (1) despite a challenging mapping task, DeepLab can detect small, irregular-shaped thermal erosion gullies with high accuracy; (2) automatic detection is critical for mapping thermal erosion gullies, since manual mapping or field work may miss some targets even in a relatively small region; and (3) the quantity and quality of training data are crucial for detection accuracy.
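
    The final filtering step, removing misidentified polygons by pre-set size and shape criteria, can be sketched with shapely; the thresholds below are illustrative placeholders, not the study's values.

        import math
        from shapely.geometry import Polygon

        def keep_gully(poly, min_area=20.0, max_compactness=0.5):
            """Keep candidates that are large enough and elongated (gully-like).
            Compactness = 4*pi*area/perimeter**2 is 1.0 for a circle and low
            for elongated shapes; both thresholds are placeholders."""
            compactness = 4 * math.pi * poly.area / (poly.length ** 2)
            return poly.area >= min_area and compactness <= max_compactness

        candidates = [Polygon([(0, 0), (30, 0), (30, 2), (0, 2)])]  # dummy polygon
        gullies = [p for p in candidates if keep_gully(p)]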

  15. Quantum dots versus organic fluorophores in fluorescent deep-tissue imaging--merits and demerits.

    PubMed

    Bakalova, Rumiana; Zhelev, Zhivko; Gadjeva, Veselina

    2008-12-01

    The use of fluorescence in deep-tissue imaging has been rapidly expanding in the last several years. Progress in fluorescent molecular probes and fluorescent imaging techniques gives an opportunity to detect single cells and even molecular targets in live organisms. Highly sensitive, high-speed fluorescent molecular sensors and detection devices allow the application of fluorescence in functional imaging. With the development of novel bright fluorophores based on nanotechnologies and 3D fluorescence scanners with high spatial and temporal resolution, fluorescent imaging has the potential to become an alternative to other non-invasive imaging techniques such as magnetic resonance imaging, positron-emission tomography, X-ray, and computed tomography. Fluorescent imaging also has the potential to give a real map of human anatomy and physiology. The current review outlines the advantages of fluorescent nanoparticles over conventional organic dyes in deep-tissue imaging in vivo and defines the major requirements for the "perfect fluorophore". The analysis proceeds from the basic principles of fluorescence and major characteristics of fluorophores, light-tissue interactions, and major limitations of fluorescent deep-tissue imaging. The article is addressed to a broad readership - from specialists in this field to university students.

  16. Calibration of HST wide field camera for quantitative analysis of faint galaxy images

    NASA Technical Reports Server (NTRS)

    Ratnatunga, Kavan U.; Griffiths, Richard E.; Casertano, Stefano; Neuschaefer, Lyman W.; Wyckoff, Eric W.

    1994-01-01

    We present the methods adopted to optimize the calibration of images obtained with the Hubble Space Telescope (HST) Wide Field Camera (WFC) (1991-1993). Our main goal is to improve quantitative measurement of faint images, with special emphasis on the faint (I ≈ 20-24 mag) stars and galaxies observed as a part of the Medium-Deep Survey. Several modifications to the standard calibration procedures have been introduced, including improved bias and dark images, and a new supersky flatfield obtained by combining a large number of relatively object-free Medium-Deep Survey exposures of random fields. The supersky flat has a pixel-to-pixel rms error of about 2.0% in F555W and of 2.4% in F785LP; large-scale variations are smaller than 1% rms. Overall, our modifications improve the quality of faint images with respect to the standard calibration by about a factor of five in photometric accuracy and about 0.3 mag in sensitivity, corresponding to about a factor of two in observing time. The relevant calibration images have been made available to the scientific community.
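
    The supersky flat described above is essentially a sigma-clipped median of sky-normalized, object-free exposures; a numpy sketch follows (frame loading is elided, and the clipping scheme is a simplified stand-in for the actual pipeline).

        import numpy as np

        def supersky_flat(frames, clip_sigma=3.0):
            """frames: (N, H, W) stack of relatively object-free exposures.
            Normalize each frame by its sky level, then median-combine with a
            crude sigma clip so residual objects do not bias the flat."""
            stack = np.array([f / np.median(f) for f in frames])
            med = np.median(stack, axis=0)
            std = np.std(stack, axis=0)
            keep = np.abs(stack - med) < clip_sigma * std
            clipped = np.where(keep, stack, np.nan)
            flat = np.nanmedian(clipped, axis=0)
            return flat / np.median(flat)   # unit-normalized flatfield

        flat = supersky_flat(np.random.normal(100.0, 5.0, size=(24, 64, 64)))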

  17. Making Data Mobile: The Hubble Deep Field Academy iPad app

    NASA Astrophysics Data System (ADS)

    Eisenhamer, Bonnie; Cordes, K.; Davis, S.; Eisenhamer, J.

    2013-01-01

    Many school districts are purchasing iPads for educators and students to use as learning tools in the classroom. Educators often prefer these devices to desktop and laptop computers because they offer portability and an intuitive design, while having a larger screen size when compared to smart phones. As a result, we began investigating the potential of adapting online activities for use on Apple’s iPad to enhance the dissemination and usage of these activities in instructional settings while continuing to meet educators’ needs. As a pilot effort, we are developing an iPad app for the “Hubble Deep Field Academy” - an activity that is currently available online and commonly used by middle school educators. The Hubble Deep Field Academy app features the HDF-North image while centering on the theme of how scientists use light to explore and study the universe. It also includes features such as embedded links to vocabulary, images and videos, teacher background materials, and readings about Hubble’s other deep field surveys. It is our goal to impact students’ engagement in STEM-related activities, while enhancing educators’ usage of NASA data via new and innovative mediums. We also hope to develop and share lessons learned with the E/PO community that can be used to support similar projects. We plan to test the Hubble Deep Field Academy app during the school year to determine if this new activity format is beneficial to the education community.

  18. Confocal multispot microscope for fast and deep imaging in semicleared tissues

    NASA Astrophysics Data System (ADS)

    Adam, Marie-Pierre; Müllenbroich, Marie Caroline; Di Giovanna, Antonino Paolo; Alfieri, Domenico; Silvestri, Ludovico; Sacconi, Leonardo; Pavone, Francesco Saverio

    2018-02-01

    Although perfectly transparent specimens are imaged faster with light-sheet microscopy, less transparent samples are often imaged with two-photon microscopy leveraging its robustness to scattering, at the price of increased acquisition times. Clearing methods capable of rendering strongly scattering samples such as brain tissue perfectly transparent are often complex, costly, and time intensive, even though for many applications a slightly lower level of tissue transparency is sufficient and easily achieved with simpler and faster methods. Here, we present a microscope type that has been geared toward the imaging of semicleared tissue by combining multispot two-photon excitation with rolling-shutter wide-field detection to image deep and fast inside semicleared mouse brain. We present a theoretical and experimental evaluation of the point spread function and contrast as a function of shutter size. Finally, we demonstrate microscope performance in fixed brain slices by imaging dendritic spines up to 400 μm deep.

  19. Ultra-deep Ks-band Imaging of the Hubble Frontier Fields

    NASA Astrophysics Data System (ADS)

    Brammer, Gabriel B.; Marchesini, Danilo; Labbé, Ivo; Spitler, Lee; Lange-Vagle, Daniel; Barker, Elizbeth A.; Tanaka, Masayuki; Fontana, Adriano; Galametz, Audrey; Ferré-Mateu, Anna; Kodama, Tadayuki; Lundgren, Britt; Martis, Nicholas; Muzzin, Adam; Stefanon, Mauro; Toft, Sune; van der Wel, Arjen; Vulcani, Benedetta; Whitaker, Katherine E.

    2016-09-01

    We present an overview of the “KIFF” project, which provides ultra-deep Ks-band imaging of all six of the Hubble Frontier Fields clusters, Abell 2744, MACS-0416, Abell S1063, Abell 370, MACS-0717, and MACS-1149. All of these fields have recently been observed with large allocations of Directors’ Discretionary Time with the Hubble and Spitzer telescopes, covering 0.4 < λ < 1.6 μm and 3.6-4.5 μm, respectively. VLT/HAWK-I integrations of the first four fields reach 5σ limiting depths of Ks ~ 26.0 (AB, point sources) and have excellent image quality (FWHM ~ 0.″4). The MACS-0717 and MACS-1149 fields are observable from the northern hemisphere, and shorter Keck/MOSFIRE integrations on those fields reach limiting depths of Ks = 25.5 and 25.1, with a seeing FWHM of ~0.″4 and 0.″5. In all cases the Ks-band mosaics cover the primary cluster and parallel HST/ACS+WFC3 fields. The total area of the Ks-band coverage is 490 arcmin². The Ks band at 2.2 μm crucially fills the gap between the reddest HST filter (1.6 μm ~ H band) and the IRAC 3.6 μm passband. While reaching the full depths of the space-based imaging is not currently feasible from the ground, the deep Ks-band images provide important constraints on both the redshifts and the stellar population properties of galaxies extending well below the characteristic stellar mass across most of the age of the universe, down to and including the redshifts of the targeted galaxy clusters (z ≲ 0.5). Reduced, aligned mosaics of all six survey fields are provided.

  20. Deep Learning for Image-Based Cassava Disease Detection.

    PubMed

    Ramcharan, Amanda; Baranowski, Kelsee; McCloskey, Peter; Ahmed, Babuali; Legg, James; Hughes, David P

    2017-01-01

    Cassava is the third largest source of carbohydrates for human food in the world but is vulnerable to virus diseases, which threaten to destabilize food security in sub-Saharan Africa. Novel methods of cassava disease detection are needed to support improved control which will prevent this crisis. Image recognition offers both a cost effective and scalable technology for disease detection. New deep learning models offer an avenue for this technology to be easily deployed on mobile devices. Using a dataset of cassava disease images taken in the field in Tanzania, we applied transfer learning to train a deep convolutional neural network to identify three diseases and two types of pest damage (or lack thereof). The best trained model accuracies were 98% for brown leaf spot (BLS), 96% for red mite damage (RMD), 95% for green mite damage (GMD), 98% for cassava brown streak disease (CBSD), and 96% for cassava mosaic disease (CMD). The best model achieved an overall accuracy of 93% for data not used in the training process. Our results show that the transfer learning approach for image recognition of field images offers a fast, affordable, and easily deployable strategy for digital plant disease detection.

  1. Intervertebral disc detection in X-ray images using faster R-CNN.

    PubMed

    Ruhan Sa; Owens, William; Wiegand, Raymond; Studin, Mark; Capoferri, Donald; Barooha, Kenneth; Greaux, Alexander; Rattray, Robert; Hutton, Adam; Cintineo, John; Chaudhary, Vipin

    2017-07-01

    Automatic identification of specific osseous landmarks on spinal radiographs can be used to automate calculations for correcting ligament instability and injury, which affect 75% of patients injured in motor vehicle accidents. In this work, we propose to use a deep learning-based object detection method as the first step towards identifying landmark points in lateral lumbar X-ray images. The significant breakthrough of deep learning technology has made it a prevailing choice for perception-based applications; however, the lack of large annotated training datasets has brought challenges to utilizing the technology in the medical image processing field. In this work, we propose to fine-tune a deep network, Faster R-CNN, a state-of-the-art deep detection network in the natural image domain, using small annotated clinical datasets. In our experiments we show that, by using only 81 lateral lumbar X-ray training images, one can achieve much better performance than a traditional sliding-window detection method based on handcrafted features. Furthermore, we fine-tuned the network using 974 training images and tested it on 108 images, achieving an average precision of 0.905 with an average computation time of 3 seconds per image, greatly outperforming traditional methods in terms of accuracy and efficiency.
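
    Fine-tuning a pre-trained Faster R-CNN on a small annotated set is straightforward with torchvision; in the sketch below, the class count, image size, and dummy target are assumptions for illustration.

        import torch
        import torchvision
        from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

        # Start from a detector pre-trained on natural images, then swap in a
        # new box-prediction head sized for our landmark classes.
        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
        num_classes = 2  # background + intervertebral disc (assumed labeling)
        in_feats = model.roi_heads.box_predictor.cls_score.in_features
        model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_classes)

        # One illustrative training step on dummy data:
        model.train()
        images = [torch.rand(3, 512, 512)]
        targets = [{"boxes": torch.tensor([[100.0, 120.0, 180.0, 170.0]]),
                    "labels": torch.tensor([1])}]
        losses = model(images, targets)   # dict of RPN and ROI-head losses
        sum(losses.values()).backward()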

  2. The JWST North Ecliptic Pole Survey Field for Time-domain Studies

    NASA Astrophysics Data System (ADS)

    Jansen, Rolf A.; Alpaslan, Mehmet; Ashby, Matthew; Ashcraft, Teresa; Cohen, Seth H.; Condon, James J.; Conselice, Christopher; Ferrara, Andrea; Frye, Brenda L.; Grogin, Norman A.; Hammel, Heidi B.; Hathi, Nimish P.; Joshi, Bhavin; Kim, Duho; Koekemoer, Anton M.; Mechtley, Matt; Milam, Stefanie N.; Rodney, Steven A.; Rutkowski, Michael J.; Strolger, Louis-Gregory; Trujillo, Chadwick A.; Willmer, Christopher; Windhorst, Rogier A.; Yan, Haojing

    2017-01-01

    The JWST North Ecliptic Pole (NEP) Survey field is located within JWST's northern Continuous Viewing Zone, will span ~14′ in diameter (~10′ with NIRISS coverage) and will be roughly circular in shape (initially sampled during Cycle 1 at 4 distinct orientations with JWST/NIRCam's 4.4′×2.2′ FoV —the JWST “windmill”) and will have NIRISS slitless grism spectroscopy taken in parallel, overlapping an alternate NIRCam orientation. This is the only region in the sky where JWST can observe a clean extragalactic deep survey field (free of bright foreground stars and with low Galactic foreground extinction AV) at arbitrary cadence or at arbitrary orientation. This will crucially enable a wide range of new and exciting time-domain science, including high redshift transient searches and monitoring (e.g., SNe), variability studies from Active Galactic Nuclei to brown dwarf atmospheres, as well as proper motions of extreme scattered Kuiper Belt and Oort Cloud Objects, and of nearby Galactic brown dwarfs, low-mass stars, and ultracool white dwarfs. We therefore welcome and encourage follow-up through GO programs of the initial GTO observations to realize its potential as a JWST time-domain community field. The JWST NEP Survey field was selected from an analysis of WISE 3.4+4.6 micron, 2MASS JHKs, and SDSS ugriz source counts and of Galactic foreground extinction, and is one of very few such ~10′ fields that are devoid of sources brighter than m_AB = 16 mag. We have secured deep (m_AB ~ 26 mag) wide-field (~23′×25′) Ugrz images of this field and its surroundings with LBT/LBC. We also expect that deep MMT/MMIRS YJHK images, deep 8-12 GHz VLA radio observations (pending), and possibly HST ACS/WFC and WFC3/UVIS ultraviolet-visible images will be available before JWST launches in Oct 2018.

  3. The JWST North Ecliptic Pole Survey Field for Time-domain Studies

    NASA Astrophysics Data System (ADS)

    Jansen, Rolf A.; Webb Medium Deep Fields IDS GTO Team, the NEPTDS-VLA/VLBA Team, and the NEPTDS-Chandra Team

    2017-06-01

    The JWST North Ecliptic Pole (NEP) Survey field is located within JWST's northern Continuous Viewing Zone, will span ~14′ in diameter (~10′ with NIRISS coverage) and will be roughly circular in shape (initially sampled during Cycle 1 at 4 distinct orientations with JWST/NIRCam's 4.4′×2.2′ FoV —the JWST "windmill") and will have NIRISS slitless grism spectroscopy taken in parallel, overlapping an alternate NIRCam orientation. This is the only region in the sky where JWST can observe a clean extragalactic deep survey field (free of bright foreground stars and with low Galactic foreground extinction AV) at arbitrary cadence or at arbitrary orientation. This will crucially enable a wide range of new and exciting time-domain science, including high redshift transient searches and monitoring (e.g., SNe), variability studies from Active Galactic Nuclei to brown dwarf atmospheres, as well as proper motions of extreme scattered Kuiper Belt and Oort Cloud Objects, and of nearby Galactic brown dwarfs, low-mass stars, and ultracool white dwarfs. We therefore welcome and encourage follow-up through GO programs of the initial GTO observations to realize its potential as a JWST time-domain community field. The JWST NEP Survey field was selected from an analysis of WISE 3.4+4.6 μm, 2MASS JHKs, and SDSS ugriz source counts and of Galactic foreground extinction, and is one of very few such ~10′ fields that are devoid of sources brighter than m_AB = 16 mag. We have secured deep (m_AB ~ 26 mag) wide-field (~23′×25′) Ugrz images of this field and its surroundings with LBT/LBC. We also expect that deep MMT/MMIRS YJHK images, deep 3-4.5 GHz VLA and VLBA radio observations, and possibly HST ACS/WFC and WFC3/UVIS ultraviolet-visible (pending) and Chandra/ACIS X-ray (pending) images will be available before JWST launches in Oct 2018.

  4. Infrared Faint Radio Sources in the Extended Chandra Deep Field South

    NASA Astrophysics Data System (ADS)

    Huynh, Minh T.

    2009-01-01

    Infrared-Faint Radio Sources (IFRSs) are a class of radio objects found in the Australia Telescope Large Area Survey (ATLAS) which have no observable counterpart in the Spitzer Wide-area Infrared Extragalactic Survey (SWIRE). The extended Chandra Deep Field South now has even deeper Spitzer imaging (3.6 to 70 micron) from a number of Legacy surveys. We report the detections of two IFRS sources in IRAC images. The non-detection of two other IFRSs allows us to constrain the source type. Detailed modeling of the SED of these objects shows that they are consistent with high redshift AGN (z > 2).

  5. Stability of deep features across CT scanners and field of view using a physical phantom

    NASA Astrophysics Data System (ADS)

    Paul, Rahul; Shafiq-ul-Hassan, Muhammad; Moros, Eduardo G.; Gillies, Robert J.; Hall, Lawrence O.; Goldgof, Dmitry B.

    2018-02-01

    Radiomics is the process of analyzing radiological images by extracting quantitative features for monitoring and diagnosis of various cancers. Analyzing images acquired from different medical centers is confounded by many choices in acquisition and reconstruction parameters and by differences among device manufacturers. Consequently, scanning the same patient or phantom using various acquisition/reconstruction parameters, as well as different scanners, may result in different feature values. To further evaluate this issue, in this study, CT images from a physical radiomic phantom were used. Recent studies showed that some quantitative features were dependent on voxel size and that this dependency could be reduced or removed by an appropriate normalization factor. Deep features extracted from a convolutional neural network may also provide additional features for image analysis. Using a transfer learning approach, we obtained deep features from three convolutional neural networks pre-trained on color camera images. We then examined the dependency of deep features on image pixel size. We found that some deep features were pixel-size dependent, and to remove this dependency we proposed two effective normalization approaches. To analyze the effects of normalization, a threshold was applied to each feature, based on the calculated standard deviation and the average distance from a best-fit horizontal line of the feature values across the underlying pixel sizes, before and after normalization. The inter- and intra-scanner dependency of deep features was also evaluated.

  6. The VIRMOS deep imaging survey. I. Overview, survey strategy, and CFH12K observations

    NASA Astrophysics Data System (ADS)

    Le Fèvre, O.; Mellier, Y.; McCracken, H. J.; Foucaud, S.; Gwyn, S.; Radovich, M.; Dantel-Fort, M.; Bertin, E.; Moreau, C.; Cuillandre, J.-C.; Pierre, M.; Le Brun, V.; Mazure, A.; Tresse, L.

    2004-04-01

    This paper describes the CFH12K-VIRMOS survey: a deep BVRI imaging survey in four fields totalling more than 17 deg², conducted with the 40×30 arcmin² field CFH-12K camera. The survey is intended to be a multi-purpose survey used for a variety of science goals, including surveys of very high redshift galaxies and weak lensing studies. Four high galactic latitude fields, each 2×2 deg², have been selected along the celestial equator: 0226-04, 1003+01, 1400+05, and 2217+00. The 16 deg² of the "wide" survey are covered with exposure times of 2 hr, 1.5 hr, 1 hr, and 1 hr, respectively, while the 1.3×1 deg² area of the "deep" survey at the center of the 0226-04 field is covered with exposure times of 7 hr, 4.5 hr, 3 hr, and 3 hr in BVRI, respectively. An additional area of ~2 deg² has been imaged in the 0226-04 field, corresponding to the area surveyed by the XMM-LSS program (Pierre et al. 2003). The data are pipeline-processed at the Terapix facility at the Institut d'Astrophysique de Paris to produce large mosaic images. The catalogs produced contain the positions, shapes, and total and aperture magnitudes for 2.175 million objects measured in the four areas. The limiting magnitude, measured at 5σ in a 3 arcsec diameter aperture, is I_AB = 24.8 in the "wide" areas and I_AB = 25.3 in the deep area. Careful quality control has been applied to the data to ensure internal consistency and to assess the photometric and astrometric accuracy, as described in a joint paper (McCracken et al. 2003). These catalogs are used to select targets for the VIRMOS-VLT Deep Survey, a large spectroscopic survey of the distant universe (Le Fèvre et al. 2003). First results from the CFH12K-VIRMOS survey have been published on weak lensing (e.g., van Waerbeke & Mellier 2003). Catalogs and images are available through the VIRMOS database environment under Oracle (http://www.oamp.fr/cencos). They have been open for general use since July 1st, 2003. Appendix A is only available in electronic form at http://www.edpsciences.org

  7. Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology.

    PubMed

    Sharma, Harshita; Zerbe, Norman; Klempert, Iris; Hellwich, Olaf; Hufnagl, Peter

    2017-11-01

    Deep learning using convolutional neural networks is an actively emerging field in histological image analysis. This study explores deep learning methods for computer-aided classification in H&E stained histopathological whole slide images of gastric carcinoma. An introductory convolutional neural network architecture is proposed for two computerized applications, namely, cancer classification based on immunohistochemical response and necrosis detection based on the existence of tumor necrosis in the tissue. Classification performance of the developed deep learning approach is quantitatively compared with traditional image analysis methods in digital histopathology requiring prior computation of handcrafted features, such as statistical measures using gray level co-occurrence matrix, Gabor filter-bank responses, LBP histograms, gray histograms, HSV histograms and RGB histograms, followed by random forest machine learning. Additionally, the widely known AlexNet deep convolutional framework is comparatively analyzed for the corresponding classification problems. The proposed convolutional neural network architecture reports favorable results, with an overall classification accuracy of 0.6990 for cancer classification and 0.8144 for necrosis detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
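
    The handcrafted baseline mentioned above, gray level co-occurrence matrix (GLCM) statistics followed by a random forest, can be sketched with scikit-image and scikit-learn; the property set, patch sizes, and labels below are illustrative dummies.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.ensemble import RandomForestClassifier

        def glcm_features(patch):
            """A few co-occurrence statistics for one 8-bit image patch."""
            glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            return np.hstack([graycoprops(glcm, p).ravel()
                              for p in ("contrast", "homogeneity", "energy")])

        patches = np.random.randint(0, 256, size=(40, 64, 64), dtype=np.uint8)
        labels = np.random.randint(0, 2, size=40)   # dummy class labels
        X = np.array([glcm_features(p) for p in patches])
        clf = RandomForestClassifier(n_estimators=100).fit(X, labels)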

  8. A hybrid deep learning approach to predict malignancy of breast lesions using mammograms

    NASA Astrophysics Data System (ADS)

    Wang, Yunzhi; Heidari, Morteza; Mirniaharikandehei, Seyedehnafiseh; Gong, Jing; Qian, Wei; Qiu, Yuchen; Zheng, Bin

    2018-03-01

    Applying deep learning technology to the medical imaging informatics field has recently been attracting extensive research interest. However, the limited size of medical image datasets often reduces the performance and robustness of deep learning-based computer-aided detection and/or diagnosis (CAD) schemes. In an attempt to address this technical challenge, this study aims to develop and evaluate a new hybrid deep learning-based CAD approach to predict the likelihood that a breast lesion detected on a mammogram is malignant. In this approach, a deep Convolutional Neural Network (CNN) was first pre-trained using the ImageNet dataset and served as a feature extractor. A pseudo-color Region of Interest (ROI) method was used to generate ROIs with RGB channels from the mammographic images as the input to the pre-trained deep network. The transferred CNN features from different layers of the CNN were then obtained, and a linear support vector machine (SVM) was trained for the prediction task. Applying the schemes to a dataset involving 301 suspicious breast lesions with a leave-one-case-out validation method yielded areas under the ROC curve (AUC) of 0.762 and 0.792 for the traditional CAD scheme and the proposed deep learning-based CAD scheme, respectively. An ensemble classifier that combines the classification scores generated by the two schemes yielded an improved AUC value of 0.813. The study results demonstrate the feasibility, and potentially improved performance, of applying a new hybrid deep learning approach to developing CAD schemes using a relatively small dataset of medical images.
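
    The transfer-feature-plus-SVM pipeline described above can be sketched as follows; the backbone choice, pooled layer, and dummy data are assumptions, not the paper's exact architecture.

        import torch
        import torchvision
        from sklearn.svm import LinearSVC

        # A CNN pre-trained on ImageNet serves as a frozen feature extractor;
        # a linear SVM is then trained on the extracted features.
        backbone = torchvision.models.resnet18(weights="DEFAULT")
        backbone.fc = torch.nn.Identity()   # drop the ImageNet classifier head
        backbone.eval()

        rois = torch.rand(32, 3, 224, 224)  # pseudo-color ROIs (dummy batch)
        with torch.no_grad():
            feats = backbone(rois).numpy()  # (32, 512) transferred features

        labels = (torch.rand(32) > 0.5).long().numpy()  # dummy malignancy labels
        svm = LinearSVC().fit(feats, labels)
        scores = svm.decision_function(feats)  # scores for ROC analysis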

  9. Dosimetric comparison of moderate deep inspiration breath-hold and free-breathing intensity-modulated radiotherapy for left-sided breast cancer.

    PubMed

    Chi, F; Wu, S; Zhou, J; Li, F; Sun, J; Lin, Q; Lin, H; Guan, X; He, Z

    2015-05-01

    This study performed a dosimetric comparison of moderate deep inspiration breath-hold using active breathing control and free-breathing intensity-modulated radiotherapy (IMRT) after breast-conserving surgery for left-sided breast cancer. Thirty-one patients were enrolled. One free-breathing image and two moderate deep inspiration breath-hold images were obtained. A field-in-field IMRT free-breathing plan and two field-in-field IMRT moderate deep inspiration breath-hold plans were compared for each patient in terms of the dosimetry of the target volume coverage of the glandular breast tissue and of the organs at risk. The breath-holding time under moderate deep inspiration extended significantly after breathing training (P<0.05). There was no significant difference between free-breathing and moderate deep inspiration breath-hold in target volume coverage. The volume of the ipsilateral lung in the free-breathing technique was significantly smaller than in the moderate deep inspiration breath-hold techniques (P<0.05); however, there was no significant difference between the two moderate deep inspiration breath-hold plans. There were no significant differences in target volume coverage among the three field-in-field IMRT plans (all P>0.05). The dose to the ipsilateral lung, coronary artery, and heart in the field-in-field IMRT was significantly higher for the free-breathing plan than for the two moderate deep inspiration breath-hold plans (all P<0.05); however, there was no significant difference between the two moderate deep inspiration breath-hold plans. Whole-breast field-in-field IMRT under moderate deep inspiration breath-hold with active breathing control after breast-conserving surgery for left-sided breast cancer can reduce the irradiated volume of, and dose to, the organs at risk. There are no significant differences among the various moderate deep inspiration breath-hold states in the dosimetry of the field-in-field IMRT target volume coverage and organs at risk. Copyright © 2015 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.

  10. TuMore: generation of synthetic brain tumor MRI data for deep learning based segmentation approaches

    NASA Astrophysics Data System (ADS)

    Lindner, Lydia; Pfarrkirchner, Birgit; Gsaxner, Christina; Schmalstieg, Dieter; Egger, Jan

    2018-03-01

    Accurate segmentation and measurement of brain tumors plays an important role in clinical practice and research, as it is critical for treatment planning and monitoring of tumor growth. However, brain tumor segmentation is one of the most challenging tasks in medical image analysis. Since manual segmentations are subjective, time-consuming, and neither accurate nor reliable, there exists a need for objective, robust, and fast automated segmentation methods that provide competitive performance. Therefore, deep learning-based approaches are gaining interest in the field of medical image segmentation. When the training dataset is large enough, deep learning approaches can be extremely effective, but in domains like medicine, only limited data are available in the majority of cases. For this reason, we propose a method for creating a large dataset of brain MRI (Magnetic Resonance Imaging) images containing synthetic brain tumors - glioblastomas, more specifically - and the corresponding ground truth, which can subsequently be used to train deep neural networks.

  11. Deep brain stimulation as a functional scalpel.

    PubMed

    Broggi, G; Franzini, A; Tringali, G; Ferroli, P; Marras, C; Romito, L; Maccagnano, E

    2006-01-01

    Since 1995, at the Istituto Nazionale Neurologico "Carlo Besta" in Milan (INNCB), 401 deep brain electrodes have been implanted to treat several drug-resistant neurological syndromes. More than 200 patients are still available for follow-up and therapeutic considerations. In this paper our experience is reviewed and pioneered fields are highlighted. The reported series of patients extends the use of deep brain stimulation beyond the field of Parkinson's disease to new fields such as cluster headache, disruptive behaviour, SUNCT, epilepsy, and tardive dystonia. The low complication rate, the reversibility of the procedure, and the available image-guided surgery tools will further increase the therapeutic applications of DBS. New therapeutic applications are expected for this functional scalpel.

  12. THE TAIWAN ECDFS NEAR-INFRARED SURVEY: ULTRA-DEEP J AND KS IMAGING IN THE EXTENDED CHANDRA DEEP FIELD-SOUTH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsieh, Bau-Ching; Wang, Wei-Hao; Hsieh, Chih-Chiang

    2012-12-15

    We present ultra-deep J and Ks imaging observations covering a 30′ × 30′ area of the Extended Chandra Deep Field-South (ECDFS) carried out by our Taiwan ECDFS Near-Infrared Survey (TENIS). The median 5σ limiting magnitudes for all detected objects in the ECDFS reach 24.5 and 23.9 mag (AB) for J and Ks, respectively. In the inner 400 arcmin² region where the sensitivity is more uniform, objects as faint as 25.6 and 25.0 mag are detected at 5σ. Thus, these are by far the deepest J and Ks data sets available for the ECDFS. To combine TENIS with the Spitzer IRAC data for obtaining better spectral energy distributions of high-redshift objects, we developed a novel deconvolution technique (IRACLEAN) to accurately estimate the IRAC fluxes. IRACLEAN can minimize the effect of blending in the IRAC images caused by the large point-spread functions and reduce the confusion noise. We applied IRACLEAN to the images from the Spitzer IRAC/MUSYC Public Legacy in the ECDFS survey (SIMPLE) and generated a J+Ks-selected multi-wavelength catalog including the photometry of both the TENIS near-infrared and the SIMPLE IRAC data. We publicly release the data products derived from this work, including the J and Ks images and the J+Ks-selected multi-wavelength catalog.

  13. Real-Time Blob-Wise Sugar Beets VS Weeds Classification for Monitoring Fields Using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Milioto, A.; Lottes, P.; Stachniss, C.

    2017-08-01

    UAVs are becoming an important tool for field monitoring and precision farming. A prerequisite for observing and analyzing fields is the ability to identify crops and weeds from image data. In this paper, we address the problem of detecting sugar beet plants and weeds in the field based solely on image data. We propose a system that combines vegetation detection and deep learning to obtain a high-quality classification of the vegetation in the field into value crops and weeds. We implemented and thoroughly evaluated our system on image data collected from different sugar beet fields and show that our approach allows the weeds in the field to be identified accurately.
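
    A minimal sketch of the two-stage idea, assuming an excess-green index for the vegetation-detection step (a common choice; the paper's actual detection step may differ). Each extracted blob would then be passed to the CNN for crop-vs-weed classification.

    import numpy as np
    from scipy import ndimage

    def vegetation_blobs(rgb, thresh=0.1, min_area=50):
        """Split an RGB field image into vegetation blobs via the
        excess-green index; the thresholds here are illustrative."""
        r, g, b = [rgb[..., i].astype(float) / 255.0 for i in range(3)]
        exg = 2 * g - r - b                  # excess-green vegetation index
        mask = exg > thresh                  # vegetation vs. soil
        labels, n = ndimage.label(mask)      # connected components
        blobs = []
        for i in range(1, n + 1):
            ys, xs = np.where(labels == i)
            if ys.size >= min_area:
                blobs.append(rgb[ys.min():ys.max() + 1, xs.min():xs.max() + 1])
        return blobs  # each blob is classified crop-vs-weed by the CNN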

  14. MIGHTEE: The MeerKAT International GHz Tiered Extragalactic Exploration

    NASA Astrophysics Data System (ADS)

    Taylor, A. Russ; Jarvis, Matt

    2017-05-01

    The MeerKAT telescope is the precursor of the Square Kilometre Array (SKA) mid-frequency dish array to be deployed later this decade on the African continent. MIGHTEE is one of the MeerKAT large survey projects designed to pathfind SKA key science in cosmology and galaxy evolution. Through a tiered radio continuum deep imaging project, including several fields totaling 20 square degrees imaged to microJy sensitivities and an ultra-deep image of a single 1 square degree field of view, MIGHTEE will explore dark matter and large-scale structure, the evolution of galaxies, including AGN activity and star formation as a function of cosmic time and environment, the emergence and evolution of magnetic fields in galaxies, and the magnetic counterpart to the large-scale structure of the universe.

  15. The FLARE mission: deep and wide-field 1-5um imaging and spectroscopy for the early universe: a proposal for M5 cosmic vision call

    NASA Astrophysics Data System (ADS)

    Burgarella, D.; Levacher, P.; Vives, S.; Dohlen, K.; Pascal, S.

    2016-07-01

    FLARE (First Light And Reionization Explorer) is a space mission that will be submitted to ESA (M5 call). Its primary goal (~80% of lifetime) is to identify and study the universe before the end of reionization at z > 6. A secondary objective (~20% of lifetime) is to survey star formation in the Milky Way. FLARE's strategy optimizes the science return: imaging and spectroscopic integral-field observations will be carried out simultaneously on two parallel focal planes and over very wide instantaneous fields of view. FLARE will help address two of ESA's Cosmic Vision themes: a) "How did the universe originate and what is it made of?" and b) "What are the conditions for planet formation and the emergence of life?", and more specifically, "From gas and dust to stars and planets". FLARE will give the ESA community a leading position for statistical studies of the early universe after JWST's deep but pin-hole surveys. Moreover, the instrumental development of wide-field imaging and wide-field integral-field spectroscopy in space will be a major breakthrough, following their availability on ground-based telescopes.

  16. A Plane Target Detection Algorithm in Remote Sensing Images based on Deep Learning Network Technology

    NASA Astrophysics Data System (ADS)

    Shuxin, Li; Zhilong, Zhang; Biao, Li

    2018-01-01

    Planes are an important target category in remote sensing, and it is of great value to detect plane targets automatically. As remote imaging technology develops continuously, the resolution of remote sensing images has become very high, providing more detailed information for detecting remote sensing targets automatically. Deep learning network technology is the most advanced technology in image target detection and recognition, and has provided great performance improvements for target detection and recognition in everyday scenes. We apply this technology to remote sensing target detection and propose an algorithm with an end-to-end deep network, which can learn from remote sensing images to detect the targets in new images automatically and robustly. Our experiments show that the algorithm can capture the feature information of the plane target and performs better in target detection than older methods.

  17. A light and faster regional convolutional neural network for object detection in optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Ding, Peng; Zhang, Ye; Deng, Wei-Jian; Jia, Ping; Kuijper, Arjan

    2018-07-01

    Detection of objects from satellite optical remote sensing images is very important for many commercial and governmental applications. With the development of deep convolutional neural networks (deep CNNs), the field of object detection has seen tremendous advances. Currently, objects in satellite remote sensing images can be detected using deep CNNs. In general, optical remote sensing images contain many dense and small objects, and the use of the original Faster Regional CNN framework does not yield a suitably high precision. Therefore, after careful analysis, we adopt dense convolutional networks, a multi-scale representation, and various combinations of improvement schemes to enhance the structure of the base VGG16-Net and improve the precision. We also propose an approach to reduce the test (detection) time and memory requirements. To validate the effectiveness of our approach, we perform experiments using satellite remote sensing image datasets of aircraft and automobiles. The results show that the improved network structure can detect objects in satellite optical remote sensing images more accurately and efficiently.

  18. Multi-Object Spectroscopy with MUSE

    NASA Astrophysics Data System (ADS)

    Kelz, A.; Kamann, S.; Urrutia, T.; Weilbacher, P.; Bacon, R.

    2016-10-01

    Since 2014, MUSE, the Multi-Unit Spectroscopic Explorer, has been in operation at the ESO-VLT. It combines superb spatial sampling with a large wavelength coverage. By design, MUSE is an integral-field instrument, but its field of view and large multiplex make it a powerful tool for multi-object spectroscopy too. Every data cube consists of 90,000 image-sliced spectra and 3700 monochromatic images. In autumn 2014, the observing programs with MUSE commenced, with targets ranging from distant galaxies in the Hubble Deep Field to local stellar populations, star formation regions, and globular clusters. This paper provides a brief summary of the key features of the MUSE instrument and its complex data reduction software. Selected examples show how multi-object spectroscopy of hundreds of continuum and emission-line objects can be obtained in wide, deep, and crowded fields with MUSE, without the classical need for target pre-selection.

  19. Quasar Host Galaxies/Neptune Rotation/Galaxy Building Blocks/Hubble Deep Field/Saturn Storm

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Computerized animations simulate a quasar erupting in the core of a normal spiral galaxy, the collision of two interacting galaxies, and the evolution of the universe. Hubble Space Telescope (HST) images show six quasars' host galaxies (including spirals, ellipticals, and colliding galaxies) and six clumps of galaxies approximately 11 billion light years away. A false-color time-lapse movie of Neptune displays the planet's 16-hour rotation, and the evolution of a storm on Saturn is seen through a video of the planet's rotation. A zoom sequence starts with a ground-based image of the constellation Ursa Major and ends with the Hubble Deep Field through progressively narrower and deeper views.

  20. Hubble Goes to the eXtreme to Assemble Farthest-Ever View of the Universe

    NASA Image and Video Library

    2017-12-08

    NASA image release September 25, 2012. Like photographers assembling a portfolio of best shots, astronomers have assembled a new, improved portrait of mankind's deepest-ever view of the universe. Called the eXtreme Deep Field, or XDF, the photo was assembled by combining 10 years of NASA Hubble Space Telescope photographs taken of a patch of sky at the center of the original Hubble Ultra Deep Field. The XDF is a small fraction of the angular diameter of the full moon. The Hubble Ultra Deep Field is an image of a small area of space in the constellation Fornax, created using Hubble Space Telescope data from 2003 and 2004. By collecting faint light over many hours of observation, it revealed thousands of galaxies, both nearby and very distant, making it the deepest image of the universe ever taken at that time. The new full-color XDF image is even more sensitive, and contains about 5,500 galaxies even within its smaller field of view. The faintest galaxies are one ten-billionth the brightness of what the human eye can see. To read more, go to: http://www.nasa.gov/mission_pages/hubble/science/xdf.html Credit: NASA; ESA; G. Illingworth, D. Magee, and P. Oesch, University of California, Santa Cruz; R. Bouwens, Leiden University; and the HUDF09 Team.

  1. ACS Data Handbook v.6.0

    NASA Astrophysics Data System (ADS)

    Gonzaga, S.; et al.

    2011-03-01

    ACS was designed to provide a deep, wide-field survey capability from the visible to near-IR using the Wide Field Camera (WFC), high resolution imaging from the near-UV to near-IR with the now-defunct High Resolution Camera (HRC), and solar-blind far-UV imaging using the Solar Blind Camera (SBC). The discovery efficiency of ACS's Wide Field Channel (i.e., the product of WFC's field of view and throughput) is 10 times greater than that of WFPC2. The failure of ACS's CCD electronics in January 2007 brought a temporary halt to CCD imaging until Servicing Mission 4 in May 2009, when WFC functionality was restored. Unfortunately, the high-resolution optical imaging capability of HRC was not recovered.

  2. Long ranging swept-source optical coherence tomography-based angiography outperforms its spectral-domain counterpart in imaging human skin microcirculations

    NASA Astrophysics Data System (ADS)

    Xu, Jingjiang; Song, Shaozhen; Men, Shaojie; Wang, Ruikang K.

    2017-11-01

    There is an increasing demand for imaging tools in clinical dermatology that can perform in vivo wide-field morphological and functional examination, from the surface to deep tissue regions, at various skin sites of the human body. The conventional spectral-domain optical coherence tomography-based angiography (SD-OCTA) system has difficulty meeting these requirements due to its fundamental limitations in sensitivity roll-off, imaging range, and imaging speed. To mitigate these issues, we demonstrate a swept-source OCTA (SS-OCTA) system employing a swept source based on a vertical cavity surface-emitting laser. A series of comparisons between SS-OCTA and SD-OCTA is conducted. Benefiting from high system sensitivity, a long imaging range, and superior roll-off performance, the SS-OCTA system demonstrates better performance in imaging human skin than the SD-OCTA system. We show that SS-OCTA permits remarkably deep visualization of both structure and vasculature (up to ˜2 mm penetration) with a wide field-of-view capability (up to 18×18 mm2), enabling a more comprehensive assessment of the morphological features as well as the functional blood vessel networks from the superficial epidermal to the deep dermal layers. It is expected that the advantages of the SS-OCTA system will provide a foundation for clinical translation, benefiting existing dermatological practice.

  3. Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.

    PubMed

    Liu, Min; Wang, Xueping; Zhang, Hongzhong

    2018-03-01

    In the biomedical field, digital multi-focal images are very important for documentation and communication of specimen data, because the morphological information for a transparent specimen can be captured in the form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We present a deep convolutional neural network (CNN) image fusion based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN based image fusion technique is used to combine the relevant information of the multi-focal images within a given stack into a single image, which is more informative and complete than any single image in the stack. In addition, the multi-focal images within a stack are fused along 3 orthogonal directions, and the multiple features extracted from the fused images along the different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class, and different classes of objects - we embed the deep CNN based image fusion method within a multilinear framework to propose an image fusion based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrate that the deep CNN image fusion based multilinear classifier reaches a higher classification rate (95.7%) than the previous multilinear based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed approach shows great potential for building an automated nematode taxonomy system for nematologists and is effective for classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.
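
    The CCA fusion step can be sketched with scikit-learn as below; the feature dimensions and the random data are placeholders standing in for features extracted from images fused along two of the three directions.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    # X1, X2: features from fused images along two directions (placeholders)
    rng = np.random.default_rng(0)
    X1 = rng.normal(size=(100, 64))
    X2 = rng.normal(size=(100, 64))

    cca = CCA(n_components=16)
    Z1, Z2 = cca.fit_transform(X1, X2)   # maximally correlated projections
    fused = np.hstack([Z1, Z2])          # fused descriptor fed to the classifier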

  4. Generation and evaluation of an ultra-high-field atlas with applications in DBS planning

    NASA Astrophysics Data System (ADS)

    Wang, Brian T.; Poirier, Stefan; Guo, Ting; Parrent, Andrew G.; Peters, Terry M.; Khan, Ali R.

    2016-03-01

    Purpose Deep brain stimulation (DBS) is a common treatment for Parkinson's disease (PD) and involves the use of brain atlases or intrinsic landmarks to estimate the location of target deep brain structures, such as the subthalamic nucleus (STN) and the globus pallidus pars interna (GPi). However, these structures can be difficult to localize with conventional clinical magnetic resonance imaging (MRI), and thus targeting can be prone to error. Ultra-high-field imaging at 7T has the ability to clearly resolve these structures and thus atlases built with these data have the potential to improve targeting accuracy. Methods T1 and T2-weighted images of 12 healthy control subjects were acquired using a 7T MR scanner. These images were then used with groupwise registration to generate an unbiased average template with T1w and T2w contrast. Deep brain structures were manually labelled in each subject by two raters and rater reliability was assessed. We compared the use of this unbiased atlas with two other methods of atlas-based segmentation (single-template and multi-template) for subthalamic nucleus (STN) segmentation on 7T MRI data. We also applied this atlas to clinical DBS data acquired at 1.5T to evaluate its efficacy for DBS target localization as compared to using a standard atlas. Results The unbiased templates provide superb detail of subcortical structures. Through one-way ANOVA tests, the unbiased template is significantly (p <0.05) more accurate than a single-template in atlas-based segmentation and DBS target localization tasks. Conclusion The generated unbiased averaged templates provide better visualization of deep brain nuclei and an increase in accuracy over single-template and lower field strength atlases.

  5. Reduction of susceptibility-induced signal losses in multi-gradient-echo images: application to improved visualization of the subthalamic nucleus.

    PubMed

    Volz, Steffen; Hattingen, Elke; Preibisch, Christine; Gasser, Thomas; Deichmann, Ralf

    2009-05-01

    T2-weighted gradient echo (GE) images yield good contrast of iron-rich structures like the subthalamic nuclei due to microscopic susceptibility-induced field gradients, providing landmarks for the exact placement of deep brain stimulation electrodes in Parkinson's disease treatment. An additional advantage is the low radio frequency (RF) exposure of GE sequences. However, T2-weighted images are also sensitive to macroscopic field inhomogeneities, resulting in signal losses, in particular in orbitofrontal and temporal brain areas, limiting the anatomical information from these areas. In this work, an image correction method for multi-echo GE data based on evaluation of phase information for field gradient mapping is presented and tested in vivo on a 3 Tesla whole body MR scanner. In a first step, theoretical signal losses are calculated from the gradient maps and a pixelwise image intensity correction is performed. In a second step, the intensity corrected images acquired at different echo times TE are combined using optimized weighting factors: in areas not affected by macroscopic field inhomogeneities, data acquired at long TE are weighted more strongly to achieve the contrast required; for large field gradients, data acquired at short TE are favored to avoid signal losses. When compared to the original data sets acquired at different TE and the respective intensity corrected data sets, the resulting combined data sets feature reduced signal losses in areas with major field gradients, while intensity profiles and a contrast-to-noise ratio (CNR) analysis between the subthalamic nucleus, red nucleus, and the surrounding white matter demonstrate good contrast in deep brain areas.
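
    A toy sketch of the second (combination) step: per-pixel weights favour long-TE data where the mapped field gradient is small and short-TE data where it is large. The weighting function below is only a plausible stand-in for the paper's optimized weights.

    import numpy as np

    def combine_echoes(echoes, tes, grad_map, grad_max=1.0):
        """Combine intensity-corrected multi-echo GE images.
        echoes: (n_echoes, H, W); tes: echo times; grad_map: (H, W)
        field-gradient map derived from the phase data."""
        tes = np.asarray(tes, dtype=float)
        # quality -> 1 where the field is homogeneous, 0 where gradients are large
        quality = np.clip(1.0 - np.abs(grad_map) / grad_max, 0.0, 1.0)
        # Up-weight long TE only where the local field is well behaved
        w = 1.0 + quality[None] * (tes[:, None, None] / tes.max())
        w /= w.sum(axis=0, keepdims=True)
        return (w * echoes).sum(axis=0)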

  6. Deep Extragalactic VIsible Legacy Survey (DEVILS): Motivation, Design and Target Catalogue

    NASA Astrophysics Data System (ADS)

    Davies, L. J. M.; Robotham, A. S. G.; Driver, S. P.; Lagos, C. P.; Cortese, L.; Mannering, E.; Foster, C.; Lidman, C.; Hashemizadeh, A.; Koushan, S.; O'Toole, S.; Baldry, I. K.; Bilicki, M.; Bland-Hawthorn, J.; Bremer, M. N.; Brown, M. J. I.; Bryant, J. J.; Catinella, B.; Croom, S. M.; Grootes, M. W.; Holwerda, B. W.; Jarvis, M. J.; Maddox, N.; Meyer, M.; Moffett, A. J.; Phillipps, S.; Taylor, E. N.; Windhorst, R. A.; Wolf, C.

    2018-06-01

    The Deep Extragalactic VIsible Legacy Survey (DEVILS) is a large spectroscopic campaign at the Anglo-Australian Telescope (AAT) aimed at bridging the near and distant Universe by producing the highest completeness survey of galaxies and groups at intermediate redshifts (0.3 < z < 1.0). Our sample consists of ˜60,000 galaxies to Y<21.2 mag, over ˜6 deg2 in three well-studied deep extragalactic fields (Cosmic Origins Survey field, COSMOS, Extended Chandra Deep Field South, ECDFS and the X-ray Multi-Mirror Mission Large-Scale Structure region, XMM-LSS - all Large Synoptic Survey Telescope deep-drill fields). This paper presents the broad experimental design of DEVILS. Our target sample has been selected from deep Visible and Infrared Survey Telescope for Astronomy (VISTA) Y-band imaging (VISTA Deep Extragalactic Observations, VIDEO and UltraVISTA), with photometry measured by PROFOUND. Photometric star/galaxy separation is done on the basis of NIR colours, and has been validated by visual inspection. To maximise our observing efficiency for faint targets we employ a redshift feedback strategy, which continually updates our target lists, feeding back the results from the previous night's observations. We also present an overview of the initial spectroscopic observations undertaken in late 2017 and early 2018.

  7. AEGIS-X: Deep Chandra Imaging of the Central Groth Strip

    NASA Astrophysics Data System (ADS)

    Nandra, K.; Laird, E. S.; Aird, J. A.; Salvato, M.; Georgakakis, A.; Barro, G.; Perez-Gonzalez, P. G.; Barmby, P.; Chary, R.-R.; Coil, A.; Cooper, M. C.; Davis, M.; Dickinson, M.; Faber, S. M.; Fazio, G. G.; Guhathakurta, P.; Gwyn, S.; Hsu, L.-T.; Huang, J.-S.; Ivison, R. J.; Koo, D. C.; Newman, J. A.; Rangel, C.; Yamada, T.; Willmer, C.

    2015-09-01

    We present the results of deep Chandra imaging of the central region of the Extended Groth Strip, the AEGIS-X Deep (AEGIS-XD) survey. When combined with previous Chandra observations of a wider area of the strip, AEGIS-X Wide (AEGIS-XW), these provide data to a nominal exposure depth of 800 ks in the three central ACIS-I fields, a region of approximately 0.29 deg2. This is currently the third deepest X-ray survey in existence; a factor of ∼2-3 shallower than the Chandra Deep Fields (CDFs), but over an area ∼3 times greater than each CDF. We present a catalog of 937 point sources detected in the deep Chandra observations, along with identifications of our X-ray sources from deep ground-based, Spitzer, GALEX, and Hubble Space Telescope imaging. Using a likelihood ratio analysis, we associate multiband counterparts for 929/937 of our X-ray sources, with an estimated 95% reliability, making the identification completeness approximately 94% in a statistical sense. Reliable spectroscopic redshifts for 353 of our X-ray sources are available, predominantly from Keck (DEEP2/3) and MMT Hectospec, so the current spectroscopic completeness is ∼38%. For the remainder of the X-ray sources, we compute photometric redshifts based on multiband photometry in up to 35 bands from the UV to mid-IR. Particular attention is given to the fact that the vast majority of the X-ray sources are active galactic nuclei and require hybrid templates. Our photometric redshifts have a mean accuracy of σ = 0.04 and an outlier fraction of approximately 5%, reaching σ = 0.03 with less than 4% outliers in the area covered by CANDELS. The X-ray, multiwavelength photometry, and redshift catalogs are made publicly available.

  8. Single myelin fiber imaging in living rodents without labeling by deep optical coherence microscopy.

    PubMed

    Ben Arous, Juliette; Binding, Jonas; Léger, Jean-François; Casado, Mariano; Topilko, Piotr; Gigan, Sylvain; Boccara, A Claude; Bourdieu, Laurent

    2011-11-01

    Myelin sheath disruption is responsible for multiple neuropathies in the central and peripheral nervous system. Myelin imaging has thus become an important diagnosis tool. However, in vivo imaging has been limited to either low-resolution techniques unable to resolve individual fibers or to low-penetration imaging of single fibers, which cannot provide quantitative information about large volumes of tissue, as required for diagnostic purposes. Here, we perform myelin imaging without labeling and at micron-scale resolution with >300-μm penetration depth on living rodents. This was achieved with a prototype [termed deep optical coherence microscopy (deep-OCM)] of a high-numerical aperture infrared full-field optical coherence microscope, which includes aberration correction for the compensation of refractive index mismatch and high-frame-rate interferometric measurements. We were able to measure the density of individual myelinated fibers in the rat cortex over a large volume of gray matter. In the peripheral nervous system, deep-OCM allows, after minor surgery, in situ imaging of single myelinated fibers over a large fraction of the sciatic nerve. This allows quantitative comparison of normal and Krox20 mutant mice, in which myelination in the peripheral nervous system is impaired. This opens promising perspectives for myelin chronic imaging in demyelinating diseases and for minimally invasive medical diagnosis.

  9. Single myelin fiber imaging in living rodents without labeling by deep optical coherence microscopy

    NASA Astrophysics Data System (ADS)

    Ben Arous, Juliette; Binding, Jonas; Léger, Jean-François; Casado, Mariano; Topilko, Piotr; Gigan, Sylvain; Claude Boccara, A.; Bourdieu, Laurent

    2011-11-01

    Myelin sheath disruption is responsible for multiple neuropathies in the central and peripheral nervous system. Myelin imaging has thus become an important diagnosis tool. However, in vivo imaging has been limited to either low-resolution techniques unable to resolve individual fibers or to low-penetration imaging of single fibers, which cannot provide quantitative information about large volumes of tissue, as required for diagnostic purposes. Here, we perform myelin imaging without labeling and at micron-scale resolution with >300-μm penetration depth on living rodents. This was achieved with a prototype [termed deep optical coherence microscopy (deep-OCM)] of a high-numerical aperture infrared full-field optical coherence microscope, which includes aberration correction for the compensation of refractive index mismatch and high-frame-rate interferometric measurements. We were able to measure the density of individual myelinated fibers in the rat cortex over a large volume of gray matter. In the peripheral nervous system, deep-OCM allows, after minor surgery, in situ imaging of single myelinated fibers over a large fraction of the sciatic nerve. This allows quantitative comparison of normal and Krox20 mutant mice, in which myelination in the peripheral nervous system is impaired. This opens promising perspectives for myelin chronic imaging in demyelinating diseases and for minimally invasive medical diagnosis.

  10. Astrometry with LSST: Objectives and Challenges

    NASA Astrophysics Data System (ADS)

    Casetti-Dinescu, D. I.; Girard, T. M.; Méndez, R. A.; Petronchak, R. M.

    2018-01-01

    The forthcoming Large Synoptic Survey Telescope (LSST) is an optical telescope with an effective aperture of 6.4 m and a field of view of 9.6 square degrees. LSST will thus have an étendue larger than any other optical telescope, performing wide-field, deep imaging of the sky. There are four broad categories of science objectives: 1) dark energy and dark matter, 2) transients, 3) the Milky Way and its neighbours, and 4) the Solar System. For the Milky Way science case in particular, astrometry will make a critical contribution; therefore, special attention must be devoted to extracting the maximum amount of astrometric information from the LSST data. Here, we outline the astrometric challenges posed by such a massive survey. We also present some current examples of ground-based, wide-field, deep imagers used for astrometry, as precursors of the LSST.

  11. Deep Imaging of the HCG 95 Field. I. Ultra-diffuse Galaxies

    NASA Astrophysics Data System (ADS)

    Shi, Dong Dong; Zheng, Xian Zhong; Zhao, Hai Bin; Pan, Zhi Zheng; Li, Bin; Zou, Hu; Zhou, Xu; Guo, KeXin; An, Fang Xia; Li, Yu Bin

    2017-09-01

    We present a detection of 89 candidates of ultra-diffuse galaxies (UDGs) in a 4.9 deg² field centered on the Hickson Compact Group 95 (HCG 95) using deep g- and r-band images taken with the Chinese Near Object Survey Telescope. This field contains one rich galaxy cluster (Abell 2588 at z = 0.199) and two poor clusters (Pegasus I at z = 0.013 and Pegasus II at z = 0.040). The 89 candidates are likely associated with the two poor clusters, giving about 50-60 true UDGs with a half-light radius r_e > 1.5 kpc and a central surface brightness μ(g,0) > 24.0 mag arcsec⁻². Deep z′-band images are available for 84 of the 89 galaxies from the Dark Energy Camera Legacy Survey (DECaLS), confirming that these galaxies have an extremely low central surface brightness. Moreover, our UDG candidates are spread over a wide range in g - r color, and ~26% are as blue as normal star-forming galaxies, which is suggestive of young UDGs that are still in formation. Interestingly, we find that one UDG linked with HCG 95 is a gas-rich galaxy with an H I mass of 1.1 × 10⁹ M_⊙ detected by the Very Large Array, and a stellar mass of M_⋆ ~ 1.8 × 10⁸ M_⊙. This indicates that UDGs at least partially overlap with the population of nearly dark galaxies found in deep H I surveys. Our results show that the high abundance of blue UDGs in the HCG 95 field is favored by the environment of poor galaxy clusters residing in H I-rich large-scale structures.
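
    The UDG definition quoted above reduces to two catalogue cuts; a trivial sketch with made-up example values:

    import numpy as np

    def select_udgs(r_e_kpc, mu_g0):
        """UDG criteria: half-light radius r_e > 1.5 kpc and central
        surface brightness mu(g,0) > 24.0 mag/arcsec^2 (larger = fainter)."""
        return (np.asarray(r_e_kpc) > 1.5) & (np.asarray(mu_g0) > 24.0)

    print(select_udgs([2.1, 1.2, 3.0], [24.5, 25.0, 24.1]))  # [ True False  True]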

  12. Urban Area Detection in Very High Resolution Remote Sensing Images Using Deep Convolutional Neural Networks.

    PubMed

    Tian, Tian; Li, Chang; Xu, Jinkang; Ma, Jiayi

    2018-03-18

    Detecting urban areas from very high resolution (VHR) remote sensing images plays an important role in the field of Earth observation. The recently-developed deep convolutional neural networks (DCNNs), which can extract rich features from training data automatically, have achieved outstanding performance on many image classification databases. Motivated by this fact, we propose a new urban area detection method based on DCNNs in this paper. The proposed method mainly includes three steps: (i) a visual dictionary is obtained based on the deep features extracted by pre-trained DCNNs; (ii) urban words are learned from labeled images; (iii) the urban regions are detected in a new image based on the nearest dictionary word criterion. The qualitative and quantitative experiments on different datasets demonstrate that the proposed method can obtain a remarkable overall accuracy (OA) and kappa coefficient. Moreover, it can also strike a good balance between the true positive rate (TPR) and false positive rate (FPR).
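
    A compact sketch of steps (i)-(iii), assuming k-means for building the visual dictionary (one common choice; the abstract does not specify the clustering method) and random placeholder features:

    import numpy as np
    from sklearn.cluster import KMeans

    # (i) build a visual dictionary by clustering deep features of patches
    rng = np.random.default_rng(1)
    deep_feats = rng.normal(size=(1000, 128))   # placeholder DCNN features
    dictionary = KMeans(n_clusters=32, n_init=10).fit(deep_feats)

    # (ii) suppose words 0-15 were learned to be "urban" from labelled images
    urban_words = set(range(16))

    # (iii) detect: a region is urban if its nearest dictionary word is urban
    def is_urban(region_feat):
        word = dictionary.predict(region_feat.reshape(1, -1))[0]
        return word in urban_words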

  13. Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification.

    PubMed

    Sladojevic, Srdjan; Arsenovic, Marko; Anderla, Andras; Culibrk, Dubravko; Stefanovic, Darko

    2016-01-01

    The latest generation of convolutional neural networks (CNNs) has achieved impressive results in the field of image classification. This paper is concerned with a new approach to the development of a plant disease recognition model, based on leaf image classification and the use of deep convolutional networks. The novel way of training and the methodology used facilitate quick and easy system implementation in practice. The developed model is able to distinguish 13 different types of plant disease from healthy leaves, with the ability to distinguish plant leaves from their surroundings. To our knowledge, this method for plant disease recognition has been proposed for the first time. All of the essential steps required to implement this disease recognition model are fully described throughout the paper, starting from gathering the images used to create a database, assessed by agricultural experts. Caffe, a deep learning framework developed by the Berkeley Vision and Learning Center, was used to perform the deep CNN training. The experimental results on the developed model achieved a precision between 91% and 98% for separate class tests, and 96.3% on average.

  14. Deep Neural Networks Based Recognition of Plant Diseases by Leaf Image Classification

    PubMed Central

    Sladojevic, Srdjan; Arsenovic, Marko; Culibrk, Dubravko; Stefanovic, Darko

    2016-01-01

    The latest generation of convolutional neural networks (CNNs) has achieved impressive results in the field of image classification. This paper is concerned with a new approach to the development of a plant disease recognition model, based on leaf image classification and the use of deep convolutional networks. The novel way of training and the methodology used facilitate quick and easy system implementation in practice. The developed model is able to distinguish 13 different types of plant disease from healthy leaves, with the ability to distinguish plant leaves from their surroundings. To our knowledge, this method for plant disease recognition has been proposed for the first time. All of the essential steps required to implement this disease recognition model are fully described throughout the paper, starting from gathering the images used to create a database, assessed by agricultural experts. Caffe, a deep learning framework developed by the Berkeley Vision and Learning Center, was used to perform the deep CNN training. The experimental results on the developed model achieved a precision between 91% and 98% for separate class tests, and 96.3% on average. PMID:27418923

  15. Localization of lung fields in HRCT images using a deep convolution neural network

    NASA Astrophysics Data System (ADS)

    Kumar, Abhishek; Agarwala, Sunita; Dhara, Ashis Kumar; Mukhopadhyay, Sudipta; Nandi, Debashis; Garg, Mandeep; Khandelwal, Niranjan; Kalra, Naveen

    2018-02-01

    Lung field segmentation is a prerequisite step for the development of a computer-aided diagnosis system for interstitial lung diseases observed in chest HRCT images. Conventional methods of lung field segmentation rely on a large gray-value contrast between the lung fields and surrounding tissues. These methods fail on lung HRCT images with dense and diffuse pathology. An efficient preprocessing step could improve the accuracy of segmentation of pathological lung fields in HRCT images. In this paper, a convolutional neural network is used for localization of the lung fields in HRCT images. The proposed method provides an optimal bounding box enclosing the lung fields irrespective of the presence of diffuse pathology. The performance of the proposed algorithm is validated on 330 lung HRCT images obtained from the MedGift database, using ZF and VGG networks. The model achieves a mean average precision of 0.94 with ZF net and slightly better performance, a mean average precision of 0.95, with VGG net.

  16. Deep features for efficient multi-biometric recognition with face and ear images

    NASA Astrophysics Data System (ADS)

    Omara, Ibrahim; Xiao, Gang; Amrani, Moussa; Yan, Zifei; Zuo, Wangmeng

    2017-07-01

    Recently, multimodal biometric systems have received considerable research interest in many applications, especially in security. Multimodal systems can increase resistance to spoof attacks, provide more detail and flexibility, and lead to better performance and lower error rates. In this paper, we present a multimodal biometric system based on face and ear images, and show how to exploit deep features extracted from Convolutional Neural Networks (CNNs) on face and ear images to obtain more powerful discriminative features and a more robust representation. First, deep features for the face and ear images are extracted based on VGG-M Net. Second, the extracted deep features are fused using traditional concatenation and a Discriminant Correlation Analysis (DCA) algorithm. Third, a multiclass support vector machine is adopted for matching and classification. The experimental results show that the proposed multimodal system based on deep features is efficient and achieves a promising recognition rate of up to 100% using face and ear. In addition, the results indicate that fusion based on DCA is superior to traditional fusion.
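
    For the concatenation branch of the fusion, a minimal sketch with placeholder features (the DCA branch, which the abstract finds superior, would replace the plain concatenation with discriminant-correlation projections):

    import numpy as np
    from sklearn.preprocessing import normalize
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    face_feats = rng.normal(size=(200, 512))    # placeholder VGG-M features
    ear_feats = rng.normal(size=(200, 512))
    labels = rng.integers(0, 20, size=200)      # 20 hypothetical subjects

    # Serial (concatenation) fusion of the two modalities
    fused = normalize(np.hstack([face_feats, ear_feats]))

    # Multiclass SVM matcher (one-vs-one under the hood in scikit-learn)
    clf = SVC(kernel="linear").fit(fused, labels)
    pred = clf.predict(fused[:5])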

  17. A Model-Based Approach for Microvasculature Structure Distortion Correction in Two-Photon Fluorescence Microscopy Images

    PubMed Central

    Dao, Lam; Glancy, Brian; Lucotte, Bertrand; Chang, Lin-Ching; Balaban, Robert S; Hsu, Li-Yueh

    2015-01-01

    This paper investigates a post-processing approach to correct spatial distortion in two-photon fluorescence microscopy images for vascular network reconstruction. It is aimed at in vivo imaging in large field-of-view, deep-tissue studies of vascular structures. Based on simple geometric modeling of the object of interest, a distortion function is estimated directly from the image volume by deconvolution analysis. This distortion function is then applied to subvolumes of the image stack to adaptively adjust for spatially varying distortion and to reduce image blurring through blind deconvolution. The proposed technique was first evaluated in phantom imaging of fluorescent microspheres that are comparable in size to the underlying capillary vascular structures. The effectiveness of restoring the three-dimensional spherical geometry of the microspheres using the estimated distortion function was compared with an empirically measured point-spread function. Next, the proposed approach was applied to in vivo vascular imaging of mouse skeletal muscle to reduce the image distortion of the capillary structures. We show that the proposed method effectively improves image quality and reduces the spatially varying distortion that occurs in large field-of-view, deep-tissue vascular datasets. The proposed method will help in the qualitative interpretation and quantitative analysis of vascular structures from fluorescence microscopy images. PMID:26224257

  18. Deep machine learning provides state-of-the-art performance in image-based plant phenotyping.

    PubMed

    Pound, Michael P; Atkinson, Jonathan A; Townsend, Alexandra J; Wilson, Michael H; Griffiths, Marcus; Jackson, Aaron S; Bulat, Adrian; Tzimiropoulos, Georgios; Wells, Darren M; Murchie, Erik H; Pridmore, Tony P; French, Andrew P

    2017-10-01

    In plant phenotyping, it has become important to be able to measure many features on large image sets in order to aid genetic discovery. The size of the datasets, now often captured robotically, often precludes manual inspection, hence the motivation for finding a fully automated approach. Deep learning is an emerging field that promises unparalleled results on many data analysis problems. Building on artificial neural networks, deep approaches have many more hidden layers in the network, and hence have greater discriminative and predictive power. We demonstrate the use of such approaches as part of a plant phenotyping pipeline. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping and demonstrate state-of-the-art results (>97% accuracy) for root and shoot feature identification and localization. We use fully automated trait identification using deep learning to identify quantitative trait loci in root architecture datasets. The majority (12 out of 14) of manually identified quantitative trait loci were also discovered using our automated approach based on deep learning detection to locate plant features. We have shown deep learning-based phenotyping to have very good detection and localization accuracy in validation and testing image sets. We have shown that such features can be used to derive meaningful biological traits, which in turn can be used in quantitative trait loci discovery pipelines. This process can be completely automated. We predict a paradigm shift in image-based phenotyping brought about by such deep learning approaches, given sufficient training sets. © The Authors 2017. Published by Oxford University Press.

  19. VizieR Online Data Catalog: GOODS-S CANDELS multiwavelength catalog (Guo+, 2013)

    NASA Astrophysics Data System (ADS)

    Guo, Y.; Ferguson, H. C.; Giavalisco, M.; Barro, G.; Willner, S. P.; Ashby, M. L. N.; Dahlen, T.; Donley, J. L.; Faber, S. M.; Fontana, A.; Galametz, A.; Grazian, A.; Huang, K.-H.; Kocevski, D. D.; Koekemoer, A. M.; Koo, D. C.; McGrath, E. J.; Peth, M.; Salvato, M.; Wuyts, S.; Castellano, M.; Cooray, A. R.; Dickinson, M. E.; Dunlop, J. S.; Fazio, G. G.; Gardner, J. P.; Gawiser, E.; Grogin, N. A.; Hathi, N. P.; Hsu, L.-T.; Lee, K.-S.; Lucas, R. A.; Mobasher, B.; Nandra, K.; Newman, J. A.; van der Wel, A.

    2014-04-01

    The Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS; Grogin et al. 2011ApJS..197...35G; Koekemoer et al. 2011ApJS..197...36K) is designed to document galaxy formation and evolution over the redshift range of z=1.5-8. The core of CANDELS is to use the revolutionary near-infrared HST/WFC3 camera, installed on HST in 2009 May, to obtain deep imaging of faint and faraway objects. The GOODS-S field, centered at RAJ2000=03:32:30 and DEJ2000=-27:48:20 and located within the Chandra Deep Field South (CDFS; Giacconi et al. 2002, Cat. J/ApJS/139/369), is a sky region of about 170arcmin2 which has been targeted for some of the deepest observations ever taken by NASA's Great Observatories, HST, Spitzer, and Chandra as well as by other world-class telescopes. The field has been (among others) imaged in the optical wavelength with HST/ACS in F435W, F606W, F775W, and F850LP bands as part of the HST Treasury Program: the Great Observatories Origins Deep Survey (GOODS; Giavalisco et al. 2004, Cat. II/261); in the mid-IR (3.6-24um) wavelength with Spitzer as part of the GOODS Spitzer Legacy Program (PI: M. Dickinson). The CDF-S/GOODS field was observed by the MOSAIC II imager on the CTIO 4m Blanco telescope to obtain deep U-band observations in 2001 September. Another U-band survey in GOODS-S was carried out using the VIMOS instrument mounted at the Melipal Unit Telescope of the VLT at ESO's Cerro Paranal Observatory, Chile. This large program of ESO (168.A-0485; PI: C. Cesarsky) was obtained in service mode observations in UT3 between 2004 August and fall 2006. In the ground-based NIR, imaging observations of the CDFS were carried out in J, H, Ks bands using the ISAAC instrument mounted at the Antu Unit Telescope of the VLT. Data were obtained as part of the ESO Large Programme 168.A-0485 (PI: C. Cesarsky) as well as ESO Programmes 64.O-0643, 66.A-0572, and 68.A-0544 (PI: E. Giallongo) with a total allocation time of ~500 hr from 1999 October to 2007 January. The CANDELS/GOODS-S field was also observed in the NIR as part of the ongoing HAWK-I UDS and GOODS-S survey (HUGS; VLT large program ID 186.A-0898; PI: A. Fontana; A. Fontana et al., in preparation) using the High Acuity Wide field K-band Imager (HAWK-I) on VLT. (1 data file).

  20. VizieR Online Data Catalog: Morphologies of selected AGN (Griffith+, 2010)

    NASA Astrophysics Data System (ADS)

    Griffith, R. L.; Stern, D.

    2012-06-01

    The cornerstone data set for the COSMOS survey is its wide-field HST Advanced Camera for Surveys (ACS) imaging (Scoville et al. 2007ApJS..172...38S). With 583 single-orbit HST ACS F814W (I band; hereafter I814) observations, it is the largest contiguous HST imaging survey to date. The VLA-COSMOS large project (Schinnerer et al., 2007, Cat. J/ApJS/172/46) acquired deep, uniform 1.4GHz data over the entire COSMOS field using the A-array configuration of the Very Large Array (VLA). The XMM-Newton COSMOS survey (Hasinger et al., 2007, Cat. J/ApJS/172/29; Cappelluti et al., 2009, Cat. J/A+A/497/635) acquired deep X-ray data over the entire COSMOS HST ACS field. The S-COSMOS survey (Sanders et al., 2007ApJS..172...86S) is a Spitzer Legacy program which carried out a uniformly deep survey of the full COSMOS field in seven mid-IR bands (3.6, 4.5, 5.8, 8.0, 24, 70, and 160um). The Advanced Camera for Surveys General Catalog (ACS-GC) data (R.L. Griffith et al., 2012ApJS..200....9G) was constructed to study the evolution of galaxy morphologies over a wide range of look-back times. The ACS-GC uniformly analyzes the largest HST ACS imaging surveys (AEGIS, GEMS, GOODS-S, GOODS-N, and COSMOS) using the GALAPAGOS code. (3 data files).

  1. Drilling into a present-day migration pathway for hydrocarbons within a fault zone conduit in the Eugene Island 330 field, offshore Louisiana

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, R.N.

    1995-11-01

    Within the Global Basins Research Network, we have developed 4-D seismic analysis techniques that, when integrated with pressure and temperature mapping, production history, geochemical monitoring, and finite element modeling, allow for the imaging of active fluid migration in the subsurface. We have imaged fluid flow pathways that are actively recharging shallower hydrocarbon reservoirs in the Eugene Island 330 field, offshore Louisiana. The hydrocarbons appear to be sourcing from turbidite stacks within the salt-withdrawal mini-basin buried deep within geopressure. Fault zone conduits provide transient migration pathways out of geopressure. To accomplish this 4-D imaging, we use multiple 3-D seismic surveys done several years apart over the same blocks. 3-D volume processing and attribute analysis algorithms are used to identify significant seismic amplitude interconnectivity and changes over time that result from active fluid migration. Pressures and temperatures are then mapped and modeled to provide rate and timing constraints for the fluid movement. Geochemical variability observed in the shallow reservoirs is attributed to the mixing of new with old oils. The Department of Energy has funded an industry cost-sharing project to drill into one of these active conduits in Eugene Island Block 330. Active fluid flow was encountered within the fault zone in the field demonstration experiment, and hydrocarbons were recovered. The active migration events connecting shallow reservoirs to deep sourcing regions imply that large, heretofore undiscovered hydrocarbon reserves exist deep within geopressures along the deep continental shelf of the northern Gulf of Mexico.

  2. Building the Case for SNAP: Creation of Multi-Band, Simulated Images With Shapelets

    NASA Technical Reports Server (NTRS)

    Ferry, Matthew A.

    2005-01-01

    Dark energy has simultaneously been the most elusive and most important phenomenon in the shaping of the universe. A case is being built for a proposed space telescope called SNAP (SuperNova Acceleration Probe), a crucial component of which is image simulation. One method for this is "Shapelets," developed at Caltech. Shapelets form an orthonormal basis and are uniquely able to represent realistic space images and create new images based on real ones. Previously, simulations were created using the Hubble Deep Field (HDF) as a basis set in one band. In this project, image simulations are created using the 4 bands of the Hubble Ultra Deep Field (UDF) as a basis set. This provides a better basis for simulations because (1) the survey is deeper, (2) the images have a higher resolution, and (3) this is a step closer to simulating the 9 bands of SNAP. Image simulations are achieved by detecting sources in the UDF, decomposing them into shapelets, tweaking their parameters in realistic ways, and recomposing them into new images. Morphological tests were also run to verify the realism of the simulations. The simulations have a wide variety of uses, including the ability to create weak gravitational lensing simulations.
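
    Shapelet basis functions are Gauss-Hermite functions; a sketch of the 1-D basis (following Refregier's 2003 convention), from which 2-D basis functions are built as products phi_n1(x) * phi_n2(y):

    import numpy as np
    from scipy.special import eval_hermite, factorial

    def shapelet_1d(n, x, beta=1.0):
        """phi_n(x) = [2^n n! sqrt(pi) beta]^(-1/2) H_n(x/beta) exp(-x^2/(2 beta^2)),
        where beta sets the characteristic scale of the object."""
        norm = (2.0 ** n * factorial(n) * np.sqrt(np.pi) * beta) ** -0.5
        return norm * eval_hermite(n, x / beta) * np.exp(-x**2 / (2 * beta**2))

    x = np.linspace(-5, 5, 101)
    basis = np.array([shapelet_1d(n, x) for n in range(6)])
    # A source is decomposed into coefficients on this basis; perturbing the
    # coefficients and recomposing yields new, realistic simulated sources.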

  3. Video-rate functional photoacoustic microscopy at depths

    NASA Astrophysics Data System (ADS)

    Wang, Lidai; Maslov, Konstantin; Xing, Wenxin; Garcia-Uribe, Alejandro; Wang, Lihong V.

    2012-10-01

    We report the development of functional photoacoustic microscopy capable of video-rate, high-resolution in vivo imaging in deep tissue. A lightweight photoacoustic probe is made of a single-element broadband ultrasound transducer, a compact photoacoustic beam combiner, and a bright-field light delivery system. Focused broadband ultrasound detection provides a 44-μm lateral resolution and a 28-μm axial resolution based on the envelope (a 15-μm axial resolution based on the raw RF signal). Due to the efficient bright-field light delivery, the system can image as deep as 4.8 mm in vivo using low excitation pulse energy (28 μJ per pulse, 0.35 mJ/cm2 on the skin surface). The photoacoustic probe is mounted on a fast-scanning voice-coil scanner to acquire 40 two-dimensional (2-D) B-scan images per second over a 9-mm range. High-resolution anatomical imaging is demonstrated in the mouse ear and brain. Via fast dual-wavelength switching, the oxygen dynamics of the mouse cardiovasculature are imaged in real time as well.

  4. Results from Field Testing the RIMFAX GPR on Svalbard.

    NASA Astrophysics Data System (ADS)

    Hamran, S. E.; Amundsen, H. E. F.; Berger, T.; Carter, L. M.; Dypvik, H.; Ghent, R. R.; Kohler, J.; Mellon, M. T.; Nunes, D. C.; Paige, D. A.; Plettemeier, D.; Russell, P.

    2017-12-01

    The Radar Imager for Mars' Subsurface Experiment - RIMFAX is a Ground Penetrating Radar being developed for NASA's Mars 2020 rover mission. The principal goals of the RIMFAX investigation are to image subsurface structures, provide context for sample sites, derive information regarding subsurface composition, and search for ice or brines. In meeting these goals, RIMFAX will provide a view of the stratigraphic section and a window into the geological and environmental history of Mars. To verify the design, an Engineering Model (EM) of the radar was field-tested in spring 2017. Different sounding modes of the EM were tested on different types of subsurface geology on Svalbard. Deep soundings were performed on polythermal glaciers down to a couple of hundred meters. Shallow soundings were used to map the ground water table in the firn area of a glacier. A combination of deep and shallow soundings was used to image buried ice under a sedimentary layer a couple of meters thick. Subsurface sedimentary layers were imaged down to more than 20 meters in sandstone permafrost. This presentation will give an overview of the RIMFAX investigation, describe the development of the radar system, and show results from field tests of the radar.

  5. Sci-Fri PM: Radiation Therapy, Planning, Imaging, and Special Techniques - 11: Quantification of chest wall motion during deep inspiration breast hold treatments using cine EPID images and a physics based algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alpuche Aviles, Jorge E.; VanBeek, Timothy

    Purpose: This work presents an algorithm used to quantify intra-fraction motion for patients treated using deep inspiration breath hold (DIBH). The algorithm quantifies the position of the chest wall in breast tangent fields using electronic portal images. Methods: The algorithm assumes that image profiles, taken along a direction perpendicular to the medial border of the field, follow a monotonically and smoothly decreasing function. This assumption is invalid in the presence of lung and can be used to calculate the chest wall position. The algorithm was validated by determining the position of the chest wall for varying field edge positions in portal images of a thoracic phantom. The algorithm was then used to quantify intra-fraction motion in cine images for 7 patients treated with DIBH. Results: Phantom results show that changes in the distance between the chest wall and the field edge were accurate within 0.1 mm on average. For a fixed field edge, the algorithm calculates the position of the chest wall with a 0.2 mm standard deviation. Intra-fraction motion for DIBH patients was within 1 mm 91.4% of the time and within 1.5 mm 97.9% of the time. The maximum intra-fraction motion was 3.0 mm. Conclusions: A physics based algorithm was developed that can be used to quantify the position of the chest wall irradiated in tangent portal images with an accuracy of 0.1 mm and a precision of 0.6 mm. Intra-fraction motion for patients treated with DIBH at our clinic is less than 3 mm.
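
    A toy version of the stated assumption (profiles perpendicular to the medial field border are smooth and monotonically decreasing except where lung is present), taking the first sustained rise as the lung interface; this is illustrative, not the authors' implementation:

    import numpy as np

    def chest_wall_position(profile, smooth=5):
        """Return the index of the first violation of monotonic decrease
        along one smoothed image profile, or None if there is none."""
        p = np.convolve(profile, np.ones(smooth) / smooth, mode="same")
        rises = np.where(np.diff(p) > 0)[0]   # departures from monotonicity
        return rises[0] if rises.size else None

    # Applied to every profile perpendicular to the medial field edge, this
    # yields a chest-wall position per profile for each cine EPID frame.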

  6. Fast subsurface fingerprint imaging with full-field optical coherence tomography system equipped with a silicon camera

    NASA Astrophysics Data System (ADS)

    Auksorius, Egidijus; Boccara, A. Claude

    2017-09-01

    Images recorded below the surface of a finger can have more detail and be of higher quality than conventional surface fingerprint images. This is particularly true when the quality of the surface fingerprints is compromised by, for example, moisture or surface damage. However, there is an unmet need for an inexpensive fingerprint sensor that is able to acquire high-quality images deep below the surface in a short time. To this end, we report on a cost-effective full-field optical coherence tomography system comprised of a silicon camera and a powerful near-infrared LED light source. The system, for example, is able to record 1.7 cm×1.7 cm en face images in 0.12 s at a spatial sampling rate of 2116 dots per inch and a sensitivity of 93 dB. We show that the system can be used to image internal fingerprints and sweat ducts with good contrast. Finally, to demonstrate its biometric performance, we acquired subsurface fingerprint images from 240 individual fingers and estimated the equal-error-rate to be ˜0.8%. The developed instrument could also be used in other en face deep-tissue imaging applications, such as in vivo skin imaging, because of its high sensitivity.

  7. Technical Note: Deep learning based MRAC using rapid ultra-short echo time imaging.

    PubMed

    Jang, Hyungseok; Liu, Fang; Zhao, Gengyan; Bradshaw, Tyler; McMillan, Alan B

    2018-05-15

    In this study, we explore the feasibility of a novel framework for MR-based attenuation correction (MRAC) for PET/MR imaging based on deep learning via convolutional neural networks, which enables fully automated and robust estimation of a pseudo CT image from ultrashort echo time (UTE), fat, and water images obtained by a rapid MR acquisition. MR images for MRAC are acquired using dual echo ramped hybrid encoding (dRHE), where both UTE and out-of-phase echo images are obtained within a single short acquisition (35 s). Tissue labeling of air, soft tissue, and bone in the UTE image is accomplished via a deep learning network that was pre-trained with T1-weighted MR images. UTE images are used as input to the network, which was trained using labels derived from co-registered CT images. The tissue labels estimated by deep learning are refined by a conditional random field based correction. The soft tissue labels are further separated into fat and water components using the two-point Dixon method. The estimated bone, air, fat, and water images are then assigned appropriate Hounsfield units, resulting in a pseudo CT image for PET attenuation correction. To evaluate the proposed MRAC method, PET/MR imaging of the head was performed on 8 human subjects, where Dice similarity coefficients of the estimated tissue labels and relative PET errors were evaluated through comparison to a registered CT image. Dice coefficients for the air (within the head), soft tissue, and bone labels were 0.76±0.03, 0.96±0.006, and 0.88±0.01, respectively. In PET quantification, the proposed MRAC method produced relative PET errors of less than 1% within most brain regions. The proposed MRAC method, utilizing deep learning with transfer learning and an efficient dRHE acquisition, enables reliable PET quantification with accurate and rapid pseudo CT generation. This article is protected by copyright. All rights reserved.
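
    The final label-to-HU assignment can be sketched as below; the Hounsfield-unit values are illustrative placeholders, since the abstract only says "appropriate Hounsfield units":

    import numpy as np

    HU = {"air": -1000, "bone": 1000, "fat": -90, "water": 20}  # assumed values

    def pseudo_ct(labels, fat_fraction):
        """labels: integer map with 0=air, 1=bone, 2=soft tissue;
        fat_fraction: Dixon fat/water split within soft tissue (0..1)."""
        ct = np.full(labels.shape, HU["air"], dtype=float)
        ct[labels == 1] = HU["bone"]
        soft = labels == 2
        ct[soft] = (fat_fraction[soft] * HU["fat"]
                    + (1 - fat_fraction[soft]) * HU["water"])
        return ct  # converted to 511 keV attenuation coefficients for PET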

  8. Deep Learning Methods for Quantifying Invasive Benthic Species in the Great Lakes

    NASA Astrophysics Data System (ADS)

    Billings, G.; Skinner, K.; Johnson-Roberson, M.

    2017-12-01

    In recent decades, invasive species such as the round goby and dreissenid mussels have greatly impacted the Great Lakes ecosystem. It is critical to monitor these species, model their distribution, and quantify the impacts on the native fisheries and surrounding ecosystem in order to develop an effective management response. However, data collection in underwater environments is challenging and expensive. Furthermore, the round goby is typically found in rocky habitats, which are inaccessible to standard survey techniques such as bottom trawling. In this work, we propose a robotic system for visual data collection to automatically detect and quantify invasive round gobies and mussels in the Great Lakes. Robotic platforms equipped with cameras can perform efficient, cost-effective, low-bias benthic surveys. This data collection can be further optimized through automatic detection and annotation of the target species. Deep learning methods have shown success in image recognition tasks. However, these methods often rely on a labelled training dataset, with up to millions of labelled images. Hand labeling large numbers of images is expensive and often impracticable, and data collected in the field may be sparse when only considering images that contain the objects of interest. It is easier to collect dense, clean data in controlled lab settings, but such data are not a realistic representation of real field environments. We therefore propose a deep learning approach to generate a large set of labelled training data representative of underwater environments in the field. To generate these images, we first draw random sample images of individual fish and mussels from a library of images captured in a controlled lab environment. Next, these randomly drawn samples are automatically merged into natural background images. Finally, we use a generative adversarial network (GAN) that incorporates constraints from the physical model of underwater light propagation to simulate the process of underwater image formation in various water conditions. The output of the GAN is realistic-looking annotated underwater images. This generated dataset will be used to train a classifier to identify round gobies and mussels in order to measure the biomass and abundance of these invasive species in the Great Lakes.
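
    The compositing step that precedes the GAN can be sketched as a masked paste whose placement record doubles as a free annotation; all names and shapes here are hypothetical:

    import numpy as np

    def composite(background, sample, mask, top_left):
        """Paste one lab-captured sample (fish or mussel) into a natural
        background at top_left using its binary mask; assumes the paste
        fits inside the background. A GAN-based water-column model would
        then restyle the result for realism."""
        out = background.copy()
        y, x = top_left
        h, w = sample.shape[:2]
        region = out[y:y + h, x:x + w]
        region[:] = np.where(mask[..., None].astype(bool), sample, region)
        return out, (y, x, h, w)   # image plus a bounding-box label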

  9. Deep feature learning for knee cartilage segmentation using a triplanar convolutional neural network.

    PubMed

    Prasoon, Adhish; Petersen, Kersten; Igel, Christian; Lauze, François; Dam, Erik; Nielsen, Mads

    2013-01-01

    Segmentation of anatomical structures in medical images is often based on a voxel/pixel classification approach. Deep learning systems, such as convolutional neural networks (CNNs), can infer a hierarchical representation of images that fosters categorization. We propose a novel system for voxel classification integrating three 2D CNNs, which have a one-to-one association with the xy, yz and zx planes of a 3D image, respectively. We applied our method to the segmentation of tibial cartilage in low-field knee MRI scans and tested it on 114 unseen scans. Although our method uses only 2D features at a single scale, it performs better than a state-of-the-art method using 3D multi-scale features. In the latter approach, the features and the classifier had been carefully adapted to the problem at hand. That we were able to get better results with a deep learning architecture that autonomously learns the features from the images is the main insight of this study.
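
    The triplanar idea reduces to extracting, for each voxel, three orthogonal 2D patches that each feed one of the three CNNs. A minimal sketch (the patch size and volume shape are illustrative assumptions):

        import numpy as np

        def triplanar_patches(volume, voxel, size=28):
            """Extract the three orthogonal 2D patches (xy, yz, zx) centred on a voxel."""
            x, y, z = voxel
            r = size // 2
            xy = volume[x - r:x + r, y - r:y + r, z]
            yz = volume[x, y - r:y + r, z - r:z + r]
            zx = volume[x - r:x + r, y, z - r:z + r]
            return xy, yz, zx

        vol = np.random.rand(128, 128, 64)        # hypothetical MRI volume
        patches = triplanar_patches(vol, (64, 64, 32))
        # Each patch feeds one of the three 2D CNNs; their outputs are fused
        # for the final voxel-wise cartilage/background classification.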

  10. Use of kurtosis for locating deep blood vessels in raw speckle imaging using a homogeneity representation.

    PubMed

    Peregrina-Barreto, Hayde; Perez-Corona, Elizabeth; Rangel-Magdaleno, Jose; Ramos-Garcia, Ruben; Chiu, Roger; Ramirez-San-Juan, Julio C

    2017-06-01

    Visualization of deep blood vessels in speckle images is an important task, as it is used to analyze the dynamics of the blood flow and the health status of biological tissue. Laser speckle imaging is a wide-field optical technique to measure relative blood flow speed based on local speckle contrast analysis. However, it has been reported that this technique is limited to blood vessels at shallow depths (about 300 μm) because of the high scattering of the sample; beyond this depth, the quality of the vessel's image decreases. The use of a representation based on homogeneity values, computed from the co-occurrence matrix, is proposed, as it provides an improved definition of the vessel and its corresponding diameter. Moreover, a methodology is proposed for automatic blood vessel location based on kurtosis analysis. Results obtained from different skin phantoms show that it is possible to identify the vessel region for different morphologies, even up to 900 μm in depth.
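
    The homogeneity feature comes from the grey-level co-occurrence matrix (GLCM), and the vessel is then located from the kurtosis of the resulting homogeneity profile. A rough numpy/scipy sketch, with an illustrative window size, pixel offset, and quantization (not the authors' parameters):

        import numpy as np
        from scipy.stats import kurtosis

        def glcm_homogeneity(window, levels=16):
            """Homogeneity of a grey-level co-occurrence matrix
            (offset: one pixel to the right)."""
            q = (window / window.max() * (levels - 1)).astype(int)
            glcm = np.zeros((levels, levels))
            for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
                glcm[i, j] += 1
            glcm /= glcm.sum()
            ii, jj = np.indices(glcm.shape)
            return (glcm / (1.0 + np.abs(ii - jj))).sum()

        speckle = np.random.rand(128, 128)        # stand-in for a raw speckle image
        # Slide a window over the image, mapping each position to homogeneity...
        hom = glcm_homogeneity(speckle[0:16, 0:16])
        # ...then locate the vessel from the kurtosis of the homogeneity
        # profile along rows/columns (a peaked profile indicates a vessel).
        profile = np.random.rand(128)             # hypothetical homogeneity profile
        print(hom, kurtosis(profile))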

  11. Study of CT image texture using deep learning techniques

    NASA Astrophysics Data System (ADS)

    Dutta, Sandeep; Fan, Jiahua; Chevalier, David

    2018-03-01

    For CT imaging, reduction of radiation dose while improving or maintaining image quality (IQ) is currently a very active research and development topic. Iterative reconstruction (IR) approaches have been suggested to offer a better IQ-to-dose ratio than conventional filtered back projection (FBP) reconstruction. However, it has been widely reported that CT image texture from IR often differs from that of FBP. Researchers have proposed different figures of merit to quantify the texture from different reconstruction methods, but the field still lacks a practical and robust method for texture description. This work applied deep learning to the study of CT image texture. Multiple dose scans of a 20 cm diameter cylindrical water phantom were performed on a Revolution CT scanner (GE Healthcare, Waukesha) and the images were reconstructed with FBP and four different IR reconstruction settings. The training images generated were randomly allotted (80:20) to a training and a validation set. An independent test set of 256-512 images per class was collected with the same scan and reconstruction settings. Multiple deep learning (DL) networks with convolution, ReLU activation, max-pooling, fully-connected, global average pooling and softmax activation layers were investigated. The impact of different image patch sizes for training was investigated, and both original pixel data and normalized image data were evaluated. DL models were reliably able to classify CT image texture with accuracy up to 99%. The results suggest that deep learning can characterize CT image texture reliably and that CT IR techniques may help lower the radiation dose compared to FBP.
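
    The layer inventory above maps directly onto a small image-classification network. A minimal tf.keras sketch with five output classes, one per reconstruction setting (the filter counts and the 64-pixel patch size are assumptions, not the authors' configuration):

        import tensorflow as tf
        from tensorflow.keras import layers, models

        model = models.Sequential([
            tf.keras.Input(shape=(64, 64, 1)),        # patch size is a study parameter
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.GlobalAveragePooling2D(),
            layers.Dense(64, activation="relu"),      # fully-connected layer
            layers.Dense(5, activation="softmax"),    # FBP + four IR settings
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])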

  12. X-ray Full Field Microscopy at 30 keV

    NASA Astrophysics Data System (ADS)

    Marschall, F.; Last, A.; Simon, M.; Kluge, M.; Nazmov, V.; Vogt, H.; Ogurreck, M.; Greving, I.; Mohr, J.

    2014-04-01

    In our X-ray full field microscopy experiments, we demonstrated a resolution better than 260 nm over the entire field of view of 80 μm × 80 μm at 30 keV. Our experimental setup at PETRA III, P05, had a length of about 5 m, consisting of an illumination optics, an imaging lens and a detector. For imaging, we used a compound refractive lens (CRL) made of mr-L negative photoresist, which was fabricated by deep X-ray lithography. As illumination optics, we chose a refractive rolled X-ray prism lens, which was adapted to the numerical aperture of the imaging lens.

  13. Correlation between low level fluctuations in the x ray background and faint galaxies

    NASA Technical Reports Server (NTRS)

    Tolstoy, Eline; Griffiths, R. E.

    1993-01-01

    We seek a correlation between low-level fluctuations in the cosmic x-ray background flux and the large numbers of galaxies found in deep optical imaging down to m(sub v) less than or equal to 24-26. These faint galaxies were optically identified by their morphology and color in deep multi-color CCD images and plate material. We searched for statistically significant correlations between these galaxies and low-level x-ray fluctuations at the same positions in multiple deep Einstein HRI observations in PAVO and in a ROSAT PSPC field. Our aim is to test the hypothesis that faint 'star burst' galaxies might contribute significantly to the cosmic x-ray background (at approximately 1 keV).

  14. Tutorial on photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Zhou, Yong; Yao, Junjie; Wang, Lihong V.

    2016-06-01

    Photoacoustic tomography (PAT) has become one of the fastest growing fields in biomedical optics. Unlike pure optical imaging, such as confocal microscopy and two-photon microscopy, PAT employs acoustic detection to image optical absorption contrast with high-resolution deep into scattering tissue. So far, PAT has been widely used for multiscale anatomical, functional, and molecular imaging of biological tissues. We focus on PAT's basic principles, major implementations, imaging contrasts, and recent applications.

  15. Study on the Classification of GAOFEN-3 Polarimetric SAR Images Using Deep Neural Network

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zhang, J.; Zhao, Z.

    2018-04-01

    The polarimetric synthetic aperture radar (POLSAR) imaging principle means that image quality is affected by speckle noise, so the recognition accuracy of traditional image classification methods is reduced by this interference. Since its introduction, the deep convolutional neural network has transformed traditional image processing methods and brought the field of computer vision to a new stage, with a strong ability to learn deep features and an excellent ability to fit large datasets. Based on the basic characteristics of polarimetric SAR images, this paper studies the classification of surface cover types using deep learning. We fused fully polarimetric SAR features at different scales into RGB images, iteratively trained the convolutional-neural-network-based GoogLeNet model, and then used the trained model to classify the validation data. First, referring to an optical image, we marked the surface coverage types in the 8 m resolution GF-3 POLSAR image and collected samples for each category. To meet the GoogLeNet requirement of 256 × 256 pixel image input, and taking into account the limited resolution of the SAR data, the original image was resampled during preprocessing. POLSAR image slice samples at different scales, with sampling intervals of 2 m and 1 m, were trained separately and validated against the verification dataset. The training accuracy of the GoogLeNet model trained with the POLSAR image resampled at 2 m is 94.89%, and that of the model trained with the image resampled at 1 m is 92.65%.
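
    The preprocessing amounts to slicing the pseudo-RGB POLSAR image into patches and resampling each to the 256 × 256 input that GoogLeNet expects. A rough sketch (the patch size, stride, image size, and use of skimage for resampling are illustrative assumptions):

        import numpy as np
        from skimage.transform import resize

        def make_patches(polsar_rgb, patch, stride, out_size=256):
            """Cut overlapping slices from a pseudo-RGB POLSAR image and
            resample each to the 256 x 256 input expected by GoogLeNet."""
            h, w, _ = polsar_rgb.shape
            for y in range(0, h - patch + 1, stride):
                for x in range(0, w - patch + 1, stride):
                    tile = polsar_rgb[y:y + patch, x:x + patch]
                    yield resize(tile, (out_size, out_size, 3), anti_aliasing=True)

        image = np.random.rand(1024, 1024, 3)   # hypothetical 8 m GF-3 pseudo-RGB image
        patches = list(make_patches(image, patch=128, stride=128))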

  16. The Hubble Space Telescope Medium Deep Survey with the Wide Field and Planetary Camera. 1: Methodology and results on the field near 3C 273

    NASA Technical Reports Server (NTRS)

    Griffiths, R. E.; Ratnatunga, K. U.; Neuschaefer, L. W.; Casertano, S.; Im, M.; Wyckoff, E. W.; Ellis, R. S.; Gilmore, G. F.; Elson, R. A. W.; Glazebrook, K.

    1994-01-01

    We present results from the Medium Deep Survey (MDS), a Key Project using the Hubble Space Telescope (HST). Wide Field Camera (WFC) images of random fields have been taken in 'parallel mode' with an effective resolution of 0.2 sec full width at half maximum (FWHM) in the V(F555W) and I(F785LP) filters. The exposures presented here were targeted on a field away from 3C 273, and resulted in approximately 5 hr integration time in each filter. Detailed morphological structure is seen in galaxy images with total integrated magnitudes down to V approximately = 22.5 and I approximately = 21.5. Parameters are estimated that best fit the observed galaxy images, and 143 objects are identified (including 23 stars) in the field to a fainter limiting magnitude of I approximately = 23.5. We outline the extragalactic goals of the HST Medium Deep Survey, summarize our basic data reduction procedures, and present number (magnitude) counts, a color-magnitude diagram for the field, surface brightness profiles for the brighter galaxies, and best-fit half-light radii for the fainter galaxies as a function of apparent magnitude. A median galaxy half-light radius of 0.4 sec is measured, and the distribution of galaxy sizes versus magnitude is presented. We observe an apparent deficit of galaxies with half-light radii between approximately 0.6 sec and 1.5 sec, with respect to standard no-evolution or mild evolution cosmological models. An apparent excess of compact objects (half-light radii approximately 0.1 sec) is also observed with respect to those models. Finally, we find a small excess in the number of faint galaxy pairs and groups with respect to a random low-redshift field sample.

  17. Resolution enhancement in deep-tissue nanoparticle imaging based on plasmonic saturated excitation microscopy

    NASA Astrophysics Data System (ADS)

    Deka, Gitanjal; Nishida, Kentaro; Mochizuki, Kentaro; Ding, Hou-Xian; Fujita, Katsumasa; Chu, Shi-Wei

    2018-03-01

    Recently, many resolution-enhancing techniques have been demonstrated, but most of them are severely limited for deep-tissue applications. For example, wide-field localization techniques lack optical sectioning, and structured-light techniques are susceptible to beam distortion due to scattering/aberration. Saturated excitation (SAX) microscopy, which relies on temporal modulation that is less affected when penetrating into tissues, should be the best candidate for deep-tissue resolution enhancement. Nevertheless, although fluorescence saturation has been successfully adopted in SAX, it is limited by photobleaching, and its practical resolution enhancement is less than two-fold. Recently, we demonstrated plasmonic SAX, which provides bleaching-free imaging with three-fold resolution enhancement. Here we show that the three-fold resolution enhancement is sustained throughout the whole working distance of an objective, i.e., 200 μm, which is the deepest super-resolution record to our knowledge, and is expected to extend into deeper tissues. In addition, SAX offers the advantage of background-free imaging by rejecting unwanted scattering background from biological tissues. This study provides an inspirational direction toward deep-tissue super-resolution imaging and has potential in tumor monitoring and beyond.
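
    The key SAX ingredient is temporal modulation: only the saturated focal region produces appreciable signal at harmonics of the modulation frequency, so demodulating at 2f or 3f sharpens the effective point spread function. A toy numpy sketch of the harmonic extraction (the saturable response and frequencies are illustrative, not a model of the plasmonic physics):

        import numpy as np

        fm = 10e3                                   # modulation frequency (hypothetical)
        fs = 1e6                                    # sampling rate (hypothetical)
        t = np.arange(4096) / fs
        excitation = 1.0 + 0.5 * np.cos(2 * np.pi * fm * t)
        signal = np.tanh(2.0 * excitation)          # toy saturable response

        spectrum = np.fft.rfft(signal * np.hanning(t.size))
        freqs = np.fft.rfftfreq(t.size, 1 / fs)

        def harmonic_amplitude(n):
            """Amplitude near the n-th harmonic of the modulation frequency."""
            return np.abs(spectrum[np.argmin(np.abs(freqs - n * fm))])

        # An image formed from the 2nd/3rd harmonic amplitude carries the
        # resolution enhancement, since only the saturated (focal) region
        # generates appreciable harmonic content.
        print([harmonic_amplitude(n) for n in (1, 2, 3)])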

  18. The Grism Lens-amplified Survey from Space (GLASS). IV. Mass Reconstruction of the Lensing Cluster Abell 2744 from Frontier Field Imaging and GLASS Spectroscopy

    NASA Astrophysics Data System (ADS)

    Wang, X.; Hoag, A.; Huang, K.-H.; Treu, T.; Bradač, M.; Schmidt, K. B.; Brammer, G. B.; Vulcani, B.; Jones, T. A.; Ryan, R. E., Jr.; Amorín, R.; Castellano, M.; Fontana, A.; Merlin, E.; Trenti, M.

    2015-09-01

    We present a strong and weak lensing reconstruction of the massive cluster Abell 2744, the first cluster for which deep Hubble Frontier Fields (HFF) images and spectroscopy from the Grism Lens-Amplified Survey from Space (GLASS) are available. By performing a targeted search for emission lines in multiply imaged sources using the GLASS spectra, we obtain five high-confidence spectroscopic redshifts and two tentative ones. We confirm one strongly lensed system by detecting the same emission lines in all three multiple images. We also search for additional line emitters blindly and use the full GLASS spectroscopic catalog to test reliability of photometric redshifts for faint line emitters. We see a reasonable agreement between our photometric and spectroscopic redshift measurements, when including nebular emission in photometric redshift estimations. We introduce a stringent procedure to identify only secure multiple image sets based on colors, morphology, and spectroscopy. By combining 7 multiple image systems with secure spectroscopic redshifts (at 5 distinct redshift planes) with 18 multiple image systems with secure photometric redshifts, we reconstruct the gravitational potential of the cluster pixellated on an adaptive grid, using a total of 72 images. The resulting mass map is compared with a stellar mass map obtained from the deep Spitzer Frontier Fields data to study the relative distribution of stars and dark matter in the cluster. We find that the stellar to total mass ratio varies substantially across the cluster field, suggesting that stars do not trace exactly the total mass in this interacting system. The maps of convergence, shear, and magnification are made available in the standard HFF format.

  19. THE GRISM LENS-AMPLIFIED SURVEY FROM SPACE (GLASS). IV. MASS RECONSTRUCTION OF THE LENSING CLUSTER ABELL 2744 FROM FRONTIER FIELD IMAGING AND GLASS SPECTROSCOPY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, X.; Schmidt, K. B.; Jones, T. A.

    2015-09-20

    We present a strong and weak lensing reconstruction of the massive cluster Abell 2744, the first cluster for which deep Hubble Frontier Fields (HFF) images and spectroscopy from the Grism Lens-Amplified Survey from Space (GLASS) are available. By performing a targeted search for emission lines in multiply imaged sources using the GLASS spectra, we obtain five high-confidence spectroscopic redshifts and two tentative ones. We confirm one strongly lensed system by detecting the same emission lines in all three multiple images. We also search for additional line emitters blindly and use the full GLASS spectroscopic catalog to test reliability of photometric redshifts for faint line emitters. We see a reasonable agreement between our photometric and spectroscopic redshift measurements, when including nebular emission in photometric redshift estimations. We introduce a stringent procedure to identify only secure multiple image sets based on colors, morphology, and spectroscopy. By combining 7 multiple image systems with secure spectroscopic redshifts (at 5 distinct redshift planes) with 18 multiple image systems with secure photometric redshifts, we reconstruct the gravitational potential of the cluster pixellated on an adaptive grid, using a total of 72 images. The resulting mass map is compared with a stellar mass map obtained from the deep Spitzer Frontier Fields data to study the relative distribution of stars and dark matter in the cluster. We find that the stellar to total mass ratio varies substantially across the cluster field, suggesting that stars do not trace exactly the total mass in this interacting system. The maps of convergence, shear, and magnification are made available in the standard HFF format.

  20. Adding the missing piece: Spitzer imaging of the HSC-Deep/PFS fields

    NASA Astrophysics Data System (ADS)

    Sajina, Anna; Bezanson, Rachel; Capak, Peter; Egami, Eiichi; Fan, Xiaohui; Farrah, Duncan; Greene, Jenny; Goulding, Andy; Lacy, Mark; Lin, Yen-Ting; Liu, Xin; Marchesini, Danilo; Moutard, Thibaud; Ono, Yoshiaki; Ouchi, Masami; Sawicki, Marcin; Strauss, Michael; Surace, Jason; Whitaker, Katherine

    2018-05-01

    We propose to observe a total of 7 sq. deg. to complete the Spitzer-IRAC coverage of the HSC-Deep survey fields. These fields are the sites of the Prime Focus Spectrograph (PFS) galaxy evolution survey, which will provide spectra of wide wavelength range and resolution for almost all M* galaxies at z ~ 0.7-1.7, and extend out to z ~ 7 for targeted samples. Our fields already have deep broadband and narrowband photometry in 12 bands spanning from u through K and a wealth of other ancillary data. We propose completing the matching-depth IRAC observations in the extended COSMOS, ELAIS-N1 and Deep2-3 fields. By complementing existing Spitzer coverage, this program will lead to a dataset with unprecedented spectro-photometric coverage across a total of 15 sq. deg. This dataset will have significant legacy value, as it samples a cosmic volume large enough to be representative of the full range of environments while providing sufficient information content per galaxy to confidently derive stellar population characteristics. This enables detailed studies of the growth and quenching of galaxies and their supermassive black holes in the context of a galaxy's local and large-scale environment.

  1. DeepSurveyCam--A Deep Ocean Optical Mapping System.

    PubMed

    Kwasnitschka, Tom; Köser, Kevin; Sticklus, Jan; Rothenbeck, Marcel; Weiß, Tim; Wenzlaff, Emanuel; Schoening, Timm; Triebe, Lars; Steinführer, Anja; Devey, Colin; Greinert, Jens

    2016-01-28

    Underwater photogrammetry, and in particular systematic visual surveys of the deep sea, are far less developed than similar techniques on land or in space. The main challenges are the rough conditions with extremely high pressure, the accessibility of target areas (container and ship deployment of robust sensors, then diving for hours to the ocean floor), and the limitations of localization technologies (no GPS). The absence of natural light complicates energy budget considerations for deep-diving, flash-equipped drones. Refraction effects influence geometric image formation with respect to field of view and focus, while attenuation and scattering degrade the radiometric image quality and limit the effective visibility. To address these issues, we present an AUV-based optical system intended for autonomous visual mapping of large areas of the seafloor (square kilometers) in up to 6000 m water depth. We compare it to existing systems, discuss tradeoffs such as resolution vs. mapped area, and show results from a recent deployment with 90,000 mapped square meters of deep ocean floor.

  2. High resolution microphotonic needle for endoscopic imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Tadayon, Mohammad Amin; Mohanty, Aseema; Roberts, Samantha P.; Barbosa, Felippe; Lipson, Michal

    2017-02-01

    GRIN (graded-index) lenses have revolutionized micro endoscopy, enabling deep tissue imaging with high resolution. The challenges of traditional GRIN lenses are their large size (when compared with the field of view) and their limited resolution, a consequence of the relatively low NA of standard graded-index lenses. Here we introduce a novel micro-needle platform for endoscopy with much higher resolution than traditional GRIN lenses and a FOV that corresponds to the whole cross section of the needle. The platform is based on a polymeric (SU-8) waveguide integrated with a microlens, microfabricated on a silicon substrate using a unique molding process. Due to the high index of refraction of the material, the NA of the needle is much higher than that of traditional GRIN lenses. We tested the probe in a fluorescent dye solution (19.6 µM Alexa Fluor 647 solution) and measured a numerical aperture of 0.25, a focal length of about 175 µm and a minimal spot size of about 1.6 µm. We show that the platform can image a sample with a field of view corresponding to the cross-sectional area of the waveguide (80x100 µm2). The waveguide size can in principle be modified to vary the size of the imaging field of view. This demonstration, combined with our previous work demonstrating our ability to implant the high-NA needle in a live animal, shows that the proposed system can be used for deep tissue imaging with very high resolution and a large field of view.

  3. VizieR Online Data Catalog: Spitzer-CANDELS catalog within 5 deep fields (Ashby+, 2015)

    NASA Astrophysics Data System (ADS)

    Ashby, M. L. N.; Willner, S. P.; Fazio, G. G.; Dunlop, J. S.; Egami, E.; Faber, S. M.; Ferguson, H. C.; Grogin, N. A.; Hora, J. L.; Huang, J.-S.; Koekemoer, A. M.; Labbe, I.; Wang, Z.

    2015-08-01

    We chose to locate S-CANDELS inside the wider and shallower fields already covered by the Spitzer Extended Deep Survey (SEDS), in regions that enjoy deep optical and NIR imaging from HST/CANDELS. These S-CANDELS fields are thus the Extended GOODS-south (aka the GEMS field, hereafter ECDFS; Rix et al. 2004ApJS..152..163R; Castellano et al. 2010A&A...511A..20C), the Extended GOODS-north (HDFN; Giavalisco et al. 2004, II/261; Wang et al. 2010, J/ApJS/187/251; Hathi et al. 2012ApJ...757...43H; Lin et al. 2012ApJ...756...71L), the UKIDSS UDS (aka the Subaru/XMM Deep Field, Ouchi et al. 2001ApJ...558L..83O; Lawrence et al. 2007, II/319), a narrow field within the EGS (Davis et al. 2007ApJ...660L...1D; Bielby et al. 2012A&A...545A..23B), and a strip within the UltraVista deep survey of the larger COSMOS field (Scoville et al. 2007ApJS..172...38S; McCracken et al. 2012, J/A+A/544/A156). The S-CANDELS observing strategy was designed to maximize the area covered to full depth within the CANDELS area. Each field was visited twice with six months separating the two visits. Table 1 lists the epochs for each field. All of the IRAC full-depth coverage is within the SEDS area (Ashby et al. 2013, J/ApJ/769/80), and almost all is within the area covered by HST for CANDELS. (6 data files).

  4. Deepest Wide-Field Colour Image in the Southern Sky

    NASA Astrophysics Data System (ADS)

    2003-01-01

    LA SILLA CAMERA OBSERVES CHANDRA DEEP FIELD SOUTH. ESO PR Photo 02a/03 shows a three-colour composite image of the Chandra Deep Field South (CDF-S), obtained with the Wide Field Imager (WFI) camera on the 2.2-m MPG/ESO telescope at the ESO La Silla Observatory (Chile). It was produced by the combination of about 450 images with a total exposure time of nearly 50 hours. The field measures 36 x 34 arcmin^2; North is up and East is left. Technical information is available below. The combined efforts of three European teams of astronomers, targeting the same sky field in the southern constellation Fornax (The Oven), have enabled them to construct a very deep, true-colour image, opening an exceptionally clear view towards the distant universe. The image (PR Photo 02a/03) covers an area somewhat larger than the full moon. It displays more than 100,000 galaxies, several thousand stars and hundreds of quasars. It is based on images with a total exposure time of nearly 50 hours, collected under good observing conditions with the Wide Field Imager (WFI) on the MPG/ESO 2.2-m telescope at the ESO La Silla Observatory (Chile), many of them extracted from the ESO Science Data Archive. The position of this southern sky field was chosen by Riccardo Giacconi (Nobel Laureate in Physics 2002) at a time when he was Director General of ESO, together with Piero Rosati (ESO). It was selected as a sky region towards which the NASA Chandra X-ray satellite observatory, launched in July 1999, would be pointed while carrying out a very long exposure (lasting a total of 1 million seconds, or 278 hours) in order to detect the faintest possible X-ray sources. The field is now known as the Chandra Deep Field South (CDF-S). The new WFI photo of CDF-S does not reach quite as deep as the available images of the "Hubble Deep Fields" (HDF-N in the northern and HDF-S in the southern sky, cf. e.g. ESO PR Photo 35a/98), but the field of view is about 200 times larger. The present image displays about 50 times more galaxies than the HDF images, and therefore provides a more representative view of the universe. The WFI CDF-S image will now form a most useful basis for the very extensive and systematic census of the population of distant galaxies and quasars, allowing at once a detailed study of all evolutionary stages of the universe since it was about 2 billion years old. These investigations have started and are expected to provide information about the evolution of galaxies in unprecedented detail. They will offer insights into the history of star formation and how the internal structure of galaxies changes with time and, not least, throw light on how these two evolutionary aspects are interconnected. GALAXIES IN THE WFI IMAGE. PR Photo 02b/03 contains a collection of twelve subfields from the full WFI Chandra Deep Field South (WFI CDF-S), centred on (pairs or groups of) galaxies. Each of the subfields measures 2.5 x 2.5 arcmin^2 (635 x 658 pix^2; 1 pixel = 0.238 arcsec). North is up and East is left. Technical information is available below.
The WFI CDF-S colour image - of which the full field is shown in PR Photo 02a/03 - was constructed from all available observations in the optical B-, V- and R-bands obtained under good conditions with the Wide Field Imager (WFI) on the 2.2-m MPG/ESO telescope at the ESO La Silla Observatory (Chile), and now stored in the ESO Science Data Archive. It is the "deepest" image ever taken with this instrument. It covers a sky field measuring 36 x 34 arcmin^2, i.e., an area somewhat larger than that of the full moon. The observations were collected during a period of nearly four years, beginning in January 1999 when the WFI instrument was first installed (cf. ESO PR 02/99) and ending in October 2002. Altogether, nearly 50 hours of exposure were collected in the three filters combined here, cf. the technical information below. Although it is possible to identify more than 100,000 galaxies in the image - some of which are shown in PR Photo 02b/03 - it is still remarkably "empty" by astronomical standards. Even the brightest stars in the field (of visual magnitude 9) can hardly be seen by human observers with binoculars. In fact, the area density of bright, nearby galaxies is only half of what it is in "normal" sky fields. Comparatively empty fields like this one provide an unusually clear view towards the distant regions in the universe and thus open a window towards the earliest cosmic times. Research projects in the Chandra Deep Field South. PR Photo 02c-d/03 shows two sky fields within the WFI image of CDF-S, reproduced at full (pixel) size to illustrate the exceptional information richness of these data. The subfields measure 6.8 x 7.8 arcmin^2 (1717 x 1975 pixels) and 10.1 x 10.5 arcmin^2 (2545 x 2635 pixels), respectively. North is up and East is left. Technical information is available below. Astronomers from different teams and disciplines have been quick to join forces in a world-wide co-ordinated effort around the Chandra Deep Field South. Observations of this area are now being performed by some of the most powerful astronomical facilities and instruments. They include space-based X-ray and infrared observations by the ESA XMM-Newton, the NASA CHANDRA, Hubble Space Telescope (HST) and soon SIRTF (scheduled for launch in a few months), as well as imaging and spectroscopical observations in the infrared and optical part of the spectrum by telescopes at the ground-based observatories of ESO (La Silla and Paranal) and NOAO (Kitt Peak and Tololo). A huge database is currently being created that will help to analyse the evolution of galaxies in all currently feasible respects. All participating teams have agreed to make their data on this field publicly available, thus providing the world-wide astronomical community with a unique opportunity to perform competitive research, joining forces within this vast scientific project. Concerted observations. The optical true-colour WFI image presented here forms an important part of this broad, concerted approach. It combines observations of three scientific teams that have engaged in complementary scientific projects, thereby capitalizing on this very powerful combination of their individual observations.
The following teams are involved in this work: * COMBO-17 (Classifying Objects by Medium-Band Observations in 17 filters): an international collaboration led by Christian Wolf and other scientists at the Max-Planck-Institut für Astronomie (MPIA, Heidelberg, Germany). This team used 51 hours of WFI observing time to obtain images through five broad-band and twelve medium-band optical filters in the visual spectral region in order to measure the distances (by means of "photometric redshifts") and star-formation rates of about 10,000 galaxies, thereby also revealing their evolutionary status. * EIS (ESO Imaging Survey): a team of visiting astronomers from the ESO community and beyond, led by Luiz da Costa (ESO). They observed the CDF-S for 44 hours in six optical bands with the WFI camera on the MPG/ESO 2.2-m telescope and 28 hours in two near-infrared bands with the SOFI instrument at the ESO 3.5-m New Technology Telescope (NTT), both at La Silla. These observations form part of the Deep Public Imaging Survey that covers a total sky area of 3 square degrees. * GOODS (The Great Observatories Origins Deep Survey): another international team (on the ESO side, led by Catherine Cesarsky) that focusses on the coordination of deep space- and ground-based observations on a smaller, central area of the CDF-S in order to image the galaxies in many different spectral wavebands, from X-rays to radio. GOODS has contributed 40 hours of WFI time for observations in three broad-band filters that were designed for the selection of targets to be spectroscopically observed with the ESO Very Large Telescope (VLT) at the Paranal Observatory (Chile), for which over 200 hours of observations are planned. About 10,000 galaxies will be spectroscopically observed in order to determine their redshift (distance), star formation rate, etc. Another important contribution to this large research undertaking will come from the GEMS project. This is an "HST treasury programme" (with Hans-Walter Rix from MPIA as Principal Investigator) which observes the 10,000 galaxies identified in COMBO-17 - and eventually the entire WFI field with HST - to show the evolution of their shapes with time. Great questions. With the combination of data from many wavelength ranges now at hand, the astronomers are embarking upon studies of the many different processes in the universe. They expect to shed more light on several important cosmological questions, such as: * How and when was the first generation of stars born? * When exactly was the neutral hydrogen in the universe first ionized by powerful radiation emitted from the first stars and active galactic nuclei? * How did galaxies and groups of galaxies evolve during the past 13 billion years? * What is the true nature of those elusive objects that are only seen at infrared and submillimetre wavelengths (cf. ESO PR 23/02)? * Which fraction of galaxies had an "active" nucleus (probably with a black hole at the centre) in their past, and how long did this phase last? Moreover, since these extensive optical observations were obtained in the course of a dozen observing periods during several years, it is also possible to perform studies of certain variable phenomena: * How many variable sources are seen and what are their types and properties? * How many supernovae are detected per time interval, i.e. what is the supernova frequency at different cosmic epochs? * How do those processes depend on each other?
This is just a short and very incomplete list of questions astronomers world-wide will address using all the complementary observations. No doubt the coming studies of the Chandra Deep Field South - with this and other data - will be most exciting and instructive! Other wide-field images: other wide-field images from the WFI have been published in various ESO press releases during the past four years; they are also available at the WFI Photo Gallery. A collection of full-resolution files (TIFF-format) is available on a WFI CD-ROM. Technical Information: the very extensive data reduction and colour image processing needed to produce these images were performed by Mischa Schirmer and Thomas Erben at the "Wide Field Expertise Center" of the Institut für Astrophysik und Extraterrestrische Forschung der Universität Bonn (IAEF) in Germany. It was done by means of a software pipeline specialised for the reduction of data from multiple-CCD wide-field imaging cameras. This pipeline is mainly based on publicly available software modules and algorithms (EIS, FLIPS, LDAC, Terapix, Wifix). The image was constructed from about 150 exposures in each of the following wavebands: B-band (centred at wavelength 456 nm; here rendered as blue, 15.8 hours total exposure time), V-band (540 nm; green, 15.6 hours) and R-band (652 nm; red, 17.8 hours). Only images taken under sufficiently good observing conditions (defined as seeing less than 1.1 arcsec) were included. In total, 450 images were assembled to produce this colour image, together with about as many calibration images (biases, darks and flats). More than 2 Terabytes (TB) of temporary files were produced during the extensive data reduction. Parallel processing of all data sets took about two weeks on a four-processor Sun Enterprise 450 workstation and a 1.8 GHz dual-processor Linux PC. The final colour image was assembled in Adobe Photoshop. The observations were performed by ESO (GOODS, EIS) and the COMBO-17 collaboration in the period 1/1999-10/2002.

  5. Tutorial on photoacoustic tomography

    PubMed Central

    Zhou, Yong; Yao, Junjie; Wang, Lihong V.

    2016-01-01

    Photoacoustic tomography (PAT) has become one of the fastest growing fields in biomedical optics. Unlike pure optical imaging, such as confocal microscopy and two-photon microscopy, PAT employs acoustic detection to image optical absorption contrast with high-resolution deep into scattering tissue. So far, PAT has been widely used for multiscale anatomical, functional, and molecular imaging of biological tissues. We focus on PAT’s basic principles, major implementations, imaging contrasts, and recent applications. PMID:27086868

  6. High-Resolution 7T MR Imaging of the Motor Cortex in Amyotrophic Lateral Sclerosis.

    PubMed

    Cosottini, M; Donatelli, G; Costagli, M; Caldarazzo Ienco, E; Frosini, D; Pesaresi, I; Biagi, L; Siciliano, G; Tosetti, M

    2016-03-01

    Amyotrophic lateral sclerosis is a progressive motor neuron disorder that involves degeneration of both upper and lower motor neurons. In patients with amyotrophic lateral sclerosis, pathologic studies and ex vivo high-resolution MR imaging at ultra-high field strength revealed the co-localization of iron and activated microglia distributed in the deep layers of the primary motor cortex. The aims of the study were to measure the cortical thickness and evaluate the distribution of iron-related signal changes in the primary motor cortex of patients with amyotrophic lateral sclerosis as possible in vivo biomarkers of upper motor neuron impairment. Twenty-two patients with definite amyotrophic lateral sclerosis and 14 healthy subjects underwent a high-resolution 2D multiecho gradient-recalled sequence targeted on the primary motor cortex by using a 7T scanner. Image analysis consisted of the visual evaluation and quantitative measurement of signal intensity and cortical thickness of the primary motor cortex in patients and controls. Qualitative and quantitative MR imaging parameters were correlated with electrophysiologic and laboratory data and with clinical scores. Ultra-high field MR imaging revealed atrophy and signal hypointensity in the deep layers of the primary motor cortex of patients with amyotrophic lateral sclerosis with a diagnostic accuracy of 71%. Signal hypointensity of the deep layers of the primary motor cortex correlated with upper motor neuron impairment (r = -0.47; P < .001) and with disease progression rate (r = -0.60; P = .009). The combined high spatial resolution and sensitivity to paramagnetic substances of 7T MR imaging demonstrate in vivo signal changes of the cerebral motor cortex that resemble the distribution of activated microglia within the cortex of patients with amyotrophic lateral sclerosis. Cortical thinning and signal hypointensity of the deep layers of the primary motor cortex could constitute a marker of upper motor neuron impairment in patients with amyotrophic lateral sclerosis. © 2016 by American Journal of Neuroradiology.

  7. DeepAnomaly: Combining Background Subtraction and Deep Learning for Detecting Obstacles and Anomalies in an Agricultural Field.

    PubMed

    Christiansen, Peter; Nielsen, Lars N; Steen, Kim A; Jørgensen, Rasmus N; Karstoft, Henrik

    2016-11-11

    Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained for detecting and classifying a predefined set of object types. These algorithms have difficulties in detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles such as people and animals occur rarely and are of distinct appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogeneous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded and unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors including "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45-90 m) than RCNN, while RCNN has similar performance at short range (0-30 m). However, DeepAnomaly has far fewer model parameters and (182 ms / 25 ms =) a 7.28-times faster processing time per image. Unlike most CNN-based methods, the high accuracy, low computation time and low memory footprint make it suitable for a real-time system running on an embedded GPU (Graphics Processing Unit).
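
    Although the paper's exact architecture is not reproduced here, the combination it describes can be sketched as a per-cell background model maintained over CNN feature maps, flagging cells whose features deviate strongly from the accumulated field statistics (all shapes, learning rates, and thresholds below are hypothetical):

        import numpy as np

        class FeatureBackgroundModel:
            """Running per-cell Gaussian background model over CNN feature maps;
            cells far from the background statistics are flagged as anomalies."""
            def __init__(self, shape, lr=0.05):
                self.mean = np.zeros(shape)
                self.var = np.ones(shape)
                self.lr = lr

            def update(self, feat):
                d = feat - self.mean
                self.mean += self.lr * d
                self.var += self.lr * (d ** 2 - self.var)

            def anomaly_map(self, feat, thresh=4.0):
                # Mean squared z-score per spatial cell across channels.
                dist = ((feat - self.mean) ** 2 / self.var).mean(axis=-1)
                return dist > thresh

        # Hypothetical 15 x 20 feature grid with 256 channels from a backbone CNN.
        model = FeatureBackgroundModel((15, 20, 256))
        for _ in range(50):                        # accumulate the homogeneous field
            model.update(np.random.rand(15, 20, 256))
        mask = model.anomaly_map(np.random.rand(15, 20, 256))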

  8. DeepAnomaly: Combining Background Subtraction and Deep Learning for Detecting Obstacles and Anomalies in an Agricultural Field

    PubMed Central

    Christiansen, Peter; Nielsen, Lars N.; Steen, Kim A.; Jørgensen, Rasmus N.; Karstoft, Henrik

    2016-01-01

    Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained for detecting and classifying a predefined set of object types. These algorithms have difficulties in detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles such as people and animals occur rarely and are of distinct appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogeneous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded and unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors including “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks” (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45–90 m) than RCNN, while RCNN has similar performance at short range (0–30 m). However, DeepAnomaly has far fewer model parameters and (182 ms / 25 ms =) a 7.28-times faster processing time per image. Unlike most CNN-based methods, the high accuracy, low computation time and low memory footprint make it suitable for a real-time system running on an embedded GPU (Graphics Processing Unit). PMID:27845717

  9. Young Galaxy Candidates in the Hubble Frontier Fields. IV. MACS J1149.5+2223

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Zitrin, Adi; Infante, Leopoldo; Laporte, Nicolas; Huang, Xingxing; Moustakas, John; Ford, Holland C.; Shu, Xinwen; Wang, Junxian; Diego, Jose M.; Bauer, Franz E.; Troncoso Iribarren, Paulina; Broadhurst, Tom; Molino, Alberto

    2017-02-01

    We search for high-redshift dropout galaxies behind the Hubble Frontier Fields (HFF) galaxy cluster MACS J1149.5+2223, a powerful cosmic lens that has revealed a number of unique objects in its field. Using the deep images from the Hubble and Spitzer space telescopes, we find 11 galaxies at z > 7 in the MACS J1149.5+2223 cluster field, and 11 in its parallel field. The high-redshift nature of the bright z ≃ 9.6 galaxy MACS1149-JD, previously reported by Zheng et al., is further supported by non-detection in the extremely deep optical images from the HFF campaign. With the new photometry, the best photometric redshift solution for MACS1149-JD reduces slightly to z = 9.44 ± 0.12. The young galaxy has an estimated stellar mass of (7 ± 2) × 10^8 M⊙, and was formed at z = 13.2 (+1.9, -1.6) when the universe was ≈300 Myr old. Data available for the first four HFF clusters have already enabled us to find faint galaxies down to an intrinsic magnitude of M_UV ≃ -15.5, approximately a factor of 10 deeper than the parallel fields.

  10. Fast subsurface fingerprint imaging with full-field optical coherence tomography system equipped with a silicon camera.

    PubMed

    Auksorius, Egidijus; Boccara, A Claude

    2017-09-01

    Images recorded below the surface of a finger can have more details and be of higher quality than conventional surface fingerprint images. This is particularly true when the quality of the surface fingerprints is compromised by, for example, moisture or surface damage. However, there is an unmet need for an inexpensive fingerprint sensor that is able to acquire high-quality images deep below the surface in a short time. To this end, we report on a cost-effective full-field optical coherence tomography system comprised of a silicon camera and a powerful near-infrared LED light source. The system, for example, is able to record 1.7 cm × 1.7 cm en face images in 0.12 s with a spatial sampling rate of 2116 dots per inch and a sensitivity of 93 dB. We show that the system can be used to image internal fingerprints and sweat ducts with good contrast. Finally, to demonstrate its biometric performance, we acquired subsurface fingerprint images from 240 individual fingers and estimated the equal error rate to be ∼0.8%. The developed instrument could also be used in other en face deep-tissue imaging applications, such as in vivo skin imaging, because of its high sensitivity. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
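
    The equal error rate quoted above is the operating point where the false rejection and false acceptance rates coincide. A small numpy sketch of its estimation from match scores (the score distributions below are synthetic stand-ins, not the study's data):

        import numpy as np

        def equal_error_rate(genuine, impostor):
            """Estimate the EER from genuine and impostor match scores
            (higher score = better match)."""
            thresholds = np.sort(np.concatenate([genuine, impostor]))
            frr = np.array([(genuine < t).mean() for t in thresholds])   # false rejects
            far = np.array([(impostor >= t).mean() for t in thresholds]) # false accepts
            i = np.argmin(np.abs(frr - far))
            return (frr[i] + far[i]) / 2

        rng = np.random.default_rng(1)
        genuine = rng.normal(0.8, 0.1, 1000)       # hypothetical score distributions
        impostor = rng.normal(0.4, 0.1, 10000)
        print(equal_error_rate(genuine, impostor))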

  11. Accuracy of ultra-wide-field fundus ophthalmoscopy-assisted deep learning, a machine-learning technology, for detecting age-related macular degeneration.

    PubMed

    Matsuba, Shinji; Tabuchi, Hitoshi; Ohsugi, Hideharu; Enno, Hiroki; Ishitobi, Naofumi; Masumoto, Hiroki; Kiuchi, Yoshiaki

    2018-05-09

    To predict exudative age-related macular degeneration (AMD), we combined a deep convolutional neural network (DCNN), a machine-learning algorithm, with Optos, an ultra-wide-field fundus imaging system. First, to evaluate the diagnostic accuracy of the DCNN, 364 photographic images (AMD: 137) were amplified, and the area under the curve (AUC), sensitivity and specificity were examined. Furthermore, in order to compare the diagnostic abilities of the DCNN and six ophthalmologists, we prepared a set of 84 images comprising 50% normal and 50% wet-AMD data, and calculated the correct answer rate, specificity, sensitivity, and response times. The DCNN exhibited 100% sensitivity and 97.31% specificity for wet-AMD images, with an average AUC of 99.76%. Moreover, comparing the diagnostic abilities of the DCNN versus the six ophthalmologists, the average accuracy of the DCNN was 100%. On the other hand, the accuracy of the ophthalmologists, determined only by Optos images without a fundus examination, was 81.9%. A combination of the DCNN with Optos images is not better than a medical examination; however, it can identify exudative AMD with a high level of accuracy. Our system is considered useful for screening and telemedicine.
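
    The reported figures (AUC, sensitivity, specificity) are standard binary-classification metrics and can be computed from a classifier's output probabilities, for example with scikit-learn (the labels and scores below are random placeholders, not the study's data):

        import numpy as np
        from sklearn.metrics import roc_auc_score, confusion_matrix

        y_true = np.random.randint(0, 2, 364)      # hypothetical labels (1 = wet AMD)
        y_score = np.random.rand(364)              # hypothetical DCNN probabilities
        y_pred = (y_score >= 0.5).astype(int)

        auc = roc_auc_score(y_true, y_score)
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        print(auc, sensitivity, specificity)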

  12. VizieR Online Data Catalog: Ultradiffuse galaxies found in deep HST images of HFF (Lee+, 2017)

    NASA Astrophysics Data System (ADS)

    Lee, M. G.; Kang, J.; Lee, J. H.; Jang, I. S.

    2018-03-01

    Abell S1063 and Abell 2744 are located at redshifts z=0.348 and z=0.308, respectively, so their HST fields cover a relatively large fraction of each cluster. They are among the target galaxy clusters of the Hubble Frontier Fields (HFF) Program, for which deep Hubble Space Telescope (HST) images are available (Lotz+ 2017ApJ...837...97L). We used ACS/F814W(I) and WFC3/F105W(Y) images for Abell S1063 and Abell 2744 in the HFF. The effective wavelengths of the F814W and F105W filters at the redshifts of Abell S1063 and Abell 2744 (6220 and 8030Å) correspond approximately to SDSS r' and Cousins I (or SDSS i') in the rest frame, respectively. Figure 1 displays color images of the HST fields for Abell S1063 and Abell 2744. In this study we adopt the cosmological parameters H0=73km/s/Mpc, ΩM=0.27, and ΩΛ=0.73. For these parameters, the luminosity distance moduli of Abell S1063 and Abell 2744 are (m-M)0=41.25 (d=1775Mpc) and 40.94 (d=1540Mpc), and the angular diameter distances are 978 and 901Mpc, respectively. (5 data files).
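
    The quoted distance moduli and angular diameter distances follow directly from the adopted cosmology and can be reproduced, for instance, with astropy (the values should come out near the catalog's 41.25/40.94 mag, 1775/1540 Mpc, and 978/901 Mpc):

        from astropy.cosmology import LambdaCDM

        # Cosmology adopted in the catalog description.
        cosmo = LambdaCDM(H0=73, Om0=0.27, Ode0=0.73)

        for name, z in [("Abell S1063", 0.348), ("Abell 2744", 0.308)]:
            print(name,
                  cosmo.distmod(z),                    # (m - M)_0
                  cosmo.luminosity_distance(z),        # luminosity distance
                  cosmo.angular_diameter_distance(z))  # angular diameter distance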

  13. A survey on object detection in optical remote sensing images

    NASA Astrophysics Data System (ADS)

    Cheng, Gong; Han, Junwei

    2016-07-01

    Object detection in optical remote sensing images, being a fundamental but challenging problem in the field of aerial and satellite image analysis, plays an important role for a wide range of applications and has received significant attention in recent years. While numerous methods exist, a deep review of the literature concerning generic object detection is still lacking. This paper aims to provide a review of the recent progress in this field. Different from several previously published surveys that focus on a specific object class such as buildings or roads, we concentrate on more generic object categories, including, but not limited to, road, building, tree, vehicle, ship, airport and urban area. Covering about 270 publications, we survey (1) template matching-based object detection methods, (2) knowledge-based object detection methods, (3) object-based image analysis (OBIA)-based object detection methods, (4) machine learning-based object detection methods, and (5) five publicly available datasets and three standard evaluation metrics. We also discuss the challenges of current studies and propose two promising research directions, namely deep learning-based feature representation and weakly supervised learning-based geospatial object detection. It is our hope that this survey will help researchers gain a better understanding of this research field.

  14. The Drizzling Cookbook

    NASA Astrophysics Data System (ADS)

    Gonzaga, S.; Biretta, J.; Wiggs, M. S.; Hsu, J. C.; Smith, T. E.; Bergeron, L.

    1998-12-01

    The drizzle software combines dithered images while preserving photometric accuracy, enhancing resolution, and removing geometric distortion. A recent upgrade also allows removal of cosmic rays from single images at each dither pointing. This document gives detailed examples illustrating drizzling procedures for six cases: WFPC2 observations of a deep field, a crowded field, a large galaxy, and a planetary nebula; STIS/CCD observations of an HDF-North field; and NICMOS/NIC2 observations of the Egg Nebula. Command scripts and input images for each example are available on the WFPC2 WWW website. Users are encouraged to retrieve the data for the case that most closely resembles their own data and then practice and experiment with drizzling the example.

  15. The Ring Sculptor

    NASA Image and Video Library

    2006-09-08

    Prometheus zooms across the Cassini spacecraft's field of view, attended by faint streamers and deep gores in the F ring. This movie sequence of five images shows the F ring shepherd moon shaping the ring's inner edge.

  16. The ROSAT Deep Survey. 2; Optical Identification, Photometry and Spectra of X-Ray Sources in the Lockman Field

    NASA Technical Reports Server (NTRS)

    Schmidt, M.; Hasinger, G.; Gunn, J.; Schneider, D.; Burg, R.; Giacconi, R.; Lehmann, I.; MacKenty, J.; Truemper, J.; Zamorani, G.

    1998-01-01

    The ROSAT Deep Survey includes a complete sample of 50 X-ray sources with fluxes in the 0.5 - 2 keV band larger than 5.5 x 10(exp -15)erg/sq cm/s in the Lockman field (Hasinger et al., Paper 1). We have obtained deep broad-band CCD images of the field and spectra of many optical objects near the positions of the X-ray sources. We define systematically the process leading to the optical identifications of the X-ray sources. For this purpose, we introduce five identification (ID) classes that characterize the process in each case. Among the 50 X-ray sources, we identify 39 AGNs, 3 groups of galaxies, 1 galaxy and 3 galactic stars. Four X-ray sources remain unidentified so far; two of these objects may have an unusually large ratio of X-ray to optical flux.

  17. Reconstruction of Horizontal Plasma Motions at the Photosphere from Intensitygrams: A Comparison Between DeepVel, LCT, FLCT, and CST

    NASA Astrophysics Data System (ADS)

    Tremblay, Benoit; Roudier, Thierry; Rieutord, Michel; Vincent, Alain

    2018-04-01

    Direct measurements of plasma motions in the photosphere are limited to the line-of-sight component of the velocity. Several algorithms have therefore been developed to reconstruct the transverse components from observed continuum images or magnetograms. We compare the space and time averages of horizontal velocity fields in the photosphere inferred from pairs of consecutive intensitygrams by the LCT, FLCT, and CST methods and the DeepVel neural network in order to identify the method that is best suited for generating synthetic observations to be used for data assimilation. The Stein and Nordlund ( Astrophys. J. Lett. 753, L13, 2012) magnetoconvection simulation is used to generate synthetic SDO/HMI intensitygrams and reference flows to train DeepVel. Inferred velocity fields show that DeepVel performs best at subgranular and granular scales and is second only to FLCT at mesogranular and supergranular scales.

  18. Multispectral THz-VIS passive imaging system for hidden threats visualization

    NASA Astrophysics Data System (ADS)

    Kowalski, Marcin; Palka, Norbert; Szustakowski, Mieczyslaw

    2013-10-01

    Terahertz imaging is the latest entry into the crowded field of imaging technologies, and many applications are emerging for the relatively new technology. THz radiation penetrates deep into nonpolar and nonmetallic materials such as paper, plastic, clothes, wood, and ceramics that are usually opaque at optical wavelengths. T-rays have large potential in the field of hidden-object detection because they are not harmful to humans. The main difficulty in THz imaging systems is low image quality; it is therefore justified to combine THz images with high-resolution images from a visible camera. An imaging system is usually composed of various subsystems, and many imaging systems use devices working in various spectral ranges. Our goal is to build a system, harmless to humans, for screening and detection of hidden objects using THz and VIS cameras.

  19. Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning.

    PubMed

    Norouzzadeh, Mohammad Sadegh; Nguyen, Anh; Kosmala, Margaret; Swanson, Alexandra; Palmer, Meredith S; Packer, Craig; Clune, Jeff

    2018-06-19

    Having accurate, detailed, and up-to-date information about the location and behavior of animals in the wild would improve our ability to study and conserve ecosystems. We investigate the ability to automatically, accurately, and inexpensively collect such data, which could help catalyze the transformation of many fields of ecology, wildlife biology, zoology, conservation biology, and animal behavior into "big data" sciences. Motion-sensor "camera traps" enable collecting wildlife pictures inexpensively, unobtrusively, and frequently. However, extracting information from these pictures remains an expensive, time-consuming, manual task. We demonstrate that such information can be automatically extracted by deep learning, a cutting-edge type of artificial intelligence. We train deep convolutional neural networks to identify, count, and describe the behaviors of 48 species in the 3.2 million-image Snapshot Serengeti dataset. Our deep neural networks automatically identify animals with >93.8% accuracy, and we expect that number to improve rapidly in years to come. More importantly, if our system classifies only images it is confident about, our system can automate animal identification for 99.3% of the data while still performing at the same 96.6% accuracy as that of crowdsourced teams of human volunteers, saving >8.4 y (i.e., >17,000 h at 40 h/wk) of human labeling effort on this 3.2 million-image dataset. Those efficiency gains highlight the importance of using deep neural networks to automate data extraction from camera-trap images, reducing a roadblock for this widely used technology. Our results suggest that deep learning could enable the inexpensive, unobtrusive, high-volume, and even real-time collection of a wealth of information about vast numbers of animals in the wild. Copyright © 2018 the Author(s). Published by PNAS.
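
    The 99.3% automation figure above comes from confidence thresholding: the network's label is kept only when its top softmax probability clears a threshold, and the remaining images are routed to human volunteers. A minimal sketch (the threshold and the random placeholder outputs are illustrative, so the fraction printed here is not meaningful):

        import numpy as np

        def automate_confident(probs, threshold):
            """Keep the network's label only where its top softmax probability
            exceeds a threshold; route the rest to human volunteers."""
            confident = probs.max(axis=1) >= threshold
            labels = probs.argmax(axis=1)
            return labels[confident], confident.mean()  # auto labels, automated fraction

        # Hypothetical 48-class softmax outputs for 10,000 images.
        probs = np.random.dirichlet(np.ones(48), size=10000)
        labels, frac = automate_confident(probs, threshold=0.9)
        print(frac)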

  20. Using Gaia as an Astrometric Tool for Deep Ground-based Surveys

    NASA Astrophysics Data System (ADS)

    Casetti-Dinescu, Dana I.; Girard, Terrence M.; Schriefer, Michael

    2018-04-01

    Gaia DR1 positions are used to astrometrically calibrate three epochs' worth of Subaru SuprimeCam images in the fields of the globular cluster NGC 2419 and the Sextans dwarf spheroidal galaxy. Distortion-correction "maps" are constructed from a combination of offset dithers and reference to Gaia DR1. These are used to derive absolute proper motions in the field of NGC 2419. Notably, we identify the photometrically-detected Monoceros structure in the foreground of NGC 2419 as a kinematically cold population of stars, distinct from Galactic-field stars. This project demonstrates the feasibility of combining Gaia with deep, ground-based surveys, thus extending high-quality astrometry to magnitudes beyond the limits of Gaia.

  1. Deep Spitzer/IRAC Imaging of the Subaru Deep Field

    NASA Astrophysics Data System (ADS)

    Jiang, Linhua; Egami, Eiichi; Cohen, Seth; Fan, Xiaohui; Ly, Chun; Mechtley, Matthew; Windhorst, Rogier

    2013-10-01

    The last decade saw great progress in our understanding of the distant Universe as a number of objects at z > 6 were discovered. The Subaru Deep Field (SDF) project has played an important role in the study of high-z galaxies. The SDF is unique: it covers a large area of 850 sq arcmin; it has extremely deep optical images in a series of broad and narrow bands; and it has the largest sample of spectroscopically-confirmed galaxies known at z >= 6, including ~100 Lyman alpha emitters (LAEs) and ~50 Lyman break galaxies (LBGs). Here we propose to carry out deep IRAC imaging observations of the central 75% of the SDF. The proposed observations, together with those from our previous Spitzer programs, will reach a depth of ~10 hours and enable the first complete census of physical properties and stellar populations of spectroscopically-confirmed galaxies at the end of cosmic reionization. IRAC data are key to measuring stellar masses and constraining stellar populations in high-z galaxies. From SED modeling with secure redshifts, we will characterize the physical properties of these galaxies and trace their mass assembly and star formation history. In particular, the data allow us, for the first time, to study stellar populations in a large sample of z >= 6 LAEs. We will also address some critical questions, such as whether LAEs and LBGs represent physically different galaxy populations. All of this will help us understand the earliest galaxy formation and evolution, and better constrain the galaxy contribution to reionization. The IRAC data will also cover 10,000 emission-line selected galaxies at z < 1.5, 50,000 UV and mass selected LBGs at 1.5 < z < 3, and more than 5,000 LBGs at 3 < z < 6, giving them legacy value for SDF-related programs.

  2. PdBI cold dust imaging of two extremely red H – [4.5] > 4 galaxies discovered with SEDS and CANDELS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caputi, K. I.; Popping, G.; Spaans, M.

    2014-06-20

    We report Plateau de Bure Interferometer (PdBI) 1.1 mm continuum imaging toward two extremely red H – [4.5] > 4 (AB) galaxies at z > 3, which we previously discovered using Spitzer SEDS and Hubble Space Telescope CANDELS ultra-deep images of the Ultra Deep Survey field. One of our objects is detected on the PdBI map with 4.3σ significance, corresponding to S_ν(1.1 mm) = 0.78 ± 0.18 mJy. By combining this detection with the Spitzer 8 and 24 μm photometry for this source, and SCUBA2 flux density upper limits, we infer that this galaxy is a composite active galactic nucleus/star-forming system. The infrared (IR)-derived star formation rate is SFR ≈ 200 ± 100 M_⊙ yr^-1, which implies that this galaxy is a higher-redshift analogue of the ordinary ultra-luminous infrared galaxies more commonly found at z ∼ 2-3. In the field of the other target, we find a tentative 3.1σ detection on the PdBI 1.1 mm map, but 3.7 arcsec away from our target position, so it likely corresponds to a different object. In spite of the lower significance, the PdBI detection is supported by a close SCUBA2 3.3σ detection. No counterpart is found in either the deep SEDS or CANDELS maps, so, if real, the PdBI source could be similar in nature to the submillimeter source GN10. We conclude that the analysis of ultra-deep near- and mid-IR images offers an efficient, alternative route to discovering new sites of powerful star formation activity at high redshifts.

  3. Deep 12 and 25 Micron Imaging with the Wide Field Infrared Explorer

    NASA Technical Reports Server (NTRS)

    Lonsdale, Carol J.

    1997-01-01

    The Wide Field Infrared Explorer is a new NASA Small Explorer class observatory to be launched in late 1998. It will survey hundreds of square degrees of high-latitude sky in the mid-infrared 12 and 25 micron bands, reaching flux densities up to a factor of 1000 fainter than IRAS.

  4. Deep learning for low-dose CT

    NASA Astrophysics Data System (ADS)

    Chen, Hu; Zhang, Yi; Zhou, Jiliu; Wang, Ge

    2017-09-01

    Given the potential risk of X-ray radiation to the patient, low-dose CT has attracted considerable interest in the medical imaging field. Currently, the mainstream low-dose CT methods include vendor-specific sinogram domain filtration and iterative reconstruction algorithms, but these need access to raw data whose formats are not transparent to most users. Due to the difficulty of modeling the statistical characteristics in the image domain, the existing methods for directly processing reconstructed images cannot eliminate image noise very well while keeping structural details. Inspired by the idea of deep learning, here we combine the autoencoder, deconvolution network, and shortcut connections into a residual encoder-decoder convolutional neural network (RED-CNN) for low-dose CT imaging. After patch-based training, the proposed RED-CNN achieves competitive performance relative to state-of-the-art methods. In particular, our method performs favorably in terms of noise suppression and structural preservation.
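    A minimal sketch of the encoder-decoder-with-shortcuts idea follows, assuming PyTorch. Layer counts, channel widths, and kernel sizes here are illustrative stand-ins, not the published RED-CNN configuration.

```python
# Sketch in the spirit of RED-CNN: paired convolutional encoder and
# deconvolutional decoder with residual (shortcut) connections.
import torch
import torch.nn as nn

class REDCNNSketch(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.enc1 = nn.Conv2d(1, ch, 5)            # encoder convs, no padding
        self.enc2 = nn.Conv2d(ch, ch, 5)
        self.dec2 = nn.ConvTranspose2d(ch, ch, 5)  # mirrored deconvolutions
        self.dec1 = nn.ConvTranspose2d(ch, 1, 5)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        r0 = x                                  # shortcut from the input
        e1 = self.relu(self.enc1(x))
        r1 = e1                                 # shortcut from a hidden layer
        e2 = self.relu(self.enc2(e1))
        d2 = self.relu(self.dec2(e2) + r1)      # add skip before activation
        d1 = self.relu(self.dec1(d2) + r0)      # residual output
        return d1

# Patch-based training would minimize the MSE between denoised low-dose
# patches and their normal-dose counterparts.
model = REDCNNSketch()
patch = torch.randn(4, 1, 55, 55)   # a batch of 55x55 CT patches
print(model(patch).shape)           # torch.Size([4, 1, 55, 55])
```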

  5. A Pool of Distant Galaxies

    NASA Astrophysics Data System (ADS)

    2008-11-01

    Anyone who has wondered what it might be like to dive into a pool of millions of distant galaxies of different shapes and colours will enjoy the latest image released by ESO. Obtained in part with the Very Large Telescope, the image is the deepest ground-based U-band image of the Universe ever obtained. It contains more than 27 million pixels and is the result of 55 hours of observations with the VIMOS instrument.
    A Sea of Galaxies. ESO PR Photo 39/08: A Pool of Distant Galaxies. This uniquely beautiful patchwork image, with its myriad of brightly coloured galaxies, shows the Chandra Deep Field South (CDF-S), arguably the most observed and best studied region in the entire sky. The CDF-S is one of the two regions selected as part of the Great Observatories Origins Deep Survey (GOODS), an effort of the worldwide astronomical community that unites the deepest observations from ground- and space-based facilities at all wavelengths from X-ray to radio. Its primary purpose is to provide astronomers with the most sensitive census of the distant Universe, to assist in their study of the formation and evolution of galaxies.
    The new image released by ESO combines data obtained with the VIMOS instrument in the U- and R-bands, as well as data obtained in the B-band with the Wide-Field Imager (WFI) attached to the 2.2 m MPG/ESO telescope at La Silla, in the framework of the GABODS survey. The newly released U-band image - the result of 40 hours of staring at the same region of the sky and just made ready by the GOODS team - is the deepest image ever taken from the ground in this wavelength domain. At these depths, the sky is almost completely covered by galaxies, each one, like our own galaxy the Milky Way, home to hundreds of billions of stars. Galaxies a billion times fainter than the unaided eye can see were detected, over a range of colours not directly observable by the eye. This deep image has been essential to the discovery of a large number of new galaxies that are so far away that they are seen as they were when the Universe was only 2 billion years old.
    In this sea of galaxies - or island universes as they are sometimes called - only a very few stars belonging to the Milky Way are seen. One of them is so close that it moves very fast on the sky. This "high proper motion star" is visible to the left of the second brightest star in the image. It appears as an odd, elongated rainbow because the star moved while the data were being taken in the different filters over several years.
    Notes: Because the Universe looks the same in all directions, the number, types and distribution of galaxies is the same everywhere. Consequently, very deep observations of the Universe can be performed in any direction. A series of fields were selected where no foreground object could affect the deep-space observations (such as a bright star in our galaxy, or the dust from our Solar System). These fields have been observed using a number of telescopes and satellites, so as to collect information at all possible wavelengths and characterise the full spectrum of the objects in the field. The data acquired from these deep fields are normally made public to the whole community of astronomers, constituting the basis for large collaborations.
    Observations in the U-band, that is, at the boundary between visible light and ultraviolet, are challenging: the Earth's atmosphere becomes more and more opaque towards the ultraviolet, a useful property that protects people's skin but a limiting one for ground-based telescopes. At shorter wavelengths, observations can only be done from space, using, for example, the Hubble Space Telescope. On the ground, only the very best sites, such as ESO's Paranal Observatory in the Atacama Desert, can perform useful observations in the U-band. Even with the best atmospheric conditions, instruments are at their limit at these wavelengths: the glass of normal lenses transmits less UV light, and detectors are less sensitive, so only instruments designed for UV observations, such as VIMOS on ESO's Very Large Telescope, can gather enough light. The VIMOS U-band image, which was obtained as part of the ESO/GOODS public programme, is based on 40 hours of observations with the VLT. The VIMOS R-band image was obtained by co-adding a large number of archival images totaling 15 hours of exposure. The WFI B-band image is part of the GABODS survey.

  6. Observing the Earliest Galaxies: Looking for the Sources of Reionization

    NASA Astrophysics Data System (ADS)

    Illingworth, Garth

    2015-04-01

    Systematic searches for the earliest galaxies in the reionization epoch finally became possible in 2009, when the Hubble Space Telescope was outfitted with a powerful new infrared camera during SM4, the final Shuttle servicing mission to Hubble. The reionization epoch represents the last major phase transition of the universe and was a major event in cosmic history. The intense ultraviolet radiation from young star-forming galaxies is increasingly considered to be the source of the photons that reionized intergalactic hydrogen in the period between the "dark ages" (the time before the first stars and galaxies, about 100-200 million years after the Big Bang) and the end of reionization around 800-900 million years. Yet finding and measuring the earliest galaxies in this era of cosmic dawn has proven to be a challenging task, even with Hubble's new infrared camera. I will discuss the deep imaging undertaken by Hubble and the remarkable insights that have accrued from the imaging datasets taken over the last decade on the Hubble Ultra-Deep Field (HUDF, HUDF09/12) and other regions. The HUDF datasets are central to the story and have been assembled into the eXtreme Deep Field (XDF), the deepest image ever assembled from Hubble data. The XDF, when combined with results from shallower wide-area imaging surveys (e.g., GOODS, CANDELS) and with detections of galaxies from the Frontier Fields, has provided significant insights into the role of galaxies in reionization. Yet many questions remain. The puzzle is far from being fully solved and, while much will be done over the next few years, the solution likely awaits the launch of JWST. NASA/STScI Grant HST-GO-11563.

  7. Joint Segmentation of Multiple Thoracic Organs in CT Images with Two Collaborative Deep Architectures.

    PubMed

    Trullo, Roger; Petitjean, Caroline; Nie, Dong; Shen, Dinggang; Ruan, Su

    2017-09-01

    Computed Tomography (CT) is the standard imaging technique for radiotherapy planning. The delineation of Organs at Risk (OAR) in thoracic CT images is a necessary step before radiotherapy, to prevent irradiation of healthy organs. However, due to low contrast, multi-organ segmentation is a challenge. In this paper, we focus on developing a novel framework for automatic delineation of OARs. Different from previous works in OAR segmentation, where each organ is segmented separately, we propose two collaborative deep architectures to jointly segment all organs, including esophagus, heart, aorta and trachea. Since most of the organ borders are ill-defined, we believe spatial relationships must be taken into account to overcome the lack of contrast. The aim of combining two networks is to learn anatomical constraints with the first network, which are then used by the second network as each OAR is segmented in turn. Specifically, we use the first deep architecture, a deep SharpMask architecture, to provide an effective combination of low-level representations with deep high-level features, and then take into account the spatial relationships between organs through Conditional Random Fields (CRF). Next, the second deep architecture refines the segmentation of each organ by using the maps obtained from the first architecture to learn anatomical constraints for guiding and refining the segmentations. Experimental results on 30 CT scans show superior performance compared with other state-of-the-art methods.

  8. Semantic labeling of high-resolution aerial images using an ensemble of fully convolutional networks

    NASA Astrophysics Data System (ADS)

    Sun, Xiaofeng; Shen, Shuhan; Lin, Xiangguo; Hu, Zhanyi

    2017-10-01

    High-resolution remote sensing data classification has been a challenging and promising research topic in the remote sensing community. In recent years, with the rapid advances of deep learning, remarkable progress has been made in this field, facilitating a transition from hand-crafted feature design to automatic end-to-end learning. A deep fully convolutional network (FCN)-based ensemble learning method is proposed to label high-resolution aerial images. To fully tap the potential of FCNs, both the Visual Geometry Group network and a deeper residual network, ResNet, are employed. Furthermore, to enlarge the training samples with diversity and gain better generalization, in addition to the commonly used data augmentation methods in the literature (e.g., rotation, multiscale, and aspect ratio), aerial images from other datasets are also collected for cross-scene learning. Finally, we combine these learned models to form an effective FCN ensemble and refine the results using a fully connected conditional random field graph model. Experiments on the ISPRS 2-D Semantic Labeling Contest dataset show that our proposed end-to-end classification method achieves an overall accuracy of 90.7%, state-of-the-art performance in the field.
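    The ensemble step described above amounts to fusing per-pixel class probabilities from the individual FCNs before CRF refinement. Below is a minimal sketch with random stand-ins for the model outputs; simple unweighted averaging is an assumption, as the paper may combine models differently.

```python
# Fuse per-pixel softmax maps from several FCN variants by averaging, then
# take the argmax label. CRF refinement would follow this step.
import numpy as np

def ensemble_softmax(prob_maps):
    """prob_maps: list of (H, W, n_classes) softmax outputs from different
    FCNs (e.g., VGG-based and ResNet-based). Returns the fused label map."""
    fused = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return np.argmax(fused, axis=-1)

# Example with two stand-in 'models' on a 4x4 tile with 3 classes:
rng = np.random.default_rng(0)
maps = [rng.dirichlet(np.ones(3), size=(4, 4)) for _ in range(2)]
print(ensemble_softmax(maps))
```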

  9. SXDF-UDS-CANDELS-ALMA 1.5 arcmin2 deep survey

    NASA Astrophysics Data System (ADS)

    Kohno, Kotaro; Tamura, Yoichi; Yamaguchi, Yuki; Umehata, Hideki; Rujopakarn, Wiphu; Lee, Minju; Motohara, Kentaro; Makiya, Ryu; Izumi, Takuma; Ivison, Rob; Ikarashi, Soh; Tadaki, Ken-ichi; Kodama, Tadayuki; Hatsukade, Bunyo; Yabe, Kiyoto; Hayashi, Masao; Iono, Daisuke; Matsuda, Yuichi; Nakanishi, Kouichiro; Kawabe, Ryohei; Wilson, Grant; Yun, Min S.; Hughes, David; Caputi, Karina; Dunlop, James

    2015-08-01

    We have conducted 1.1 mm ALMA observations of a contiguous 105″ × 50″ or 1.5 arcmin2 window (achieved by a 19-point mosaic) in the SXDF-UDS-CANDELS. We achieved a 5σ sensitivity of 0.28 mJy, giving a flat census of dusty star-forming galaxies with LIR ~6 × 10^11 L⊙ (if Tdust = 40 K) or SFR ~100 M⊙ yr^-1 up to z~10, thanks to the negative K-correction at this wavelength. We detect the 5 brightest sources (S/N > 6) and 18 lower-significance sources (5 > S/N > 4; these may include spurious detections) in the field. We find that these discrete sources are responsible for a faint filamentary emission seen in low-resolution (~30″), heavily confused AzTEC 1.1mm and SPIRE 0.5mm images. One of the 5 brightest ALMA sources is very dark in deep WFC3 and HAWK-I NIR images as well as in VLA 1.4 GHz images, demonstrating that deep ALMA imaging can unveil a new obscured star-forming galaxy population.

  10. ESO imaging survey: infrared observations of CDF-S and HDF-S

    NASA Astrophysics Data System (ADS)

    Olsen, L. F.; Miralles, J.-M.; da Costa, L.; Benoist, C.; Vandame, B.; Rengelink, R.; Rité, C.; Scodeggio, M.; Slijkhuis, R.; Wicenec, A.; Zaggia, S.

    2006-06-01

    This paper presents infrared data obtained from observations carried out at the ESO 3.5 m New Technology Telescope (NTT) of the Hubble Deep Field South (HDF-S) and the Chandra Deep Field South (CDF-S). These data were taken as part of the ESO Imaging Survey (EIS) program, a public survey conducted by ESO to promote follow-up observations with the VLT. In the HDF-S field the infrared observations cover an area of ~53 square arcmin, encompassing the HST WFPC2 and STIS fields, in the JHKs passbands. The seeing measured in the final stacked images ranges from 0.79 arcsec to 1.22 arcsec, and the median limiting magnitudes (AB system, 2'' aperture, 5σ detection limit) are J_AB~23.0, H_AB~22.8 and K_AB~23.0 mag. Less complete data are also available in JKs for the adjacent HST NICMOS field. For CDF-S, the infrared observations cover a total area of ~100 square arcmin, reaching median limiting magnitudes (as defined above) of J_AB~23.6 and K_AB~22.7 mag. For one CDF-S field, H-band data are also available. This paper describes the observations and presents the results of new reductions carried out entirely with the unsupervised, high-throughput EIS Data Reduction System and its associated EIS/MVM C++-based image processing library, developed over the past 5 years by the EIS project and now publicly available. The paper also presents source catalogs extracted from the final co-added images, which are used to evaluate the scientific quality of the survey products, and hence the performance of the software. This is done by comparing the results obtained in the present work with those obtained by other authors from independent data and/or reductions carried out with different software packages and techniques. The final science-grade catalogs, together with the astrometrically and photometrically calibrated co-added images, are available at CDS.

  11. Fiber-bundle-basis sparse reconstruction for high resolution wide-field microendoscopy.

    PubMed

    Mekhail, Simon Peter; Abudukeyoumu, Nilupaer; Ward, Jonathan; Arbuthnott, Gordon; Chormaic, Síle Nic

    2018-04-01

    In order to observe deep regions of the brain, we propose the use of a fiber bundle for microendoscopy. Fiber bundles allow for the excitation and collection of fluorescence as well as wide-field imaging while remaining largely impervious to image distortions brought on by bending. Furthermore, their thin diameter, 200-500 µm, means their impact on living tissue, though not absent, is minimal. Although wide-field imaging with a bundle allows for high temporal resolution, since no scanning is involved, the main criticism of bundle imaging is the drastically lowered spatial resolution. In this paper, we make use of sparsity in the object being imaged to upsample the low-resolution images from the fiber bundle with compressive sensing. We take each image in a single shot by using a measurement basis dictated by the quasi-crystalline arrangement of the bundle's cores. We find that this technique allows us to increase the resolution of a typical image taken through a fiber bundle.
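    For intuition, sparse recovery of this kind is often posed as an L1-regularized least-squares problem and solved with iterative soft-thresholding (ISTA). The sketch below uses a random measurement matrix as a stand-in for the bundle's core-arrangement basis; it illustrates the generic technique, not this paper's exact solver.

```python
# Recover a sparse object x from few bundle measurements y = A @ x by
# minimizing 0.5*||A x - y||^2 + lam*||x||_1 with ISTA. Illustrative only.
import numpy as np

def ista(A, y, lam=0.05, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

# Toy example: 64 'cores' measuring a 256-pixel scene with 8 bright spots.
rng = np.random.default_rng(1)
A = rng.standard_normal((64, 256)) / 8.0
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = 1.0
x_hat = ista(A, A @ x_true)
print(np.round(x_hat[x_true > 0], 2))   # recovered amplitudes at true spots
```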

  12. Anomalously deep polarization in SrTiO3 (001) interfaced with an epitaxial ultrathin manganite film

    DOE PAGES

    Wang, Zhen; Tao, Jing; Yu, Liping; ...

    2016-10-17

    Using atomically-resolved imaging and spectroscopy, we reveal a remarkably deep polarization in non-ferroelectric SrTiO3 near its interface with an ultrathin nonmetallic film of La2/3Sr1/3MnO3. Electron holography shows an electric field near the interface in SrTiO3, yielding a surprising spontaneous polarization density of ~21 μC/cm^2. Combining the experimental results with first-principles calculations, we propose that the observed deep polarization is induced by the electric field originating from oxygen vacancies that extend beyond a dozen unit cells from the interface, thus providing important evidence of the role of defects in the emergent interface properties of transition metal oxides.

  13. Progression of Local Glaucomatous Damage Near Fixation as Seen with Adaptive Optics Imaging.

    PubMed

    Hood, Donald C; Lee, Dongwon; Jarukasetphon, Ravivarn; Nunez, Jason; Mavrommatis, Maria A; Rosen, Richard B; Ritch, Robert; Dubra, Alfredo; Chui, Toco Y P

    2017-07-01

    Deep glaucomatous defects near fixation were followed over time with an adaptive optics-scanning light ophthalmoscope (AO-SLO) to better understand the progression of these defects and to explore the use of AO-SLO in detecting them. Six eyes of 5 patients were imaged with an AO-SLO from 2 to 4 times for a range of 14.6 to 33.6 months. All eyes had open-angle glaucoma with deep defects in the superior visual field (VF) near fixation as defined by 10-2 VFs with 5 or more points less than -15 dB; two of the eyes had deep defects in the inferior VF as well. AO-SLO images were obtained around the temporal edge of the disc. In 4 of the 6 eyes, the edge of the inferior-temporal disc region of the retinal nerve fiber (RNF) defect seen on AO-SLO moved closer to fixation within 10.6 to 14.7 months. In 4 eyes, RNF bundles in the affected region appeared to lose contrast and/or disappear. Progressive changes in RNF bundles associated with deep defects on 10-2 VFs can be seen within about 1 year with AO-SLO imaging. These changes are well below the spatial resolution of the 10-2 VF. On the other hand, subtle thinning of regions with RNF bundles is not easy to see with current AO-SLO technology, and may be better followed with OCT. AO-SLO imaging may be useful in clinical trials designed to see very small changes in deep defects.

  14. Brain MRI analysis for Alzheimer's disease diagnosis using an ensemble system of deep convolutional neural networks.

    PubMed

    Islam, Jyoti; Zhang, Yanqing

    2018-05-31

    Alzheimer's disease is an incurable, progressive neurological brain disorder. Earlier detection of Alzheimer's disease can help with proper treatment and prevent brain tissue damage. Several statistical and machine learning models have been exploited by researchers for Alzheimer's disease diagnosis. Analyzing magnetic resonance imaging (MRI) is a common practice for Alzheimer's disease diagnosis in clinical research. Detection of Alzheimer's disease is challenging due to the similarity between Alzheimer's disease MRI data and the MRI data of healthy older people. Recently, advanced deep learning techniques have successfully demonstrated human-level performance in numerous fields, including medical image analysis. We propose a deep convolutional neural network for Alzheimer's disease diagnosis using brain MRI data analysis. While most existing approaches perform binary classification, our model can identify different stages of Alzheimer's disease and obtains superior performance for early-stage diagnosis. We conducted ample experiments to demonstrate that our proposed model outperforms comparative baselines on the Open Access Series of Imaging Studies dataset.

  15. Degradation of CMOS image sensors in deep-submicron technology due to γ-irradiation

    NASA Astrophysics Data System (ADS)

    Rao, Padmakumar R.; Wang, Xinyang; Theuwissen, Albert J. P.

    2008-09-01

    In this work, radiation-induced damage mechanisms in deep-submicron technology are resolved using finger gated-diodes (FGDs) as a radiation-sensitive tool. These are simple yet efficient structures for resolving radiation-induced damage in advanced CMOS processes. The degradation of CMOS image sensors in deep-submicron technology due to γ-ray irradiation is studied by developing a model for the spectral response of the sensor, and by examining the dark-signal degradation as a function of STI (shallow-trench isolation) parameters. It is found that threshold shifts at the gate-oxide/silicon interface, as well as minority-carrier lifetime variations in the silicon bulk, are minimal. The top-layer material properties and the photodiode Si-SiO2 interface quality are degraded by γ-ray irradiation. The results further suggest that p-well passivated structures are indispensable for radiation-hard designs, and that high electric fields in submicron technologies pose a threat to high-quality imaging in harsh environments.

  16. Accuracy of open magnetic resonance imaging for guiding injection of the equine deep digital flexor tendon within the hoof.

    PubMed

    Groom, Lauren M; White, Nathaniel A; Adams, M Norris; Barrett, Jennifer G

    2017-11-01

    Lesions of the distal deep digital flexor tendon (DDFT) are frequently diagnosed using MRI in horses with foot pain. Intralesional injection of biologic therapeutics shows promise in tendon healing; however, accurate injection of distal DDFT lesions within the hoof is difficult. The aim of this experimental study was to evaluate the accuracy of a technique for injection of the DDFT within the hoof using MRI guidance, which could be performed in standing patients. We hypothesized that injection of the distal DDFT within the hoof could be accurately guided using open low-field MRI to target either the lateral or medial lobe at a specific location. Ten cadaver limbs were positioned in an open, low-field MRI unit. Each distal DDFT lobe was assigned to have a proximal (adjacent to the proximal aspect of the navicular bursa) or distal (adjacent to the navicular bone) injection. A titanium needle was inserted into each tendon lobe, guided by T1-weighted transverse images acquired simultaneously during injection. Colored dye was injected as a marker, and postinjection MRI and gross sections were assessed. The success of injection as evaluated on gross section was 85% (70% proximal, 100% distal). The success of injection as evaluated by MRI was 65% (60% proximal, 70% distal). There was no significant difference between the success of injecting the medial versus lateral lobe. The major limitation of this study was the use of cadaver limbs with normal tendons. The authors conclude that injection of the distal DDFT within the hoof is possible using MRI guidance. © 2017 American College of Veterinary Radiology.

  17. DeepSurveyCam—A Deep Ocean Optical Mapping System

    PubMed Central

    Kwasnitschka, Tom; Köser, Kevin; Sticklus, Jan; Rothenbeck, Marcel; Weiß, Tim; Wenzlaff, Emanuel; Schoening, Timm; Triebe, Lars; Steinführer, Anja; Devey, Colin; Greinert, Jens

    2016-01-01

    Underwater photogrammetry, and in particular systematic visual surveying of the deep sea, is far less developed than similar techniques on land or in space. The main challenges are the rough conditions with extremely high pressure, the accessibility of target areas (container and ship deployment of robust sensors, then diving for hours to the ocean floor), and the limitations of localization technologies (no GPS). The absence of natural light complicates energy-budget considerations for deep-diving, flash-equipped drones. Refraction effects influence geometric image formation with respect to field of view and focus, while attenuation and scattering degrade the radiometric image quality and limit the effective visibility. To address these issues, we present an AUV-based optical system intended for autonomous visual mapping of large areas of the seafloor (square kilometers) in up to 6000 m water depth. We compare it to existing systems, discuss tradeoffs such as resolution vs. mapped area, and show results from a recent deployment with 90,000 mapped square meters of deep ocean floor. PMID:26828495

  18. NutriNet: A Deep Learning Food and Drink Image Recognition System for Dietary Assessment.

    PubMed

    Mezgec, Simon; Koroušić Seljak, Barbara

    2017-06-27

    Automatic food image recognition systems are alleviating the process of food-intake estimation and dietary assessment. However, due to the nature of food images, their recognition is a particularly challenging task, which is why traditional approaches in the field have achieved a low classification accuracy. Deep neural networks have outperformed such solutions, and we present a novel approach to the problem of food and drink image detection and recognition that uses a newly-defined deep convolutional neural network architecture, called NutriNet. This architecture was tuned on a recognition dataset containing 225,953 512×512-pixel images of 520 different food and drink items from a broad spectrum of food groups, on which we achieved a classification accuracy of 86.72%, along with an accuracy of 94.47% on a detection dataset containing 130,517 images. We also performed a real-world test on a dataset of self-acquired images, combined with images from Parkinson's disease patients, all taken using a smartphone camera, achieving a top-five accuracy of 55%, which is an encouraging result for real-world images. Additionally, we tested NutriNet on the University of Milano-Bicocca 2016 (UNIMIB2016) food image dataset, on which we improved upon the provided baseline recognition result. An online training component was implemented to continually fine-tune the food and drink recognition model on new images. The model is being used in practice as part of a mobile app for the dietary assessment of Parkinson's disease patients.

  19. Deep Learning: A Primer for Radiologists.

    PubMed

    Chartrand, Gabriel; Cheng, Phillip M; Vorontsov, Eugene; Drozdzal, Michal; Turcotte, Simon; Pal, Christopher J; Kadoury, Samuel; Tang, An

    2017-01-01

    Deep learning is a class of machine learning methods that are gaining success and attracting interest in many domains, including computer vision, speech recognition, natural language processing, and playing games. Deep learning methods produce a mapping from raw inputs to desired outputs (eg, image classes). Unlike traditional machine learning methods, which require hand-engineered feature extraction from inputs, deep learning methods learn these features directly from data. With the advent of large datasets and increased computing power, these methods can produce models with exceptional performance. These models are multilayer artificial neural networks, loosely inspired by biologic neural systems. Weighted connections between nodes (neurons) in the network are iteratively adjusted based on example pairs of inputs and target outputs by back-propagating a corrective error signal through the network. For computer vision tasks, convolutional neural networks (CNNs) have proven to be effective. Recently, several clinical applications of CNNs have been proposed and studied in radiology for classification, detection, and segmentation tasks. This article reviews the key concepts of deep learning for clinical radiologists, discusses technical requirements, describes emerging applications in clinical radiology, and outlines limitations and future directions in this field. Radiologists should become familiar with the principles and potential applications of deep learning in medical imaging. © RSNA, 2017.
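    The weight-update loop described in this primer can be made concrete in a few lines. The toy network below (a hypothetical two-layer classifier on synthetic data, not a clinical model) shows forward propagation, the corrective error signal, and the iterative adjustment of weighted connections:

```python
# Minimal back-propagation illustration: a one-hidden-layer network trained
# by gradient descent on a synthetic binary task. Purely pedagogical.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2))                    # toy inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)[:, None]   # toy targets

W1, W2 = rng.standard_normal((2, 8)), rng.standard_normal((8, 1))
lr = 0.1
for step in range(2000):
    h = np.tanh(X @ W1)                  # forward pass through hidden layer
    p = 1 / (1 + np.exp(-(h @ W2)))      # sigmoid output
    err = p - y                          # corrective error signal
    dW2 = h.T @ err / len(X)             # backpropagate to output weights ...
    dh = (err @ W2.T) * (1 - h ** 2)     # ... and through the tanh layer
    dW1 = X.T @ dh / len(X)
    W1 -= lr * dW1                       # iterative weight adjustment
    W2 -= lr * dW2
print("final accuracy:", ((p > 0.5) == y).mean())
```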

  20. LMT imaging of the Extended Groth Strip: a search for the high-redshift tail of the sub-mm galaxy population

    NASA Astrophysics Data System (ADS)

    Aretxaga, Itziar

    2015-08-01

    The combination of short- and long-wavelength deep (sub-)mm surveys can effectively be used to identify high-redshift sub-millimeter galaxies (z>4). With star formation rates in excess of 500 Msun/yr, these bright (sub-)mm sources have been identified as the progenitors of massive elliptical galaxies undergoing rapid growth. With this purpose in mind, we are surveying a 20 sq. arcmin field within the Extended Groth Strip with the 1.1mm AzTEC camera mounted on the Large Millimeter Telescope, overlapping the deep 450/850um SCUBA-2 Cosmology Legacy Survey and the CANDELS deep NIR imaging. The improved beam size of the LMT (8") over previous surveys aids the identification of the most prominent optical/IR counterparts. We discuss the high-redshift candidates found.

  1. Visitor from Deep Space

    NASA Image and Video Library

    2010-02-17

    Comet Siding Spring appears to streak across the sky like a superhero in this new infrared image from NASA's Wide-field Infrared Survey Explorer. The comet, also known as C/2007 Q3, was discovered in 2007 by observers in Australia.

  2. Reflection-artifact-free photoacoustic imaging using PAFUSion (photoacoustic-guided focused ultrasound)

    NASA Astrophysics Data System (ADS)

    Kuniyil Ajith Singh, Mithun; Jaeger, Michael; Frenz, Martin; Steenbergen, Wiendelt

    2016-03-01

    Reflection artifacts caused by acoustic inhomogeneities are a main challenge to deep-tissue photoacoustic imaging. Photoacoustic transients generated by the skin surface and superficial vasculature will propagate into the tissue and reflect back from echogenic structures to generate reflection artifacts. These artifacts can cause problems in image interpretation and limit imaging depth. In its basic version, PAFUSion mimics the inward travelling wave-field from blood vessel-like PA sources by applying focused ultrasound pulses, and thus provides a way to identify reflection artifacts. In this work, we demonstrate reflection artifact correction in addition to identification, towards obtaining an artifact-free photoacoustic image. In view of clinical applications, we implemented an improved version of PAFUSion in which photoacoustic data is backpropagated to imitate the inward travelling wave-field and thus the reflection artifacts of a more arbitrary distribution of PA sources that also includes the skin melanin layer. The backpropagation is performed in a synthetic way based on the pulse-echo acquisitions after transmission on each single element of the transducer array. We present a phantom experiment and initial in vivo measurements on human volunteers where we demonstrate significant reflection artifact reduction using our technique. The results provide a direct confirmation that reflection artifacts are prominent in clinical epi-photoacoustic imaging, and that PAFUSion can reduce these artifacts significantly to improve the deep-tissue photoacoustic imaging.

  3. Full-color digitized holography for large-scale holographic 3D imaging of physical and nonphysical objects.

    PubMed

    Matsushima, Kyoji; Sonobe, Noriaki

    2018-01-01

    Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.

  4. Bessel light sheet structured illumination microscopy

    NASA Astrophysics Data System (ADS)

    Noshirvani Allahabadi, Golchehr

    Biomedical researchers using animals to model disease and treatment need fast, deep, noninvasive, and inexpensive multi-channel imaging methods. Traditional fluorescence microscopy meets those criteria to an extent. Specifically, two-photon and confocal microscopy, the two most commonly used methods, are limited in penetration depth, cost, resolution, and field of view; in addition, two-photon microscopy has limited ability in multi-channel imaging. Light sheet microscopy, a fast-developing 3D fluorescence imaging method, offers attractive advantages over traditional two-photon and confocal microscopy. Light sheet microscopy is much more suitable for in vivo 3D time-lapsed imaging, owing to its selective illumination of a tissue layer, superior speed, low light exposure, high penetration depth, and low levels of photobleaching. However, standard light sheet microscopy using Gaussian beam excitation has two main disadvantages: 1) the field of view (FOV) is limited by the depth of focus of the Gaussian beam, and 2) light-sheet images can be degraded by scattering, which limits the penetration of the excitation beam and blurs emission images in deep tissue layers. While two-sided sheet illumination, which doubles the field of view by illuminating the sample from opposite sides, offers a potential solution, it adds complexity and cost to the imaging system. We investigate a new technique to address these limitations: Bessel light sheet microscopy in combination with incoherent nonlinear Structured Illumination Microscopy (SIM). Results demonstrate that, at visible wavelengths, Bessel excitation penetrates up to 250 microns deep into scattering media with single-sided illumination. The Bessel light sheet microscope achieves confocal-level resolution, with a lateral resolution of 0.3 micron and an axial resolution of 1 micron. Incoherent nonlinear SIM further reduces the diffuse background in Bessel light sheet images, resulting in confocal-quality images in thick tissue. The technique was applied to live transgenic zebrafish tg(kdrl:GFP), and the sub-cellular structure of the fish vasculature, genetically labeled with GFP, was captured in 3D. The superior speed of the microscope enables us to acquire signal from 200 layers of a thick sample in 4 minutes. The compact microscope uses exclusively off-the-shelf components and offers a low-cost imaging solution for studying small animal models or tissue samples.

  5. First imaging results from Apertif, a phased-array feed for WSRT

    NASA Astrophysics Data System (ADS)

    Adams, Elizabeth A.; Adebahr, Björn; de Blok, Willem J. G.; Hess, Kelley M.; Hut, Boudewijn; Lucero, Danielle M.; Maccagni, Filippo; Morganti, Raffaella; Oosterloo, Tom; Staveley-Smith, Lister; van der Hulst, Thijs; Verheijen, Marc; Verstappen, Joris

    2017-01-01

    Apertif is a phased-array feed for the Westerbork Synthesis Radio Telescope (WSRT), increasing the field of view of the telescope by a factor of twenty-five. In 2017, three legacy surveys will commence: a shallow imaging survey, a medium-deep imaging survey, and a pulsars and fast transients survey. The medium-deep imaging survey will include coverage of the northern Herschel Atlas field, the CVn region, HetDex, and the Perseus-Pisces supercluster. The shallow imaging survey increases overlap with HetDex, has expanded coverage of the Perseus-Pisces supercluster, and includes part of the Zone of Avoidance. Both imaging surveys are coordinating with MaNGA and will have WEAVE follow-up. The imaging surveys will be done in full polarization over the frequency range 1130-1430 MHz, which corresponds to redshifts of z=0-0.256 for neutral hydrogen (HI). The spectral resolution is 12.2 kHz, or an HI velocity resolution of 2.6 km/s at z=0 and 3.2 km/s at z=0.256. The full resolution images will have a beam size of 15"x15"/sin(declination), and tapered data products (i.e., 30" resolution images) will also be available. The shallow survey will cover ~3500 square degrees with a four-sigma HI imaging sensitivity of 2.5x10^20 atoms cm^-2 (20 km/s linewidth) at the highest resolution and a continuum sensitivity of 15 uJy/beam (11 uJy/beam for polarization data). The current plan calls for the medium deep survey to cover 450 square degrees and provide an HI imaging sensitivity of 1.0x10^20 atoms cm^-2 at the highest resolution and a continuum sensitivity of 6 uJy/beam, close to the confusion limit (4 uJy/beam for polarization data, not confusion limited). Up-to-date information on Apertif and the planned surveys can be found at: http://www.apertif.nl. Commissioning of the Apertif instrument is currently underway. Here we present first results from the image commissioning, including the detection of HI absorption plus continuum and HI imaging. These results highlight the data quality that will be achieved for the surveys.

  6. High resolution depth reconstruction from monocular images and sparse point clouds using deep convolutional neural network

    NASA Astrophysics Data System (ADS)

    Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth using only monocular images. In this paper we combine the best of both worlds, focusing on a combination of monocular images and low-cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure, while variations in image patches can be used to reconstruct local depth at high resolution. The main contribution of this paper is a supervised-learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information, and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset, we are able to learn a correspondence between local RGB information and local depth while preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and our own recordings using a low-cost camera and LiDAR setup.
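    A hedged PyTorch sketch of the fusion idea follows: concatenate RGB with a sparse LiDAR depth channel and regress dense depth per pixel. The architecture and sizes are illustrative stand-ins, not the authors' exact network.

```python
# Fuse an RGB image with a sparse depth channel (zeros where no LiDAR
# return exists) and regress a dense depth map. Illustrative only.
import torch
import torch.nn as nn

class DepthFusionSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),   # RGB + sparse depth
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),              # dense depth estimate
        )

    def forward(self, rgb, sparse_depth):
        return self.net(torch.cat([rgb, sparse_depth], dim=1))

model = DepthFusionSketch()
rgb = torch.randn(1, 3, 64, 64)
sparse = torch.zeros(1, 1, 64, 64)
sparse[..., ::8, ::8] = torch.rand(1, 1, 8, 8)   # ~1.5% of pixels have depth
print(model(rgb, sparse).shape)                  # torch.Size([1, 1, 64, 64])
```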

  7. A deep learning model integrating FCNNs and CRFs for brain tumor segmentation.

    PubMed

    Zhao, Xiaomei; Wu, Yihong; Song, Guidong; Li, Zhenye; Zhang, Yazhuo; Fan, Yong

    2018-01-01

    Accurate and reliable brain tumor segmentation is a critical component in cancer diagnosis, treatment planning, and treatment outcome evaluation. Building upon successful deep learning techniques, a novel brain tumor segmentation method is developed by integrating fully convolutional neural networks (FCNNs) and Conditional Random Fields (CRFs) in a unified framework to obtain segmentation results with appearance and spatial consistency. We train a deep learning based segmentation model using 2D image patches and image slices in the following steps: 1) training FCNNs using image patches; 2) training CRFs as Recurrent Neural Networks (CRF-RNN) using image slices, with the parameters of the FCNNs fixed; and 3) fine-tuning the FCNNs and the CRF-RNN using image slices. In particular, we train 3 segmentation models using 2D image patches and slices obtained in the axial, coronal and sagittal views, respectively, and combine them to segment brain tumors using a voting-based fusion strategy. Our method can segment brain images slice by slice, much faster than methods based on image patches. We have evaluated our method on imaging data provided by the Multimodal Brain Tumor Image Segmentation Challenge (BRATS) 2013, BRATS 2015 and BRATS 2016. The experimental results demonstrate that our method can build a segmentation model with Flair, T1c, and T2 scans and achieve performance competitive with models built with Flair, T1, T1c, and T2 scans. Copyright © 2017 Elsevier B.V. All rights reserved.
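    The three-stage schedule in this abstract maps naturally onto freezing and unfreezing parameter groups. Below is a hedged PyTorch-style sketch, assuming hypothetical fcnn and crf_rnn modules and patch/slice data loaders; it illustrates the staging only, not the authors' exact code.

```python
# Staged training: FCNN alone on patches, then CRF-RNN on slices with the
# FCNN frozen, then joint fine-tuning on slices. Names are placeholders.
import torch

def stage(params, loader, model_fn, loss_fn, epochs=1, lr=1e-3):
    opt = torch.optim.SGD(params, lr=lr)
    for _ in range(epochs):
        for x, t in loader:
            opt.zero_grad()
            loss = loss_fn(model_fn(x), t)
            loss.backward()
            opt.step()

# 1) train the FCNN alone on image patches:
# stage(fcnn.parameters(), patches, fcnn, loss_fn)
# 2) train the CRF-RNN on slices with FCNN weights frozen:
# for p in fcnn.parameters(): p.requires_grad_(False)
# stage(crf_rnn.parameters(), slices, lambda x: crf_rnn(fcnn(x)), loss_fn)
# 3) fine-tune both jointly on slices:
# for p in fcnn.parameters(): p.requires_grad_(True)
# all_params = list(fcnn.parameters()) + list(crf_rnn.parameters())
# stage(all_params, slices, lambda x: crf_rnn(fcnn(x)), loss_fn)
```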

  8. Utilization of digital LANDSAT imagery for the study of granitoid bodies in Rondonia: Case example of the Pedra Branca massif

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Almeidafilho, R.; Payolla, B. L.; Depinho, O. G.; Bettencourt, J. S.

    1984-01-01

    Analysis of digital multispectral MSS-LANDSAT images, enhanced through computer techniques and enlarged to a video scale of 1:100,000, shows the main geological and structural features of the Pedra Branca granitic massif in Rondonia. These are not observed in aerial photographs or radar images. Field work shows that the LANDSAT photogeological units correspond to different facies of granitic rocks in the Pedra Branca massif. Even under the particular conditions of Amazonia (tropical forest, deep weathering, and Quaternary sedimentary cover), adequate utilization of orbital remote sensing images can be an important tool for orienting field work.

  9. The Capodimonte Deep Field

    NASA Astrophysics Data System (ADS)

    2001-04-01

    A Window towards the Distant Universe. The Osservatorio Astronomico Capodimonte Deep Field (OACDF) is a multi-colour imaging survey project that is opening a new window towards the distant universe. It is conducted with the ESO Wide Field Imager (WFI), a 67-million pixel advanced camera attached to the MPG/ESO 2.2-m telescope at the La Silla Observatory (Chile). As a pilot project at the Osservatorio Astronomico di Capodimonte (OAC) [1], the OACDF aims at providing a large photometric database for deep extragalactic studies, with important by-products for galactic and planetary research. Moreover, it also serves to gather experience in the proper and efficient handling of very large data sets, preparing for the arrival of the VLT Survey Telescope (VST) with the 1 x 1 degree^2 OmegaCam facility.
    ESO PR Photo 15a/01: A three-colour image of about 1/4 of the Capodimonte Deep Field, obtained with the WFI on the MPG/ESO 2.2-m telescope at the La Silla Observatory. It covers "OACDF Subfield no. 2 (OACDF2)" with an area of about 35 x 32 arcmin^2 (about the size of the full moon) and is one of the "deepest" wide-field images ever obtained.
    With the comparatively few large telescopes available in the world, it is not possible to study the Universe to its outmost limits in all directions. Instead, astronomers try to obtain the most detailed information possible in selected viewing directions, assuming that what they find there is representative for the Universe as a whole. This is the philosophy behind the so-called "deep-field" projects that subject small areas of the sky to intensive observations with different telescopes and methods. The astronomers determine the properties of the objects seen, as well as their distances, and are then able to obtain a map of the space within the corresponding cone-of-view (the "pencil beam"). Recent, successful examples of this technique are the "Hubble Deep Field" (cf. ESO PR Photo 26/98) and the "Chandra Deep Field" (ESO PR 05/01).
    In this context, the Capodimonte Deep Field (OACDF) is a pilot research project, now underway at the Osservatorio Astronomico di Capodimonte (OAC) in Napoli (Italy). It is a multi-colour imaging survey performed with the Wide Field Imager (WFI), a 67-million pixel (8k x 8k) digital camera installed at the 2.2-m MPG/ESO Telescope at ESO's La Silla Observatory in Chile. The scientific goal of the OACDF is to provide an important database for subsequent extragalactic, galactic and planetary studies. It will allow the astronomers at OAC - who are involved in the VLT Survey Telescope (VST) project - to gain insight into the processing (and use) of the large data flow from a camera similar to, but four times smaller than, the OmegaCam wide-field camera that will be installed at the VST.
    The field selection for the OACDF was based on the following criteria: * There must be no stars brighter than about 9th magnitude in the field, in order to avoid saturation of the CCD detector and effects from straylight in the telescope and camera, and no Solar System planets should be near the field during the observations; * It must be located far from the Milky Way plane (at high galactic latitude) in order to reduce the number of galactic stars seen in this direction; * It must be located in the southern sky in order to optimize observing conditions (in particular, the altitude of the field above the horizon), as seen from the La Silla and Paranal sites; * There should be little interstellar material in this direction that may obscure the view towards the distant Universe; * Observations in this field should have been made with the Hubble Space Telescope (HST) that may serve for comparison and calibration purposes.
    Based on these criteria, the astronomers selected a field measuring about 1 x 1 deg^2 in the southern constellation of Corvus (The Raven). This is now known as the Capodimonte Deep Field (OACDF). PR Photo 15a/01 covers one-quarter of the full field (Subfield No. 2 - OACDF2); some of the objects seen in this area are described below. More than 35,000 objects have been found in this area; the faintest are nearly 100 million times fainter than what can be perceived with the unaided eye in the dark sky.
    Selected objects in the Capodimonte Deep Field:
    ESO PR Photo 15b/01: Enlargement of the interacting galaxies seen in the upper left corner of the OACDF2 field shown in PR Photo 15a/01. The enlargement covers 1250 x 1130 WFI pixels (1 pixel = 0.24 arcsec), or about 5.0 x 4.5 arcmin^2 in the sky. The lower spiral is itself an interacting double.
    ESO PR Photo 15c/01: Enlargement of a spiral galaxy and a nebulous object in this area. The field shown covers 1250 x 750 pixels, or about 5 x 3 arcmin^2 in the sky. Note the very red objects next to the two bright stars in the lower-right corner; their colours are consistent with those of spheroidal galaxies at intermediate distances (redshifts).
    ESO PR Photo 15d/01: A further enlargement of a galaxy cluster, most of whose members are located in the north-east quadrant (upper left) and have a reddish colour. The nebulous object to the upper left is a dwarf galaxy of spheroidal shape. The red object near the centre of the field, resembling a double star, is very likely a gravitational lens [2]. Some of the very red, point-like objects in the field may be distant quasars, very-low-mass stars or, possibly, relatively nearby brown dwarf stars. The field shown covers 1380 x 1630 pixels, or 5.5 x 6.5 arcmin^2.
    ESO PR Photo 15e/01: Enlargement of a moderately distant galaxy cluster in the south-east quadrant (lower left) of the OACDF2 field. The field measures 1380 x 1260 pixels, or about 5.5 x 5.0 arcmin^2 in the sky.
    ESO PR Photo 15f/01: Enlargement of the elliptical galaxy located to the west (right) in the OACDF2 field. The numerous tiny objects surrounding the galaxy may be globular clusters. The fuzzy object on the right edge of the field may be a dwarf spheroidal galaxy. The size of the field is about 6 x 5 arcmin^2.
    Technical Information about the OACDF Survey: The observations for the OACDF project were performed in three ESO periods (18-22 April 1999, 7-12 March 2000 and 26-30 April 2000). Some 100 Gbyte of raw data were collected during each of the three observing runs. The first OACDF run was done just after the commissioning of the ESO-WFI. The observational strategy was to perform a 1 x 1 deg^2 short-exposure ("shallow") survey and then a 0.5 x 1 deg^2 "deep" survey. The shallow survey was performed in the B, V, R and I broad-band filters over four adjacent 30 x 30 arcmin^2 fields, together covering a 1 x 1 deg^2 field in the sky. Two of these fields were chosen for the 0.5 x 1 deg^2 deep survey; OACDF2 shown above is one of them. The deep survey was performed in the B, V, R broad bands and in other intermediate-band filters. The OACDF data are fully reduced and the catalogue extraction has started. A two-processor (500 MHz each) DS20 machine with 100 Gbyte of hard disk, specifically acquired at the OAC for WFI data reduction, was used. The detailed guidelines of the data reduction, as well as the catalogue extraction, are reported in a research paper that will appear in the European research journal Astronomy & Astrophysics.
    Notes: [1] The team members are Massimo Capaccioli, Juan M. Alcala', Roberto Silvotti, Magda Arnaboldi, Vincenzo Ripepi, Emanuella Puddu, Massimo Dall'Ora, Giuseppe Longo and Roberto Scaramella. [2] This is a preliminary result by Juan Alcala', Massimo Capaccioli, Giuseppe Longo, Mikhail Sazhin, Roberto Silvotti and Vincenzo Testa, based on recent observations with the Telescopio Nazionale Galileo (TNG) which show that the spectra of the two objects are identical.
    Technical information about the photos: PR Photo 15a/01 was obtained by combining the B, V, and R stacked images of the OACDF2 field. The total exposure times are 2 hours in B and V (12 ditherings of 10 min each were stacked to produce the B and V images) and 3 hours in R (13 ditherings of 15 min each). The mosaic images in the B and V bands were aligned relative to the R-band image and adjusted to a logarithmic intensity scale prior to the combination. The typical seeing was of the order of 1 arcsec in each of the three bands. Preliminary estimates of the three-sigma limiting magnitudes in B, V and R are 25.5, 25.0 and 25.0, respectively. More than 35,000 objects are detected above the three-sigma level. PR Photos 15b-f/01 display selected areas of the field shown in PR Photo 15a/01 at the original WFI scale, demonstrating the enormous amount of information contained in these wide-field images. In all photos, North is up and East is left.

  10. Monitoring controlled graves representing common burial scenarios with ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Schultz, John J.; Martin, Michael M.

    2012-08-01

    Implementing controlled geophysical research is imperative to understand the variables affecting detection of clandestine graves during real-life forensic searches. This study focused on monitoring two empty control graves (shallow and deep) and six burials containing a small pig carcass (Sus scrofa) representing different forensic burial scenarios: a shallow buried naked carcass, a deep buried naked carcass, a deep buried carcass covered by a layer of rocks, a deep buried carcass covered by a layer of lime, a deep buried carcass wrapped in an impermeable tarpaulin and a deep buried carcass wrapped in a cotton blanket. Multi-frequency ground penetrating radar (GPR) data were collected monthly over a 12-month monitoring period. The research site was a cleared field within a wooded area in a humid subtropical environment, and the soil consisted of a Spodosol, a common soil type in Florida. This study compared 2D GPR reflection profiles and horizontal time slices obtained with both 250 and 500 MHz dominant-frequency antennae to determine the utility of both antennae for grave detection in this environment over time. Overall, a combination of the two antenna frequencies provided optimal detection of the targets. Better images were noted for deep graves than for shallow graves. The 250 MHz antenna provided better images for detecting deep graves, as fewer non-target anomalies were produced at lower radar frequencies; it also provided better images of the disturbed ground. Conversely, the 500 MHz antenna provided better images when detecting the shallow pig grave. The graves that contained a pig carcass with associated grave items provided the best results, particularly the carcass covered with rocks and the carcass wrapped in a tarpaulin. Finally, during periods of increased soil moisture, detection of the graves improved, most likely due to conductive decompositional fluid from the carcasses.

  11. Exploring the extremely low surface brightness sky: distances to 23 newly discovered objects in Dragonfly fields

    NASA Astrophysics Data System (ADS)

    van Dokkum, Pieter

    2016-10-01

    We are obtaining deep, wide field images of nearby galaxies with the Dragonfly Telephoto Array. This telescope is optimized for low surface brightness imaging, and we are finding many low surface brightness objects in the Dragonfly fields. In Cycle 22 we obtained ACS imaging for 7 galaxies that we had discovered in a Dragonfly image of the galaxy M101. Unexpectedly, the ACS data show that only 3 of the galaxies are members of the M101 group, and the other 4 are very large Ultra Diffuse Galaxies (UDGs) at much greater distance. Building on our Cycle 22 program, here we request ACS imaging for 23 newly discovered low surface brightness objects in four Dragonfly fields centered on the galaxies NGC 1052, NGC 1084, NGC 3384, and NGC 4258. The immediate goals are to construct the satellite luminosity functions in these four fields and to constrain the number density of UDGs that are not in rich clusters. More generally, this complete sample of extremely low surface brightness objects provides the first systematic insight into galaxies whose brightness peaks at >25 mag/arcsec^2.

  12. A deep ALMA image of the Hubble Ultra Deep Field

    NASA Astrophysics Data System (ADS)

    Dunlop, J. S.; McLure, R. J.; Biggs, A. D.; Geach, J. E.; Michałowski, M. J.; Ivison, R. J.; Rujopakarn, W.; van Kampen, E.; Kirkpatrick, A.; Pope, A.; Scott, D.; Swinbank, A. M.; Targett, T. A.; Aretxaga, I.; Austermann, J. E.; Best, P. N.; Bruce, V. A.; Chapin, E. L.; Charlot, S.; Cirasuolo, M.; Coppin, K.; Ellis, R. S.; Finkelstein, S. L.; Hayward, C. C.; Hughes, D. H.; Ibar, E.; Jagannathan, P.; Khochfar, S.; Koprowski, M. P.; Narayanan, D.; Nyland, K.; Papovich, C.; Peacock, J. A.; Rieke, G. H.; Robertson, B.; Vernstrom, T.; Werf, P. P. van der; Wilson, G. W.; Yun, M.

    2017-04-01

We present the results of the first, deep Atacama Large Millimeter Array (ALMA) imaging covering the full ≃4.5 arcmin^2 of the Hubble Ultra Deep Field (HUDF) imaged with Wide Field Camera 3/IR on HST. Using a 45-pointing mosaic, we have obtained a homogeneous 1.3-mm image reaching σ1.3 ≃ 35 μJy, at a resolution of ≃0.7 arcsec. From an initial list of ≃50 > 3.5σ peaks, a rigorous analysis confirms 16 sources with S1.3 > 120 μJy. All of these have secure galaxy counterparts with robust redshifts (⟨z⟩ = 2.15). Due to the unparalleled supporting data, the physical properties of the ALMA sources are well constrained, including their stellar masses (M*) and UV+FIR star formation rates (SFR). Our results show that stellar mass is the best predictor of SFR in the high-redshift Universe; indeed at z ≥ 2 our ALMA sample contains seven of the nine galaxies in the HUDF with M* ≥ 2 × 10^10 M⊙, and we detect only one galaxy at z > 3.5, reflecting the rapid drop-off of high-mass galaxies with increasing redshift. The detections, coupled with stacking, allow us to probe the redshift/mass distribution of the 1.3-mm background down to S1.3 ≃ 10 μJy. We find strong evidence for a steep star-forming 'main sequence' at z ≃ 2, with SFR ∝ M* and a mean specific SFR ≃ 2.2 Gyr^-1. Moreover, we find that ≃85 per cent of total star formation at z ≃ 2 is enshrouded in dust, with ≃65 per cent of all star formation at this epoch occurring in high-mass galaxies (M* > 2 × 10^10 M⊙), for which the average obscured:unobscured SF ratio is ≃200. Finally, we revisit the cosmic evolution of SFR density; we find this peaks at z ≃ 2.5, and that the star-forming Universe transits from primarily unobscured to primarily obscured at z ≃ 4.
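
The quoted mean specific SFR makes the mass-to-SFR link concrete; as a worked example using only numbers from this abstract:

```latex
% Worked example: the mean specific SFR at z ~ 2 applied to the quoted mass cut
\mathrm{sSFR} \equiv \frac{\mathrm{SFR}}{M_*} \simeq 2.2\ \mathrm{Gyr}^{-1}
\quad\Longrightarrow\quad
\mathrm{SFR} \simeq 2.2\,\mathrm{Gyr}^{-1}\times 2\times10^{10}\,M_\odot
\simeq 44\ M_\odot\,\mathrm{yr}^{-1}.
```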

  13. Cosmic Accretion and Galaxy Co-Evolution: Lessons from the Extended Chandra Deep Field South

    NASA Astrophysics Data System (ADS)

    Urry, C. Megan

    2011-05-01

    The Chandra deep fields reveal that most cosmic accretion onto supermassive black holes is obscured by gas and dust. The GOODS and MUSYC multiwavelength data show that many X-ray-detected AGN are faint and red (or even undetectable) in the optical but bright in the infrared, as is characteristic of obscured sources. (N.B. The ECDFS is most sensitive to the AGN that constitute the X-ray background, namely, moderate luminosity AGN, with log Lx=43-44, at moderate redshifts, 0.5

  14. Deep learning in breast cancer risk assessment: evaluation of convolutional neural networks on a clinical dataset of full-field digital mammograms.

    PubMed

    Li, Hui; Giger, Maryellen L; Huynh, Benjamin Q; Antropova, Natalia O

    2017-10-01

To evaluate deep learning in the assessment of breast cancer risk, convolutional neural networks (CNNs) with transfer learning were used to extract parenchymal characteristics directly from full-field digital mammographic (FFDM) images instead of using computerized radiographic texture analysis (RTA). 456 clinical FFDM cases were included: a "high-risk" BRCA1/2 gene-mutation carriers dataset (53 cases), a "high-risk" unilateral cancer patients dataset (75 cases), and a "low-risk" dataset (328 cases). Deep learning was compared to the use of features from RTA, as well as to a combination of both, in the task of distinguishing between high- and low-risk subjects. Similar classification performances were obtained using the CNN [area under the curve [Formula: see text]; standard error [Formula: see text]].

  15. A deep learning approach for pose estimation from volumetric OCT data.

    PubMed

    Gessert, Nils; Schlüter, Matthias; Schlaefer, Alexander

    2018-05-01

    Tracking the pose of instruments is a central problem in image-guided surgery. For microscopic scenarios, optical coherence tomography (OCT) is increasingly used as an imaging modality. OCT is suitable for accurate pose estimation due to its micrometer range resolution and volumetric field of view. However, OCT image processing is challenging due to speckle noise and reflection artifacts in addition to the images' 3D nature. We address pose estimation from OCT volume data with a new deep learning-based tracking framework. For this purpose, we design a new 3D convolutional neural network (CNN) architecture to directly predict the 6D pose of a small marker geometry from OCT volumes. We use a hexapod robot to automatically acquire labeled data points which we use to train 3D CNN architectures for multi-output regression. We use this setup to provide an in-depth analysis on deep learning-based pose estimation from volumes. Specifically, we demonstrate that exploiting volume information for pose estimation yields higher accuracy than relying on 2D representations with depth information. Supporting this observation, we provide quantitative and qualitative results that 3D CNNs effectively exploit the depth structure of marker objects. Regarding the deep learning aspect, we present efficient design principles for 3D CNNs, making use of insights from the 2D deep learning community. In particular, we present Inception3D as a new architecture which performs best for our application. We show that our deep learning approach reaches errors at our ground-truth label's resolution. We achieve a mean average error of 14.89 ± 9.3 µm and 0.096 ± 0.072° for position and orientation learning, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.
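
As a hedged sketch of the direct 6D pose regression this record describes (and not the authors' Inception3D architecture), a minimal 3D CNN in PyTorch might look like the following; the volume size and layer widths are illustrative assumptions.

```python
# Minimal sketch: a small 3D CNN mapping an OCT volume to a 6D pose
# (x, y, z, roll, pitch, yaw) via multi-output regression.
import torch
import torch.nn as nn

class Pose3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                       # 32^3 -> 16^3
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                       # 16^3 -> 8^3
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, 6),                     # 3 translations + 3 rotations
        )

    def forward(self, volume):
        return self.regressor(self.features(volume))

# A batch of four 32^3-voxel OCT volumes; output is a (4, 6) pose tensor.
model = Pose3DCNN()
poses = model(torch.randn(4, 1, 32, 32, 32))
```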

  16. MRI-induced heating of deep brain stimulation leads

    NASA Astrophysics Data System (ADS)

    Mohsin, Syed A.; Sheikh, Noor M.; Saeed, Usman

    2008-10-01

    The radiofrequency (RF) field used in magnetic resonance imaging is scattered by medical implants. The scattered field of a deep brain stimulation lead can be very intense near the electrodes stimulating the brain. The effect is more pronounced if the lead behaves as a resonant antenna. In this paper, we examine the resonant length effect. We also use the finite element method to compute the near field for (i) the lead immersed in inhomogeneous tissue (fat, muscle, and brain tissues) and (ii) the lead connected to an implantable pulse generator. Electric field, specific absorption rate and induced temperature rise distributions have been obtained in the brain tissue surrounding the electrodes. The worst-case scenario has been evaluated by neglecting the effect of blood perfusion. The computed values are in good agreement with in vitro measurements made in the laboratory.

  17. Case Study of Image-Guided Deep Brain Stimulation: Magnetic Resonance Imaging-Based White Matter Tractography Shows Differences in Responders and Nonresponders.

    PubMed

    O'Halloran, Rafael L; Chartrain, Alexander G; Rasouli, Jonathan J; Ramdhani, Ritesh A; Kopell, Brian Harris

    2016-12-01

    The caudal zona incerta (cZI) is an increasingly popular deep brain stimulation (DBS) target for the treatment of tremor-predominant disease. The dentatorubrothalamic tract (DRTT) is a white matter fiber bundle that traverses the cZI and can be identified using diffusion-weighted magnetic resonance imaging fiber tractography to ascertain its precise course. In this report, we compare 2 patient cases of cZI DBS, a responder and a nonresponder. Patient 1 (responder) is a 65-year-old man with medically refractory Parkinson disease who underwent bilateral DBS lead placement in the cZI. Postoperatively he demonstrated >90% reduction in baseline tremor and was not limited by stimulation side effects. Postoperative imaging showed correct lead placement in the cZI. Tractography revealed a DRTT within the field of stimulation, bilaterally. Patient 2 (nonresponder) is a 61-year-old man with medically refractory Parkinson disease who also underwent bilateral DBS lead placement in the cZI. He initially demonstrated >90% reduction in baseline tremor but developed disabling dystonia of his left leg and significant slurring of his speech in the months after surgery. Postoperative imaging showed bilateral lead placement in the cZI. Right-sided electrode revision was recommended and resulted in relief of tremor and reduced dystonic side effects. Tractography analysis of the original leads revealed a DRTT with an atypical anterior trajectory and a location outside the field of stimulation. Tractography analysis of the revised lead showed a DRTT within the field of stimulation. Preoperative diffusion-weighted magnetic resonance imaging fiber tractography imaging of the DRTT has the potential to improve and individualize DBS planning. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Modeling Stratigraphic Architecture of Deep-water Deposits Using a Small Unmanned Aircraft: Neogene Thin-bedded Turbidites, East Coast Basin, New Zealand

    NASA Astrophysics Data System (ADS)

    Nieminski, N.; Graham, S. A.

    2014-12-01

One of the outstanding challenges of field geology is inaccessibility of exposure. The ability to view and characterize outcrops that are difficult to study from the ground is greatly improved by aerial investigation. Detailed stratigraphic architecture of such exposures is best addressed by using advances in, and the availability of, small unmanned aircraft systems (sUAS) that can safely navigate from high-altitude overviews of study areas to within a meter of the exposure of interest. High-resolution photographs acquired at various elevations and azimuths by sUAS are then used to convert field measurements to digital representations in three dimensions at a fine scale. Photogrammetric software is used to capture complex, detailed topography by creating digital surface models with a range imaging technique that estimates three-dimensional structures from two-dimensional image sequences. The digital surface model is overlain by detailed, high-resolution photography. Pairing sUAS technology with readily available photogrammetry software that requires little processing time and resources offers a revolutionary and cost-effective methodology for geoscientists to investigate and quantify the stratigraphic and structural complexity of field studies from the convenience of the office. These methods of imaging and modeling remote outcrops are demonstrated in the East Coast Basin, New Zealand, where wave-cut platform exposures of Miocene deep-water deposits offer a unique opportunity to investigate the flow processes and resulting characteristics of thin-bedded turbidite deposits. Stratigraphic architecture of wave-cut platform and vertically dipping exposures of these thin-bedded turbidites is investigated with sUAS coupled with Structure from Motion (SfM) photogrammetry software. This approach allows the geometric and spatial variation of deep-water architecture to be characterized continuously along 2,000 meters of lateral exposure, as well as to measure and quantify cyclic variations in thin-bedded turbidites at centimeter scale. Results yield a spatial and temporal understanding of a deep-water depositional system at a scale that was previously unattainable using conventional field geology techniques, and a virtual outcrop that can be used for classroom education.

  19. HFF-DeepSpace Photometric Catalogs of the 12 Hubble Frontier Fields, Clusters, and Parallels: Photometry, Photometric Redshifts, and Stellar Masses

    NASA Astrophysics Data System (ADS)

    Shipley, Heath V.; Lange-Vagle, Daniel; Marchesini, Danilo; Brammer, Gabriel B.; Ferrarese, Laura; Stefanon, Mauro; Kado-Fong, Erin; Whitaker, Katherine E.; Oesch, Pascal A.; Feinstein, Adina D.; Labbé, Ivo; Lundgren, Britt; Martis, Nicholas; Muzzin, Adam; Nedkova, Kalina; Skelton, Rosalind; van der Wel, Arjen

    2018-03-01

    We present Hubble multi-wavelength photometric catalogs, including (up to) 17 filters with the Advanced Camera for Surveys and Wide Field Camera 3 from the ultra-violet to near-infrared for the Hubble Frontier Fields and associated parallels. We have constructed homogeneous photometric catalogs for all six clusters and their parallels. To further expand these data catalogs, we have added ultra-deep K S -band imaging at 2.2 μm from the Very Large Telescope HAWK-I and Keck-I MOSFIRE instruments. We also add post-cryogenic Spitzer imaging at 3.6 and 4.5 μm with the Infrared Array Camera (IRAC), as well as archival IRAC 5.8 and 8.0 μm imaging when available. We introduce the public release of the multi-wavelength (0.2–8 μm) photometric catalogs, and we describe the unique steps applied for the construction of these catalogs. Particular emphasis is given to the source detection band, the contamination of light from the bright cluster galaxies (bCGs), and intra-cluster light (ICL). In addition to the photometric catalogs, we provide catalogs of photometric redshifts and stellar population properties. Furthermore, this includes all the images used in the construction of the catalogs, including the combined models of bCGs and ICL, the residual images, segmentation maps, and more. These catalogs are a robust data set of the Hubble Frontier Fields and will be an important aid in designing future surveys, as well as planning follow-up programs with current and future observatories to answer key questions remaining about first light, reionization, the assembly of galaxies, and many more topics, most notably by identifying high-redshift sources to target.

  20. Deep learning for the detection of barchan dunes in satellite images

    NASA Astrophysics Data System (ADS)

    Azzaoui, A. M.; Adnani, M.; Elbelrhiti, H.; Chaouki, B. E. K.; Masmoudi, L.

    2017-12-01

Barchan dunes are known to be the fastest moving sand dunes in deserts, as they form under unidirectional winds and limited sand supply over a firm coherent basement (Elbelrhiti and Hargitai, 2015). They have been studied in the context of natural hazard monitoring, as they can be a threat to human activities and infrastructure, and as a natural phenomenon occurring in other planetary landforms such as Mars or Venus (Bourke et al., 2010). Our region of interest was located in a desert region in the south of Morocco, in a barchan dune corridor next to the town of Tarfaya. This region, which is part of the Sahara desert, contains thousands of barchans, which limits the number of dunes that can be studied during field missions. Therefore, we chose to monitor barchan dunes with satellite imagery, which can be seen as a complementary approach to field missions. We collected data from the Sentinel platform (https://scihub.copernicus.eu/dhus/) and used a machine learning method as the basis for detecting barchan dune positions in the satellite image. We trained a deep learning model on a mid-sized dataset containing blocks representing images of barchan dunes and images of other desert features, which we collected by cropping and annotating the source image. During testing, we browsed the satellite image with a gliding window that evaluated each block and produced a probability map; a threshold on this map then exposed the locations of barchan dunes, as sketched below. We used a subsample of data to train the model and gradually incremented the size of the training set to get finer results and avoid overfitting. The positions of barchan dunes were successfully detected, and deep learning proved an effective method for this application. Sentinel-2 images were chosen for their availability and good temporal resolution, which will allow the tracking of barchan dunes in future work. While Sentinel images had sufficient spatial resolution for the detection of mid-size to large barchans, we noted that it was relatively difficult to detect smaller barchan dunes. Overall, deep learning allowed us to achieve high accuracy in the detection of barchan dunes. The tracking of hundreds of barchans using this detection method would provide insight into the dynamics of this natural phenomenon.
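
A hedged sketch of the gliding-window detection stage described above. DuneClassifier is a hypothetical stand-in for the trained deep model; the block size, stride and threshold are illustrative values.

```python
# Sketch: score each image block, build a probability map, threshold it.
import numpy as np

class DuneClassifier:
    """Toy stand-in: scores bright patches as dune-like (illustration only)."""
    def predict(self, patch):
        return float(patch.mean())

def detect_dunes(image, model, block=64, stride=32, threshold=0.5):
    rows = (image.shape[0] - block) // stride + 1
    cols = (image.shape[1] - block) // stride + 1
    prob_map = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = image[i*stride:i*stride+block, j*stride:j*stride+block]
            prob_map[i, j] = model.predict(patch)  # P(patch contains a dune)
    return prob_map, prob_map >= threshold  # probability map + detection mask

image = np.random.default_rng(0).random((512, 512))
prob_map, mask = detect_dunes(image, DuneClassifier())
```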

  1. Advanced imaging in acute and chronic deep vein thrombosis

    PubMed Central

    Karande, Gita Yashwantrao; Sanchez, Yadiel; Baliyan, Vinit; Mishra, Vishala; Ganguli, Suvranu; Prabhakar, Anand M.

    2016-01-01

    Deep venous thrombosis (DVT) affecting the extremities is a common clinical problem. Prompt imaging aids in rapid diagnosis and adequate treatment. While ultrasound (US) remains the workhorse of detection of extremity venous thrombosis, CT and MRI are commonly used as the problem-solving tools either to visualize the thrombosis in central veins like superior or inferior vena cava (IVC) or to test for the presence of complications like pulmonary embolism (PE). The cross-sectional modalities also offer improved visualization of venous collaterals. The purpose of this article is to review the established modalities used for characterization and diagnosis of DVT, and further explore promising innovations and recent advances in this field. PMID:28123971

  2. Localization and Classification of Paddy Field Pests using a Saliency Map and Deep Convolutional Neural Network.

    PubMed

    Liu, Ziyi; Gao, Junfeng; Yang, Guoguo; Zhang, Huan; He, Yong

    2016-02-11

We present a pipeline for the visual localization and classification of agricultural pest insects by computing a saliency map and applying deep convolutional neural network (DCNN) learning. First, we used a global contrast region-based approach to compute a saliency map for localizing pest insect objects. Bounding squares containing targets were then extracted, resized to a fixed size, and used to construct a large standard database called Pest ID. This database was then utilized for self-learning of local image features which were, in turn, used for classification by DCNN. DCNN learning optimized the critical parameters, including size, number and convolutional stride of local receptive fields, dropout ratio and the final loss function. To demonstrate the practical utility of using DCNN, we explored different architectures by shrinking depth and width, and found effective sizes that can act as alternatives for practical applications. On the test set of paddy field images, our architectures achieved a mean Average Precision (mAP) of 0.951, a significant improvement over previous methods.
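
To make the first pipeline stage concrete, here is a hedged sketch of a global-contrast saliency map and a fixed-size crop around the most salient region; the actual Pest ID construction surely differs in detail, and the crop size is an illustrative assumption.

```python
# Sketch: crude global-contrast saliency (distance of each pixel's colour from
# the image mean), then a square crop resized input for the DCNN.
import numpy as np

def global_contrast_saliency(image_rgb):
    mean_color = image_rgb.reshape(-1, 3).mean(axis=0)
    return np.linalg.norm(image_rgb - mean_color, axis=2)

def extract_square(image_rgb, saliency, size=128):
    y, x = np.unravel_index(saliency.argmax(), saliency.shape)
    half = size // 2
    y0 = np.clip(y - half, 0, image_rgb.shape[0] - size)
    x0 = np.clip(x - half, 0, image_rgb.shape[1] - size)
    return image_rgb[y0:y0+size, x0:x0+size]  # fixed-size patch for the DCNN

image = np.random.default_rng(1).random((480, 640, 3))
patch = extract_square(image, global_contrast_saliency(image))
```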

  3. Localization and Classification of Paddy Field Pests using a Saliency Map and Deep Convolutional Neural Network

    PubMed Central

    Liu, Ziyi; Gao, Junfeng; Yang, Guoguo; Zhang, Huan; He, Yong

    2016-01-01

We present a pipeline for the visual localization and classification of agricultural pest insects by computing a saliency map and applying deep convolutional neural network (DCNN) learning. First, we used a global contrast region-based approach to compute a saliency map for localizing pest insect objects. Bounding squares containing targets were then extracted, resized to a fixed size, and used to construct a large standard database called Pest ID. This database was then utilized for self-learning of local image features which were, in turn, used for classification by DCNN. DCNN learning optimized the critical parameters, including size, number and convolutional stride of local receptive fields, dropout ratio and the final loss function. To demonstrate the practical utility of using DCNN, we explored different architectures by shrinking depth and width, and found effective sizes that can act as alternatives for practical applications. On the test set of paddy field images, our architectures achieved a mean Average Precision (mAP) of 0.951, a significant improvement over previous methods. PMID:26864172

  4. WFIRST: Science from the Guest Investigator and Parallel Observation Programs

    NASA Astrophysics Data System (ADS)

    Postman, Marc; Nataf, David; Furlanetto, Steve; Milam, Stephanie; Robertson, Brant; Williams, Ben; Teplitz, Harry; Moustakas, Leonidas; Geha, Marla; Gilbert, Karoline; Dickinson, Mark; Scolnic, Daniel; Ravindranath, Swara; Strolger, Louis; Peek, Joshua; Marc Postman

    2018-01-01

    The Wide Field InfraRed Survey Telescope (WFIRST) mission will provide an extremely rich archival dataset that will enable a broad range of scientific investigations beyond the initial objectives of the proposed key survey programs. The scientific impact of WFIRST will thus be significantly expanded by a robust Guest Investigator (GI) archival research program. We will present examples of GI research opportunities ranging from studies of the properties of a variety of Solar System objects, surveys of the outer Milky Way halo, comprehensive studies of cluster galaxies, to unique and new constraints on the epoch of cosmic re-ionization and the assembly of galaxies in the early universe.WFIRST will also support the acquisition of deep wide-field imaging and slitless spectroscopic data obtained in parallel during campaigns with the coronagraphic instrument (CGI). These parallel wide-field imager (WFI) datasets can provide deep imaging data covering several square degrees at no impact to the scheduling of the CGI program. A competitively selected program of well-designed parallel WFI observation programs will, like the GI science above, maximize the overall scientific impact of WFIRST. We will give two examples of parallel observations that could be conducted during a proposed CGI program centered on a dozen nearby stars.

  5. Three-dimensional all-dielectric metamaterial solid immersion lens for subwavelength imaging at visible frequencies

    PubMed Central

    Fan, Wen; Yan, Bing; Wang, Zengbo; Wu, Limin

    2016-01-01

Although all-dielectric metamaterials offer a low-loss alternative to current metal-based metamaterials to manipulate light at the nanoscale and may have important applications, very few have been reported to date owing to the limitations of current nanofabrication technologies. We develop a new “nano–solid-fluid assembly” method using 15-nm TiO2 nanoparticles as building blocks to fabricate the first three-dimensional (3D) all-dielectric metamaterial at visible frequencies. Because of its optical transparency, high refractive index, and deep-subwavelength structures, this 3D all-dielectric metamaterial-based solid immersion lens (mSIL) can produce a sharp image with a super-resolution of at least 45 nm under a white-light optical microscope, significantly exceeding the classical diffraction limit and previous near-field imaging techniques. Theoretical analysis reveals that electric field enhancement can be formed between contacting TiO2 nanoparticles, which causes effective confinement and propagation of visible light at the deep-subwavelength scale. This endows the mSIL with unusual abilities to illuminate object surfaces with large-area nanoscale near-field evanescent spots and to collect and convert the evanescent information into propagating waves. Our all-dielectric metamaterial design strategy demonstrates the potential to develop low-loss nanophotonic devices at visible frequencies. PMID:27536727

  6. HST Imaging of the (Almost) Dark ALFALFA Source AGC 229385

    NASA Astrophysics Data System (ADS)

    Brunker, Samantha; Salzer, John Joseph; McQuinn, Kristen B.; Janowiecki, Steven; Leisman, Luke; Rhode, Katherine L.; Adams, Elizabeth A.; Cannon, John M.; Giovanelli, Riccardo; Haynes, Martha P.

    2017-06-01

    We present deep HST imaging photometry of the extreme galaxy AGC 229385. This system was first discovered as an HI source in the ALFALFA all-sky HI survey. It was cataloged as an (almost) dark galaxy because it did not exhibit any obvious optical counterpart in the available wide-field survey data (e.g., SDSS). Deep optical imaging with the WIYN 3.5-m telescope revealed an ultra-low surface brightness stellar component located at the center of the HI detection. With a peak central surface brightness of 26.4 mag/sq. arcsec in g and very blue colors (g-r = -0.1), the stellar component to this gas-rich system is quite enigmatic. We have used our HST images to produce a deep CMD of the resolved stellar population present in AGC 229385. We clearly detect a red-giant branch and use it to infer a distance of 5.50 ± 0.23 Mpc. The CMD is dominated by older stars, contrary to expectations given the blue optical colors obtained from our ground-based photometry. Our new distance is substantially lower than earlier estimates, and shows that AGC 229385 is an extreme dwarf galaxy with one of the highest MHI/L ratios known.
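
For context, the distance modulus implied by the quoted red-giant-branch distance follows from the standard relation:

```latex
% Worked example: distance modulus for the quoted 5.50 Mpc
\mu = m - M = 5\log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right)
    = 5\log_{10}\!\left(\frac{5.50\times10^{6}\,\mathrm{pc}}{10\,\mathrm{pc}}\right)
    \simeq 28.7\ \mathrm{mag}.
```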

  7. Manifold learning of brain MRIs by deep learning.

    PubMed

    Brosch, Tom; Tam, Roger

    2013-01-01

Manifold learning of medical images plays a potentially important role for modeling anatomical variability within a population, with applications that include segmentation, registration, and prediction of clinical parameters. This paper describes a novel method for learning the manifold of 3D brain images that, unlike most existing manifold learning methods, does not require the manifold space to be locally linear, and does not require a predefined similarity measure or a prebuilt proximity graph. Our manifold learning method is based on deep learning, a machine learning approach that uses layered networks (called deep belief networks, or DBNs) and has recently received much attention in the computer vision field due to its success in object recognition tasks. DBNs have traditionally been too computationally expensive for application to 3D images due to the large number of trainable parameters. Our primary contributions are (1) a much more computationally efficient training method for DBNs that makes training on 3D medical images with a resolution of up to 128 x 128 x 128 practical, and (2) the demonstration that DBNs can learn a low-dimensional manifold of brain volumes that detects modes of variation that correlate to demographic and disease parameters.

  8. NutriNet: A Deep Learning Food and Drink Image Recognition System for Dietary Assessment

    PubMed Central

    Koroušić Seljak, Barbara

    2017-01-01

    Automatic food image recognition systems are alleviating the process of food-intake estimation and dietary assessment. However, due to the nature of food images, their recognition is a particularly challenging task, which is why traditional approaches in the field have achieved a low classification accuracy. Deep neural networks have outperformed such solutions, and we present a novel approach to the problem of food and drink image detection and recognition that uses a newly-defined deep convolutional neural network architecture, called NutriNet. This architecture was tuned on a recognition dataset containing 225,953 512 × 512 pixel images of 520 different food and drink items from a broad spectrum of food groups, on which we achieved a classification accuracy of 86.72%, along with an accuracy of 94.47% on a detection dataset containing 130,517 images. We also performed a real-world test on a dataset of self-acquired images, combined with images from Parkinson’s disease patients, all taken using a smartphone camera, achieving a top-five accuracy of 55%, which is an encouraging result for real-world images. Additionally, we tested NutriNet on the University of Milano-Bicocca 2016 (UNIMIB2016) food image dataset, on which we improved upon the provided baseline recognition result. An online training component was implemented to continually fine-tune the food and drink recognition model on new images. The model is being used in practice as part of a mobile app for the dietary assessment of Parkinson’s disease patients. PMID:28653995

  9. Exploring Deep Learning and Sparse Matrix Format Selection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Y.; Liao, C.; Shen, X.

We proposed to explore the use of Deep Neural Networks (DNN) for addressing these longstanding barriers. The recent rapid progress of DNN technology has had a large impact in many fields, significantly improving prediction accuracy over traditional machine learning techniques in image classification, speech recognition, machine translation, and so on. To some degree, these tasks resemble the decision making in many HPC tasks, including the aforementioned format selection for SpMV and linear solver selection. For instance, sparse matrix format selection is akin to image classification—such as telling whether an image contains a dog or a cat; in both problems, the right decisions are primarily determined by the spatial patterns of the elements in an input. For image classification, the patterns are of pixels; for sparse matrix format selection, they are of non-zero elements. DNNs could be naturally applied if we regard a sparse matrix as an image and the format selection or solver selection as classification problems.
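
A hedged sketch of the matrix-as-image idea (not the authors' code): downsample the non-zero pattern of a sparse matrix into a fixed-size density image that an ordinary image classifier could then consume to predict a storage format.

```python
# Sketch: render a sparse matrix's sparsity pattern as a 32x32 density image.
import numpy as np
import scipy.sparse as sp

def sparsity_image(matrix, size=32):
    """Downsample the non-zero pattern of a sparse matrix to a size x size image."""
    coo = matrix.tocoo()
    img = np.zeros((size, size))
    rows = coo.row * size // matrix.shape[0]
    cols = coo.col * size // matrix.shape[1]
    np.add.at(img, (rows, cols), 1.0)    # count non-zeros per image cell
    return img / img.max()               # normalised density image for a CNN

A = sp.random(1000, 1000, density=0.01, format="csr", random_state=0)
features = sparsity_image(A)
```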

  10. Hubble's deepest view ever of the Universe unveils earliest galaxies

    NASA Astrophysics Data System (ADS)

    2004-03-01

Hubble sees galaxies galore (Credits: NASA, ESA, and S. Beckwith (STScI) and the HUDF Team). Galaxies, galaxies everywhere - as far as the NASA/ESA Hubble Space Telescope can see. This view of nearly 10,000 galaxies is the deepest visible-light image of the cosmos. Called the Hubble Ultra Deep Field, this galaxy-studded view represents a 'deep' core sample of the universe, cutting across billions of light-years. Hubble also reveals galactic drama: a galactic brawl, a close encounter with a spiral galaxy, blue wisps of galaxies. These close-up snapshots of galaxies in the Hubble Ultra Deep Field reveal the drama of galactic life. In one panel, three galaxies just below centre are enmeshed in battle, their shapes distorted by the brutal encounter. The galaxies in these panels were plucked from a harvest of nearly 10,000 galaxies in the Ultra Deep Field, the deepest visible-light image of the cosmos. This historic new view is actually made up of two separate images taken by Hubble's Advanced Camera for Surveys (ACS) and the Near Infrared Camera and Multi-object Spectrometer (NICMOS). Both images reveal some galaxies that are too faint to be seen by ground-based telescopes, or even in Hubble's previous faraway looks, called the Hubble Deep Fields (HDFs), taken in 1995 and 1998. The HUDF field contains an estimated 10,000 galaxies in a patch of sky just one-tenth the diameter of the full Moon. Besides the rich harvest of classic spiral and elliptical galaxies, there is a zoo of oddball galaxies littering the field. Some look like toothpicks; others like links on a bracelet. A few appear to be interacting. Their strange shapes are a far cry from the majestic spiral and elliptical galaxies we see today. These oddball galaxies chronicle a period when the Universe was more chaotic, when order and structure were just beginning to emerge. The combination of ACS and NICMOS images will be used to search for galaxies that existed between 400 and 800 million years after the Big Bang (in cosmological terms this corresponds to a 'redshift' range of 7 to 12). Astronomers around the world will use these data to understand whether in these very early stages the Universe appeared the same as it did when the cosmos was between 1000 and 2000 million years old. Hubble's ACS allows astronomers to see galaxies two to four times fainter than Hubble could view previously, but the NICMOS sees even farther than the ACS: it reveals the farthest galaxies ever seen because the expanding Universe has stretched their light into the near-infrared portion of the spectrum.
The ACS uncovered galaxies that existed 800 million years after the Big Bang (at a redshift of 7), but the NICMOS might have spotted galaxies that lived just 400 million years after the birth of the cosmos (at a redshift of 12). Just like the previous HDFs, the new data are expected to galvanise the astronomical community and lead to dozens of research papers that will offer new insights into the birth and evolution of galaxies. This will hold the record as the deepest-ever view of the Universe until ESA together with NASA launches the James Webb Space Telescope in 2011. Notes for editors: More information, images, animations and interactive zoomable images are available from http://www.spacetelescope.org/news/html/heic0406.html. The Hubble Space Telescope is a project of international cooperation between ESA and NASA. Image credit: NASA, ESA, S. Beckwith (STScI) and the HUDF Team

  11. JPL Closeup

    NASA Technical Reports Server (NTRS)

    1983-01-01

Voyager, Infrared Astronomical Satellite, Galileo, Viking, Solar Mesosphere Explorer, Wide-field/Planetary Camera, Venus Mapper, International Solar Polar Mission - Solar Interplanetary Satellite, Extreme Ultraviolet Explorer, Starprobe, International Halley Watch, Mariner Mark II, Samex, Shuttle Imaging Radar-A, Deep Space Network, Biomedical Technology, Ocean Studies and Robotics are summarized.

  12. Lyman Break Galaxies in the Hubble Ultra Deep Field through Deep U-Band Imaging

    NASA Astrophysics Data System (ADS)

    Rafelski, Marc; Wolfe, A. M.; Cooke, J.; Chen, H. W.; Armandroff, T. E.; Wirth, G. D.

    2009-12-01

We introduce an extremely deep U-band image taken of the Hubble Ultra Deep Field (HUDF), with a one-sigma depth of 30.7 mag arcsec^-2 and a detection limiting magnitude of 28 mag arcsec^-2. The observations were carried out on the Keck I telescope using the LRIS-B detector. The U-band image substantially improves the accuracy of photometric redshift measurements of faint galaxies in the HUDF at z=[2.5,3.5]. The U-band for these galaxies is attenuated by Lyman limit absorption, allowing for more reliable selection of candidate Lyman Break Galaxies (LBGs) than from photometric redshifts without the U-band. We present a reliable sample of 300 LBGs at z=[2.5,3.5] in the HUDF. Accurate redshifts of faint galaxies at z=[2.5,3.5] are needed to obtain empirical constraints on the star formation efficiency of neutral gas at high redshift. Wolfe & Chen (2006) showed that the star formation rate (SFR) density in damped Ly-alpha absorption systems (DLAs) at z=[2.5,3.5] is significantly lower than predicted by the Kennicutt-Schmidt law for nearby galaxies. One caveat to this result that we wish to test is whether LBGs are embedded in DLAs. If in-situ star formation is occurring in DLAs, we would see it as extended low surface brightness emission around LBGs. We shall use the more accurate photometric redshifts to create a sample of LBGs around which we will look for extended emission in the more sensitive and higher resolution HUDF images. The absence of extended emission would put limits on the SFR density of DLAs associated with LBGs at high redshift. On the other hand, detection of faint emission on scales large compared to the bright LBG cores would indicate the presence of in-situ star formation in those DLAs. Such gas would presumably fuel the higher star formation rates present in the LBG cores.
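
A minimal, illustrative sketch of U-band dropout selection of the kind this record relies on; the colour cuts below are hypothetical placeholders, not the criteria actually used for the HUDF sample.

```python
# Sketch: select z ~ 3 Lyman-break candidates as "U-band dropouts" (red U - V
# from the Lyman-limit break) that remain blue in V - R (young UV continuum).
import numpy as np

def select_lbg(u_mag, v_mag, r_mag):
    u_v = u_mag - v_mag          # strong Lyman-limit break => red U - V
    v_r = v_mag - r_mag          # intrinsically blue rest-UV slope
    return (u_v > 1.5) & (v_r < 0.5) & (u_v > 1.5 * v_r + 1.0)

u, v, r = np.array([28.5]), np.array([25.9]), np.array([25.7])
print(select_lbg(u, v, r))  # -> [ True]
```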

  13. Comparative Studies of Prediction Strategies for Solar X-ray Time Series

    NASA Astrophysics Data System (ADS)

    Muranushi, T.; Hattori, T.; Jin, Q.; Hishinuma, T.; Tominaga, M.; Nakagawa, K.; Fujiwara, Y.; Nakamura, T.; Sakaue, T.; Takahashi, T.; Seki, D.; Namekata, K.; Tei, A.; Ban, M.; Kawamura, A. D.; Hada-Muranushi, Y.; Asai, A.; Nemoto, S.; Shibata, K.

    2016-12-01

Crucial virtues for operational space weather forecasting are real-time forecast ability, forecast precision and customizability to user needs. The recent development of deep learning makes it very attractive to space weather, because (1) it learns gradually from incoming data, (2) it exhibits superior accuracy over conventional algorithms in many fields, and (3) it makes customization of the forecast easier because it accepts raw images. However, the best deep-learning applications are only attainable by careful human designers who understand both the mechanism of deep learning and the application field. Therefore, we need to foster young researchers to enter the field of machine-learning aided forecasting, so we have held a seminar every Monday with undergraduate and graduate students from May to August 2016. We will review the current status of space weather science and the automated real-time space weather forecast engine UFCORIN. Then, we introduce the deep-learning space weather forecast environments we have set up using Python and Chainer on students' laptop computers. We started from a simple image-classification neural network, then implemented a space-weather neural network that predicts the future X-ray flux of the Sun based on the past X-ray lightcurve and magnetic field line-of-sight images. In order to perform each forecast faster, we have focused on simple lightcurve-to-lightcurve forecasting, and performed comparative surveys by changing the following parameters: the size and topology of the neural network; the batch size; and neural network hyperparameters such as learning rates, to optimize the prediction accuracy and the time for prediction. We have found how to design a compact, fast but accurate neural network to perform the forecast. Our forecasters can perform a prediction experiment for a four-year timespan in a few minutes, and achieve log-scale errors of the order of 1. Our study is ongoing, and in our talk we will review our progress till December.
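
As a reference point for the lightcurve-to-lightcurve forecasting discussed here, a linear autoregressive fit is a common baseline; this sketch is illustrative and is not the UFCORIN implementation.

```python
# Sketch: fit a linear autoregressive model on log X-ray flux by least squares,
# then predict the next value from the last `window` samples.
import numpy as np

def fit_ar(log_flux, window=24):
    X = np.array([log_flux[i:i+window] for i in range(len(log_flux) - window)])
    y = log_flux[window:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def forecast(log_flux, coeffs, window=24):
    return float(np.dot(log_flux[-window:], coeffs))

rng = np.random.default_rng(1)
history = np.cumsum(rng.normal(0, 0.1, 500)) - 6.0  # synthetic log10 flux
w = fit_ar(history)
print(forecast(history, w))
```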

  14. Metal-dielectric composites for beam splitting and far-field deep sub-wavelength resolution for visible wavelengths.

    PubMed

    Yan, Changchun; Zhang, Dao Hua; Zhang, Yuan; Li, Dongdong; Fiddy, M A

    2010-07-05

    We report beam splitting in a metamaterial composed of a silver-alumina composite covered by a layer of chromium containing one slit. By simulating distributions of energy flow in the metamaterial for H-polarized waves, we find that the beam splitting occurs when the width of the slit is shorter than the wavelength, which is conducive to making a beam splitter in sub-wavelength photonic devices. We also find that the metamaterial possesses deep sub-wavelength resolution capabilities in the far field when there are two slits and the central silver layer is at least 36 nm in thickness, which has potential applications in superresolution imaging.

  15. Searching for prostate cancer by fully automated magnetic resonance imaging classification: deep learning versus non-deep learning.

    PubMed

    Wang, Xinggang; Yang, Wei; Weinreb, Jeffrey; Han, Juan; Li, Qiubai; Kong, Xiangchuang; Yan, Yongluan; Ke, Zan; Luo, Bo; Liu, Tao; Wang, Liang

    2017-11-13

Prostate cancer (PCa) has been a major cause of death since ancient times, as documented in Egyptian Ptolemaic mummy imaging. PCa detection is critical to personalized medicine, and its appearance varies considerably on MRI scans. 172 patients with 2,602 morphologic images (axial 2D T2-weighted imaging) of the prostate were obtained. A deep learning method using a deep convolutional neural network (DCNN) and a non-deep learning method using SIFT image features with a bag-of-words (BoW) model, a representative method for image recognition and analysis, were used to distinguish pathologically confirmed PCa patients from patients with prostate benign conditions (BCs) such as prostatitis or benign prostatic hyperplasia (BPH). In fully automated detection of PCa patients, deep learning had a statistically higher area under the receiver operating characteristic curve (AUC) than non-deep learning (P = 0.0007 < 0.001). The AUCs were 0.84 (95% CI 0.78-0.89) for the deep learning method and 0.70 (95% CI 0.63-0.77) for the non-deep learning method, respectively. Our results suggest that deep learning with a DCNN is superior to non-deep learning with SIFT image features and a BoW model for fully automated differentiation of PCa patients from prostate BC patients. Our deep learning method is extensible to imaging modalities such as MR imaging, CT and PET of other organs.
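
The comparison rests on the area under the ROC curve; a minimal sketch with illustrative scores (not the study's data) shows the computation.

```python
# Sketch: compare two classifiers by AUC on held-out patients.
import numpy as np
from sklearn.metrics import roc_auc_score

labels = np.array([1, 1, 0, 0, 1, 0, 1, 0])           # 1 = PCa, 0 = benign
deep_scores = np.array([.9, .8, .3, .2, .7, .4, .6, .1])
bow_scores  = np.array([.7, .4, .5, .3, .6, .6, .5, .2])
print("deep AUC:", roc_auc_score(labels, deep_scores))
print("BoW  AUC:", roc_auc_score(labels, bow_scores))
```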

  16. Active Galactic Nuclei, Quasars, BL Lac Objects and X-Ray Background

    NASA Technical Reports Server (NTRS)

    Mushotzky, Richard (Technical Monitor); Elvis, Martin

    2005-01-01

The XMM COSMOS survey is producing the large surface density of X-ray sources anticipated. The first batch of approx. 200 sources is being studied in relation to the large-scale structure derived from deep optical/near-IR imaging from Subaru and CFHT. The photometric redshifts from the optical/IR imaging program allow a first look at structure versus redshift, identifying high-z clusters. A consortium of SAO, U. Arizona and the Carnegie Institution of Washington (Pasadena) has started a large program using the 6.5-meter Magellan telescopes in Chile with the prime objective of identifying the XMM X-ray sources in the COSMOS field. The first series of observing runs using the new IMACS multi-slit spectrograph on Magellan will take place in January and February of 2005. Some 300 spectra per field will be taken, including 70%-80% of the XMM sources in each field. The first four fields cover the center of the COSMOS field. A VLT consortium is set to obtain bulk redshifts of the field galaxies. The added accuracy of the spectroscopic redshifts over the photo-z's will allow much lower density structures to be seen: voids and filaments. The association of X-ray selected AGNs and quasars with these filaments is a major motivation for our studies. Comparison to the deep VLA radio data now becoming available is about to begin.

  17. On The Spatial Homogeneity Of The Wave Spectra In Deep Water Employing ERS-2 SAR Precision Image

    NASA Astrophysics Data System (ADS)

    Violante-Carvalho, Nelson; Robinson, Ian; Gommenginger, Christine; Carvalho, Luiz Mariano; Goldstein, Brunno

    2010-04-01

Using wave spectra extracted from image mode ERS-2 SAR, the spatial homogeneity of the wave field in deep water is investigated against directional buoy measurements. From the 100 x 100 km image, several small images of 6.4 x 6.4 km are selected and the wave spectra are computed. The locally disturbed wind velocity pattern, caused by the sheltering effect of large mountains near the coast, translates into the selected SAR image as regions of higher and lower wind speed. Assuming that a swell component is uniform over the whole image, SAR wave spectra retrieved from the sheltered and non-sheltered areas are intercompared. Any difference between them could be related to a possible interaction between wind sea and swell, since the wind sea part of the spectrum would be slightly different due to the different wind speeds. The results show that there is no significant variation, and apparently there is no clear difference in the swell spectra despite the different wind sea components.

  18. Numerical correction of distorted images in full-field optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Min, Gihyeon; Kim, Ju Wan; Choi, Woo June; Lee, Byeong Ha

    2012-03-01

We propose a numerical method that can correct the distorted en face images obtained with a full-field optical coherence tomography (FF-OCT) system. It is shown that the FF-OCT image of the deep region of a biological sample is easily blurred or degraded because the sample generally has a refractive index (RI) much higher than that of its surrounding medium. Analysis shows that the focal plane of the imaging system becomes separated from the imaging plane of the coherence-gated system due to the RI mismatch. This image-blurring phenomenon is experimentally confirmed by imaging the chrome pattern of a resolution test target through its glass substrate in water. Moreover, we demonstrate that the blurred image can be appreciably corrected by using the numerical correction process based on the Fresnel-Kirchhoff diffraction theory. The proposed correction method is applied to enhance the image of a human hair, permitting distinct identification of the melanin granules inside the cortex layer of the hair shaft.
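
A hedged sketch of one standard way to implement such a defocus correction: angular-spectrum propagation of the complex en face field, which is closely related to the Fresnel-Kirchhoff approach the authors use. The wavelength, refractive index and defocus distance below are illustrative values, and the complex field is assumed known.

```python
# Sketch: numerically refocus a complex 2D field by angular-spectrum propagation.
import numpy as np

def refocus(field, dz, wavelength, pixel_size):
    """Propagate a complex 2D field by distance dz (metres, in-medium units)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    kz2 = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2j * np.pi * np.sqrt(np.maximum(kz2, 0.0))  # evanescent part suppressed
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(kz * dz))

# Example: undo ~20 um of defocus for 800-nm light in a water-like medium (n = 1.33).
field = np.ones((256, 256), dtype=complex)
corrected = refocus(field, dz=-20e-6, wavelength=0.8e-6 / 1.33, pixel_size=0.5e-6)
```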

  19. Network effects of deep brain stimulation

    PubMed Central

    Alhourani, Ahmad; McDowell, Michael M.; Randazzo, Michael J.; Wozny, Thomas A.; Kondylis, Efstathios D.; Lipski, Witold J.; Beck, Sarah; Karp, Jordan F.; Ghuman, Avniel S.

    2015-01-01

    The ability to differentially alter specific brain functions via deep brain stimulation (DBS) represents a monumental advance in clinical neuroscience, as well as within medicine as a whole. Despite the efficacy of DBS in the treatment of movement disorders, for which it is often the gold-standard therapy when medical management becomes inadequate, the mechanisms through which DBS in various brain targets produces therapeutic effects is still not well understood. This limited knowledge is a barrier to improving efficacy and reducing side effects in clinical brain stimulation. A field of study related to assessing the network effects of DBS is gradually emerging that promises to reveal aspects of the underlying pathophysiology of various brain disorders and their response to DBS that will be critical to advancing the field. This review summarizes the nascent literature related to network effects of DBS measured by cerebral blood flow and metabolic imaging, functional imaging, and electrophysiology (scalp and intracranial electroencephalography and magnetoencephalography) in order to establish a framework for future studies. PMID:26269552

  20. The Anisotropy of the Microwave Background to l = 3500: Deep Field Observations with the Cosmic Background Imager

    NASA Technical Reports Server (NTRS)

    Mason, B. S.; Pearson, T. J.; Readhead, A. C. S.; Shepherd, M. C.; Sievers, J.; Udomprasert, P. S.; Cartwright, J. K.; Farmer, A. J.; Padin, S.; Myers, S. T.; hide

    2002-01-01

We report measurements of anisotropy in the cosmic microwave background radiation over the multipole range l ≈ 200-3500 with the Cosmic Background Imager based on deep observations of three fields. These results confirm the drop in power with increasing l first reported in earlier measurements with this instrument, and extend the observations of this decline in power out to l ≈ 2000. The decline in power is consistent with the predicted damping of primary anisotropies. At larger multipoles, l = 2000-3500, the power is 3.1σ greater than standard models for intrinsic microwave background anisotropy in this multipole range, and 3.5σ greater than zero. This excess power is not consistent with expected levels of residual radio source contamination but, for σ8 ≳ 1, is consistent with predicted levels due to a secondary Sunyaev-Zeldovich anisotropy. Further observations are necessary to confirm the level of this excess and, if confirmed, determine its origin.
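
For orientation, multipole and angular scale are related roughly by:

```latex
% Rule of thumb for the multipole range quoted above:
\theta \approx \frac{180^\circ}{\ell}
\quad\Rightarrow\quad
\ell = 3500 \;\longrightarrow\; \theta \approx 0.05^\circ \approx 3\ \mathrm{arcmin}.
```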

  1. Thermalnet: a Deep Convolutional Network for Synthetic Thermal Image Generation

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Gorbatsevich, V. S.; Mizginov, V. A.

    2017-05-01

Deep convolutional neural networks have dramatically changed the landscape of modern computer vision. Nowadays, methods based on deep neural networks show the best performance among image recognition and object detection algorithms. While the polishing of network architectures has received a lot of scholarly attention, from the practical point of view the preparation of a large image dataset for successful training of a neural network has become one of the major challenges. This challenge is particularly profound for image recognition in wavelengths lying outside the visible spectrum. For example, no infrared or radar image datasets large enough for successful training of a deep neural network are available to date in the public domain. Recent advances in deep neural networks prove that they are also capable of arbitrary image transformations such as super-resolution image generation, grayscale image colorisation and imitation of the style of a given artist. Thus a natural question arises: how can deep neural networks be used to augment existing large image datasets? This paper is focused on the development of the Thermalnet deep convolutional neural network for augmentation of existing large visible image datasets with synthetic thermal images. The Thermalnet network architecture is inspired by colorisation deep neural networks.

  2. Discovery of bright z ≃ 7 galaxies in the UltraVISTA survey

    NASA Astrophysics Data System (ADS)

    Bowler, R. A. A.; Dunlop, J. S.; McLure, R. J.; McCracken, H. J.; Milvang-Jensen, B.; Furusawa, H.; Fynbo, J. P. U.; Le Fèvre, O.; Holt, J.; Ideue, Y.; Ihara, Y.; Rogers, A. B.; Taniguchi, Y.

    2012-11-01

    We have exploited the new, deep, near-infrared UltraVISTA imaging of the Cosmological Evolution Survey (COSMOS) field, in tandem with deep optical and mid-infrared imaging, to conduct a new search for luminous galaxies at redshifts z ≃ 7. The year-one UltraVISTA data provide contiguous Y, J, H, Ks imaging over 1.5 deg2, reaching a 5σ detection limit of Y + J ≃ 25 (AB mag, 2-arcsec-diameter aperture). The central ≃1 deg2 of this imaging coincides with the final deep optical (u*, g, r, i) data provided by the Canada-France-Hawaii Telescope (CFHT) Legacy Survey and new deep Subaru/Suprime-Cam z'-band imaging obtained specifically to enable full exploitation of UltraVISTA. It also lies within the Hubble Space Telescope (HST) I814 band and Spitzer/Infrared Array Camera imaging obtained as part of the COSMOS survey. We have utilized this unique multiwavelength dataset to select galaxy candidates at redshifts z > 6.5 by searching first for Y + J-detected objects which are undetected in the CFHT and HST optical data. This sample was then refined using a photometric redshift fitting code, enabling the rejection of lower redshift galaxy contaminants and cool galactic M, L, T dwarf stars. The final result of this process is a small sample of (at most) 10 credible galaxy candidates at z > 6.5 (from over 200 000 galaxies detected in the year-one UltraVISTA data) which we present in this paper. The first four of these appear to be robust galaxies at z > 6.5, and fitting to their stacked spectral energy distribution yields zphot = 6.98 ± 0.05 with a stellar mass M* ≃ 5 × 109 M⊙ and rest-frame ultraviolet (UV) spectral slope β ≃ -2.0 ± 0.2 (where fλ ∝ λβ). The next three are also good candidates for z > 6.5 galaxies, but the possibility that they are dwarf stars cannot be completely excluded. Our final subset of three additional candidates is afflicted not only by potential dwarf star contamination, but also contains objects likely to lie at redshifts just below z = 6.5. We show that the three even-brighter z ≳ 7 galaxy candidates reported in the COSMOS field by Capak et al. are in fact all lower redshift galaxies at z ≃ 1.5-3.5. Consequently the new z ≃ 7 galaxies reported here are the first credible z ≃ 7 Lyman-break galaxies discovered in the COSMOS field and, as the most UV luminous discovered to date at these redshifts, are prime targets for deep follow-up spectroscopy. We explore their physical properties, and briefly consider the implications of their inferred number density for the form of the galaxy luminosity function at z ≃ 7.

  3. A Study of Deep CNN-Based Classification of Open and Closed Eyes Using a Visible Light Camera Sensor.

    PubMed

    Kim, Ki Wan; Hong, Hyung Gil; Nam, Gi Pyo; Park, Kang Ryoung

    2017-06-30

    The necessity for the classification of open and closed eyes is increasing in various fields, including analysis of eye fatigue in 3D TVs, analysis of the psychological states of test subjects, and eye status tracking-based driver drowsiness detection. Previous studies have used various methods to distinguish between open and closed eyes, such as classifiers based on the features obtained from image binarization, edge operators, or texture analysis. However, when it comes to eye images with different lighting conditions and resolutions, it can be difficult to find an optimal threshold for image binarization or optimal filters for edge and texture extraction. In order to address this issue, we propose a method to classify open and closed eye images with different conditions, acquired by a visible light camera, using a deep residual convolutional neural network. After conducting performance analysis on both self-collected and open databases, we have determined that the classification accuracy of the proposed method is superior to that of existing methods.

  4. Radio Identification of Millimeter-Bright Galaxies Detected in the AzTEC/ASTE Blank Field Survey

    NASA Astrophysics Data System (ADS)

    Hatsukade, Bunyo; Kohno, Kotaro; White, Glenn; Matsuura, Shuji; Hanami, Hitoshi; Shirahata, Mai; Nakanishi, Kouichiro; Hughes, David; Tamura, Yoichi; Iono, Daisuke; Wilson, Grant; Yun, Min

    2008-10-01

We propose deep 1.4-GHz imaging of millimeter-bright sources in the AzTEC/ASTE 1.1-mm blank field survey of AKARI Deep Field-South. The AzTEC/ASTE survey uncovered 37 sources, which are possibly at z > 2. We have obtained multi-wavelength data in this field, but the large beam size of AzTEC/ASTE (30 arcsec) prevents us from identifying counterparts. The aim of this proposal is to identify radio counterparts at higher angular resolution. This will enable us: (i) to identify optical/IR counterparts, enabling optical spectroscopy to determine precise redshifts and allowing us to derive SFRs, luminosity functions, clustering properties, masses of dark matter halos, etc.; (ii) to constrain the luminosity evolution of SMGs by comparing 1.4-GHz number counts (and luminosity functions) with luminosity evolution models; and (iii) to estimate photometric redshifts from the 1.4-GHz and 1.1-mm data using the radio-FIR flux correlation. In the case of non-detection, we can place deep lower limits (3σ limit of z > 3). This information leads to the study of the evolutionary history of SMGs, their relationship with other galaxy populations, and their contribution to the cosmic star formation history and the infrared background.

  5. Condenser for ring-field deep-ultraviolet and extreme-ultraviolet lithography

    DOEpatents

    Chapman, Henry N.; Nugent, Keith A.

    2001-01-01

    A condenser for use with a ring-field deep ultraviolet or extreme ultraviolet lithography system. A condenser includes a ripple-plate mirror which is illuminated by a collimated beam at grazing incidence. The ripple plate comprises a plate mirror into which is formed a series of channels along an axis of the mirror to produce a series of concave surfaces in an undulating pattern. Light incident along the channels of the mirror is reflected onto a series of cones. The distribution of slopes on the ripple plate leads to a distribution of angles of reflection of the incident beam. This distribution has the form of an arc, with the extremes of the arc given by the greatest slope in the ripple plate. An imaging mirror focuses this distribution to a ring-field arc at the mask plane.

  6. A Magnified View of the Epoch of Reionization with the Hubble Frontier Fields

    NASA Astrophysics Data System (ADS)

    Livermore, Rachael C.; Finkelstein, Steven L.; Lotz, Jennifer M.

    2017-06-01

    The Hubble Frontier Fields program has obtained deep optical and near-infrared Hubble Space Telescope imaging of six galaxy clusters and associated parallel fields. The depth of the imaging (m_AB ~ 29) means we can identify faint galaxies at z > 6, and those in the cluster fields also benefit from magnification due to strong gravitational lensing. Using wavelet decomposition to subtract the foreground cluster galaxies, we can reach intrinsic absolute magnitudes of M_UV ~ -12.5 at z ~ 6. Here, we present the UV luminosity functions at 6 < z < 10 from the Hubble Frontier Fields data.

  7. The UV Luminosity Function at 6 < z < 10 from the Hubble Frontier Fields

    NASA Astrophysics Data System (ADS)

    Livermore, Rachael C.; Finkelstein, Steven L.; Lotz, Jennifer M.

    2017-01-01

    The Hubble Frontier Fields program has obtained deep optical and near-infrared Hubble Space Telescope imaging of six galaxy clusters and associated parallel fields. The depth of the imaging (m_AB ~ 29) means we can identify faint galaxies at z > 6, and those in the cluster fields also benefit from magnification due to strong gravitational lensing that allows us to reach intrinsic absolute magnitudes of M_UV ~ -12.5 at z ~ 6. Here, we present the UV luminosity functions at 6 < z < 10 from the complete Hubble Frontier Fields data, revealing a steep faint-end slope that extends to the limits of the data. The lack of any apparent turnover in the luminosity functions means that faint galaxies in the early Universe may have provided sufficient ionizing radiation to sustain reionization.
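
    The "faint-end slope" and "turnover" language above refers to the standard Schechter parameterization of the luminosity function. As a reference for that functional form, here is a small sketch that evaluates the Schechter function in absolute magnitudes; the parameter values are illustrative, not the values fit by the authors.

```python
import numpy as np

def schechter_mag(M, phi_star, M_star, alpha):
    """Schechter luminosity function in absolute-magnitude form:
    phi(M) = 0.4 ln(10) phi* x^(alpha+1) exp(-x), with x = 10^(0.4 (M* - M))."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10) * phi_star * x ** (alpha + 1) * np.exp(-x)

# Illustrative high-z parameters (not the paper's fit values).
M = np.linspace(-22, -12.5, 50)
phi = schechter_mag(M, phi_star=1e-3, M_star=-20.9, alpha=-2.0)
# A steep alpha near -2 keeps phi rising toward faint M with no turnover.
print(phi[:3], phi[-3:])
```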

  8. Detection of z~2 Type IIn Supernovae

    NASA Astrophysics Data System (ADS)

    Cooke, Jeff; Sullivan, Mark; Barton, Elizabeth J.

    2009-05-01

    Type IIn supernovae (SNe IIn) result from the deaths of massive stars. The broad magnitude distribution of SNe IIn makes these some of the most luminous SN events ever recorded. In addition, they are the most luminous SN type in the rest-frame UV, which makes them ideal targets for wide-field optical high-redshift searches. We briefly describe our method to detect z~2 SNe IIn events, which involves monitoring color-selected galaxies in deep stacked images, and our program that applies this method to the CFHTLS survey. Initial results have detected four compelling photometric candidates from their subtracted images and light curves. SNe IIn spectra exhibit extremely bright narrow emission lines as a result of the interaction between the SN ejecta and the circumstellar material released in pre-explosion outbursts. These emission lines remain bright for years after outburst and are above the thresholds of current 8-m-class telescope sensitivities out to z~3. The deep spectroscopy required to confirm z~2 host galaxies has the potential to detect the SN emission lines and measure their energies. Finally, planned deep, wide-field surveys have the capability to detect and confirm SNe IIn to z~6. The emission lines of such high-redshift events are expected to be above the sensitivity of future 30-m-class telescopes and the James Webb Space Telescope.

  9. Studying Galaxy Formation with the Hubble, Spitzer and James Webb Space Telescopes

    NASA Technical Reports Server (NTRS)

    Gardner, Jonathan P.

    2007-01-01

    The deepest optical to infrared observations of the universe include the Hubble Deep Fields, the Great Observatories Origins Deep Survey and the recent Hubble Ultra-Deep Field. Galaxies are seen in these surveys at redshifts 2-6, less than 1 Gyr after the Big Bang, at the end of a period when light from the galaxies has reionized Hydrogen in the inter-galactic medium. These observations, combined with theoretical understanding, indicate that the first stars and galaxies formed at z>10, beyond the reach of the Hubble and Spitzer Space Telescopes. To observe the first galaxies, NASA is planning the James Webb Space Telescope (JWST), a large (6.5m), cold (<50K), infrared-optimized observatory to be launched early in the next decade into orbit around the second Earth-Sun Lagrange point. JWST will have four instruments: The Near-Infrared Camera, the Near-Infrared multi-object Spectrograph, and the Tunable Filter Imager will cover the wavelength range 0.6 to 5 microns, while the Mid-Infrared Instrument will do both imaging and spectroscopy from 5 to 28.5 microns. In addition to JWST's ability to study the formation and evolution of galaxies, I will also briefly review its expected contributions to studies of the formation of stars and planetary systems.

  10. Studying Galaxy Formation with the Hubble, Spitzer and James Webb Space Telescopes

    NASA Technical Reports Server (NTRS)

    Gardner, Jonathan F.; Barbier, L. M.; Barthelmy, S. D.; Cummings, J. R.; Fenimore, E. E.; Gehrels, N.; Hullinger, D. D.; Markwardt, C. B.; Palmer, D. M.; Parsons, A. M.; hide

    2006-01-01

    The deepest optical to infrared observations of the universe include the Hubble Deep Fields, the Great Observatories Origins Deep Survey and the recent Hubble Ultra-Deep Field. Galaxies are seen in these surveys at redshifts 2-6, less than 1 Gyr after the Big Bang, at the end of a period when light from the galaxies has reionized Hydrogen in the inter-galactic medium. These observations, combined with theoretical understanding, indicate that the first stars and galaxies formed at z>10, beyond the reach of the Hubble and Spitzer Space Telescopes. To observe the first galaxies, NASA is planning the James Webb Space Telescope (JWST), a large (6.5m), cold (<50K), infrared-optimized observatory to be launched early in the next decade into orbit around the second Earth-Sun Lagrange point. JWST will have four instruments: The Near-Infrared Camera, the Near-Infrared multi-object Spectrograph, and the Tunable Filter Imager will cover the wavelength range 0.6 to 5 microns, while the Mid-Infrared Instrument will do both imaging and spectroscopy from 5 to 27 microns. In addition to JWST's ability to study the formation and evolution of galaxies, I will also briefly review its expected contributions to studies of the formation of stars and planetary systems.

  11. Studying Galaxy Formation with the Hubble, Spitzer and James Webb Space Telescopes

    NASA Technical Reports Server (NTRS)

    Gardner, Jonathan P.

    2007-01-01

    The deepest optical to infrared observations of the universe include the Hubble Deep Fields, the Great Observatories Origins Deep Survey and the recent Hubble Ultra-Deep Field. Galaxies are seen in these surveys at redshifts z>6, less than 1 Gyr after the Big Bang, at the end of a period when light from the galaxies has reionized Hydrogen in the inter-galactic medium. These observations, combined with theoretical understanding, indicate that the first stars and galaxies formed at z>10, beyond the reach of the Hubble and Spitzer Space Telescopes. To observe the first galaxies, NASA is planning the James Webb Space Telescope (JWST), a large (6.5m), cold (<50K), infrared-optimized observatory to be launched early in the next decade into orbit around the second Earth-Sun Lagrange point. JWST will have four instruments: The Near-Infrared Camera, the Near-Infrared multi-object Spectrograph, and the Tunable Filter Imager will cover the wavelength range 0.6 to 5 microns, while the Mid-Infrared Instrument will do both imaging and spectroscopy from 5 to 28.5 microns. In addition to JWST's ability to study the formation and evolution of galaxies, I will also briefly review its expected contributions to studies of the formation of stars and planetary systems.

  12. Vibration amplitude sonoelastography lesion imaging using low-frequency audible vibration

    NASA Astrophysics Data System (ADS)

    Taylor, Lawrence; Parker, Kevin

    2003-04-01

    Sonoelastography, or vibration amplitude imaging, is an ultrasound imaging technique in which low-amplitude, low-frequency shear waves, less than 0.1-mm displacement and 1-kHz frequency, are propagated deep into tissue, while real-time Doppler techniques are used to image the resulting vibration pattern. Finite-element studies and experiments on tissue-mimicking phantoms verify that a discrete hard inhomogeneity present within a larger region of soft tissue will cause a decrease in the vibration field at its location. This forms the basis for tumor detection using sonoelastography. Real-time relative imaging of the vibration field is possible because a vibrating particle will phase modulate an ultrasound signal. The particle's amplitude is directly proportional to the spectral spread of the reflected Doppler echo. Real-time estimation of the variance of the Doppler power spectrum at each pixel allows the vibration field to be imaged. Results are shown for phantom lesions, thermal lesions, and 3-D in vitro and 2-D in vivo prostate cancer. MRI and whole-mount histology are used to validate the system accuracy.
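
    The estimator described above, vibration amplitude imaged through the per-pixel spread of the Doppler power spectrum, can be illustrated on synthetic slow-time data. This is a toy sketch with invented signal parameters, not the authors' real-time implementation.

```python
import numpy as np

def doppler_spectral_variance(iq):
    """Per-pixel second central moment (spectral spread) of the Doppler
    power spectrum along the slow-time axis. iq: complex array (ny, nx, n)."""
    spec = np.abs(np.fft.fft(iq, axis=-1)) ** 2
    f = np.fft.fftfreq(iq.shape[-1])
    p = spec / spec.sum(axis=-1, keepdims=True)
    mean_f = (p * f).sum(axis=-1)
    return (p * (f - mean_f[..., None]) ** 2).sum(axis=-1)

# Synthetic ensemble: a vibrating scatterer phase-modulates the echo;
# a larger vibration amplitude produces a wider Doppler spectrum.
rng = np.random.default_rng(1)
n, ny, nx = 64, 32, 32
t = np.arange(n)
amp = np.full((ny, nx), 0.05)
amp[12:20, 12:20] = 0.5          # a strongly vibrating soft region
phase = amp[..., None] * np.sin(2 * np.pi * 0.1 * t)
iq = np.exp(1j * 2 * np.pi * phase) + 0.01 * rng.standard_normal((ny, nx, n))
var_img = doppler_spectral_variance(iq)
print(var_img[16, 16] > var_img[0, 0])  # True: more spread where vibrating
```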

  13. Nonlinear Programming shallow tomography improves deep structure imaging

    NASA Astrophysics Data System (ADS)

    Li, J.; Morozov, I.

    2004-05-01

    In areas with strong variations in topography or near-surface lithology, conventional seismic data processing methods do not produce clear images, neither shallow nor deep. Conventional reflection data processing methods do not resolve stacking velocities at very shallow depth; however, refraction tomography can be used to obtain the near-surface velocities. We use Nonlinear Programming (NP), with known velocities and depths at points from shallow boreholes and outcrops, together with derivatives of slowness, as constraint conditions to obtain accurate shallow velocities. We apply this method to a 2D reflection survey shot across Flame Mountain, a typical mountain with a high gas reserve volume in Western China, by PetroChina and BGP in the 1990s. The area has highly rugged topography with strong variations of lithology near the surface. Over its hillside, the quality of the reflection data is very good, but on the mountain ridge, reflection quality is poorer. Because of strong noise, only the first breaks are clear in the records, with near-offset velocities varying by more than a factor of 3. Because this region contains a steep cliff and an overthrust fold, it is very difficult to find a standard refraction horizon; therefore, conventional GLI refraction field statics and residual statics do not produce a good image. Our processing approach is to: 1) use the Herglotz-Wiechert method to derive a starting velocity model, which is better than a horizontally layered velocity model; 2) construct smoothness constraints on the velocity field using shallow borehole and geological data; 3) perform tomographic velocity inversion with the NP algorithm; and 4) use the resulting accurate shallow velocities to derive statics that correct the seismic data for the complex near-surface velocity variations. The result indicates that shallow refraction tomography can greatly improve deep seismic images in complex surface conditions.
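
    The constrained inversion in step 3) can be illustrated with a toy 1-D slowness inversion: a least-squares traveltime misfit plus a smoothness penalty, minimized subject to an equality constraint that fixes the slowness at a borehole cell. Everything here, the straight-ray geometry, matrix sizes, and weights, is hypothetical, not the authors' formulation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
ncell, nray = 12, 30
L = rng.uniform(0, 1, (nray, ncell))          # ray path lengths per cell (km)
s_true = np.linspace(0.5, 0.2, ncell)         # true slowness profile (s/km)
t_obs = L @ s_true + 0.005 * rng.standard_normal(nray)  # noisy first breaks

borehole_cell, borehole_s = 0, s_true[0]      # known slowness at a borehole

def objective(s, lam=1.0):
    misfit = np.sum((L @ s - t_obs) ** 2)     # traveltime data misfit
    rough = np.sum(np.diff(s, 2) ** 2)        # smoothness (2nd-difference) penalty
    return misfit + lam * rough

cons = [{"type": "eq", "fun": lambda s: s[borehole_cell] - borehole_s}]
res = minimize(objective, x0=np.full(ncell, 0.35), method="SLSQP",
               bounds=[(0.05, 1.0)] * ncell, constraints=cons)
print(np.round(res.x - s_true, 3))            # per-cell recovery error
```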

  14. A simple anaesthetic and monitoring system for magnetic resonance imaging.

    PubMed

    Rejger, V S; Cohn, B F; Vielvoye, G J; de Raadt, F B

    1989-09-01

    Clinical magnetic resonance imaging (MRI) is a digital tomographic technique which utilizes radio waves emitted by hydrogen protons in a powerful magnetic field to form an image of soft-tissue structures and abnormalities within the body. Unfortunately, because of the relatively long scanning time required and the narrow deep confines of the MRI tunnel and Faraday cage, some patients cannot be examined without the use of heavy sedation or general anaesthesia. Due to poor access to the patient and the strong magnetic field, several problems arise in monitoring and administering anaesthesia during this procedure. In this presentation these problems and their solutions, as resolved by our institution, are discussed. Of particular interest is the anaesthesia circuit specifically adapted for use during MRI scanning.

  15. Three Dimensional Imaging of Cold Atoms in a Magneto Optical Trap with a Light Field Microscope

    DTIC Science & Technology

    2017-09-14

    A three-dimensional (3D) volume of the atoms is reconstructed using a modeled point spread function (PSF), taking into consideration the low magnification (1.25...) ...axis fluorescence image. Optical-axis separation between two atom clouds is measured to a 100 µm accuracy in a 3 mm deep volume, with a 16 µm in-focus...

  16. Machine Learning Approaches in Cardiovascular Imaging.

    PubMed

    Henglin, Mir; Stein, Gillian; Hushcha, Pavel V; Snoek, Jasper; Wiltschko, Alexander B; Cheng, Susan

    2017-10-01

    Cardiovascular imaging technologies continue to increase in their capacity to capture and store large quantities of data. Modern computational methods, developed in the field of machine learning, offer new approaches to leveraging the growing volume of imaging data available for analyses. Machine learning methods can now address data-related problems ranging from simple analytic queries of existing measurement data to the more complex challenges involved in analyzing raw images. To date, machine learning has been used in 2 broad and highly interconnected areas: automation of tasks that might otherwise be performed by a human and generation of clinically important new knowledge. Most cardiovascular imaging studies have focused on task-oriented problems, but more studies involving algorithms aimed at generating new clinical insights are emerging. Continued expansion in the size and dimensionality of cardiovascular imaging databases is driving strong interest in applying powerful deep learning methods, in particular, to analyze these data. Overall, the most effective approaches will require an investment in the resources needed to appropriately prepare such large data sets for analyses. Notwithstanding current technical and logistical challenges, machine learning and especially deep learning methods have much to offer and will substantially impact the future practice and science of cardiovascular imaging. © 2017 American Heart Association, Inc.

  17. Research on image retrieval using deep convolutional neural network combining L1 regularization and PRelu activation function

    NASA Astrophysics Data System (ADS)

    QingJie, Wei; WenBin, Wang

    2017-06-01

    In this paper, image retrieval using a deep convolutional neural network combined with L1 regularization and the PRelu activation function is studied, improving image retrieval accuracy. A deep convolutional neural network can not only simulate the process by which the human brain receives and transmits information, but also contains convolution operations, which are very well suited to processing images. Using a deep convolutional neural network for image retrieval is better than direct extraction of image visual features. However, the structure of a deep convolutional neural network is complex, and it is prone to over-fitting, which reduces the accuracy of image retrieval. In this paper, we combine L1 regularization and the PRelu activation function to construct a deep convolutional neural network that prevents over-fitting of the network and improves the accuracy of image retrieval.
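
    A minimal sketch of the two ingredients named above, PReLU activations in a small convolutional network and an L1 weight penalty added to the task loss, is shown below in PyTorch; the architecture, feature size, and regularization weight are illustrative, not the authors' network.

```python
import torch
import torch.nn as nn

class PReLUConvNet(nn.Module):
    """Small CNN with PReLU activations producing retrieval features (sketch)."""
    def __init__(self, n_features=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.PReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.PReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_features),
        )

    def forward(self, x):
        return self.net(x)

def l1_penalty(model):
    """L1 regularization over all parameters; added to the task loss it
    discourages over-fitting by driving small weights toward zero."""
    return sum(p.abs().sum() for p in model.parameters())

model = PReLUConvNet()
x, target = torch.randn(8, 3, 64, 64), torch.randn(8, 128)
task_loss = nn.functional.mse_loss(model(x), target)  # stand-in retrieval loss
loss = task_loss + 1e-5 * l1_penalty(model)           # lambda is illustrative
loss.backward()
```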

  18. Artificial intelligence in radiology.

    PubMed

    Hosny, Ahmed; Parmar, Chintan; Quackenbush, John; Schwartz, Lawrence H; Aerts, Hugo J W L

    2018-05-17

    Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced.

  19. ESO/ST-ECF Data Analysis Workshop, 5th, Garching, Germany, Apr. 26, 27, 1993, Proceedings

    NASA Astrophysics Data System (ADS)

    Grosbol, Preben; de Ruijsscher, Resy

    1993-01-01

    Various papers on astronomical data analysis are presented. Individual topics addressed include: surface photometry of early-type galaxies, wavelet transform and adaptive filtering, package for surface photometry of galaxies, calibration of large-field mosaics, surface photometry of galaxies with HST, wavefront-supported image deconvolution, seeing effects on elliptical galaxies, multiple algorithms deconvolution program, enhancement of Skylab X-ray images, MIDAS procedures for the image analysis of E-S0 galaxies, photometric data reductions under MIDAS, crowded field photometry with deconvolved images, the DENIS Deep Near Infrared Survey. Also discussed are: analysis of astronomical time series, detection of low-amplitude stellar pulsations, new SOT method for frequency analysis, chaotic attractor reconstruction and applications to variable stars, reconstructing a 1D signal from irregular samples, automatic analysis for time series with large gaps, prospects for content-based image retrieval, redshift survey in the South Galactic Pole Region.

  20. The Great Observatories Origins Deep Survey

    NASA Astrophysics Data System (ADS)

    Dickinson, Mark

    2008-05-01

    Observing the formation and evolution of ordinary galaxies at early cosmic times requires data at many wavelengths in order to recognize, separate and analyze the many physical processes which shape galaxies' history, including the growth of large scale structure, gravitational interactions, star formation, and active nuclei. Extremely deep data, covering an adequately large volume, are needed to detect ordinary galaxies in sufficient numbers at such great distances. The Great Observatories Origins Deep Survey (GOODS) was designed for this purpose as an anthology of deep field observing programs that span the electromagnetic spectrum. GOODS targets two fields, one in each hemisphere. Some of the deepest and most extensive imaging and spectroscopic surveys have been carried out in the GOODS fields, using nearly every major space- and ground-based observatory. Many of these data have been taken as part of large, public surveys (including several Hubble Treasury, Spitzer Legacy, and ESO Large Programs), which have produced large data sets that are widely used by the astronomical community. I will review the history of the GOODS program, highlighting results on the formation and early growth of galaxies and their active nuclei. I will also describe new and upcoming observations, such as the GOODS Herschel Key Program, which will continue to fill out our portrait of galaxies in the young universe.

  1. BreakingNews: Article Annotation by Image and Text Processing.

    PubMed

    Ramisa, Arnau; Yan, Fei; Moreno-Noguer, Francesc; Mikolajczyk, Krystian

    2018-05-01

    Building upon recent Deep Neural Network architectures, current approaches lying in the intersection of Computer Vision and Natural Language Processing have achieved unprecedented breakthroughs in tasks like automatic captioning or image retrieval. Most of these learning methods, though, rely on large training sets of images associated with human annotations that specifically describe the visual content. In this paper we propose to go a step further and explore the more complex cases where textual descriptions are loosely related to the images. We focus on the particular domain of news articles in which the textual content often expresses connotative and ambiguous relations that are only suggested but not directly inferred from images. We introduce an adaptive CNN architecture that shares most of the structure for multiple tasks including source detection, article illustration and geolocation of articles. Deep Canonical Correlation Analysis is deployed for article illustration, and a new loss function based on Great Circle Distance is proposed for geolocation. Furthermore, we present BreakingNews, a novel dataset with approximately 100K news articles including images, text and captions, and enriched with heterogeneous meta-data (such as GPS coordinates and user comments). We show this dataset to be appropriate to explore all aforementioned problems, for which we provide a baseline performance using various Deep Learning architectures, and different representations of the textual and visual features. We report very promising results and bring to light several limitations of current state-of-the-art in this kind of domain, which we hope will help spur progress in the field.
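
    The geolocation loss named above is based on great-circle distance. A differentiable sketch using the haversine formula is given below; this is a generic implementation of the distance, not the paper's exact loss, and the coordinates in the usage example are approximate.

```python
import torch

EARTH_RADIUS_KM = 6371.0

def great_circle_loss(pred, target):
    """Mean great-circle distance (km) between predicted and true
    (lat, lon) pairs in degrees, via the haversine formula."""
    pred, target = torch.deg2rad(pred), torch.deg2rad(target)
    dlat = pred[:, 0] - target[:, 0]
    dlon = pred[:, 1] - target[:, 1]
    a = torch.sin(dlat / 2) ** 2 + \
        torch.cos(pred[:, 0]) * torch.cos(target[:, 0]) * torch.sin(dlon / 2) ** 2
    return (2 * EARTH_RADIUS_KM * torch.asin(torch.sqrt(a.clamp(max=1.0)))).mean()

pred = torch.tensor([[48.85, 2.35]], requires_grad=True)   # Paris (approx.)
target = torch.tensor([[51.51, -0.13]])                    # London (approx.)
loss = great_circle_loss(pred, target)
loss.backward()
print(round(loss.item()))  # roughly 340 km
```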

  2. Towards real-time quantitative optical imaging for surgery

    NASA Astrophysics Data System (ADS)

    Gioux, Sylvain

    2017-07-01

    There is a pressing clinical need to provide image guidance during surgery. Currently, assessment of tissue that needs to be resected or avoided is performed subjectively, leading to a large number of failures, patient morbidity and increased healthcare cost. Because near-infrared (NIR) optical imaging is safe, does not require contact, and can provide relatively deep information (several mm), it offers unparalleled capabilities for providing image guidance during surgery. In this work, we introduce a novel concept that enables the quantitative imaging of endogenous molecular information over large fields-of-view. Because this concept can be implemented in real time, it is amenable to providing video-rate endogenous information during surgery.

  3. Deep learning in bioinformatics.

    PubMed

    Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh

    2017-09-01

    In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. Optimizing Nanoscale Quantitative Optical Imaging of Subfield Scattering Targets

    PubMed Central

    Henn, Mark-Alexander; Barnes, Bryan M.; Zhou, Hui; Sohn, Martin; Silver, Richard M.

    2016-01-01

    The full 3-D scattered field above finite sets of features has been shown to contain a continuum of spatial frequency information, and with novel optical microscopy techniques and electromagnetic modeling, deep-subwavelength geometrical parameters can be determined. Similarly, by using simulations, scattering geometries and experimental conditions can be established to tailor scattered fields that yield lower parametric uncertainties while decreasing the number of measurements and the area of such finite sets of features. Such optimized conditions are reported through quantitative optical imaging in 193 nm scatterfield microscopy using feature sets up to four times smaller in area than state-of-the-art critical dimension targets. PMID:27805660

  5. Magnetic particle imaging for radiation-free, sensitive and high-contrast vascular imaging and cell tracking.

    PubMed

    Zhou, Xinyi Y; Tay, Zhi Wei; Chandrasekharan, Prashant; Yu, Elaine Y; Hensley, Daniel W; Orendorff, Ryan; Jeffris, Kenneth E; Mai, David; Zheng, Bo; Goodwill, Patrick W; Conolly, Steven M

    2018-05-10

    Magnetic particle imaging (MPI) is an emerging ionizing radiation-free biomedical tracer imaging technique that directly images the intense magnetization of superparamagnetic iron oxide nanoparticles (SPIOs). MPI offers ideal image contrast because MPI shows zero signal from background tissues. Moreover, there is zero attenuation of the signal with depth in tissue, allowing for imaging deep inside the body quantitatively at any location. Recent work has demonstrated the potential of MPI for robust, sensitive vascular imaging and cell tracking with high contrast and dose-limited sensitivity comparable to nuclear medicine. To foster future applications in MPI, this new biomedical imaging field is welcoming researchers with expertise in imaging physics, magnetic nanoparticle synthesis and functionalization, nanoscale physics, and small animal imaging applications. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. THE DIFFERENCE IMAGING PIPELINE FOR THE TRANSIENT SEARCH IN THE DARK ENERGY SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kessler, R.; Marriner, J.; Childress, M.

    2015-11-06

    We describe the operation and performance of the difference imaging pipeline (DiffImg) used to detect transients in deep images from the Dark Energy Survey Supernova program (DES-SN) in its first observing season from 2013 August through 2014 February. DES-SN is a search for transients in which ten 3 deg^2 fields are repeatedly observed in the g, r, i, z passbands with a cadence of about 1 week. The observing strategy has been optimized to measure high-quality light curves and redshifts for thousands of Type Ia supernovae (SNe Ia) with the goal of measuring dark energy parameters. The essential DiffImg functions are to align each search image to a deep reference image, do a pixel-by-pixel subtraction, and then examine the subtracted image for significant positive detections of point-source objects. The vast majority of detections are subtraction artifacts, but after selection requirements and image filtering with an automated scanning program, there are ~130 detections per deg^2 per observation in each band, of which only ~25% are artifacts. Of the ~7500 transients discovered by DES-SN in its first observing season, each requiring a detection on at least two separate nights, Monte Carlo (MC) simulations predict that 27% are expected to be SNe Ia or core-collapse SNe. Another ~30% of the transients are artifacts in which a small number of observations satisfy the selection criteria for a single-epoch detection. Spectroscopic analysis shows that most of the remaining transients are AGNs and variable stars. Fake SNe Ia are overlaid onto the images to rigorously evaluate detection efficiencies and to understand the DiffImg performance. The DiffImg efficiency measured with fake SNe agrees well with expectations from a MC simulation that uses analytical calculations of the fluxes and their uncertainties. In our 8 "shallow" fields with single-epoch 50% completeness depth ~23.5, the SN Ia efficiency falls to 1/2 at redshift z ≈ 0.7; in our 2 "deep" fields with mag-depth ~24.5, the efficiency falls to 1/2 at z ≈ 1.1. A remaining performance issue is that the measured fluxes have additional scatter (beyond Poisson fluctuations) that increases with the host galaxy surface brightness at the transient location. This bright-galaxy issue has minimal impact on the SNe Ia program, but it may lower the efficiency for finding fainter transients on bright galaxies.

  7. The Difference Imaging Pipeline for the Transient Search in the Dark Energy Survey

    DOE PAGES

    Kessler, R.

    2015-09-09

    We describe the operation and performance of the difference imaging pipeline (DiffImg) used to detect transients in deep images from the Dark Energy Survey Supernova program (DES-SN) in its first observing season from 2013 August through 2014 February. DES-SN is a search for transients in which ten 3 deg^2 fields are repeatedly observed in the g, r, i, z passbands with a cadence of about 1 week. Our observing strategy has been optimized to measure high-quality light curves and redshifts for thousands of Type Ia supernovae (SNe Ia) with the goal of measuring dark energy parameters. The essential DiffImg functions are to align each search image to a deep reference image, do a pixel-by-pixel subtraction, and then examine the subtracted image for significant positive detections of point-source objects. The vast majority of detections are subtraction artifacts, but after selection requirements and image filtering with an automated scanning program, there are ~130 detections per deg^2 per observation in each band, of which only ~25% are artifacts. Of the ~7500 transients discovered by DES-SN in its first observing season, each requiring a detection on at least two separate nights, Monte Carlo (MC) simulations predict that 27% are expected to be SNe Ia or core-collapse SNe. Another ~30% of the transients are artifacts in which a small number of observations satisfy the selection criteria for a single-epoch detection. Spectroscopic analysis shows that most of the remaining transients are AGNs and variable stars. Fake SNe Ia are overlaid onto the images to rigorously evaluate detection efficiencies and to understand the DiffImg performance. Furthermore, the DiffImg efficiency measured with fake SNe agrees well with expectations from a MC simulation that uses analytical calculations of the fluxes and their uncertainties. In our 8 "shallow" fields with single-epoch 50% completeness depth ~23.5, the SN Ia efficiency falls to 1/2 at redshift z ≈ 0.7; in our 2 "deep" fields with mag-depth ~24.5, the efficiency falls to 1/2 at z ≈ 1.1. A remaining performance issue is that the measured fluxes have additional scatter (beyond Poisson fluctuations) that increases with the host galaxy surface brightness at the transient location. This bright-galaxy issue has minimal impact on the SNe Ia program, but it may lower the efficiency for finding fainter transients on bright galaxies.
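
    The core DiffImg operations described above, subtracting an aligned deep reference from each search image and flagging significant positive point-source detections, can be sketched in a few lines. This toy version assumes the images are already aligned and PSF-matched, and uses an invented 5σ threshold on a robust noise estimate; it omits the selection requirements and automated scanning the pipeline actually applies.

```python
import numpy as np
from scipy import ndimage

def detect_transients(search, reference, nsigma=5.0):
    """Pixel-by-pixel subtraction of an aligned reference from a search
    image, then labeling of significant positive detections.
    Assumes the images are already astrometrically aligned and PSF-matched."""
    diff = search - reference
    sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))  # robust std
    labels, n = ndimage.label(diff > nsigma * sigma)
    return [tuple(map(int, c)) for c in
            ndimage.center_of_mass(diff, labels, range(1, n + 1))]

rng = np.random.default_rng(3)
ref = rng.normal(100, 1.0, (128, 128))
sea = ref + rng.normal(0, 1.0, (128, 128))
sea[40:43, 80:83] += 30.0            # inject a fake point source
print(detect_transients(sea, ref))   # approximately [(41, 81)]
```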

  8. Variability-selected active galactic nuclei from supernova search in the Chandra deep field south

    NASA Astrophysics Data System (ADS)

    Trevese, D.; Boutsia, K.; Vagnetti, F.; Cappellaro, E.; Puccetti, S.

    2008-09-01

    Context: Variability is a property shared by virtually all active galactic nuclei (AGNs), and was adopted as a criterion for their selection using data from multi-epoch surveys. Low Luminosity AGNs (LLAGNs) are contaminated by the light of their host galaxies, and cannot therefore be detected by the usual colour techniques. For this reason, their evolution in cosmic time is poorly known. Consistency with the evolution derived from X-ray detected samples has not been clearly established so far, also because the low luminosity population consists of a mixture of different object types. LLAGNs can be detected by the nuclear optical variability of extended objects. Aims: Several variability surveys have been, or are being, conducted for the detection of supernovae (SNe). We propose to re-analyse these SNe data using a variability criterion optimised for AGN detection, to select a new AGN sample and study its properties. Methods: We analysed images acquired with the wide field imager at the 2.2 m ESO/MPI telescope, in the framework of the STRESS supernova survey. We selected the AXAF field centred on the Chandra Deep Field South where, besides the deep X-ray survey, various optical data exist, originating in the EIS and COMBO-17 photometric surveys and the spectroscopic database of GOODS. Results: We obtained a catalogue of 132 variable AGN candidates. Several of the candidates are X-ray sources. We compare our results with an HST variability study of X-ray and IR detected AGNs, finding consistent results. The relatively high fraction of confirmed AGNs in our sample (60%) allowed us to extract a list of reliable AGN candidates for spectroscopic follow-up observations.

  9. A Legacy Archive Program Providing Optical/NIR-selected Multiwavelength Catalogs and High-level Science Products of the HST Frontier Fields

    NASA Astrophysics Data System (ADS)

    Marchesini, Danilo

    2015-10-01

    We propose to construct public multi-wavelength and value-added catalogs for the HST Frontier Fields (HFF), a multi-cycle imaging program of 6 deep fields centered on strong lensing galaxy clusters and 6 deep blank fields. Whereas the main goal of the HFF is to explore the first billion years of galaxy evolution, this dataset has a unique combination of area and depth that will propel forward our knowledge of galaxy evolution down to and including the foreground cluster redshift (z=0.3-0.5). However, such scientific exploitation requires high-quality, homogeneous, multi-wavelength (from the UV to the mid-infrared) photometric catalogs, supplemented by photometric redshifts, rest-frame colors and luminosities, stellar masses, star-formation rates, and structural parameters. We will use our expertise and existing infrastructure - created for the 3D-HST and CANDELS projects - to build such a data product for the 12 fields of the HFF, using all available imaging data (from HST, Spitzer, and ground-based facilities) as well as all available HST grism data (e.g., GLASS). A broad range of research topics will benefit from such a public database, including but not limited to the faint end of the cluster mass function, the field mass function at z>2, and the build-up of the quiescent population at z>4. In addition, our work will provide an essential basis for follow-up studies and future planning with, for example, ALMA and JWST.

  10. Final design of SITELLE: a wide-field imaging Fourier transform spectrometer for the Canada-France-Hawaii Telescope

    NASA Astrophysics Data System (ADS)

    Grandmont, F.; Drissen, L.; Mandar, Julie; Thibault, S.; Baril, Marc

    2012-09-01

    We report here on the current status of SITELLE, an imaging Fourier transform spectrometer to be installed on the Canada-France-Hawaii Telescope in 2013. SITELLE is an Integral Field Unit (IFU) spectrograph capable of obtaining the visible (350 nm - 900 nm) spectrum of every pixel of a 2k x 2k CCD imaging a field of view of 11 x 11 arcminutes, with 100% spatial coverage and a spectral resolution ranging from R = 1 (deep panchromatic image) to R ~ 10^4 (for gas dynamics). SITELLE will cover a field of view 100 to 1000 times larger than traditional IFUs, such as GMOS-IFU on Gemini or the upcoming MUSE on the VLT. SITELLE follows on the legacy of BEAR, an imaging conversion of the CFHT FTS, and is the direct successor of SpIOMM, a similar instrument attached to the 1.6-m telescope of the Observatoire du Mont-Mégantic in Québec. SITELLE will be used to study the structure and kinematics of HII regions and ejecta around evolved stars in the Milky Way, emission-line stars in clusters, abundances in nearby gas-rich galaxies, and the star formation rate in distant galaxies.

  11. Robotically-adjustable microstereotactic frames for image-guided neurosurgery

    NASA Astrophysics Data System (ADS)

    Kratchman, Louis B.; Fitzpatrick, J. Michael

    2013-03-01

    Stereotactic frames are a standard tool for neurosurgical targeting, but are uncomfortable for patients and obstruct the surgical field. Microstereotactic frames are more comfortable for patients, provide better access to the surgical site, and have grown in popularity as an alternative to traditional stereotactic devices. However, clinically available microstereotactic frames require either lengthy manufacturing delays or expensive image guidance systems. We introduce a robotically-adjusted, disposable microstereotactic frame for deep brain stimulation surgery that eliminates the drawbacks of existing microstereotactic frames. Our frame can be automatically adjusted in the operating room using a preoperative plan in less than five minutes. A validation study on phantoms shows that our approach provides a target positioning error of 0.14 mm, which exceeds the required accuracy for deep brain stimulation surgery.

  12. Deep learning-based fine-grained car make/model classification for visual surveillance

    NASA Astrophysics Data System (ADS)

    Gundogdu, Erhan; Parıldı, Enes Sinan; Solmaz, Berkan; Yücesoy, Veysel; Koç, Aykut

    2017-10-01

    Fine-grained object recognition is a challenging computer vision problem that has recently been addressed by utilizing deep Convolutional Neural Networks (CNNs). Nevertheless, the main disadvantage of classification methods relying on deep CNN models is the need for a considerably large amount of data. In addition, relatively little annotated data exists for a real-world application such as the recognition of car models in a traffic surveillance system. To this end, we concentrate on the classification of fine-grained car makes and/or models for visual surveillance scenarios with the help of two different domains. First, a large-scale dataset including approximately 900K images is constructed from a website that includes fine-grained car models. Using their labels, a state-of-the-art CNN model is trained on the constructed dataset. The second domain is the set of images collected from a camera integrated into a traffic surveillance system. These images, numbering over 260K, are gathered by a special license plate detection method on top of a motion detection algorithm. An appropriately sized region is cropped from the region of interest provided by the detected license plate location. These sets of images and their labels for more than 30 classes are employed to fine-tune the CNN model already trained on the large-scale dataset described above. To fine-tune the network, the last two fully-connected layers are randomly initialized and the remaining layers are fine-tuned on the second dataset. In this work, the transfer of a model learned on a large dataset to a smaller one has been successfully performed by utilizing both the limited annotated data of the traffic field and a large-scale dataset with available annotations. Our experimental results on both the validation dataset and the real field show that the proposed methodology performs favorably against training the CNN model from scratch.
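
    The transfer recipe described above, reinitializing the final fully-connected layers of a CNN trained on the large web-collected dataset and fine-tuning on the surveillance-domain images, looks roughly like the following PyTorch sketch. The torchvision ResNet stands in for the authors' model (its single fc head stands in for the reinitialized layers), and the class count and learning rates are invented.

```python
import torch
import torch.nn as nn
from torchvision import models

N_CLASSES = 30  # surveillance-domain car make/model classes (illustrative)

# Stand-in for a CNN already trained on a large source dataset
# (downloads ImageNet weights on first use).
model = models.resnet18(weights="IMAGENET1K_V1")

# Re-initialize the classification head for the new label set.
model.fc = nn.Linear(model.fc.in_features, N_CLASSES)

# Fine-tune everything, but give the fresh head a larger learning rate.
head_params = list(model.fc.parameters())
body_params = [p for n, p in model.named_parameters() if not n.startswith("fc.")]
optimizer = torch.optim.SGD(
    [{"params": body_params, "lr": 1e-4},
     {"params": head_params, "lr": 1e-2}],
    momentum=0.9,
)

x, y = torch.randn(4, 3, 224, 224), torch.randint(0, N_CLASSES, (4,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```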

  13. Deep CFHT Y-band Imaging of VVDS-F22 Field. I. Data Products and Photometric Redshifts

    NASA Astrophysics Data System (ADS)

    Liu, Dezi; Yang, Jinyi; Yuan, Shuo; Wu, Xue-Bing; Fan, Zuhui; Shan, Huanyuan; Yan, Haojing; Zheng, Xianzhong

    2017-02-01

    We present our deep Y-band imaging data of a 2 square degree field within the F22 region of the VIMOS VLT Deep Survey. The observations were conducted using the WIRCam instrument mounted at the Canada-France-Hawaii Telescope (CFHT). The total on-sky time was 9 hr, distributed uniformly over 18 tiles. The scientific goals of the project are to select faint quasar candidates at redshift z > 2.2 and constrain the photometric redshifts for quasars and galaxies. In this paper, we present the observation and the image reduction, as well as the photometric redshifts that we derived by combining our Y-band data with the CFHTLenS u*g'r'i'z' optical data and UKIDSS DXS JHK near-infrared data. With the J-band image as a reference, a total of ~80,000 galaxies are detected in the final mosaic down to a Y-band 5σ point-source limiting depth of 22.86 mag. Compared with the ~3500 spectroscopic redshifts, our photometric redshifts for galaxies with z < 1.5 and i' ≲ 24.0 mag have a small systematic offset of |Δz| ≲ 0.2, 1σ scatter 0.03 < σ_Δz < 0.06, and less than 4.0% of catastrophic failures. We also compare with the CFHTLenS photometric redshifts and find that ours are more reliable at z ≳ 0.6 because of the inclusion of the near-infrared bands. In particular, including the Y-band data can improve the accuracy at z ~ 1.0-2.0 because the location of the 4000 Å break is better constrained. The Y-band images, the multiband photometry catalog, and the photometric redshifts are released at http://astro.pku.edu.cn/astro/data/DYI.html.
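
    Photometric redshifts of the kind computed here typically come from minimum-chi-square template fitting over a redshift grid, with the template amplitude solved analytically at each trial redshift. The following is a toy sketch of that estimator with random fake templates; it is not the actual fitting code used by the authors.

```python
import numpy as np

def photoz_chi2(f_obs, f_err, template_grid):
    """Minimum-chi^2 photometric redshift: at each trial redshift, fit the
    best template amplitude analytically, then record chi^2.
    template_grid: (n_z, n_bands) model fluxes; returns (best z index, chi2)."""
    a = (np.sum(f_obs * template_grid / f_err**2, axis=1)
         / np.sum(template_grid**2 / f_err**2, axis=1))   # analytic amplitude
    chi2 = np.sum((f_obs - a[:, None] * template_grid) ** 2 / f_err**2, axis=1)
    return np.argmin(chi2), chi2

# Toy example: 9 bands (think u*g'r'i'z' + YJHK), 200 trial redshifts.
rng = np.random.default_rng(4)
grid = np.abs(rng.normal(1.0, 0.3, (200, 9)))   # fake template fluxes
truth = 120
f_err = np.full(9, 0.05)
f_obs = 2.0 * grid[truth] + rng.normal(0, 0.05, 9)
best, chi2 = photoz_chi2(f_obs, f_err, grid)
print(best == truth)  # expect True: the true grid point minimizes chi^2
```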

  14. Two-modality γ detection of blood volume by camera imaging and nonimaging stethoscope for kinetic studies of cardiovascular control in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Eclancher, Bernard; Chambron, Jacques; Dumitresco, Barbu; Karman, Miklos; Pszota, Agnes; Simon, Atilla; Didon-Poncelet, Anna; Demangeat, Jean

    2002-04-01

    The quantification of rapid hemodynamic reactions to wide and slow breathing movements has been performed by two-modality γ-left-ventriculography of 99mTc-labeled blood volume, in anterior oblique incidence, on standing and even exercising healthy volunteers and cardiac patients. A highly sensitive stethoscope delivered whole γ-counts acquired at 30 ms intervals in a square field of view including the left ventricle, in a one-dimensional, low-resolution imaging mode for beat-to-beat analysis. Planar 2D γ-camera imaging of the same cardiac area was then performed without cardiac gating for alternate acquisitions during deep inspiration and deep expiration, completed by a 3D MRI assessment of the stethoscope detection field. Young healthy volunteers displayed wide variations of diastolic times and stroke volumes, as a result of enhanced baroreflex control, together with ±16% variations of the stethoscope's background blood-volume counts. Any of the components of these responses could be shifted, abolished or even inverted as a result of obesity, hypertension, aging or cardiac pathologies. The assessment of breathing control of the cardiovascular system by beat-to-beat γ-ventriculography, combined with nuclear 2D imaging and 3D MRI, is a kinetic method allowing the detection of functional anomalies in still-ambulatory patients.

  15. Condenser for ring-field deep ultraviolet and extreme ultraviolet lithography

    DOEpatents

    Chapman, Henry N.; Nugent, Keith A.

    2002-01-01

    A condenser for use with a ring-field deep ultraviolet or extreme ultraviolet lithography system. A condenser includes a ripple-plate mirror which is illuminated by a collimated or converging beam at grazing incidence. The ripple plate comprises a flat or curved plate mirror into which is formed a series of channels along an axis of the mirror to produce a series of concave surfaces in an undulating pattern. Light incident along the channels of the mirror is reflected onto a series of cones. The distribution of slopes on the ripple plate leads to a distribution of angles of reflection of the incident beam. This distribution has the form of an arc, with the extremes of the arc given by the greatest slope in the ripple plate. An imaging mirror focuses this distribution to a ring-field arc at the mask plane.

  16. Longitudinal in vivo two-photon fluorescence imaging

    PubMed Central

    Crowe, Sarah E.; Ellis-Davies, Graham C.R.

    2014-01-01

    Fluorescence microscopy is an essential technique for the basic sciences, especially biomedical research. Since the invention of laser scanning confocal microscopy in the 1980s, which enabled imaging of both fixed and living biological tissue with three-dimensional precision, high-resolution fluorescence imaging has revolutionized biological research. Confocal microscopy, by its very nature, has one fundamental limitation. Due to the confocal pinhole, deep tissue fluorescence imaging is not practical. In contrast (no pun intended), two-photon fluorescence microscopy allows, in principle, the collection of all emitted photons from fluorophores in the imaged voxel, dramatically extending our ability to see deep into living tissue. Since the development of transgenic mice with genetically encoded fluorescent protein in neocortical cells in 2000, two-photon imaging has enabled the dynamics of individual synapses to be followed for up to two years. Since the initial landmark contributions to this field in 2002, the technique has been used to understand how neuronal structures are changed by experience, learning and memory, and various diseases. Here we provide a basic summary of the crucial elements that are required for such studies, and discuss many applications of longitudinal two-photon fluorescence microscopy that have appeared since 2002. PMID:24214350

  17. Automated Identification of Northern Leaf Blight-Infected Maize Plants from Field Imagery Using Deep Learning.

    PubMed

    DeChant, Chad; Wiesner-Hanks, Tyr; Chen, Siyuan; Stewart, Ethan L; Yosinski, Jason; Gore, Michael A; Nelson, Rebecca J; Lipson, Hod

    2017-11-01

    Northern leaf blight (NLB) can cause severe yield loss in maize; however, scouting large areas to accurately diagnose the disease is time consuming and difficult. We demonstrate a system capable of automatically identifying NLB lesions in field-acquired images of maize plants with high reliability. This approach uses a computational pipeline of convolutional neural networks (CNNs) that addresses the challenges of limited data and the myriad irregularities that appear in images of field-grown plants. Several CNNs were trained to classify small regions of images as containing NLB lesions or not; their predictions were combined into separate heat maps, then fed into a final CNN trained to classify the entire image as containing diseased plants or not. The system achieved 96.7% accuracy on test set images not used in training. We suggest that such systems mounted on aerial- or ground-based vehicles can help in automated high-throughput plant phenotyping, precision breeding for disease resistance, and reduced pesticide use through targeted application across a variety of plant and disease categories.
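
    The two-stage pipeline described above, patch-level CNN scores assembled into a heat map that a second CNN then classifies, can be sketched as follows. The architectures and patch geometry are invented; for training, the per-patch outputs would be assembled with differentiable stacking rather than the in-place writes used in this forward-only toy.

```python
import torch
import torch.nn as nn

# Stage 1: scores a small image patch for lesion presence (sketch).
patch_cnn = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1), nn.Sigmoid(),
)

# Stage 2: classifies the assembled heat map as diseased / healthy (sketch).
heatmap_cnn = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)

def image_score(img, patch=32, stride=32):
    """Slide the patch classifier over the image, assemble its outputs
    into a heat map, then classify the whole heat map."""
    c, h, w = img.shape
    rows, cols = h // stride, w // stride
    heat = torch.zeros(1, 1, rows, cols)
    for r in range(rows):
        for cix in range(cols):
            crop = img[:, r*stride:r*stride+patch, cix*stride:cix*stride+patch]
            heat[0, 0, r, cix] = patch_cnn(crop.unsqueeze(0))
    return heatmap_cnn(heat)  # logit for "image contains diseased plants"

print(image_score(torch.randn(3, 256, 256)).shape)  # torch.Size([1, 1])
```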

  18. The Hubble Deep UV Legacy Survey (HDUV): Survey Overview and First Results

    NASA Astrophysics Data System (ADS)

    Oesch, Pascal; Montes, Mireia; HDUV Survey Team

    2015-08-01

    Deep HST imaging has shown that the overall star formation density and UV light density at z>3 is dominated by faint, blue galaxies. Remarkably, very little is known about the equivalent galaxy population at lower redshifts. Understanding how these galaxies evolve across the epoch of peak cosmic star-formation is key to a complete picture of galaxy evolution. Here, we present a new HST WFC3/UVIS program, the Hubble Deep UV (HDUV) legacy survey. The HDUV is a 132 orbit program to obtain deep imaging in two filters (F275W and F336W) over the two CANDELS Deep fields. We will cover ~100 arcmin2, reaching down to 27.5-28.0 mag at 5 sigma. By directly sampling the rest-frame far-UV at z>~0.5, this will provide a unique legacy dataset with exquisite HST multi-wavelength imaging as well as ancillary HST grism NIR spectroscopy for a detailed study of faint, star-forming galaxies at z~0.5-2. The HDUV will enable a wealth of research by the community, which includes tracing the evolution of the FUV luminosity function over the peak of the star formation rate density from z~3 down to z~0.5, measuring the physical properties of sub-L* galaxies, and characterizing resolved stellar populations to decipher the build-up of the Hubble sequence from sub-galactic clumps. This poster provides an overview of the HDUV survey and presents the reduced data products and catalogs which will be released to the community.

  19. VizieR Online Data Catalog: RR Lyrae population in the Phoenix dwarf galaxy (Ordonez+, 2014)

    NASA Astrophysics Data System (ADS)

    Ordonez, A. J.; Yang, S.-C.; Sarajedini, A.

    2017-06-01

    The HST/WFPC2 images of the two target fields around Phoenix used in this study were retrieved from the Mikulski Archive for Space Telescopes. The original observing campaign (PI: A. Aparicio; GO-8706) was intended to study the spatial structure and the stellar age and metallicity distribution of this dwarf galaxy. Therefore, it provides deep time-series photometry of fairly good quality for detecting legitimate RR Lyrae variable candidates. Images were taken in both the F555W and F814W filters. A total of two fields were observed: one centered on Phoenix itself, and the other on the outskirts of the galaxy, 2.7' from the central field. The total observed field of view with these observations is equal to 11.4 arcmin2 on the sky. (3 data files).

  20. Space Science

    NASA Image and Video Library

    2002-04-01

    This picture of the galaxy UGC 10214 was taken by the Advanced Camera for Surveys (ACS), which was installed aboard the Hubble Space Telescope (HST) in March 2002 during HST Servicing Mission 3B (STS-109 mission). Dubbed the "Tadpole," this spiral galaxy is unlike the textbook images of stately galaxies. Its distorted shape was caused by a small interloper, a very blue, compact galaxy visible in the upper left corner of the more massive Tadpole. The Tadpole resides about 420 million light-years away in the constellation Draco. Seen shining through the Tadpole's disk, the tiny intruder is likely a hit-and-run galaxy that is now leaving the scene of the accident. Strong gravitational forces from the interaction created the long tail of debris, consisting of stars and gas that stretches out more than 280,000 light-years. The galactic carnage and torrent of star birth are playing out against a spectacular backdrop: a "wallpaper pattern" of 6,000 galaxies. These galaxies represent twice the number of those discovered in the legendary Hubble Deep Field, the orbiting observatory's "deepest" view of the heavens, taken in 1995 by the Wide Field and Planetary Camera 2. The ACS picture, however, was taken in one-twelfth of the time it took to observe the original HST Deep Field. In blue light, ACS sees even fainter objects than were seen in the "deep field." The galaxies in the ACS picture, like those in the deep field, stretch back to nearly the beginning of time. Credit: NASA, H. Ford (JHU), G. Illingworth (UCSC/LO), M. Clampin (STScI), G. Hartig (STScI), the ACS Science Team, and ESA.

  1. Recent machine learning advancements in sensor-based mobility analysis: Deep learning for Parkinson's disease assessment.

    PubMed

    Eskofier, Bjoern M; Lee, Sunghoon I; Daneault, Jean-Francois; Golabchi, Fatemeh N; Ferreira-Carvalho, Gabriela; Vergara-Diaz, Gloria; Sapienza, Stefano; Costante, Gianluca; Klucken, Jochen; Kautz, Thomas; Bonato, Paolo

    2016-08-01

    The development of wearable sensors has opened the door for long-term assessment of movement disorders. However, there is still a need for developing methods suitable to monitor motor symptoms in and outside the clinic. The purpose of this paper was to investigate deep learning as a method for this monitoring. Deep learning recently broke records in speech and image classification, but it has not been fully investigated as a potential approach to analyze wearable sensor data. We collected data from ten patients with idiopathic Parkinson's disease using inertial measurement units. Several motor tasks were expert-labeled and used for classification. We specifically focused on the detection of bradykinesia. For this, we compared standard machine learning pipelines with deep learning based on convolutional neural networks. Our results showed that deep learning outperformed other state-of-the-art machine learning algorithms by at least 4.6% in terms of classification rate. We contribute a discussion of the advantages and disadvantages of deep learning for sensor-based movement assessment and conclude that deep learning is a promising method for this field.
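
    A convolutional network of the kind compared here operates on fixed-length windows of inertial measurement unit signals. The following sketch shows one plausible 1-D CNN for binary bradykinesia detection; the channel counts, window length, and layer sizes are invented, not the authors' architecture.

```python
import torch
import torch.nn as nn

class BradykinesiaCNN(nn.Module):
    """1-D CNN over fixed-length IMU windows (6 channels: 3-axis
    accelerometer + 3-axis gyroscope), with a binary output (sketch)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(6, 32, 7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, 2),  # logits: [no bradykinesia, bradykinesia]
        )

    def forward(self, x):        # x: (batch, channels, samples)
        return self.net(x)

model = BradykinesiaCNN()
windows = torch.randn(16, 6, 256)   # 16 windows of 256 samples each
print(model(windows).shape)          # torch.Size([16, 2])
```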

  2. Corrosion-Fatigue Assessment Program

    DTIC Science & Technology

    2008-03-31

    Excerpts from the report's list of figures: deep-focus images of Specimen 598-7 showing Crack 1 at Feature #2, Crack 2 at Feature #5, and Crack 3 at Feature #3 (Figures 3.2.1-4 through 3.2.1-7).

  3. Seeing through Musculoskeletal Tissues: Improving In Situ Imaging of Bone and the Lacunar Canalicular System through Optical Clearing

    PubMed Central

    Berke, Ian M.; Miola, Joseph P.; David, Michael A.; Smith, Melanie K.; Price, Christopher

    2016-01-01

    In situ, cells of the musculoskeletal system reside within complex and often interconnected 3-D environments. Key to better understanding how 3-D tissue and cellular environments regulate musculoskeletal physiology, homeostasis, and health is the use of robust methodologies for directly visualizing cell-cell and cell-matrix architecture in situ. However, the use of standard optical imaging techniques is often of limited utility in deep imaging of intact musculoskeletal tissues due to the highly scattering nature of biological tissues. Drawing inspiration from recent developments in the deep-tissue imaging field, we describe the application of immersion based optical clearing techniques, which utilize the principle of refractive index (RI) matching between the clearing/mounting media and tissue under observation, to improve the deep, in situ imaging of musculoskeletal tissues. To date, few optical clearing techniques have been applied specifically to musculoskeletal tissues, and a systematic comparison of the clearing ability of optical clearing agents in musculoskeletal tissues has yet to be fully demonstrated. In this study we tested the ability of eight different aqueous and non-aqueous clearing agents, with RIs ranging from 1.45 to 1.56, to optically clear murine knee joints and cortical bone. We demonstrated and quantified the ability of these optical clearing agents to clear musculoskeletal tissues and improve both macro- and micro-scale imaging of musculoskeletal tissue across several imaging modalities (stereomicroscopy, spectroscopy, and one-, and two-photon confocal microscopy) and investigational techniques (dynamic bone labeling and en bloc tissue staining). Based upon these findings we believe that optical clearing, in combination with advanced imaging techniques, has the potential to complement classical musculoskeletal analysis techniques; opening the door for improved in situ investigation and quantification of musculoskeletal tissues. PMID:26930293

  4. Seeing through Musculoskeletal Tissues: Improving In Situ Imaging of Bone and the Lacunar Canalicular System through Optical Clearing.

    PubMed

    Berke, Ian M; Miola, Joseph P; David, Michael A; Smith, Melanie K; Price, Christopher

    2016-01-01

    In situ, cells of the musculoskeletal system reside within complex and often interconnected 3-D environments. Key to better understanding how 3-D tissue and cellular environments regulate musculoskeletal physiology, homeostasis, and health is the use of robust methodologies for directly visualizing cell-cell and cell-matrix architecture in situ. However, the use of standard optical imaging techniques is often of limited utility in deep imaging of intact musculoskeletal tissues due to the highly scattering nature of biological tissues. Drawing inspiration from recent developments in the deep-tissue imaging field, we describe the application of immersion based optical clearing techniques, which utilize the principle of refractive index (RI) matching between the clearing/mounting media and tissue under observation, to improve the deep, in situ imaging of musculoskeletal tissues. To date, few optical clearing techniques have been applied specifically to musculoskeletal tissues, and a systematic comparison of the clearing ability of optical clearing agents in musculoskeletal tissues has yet to be fully demonstrated. In this study we tested the ability of eight different aqueous and non-aqueous clearing agents, with RIs ranging from 1.45 to 1.56, to optically clear murine knee joints and cortical bone. We demonstrated and quantified the ability of these optical clearing agents to clear musculoskeletal tissues and improve both macro- and micro-scale imaging of musculoskeletal tissue across several imaging modalities (stereomicroscopy, spectroscopy, and one-, and two-photon confocal microscopy) and investigational techniques (dynamic bone labeling and en bloc tissue staining). Based upon these findings we believe that optical clearing, in combination with advanced imaging techniques, has the potential to complement classical musculoskeletal analysis techniques; opening the door for improved in situ investigation and quantification of musculoskeletal tissues.

  5. A MULTIWAVELENGTH STUDY OF TADPOLE GALAXIES IN THE HUBBLE ULTRA DEEP FIELD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straughn, Amber N.; Eufrasio, Rafael T.; Gardner, Jonathan P.

    2015-12-01

    Multiwavelength data are essential in order to provide a complete picture of galaxy evolution and to inform studies of galaxies’ morphological properties across cosmic time. Here we present the results of a multiwavelength investigation of the morphologies of “tadpole” galaxies at intermediate redshift (0.314 < z < 3.175) in the Hubble Ultra Deep Field. These galaxies were previously selected from deep Hubble Space Telescope (HST) F775W data based on their distinct asymmetric knot-plus-tail morphologies. Here we use deep Wide Field Camera 3 near-infrared imaging in addition to the HST optical data in order to study the rest-frame UV/optical morphologies of these galaxies across the redshift range 0.3 < z < 3.2. This study reveals that the majority of these galaxies do retain their general asymmetric morphology in the rest-frame optical over this redshift range, if not the distinct “tadpole” shape. The average stellar mass of tadpole galaxies is lower than that of field galaxies, with the effect being slightly greater at higher redshift within the errors. Estimated from spectral energy distribution fits, the average age of tadpole galaxies is younger than that of field galaxies in the lower-redshift bin, and the average metallicity is lower (whereas the specific star formation rate for tadpoles is roughly the same as field galaxies across the redshift range probed here). These average effects combined support the conclusion that this subset of galaxies is in an active phase of assembly, either late-stage merging or cold gas accretion causing localized clumpy star formation.

  6. Conditional random field modelling of interactions between findings in mammography

    NASA Astrophysics Data System (ADS)

    Kooi, Thijs; Mordang, Jan-Jurre; Karssemeijer, Nico

    2017-03-01

    Recent breakthroughs in training deep neural network architectures, in particular deep Convolutional Neural Networks (CNNs), made a big impact on vision research and are increasingly responsible for advances in Computer Aided Diagnosis (CAD). Since many natural scenes and medical images vary in size and are too large to feed to the networks as a whole, two-stage systems are typically employed, where in the first stage, small regions of interest in the image are located and presented to the network as training and test data. These systems allow us to harness accurate region-based annotations, making the problem easier to learn. However, information is processed purely locally and context is not taken into account. In this paper, we present preliminary work on the employment of a Conditional Random Field (CRF) that is trained on top of the CNN to model contextual interactions, such as the presence of other suspicious regions, for mammography CAD. The model can easily be extended to incorporate other sources of information, such as symmetry, temporal change and various patient covariates, and is general in the sense that it can have application in other CAD problems.
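
    As an illustration of the contextual-CRF idea described above, the following minimal Python sketch runs exhaustive MAP inference over a few candidate regions, using CNN scores as unary potentials and an assumed pairwise term that rewards co-occurring suspicious regions; the coupling strength, potential form, and exhaustive search are illustrative assumptions, not the authors' implementation.

    ```python
    # Toy CRF over CNN region scores (hypothetical potentials, for illustration).
    import itertools
    import numpy as np

    def crf_map_labels(cnn_scores, coupling=0.5):
        """Exhaustive MAP inference for a tiny fully connected binary CRF.

        cnn_scores: per-region CNN suspiciousness scores in [0, 1] (unary terms).
        coupling:   assumed strength of the pairwise reward for pairs of
                    regions that are both labeled suspicious.
        """
        n = len(cnn_scores)
        probs = np.clip(np.stack([1 - cnn_scores, cnn_scores], axis=1), 1e-9, 1)
        unary = np.log(probs)                       # log-probability per label
        best_labels, best_score = None, -np.inf
        for labels in itertools.product([0, 1], repeat=n):
            s = sum(unary[i, l] for i, l in enumerate(labels))
            # pairwise term: co-occurring suspicious regions support each other
            s += coupling * sum(a == b == 1
                                for a, b in itertools.combinations(labels, 2))
            if s > best_score:
                best_labels, best_score = labels, s
        return best_labels

    # A borderline region (0.55) is pulled up by a confident neighbour (0.9).
    print(crf_map_labels(np.array([0.9, 0.55, 0.1])))   # -> (1, 1, 0)
    ```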

  7. The Hyper Suprime-Cam SSP Survey: Overview and survey design

    NASA Astrophysics Data System (ADS)

    Aihara, Hiroaki; Arimoto, Nobuo; Armstrong, Robert; Arnouts, Stéphane; Bahcall, Neta A.; Bickerton, Steven; Bosch, James; Bundy, Kevin; Capak, Peter L.; Chan, James H. H.; Chiba, Masashi; Coupon, Jean; Egami, Eiichi; Enoki, Motohiro; Finet, Francois; Fujimori, Hiroki; Fujimoto, Seiji; Furusawa, Hisanori; Furusawa, Junko; Goto, Tomotsugu; Goulding, Andy; Greco, Johnny P.; Greene, Jenny E.; Gunn, James E.; Hamana, Takashi; Harikane, Yuichi; Hashimoto, Yasuhiro; Hattori, Takashi; Hayashi, Masao; Hayashi, Yusuke; Hełminiak, Krzysztof G.; Higuchi, Ryo; Hikage, Chiaki; Ho, Paul T. P.; Hsieh, Bau-Ching; Huang, Kuiyun; Huang, Song; Ikeda, Hiroyuki; Imanishi, Masatoshi; Inoue, Akio K.; Iwasawa, Kazushi; Iwata, Ikuru; Jaelani, Anton T.; Jian, Hung-Yu; Kamata, Yukiko; Karoji, Hiroshi; Kashikawa, Nobunari; Katayama, Nobuhiko; Kawanomoto, Satoshi; Kayo, Issha; Koda, Jin; Koike, Michitaro; Kojima, Takashi; Komiyama, Yutaka; Konno, Akira; Koshida, Shintaro; Koyama, Yusei; Kusakabe, Haruka; Leauthaud, Alexie; Lee, Chien-Hsiu; Lin, Lihwai; Lin, Yen-Ting; Lupton, Robert H.; Mandelbaum, Rachel; Matsuoka, Yoshiki; Medezinski, Elinor; Mineo, Sogo; Miyama, Shoken; Miyatake, Hironao; Miyazaki, Satoshi; Momose, Rieko; More, Anupreeta; More, Surhud; Moritani, Yuki; Moriya, Takashi J.; Morokuma, Tomoki; Mukae, Shiro; Murata, Ryoma; Murayama, Hitoshi; Nagao, Tohru; Nakata, Fumiaki; Niida, Mana; Niikura, Hiroko; Nishizawa, Atsushi J.; Obuchi, Yoshiyuki; Oguri, Masamune; Oishi, Yukie; Okabe, Nobuhiro; Okamoto, Sakurako; Okura, Yuki; Ono, Yoshiaki; Onodera, Masato; Onoue, Masafusa; Osato, Ken; Ouchi, Masami; Price, Paul A.; Pyo, Tae-Soo; Sako, Masao; Sawicki, Marcin; Shibuya, Takatoshi; Shimasaku, Kazuhiro; Shimono, Atsushi; Shirasaki, Masato; Silverman, John D.; Simet, Melanie; Speagle, Joshua; Spergel, David N.; Strauss, Michael A.; Sugahara, Yuma; Sugiyama, Naoshi; Suto, Yasushi; Suyu, Sherry H.; Suzuki, Nao; Tait, Philip J.; Takada, Masahiro; Takata, Tadafumi; Tamura, Naoyuki; Tanaka, Manobu M.; Tanaka, Masaomi; Tanaka, Masayuki; Tanaka, Yoko; Terai, Tsuyoshi; Terashima, Yuichi; Toba, Yoshiki; Tominaga, Nozomu; Toshikawa, Jun; Turner, Edwin L.; Uchida, Tomohisa; Uchiyama, Hisakazu; Umetsu, Keiichi; Uraguchi, Fumihiro; Urata, Yuji; Usuda, Tomonori; Utsumi, Yousuke; Wang, Shiang-Yu; Wang, Wei-Hao; Wong, Kenneth C.; Yabe, Kiyoto; Yamada, Yoshihiko; Yamanoi, Hitomi; Yasuda, Naoki; Yeh, Sherry; Yonehara, Atsunori; Yuma, Suraphong

    2018-01-01

    Hyper Suprime-Cam (HSC) is a wide-field imaging camera on the prime focus of the 8.2-m Subaru telescope on the summit of Mauna Kea in Hawaii. A team of scientists from Japan, Taiwan, and Princeton University is using HSC to carry out a 300-night multi-band imaging survey of the high-latitude sky. The survey includes three layers: the Wide layer will cover 1400 deg^2 in five broad bands (grizy), with a 5σ point-source depth of r ≈ 26. The Deep layer covers a total of 26 deg^2 in four fields, going roughly a magnitude fainter, while the UltraDeep layer goes almost a magnitude fainter still in two pointings of HSC (a total of 3.5 deg^2). Here we describe the instrument, the science goals of the survey, and the survey strategy and data processing. This paper serves as an introduction to a special issue of the Publications of the Astronomical Society of Japan, which includes a large number of technical and scientific papers describing results from the early phases of this survey.

  8. VizieR Online Data Catalog: Redshifts of 65 CANDELS supernovae (Rodney+, 2014)

    NASA Astrophysics Data System (ADS)

    Rodney, S. A.; Riess, A. G.; Strolger, L.-G.; Dahlen, T.; Graur, O.; Casertano, S.; Dickinson, M. E.; Ferguson, H. C.; Garnavich, P.; Hayden, B.; Jha, S. W.; Jones, D. O.; Kirshner, R. P.; Koekemoer, A. M.; McCully, C.; Mobasher, B.; Patel, B.; Weiner, B. J.; Cenko, S. B.; Clubb, K. I.; Cooper, M.; Filippenko, A. V.; Frederiksen, T. F.; Hjorth, J.; Leibundgut, B.; Matheson, T.; Nayyeri, H.; Penner, K.; Trump, J.; Silverman, J. M.; U, V.; Azalee Bostroem, K.; Challis, P.; Rajan, A.; Wolff, S.; Faber, S. M.; Grogin, N. A.; Kocevski, D.

    2015-01-01

    In this paper we present a measurement of the Type Ia supernova explosion rate as a function of redshift (SNR(z)) from a sample of 65 supernovae discovered in the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) supernova program. This supernova survey is a joint operation of two Hubble Space Telescope (HST) Multi-Cycle Treasury (MCT) programs: CANDELS (PIs: Faber and Ferguson; Grogin et al., 2011ApJS..197...35G; Koekemoer et al., 2011ApJS..197...36K), and the Cluster Lensing and Supernovae search with Hubble (CLASH; PI: Postman; Postman et al. 2012, cat. J/ApJS/199/25). The supernova discovery and follow-up for both programs were allocated to the HST MCT supernova program (PI: Riess). The results presented here are based on the full five fields and ~0.25 deg^2 of the CANDELS program, observed from 2010 to 2013. A companion paper presents the SN Ia rates from the CLASH sample (Graur et al., 2014ApJ...783...28G). A composite analysis that combines the CANDELS+CLASH supernova sample and revisits past HST surveys will be presented in a future paper. The three-year CANDELS program was designed to probe galaxy evolution out to z~8 with deep infrared and optical imaging of five well-studied extragalactic fields: GOODS-S, GOODS-N (the Great Observatories Origins Deep Survey South and North; Giavalisco et al. 2004, cat. II/261), COSMOS (the Cosmic Evolution Survey; Scoville et al., 2007ApJS..172....1S; Koekemoer et al., 2007ApJS..172..196K), UDS (the UKIDSS Ultra Deep Survey; Lawrence et al. 2007, cat. II/314; Cirasuolo et al., 2007MNRAS.380..585C), and EGS (the Extended Groth Strip; Davis et al. 2007, cat. III/248). As described fully in Grogin et al. (2011ApJS..197...35G), the CANDELS program includes both "wide" and "deep" fields. The wide component of CANDELS comprises the COSMOS, UDS, and EGS fields, plus one-third of the GOODS-S field and one half of the GOODS-N field, for a total survey area of 730 arcmin^2. The "deep" component of CANDELS came from the central 67 arcmin^2 of each of the GOODS-S and GOODS-N fields. The CANDELS fields analyzed in this work are described in Table 1. (6 data files).

  9. Light-field and holographic three-dimensional displays [Invited].

    PubMed

    Yamaguchi, Masahiro

    2016-12-01

    A perfect three-dimensional (3D) display that satisfies all depth cues in human vision is possible if a light field can be reproduced exactly as it appeared when it emerged from a real object. The light field can be generated based on either light ray or wavefront reconstruction, with the latter known as holography. This paper first provides an overview of the advances of ray-based and wavefront-based 3D display technologies, including integral photography and holography, and the integration of those technologies with digital information systems. Hardcopy displays have already been used in some applications, whereas the electronic display of a light field is under active investigation. Next, a fundamental question in this technology field is addressed: what is the difference between ray-based and wavefront-based methods for light-field 3D displays? In considering this question, it is of particular interest to look at the technology of holographic stereograms. The phase information in holography contributes to the resolution of a reconstructed image, especially for deep 3D images. Moreover, issues facing the electronic display system of light fields are discussed, including the resolution of the spatial light modulator, the computational techniques of holography, and the speckle in holographic images.

  10. The Chandra Deep Field South as a test case for Global Multi Conjugate Adaptive Optics

    NASA Astrophysics Data System (ADS)

    Portaluri, E.; Viotto, V.; Ragazzoni, R.; Gullieuszik, M.; Bergomi, M.; Greggio, D.; Biondi, F.; Dima, M.; Magrin, D.; Farinato, J.

    2017-04-01

    The era of the next generation of giant telescopes requires not only the advent of new technologies but also the development of novel methods, in order to fully exploit the extraordinary potential for which they are built. Global Multi Conjugate Adaptive Optics (GMCAO) pursues this approach, with the goal of achieving good performance over a field of view of a few arcmin and an increase in sky coverage. In this article, we show the gain offered by this technique for an astrophysical application, using the photometric survey strategy applied to the Chandra Deep Field South as a case study. We simulated a close-to-real observation of a 500 × 500 arcsec^2 extragalactic deep field with a 40-m class telescope that implements GMCAO. We analysed mock K-band images of 6000 high-redshift (up to z = 2.75) galaxies therein as if they were real, to recover the initial input parameters. We attained 94.5 per cent completeness for source detection with SEXTRACTOR. We also measured the morphological parameters of all the sources with the two-dimensional fitting tool GALFIT. The agreement we found between recovered and intrinsic parameters demonstrates that GMCAO is a reliable approach to assist extremely large telescope (ELT) observations of extragalactic interest.
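
    A completeness measurement of the kind quoted above reduces to matching the detected catalogue back to the input mock catalogue. The sketch below is a minimal Python/numpy illustration under an assumed 0.1-arcsec matching tolerance and synthetic positions; it is not the actual simulation pipeline.

    ```python
    # Source-detection completeness: fraction of input mocks recovered.
    import numpy as np
    from scipy.spatial import cKDTree

    def completeness(input_xy, detected_xy, tol=0.1):
        """Fraction of input sources with a detection within `tol` (arcsec)."""
        dist, _ = cKDTree(detected_xy).query(input_xy, k=1)
        return np.mean(dist < tol)

    rng = np.random.default_rng(0)
    truth = rng.uniform(0, 500, size=(6000, 2))        # mock positions (arcsec)
    kept = truth[rng.random(6000) < 0.945]             # emulate 94.5% recovery
    detected = kept + rng.normal(0, 0.02, kept.shape)  # centroiding noise
    print(f"completeness = {completeness(truth, detected):.1%}")
    ```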

  11. Wishart Deep Stacking Network for Fast POLSAR Image Classification.

    PubMed

    Jiao, Licheng; Liu, Fang

    2016-05-11

    Inspired by the popular deep learning architecture known as the Deep Stacking Network (DSN), a specific deep model for polarimetric synthetic aperture radar (POLSAR) image classification is proposed in this paper, named the Wishart Deep Stacking Network (W-DSN). First, a fast implementation of the Wishart distance is achieved by a special linear transformation, which speeds up the classification of POLSAR images and makes it possible to use this polarimetric information in the subsequent neural network (NN). A single-hidden-layer neural network based on the fast Wishart distance, named the Wishart Network (WN), is then defined for POLSAR image classification and improves the classification accuracy. Finally, a multi-layer neural network is formed by stacking WNs; this is the proposed deep learning architecture W-DSN for POLSAR image classification, and it improves the classification accuracy further. In addition, the structure of the WN can be expanded in a straightforward way by adding hidden units if necessary, as can the structure of the W-DSN. As a preliminary exploration of formulating a specific deep learning architecture for POLSAR image classification, the proposed methods may establish a simple but clever connection between POLSAR image interpretation and deep learning. Experimental results on real POLSAR images show that the fast implementation of the Wishart distance is very efficient (a POLSAR image with 768,000 pixels can be classified in 0.53 s), and that both the single-hidden-layer architecture WN and the deep architecture W-DSN perform well and work efficiently for POLSAR image classification.
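
    The fast Wishart classification rests on the fact that the Wishart distance d(C, Σ_m) = ln|Σ_m| + Tr(Σ_m⁻¹ C) is affine in the entries of the pixel coherency matrix C, so every pixel can be classified with a single matrix product. A hedged numpy sketch of this idea follows; the 3 × 3 coherency matrices and toy class centers are stand-ins, not the paper's data or exact formulation.

    ```python
    # Wishart-distance classification as one matrix multiply (illustrative).
    import numpy as np

    def wishart_classify(C_pixels, class_centers):
        """C_pixels: (N, 3, 3) coherency matrices; class_centers: (M, 3, 3)."""
        M = len(class_centers)
        inv = np.array([np.linalg.inv(S) for S in class_centers])     # (M,3,3)
        logdet = np.array([np.linalg.slogdet(S)[1] for S in class_centers])
        # Tr(S^-1 C) equals vec(S^-1 transposed) . vec(C): flatten both sides
        W = inv.transpose(0, 2, 1).reshape(M, -1)                     # (M,9)
        X = C_pixels.reshape(len(C_pixels), -1)                       # (N,9)
        dists = (X @ W.T).real + logdet            # (N,M) Wishart distances
        return np.argmin(dists, axis=1)

    rng = np.random.default_rng(1)
    centers = np.array([np.eye(3) * s for s in (1.0, 2.0, 4.0)], dtype=complex)
    pixels = np.array([np.eye(3) * rng.uniform(0.5, 5) for _ in range(10)])
    print(wishart_classify(pixels, centers))
    ```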

  12. A Chandra Survey of high-redshift (0.7 < z < 0.8) clusters selected in the 100 deg^2 SPT-Pol Deep Field

    NASA Astrophysics Data System (ADS)

    Garmire, Gordon

    2016-09-01

    We propose to observe a complete sample of 10 galaxy clusters at 1e14 < M500 < 5e14 and 0.7 < z < 0.8. These systems were selected from the 100 deg^2 deep field of the SPT-Pol SZ survey. This survey area has significant complementary data, including uniform-depth ATCA, Herschel, Spitzer, and DES imaging, enabling a wide variety of astrophysical and cosmological studies. This sample complements the successful SPT-XVP survey, which has a broad redshift range and a narrow mass range, by including clusters over a narrow redshift range and a broad mass range. These systems are of such low mass and high redshift that they will not be detected in the eROSITA all-sky survey.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holcomb, R.T.; Moore, J.G.; Lipman, P.W.

    The GLORIA long-range sonar imaging system has revealed fields of large lava flows in the Hawaiian Trough east and south of Hawaii in water as deep as 5.5 km. Flows in the most extensive field (110 km long) have erupted from the deep submarine segment of Kilauea's east rift zone. Other flows have been erupted from Loihi and Mauna Loa. This discovery confirms a suspicion, long held from subaerial studies, that voluminous submarine flows are erupted from Hawaiian volcanoes, and it supports an inference that summit calderas repeatedly collapse and fill at intervals of centuries to millennia owing to voluminous eruptions. These extensive flows differ greatly in form from pillow lavas found previously along shallower segments of the rift zones; therefore, revision of concepts of volcano stratigraphy and structure may be required.

  14. Hybrid system for in vivo real-time planar fluorescence and volumetric optoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Chen, Zhenyue; Deán-Ben, Xosé Luís.; Gottschalk, Sven; Razansky, Daniel

    2018-02-01

    Fluorescence imaging is widely employed in all fields of cell and molecular biology due to its high sensitivity, high contrast and ease of implementation. However, its low spatial resolution and lack of depth information, especially in strongly scattering samples, restrict its applicability for deep-tissue imaging applications. On the other hand, optoacoustic imaging is known to deliver a unique set of capabilities, such as high spatial and temporal resolution in three dimensions, deep penetration and spectrally enriched imaging contrast. Since fluorescent substances can generate contrast in both modalities, simultaneous fluorescence and optoacoustic readings can provide new capabilities for functional and molecular imaging of living organisms. Optoacoustic images can further serve as valuable anatomical references based on endogenous hemoglobin contrast. Herein, we propose a hybrid system for in vivo real-time planar fluorescence and volumetric optoacoustic tomography, both operating in reflection mode, which synergistically combines the advantages of the stand-alone systems. Validation of the spatial resolution and sensitivity of the system was first carried out in tissue-mimicking phantoms, while in vivo imaging was further demonstrated by tracking perfusion of an optical contrast agent in a mouse brain in the hybrid imaging mode. Experimental results show that the proposed system effectively exploits the contrast mechanisms of both imaging modalities, making it especially useful for accurate monitoring of fluorescence-based signal dynamics in highly scattering samples.

  15. A demonstration of position angle-only weak lensing shear estimators on the GREAT3 simulations

    NASA Astrophysics Data System (ADS)

    Whittaker, Lee; Brown, Michael L.; Battye, Richard A.

    2015-12-01

    We develop and apply the position angle-only shear estimator of Whittaker, Brown & Battye to realistic galaxy images. This is done by demonstrating the method on the simulations of the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, which include contributions from anisotropic point spread functions (PSFs). We measure the position angles of the galaxies using three distinct methods: the integrated light method, quadrupole moments of the surface brightness, and model-based ellipticity measurements provided by IM3SHAPE. A weighting scheme is adopted to address biases in the position angle measurements which arise in the presence of an anisotropic PSF. Biases on the shear estimates, due to measurement errors on the position angles and correlations between the measurement errors and the true position angles, are corrected for using simulated galaxy images and an iterative procedure. The properties of the simulations are estimated using the deep field images provided as part of the challenge. A method is developed to match the distributions of galaxy fluxes and half-light radii from the deep fields to the corresponding distributions in the field of interest. We recover angle-only shear estimates with a performance close to that of current well-established model- and moments-based methods for all three angle measurement techniques. The Q-values for all three methods are found to be Q ˜ 400. The code is freely available online at http://www.jb.man.ac.uk/mbrown/angle_only_shear/.
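
    The core intuition of angle-only shear estimation can be reproduced with a toy simulation: for isotropic intrinsic orientations, the mean of the unit vectors (cos 2θ, sin 2θ) measured from sheared galaxies points along the shear. The Python snippet below demonstrates only this directional signal under a weak-shear approximation; the amplitude calibration, weighting scheme, and bias corrections of the actual estimator are deliberately omitted.

    ```python
    # Toy demonstration that position angles alone trace the shear direction.
    import numpy as np

    rng = np.random.default_rng(42)
    g = np.array([0.03, 0.01])                       # input reduced shear (g1, g2)
    e_int = 0.25 * rng.standard_normal((200_000, 2)) # intrinsic ellipticities
    e_obs = e_int + g                                # weak-shear approximation
    theta = 0.5 * np.arctan2(e_obs[:, 1], e_obs[:, 0])   # position angles only
    mean_vec = np.array([np.cos(2 * theta).mean(), np.sin(2 * theta).mean()])
    print("recovered direction:", 0.5 * np.arctan2(mean_vec[1], mean_vec[0]))
    print("true direction:     ", 0.5 * np.arctan2(g[1], g[0]))
    ```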

  16. Magnetotelluric images of deep crustal structure of the Rehai geothermal field near Tengchong, southern China

    NASA Astrophysics Data System (ADS)

    Bai, Denghai; Meju, Maxwell A.; Liao, Zhijie

    2001-12-01

    Broadband (0.004-4096 s) magnetotelluric (MT) soundings have been applied to the determination of the deep structure across the Rehai geothermal field in a Quaternary volcanic area near the Indo-Eurasian collisional margin. Tensorial analysis of the data shows evidence of weak to strong 3-D effects, but for approximate 2-D imaging we obtained dual-mode MT responses for an assumed strike direction coincident with the trend of the regional-scale faults and with the principal impedance azimuth at long periods. The data were subsequently inverted using different approaches. The rapid relaxation inversion models are comparable to the sections constructed from depth-converted invariant impedance phase data. The results from full-domain 2-D conjugate-gradient inversion with different initial models are concordant and evoke a picture of a dome-like structure consisting of a conductive (<10 Ωm) core zone, c. 2 km wide, and a resistive (>50-1000 Ωm) cap which is about 5-6 km thick in the central part of the known geothermal field and thickens outwards to about 15-20 km. The anomalous structure rests on a mid-crustal zone of 20-30 Ωm resistivity extending down to about 25 km depth, where there appears to be a moderately resistive (>30 Ωm) substratum. The MT images are shown to be in accord with published geological, isotopic and geochemical results that suggest the presence of a magma body underneath the area of study.

  17. Studying Galaxy Formation with the Hubble, Spitzer and James Webb Space Telescopes

    NASA Technical Reports Server (NTRS)

    Gardner, Jonathan P.

    2009-01-01

    The deepest optical to infrared observations of the universe include the Hubble Deep Fields, the Great Observatories Origins Deep Survey and the recent Hubble Ultra-Deep Field. Galaxies are seen in these surveys at redshifts z greater than 6, less than 1 Gyr after the Big Bang, at the end of a period when light from the galaxies has reionized Hydrogen in the inter-galactic medium. These observations, combined with theoretical understanding, indicate that the first stars and galaxies formed at z greater than 10, beyond the reach of the Hubble and Spitzer Space Telescopes. To observe the first galaxies, NASA is planning the James Webb Space Telescope (JWST), a large (6.5m), cold (less than 50K), infrared-optimized observatory to be launched early in the next decade into orbit around the second Earth-Sun Lagrange point. JWST will have four instruments: The Near-Infrared Camera, the Near-Infrared multi-object Spectrograph, and the Tunable Filter Imager will cover the wavelength range 0.6 to 5 microns, while the Mid-Infrared Instrument will do both imaging and spectroscopy from 5 to 28.5 microns. In addition to JWST's ability to study the formation and evolution of galaxies, I will also briefly review its expected contributions to studies of the formation of stars and planetary systems, and discuss recent progress in constructing the observatory.

  18. Hyper Suprime-Camera Survey of the Akari NEP Wide Field

    NASA Astrophysics Data System (ADS)

    Goto, Tomotsugu; Toba, Yoshiki; Utsumi, Yousuke; Oi, Nagisa; Takagi, Toshinobu; Malkan, Matt; Ohyama, Youichi; Murata, Kazumi; Price, Paul; Karouzos, Marios; Matsuhara, Hideo; Nakagawa, Takao; Wada, Takehiko; Serjeant, Steve; Burgarella, Denis; Buat, Veronique; Takada, Masahiro; Miyazaki, Satoshi; Oguri, Masamune; Miyaji, Takamitsu; Oyabu, Shinki; White, Glenn; Takeuchi, Tsutomu; Inami, Hanae; Pearson, Chris; Malek, Katarzyna; Marchetti, Lucia; Lee, Hyung Mok; Im, Myung; Kim, Seong Jin; Koptelova, Ekaterina; Chao, Dani; Wu, Yi-Han; AKARI NEP Survey Team; AKARI All Sky Survey Team

    2017-03-01

    The extragalactic background suggests that half the energy generated by stars has been reprocessed into the infrared (IR) by dust. At z ∼ 1.3, 90% of star formation is obscured by dust. To fully understand the cosmic star formation history, it is critical to investigate infrared emission. AKARI has made deep mid-IR observations using its continuous 9-band filters in the NEP field (5.4 deg^2), using ∼10% of the entire pointed observations available throughout its lifetime. However, there remain 11,000 AKARI infrared sources undetected in the previous CFHT/MegaCam imaging (r ∼ 25.9 AB mag). The redshifts and IR luminosities of these sources are unknown. These sources may contribute significantly to the cosmic star-formation rate density (CSFRD); for example, if they all lie at 1 < z < 2, the CSFRD will be twice as high at that epoch. We are carrying out deep imaging of the NEP field in 5 broad bands (g, r, i, z, and y) using Hyper Suprime-Cam (HSC), which has a 1.5-deg-diameter field of view, on the Subaru 8m telescope. This will provide photometric redshift information, and thereby IR luminosities, for the previously undetected 11,000 faint AKARI IR sources. Combined with AKARI's mid-IR AGN/SF diagnosis and accurate mid-IR luminosity measurements, this will allow a complete census of the cosmic star-formation/AGN accretion history obscured by dust.

  19. Recent advancements in the SQUID magnetospinogram system

    NASA Astrophysics Data System (ADS)

    Adachi, Yoshiaki; Kawai, Jun; Haruta, Yasuhiro; Miyamoto, Masakazu; Kawabata, Shigenori; Sekihara, Kensuke; Uehara, Gen

    2017-06-01

    In this study, a new superconducting quantum interference device (SQUID) biomagnetic measurement system, known as the magnetospinogram (MSG), is developed. The MSG system is used to observe the weak magnetic field distribution that the neural activity of the spinal cord induces over the body surface. Current source reconstruction from the observed magnetic field distribution provides noninvasive functional imaging of the spinal cord, which enables medical personnel to diagnose spinal cord diseases more accurately. The MSG system is equipped with a uniquely shaped cryostat and a sensor array of vector-type SQUID gradiometers that are designed to detect the magnetic field from deep sources across a narrow observation area over the body surface of supine subjects. The latest prototype of the MSG system is already being applied in clinical studies to develop a diagnosis protocol for spinal cord diseases. Advancements in hardware and software for MSG signal processing and cryogenic components help suppress external magnetic field noise and reduce liquid-helium costs, two barriers to introducing the MSG system into hospitals. Given the advantages of the MSG system for investigating deep sources, its application is being extended to various biomagnetic uses in addition to spinal cord functional imaging. The study also includes a report on the recent advancements of the SQUID MSG system, including its peripheral technologies and widespread applications.

  20. Artificial Intelligence in planetary spectroscopy

    NASA Astrophysics Data System (ADS)

    Waldmann, Ingo

    2017-10-01

    The field of exoplanetary spectroscopy is as fast moving as it is new. Analysing currently available observations of exoplanetary atmospheres often invokes large and correlated parameter spaces that can be difficult to map or constrain. This is true both for the data analysis of observations and for the theoretical modelling of their atmospheres. Issues of low signal-to-noise data and large, non-linear parameter spaces are nothing new and are commonly found in many fields of engineering and the physical sciences. Recent years have seen vast improvements in statistical data analysis and machine learning that have revolutionised fields as diverse as telecommunication, pattern recognition, medical physics and cosmology. In many aspects, the data mining and non-linearity challenges encountered in other data-intensive fields are directly transferable to the field of extrasolar planets. In this conference, I will discuss how deep neural networks can be designed to address these issues both for exoplanet atmospheres and for atmospheres in our own solar system. I will present a deep belief network, RobERt (Robotic Exoplanet Recognition), able to learn to recognise exoplanetary spectra and provide artificial intelligence to state-of-the-art atmospheric retrieval algorithms. Furthermore, I will present a new deep convolutional network that is able to map planetary surface compositions using hyper-spectral imaging, and demonstrate its use on Cassini-VIMS data of Saturn.

  1. Ultrashort Microwave-Pumped Real-Time Thermoacoustic Breast Tumor Imaging System.

    PubMed

    Ye, Fanghao; Ji, Zhong; Ding, Wenzheng; Lou, Cunguang; Yang, Sihua; Xing, Da

    2016-03-01

    We report the design of a real-time thermoacoustic (TA) scanner dedicated to imaging deep breast tumors and investigate its imaging performance. The TA imaging system is composed of an ultrashort microwave pulse generator and a ring transducer array with 384 elements. By vertically scanning the transducer array that encircles the breast phantom, we achieve real-time, 3D thermoacoustic imaging (TAI) with an imaging speed of 16.7 frames per second. The stability of the microwave energy and its distribution in the cling-skin acoustic coupling cup are measured. The results indicate that there is a nearly uniform electromagnetic field in each XY-imaging plane. Three plastic tubes filled with salt water are imaged dynamically to evaluate the real-time performance of our system, followed by 3D imaging of an excised breast tumor embedded in a breast phantom. Finally, to demonstrate the potential for clinical applications, the excised breast of a ewe embedded with an ex vivo human breast tumor is imaged clearly with a contrast of about 1:2.8. The high imaging speed, large field of view, and 3D imaging performance of our dedicated TAI system provide the potential for clinical routine breast screening.

  2. Observing Optical Plasmons on a Single Nanometer Scale

    PubMed Central

    Cohen, Moshik; Shavit, Reuven; Zalevsky, Zeev

    2014-01-01

    The exceptional capability of plasmonic structures to confine light into deep subwavelength volumes has fashioned rapid expansion of interest from both fundamental and applicative perspectives. Surface plasmon nanophotonics enables the investigation of light-matter interaction at the deep nanoscale and the harnessing of electromagnetic and quantum properties of materials, thus opening pathways for tremendous potential applications. However, imaging optical plasmonic waves on a single-nanometer scale is still a substantial challenge, mainly due to size and energy considerations. Here, for the first time, we use Kelvin Probe Force Microscopy (KPFM) under optical illumination to image and characterize plasmonic modes. We experimentally demonstrate unprecedented spatial resolution and measurement sensitivity, both on the order of a single nanometer. By comparing experimentally obtained images with theoretical calculation results, we show that KPFM maps may provide valuable information on the phase of the optical near field. Additionally, we propose a theoretical model for the relation between surface plasmons and the material workfunction measured by KPFM. Our findings provide a path for using KPFM for high-resolution measurements of optical plasmons, pushing the scientific frontier towards quantum plasmonic imaging on submolecular scales. PMID:24556874

  3. A Study of Deep CNN-Based Classification of Open and Closed Eyes Using a Visible Light Camera Sensor

    PubMed Central

    Kim, Ki Wan; Hong, Hyung Gil; Nam, Gi Pyo; Park, Kang Ryoung

    2017-01-01

    The necessity for the classification of open and closed eyes is increasing in various fields, including analysis of eye fatigue in 3D TVs, analysis of the psychological states of test subjects, and eye status tracking-based driver drowsiness detection. Previous studies have used various methods to distinguish between open and closed eyes, such as classifiers based on the features obtained from image binarization, edge operators, or texture analysis. However, when it comes to eye images with different lighting conditions and resolutions, it can be difficult to find an optimal threshold for image binarization or optimal filters for edge and texture extraction. In order to address this issue, we propose a method to classify open and closed eye images with different conditions, acquired by a visible light camera, using a deep residual convolutional neural network. After conducting performance analysis on both self-collected and open databases, we have determined that the classification accuracy of the proposed method is superior to that of existing methods. PMID:28665361
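
    For concreteness, a minimal PyTorch sketch of a residual CNN for binary open/closed classification follows; the channel counts, block layout, and 64 × 64 grayscale input are assumptions for illustration, not the architecture evaluated in the paper.

    ```python
    # Small residual CNN for two-class (open vs. closed) eye images.
    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
            self.bn2 = nn.BatchNorm2d(channels)

        def forward(self, x):
            h = torch.relu(self.bn1(self.conv1(x)))
            h = self.bn2(self.conv2(h))
            return torch.relu(h + x)       # identity shortcut eases optimization

    model = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        ResidualBlock(16),
        nn.MaxPool2d(2),
        ResidualBlock(16),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 2),                  # logits for open vs. closed
    )
    logits = model(torch.randn(8, 1, 64, 64))  # batch of 8 grayscale eye crops
    print(logits.shape)                        # torch.Size([8, 2])
    ```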

  4. Phytoplankton off the West Coast of Africa

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Just off the coast of West Africa, persistent northeasterly trade winds often churn up deep ocean water. When the nutrients in these deep waters reach the ocean's surface, they often give rise to large blooms of phytoplankton. This image of the Mauritanian coast shows swirls of phytoplankton fed by the upwelling of nutrient-rich water. The scene was acquired by the Medium Resolution Imaging Spectrometer (MERIS) aboard the European Space Agency's ENVISAT. MERIS will monitor changes in phytoplankton across Earth's oceans and seas, both for the purpose of managing fisheries and conducting global change research. NASA scientists will use data from this European instrument in the Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) program. The mission of SIMBIOS is to construct a consistent long-term dataset of ocean color (phytoplankton abundance) measurements made by multiple satellite instruments, including the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) and the Moderate-Resolution Imaging Spectroradiometer (MODIS). For more information about MERIS and ENVISAT, visit the ENVISAT home page. Image copyright European Space Agency

  5. Ultrahigh-resolution optical coherence elastography through a micro-endoscope: towards in vivo imaging of cellular-scale mechanics

    PubMed Central

    Fang, Qi; Curatolo, Andrea; Wijesinghe, Philip; Yeow, Yen Ling; Hamzah, Juliana; Noble, Peter B.; Karnowski, Karol; Sampson, David D.; Ganss, Ruth; Kim, Jun Ki; Lee, Woei M.; Kennedy, Brendan F.

    2017-01-01

    In this paper, we describe a technique capable of visualizing mechanical properties at the cellular scale deep in living tissue, by incorporating a gradient-index (GRIN)-lens micro-endoscope into an ultrahigh-resolution optical coherence elastography system. The optical system, after the endoscope, has a lateral resolution of 1.6 µm and an axial resolution of 2.2 µm. Bessel beam illumination and Gaussian mode detection are used to provide an extended depth-of-field of 80 µm, which is a 4-fold improvement over a fully Gaussian beam case with the same lateral resolution. Using this system, we demonstrate quantitative elasticity imaging of a soft silicone phantom containing a stiff inclusion and a freshly excised malignant murine pancreatic tumor. We also demonstrate qualitative strain imaging below the tissue surface on in situ murine muscle. The approach we introduce here can provide high-quality extended-focus images through a micro-endoscope with potential to measure cellular-scale mechanics deep in tissue. We believe this tool is promising for studying biological processes and disease progression in vivo. PMID:29188108

  6. Classification of C2C12 cells at differentiation by convolutional neural network of deep learning using phase contrast images.

    PubMed

    Niioka, Hirohiko; Asatani, Satoshi; Yoshimura, Aina; Ohigashi, Hironori; Tagawa, Seiichi; Miyake, Jun

    2018-01-01

    In the field of regenerative medicine, tremendous numbers of cells are necessary for tissue/organ regeneration. Automatic cell-culturing systems have now been developed, and the next step is constructing a non-invasive method to monitor the conditions of cells automatically. As an image analysis method, the convolutional neural network (CNN), one of the deep learning methods, is approaching human recognition levels. We constructed and applied a CNN algorithm for automatic recognition of cellular differentiation in the myogenic C2C12 cell line. Phase-contrast images of cultured C2C12 cells were prepared as the input dataset. In the differentiation process from myoblasts to myotubes, cellular morphology changes from a round shape to an elongated tubular shape due to fusion of the cells. The CNN abstracts features of cell shape and classifies the cells according to the number of days in culture after differentiation is induced. Changes in cellular shape depending on the number of days of culture (Day 0, Day 3, Day 6) are classified with 91.3% accuracy. Image analysis with CNNs has the potential to support the regenerative medicine industry.

  7. Spectrally-Based Bathymetric Mapping of a Dynamic, Sand-Bedded Channel: Niobrara River, Nebraska, USA

    NASA Astrophysics Data System (ADS)

    Dilbone, Elizabeth K.

    Methods for spectrally-based bathymetric mapping of rivers have mainly been developed and tested on clear-flowing, gravel-bedded channels, with limited application to turbid, sand-bedded rivers. Using hyperspectral images of the Niobrara River, Nebraska, and field-surveyed depth data, this study evaluated three methods of retrieving depth from remotely sensed data in a dynamic, sand-bedded channel. The first, regression-based approach paired in situ depth measurements and image pixel values to predict depth via Optimal Band Ratio Analysis (OBRA). The second approach used ground-based reflectance measurements to calibrate an OBRA relationship; for this approach, CASI images were atmospherically corrected to units of apparent surface reflectance using an empirical line calibration. For the final technique, we used Image-to-Depth Quantile Transformation (IDQT) to predict depth by linking the cumulative distribution function (CDF) of depth to the CDF of an image-derived variable. OBRA yielded the lowest overall depth retrieval error (0.0047 m) and the highest observed-versus-predicted R^2 (0.81). Although misalignment between field and image data was not problematic to OBRA's performance in this study, such issues present potential limitations to standard regression-based approaches like OBRA in dynamic, sand-bedded rivers. Field spectroscopy-based maps exhibited a slight shallow bias (0.0652 m) but provided reliable depth estimates for most of the study reach. IDQT had a strong deep bias, but still provided informative relative depth maps that portrayed general patterns of shallow and deep areas of the channel. The over-prediction of depth by IDQT highlights the need for an unbiased sampling strategy to define the CDF of depth. While each of the techniques tested in this study demonstrated the potential to provide accurate depth estimates in sand-bedded rivers, each method also was subject to certain constraints and limitations.
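
    The OBRA step described above amounts to a band-pair search: for every pair of bands, regress depth against X = ln(band_i / band_j) and keep the pair with the highest R^2. A compact numpy sketch follows, with synthetic reflectances standing in for real hyperspectral pixels and exponential attenuation as an assumed toy water-column model.

    ```python
    # Optimal Band Ratio Analysis (OBRA) in miniature.
    import itertools
    import numpy as np

    def obra(depths, bands):
        """bands: (N, K) positive pixel values; returns the best (i, j, R^2)."""
        best = (None, None, -np.inf)
        for i, j in itertools.combinations(range(bands.shape[1]), 2):
            x = np.log(bands[:, i] / bands[:, j])
            A = np.vstack([x, np.ones_like(x)]).T        # linear model in x
            coeffs = np.linalg.lstsq(A, depths, rcond=None)[0]
            resid = depths - A @ coeffs
            r2 = 1 - (resid ** 2).sum() / ((depths - depths.mean()) ** 2).sum()
            if r2 > best[2]:
                best = (i, j, r2)
        return best

    rng = np.random.default_rng(3)
    depth = rng.uniform(0.1, 1.5, 400)                   # surveyed depths (m)
    k = np.array([0.4, 1.1, 2.0])                        # toy attenuation coeffs
    bands = np.exp(-np.outer(depth, k)) + rng.normal(0, 0.01, (400, 3))
    print(obra(depth, np.clip(bands, 1e-3, None)))       # best pair and its R^2
    ```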

  8. Performance evaluation of 2D and 3D deep learning approaches for automatic segmentation of multiple organs on CT images

    NASA Astrophysics Data System (ADS)

    Zhou, Xiangrong; Yamada, Kazuma; Kojima, Takuya; Takayama, Ryosuke; Wang, Song; Zhou, Xinxin; Hara, Takeshi; Fujita, Hiroshi

    2018-02-01

    The purpose of this study is to evaluate and compare the performance of modern deep learning techniques for automatically recognizing and segmenting multiple organ regions in 3D CT images. CT image segmentation is one of the important tasks in medical image analysis and is still very challenging. Deep learning approaches have demonstrated the capability of scene recognition and semantic segmentation on natural images and have been used to address segmentation problems in medical images. Although several works have shown promising results for CT image segmentation using deep learning approaches, there is no comprehensive evaluation of how well deep learning segments multiple organs on different portions of CT scans. In this paper, we evaluated and compared the segmentation performance of two different deep learning approaches that used 2D and 3D deep convolutional neural networks (CNNs), with and without a pre-processing step. A conventional approach representing the state of the art in CT image segmentation without deep learning was also used for comparison. A dataset that includes 240 CT images scanned over different portions of human bodies was used for performance evaluation. Up to 17 types of organ regions in each CT scan were segmented automatically and compared to the human annotations by using the ratio of intersection over union (IU) as the criterion. The experimental results demonstrated that the IUs of the segmentation results had mean values of 79% and 67%, averaged over the 17 organ types, for the 3D and 2D deep CNNs, respectively. All the results of the deep learning approaches showed better accuracy and robustness than the conventional segmentation method based on probabilistic atlas and graph-cut methods. The effectiveness and usefulness of deep learning approaches were demonstrated for solving the multiple-organ segmentation problem on 3D CT images.
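
    The evaluation criterion used above, the ratio of intersection over union (IU), is straightforward to compute per organ from a predicted and a reference label map, as in this small numpy sketch (toy label maps, with label 0 assumed to be background):

    ```python
    # Per-organ intersection over union (IU) between two label maps.
    import numpy as np

    def per_organ_iou(pred, truth, n_labels):
        ious = {}
        for label in range(1, n_labels + 1):    # 0 reserved for background
            p, t = pred == label, truth == label
            union = np.logical_or(p, t).sum()
            if union:
                ious[label] = np.logical_and(p, t).sum() / union
        return ious

    truth = np.zeros((64, 64), int)
    truth[10:30, 10:30] = 1                     # "organ" 1
    truth[40:60, 40:60] = 2                     # "organ" 2
    pred = np.zeros_like(truth)
    pred[12:32, 12:32] = 1                      # slightly shifted prediction
    pred[40:55, 40:60] = 2                      # truncated prediction
    print(per_organ_iou(pred, truth, 2))
    ```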

  9. Science Observations of Deep Space One

    NASA Technical Reports Server (NTRS)

    Nelson, Robert M.; Baganal, Fran; Boice, Daniel C.; Britt, Daniel T.; Brown, Robert H.; Buratti, Bonnie J.; Creary, Frank; Ip, Wing-Huan; Meier, Roland; Oberst, Juergen

    1999-01-01

    During the Deep Space One (DS1) primary mission, the spacecraft will fly by asteroid 1992 KD and possibly comet Borrelly. There are two technologies being validated on DS1 that will provide science observations of these targets: the Miniature Integrated Camera Spectrometer (MICAS) and the Plasma Experiment for Planetary Exploration (PEPE). MICAS encompasses a camera, an ultraviolet imaging spectrometer, and an infrared imaging spectrometer. PEPE combines an ion and electron analyzer designed to determine the three-dimensional distribution of plasma over its field of view. MICAS includes two visible-wavelength imaging channels, an ultraviolet imaging spectrometer, and an infrared imaging spectrometer, all of which share a single 10-cm diameter telescope. Two types of visible-wavelength detectors, both operating between about 500 and 1000 nm, are used: a CCD with 13-microrad pixels and an 18-microrad-per-pixel, metal-on-silicon active pixel sensor (APS). Unlike the CCD, the APS includes the timing and control electronics on the chip along with the detector. The UV spectrometer spans 80 to 185 nm with 0.64-nm spectral resolution and 316-microrad pixels. The IR spectrometer covers the range from 1200 to 2400 nm with 6.6-nm resolution and 54-microrad pixels. PEPE includes a very low-power, low-mass micro-calorimeter to help understand plasma-surface interactions and a plasma analyzer to identify the individual molecules and atoms in the immediate vicinity of the spacecraft that have been eroded off the surface of asteroid 1992 KD. It employs common apertures with separate electrostatic energy analyzers. It measures electron and ion energies spanning a range of 3 eV to 30 keV with a resolution of five percent, and measures ion mass from one to 135 atomic mass units with five percent resolution. It electrostatically sweeps its field of view both in elevation and azimuth. Both MICAS and PEPE represent a new direction for the evolution of science instruments for interplanetary spacecraft: these two instruments incorporate a large fraction of the capability of five instruments that have typically flown on NASA's deep space missions. The Deep Space One science team acknowledges the support of Philip Varghese, David H. Lehman, Leslie Livesay, and Marc Rayman for providing invaluable assistance in making the science observations possible.

  10. 3D deeply supervised network for automated segmentation of volumetric medical images.

    PubMed

    Dou, Qi; Yu, Lequan; Chen, Hao; Jin, Yueming; Yang, Xin; Qin, Jing; Heng, Pheng-Ann

    2017-10-01

    While deep convolutional neural networks (CNNs) have achieved remarkable success in 2D medical image segmentation, it is still a difficult task for CNNs to segment important organs or structures from 3D medical images owing to several mutually affected challenges, including the complicated anatomical environments in volumetric images, optimization difficulties of 3D networks and inadequacy of training samples. In this paper, we present a novel and efficient 3D fully convolutional network equipped with a 3D deep supervision mechanism to comprehensively address these challenges; we call it 3D DSN. Our proposed 3D DSN is capable of conducting volume-to-volume learning and inference, which can eliminate redundant computations and alleviate the risk of over-fitting on limited training data. More importantly, the 3D deep supervision mechanism can effectively cope with the optimization problem of gradients vanishing or exploding when training a 3D deep model, accelerating the convergence speed and simultaneously improving the discrimination capability. Such a mechanism is developed by deriving an objective function that directly guides the training of both lower and upper layers in the network, so that the adverse effects of unstable gradient changes can be counteracted during the training procedure. We also employ a fully connected conditional random field model as a post-processing step to refine the segmentation results. We have extensively validated the proposed 3D DSN on two typical yet challenging volumetric medical image segmentation tasks: (i) liver segmentation from 3D CT scans and (ii) whole heart and great vessels segmentation from 3D MR images, by participating in two grand challenges held in conjunction with MICCAI. We have achieved competitive segmentation results to state-of-the-art approaches in both challenges with a much faster speed, corroborating the effectiveness of our proposed 3D DSN.
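
    The deep supervision objective described above can be summarized as a main segmentation loss plus weighted auxiliary losses from classifiers attached to hidden layers, so gradients are injected deep into the network. Below is a hedged PyTorch sketch of such an objective; the trilinear upsampling, weights, and tensor shapes are illustrative choices, not the paper's exact configuration.

    ```python
    # Deep-supervision loss: main loss + weighted auxiliary losses.
    import torch
    import torch.nn.functional as F

    def deep_supervision_loss(main_logits, aux_logits_list, target, aux_weights):
        """main_logits: (B, C, D, H, W); aux logits are coarser and upsampled."""
        loss = F.cross_entropy(main_logits, target)
        for logits, w in zip(aux_logits_list, aux_weights):
            up = F.interpolate(logits, size=target.shape[1:],
                               mode="trilinear", align_corners=False)
            loss = loss + w * F.cross_entropy(up, target)   # hidden-layer term
        return loss

    B, C = 2, 3
    target = torch.randint(0, C, (B, 16, 16, 16))
    main = torch.randn(B, C, 16, 16, 16, requires_grad=True)
    aux = [torch.randn(B, C, 8, 8, 8), torch.randn(B, C, 4, 4, 4)]
    print(deep_supervision_loss(main, aux, target, aux_weights=[0.4, 0.2]))
    ```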

  11. The Arctic Gakkel Vents (AGAVE) Expedition: Technology Development and the Search for Deep-Sea Hydrothermal Vent Fields Under the Arctic Ice Cap

    NASA Astrophysics Data System (ADS)

    Reves-Sohn, R. A.; Singh, H.; Humphris, S.; Shank, T.; Jakuba, M.; Kunz, C.; Murphy, C.; Willis, C.

    2007-12-01

    Deep-sea hydrothermal fields on the Gakkel Ridge beneath the Arctic ice cap provide perhaps the best terrestrial analogue for volcanically-hosted chemosynthetic biological communities that may exist beneath the ice-covered ocean of Europa. In both cases the key enabling technologies are robotic (untethered) vehicles that can swim freely under the ice and the supporting hardware and software. The development of robotic technology for deep- sea research beneath ice-covered oceans thus has relevance to both polar oceanography and future astrobiological missions to Europa. These considerations motivated a technology development effort under the auspices of NASA's ASTEP program and NSF's Office of Polar Programs that culminated in the AGAVE expedition aboard the icebreaker Oden from July 1 - August 10, 2007. The scientific objective was to study hydrothermal processes on the Gakkel Ridge, which is a key target for global studies of deep-sea vent fields. We developed two new autonomous underwater vehicles (AUVs) for the project, and deployed them to search for vent fields beneath the ice. We conducted eight AUV missions (four to completion) during the 40-day long expedition, which also included ship-based bathymetric surveys, CTD/rosette water column surveys, and wireline photographic and sampling surveys of remote sections of the Gakkel Ridge. The AUV missions, which lasted 16 hours on average and achieved operational depths of 4200 meters, returned sensor data that showed clear evidence of hydrothermal venting, but for a combination of technical reasons and time constraints, the AUVs did not ultimately return images of deep-sea vent fields. Nevertheless we used our wireline system to obtain images and samples of extensive microbial mats that covered fresh volcanic surfaces on a newly discovered set of volcanoes. The microbes appear to be living in regions where reducing and slightly warm fluids are seeping through cracks in the fresh volcanic terrain. These discoveries shed new light on the nature of volcanic and hydrothermal processes in the Arctic basin, and also demonstrate the importance of new technologies for advancing science beneath ice-covered oceans. Operationally, the AUV missions pushed the envelope of deep-sea technology. The recoveries were particularly difficult as it was necessary to have the vehicle find small pools of open water next to the ship, but in some cases the ice was in a state of regional compression such that no open water could be found or created. In these cases a well-calibrated, ship-based, short-baseline acoustic system was essential for successful vehicle recoveries. In all we were able to achieve a variety of operational and technological advances that provide stepping stones for future under-ice robotic missions, both on Earth and perhaps eventually on Europa.

  12. Deep Learning MR Imaging-based Attenuation Correction for PET/MR Imaging.

    PubMed

    Liu, Fang; Jang, Hyungseok; Kijowski, Richard; Bradshaw, Tyler; McMillan, Alan B

    2018-02-01

    Purpose To develop and evaluate the feasibility of deep learning approaches for magnetic resonance (MR) imaging-based attenuation correction (AC) (termed deep MRAC) in brain positron emission tomography (PET)/MR imaging. Materials and Methods A PET/MR imaging AC pipeline was built by using a deep learning approach to generate pseudo computed tomographic (CT) scans from MR images. A deep convolutional auto-encoder network was trained to identify air, bone, and soft tissue in volumetric head MR images coregistered to CT data for training. A set of 30 retrospective three-dimensional T1-weighted head images was used to train the model, which was then evaluated in 10 patients by comparing the generated pseudo CT scan to an acquired CT scan. A prospective study was then carried out in five subjects undergoing simultaneous PET/MR imaging with the proposed approach. Analysis of covariance and paired-sample t tests were used for statistical analysis to compare PET reconstruction error with deep MRAC and two existing MR imaging-based AC approaches against CT-based AC. Results Deep MRAC provides an accurate pseudo CT scan with a mean Dice coefficient of 0.971 ± 0.005 for air, 0.936 ± 0.011 for soft tissue, and 0.803 ± 0.021 for bone. Furthermore, deep MRAC provides good PET results, with average errors of less than 1% in most brain regions. Significantly lower PET reconstruction errors were realized with deep MRAC (-0.7% ± 1.1) compared with Dixon-based soft-tissue and air segmentation (-5.8% ± 3.1) and anatomic CT-based template registration (-4.8% ± 2.2). Conclusion The authors developed an automated approach that allows generation of discrete-valued pseudo CT scans (soft tissue, bone, and air) from a single high-spatial-resolution diagnostic-quality three-dimensional MR image and evaluated it in brain PET/MR imaging. This deep learning approach for MR imaging-based AC provided reduced PET reconstruction error relative to a CT-based standard within the brain compared with current MR imaging-based AC approaches.
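
    The per-class Dice coefficients reported above compare the discrete pseudo CT labels with the acquired CT labels; the computation itself is a simple overlap measure, sketched here in numpy with toy volumes standing in for coregistered scans.

    ```python
    # Dice overlap for the discrete pseudo-CT classes (air, soft tissue, bone).
    import numpy as np

    def dice(a, b):
        """Dice coefficient of two boolean masks."""
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    rng = np.random.default_rng(7)
    pseudo_ct = rng.integers(0, 3, (32, 32, 32))   # 0=air, 1=soft tissue, 2=bone
    real_ct = pseudo_ct.copy()
    real_ct[:4] = 0                                # introduce some disagreement
    for cls, name in enumerate(["air", "soft tissue", "bone"]):
        print(name, round(dice(pseudo_ct == cls, real_ct == cls), 3))
    ```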

  13. Photoacoustic imaging probe for detecting lymph nodes and spreading of cancer at various depths

    NASA Astrophysics Data System (ADS)

    Lee, Yong-Jae; Jeong, Eun-Ju; Song, Hyun-Woo; Ahn, Chang-Geun; Noh, Hyung Wook; Sim, Joo Yong; Song, Dong Hoon; Jeon, Min Yong; Lee, Susung; Kim, Heewon; Zhang, Meihua; Kim, Bong Kyu

    2017-09-01

    We propose a compact and easy-to-use photoacoustic imaging (PAI) probe structure, using a single strand of optical fiber and a beam combiner that doubly reflects acoustic waves, for convenient detection of lymph nodes and cancers. Conventional PAI probes have difficulty detecting lymph nodes just beneath the skin, or simultaneously investigating lymph nodes located in both shallow and deep regions without any supplementary material, because the light and acoustic beams intersect obliquely in the probe. To overcome these limitations and improve convenience, we propose a probe structure in which the illuminating light beam axis coincides with the axis of the ultrasound. The developed PAI probe was able to simultaneously image a wide range of positions, from shallow to deep regions, without the use of any supplementary material. Moreover, the proposed probe had low transmission losses for the light and acoustic beams. Therefore, the proposed PAI probe will be useful for easily detecting lymph nodes and cancers in real clinical settings.

  14. Redshifts for Spitzer-detected galaxies at z ∼ 6 - old stars in the first Gyr

    NASA Astrophysics Data System (ADS)

    Lacy, Mark; Stanway, Elizabeth; Chiu, Kuenley; Douglas, Laura; Eyles, Laurence; Bunker, Andrew

    2008-02-01

    We have identified a population of star-forming galaxies at z ∼ 6 through the i-drop Lyman-break technique using HST/ACS. Using Spitzer/IRAC imaging (tracing the rest-frame optical), we discovered from SED-fitting that some of this population harbour relatively old stars (300-500 Myr) with significant Balmer breaks, implying formation epochs of z ∼ 10. Our work suggests that UV photons from star formation at z ∼ 10 may play a key role in reionizing the Universe. However, these conclusions are drawn from the only field (GOODS-South) which has both deep Spitzer/IRAC imaging and many i-drop spectroscopic redshifts. Hence the global conclusions are compromised by cosmic variance. We have 72 hours on Spitzer to image 6 other sight-lines with deep ACS data; we propose to use GMOS multi-object mode to obtain spectroscopic redshifts, which are crucial to reduce the large uncertainties in fitting the stellar ages and masses, and hence in inferring the preceding star formation history and the contribution to reionization.

  15. A top-down manner-based DCNN architecture for semantic image segmentation.

    PubMed

    Qiao, Kai; Chen, Jian; Wang, Linyuan; Zeng, Lei; Yan, Bin

    2017-01-01

    Given their powerful feature representation for recognition, deep convolutional neural networks (DCNNs) have been driving rapid advances in high-level computer vision tasks. However, their performance in semantic image segmentation is still not satisfactory. Based on an analysis of the visual mechanism, we conclude that DCNNs operating in a purely bottom-up manner are not enough, because the semantic image segmentation task requires not only recognition but also visual attention capability. In this study, superpixels containing visual attention information are introduced in a top-down manner, and an extensible architecture is proposed to improve the segmentation results of current DCNN-based methods. We employ the current state-of-the-art fully convolutional network (FCN) and FCN with conditional random field (DeepLab-CRF) as baselines to validate our architecture. Experimental results on the PASCAL VOC segmentation task qualitatively show that coarse edges and erroneous segmentation results are clearly improved. We also quantitatively obtain an improvement of about 2%-3% in intersection over union (IOU) accuracy on the PASCAL VOC 2011 and 2012 test sets.
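
    One simple way to realize the superpixel-based, top-down refinement described above is to average the per-pixel class scores of an FCN within each superpixel, so that labels respect region boundaries. The sketch below uses SLIC superpixels from scikit-image with random scores standing in for a real softmax output; it is an illustration of the aggregation step only, not the proposed architecture.

    ```python
    # Aggregate per-pixel class scores over SLIC superpixels.
    import numpy as np
    from skimage.segmentation import slic

    rng = np.random.default_rng(5)
    image = rng.random((128, 128, 3))          # placeholder RGB image
    scores = rng.random((128, 128, 21))        # e.g. 21 PASCAL VOC class scores
    segments = slic(image, n_segments=200, compactness=10, start_label=0)

    refined = np.empty(segments.shape, dtype=int)
    for sp in np.unique(segments):
        mask = segments == sp
        refined[mask] = scores[mask].mean(axis=0).argmax()  # one label per region
    print(refined.shape, len(np.unique(refined)))
    ```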

  16. Transform- and multi-domain deep learning for single-frame rapid autofocusing in whole slide imaging.

    PubMed

    Jiang, Shaowei; Liao, Jun; Bian, Zichao; Guo, Kaikai; Zhang, Yongbing; Zheng, Guoan

    2018-04-01

    A whole slide imaging (WSI) system has recently been approved for primary diagnostic use in the US. The image quality and system throughput of WSI are largely determined by the autofocusing process. Traditional approaches acquire multiple images along the optical axis and maximize a figure of merit for autofocusing. Here we explore the use of deep convolutional neural networks (CNNs) to predict the focal position of the acquired image without axial scanning. We investigate the autofocusing performance with three illumination settings: incoherent Köhler illumination, partially coherent illumination with two plane waves, and one-plane-wave illumination. We acquire ~130,000 images with different defocus distances as the training data set. Different defocus distances lead to different spatial features of the captured images. However, relying solely on the spatial information leads to relatively poor performance of the autofocusing process. It is better to extract defocus features from transform domains of the acquired image. For incoherent illumination, the Fourier cutoff frequency is directly related to the defocus distance. Similarly, autocorrelation peaks are directly related to the defocus distance for two-plane-wave illumination. In our implementation, we use the spatial image, the Fourier spectrum, the autocorrelation of the spatial image, and combinations thereof as the inputs for the CNNs. We show that the information from the transform domains can improve the performance and robustness of the autofocusing process. The resulting focusing error is ~0.5 µm, which is within the 0.8-µm depth-of-field range. The reported approach requires little hardware modification for conventional WSI systems and the images can be captured on the fly without focus map surveying. It may find applications in WSI and time-lapse microscopy. The transform- and multi-domain approaches may also provide new insights for developing microscopy-related deep-learning networks. We have made our training and testing data set (~12 GB) open-source for the broad research community.
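
    Building the multi-domain CNN input described above is mostly FFT bookkeeping: alongside the spatial patch, one computes its log-magnitude Fourier spectrum and its autocorrelation (via the Wiener-Khinchin relation) and stacks them as channels. A minimal numpy sketch with a synthetic patch follows; the normalization and channel ordering are assumptions.

    ```python
    # Spatial, Fourier, and autocorrelation channels for a focus-prediction CNN.
    import numpy as np

    def multi_domain_channels(patch):
        F = np.fft.fftshift(np.fft.fft2(patch))
        spectrum = np.log1p(np.abs(F))              # defocus lowers the cutoff
        power = np.abs(np.fft.fft2(patch)) ** 2
        autocorr = np.fft.fftshift(np.real(np.fft.ifft2(power)))
        norm = lambda x: (x - x.min()) / (np.ptp(x) + 1e-12)
        return np.stack([norm(patch), norm(spectrum), norm(autocorr)], axis=0)

    patch = np.random.default_rng(9).random((224, 224))  # stand-in image patch
    print(multi_domain_channels(patch).shape)            # (3, 224, 224) input
    ```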

  17. VizieR Online Data Catalog: Variability-selected AGN in Chandra DFS (Trevese+, 2008)

    NASA Astrophysics Data System (ADS)

    Trevese, D.; Boutsia, K.; Vagnetti, F.; Cappellaro, E.; Puccetti, S.

    2008-11-01

    Variability is a property shared by virtually all active galactic nuclei (AGNs) and has been adopted as a criterion for their selection using data from multi-epoch surveys. Low-luminosity AGNs (LLAGNs) are contaminated by the light of their host galaxies and therefore cannot be detected by the usual colour techniques. For this reason, their evolution in cosmic time is poorly known. Consistency with the evolution derived from X-ray-detected samples has not been clearly established so far, partly because the low-luminosity population consists of a mixture of different object types. LLAGNs can be detected through the nuclear optical variability of extended objects. Several variability surveys have been, or are being, conducted for the detection of supernovae (SNe). We propose to re-analyse these SN data using a variability criterion optimised for AGN detection, to select a new AGN sample and study its properties. We analysed images acquired with the Wide Field Imager at the 2.2m ESO/MPI telescope, in the framework of the STRESS supernova survey. We selected the AXAF field centred on the Chandra Deep Field South where, besides the deep X-ray survey, various optical data exist, originating in the EIS and COMBO-17 photometric surveys and the spectroscopic database of GOODS. (1 data file).

  18. Weak Lensing Study in VOICE Survey II: Shear Bias Calibrations

    NASA Astrophysics Data System (ADS)

    Liu, Dezi; Fu, Liping; Liu, Xiangkun; Radovich, Mario; Wang, Chao; Pan, Chuzhong; Fan, Zuhui; Covone, Giovanni; Vaccari, Mattia; Botticella, Maria Teresa; Capaccioli, Massimo; De Cicco, Demetra; Grado, Aniello; Miller, Lance; Napolitano, Nicola; Paolillo, Maurizio; Pignata, Giuliano

    2018-05-01

    The VST Optical Imaging of the CDFS and ES1 Fields (VOICE) Survey aims to obtain deep optical ugri imaging of the CDFS and ES1 fields using the VLT Survey Telescope (VST). At present, the observations of the CDFS field have been completed, comprising in total about 4.9 deg^2 down to rAB ˜ 26 mag. In the companion paper by Fu et al. (2018), we present the weak lensing shear measurements for r-band images with seeing ≤ 0.9 arcsec. In this paper, we perform image simulations to calibrate possible biases of the measured shear signals. Statistically, the properties of the simulated point spread function (PSF) and galaxies show good agreement with those of the observations. The multiplicative bias is calibrated to an accuracy of ˜3.0%. We study the sensitivity of the bias to undetected faint galaxies and to neighboring galaxies. We find that undetected galaxies contribute to the multiplicative bias at the level of ˜0.3%. Further analysis shows that galaxies with lower signal-to-noise ratio (SNR) are affected more significantly because the undetected galaxies skew the background noise distribution. For the neighboring galaxies, we find that although most have been rejected in the shape measurement procedure, about one third of them still remain in the final shear sample. They show a larger ellipticity dispersion and contribute ˜0.2% of the multiplicative bias. Such a bias can be removed by further eliminating these neighboring galaxies, but doing so considerably reduces the effective number density of galaxies; efficient methods should therefore be developed for future weak lensing deep surveys.
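
    Calibrations of this kind commonly model the measured shear per component as g_obs = (1 + m) g_true + c and fit the multiplicative bias m and additive bias c on image simulations. A minimal sketch with illustrative numbers (not the VOICE pipeline):

    ```python
    import numpy as np

    def shear_bias(g_true, g_obs):
        slope, intercept = np.polyfit(g_true, g_obs, 1)
        return slope - 1.0, intercept           # multiplicative m, additive c

    rng = np.random.default_rng(1)
    g_true = np.repeat([-0.04, -0.02, 0.0, 0.02, 0.04], 20000)  # input shears
    g_obs = 1.03 * g_true + 0.001 + rng.normal(0.0, 0.25, g_true.size)
    m, c = shear_bias(g_true, g_obs)            # recovers m ~ 0.03, c ~ 0.001
    ```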

  19. Clusters, Groups, and Filaments in the Chandra Deep Field-South up to Redshift 1

    NASA Astrophysics Data System (ADS)

    Dehghan, S.; Johnston-Hollitt, M.

    2014-03-01

    We present a comprehensive structure detection analysis of the 0.3 deg^2 area of the MUSYC-ACES field, which covers the Chandra Deep Field-South (CDFS). Using a density-based clustering algorithm on the MUSYC and ACES photometric and spectroscopic catalogs, we find 62 overdense regions up to redshifts of 1, including clusters, groups, and filaments. We also present the detection of a relatively small void of ~10 Mpc^2 at z ~ 0.53. All structures are confirmed using the DBSCAN method, including the detection of nine structures previously reported in the literature. We present a catalog of all structures present, including their central position, mean redshift, velocity dispersions, and classification based on their morphological and spectroscopic distributions. In particular, we find 13 galaxy clusters and 6 large groups/small clusters. Comparison of these massive structures with published XMM-Newton imaging (where available) shows that 80% of these structures are associated with diffuse, soft-band (0.4-1 keV) X-ray emission, including 90% of all objects classified as clusters. The presence of soft-band X-ray emission in these massive structures (M200 >= 4.9 × 10^13 M⊙) provides a strong independent confirmation of our methodology and classification scheme. In the closest two clusters identified (z < 0.13) high-quality optical imaging from the Deep2c field of the Garching-Bonn Deep Survey reveals the cD galaxies and demonstrates that they sit at the center of the detected X-ray emission. Nearly 60% of the clusters, groups, and filaments are detected in the known enhanced density regions of the CDFS at z ≈ 0.13, 0.52, 0.68, and 0.73. Additionally, all of the clusters, bar the most distant, are found in these overdense redshift regions. Many of the clusters and groups exhibit signs of ongoing formation seen in their velocity distributions, position within the detected cosmic web, and in one case through the presence of tidally disrupted central galaxies exhibiting trails of stars. These results all provide strong support for hierarchical structure formation up to redshifts of 1.
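
    For readers unfamiliar with the confirmation step, density-based clustering of a galaxy catalog looks roughly like the sketch below (eps, min_samples, and the redshift scaling are illustrative assumptions, not the values used in the paper):

    ```python
    import numpy as np
    from sklearn.cluster import DBSCAN

    def find_structures(ra, dec, z, eps=0.05, min_samples=10):
        """ra, dec in degrees; z is redshift. Returns a label per galaxy."""
        # crude scaling so the sky and redshift axes are roughly comparable
        X = np.column_stack([ra * np.cos(np.radians(dec)), dec, 50.0 * z])
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(X)
        return labels  # -1 marks field galaxies; other labels are overdensities
    ```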

  20. A novel end-to-end classifier using domain transferred deep convolutional neural networks for biomedical images.

    PubMed

    Pang, Shuchao; Yu, Zhezhou; Orgun, Mehmet A

    2017-03-01

    Highly accurate classification of biomedical images is an essential task in the clinical diagnosis of numerous medical diseases identified from those images. Traditional image classification methods that combine hand-crafted image feature descriptors with various classifiers cannot effectively improve the accuracy rate or meet the high requirements of biomedical image classification. The same holds true for artificial neural network models that are directly trained on limited biomedical images, or that are used as a black box to extract deep features learned on another, distant dataset. In this study, we propose a highly reliable and accurate end-to-end classifier for all kinds of biomedical images via deep learning and transfer learning. We first apply a domain-transferred deep convolutional neural network to build a deep model, and then develop an overall deep learning architecture based on the raw pixels of the original biomedical images using supervised training. In our model, we need neither manually design the feature space, nor seek an effective feature-vector classifier, nor segment specific detection objects and image patches, which are the main technological difficulties in the adoption of traditional image classification methods. Moreover, we do not need to be concerned with whether there are large training sets of annotated biomedical images, affordable parallel computing resources featuring GPUs, or long training times for a perfect deep model, which are the main problems in training deep neural networks for biomedical image classification as observed in recent works. With a simple data augmentation method and fast convergence, our algorithm achieves high accuracy and outstanding classification ability for biomedical images. We have evaluated our classifier on several well-known public biomedical datasets and compared it with several state-of-the-art approaches. In summary, we propose a robust automated end-to-end classifier for biomedical images based on a domain-transferred deep convolutional neural network model whose highly reliable and accurate performance has been confirmed on several public biomedical image datasets.
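
    A minimal sketch of the domain-transfer idea (the backbone, the layer freezing, and the two-class head are illustrative assumptions, not the authors' architecture): an ImageNet-pretrained CNN is adapted to raw biomedical image pixels by retraining its final layer. The weights argument is for recent torchvision releases; older ones use pretrained=True.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for p in model.parameters():
        p.requires_grad = False                    # keep transferred features fixed
    model.fc = nn.Linear(model.fc.in_features, 2)  # e.g., two diagnostic classes
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    def train_step(images, labels):
        """images: (N, 3, 224, 224) float tensor; labels: (N,) long tensor."""
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()
    ```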

  1. Anomalies of rupture velocity in deep earthquakes

    NASA Astrophysics Data System (ADS)

    Suzuki, M.; Yagi, Y.

    2010-12-01

    Explaining deep seismicity is a long-standing challenge in earth science. Deeper than 300 km, the occurrence rate of earthquakes with depth remains at a low level until ~530 km depth, then rises until ~600 km, and finally terminates near 700 km. Given the difficulty of estimating fracture properties and observing the stress field in the mantle transition zone (410-660 km), the seismic source processes of deep earthquakes are the most important information for understanding the distribution of deep seismicity. However, in a compilation of seismic source models of deep earthquakes, the source parameters for individual deep earthquakes are quite varied [Frohlich, 2006]. Rupture velocities for deep earthquakes estimated using seismic waveforms range from 0.3 to 0.9 Vs, where Vs is the shear wave velocity, a considerably wider range than the velocities for shallow earthquakes. The uncertainty of seismic source models prevents us from determining the main characteristics of the rupture process and understanding the physical mechanisms of deep earthquakes. Recently, the back projection method has been used to derive a detailed and stable seismic source image from dense seismic network observations [e.g., Ishii et al., 2005; Walker et al., 2005]. Using this method, we can obtain an image of the seismic source process from the observed data without a priori constraints or discarding parameters. We applied the back projection method to teleseismic P-waveforms of 24 large, deep earthquakes (moment magnitude Mw ≥ 7.0, depth ≥ 300 km) recorded since 1994 by the Data Management Center of the Incorporated Research Institutions for Seismology (IRIS-DMC) and reported in the U.S. Geological Survey (USGS) catalog, and constructed seismic source models of deep earthquakes. By imaging the seismic rupture process for a set of recent deep earthquakes, we found that the rupture velocities are less than about 0.6 Vs except in the depth range of 530 to 600 km. This is consistent with the depth variation of deep seismicity: it peaks between about 530 and 600 km, where the fast-rupture earthquakes (greater than 0.7 Vs) are observed. Similarly, aftershock productivity is particularly low from 300 to 550 km depth and increases markedly at depths greater than 550 km [e.g., Persh and Houston, 2004]. We propose that large fracture surface energy (Gc) values for deep earthquakes generally prevent the acceleration of dynamic rupture propagation and the generation of earthquakes between 300 and 700 km depth, whereas small Gc values in the exceptional depth range promote dynamic rupture propagation and explain the seismicity peak near 600 km.
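
    At its core, back projection stacks station waveforms shifted by predicted travel times over a grid of candidate source points; stack power peaks where a grid point matches the radiating patch. A minimal sketch under strong simplifications (all names are illustrative, not the authors' code):

    ```python
    import numpy as np

    def back_project(traces, dt, travel_times, window=50):
        """traces: (n_sta, n_samp) waveforms; travel_times: (n_grid, n_sta) in s."""
        power = np.zeros(travel_times.shape[0])
        for g, tt in enumerate(travel_times):
            shifts = np.round(tt / dt).astype(int)
            stack = np.zeros(traces.shape[1])
            for tr, s in zip(traces, shifts):
                stack += np.roll(tr, -s)            # align predicted arrivals at t = 0
            power[g] = np.sum(stack[:window] ** 2)  # beam power near the onset
        return power                                # peaks at the radiating grid point
    ```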

  2. What do you gain from deconvolution? - Observing faint galaxies with the Hubble Space Telescope Wide Field Camera

    NASA Technical Reports Server (NTRS)

    Schade, David J.; Elson, Rebecca A. W.

    1993-01-01

    We describe experiments with deconvolutions of simulations of deep HST Wide Field Camera images containing faint, compact galaxies to determine under what circumstances there is a quantitative advantage to image deconvolution, and explore whether it is (1) helpful for distinguishing between stars and compact galaxies, or between spiral and elliptical galaxies, and whether it (2) improves the accuracy with which characteristic radii and integrated magnitudes may be determined. The Maximum Entropy and Richardson-Lucy deconvolution algorithms give the same results. For medium and low S/N images, deconvolution does not significantly improve our ability to distinguish between faint stars and compact galaxies, nor between spiral and elliptical galaxies. Measurements from both raw and deconvolved images are biased and must be corrected; it is easier to quantify and remove the biases for cases that have not been deconvolved. We find no benefit from deconvolution for measuring luminosity profiles, but these results are limited to low S/N images of very compact (often undersampled) galaxies.
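
    One of the two algorithms compared, Richardson-Lucy deconvolution, is available off the shelf in scikit-image; the toy example below uses a stand-in PSF and galaxy rather than the paper's simulations (the keyword is num_iter in recent releases, iterations in older ones):

    ```python
    import numpy as np
    from scipy.signal import convolve2d
    from skimage.restoration import richardson_lucy

    y, x = np.mgrid[:64, :64]
    galaxy_image = np.exp(-np.hypot(x - 32, y - 32) / 3.0)  # toy compact galaxy
    psf = np.ones((5, 5)) / 25.0                            # stand-in for the WFC PSF
    blurred = convolve2d(galaxy_image, psf, mode="same", boundary="symm")
    restored = richardson_lucy(blurred, psf, num_iter=30)   # deconvolved estimate
    ```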

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gwyn, Stephen D. J., E-mail: Stephen.Gwyn@nrc-cnrc.gc.ca

    This paper describes the image stacks and catalogs of the Canada-France-Hawaii Telescope Legacy Survey produced using the MegaPipe data pipeline at the Canadian Astronomy Data Centre. The Legacy Survey is divided into two parts. The Deep Survey consists of four fields each of 1 deg^2, with magnitude limits (50% completeness for point sources) of u = 27.5, g = 27.9, r = 27.7, i = 27.4, and z = 26.2. It contains 1.6 × 10^6 sources. The Wide Survey consists of 150 deg^2 split over four fields, with magnitude limits of u = 26.0, g = 26.5, r = 25.9, i = 25.7, and z = 24.6. It contains 3 × 10^7 sources. This paper describes the calibration, image stacking, and catalog generation process. The images and catalogs are available on the web through several interfaces: normal image and text file catalog downloads, a 'Google Sky' interface, an image cutout service, and a catalog database query service.

  4. High-contrast 3D microscopic imaging of deep layers in a biological medium

    NASA Astrophysics Data System (ADS)

    Faridian, Ahmad; Pedrini, Giancarlo; Osten, Wolfgang

    2014-03-01

    Multilayer imaging of biological specimens is a demanding field of research, and scattering is one of the major obstacles in imaging the internal layers of a specimen. Although in many studies the biological object is assumed to be a weak scatterer, this condition is hardly satisfied for sub-millimeter-sized organisms. The scattering medium is inhomogeneously distributed inside the specimen; therefore, the scattering that occurs above a given internal layer of interest differs from that below it. As a result, the amount of information that can be collected for a specific point in the layer differs between views. An opposed-view dark-field digital holographic microscope (DHM) has been implemented in this work to collect the information concurrently from both views and increase the image quality. Implementing a DHM system also makes it possible to perform digital refocusing and obtain multilayer images from each side without depth scanning of the object. Results are presented and discussed for a Drosophila embryo.
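
    The digital refocusing that DHM makes possible is commonly implemented as angular-spectrum propagation of the reconstructed complex field. A minimal sketch of that generic method (an assumption for illustration; the paper does not specify its algorithm):

    ```python
    import numpy as np

    def refocus(field, wavelength, dz, dx):
        """Angular-spectrum propagation: field is a complex (ny, nx) array,
        dx the pixel pitch, all lengths in the same units."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=dx)
        fy = np.fft.fftfreq(ny, d=dx)
        FX, FY = np.meshgrid(fx, fy)
        arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
        kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # clamp evanescent terms
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))
    ```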

  5. Intelligent Detection of Structure from Remote Sensing Images Based on Deep Learning Method

    NASA Astrophysics Data System (ADS)

    Xin, L.

    2018-04-01

    Utilizing high-resolution remote sensing images for earth observation has become a common method of land use monitoring. Traditional image interpretation requires substantial human participation, which is inefficient and makes accuracy difficult to guarantee. At present, artificial intelligence methods such as deep learning have numerous advantages in image recognition. By means of a large number of remote sensing image samples and deep neural network models, we can rapidly identify objects of interest such as buildings. Whether in terms of efficiency or accuracy, the deep learning method is superior. This paper investigates the deep learning method using a large number of remote sensing image samples and verifies the feasibility of building extraction through experiments.

  6. Intraoperative MR-guided DBS implantation for treating PD and ET

    NASA Astrophysics Data System (ADS)

    Liu, Haiying; Maxwell, Robert E.; Truwit, Charles L.

    2001-05-01

    Deep brain stimulator (DBS) implantation is a promising treatment alternative for suppressing motor tremor symptoms in patients with Parkinson disease (PD). The main objective is to develop a minimally invasive approach that uses high-spatial-resolution, high-soft-tissue-contrast MR imaging techniques to guide the surgical placement of the DBS. In the MR-guided procedure, high-spatial-resolution MR images were obtained intra-operatively and used to stereotactically target a specific deep brain location. The craniotomy was performed in front of the magnet, outside of the 10 Gauss line. Aided by the positional registration assembly of the stereotactic head frame, the target location (VIM, GPi, or STN) in the deep brain was identified and measured from the MR images in reference to the markers in the calibration assembly of the head frame before the burr-hole preparation. In 20 patients, MR-guided DBS implantations have been performed according to the new methodology. MR-guided DBS implantation at high magnetic field strength has been shown to be feasible and desirable. In addition to the improved outcome, this offers a new surgical approach in which intra-operative visualization is possible during the intervention, and any complications such as bleeding can be assessed in situ immediately prior to dural closure.

  7. Internal-illumination photoacoustic computed tomography

    NASA Astrophysics Data System (ADS)

    Li, Mucong; Lan, Bangxin; Liu, Wei; Xia, Jun; Yao, Junjie

    2018-03-01

    We report a photoacoustic computed tomography (PACT) system using a customized optical fiber with a cylindrical diffuser to internally illuminate deep targets. The traditional external light illumination in PACT usually limits the penetration depth to a few centimeters from the tissue surface, mainly due to strong optical attenuation along the light propagation path from the outside in. By contrast, internal light illumination, with external ultrasound detection, can potentially detect much deeper targets. Different from previous internal illumination PACT implementations using forward-looking optical fibers, our internal-illumination PACT system uses a customized optical fiber with a 3-cm-long conoid needle diffuser attached to the fiber tip, which can homogeneously illuminate the surrounding space and substantially enlarge the field of view. We characterized the internal illumination distribution and PACT system performance. We performed tissue phantom and in vivo animal studies to further demonstrate the superior imaging depth using internal illumination over external illumination. We imaged a 7.5-cm-deep leaf target embedded in optically scattering medium and the beating heart of a mouse overlaid with 3.7-cm-thick chicken tissue. Our results have collectively demonstrated that the internal light illumination combined with external ultrasound detection might be a useful strategy to improve the penetration depth of PACT in imaging deep organs of large animals and humans.

  8. Violent Interaction Detection in Video Based on Deep Learning

    NASA Astrophysics Data System (ADS)

    Zhou, Peipei; Ding, Qinghai; Luo, Haibo; Hou, Xinglin

    2017-06-01

    Violent interaction detection is of vital importance in some video surveillance scenarios such as railway stations, prisons, or psychiatric centres. Existing vision-based methods are mainly based on hand-crafted features, such as statistical features between motion regions, leading to poor adaptability to other datasets. Inspired by the success of convolutional networks on common activity recognition, we construct a FightNet to represent complicated visual violent interactions. In this paper, a new input modality, the image acceleration field, is proposed to better extract motion attributes. First, each video is decomposed into RGB frames. Second, the optical flow field is computed from consecutive frames, and the acceleration field is obtained from the optical flow field. Third, FightNet is trained with three kinds of input modalities, i.e., RGB images for the spatial network, and optical flow images and acceleration images for the temporal networks. By fusing the results from the different inputs, we decide whether a video contains a violent event. To provide researchers a common ground for comparison, we have collected a violent interaction dataset (VID) containing 2314 videos, 1077 with fights and 1237 without. Experimental comparison with other algorithms demonstrates that the proposed model for violent interaction detection achieves higher accuracy and better robustness.
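
    A minimal sketch of the proposed acceleration-field modality (an illustration, not the authors' implementation): dense optical flow is computed for two consecutive frame pairs with OpenCV, and the acceleration field is approximated by differencing the flow fields:

    ```python
    import cv2
    import numpy as np

    def acceleration_field(f0, f1, f2):
        """f0, f1, f2: consecutive grayscale frames (uint8 arrays)."""
        params = dict(pyr_scale=0.5, levels=3, winsize=15,
                      iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        flow01 = cv2.calcOpticalFlowFarneback(f0, f1, None, **params)
        flow12 = cv2.calcOpticalFlowFarneback(f1, f2, None, **params)
        return flow12 - flow01  # (H, W, 2), pixels per frame squared
    ```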

  9. Resolving the fine-scale velocity structure of continental hyperextension at the Deep Galicia Margin using full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Davy, R. G.; Morgan, J. V.; Minshull, T. A.; Bayrakci, G.; Bull, J. M.; Klaeschen, D.; Reston, T. J.; Sawyer, D. S.; Lymer, G.; Cresswell, D.

    2018-01-01

    Continental hyperextension during magma-poor rifting at the Deep Galicia Margin is characterized by a complex pattern of faulting, thin continental fault blocks and the serpentinization, with local exhumation, of mantle peridotites along the S-reflector, interpreted as a detachment surface. To fully understand the evolution of these features, it is important to image the structure seismically and to model the velocity structure at the greatest possible resolution. Traveltime tomography models have revealed the long-wavelength velocity structure of this hyperextended domain, but are often insufficient to match accurately the short-wavelength structure observed in reflection seismic imaging. Here, we demonstrate the application of 2-D time-domain acoustic full-waveform inversion (FWI) to deep-water seismic data collected at the Deep Galicia Margin, in order to attain a high-resolution velocity model of continental hyperextension. We have used several quality assurance procedures to assess the velocity model, including comparison of the observed and modeled waveforms, checkerboard tests, testing of parameters and inversion strategy, and comparison with the migrated reflection image. Our final model exhibits an increase in the resolution of subsurface velocities, with particular improvement observed in the westernmost continental fault blocks, with a clear rotation of the velocity field to match steeply dipping reflectors. Across the S-reflector, there is a sharpening in the velocity contrast, with lower velocities beneath S indicative of preferential mantle serpentinization. This study supports the hypothesis that normal faulting acts to hydrate the upper-mantle peridotite, observed as a systematic decrease in seismic velocities, consistent with increased serpentinization. Our results confirm the feasibility of applying the FWI method to sparse, deep-water crustal data sets.

  10. DeepInfer: open-source deep learning deployment toolkit for image-guided therapy

    NASA Astrophysics Data System (ADS)

    Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang

    2017-03-01

    Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.

  11. DeepInfer: Open-Source Deep Learning Deployment Toolkit for Image-Guided Therapy.

    PubMed

    Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A; Kapur, Tina; Wells, William M; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang

    2017-02-11

    Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.

  12. DeepInfer: Open-Source Deep Learning Deployment Toolkit for Image-Guided Therapy

    PubMed Central

    Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang

    2017-01-01

    Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose “DeepInfer” – an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections. PMID:28615794

  13. VizieR Online Data Catalog: ALMA submm galaxies multi-wavelength data (Simpson+, 2017)

    NASA Astrophysics Data System (ADS)

    Simpson, J. M.; Smail, I.; Swinbank, A. M.; Ivison, R. J.; Dunlop, J. S.; Geach, J. E.; Almaini, O.; Arumugam, V.; Bremer, M. N.; Chen, C.-C.; Conselice, C.; Coppin, K. E. K.; Farrah, D.; Ibar, E.; Hartley, W. G.; Ma, C. J.; Michalowski, M. J.; Scott, D.; Spaans, M.; Thomson, A. P.; van der Werf, P. P.

    2017-11-01

    In previous work, we presented the source catalog, number counts, and far-infrared morphologies of the 52 SMGs that were detected in 30 ALMA maps (see Simpson+ 2015ApJ...799...81S, 2015ApJ...807..128S). The UKIDSS observations of the ~0.8deg2 UDS comprise four Wide-Field Camera (WFCAM) pointings in the J-, H-, and K-bands. In this paper, we use the images and catalogs released as part of the UKIDSS data release 8 (DR8). The DR8 release contains data taken between 2005 and 2010, and the final J-, H-, and K-band mosaics have a median 5σ depth (2" apertures) of J=24.9, H=24.2, and K=24.6, respectively. Deep observations of the UDS have also been taken in the U-band with Megacam at the Canada-France-Hawaii Telescope (CFHT) and in the B, V, R, i', and z' bands with Suprime-cam at the Subaru telescope. Furthermore, deep Spitzer data, obtained as part of the SpUDS program (PI: J. Dunlop), provide imaging reaching a 5σ depth of m3.6=24.2 and m4.5=24.0 at 3.6um and 4.5um, respectively. The UDS field was observed at 250, 350, and 500um with the Spectral and Photometric Imaging Receiver (SPIRE) onboard the Herschel Space Observatory as part of the Herschel Multi-tiered Extragalactic Survey (HerMES). The UDS field was observed by the VLA at 1.4GHz as part of the project UDS20 (V. Arumugam et al. 2017, in preparation). A total of 14 pointings were used to mosaic an area of ~1.3deg2 centered on the UDS field. (2 data files).

  14. Gemini Observations of Galaxies in Rich Early Environments (GOGREEN) I: survey description

    NASA Astrophysics Data System (ADS)

    Balogh, Michael L.; Gilbank, David G.; Muzzin, Adam; Rudnick, Gregory; Cooper, Michael C.; Lidman, Chris; Biviano, Andrea; Demarco, Ricardo; McGee, Sean L.; Nantais, Julie B.; Noble, Allison; Old, Lyndsay; Wilson, Gillian; Yee, Howard K. C.; Bellhouse, Callum; Cerulo, Pierluigi; Chan, Jeffrey; Pintos-Castro, Irene; Simpson, Rane; van der Burg, Remco F. J.; Zaritsky, Dennis; Ziparo, Felicia; Alonso, María Victoria; Bower, Richard G.; De Lucia, Gabriella; Finoguenov, Alexis; Lambas, Diego Garcia; Muriel, Hernan; Parker, Laura C.; Rettura, Alessandro; Valotto, Carlos; Wetzel, Andrew

    2017-10-01

    We describe a new Large Program in progress on the Gemini North and South telescopes: Gemini Observations of Galaxies in Rich Early Environments (GOGREEN). This is an imaging and deep spectroscopic survey of 21 galaxy systems at 1 < z < 1.5, selected to span a factor >10 in halo mass. The scientific objectives include measuring the role of environment in the evolution of low-mass galaxies, and measuring the dynamics and stellar contents of their host haloes. The targets are selected from the SpARCS, SPT, COSMOS, and SXDS surveys, to be the evolutionary counterparts of today's clusters and groups. The new red-sensitive Hamamatsu detectors on GMOS, coupled with the nod-and-shuffle sky subtraction, allow simultaneous wavelength coverage over λ ˜ 0.6-1.05 μm, and this enables a homogeneous and statistically complete redshift survey of galaxies of all types. The spectroscopic sample targets galaxies with AB magnitudes z′ < 24.25 and [3.6] μm < 22.5, and is therefore statistically complete for stellar masses M* ≳ 1010.3 M⊙, for all galaxy types and over the entire redshift range. Deep, multiwavelength imaging has been acquired over larger fields for most systems, spanning u through K, in addition to deep IRAC imaging at 3.6 μm. The spectroscopy is ˜50 per cent complete as of semester 17A, and we anticipate a final sample of ˜500 new cluster members. Combined with existing spectroscopy on the brighter galaxies from GCLASS, SPT, and other sources, GOGREEN will be a large legacy cluster and field galaxy sample at this redshift that spectroscopically covers a wide range in stellar mass, halo mass, and clustercentric radius.

  15. The DEEP2 Galaxy Redshift Survey: Design, Observations, Data Reduction, and Redshifts

    NASA Technical Reports Server (NTRS)

    Newman, Jeffrey A.; Cooper, Michael C.; Davis, Marc; Faber, S. M.; Coil, Alison L.; Guhathakurta, Puragra; Koo, David C.; Phillips, Andrew C.; Conroy, Charlie; Dutton, Aaron A.; et al.

    2013-01-01

    We describe the design and data analysis of the DEEP2 Galaxy Redshift Survey, the densest and largest high-precision redshift survey of galaxies at z approx. 1 completed to date. The survey was designed to conduct a comprehensive census of massive galaxies, their properties, environments, and large-scale structure down to absolute magnitude M_B = -20 at z approx. 1 via approx. 90 nights of observation on the Keck telescope. The survey covers an area of 2.8 Sq. deg divided into four separate fields observed to a limiting apparent magnitude of R_AB = 24.1. Objects with z approx. < 0.7 are readily identifiable using BRI photometry and rejected in three of the four DEEP2 fields, allowing galaxies with z > 0.7 to be targeted approx. 2.5 times more efficiently than in a purely magnitude-limited sample. Approximately 60% of eligible targets are chosen for spectroscopy, yielding nearly 53,000 spectra and more than 38,000 reliable redshift measurements. Most of the targets that fail to yield secure redshifts are blue objects that lie beyond z approx. 1.45, where the [O ii] 3727 Ang. doublet lies in the infrared. The DEIMOS 1200 line mm^-1 grating used for the survey delivers high spectral resolution (R approx. 6000), accurate and secure redshifts, and unique internal kinematic information. Extensive ancillary data are available in the DEEP2 fields, particularly in the Extended Groth Strip, which has evolved into one of the richest multiwavelength regions on the sky. This paper is intended as a handbook for users of the DEEP2 Data Release 4, which includes all DEEP2 spectra and redshifts, as well as for the DEEP2 DEIMOS data reduction pipelines. Extensive details are provided on object selection, mask design, biases in target selection and redshift measurements, the spec2d two-dimensional data-reduction pipeline, the spec1d automated redshift pipeline, and the zspec visual redshift verification process, along with examples of instrumental signatures or other artifacts that in some cases remain after data reduction. Redshift errors and catastrophic failure rates are assessed through more than 2000 objects with duplicate observations. Sky subtraction is essentially photon-limited even under bright OH sky lines; we describe the strategies that permitted this, based on high image stability, accurate wavelength solutions, and powerful B-spline modeling methods. We also investigate the impact of targets that appear to be single objects in ground-based targeting imaging but prove to be composite in Hubble Space Telescope data; they constitute several percent of targets at z approx. 1, approaching approx. 5%-10% at z > 1.5. Summary data are given that demonstrate the superiority of DEEP2 over other deep high-precision redshift surveys at z approx. 1 in terms of redshift accuracy, sample number density, and amount of spectral information. We also provide an overview of the scientific highlights of the DEEP2 survey thus far.
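
    As a toy illustration of the B-spline sky modeling mentioned above (a drastic simplification of the spec2d approach; the knot layout and the use of a mean sky spectrum are assumptions), one can fit a least-squares B-spline to sky pixels versus wavelength and subtract the model from every slit row:

    ```python
    import numpy as np
    from scipy.interpolate import splrep, splev

    def subtract_sky(wave, flux, sky_rows, n_knots=200):
        """wave: (n_pix,) increasing; flux: (n_rows, n_pix); sky_rows: off-object rows."""
        sky = flux[sky_rows].mean(axis=0)                # mean sky spectrum
        knots = np.linspace(wave[1], wave[-2], n_knots)  # interior knots
        tck = splrep(wave, sky, t=knots, k=3)            # least-squares B-spline
        return flux - splev(wave, tck)[None, :]          # subtract model from all rows
    ```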

  16. A tribute to Peter A. Rona: A Russian Perspective

    NASA Astrophysics Data System (ADS)

    Sagalevich, Anatoly; Lutz, Richard A.

    2015-11-01

    In July 1985 Peter Rona led a cruise of the National Oceanic and Atmospheric Administration (NOAA) ship Researcher as part of the NOAA Vents Program and discovered, for the first time, black smokers, massive sulfide deposits and vent biota in the Atlantic Ocean. The site of the venting phenomena was the Trans-Atlantic Geotraverse (TAG) Hydrothermal Field on the east wall of the rift valley of the Mid-Atlantic Ridge at 26°08′N, 44°50′W (Rona, 1985; Rona et al., 1986). In 1986, Peter and an international research team carried out multidisciplinary investigations of both active and inactive hydrothermal zones of the TAG field using the R/V Atlantis and DSV Alvin, discovering two new species of shrimp (Rimicaris exoculata and Chorocaris chacei) (Williams and Rona, 1986) and a hexagonal-shaped form (Paleodictyon nodosum) thought to be extinct (Rona et al., 2009). In 1991 a Russian crew aboard the R/V Akademik Mstislav Keldysh, with two deep-diving, human-occupied submersibles (Mir-1 and Mir-2), had the honor of having Peter Rona and a Canadian IMAX film crew from the Stephen Low Company on board to visit the TAG hydrothermal vent field. This was the first of many deep-sea interactions between Russian deep-sea scientists and their colleagues from both the U.S. and Canada. This expedition to the TAG site was part of a major Russian undersea program aimed at exploring extreme deep-sea environments; between 1988 and 2005, the Mir submersibles visited hydrothermal vents and cold seep areas in 20 deep-sea regions throughout the world's oceans (Sagalevich, 2002). Images of several of these areas (the TAG, Snake Pit, Lost City and 9°50′N vent fields) were obtained using an IMAX camera system emplaced for the first time within the spheres of the Mir submersibles and DSV Alvin in conjunction with the filming of science documentaries (e.g., "Volcanoes of the Deep Sea") produced by the Stephen Low Company in conjunction with Emory Kristof of National Geographic and Peter Rona. The initial test of this submersible-emplaced camera system was conducted during the 1991 expedition to the TAG hydrothermal vent field.

  17. Magnetic Resonance Imaging Distortion and Targeting Errors from Strong Rare Earth Metal Magnetic Dental Implant Requiring Revision.

    PubMed

    Seong-Cheol, Park; Chong Sik, Lee; Seok Min, Kim; Eu Jene, Choi; Do Hee, Lee; Jung Kyo, Lee

    2016-12-22

    Recently, the use of magnetic dental implants has been re-popularized with the introduction of strong rare-earth-metal magnets, for example, neodymium magnets. Unrecognized magnetic dental implants can cause critical magnetic resonance image distortions. We report a case involving surgical failure caused by a magnetic dental implant. A 62-year-old man underwent deep brain stimulation for medically insufficiently controlled Parkinson's disease. Stereotactic magnetic resonance imaging performed for the first deep brain stimulation showed that the overdenture had been removed. However, a dental implant remained and contained a neodymium magnet, which was unrecognized at the time of imaging; the magnet caused localized non-linear distortions that were largest around the dental magnet. In the magnetic field, the subthalamic area was distorted by a 4.6 mm right shift and a counterclockwise rotation. However, the distortions were visually subtle in the operative field and small (approximately 1-2 mm) for distant stereotactic markers. The surgeon considered the distortion to be normal asymmetry or variation. Stereotactic marker distortion was calculated to be in the acceptable range in the surgical planning software. Targeting errors, approximately 5 mm on the right side and 2 mm on the left side, occurred postoperatively. Both leads were revised after the removal of the dental magnets. Dental magnets may cause surgical failures and should be checked for and removed before stereotactic surgery. Our findings should be considered when reviewing surgical precautions and making distortion-detection algorithm improvements.

  18. Photometric Calibrations of Gemini Images of NGC 6253

    NASA Astrophysics Data System (ADS)

    Pearce, Sean; Jeffery, Elizabeth

    2017-01-01

    We present preliminary results of our analysis of the metal-rich open cluster NGC 6253 using imaging data from GMOS at the Gemini South Observatory. These data are part of a larger project to observe the effects of high metallicity on white dwarf cooling processes, especially the white dwarf cooling age, which has important implications for stellar evolution. To standardize the Gemini photometry, we have also secured imaging data of both the cluster and standard-star fields using the 0.6-m SARA telescope at CTIO. By analyzing the standard-star fields in the SARA data and comparing them with the published Gemini zero-points, we calibrate the data obtained for the cluster. These calibrations are an important part of the project to obtain a standardized deep color-magnitude diagram (CMD) of the cluster. We present the process of verifying our standardization, and, with the standardized CMD, an analysis of the cluster's main-sequence turnoff age.

  19. Automated analysis of high-content microscopy data with deep learning.

    PubMed

    Kraus, Oren Z; Grys, Ben T; Ba, Jimmy; Chong, Yolanda; Frey, Brendan J; Boone, Charles; Andrews, Brenda J

    2017-04-18

    Existing computational pipelines for quantitative analysis of high-content microscopy data rely on traditional machine learning approaches that fail to accurately classify more than a single dataset without substantial tuning and training, requiring extensive analysis. Here, we demonstrate that the application of deep learning to biological image data can overcome the pitfalls associated with conventional machine learning classifiers. Using a deep convolutional neural network (DeepLoc) to analyze yeast cell images, we show improved performance over traditional approaches in the automated classification of protein subcellular localization. We also demonstrate the ability of DeepLoc to classify highly divergent image sets, including images of pheromone-arrested cells with abnormal cellular morphology, as well as images generated in different genetic backgrounds and in different laboratories. We offer an open-source implementation that enables updating DeepLoc on new microscopy datasets. This study highlights deep learning as an important tool for the expedited analysis of high-content microscopy data.

  20. Present-day stress state in the Outokumpu deep drill hole, Finland

    NASA Astrophysics Data System (ADS)

    Pierdominici, Simona; Ask, Maria; Kukkonen, Ilmo; Kueck, Jochem

    2017-04-01

    This study aims to investigate the present-day stress field in the Outokumpu area, eastern Finland, through the interpretation of borehole failure on acoustic image logs in a 2516 m deep hole. The two main objectives of this study are: i. to constrain the orientation of the maximum horizontal stress by mapping the occurrence of stress-induced deformation features using two sets of borehole televiewer data, collected in 2006 and 2011; and ii. to investigate whether any time-dependent deformation of the borehole wall (creep) has occurred. The Outokumpu deep hole was drilled during 2004-2005 to study deep structures and seismic reflectors within the Outokumpu formation, and was conducted within the International Continental Scientific Drilling Program (ICDP). The hole was continuously core-drilled into a Paleoproterozoic formation of metasediments, ophiolite-derived altered ultrabasic rocks, and pegmatitic granite. In 2006 and 2011, two downhole logging campaigns were performed by the Operational Support Group of ICDP to acquire a set of geophysical data. Here we focus on a specific downhole logging measurement, the acoustic borehole televiewer (BHTV), to determine the present-day stress field in the Outokumpu area. We constrain the orientation and magnitude of the in situ stress tensor based on borehole wall failures detected along the 2516 m deep hole. The horizontal stress orientation was determined by interpreting borehole breakouts (BBs) and drilling-induced tensile fractures (DIFs) from the BHTV logs. BBs are stress-induced enlargements of the borehole cross section that occur in two opposite zones at angles around the borehole where the wellbore stress concentration (hoop stress) exceeds the value required to cause compressive failure of intact rock. DIFs are caused by tensile failure of the borehole wall and form at two opposite spots on the borehole where the stress concentration is lower than the tensile strength of the rock; this occurs at angles 90° from the center of the breakout zone. Acoustic imaging logs provide a high-resolution oriented picture of the borehole wall that allows direct observation of BBs, which appear as two almost vertical swaths on the borehole image separated by 180°. BBs show poor sonic reflectivity and long travel times due to the many small brittle fractures and the resulting spalling. DIFs appear as two narrow stripes of low reflectivity separated by 180°, typically sub-parallel or slightly inclined to the borehole axis. The analysis of these images shows a distinct compressive failure pattern consistent with the major geological and tectonic lineaments of the area; deviations from this trend reflect local structural perturbations. Additionally, the 2006 and 2011 datasets are used to compare changes in breakout geometry and to quantify breakout growth over this time span from differences in width, length, and depth, in order to estimate the magnitudes of the horizontal stresses. Our study contributes to understanding the structure of the shallow crust in the Outokumpu area by defining the current stress field. Furthermore, a detailed understanding of the regional stress field is a fundamental contribution to several research areas, such as the exploration and exploitation of underground resources and geothermal reservoir studies.
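
    As an illustration of how breakout azimuths can be read off an unwrapped acoustic amplitude image, the sketch below (a toy workflow, not the interpretation procedure used in the study) scores each azimuth together with its antipodal azimuth, since breakouts appear as two dark stripes ~180° apart:

    ```python
    import numpy as np

    def breakout_azimuths(amplitude):
        """amplitude: (n_depths, 360) unwrapped BHTV image, one column per degree."""
        profile = amplitude.mean(axis=0)           # mean reflectivity vs. azimuth
        paired = profile + np.roll(profile, 180)   # combine antipodal azimuths
        az = int(np.argmin(paired[:180]))          # darkest 180-degree-apart pair
        return az, az + 180                        # candidate breakout azimuths
    ```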

  1. A new towed platform for the unobtrusive surveying of benthic habitats and organisms

    USGS Publications Warehouse

    Zawada, David G.; Thompson, P.R.; Butcher, J.

    2008-01-01

    Maps of coral ecosystems are needed to support many conservation and management objectives, as well as research activities. Examples include ground-truthing aerial and satellite imagery, characterizing essential habitat, assessing changes, and monitoring the progress of restoration efforts. To address some of these needs, the U.S. Geological Survey developed the Along-Track Reef-Imaging System (ATRIS), a boat-based sensor package for mapping shallow-water benthic environments. ATRIS consists of a digital still camera, a video camera, and an acoustic depth sounder affixed to a moveable pole. This design, however, restricts its deployment to clear waters less than 10 m deep. To overcome this limitation, a towed version has been developed, referred to as Deep ATRIS. The system is based on a lightweight, computer-controlled, towed vehicle that is capable of following a programmed diving profile. The vehicle is 1.3 m long with a 63-cm wing span and can carry a wide variety of research instruments, including CTDs, fluorometers, transmissometers, and cameras. Deep ATRIS is currently equipped with a high-speed (20 frames·s^-1) digital camera, custom-built light-emitting-diode lights, a compass, a 3-axis orientation sensor, and a nadir-looking altimeter. The vehicle dynamically adjusts its altitude to maintain a fixed height above the seafloor. The camera has a 29° x 22° field-of-view and captures color images that are 1360 x 1024 pixels in size. GPS coordinates are recorded for each image. A gigabit ethernet connection enables the images to be displayed and archived in real time on the surface computer. Deep ATRIS has a maximum tow speed of 2.6 m·s^-1 and a theoretical operating tow-depth limit of 27 m. With an improved tow cable, the operating depth can be extended to 90 m. Here, we present results from the initial sea trials in the Gulf of Mexico and Biscayne National Park, Florida, USA, and discuss the utility of Deep ATRIS for mapping coral reef habitats. Several example mosaics illustrate the high-quality imagery that can be obtained with this system. The images also reveal the potential for unobtrusive animal observations; fish and sea turtles are unperturbed by the presence of Deep ATRIS.

  2. A Chandra Survey of low-mass clusters at 0.8 < z < 0.9 selected in the 100 deg^2 SPT-Pol Deep Field

    NASA Astrophysics Data System (ADS)

    Kraft, Ralph

    2016-09-01

    We propose to observe a complete sample of 4 galaxy clusters at 1e14 < M500 < 3e14 and 0.8 < z < 0.9. These systems were selected from the 100 deg^2 deep field of the SPT-Pol SZ survey. This survey area has significant complementary data, including uniform-depth ATCA, Herschel, Spitzer, and DES imaging, enabling a wide variety of astrophysical and cosmological studies. This sample complements the successful SPT-XVP survey, which has a broad redshift range and a narrow mass range, by including clusters over a narrow redshift range and a broad mass range. These systems are of such low mass and high redshift that they will not be detected in the eROSITA all-sky survey.

  3. A Chandra Survey of low-mass clusters at 0.7 < z < 0.8 selected in the 100 deg^2 SPT-Pol Deep Field

    NASA Astrophysics Data System (ADS)

    Kraft, Ralph

    2016-09-01

    We propose to observe a complete sample of 4 galaxy clusters at 1e14 < M500 < 3e14 and 0.7 < z < 0.8. These systems were selected from the 100 deg^2 deep field of the SPT-Pol SZ survey. This survey area has significant complementary data, including uniform-depth ATCA, Herschel, Spitzer, and DES imaging, enabling a wide variety of astrophysical and cosmological studies. This sample complements the successful SPT-XVP survey, which has a broad redshift range and a narrow mass range, by including clusters over a narrow redshift range and a broad mass range. These systems are of such low mass and high redshift that they will not be detected in the eROSITA all-sky survey.

  4. Deep Tissue Fluorescent Imaging in Scattering Specimens Using Confocal Microscopy

    PubMed Central

    Clendenon, Sherry G.; Young, Pamela A.; Ferkowicz, Michael; Phillips, Carrie; Dunn, Kenneth W.

    2015-01-01

    In scattering specimens, multiphoton excitation and nondescanned detection improve imaging depth by a factor of 2 or more over confocal microscopy; however, imaging depth is still limited by scattering. We applied the concept of clearing to deep tissue imaging of highly scattering specimens. Clearing is a remarkably effective approach to improving image quality at depth using either confocal or multiphoton microscopy. Tissue clearing appears to eliminate the need for multiphoton excitation for deep tissue imaging. PMID:21729357

  5. AzTEC half square degree survey of the SHADES fields - I. Maps, catalogues and source counts

    NASA Astrophysics Data System (ADS)

    Austermann, J. E.; Dunlop, J. S.; Perera, T. A.; Scott, K. S.; Wilson, G. W.; Aretxaga, I.; Hughes, D. H.; Almaini, O.; Chapin, E. L.; Chapman, S. C.; Cirasuolo, M.; Clements, D. L.; Coppin, K. E. K.; Dunne, L.; Dye, S.; Eales, S. A.; Egami, E.; Farrah, D.; Ferrusca, D.; Flynn, S.; Haig, D.; Halpern, M.; Ibar, E.; Ivison, R. J.; van Kampen, E.; Kang, Y.; Kim, S.; Lacey, C.; Lowenthal, J. D.; Mauskopf, P. D.; McLure, R. J.; Mortier, A. M. J.; Negrello, M.; Oliver, S.; Peacock, J. A.; Pope, A.; Rawlings, S.; Rieke, G.; Roseboom, I.; Rowan-Robinson, M.; Scott, D.; Serjeant, S.; Smail, I.; Swinbank, A. M.; Stevens, J. A.; Velazquez, M.; Wagg, J.; Yun, M. S.

    2010-01-01

    We present the first results from the largest deep extragalactic mm-wavelength survey undertaken to date. These results are derived from maps covering over 0.7 deg^2, made at λ = 1.1mm, using the AzTEC continuum camera mounted on the James Clerk Maxwell Telescope. The maps were made in the two fields originally targeted at λ = 850μm with the Submillimetre Common-User Bolometer Array (SCUBA) in the SCUBA Half-Degree Extragalactic Survey (SHADES) project, namely the Lockman Hole East (mapped to a depth of 0.9-1.3 mJy rms) and the Subaru/XMM-Newton Deep Field (mapped to a depth of 1.0-1.7 mJy rms). The wealth of existing and forthcoming deep multifrequency data in these two fields will allow the bright mm source population revealed by these new wide-area 1.1mm images to be explored in detail in subsequent papers. Here, we present the maps themselves, a catalogue of 114 high-significance submillimetre galaxy detections, and a thorough statistical analysis leading to the most robust determination to date of the 1.1mm source number counts. These new maps, covering an area nearly three times greater than the SCUBA SHADES maps, currently provide the largest sample of cosmological volumes of the high-redshift Universe in the mm or sub-mm. Through careful comparison, we find that both the Cosmic Evolution Survey (COSMOS) and the Great Observatories Origins Deep Survey (GOODS) North fields, also imaged with AzTEC, contain an excess of mm sources over the new 1.1mm source-count baseline established here. In particular, our new AzTEC/SHADES results indicate that very luminous high-redshift dust enshrouded starbursts (S1.1mm > 3mJy) are 25-50 per cent less common than would have been inferred from these smaller surveys, thus highlighting the potential roles of cosmic variance and clustering in such measurements. We compare number count predictions from recent models of the evolving mm/sub-mm source population to these sub-mm bright galaxy surveys, which provide important constraints for the ongoing refinement of semi-analytic and hydrodynamical models of galaxy formation, and find that all available models overpredict the number of bright submillimetre galaxies found in this survey.
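
    Cumulative number counts of the kind established here reduce, in essence, to a histogram of catalog flux densities normalized by survey area. A minimal sketch (the flux grid and variable names are illustrative):

    ```python
    import numpy as np

    def cumulative_counts(flux_mjy, area_deg2, s_grid=None):
        """flux_mjy: catalog of 1.1 mm flux densities; returns N(>S) per deg^2."""
        if s_grid is None:
            s_grid = np.linspace(1.0, 8.0, 15)
        counts = np.array([(flux_mjy > s).sum() for s in s_grid])
        return s_grid, counts / area_deg2
    ```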

  6. Application of Deep Learning in Automated Analysis of Molecular Images in Cancer: A Survey

    PubMed Central

    Xue, Yong; Chen, Shihui; Liu, Yong

    2017-01-01

    Molecular imaging enables the visualization and quantitative analysis of alterations of biological processes at the molecular and/or cellular level, which is of great significance for the early detection of cancer. In recent years, deep learning has been widely used in medical image analysis, as it overcomes the limitations of visual assessment and traditional machine learning techniques by extracting hierarchical features with powerful representation capability. Research on cancer molecular images using deep learning techniques is also growing rapidly. Hence, in this paper, we review the applications of deep learning in molecular imaging in terms of tumor lesion segmentation, tumor classification, and survival prediction. We also outline some future directions in which researchers may develop more powerful deep learning models for better performance in applications to cancer molecular imaging. PMID:29114182

  7. Deep learning based classification for head and neck cancer detection with hyperspectral imaging in an animal model

    NASA Astrophysics Data System (ADS)

    Ma, Ling; Lu, Guolan; Wang, Dongsheng; Wang, Xu; Chen, Zhuo Georgia; Muller, Susan; Chen, Amy; Fei, Baowei

    2017-03-01

    Hyperspectral imaging (HSI) is an emerging imaging modality that can provide a noninvasive tool for cancer detection and image-guided surgery. HSI acquires high-resolution images at hundreds of spectral bands, providing rich data for differentiating different types of tissue. We propose a deep learning based method for the detection of head and neck cancer with hyperspectral images. Since a deep learning algorithm learns features hierarchically, the learned features are more discriminative and concise than handcrafted features. In this study, we adopt convolutional neural networks (CNNs) to learn deep pixel-level features for classifying each pixel as tumor or normal tissue. We evaluated the proposed classification method on a dataset containing hyperspectral images from 12 tumor-bearing mice. Experimental results show that our method achieved an average accuracy of 91.36%. This preliminary study demonstrates that our deep learning method can be applied to hyperspectral images for detecting head and neck tumors in animal models.
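
    A minimal sketch of pixel-wise deep classification of a hyperspectral cube (the 1-D spectral architecture, band count, and sizes are assumptions for illustration, not the paper's network): a small CNN over each pixel's spectrum outputs tumor/normal logits.

    ```python
    import torch
    import torch.nn as nn

    class SpectrumNet(nn.Module):
        def __init__(self, n_bands=91):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
                nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
                nn.Flatten(), nn.Linear(32, 2))    # logits: tumor vs. normal

        def forward(self, x):                      # x: (N, 1, n_bands) pixel spectra
            return self.net(x)

    cube = torch.rand(64, 64, 91)                  # toy (H, W, bands) cube
    pixels = cube.reshape(-1, 1, 91)               # one spectrum per pixel
    logits = SpectrumNet()(pixels)                 # (H*W, 2) class scores
    ```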

  8. SPECTRAL DOMAIN VERSUS SWEPT SOURCE OPTICAL COHERENCE TOMOGRAPHY ANGIOGRAPHY OF THE RETINAL CAPILLARY PLEXUSES IN SICKLE CELL MACULOPATHY.

    PubMed

    Jung, Jesse J; Chen, Michael H; Frambach, Caroline R; Rofagha, Soraya; Lee, Scott S

    2018-01-01

    To compare the spectral domain and swept source optical coherence tomography angiography findings in two cases of sickle cell maculopathy. A 53-year-old man and a 24-year-old man both with sickle cell disease (hemoglobin SS) presented with no visual complaints; Humphrey visual field testing demonstrated asymptomatic paracentral scotomas that extended nasally in the involved eyes. Clinical examination and multimodal imaging including spectral domain and swept source optical coherence tomography, and spectral domain optical coherence tomography angiography and swept source optical coherence tomography angiography (Carl Zeiss Meditec Inc, Dublin, CA) were performed. Fundus examination of both patients revealed subtle thinning of the macula. En-face swept source optical coherence tomography confirmed the extent of the thinning correlating with the functional paracentral scotomas on Humphrey visual field. Swept source optical coherence tomography B-scan revealed multiple confluent areas of inner nuclear thinning and significant temporal retinal atrophy. En-face 6 × 6-mm spectral domain optical coherence tomography angiography of the macula demonstrated greater loss of the deep capillary plexus compared with the superficial capillary plexus. Swept source optical coherence tomography angiography 12 × 12-mm imaging captured the same macular findings and loss of both plexuses temporally outside the macula. In these two cases of sickle cell maculopathy, deep capillary plexus ischemia is more extensive within the macula, whereas both the superficial capillary plexus and deep capillary plexus are involved outside the macula likely due to the greater oxygen demands and watershed nature of these areas. Swept source optical coherence tomography angiography clearly demonstrates the angiographic extent of the disease correlating with the Humphrey visual field scotomas and confluent areas of inner nuclear atrophy.

  9. Low-temperature field ion microscopy of carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Ksenofontov, V. A.; Gurin, V. A.; Gurin, I. V.; Kolosenko, V. V.; Mikhailovskij, I. M.; Sadanov, E. V.; Mazilova, T. I.; Velikodnaya, O. A.

    2007-10-01

    The methods of high-resolution field ion microscopy with sample cooling to liquid helium temperature are used in a study of the products of gas-phase catalytic pyrolysis of hydrocarbons in the form of graphitized fibers containing carbon nanotubes. Full atomic resolution of the end cap of closed carbon nanotubes is achieved for the first time. It is found that the atomic structure of the tops of the caps of subnanometer carbon tubes consists predominantly of hexagonal rings. A possible reason for the improvement of the resolution of field ion images of nanotubes upon deep cooling is discussed.

  10. Variable field-of-view visible and near-infrared polarization compound-eye endoscope.

    PubMed

    Kagawa, K; Shogenji, R; Tanaka, E; Yamada, K; Kawahito, S; Tanida, J

    2012-01-01

    A multi-functional compound-eye endoscope enabling variable field-of-view and polarization imaging, as well as an extremely deep focus, is presented. It is based on a compact compound-eye camera called TOMBO (thin observation module by bound optics). Fixed and movable mirrors are introduced to control the field of view. A metal-wire-grid polarizer thin film, applicable to both visible and near-infrared light, is attached to the lenses in TOMBO and to the light sources. Control of the field of view, polarization, and wavelength of the illumination enables several observation modes, such as three-dimensional shape measurement, wide field-of-view imaging, and close-up observation of superficial tissues and structures beneath the skin.

  11. Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image.

    PubMed

    Xiang, Lei; Wang, Qian; Nie, Dong; Zhang, Lichi; Jin, Xiyao; Qiao, Yu; Shen, Dinggang

    2018-07-01

    Recently, increasing attention has been drawn to the field of medical image synthesis across modalities. Among these tasks, the synthesis of a computed tomography (CT) image from a T1-weighted magnetic resonance (MR) image is of great importance, although the mapping between them is highly complex due to the large gap in appearance between the two modalities. In this work, we tackle this MR-to-CT synthesis task with a novel deep embedding convolutional neural network (DECNN). Specifically, we generate feature maps from MR images and transform these feature maps forward through the convolutional layers of the network. We can further compute a tentative CT synthesis from the midway flow of feature maps, and then embed this tentative CT synthesis result back into the feature maps. This embedding operation yields better feature maps, which are further transformed forward in the DECNN. After repeating this embedding procedure several times in the network, we can eventually synthesize a final CT image at the end of the DECNN. We validated the proposed method on both brain and prostate imaging datasets, comparing it with state-of-the-art methods. Experimental results suggest that our DECNN (with repeated embedding operations) demonstrates superior performance in terms of both the perceptual quality of the synthesized CT image and the run-time cost of synthesizing a CT image. Copyright © 2018. Published by Elsevier B.V.
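
    The embedding operation described above can be sketched with the Keras functional API. This is a minimal illustration under stated assumptions (2-D single-channel slices, three embedding blocks, 64-filter convolutions); it is not the published DECNN architecture.

        from tensorflow.keras import layers, Model, Input

        # Hypothetical 2-D slices: MR input, CT target, single channel each.
        mr = Input(shape=(256, 256, 1))
        x = layers.Conv2D(64, 3, padding="same", activation="relu")(mr)
        x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)

        # One "embedding block": derive a tentative CT from the midway
        # feature maps, then concatenate it back so later layers refine it.
        for _ in range(3):  # the number of embedding blocks here is a guess
            tentative_ct = layers.Conv2D(1, 1, padding="same")(x)
            x = layers.Concatenate()([x, tentative_ct])
            x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)

        ct = layers.Conv2D(1, 1, padding="same")(x)  # final synthesized CT
        model = Model(mr, ct)
        model.compile(optimizer="adam", loss="mae")  # loss choice illustrative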

  12. Hyperspectral image analysis for the determination of alteration minerals in geothermal fields: Çürüksu (Denizli) Graben, Turkey

    NASA Astrophysics Data System (ADS)

    Uygur, Merve; Karaman, Muhittin; Kumral, Mustafa

    2016-04-01

    Çürüksu (Denizli) Graben hosts various geothermal fields such as Kızıldere, Yenice, Gerali, Karahayıt, and Tekkehamam. Neotectonic activities, which are caused by extensional tectonism, and deep circulation in sub-volcanic intrusions are the heat sources of the hydrothermal solutions. The temperature of the hydrothermal solutions is between 53 and 260 degrees Celsius. Phyllic, argillic, silicic, and carbonatization alterations and various hydrothermal minerals have been identified in various research studies of these areas. Surface hydrothermal alteration minerals are one set of potential indicators of geothermal resources, and developing exploration tools to define such surface indicators can assist in the recognition of geothermal resources. Thermal and hyperspectral imaging and analysis can be used to define these surface indicators. This study tests the hypothesis that hyperspectral image analysis based on EO-1 Hyperion images can be used for the delineation and definition of surface hydrothermal alteration in geothermal fields. Hyperspectral image analyses were applied to images covering geothermal fields whose alteration characteristics are known. To reduce data dimensionality and identify spectral endmembers, Kruse's multi-step process was applied to atmospherically and geometrically corrected hyperspectral images. Minimum Noise Fraction (MNF) was used to reduce the spectral dimensions and isolate noise in the images. Extreme pixels were identified from high-order MNF bands using the Pixel Purity Index. n-Dimensional Visualization was utilized for unique pixel identification. Spectral similarities between pixel spectral signatures and known endmember spectra (USGS Spectral Library) were compared with Spectral Angle Mapper classification. EO-1 Hyperion images and hyperspectral analysis are sensitive to the hydrothermal alteration minerals seen in geothermal fields, as their diagnostic spectral signatures span the visible and shortwave infrared. The hyperspectral analysis results indicated that kaolinite, smectite, illite, montmorillonite, and sepiolite were distributed over a wide area covering the hot spring outlet. Rectorite, lizardite, richterite, dumortierite, nontronite, erionite, and clinoptilolite were observed occasionally.
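
    The endmember-matching step is easy to illustrate. The following NumPy sketch computes the Spectral Angle Mapper similarity between each pixel spectrum and a reference library spectrum; the cube dimensions, the stand-in "kaolinite" spectrum, and the angle threshold are all hypothetical placeholders.

        import numpy as np

        def spectral_angle(pixels, endmember):
            """Spectral angle (radians) between each pixel spectrum and a
            reference endmember; smaller angles mean closer matches."""
            dot = pixels @ endmember
            norms = np.linalg.norm(pixels, axis=1) * np.linalg.norm(endmember)
            return np.arccos(np.clip(dot / norms, -1.0, 1.0))

        # Hypothetical cube (rows x cols x bands) and library spectrum.
        cube = np.random.rand(100, 120, 198)
        kaolinite = np.random.rand(198)  # stand-in for a USGS library entry
        angles = spectral_angle(cube.reshape(-1, 198), kaolinite)
        mask = (angles < 0.1).reshape(100, 120)  # threshold is illustrative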

  13. Charge Diffusion Variations in Pan-STARRS1 CCDs

    NASA Astrophysics Data System (ADS)

    Magnier, Eugene A.; Tonry, J. L.; Finkbeiner, D.; Schlafly, E.; Burgett, W. S.; Chambers, K. C.; Flewelling, H. A.; Hodapp, K. W.; Kaiser, N.; Kudritzki, R.-P.; Metcalfe, N.; Wainscoat, R. J.; Waters, C. Z.

    2018-06-01

    Thick back-illuminated deep-depletion CCDs have superior quantum efficiency over previous generations of thinned and traditional thick CCDs. As a result, they are being used for wide-field imaging cameras in several major projects. We use observations from the Pan-STARRS 3π survey to characterize the behavior of the deep-depletion devices used in the Pan-STARRS 1 Gigapixel Camera. We have identified systematic spatial variations in the photometric measurements and stellar profiles that are similar in pattern to the so-called “tree rings” identified in devices used by other wide-field cameras (e.g., DECam and Hypersuprime Camera). The tree-ring features identified in these other cameras result from lateral electric fields that displace the electrons as they are transported in the silicon to the pixel location. In contrast, we show that the photometric and morphological modifications observed in the GPC1 detectors are caused by variations in the vertical charge transportation rate and resulting charge diffusion variations.

  14. Deep convolutional neural network for classifying Fusarium wilt of radish from unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Ha, Jin Gwan; Moon, Hyeonjoon; Kwak, Jin Tae; Hassan, Syed Ibrahim; Dang, Minh; Lee, O. New; Park, Han Yong

    2017-10-01

    Recently, unmanned aerial vehicles (UAVs) have gained much attention. In particular, there is growing interest in utilizing UAVs for agricultural applications such as crop monitoring and management. We propose a computerized system that is capable of detecting Fusarium wilt of radish with high accuracy. The system adopts computer vision and machine learning techniques, including deep learning, to process images captured by UAVs at low altitudes and to identify infected radish. The whole radish field is first segmented into three distinctive regions (radish, bare ground, and mulching film) via a softmax classifier and K-means clustering. Then, the identified radish regions are further classified into healthy radish and Fusarium wilt of radish using a deep convolutional neural network (CNN). In identifying radish, bare ground, and mulching film from a radish field, we achieved an accuracy of ≥97.4%. In detecting Fusarium wilt of radish, the CNN obtained an accuracy of 93.3%, outperforming a standard machine learning algorithm, which obtained 82.9% accuracy. Therefore, UAVs equipped with computational techniques are promising tools for improving the quality and efficiency of agriculture today.
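
    The first stage of the pipeline, partitioning the field into radish, bare ground, and mulching film, can be caricatured with a clustering sketch. Here plain K-means stands in for the paper's softmax-plus-K-means step, and the frame is a random placeholder.

        import numpy as np
        from sklearn.cluster import KMeans

        # Hypothetical UAV frame (H x W x 3, RGB). Cluster pixels into three
        # groups standing in for radish, bare ground, and mulching film.
        frame = np.random.rand(240, 320, 3)
        pixels = frame.reshape(-1, 3)
        labels = KMeans(n_clusters=3, n_init=10,
                        random_state=0).fit_predict(pixels)
        segments = labels.reshape(240, 320)
        # Patches from the "radish" cluster would then go to a CNN for
        # healthy-vs-Fusarium-wilt classification (the second stage).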

  15. DeepPap: Deep Convolutional Networks for Cervical Cell Classification.

    PubMed

    Zhang, Ling; Le Lu; Nogues, Isabella; Summers, Ronald M; Liu, Shaoxiong; Yao, Jianhua

    2017-11-01

    Automation-assisted cervical screening via Pap smear or liquid-based cytology (LBC) is a highly effective cell-imaging-based cancer detection tool, in which cells are partitioned into "abnormal" and "normal" categories. However, the success of most traditional classification methods relies on the presence of accurate cell segmentations. Despite sixty years of research in this field, accurate segmentation remains a challenge in the presence of cell clusters and pathologies. Moreover, previous classification methods are built only upon the extraction of hand-crafted features, such as morphology and texture. This paper addresses these limitations by proposing a method to directly classify cervical cells (without prior segmentation) based on deep features, using convolutional neural networks (ConvNets). First, the ConvNet is pretrained on a natural image dataset. It is subsequently fine-tuned on a cervical cell dataset consisting of adaptively resampled image patches coarsely centered on the nuclei. In the testing phase, aggregation is used to average the prediction scores over a set of similar image patches. The proposed method is evaluated on both Pap smear and LBC datasets. Results show that our method outperforms previous algorithms in classification accuracy (98.3%), area under the curve (0.99), and especially specificity (98.3%), when applied to the Herlev benchmark Pap smear dataset and evaluated using five-fold cross-validation. Similarly superior performance is achieved on the HEMLBC (H&E stained manual LBC) dataset. Our method is promising for the development of automation-assisted reading systems in primary cervical screening.
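
    The pretrain/fine-tune/aggregate recipe can be sketched as follows. An ImageNet-pretrained ResNet50 is used purely as a stand-in backbone (the abstract only says "a natural image dataset"), and the patch size, two-class head, and aggregation helper are assumptions for illustration.

        import numpy as np
        from tensorflow.keras.applications import ResNet50  # stand-in backbone
        from tensorflow.keras import layers, Model

        base = ResNet50(weights="imagenet", include_top=False, pooling="avg",
                        input_shape=(224, 224, 3))
        out = layers.Dense(2, activation="softmax")(base.output)  # abnormal/normal
        model = Model(base.input, out)
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
        # ... fine-tune here on nucleus-centered cervical cell patches ...

        def classify_cell(patches):
            """Average prediction scores over a set of patches from the
            same cell, mirroring the testing-phase aggregation."""
            scores = model.predict(np.stack(patches), verbose=0)
            return scores.mean(axis=0).argmax()

        demo = [np.random.rand(224, 224, 3) for _ in range(5)]  # fake patches
        print(classify_cell(demo))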

  16. Earth Science

    NASA Image and Video Library

    1996-01-13

    The Near Earth Asteroid Rendezvous (NEAR) spacecraft undergoing preflight preparation in the Spacecraft Assembly Encapsulation Facility-2 (SAEF-2) at Kennedy Space Center (KSC). NEAR will perform two critical mission events: the Mathilde flyby and the Deep-Space Maneuver. NEAR will fly by Mathilde, a 38-mile (61-km) diameter C-type asteroid, making use of its imaging system to obtain useful optical navigation images. The primary science instrument will be the camera, but measurements of magnetic fields and mass will also be made. The Deep-Space Maneuver (DSM) will be executed about a week after the Mathilde flyby. The DSM represents the first of two major burns of the 100-pound bi-propellant (hydrazine/nitrogen tetroxide) thruster during the NEAR mission. This maneuver is necessary to lower the perihelion distance of NEAR's trajectory. The DSM will be conducted in two segments to minimize the possibility of an overburn situation.

  17. Perylene-diimide-based nanoparticles as highly efficient photoacoustic agents for deep brain tumor imaging in living mice

    DOE PAGES

    Fan, Quli; Cheng, Kai; Yang, Zhen; ...

    2014-11-06

    In order to promote preclinical and clinical applications of photoacoustic imaging, novel photoacoustic contrast agents are highly desired for molecular imaging of diseases, especially for deep tumor imaging. In this paper, perylene-3,4,9,10-tetracarboxylic diimide-based near-infrared-absorptive organic nanoparticles are reported as an efficient agent for photoacoustic imaging of deep brain tumors in living mice, exploiting the enhanced permeability and retention effect.

  18. Impact of multi-focused images on recognition of soft biometric traits

    NASA Astrophysics Data System (ADS)

    Chiesa, V.; Dugelay, J. L.

    2016-09-01

    In video surveillance, the estimation of semantic traits such as gender and age has always been a debated topic because of the uncontrolled environment: while light and pose variations have been studied extensively, defocused images are still rarely investigated. Recently, the emergence of new technologies, such as plenoptic cameras, has made it possible to address these problems by analyzing multi-focus images. Thanks to a microlens array arranged between the sensor and the main lens, light field cameras record not only RGB values but also information related to the direction of light rays: the additional data make it possible to render the image with different focal planes after acquisition. For our experiments, we use the GUC Light Field Face Database, which includes pictures from the first-generation Lytro camera. Taking advantage of light field images, we explore the influence of defocusing on gender recognition and age estimation. Evaluations are computed with up-to-date, competitive technologies based on deep learning algorithms. After studying the relationship between focus and gender recognition, and between focus and age estimation, we compare the results obtained from images defocused by the Lytro software with images blurred by more standard filters, in order to explore the difference between defocusing and blurring effects. In addition, we investigate the impact of deblurring on defocused images, with the goal of better understanding the different impacts of defocusing and standard blurring on gender and age estimation.

  19. Scientific Discovery through Citizen Science via Popular Amateur Astrophotography

    NASA Astrophysics Data System (ADS)

    Nemiroff, Robert J.; Bonnell, Jerry T.; Allen, Alice

    2015-01-01

    Can popular astrophotography stimulate real astronomical discovery? Perhaps surprisingly, in some cases, the answer is yes. Several examples are given using the Astronomy Picture of the Day (APOD) site as an example venue. One reason is angular -- popular wide and deep images sometimes complement professional images which typically span a more narrow field. Another reason is temporal -- an amateur is at the right place and time to take a unique and illuminating image. Additionally, popular venues can be informational -- alerting professionals to cutting-edge amateur astrophotography about which they might not have known previously. Methods of further encouraging this unusual brand of citizen science are considered.

  20. Intelligent correction of laser beam propagation through turbulent media using adaptive optics

    NASA Astrophysics Data System (ADS)

    Ko, Jonathan; Wu, Chensheng; Davis, Christopher C.

    2014-10-01

    Adaptive optics methods have long been used by researchers in the astronomy field to retrieve correct images of celestial bodies. The approach is to use a deformable mirror combined with Shack-Hartmann sensors to correct the slightly distorted image as it propagates through the earth's atmospheric boundary layer, which can be viewed as adding relatively weak distortion in the last stage of propagation. However, the same strategy cannot easily be applied to correct images propagating along a horizontal deep-turbulence path. In fact, when turbulence levels become very strong (Cn^2 > 10^-13 m^-2/3), only limited improvements have been made in correcting the heavily distorted images. We propose a method that reconstructs the light field that reaches the camera, which then provides information for controlling a deformable mirror. An intelligent algorithm is applied that provides significant improvement in correcting images. In our work, the light field reconstruction has been achieved with a newly designed modified plenoptic camera. As a result, by actively intervening with the coherent illumination beam, or by giving it various specific pre-distortions, a better (less turbulence-affected) image can be obtained. This strategy can also be extended to much more general applications, such as correcting laser propagation through random media, and can help to improve designs of free-space optical communication systems.

  1. The TESS camera: modeling and measurements with deep depletion devices

    NASA Astrophysics Data System (ADS)

    Woods, Deborah F.; Vanderspek, Roland; MacDonald, Robert; Morgan, Edward; Villasenor, Joel; Thayer, Carolyn; Burke, Barry; Chesbrough, Christian; Chrisp, Michael; Clark, Kristin; Furesz, Gabor; Gonzales, Alexandria; Nguyen, Tam; Prigozhin, Gregory; Primeau, Brian; Ricker, George; Sauerwein, Timothy; Suntharalingam, Vyshnavi

    2016-07-01

    The Transiting Exoplanet Survey Satellite, a NASA Explorer-class mission in development, will discover planets around nearby stars, most notably Earth-like planets with potential for follow up characterization. The all-sky survey requires a suite of four wide field-of-view cameras with sensitivity across a broad spectrum. Deep depletion CCDs with a silicon layer of 100 μm thickness serve as the camera detectors, providing enhanced performance in the red wavelengths for sensitivity to cooler stars. The performance of the camera is critical for the mission objectives, with both the optical system and the CCD detectors contributing to the realized image quality. Expectations for image quality are studied using a combination of optical ray tracing in Zemax and simulations in Matlab to account for the interaction of the incoming photons with the 100 μm silicon layer. The simulations include a probabilistic model to determine the depth of travel in the silicon before the photons are converted to photo-electrons, and a Monte Carlo approach to charge diffusion. The charge diffusion model varies with the remaining depth for the photo-electron to traverse and the strength of the intermediate electric field. The simulations are compared with laboratory measurements acquired by an engineering unit camera with the TESS optical design and deep depletion CCDs. In this paper we describe the performance simulations and the corresponding measurements taken with the engineering unit camera, and discuss where the models agree well in predicted trends and where there are differences compared to observations.
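
    The two simulation ingredients (a probabilistic conversion depth and depth-dependent charge diffusion) can be caricatured in a few lines. The absorption length and the diffusion scaling below are illustrative stand-ins, not the mission's calibrated values.

        import numpy as np

        rng = np.random.default_rng(0)
        THICKNESS = 100.0   # silicon depth in microns (from the text)
        ABS_LENGTH = 30.0   # absorption length in microns -- hypothetical

        # Depth at which each photon converts, drawn from an exponential
        # law; photons "converting" beyond the silicon pass through.
        depth = rng.exponential(ABS_LENGTH, 100000)
        depth = depth[depth < THICKNESS]

        # Gaussian lateral diffusion whose width grows with the remaining
        # distance the photo-electron must drift; the scaling is a toy model.
        remaining = THICKNESS - depth
        sigma = 0.1 * np.sqrt(remaining)  # microns
        dx = rng.normal(0.0, sigma)
        dy = rng.normal(0.0, sigma)
        print("rms lateral spread: %.3f um"
              % np.sqrt(np.mean(dx**2 + dy**2)))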

  2. Computational ghost imaging using deep learning

    NASA Astrophysics Data System (ADS)

    Shimobaba, Tomoyoshi; Endo, Yutaka; Nishitsuji, Takashi; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Shiraki, Atsushi; Ito, Tomoyoshi

    2018-04-01

    Computational ghost imaging (CGI) is a single-pixel imaging technique that exploits the correlation between known random patterns and the measured intensity of light transmitted (or reflected) by an object. Although CGI can obtain two- or three-dimensional images with a single or a few bucket detectors, the quality of the reconstructed images is reduced by noise due to the reconstruction of images from random patterns. In this study, we improve the quality of CGI images using deep learning. A deep neural network is used to automatically learn the features of noise-contaminated CGI images. After training, the network is able to predict low-noise images from new noise-contaminated CGI images.
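
    The correlation at the heart of CGI is simple to demonstrate. The toy NumPy reconstruction below implements the classic estimate, recon(x, y) proportional to <(B - <B>) P(x, y)>, with a synthetic object and random patterns; the deep network in the paper would then be trained to map such noisy reconstructions to clean images.

        import numpy as np

        rng = np.random.default_rng(1)
        H = W = 32
        obj = np.zeros((H, W))
        obj[8:24, 12:20] = 1.0                      # toy transmissive object

        M = 4000                                    # number of random patterns
        patterns = rng.random((M, H, W))
        bucket = (patterns * obj).sum(axis=(1, 2))  # single-pixel signals

        # Correlation estimate of the object; noisy for modest M.
        recon = ((bucket - bucket.mean())[:, None, None] * patterns).mean(axis=0)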

  3. Focusing and depth of field in photography: application in dermatology practice.

    PubMed

    Taheri, Arash; Yentzer, Brad A; Feldman, Steven R

    2013-11-01

    Conventional photography obtains a sharp image of objects within a given 'depth of field'; objects not within the depth of field are out of focus. In recent years, digital photography revolutionized the way pictures are taken, edited, and stored. However, digital photography does not result in a deeper depth of field or better focusing. In this article, we briefly review the concept of depth of field and focus in photography as well as new technologies in this area. A deep depth of field is used to have more objects in focus; a shallow depth of field can emphasize a subject by blurring the foreground and background objects. The depth of field can be manipulated by adjusting the aperture size of the camera, with smaller apertures increasing the depth of field at the cost of lower levels of light capture. Light-field cameras are a new generation of digital cameras that offer several new features, including the ability to change the focus on any object in the image after taking the photograph. Understanding depth of field and camera technology helps dermatologists to capture their subjects in focus more efficiently. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
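
    The aperture/depth-of-field trade-off follows from standard formulas, as in the sketch below. It uses the common hyperfocal-distance approximation, with a 0.03 mm circle of confusion (a typical full-frame assumption).

        def hyperfocal_mm(f, N, c=0.03):
            """Hyperfocal distance (mm) for focal length f (mm), f-number N,
            and circle of confusion c (mm)."""
            return f * f / (N * c) + f

        def dof_limits_mm(f, N, s, c=0.03):
            """Near/far limits of acceptable sharpness, focused at s (mm)."""
            H = hyperfocal_mm(f, N, c)
            near = H * s / (H + (s - f))
            far = H * s / (H - (s - f)) if s < H else float("inf")
            return near, far

        # Stopping down a 50 mm lens focused at 1 m deepens the field:
        print(dof_limits_mm(50, 2.8, 1000))  # approx (969, 1033) mm
        print(dof_limits_mm(50, 11, 1000))   # approx (889, 1142) mm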

  4. ESO imaging survey: optical deep public survey

    NASA Astrophysics Data System (ADS)

    Mignano, A.; Miralles, J.-M.; da Costa, L.; Olsen, L. F.; Prandoni, I.; Arnouts, S.; Benoist, C.; Madejsky, R.; Slijkhuis, R.; Zaggia, S.

    2007-02-01

    This paper presents new five-passband (UBVRI) optical wide-field imaging data accumulated as part of the DEEP Public Survey (DPS), carried out as a public survey by the ESO Imaging Survey (EIS) project. Out of the 3 square degrees originally proposed, the survey covers 2.75 square degrees in at least one band (normally R), and 1.00 square degrees in all five passbands. The median seeing, as measured in the final stacked images, is 0.97 arcsec, ranging from 0.75 arcsec to 2.0 arcsec. The median limiting magnitudes (AB system, 2 arcsec aperture, 5σ detection limit) are UAB=25.65, BAB=25.54, VAB=25.18, RAB=24.8 and IAB=24.12 mag, consistent with those proposed in the original survey design. The paper describes the observations and data reduction using the EIS Data Reduction System and its associated EIS/MVM library. The quality of the individual images was inspected, bad images were discarded, and the remaining images were used to produce final image stacks in each passband, from which sources were extracted. The scientific quality of these final images and associated catalogs was then assessed qualitatively by visual inspection and quantitatively by comparing statistical measures derived from these data with those of other authors as well as model predictions, and by direct comparison with the results obtained from reducing the same dataset with an independent (hands-on) software system. Finally, to illustrate one application of this survey, the results of a preliminary effort to identify sub-mJy radio sources are reported. To the limiting magnitudes reached in the R and I passbands, the identification success rate ranges from 66 to 81% (depending on the field). These data are publicly available at CDS. Based on observations carried out at the European Southern Observatory, La Silla, Chile, under program Nos. 164.O-0561, 169.A-0725, and 267.A-5729. Appendices A, B and C are only available in electronic form at http://www.aanda.org

  5. Pulsed Magneto-motive Ultrasound Imaging Using Ultrasmall Magnetic Nanoprobes

    PubMed Central

    Mehrmohammadi, Mohammad; Oh, Junghwan; Mallidi, Srivalleesha; Emelianov, Stanislav Y.

    2011-01-01

    Nano-sized particles are widely regarded as a tool to study biologic events at the cellular and molecular levels. However, only some imaging modalities can visualize interaction between nanoparticles and living cells. We present a new technique, pulsed magneto-motive ultrasound imaging, which is capable of in vivo imaging of magnetic nanoparticles in real time and at sufficient depth. In pulsed magneto-motive ultrasound imaging, an external high-strength pulsed magnetic field is applied to induce the motion within the magnetically labeled tissue and ultrasound is used to detect the induced internal tissue motion. Our experiments demonstrated a sufficient contrast between normal and iron-laden cells labeled with ultrasmall magnetic nanoparticles. Therefore, pulsed magneto-motive ultrasound imaging could become an imaging tool capable of detecting magnetic nanoparticles and characterizing the cellular and molecular composition of deep-lying structures. PMID:21439255

  6. Big World of Small Neutrinos

    Science.gov Websites

    Neutrinos come in three flavors: electron neutrino, muon neutrino, or tau neutrino, and the three are complemented by antineutrinos. Some of the neutrinos we detect will look different (have a different flavor) compared to the time they were produced. Fig 1: Hubble image of the deep field.

  7. Ultra-high magnetic resonance imaging (MRI): a potential examination for deep brain stimulation devices and the limitation study concerning MRI-related heating injury.

    PubMed

    Chen, Ying-Chuan; Li, Jun-Ju; Zhu, Guan-Yu; Shi, Lin; Yang, An-Chao; Jiang, Yin; Zhang, Xin; Zhang, Jian-Guo

    2017-03-01

    Nowadays, patients with deep brain stimulation (DBS) devices are restricted to 1.5T magnetic resonance imaging (MRI) according to the guideline. Nevertheless, we conducted an experiment to test pathological changes near the leads under different MRI field strengths. Twenty-four male New Zealand rabbits were assigned to Group 1 (G1, n = 6, 7.0T, DBS), Group 2 (G2, n = 6, 3.0T, DBS), Group 3 (G3, n = 6, 1.5T, DBS), and Group 4 (G4, n = 6, 1.5T, paracentesis). DBS leads were implanted in G1, G2 and G3, targeting the left nucleus ventralis posterior thalami. Paracentesis was performed in G4. Twenty-four hours after the MRI scan, all animals were killed to examine pathological alterations (at different distances from the lead) via transmission electron microscopy. Our results suggest that the severity of tissue injury correlates with the distance to the electrode rather than the field strength of MRI. Thus far, the field strengths underlying the current restriction have shown no significantly different pathological changes.

  8. GOODS Far Infrared Imaging with Herschel

    NASA Astrophysics Data System (ADS)

    Frayer, David T.; Elbaz, D.; Dickinson, M.; GOODS-Herschel Team

    2010-01-01

    Most of the stars in galaxies formed at high redshift in dusty environments, where their energy was absorbed and re-radiated at infrared wavelengths. Similarly, much of the growth of nuclear black holes in active galactic nuclei (AGN) was also obscured from direct view at UV/optical and X-ray wavelengths. The Great Observatories Origins Deep Survey Herschel (GOODS-H) open time key program will obtain the deepest far-infrared view of the distant universe, mapping the history of galaxy growth and AGN activity over a broad swath of cosmic time. GOODS-H will image the GOODS-North field with the PACS and SPIRE instruments at 100 to 500 microns, matching the deep survey of GOODS-South in the guaranteed time key program. GOODS-H will also observe an ultradeep sub-field within GOODS-South with PACS, reaching the deepest flux limits planned for Herschel (0.6 mJy at 100 microns with S/N=5). GOODS-H data will detect thousands of luminous and ultraluminous infrared galaxies out to z=4 or beyond, measuring their far-infrared luminosities and spectral energy distributions, and providing the best constraints on star formation rates and AGN activity during this key epoch of galaxy and black hole growth in the young universe.

  9. Application of Deep Learning in GLOBELAND30-2010 Product Refinement

    NASA Astrophysics Data System (ADS)

    Liu, T.; Chen, X.

    2018-04-01

    GlobeLand30, one of the best Global Land Cover (GLC) products at 30-m resolution, has been widely used in many research fields. Due to the significant spectral confusion among different land cover types and the limited textural information of Landsat data, the overall accuracy of GlobeLand30 is about 80%. Although such accuracy is much higher than that of most other global land cover products, it cannot satisfy various applications. There is still a great need for an effective method to improve the quality of GlobeLand30. The explosion of high-resolution satellite imagery and the remarkable performance of deep learning on image classification provide a new opportunity to refine GlobeLand30. However, the performance of deep learning depends on the quality and quantity of training samples as well as the model training strategy. Therefore, this paper 1) proposes an automatic training sample generation method via Google Earth to build a large training sample set; and 2) explores the best training strategy for land cover classification using GoogleNet (Inception V3), one of the most widely used deep learning networks. The results show that fine-tuning from the first layer of Inception V3 using the rough large sample set is the best strategy. The retrained network was then applied to a selected area of Xi'an city as a case study of GlobeLand30 refinement. The experimental results indicate that the proposed approach, combining deep learning and Google Earth imagery, is a promising solution for further improving the accuracy of GlobeLand30.
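
    The winning strategy, fine-tuning Inception V3 from the first layer, might look like the sketch below. The 10-class head assumes GlobeLand30's ten first-level land cover classes, and the data pipeline is omitted; this is an illustration, not the paper's exact setup.

        from tensorflow.keras.applications import InceptionV3
        from tensorflow.keras import layers, Model

        # Start from ImageNet weights and fine-tune every layer
        # ("from the first layer"), the best strategy found in the study.
        base = InceptionV3(weights="imagenet", include_top=False, pooling="avg")
        out = layers.Dense(10, activation="softmax")(base.output)
        model = Model(base.input, out)
        for layer in model.layers:
            layer.trainable = True  # no frozen layers
        model.compile(optimizer="adam", loss="categorical_crossentropy")
        # model.fit(...) on samples generated automatically via Google Earth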

  10. VizieR Online Data Catalog: SL2S galaxy-scale sample of lens candidates (Gavazzi+, 2014)

    NASA Astrophysics Data System (ADS)

    Gavazzi, R.; Marshall, P. J.; Treu, T.; Sonnenfeld, A.

    2017-06-01

    The CFHTLS is a major photometric survey of more than 450 nights over 5 yr (started on 2003 June 1) using the MegaCam wide-field imager, which covers ~1 deg2 on the sky with a pixel size of 0.186". The CFHTLS has two components aimed at extragalactic studies: a Deep component consisting of four pencil-beam fields of 1 deg2 and a Wide component consisting of four mosaics covering 150 deg2 in total. Both components are imaged through five broadband filters. The data are pre-reduced at CFHT with the Elixir pipeline (http://www.cfht.hawaii.edu/Instruments/Elixir/), which removes the instrumental artifacts in individual exposures. The CFHTLS images are then astrometrically calibrated, photometrically inter-calibrated, resampled and stacked by the Terapix group at the Institut d'Astrophysique de Paris, and finally archived at the Canadian Astronomy Data Centre. (2 data files).

  11. Receiver Function Imaging of Crustal and Lithospheric Structure Beneath the Jalisco Block and Western Michoacan, Mexico.

    NASA Astrophysics Data System (ADS)

    Reyes Alfaro, G.; Cruz-Atienza, V. M.; Perez-Campos, X.; Reyes Dávila, G. A.

    2014-12-01

    We used a receiver function technique to image western Mexico, a unique area with several active seismic and volcanic zones, including the triple junction of the Rivera, Cocos and North American plates and the Colima volcano complex (CVC), the most active in Mexico. Clear images of the distribution of the crust and the lithosphere-asthenosphere boundary are obtained using P-to-S receiver functions (RFs) from ~80 broadband stations recorded by the Mapping the Rivera Subduction Zone (MARS) experiment, the Colima Volcano Deep Seismic Experiment (CODEX) and a local network (RESCO), which allowed us to considerably increase the teleseismic database used in the project. For imaging, we constructed several 2-D profiles of depth-transformed RFs to delineate the seismic discontinuities of the region. Low seismic velocities associated with the Michoacan-Guanajuato and the Mascota-Ayutla-Tapalpa volcanic fields are also observed. Most impressively, a large, well-delineated magma body is recognized 100 km underneath the CVC, along with a likely related depression of the Moho discontinuity just above it. These results provide more tools for a better understanding of the deep processes that ultimately control eruptive behavior in the region.

  12. Rapid Object Detection Systems, Utilising Deep Learning and Unmanned Aerial Systems (uas) for Civil Engineering Applications

    NASA Astrophysics Data System (ADS)

    Griffiths, D.; Boehm, J.

    2018-05-01

    With deep learning approaches now out-performing traditional image processing techniques for image understanding, this paper assesses the potential of rapidly generated Convolutional Neural Networks (CNNs) for applied engineering purposes. Three CNNs are trained on 275 UAS-derived and freely available online images for object detection of 3 m2 segments of railway track. These include two models based on the Faster R-CNN object detection algorithm (Resnet and Inception-Resnet) as well as the novel one-stage Focal Loss network architecture (Retinanet). Model performance was assessed with respect to three accuracy metrics. The first two consisted of Intersection over Union (IoU) with thresholds of 0.5 and 0.1. The third assesses accuracy based on the proportion of track covered by object detection proposals against total track length. In under six hours of training (and two hours of manual labelling), the models detected 91.3%, 83.1% and 75.6% of track in the 500 test images acquired from the UAS survey for Retinanet, Resnet and Inception-Resnet, respectively. We then discuss the potential for applications of such systems within the engineering field for a range of scenarios.
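
    The IoU metric used for the first two accuracy measures is straightforward; a minimal implementation for axis-aligned boxes given as (x1, y1, x2, y2) follows.

        def iou(box_a, box_b):
            """Intersection over Union of two axis-aligned boxes."""
            ix1 = max(box_a[0], box_b[0])
            iy1 = max(box_a[1], box_b[1])
            ix2 = min(box_a[2], box_b[2])
            iy2 = min(box_a[3], box_b[3])
            inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
            area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
            area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
            return inter / float(area_a + area_b - inter)

        # A proposal counts as a hit at the chosen threshold (0.5 or 0.1):
        print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25/175, about 0.14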

  13. Galaxy evolution at high-redshift: Millimeter-wavelength surveys with the AzTEC camera

    NASA Astrophysics Data System (ADS)

    Scott, Kimberly S.

    Galaxies detected by their thermal dust emission at submillimeter (submm) and millimeter (mm) wavelengths comprise a population of massive, intensely star-forming systems in the early Universe. These "submm/mm-galaxies", or SMGs, likely represent an important phase in the assembly and/or evolution of massive galaxies and are thought to be the progenitors of massive elliptical galaxies. While their projected number density as a function of source brightness provides key constraints on models of galaxy evolution, SMG surveys carried out over the past twelve years with the first generation of submm/mm-wavelength cameras have not imaged a large enough area to sufficient depths to provide the statistical power needed to discriminate between competing galaxy evolution scenarios. In this dissertation, we present the results from SMG surveys carried out over the past four years using the new sensitive mm-wavelength camera AzTEC. With the improved mapping speed of the AzTEC camera combined with dedicated telescope time devoted to deep, large-area extragalactic surveys, we have tripled both the area surveyed towards blank-fields (that is, regions with no known galaxy over-densities) at submm/mm wavelengths and the total number of detected SMGs. Here, we describe the properties and performance of the AzTEC instrument while operating on the James Clerk Maxwell Telescope (JCMT) and the Atacama Submillimeter Telescope Experiment (ASTE). We then present the results from two of the blank-field regions imaged with AzTEC: the JCMT/COSMOS field, which we discovered is over-dense in the number of very bright SMGs, and the ASTE survey of the Great Observatories Origins Deep Survey-South field, which represents one of the deepest surveys ever carried out at submm/mm wavelengths. Finally, we combine the results from all of the blank-fields imaged with AzTEC while operating on the JCMT and the ASTE to calculate the most accurate measurements to date of the SMG number counts.

  14. Optical And Near-infrared Variability Among Distant Galactic Nuclei Of The CANDELS EGS Field

    NASA Astrophysics Data System (ADS)

    Grogin, Norman A.; Dahlen, T.; Donley, J.; Koekemoer, A. M.; Salvato, M.; CANDELS Collaboration

    2014-01-01

    The CANDELS HST Multi-cycle Treasury Program completed its observations of the EGS field in May 2013. The coverage comprises WFC3/IR exposures in J-band and H-band across a contiguous 200 square arcminutes, and coordinated parallel ACS/WFC exposures in V-band and I-band across a contiguous 270 square arcminutes that largely overlaps the WFC3/IR coverage. These observations were split between two epochs with 52-day spacing for the primary purpose of high-redshift supernovae (SNe) detection and follow-up. However, this combination of sensitivity, high resolution, and time spacing is also well-suited to detect optical and near-infrared variability ("ONIV") among moderate- to high-redshift galaxy nuclei (H<25AB mag; I<26AB mag). These data are sensitive to rest-frame variability time-scales of up to several weeks, and in combination with the original EGS ACS imaging from 2004, to time-scales of up to several years in the V- and I-bands. The overwhelming majority of these variable galaxy nuclei will be AGN; the small fraction arising from SNe have already been meticulously culled by the CANDELS high-redshift SNe search effort. These ONIV galaxy nuclei potentially represent a significant addition to the census of distant lower-luminosity AGN subject to multi-wavelength scrutiny with CANDELS. We present the preliminary results of our EGS variability analysis, including a comparison of the HST ONIVs with the known AGN candidates in the field from deep Spitzer and Chandra imaging, and from extensive ground-based optical spectroscopy as well as HST IR-grism spectroscopy. We also assess the redshift distribution of the ONIVs from both spectroscopy and from robust SED-fitting incorporating ancillary deep ground-based imaging along with the CANDELS VIJH photometry. We compare these results with our prior variability analysis of the similarly-observed CANDELS UDS field from 2011 and CANDELS COSMOS field from 2012.

  15. Iterative deep convolutional encoder-decoder network for medical image segmentation.

    PubMed

    Jung Uk Kim; Hak Gu Kim; Yong Man Ro

    2017-07-01

    In this paper, we propose a novel medical image segmentation method using an iterative deep learning framework. We combine an iterative learning approach with an encoder-decoder network to improve segmentation results, which enables precise localization of the regions of interest (ROIs), including complex shapes or detailed textures of medical images, in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework is able to yield excellent medical image segmentation performance for various medical images. The effectiveness of the proposed method was demonstrated by comparison with other state-of-the-art medical image segmentation methods.

  16. Application of deep learning to the classification of images from colposcopy.

    PubMed

    Sato, Masakazu; Horie, Koji; Hara, Aki; Miyamoto, Yuichiro; Kurihara, Kazuko; Tomio, Kensuke; Yokota, Harushige

    2018-03-01

    The objective of the present study was to investigate whether deep learning could be applied successfully to the classification of images from colposcopy. For this purpose, a total of 158 patients who underwent conization were enrolled, and medical records and data from the gynecological oncology database were retrospectively reviewed. Deep learning was performed with the Keras neural network and TensorFlow libraries. Using preoperative images from colposcopy as the input data and deep learning technology, the patients were classified into three groups [severe dysplasia, carcinoma in situ (CIS) and invasive cancer (IC)]. A total of 485 images were obtained for the analysis, of which 142 images were of severe dysplasia (2.9 images/patient), 257 were of CIS (3.3 images/patient), and 86 were of IC (4.1 images/patient). Of these, 233 images were captured with a green filter, and the remaining 252 were captured without a green filter. Following the application of L2 regularization, L1 regularization, dropout and data augmentation, the accuracy of the validation dataset was ~50%. Although the present study is preliminary, the results indicated that deep learning may be applied to classify colposcopy images.

  17. Application of deep learning to the classification of images from colposcopy

    PubMed Central

    Sato, Masakazu; Horie, Koji; Hara, Aki; Miyamoto, Yuichiro; Kurihara, Kazuko; Tomio, Kensuke; Yokota, Harushige

    2018-01-01

    The objective of the present study was to investigate whether deep learning could be applied successfully to the classification of images from colposcopy. For this purpose, a total of 158 patients who underwent conization were enrolled, and medical records and data from the gynecological oncology database were retrospectively reviewed. Deep learning was performed with the Keras neural network and TensorFlow libraries. Using preoperative images from colposcopy as the input data and deep learning technology, the patients were classified into three groups [severe dysplasia, carcinoma in situ (CIS) and invasive cancer (IC)]. A total of 485 images were obtained for the analysis, of which 142 images were of severe dysplasia (2.9 images/patient), 257 were of CIS (3.3 images/patient), and 86 were of IC (4.1 images/patient). Of these, 233 images were captured with a green filter, and the remaining 252 were captured without a green filter. Following the application of L2 regularization, L1 regularization, dropout and data augmentation, the accuracy of the validation dataset was ~50%. Although the present study is preliminary, the results indicated that deep learning may be applied to classify colposcopy images. PMID:29456725
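
    The regularization steps named in the abstract (L2, L1, dropout, and data augmentation) can be sketched in Keras as follows; the architecture, image size, and hyperparameters are illustrative guesses, not the study's actual model.

        from tensorflow.keras import layers, models, regularizers
        from tensorflow.keras.preprocessing.image import ImageDataGenerator

        # Small three-class classifier (severe dysplasia, CIS, IC).
        model = models.Sequential([
            layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 3),
                          kernel_regularizer=regularizers.l2(1e-4)),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu",
                          kernel_regularizer=regularizers.l1(1e-5)),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dropout(0.5),  # dropout against overfitting
            layers.Dense(3, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="categorical_crossentropy")

        # Data augmentation to stretch a small image set (485 images here).
        augment = ImageDataGenerator(rotation_range=20, zoom_range=0.2,
                                     horizontal_flip=True)
        # model.fit(augment.flow(X_train, y_train), ...)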

  18. Fast Calcium Imaging with Optical Sectioning via HiLo Microscopy.

    PubMed

    Lauterbach, Marcel A; Ronzitti, Emiliano; Sternberg, Jenna R; Wyart, Claire; Emiliani, Valentina

    2015-01-01

    Imaging intracellular calcium concentration via reporters that change their fluorescence properties upon binding of calcium, referred to as calcium imaging, has revolutionized our way to probe neuronal activity non-invasively. To reach neurons densely located deep in the tissue, optical sectioning at a high acquisition rate is necessary but difficult to achieve in a cost-effective manner. Here we implement an accessible solution relying on HiLo microscopy to provide robust optical sectioning with a high frame rate in vivo. We show that large calcium signals can be recorded from dense neuronal populations at high acquisition rates. We quantify the optical sectioning capabilities and demonstrate the benefits of HiLo microscopy compared to wide-field microscopy for calcium imaging and 3D reconstruction. We apply HiLo microscopy to functional calcium imaging at 100 frames per second deep in biological tissues. This approach enables us to discriminate neuronal activity of motor neurons from different depths in the spinal cord of zebrafish embryos. We observe distinct time courses of calcium signals in somata and axons. We show that our method enables the removal of large fluctuations of the background fluorescence. Altogether, our setup can be implemented to provide efficient optical sectioning in vivo at low cost on a wide range of existing microscopes.

  19. Fast Calcium Imaging with Optical Sectioning via HiLo Microscopy

    PubMed Central

    Sternberg, Jenna R.; Wyart, Claire; Emiliani, Valentina

    2015-01-01

    Imaging intracellular calcium concentration via reporters that change their fluorescence properties upon binding of calcium, referred to as calcium imaging, has revolutionized our way to probe neuronal activity non-invasively. To reach neurons densely located deep in the tissue, optical sectioning at a high acquisition rate is necessary but difficult to achieve in a cost-effective manner. Here we implement an accessible solution relying on HiLo microscopy to provide robust optical sectioning with a high frame rate in vivo. We show that large calcium signals can be recorded from dense neuronal populations at high acquisition rates. We quantify the optical sectioning capabilities and demonstrate the benefits of HiLo microscopy compared to wide-field microscopy for calcium imaging and 3D reconstruction. We apply HiLo microscopy to functional calcium imaging at 100 frames per second deep in biological tissues. This approach enables us to discriminate neuronal activity of motor neurons from different depths in the spinal cord of zebrafish embryos. We observe distinct time courses of calcium signals in somata and axons. We show that our method enables the removal of large fluctuations of the background fluorescence. Altogether, our setup can be implemented to provide efficient optical sectioning in vivo at low cost on a wide range of existing microscopes. PMID:26625116

  20. Enabling automated magnetic resonance imaging-based targeting assessment during dipole field navigation

    NASA Astrophysics Data System (ADS)

    Latulippe, Maxime; Felfoul, Ouajdi; Dupont, Pierre E.; Martel, Sylvain

    2016-02-01

    The magnetic navigation of drugs in the vascular network promises to increase the efficacy and reduce the secondary toxicity of cancer treatments by targeting tumors directly. Recently, dipole field navigation (DFN) was proposed as the first method achieving both high field and high navigation gradient strengths for whole-body interventions in deep tissues. This is achieved by introducing large ferromagnetic cores around the patient inside a magnetic resonance imaging (MRI) scanner. However, doing so distorts the static field inside the scanner, which prevents imaging during the intervention. This limitation constrains DFN to open-loop navigation, exposing the patient to the risk of harmful toxicity in case of a navigation failure. Here, we are interested in periodically assessing drug targeting efficiency using MRI, even in the presence of a core. We demonstrate, using a clinical scanner, that it is in fact possible to acquire, in specific regions around a core, images of sufficient quality to perform this task. We show that the core can be moved inside the scanner to a position that minimizes the distortion effect in the region of interest for imaging. Moving the core can be done automatically using the gradient coils of the scanner, which then also enables the core to be repositioned to perform navigation to additional targets. The feasibility and potential of the approach are validated in an in vitro experiment demonstrating navigation and assessment at two targets.

  1. High-throughput isotropic mapping of whole mouse brain using multi-view light-sheet microscopy

    NASA Astrophysics Data System (ADS)

    Nie, Jun; Li, Yusha; Zhao, Fang; Ping, Junyu; Liu, Sa; Yu, Tingting; Zhu, Dan; Fei, Peng

    2018-02-01

    Light-sheet fluorescence microscopy (LSFM) uses an additional laser sheet to illuminate selective planes of the sample, thereby enabling three-dimensional imaging at high spatio-temporal resolution. These advantages make LSFM a promising tool for high-quality brain visualization. However, even with LSFM, the spatial resolution remains insufficient to resolve neural structures across a mesoscale whole mouse brain in three dimensions. At the same time, thick-tissue scattering prevents clear observation of the deep brain. Here we use a multi-view LSFM strategy to address this challenge, surpassing the resolution limit of a standard light-sheet microscope over a large field-of-view (FOV). As demonstrated by imaging an optically cleared mouse brain labelled with thy1-GFP, we achieve a brain-wide, isotropic cellular resolution of 3 μm. Besides the resolution enhancement, multi-view brain imaging can also recover signals otherwise lost to deep-tissue scattering and attenuation. Long-distance neural projections across encephalic regions can thus be identified and annotated.

  2. Star Formation in Distant Red Galaxies: Spitzer Observations in the Hubble Deep Field-South

    NASA Astrophysics Data System (ADS)

    Webb, Tracy M. A.; van Dokkum, Pieter; Egami, Eiichi; Fazio, Giovanni; Franx, Marijn; Gawiser, Eric; Herrera, David; Huang, Jiasheng; Labbé, Ivo; Lira, Paulina; Marchesini, Danilo; Maza, José; Quadri, Ryan; Rudnick, Gregory; van der Werf, Paul

    2006-01-01

    We present Spitzer 24 μm imaging of distant red galaxies (DRGs) in the Hubble Deep Field-South, with detected fluxes of 1.5~40 μJy, and conclude that the bulk of the DRG population is dusty active galaxies. A mid-infrared (MIR) color analysis with IRAC data suggests that the MIR fluxes are not dominated by buried AGNs, and we interpret the high detection rate as evidence for a high average star formation rate of <SFR> = 130+/-30 Msolar yr-1. From this, we infer that DRGs are important contributors to the cosmic star formation rate density at z~2, at a level of ~0.02 Msolar yr-1 Mpc-3 to our completeness limit of KAB=22.9 mag.

  3. The Origin of Clusters and Large-Scale Structures: Panoramic View of the High-z Universe

    NASA Astrophysics Data System (ADS)

    Ouchi, Masami

    We will report results of our on-going survey for proto-clusters and large-scale structures at z=3-6. We carried out very wide and deep optical imaging down to i=27 over a 1 deg^2 field of the Subaru/XMM-Newton Deep Field with the 8.2m Subaru Telescope. We obtain maps of the Universe traced by ~1,000 Ly-a galaxies at z=3, 4, and 6 and by ~10,000 Lyman break galaxies at z=3-6. These cosmic maps have a transverse dimension of ~150 Mpc x 150 Mpc in comoving units at these redshifts, and provide us, for the first time, with a panoramic view of the high-z Universe on scales from galaxies and clusters to large-scale structures. Major results and implications will be presented in our talk. (Part of this work is subject to press embargo.)

  4. Investigation of Parallel Radiofrequency Transmission for the Reduction of Heating in Long Conductive Leads in 3 Tesla Magnetic Resonance Imaging

    PubMed Central

    McElcheran, Clare E.; Yang, Benson; Anderson, Kevan J. T.; Golenstani-Rad, Laleh; Graham, Simon J.

    2015-01-01

    Deep Brain Stimulation (DBS) is increasingly used to treat a variety of brain diseases by sending electrical impulses to deep brain nuclei through long, electrically conductive leads. Magnetic resonance imaging (MRI) of patients pre- and post-implantation is desirable to target and position the implant, to evaluate possible side-effects and to examine DBS patients who have other health conditions. Although MRI is the preferred modality for pre-operative planning, MRI post-implantation is limited due to the risk of high local power deposition, and therefore tissue heating, at the tip of the lead. The localized power deposition arises from currents induced in the leads caused by coupling with the radiofrequency (RF) transmission field during imaging. In the present work, parallel RF transmission (pTx) is used to tailor the RF electric field to suppress coupling effects. Electromagnetic simulations were performed for three pTx coil configurations with 2, 4, and 8-elements, respectively. Optimal input voltages to minimize coupling, while maintaining RF magnetic field homogeneity, were determined for all configurations using a Nelder-Mead optimization algorithm. Resulting electric and magnetic fields were compared to that of a 16-rung birdcage coil. Experimental validation was performed with a custom-built 4-element pTx coil. In simulation, 95-99% reduction of the electric field at the tip of the lead was observed between the various pTx coil configurations and the birdcage coil. Maximal reduction in E-field was obtained with the 8-element pTx coil. Magnetic field homogeneity was comparable to the birdcage coil for the 4- and 8-element pTx configurations. In experiment, a temperature increase of 2±0.15°C was observed at the tip of the wire using the birdcage coil, whereas negligible increase (0.2±0.15°C) was observed with the optimized pTx system. Although further research is required, these initial results suggest that the concept of optimizing pTx to reduce DBS heating effects holds considerable promise. PMID:26237218

  5. Investigation of Parallel Radiofrequency Transmission for the Reduction of Heating in Long Conductive Leads in 3 Tesla Magnetic Resonance Imaging.

    PubMed

    McElcheran, Clare E; Yang, Benson; Anderson, Kevan J T; Golenstani-Rad, Laleh; Graham, Simon J

    2015-01-01

    Deep Brain Stimulation (DBS) is increasingly used to treat a variety of brain diseases by sending electrical impulses to deep brain nuclei through long, electrically conductive leads. Magnetic resonance imaging (MRI) of patients pre- and post-implantation is desirable to target and position the implant, to evaluate possible side-effects and to examine DBS patients who have other health conditions. Although MRI is the preferred modality for pre-operative planning, MRI post-implantation is limited due to the risk of high local power deposition, and therefore tissue heating, at the tip of the lead. The localized power deposition arises from currents induced in the leads caused by coupling with the radiofrequency (RF) transmission field during imaging. In the present work, parallel RF transmission (pTx) is used to tailor the RF electric field to suppress coupling effects. Electromagnetic simulations were performed for three pTx coil configurations with 2, 4, and 8-elements, respectively. Optimal input voltages to minimize coupling, while maintaining RF magnetic field homogeneity, were determined for all configurations using a Nelder-Mead optimization algorithm. Resulting electric and magnetic fields were compared to that of a 16-rung birdcage coil. Experimental validation was performed with a custom-built 4-element pTx coil. In simulation, 95-99% reduction of the electric field at the tip of the lead was observed between the various pTx coil configurations and the birdcage coil. Maximal reduction in E-field was obtained with the 8-element pTx coil. Magnetic field homogeneity was comparable to the birdcage coil for the 4- and 8-element pTx configurations. In experiment, a temperature increase of 2±0.15°C was observed at the tip of the wire using the birdcage coil, whereas negligible increase (0.2±0.15°C) was observed with the optimized pTx system. Although further research is required, these initial results suggest that the concept of optimizing pTx to reduce DBS heating effects holds considerable promise.
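
    The optimization step, a Nelder-Mead search for per-element drive voltages that minimize the tip electric field while preserving B1 homogeneity, can be caricatured with SciPy. The linear "field maps" below are random placeholders for simulated electromagnetic fields, and the cost weighting is arbitrary.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        n_elem = 4  # e.g., the custom-built 4-element pTx coil
        E_map = rng.normal(size=n_elem) + 1j * rng.normal(size=n_elem)
        B1_map = rng.normal(size=(50, n_elem)) + 1j * rng.normal(size=(50, n_elem))

        def cost(v):
            volts = v[:n_elem] + 1j * v[n_elem:]  # real vector -> complex drives
            e_tip = abs(E_map @ volts)            # |E| at the lead tip
            b1 = np.abs(B1_map @ volts)           # |B1| over a sample region
            homogeneity = b1.std() / b1.mean()    # coefficient of variation
            return e_tip + 10.0 * homogeneity     # weighted trade-off

        res = minimize(cost, np.ones(2 * n_elem), method="Nelder-Mead")
        print(res.fun, res.x)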

  6. SITELLE: a wide-field imaging Fourier transform spectrometer for the Canada-France-Hawaii Telescope

    NASA Astrophysics Data System (ADS)

    Drissen, L.; Bernier, A.-P.; Rousseau-Nepton, L.; Alarie, A.; Robert, C.; Joncas, G.; Thibault, S.; Grandmont, F.

    2010-07-01

    We describe the concept of a new instrument for the Canada-France-Hawaii telescope (CFHT), SITELLE (Spectromètre Imageur à Transformée de Fourier pour l'Etude en Long et en Large de raies d'Emission), as well as a science case and a technical study of its preliminary design. SITELLE will be an imaging Fourier transform spectrometer capable of obtaining the visible (350 nm - 950 nm) spectrum of every source of light in a field of view of 15 arcminutes, with 100% spatial coverage and a spectral resolution ranging from R = 1 (deep panchromatic image) to R = 10^4 (for gas dynamics). SITELLE will cover a field of view 100 to 1000 times larger than traditional integral field spectrographs, such as GMOS-IFU on Gemini or the future MUSE on the VLT. It is a legacy from BEAR, the first imaging FTS installed on the CFHT, and the direct successor of SpIOMM, a similar instrument attached to the 1.6-m telescope of the Observatoire du Mont-Mégantic in Québec. SITELLE will be used to study the structure and kinematics of HII regions and ejecta around evolved stars in the Milky Way, emission-line stars in clusters, abundances in nearby gas-rich galaxies, and the star formation rate in distant galaxies.

  7. X-UDS: The Chandra Legacy Survey of the UKIDSS Ultra Deep Survey Field

    NASA Astrophysics Data System (ADS)

    Kocevski, Dale D.; Hasinger, Guenther; Brightman, Murray; Nandra, Kirpal; Georgakakis, Antonis; Cappelluti, Nico; Civano, Francesca; Li, Yuxuan; Li, Yanxia; Aird, James; Alexander, David M.; Almaini, Omar; Brusa, Marcella; Buchner, Johannes; Comastri, Andrea; Conselice, Christopher J.; Dickinson, Mark A.; Finoguenov, Alexis; Gilli, Roberto; Koekemoer, Anton M.; Miyaji, Takamitsu; Mullaney, James R.; Papovich, Casey; Rosario, David; Salvato, Mara; Silverman, John D.; Somerville, Rachel S.; Ueda, Yoshihiro

    2018-06-01

    We present the X-UDS survey, a set of wide and deep Chandra observations of the Subaru-XMM Deep/UKIDSS Ultra Deep Survey (SXDS/UDS) field. The survey consists of 25 observations that cover a total area of 0.33 deg^2. The observations are combined to provide a nominal depth of ∼600 ks in the central 100 arcmin^2 region of the field that has been imaged with Hubble/WFC3 by the CANDELS survey and ∼200 ks in the remainder of the field. In this paper, we outline the survey's scientific goals, describe our observing strategy, and detail our data reduction and point source detection algorithms. Our analysis has resulted in a total of 868 band-merged point sources detected with a false-positive Poisson probability of <1 × 10^-4. In addition, we present the results of an X-ray spectral analysis and provide best-fitting neutral hydrogen column densities, N_H, as well as a sample of 51 Compton-thick active galactic nucleus candidates. Using this sample, we find the intrinsic Compton-thick fraction to be 30%-35% over a wide range in redshift (z = 0.1-3), suggesting the obscured fraction does not evolve very strongly with epoch. However, if we assume that the Compton-thick fraction is dependent on luminosity, as is seen for Compton-thin sources, then our results are consistent with a rise in the obscured fraction out to z ∼ 3. Finally, an examination of the host morphologies of our Compton-thick candidates shows a high fraction of morphological disturbances, in agreement with our previous results. All data products described in this paper are made available via a public website.
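
    The false-positive criterion quoted above has a compact interpretation: a source is accepted only if the probability that its counts arise from a pure-background Poisson fluctuation is below 1 × 10^-4. A toy illustration (the background expectation and observed counts are invented numbers, not survey values):

      # Toy Poisson false-positive test for X-ray point-source detection.
      from scipy.stats import poisson

      mu_bkg = 3.2   # assumed background counts expected in the source aperture
      counts = 14    # counts actually observed in the aperture

      # P(>= counts | background alone): survival function evaluated at counts - 1.
      p_false = poisson.sf(counts - 1, mu_bkg)
      print(f"P(>= {counts} counts | background) = {p_false:.2e}")
      print("detected" if p_false < 1e-4 else "not detected")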

  8. A projective reconstruction method of underground or hidden structures using atmospheric muon absorption data

    NASA Astrophysics Data System (ADS)

    Bonechi, L.; D'Alessandro, R.; Mori, N.; Viliani, L.

    2015-02-01

    Muon absorption radiography is an imaging technique based on the analysis of the attenuation of the cosmic-ray muon flux after traversing an object under examination. While this technique is now reaching maturity in the field of volcanology for imaging the innermost parts of volcanic cones, its applicability to other fields of research has not yet been demonstrated. In this paper we present a study concerning the application of the muon absorption radiography technique to the field of archaeology, and we propose a method to search for underground cavities and structures hidden a few metres deep in the soil (patent [1]). An original geometric treatment of the reconstructed muon tracks, based on the comparison of the measured flux with a reference simulated flux, and the preliminary results of specific simulations are discussed in detail.
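
    In sketch form, the proposed comparison amounts to taking, for each bin of reconstructed track directions, the ratio of the measured muon flux to the simulated reference flux: directions that cross a hidden cavity transmit more muons than the reference predicts. A minimal illustration with synthetic numbers (all values invented):

      # Flag angular bins whose measured muon counts significantly exceed the
      # simulated reference, as expected for lines of sight crossing a cavity.
      import numpy as np

      measured = np.array([120, 118, 131, 167, 125])             # counts per bin (synthetic)
      simulated = np.array([119.0, 121.0, 128.0, 126.0, 124.0])  # reference prediction

      ratio = measured / simulated
      sigma = np.sqrt(measured) / simulated   # Poisson uncertainty on the ratio
      excess = (ratio - 1.0) / sigma          # significance of the excess
      print(np.where(excess > 3)[0])          # candidate cavity directions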

  9. Fully convolutional network with cluster for semantic segmentation

    NASA Astrophysics Data System (ADS)

    Ma, Xiao; Chen, Zhongbi; Zhang, Jianlin

    2018-04-01

    At present, image semantic segmentation is an active research topic for scientists in the fields of computer vision and artificial intelligence. In particular, the extensive research on deep neural networks for image recognition has greatly promoted the development of semantic segmentation. This paper puts forward a method based on a fully convolutional network combined with the k-means clustering algorithm. The clustering step uses the image's low-level features and initializes the cluster centers from a super-pixel segmentation; within each cluster region, the set of points with low reliability, which are likely to be misclassified, is corrected using the set of points with high reliability. This method refines the segmentation of the target contour and improves the accuracy of the image segmentation.
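
    The refinement step can be sketched as follows: cluster the pixels on low-level features, then, inside each cluster, relabel the low-confidence pixels with the majority class of the high-confidence ones. The sketch below is a simplified stand-in for the paper's method (it uses the default k-means++ initialization rather than super-pixel-initialized centers, and all names and thresholds are illustrative):

      # Cluster-guided correction of low-reliability FCN predictions.
      import numpy as np
      from sklearn.cluster import KMeans

      def refine_segmentation(probs, image, n_clusters=64, conf_thresh=0.9):
          """probs: (H, W, C) FCN softmax output; image: (H, W, 3) RGB."""
          H, W, C = probs.shape
          labels = probs.argmax(axis=-1).ravel()   # per-pixel class
          conf = probs.max(axis=-1).ravel()        # per-pixel reliability
          yy, xx = np.mgrid[0:H, 0:W]
          feats = np.column_stack([                # low-level features: color + position
              image.reshape(-1, 3).astype(float) / 255.0,
              yy.ravel() / H, xx.ravel() / W,
          ])
          clusters = KMeans(n_clusters=n_clusters, n_init=4).fit_predict(feats)
          refined = labels.copy()
          for k in range(n_clusters):
              idx = np.where(clusters == k)[0]
              trusted = idx[conf[idx] >= conf_thresh]
              if trusted.size:                     # majority vote of reliable pixels
                  majority = np.bincount(labels[trusted], minlength=C).argmax()
                  refined[idx[conf[idx] < conf_thresh]] = majority
          return refined.reshape(H, W)

      probs = np.random.dirichlet(np.ones(5), size=(64, 64))   # fake FCN output
      image = np.random.randint(0, 256, (64, 64, 3))           # fake image
      refined = refine_segmentation(probs, image)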

  10. DeepVel: Deep learning for the estimation of horizontal velocities at the solar surface

    NASA Astrophysics Data System (ADS)

    Asensio Ramos, A.; Requerey, I. S.; Vitas, N.

    2017-07-01

    Many phenomena taking place in the solar photosphere are controlled by plasma motions. Although the line-of-sight component of the velocity can be estimated using the Doppler effect, we do not have direct spectroscopic access to the components that are perpendicular to the line of sight. These components are typically estimated using methods based on local correlation tracking. We have designed DeepVel, an end-to-end deep neural network that produces an estimation of the velocity at every single pixel, every time step, and at three different heights in the atmosphere from just two consecutive continuum images. We compare DeepVel with local correlation tracking and find that they give very similar results in the time- and spatially-averaged cases. We use the network to study the evolution in height of the horizontal velocity field in fragmenting granules, supporting the buoyancy-braking mechanism for the formation of intergranular lanes in these granules. We also show that DeepVel can capture very small vortices, so that we can potentially expand the scaling cascade of vortices to very small sizes and durations. The movie attached to Fig. 3 is available at http://www.aanda.org
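
    Architecturally, the idea reduces to a fully convolutional network mapping two consecutive continuum frames (stacked as input channels) to six output channels: (vx, vy) at each of three heights. A toy PyTorch sketch, with layer count and width chosen for illustration rather than taken from the published network:

      import torch
      import torch.nn as nn

      class ToyDeepVel(nn.Module):
          """Toy stand-in: 2 consecutive continuum frames in, per-pixel
          (vx, vy) at 3 atmospheric heights out."""
          def __init__(self, width=64):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(2, width, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(width, 6, 3, padding=1),  # 2 components x 3 heights
              )

          def forward(self, x):      # x: (batch, 2, H, W)
              return self.net(x)     # (batch, 6, H, W)

      frames = torch.randn(1, 2, 128, 128)   # two consecutive frames
      velocities = ToyDeepVel()(frames)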

  11. New constraints on the average escape fraction of Lyman continuum radiation in z ∼ 4 galaxies from the VIMOS Ultra Deep Survey (VUDS)

    NASA Astrophysics Data System (ADS)

    Marchi, F.; Pentericci, L.; Guaita, L.; Ribeiro, B.; Castellano, M.; Schaerer, D.; Hathi, N. P.; Lemaux, B. C.; Grazian, A.; Le Fèvre, O.; Garilli, B.; Maccagni, D.; Amorin, R.; Bardelli, S.; Cassata, P.; Fontana, A.; Koekemoer, A. M.; Le Brun, V.; Tasca, L. A. M.; Thomas, R.; Vanzella, E.; Zamorani, G.; Zucca, E.

    2017-05-01

    Context. Determining the average fraction of Lyman continuum (LyC) photons escaping high-redshift galaxies is essential for understanding how reionization proceeded in the z > 6 Universe. Aims: We want to measure the LyC signal from a sample of sources in the Chandra Deep Field South (CDFS) and COSMOS fields for which ultra-deep VIMOS spectroscopy as well as multi-wavelength Hubble Space Telescope (HST) imaging are available. Methods: We select a sample of 46 galaxies at z ∼ 4 from the VIMOS Ultra Deep Survey (VUDS) database, such that the VUDS spectra contain the LyC part, that is, the rest-frame range 880-910 Å. Taking advantage of the HST imaging, we apply a careful cleaning procedure and reject all the sources showing nearby clumps with different colours that could potentially be lower-redshift interlopers. After this procedure, the sample is reduced to 33 galaxies. We measure the ratio between ionizing flux (LyC at 895 Å) and non-ionizing emission (at 1500 Å) for all individual sources. We also produce a normalized stacked spectrum of all sources. Results: Assuming an intrinsic average Lν(1470)/Lν(895) of 3, we estimate the individual and average relative escape fraction. We do not detect ionizing radiation from any individual source, although we identify a possible LyC emitter with very high Lyα equivalent width (EW). From the stacked spectrum and assuming a mean transmissivity for the sample, we measure a relative escape fraction. We also look for correlations between the limits in the LyC flux and source properties and find a tentative correlation between LyC flux and the EW of the Lyα emission line. Conclusions: Our results imply that the LyC flux emitted by V = 25-26 star-forming galaxies at z ∼ 4 is at most very modest, in agreement with previous upper limits from studies based on broad- and narrow-band imaging. Based on data obtained with the European Southern Observatory Very Large Telescope, Paranal, Chile, under Large Program 185.A-0791.
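
    For reference, the relative escape fraction estimated in studies of this kind is conventionally defined as

      \[
      f_{\mathrm{esc}}^{\mathrm{rel}}
        = \frac{(L_{1500}/L_{895})_{\mathrm{int}}\,(f_{895}/f_{1500})_{\mathrm{obs}}}
               {T_{\mathrm{IGM}}},
      \qquad T_{\mathrm{IGM}} = e^{-\tau_{\mathrm{IGM}}},
      \]

    where the intrinsic luminosity ratio is the assumed value of 3 quoted above, the observed flux ratio comes from the individual or stacked spectra, and T_IGM is the mean intergalactic transmissivity at the LyC wavelength.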

  12. SchNet - A deep learning architecture for molecules and materials

    NASA Astrophysics Data System (ADS)

    Schütt, K. T.; Sauceda, H. E.; Kindermans, P.-J.; Tkatchenko, A.; Müller, K.-R.

    2018-06-01

    Deep learning has led to a paradigm shift in artificial intelligence, including web, text, and image search, speech recognition, as well as bioinformatics, with growing impact in chemical physics. Machine learning, in general, and deep learning, in particular, are ideally suited for representing quantum-mechanical interactions, enabling us to model nonlinear potential-energy surfaces or enhancing the exploration of chemical compound space. Here we present the deep learning architecture SchNet that is specifically designed to model atomistic systems by making use of continuous-filter convolutional layers. We demonstrate the capabilities of SchNet by accurately predicting a range of properties across chemical space for molecules and materials, where our model learns chemically plausible embeddings of atom types across the periodic table. Finally, we employ SchNet to predict potential-energy surfaces and energy-conserving force fields for molecular dynamics simulations of small molecules and perform an exemplary study on the quantum-mechanical properties of C20-fullerene that would have been infeasible with regular ab initio molecular dynamics.
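
    The continuous-filter convolution at the core of SchNet can be sketched in a few lines: interatomic distances are expanded in radial basis functions, a filter-generating network maps each expansion to a per-feature filter, and each atom's feature vector is updated by a filter-weighted sum over the other atoms. A toy numpy version with random weights, a single linear filter generator, and no cutoff or self-interaction handling (the real model uses a small MLP with shifted-softplus activations and learned embeddings):

      # Toy continuous-filter convolution (cfconv) over a handful of atoms.
      import numpy as np

      rng = np.random.default_rng(0)
      n_atoms, n_feat, n_rbf = 5, 8, 16

      def rbf_expand(d, centers, gamma=10.0):
          """Expand distances in Gaussian radial basis functions."""
          return np.exp(-gamma * (d[..., None] - centers) ** 2)

      pos = rng.standard_normal((n_atoms, 3))         # toy atomic coordinates
      x = rng.standard_normal((n_atoms, n_feat))      # toy atom-wise features
      centers = np.linspace(0.0, 3.0, n_rbf)          # RBF grid over distances
      W = 0.1 * rng.standard_normal((n_rbf, n_feat))  # filter-generating weights

      d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)  # (n, n)
      filters = rbf_expand(d, centers) @ W            # (n, n, n_feat) continuous filters
      x_new = (filters * x[None, :, :]).sum(axis=1)   # filter-weighted neighbor sum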

  13. Applications of Deep Learning and Reinforcement Learning to Biological Data.

    PubMed

    Mahmud, Mufti; Kaiser, Mohammed Shamim; Hussain, Amir; Vassanelli, Stefano

    2018-06-01

    Rapid advances in hardware-based technologies during the past decades have opened up new possibilities for life scientists to gather multimodal data in various application domains, such as omics, bioimaging, medical imaging, and (brain/body)-machine interfaces. These have generated novel opportunities for development of dedicated data-intensive machine learning techniques. In particular, recent research in deep learning (DL), reinforcement learning (RL), and their combination (deep RL) promise to revolutionize the future of artificial intelligence. The growth in computational power accompanied by faster and increased data storage, and declining computing costs have already allowed scientists in various fields to apply these techniques on data sets that were previously intractable owing to their size and complexity. This paper provides a comprehensive survey on the application of DL, RL, and deep RL techniques in mining biological data. In addition, we compare the performances of DL techniques when applied to different data sets across various application domains. Finally, we outline open issues in this challenging research area and discuss future development perspectives.

  14. Geological characteristics of the Shinkai Seep Field, a serpentinite-hosted ecosystem in the Southern Mariana Forearc

    NASA Astrophysics Data System (ADS)

    Ohara, Y.; Stern, R. J.; Martinez, F.; Michibayashi, K.; Reagan, M. K.; Fujikura, K.; Watanabe, H.; Ishii, T.; Kelley, K. A.

    2012-12-01

    Most hydrothermal vents along mid-ocean spreading ridges are high-temperature, sulfide-rich, and low-pH (acidic) environments. For this reason, the discovery of the Lost City hydrothermal field on the Mid-Atlantic Ridge has stimulated interest in the role of serpentinization of peridotite in generating H2- and CH4-rich fluids and associated carbonate chimneys, as well as in the biological communities adapted to highly reduced, alkaline environments. A new serpentinite-hosted ecosystem, the Shinkai Seep Field (SSF), was discovered by a Shinkai 6500 dive on the inner trench slope of the southern Mariana Trench, near the Challenger Deep, during the YK10-12 cruise of R/V Yokosuka in September 2010. Abundant chemosynthetic biological communities, principally consisting of vesicomyid clams, are associated with serpentinized peridotite in the SSF. Serpentinization beneath several hydrothermal sites on the Mid-Atlantic Ridge is controlled by interacting seawater and peridotite, variably influenced by magmatic heat. In contrast, the SSF is located on a deep inner trench slope where a magmatic heat contribution is unlikely. Instead, the serpentinization reactions feeding the SSF may be controlled by persistent fluid flow from the subducting slab. Slab-derived fluid flow probably occurs through fractures, because no serpentinite mud volcano can be discerned along the southern Mariana forearc. Deep-towed IMI-30 sonar backscatter imaging during the TN273 cruise of R/V Thomas G. Thompson in January 2012 indicates that the SSF is associated with a small, low-backscatter feature that may be a small mound. There are 20 or more of these features in the imaged area, each ~200 m wide and ~200 m to ~700 m long. Since the southern Mariana forearc is heavily faulted, with a deep geology that is dominated by peridotite, more SSF-type seeps are likely to exist along the forearc above the Challenger Deep. The discovery of the SSF suggests that serpentinite-hosted vents may be more widespread on the ocean floor than presently known. The discovery further indicates that such serpentinite-hosted low-temperature fluid vents can sustain high-biomass communities and has implications for the chemical budget of the oceans and the distribution of abyssal chemosynthetic life. Since we know nothing about the chemistry and microbiology of the SSF, we hope to return for further studies with Shinkai 6500 in 2013.

  15. A novel biomedical image indexing and retrieval system via deep preference learning.

    PubMed

    Pang, Shuchao; Orgun, Mehmet A; Yu, Zhezhou

    2018-05-01

    The traditional biomedical image retrieval methods, as well as content-based image retrieval (CBIR) methods originally designed for non-biomedical images, either consider only pixel and low-level features to describe an image or use deep features but still leave considerable room for improving both accuracy and efficiency. In this work, we propose a new approach which exploits deep learning technology to extract high-level and compact features from biomedical images. The deep feature extraction process leverages multiple hidden layers to capture substantial feature structures of high-resolution images and represent them at different levels of abstraction, leading to improved performance for indexing and retrieval of biomedical images. We exploit the currently popular multi-layered deep neural networks, namely stacked denoising autoencoders (SDAE) and convolutional neural networks (CNN), to represent the discriminative features of biomedical images by transferring the feature representations and parameters of deep neural networks pre-trained on another domain. Moreover, in order to index all the images and find those similar to a query, we introduce preference learning technology to train and learn a preference model for the query image, which can output a similarity ranking list of images from a biomedical image database. To the best of our knowledge, this paper introduces preference learning technology into biomedical image retrieval for the first time. We evaluate the performance of two powerful algorithms based on our proposed system and compare them with popular biomedical image indexing approaches and existing regular image retrieval methods in detailed experiments over several well-known public biomedical image databases. Based on different criteria for the evaluation of retrieval performance, the experimental results demonstrate that our proposed algorithms outperform the state-of-the-art techniques in indexing biomedical images. We propose a novel and automated indexing system based on deep preference learning to characterize biomedical images for developing computer-aided diagnosis (CAD) systems in healthcare. Our proposed system shows outstanding indexing ability and high efficiency for biomedical image retrieval applications, and it can be used to collect and annotate high-resolution images in a biomedical database for further biomedical image research and applications. Copyright © 2018 Elsevier B.V. All rights reserved.
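
    The preference-learning component can be illustrated with the standard pairwise formulation: the model scores two candidate images for a query, and the probability that the first should rank above the second is a logistic function of the score difference. A toy numpy sketch (the paper's actual preference model and deep features are more elaborate):

      # Logistic pairwise preference loss over (preferred, non-preferred) pairs.
      import numpy as np

      def pairwise_preference_loss(s_pos, s_neg):
          """Penalize whenever the preferred image is not scored above the other."""
          return np.log1p(np.exp(-(s_pos - s_neg)))

      pairs = np.array([[2.3, 0.7], [1.1, 1.4], [0.2, -0.9]])  # toy score pairs
      loss = pairwise_preference_loss(pairs[:, 0], pairs[:, 1]).mean()
      print(f"mean pairwise loss: {loss:.3f}")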

  16. Classification of CITES-listed and other neotropical Meliaceae wood images using convolutional neural networks.

    PubMed

    Ravindran, Prabu; Costa, Adriana; Soares, Richard; Wiedenhoeft, Alex C

    2018-01-01

    The current state-of-the-art for field wood identification to combat illegal logging relies on experienced practitioners using hand lenses, specialized identification keys, atlases of woods, and field manuals. Accumulation of this expertise is time-consuming and access to training is relatively rare compared to the international demand for field wood identification. A reliable, consistent and cost-effective field screening method is necessary for effective global-scale enforcement of international treaties such as the Convention on the International Trade in Endangered Species (CITES) or national laws (e.g. the US Lacey Act) governing timber trade and imports. We present highly effective computer vision classification models, based on deep convolutional neural networks, trained via transfer learning, to identify the woods of 10 neotropical species in the family Meliaceae, including CITES-listed Swietenia macrophylla, Swietenia mahagoni, Cedrela fissilis, and Cedrela odorata. We build and evaluate models to classify the 10 woods at the species and genus levels, with image-level model accuracy ranging from 87.4 to 97.5%, with the strongest performance by the genus-level model. Misclassified images are attributed to classes consistent with traditional wood anatomical results, and our species-level accuracy greatly exceeds the resolution of traditional wood identification. The end-to-end trained image classifiers that we present discriminate the woods based on digital images of the transverse surface of solid wood blocks, which are surfaces and images that can be prepared and captured in the field. Hence this work represents a strong proof-of-concept for using computer vision and convolutional neural networks to develop practical models for field screening timber and wood products to combat illegal logging.

  17. GIS-based technology for marine geohazards in LW3-1 Gas Field of the South China Sea

    NASA Astrophysics Data System (ADS)

    Su, Tianyun; Liu, Lejun; Li, Xishuang; Hu, Guanghai; Liu, Haixing; Zhou, Lin

    2013-04-01

    The exploration and exploitation of deep-water oil and gas are exposed to high-risk geohazards such as submarine landslides, soft clay creep, shallow gas, excess pore-water pressure, mud volcanoes or mud diapirs, salt domes and so on. Therefore, it is necessary to survey the seafloor topography, identify the unfavourable geological risks and investigate their environment and mechanism before exploiting deep-water oil and gas. Because of the complex environment, submarine phenomena and features such as marine geohazards cannot be recognized directly, so multi-disciplinary data are acquired and analysed comprehensively in order to obtain a clearer understanding of submarine processes. The data include multi-beam bathymetry data, sidescan sonar images, seismic data, shallow-bottom profiling images, boring data, etc. Such data sets nowadays grow rapidly to large volumes, but may be heterogeneous and have different resolutions, and it is difficult to manage and utilize them well with traditional means. GIS technology provides efficient and powerful tools and services for spatial data management, processing, analysis and visualization, which further promote submarine scientific research and engineering development. The Liwan 3-1 Gas Field, the first deep-water gas field in China, is located in the Zhu II Depression in the Zhujiang Basin along the continental slope of the northern South China Sea. The exploitation of this field is designed to establish subsea wellheads and to use a submarine pipeline for the transportation of oil. The deep-water section of the pipeline route in the gas field is to be selected to pass through the northern continental slope of the South China Sea. To avoid huge economic loss and ecological environmental damage, it is necessary to evaluate the geohazards for the establishment and safe operation of the pipeline. Based on previous scientific research results, several survey cruises have been carried out with ships and AUVs to collect multidisciplinary and massive submarine data such as multi-beam bathymetric data, sidescan sonar images, shallow-bottom profiling images, high-resolution multi-channel seismic data and boring test data. In order to make good use of these precious data, GIS technology is used in our research. A data model is designed to depict the structure, organization and relationships of the multidisciplinary submarine data. With these data models, a database is established to manage and share the attribute and spatial data effectively. Spatial datasets, such as contours, TIN models and DEM models, can be generated, and submarine characteristics such as slope, aspect, curvature and landslide volume can be calculated and extracted with spatial analysis tools. Thematic maps can be produced easily based on the database and the generated spatial datasets; through them, the spatial relationships among the multidisciplinary data can be easily established, providing helpful information for regional submarine geohazard identification, assessment and prediction. The thematic map produced for the LW3-1 Gas Field reveals the strike of the seafloor topography to be NE to SW. Five geomorphological zones have been identified: the outer continental shelf margin zone with sand waves and mega-ripples, the continental slope zone with coral reefs and sand waves, the continental slope zone with a monocline shape, the continental slope zone with fault terraces and the continental slope zone with turbidity current deposits.

  18. Ion-neutral Coupling During Deep Solar Minimum

    NASA Technical Reports Server (NTRS)

    Huang, Cheryl Y.; Roddy, Patrick A.; Sutton, Eric K.; Stoneback, Russell; Pfaff, Robert F.; Gentile, Louise C.; Delay, Susan H.

    2013-01-01

    The equatorial ionosphere under conditions of deep solar minimum exhibits structuring due to tidal forces. Data from instruments carried by the Communication Navigation Outage Forecasting System (CNOFS), which was launched in April 2008, have been analyzed for the first 2 years following launch. The Planar Langmuir Probe (PLP), Ion Velocity Meter (IVM) and Vector Electric Field Investigation (VEFI) all detect periodic structures during the 2008-2010 period which appear to be tides. However, when the tidal features detected by these instruments are compared, there are distinctive and significant differences between the observations. Tides in neutral densities measured by the Gravity Recovery and Climate Experiment (GRACE) satellite were also observed during June 2008. In addition, Broad Plasma Decreases (BPDs) appear as a deep absolute minimum in the plasma and neutral density tidal pattern. These are co-located with regions of large downward-directed ion meridional velocities and minima in the zonal drifts, all on the nightside. The region in which BPDs occur coincides with a peak in the occurrence rate of dawn depletions in plasma density observed by the Defense Meteorological Satellite Program (DMSP) spacecraft, as well as a minimum in radiance detected by UV imagers on the Thermosphere Ionosphere Mesosphere Energetics and Dynamics (TIMED) and IMAGE satellites.

  19. Refractive Optics for Hard X-ray Transmission Microscopy

    NASA Astrophysics Data System (ADS)

    Simon, M.; Ahrens, G.; Last, A.; Mohr, J.; Nazmov, V.; Reznikova, E.; Voigt, A.

    2011-09-01

    For hard x-ray transmission microscopy at photon energies higher than 15 keV, we design refractive condenser and imaging elements to be used with synchrotron light sources as well as with x-ray tube sources. The condenser lenses are optimized for low x-ray attenuation (resulting in apertures greater than 1 mm) and a homogeneous intensity distribution on the detector plane, whereas the imaging lenses enable high-resolution (<100 nm) full-field imaging. To obtain high image quality at reasonable exposure times, custom-tailored matched pairs of condenser and imaging lenses are being developed. The imaging lenses (compound refractive lenses, CRLs) are made of SU-8 negative resist by deep x-ray lithography. SU-8 shows high radiation stability. The fabrication technique enables high-quality lens structures, regarding surface roughness and arrangement precision, with arbitrary 2D geometry. To provide point foci, crossed pairs of lenses are used. Condenser lenses have also been made utilizing deep x-ray lithographic patterning of thick SU-8 layers, but in this case the aperture is limited due to process restrictions. Thus, in terms of large apertures, condenser lenses made of structured and rolled polyimide film are more attractive. Both condenser types, x-ray mosaic lenses and rolled x-ray prism lenses (RXPLs), are considered for implementation in a microscope setup. The x-ray optical elements mentioned above are characterized with synchrotron radiation and x-ray laboratory sources, respectively.

  20. Part-based deep representation for product tagging and search

    NASA Astrophysics Data System (ADS)

    Chen, Keqing

    2017-06-01

    Despite previous studies, tagging and indexing product images remain challenging due to the large inner-class variation of the products. In traditional methods, quantized hand-crafted features such as SIFTs are extracted as the representation of the product images, which are not discriminative enough to handle the inner-class variation. For discriminative image representation, this paper first presents a novel deep convolutional neural network (DCNN) architecture pre-trained on a large-scale general image dataset. Compared to the traditional features, our DCNN representation has more discriminative power with fewer dimensions. Moreover, we incorporate a part-based model into the framework to overcome the negative effects of bad alignment and cluttered background, and hence the descriptive ability of the deep representation is further enhanced. Finally, we collect and contribute a well-labeled shoe image database, i.e., the TBShoes, on which we apply the part-based deep representation for product image tagging and search, respectively. The experimental results highlight the advantages of the proposed part-based deep representation.

  1. Deep learning in breast cancer risk assessment: evaluation of fine-tuned convolutional neural networks on a clinical dataset of FFDMs

    NASA Astrophysics Data System (ADS)

    Li, Hui; Mendel, Kayla R.; Lee, John H.; Lan, Li; Giger, Maryellen L.

    2018-02-01

    We evaluated the potential of deep learning in the assessment of breast cancer risk using convolutional neural networks (CNNs) fine-tuned on full-field digital mammographic (FFDM) images. This study included 456 clinical FFDM cases from two high-risk datasets: BRCA1/2 gene-mutation carriers (53 cases) and unilateral cancer patients (75 cases), and a low-risk dataset as the control group (328 cases). All FFDM images (12-bit quantization and 100 micron pixel) were acquired with a GE Senographe 2000D system and were retrospectively collected under an IRB-approved, HIPAA-compliant protocol. Regions of interest of 256x256 pixels were selected from the central breast region behind the nipple in the craniocaudal projection. VGG19 pre-trained on the ImageNet dataset was used to classify the images either as high-risk or as low-risk subjects. The last fully-connected layer of pre-trained VGG19 was fine-tuned on FFDM images for breast cancer risk assessment. Performance was evaluated using the area under the receiver operating characteristic (ROC) curve (AUC) in the task of distinguishing between high-risk and low-risk subjects. AUC values of 0.84 (SE=0.05) and 0.72 (SE=0.06) were obtained in the task of distinguishing between the BRCA1/2 gene-mutation carriers and low-risk women and between unilateral cancer patients and low-risk women, respectively. Deep learning with CNNs appears to be able to extract parenchymal characteristics directly from FFDMs which are relevant to the task of distinguishing between cancer risk populations, and therefore has potential to aid clinicians in assessing mammographic parenchymal patterns for cancer risk assessment.
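
    The fine-tuning setup described here has a standard implementation: load an ImageNet-pretrained VGG19, freeze every layer, and retrain only the final fully-connected layer as a binary (high-risk vs. low-risk) classifier. A minimal PyTorch sketch under that reading of the abstract (hyperparameters and the 3-channel input are placeholders; the study's ROIs are single-channel mammographic patches):

      import torch
      import torch.nn as nn
      from torchvision import models

      model = models.vgg19(weights="IMAGENET1K_V1")   # ImageNet-pretrained VGG19
      for p in model.parameters():                    # freeze all pretrained weights
          p.requires_grad = False
      model.classifier[6] = nn.Linear(4096, 1)        # new final FC layer (trainable)

      optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
      criterion = nn.BCEWithLogitsLoss()              # high-risk vs. low-risk

      x = torch.randn(8, 3, 256, 256)                 # batch of 256x256 ROIs (placeholder)
      y = torch.randint(0, 2, (8, 1)).float()         # risk-group labels
      loss = criterion(model(x), y)
      loss.backward()
      optimizer.step()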

  2. Toward Rechargeable Persistent Luminescence for the First and Third Biological Windows via Persistent Energy Transfer and Electron Trap Redistribution.

    PubMed

    Xu, Jian; Murata, Daisuke; Ueda, Jumpei; Viana, Bruno; Tanabe, Setsuhisa

    2018-05-07

    Persistent luminescence (PersL) imaging without real-time external excitation has been regarded as the next generation of autofluorescence-free optical imaging technology. However, to achieve improved imaging resolution and deep tissue penetration, developing new near-infrared (NIR) persistent phosphors with intense and long-duration PersL over 1000 nm is still a challenging but urgent task in this field. Herein, making use of the persistent energy transfer process from Cr3+ to Er3+, we report a novel garnet persistent phosphor of Y3Al2Ga3O12 codoped with Er3+ and Cr3+ (YAGG:Er-Cr), which shows intense Cr3+ PersL (∼690 nm) in the deep red region matching well with the first biological window (NIR-I, 650-950 nm) and Er3+ PersL (∼1532 nm) in the NIR region matching well with the third biological window (NIR-III, 1500-1800 nm). The optical imaging through raw-pork tissues (thickness of 1 cm) suggests that the emission band of Er3+ can achieve higher spatial resolution and more accurate signal location than that of Cr3+ due to the reduced light scattering at longer wavelengths. Furthermore, by utilizing two independent electron traps with two different trap depths in YAGG:Er-Cr, the Cr3+/Er3+ PersL can even be recharged in situ by photostimulation with a 660 nm LED thanks to the redistribution of trapped electrons from the deep trap to the shallow one. Our results serve as a guide in developing promising NIR (>1000 nm) persistent phosphors for long-term optical imaging.

  3. Parallel Radiofrequency Transmission for the Reduction of Heating in Deep Brain Stimulation Leads at 3T

    NASA Astrophysics Data System (ADS)

    McElcheran, Clare

    Deep Brain Stimulation (DBS) is increasingly used to treat a variety of brain diseases by sending electrical impulses to deep brain nuclei through long, electrically conductive leads. Magnetic resonance imaging (MRI) of patients pre- and post-implantation is desirable to target and position the implant, to evaluate possible side-effects and to examine DBS patients who have other health conditions. Although MRI is the preferred modality for pre-operative planning, MRI post-implantation is limited due to the risk of high local power deposition, and therefore tissue heating, at the tip of the lead. The localized power deposition arises from currents induced in the leads caused by coupling with the radiofrequency (RF) transmission field during imaging. In this thesis, parallel RF transmission (pTx) is used to tailor the RF electric field to suppress coupling effects. Three pTx coil configurations with 2-elements, 4-elements, and 8-elements, respectively, were investigated. Optimal input voltages to minimize coupling, while maintaining RF magnetic field homogeneity, were determined using a Nelder-Mead optimization algorithm. Resulting electric and magnetic fields were compared to that of a 16-rung birdcage coil. Experimental validation was performed with a custom-built 4-element pTx coil. Three cases were investigated to develop and evaluate this technique. First, a Proof-of-Concept study was performed to investigate the case of a simple, uniform cylindrical phantom with a straight, perfectly conducting wire. Second, a heterogeneous subject with bilateral, curved implanted wires was investigated. Finally, the third case investigated realistic patient lead-trajectories obtained from intra-operative CT scans. In all three cases, specific absorption rate (SAR), a metric used to quantify power deposition which results in heating, was reduced by over 90%. Maximal reduction in SAR was obtained with the 8-element pTx coil. Magnetic field homogeneity was comparable to the birdcage coil for the 4- and 8-element pTx configurations. Although further research is required before clinical implementation, these initial results suggest that the concept of optimizing pTx to reduce DBS heating effects holds considerable promise.

  4. VizieR Online Data Catalog: WINGS: Deep optical phot. of 77 nearby clusters (Varela+, 2009)

    NASA Astrophysics Data System (ADS)

    Varela, J.; D'Onofrio, M.; Marmo, C.; Fasano, G.; Bettoni, D.; Cava, A.; Couch, J. W.; Dressler, A.; Kjaergaard, P.; Moles, M.; Pignatelli, E.; Poggianti, M. B.; Valentinuzzi, T.

    2009-05-01

    This is the second paper of a series devoted to the WIde Field Nearby Galaxy-cluster Survey (WINGS). WINGS is a long-term project which is gathering wide-field, multi-band imaging and spectroscopy of galaxies in a complete sample of 77 X-ray selected, nearby clusters (0.04 < z < 0.07, |b| > 20 deg). The main goal of this project is to establish a local reference for evolutionary studies of galaxies and galaxy clusters. This paper presents the optical (B,V) photometric catalogs of the WINGS sample and describes the procedures followed to construct them. We have paid special care to correctly treat the large extended galaxies (which include the brightest cluster galaxies) and to reduce the influence of the bright halos of very bright stars. We have constructed photometric catalogs based on wide-field images in B and V bands using SExtractor. Photometry has been performed on images in which large galaxies and halos of bright stars were removed after modeling them with elliptical isophotes. We publish deep optical photometric catalogs (90% complete at V ≈ 21.7, which translates to ~M_V* + 6 at the mean redshift), giving positions, geometrical parameters, and several total and aperture magnitudes for all the objects detected. For each field we have produced three catalogs containing galaxies, stars and objects of "unknown" classification (~16%). From simulations we found that the uncertainty of our photometry is quite dependent on the light profile of the objects, with stars having the most robust photometry and de Vaucouleurs profiles showing higher uncertainties and also an additional bias of ~ -0.2 mag. The star/galaxy classification of the bright objects (V < 20) was checked visually, making the fraction of misclassified objects negligible. For fainter objects, we found that simulations do not provide reliable estimates of the possible misclassification and therefore we have compared our data with deep counts of galaxies and star counts from models of our Galaxy. Both sets turned out to be consistent with our data within ~5% (in the ratio galaxies/total) up to V ~ 24. Finally, we remark that the application of our special procedure to remove large halos improves the photometry of the large galaxies in our sample with respect to blind automatic procedures and increases (~16%) the detection rate of objects projected onto them. (4 data files).

  5. Transmission in near-infrared optical windows for deep brain imaging.

    PubMed

    Shi, Lingyan; Sordillo, Laura A; Rodríguez-Contreras, Adrián; Alfano, Robert

    2016-01-01

    Near-infrared (NIR) radiation has been employed for one- and two-photon excitation fluorescence imaging at wavelengths of 650-950 nm (optical window I) in deep brain imaging; however, longer NIR wavelengths have been overlooked due to a lack of suitable NIR low-band-gap semiconductor imaging detectors and/or femtosecond laser sources. This research introduces three new optical windows in the NIR and demonstrates their potential for deep brain tissue imaging. The transmittances are measured in rat brain tissue in the second (II, 1,100-1,350 nm), third (III, 1,600-1,870 nm), and fourth (IV, centered at 2,200 nm) NIR optical tissue windows. The relationship between transmission and tissue thickness is measured and compared with theory. Due to a reduction in scattering and minimal absorption, window III is shown to be the best for deep brain imaging, and windows II and IV show similar, but better, potential for deep imaging than window I. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
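
    The transmission-versus-thickness relationship referred to above is typically compared with a Beer-Lambert-type exponential decay, T(z) = T0 exp(-z / l), where l is an effective attenuation length for the window in question. A curve-fitting sketch with invented data points (not the measured rat-brain values):

      # Fit an effective attenuation length to transmittance vs. tissue thickness.
      import numpy as np
      from scipy.optimize import curve_fit

      def transmission(z, t0, ell):
          """Beer-Lambert-type decay with effective attenuation length ell (mm)."""
          return t0 * np.exp(-z / ell)

      z = np.array([0.5, 1.0, 1.5, 2.0, 3.0])       # tissue thickness, mm (synthetic)
      T = np.array([0.62, 0.40, 0.26, 0.16, 0.07])  # transmittance (synthetic)

      (t0, ell), _ = curve_fit(transmission, z, T, p0=(1.0, 1.0))
      print(f"effective attenuation length: {ell:.2f} mm")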

  6. NiftyNet: a deep-learning platform for medical imaging.

    PubMed

    Gibson, Eli; Li, Wenqi; Sudre, Carole; Fidon, Lucas; Shakir, Dzhoshkun I; Wang, Guotai; Eaton-Rosen, Zach; Gray, Robert; Doel, Tom; Hu, Yipeng; Whyntie, Tom; Nachev, Parashkev; Modat, Marc; Barratt, Dean C; Ourselin, Sébastien; Cardoso, M Jorge; Vercauteren, Tom

    2018-05-01

    Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this domain of application requires substantial implementation effort. Consequently, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. The NiftyNet infrastructure provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning applications. Components of the NiftyNet pipeline including data loading, data augmentation, network architectures, loss functions and evaluation metrics are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on the TensorFlow framework and supports features such as TensorBoard visualization of 2D and 3D images and computational graphs by default. We present three illustrative medical image analysis applications built using NiftyNet infrastructure: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses. The NiftyNet infrastructure enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  7. Automatic bladder segmentation from CT images using deep CNN and 3D fully connected CRF-RNN.

    PubMed

    Xu, Xuanang; Zhou, Fugen; Liu, Bo

    2018-03-19

    An automatic approach for bladder segmentation from computed tomography (CT) images is highly desirable in clinical practice. It is a challenging task since the bladder usually suffers large variations of appearance and low soft-tissue contrast in CT images. In this study, we present a deep learning-based approach which involves a convolutional neural network (CNN) and a 3D fully connected conditional random fields recurrent neural network (CRF-RNN) to perform accurate bladder segmentation. We also propose a novel preprocessing method, called dual-channel preprocessing, to further advance the segmentation performance of our approach. The presented approach works as follows: first, we apply our proposed preprocessing method on the input CT image and obtain a dual-channel image which consists of the CT image and an enhanced bladder density map. Second, we exploit a CNN to predict a coarse voxel-wise bladder score map on this dual-channel image. Finally, a 3D fully connected CRF-RNN refines the coarse bladder score map and produces the final fine-localized segmentation result. We compare our approach to the state-of-the-art V-net on a clinical dataset. Results show that our approach achieves superior segmentation accuracy, outperforming the V-net by a significant margin. The Dice Similarity Coefficient of our approach (92.24%) is 8.12% higher than that of the V-net. Moreover, the bladder probability maps produced by our approach present sharper boundaries and more accurate localizations compared with those of the V-net. Our approach achieves higher segmentation accuracy than the state-of-the-art method on clinical data. Both the dual-channel preprocessing and the 3D fully connected CRF-RNN contribute to this improvement. The united deep network composed of the CNN and 3D CRF-RNN also outperforms a system where the CRF model acts as a post-processing method disconnected from the CNN.

  8. Design and implementation of optical imaging and sensor systems for characterization of deep-sea biological camouflage

    NASA Astrophysics Data System (ADS)

    Haag, Justin Mathew

    The visual ecology of deep-sea animals has long been of scientific interest. In the open ocean, where there is no physical structure to hide within or behind, diverse strategies have evolved to solve the problem of camouflage from a potential predator. Simulations of specific predator-prey scenarios have yielded estimates of the range of possible appearances that an animal may exhibit. However, there is a limited amount of quantitative information available related to both animal appearance and the light field at mesopelagic depths (200 m to 1000 m). To mitigate this problem, novel optical instrumentation, taking advantage of recent technological advances, was developed and is described in this dissertation. In the first half of this dissertation, the appearance of mirrored marine animals is quantitatively evaluated. A portable optical imaging scatterometer was developed to measure angular reflectance, described by the bidirectional reflectance distribution function (BRDF), of biological specimens. The instrument allows for BRDF capture from samples of arbitrary size, over a significant fraction of the reflectance hemisphere. Multiple specimens representing two species of marine animals, collected at mesopelagic depths, were characterized using the scatterometer. Low-dimensional parametric models were developed to simplify use of the data sets, and to validate the BRDF method. Results from principal component analysis confirm that BRDF measurements can be used to study intra- and interspecific variability of mirrored marine animal appearance. Collaborative efforts utilizing the BRDF data sets to develop physically-based scattering models are underway. In the second half of this dissertation, another key part of the deep-sea biological camouflage problem is examined. Two underwater radiometers, capable of low-light measurements, were developed to address the lack of available information related to the deep-sea light field. Quantitative comparison of spectral downward irradiance profiles at blue (~470 nm) and green (~560 nm) wavelengths, collected at Pacific and Atlantic field stations, provides insight into the presence of Raman (inelastic) scattering effects at mesopelagic depths. The radiometers were also used to collect in situ flashes of bioluminescence. Collaborations utilizing both the downward irradiance and bioluminescence data sets are planned.

  9. The OTELO Project

    NASA Astrophysics Data System (ADS)

    Cepa, J.; Alfaro, E. J.; Castañeda, H. O.; Gallego, J.; González-Serrano, J. I.; González, J. J.; Jones, D. H.; Pérez-García, A. M.; Sánchez-Portal, M.

    2007-06-01

    OSIRIS is the Spanish Day One instrument for the GTC 10.4-m telescope. OSIRIS is a general-purpose instrument for imaging, low-resolution long-slit and multi-object spectroscopy (MOS). OSIRIS has a field of view of 8.6×8.6 arcminutes, which makes it ideal for deep surveys, and operates in the optical wavelength range from 365 to 1000 nm. The main characteristic that makes OSIRIS unique amongst other instruments on 8-10 m class telescopes is the use of Tunable Filters (Bland-Hawthorn & Jones 1998). These allow a continuous selection of both the central wavelength and the width, thus providing scanning narrow-band imaging within the OSIRIS wavelength range. The combination of the large GTC aperture, the large OSIRIS field of view and the availability of the TFs makes OTELO a truly unique emission-line survey.

  10. Water-mediated green synthesis of PbS quantum dot and its glutathione and biotin conjugates for non-invasive live cell imaging

    NASA Astrophysics Data System (ADS)

    Vijaya Bharathi, M.; Maiti, Santanu; Sarkar, Bidisha; Ghosh, Kaustab; Paira, Priyankar

    2018-03-01

    This study addresses the cellular uptake of nanomaterials in the field of bio-applications. In the present study, we have synthesized water-soluble lead sulfide quantum dot (PbS QD) with glutathione and 3-MPA (mercaptopropionic acid) as the stabilizing ligand using a green approach. 3-MPA-capped QDs were further modified with streptavidin and then bound to biotin because of its high conjugation efficiency. Labelling and bio-imaging of cells with these bio-conjugated QDs were evaluated. The bright red fluorescence from these types of QDs in HeLa cells makes these materials suitable for deep tissue imaging.

  11. History of Hubble Space Telescope (HST)

    NASA Image and Video Library

    1995-12-01

    This deepest-ever view of the universe unveils myriad galaxies back to the beginning of time. Several hundred never-before-seen galaxies are visible in this view of the universe, called the Hubble Deep Field (HDF). Besides the classical spiral and elliptical shaped galaxies, there is a bewildering variety of other galaxy shapes and colors that are important clues to understanding the evolution of the universe. Some of the galaxies may have formed less than one billion years after the Big Bang. The image was assembled from many separate exposures with the Wide Field/Planetary Camera 2 (WF/PC2), taken over ten consecutive days between December 18, 1995 and December 28, 1995. This true-color view was assembled from separate images taken in blue, red, and infrared light. By combining these separate images into a single color picture, astronomers will be able to infer, at least statistically, the distance, age, and composition of galaxies in the field. Blue objects contain young stars and/or are relatively close, while redder objects contain older stellar populations and/or are farther away.

  12. VizieR Online Data Catalog: VANDELS High-Redshift Galaxy Evolution (McLure+, 2017)

    NASA Astrophysics Data System (ADS)

    McLure, R.; Pentericci, L.; Vandels Team

    2017-11-01

    This is the first data release (DR1) of the VANDELS survey, an ESO public spectroscopy survey targeting the high-redshift Universe. The VANDELS survey uses the VIMOS spectrograph on ESO's VLT to obtain ultra-deep, medium-resolution, optical spectra of galaxies within the UKIDSS Ultra Deep Survey (UDS) and Chandra Deep Field South (CDFS) survey fields (0.2 sq. degree total area). Using robust photometric redshift pre-selection, VANDELS is targeting ~2100 galaxies in the redshift interval 1.0 < z < 7.0, the majority of which are star-forming galaxies at z >= 3. In addition, VANDELS is targeting a substantial number of passive galaxies in the redshift interval 1.0 < z < 2.5.

  13. T1 and susceptibility contrast at high fields

    NASA Astrophysics Data System (ADS)

    Neelavalli, Jaladhar

    Clinical imaging at high magnetic field strengths (≥3 Tesla) is sought after primarily due to the increased signal strength available at these fields. This increased SNR can be used to perform: (a) high-resolution imaging in the same time as at lower field strengths; (b) imaging at the same resolution with much faster acquisition; and (c) functional MR imaging (fMRI), dynamic perfusion and diffusion imaging with increased sensitivity. However, high fields are also associated with increased power deposition (SAR), due to the increase in imaging frequency, and with longer T1 relaxation times. Longer T1s mean longer imaging times for generating good T1-contrast images. On the other hand, for faster imaging at high fields, fast spin echo or magnetization-prepared sequences are conventionally proposed, which are, however, associated with high SAR values. Imaging with low SAR becomes more and more important as we move towards high fields, particularly for patients with metallic implants such as pacemakers or deep brain stimulators; the SAR limit acceptable for these patients is much lower than the limit acceptable for normal subjects. A new method is proposed for imaging at high fields with good contrast and a simultaneous reduction in power deposition. Further, the T1-based contrast optimization problem in FLASH imaging is considered for tissues with different T1s but the same spin densities. The solution providing optimal imaging parameters is simplified for quick and easy computation in a clinical setting; the efficacy of the simplification is evaluated and the practical limits under which it can be applied are worked out. The phase difference due to variation in the magnetic susceptibility of biological tissues is another unique source of contrast, distinct from the conventional T1, T2 and T2* contrasts. This susceptibility-based phase contrast has become more and more important at high fields, partly due to contrast generation issues arising from longer T1s and shorter T2s, and partly because most tissue susceptibilities are invariant with field strength. This essentially ensures a constant available phase contrast between tissues across field strengths; in fact, with the increased SNR at high fields, the phase CNR actually increases with field strength. Susceptibility weighted imaging, which uniquely combines this phase and magnitude information to generate susceptibility-enhanced magnitude images, has proven to be an important tool in the study of various neurological conditions, such as Alzheimer's, Parkinson's and Huntington's disease and multiple sclerosis, even at the conventional field strength of 1.5T, and should have more applicability at high fields. A major issue in using phase images for susceptibility contrast, directly or as processed SWI magnitude images, is the large-scale background phase variation that obscures the local susceptibility-based contrast. A novel method is proposed for removing such geometrically induced large-scale phase variations using a Fourier-transform-based field calculation method. It is shown that the new method is not only capable of successfully removing the background field effects but also helps preserve more local phase information.

  14. Microscopic medical image classification framework via deep learning and shearlet transform.

    PubMed

    Rezaeilouyeh, Hadi; Mollahosseini, Ali; Mahoor, Mohammad H

    2016-10-01

    Cancer is the second leading cause of death in US after cardiovascular disease. Image-based computer-aided diagnosis can assist physicians to efficiently diagnose cancers in early stages. Existing computer-aided algorithms use hand-crafted features such as wavelet coefficients, co-occurrence matrix features, and recently, histogram of shearlet coefficients for classification of cancerous tissues and cells in images. These hand-crafted features often lack generalizability since every cancerous tissue and cell has a specific texture, structure, and shape. An alternative approach is to use convolutional neural networks (CNNs) to learn the most appropriate feature abstractions directly from the data and handle the limitations of hand-crafted features. A framework for breast cancer detection and prostate Gleason grading using CNN trained on images along with the magnitude and phase of shearlet coefficients is presented. Particularly, we apply shearlet transform on images and extract the magnitude and phase of shearlet coefficients. Then we feed shearlet features along with the original images to our CNN consisting of multiple layers of convolution, max pooling, and fully connected layers. Our experiments show that using the magnitude and phase of shearlet coefficients as extra information to the network can improve the accuracy of detection and generalize better compared to the state-of-the-art methods that rely on hand-crafted features. This study expands the application of deep neural networks into the field of medical image analysis, which is a difficult domain considering the limited medical data available for such analysis.

  15. A novel deep learning-based approach to high accuracy breast density estimation in digital mammography

    NASA Astrophysics Data System (ADS)

    Ahn, Chul Kyun; Heo, Changyong; Jin, Heongmin; Kim, Jong Hyo

    2017-03-01

    Mammographic breast density is a well-established marker for breast cancer risk. However, accurate measurement of dense tissue is a difficult task due to faint contrast and significant variations in background fatty tissue. This study presents a novel method for automated mammographic density estimation based on a Convolutional Neural Network (CNN). A total of 397 full-field digital mammograms were selected from Seoul National University Hospital. Among them, 297 mammograms were randomly selected as a training set and the remaining 100 mammograms were used as a test set. We designed a CNN architecture suitable for learning the imaging characteristics from a multitude of sub-images and classifying them into dense and fatty tissues. To train the CNN, not only local statistics but also global statistics extracted from an image set were used. The image set was composed of the original mammogram and an eigen-image, which is able to capture X-ray characteristics despite the fact that CNNs are well known to effectively extract features from the original image. The 100 test images that were not used in training the CNN were used to validate the performance. The correlation coefficient between the breast density estimates by the CNN and those by the expert's manual measurement was 0.96. Our study demonstrated the feasibility of incorporating deep learning technology into radiology practice, especially for breast density estimation. The proposed method has the potential to be used as an automated and quantitative assessment tool for mammographic breast density in routine practice.

  16. Hα Emitting Galaxies at z ∼ 0.6 in the Deep And Wide Narrow-band Survey

    NASA Astrophysics Data System (ADS)

    Coughlin, Alicia; Rhoads, James E.; Malhotra, Sangeeta; Probst, Ronald; Swaters, Rob; Tilvi, Vithal S.; Zheng, Zhen-Ya; Finkelstein, Steven; Hibon, Pascale; Mobasher, Bahram; Jiang, Tianxing; Joshi, Bhavin; Pharo, John; Veilleux, Sylvain; Wang, Junxian; Yang, Huan; Zabl, Johannes

    2018-05-01

    We present new measurements of the Hα luminosity function (LF) and star formation rate (SFR) volume density for galaxies at z ∼ 0.62 in the COSMOS field. Our results are part of the Deep And Wide Narrow-band Survey (DAWN), a unique infrared imaging program with large areal coverage (∼1.1 deg² over five fields) and sensitivity (9.9 × 10^-18 erg cm^-2 s^-1 at 5σ). The present sample, based on a single DAWN field, contains 116 Hα emission-line candidates at z ∼ 0.62, 25% of which have spectroscopic confirmations. These candidates have been selected through the comparison of narrow- and broad-band images in the infrared and through matching with existing catalogs in the COSMOS field. The dust-corrected LF is well described by a Schechter function with L* = 10^(42.64±0.92) erg s^-1, Φ* = 10^(-3.32±0.93) Mpc^-3, L*Φ* = 10^(39.40±0.15) erg s^-1 Mpc^-3, and α = -1.75 ± 0.09. From this LF, we calculate a SFR density of ρ_SFR = 10^(-1.37±0.08) M_⊙ yr^-1 Mpc^-3. We expect an additional cosmic variance uncertainty of ∼20%. Both the faint-end slope and luminosity density that we derive are consistent with prior results at similar redshifts, with reduced uncertainties. We also present an analysis of these Hα emitters' sizes, which shows a direct correlation between the galaxies' sizes and their Hα emission.
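
    For reference, the Schechter form quoted above can be evaluated directly. A minimal sketch using the paper's best-fit parameters (the code itself is only an illustration):

    ```python
    # Schechter luminosity function:
    # Phi(L) dL = Phi* (L/L*)^alpha exp(-L/L*) d(L/L*)
    import numpy as np

    L_star, Phi_star, alpha = 10**42.64, 10**-3.32, -1.75  # best-fit values

    def schechter(L):
        """Number density per unit luminosity, in Mpc^-3 (erg/s)^-1."""
        x = L / L_star
        return (Phi_star / L_star) * x**alpha * np.exp(-x)
    ```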

  17. Interference in astronomical speckle patterns

    NASA Technical Reports Server (NTRS)

    Breckinridge, J. B.

    1976-01-01

    Astronomical speckle patterns are examined in an atmospheric-optics context in order to determine what kind of image quality is to be expected from several different imaging techniques. The model used to describe the instantaneous complex field distribution across the pupil of a large telescope regards the pupil as a deep phase grating with a periodicity given by the size of the cell of uniform phase or the refractive index structure function. This model is used along with an empirical formula derived purely from the physical appearance of the speckle patterns to discuss the orders of interference in astronomical speckle patterns.

  18. Whole head quantitative susceptibility mapping using a least-norm direct dipole inversion method.

    PubMed

    Sun, Hongfu; Ma, Yuhan; MacDonald, M Ethan; Pike, G Bruce

    2018-06-15

    A new dipole field inversion method for whole head quantitative susceptibility mapping (QSM) is proposed. Instead of performing background field removal and local field inversion sequentially, the proposed method performs dipole field inversion directly on the total field map in a single step. To aid this under-determined and ill-posed inversion process and obtain robust QSM images, Tikhonov regularization is implemented to seek the local susceptibility solution with the least norm (LN), using the L-curve criterion. The proposed LN-QSM does not require brain edge erosion, thereby preserving the cerebral cortex in the final images. This should improve its applicability for QSM-based cortical grey matter measurement, functional imaging and venography of the full brain. Furthermore, LN-QSM also enables susceptibility mapping of the entire head without the need for brain extraction, which makes QSM reconstruction more automated and less dependent on intermediate pre-processing methods and their associated parameters. It is shown that the proposed LN-QSM method reduced errors in a numerical phantom simulation, improved accuracy in a gadolinium phantom experiment, and suppressed artefacts in nine subjects, as compared to two-step and other single-step QSM methods. Measurements of deep grey matter and skull susceptibilities from LN-QSM are consistent with established reconstruction methods. Copyright © 2018 Elsevier Inc. All rights reserved.
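
    The least-norm idea can be illustrated with the closed-form Tikhonov solution of the dipole inversion in k-space: minimizing ||D·F(χ) − F(f)||² + λ||χ||² gives F(χ) = conj(D)·F(f)/(|D|² + λ). A minimal sketch, with an arbitrary fixed λ in place of the paper's L-curve selection and all preprocessing omitted:

    ```python
    # Sketch: single-step Tikhonov-regularized dipole inversion of a
    # total field map `total_field` (3D array), B0 along z.
    import numpy as np

    def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0)):
        ks = [np.fft.fftfreq(n, d) for n, d in zip(shape, voxel_size)]
        kx, ky, kz = np.meshgrid(*ks, indexing="ij")
        k2 = kx**2 + ky**2 + kz**2
        k2[0, 0, 0] = 1.0                    # avoid division by zero at DC
        return 1.0 / 3.0 - kz**2 / k2

    def tikhonov_qsm(total_field, lam=1e-2):
        D = dipole_kernel(total_field.shape)
        F = np.fft.fftn(total_field)
        return np.real(np.fft.ifftn(np.conj(D) * F / (D**2 + lam)))
    ```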

  19. Machine Learning in Medical Imaging.

    PubMed

    Giger, Maryellen L

    2018-03-01

    Advances in both imaging and computers have synergistically led to a rapid rise in the potential use of artificial intelligence in various radiological imaging tasks, such as risk assessment, detection, diagnosis, prognosis, and therapy response, as well as in multi-omics disease discovery. A brief overview of the field is given here, allowing the reader to recognize the terminology, the various subfields and components of machine learning, and the clinical potential. Radiomics, an expansion of computer-aided diagnosis, has been defined as the conversion of images to minable data. The ultimate benefit of quantitative radiomics is to (1) yield predictive image-based phenotypes of disease for precision medicine or (2) yield quantitative image-based phenotypes for data mining with other -omics for discovery (i.e., imaging genomics). For deep learning in radiology to succeed, well-annotated large data sets are needed, since deep networks are complex, computer software and hardware are evolving constantly, and subtle differences in disease states are more difficult to perceive than differences in everyday objects. In the future, machine learning in radiology is expected to have a substantial clinical impact, with imaging examinations being routinely obtained in clinical practice, providing an opportunity to improve decision support in medical image interpretation. The key term is decision support, indicating that computers will augment human decision making, making it more effective and efficient. The clinical impact of having computers in routine clinical practice may allow radiologists to further integrate their knowledge with that of their clinical colleagues in other medical specialties and allow for precision medicine. Copyright © 2018. Published by Elsevier Inc.

  20. Satellite Ocean Aerosol Retrieval (SOAR) Algorithm Extension to S-NPP VIIRS as Part of the "Deep Blue" Aerosol Project

    NASA Astrophysics Data System (ADS)

    Sayer, A. M.; Hsu, N. C.; Lee, J.; Bettenhausen, C.; Kim, W. V.; Smirnov, A.

    2018-01-01

    The Suomi National Polar-Orbiting Partnership (S-NPP) satellite, launched in late 2011, carries the Visible Infrared Imaging Radiometer Suite (VIIRS) and several other instruments. VIIRS has similar characteristics to prior satellite sensors used for aerosol optical depth (AOD) retrieval, allowing the continuation of space-based aerosol data records. The Deep Blue algorithm has previously been applied to retrieve AOD from Sea-viewing Wide Field-of-view Sensor (SeaWiFS) and Moderate Resolution Imaging Spectroradiometer (MODIS) measurements over land. The SeaWiFS Deep Blue data set also included a SeaWiFS Ocean Aerosol Retrieval (SOAR) algorithm to cover water surfaces. As part of NASA's VIIRS data processing, Deep Blue is being applied to VIIRS data over land, and SOAR has been adapted from SeaWiFS to VIIRS for use over water surfaces. This study describes SOAR as applied in version 1 of NASA's S-NPP VIIRS Deep Blue data product suite. Several advances have been made since the SeaWiFS application, as well as changes to make use of the broader spectral range of VIIRS. A preliminary validation against Maritime Aerosol Network (MAN) measurements suggests a typical uncertainty on retrieved 550 nm AOD of order ±(0.03+10%), comparable to existing SeaWiFS/MODIS aerosol data products. Retrieved Ångström exponent and fine-mode AOD fraction are also well correlated with MAN data, with small biases and uncertainty similar to or better than SeaWiFS/MODIS products.

  1. Regional two-dimensional magnetotelluric profile in West Bohemia/Vogtland reveals deep conductive channel into the earthquake swarm region

    NASA Astrophysics Data System (ADS)

    Muñoz, Gerard; Weckmann, Ute; Pek, Josef; Kováčiková, Světlana; Klanica, Radek

    2018-03-01

    The West Bohemia/Vogtland region, characterized by the intersection of the Eger (Ohře) Rift and the Mariánské Lázně fault, is a geodynamically active area exhibiting repeated occurrence of earthquake swarms, massive CO2 emanations, and mid-Pleistocene volcanism. The Eger Rift is the only known intra-continental region in Europe where such deep-seated, active lithospheric processes currently take place. We present an image of electrical resistivity obtained from two-dimensional inversion of magnetotelluric (MT) data acquired along a regional profile crossing the Eger Rift. At the near surface, the Cheb basin and the aquifer feeding the mofette fields of Bublák and Hartoušov have been imaged as part of a region of very low resistivity. The most striking resistivity feature, however, is a deep-reaching conductive channel which extends from the surface into the lower crust, spatially correlated with the hypocentres of the seismic events of the Nový Kostel Focal Zone. This channel has been interpreted as imaging a pathway from a possible mid-crustal fluid reservoir to the surface. The resistivity model reinforces the relation between fluid circulation along deep-reaching faults and the generation of the earthquakes. Additionally, a further conductive channel has been revealed to the south of the profile. This other feature could be associated with fossil hydrothermal alteration related to the Mýtina and/or Neualbenreuth Maar structures, or alternatively could be the signature of a structure associated with the suture between the Saxo-Thuringian and Teplá-Barrandian zones, whose surface expression is located only a few kilometres away.

  2. A PUBLIC, K-SELECTED, OPTICAL-TO-NEAR-INFRARED CATALOG OF THE EXTENDED CHANDRA DEEP FIELD SOUTH (ECDFS) FROM THE MULTIWAVELENGTH SURVEY BY YALE-CHILE (MUSYC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, Edward N.; Franx, Marijn; Quadri, Ryan F.

    2009-08-01

    We present a new, K-selected, optical-to-near-infrared photometric catalog of the Extended Chandra Deep Field South (ECDFS), making it publicly available to the astronomical community. Imaging and spectroscopy data and catalogs are freely available through the MUSYC Public Data Release webpage: http://www.astro.yale.edu/MUSYC/. The data set is founded on publicly available imaging, supplemented by original z'JK imaging data collected as part of the MUltiwavelength Survey by Yale-Chile (MUSYC). The final photometric catalog consists of photometry derived from U, U38, B, V, R, I, z', J, and K imaging covering the full 1/2° × 1/2° of the ECDFS, plus H-band photometry for approximately 80% of the field. The 5σ flux limit for point sources is K_tot(AB) = 22.0. This is also the nominal completeness and reliability limit of the catalog: the empirical completeness for 21.75 < K < 22.00 is ≳85%. We have verified the quality of the catalog through both internal consistency checks and comparisons to other existing and publicly available catalogs. As well as the photometric catalog, we also present catalogs of photometric redshifts and rest-frame photometry derived from the 10-band photometry. We have collected robust spectroscopic redshift determinations from published sources for 1966 galaxies in the catalog. Based on these sources, we have achieved a (1σ) photometric redshift accuracy of Δz/(1 + z) = 0.036, with an outlier fraction of 7.8%. Most of these outliers are X-ray sources. Finally, we describe and release a utility for interpolating rest-frame photometry from observed spectral energy distributions, dubbed InterRest, available via http://www.strw.leidenuniv.nl/~ent/InterRest; documentation and a complete walkthrough can be found at the same address.

  3. Earth-from-Luna Limb Imager (ELLI) for Deep Space Gateway

    NASA Astrophysics Data System (ADS)

    Gorkavyi, N.; DeLand, M.

    2018-02-01

    A new type of limb imager with high-frequency imaging is proposed for the Deep Space Gateway. Each day this CubeSat-scale imager will generate a global 3D model of the aerosol component of the Earth's atmosphere and of Polar Mesospheric Clouds.

  4. Studying Cosmic Dawn with WFIRST

    NASA Astrophysics Data System (ADS)

    Rhoads, James; Malhotra, Sangeeta; Jansen, Rolf A.; Windhorst, Rogier; Tilvi, Vithal; Finkelstein, Steven; Wold, Isak; Papovich, Casey; Fan, Xiaohui; Mellema, Garrelt; Zackrisson, Erik; Jensen, Hannes; T

    2018-01-01

    Our understanding of Cosmic Dawn can be revolutionized using WFIRST's combination of wide-field, sensitive, high-resolution near-infrared imaging and spectroscopy. Guest investigator studies of WFIRST's high-latitude imaging survey and supernova search fields will yield orders-of-magnitude increases in our samples of Lyman break galaxies from z = 7 to z > 12. The high-latitude spectroscopic survey will enable an unprecedented search for z > 7 quasars. Guest observer deep fields can extend these studies to the flux levels of Hubble's deepest fields, over regions measured in square degrees. The resulting census of luminous objects in the Cosmic Dawn will provide key insights into the sources of the ultraviolet photons that powered reionization. Moreover, because WFIRST has a wide-field (slitless) spectroscopic capability, it can be used to search for Lyman alpha emitting galaxies over the full history of reionization. By comparing the Lyman alpha galaxy statistics to those of continuum sources, we can directly probe the transparency of the intergalactic gas and chart the reionization history. Our team is planning for both Guest Investigator and Guest Observer applications of WFIRST to the study of Cosmic Dawn, and welcomes dialog with other interested members of the community.

  5. Singlet gradient index lens for deep in vivo multiphoton microscopy

    NASA Astrophysics Data System (ADS)

    Murray, Teresa A.; Levene, Michael J.

    2012-02-01

    Micro-optical probes, including gradient index (GRIN) lenses and microprisms, have expanded the range of in vivo multiphoton microscopy to reach previously inaccessible deep brain structures such as deep cortical layers and the underlying hippocampus in mice. Yet imaging with GRIN lenses has been fundamentally limited by large amounts of spherical aberration and the need to construct compound lenses that limit the field-of-view. Here, we demonstrate the use of 0.5-mm-diameter, 1.7-mm-long GRIN lens singlets with 0.6 numerical aperture in conjunction with a cover glass and a conventional microscope objective correction collar to balance spherical aberrations. The resulting system achieves a lateral resolution of 618 nm and an axial resolution of 5.5 μm, compared to lateral and axial resolutions of ~1 μm and ~15 μm, respectively, for compound GRIN lenses of similar diameter. Furthermore, the GRIN lens singlets display fields-of-view in excess of 150 μm, compared with a few tens of microns for compound GRIN lenses. The GRIN lens/cover glass combination presented here is easy to assemble and inexpensive enough for use as a disposable device, enabling ready adoption by the neuroscience community.

  6. q-Space Deep Learning: Twelve-Fold Shorter and Model-Free Diffusion MRI Scans.

    PubMed

    Golkov, Vladimir; Dosovitskiy, Alexey; Sperl, Jonathan I; Menzel, Marion I; Czisch, Michael; Samann, Philipp; Brox, Thomas; Cremers, Daniel

    2016-05-01

    Numerous scientific fields rely on elaborate but partly suboptimal data processing pipelines. An example is diffusion magnetic resonance imaging (diffusion MRI), a non-invasive microstructure assessment method with a prominent application in neuroimaging. Advanced diffusion models providing accurate microstructural characterization so far have required long acquisition times and thus have been inapplicable for children and adults who are uncooperative, uncomfortable, or unwell. We show that the long scan time requirements are mainly due to disadvantages of classical data processing. We demonstrate how deep learning, a group of algorithms based on recent advances in the field of artificial neural networks, can be applied to reduce diffusion MRI data processing to a single optimized step. This modification allows obtaining scalar measures from advanced models at twelve-fold reduced scan time and detecting abnormalities without using diffusion models. We set a new state of the art by estimating diffusion kurtosis measures from only 12 data points and neurite orientation dispersion and density measures from only 8 data points. This allows unprecedentedly fast and robust protocols facilitating clinical routine and demonstrates how classical data processing can be streamlined by means of deep learning.
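
    A minimal sketch of the core idea, regressing a microstructure scalar directly from a handful of q-space samples per voxel with a small fully connected network; layer sizes, data, and targets are illustrative placeholders, not the paper's exact setup.

    ```python
    # Sketch: map 12 q-space measurements per voxel to one scalar measure.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(12, 150), nn.ReLU(),
        nn.Linear(150, 150), nn.ReLU(),
        nn.Linear(150, 1),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    signals = torch.randn(64, 12)   # placeholder batch of voxel signals
    targets = torch.randn(64, 1)    # placeholder model-fit training targets
    loss = nn.functional.mse_loss(model(signals), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ```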

  7. Automatic recognition of 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNNs.

    PubMed

    Han, Guanghui; Liu, Xiabi; Zheng, Guangyuan; Wang, Murong; Huang, Shan

    2018-06-06

    Ground-glass opacity (GGO) is a common imaging sign on high-resolution CT, and it means the lesion is more likely to be malignant than common solid lung nodules. The automatic recognition of GGO CT imaging signs is of great importance for early diagnosis and possible cure of lung cancers. Present GGO recognition methods employ traditional low-level features, and system performance is improving only slowly. Considering the high performance of CNN models in the computer vision field, we propose an automatic recognition method for 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuned CNN models. Our hybrid resampling is performed on multiple views and multiple receptive fields, which reduces the risk of missing small or large GGOs by adopting representative sampling panels and processing GGOs at multiple scales simultaneously. The layer-wise fine-tuning strategy has the ability to obtain the optimal fine-tuned model, and the multi-CNN-model fusion strategy obtains better performance than any single trained model. We evaluated our method on the GGO nodule samples in the publicly available LIDC-IDRI dataset of chest CT scans. The experimental results show that our method yields excellent results with 96.64% sensitivity, 71.43% specificity, and 0.83 F1 score. Our method is a promising approach for applying deep learning to computer-aided analysis of specific CT imaging signs with insufficient labeled images.
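
    The layer-wise fine-tuning strategy can be sketched as progressively unfreezing blocks of a pretrained CNN from the top and keeping the depth that validates best; `blocks`, `train_one_epoch`, and `validate` below are placeholders, and the scheme is a schematic reading of the strategy rather than the authors' code.

    ```python
    # Sketch: layer-wise fine-tuning of a pretrained CNN.
    import copy

    def layerwise_finetune(model, blocks, train_one_epoch, validate):
        best_score, best_model = float("-inf"), None
        for depth in range(1, len(blocks) + 1):
            for p in model.parameters():           # freeze everything
                p.requires_grad = False
            for block in blocks[-depth:]:          # unfreeze top `depth` blocks
                for p in block.parameters():
                    p.requires_grad = True
            train_one_epoch(model)
            score = validate(model)
            if score > best_score:                 # keep best-validating depth
                best_score, best_model = score, copy.deepcopy(model)
        return best_model
    ```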

  8. Design and Test of Magnetic Wall Decoupling for Dipole Transmit/Receive Array for MR Imaging at the Ultrahigh Field of 7T.

    PubMed

    Yan, Xinqiang; Zhang, Xiaoliang; Wei, Long; Xue, Rong

    2015-01-01

    Radio-frequency coil arrays using the dipole antenna technique have recently been applied for ultrahigh field magnetic resonance (MR) imaging to obtain better signal-to-noise ratio (SNR) gain in the deep areas of human tissues. However, the unique structure of dipole antennas makes it challenging to achieve sufficient electromagnetic decoupling among the dipole antenna elements, and no decoupling method has previously been proposed for dipole antenna arrays in MR imaging. The recently developed magnetic wall (MW), or induced current elimination, decoupling technique has demonstrated its feasibility and robustness in designing microstrip transmission line arrays, L/C loop arrays, and monopole arrays. In this study, we investigate the possibility and performance of the MW decoupling technique in dipole arrays for MR imaging at the ultrahigh field of 7T. To achieve this goal, a two-channel MW-decoupled dipole array was designed, constructed, and analyzed experimentally through bench tests and MR imaging. Electromagnetic isolation between the two dipole elements was improved from about -3.6 dB (without any decoupling treatment) to -16.5 dB by using the MW decoupling method. MR images acquired from a water phantom using the MW-decoupled dipole array, together with geometry factor maps, were measured, calculated, and compared with those acquired using the dipole array without decoupling treatment. The MW-decoupled dipole array demonstrated well-defined image profiles from each element and had a better geometry factor than the array without decoupling treatment. The experimental results indicate that the MW decoupling technique might be a promising solution for reducing the electromagnetic coupling of dipole arrays in ultrahigh field MRI, consequently improving their performance in SNR and parallel imaging.

  9. Mature vs. Active Deep-Seated Landslides: A Comparison Through Two Case Histories in the Alps

    NASA Astrophysics Data System (ADS)

    Delle Piane, Luca; Perello, Paolo; Baietto, Alessandro; Giorza, Alessandra; Musso, Alessia; Gabriele, Piercarlo; Baster, Ira

    2016-06-01

    Two case histories are presented, concerning two still poorly known alpine deep-seated gravitational slope deformations (DSDs), located near Lanzada (central Italian Alps) and Sarre (north-western Italian Alps). The Lanzada DSD is a constantly monitored, juvenile, and active phenomenon, partly affecting an existing hydropower plant. Its well-developed landforms allow a precise field characterization of the instability-affected area. The Sarre DSD is a mature, strongly remodeled phenomenon, where the only hazard factor is represented by secondary instability processes at the base of the slope. In this case, the remodeling imposed the adoption of complementary analytical techniques to support the field work. The two presented studies had to be adapted to external factors, namely (a) available information, (b) geological and geomorphological setting, and (c) the final scope of the work. The Lanzada case essentially relied upon accurate field work; the Sarre case was mostly based on digital image and DTM processing. In both cases a sound field structural analysis formed the necessary background for understanding the mechanisms leading to instability. A back-analysis of the differences between the study methods adopted in the two cases is finally presented, leading to suggestions for further investigations and design.

  10. Nanofocusing beyond the near-field diffraction limit via plasmonic Fano resonance.

    PubMed

    Song, Maowen; Wang, Changtao; Zhao, Zeyu; Pu, Mingbo; Liu, Ling; Zhang, Wei; Yu, Honglin; Luo, Xiangang

    2016-01-21

    The past decade has witnessed a great many optical systems designed to exceed Abbe's diffraction limit. Unfortunately, a deep subwavelength spot is obtained at the price of an extremely short focal length, which is in effect a near-field diffraction limit that nanofocusing devices can rarely go beyond. One method to mitigate this problem is to set up a rapidly oscillating electromagnetic field that converges at the prescribed focus. However, abrupt modulation of phase and amplitude within a small fraction of a wavelength seems to be the main obstacle in the visible regime, aggravated by loss and plasmonic features that come into play. In this paper, we propose a periodically repeated ring-disk complementary structure to break the near-field diffraction limit via plasmonic Fano resonance, originating from the interference between the complex hybrid plasmon resonance and the continuum of propagating waves through the silver film. This plasmonic Fano resonance introduces a π phase jump in the adjacent channels and amplitude modulation to achieve radiationless electromagnetic interference. As a result, deep subwavelength spots as small as 0.0045λ² at 36 nm above the silver film have been numerically demonstrated. This plate holds promise for nanolithography, subdiffraction imaging and microscopy.

  11. Active appearance model and deep learning for more accurate prostate segmentation on MRI

    NASA Astrophysics Data System (ADS)

    Cheng, Ruida; Roth, Holger R.; Lu, Le; Wang, Shijun; Turkbey, Baris; Gandler, William; McCreedy, Evan S.; Agarwal, Harsh K.; Choyke, Peter; Summers, Ronald M.; McAuliffe, Matthew J.

    2016-03-01

    Prostate segmentation on 3D MR images is a challenging task due to image artifacts, large inter-patient variability in prostate shape and texture, and the lack of a clear prostate boundary, specifically at the apex and base levels. We propose a supervised machine learning model that combines an atlas-based Active Appearance Model (AAM) with a deep learning model to segment the prostate on MR images. The performance of the segmentation method is evaluated on 20 unseen MR image datasets. The proposed method combining AAM and deep learning achieves a mean Dice Similarity Coefficient (DSC) of 0.925 for whole 3D MR images of the prostate using axial cross-sections. The proposed model utilizes the adaptive atlas-based AAM and deep learning to achieve significant segmentation accuracy.

  12. Scanning SQUID susceptometers with sub-micron spatial resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirtley, John R., E-mail: jkirtley@stanford.edu; Rosenberg, Aaron J.; Palmstrom, Johanna C.

    Superconducting QUantum Interference Device (SQUID) microscopy has excellent magnetic field sensitivity, but suffers from modest spatial resolution when compared with other scanning probes. This spatial resolution is determined by both the size of the field-sensitive area and the spacing between this area and the sample surface. In this paper we describe scanning SQUID susceptometers that achieve sub-micron spatial resolution while retaining a white-noise-floor flux sensitivity of ≈2 μΦ₀/Hz^(1/2). This high spatial resolution is accomplished by deep sub-micron feature sizes, well-shielded pickup loops fabricated using a planarized process, and a deep etch step that minimizes the spacing between the sample surface and the SQUID pickup loop. We describe the design, modeling, fabrication, and testing of these sensors. Although sub-micron spatial resolution has been achieved previously in scanning SQUID sensors, our sensors not only achieve high spatial resolution but also have integrated modulation coils for flux feedback, integrated field coils for susceptibility measurements, and batch processing. They are therefore a generally applicable tool for imaging sample magnetization, currents, and susceptibilities with higher spatial resolution than previous susceptometers.

  13. Beauty and Astrophysics

    NASA Astrophysics Data System (ADS)

    Bessell, Michael S.

    2000-08-01

    Spectacular colour images have been made by combining CCD images in three different passbands using Adobe Photoshop. These beautiful images highlight a variety of astrophysical phenomena and should be a valuable resource for science education and public awareness of science. The wide-field images were obtained at Siding Spring Observatory (SSO) by mounting a Hasselblad or Nikkor telephoto lens in front of a 2K × 2K CCD, giving coverage of more than 30 degrees or 6 degrees square in a single exposure. Narrow-band or broad-band filters were placed between lens and CCD, enabling deep, linear images in a variety of passbands to be obtained. We have mapped the LMC and SMC and are mapping the Galactic Plane for comparison with the Molonglo Radio Survey. Higher-resolution images of galaxies and star-forming regions in the Milky Way have also been made with the 40-inch telescope.

  14. Deep into the Brain: Artificial Intelligence in Stroke Imaging

    PubMed Central

    Lee, Eun-Jae; Kim, Yong-Hwan; Kim, Namkug; Kang, Dong-Wha

    2017-01-01

    Artificial intelligence (AI), a computer system aiming to mimic human intelligence, is gaining increasing interest and is being incorporated into many fields, including medicine. Stroke medicine is one such area of application of AI, for improving the accuracy of diagnosis and the quality of patient care. For stroke management, adequate analysis of stroke imaging is crucial. Recently, AI techniques have been applied to decipher the data from stroke imaging and have demonstrated some promising results. In the very near future, such AI techniques may play a pivotal role in determining the therapeutic methods and predicting the prognosis for stroke patients in an individualized manner. In this review, we offer a glimpse at the use of AI in stroke imaging, specifically focusing on its technical principles, clinical application, and future perspectives. PMID:29037014

  15. Deep into the Brain: Artificial Intelligence in Stroke Imaging.

    PubMed

    Lee, Eun-Jae; Kim, Yong-Hwan; Kim, Namkug; Kang, Dong-Wha

    2017-09-01

    Artificial intelligence (AI), a computer system aiming to mimic human intelligence, is gaining increasing interest and is being incorporated into many fields, including medicine. Stroke medicine is one such area of application of AI, for improving the accuracy of diagnosis and the quality of patient care. For stroke management, adequate analysis of stroke imaging is crucial. Recently, AI techniques have been applied to decipher the data from stroke imaging and have demonstrated some promising results. In the very near future, such AI techniques may play a pivotal role in determining the therapeutic methods and predicting the prognosis for stroke patients in an individualized manner. In this review, we offer a glimpse at the use of AI in stroke imaging, specifically focusing on its technical principles, clinical application, and future perspectives.

  16. A Brief Review of Facial Emotion Recognition Based on Visual Information.

    PubMed

    Ko, Byoung Chul

    2018-01-30

    Facial emotion recognition (FER) is an important topic in the fields of computer vision and artificial intelligence owing to its significant academic and commercial potential. Although FER can be conducted using multiple sensors, this review focuses on studies that exclusively use facial images, because visual expressions are one of the main information channels in interpersonal communication. This paper provides a brief review of research in the field of FER conducted over the past decades. First, conventional FER approaches are described, along with a summary of the representative categories of FER systems and their main algorithms. Deep-learning-based FER approaches using deep networks enabling "end-to-end" learning are then presented. This review also focuses on an up-to-date hybrid deep-learning approach combining a convolutional neural network (CNN) for the spatial features of an individual frame and long short-term memory (LSTM) for temporal features of consecutive frames. In the later part of this paper, a brief review of publicly available evaluation metrics is given, and a comparison with benchmark results, which are a standard for a quantitative comparison of FER research, is described. This review can serve as a brief guidebook for newcomers in the field of FER, providing basic knowledge and a general understanding of the latest state-of-the-art studies, as well as for experienced researchers looking for productive directions for future work.
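
    A minimal sketch of the hybrid CNN-LSTM pattern described in the review: a small CNN embeds each frame, an LSTM aggregates the sequence, and a linear head predicts the emotion class. All sizes are illustrative, not from any particular benchmarked system.

    ```python
    # Sketch: per-frame CNN features + LSTM over consecutive frames.
    import torch
    import torch.nn as nn

    class CnnLstmFER(nn.Module):
        def __init__(self, n_emotions=7, feat_dim=128):
            super().__init__()
            self.cnn = nn.Sequential(              # spatial features per frame
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                nn.Linear(32 * 16, feat_dim),
            )
            self.lstm = nn.LSTM(feat_dim, 64, batch_first=True)
            self.head = nn.Linear(64, n_emotions)

        def forward(self, frames):                 # frames: (B, T, 1, H, W)
            B, T = frames.shape[:2]
            feats = self.cnn(frames.flatten(0, 1)).view(B, T, -1)
            out, _ = self.lstm(feats)              # temporal aggregation
            return self.head(out[:, -1])           # logits from last time step
    ```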

  17. A Brief Review of Facial Emotion Recognition Based on Visual Information

    PubMed Central

    2018-01-01

    Facial emotion recognition (FER) is an important topic in the fields of computer vision and artificial intelligence owing to its significant academic and commercial potential. Although FER can be conducted using multiple sensors, this review focuses on studies that exclusively use facial images, because visual expressions are one of the main information channels in interpersonal communication. This paper provides a brief review of research in the field of FER conducted over the past decades. First, conventional FER approaches are described, along with a summary of the representative categories of FER systems and their main algorithms. Deep-learning-based FER approaches using deep networks enabling “end-to-end” learning are then presented. This review also focuses on an up-to-date hybrid deep-learning approach combining a convolutional neural network (CNN) for the spatial features of an individual frame and long short-term memory (LSTM) for temporal features of consecutive frames. In the later part of this paper, a brief review of publicly available evaluation metrics is given, and a comparison with benchmark results, which are a standard for a quantitative comparison of FER research, is described. This review can serve as a brief guidebook for newcomers in the field of FER, providing basic knowledge and a general understanding of the latest state-of-the-art studies, as well as for experienced researchers looking for productive directions for future work. PMID:29385749

  18. Exemplar-Based Image and Video Stylization Using Fully Convolutional Semantic Features.

    PubMed

    Zhu, Feida; Yan, Zhicheng; Bu, Jiajun; Yu, Yizhou

    2017-05-10

    Color and tone stylization in images and videos strives to enhance unique themes with artistic color and tone adjustments. It has a broad range of applications, from professional image postprocessing to photo sharing over social networks. Mainstream photo enhancement software, such as Adobe Lightroom and Instagram, provides users with predefined styles, which are often hand-crafted through a trial-and-error process. Such photo adjustment tools lack a semantic understanding of image contents, and the resulting global color transforms limit the range of artistic styles they can represent. Stylistic enhancement, on the other hand, needs to apply distinct adjustments to various semantic regions; such an ability enables a broader range of visual styles. In this paper, we first propose a novel deep learning architecture for exemplar-based image stylization, which learns local enhancement styles from image pairs. Our deep learning architecture consists of fully convolutional networks (FCNs) for automatic semantics-aware feature extraction and fully connected neural layers for adjustment prediction. Image stylization can be efficiently accomplished with a single forward pass through our deep network. To extend our deep network from image stylization to video stylization, we exploit temporal superpixels (TSPs) to facilitate the transfer of artistic styles from image exemplars to videos. Experiments on a number of datasets for image stylization as well as a diverse set of video clips demonstrate the effectiveness of our deep learning architecture.
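
    A schematic reading of this architecture, assuming FCN features concatenated with raw pixel colors feed a small per-pixel regressor that predicts a local color adjustment; this is an illustrative sketch under those assumptions, not the authors' implementation.

    ```python
    # Sketch: semantics-aware local color adjustment per pixel.
    import torch
    import torch.nn as nn

    class LocalAdjustNet(nn.Module):
        def __init__(self, sem_dim=64):
            super().__init__()
            self.fcn = nn.Sequential(              # stand-in semantic FCN
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, sem_dim, 3, padding=1), nn.ReLU(),
            )
            self.regress = nn.Sequential(          # per-pixel adjustment head
                nn.Linear(sem_dim + 3, 64), nn.ReLU(), nn.Linear(64, 3),
            )

        def forward(self, img):                    # img: (B, 3, H, W)
            feats = torch.cat([self.fcn(img), img], dim=1)
            B, C, H, W = feats.shape
            adjust = self.regress(feats.permute(0, 2, 3, 1).reshape(-1, C))
            return img + adjust.view(B, H, W, 3).permute(0, 3, 1, 2)
    ```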

  19. A Deep Narrowband Imaging Search for C IV and He II Emission from Lyα Blobs

    NASA Astrophysics Data System (ADS)

    Arrigoni Battaia, Fabrizio; Yang, Yujin; Hennawi, Joseph F.; Prochaska, J. Xavier; Matsuda, Yuichi; Yamada, Toru; Hayashino, Tomoki

    2015-05-01

    We conduct a deep narrowband imaging survey of 13 Lyα blobs (LABs) located in the SSA22 proto-cluster at z ˜ 3.1 in the C IV and He II emission lines in an effort to constrain the physical process powering the Lyα emission in LABs. Our observations probe down to unprecedented surface brightness (SB) limits of (2.1-3.4) × 10^-18 erg s^-1 cm^-2 arcsec^-2 per 1 arcsec² aperture (5σ) for the He II λ1640 and C IV λ1549 lines, respectively. We do not detect extended He II and C IV emission in any of the LABs, placing strong upper limits on the He II/Lyα and C IV/Lyα line ratios, of 0.11 and 0.16, for the brightest two LABs in the field. We conduct detailed photoionization modeling of the expected line ratios and find that, although our data constitute the deepest ever observations of these lines, they are still not deep enough to rule out a scenario where the Lyα emission is powered by the ionizing radiation from an obscured active galactic nucleus. Our models can accommodate He II/Lyα and C IV/Lyα ratios as low as ≃0.05 and ≃0.07, respectively, implying that one needs to reach SB as low as (1-1.5) × 10^-18 erg s^-1 cm^-2 arcsec^-2 (at 5σ) in order to rule out a photoionization scenario. These depths will be achievable with the new generation of image-slicing integral field units such as the Multi Unit Spectroscopic Explorer (MUSE) on the VLT and the Keck Cosmic Web Imager (KCWI). We also model the expected He II/Lyα and C IV/Lyα ratios in a different scenario, where the Lyα emission is powered by shocks generated in a large-scale superwind, but find that our observational constraints can only be met for shock velocities v_s ≳ 250 km s^-1, which appear to be in conflict with recent observations of quiescent kinematics in LABs.

  20. VizieR Online Data Catalog: MACT survey. I. Opt. spectroscopy in Subaru Deep Field (Ly+, 2016)

    NASA Astrophysics Data System (ADS)

    Ly, C.; Malhotra, S.; Malkan, M. A.; Rigby, J. R.; Kashikawa, N.; de Los Reyes, M. A.; Rhoads, J. E.

    2016-10-01

    The primary results of this paper are based on optical spectroscopy conducted with Keck's Deep Imaging Multi-Object Spectrograph (DEIMOS) and MMT's Hectospec. In total, we obtain 3243 optical spectra for 1911 narrowband/intermediate-band excess emitters (roughly 20% of our narrowband/intermediate-band excess samples), and successfully detect emission lines to determine redshifts for 1493 galaxies, or 78% of the targeted sample. The MMT observations were conducted on 2008 March 13, 2008 April 10-11, 2008 April 14, 2014 February 27-28, 2014 March 25, and 2014 March 28-31, and correspond to the equivalent of three full nights. The Keck observations were conducted on 2004 April 23-24, 2008 May 01-02, 2009 April 25-28, 2014 May 02, and 2015 March 17/19/26. The majority of the observations were obtained in 2014-2015. The 2004 spectroscopic observations have been discussed in Kashikawa et al. (2006, J/ApJ/648/7) and Ly07 (J/ApJ/657/738), and the 2008-2009 data have been discussed in Kashikawa et al. (2011ApJ...734..119K). See section 2.2 for further details. The Subaru Deep Field (SDF) has been imaged with: (1) GALEX in both the FUV and NUV bands; (2) KPNO's Mayall telescope using MOSAIC in U; (3) the Subaru telescope with Suprime-Cam in 14 bands (the B, V, Rc, i', z', zb, and zr broad bands, five narrowband filters, and two intermediate-band filters); (4) KPNO's Mayall telescope using NEWFIRM in H; (5) UKIRT using WFCAM in J and K; and (6) Spitzer in the four IRAC bands (3.6, 4.5, 5.8, and 8.0 μm). Most of these imaging data have been discussed in Ly et al. (2011ApJ...735...91L), except for the WFCAM J-band data and most of the NEWFIRM H-band data. The more recent NEWFIRM imaging data were acquired on 2012 March 06-07 and 2013 March 27-30. The WFCAM data were obtained on 2005 April 14-15, 2010 March 15-20, and 2010 April 22-23. See section 4.4 for further details. (11 data files).

  1. A Composite Model of Wound Segmentation Based on Traditional Methods and Deep Neural Networks

    PubMed Central

    Wang, Changjian; Liu, Xiaohui; Jin, Shiyao

    2018-01-01

    Wound segmentation plays an important supporting role in wound observation and wound healing. Current image segmentation methods include those based on traditional image processing and those based on deep neural networks. The traditional methods use hand-designed image features to complete the task without large amounts of labeled data. Meanwhile, methods based on deep neural networks can extract image features effectively without manual feature design, but large amounts of training data are required. Combining the advantages of both, this paper presents a composite model of wound segmentation. The model uses the skin-with-wound detection algorithm we designed to highlight image features. Then, the preprocessed images are segmented by deep neural networks, and semantic corrections are applied to the segmentation results at the end. The model shows good performance in our experiment. PMID:29955227
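
    A minimal sketch of the three-stage composite pipeline, with `detect_skin` (traditional rules) and `segment_net` (deep network) as placeholders; the final intersection plays the role of the semantic correction step.

    ```python
    # Sketch: traditional skin detection -> deep segmentation -> correction.
    import numpy as np

    def composite_wound_segmentation(image, detect_skin, segment_net):
        skin_mask = detect_skin(image)             # traditional color/texture rules
        wound_prob = segment_net(image)            # deep network output in [0, 1]
        wound_mask = wound_prob > 0.5
        return wound_mask & skin_mask.astype(bool) # keep wound pixels on skin only
    ```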

  2. Clutter elimination for deep clinical optoacoustic imaging using localised vibration tagging (LOVIT)

    PubMed Central

    Jaeger, Michael; Bamber, Jeffrey C.; Frenz, Martin

    2013-01-01

    This paper investigates a novel method which allows clutter elimination in deep optoacoustic imaging. Clutter significantly limits imaging depth in clinical optoacoustic imaging, when irradiation optics and ultrasound detector are integrated in a handheld probe for flexible imaging of the human body. Strong optoacoustic transients generated at the irradiation site obscure weak signals from deep inside the tissue, either directly by propagating towards the probe, or via acoustic scattering. In this study we demonstrate that signals of interest can be distinguished from clutter by tagging them at the place of origin with localised tissue vibration induced by the acoustic radiation force in a focused ultrasonic beam. We show phantom results where this technique allowed almost full clutter elimination and thus strongly improved contrast for deep imaging. Localised vibration tagging by means of acoustic radiation force is especially promising for integration into ultrasound systems that already have implemented radiation force elastography. PMID:25302147
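
    Schematically, the tagging reduces to difference imaging between acquisitions with the radiation-force push on and off: signals originating in the vibrated focal region change between the two frames and survive the subtraction, while clutter generated elsewhere cancels. A minimal sketch under that reading (real processing would operate on the phase of the RF data, which is glossed over here):

    ```python
    # Sketch: LOVIT-style difference imaging between push-on and push-off frames.
    import numpy as np

    def lovit_difference(frame_push_on: np.ndarray,
                         frame_push_off: np.ndarray) -> np.ndarray:
        return frame_push_on - frame_push_off   # untagged clutter cancels
    ```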

  3. Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video

    NASA Astrophysics Data System (ADS)

    Li, Honggui

    2017-09-01

    This paper proposes a unified one-dimensional (1-D) coding framework for image and video, which depends on a deep neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain compact inputs for the deep artificial neural network. Second, for the purpose of best reconstructing the original image patches, a deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve a 1-D representation of image blocks. Under the circumstances of 1-D representation, the DLA is capable of attaining zero reconstruction error, which is impossible for classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating the different categories of video into the inputs of the patch clustering algorithm. Finally, simulation experiments show that the proposed methods simultaneously achieve a higher compression ratio and peak signal-to-noise ratio than state-of-the-art methods for low-bitrate transmission.
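
    A minimal sketch of the two building blocks, K-means patch clustering followed by a per-cluster linear autoencoder mapping each patch to a 1-D code; dimensions are illustrative and the training loop is omitted.

    ```python
    # Sketch: patch clustering + per-cluster linear autoencoder to a 1-D code.
    import numpy as np
    from sklearn.cluster import KMeans
    import torch.nn as nn

    patches = np.random.rand(1000, 64).astype("float32")  # 8x8 patches, flattened
    labels = KMeans(n_clusters=16, n_init=10).fit_predict(patches)

    class LinearAE(nn.Module):                    # no nonlinearities anywhere
        def __init__(self, dim=64, code=1):
            super().__init__()
            self.enc = nn.Linear(dim, code, bias=False)  # patch -> 1-D code
            self.dec = nn.Linear(code, dim, bias=False)  # 1-D code -> patch

        def forward(self, x):
            return self.dec(self.enc(x))

    # one autoencoder per cluster, each trained on that cluster's patches
    models = {k: LinearAE() for k in range(16)}
    ```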

  4. Automatic segmentation of the prostate on CT images using deep learning and multi-atlas fusion

    NASA Astrophysics Data System (ADS)

    Ma, Ling; Guo, Rongrong; Zhang, Guoyi; Tade, Funmilayo; Schuster, David M.; Nieh, Peter; Master, Viraj; Fei, Baowei

    2017-02-01

    Automatic segmentation of the prostate on CT images has many applications in prostate cancer diagnosis and therapy. However, prostate CT image segmentation is challenging because of the low contrast of soft tissue on CT images. In this paper, we propose an automatic segmentation method that combines a deep learning method with multi-atlas refinement. First, instead of segmenting the whole image, we extract a region of interest (ROI) to discard irrelevant regions. Then, we use convolutional neural networks (CNN) to learn deep features for distinguishing prostate pixels from non-prostate pixels and obtain preliminary segmentation results. CNNs can automatically learn deep features adapted to the data, in contrast to handcrafted features. Finally, we select similar atlases to refine the initial segmentation results. The proposed method has been evaluated on a dataset of 92 prostate CT images. Experimental results show that our method achieved a Dice similarity coefficient of 86.80% as compared to the manual segmentation. The deep learning based method can provide a useful tool for automatic segmentation of the prostate on CT images and thus can have a variety of clinical applications.
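
    For reference, the Dice similarity coefficient used above is DSC = 2|A ∩ B| / (|A| + |B|); a minimal implementation:

    ```python
    # Dice similarity coefficient between two binary segmentation masks.
    import numpy as np

    def dice(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
        a, b = auto_mask.astype(bool), manual_mask.astype(bool)
        inter = np.logical_and(a, b).sum()
        return 2.0 * inter / (a.sum() + b.sum())
    ```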

  5. STrategically Acquired Gradient Echo (STAGE) imaging, part I: Creating enhanced T1 contrast and standardized susceptibility weighted imaging and quantitative susceptibility mapping.

    PubMed

    Chen, Yongsheng; Liu, Saifeng; Wang, Yu; Kang, Yan; Haacke, E Mark

    2018-02-01

    To provide whole brain grey matter (GM) to white matter (WM) contrast-enhanced T1W (T1WE) images, multi-echo quantitative susceptibility mapping (QSM), proton density (PD) weighted images, T1 maps, PD maps, susceptibility weighted imaging (SWI), and R2* maps with minimal misregistration in scan times of less than 5 min. Strategically acquired gradient echo (STAGE) imaging includes two fully flow-compensated double-echo gradient echo acquisitions with a resolution of 0.67 × 1.33 × 2.0 mm³ acquired in 5 min for 64 slices. Ten subjects were recruited and scanned at 3 Tesla. The optimum pair of flip angles (6° and 24° with TR = 25 ms at 3T) was used both for T1 mapping with radio frequency (RF) transmit field correction and for creating enhanced GM/WM contrast (the T1WE image). The proposed T1WE image was created from a combination of the proton density weighted (6°, PDW) and T1W (24°) images and corrected for RF transmit field variations. Prior to the QSM calculation, a multi-echo phase unwrapping strategy was implemented, using the unwrapped short echo to unwrap the longer echo and speed up computation. R2* maps were used to mask deep grey matter and veins during the iterative QSM calculation. A weighted-average sum of susceptibility maps was generated to increase the signal-to-noise ratio (SNR) and the contrast-to-noise ratio (CNR). The proposed T1WE image has a significantly improved CNR both for WM to deep GM and WM to cortical GM compared to the acquired T1W image (the first echo of the 24° scan) and the T1MPRAGE image. The weighted-average susceptibility maps show 80 ± 26%, 55 ± 22%, and 108 ± 33% SNR increases across the ten subjects compared to the single-echo (17.5 ms) result for the putamen, caudate nucleus, and globus pallidus, respectively. STAGE imaging offers the potential to create a standardized brain imaging protocol providing four pieces of quantitative tissue property information and multiple types of qualitative information in just 5 min. Published by Elsevier Inc.
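
    The dual-flip-angle T1 mapping step can be sketched with the standard SPGR (DESPOT1-style) relation: the points (S/tan α, S/sin α) for the two flip angles lie on a line with slope E1 = exp(-TR/T1), so T1 = -TR/ln(E1). The sketch below omits the RF transmit-field correction the paper applies to the flip angles.

    ```python
    # Sketch: dual-flip-angle T1 estimate from the 6-degree and 24-degree
    # SPGR signals S1 and S2 (per voxel), TR in ms.
    import numpy as np

    def t1_dual_flip(S1, S2, a1=np.deg2rad(6), a2=np.deg2rad(24), TR=25.0):
        x = np.array([S1 / np.tan(a1), S2 / np.tan(a2)])
        y = np.array([S1 / np.sin(a1), S2 / np.sin(a2)])
        E1 = (y[1] - y[0]) / (x[1] - x[0])   # slope through the two points
        return -TR / np.log(E1)              # T1 in ms
    ```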

  6. The DEEP-South: Scheduling and Data Reduction Software System

    NASA Astrophysics Data System (ADS)

    Yim, Hong-Suh; Kim, Myung-Jin; Bae, Youngho; Moon, Hong-Kyu; Choi, Young-Jun; Roh, Dong-Goo; the DEEP-South Team

    2015-08-01

    The DEep Ecliptic Patrol of the Southern sky (DEEP-South), started in October 2012, is currently in test runs with the first Korea Microlensing Telescope Network (KMTNet) 1.6 m wide-field telescope located at CTIO in Chile. While the primary objective of the DEEP-South is physical characterization of small bodies in the Solar System, it is expected to discover a large number of such bodies, many of them previously unknown. An automatic observation planning and data reduction software subsystem called "The DEEP-South Scheduling and Data reduction System" (the DEEP-South SDS) is currently being designed and implemented for observation planning, data reduction, and analysis of huge amounts of data with minimal human interaction. The DEEP-South SDS consists of three software subsystems: the DEEP-South Scheduling System (DSS), the Local Data Reduction System (LDR), and the Main Data Reduction System (MDR). The DSS manages observation targets, makes decisions on target priority and observation methods, schedules nightly observations, and archives data using a Database Management System (DBMS). The LDR is designed to detect moving objects from CCD images, while the MDR conducts photometry and reconstructs lightcurves. Based on analyses made at the LDR and the MDR, the DSS schedules follow-up observations to be conducted at other KMTNet stations. By the end of 2015, we expect the DEEP-South SDS to achieve stable operation. We also plan to improve the SDS to accomplish a finely tuned observation strategy and more efficient data reduction in 2016.

  7. A Layered Approach for Robust Spatial Virtual Human Pose Reconstruction Using a Still Image

    PubMed Central

    Guo, Chengyu; Ruan, Songsong; Liang, Xiaohui; Zhao, Qinping

    2016-01-01

    Pedestrian detection and human pose estimation are instructive for reconstructing a three-dimensional scenario and for robot navigation, particularly when large amounts of vision data are captured using various data-recording techniques. Using an unrestricted capture scheme, which produces occlusions or breezing, the information describing each part of a human body and the relationship between each part or even different pedestrians must be present in a still image. Using this framework, a multi-layered, spatial, virtual, human pose reconstruction framework is presented in this study to recover any deficient information in planar images. In this framework, a hierarchical parts-based deep model is used to detect body parts by using the available restricted information in a still image and is then combined with spatial Markov random fields to re-estimate the accurate joint positions in the deep network. Then, the planar estimation results are mapped onto a virtual three-dimensional space using multiple constraints to recover any deficient spatial information. The proposed approach can be viewed as a general pre-processing method to guide the generation of continuous, three-dimensional motion data. The experiment results of this study are used to describe the effectiveness and usability of the proposed approach. PMID:26907289

  8. Getting to Know the Neighbors: Deep Imaging of the Andromeda Satellite Dwarf Galaxy Cassiopeia III with WIYN pODI

    NASA Astrophysics Data System (ADS)

    Smith, Madison; Rhode, Katherine L.; Janowiecki, Steven

    2016-01-01

    We present results from WIYN pODI imaging of Cassiopeia III/Andromeda XXXII (Cas III/And XXXII), an Andromeda satellite dwarf galaxy recently discovered by Martin et al. (2013) in Pan-STARRS1 survey data. Detailed studies of satellite dwarf galaxies in the Local Group and its environs provide important insight into how low-mass galaxies form and evolve as well as how more massive galaxies are assembled in a hierarchical universe. The goal of this project is to obtain deep, wide-field photometry of Cas III in order to study its stellar population in more detail. The images used for this analysis were taken in October 2013 with the 24' x 24' pODI camera on the WIYN 3.5-m telescope in the SDSS g and i filters. Calibrated photometry was performed on all point sources in the g and i images and then used to quantify the radial distribution of stars in Cas III and to construct a color-magnitude diagram (CMD). We present this CMD along with a map of the resolved stellar population and measurements of the galaxy magnitude and structural properties. This research was supported by the NSF Research Experiences for Undergraduates program (grant number AST-1358980).

  9. Three-dimensional oxygen isotope imaging of convective fluid flow around the Big Bonanza, Comstock lode mining district, Nevada

    USGS Publications Warehouse

    Criss, R.E.; Singleton, M.J.; Champion, D.E.

    2000-01-01

    Oxygen isotope analyses of propylitized andesites from the Con Virginia and California mines allow construction of a detailed, three-dimensional image of the isotopic surfaces produced by the convective fluid flows that deposited the famous Big Bonanza orebody. On a set of intersecting maps and sections, the δ18O isopleths clearly show the intricate and conformable relationship of the orebody to a deep, ~500 m gyre of meteoric-hydrothermal fluid that circulated along and above the Comstock fault, near the contact of the Davidson Granodiorite. The core of this gyre (δ18O = 0 to 3.8‰) encompasses the bonanza and is almost totally surrounded by rocks having much lower δ18O values (–1.0 to –4.4‰). This deep gyre may represent a convective longitudinal roll superimposed on a large unicellular meteoric-hydrothermal system, producing a complex flow field with both radial and longitudinal components that is consistent with experimentally observed patterns of fluid convection in permeable media.

  10. Novel x-ray silicon detector for 2D imaging and high-resolution spectroscopy

    NASA Astrophysics Data System (ADS)

    Castoldi, Andrea; Gatti, Emilio; Guazzoni, Chiara; Longoni, Antonio; Rehak, Pavel; Strueder, Lothar

    1999-10-01

    A novel x-ray silicon detector for 2D imaging has recently been proposed. The detector, called the Controlled-Drift Detector, is operated in integrate-readout mode. Its basic feature is the fast transport of the integrated charge to the output electrode by means of a uniform drift field. The drift time of the charge packet identifies the pixel of incidence. A new architecture to implement the Controlled-Drift Detector concept will be presented. The potential wells for the integration of the signal charge are obtained by means of a suitable pattern of deep n-implants and deep p-implants. During the readout mode the signal electrons are transferred into the drift channel that flanks each column of potential wells, where they drift towards the collecting electrode at constant velocity. The first experimental measurements demonstrate the successful integration, transfer and drift of the signal electrons. The low output capacitance of the readout electrode, together with the on-chip front-end electronics, allows high-resolution spectroscopy of the detected photons.
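
    The pixel-identification principle reduces to simple arithmetic: with a constant drift velocity, the measured drift time gives the row of incidence. A sketch with purely illustrative numbers (drift velocity and pixel pitch are not taken from the paper):

    ```python
    # Sketch: row of incidence from drift time at constant drift velocity.
    def row_of_incidence(t_drift_us: float,
                         v_drift_um_per_us: float = 10.0,   # illustrative
                         pitch_um: float = 50.0) -> int:    # illustrative
        return round(v_drift_um_per_us * t_drift_us / pitch_um)
    ```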

  11. Deep learning in the small sample size setting: cascaded feed forward neural networks for medical image segmentation

    NASA Astrophysics Data System (ADS)

    Gaonkar, Bilwaj; Hovda, David; Martin, Neil; Macyszyn, Luke

    2016-03-01

    Deep learning refers to a large set of neural-network-based algorithms that have emerged as promising machine-learning tools in the general imaging and computer vision domains. Convolutional neural networks (CNNs), a specific class of deep learning algorithms, have been extremely effective in object recognition and localization in natural images. A characteristic feature of CNNs is the use of a locally connected multi-layer topology that is inspired by the animal visual cortex (the most powerful vision system in existence). While CNNs perform admirably in object identification and localization tasks, they typically require training on extremely large datasets. Unfortunately, in medical image analysis, large datasets are either unavailable or extremely expensive to obtain. Further, the primary tasks in medical imaging are organ identification and segmentation from 3D scans, which differ from the standard computer vision tasks of object recognition. Thus, in order to translate the advantages of deep learning to medical image analysis, there is a need to develop deep network topologies and training methodologies that are geared towards medical imaging tasks and can work in a setting where dataset sizes are relatively small. In this paper, we present a technique for stacked supervised training of deep feed forward neural networks for segmenting organs from medical scans. Each 'neural network layer' in the stack is trained to identify a sub-region of the original image that contains the organ of interest. By layering several such stacks together, a very deep neural network is constructed. Such a network can be used to identify extremely small regions of interest in extremely large images, in spite of a lack of clear contrast in the signal or easily identifiable shape characteristics. What is even more intriguing is that the network stack achieves accurate segmentation even when it is trained on a single image with manually labelled ground truth. We validate this approach using a publicly available head and neck CT dataset, and show that a deep neural network of similar depth, if trained directly using backpropagation, cannot achieve the tasks achieved using our layer-wise training paradigm.
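
    A schematic of the cascade: each trained stage predicts a sub-region containing the organ, and the next stage works on the crop; the `stages` networks and box format are placeholders, not the authors' code.

    ```python
    # Sketch: cascaded region narrowing with a stack of locator networks.
    def cascaded_segment(image, stages):
        region = image
        for net in stages:
            y0, y1, x0, x1 = net(region)     # predicted bounding box
            region = region[y0:y1, x0:x1]    # narrow the field of view
        return region                        # final organ region
    ```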

  12. Feasibility of Using Linearly Polarized Rotating Birdcage Transmitters and Close-Fitting Receive Arrays in MRI to Reduce SAR in the Vicinity of Deep Brain Stimulation Implants

    PubMed Central

    Golestanirad, Laleh; Keil, Boris; Angelone, Leonardo M.; Bonmassar, Giorgio; Mareyam, Azma; Wald, Lawrence L.

    2016-01-01

    Purpose MRI of patients with deep brain stimulation (DBS) implants is strictly limited due to safety concerns, including high levels of local specific absorption rate (SAR) of radiofrequency (RF) fields near the implant and related RF-induced heating. This study demonstrates the feasibility of using a rotating linearly polarized birdcage transmitter and a 32-channel close-fit receive array to significantly reduce local SAR in MRI of DBS patients. Methods Electromagnetic simulations and phantom experiments were performed with generic DBS lead geometries and implantation paths. The technique was based on mechanically rotating a linear birdcage transmitter to align its zero electric-field region with the implant while using a close-fit receive array to significantly increase signal to noise ratio of the images. Results It was found that the zero electric-field region of the transmitter is thick enough at 1.5 Tesla to encompass DBS lead trajectories with wire segments that were up to 30 degrees out of plane, as well as leads with looped segments. Moreover, SAR reduction was not sensitive to tissue properties, and insertion of a close-fit 32-channel receive array did not degrade the SAR reduction performance. Conclusion The ensemble of rotating linear birdcage and 32-channel close-fit receive array introduces a promising technology for future improvement of imaging in patients with DBS implants. PMID:27059266

  13. Ganymede G1 & G2 Encounters - Interior of Ganymede

    NASA Image and Video Library

    1997-12-16

    NASA's Voyager images are used to create a global view of Ganymede. The cut-out reveals the interior structure of this icy moon. This structure consists of four layers based on measurements of Ganymede's gravity field and theoretical analyses using Ganymede's known mass, size and density. Ganymede's surface is rich in water ice, and Voyager and Galileo images show features which are evidence of geological and tectonic disruption of the surface in the past. As with the Earth, these geological features reflect forces and processes deep within Ganymede's interior. Based on geochemical and geophysical models, scientists expected Ganymede's interior to consist of either (a) an undifferentiated mixture of rock and ice, or (b) a differentiated structure with a large, lunar-sized "core" of rock and possibly iron, overlain by a deep layer of warm, soft ice capped by a thin, cold, rigid ice crust. Galileo's measurements of Ganymede's gravity field during its first and second encounters with the huge moon have basically confirmed the differentiated model and allowed scientists to estimate the size of these layers more accurately. In addition, the data strongly suggest that a dense metallic core exists at the center of the rock core. This metallic core suggests a greater degree of heating at some time in Ganymede's past than had been proposed before, and may be the source of Ganymede's magnetic field discovered by Galileo's space physics experiments. http://photojournal.jpl.nasa.gov/catalog/PIA00519

  14. Composite View of Asteroid Braille from Deep Space 1

    NASA Image and Video Library

    1999-08-03

    The two images on the left hand side of this composite image frame were taken 914 seconds and 932 seconds after the NASA Deep Space 1 encounter with the asteroid 9969 Braille. The image on the right was created by combining the two images on the left.

  15. Deep learning for medical image segmentation - using the IBM TrueNorth neurosynaptic system

    NASA Astrophysics Data System (ADS)

    Moran, Steven; Gaonkar, Bilwaj; Whitehead, William; Wolk, Aidan; Macyszyn, Luke; Iyer, Subramanian S.

    2018-03-01

    Deep convolutional neural networks have found success in semantic image segmentation tasks in computer vision and medical imaging. These algorithms are typically executed on conventional von Neumann processor architectures or GPUs, which is suboptimal. Neuromorphic processors that replicate the structure of the brain are better suited to train and execute deep learning models for image segmentation by relying on massively parallel processing. However, because they closely emulate the human brain, on-chip hardware and digital memory limitations also constrain them. Adapting deep learning models to execute image segmentation tasks on such chips requires specialized training and validation. In this work, we demonstrate, for the first time, spinal image segmentation performed using a deep learning network implemented on the neuromorphic hardware of the IBM TrueNorth Neurosynaptic System, and we validate the performance of our network by comparing it to human-generated segmentations of spinal vertebrae and disks. To achieve this on neuromorphic hardware, the training model constrains the coefficients of individual neurons to {-1, 0, 1} using the Energy Efficient Deep Neuromorphic (EEDN) network training algorithm. Given the 1 million neurons and 256 million synapses, the scale and size of the neural network implemented by the IBM TrueNorth allow us to execute the requisite mapping between segmented images and non-uniform intensity MR images >20 times faster than on a GPU-accelerated network while using <0.1 W. This speed and efficiency imply that a trained neuromorphic chip can be deployed in intra-operative environments where real-time medical image segmentation is necessary.
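
    A generic sketch of constraining weights to {-1, 0, 1} during training, in the spirit of (though not identical to) the EEDN algorithm: full-precision "shadow" weights receive the gradient updates, while the forward pass uses ternarized values via a straight-through estimator. All names here are illustrative:

    import torch

    class TernaryLinear(torch.nn.Linear):
        def forward(self, x):
            w = self.weight
            # Threshold at a fraction of the mean |w|; small values map to 0.
            t = 0.5 * w.abs().mean()
            w_q = torch.where(w.abs() < t, torch.zeros_like(w), torch.sign(w))
            # Straight-through: forward uses w_q, backward flows through w.
            w_ste = w + (w_q - w).detach()
            return torch.nn.functional.linear(x, w_ste, self.bias)

    layer = TernaryLinear(256, 10)
    out = layer(torch.randn(4, 256))   # forward pass uses ternary weights
    out.sum().backward()               # gradients update the shadow weights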

  16. Linking Deep Astrometric Standards to the ICRF

    NASA Astrophysics Data System (ADS)

    Frey, S.; Platais, I.; Fey, A. L.

    2007-07-01

    The next-generation large-aperture and large field-of-view telescopes will address fundamental questions of astrophysics and cosmology, such as the nature of dark matter and dark energy. For a variety of applications, the CCD mosaic detectors in the focal plane arrays require astrometric calibration at the milli-arcsecond (mas) level. The existing optical reference frames are insufficient to support such calibrations. To address this problem, deep optical astrometric fields are being established near the Galactic plane. In order to achieve a 5-10 mas or better positional accuracy for the Deep Astrometric Standards (DAS), and to obtain absolute stellar proper motions for the study of Galactic structure, it is crucial to link these fields to the International Celestial Reference Frame (ICRF). To this end, we selected 15 candidate compact extragalactic radio sources in the Gemini-Orion-Taurus (GOT) field. These sources were observed with the European VLBI Network (EVN) at 5 GHz in phase-reference mode. The bright compact calibrator source J0603+2159 and seven other sources were detected and imaged at an angular resolution of ~1.5-8 mas. Relative astrometric positions were derived for these sources at the milli-arcsecond accuracy level. The detection of the optical counterparts of these extragalactic radio sources will allow us to establish a direct link to the ICRF locally in the GOT field.

  17. Global Magnetospheric Imaging from the Deep Space Gateway in Lunar Orbit

    NASA Astrophysics Data System (ADS)

    Chua, D. H.; Socker, D. G.; Englert, C. R.; Carter, M. T.; Plunkett, S. P.; Korendyke, C. M.; Meier, R. R.

    2018-02-01

    We propose to use the Deep Space Gateway as an observing platform for a magnetospheric imager that will capture the first direct global images of the interface between the incident solar wind and the Earth's magnetosphere.

  18. Application of a deep-learning method to the forecast of daily solar flare occurrence using Convolution Neural Network

    NASA Astrophysics Data System (ADS)

    Shin, Seulki; Moon, Yong-Jae; Chu, Hyoungseok

    2017-08-01

    As deep-learning methods have succeeded in various fields, they hold high potential for application to space weather forecasting. The convolutional neural network, one such deep-learning method, is specialized for image recognition. In this study, we apply the AlexNet architecture, the winner of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012, to the forecast of daily solar flare occurrence using the MatConvNet software for MATLAB. Our input images are SOHO/MDI, EIT 195Å, and 304Å images from January 1996 to December 2010, and the output is a binary label for flare occurrence. We select the training dataset from Jan 1996 to Dec 2000 and from Jan 2003 to Dec 2008. The testing dataset is chosen from Jan 2001 to Dec 2002 and from Jan 2009 to Dec 2010 in order to consider the solar-cycle effect. Within the training dataset, we randomly select one fifth of the data as a validation dataset to avoid overfitting. Our model successfully forecasts flare occurrence with a probability of detection (POD) of about 0.90 for common flares (C-, M-, and X-class). While the POD for major-flare (M- and X-class) forecasting is 0.96, the false alarm rate (FAR) is also relatively high (0.60). We also present several statistical parameters such as the critical success index (CSI) and true skill statistic (TSS). Our model can be immediately applied to an automatic forecasting service whenever image data are available.
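
    The quoted skill scores follow from a standard 2x2 contingency table of forecasts versus observed flares. A minimal sketch with the usual definitions; the counts below are hypothetical, chosen only so that the major-flare POD and FAR quoted above are reproduced:

    def skill_scores(hits, misses, false_alarms, correct_negatives):
        pod = hits / (hits + misses)                # probability of detection
        far = false_alarms / (hits + false_alarms)  # false alarm rate/ratio
        csi = hits / (hits + misses + false_alarms) # critical success index
        # TSS = POD minus the probability of false detection
        tss = pod - false_alarms / (false_alarms + correct_negatives)
        return pod, far, csi, tss

    pod, far, csi, tss = skill_scores(hits=48, misses=2,
                                      false_alarms=72, correct_negatives=600)
    print(f"POD={pod:.2f} FAR={far:.2f} CSI={csi:.2f} TSS={tss:.2f}")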

  19. WE-G-BRD-01: A Data-Driven 4D-MRI Motion Model to Estimate Full Field-Of-View Abdominal Motion From 2D Image Navigators During MR-Linac Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stemkens, B; Tijssen, RHN; Denis de Senneville, B

    2015-06-15

    Purpose: To estimate full field-of-view abdominal respiratory motion from fast 2D image navigators using a 4D-MRI based motion model. This will allow for radiation dose accumulation mapping during MR-Linac treatment. Methods: Experiments were conducted on a Philips Ingenia 1.5T MRI. First, a retrospectively ordered 4D-MRI was constructed using 3D transient-bSSFP with radial in-plane sampling. Motion fields were calculated through 3D non-rigid registration. From these motion fields, a PCA-based abdominal motion model was constructed and used to warp a 3D reference volume to fast 2D cine-MR image navigators that can be used for real-time tracking. To test this procedure, a time-series consisting of two interleaved orthogonal slices (sagittal and coronal), positioned on the pancreas or kidneys, was acquired for 1m38s (dynamic scan-time = 0.196 ms) during normal, shallow, or deep breathing. The coronal slices were used to update the optimal weights for the first two PCA components, in order to warp the 3D reference image and construct a dynamic 4D-MRI time-series. The interleaved sagittal slices served as an independent measure to test the model's accuracy and fit. Spatial maps of the root-mean-squared error (RMSE) and histograms of the motion differences within the pancreas and kidneys were used to evaluate the method. Results: Cranio-caudal motion was accurately calculated within the pancreas using the model for normal and shallow breathing, with RMSEs of 1.6 mm and 1.5 mm and a histogram median and standard deviation below 0.2 and 1.7 mm, respectively. For deep breathing, an underestimation of the inhale amplitude was observed (RMSE = 4.1 mm). Respiratory-induced antero-posterior and lateral motion were correctly mapped (RMSE = 0.6/0.5 mm). Kidney motion demonstrated good motion estimation, with RMSE values of 0.95 and 2.4 mm for the right and left kidney, respectively. Conclusion: We have demonstrated a method that can calculate dynamic 3D abdominal motion in a large volume while acquiring real-time cine-MR images for MR-guided radiotherapy.
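
    A minimal sketch of the PCA motion-model step, assuming precomputed, flattened 3D motion fields (one per respiratory phase); registration, navigator matching, and image warping are beyond the sketch, and all shapes and names are illustrative:

    import numpy as np

    def fit_motion_model(motion_fields, n_components=2):
        """motion_fields: (n_phases, n_voxels*3) displacement vectors."""
        mean = motion_fields.mean(axis=0)
        u, s, vt = np.linalg.svd(motion_fields - mean, full_matrices=False)
        return mean, vt[:n_components]       # mean field + principal components

    def reconstruct(mean, components, weights):
        """Full 3D warp field for one time point from the fitted PCA weights."""
        return mean + weights @ components

    # In-treatment use: the weights of the first two components would be
    # re-estimated from each fast 2D cine navigator (e.g. by least squares
    # against the motion seen in the navigator slice), then applied here.
    phases = np.random.randn(10, 3000)        # stand-in for real motion fields
    mean, comps = fit_motion_model(phases)
    full_field = reconstruct(mean, comps, weights=np.array([1.2, -0.4]))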

  20. Generative Models in Deep Learning: Constraints for Galaxy Evolution

    NASA Astrophysics Data System (ADS)

    Turp, Maximilian Dennis; Schawinski, Kevin; Zhang, Ce; Weigel, Anna K.

    2018-01-01

    New techniques are essential to make advances in the field of galaxy evolution. Recent developments in the field of artificial intelligence and machine learning have proven that these tools can be applied to problems far more complex than simple image recognition. We use these purely data driven approaches to investigate the process of star formation quenching. We show that Variational Autoencoders provide a powerful method to forward model the process of galaxy quenching. Our results imply that simple changes in specific star formation rate and bulge to disk ratio cannot fully describe the properties of the quenched population.
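
    For reference, a minimal variational autoencoder of the general kind used for such forward modeling, sketched in PyTorch; the architecture and feature dimensionality are illustrative and not taken from the paper:

    import torch
    import torch.nn as nn

    class VAE(nn.Module):
        def __init__(self, n_features=8, n_latent=2):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
            self.mu = nn.Linear(32, n_latent)
            self.logvar = nn.Linear(32, n_latent)
            self.dec = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
            return self.dec(z), mu, logvar

    def loss_fn(x, recon, mu, logvar):
        # Reconstruction term plus KL divergence to the unit Gaussian prior.
        recon_loss = nn.functional.mse_loss(recon, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon_loss + kl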

  1. Swept-source optical coherence tomography powered by a 1.3-μm vertical cavity surface emitting laser enables 2.3-mm-deep brain imaging in mice in vivo

    NASA Astrophysics Data System (ADS)

    Choi, Woo June; Wang, Ruikang K.

    2015-10-01

    We report noninvasive, in vivo optical imaging deep within a mouse brain by swept-source optical coherence tomography (SS-OCT), enabled by a 1.3-μm vertical cavity surface emitting laser (VCSEL). VCSEL SS-OCT offers a constant signal sensitivity of 105 dB throughout an entire depth of 4.25 mm in air, ensuring an extended usable imaging depth range of more than 2 mm in turbid biological tissue. Using this approach, we show deep brain imaging in mice with an open-skull cranial window preparation, revealing intact mouse brain anatomy from the superficial cerebral cortex to the deep hippocampus. VCSEL SS-OCT would be applicable to small animal studies for the investigation of deep tissue compartments in living brains where diseases such as dementia and tumor can take their toll.

  2. Deep kernel learning method for SAR image target recognition

    NASA Astrophysics Data System (ADS)

    Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao

    2017-10-01

    With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing detection urgently requires target recognition for military, geographic, and other scientific research. This paper aims to solve the synthetic aperture radar image target recognition problem by combining deep and kernel learning. The model, which has a multilayer multiple kernel structure, is optimized layer by layer with the parameters of Support Vector Machine and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.

  3. Ground Testing of Prototype Hardware and Processing Algorithms for a Wide Area Space Surveillance System (WASSS)

    NASA Astrophysics Data System (ADS)

    Goldstein, N.; Dressler, R. A.; Richtsmeier, S. S.; McLean, J.; Dao, P. D.; Murray-Krezan, J.; Fulcoly, D. O.

    2013-09-01

    Recent ground testing of a wide-area camera system and automated star-removal algorithms has demonstrated the potential to detect, quantify, and track deep space objects using small-aperture cameras and on-board processors. The camera system, which was originally developed for a space-based Wide Area Space Surveillance System (WASSS), operates in a fixed-stare mode, continuously monitoring a wide swath of space and differentiating celestial objects from satellites based on differential motion across the field of view. It would have greatest utility in a LEO orbit to provide automated and continuous monitoring of deep space with high refresh rates, and with particular emphasis on the GEO belt and GEO transfer space. Continuous monitoring allows a concept of change detection and custody maintenance not possible with existing sensors. The detection approach is equally applicable to Earth-based sensor systems. A distributed system of such sensors, either Earth-based or space-based, could provide automated, persistent night-time monitoring of all of deep space. The continuous monitoring provides a daily record of the light curves of all GEO objects above a certain brightness within the field of view. The daily updates of satellite light curves offer a means to identify specific satellites, to note changes in orientation and operational mode, and to cue other SSA assets for higher-resolution queries. The data processing approach may also be applied to larger-aperture, higher-resolution camera systems to extend the sensitivity towards dimmer objects. In order to demonstrate the utility of the WASSS system and data processing, a ground-based field test was conducted in October 2012. We report here the results of the observations made at Magdalena Ridge Observatory using the prototype WASSS camera, which has a 4×60° field of view, <0.05° resolution, a 2.8 cm2 aperture, and the ability to view within 4° of the sun. A single camera pointed at the GEO belt provided a continuous night-long record of the intensity and location of more than 50 GEO objects detected within the camera's 60-degree field of view, with a detection sensitivity similar to the camera's shot-noise limit of Mv=13.7. Performance is anticipated to scale with aperture area, allowing the detection of dimmer objects with larger-aperture cameras. The sensitivity of the system depends on multi-frame averaging and an image processing algorithm that exploits the different angular velocities of celestial objects and space objects (SOs). Principal Components Analysis (PCA) is used to filter out all objects moving with the velocity of the celestial frame of reference, as sketched below. The resulting filtered images are projected back into an Earth-centered frame of reference, or into any other relevant frame of reference, and co-added to form a series of images of the GEO objects as a function of time. The PCA approach not only removes the celestial background, but also removes systematic variations in system calibration, sensor pointing, and atmospheric conditions. The resulting images are shot-noise limited, and can be exploited to automatically identify deep space objects, produce approximate state vectors, and track their locations and intensities as a function of time.
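
    A simplified sketch of that PCA filtering step: if the frame stack is registered to the sidereal frame, the celestial background is common to all frames and concentrates in the leading principal components, so zeroing them leaves the differently moving objects plus noise. Shapes and names are illustrative:

    import numpy as np

    def remove_celestial(frames, n_drop=2):
        """frames: (n_frames, n_pixels) stack registered to the sidereal frame."""
        x = frames - frames.mean(axis=0)
        u, s, vt = np.linalg.svd(x, full_matrices=False)
        s[:n_drop] = 0.0                 # suppress components common to all frames
        return u @ np.diag(s) @ vt       # residual frames: movers + noise

    stack = np.random.rand(50, 4096)     # stand-in for registered frames
    residual = remove_celestial(stack)
    # The residual frames can then be re-projected into an Earth-fixed frame
    # and co-added to build light curves of GEO objects, as described above.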

  4. Role of Big Data and Machine Learning in Diagnostic Decision Support in Radiology.

    PubMed

    Syeda-Mahmood, Tanveer

    2018-03-01

    The field of diagnostic decision support in radiology is undergoing rapid transformation with the availability of large amounts of patient data and the development of new artificial intelligence methods of machine learning such as deep learning. They hold the promise of providing imaging specialists with tools for improving the accuracy and efficiency of diagnosis and treatment. In this article, we will describe the growth of this field for radiology and outline general trends highlighting progress in the field of diagnostic decision support from the early days of rule-based expert systems to cognitive assistants of the modern era. Copyright © 2018 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  5. Astronomers Set a New Galaxy Distance Record

    NASA Image and Video Library

    2015-05-06

    This is a Hubble Space Telescope image of the farthest spectroscopically confirmed galaxy observed to date (inset). It was identified in this Hubble image of a field of galaxies in the CANDELS survey (Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey). NASA’s Spitzer Space Telescope also observed the unique galaxy. The W. M. Keck Observatory was used to obtain a spectroscopic redshift (z=7.7), extending the previous redshift record. Measurements of the stretching of light, or redshift, give the most reliable distances to other galaxies. This source is thus currently the most distant confirmed galaxy known, and it appears to also be one of the brightest and most massive sources at that time. The galaxy existed over 13 billion years ago. The near-infrared light image of the galaxy (inset) has been colored blue as suggestive of its young, and hence very blue, stars. The CANDELS field is a combination of visible-light and near-infrared exposures. Credits: NASA, ESA, P. Oesch (Yale U.)

  6. Coronary artery calcification (CAC) classification with deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Liu, Xiuming; Wang, Shice; Deng, Yufeng; Chen, Kuan

    2017-03-01

    Coronary artery calcification (CAC) is a typical marker of coronary artery disease, which is one of the biggest causes of mortality in the U.S. This study evaluates the feasibility of using a deep convolutional neural network (DCNN) to automatically detect CAC in X-ray images. 1768 posteroanterior (PA) view chest X-ray images from Sichuan Province People's Hospital, China were collected retrospectively. Each image is associated with a corresponding diagnostic report written by a trained radiologist (907 normal, 861 diagnosed with CAC). One quarter of the images were randomly selected as test samples; the rest were used as training samples. DCNN models consisting of 2, 4, 6, and 8 convolutional layers were designed using blocks of pre-designed CNN layers. Each block was implemented in Theano with Graphics Processing Units (GPU). Human-in-the-loop learning was also performed on a subset of 165 images with arteries framed by trained physicians. The results from the DCNN models were compared to the diagnostic reports. The average diagnostic accuracies for models with 2, 4, 6, and 8 layers were 0.85, 0.87, 0.88, and 0.89, respectively. The areas under the curve (AUC) were 0.92, 0.95, 0.95, and 0.96. As the models grew deeper, the AUC and diagnostic accuracies did not show statistically significant changes. The results of this study indicate that DCNN models have promising potential in the field of intelligent medical image diagnosis practice.
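
    For illustration, a minimal PyTorch sketch of a binary-classification DCNN of the kind compared above; the study's actual models were built from pre-designed blocks in Theano, so this is a stand-in, not the authors' code:

    import torch.nn as nn

    def make_dcnn(n_conv=2):
        layers, ch = [], 1                      # 1-channel chest X-ray input
        for i in range(n_conv):
            layers += [nn.Conv2d(ch, 16 * (i + 1), 3, padding=1),
                       nn.ReLU(), nn.MaxPool2d(2)]
            ch = 16 * (i + 1)
        return nn.Sequential(*layers, nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                             nn.Linear(ch, 2))  # normal vs. CAC

    model = make_dcnn(n_conv=4)                 # deeper variants: n_conv = 2, 4, 6, 8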

  7. Volumetric imaging of fast biological dynamics in deep tissue via wavefront engineering

    NASA Astrophysics Data System (ADS)

    Kong, Lingjie; Tang, Jianyong; Cui, Meng

    2016-03-01

    To reveal fast biological dynamics in deep tissue, we combine two wavefront engineering methods that were developed in our laboratory, namely optical phase-locked ultrasound lens (OPLUL) based volumetric imaging and the iterative multiphoton adaptive compensation technique (IMPACT). OPLUL is used to generate an oscillating defocusing wavefront for fast axial scanning, and IMPACT is used to compensate for wavefront distortions in deep tissue imaging. We show its promising applications in neuroscience and immunology.

  8. Deep Full-sky Coadds from Three Years of WISE and NEOWISE Observations

    DOE PAGES

    Meisner, A. M.; Lang, D.; Schlegel, D. J.

    2017-09-26

    Here, we have reprocessed over 100 terabytes of single-exposure Wide-field Infrared Survey Explorer (WISE)/NEOWISE images to create the deepest ever full-sky maps at 3-5 microns. We include all publicly available W1 and W2 imaging - a total of ~8 million exposures in each band - from ~37 months of observations spanning 2010 January to 2015 December. Our coadds preserve the native WISE resolution and typically incorporate ~3× more input frames than those of the AllWISE Atlas stacks. Our coadds are designed to enable deep forced photometry, in particular for the Dark Energy Camera Legacy Survey (DECaLS) and Mayall z-Band Legacy Survey (MzLS), both of which are being used to select targets for the Dark Energy Spectroscopic Instrument. We describe newly introduced processing steps aimed at leveraging added redundancy to remove artifacts, with the intent of facilitating uniform target selection and searches for rare/exotic objects (e.g., high-redshift quasars and distant galaxy clusters). Forced photometry depths achieved with these coadds extend 0.56 (0.46) magnitudes deeper in W1 (W2) than is possible with only pre-hibernation WISE imaging.

  9. Revisiting Stephan's Quintet with deep optical images

    NASA Astrophysics Data System (ADS)

    Duc, Pierre-Alain; Cuillandre, Jean-Charles; Renaud, Florent

    2018-03-01

    Stephan's Quintet, a compact group of galaxies, is often used as a laboratory to study a number of phenomena, including physical processes in the interstellar medium, star formation, galaxy evolution, and the formation of fossil groups. As such, it has been the subject of intensive multiwavelength observation campaigns. Yet, models lack constraints to pin down the role of each galaxy in the assembly of the group. We revisit this system here with multiband deep optical images obtained with MegaCam on the Canada-France-Hawaii Telescope (CFHT), focusing on the detection of low surface brightness (LSB) structures. They reveal a number of extended LSB features, some new and some already visible in published images but not discussed before. An extended, diffuse, reddish, lopsided halo is detected towards the early-type galaxy NGC 7317, whose role had so far been ignored in models. The presence of this halo made of old stars may indicate that the group formed earlier than previously thought. Finally, a number of additional diffuse filaments are visible, some close to the foreground galaxy NGC 7331 located in the same field. Their structure and association with mid-infrared emission suggest contamination by emission from Galactic cirrus.

  10. Deep Full-sky Coadds from Three Years of WISE and NEOWISE Observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meisner, A. M.; Lang, D.; Schlegel, D. J.

    Here, we have reprocessed over 100 terabytes of single-exposure Wide-field Infrared Survey Explorer (WISE)/NEOWISE images to create the deepest ever full-sky maps at 3-5 microns. We include all publicly available W1 and W2 imaging - a total of ~8 million exposures in each band - from ~37 months of observations spanning 2010 January to 2015 December. Our coadds preserve the native WISE resolution and typically incorporate ~3× more input frames than those of the AllWISE Atlas stacks. Our coadds are designed to enable deep forced photometry, in particular for the Dark Energy Camera Legacy Survey (DECaLS) and Mayall z-Band Legacy Survey (MzLS), both of which are being used to select targets for the Dark Energy Spectroscopic Instrument. We describe newly introduced processing steps aimed at leveraging added redundancy to remove artifacts, with the intent of facilitating uniform target selection and searches for rare/exotic objects (e.g., high-redshift quasars and distant galaxy clusters). Forced photometry depths achieved with these coadds extend 0.56 (0.46) magnitudes deeper in W1 (W2) than is possible with only pre-hibernation WISE imaging.

  11. Toolkits and Libraries for Deep Learning.

    PubMed

    Erickson, Bradley J; Korfiatis, Panagiotis; Akkus, Zeynettin; Kline, Timothy; Philbrick, Kenneth

    2017-08-01

    Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.

  12. Ship detection leveraging deep neural networks in WorldView-2 images

    NASA Astrophysics Data System (ADS)

    Yamamoto, T.; Kazama, Y.

    2017-10-01

    Interpretation of high-resolution satellite images has been difficult enough that skilled interpreters have had to check the images manually, for the following reasons. One is the requirement for a high detection accuracy rate. The other is the variety of targets: taking ships as an example, there are many kinds, such as boats, cruise ships, cargo ships, and aircraft carriers. Furthermore, similar-looking objects appear throughout the image, so it is often difficult even for skilled interpreters to determine what object a group of pixels actually composes. In this paper, we explore the feasibility of object extraction leveraging deep learning with high-resolution satellite images, especially focusing on ship detection. We calculated the detection accuracy using the WorldView-2 images. First, we collected training images labelled as "ship" and "not ship". After preparing the training data, we defined a deep neural network model to judge whether a ship is present, and trained it with about 50,000 training images per label. Subsequently, we scanned the evaluation image with windows of different resolutions and extracted the "ship" images. The experimental results show the effectiveness of deep-learning-based object detection.
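
    A sketch of that scan-and-classify scheme: a patch classifier is slid across the scene at several window sizes and candidate detections are collected. The classifier below is a placeholder for the trained network, and the threshold is arbitrary:

    import numpy as np

    def detect(image, classify_patch, window_sizes=(32, 64, 128), stride_frac=0.5):
        """classify_patch: callable returning P(ship) for a 2D patch."""
        detections = []
        h, w = image.shape
        for win in window_sizes:                    # multi-resolution windows
            step = max(1, int(win * stride_frac))
            for y in range(0, h - win + 1, step):
                for x in range(0, w - win + 1, step):
                    p = classify_patch(image[y:y + win, x:x + win])
                    if p > 0.9:                     # detection threshold
                        detections.append((x, y, win, p))
        return detections   # candidates for non-maximum suppression / review

    # Example with a dummy classifier standing in for the trained network:
    hits = detect(np.random.rand(512, 512), classify_patch=lambda patch: patch.mean())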

  13. The ASTRODEEP Frontier Fields catalogues. I. Multiwavelength photometry of Abell-2744 and MACS-J0416

    NASA Astrophysics Data System (ADS)

    Merlin, E.; Amorín, R.; Castellano, M.; Fontana, A.; Buitrago, F.; Dunlop, J. S.; Elbaz, D.; Boucaud, A.; Bourne, N.; Boutsia, K.; Brammer, G.; Bruce, V. A.; Capak, P.; Cappelluti, N.; Ciesla, L.; Comastri, A.; Cullen, F.; Derriere, S.; Faber, S. M.; Ferguson, H. C.; Giallongo, E.; Grazian, A.; Lotz, J.; Michałowski, M. J.; Paris, D.; Pentericci, L.; Pilo, S.; Santini, P.; Schreiber, C.; Shu, X.; Wang, T.

    2016-05-01

    Context. The Frontier Fields survey is a pioneering observational program aimed at collecting photometric data, both from space (Hubble Space Telescope and Spitzer Space Telescope) and from ground-based facilities (VLT Hawk-I), for six deep fields pointing at clusters of galaxies and six nearby deep parallel fields, in a wide range of passbands. The analysis of these data is a natural outcome of the Astrodeep project, an EU collaboration aimed at developing methods and tools for extragalactic photometry and creating valuable public photometric catalogues. Aims: We produce multiwavelength photometric catalogues (from B to 4.5 μm) for the first two of the Frontier Fields, Abell-2744 and MACS-J0416 (plus their parallel fields). Methods: To detect faint sources even in the central regions of the clusters, we develop a robust and repeatable procedure that uses the public codes Galapagos and Galfit to model and remove most of the light contribution from both the brightest cluster members and the intra-cluster light. We perform the detection on the processed HST H160 image to obtain a pure H-selected sample, which is the primary catalogue that we publish. We also add a sample of sources which are undetected in the H160 image but appear on a stacked infrared image. Photometry on the other HST bands is obtained using SExtractor, again on processed images after the procedure for foreground light removal. Photometry on the Hawk-I and IRAC bands is obtained using our PSF-matching deconfusion code t-phot. A similar procedure, but without the need for foreground light removal, is adopted for the parallel fields. Results: The procedure of foreground light subtraction allows for the detection and photometric measurement of ~2500 sources per field. We deliver and release complete photometric H-detected catalogues, with the addition of the complementary sample of infrared-detected sources. All objects have multiwavelength coverage including B to H HST bands, plus K-band from Hawk-I and 3.6-4.5 μm from Spitzer. A full and detailed treatment of photometric errors is included. We perform basic sanity checks on the reliability of our results. Conclusions: The multiwavelength photometric catalogues are publicly available and ready to be used for scientific purposes. Our procedure allows for the detection of outshone objects near the bright galaxies, which, coupled with the magnification effect of the clusters, can reveal extremely faint high-redshift sources. A full analysis of photometric redshifts is presented in Paper II. The catalogues, together with the final processed images for all HST bands (as well as some diagnostic data and images), are publicly available and can be downloaded from the Astrodeep website at http://www.astrodeep.eu/frontier-fields/ and from a dedicated CDS webpage (http://astrodeep.u-strasbg.fr/ff/index.html). The catalogues are also available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/590/A31

  14. Distributed deep learning networks among institutions for medical imaging.

    PubMed

    Chang, Ken; Balachandar, Niranjan; Lam, Carson; Yi, Darvin; Brown, James; Beers, Andrew; Rosen, Bruce; Rubin, Daniel L; Kalpathy-Cramer, Jayashree

    2018-03-29

    Deep learning has become a promising approach for automated support of clinical diagnosis. When medical data samples are limited, collaboration among multiple institutions is necessary to achieve high algorithm performance. However, sharing patient data often has limitations due to technical, legal, or ethical concerns. In this study, we propose methods of distributing deep learning models as an attractive alternative to sharing patient data. We simulate the distribution of deep learning models across 4 institutions using various training heuristics and compare the results with a deep learning model trained on centrally hosted patient data. The training heuristics investigated include ensembling single-institution models, single weight transfer, and cyclical weight transfer. We evaluated these approaches for image classification in 3 independent image collections (retinal fundus photos, mammography, and ImageNet). We find that cyclical weight transfer resulted in performance comparable to that of centrally hosted patient data. We also found that the performance of the cyclical weight transfer heuristic improves with a higher frequency of weight transfer. We show that distributing deep learning models is an effective alternative to sharing patient data. This finding has implications for any collaborative deep learning study.
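
    A minimal sketch of cyclical weight transfer: one model's weights travel from institution to institution, training briefly at each stop, so raw patient data never leaves a site. The loaders, model, and hyperparameters below are placeholders:

    import torch

    def cyclical_weight_transfer(model, institution_loaders, cycles=10, local_steps=50):
        opt = torch.optim.SGD(model.parameters(), lr=1e-2)
        loss_fn = torch.nn.CrossEntropyLoss()
        for _ in range(cycles):
            for loader in institution_loaders:   # visit each institution in turn
                for step, (x, y) in enumerate(loader):
                    if step >= local_steps:      # smaller local_steps = higher
                        break                    # transfer frequency
                    opt.zero_grad()
                    loss_fn(model(x), y).backward()
                    opt.step()
                # only the weights (never the data) move to the next institution
        return model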

  15. A comparative study of deep learning models for medical image classification

    NASA Astrophysics Data System (ADS)

    Dutta, Suvajit; Manideep, B. C. S.; Rai, Shalva; Vijayarajan, V.

    2017-11-01

    Deep learning (DL) techniques are overtaking the prevailing traditional neural network approaches when it comes to huge datasets and applications requiring complex functions that demand increased accuracy with lower time complexity. Neuroscience has already exploited DL techniques and has thus portrayed itself as an inspirational source for researchers exploring the domain of machine learning. DL practitioners cover the areas of vision, speech recognition, motion planning, and NLP as well, moving back and forth among fields. The concern is with building models that can successfully solve a variety of tasks requiring intelligence and distributed representation. Access to faster CPUs, the introduction of GPUs performing complex vector and matrix computations, agile network connectivity, and enhanced software infrastructure for distributed computing have all strengthened the case for DL methodologies. The paper compares DL procedures with traditional approaches, which involve manual steps, for classifying medical images. The medical images used for the study are diabetic retinopathy (DR) images and computed tomography (CT) emphysema data; diagnosis from both is a difficult task for standard image classification methods. The initial work was carried out with basic image processing along with K-means clustering for identification of image severity levels. After determining severity levels, an ANN was applied to the data to obtain a baseline classification result, which was then compared with the results of deep neural networks (DNNs); DNNs performed efficiently because their multiple hidden layers increase accuracy, but the vanishing-gradient problem in DNNs motivated considering convolutional neural networks (CNNs) as well. The CNNs were found to provide better outcomes than the other learning models aimed at image classification, and are favoured because they provide better visual processing models and successfully classify noisy data. The work centres on the detection of diabetic retinopathy (loss of vision) and the recognition of CT emphysema data, measuring severity levels in both cases. The paper explores how various machine learning algorithms can be implemented following a supervised approach, so as to obtain accurate results with the least possible complexity.

  16. Hunting the Southern Skies with SIMBA

    NASA Astrophysics Data System (ADS)

    2001-08-01

    First Images from the New "Millimetre Camera" on SEST at La Silla. Summary: A new instrument, SIMBA ("SEST IMaging Bolometer Array"), was installed at the Swedish-ESO Submillimetre Telescope (SEST) at the ESO La Silla Observatory in July 2001. It records astronomical images at a wavelength of 1.2 mm and is able to quickly map large sky areas. In order to achieve the best possible sensitivity, SIMBA is cooled to only 0.3 degrees above absolute zero. SIMBA is the first imaging millimetre instrument in the southern hemisphere. Radiation at this wavelength is mostly emitted from cold dust and ionized gas in a variety of objects in the Universe. Among other things, SIMBA now opens exciting prospects for in-depth studies of the "hidden" sites of star formation, deep inside dense interstellar nebulae. While such clouds are impenetrable to optical light, they are transparent to millimetre radiation, and SIMBA can therefore observe the associated phenomena, in particular the dust around nascent stars. This sophisticated instrument can also search for disks of cold dust around nearby stars in which planets are being formed or which may be left-overs of this basic process. Equally important, SIMBA may observe extremely distant galaxies in the early universe, recording them while they were still in the formation stage. Various SIMBA images have been obtained during the first tests of the new instrument. The first observations confirm the great promise for unique astronomical studies of the southern sky in the millimetre wavelength region. These results also pave the way towards the Atacama Large Millimeter Array (ALMA), the giant, joint research project that is now under study in Europe, the USA and Japan. PR Photo 28a/01: SIMBA image centered on the infrared source IRAS 17175-3544. PR Photo 28b/01: SIMBA image centered on the infrared source IRAS 18434-0242. PR Photo 28c/01: SIMBA image centered on the infrared source IRAS 17271-3439. PR Photo 28d/01: View of the SIMBA instrument. First observations with SIMBA: SIMBA ("SEST IMaging Bolometer Array") was built and installed at the Swedish-ESO Submillimetre Telescope (SEST) at La Silla (Chile) within an international collaboration between the University of Bochum and the Max Planck Institute for Radio Astronomy in Germany, the Swedish National Facility for Radio Astronomy, and ESO. The SIMBA ("Lion" in Swahili) instrument detects radiation at a wavelength of 1.2 mm. It has 37 "horns" and acts like a camera with 37 picture elements (pixels). By changing the pointing direction of the telescope, relatively large sky fields can be imaged. As the first and only imaging millimetre instrument in the southern hemisphere, SIMBA now looks up towards rich and virgin hunting grounds in the sky. Observations at millimetre wavelengths are particularly useful for studies of star formation, deep inside dense interstellar clouds that are impenetrable to optical light. Other objects for which SIMBA is especially suited include planet-forming disks of cold dust around nearby stars and extremely distant galaxies in the early universe, still in the stage of formation. During the first observations, SIMBA was used to study the gas and dust content of star-forming regions in our own Milky Way Galaxy, as well as in the Magellanic Clouds and more distant galaxies. It was also used to record emission from planetary nebulae, clouds of matter ejected by dying stars.
Moreover, attempts were made to detect distant galaxies and quasars radiating at mm-wavelengths and located in two well-studied sky fields, the "Hubble Deep Field South" and the "Chandra Deep Field" [1]. Observations with SEST and SIMBA also serve to identify objects that can be observed at higher resolution and at shorter wavelengths with future southern submm telescopes and interferometers such as APEX (see MPG Press Release 07/01 of 6 July 2001) and ALMA. SIMBA images regions of high-mass star formation. ESO PR Photo 28a/01. Caption: This intensity-coded, false-colour SIMBA image is centered on the infrared source IRAS 17175-3544 and covers the well-known high-mass star formation complex NGC 6334, at a distance of 5500 light-years. The southern bright source is an ultra-compact region of ionized hydrogen ("HII region") created by a star or several stars already formed. The northern bright source has not yet developed an HII region and may be a star or a cluster of stars that are presently forming. A remarkable, narrow, linear dust filament extends over the image; it was known to exist before, but the SIMBA image now shows it to a much larger extent and much more clearly. This and the following images cover an area of about 15 arcmin x 6 arcmin on the sky and have a pixel size of 8 arcsec. ESO PR Photo 28b/01. Caption: This SIMBA image is centered on the object IRAS 18434-0242. It includes many bright sources that are associated with dense cores and compact HII regions located deep inside the cloud. A much less detailed map was made several years ago with a single-channel bolometer on SEST. The new SIMBA map is more extended and shows more sources. ESO PR Photo 28c/01. Caption: Another SIMBA image is centered on IRAS 17271-3439 and includes an extended bright source that is associated with several compact HII regions as well as a cluster of weaker sources. Some of the recent SIMBA images are shown above; they were taken during test observations and within a pilot survey of high-mass star-forming regions. Stars form in interstellar clouds that consist of gas and dust. The denser parts of these clouds can collapse into cold and dense cores which may form stars. Often many stars are formed in clusters, at about the same time. The newborn stars heat up the surrounding regions of the cloud. Radiation is emitted, first at mm-wavelengths and later at infrared wavelengths as the cloud core gets hotter. If very massive stars are formed, their UV-radiation ionizes the immediately surrounding gas, and this ionized gas also emits at mm-wavelengths. These ionized regions are called ultra-compact HII regions. Because the stars form deep inside the interstellar clouds, the obscuration at visible wavelengths is very high and it is not possible to see these regions optically. The objects selected for the SIMBA survey are from a catalog of objects first detected at long infrared wavelengths with the IRAS satellite (launched in 1983), hence the designations indicated in Photos 28a-c/01. From 1995 to 1998, the ESA Infrared Space Observatory (ISO) gathered an enormous amount of valuable data, obtaining images and spectra in the broad infrared wavelength region from 2.5 to 240 µm (0.025 to 0.240 mm), i.e.
just shortward of the millimetre region in which SIMBA operates. ISO produced mid-infrared images of field size and angular resolution (sharpness) comparable to those of SIMBA. It will obviously be most interesting to combine the images that will be made with SIMBA with imaging and spectral data from ISO, and also with those obtained by large ground-based telescopes in the near- and mid-infrared spectral regions. Some technical details about the SIMBA instrument. ESO PR Photo 28d/01. Caption: The SIMBA instrument, with the cover removed, in the SEST electronics laboratory. The 37 antenna horns are at the right; each produces one picture element (pixel) of the combined image. The bolometer elements are located behind the horns. The cylindrical, aluminium-foil-covered unit is the cooler that keeps SIMBA at extremely low temperature (-272.85 °C, or only 0.3 degrees above absolute zero) when it is mounted in the telescope. SIMBA is unique because of its ability to quickly map large sky areas due to its fast scanning mode. In order to achieve low noise and good sensitivity, the instrument is cooled to only 0.3 degrees above absolute zero, i.e., to -272.85 °C. SIMBA consists of 37 horns (each providing one pixel on the sky) arranged in a hexagonal pattern, cf. Photo 28d/01. To form images, the sky position of the telescope is changed according to a raster pattern; in this way, all of a celestial object and the surrounding sky field may be "scanned" fast, at speeds of typically 80 arcsec per second. This makes SIMBA a very efficient facility: for instance, a fully sampled image of good sensitivity with a field size of 15 arcmin x 6 arcmin can be taken in 15 minutes. If higher sensitivity is needed (to observe fainter sources), more images may be obtained of the same field and then added together. Large sky areas can be covered by combining many images taken at different positions. The image resolution (the "telescope beamsize") is 22 arcsec, corresponding to the angular resolution of this 15-m telescope at the indicated wavelength. Note [1]: Observations of the HDFS and CDFS fields in other wavebands with other telescopes at the ESO observatories have been reported earlier, e.g. within the ESO Imaging Survey Project (EIS) (the "EIS Deep-Survey"). It is the ESO policy on these fields to make data public world-wide.

  17. Radio-Optical Reference Frame Link Using the U.S. Naval Observatory Astrograph and Deep CCD Imaging

    NASA Astrophysics Data System (ADS)

    Zacharias, N.; Zacharias, M. I.

    2014-05-01

    Between 1997 and 2004 several observing runs were conducted, mainly with the CTIO 0.9 m, to image International Celestial Reference Frame (ICRF) counterparts (mostly QSOs) in order to determine accurate optical positions. Contemporary to these deep CCD images, the same fields were observed with the U.S. Naval Observatory astrograph in the same bandpass. They provide accurate positions on the Hipparcos/Tycho-2 system for stars in the 10-16 mag range used as reference stars for the deep CCD imaging data. Here we present final optical position results of 413 sources based on reference stars obtained by dedicated astrograph observations that were reduced following two different procedures. These optical positions are compared to radio very long baseline interferometry positions. The current optical system is not perfectly aligned to the ICRF radio system with rigid body rotation angles of 3-5 mas (= 3σ level) found between them for all three axes. Furthermore, statistically, the optical-radio position differences are found to exceed the total, combined, known errors in the observations. Systematic errors in the optical reference star positions and physical offsets between the centers of optical and radio emissions are both identified as likely causes. A detrimental, astrophysical, random noise component is postulated to be on about the 10 mas level. If confirmed by future observations, this could severely limit the Gaia to ICRF reference frame alignment accuracy to an error of about 0.5 mas per coordinate axis with the current number of sources envisioned to provide the link. A list of 36 ICRF sources without the detection of an optical counterpart to a limiting magnitude of about R = 22 is provided as well.

  18. Real time coarse orientation detection in MR scans using multi-planar deep convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Bhatia, Parmeet S.; Reda, Fitsum; Harder, Martin; Zhan, Yiqiang; Zhou, Xiang Sean

    2017-02-01

    Automatically detecting anatomy orientation is an important task in medical image analysis. Specifically, the ability to automatically detect the coarse orientation of structures is useful to minimize the effort of fine/accurate orientation detection algorithms, to initialize non-rigid deformable registration algorithms, or to align models to target structures in model-based segmentation algorithms. In this work, we present a deep convolutional neural network (DCNN)-based method for fast and robust detection of the coarse structure orientation, i.e., the hemisphere where the principal axis of a structure lies. That is, our algorithm predicts whether the principal orientation of a structure is in the northern hemisphere or southern hemisphere, which we will refer to as UP and DOWN, respectively, in the remainder of this manuscript. The only assumption of our method is that the entire structure is located within the scan's field-of-view (FOV). To efficiently solve the problem in 3D space, we formulated it as a multi-planar 2D deep learning problem. In the training stage, a large number of coronal-sagittal slice pairs are constructed as 2-channel images to train a DCNN to classify whether a scan is UP or DOWN. During testing, we randomly sample a small number of coronal-sagittal 2-channel images and pass them through our trained network. Finally, coarse structure orientation is determined using majority voting. We tested our method on 114 Elbow MR Scans. Experimental results suggest that only five 2-channel images are sufficient to achieve a high success rate of 97.39%. Our method is also extremely fast and takes approximately 50 milliseconds per 3D MR scan. Our method is insensitive to the location of the structure in the FOV.
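
    A sketch of that multi-planar voting step: random coronal-sagittal slice pairs are resampled to a common size, stacked as 2-channel images, classified by a trained 2D CNN (a placeholder here), and the majority label wins:

    import torch
    import torch.nn.functional as F

    def predict_orientation(volume, cnn, n_pairs=5, size=(64, 64)):
        """volume: (X, Y, Z) tensor; cnn maps (N, 2, H, W) -> logits for DOWN/UP."""
        votes = 0
        for _ in range(n_pairs):
            cor = volume[:, torch.randint(volume.shape[1], (1,)).item(), :]  # coronal
            sag = volume[torch.randint(volume.shape[0], (1,)).item(), :, :]  # sagittal
            # Resample both slices to a common size, stack as a 2-channel image.
            cor = F.interpolate(cor[None, None].float(), size=size).squeeze(0)
            sag = F.interpolate(sag[None, None].float(), size=size).squeeze(0)
            pair = torch.cat([cor, sag]).unsqueeze(0)        # (1, 2, H, W)
            votes += int(cnn(pair).argmax(dim=1).item())     # 1 = UP
        return "UP" if votes > n_pairs // 2 else "DOWN"      # majority vote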

  19. Radio-optical reference frame link using the U.S. Naval observatory astrograph and deep CCD imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zacharias, N.; Zacharias, M. I., E-mail: nz@usno.navy.mil

    2014-05-01

    Between 1997 and 2004 several observing runs were conducted, mainly with the CTIO 0.9 m, to image International Celestial Reference Frame (ICRF) counterparts (mostly QSOs) in order to determine accurate optical positions. Contemporary to these deep CCD images, the same fields were observed with the U.S. Naval Observatory astrograph in the same bandpass. They provide accurate positions on the Hipparcos/Tycho-2 system for stars in the 10-16 mag range used as reference stars for the deep CCD imaging data. Here we present final optical position results of 413 sources based on reference stars obtained by dedicated astrograph observations that were reduced following two different procedures. These optical positions are compared to radio very long baseline interferometry positions. The current optical system is not perfectly aligned to the ICRF radio system with rigid body rotation angles of 3-5 mas (= 3σ level) found between them for all three axes. Furthermore, statistically, the optical-radio position differences are found to exceed the total, combined, known errors in the observations. Systematic errors in the optical reference star positions and physical offsets between the centers of optical and radio emissions are both identified as likely causes. A detrimental, astrophysical, random noise component is postulated to be on about the 10 mas level. If confirmed by future observations, this could severely limit the Gaia to ICRF reference frame alignment accuracy to an error of about 0.5 mas per coordinate axis with the current number of sources envisioned to provide the link. A list of 36 ICRF sources without the detection of an optical counterpart to a limiting magnitude of about R = 22 is provided as well.

  20. Automatic detection of hemorrhagic pericardial effusion on PMCT using deep learning - a feasibility study.

    PubMed

    Ebert, Lars C; Heimer, Jakob; Schweitzer, Wolf; Sieberth, Till; Leipner, Anja; Thali, Michael; Ampanozi, Garyfalia

    2017-12-01

    Post mortem computed tomography (PMCT) can be used as a triage tool to better identify cases with a possibly non-natural cause of death, especially when high caseloads make it impossible to perform autopsies on all cases. Substantial data can be generated by modern medical scanners, especially in a forensic setting where the entire body is documented at high resolution. A solution for the resulting issues could be the use of deep learning techniques for automatic analysis of radiological images. In this article, we wanted to test the feasibility of such methods for forensic imaging by hypothesizing that deep learning methods can detect and segment a hemopericardium in PMCT. For deep learning image analysis software, we used the ViDi Suite 2.0. We retrospectively selected 28 cases with, and 24 cases without, hemopericardium. Based on these data, we trained two separate deep learning networks. The first one classified images into hemopericardium/not hemopericardium, and the second one segmented the blood content. We randomly selected 50% of the data for training and 50% for validation. This process was repeated 20 times. The best performing classification network classified all cases of hemopericardium from the validation images correctly with only a few false positives. The best performing segmentation network would tend to underestimate the amount of blood in the pericardium, which is the case for most networks. This is the first study that shows that deep learning has potential for automated image analysis of radiological images in forensic medicine.

  1. High-Resolution Ultrasound-Switchable Fluorescence Imaging in Centimeter-Deep Tissue Phantoms with High Signal-To-Noise Ratio and High Sensitivity via Novel Contrast Agents

    PubMed Central

    Cheng, Bingbing; Bandi, Venugopal; Wei, Ming-Yuan; Pei, Yanbo; D’Souza, Francis; Nguyen, Kytai T.; Hong, Yi; Yuan, Baohong

    2016-01-01

    For many years, investigators have sought high-resolution fluorescence imaging in centimeter-deep tissue, because many interesting in vivo phenomena, such as the presence of immune system cells, tumor angiogenesis, and metastasis, may be located deep in tissue. Previously, we developed a new imaging technique, named continuous-wave ultrasound-switchable fluorescence (CW-USF), to achieve high spatial resolution in sub-centimeter-deep tissue phantoms. The principle is to use a focused ultrasound wave to externally and locally switch on and off the fluorophore emission from a small volume (close to the ultrasound focal volume). By making improvements in three aspects of this technique: excellent near-infrared USF contrast agents, a sensitive frequency-domain USF imaging system, and an effective signal processing algorithm, for the first time this study has achieved high spatial resolution (~900 μm) in 3-centimeter-deep tissue phantoms with high signal-to-noise ratio (SNR) and high sensitivity (3.4 picomoles of fluorophore in a volume of 68 nanoliters can be detected). We have achieved these results in both tissue-mimicking phantoms and porcine muscle tissues. We have also demonstrated multi-color USF to image and distinguish two fluorophores with different wavelengths, which might be very useful for simultaneously imaging multiple targets and observing their interactions in the future. This work has opened the door for future studies of high-resolution centimeter-deep tissue fluorescence imaging. PMID:27829050

  2. High-Resolution Ultrasound-Switchable Fluorescence Imaging in Centimeter-Deep Tissue Phantoms with High Signal-To-Noise Ratio and High Sensitivity via Novel Contrast Agents.

    PubMed

    Cheng, Bingbing; Bandi, Venugopal; Wei, Ming-Yuan; Pei, Yanbo; D'Souza, Francis; Nguyen, Kytai T; Hong, Yi; Yuan, Baohong

    2016-01-01

    For many years, investigators have sought high-resolution fluorescence imaging in centimeter-deep tissue, because many interesting in vivo phenomena, such as the presence of immune system cells, tumor angiogenesis, and metastasis, may be located deep in tissue. Previously, we developed a new imaging technique, named continuous-wave ultrasound-switchable fluorescence (CW-USF), to achieve high spatial resolution in sub-centimeter-deep tissue phantoms. The principle is to use a focused ultrasound wave to externally and locally switch on and off the fluorophore emission from a small volume (close to the ultrasound focal volume). By making improvements in three aspects of this technique: excellent near-infrared USF contrast agents, a sensitive frequency-domain USF imaging system, and an effective signal processing algorithm, for the first time this study has achieved high spatial resolution (~900 μm) in 3-centimeter-deep tissue phantoms with high signal-to-noise ratio (SNR) and high sensitivity (3.4 picomoles of fluorophore in a volume of 68 nanoliters can be detected). We have achieved these results in both tissue-mimicking phantoms and porcine muscle tissues. We have also demonstrated multi-color USF to image and distinguish two fluorophores with different wavelengths, which might be very useful for simultaneously imaging multiple targets and observing their interactions in the future. This work has opened the door for future studies of high-resolution centimeter-deep tissue fluorescence imaging.

  3. Orientation selective deep brain stimulation

    NASA Astrophysics Data System (ADS)

    Lehto, Lauri J.; Slopsema, Julia P.; Johnson, Matthew D.; Shatillo, Artem; Teplitzky, Benjamin A.; Utecht, Lynn; Adriany, Gregor; Mangia, Silvia; Sierra, Alejandra; Low, Walter C.; Gröhn, Olli; Michaeli, Shalom

    2017-02-01

    Objective. Target selectivity of deep brain stimulation (DBS) therapy is critical, as the precise locus and pattern of the stimulation dictates the degree to which desired treatment responses are achieved and adverse side effects are avoided. There is a clear clinical need to improve DBS technology beyond currently available stimulation steering and shaping approaches. We introduce orientation selective neural stimulation as a concept to increase the specificity of target selection in DBS. Approach. This concept, which involves orienting the electric field along an axonal pathway, was tested in the corpus callosum of the rat brain by freely controlling the direction of the electric field on a plane using a three-electrode bundle, and monitoring the response of the neurons using functional magnetic resonance imaging (fMRI). Computational models were developed to further analyze axonal excitability for varied electric field orientation. Main results. Our results demonstrated that the strongest fMRI response was observed when the electric field was oriented parallel to the axons, while almost no response was detected with the perpendicular orientation of the electric field relative to the primary fiber tract. These results were confirmed by computational models of the experimental paradigm quantifying the activation of radially distributed axons while varying the primary direction of the electric field. Significance. The described strategies identify a new course for selective neuromodulation paradigms in DBS based on axonal fiber orientation.
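
    A toy calculation echoing the main result: for a straight axon, the effective driving term scales with the component of the electric field along the fibre, so a parallel field drives it maximally and a perpendicular field hardly at all. The threshold is arbitrary and only for illustration:

    import numpy as np

    fiber_direction = np.array([1.0, 0.0])           # axons along x (e.g. corpus callosum)
    for angle_deg in (0, 30, 60, 90):                # field orientation vs. fibre
        rad = np.radians(angle_deg)
        field = np.array([np.cos(rad), np.sin(rad)]) # unit electric field vector
        drive = abs(field @ fiber_direction)         # along-fibre component
        print(f"{angle_deg:3d} deg: relative drive {drive:.2f}",
              "-> strong response" if drive > 0.7 else "-> little response")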

  4. Marshall Space Flight Center - Launching the Future of Science and Exploration

    NASA Technical Reports Server (NTRS)

    Shivers, Alisa; Shivers, Herbert

    2010-01-01

    Topics include: NASA Centers around the country, launching a legacy (Explorer I), Marshall's continuing role in space exploration, MSFC history, lifting from Earth, our next mission (STS-133), Space Shuttle propulsion systems, Space Shuttle facts, the Space Shuttle and the International Space Station, technologies/materials originally developed for the space program, astronauts come from all over, potential future missions and example technologies, significant accomplishments, living and working in space, understanding our world, understanding worlds beyond, from exploration to innovation, inspiring the next generation, the space economy, from exploration to opportunity, new program assignments, NASA's role in education, and images from deep space, including a composite of a galaxy with a black hole, Sagittarius A, the Pillars of Creation, and an ultra deep field.

  5. Deep learning with non-medical training used for chest pathology identification

    NASA Astrophysics Data System (ADS)

    Bar, Yaniv; Diamant, Idit; Wolf, Lior; Greenspan, Hayit

    2015-03-01

    In this work, we examine the strength of deep learning approaches for pathology detection in chest radiograph data. Deep convolutional neural network (CNN) classification approaches have gained popularity due to their ability to learn mid- and high-level image representations. We explore the ability of a CNN to identify different types of pathologies in chest X-ray images. Moreover, since very large training sets are generally not available in the medical domain, we explore the feasibility of using a deep learning approach based on non-medical learning. We tested our algorithm on a dataset of 93 images, using a CNN that was trained on ImageNet, a well-known large-scale non-medical image database. The best performance was achieved using a combination of features extracted from the CNN and a set of low-level features. We obtained an area under the curve (AUC) of 0.93 for right pleural effusion detection, 0.89 for enlarged heart detection, and 0.79 for classification between healthy and abnormal chest X-rays, where all pathologies are combined into one large class. This is a first-of-its-kind experiment showing that deep learning with large-scale non-medical image databases may be sufficient for general medical image recognition tasks.
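
    The pipeline the abstract describes (deep features from an ImageNet-trained CNN combined with low-level features, scored by AUC) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' exact setup: the ResNet-18 backbone, the intensity-histogram features, and the synthetic stand-in images are all placeholders.

```python
# Hedged sketch: ImageNet-trained CNN features + low-level features -> classifier -> AUC.
# Loading pretrained weights requires a network connection on first use.
import numpy as np
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Pretrained backbone with the classification head removed -> 512-d descriptor.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.Grayscale(num_output_channels=3),   # radiographs are single-channel
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def deep_features(img):
    with torch.no_grad():
        return backbone(preprocess(img).unsqueeze(0)).squeeze(0).numpy()

def low_level_features(img, bins=32):
    g = np.asarray(img.convert("L"), dtype=np.float32)
    hist, _ = np.histogram(g, bins=bins, range=(0, 255), density=True)
    return hist                           # simple intensity histogram

# synthetic stand-ins for a labelled radiograph set
rng = np.random.default_rng(0)
images = [Image.fromarray(rng.integers(0, 256, (256, 256), dtype=np.uint8))
          for _ in range(8)]
labels = [0, 1, 0, 1, 0, 1, 0, 1]

X = np.stack([np.concatenate([deep_features(im), low_level_features(im)])
              for im in images])
clf = SVC().fit(X, labels)
scores = clf.decision_function(X)         # training-set scores, illustration only
print("AUC:", roc_auc_score(labels, scores))
```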

  6. Combined in-depth, 3D, en face imaging of the optic disc, optic disc pits and optic disc pit maculopathy using swept-source megahertz OCT at 1050 nm.

    PubMed

    Maertz, Josef; Kolb, Jan Philip; Klein, Thomas; Mohler, Kathrin J; Eibl, Matthias; Wieser, Wolfgang; Huber, Robert; Priglinger, Siegfried; Wolf, Armin

    2018-02-01

    To demonstrate papillary imaging of eyes with optic disc pits (ODP) or optic disc pit associated maculopathy (ODP-M) with ultrahigh-speed swept-source optical coherence tomography (SS-OCT) at 1.68 million A-scans/s, and to generate 3D renderings of the papillary area with 3D volume reconstructions of the ODP and highly resolved en face images from a single densely sampled megahertz-OCT (MHz-OCT) dataset for investigation of ODP characteristics. A 1.68 MHz prototype SS-MHz-OCT system at 1050 nm based on a Fourier-domain mode-locked laser was employed to acquire high-definition 3D datasets with a dense sampling of 1600 × 1600 A-scans over a 45° field of view. Six eyes with ODPs and two further eyes (with glaucomatous alteration or without ocular pathology) are presented. 3D renderings of the deep papillary structures, virtual 3D reconstructions of the ODPs, and depth-resolved isotropic en face images were generated using semiautomatic segmentation. 3D rendering and en face imaging of the optic disc, ODPs and ODP-associated pathologies showed a broad spectrum of ODP characteristics. Between individuals, the shape of the ODP and the appending pathologies varied considerably. MHz-OCT en face imaging generates distinct top-view images of ODPs and ODP-M. MHz-OCT generates high-resolution images of retinal pathologies associated with ODP-M and allows visualization of ODPs with depths of up to 2.7 mm. Different patterns of ODPs can be visualized in patients for the first time using 3D reconstructions and co-registered high-definition en face images extracted from a single densely sampled 1050 nm megahertz-OCT (MHz-OCT) dataset. As the immediate vicinity of the SAS and the site of intrapapillary proliferation are located at the bottom of the ODP, it is crucial to image the complete structure and the whole depth of ODPs. Especially in very deep pits, where non-swept-source OCT fails to reach the bottom, conventional swept-source devices and the MHz-OCT alike are feasible and beneficial methods for examining deep details of optic disc pathologies, while the MHz-OCT has the advantage of a substantially faster imaging process.

  7. Classification of radiolarian images with hand-crafted and deep features

    NASA Astrophysics Data System (ADS)

    Keçeli, Ali Seydi; Kaya, Aydın; Keçeli, Seda Uzunçimen

    2017-12-01

    Radiolarians are planktonic protozoa and are important biostratigraphic and paleoenvironmental indicators for paleogeographic reconstructions. Radiolarian paleontology remains a low-cost and one of the most convenient ways to date deep ocean sediments. Traditional methods for identifying radiolarians are time-consuming and cannot scale to the granularity or scope necessary for large-scale studies; automated image classification would allow these analyses to be performed promptly. In this study, a method for automatic radiolarian image classification is proposed for scanning electron microscope (SEM) images of radiolarians to ease species identification of fossilized radiolarians. The proposed method uses both hand-crafted features (invariant moments, wavelet moments, Gabor features, and basic morphological features) and deep features obtained from a pre-trained convolutional neural network (CNN). Feature selection is applied to the deep features to reduce their high dimensionality. Classification outcomes are analyzed to compare hand-crafted features, deep features, and their combinations. Results show that the deep features obtained from a pre-trained CNN are more discriminative than the hand-crafted ones. Additionally, feature selection reduces the computational cost of the classification algorithms and has no negative effect on classification accuracy.

  8. Global detection approach for clustered microcalcifications in mammograms using a deep learning network.

    PubMed

    Wang, Juan; Nishikawa, Robert M; Yang, Yongyi

    2017-04-01

    In computerized detection of clustered microcalcifications (MCs) from mammograms, the traditional approach is to apply a pattern detector to locate the presence of individual MCs, which are subsequently grouped into clusters. Such an approach is often susceptible to the occurrence of false positives (FPs) caused by local image patterns that resemble MCs. We investigate the feasibility of a direct detection approach to determining whether an image region contains clustered MCs or not. Toward this goal, we develop a deep convolutional neural network (CNN) as the classifier model to which the input consists of a large image window ([Formula: see text] in size). The multiple layers in the CNN classifier are trained to automatically extract image features relevant to MCs at different spatial scales. In the experiments, we demonstrated this approach on a dataset consisting of both screen-film mammograms and full-field digital mammograms. We evaluated the detection performance both on classifying image regions of clustered MCs using a receiver operating characteristic (ROC) analysis and on detecting clustered MCs from full mammograms by a free-response receiver operating characteristic analysis. For comparison, we also considered a recently developed MC detector with FP suppression. In classifying image regions of clustered MCs, the CNN classifier achieved 0.971 in the area under the ROC curve, compared to 0.944 for the MC detector. In detecting clustered MCs from full mammograms, at 90% sensitivity, the CNN classifier obtained an FP rate of 0.69 clusters/image, compared to 1.17 clusters/image by the MC detector. These results indicate that using global image features can be more effective in discriminating clustered MCs from FPs caused by various sources, such as linear structures, thereby providing a more accurate detection of clustered MCs on mammograms.

  9. Deep learning with convolutional neural network in radiology.

    PubMed

    Yasaka, Koichiro; Akai, Hiroyuki; Kunimatsu, Akira; Kiryu, Shigeru; Abe, Osamu

    2018-04-01

    Deep learning with a convolutional neural network (CNN) has recently been gaining attention for its high performance in image recognition. With this technique, images themselves can be utilized in the learning process, and feature extraction in advance of learning is not required; important features can be learned automatically. Thanks to developments in hardware and software, in addition to deep learning techniques themselves, the application of this technique to radiological images for predicting clinically useful information, such as the detection and evaluation of lesions, is beginning to be investigated. This article illustrates basic technical knowledge regarding deep learning with CNNs along the actual workflow (collecting data, implementing CNNs, and the training and testing phases). Pitfalls of this technique and how to manage them are also illustrated. We also describe some advanced topics of deep learning, results of recent clinical studies, and future directions for the clinical application of deep learning techniques.
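
    As a concrete companion to the workflow named above (collecting data, implementing a CNN, then training and testing), the sketch below walks through the same steps. The tiny architecture, input size, and random stand-in data are illustrative assumptions, not the article's example.

```python
# Minimal sketch of the data -> CNN -> train -> test workflow.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)   # for 64x64 inputs

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = SmallCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# stand-in for a labelled radiology dataset
images, labels = torch.randn(8, 1, 64, 64), torch.randint(0, 2, (8,))

for epoch in range(5):                 # training phase
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()

with torch.no_grad():                  # testing phase
    preds = model(images).argmax(dim=1)
    print("accuracy:", (preds == labels).float().mean().item())
```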

  10. Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields.

    PubMed

    Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo

    2016-01-11

    Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as the input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a deep learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only the complex sequence-structure relationship through a deep hierarchical architecture, but also the interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility.

  11. Protein Secondary Structure Prediction Using Deep Convolutional Neural Fields

    NASA Astrophysics Data System (ADS)

    Wang, Sheng; Peng, Jian; Ma, Jianzhu; Xu, Jinbo

    2016-01-01

    Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as the input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a deep learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only the complex sequence-structure relationship through a deep hierarchical architecture, but also the interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility.

  12. Mass Modeling of Frontier Fields Cluster MACS J1149.5+2223 Using Strong and Weak Lensing

    NASA Astrophysics Data System (ADS)

    Finney, Emily Quinn; Bradač, Maruša; Huang, Kuang-Han; Hoag, Austin; Morishita, Takahiro; Schrabback, Tim; Treu, Tommaso; Borello Schmidt, Kasper; Lemaux, Brian C.; Wang, Xin; Mason, Charlotte

    2018-05-01

    We present a gravitational-lensing model of MACS J1149.5+2223 using ultra-deep Hubble Frontier Fields imaging data and spectroscopic redshifts from HST grism and Very Large Telescope (VLT)/MUSE spectroscopic data. We create total mass maps using 38 multiple images (13 sources) and 608 weak-lensing galaxies, as well as 100 multiple images of 31 star-forming regions in the galaxy that hosts supernova Refsdal. We find good agreement with a range of recent models within the HST field of view. We present a map of the ratio of projected stellar mass to total mass (f⋆) and find that the stellar mass fraction for this cluster peaks on the primary BCG. Averaging within a radius of 0.3 Mpc, we obtain a value of ⟨f⋆⟩ = 0.012^{+0.004}_{-0.003}, consistent with other recent results for this ratio in cluster environments, though with a large global error (up to δf⋆ = 0.005) primarily due to the choice of IMF. We compare values of f⋆ and measures of star formation efficiency for this cluster to other Hubble Frontier Fields clusters studied in the literature, finding that MACS1149 has a higher stellar mass fraction than these other clusters but a star formation efficiency typical of massive clusters.
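
    The aperture-averaged quantity reported above reduces to a ratio of map sums inside a projected radius, which the following sketch makes explicit. The synthetic maps, pixel scale, and BCG-centered aperture are placeholders, not the paper's data.

```python
# Hedged sketch: <f_star> = (sum of projected stellar mass) / (sum of total
# lensing mass) within R < 0.3 Mpc of the BCG.
import numpy as np

def stellar_mass_fraction(stellar_map, total_map, center, radius_mpc, mpc_per_pix):
    """Average f_star = Sigma_star / Sigma_total inside a circular aperture."""
    ny, nx = total_map.shape
    y, x = np.mgrid[0:ny, 0:nx]
    r = np.hypot(x - center[0], y - center[1]) * mpc_per_pix
    inside = r < radius_mpc
    return stellar_map[inside].sum() / total_map[inside].sum()

# toy example with synthetic maps
rng = np.random.default_rng(0)
total = rng.uniform(1.0, 2.0, (200, 200))
stellar = 0.012 * total * rng.uniform(0.8, 1.2, total.shape)
print(stellar_mass_fraction(stellar, total, center=(100, 100),
                            radius_mpc=0.3, mpc_per_pix=0.005))
```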

  13. A deep-sea, high-speed, stereoscopic imaging system for in situ measurement of natural seep bubble and droplet characteristics

    NASA Astrophysics Data System (ADS)

    Wang, Binbin; Socolofsky, Scott A.

    2015-10-01

    Development, testing, and application of a deep-sea, high-speed, stereoscopic imaging system are presented. The new system is designed for field-ready deployment, focusing on measurement of the characteristics of natural seep bubbles and droplets with high-speed and high-resolution image capture. The stereo-view configuration allows precise evaluation of the physical scale of the moving particles in image pairs. Two laboratory validation experiments (a continuous bubble chain and an airstone bubble plume) were carried out to test the calibration procedure, the performance of the image processing and bubble matching algorithms, three-dimensional viewing, and estimation of bubble size distribution and volumetric flow rate. The results showed that the stereo view improved the individual bubble size measurement over the single-camera view by up to 90% in the two validation cases, with the single-camera view biased toward overestimation of the flow rate. We also present the first application of this imaging system in a study of natural gas seeps in the Gulf of Mexico. The high-speed images reveal the rigidity of the transparent bubble interface, indicating the presence of clathrate hydrate skins on the natural gas bubbles near the source (lowest measurement 1.3 m above the vent). We estimated the dominant bubble size at the seep site Sleeping Dragon in Mississippi Canyon block 118 to be in the range of 2-4 mm and the volumetric flow rate to be 0.2-0.3 L/min during our measurements from 17 to 21 July 2014.
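
    The metric-scale advantage of the stereo pair comes from rectified pinhole geometry: a matched bubble's disparity fixes its depth, and depth converts a pixel diameter into millimetres. The sketch below illustrates only that conversion; the calibration constants and measurements are invented placeholders, not the paper's values.

```python
# Hedged sketch of stereo sizing under ideal rectified pinhole geometry.
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    return focal_px * baseline_m / disparity_px          # Z = f*B/d

def metric_size(size_px, depth_m, focal_px):
    return size_px * depth_m / focal_px                  # s = p*Z/f

f_px, baseline = 4000.0, 0.1                             # assumed calibration
d = np.array([400.0, 410.0, 380.0])                      # matched-bubble disparities
diam_px = np.array([12.0, 10.0, 15.0])                   # measured pixel diameters
Z = depth_from_disparity(d, f_px, baseline)
print(metric_size(diam_px, Z, f_px) * 1e3, "mm")         # bubble diameters, ~2-4 mm
```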

  14. Deep-Layer Microvasculature Dropout by Optical Coherence Tomography Angiography and Microstructure of Parapapillary Atrophy.

    PubMed

    Suh, Min Hee; Zangwill, Linda M; Manalastas, Patricia Isabel C; Belghith, Akram; Yarmohammadi, Adeleh; Akagi, Tadamichi; Diniz-Filho, Alberto; Saunders, Luke; Weinreb, Robert N

    2018-04-01

    To investigate the association between the microstructure of β-zone parapapillary atrophy (βPPA) and parapapillary deep-layer microvasculature dropout assessed by optical coherence tomography angiography (OCT-A). Thirty-seven eyes with βPPA devoid of Bruch's membrane (BM) (γPPA), ranging between completely absent and discontinuous BM, were matched by severity of visual field (VF) damage with 37 eyes with fully intact BM (βPPA+BM) based on spectral-domain (SD) OCT imaging. Parapapillary deep-layer microvasculature dropout was defined as a dropout of the microvasculature within the choroid or scleral flange in the βPPA on OCT-A. The widths of βPPA, γPPA, and βPPA+BM were measured on six radial SD-OCT images. The prevalence of the dropout was compared between eyes with and without γPPA. Logistic regression was performed to evaluate the association of the dropout with the widths of βPPA, γPPA, and βPPA+BM, and with the presence of γPPA. Eyes with γPPA had a significantly higher prevalence of the dropout than did those without γPPA (75.7% versus 40.8%; P = 0.004). In logistic regression, the presence and longer width of γPPA, worse VF mean deviation, and the presence of focal lamina cribrosa defects were significantly associated with the dropout (P < 0.05), whereas the widths of βPPA and βPPA+BM, axial length, and choroidal thickness were not (P > 0.10). Parapapillary deep-layer microvasculature dropout was associated with the presence and larger width of γPPA, but not with the βPPA+BM width. The presence and width of the exposed scleral flange, rather than retinal pigment epithelium atrophy, may be associated with deep-layer microvasculature dropout.
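
    A minimal sketch of this kind of association analysis, on simulated stand-in data: logistic regression of a binary dropout outcome on two of the covariates named above (γPPA width and VF mean deviation), with coefficient p-values as the association test. The effect sizes in the simulator are invented, not the study's estimates.

```python
# Hedged sketch: logistic-regression association analysis on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 74                                        # 37 + 37 matched eyes
gamma_width = rng.uniform(0, 400, n)          # micrometres (placeholder scale)
vf_md = rng.normal(-6, 4, n)                  # VF mean deviation, dB
logit = 0.008 * gamma_width - 0.15 * vf_md - 2.0   # invented effects
dropout = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack([gamma_width, vf_md]))
fit = sm.Logit(dropout.astype(float), X).fit(disp=0)
print(fit.params)                             # coefficients
print(fit.pvalues)                            # association p-values
```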

  15. Deep-Layer Microvasculature Dropout by Optical Coherence Tomography Angiography and Microstructure of Parapapillary Atrophy

    PubMed Central

    Suh, Min Hee; Zangwill, Linda M.; Manalastas, Patricia Isabel C.; Belghith, Akram; Yarmohammadi, Adeleh; Akagi, Tadamichi; Diniz-Filho, Alberto; Saunders, Luke; Weinreb, Robert N.

    2018-01-01

    Purpose To investigate the association between the microstructure of β-zone parapapillary atrophy (βPPA) and parapapillary deep-layer microvasculature dropout assessed by optical coherence tomography angiography (OCT-A). Methods Thirty-seven eyes with βPPA devoid of Bruch's membrane (BM) (γPPA), ranging between completely absent and discontinuous BM, were matched by severity of visual field (VF) damage with 37 eyes with fully intact BM (βPPA+BM) based on spectral-domain (SD) OCT imaging. Parapapillary deep-layer microvasculature dropout was defined as a dropout of the microvasculature within the choroid or scleral flange in the βPPA on OCT-A. The widths of βPPA, γPPA, and βPPA+BM were measured on six radial SD-OCT images. The prevalence of the dropout was compared between eyes with and without γPPA. Logistic regression was performed to evaluate the association of the dropout with the widths of βPPA, γPPA, and βPPA+BM, and with the presence of γPPA. Results Eyes with γPPA had a significantly higher prevalence of the dropout than did those without γPPA (75.7% versus 40.8%; P = 0.004). In logistic regression, the presence and longer width of γPPA, worse VF mean deviation, and the presence of focal lamina cribrosa defects were significantly associated with the dropout (P < 0.05), whereas the widths of βPPA and βPPA+BM, axial length, and choroidal thickness were not (P > 0.10). Conclusions Parapapillary deep-layer microvasculature dropout was associated with the presence and larger width of γPPA, but not with the βPPA+BM width. The presence and width of the exposed scleral flange, rather than retinal pigment epithelium atrophy, may be associated with deep-layer microvasculature dropout. PMID:29677362

  16. DeepLensing: The Use of Deep Machine Learning to Find Strong Gravitational Lenses in Astronomical Surveys

    NASA Astrophysics Data System (ADS)

    Nord, Brian

    2017-01-01

    Strong gravitational lenses have potential as very powerful probes of dark energy and cosmic structure. However, efficiently finding lenses poses a significant challenge, especially in the era of large-scale cosmological surveys. I will present a new application of deep machine learning algorithms to find strong lenses, as well as the strong lens discovery program of the Dark Energy Survey (DES). Strong lenses provide unique information about the evolution of distant galaxies, the nature of dark energy, and the shapes of dark matter haloes. Current and future surveys, like DES and the Large Synoptic Survey Telescope, present an opportunity to find many thousands of strong lenses, far more than have ever been discovered. By and large, searches have heretofore relied on the time-consuming effort of human scanners. Deep machine learning frameworks, like convolutional neural nets, have revolutionized the task of image recognition, and have a natural place in the processing of astronomical images, including the search for strong lenses. Over five observing seasons, which started in August 2013, DES will carry out a wide-field survey of 5000 square degrees of the Southern Galactic Cap. DES has identified nearly 200 strong lensing candidates in the first two seasons of data. We have performed spectroscopic follow-up on a subsample of these candidates at Gemini South, confirming over a dozen new strong lenses. I will present this DES discovery program, including searches and spectroscopic follow-up of galaxy-scale, cluster-scale and time-delay lensing systems. I will focus, however, on a discussion of the successful search for strong lenses using deep learning methods. In particular, we show that convolutional neural nets present a new set of tools for efficiently finding lenses, and accelerating advancements in strong lensing science.

  17. Assessment of potential catastrophic landslides in Taiwan by airborne LiDAR-derived DEM

    NASA Astrophysics Data System (ADS)

    Hou, Chin-Shyong; Hsieh, Yu-Chung; Hu, Jyr-Ching; Chiu, Cheng-Lung; Chen, Hung-Jen; Fei, Li-Yuan

    2013-04-01

    The heavy rainfall of Typhoon Morakot caused severe damage to infrastructure, property and human lives in southern Taiwan in 2009. The most atrocious incident was the Hsiaolin landslide, which buried more than 400 victims. After this catastrophic event, the recognition of localities of deep-seated landslides became a critical issue in mitigating landslide hazards induced by extreme climate events. Consequently, an airborne LiDAR survey was carried out in a first phase from 2010 to 2012 by the Central Geological Survey, MOEA, in Taiwan in order to assess potential catastrophic deep-seated landslides in the steep and rocky terrain of south-central Taiwan. A second phase of the LiDAR survey, running from 2013 to 2015, covers the recognition and assessment of possible impact areas of deep-seated landslides in the mountainous areas of the whole of Taiwan. Traditionally, the recognition of potential deep-seated landslide sites has relied on landslide inventories from space-borne images, aerial photographs and field investigation. However, it is difficult to produce robust landslide inventories due to the poor spatial resolution of ground elevation data and the highly dense vegetation in the mountainous areas of Taiwan. In this study, a 1 m LiDAR-derived DEM is used to extract key geomorphological features, such as crown cracks, minor scarps, and toes of surface rupture, at meter to sub-meter scale hidden under forests with a high degree of accuracy. Preliminary results show that about 400 potential landslide sites have been recognized, improving the quality of the landslide inventories. In addition, detailed contour maps and visualized images are produced to outline the shape of potential deep-seated landslides. Further geomorphometric analyses based on hillshade, aspect, slope, eigenvalue ratio (ER) and openness will be integrated to streamline the creation of landslide inventories and mitigate landslide disasters in Taiwan's mountainous areas.

  18. The Stellar Mass Assembly of Galaxies at z=1 -- New Results from Subaru

    NASA Astrophysics Data System (ADS)

    Bundy, K.; Fukugita, M.; Ellis, R.; Conselice, C.; Kodama, T.; Brinchmann, J.

    2002-12-01

    We report on progress made analyzing deep CISCO K' imaging of well-studied HST redshift survey fields to determine the mass accretion and merger rates of field galaxies out to z ~ 1. Using an approach similar to that employed by Le Fevre et al. 2000, we find a field-corrected infrared pair fraction of 15% +/- 8% in the z ~ 0.5 to 1 redshift range. This is lower than the result of an equivalent analysis performed on WFPC2-814 images of the same fields, which delivers a pair fraction of 24% +/- 10% over the identical redshift range. Although currently marginal, this result supports the contention that optical pair fractions are inflated by associated star formation and that IR data will be more reliable in tracing the mass assembly history. Future observations will extend this sample beyond the 89 galaxies studied so far, allowing us to test this hypothesis more rigorously. We also report on a comparison between pair fraction and morphological type, as well as estimates of the stellar mass of companion galaxies, used to determine the time-dependent mass accretion rate.

  19. Applying Deep Learning in Medical Images: The Case of Bone Age Estimation.

    PubMed

    Lee, Jang Hyung; Kim, Kwang Gi

    2018-01-01

    A diagnostic need often arises to estimate bone age from X-ray images of the hand of a subject during the growth period. Together with measured physical height, such information may be used as an indicator for the height growth prognosis of the subject. We present a way to apply the deep learning technique to medical image analysis, using hand bone age estimation as an example. Age estimation was formulated as a regression problem with hand X-ray images as input and estimated age as output. A set of hand X-ray images was used to form a training set with which a regression model was trained. An image preprocessing procedure is described that reduces image variations across data instances that are unrelated to age-wise variation. The use of Caffe, a deep learning tool, is demonstrated. A rather simple deep learning network was adopted and trained for tutorial purposes. A test set distinct from the training set was formed to assess the validity of the approach. The measured mean absolute difference was 18.9 months, and the concordance correlation coefficient was 0.78. It is shown that the proposed deep learning-based neural network can be used to estimate a subject's age from hand X-ray images, which eliminates the need for tedious atlas look-ups in clinical environments and should improve the time and cost efficiency of the estimation process.
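
    The two figures of merit quoted above (mean absolute difference, 18.9 months; concordance correlation coefficient, 0.78) are straightforward to compute from predicted and reference ages. A minimal sketch, with placeholder arrays rather than the study's data:

```python
# Hedged sketch of the evaluation metrics only, not the Caffe model itself.
import numpy as np

def mean_absolute_difference(y_true, y_pred):
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

def concordance_ccc(y_true, y_pred):
    """Lin's CCC: 2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))^2)."""
    x, y = np.asarray(y_true, float), np.asarray(y_pred, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

ages_true = np.array([120.0, 96.0, 150.0, 132.0])   # months, placeholders
ages_pred = np.array([110.0, 101.0, 138.0, 140.0])
print(mean_absolute_difference(ages_true, ages_pred))
print(concordance_ccc(ages_true, ages_pred))
```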

  20. Deblurring adaptive optics retinal images using deep convolutional neural networks.

    PubMed

    Fei, Xiao; Zhao, Junlei; Zhao, Haoxin; Yun, Dai; Zhang, Yudong

    2017-12-01

    Adaptive optics (AO) can be used to compensate for ocular aberrations to achieve near-diffraction-limited, high-resolution retinal images. However, many factors, such as the limited aberration measurement and correction accuracy of AO, intraocular scatter, and imaging noise, degrade the quality of retinal images. Image post-processing is an indispensable and economical way to compensate for the limitations of the AO retinal imaging procedure. In this paper, we proposed a deep learning method to restore degraded retinal images for the first time. The method directly learned an end-to-end mapping between the blurred and restored retinal images. The mapping was represented as a deep convolutional neural network that was trained to output high-quality images directly from blurry inputs without any preprocessing. This network was validated on synthetically generated retinal images as well as real AO retinal images. The assessment of the restored retinal images demonstrated that the image quality had been significantly improved.

  1. Deblurring adaptive optics retinal images using deep convolutional neural networks

    PubMed Central

    Fei, Xiao; Zhao, Junlei; Zhao, Haoxin; Yun, Dai; Zhang, Yudong

    2017-01-01

    Adaptive optics (AO) can be used to compensate for ocular aberrations to achieve near-diffraction-limited, high-resolution retinal images. However, many factors, such as the limited aberration measurement and correction accuracy of AO, intraocular scatter, and imaging noise, degrade the quality of retinal images. Image post-processing is an indispensable and economical way to compensate for the limitations of the AO retinal imaging procedure. In this paper, we proposed a deep learning method to restore degraded retinal images for the first time. The method directly learned an end-to-end mapping between the blurred and restored retinal images. The mapping was represented as a deep convolutional neural network that was trained to output high-quality images directly from blurry inputs without any preprocessing. This network was validated on synthetically generated retinal images as well as real AO retinal images. The assessment of the restored retinal images demonstrated that the image quality had been significantly improved. PMID:29296496

  2. Saliency U-Net: A regional saliency map-driven hybrid deep learning network for anomaly segmentation

    NASA Astrophysics Data System (ADS)

    Karargyros, Alex; Syeda-Mahmood, Tanveer

    2018-02-01

    Deep learning networks are gaining popularity in many medical image analysis tasks due to their generalized ability to automatically extract relevant features from raw images. However, this can make the learning problem unnecessarily hard, requiring network architectures of high complexity. In the case of anomaly detection, in particular, there is often sufficient regional difference between the anomaly and the surrounding parenchyma that could easily be highlighted through bottom-up saliency operators. In this paper we propose a new hybrid deep learning network that uses a combination of the raw image and such regional saliency maps to learn anomalies more accurately with simpler network architectures. Specifically, we modify a deep learning network called U-Net, using both the raw and pre-segmented images as input to produce joint encoding (contraction) and expansion (decoding) paths in the U-Net. We present results of successfully delineating subdural and epidural hematomas in brain CT imaging and liver hemangioma in abdominal CT images using such a network.
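
    The core architectural idea (the raw image and a bottom-up saliency map entering the network together) can be sketched as a two-channel U-Net input. The toy network below is an assumption-laden illustration, far smaller than the authors' model:

```python
# Hedged sketch: raw image + saliency map concatenated as input channels of a
# tiny U-Net-style encoder/decoder that emits an anomaly-mask logit per pixel.
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class TinySaliencyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(2, 16), block(16, 32)   # 2 channels: image + saliency
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)                             # 32 = upsampled + skip
        self.out = nn.Conv2d(16, 1, 1)                       # anomaly mask logits

    def forward(self, image, saliency):
        x = torch.cat([image, saliency], dim=1)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))
        return self.out(d)

net = TinySaliencyUNet()
img, sal = torch.randn(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
print(net(img, sal).shape)   # -> torch.Size([1, 1, 64, 64])
```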

  3. Understanding Deep Representations Learned in Modeling Users Likes.

    PubMed

    Guntuku, Sharath Chandra; Zhou, Joey Tianyi; Roy, Sujoy; Lin, Weisi; Tsang, Ivor W

    2016-08-01

    Automatically understanding and discriminating different users' liking for an image is a challenging problem. This is because the relationship between image features (even semantic ones extracted by existing tools, viz., faces, objects, and so on) and users' likes is non-linear, influenced by several subtle factors. This paper presents a deep bi-modal knowledge representation of images based on their visual content and associated tags (text). A mapping step between the different levels of visual and textual representations allows for the transfer of semantic knowledge between the two modalities. Feature selection is applied before learning the deep representation to identify the important features for a user to like an image. The proposed representation is shown to be effective in discriminating users based on images they like and also in recommending images that a given user likes, outperforming the state-of-the-art feature representations by ~15%-20%. Beyond this test-set performance, an attempt is made to qualitatively understand the representations learned by the deep architecture used to model user likes.

  4. Designed Er(3+)-singly doped NaYF4 with double excitation bands for simultaneous deep macroscopic and microscopic upconverting bioimaging.

    PubMed

    Wen, Xuanyuan; Wang, Baoju; Wu, Ruitao; Li, Nana; He, Sailing; Zhan, Qiuqiang

    2016-06-01

    Simultaneous deep macroscopic imaging and microscopic imaging is in urgent demand, but is challenging to achieve experimentally due to the lack of proper fluorescent probes. Herein, we have designed and successfully synthesized simplex Er(3+)-doped upconversion nanoparticles (UCNPs) with double excitation bands for simultaneous deep macroscopic and microscopic imaging. The material structure and the excitation wavelength of the Er(3+)-singly doped UCNPs were further optimized to enhance the upconversion emission efficiency. After optimization, we found that NaYF4:30%Er(3+)@NaYF4:2%Er(3+) could simultaneously achieve efficient two-photon excitation (2PE) macroscopic tissue imaging and three-photon excitation (3PE) deep microscopic imaging when excited by 808 nm continuous-wave (CW) and 1480 nm CW lasers, respectively. In vitro cell imaging and in vivo imaging have also been implemented to demonstrate the feasibility and potential of the proposed simplex Er(3+)-doped UCNPs as a bioprobe.

  5. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing

    PubMed Central

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-01-01

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU based methods. In the classical GPU based imaging algorithm, the GPU is employed to accelerate image processing by massive parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multi-CPU/GPU computing. In the CPU parallel imaging part, the advanced vector extensions (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. As for GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfers broken, but various optimization strategies, such as streaming and parallel pipelining, are also applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging over a single-core CPU by 270 times and realizes real-time imaging, in that the imaging rate outperforms the raw data generation rate. PMID:27070606

  6. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing.

    PubMed

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-04-07

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge amount of remote sensing data brings challenges for real-time imaging processing. Therefore, high performance computing (HPC) methods have been presented to accelerate SAR imaging, especially GPU based methods. In the classical GPU based imaging algorithm, the GPU is employed to accelerate image processing by massive parallel computing, and the CPU is only used to perform auxiliary work such as data input/output (IO). However, the computing capability of the CPU is ignored and underestimated. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated with deep collaborative multi-CPU/GPU computing. In the CPU parallel imaging part, the advanced vector extensions (AVX) method is first introduced into the multi-core CPU parallel method for higher efficiency. As for GPU parallel imaging, not only are the bottlenecks of memory limitation and frequent data transfers broken, but various optimization strategies, such as streaming and parallel pipelining, are also applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method enhances the efficiency of SAR imaging over a single-core CPU by 270 times and realizes real-time imaging, in that the imaging rate outperforms the raw data generation rate.

  7. VizieR Online Data Catalog: z>~5 AGN in Chandra Deep Field-South (Weigel+, 2015)

    NASA Astrophysics Data System (ADS)

    Weigel, A. K.; Schawinski, K.; Treister, E.; Urry, C. M.; Koss, M.; Trakhtenbrot, B.

    2015-09-01

    The Chandra 4-Ms source catalogue by Xue et al. (2011, Cat. J/ApJS/195/10) is the starting point of this work. It contains 740 sources and provides counts and observed frame fluxes in the soft (0.5-2keV), hard (2-8keV) and full (0.5-8keV) band. All object IDs used in this work refer to the source numbers listed in the Xue et al. (2011, Cat. J/ApJS/195/10) Chandra 4-Ms catalogue. We make use of Hubble Space Telescope (HST)/Advanced Camera for Surveys (ACS) data from the Great Observatories Origins Deep Survey South (GOODS-south) in the optical wavelength range. We use catalogues and images for filters F435W (B), F606W (V), F775W (i) and 850LP (z) from the second GOODS/ACS data release (v2.0; Giavalisco et al., 2004, Cat. II/261). We use Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) Wide Field Camera 3 (WFC3)/infrared data from the first data release (DR1, v1.0) for passbands F105W (Y), F125W (J) and F160W (H) (Grogin et al., 2011ApJS..197...35G; Koekemoer et al., 2011ApJS..197...36K). To determine which objects are red, dusty, low-redshift interlopers, we also include the 3.6 and 4.5 micron Spitzer Infrared Array Camera (IRAC) channels. We use SIMPLE image data from the DR1 (van Dokkum et al., 2005, Spitzer Proposal, 2005.20708) and the first version of the extended SIMPLE catalogue by Damen et al. (2011, Cat. J/ApJ/727/1). (6 data files).

  8. An Application of Multi-band Forced Photometry to One Square Degree of SERVS: Accurate Photometric Redshifts and Implications for Future Science

    NASA Astrophysics Data System (ADS)

    Nyland, Kristina; Lacy, Mark; Sajina, Anna; Pforr, Janine; Farrah, Duncan; Wilson, Gillian; Surace, Jason; Häußler, Boris; Vaccari, Mattia; Jarvis, Matt

    2017-05-01

    We apply The Tractor image modeling code to improve upon existing multi-band photometry for the Spitzer Extragalactic Representative Volume Survey (SERVS). SERVS consists of post-cryogenic Spitzer observations at 3.6 and 4.5 μm over five well-studied deep fields spanning 18 deg². In concert with data from ground-based near-infrared (NIR) and optical surveys, SERVS aims to provide a census of the properties of massive galaxies out to z ≈ 5. To accomplish this, we are using The Tractor to perform “forced photometry.” This technique employs prior measurements of source positions and surface brightness profiles from a high-resolution fiducial band from the VISTA Deep Extragalactic Observations survey to model and fit the fluxes at lower-resolution bands. We discuss our implementation of The Tractor over a square-degree test region within the XMM Large Scale Structure field with deep imaging in 12 NIR/optical bands. Our new multi-band source catalogs offer a number of advantages over traditional position-matched catalogs, including (1) consistent source cross-identification between bands, (2) de-blending of sources that are clearly resolved in the fiducial band but blended in the lower resolution SERVS data, (3) a higher source detection fraction in each band, (4) a larger number of candidate galaxies in the redshift range 5 < z < 6, and (5) a statistically significant improvement in the photometric redshift accuracy as evidenced by the significant decrease in the fraction of outliers compared to spectroscopic redshifts. Thus, forced photometry using The Tractor offers a means of improving the accuracy of multi-band extragalactic surveys designed for galaxy evolution studies. We will extend our application of this technique to the full SERVS footprint in the future.
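
    At its core, forced photometry of the kind described here is linear: with positions and surface-brightness profiles fixed from the fiducial band, the lower-resolution image is modeled as a flux-weighted sum of known unit-flux source models, and the fluxes follow from least squares. The sketch below shows only that core idea with toy Gaussians; it is not The Tractor's actual interface.

```python
# Hedged sketch of the linear-algebra heart of forced photometry.
import numpy as np

def forced_photometry(image, unit_models):
    """Solve image ~= sum_i flux_i * unit_models[i] for the fluxes.

    unit_models: list of 2-D arrays, each a unit-flux source profile already
    convolved with the band's PSF and placed at the fiducial-band position.
    """
    A = np.stack([m.ravel() for m in unit_models], axis=1)
    fluxes, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
    return fluxes

# toy example: two blended Gaussian sources
y, x = np.mgrid[0:32, 0:32]
psf = lambda x0, y0: np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * 2.5 ** 2))
m1, m2 = psf(13, 16), psf(19, 16)             # blended neighbours
img = 5.0 * m1 + 2.0 * m2
img += 0.01 * np.random.default_rng(1).normal(size=img.shape)
print(forced_photometry(img, [m1, m2]))       # ~[5.0, 2.0], de-blended fluxes
```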

  9. Going deeper in the automated identification of Herbarium specimens.

    PubMed

    Carranza-Rojas, Jose; Goeau, Herve; Bonnet, Pierre; Mata-Montero, Erick; Joly, Alexis

    2017-08-11

    Hundreds of herbarium collections have accumulated a valuable heritage and knowledge of plants over several centuries. Recent initiatives have started ambitious preservation plans to digitize this information and make it available to botanists and the general public through web portals. However, thousands of sheets are still unidentified at the species level, while numerous sheets should be reviewed and updated following more recent taxonomic knowledge. These annotations and revisions require an unrealistic amount of work for botanists to carry out in a reasonable time. Computer vision and machine learning approaches applied to herbarium sheets are promising but are still not well studied compared to automated species identification from leaf scans or pictures of plants in the field. In this work, we propose to study and evaluate the accuracy with which herbarium images can be exploited for species identification with deep learning technology. In addition, we study whether the combination of herbarium sheets with photos of plants in the field is relevant in terms of accuracy, and finally, we explore whether herbarium images from one region, with its specific flora, can be used for transfer learning to another region with other species, for example a region under-represented in terms of collected data. This is, to our knowledge, the first study that uses deep learning to analyze a big dataset with thousands of species from herbaria. Results show the potential of deep learning for herbarium species identification, particularly by training and testing across different datasets from different herbaria. This could potentially lead to the creation of a semi- or even fully automated system to help taxonomists and experts with their annotation, classification, and revision work.
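
    The transfer-learning experiment described above (reusing features learned on one flora for another) is conventionally set up by freezing a pre-trained backbone and retraining only the classifier head. A hedged sketch, with ImageNet weights standing in for the source domain and an invented class count:

```python
# Hedged sketch of transfer learning via a frozen backbone + new head.
import torch
import torch.nn as nn
import torchvision.models as models

n_target_species = 500                       # assumed size of the target flora
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

for p in model.parameters():                 # freeze source-domain features
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, n_target_species)  # new head, trainable

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

sheets = torch.randn(4, 3, 224, 224)         # stand-in herbarium sheet batch
species = torch.randint(0, n_target_species, (4,))
loss = loss_fn(model(sheets), species)       # one illustrative training step
loss.backward()
opt.step()
```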

  10. Deep Imaging Survey

    NASA Image and Video Library

    2003-07-25

    This is the first Deep Imaging Survey image taken by NASA's Galaxy Evolution Explorer. On June 22 and 23, 2003, the spacecraft obtained this near-ultraviolet image of the Groth region by adding multiple orbits for a total exposure time of 14,000 seconds. Tens of thousands of objects can be identified in this picture. http://photojournal.jpl.nasa.gov/catalog/PIA04627

  11. Deep-Sea Coral Image Catalog: Northeast Pacific

    NASA Astrophysics Data System (ADS)

    Freed, J. C.

    2016-02-01

    In recent years, deep-sea exploration in the Northeast Pacific Ocean has been on the rise using submersibles and remotely operated vehicles (ROVs), acquiring a plethora of underwater videos and photographs. Analysis of the deep-sea fauna revealed by this research has been hampered by the lack of catalogs or guides that allow identification of species in the field. Deep-sea corals are of particular conservation concern, but currently there are few catalogs that describe and provide detailed information on deep-sea corals from the Northeast Pacific, and those that exist focus on small, specific areas. This project, in collaboration with NOAA's Deep-Sea Coral Ecology Laboratory at the Center for Coastal Environmental Health and Biomolecular Research (CCEHBR) and the Southwest Fisheries Science Center (SWFSC), developed pages for a deep-sea coral identification guide that provides photos and information on the visual identification, distributions, and habitats of species found in the Northeast Pacific. Using online databases, photo galleries, and literature, this catalog has been developed to be a living document open to future additions. This project produced 12 entries for the catalog on a variety of different deep-sea corals. The catalog is intended to be used during underwater surveys in the Northeast Pacific, but will also assist in identification of deep-sea coral by-catch by fishing vessels, and for general educational use. These uses will advance NOAA's ability to identify and protect sensitive deep-sea habitats that act as biological hotspots. The catalog is intended to be further developed into an online resource with greater interactive features, with links to other resources, and featured on NOAA's Deep-Sea Coral Data Portal.

  12. Nanofocusing beyond the near-field diffraction limit via plasmonic Fano resonance

    NASA Astrophysics Data System (ADS)

    Song, Maowen; Wang, Changtao; Zhao, Zeyu; Pu, Mingbo; Liu, Ling; Zhang, Wei; Yu, Honglin; Luo, Xiangang

    2016-01-01

    The past decade has witnessed a great deal of optical systems designed for exceeding the Abbe's diffraction limit. Unfortunately, a deep subwavelength spot is obtained at the price of extremely short focal length, which is indeed a near-field diffraction limit that could rarely go beyond in the nanofocusing device. One method to mitigate such a problem is to set up a rapid oscillatory electromagnetic field that converges at the prescribed focus. However, abrupt modulation of phase and amplitude within a small fraction of a wavelength seems to be the main obstacle in the visible regime, aggravated by loss and plasmonic features that come into function. In this paper, we propose a periodically repeated ring-disk complementary structure to break the near-field diffraction limit via plasmonic Fano resonance, originating from the interference between the complex hybrid plasmon resonance and the continuum of propagating waves through the silver film. This plasmonic Fano resonance introduces a π phase jump in the adjacent channels and amplitude modulation to achieve radiationless electromagnetic interference. As a result, deep subwavelength spots as small as 0.0045λ² at 36 nm above the silver film have been numerically demonstrated. This plate holds promise for nanolithography, subdiffraction imaging and microscopy.

  13. Constraints on z~10 Galaxies from the Deepest Hubble Space Telescope NICMOS Fields

    NASA Astrophysics Data System (ADS)

    Bouwens, R. J.; Illingworth, G. D.; Thompson, R. I.; Franx, M.

    2005-05-01

    We use all available fields with deep NICMOS imaging to search for J110-dropouts (H160,AB ≲ 28) at z ~ 10. Our primary data set for this search is the two J110+H160 NICMOS fields taken in parallel with the Advanced Camera for Surveys (ACS) Hubble Ultra Deep Field (UDF). The 5 σ limiting magnitudes were ~28.6 in J110 and ~28.5 in H160 (0.6" apertures). Several shallower fields were also used: J110+H160 NICMOS frames available over the Hubble Deep Field (HDF) North, the HDF-South NICMOS parallel, and the ACS UDF (with 5 σ limiting magnitudes in J110 and H160 ranging from 27.0 to 28.2). The primary selection criterion was (J110-H160)AB > 1.8. Eleven such sources were found in all search fields using this criterion. Eight of these are clearly ruled out as credible z ~ 10 sources, either as a result of detections (>2 σ) blueward of J110 or their colors redward of the break (H160-K ~ 1.5) (redder than ≳98% of lower redshift dropouts). The nature of the three remaining sources could not be determined from the data. This number appears consistent with the expected contamination from low-redshift interlopers. Analysis of the stacked images for the three candidates also suggests some contamination. Regardless of their true redshifts, the actual number of z ~ 10 sources must be three or fewer. To assess the significance of these results, two lower redshift samples (a z ~ 3.8 B-dropout and z ~ 6 i-dropout sample) were projected to z ~ 7-13 using a (1+z)^{-1} size scaling (for fixed luminosity). They were added to the image frames and the selection was repeated, giving 15.6 and 4.8 J110-dropouts, respectively. This suggests that to the limit of this probe (~0.3 L*_{z=3}), there has been evolution from z ~ 3.8 and possibly from z ~ 6. This is consistent with the strong evolution already noted at z ~ 6 and z ~ 7.5 relative to z ~ 3-4. Even assuming that three sources from this probe are at z ~ 10, the rest-frame continuum UV (~1500 Å) luminosity density at z ~ 10 (integrated down to 0.3 L*_{z=3}) is just 0.19^{+0.13}_{-0.09} times that at z ~ 3.8 (or 0.19^{+0.15}_{-0.10} times, including the small effect from cosmic variance). However, if none of our sources are at z ~ 10, this ratio has a 1 σ upper limit of 0.07. Based on observations made with the NASA/ESA Hubble Space Telescope, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
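
    The selection quoted above is a pair of simple cuts on catalog columns: a red J110-H160 break plus the absence of significant flux blueward of J110. A toy sketch, with invented magnitudes and S/N values; the 1.8 mag and 2 σ thresholds are the ones the abstract quotes.

```python
# Hedged sketch of a J110-dropout color cut on a placeholder catalog.
import numpy as np

J, H = np.array([28.9, 27.4, 28.6]), np.array([26.8, 26.9, 26.5])   # AB mags
blue_snr = np.array([0.4, 3.1, 1.2])    # max S/N in all bands blueward of J110

candidates = ((J - H) > 1.8) & (blue_snr < 2.0)
print(np.flatnonzero(candidates))        # indices of surviving z~10 candidates
```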

  14. Visualizing Carrier Transport in Metal Halide Perovskite Nanoplates via Electric Field Modulated Photoluminescence Imaging.

    PubMed

    Hu, Xuelu; Wang, Xiao; Fan, Peng; Li, Yunyun; Zhang, Xuehong; Liu, Qingbo; Zheng, Weihao; Xu, Gengzhao; Wang, Xiaoxia; Zhu, Xiaoli; Pan, Anlian

    2018-05-09

    Metal halide perovskite nanostructures have recently been the focus of intense research due to their exceptional optoelectronic properties and potential applications in integrated photonic devices. Charge transport in perovskite nanostructures is a crucial process that defines the efficiency of optoelectronic devices but still requires a deep understanding. Herein, we report a study of charge transport, particularly the drift of minority carriers, in both all-inorganic CsPbBr3 and organic-inorganic hybrid CH3NH3PbBr3 perovskite nanoplates by electric field modulated photoluminescence (PL) imaging. Bias-voltage-dependent elongated PL emission patterns were observed due to carrier drift in external electric fields. By fitting the drift length as a function of electric field, we obtained a carrier mobility of about 28 cm² V⁻¹ s⁻¹ in the CsPbBr3 perovskite nanoplate. The result is consistent with the spatially resolved PL dynamics measurement, confirming the feasibility of the method. Furthermore, the electric field modulated PL imaging is successfully applied to the study of temperature-dependent carrier mobility in CsPbBr3 nanoplates. This work not only offers insights into the mobile carriers in metal halide perovskite nanostructures, which is essential for optimizing device design and predicting performance, but also provides a novel and simple method to investigate charge transport in many other optoelectronic materials.
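
    The mobility extraction described above amounts to a linear fit: for drift-dominated transport the PL elongation grows as L = μτE, so the slope of L versus E divided by an independently measured carrier lifetime τ gives μ. A sketch with invented numbers; the lifetime and data points are placeholders chosen only to land near the reported ~28 cm² V⁻¹ s⁻¹, not the paper's measurements.

```python
# Hedged sketch: mobility from the slope of drift length vs. electric field.
import numpy as np

tau = 2e-9                                    # assumed carrier lifetime [s]
E = np.array([0.5e5, 1.0e5, 1.5e5, 2.0e5])    # applied field [V/m]
L = np.array([0.28e-6, 0.55e-6, 0.86e-6, 1.12e-6])  # drift length [m]

slope = np.polyfit(E, L, 1)[0]                # [m per V/m] = mu * tau
mu = slope / tau                              # [m^2 V^-1 s^-1]
print(mu * 1e4, "cm^2 V^-1 s^-1")             # ~28 with these placeholders
```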

  15. Scalable High Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning

    PubMed Central

    Wu, Guorong; Kim, Minjeong; Wang, Qian; Munsell, Brent C.

    2015-01-01

    Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of a deformable image registration method that scales well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features from observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked auto-encoder to identify intrinsic deep feature representations in image patches. Since deep learning is an unsupervised learning method, no ground truth label knowledge is required. This makes the proposed feature selection method more flexible for new imaging modalities, since feature representations can be directly learned from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed image registration framework, image registration experiments were conducted on 7.0-tesla brain MR images. In all experiments, the results showed the new image registration framework consistently demonstrated more accurate registration results when compared to state-of-the-art methods. PMID:26552069

  16. Scalable High-Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning.

    PubMed

    Wu, Guorong; Kim, Minjeong; Wang, Qian; Munsell, Brent C; Shen, Dinggang

    2016-07-01

    Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of deformable image registration method that scales well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features upon observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked autoencoder to identify intrinsic deep feature representations in image patches. Since deep learning is an unsupervised learning method, no ground truth label knowledge is required. This makes the proposed feature selection method more flexible to new imaging modalities since feature representations can be directly learned from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed image registration framework, image registration experiments were conducted on 7.0-T brain MR images. In all experiments, the results showed that the new image registration framework consistently demonstrated more accurate registration results when compared to state of the art.

  17. An analysis of image storage systems for scalable training of deep neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Seung-Hwan; Young, Steven R; Patton, Robert M

    This study presents a principled empirical evaluation of image storage systems for training deep neural networks. We employ the Caffe deep learning framework to train neural network models for three different data sets: MNIST, CIFAR-10, and ImageNet. While training the models, we evaluate five different options to retrieve training image data: (1) PNG-formatted image files on a local file system; (2) pushing pixel arrays from image files into a single HDF5 file on a local file system; (3) in-memory arrays to hold the pixel arrays in Python and C++; (4) loading the training data into LevelDB, a log-structured merge-tree based key-value store; and (5) loading the training data into LMDB, a B+tree based key-value store. The experimental results quantitatively highlight the disadvantage of using normal image files on local file systems to train deep neural networks and demonstrate reliable performance with key-value store back ends. When training a model on the ImageNet dataset, the image file option was more than 17 times slower than the key-value storage option. Along with measurements of training time, this study provides an in-depth analysis of the cause of the performance advantages and disadvantages of each back end used to train deep neural networks. We envision that the provided measurements and analysis will shed light on the optimal way to architect systems for training neural networks in a scalable manner.
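
    For readers who want to see what the fastest option looks like in practice, here is a minimal sketch using the Python lmdb binding: pixel arrays are serialized into one LMDB environment and read back with a cursor. The file name, map size, and raw-bytes serialization are our illustrative choices, not the paper's exact setup.

    ```python
    import lmdb
    import numpy as np

    # Write: pack each training image (as raw bytes) under a numeric key.
    env = lmdb.open("train_images.lmdb", map_size=1 << 30)  # 1 GiB map
    images = [np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
              for _ in range(100)]  # stand-in for a real dataset
    with env.begin(write=True) as txn:
        for i, img in enumerate(images):
            txn.put(f"{i:08d}".encode(), img.tobytes())

    # Read: sequential cursor scans are part of what makes B+tree
    # stores fast relative to opening thousands of small files.
    with env.begin() as txn:
        for key, value in txn.cursor():
            img = np.frombuffer(value, dtype=np.uint8).reshape(32, 32, 3)
            # feed `img` to the training pipeline here
    env.close()
    ```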

  18. Landcover Classification Using Deep Fully Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Wang, J.; Li, X.; Zhou, S.; Tang, J.

    2017-12-01

    Land cover classification has always been an essential application in remote sensing. Certain image features are needed for land cover classification whether it is based on pixel or object-based methods. Unlike other machine learning methods, a deep learning model not only extracts useful information from multiple bands/attributes but also learns spatial characteristics. In recent years, deep learning methods have developed rapidly and been widely applied in image recognition, semantic understanding, and other application domains. However, there are limited studies applying deep learning methods to land cover classification. In this research, we used fully convolutional networks (FCN) as the deep learning model to classify land covers. The National Land Cover Database (NLCD) within the state of Kansas was used as the training dataset, and Landsat images were classified using the trained FCN model. We also applied an image segmentation method to improve the original results from the FCN model. In addition, the pros and cons of deep learning relative to several machine learning methods were compared and explored. Our research indicates: (1) FCN is an effective classification model with an overall accuracy of 75%; (2) image segmentation improves the classification results, with a better match of spatial patterns; (3) FCN has an excellent ability to learn, attaining higher accuracy and better spatial patterns than several machine learning methods.
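
    The record does not specify the FCN variant used, so the following is only a generic sketch of the fully convolutional pattern in PyTorch: a downsampling encoder, a transposed-convolution decoder back to full resolution, and one class score per pixel. The band count and class count are placeholders.

    ```python
    import torch
    import torch.nn as nn

    class TinyFCN(nn.Module):
        """Per-pixel classifier: no fully connected layers, so the
        network accepts any input size and outputs a label map."""
        def __init__(self, in_bands=6, n_classes=16):
            super().__init__()
            self.down = nn.Sequential(
                nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                                 # H/2
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                                 # H/4
            )
            self.up = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),  # H/2
                nn.ConvTranspose2d(32, n_classes, 2, stride=2),      # H
            )

        def forward(self, x):
            return self.up(self.down(x))  # (N, n_classes, H, W) logits

    logits = TinyFCN()(torch.rand(1, 6, 64, 64))
    print(logits.shape)  # torch.Size([1, 16, 64, 64])
    ```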

  19. Geographical topic learning for social images with a deep neural network

    NASA Astrophysics Data System (ADS)

    Feng, Jiangfan; Xu, Xin

    2017-03-01

    Geographical tags in social-media images are becoming part of standard image metadata and are of great interest for geographical information science. It is well recognized that geographical topic learning is crucial for geographical annotation. Existing methods usually exploit geographical characteristics using image preprocessing, pixel-based classification, and feature recognition. How to effectively exploit high-level semantic features and the underlying correlation among different types of content is a crucial task for geographical topic learning. Deep learning (DL) has recently demonstrated robust capabilities for image tagging and has been introduced into geoscience. It extracts high-level features computed from the whole image, where a cluttered background may dominate spatial features in the deep representation. Therefore, a spatial-attentional DL method for geographical topic learning is provided, which can be regarded as a special case of DL combined with various deep networks and tuning tricks. Results demonstrated that the method is discriminative for different types of geographical topic learning and that it outperforms other sequential processing models in a tagging task on a geographical image dataset.

  20. Wide-field three-photon excitation in biological samples

    PubMed Central

    Rowlands, Christopher J; Park, Demian; Bruns, Oliver T; Piatkevich, Kiryl D; Fukumura, Dai; Jain, Rakesh K; Bawendi, Moungi G; Boyden, Edward S; So, Peter TC

    2017-01-01

    Three-photon wide-field depth-resolved excitation is used to overcome some of the limitations in conventional point-scanning two- and three-photon microscopy. Excitation of chromophores as diverse as channelrhodopsins and quantum dots is shown, and a penetration depth of more than 700 μm into fixed scattering brain tissue is achieved, approximately twice as deep as that achieved using two-photon wide-field excitation. Compatibility with live animal experiments is confirmed by imaging the cerebral vasculature of an anesthetized mouse; a complete focal stack was obtained without any evidence of photodamage. As an additional validation of the utility of wide-field three-photon excitation, functional excitation is demonstrated by performing three-photon optogenetic stimulation of cultured mouse hippocampal neurons expressing a channelrhodopsin; action potentials could reliably be excited without causing photodamage. PMID:29152380

  1. THE DIFFERENCE IMAGING PIPELINE FOR THE TRANSIENT SEARCH IN THE DARK ENERGY SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kessler, R.; Scolnic, D.; Marriner, J.

    2015-12-15

    We describe the operation and performance of the difference imaging pipeline (DiffImg) used to detect transients in deep images from the Dark Energy Survey Supernova program (DES-SN) in its first observing season from 2013 August through 2014 February. DES-SN is a search for transients in which ten 3 deg2 fields are repeatedly observed in the g, r, i, z passbands with a cadence of about 1 week. The observing strategy has been optimized to measure high-quality light curves and redshifts for thousands of Type Ia supernovae (SNe Ia) with the goal of measuring dark energy parameters. The essential DiffImg functions are to align each search image to a deep reference image, do a pixel-by-pixel subtraction, and then examine the subtracted image for significant positive detections of point-source objects. The vast majority of detections are subtraction artifacts, but after selection requirements and image filtering with an automated scanning program, there are ∼130 detections per deg2 per observation in each band, of which only ∼25% are artifacts. Of the ∼7500 transients discovered by DES-SN in its first observing season, each requiring a detection on at least two separate nights, Monte Carlo (MC) simulations predict that 27% are expected to be SNe Ia or core-collapse SNe. Another ∼30% of the transients are artifacts in which a small number of observations satisfy the selection criteria for a single-epoch detection. Spectroscopic analysis shows that most of the remaining transients are AGNs and variable stars. Fake SNe Ia are overlaid onto the images to rigorously evaluate detection efficiencies and to understand the DiffImg performance. The DiffImg efficiency measured with fake SNe agrees well with expectations from a MC simulation that uses analytical calculations of the fluxes and their uncertainties. In our 8 “shallow” fields with single-epoch 50% completeness depth ∼23.5, the SN Ia efficiency falls to 1/2 at redshift z ≈ 0.7; in our 2 “deep” fields with mag-depth ∼24.5, the efficiency falls to 1/2 at z ≈ 1.1. A remaining performance issue is that the measured fluxes have additional scatter (beyond Poisson fluctuations) that increases with the host galaxy surface brightness at the transient location. This bright-galaxy issue has minimal impact on the SNe Ia program, but it may lower the efficiency for finding fainter transients on bright galaxies.
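
    The three essential DiffImg functions named above (align, subtract pixel by pixel, search for significant positive detections) can be caricatured in a few lines of numpy/scipy. This sketch assumes alignment has already been done, skips PSF matching entirely, and uses an illustrative 5-sigma cut rather than the survey's actual selection requirements.

    ```python
    import numpy as np
    from scipy import ndimage

    def find_transients(search_img, template_img, nsigma=5.0):
        """Difference imaging in miniature: subtract an aligned deep
        template and keep significant positive peaks."""
        diff = search_img - template_img
        # Robust noise estimate from the median absolute deviation.
        sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))
        mask = diff > nsigma * sigma
        # Group connected pixels into candidate detections.
        labels, n = ndimage.label(mask)
        return ndimage.center_of_mass(diff, labels, range(1, n + 1))

    rng = np.random.default_rng(0)
    template = rng.normal(100.0, 1.0, (128, 128))
    search = template + rng.normal(0.0, 1.0, (128, 128))
    search[40, 60] += 25.0  # inject a fake point source
    print(find_transients(search, template))  # ~[(40.0, 60.0)]
    ```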

  2. The Difference Imaging Pipeline for the Transient Search in the Dark Energy Survey

    NASA Astrophysics Data System (ADS)

    Kessler, R.; Marriner, J.; Childress, M.; Covarrubias, R.; D'Andrea, C. B.; Finley, D. A.; Fischer, J.; Foley, R. J.; Goldstein, D.; Gupta, R. R.; Kuehn, K.; Marcha, M.; Nichol, R. C.; Papadopoulos, A.; Sako, M.; Scolnic, D.; Smith, M.; Sullivan, M.; Wester, W.; Yuan, F.; Abbott, T.; Abdalla, F. B.; Allam, S.; Benoit-Lévy, A.; Bernstein, G. M.; Bertin, E.; Brooks, D.; Carnero Rosell, A.; Carrasco Kind, M.; Castander, F. J.; Crocce, M.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Eifler, T. F.; Fausti Neto, A.; Flaugher, B.; Frieman, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Honscheid, K.; James, D. J.; Kuropatkin, N.; Li, T. S.; Maia, M. A. G.; Marshall, J. L.; Martini, P.; Miller, C. J.; Miquel, R.; Nord, B.; Ogando, R.; Plazas, A. A.; Reil, K.; Romer, A. K.; Roodman, A.; Sanchez, E.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Tarle, G.; Thaler, J.; Thomas, R. C.; Tucker, D.; Walker, A. R.; DES Collaboration

    2015-12-01

    We describe the operation and performance of the difference imaging pipeline (DiffImg) used to detect transients in deep images from the Dark Energy Survey Supernova program (DES-SN) in its first observing season from 2013 August through 2014 February. DES-SN is a search for transients in which ten 3 deg2 fields are repeatedly observed in the g, r, i, z passbands with a cadence of about 1 week. The observing strategy has been optimized to measure high-quality light curves and redshifts for thousands of Type Ia supernovae (SNe Ia) with the goal of measuring dark energy parameters. The essential DiffImg functions are to align each search image to a deep reference image, do a pixel-by-pixel subtraction, and then examine the subtracted image for significant positive detections of point-source objects. The vast majority of detections are subtraction artifacts, but after selection requirements and image filtering with an automated scanning program, there are ˜130 detections per deg2 per observation in each band, of which only ˜25% are artifacts. Of the ˜7500 transients discovered by DES-SN in its first observing season, each requiring a detection on at least two separate nights, Monte Carlo (MC) simulations predict that 27% are expected to be SNe Ia or core-collapse SNe. Another ˜30% of the transients are artifacts in which a small number of observations satisfy the selection criteria for a single-epoch detection. Spectroscopic analysis shows that most of the remaining transients are AGNs and variable stars. Fake SNe Ia are overlaid onto the images to rigorously evaluate detection efficiencies and to understand the DiffImg performance. The DiffImg efficiency measured with fake SNe agrees well with expectations from a MC simulation that uses analytical calculations of the fluxes and their uncertainties. In our 8 “shallow” fields with single-epoch 50% completeness depth ˜23.5, the SN Ia efficiency falls to 1/2 at redshift z ≈ 0.7; in our 2 “deep” fields with mag-depth ˜24.5, the efficiency falls to 1/2 at z ≈ 1.1. A remaining performance issue is that the measured fluxes have additional scatter (beyond Poisson fluctuations) that increases with the host galaxy surface brightness at the transient location. This bright-galaxy issue has minimal impact on the SNe Ia program, but it may lower the efficiency for finding fainter transients on bright galaxies.

  3. Lens models under the microscope: comparison of Hubble Frontier Field cluster magnification maps

    NASA Astrophysics Data System (ADS)

    Priewe, Jett; Williams, Liliya L. R.; Liesenborgs, Jori; Coe, Dan; Rodney, Steven A.

    2017-02-01

    Using the power of gravitational lensing magnification by massive galaxy clusters, the Hubble Frontier Fields provide deep views of six patches of the high-redshift Universe. The combination of deep Hubble imaging and exceptional lensing strength has revealed the greatest numbers of multiply-imaged galaxies available to constrain models of cluster mass distributions. However, even with O(100) images per cluster, the uncertainties associated with the reconstructions are not negligible. The goal of this paper is to show the diversity of model magnification predictions. We examine seven and nine mass models of Abell 2744 and MACS J0416, respectively, submitted to the Mikulski Archive for Space Telescopes for public distribution in 2015 September. The dispersion between model predictions increases from 30 per cent at common low magnifications (μ ˜ 2) to 70 per cent at rare high magnifications (μ ˜ 40). MACS J0416 exhibits smaller dispersions than Abell 2744 for 2 < μ < 10. We show that magnification maps based on different lens inversion techniques typically differ from each other by more than their quoted statistical errors. This suggests that some models underestimate the true uncertainties, which are primarily due to various lensing degeneracies. Though the exact mass sheet degeneracy is broken, its generalized counterpart is not broken at least in Abell 2744. Other local degeneracies are also present in both clusters. Our comparison of models is complementary to the comparison of reconstructions of known synthetic mass distributions. By focusing on observed clusters, we can identify those that are best constrained, and therefore provide the clearest view of the distant Universe.
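
    The headline dispersion figures above summarize per-pixel scatter between independently submitted magnification maps. A hedged numpy sketch of one such statistic follows, with random lognormal arrays standing in for the real MAST map files.

    ```python
    import numpy as np

    def fractional_dispersion(mag_maps):
        """Per-pixel scatter between lens models, relative to the
        median model prediction."""
        stack = np.stack(mag_maps)        # (n_models, ny, nx)
        med = np.median(stack, axis=0)
        std = np.std(stack, axis=0)
        return std / med

    # Stand-ins for seven models' magnification maps of one cluster.
    rng = np.random.default_rng(1)
    maps = [2.0 + rng.lognormal(0.0, 0.3, (64, 64)) for _ in range(7)]
    disp = fractional_dispersion(maps)
    print(f"median fractional dispersion: {np.median(disp):.2f}")
    ```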

  4. A Science Portal and Archive for Extragalactic Globular Cluster Systems Data

    NASA Astrophysics Data System (ADS)

    Young, Michael; Rhode, Katherine L.; Gopu, Arvind

    2015-01-01

    For several years we have been carrying out a wide-field imaging survey of the globular cluster populations of a sample of giant spiral, S0, and elliptical galaxies with distances of ~10-30 Mpc. We use mosaic CCD cameras on the WIYN 3.5-m and Kitt Peak 4-m telescopes to acquire deep BVR imaging of each galaxy and then analyze the data to derive global properties of the globular cluster system. In addition to measuring the total numbers, specific frequencies, spatial distributions, and color distributions for the globular cluster populations, we have produced deep, high-quality images and lists of tens to thousands of globular cluster candidates for the ~40 galaxies included in the survey. With the survey nearing completion, we have been exploring how to efficiently disseminate not only the overall results, but also all of the relevant data products, to the astronomical community. Here we present our solution: a scientific portal and archive for extragalactic globular cluster systems data. With a modern and intuitive web interface built on the same framework as the WIYN One Degree Imager Portal, Pipeline, and Archive (ODI-PPA), our system will provide public access to the survey results and the final stacked mosaic images of the target galaxies. In addition, the astrometric and photometric data for thousands of identified globular cluster candidates, as well as for all point sources detected in each field, will be indexed and searchable. Where available, spectroscopic follow-up data will be paired with the candidates. Advanced imaging tools will enable users to overlay the cluster candidates and other sources on the mosaic images within the web interface, while metadata charting tools will allow users to rapidly and seamlessly plot the survey results for each galaxy and the data for hundreds of thousands of individual sources. Finally, we will appeal to other researchers with similar data products and work toward making our portal a central repository for data related to well-studied giant galaxy globular cluster systems. This work is supported by NSF Faculty Early Career Development (CAREER) award AST-0847109.

  5. Investigating Mars: Tithonium Chasma

    NASA Image and Video Library

    2018-02-07

    This VIS image shows part of the floor of Tithonium Chasma. Eroded materials cover most of the image. The layered floor deposits initially formed, possibly, from air fall of dust, sand, and volcanic materials, together with water-lain materials. The weathering of these deposits is probably by the wind. The bottom part of the image has complex, hummocky material, probably very old landslide deposits. At the top of the image is a large mound of material that has been eroded mainly by wind action. The overlapping of these surfaces indicates a long history of modification of Tithonium Chasma. Tithonium Chasma is at the western end of Valles Marineris. Valles Marineris is over 4000 kilometers long, wider than the United States. Tithonium Chasma is almost 810 kilometers (499 miles) long, 50 kilometers wide, and over 6 kilometers deep. In comparison, the Grand Canyon in Arizona is about 175 kilometers long, 30 kilometers wide, and only 2 kilometers deep. The canyons of Valles Marineris were formed by extensive fracturing and pulling apart of the crust during the uplift of the vast Tharsis plateau. Landslides have enlarged the canyon walls and created deposits on the canyon floor. Weathering of the surface and influx of dust and sand have modified the canyon floor, both creating and modifying layered materials. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 71,000 times. It holds the record for the longest-working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 3936 Latitude: -5.06026 Longitude: 271.813 Instrument: VIS Captured: 2002-11-03 13:15 https://photojournal.jpl.nasa.gov/catalog/PIA22269

  6. Investigating Mars: Tithonium Chasma

    NASA Image and Video Library

    2018-02-05

    This VIS image shows part of the central region of Tithonium Chasma. The steep wall of the canyon is visible at the top of the image. The tops of the canyon walls are layered, most likely by numerous volcanic flows. This material is more resistant and forms the ridges extending down the canyon walls. A large landslide deposit covers the right side of the image. An eroded mound on the floor of the canyon appears at the bottom left of the image. The mound initially formed, possibly, from air fall of dust, sand, and volcanic materials, together with water-lain materials. Tithonium Chasma is at the western end of Valles Marineris. Valles Marineris is over 4000 kilometers long, wider than the United States. Tithonium Chasma is almost 810 kilometers (499 miles) long, 50 kilometers wide, and over 6 kilometers deep. In comparison, the Grand Canyon in Arizona is about 175 kilometers long, 30 kilometers wide, and only 2 kilometers deep. The canyons of Valles Marineris were formed by extensive fracturing and pulling apart of the crust during the uplift of the vast Tharsis plateau. Landslides have enlarged the canyon walls and created deposits on the canyon floor. Weathering of the surface and influx of dust and sand have modified the canyon floor. The Odyssey spacecraft has spent over 15 years in orbit around Mars, circling the planet more than 71,000 times. It holds the record for the longest-working spacecraft at Mars. THEMIS, the IR/VIS camera system, has collected data for the entire mission and provides images covering all seasons and lighting conditions. Over the years many features of interest have received repeated imaging, building up a suite of images covering the entire feature. From the deepest chasma to the tallest volcano, individual dunes inside craters and dune fields that encircle the north pole, channels carved by water and lava, and a variety of other features, THEMIS has imaged them all. For the next several months the image of the day will focus on the Tharsis volcanoes, the various chasmata of Valles Marineris, and the major dune fields. We hope you enjoy these images! Orbit Number: 3187 Latitude: -4.15004 Longitude: 272.043 Instrument: VIS Captured: 2002-09-02 21:33 https://photojournal.jpl.nasa.gov/catalog/PIA22267

  7. Marginal Shape Deep Learning: Applications to Pediatric Lung Field Segmentation.

    PubMed

    Mansoor, Awais; Cerrolaza, Juan J; Perez, Geovanny; Biggs, Elijah; Nino, Gustavo; Linguraru, Marius George

    2017-02-11

    Representation learning through deep learning (DL) architectures has shown tremendous potential for identification, localization, and texture classification in various medical imaging modalities. However, DL applications to segmentation of objects, especially deformable objects, are rather limited and mostly restricted to pixel classification. In this work, we propose marginal shape deep learning (MaShDL), a framework that extends the application of DL to deformable shape segmentation by using deep classifiers to estimate the shape parameters. MaShDL combines the strength of statistical shape models with the automated feature learning architecture of DL. Unlike the iterative shape parameter estimation approach of classical shape models, which often leads to a local minimum, the proposed framework is robust to local minima and to illumination changes. Furthermore, since the direct application of a DL framework to a multi-parameter estimation problem results in very high complexity, our framework provides an excellent run-time performance solution by independently learning shape parameter classifiers in marginal eigenspaces in decreasing order of variation. We evaluated MaShDL for segmenting the lung field from 314 normal and abnormal pediatric chest radiographs and obtained a mean Dice similarity coefficient of 0.927 using only the four highest modes of variation (compared to 0.888 with the classical ASM (p-value=0.01) using the same configuration). To the best of our knowledge, this is the first demonstration of using a DL framework for parametrized shape learning for the delineation of deformable objects.
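
    MaShDL's shape parameters live in the eigenspaces of a statistical shape model, ordered by explained variance; the sketch below shows only that underlying PCA decomposition, with random landmark vectors standing in for the 314 lung contours. The deep classifiers that MaShDL trains to estimate each coefficient are omitted.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Stand-in training shapes: 314 contours of 50 (x, y) landmarks each,
    # flattened to 100-dimensional vectors.
    rng = np.random.default_rng(2)
    shapes = rng.normal(0.0, 1.0, (314, 100))

    pca = PCA(n_components=4)          # four highest modes of variation
    coeffs = pca.fit_transform(shapes)

    # A shape is the mean plus a weighted sum of eigenmodes; estimating
    # b1 first (largest variance), then b2, and so on is what makes the
    # parameter search "marginal".
    def reconstruct(b):
        return pca.mean_ + b @ pca.components_

    approx = reconstruct(coeffs[0])
    print(np.abs(approx - shapes[0]).mean())  # residual outside top 4 modes
    ```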

  8. Marginal shape deep learning: applications to pediatric lung field segmentation

    NASA Astrophysics Data System (ADS)

    Mansoor, Awais; Cerrolaza, Juan J.; Perez, Geovany; Biggs, Elijah; Nino, Gustavo; Linguraru, Marius George

    2017-02-01

    Representation learning through deep learning (DL) architectures has shown tremendous potential for identification, localization, and texture classification in various medical imaging modalities. However, DL applications to segmentation of objects, especially deformable objects, are rather limited and mostly restricted to pixel classification. In this work, we propose marginal shape deep learning (MaShDL), a framework that extends the application of DL to deformable shape segmentation by using deep classifiers to estimate the shape parameters. MaShDL combines the strength of statistical shape models with the automated feature learning architecture of DL. Unlike the iterative shape parameter estimation approach of classical shape models, which often leads to a local minimum, the proposed framework is robust to local minima and to illumination changes. Furthermore, since the direct application of a DL framework to a multi-parameter estimation problem results in very high complexity, our framework provides an excellent run-time performance solution by independently learning shape parameter classifiers in marginal eigenspaces in decreasing order of variation. We evaluated MaShDL for segmenting the lung field from 314 normal and abnormal pediatric chest radiographs and obtained a mean Dice similarity coefficient of 0.927 using only the four highest modes of variation (compared to 0.888 with the classical ASM (p-value=0.01) using the same configuration). To the best of our knowledge, this is the first demonstration of using a DL framework for parametrized shape learning for the delineation of deformable objects.

  9. Marginal Shape Deep Learning: Applications to Pediatric Lung Field Segmentation

    PubMed Central

    Mansoor, Awais; Cerrolaza, Juan J.; Perez, Geovanny; Biggs, Elijah; Nino, Gustavo; Linguraru, Marius George

    2017-01-01

    Representation learning through deep learning (DL) architectures has shown tremendous potential for identification, localization, and texture classification in various medical imaging modalities. However, DL applications to segmentation of objects, especially deformable objects, are rather limited and mostly restricted to pixel classification. In this work, we propose marginal shape deep learning (MaShDL), a framework that extends the application of DL to deformable shape segmentation by using deep classifiers to estimate the shape parameters. MaShDL combines the strength of statistical shape models with the automated feature learning architecture of DL. Unlike the iterative shape parameter estimation approach of classical shape models, which often leads to a local minimum, the proposed framework is robust to local minima and to illumination changes. Furthermore, since the direct application of a DL framework to a multi-parameter estimation problem results in very high complexity, our framework provides an excellent run-time performance solution by independently learning shape parameter classifiers in marginal eigenspaces in decreasing order of variation. We evaluated MaShDL for segmenting the lung field from 314 normal and abnormal pediatric chest radiographs and obtained a mean Dice similarity coefficient of 0.927 using only the four highest modes of variation (compared to 0.888 with the classical ASM (p-value=0.01) using the same configuration). To the best of our knowledge, this is the first demonstration of using a DL framework for parametrized shape learning for the delineation of deformable objects. PMID:28592911

  10. Using virtual data for training deep model for hand gesture recognition

    NASA Astrophysics Data System (ADS)

    Nikolaev, E. I.; Dvoryaninov, P. V.; Lensky, Y. Y.; Drozdovsky, N. S.

    2018-05-01

    Deep learning has shown real promise in classification efficiency for hand gesture recognition problems. In this paper, the authors present experimental results for a deeply-trained model for hand gesture recognition through the use of hand images. The authors have trained two deep convolutional neural networks. The first architecture produces the hand position as a 2D vector from an input hand image. The second one predicts the hand gesture class for the input image. The first proposed architecture produces state-of-the-art results with an accuracy rate of 89%, and the second architecture with split input produces an accuracy rate of 85.2%. In this paper, the authors also propose using virtual data for training a supervised deep model. This technique aims to avoid using original labelled images in the training process. The interest of this method for data preparation is motivated by the need to overcome one of the main challenges of deep supervised learning: the need for a copious amount of labelled data during training.

  11. Inferring river bathymetry via Image-to-Depth Quantile Transformation (IDQT)

    USGS Publications Warehouse

    Legleiter, Carl

    2016-01-01

    Conventional, regression-based methods of inferring depth from passive optical image data undermine the advantages of remote sensing for characterizing river systems. This study introduces and evaluates a more flexible framework, Image-to-Depth Quantile Transformation (IDQT), that involves linking the frequency distribution of pixel values to that of depth. In addition, a new image processing workflow involving deep-water correction and Minimum Noise Fraction (MNF) transformation can reduce a hyperspectral data set to a single variable related to depth and thus suitable for input to IDQT. Applied to a gravel-bed river, IDQT avoided negative depth estimates along channel margins and underpredictions of pool depth. Depth retrieval accuracy (R2 = 0.79) and precision (0.27 m) were comparable to an established band-ratio-based method, although a small shallow bias (0.04 m) was observed. Several ways of specifying distributions of pixel values and depths were evaluated but had negligible impact on the resulting depth estimates, implying that IDQT was robust to these implementation details. In essence, IDQT uses frequency distributions of pixel values and depths to achieve an aspatial calibration; the image itself provides information on the spatial distribution of depths. The approach thus reduces sensitivity to misalignment between field and image data sets and allows greater flexibility in the timing of field data collection relative to image acquisition, a significant advantage in dynamic channels. IDQT also creates new possibilities for depth retrieval in the absence of field data if a model could be used to predict the distribution of depths within a reach.
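
    A minimal sketch of the quantile transformation at the core of IDQT: each pixel's quantile within the image's frequency distribution is mapped to the same quantile of a field-measured depth distribution. Whether deep water corresponds to low or high pixel values depends on the band, so the direction of the mapping used here (darker = deeper) is an assumption.

    ```python
    import numpy as np
    from scipy.stats import rankdata

    def idqt(band, depth_samples):
        """Image-to-Depth Quantile Transformation, in miniature:
        match the CDF of pixel values to the CDF of depths."""
        # Quantile of each pixel in the image's own distribution;
        # inverted so darker (assumed deep-water) pixels rank high.
        q = 1.0 - rankdata(band.ravel()) / band.size
        # Read the matching quantile off the field depth distribution.
        return np.quantile(depth_samples, q).reshape(band.shape)

    rng = np.random.default_rng(3)
    band = rng.uniform(0.0, 1.0, (50, 50))   # stand-in image variable
    depths = rng.gamma(2.0, 0.4, 500)        # stand-in field depths
    print(idqt(band, depths).max())          # deepest inferred pixel
    ```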

  12. BIRDY - Interplanetary CubeSat for planetary geodesy of Small Solar System Bodies (SSSB).

    NASA Astrophysics Data System (ADS)

    Hestroffer, D.; Agnan, M.; Segret, B.; Quinsac, G.; Vannitsen, J.; Rosenblatt, P.; Miau, J. J.

    2017-12-01

    We are developing the Birdy concept of a scientific interplanetary CubeSat, for cruise or proximity operations around a small body of the Solar System (asteroid, comet, irregular satellite). The scientific aim is to characterise the body's shape, gravity field, and internal structure through imaging and radio-science techniques. Radio-science is now in common use in planetary science (flybys or orbiters) to derive the mass of the scientific target and possibly higher-order terms of its gravity field. Its application to a nano-satellite brings the advantage of enabling low orbits that can get closer to the body's surface, hence increasing the SNR for precise orbit determination (POD), with a fully dedicated instrument. Additionally, it can be applied to two or more satellites, on a leading-trailing trajectory, to improve the gravity field determination. However, the application of this technique to CubeSats in deep space, including the inter-satellite link, has yet to be proven. Interplanetary CubeSats need to overcome a few challenges before successfully reaching their deep-space objectives: link to the ground segment, energy supply, protection against radiation, etc. The Birdy CubeSat, as our baseline concept, is designed to accompany a mothercraft, and relies partly on the main mission for reaching the target, as well as on the data link with the Earth. However, constraints on the mothercraft need to be reduced by making the CubeSat as autonomous as possible. In this respect, propulsion and auto-navigation are key aspects that we are studying in a Birdy-T engineering model. We envisage a 3U-size CubeSat with a radio link, object-tracker and imaging function, and an autonomous ionic propulsion system. We are considering two case studies for autonomous guidance, navigation, and control with autonomous propulsion: in cruise and in proximity, necessitating ΔV up to 2 m/s for a total budget of about 50 m/s. In addition to the propulsion, in-flight orbit determination (IFOD) and maintenance are studied, through analysis of images by an object-tracker and astrometry of solar system objects in front of background stars. Before going to deep space, our project will start with BIRDY-1 orbiting the Earth, to validate the adopted concepts of propulsion, IFOD, and orbit maintenance, as well as the radio-science and POD.

  13. The Hubble Deep UV Legacy Survey (HDUV)

    NASA Astrophysics Data System (ADS)

    Montes, Mireia; Oesch, Pascal

    2015-08-01

    Deep HST imaging has shown that the overall star formation density and UV light density at z>3 are dominated by faint, blue galaxies. Remarkably, very little is known about the equivalent galaxy population at lower redshifts. Understanding how these galaxies evolve across the epoch of peak cosmic star formation is key to a complete picture of galaxy evolution. Here, we present a new HST WFC3/UVIS program, the Hubble Deep UV (HDUV) legacy survey. The HDUV is a 132-orbit program to obtain deep imaging in two filters (F275W and F336W) over the two CANDELS Deep fields, covering ~100 arcmin2 and reaching down to 27.5-28.0 mag at 5 sigma. By directly sampling the rest-frame far-UV at z>~0.5, this will provide a unique legacy dataset with exquisite HST multi-wavelength imaging as well as ancillary HST grism NIR spectroscopy for a detailed study of faint, star-forming galaxies at z~0.5-2. The HDUV will enable a wealth of research by the community, including tracing the evolution of the FUV luminosity function over the peak of the star formation rate density from z~3 down to z~0.5, measuring the physical properties of sub-L* galaxies, and characterizing resolved stellar populations to decipher the build-up of the Hubble sequence from sub-galactic clumps. This poster provides an overview of the HDUV survey and presents the reduced data products and catalogs which will be released to the community.

  14. Groth Deep Image

    NASA Image and Video Library

    2003-07-25

    This ultraviolet color blowup of the Groth Deep Image was taken by NASA's Galaxy Evolution Explorer on June 22 and 23, 2003. Many hundreds of galaxies are detected in this portion of the image. NASA astronomers believe the faint red galaxies are 6 billion light-years away. http://photojournal.jpl.nasa.gov/catalog/PIA04625

  15. Radiometric Calibration of Earth Science Imagers Using HyCalCam on the Deep Space Gateway Platform

    NASA Astrophysics Data System (ADS)

    Butler, J. J.; Thome, K. J.

    2018-02-01

    HyCalCam, an SI-traceable imaging spectrometer on the Deep Space Gateway, acquires images of the Moon and Earth to characterize the lunar surface and terrestrial scenes for use as absolute calibration targets for on-orbit LEO and GEO sensors.

  16. Development of a ROV Deployed Video Analysis Tool for Rapid Measurement of Submerged Oil/Gas Leaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savas, Omer

    Expanded deep-sea drilling around the globe makes it necessary to have readily available tools to quickly and accurately measure discharge rates from accidental submerged oil/gas leak jets, so that first responders can deploy adequate resources for containment. We have developed and tested a field-deployable video analysis software package that provides flow rate estimates in the field that are sufficiently accurate for initial responders to accidental oil discharges in submarine operations. The essence of our approach is based on tracking coherent features at the interface in the near field of immiscible turbulent jets. The software package, UCB_Plume, is ready to be used by first responders for field implementation. We have tested the tool on submerged water and oil jets which are made visible using fluorescent dyes. We have been able to estimate the discharge rate within 20% accuracy. A high-end Windows laptop computer is suggested as the operating platform, and a USB-connected high-speed, high-resolution monochrome camera as the imaging device is sufficient for acquiring flow images under continuous unidirectional illumination and running the software in the field. Results are obtained in a matter of minutes.
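
    The feature-tracking idea described above reduces, in one dimension, to locating the peak of the cross-correlation between intensity profiles from successive frames; the displacement of that peak times the pixel scale and frame rate gives a convection velocity. The sketch below is that reduction, not the UCB_Plume implementation.

    ```python
    import numpy as np

    def displacement(profile_a, profile_b):
        """Shift (in pixels) that best aligns two intensity profiles,
        found at the peak of their cross-correlation."""
        a = profile_a - profile_a.mean()
        b = profile_b - profile_b.mean()
        corr = np.correlate(b, a, mode="full")
        return corr.argmax() - (len(a) - 1)

    # Synthetic interface feature moving 7 pixels between frames.
    x = np.arange(200)
    frame1 = np.exp(-((x - 80) ** 2) / 50.0)
    frame2 = np.exp(-((x - 87) ** 2) / 50.0)
    dx = displacement(frame1, frame2)
    print(dx)  # 7; velocity = dx * pixel_size * frame_rate
    ```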

  17. The Metal Abundances across Cosmic Time (MACT) Survey. I. Optical Spectroscopy in the Subaru Deep Field

    NASA Astrophysics Data System (ADS)

    Ly, Chun; Malhotra, Sangeeta; Malkan, Matthew A.; Rigby, Jane R.; Kashikawa, Nobunari; de los Reyes, Mithi A.; Rhoads, James E.

    2016-09-01

    Deep rest-frame optical spectroscopy is critical for characterizing and understanding the physical conditions and properties of the ionized gas in galaxies. Here, we present a new spectroscopic survey called “Metal Abundances across Cosmic Time” (MACT), which will obtain rest-frame optical spectra for ˜3000 emission-line galaxies. This paper describes the optical spectroscopy that has been conducted with MMT/Hectospec and Keck/DEIMOS for ≈1900 z = 0.1-1 emission-line galaxies selected from our narrowband and intermediate-band imaging in the Subaru Deep Field. In addition, we present a sample of 164 galaxies for which we have measured the weak [O III]λ4363 line (66 with at least 3σ detections and 98 with significant upper limits). This nebular emission line determines the gas-phase metallicity by measuring the electron temperature of the ionized gas. This paper presents the optical spectra, emission-line measurements, interstellar properties (e.g., metallicity, gas density), and stellar properties (e.g., star formation rates, stellar mass). Paper II of the MACT survey (Ly et al.) presents the first results on the stellar mass-gas metallicity relation at z ≲ 1 using the sample with [O III]λ4363 measurements.
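
    For context on why the weak [O III]λ4363 line pins down the metallicity: its strength relative to the strong λλ4959, 5007 lines is a steep function of electron temperature. A standard form of this diagnostic (the textbook Osterbrock relation; the coefficients are quoted here from general references rather than from this paper, so treat them as indicative) is

    ```latex
    \frac{j_{\lambda 4959} + j_{\lambda 5007}}{j_{\lambda 4363}}
      = \frac{7.90\, \exp\!\left(3.29 \times 10^{4} / T_e\right)}
             {1 + 4.5 \times 10^{-4}\, n_e / T_e^{1/2}},
    ```

    with T_e in kelvin and n_e in cm^-3. Measuring the line ratio therefore yields T_e, from which ionic abundances, and hence the gas-phase metallicity, follow.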

  18. A recent deep earthquake doublet in light of long-term evolution of Nazca subduction

    NASA Astrophysics Data System (ADS)

    Zahradník, J.; Čížková, H.; Bina, C. R.; Sokos, E.; Janský, J.; Tavera, H.; Carvalho, J.

    2017-03-01

    Earthquake faulting at ~600 km depth remains puzzling. Here we present a new kinematic interpretation of two Mw7.6 earthquakes of November 24, 2015. In contrast to teleseismic analysis of this doublet, we use regional seismic data providing robust two-point source models, further validated by regional back-projection and rupture-stop analysis. The doublet represents segmented rupture of a ˜30-year gap in a narrow, deep fault zone, fully consistent with the stress field derived from neighbouring 1976-2015 earthquakes. Seismic observations are interpreted using a geodynamic model of regional subduction, incorporating realistic rheology and major phase transitions, yielding a model slab that is nearly vertical in the deep-earthquake zone but stagnant below 660 km, consistent with tomographic imaging. Geodynamically modelled stresses match the seismically inferred stress field, where the steeply down-dip orientation of compressive stress axes at ˜600 km arises from combined viscous and buoyant forces resisting slab penetration into the lower mantle and deformation associated with slab buckling and stagnation. Observed fault-rupture geometry, demonstrated likelihood of seismic triggering, and high model temperatures in young subducted lithosphere, together favour nanometric crystallisation (and associated grain-boundary sliding) attending high-pressure dehydration as a likely seismogenic mechanism, unless a segment of much older lithosphere is present at depth.

  19. Computer aided lung cancer diagnosis with deep learning algorithms

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Zheng, Bin; Qian, Wei

    2016-03-01

    Deep learning is considered a popular and powerful method in pattern recognition and classification. However, there are not many deep-structured applications in the medical imaging diagnosis area, because large datasets are not always available for medical images. In this study we tested the feasibility of using deep learning algorithms for lung cancer diagnosis with cases from the Lung Image Database Consortium (LIDC) database. The nodules on each computed tomography (CT) slice were segmented according to marks provided by the radiologists. After downsampling and rotating, we acquired 174,412 samples of 52 by 52 pixels each, with the corresponding truth files. Three deep learning algorithms were designed and implemented: a Convolutional Neural Network (CNN), Deep Belief Networks (DBNs), and a Stacked Denoising Autoencoder (SDAE). To compare the performance of the deep learning algorithms with a traditional computer-aided diagnosis (CADx) system, we designed a scheme with 28 image features and a support vector machine. The accuracies of CNN, DBNs, and SDAE are 0.7976, 0.8119, and 0.7929, respectively; the accuracy of our designed traditional CADx is 0.7940, which is slightly lower than CNN and DBNs. We also noticed that the nodules mislabeled by DBNs are 4% larger than those mislabeled by the traditional CADx; this might result from the downsampling process losing some size information about the nodules.
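
    The abstract does not give the CNN architecture, so the PyTorch sketch below is only a plausible small classifier for the 52 by 52 patches described; the layer counts, widths, and binary output head are our guesses.

    ```python
    import torch
    import torch.nn as nn

    class NoduleCNN(nn.Module):
        """Binary classifier for 52x52 CT patches (nodule vs. non-nodule)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),   # 52->48->24
                nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),  # 24->20->10
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(32 * 10 * 10, 128), nn.ReLU(),
                nn.Linear(128, 2),
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    logits = NoduleCNN()(torch.rand(8, 1, 52, 52))
    print(logits.shape)  # torch.Size([8, 2])
    ```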

  20. Fully automatic cervical vertebrae segmentation framework for X-ray images.

    PubMed

    Al Arif, S M Masudur Rahman; Knapp, Karen; Slabaugh, Greg

    2018-04-01

    The cervical spine is highly flexible and therefore vulnerable to injury. Unfortunately, a large number of injuries in lateral cervical X-ray images remain undiagnosed due to human error. Computer-aided injury detection has the potential to reduce the risk of misdiagnosis. Towards building an automatic injury detection system, in this paper we propose a deep learning-based fully automatic framework for segmentation of cervical vertebrae in X-ray images. The framework first localizes the spinal region in the image using a deep fully convolutional neural network. Then vertebra centers are localized using a novel deep probabilistic spatial regression network. Finally, a novel shape-aware deep segmentation network is used to segment the vertebrae in the image. The framework can take an X-ray image and produce a vertebrae segmentation result without any manual intervention. Each block of the fully automatic framework has been trained on a set of 124 X-ray images and tested on another 172 images, all collected from real-life hospital emergency rooms. A Dice similarity coefficient of 0.84 and a shape error of 1.69 mm have been achieved.
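
    The reported Dice similarity coefficient is twice the overlap between the predicted and ground-truth masks divided by their summed sizes; a minimal numpy version, with toy masks, is:

    ```python
    import numpy as np

    def dice(pred, truth):
        """Dice similarity coefficient for binary masks:
        2|A intersect B| / (|A| + |B|); 1.0 means perfect overlap."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        denom = pred.sum() + truth.sum()
        return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

    a = np.zeros((10, 10), dtype=bool); a[2:7, 2:7] = True
    b = np.zeros((10, 10), dtype=bool); b[3:8, 3:8] = True
    print(round(dice(a, b), 3))  # 0.64
    ```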
