Large-viewing-angle electroholography by space projection
NASA Astrophysics Data System (ADS)
Sato, Koki; Obana, Kazuki; Okumura, Toshimichi; Kanaoka, Takumi; Nishikawa, Satoko; Takano, Kunihiko
2004-06-01
A hologram provides a full-parallax 3D image. Such an image appears more natural because accommodation (focusing) and convergence coincide. We aim to build a practical electro-holography system, because in conventional electro-holography the viewing angle of the image is very small owing to the limited pixel pitch of the display. We are now developing a new space-projection method to obtain a large viewing angle: a white laser illuminates a single DMD panel (displaying time-shared CGHs for the three RGB colors), and a 3D space screen formed from very small water particles reconstructs the 3D image with a large viewing angle through scattering by the particles.
NASA Astrophysics Data System (ADS)
Sahoo, Sujit Kumar; Tang, Dongliang; Dang, Cuong
2018-02-01
Large-field-of-view multispectral imaging through a scattering medium is a fundamental quest in the optics community. It has gained special attention from researchers in recent years for its wide range of potential applications. However, the main bottlenecks of current imaging systems are the requirements on specific illumination, poor image quality, and limited field of view. In this work, we demonstrate single-shot high-resolution colour imaging through scattering media using a monochromatic camera. This novel imaging technique is enabled by the spatial and spectral decorrelation properties and the optical memory effect of the scattering media. Moreover, the use of deconvolution image processing further removes the above-mentioned drawbacks that arise from iterative refocusing, scanning, or phase-retrieval procedures.
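The single-shot recovery described above relies on deconvolving the camera image with a characterized speckle response. A generic Wiener-filter deconvolution conveys the idea (this is a minimal sketch, not the authors' exact pipeline; the Gaussian "speckle" kernel and the regularization constant `k` are illustrative assumptions):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-2):
    """Recover a scene from a measurement blurred by a known PSF.

    blurred and psf are 2D arrays of the same shape; k is an assumed
    noise-to-signal regularization constant (tuning value).
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))  # PSF spectrum, origin-centered
    G = np.fft.fft2(blurred)
    # Wiener filter: H* G / (|H|^2 + k)
    F = np.conj(H) * G / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))

# Toy demo: blur a point source with a Gaussian stand-in for the speckle
# PSF, then invert the blur.
n = 64
yy, xx = np.mgrid[:n, :n]
psf = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / 8.0)
psf /= psf.sum()
scene = np.zeros((n, n))
scene[20, 40] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
recovered = wiener_deconvolve(blurred, psf, k=1e-4)
```

The recovered image peaks at the original point-source location; with a real speckle PSF the same filter trades resolution against noise through `k`.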
Presence and preferable viewing conditions when using an ultrahigh-definition large-screen display
NASA Astrophysics Data System (ADS)
Masaoka, Kenichiro; Emoto, Masaki; Sugawara, Masayuki; Okano, Fumio
2005-01-01
We are investigating psychological aspects to obtain guidelines for the design of TVs aimed at future high-presence broadcasting. In this study, we performed subjective assessment tests to examine the psychological effects of different combinations of viewing conditions obtained by varying the viewing distance, screen size, and picture resolution (between 4000 and 1000 scan lines). The evaluation images were presented in the form of two-minute programs comprising a sequence of 10 still images, and the test subjects were asked to complete a questionnaire consisting of 20 items relating to psychological effects such as "presence", "adverse effects", and "preferability". The test subjects reported a stronger feeling of presence for 1000-line images when they were viewed at a distance of about 1.5H (closer than the distance of 3H recommended for subjective evaluation of HDTV image quality), and a stronger feeling of presence for 4000-line images than for 1000-line images. Adverse effects such as "difficulty of viewing" did not differ significantly with resolution, but were rated lower as the viewing distance increased and tended to saturate at viewing distances above 2H. The viewing conditions were rated as more preferable as the screen size increased, showing that it is possible to broadcast comfortable, high-presence pictures using high-resolution large-screen displays.
Characteristics of mist 3D screen for projection type electro-holography
NASA Astrophysics Data System (ADS)
Sato, Koki; Okumura, Toshimichi; Kanaoka, Takumi; Koizumi, Shinya; Nishikawa, Satoko; Takano, Kunihiko
2006-01-01
A hologram provides a full-parallax 3D image. Such an image appears more natural because accommodation (focusing) and convergence coincide. We aim to build a practical electro-holography system, because in conventional electro-holography the viewing angle of the image is very small owing to the limited pixel pitch of the display. We are now developing a new space-projection method to obtain a large viewing angle: a white laser illuminates a single DMD panel (displaying time-shared CGHs for the three RGB colors), and a 3D space screen formed from very small water particles reconstructs the 3D image with a large viewing angle through scattering by the particles.
Optimization-based image reconstruction from sparse-view data in offset-detector CBCT
NASA Astrophysics Data System (ADS)
Bian, Junguo; Wang, Jiong; Han, Xiao; Sidky, Emil Y.; Shao, Lingxiong; Pan, Xiaochuan
2013-01-01
The field of view (FOV) of a cone-beam computed tomography (CBCT) unit in a single-photon emission computed tomography (SPECT)/CBCT system can be increased by offsetting the CBCT detector. Analytic-based algorithms have been developed for image reconstruction from data collected at a large number of densely sampled views in offset-detector CBCT. However, the radiation dose involved in a large number of projections can be a health concern for the imaged subject. CBCT imaging dose can be reduced by lowering the number of projections. As analytic-based algorithms are unlikely to reconstruct accurate images from sparse-view data, we investigate and characterize in this work optimization-based algorithms, including an adaptive steepest descent-weighted projection onto convex sets (ASD-WPOCS) algorithm, for image reconstruction from sparse-view data collected in offset-detector CBCT. Using simulated data and real data collected from a physical pelvis phantom and a patient, we verify and characterize properties of the algorithms under study. Results of our study suggest that optimization-based algorithms such as ASD-WPOCS may be developed to yield images of potential utility from a number of projections substantially smaller than those currently used in clinical SPECT/CBCT imaging, thus leading to a dose reduction in CBCT imaging.
Escott, Edward J; Rubinstein, David
2004-01-01
It is often necessary for radiologists to use digital images in presentations and conferences. Most imaging modalities produce images in the Digital Imaging and Communications in Medicine (DICOM) format. The image files tend to be large and thus cannot be directly imported into most presentation software, such as Microsoft PowerPoint; the large files also consume storage space. There are many free programs that allow viewing and processing of these files on a personal computer, including conversion to more common file formats such as the Joint Photographic Experts Group (JPEG) format. Free DICOM image viewing and processing software for computers running on the Microsoft Windows operating system has already been evaluated. However, many people use the Macintosh (Apple Computer) platform, and a number of programs are available for these users. The World Wide Web was searched for free DICOM image viewing or processing software that was designed for the Macintosh platform or is written in Java and is therefore platform independent. The features of these programs and their usability were evaluated. There are many free programs for the Macintosh platform that enable viewing and processing of DICOM images. (c) RSNA, 2004.
Web tools for large-scale 3D biological images and atlases
2012-01-01
Background Large-scale volumetric biomedical image data of three or more dimensions are a significant challenge for distributed browsing and visualisation. Many images now exceed 10GB, which for most users is too large to handle in terms of computer RAM and network bandwidth. This is aggravated when users need to access tens or hundreds of such images from an archive. Here we solve the problem for 2D section views through archive data by delivering compressed tiled images, enabling users to browse through very large volume data in the context of a standard web-browser. The system provides an interactive visualisation for grey-level and colour 3D images including multiple image layers and spatial-data overlay. Results The standard Internet Imaging Protocol (IIP) has been extended to enable arbitrary 2D sectioning of 3D data as well as multi-layered images and indexed overlays. The extended protocol is termed IIP3D, and we have implemented a matching server to deliver the protocol and a series of Ajax/JavaScript client codes that will run in an Internet browser. We have tested the server software on a low-cost Linux-based server for image volumes up to 135GB and 64 simultaneous users. The section views are delivered with response times independent of scale and orientation. The exemplar client provided multi-layer image views with user-controlled colour-filtering and overlays. Conclusions Interactive browsing of arbitrary sections through large biomedical-image volumes is made possible by use of an extended internet protocol and efficient server-based image tiling. The tools open the possibility of enabling fast access to large image archives without requiring whole-image download or client computers with very large memory configurations. The system was demonstrated using a range of medical and biomedical image data extending up to 135GB for a single image volume. PMID:22676296
BigView Image Viewing on Tiled Displays
NASA Technical Reports Server (NTRS)
Sandstrom, Timothy
2007-01-01
BigView allows for interactive panning and zooming of images of arbitrary size on desktop PCs running Linux. Additionally, it can work in a multi-screen environment where multiple PCs cooperate to view a single, large image. Using this software, one can explore, on relatively modest machines, images such as the Mars Orbiter Camera mosaic (92,160 × 33,280 pixels). The images must first be converted into paged format, where the image is stored in 256 × 256-pixel pages to allow rapid movement of pixels into texture memory. The format contains an image pyramid: a set of scaled versions of the original image. Each scaled image is 1/2 the size of the previous, starting with the original down to the smallest, which fits into a single 256 × 256 page.
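The pyramid arithmetic implied by this paged format is easy to make concrete. The sketch below (illustrative only; BigView's actual file layout is not described in the abstract) enumerates the pyramid levels for the Mars Orbiter Camera mosaic, halving the dimensions until the whole level fits in one 256 × 256 page:

```python
import math

def pyramid_levels(width, height, page=256):
    """Return a list of (level, w, h, pages_x, pages_y) tuples, from the
    full-resolution image down to the level that fits in a single page."""
    levels = []
    w, h = width, height
    while True:
        px, py = math.ceil(w / page), math.ceil(h / page)
        levels.append((len(levels), w, h, px, py))
        if px == 1 and py == 1:
            break
        w, h = max(1, w // 2), max(1, h // 2)  # each level is half the previous
    return levels

# The Mars Orbiter Camera mosaic cited in the abstract:
levels = pyramid_levels(92160, 33280)
```

For this mosaic the full-resolution level is a 360 × 130 grid of pages, and ten levels suffice to reach a single page, which is why panning and zooming only ever touch a small working set of tiles.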
Development of 40-in hybrid hologram screen for auto-stereoscopic video display
NASA Astrophysics Data System (ADS)
Song, Hyun Ho; Nakashima, Y.; Momonoi, Y.; Honda, Toshio
2004-06-01
Auto-stereoscopic displays usually have two problems. The first is that a large image display is difficult, and the second is that the view zone (the zone in which both eyes must be placed to observe the stereoscopic or 3D image) is very narrow. We have been developing an auto-stereoscopic large video display system (over 100 inches diagonal) which a few people can view simultaneously [1,2]. Displays over 100 inches diagonal usually use an optical video projection system. The hologram screen has been proposed as one type of auto-stereoscopic display system [3-6]. However, if the hologram screen becomes too large, the view zone (corresponding to the reconstructed diffused object) suffers from color dispersion and color aberration [7]. We proposed attaching an additional Fresnel lens to the hologram screen; we call this screen a "hybrid hologram screen" (HHS for short). We made an HHS of 866 mm (H) × 433 mm (V) (about 40 inches diagonal) [8-11]. By using the lens in the reconstruction step, the angle between the object light and the reference light can be made smaller than without the lens, so the spread of the view zone caused by color dispersion and color aberration becomes small. Also, the virtual image reconstructed from the hologram screen can be transformed into a real image (view zone), so it is not necessary to use a large lens or concave mirror when making a large hologram screen.
Panoramic cone beam computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang Jenghwa; Zhou Lili; Wang Song
2012-05-15
Purpose: Cone-beam computed tomography (CBCT) is the main imaging tool for image-guided radiotherapy, but its functionality is limited by a small imaging volume and a restricted imaging position (peripheral lesions are imaged at the central rather than the treatment position to avoid collisions). In this paper, the authors present the concept of "panoramic CBCT", which can image patients at the treatment position with an imaging volume as large as practically needed. Methods: In this novel panoramic CBCT technique, the target is scanned sequentially from multiple view angles. For each view angle, a half scan (180° + θ_cone, where θ_cone is the cone angle) is performed with the imaging panel positioned in any location along the beam path. The panoramic projection images of all views for the same gantry angle are then stitched together with the direct image-stitching method (i.e., according to the reported imaging position), and full-fan, half-scan CBCT reconstruction is performed using the stitched projection images. To validate this imaging technique, the authors simulated cone-beam projection images of the Mathematical Cardiac Torso (MCAT) thorax phantom for three panoramic views. Gaps, repeated/missing columns, and different exposure levels were introduced between adjacent views to simulate imperfect image stitching due to uncertainties in imaging position or output fluctuation. A modified simultaneous algebraic reconstruction technique (modified SART) was developed to reconstruct CBCT images directly from the stitched projection images. As a gold standard, full-fan, full-scan (360° gantry rotation) CBCT reconstructions were also performed using projection images of one imaging panel large enough to encompass the target. Contrast-to-noise ratio (CNR) and geometric distortion were evaluated to quantify the quality of the reconstructed images.
Monte Carlo simulations were performed to evaluate the effect of scattering on the image quality and imaging dose for both standard and panoramic CBCT. Results: Truncated images with artifacts were observed for the CBCT reconstruction using projection images of the central view only. When the image stitching was perfect, complete reconstruction was obtained for the panoramic CBCT using the modified SART, with image quality similar to the gold standard (full-scan, full-fan CBCT using one large imaging panel). Imperfect image stitching, on the other hand, led to (streak, line, or ring) reconstruction artifacts, reduced CNR, and/or distorted geometry. Results from Monte Carlo simulations showed that, for identical imaging quality, the imaging dose was lower for the panoramic CBCT than for images acquired with one large imaging panel. For the same imaging dose, the CNR of the three-view panoramic CBCT was 50% higher than that of the regular CBCT using one big panel. Conclusions: The authors have developed a panoramic CBCT technique and demonstrated with simulation data that it can image tumors of any location, for patients of any size, at the treatment position with comparable or less imaging dose and time. However, the image quality of this CBCT technique is sensitive to the reconstruction artifacts caused by imperfect image stitching. Better algorithms are therefore needed to improve the accuracy of image stitching for panoramic CBCT.
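The SART family of algorithms at the core of this reconstruction performs a normalized backprojection of the projection residual at each sweep. The toy sketch below shows one such sweep on a 4-pixel "image" with 4 ray sums (a generic SART update, not the authors' modified variant; the system matrix and relaxation value are illustrative):

```python
import numpy as np

def sart_update(A, b, x, lam=1.0):
    """One SART sweep: x += lam * D_c^{-1} A^T D_r^{-1} (b - A x),
    where D_r and D_c hold the row and column sums of A (zero-safe)."""
    row_sum = A.sum(axis=1)
    row_sum[row_sum == 0] = 1.0
    col_sum = A.sum(axis=0)
    col_sum[col_sum == 0] = 1.0
    residual = (b - A @ x) / row_sum          # per-ray residual, length-normalized
    return x + lam * (A.T @ residual) / col_sum

# Toy system: a 2x2 image flattened to 4 pixels, measured by 4 "rays"
# (two horizontal, two vertical sums).
A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.]])
x_true = np.array([1., 2., 3., 4.])
b = A @ x_true                 # noiseless measurements
x = np.zeros(4)
for _ in range(200):
    x = sart_update(A, b, x)
```

For this consistent system the iterates converge to a solution whose forward projection matches the data; the modified SART in the paper additionally accommodates the stitched, multi-panel geometry.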
NASA Astrophysics Data System (ADS)
Wu, Xiongxiong; Wang, Xiaorui; Zhang, Jianlei; Yuan, Ying; Chen, Xiaoxiang
2017-04-01
To realize a large field of view (FOV) and high-resolution dynamic gaze at a moving target, this paper proposes the monocentric multiscale foveated (MMF) imaging system, based on monocentric multiscale design and foveated imaging. First, we present the MMF imaging system concept. Then we analyze the large field curvature and distortion of the secondary image when the spherical intermediate image produced by the primary monocentric objective lens is relayed by the microcameras. Further, a type of zoom endoscope objective lens is selected as the initial structure and optimized to minimize the field curvature and distortion with the ZEMAX optical design software. The simulation results show that the maximum field curvature over the full field of view is below 0.25 mm and the maximum distortion is below 0.6%, which meets the requirements of the microcamera in the proposed MMF imaging system. In addition, a simple doublet is used to design the foveated imaging system. The results for the microcameras, together with the foveated imager, constitute the design of the whole MMF imaging system.
Thin plate spline feature point matching for organ surfaces in minimally invasive surgery imaging
NASA Astrophysics Data System (ADS)
Lin, Bingxiong; Sun, Yu; Qian, Xiaoning
2013-03-01
Robust feature point matching for images with large view-angle changes in Minimally Invasive Surgery (MIS) is a challenging task due to low texture and specular reflections in these images. This paper presents a new approach that improves feature matching performance by exploiting the inherent geometric properties of organ surfaces. Recently, intensity-based template image tracking using a Thin Plate Spline (TPS) model has been extended to 3D surface tracking with stereo cameras. Intensity-based tracking is also used here for 3D reconstruction of internal organ surfaces; to overcome its small-displacement requirement, feature point correspondences are used to properly initialize the nonlinear optimization in the intensity-based method. Next, we generate simulated images of the reconstructed 3D surfaces under all potential view positions and orientations, and extract feature points from these simulated images. The obtained feature points are then filtered and re-projected to the common reference image. The descriptors of the feature points under different view angles are stored to ensure that the proposed method can tolerate a large range of view angles. We evaluate the proposed method with silicone phantoms and in vivo images. The experimental results show that our method is much more robust to view-angle changes than other state-of-the-art methods.
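A thin plate spline maps matched control points between views while interpolating smoothly in between. The following sketch fits a TPS warp to a handful of hypothetical correspondences using SciPy's radial basis interpolator (illustrative only; the paper's tracking formulation is intensity-based, and all coordinates here are made up):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical control points in the reference image and their matched
# positions in the target view. Here the "deformation" is a pure
# translation, which a TPS (with its built-in affine part) fits exactly.
src = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.5, 0.5]])
shift = np.array([0.1, 0.05])
dst = src + shift

# kernel='thin_plate_spline' gives the classic TPS r^2 log r basis.
tps = RBFInterpolator(src, dst, kernel='thin_plate_spline')

# Warp arbitrary query points from the reference frame to the target frame.
query = np.array([[0.25, 0.75], [0.6, 0.3]])
warped = tps(query)
```

Because the training correspondences are affine, the fitted warp reproduces the translation everywhere; with real organ-surface matches the spline instead bends smoothly between the control points.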
Improved integral images compression based on multi-view extraction
NASA Astrophysics Data System (ADS)
Dricot, Antoine; Jung, Joel; Cagnazzo, Marco; Pesquet, Béatrice; Dufaux, Frédéric
2016-09-01
Integral imaging is a technology based on plenoptic photography that captures and samples the light-field of a scene through a micro-lens array. It provides views of the scene from several angles and therefore is foreseen as a key technology for future immersive video applications. However, integral images have a large resolution and a structure based on micro-images which is challenging to encode. A compression scheme for integral images based on view extraction has previously been proposed, with average BD-rate gains of 15.7% (up to 31.3%) reported over HEVC when using one single extracted view. As the efficiency of the scheme depends on a tradeoff between the bitrate required to encode the view and the quality of the image reconstructed from the view, it is proposed to increase the number of extracted views. Several configurations are tested with different positions and different number of extracted views. Compression efficiency is increased with average BD-rate gains of 22.2% (up to 31.1%) reported over the HEVC anchor, with a realistic runtime increase.
Development of an immersive virtual reality head-mounted display with high performance.
Wang, Yunqi; Liu, Weiqi; Meng, Xiangxiang; Fu, Hanyi; Zhang, Daliang; Kang, Yusi; Feng, Rui; Wei, Zhonglun; Zhu, Xiuqing; Jiang, Guohua
2016-09-01
To resolve the contradiction between large field of view and high resolution in immersive virtual reality (VR) head-mounted displays (HMDs), an HMD monocular optical system with a large field of view and high resolution was designed. The system was fabricated by adopting aspheric technology with CNC grinding and a high-resolution LCD as the image source. With this monocular optical system, an HMD binocular optical system with a wide-range continuously adjustable interpupillary distance was achieved in the form of partially overlapping fields of view (FOV) combined with a screw adjustment mechanism. A fast image processor-centered LCD driver circuit and an image preprocessing system were also built to address binocular vision inconsistency in the partially overlapping FOV binocular optical system. The distortions of the HMD optical system with a large field of view were measured. Meanwhile, the optical distortions in the display and the trapezoidal distortions introduced during image processing were corrected by a calibration model for reverse rotations and translations. A high-performance not-fully-transparent VR HMD device with high resolution (1920×1080) and large FOV [141.6°(H)×73.08°(V)] was developed. The full field-of-view average value of angular resolution is 18.6 pixels/degree. With the device, high-quality VR simulations can be completed under various scenarios, and the device can be utilized for simulated trainings in aeronautics, astronautics, and other fields with corresponding platforms. The developed device has positive practical significance.
A Low-Cost PC-Based Image Workstation for Dynamic Interactive Display of Three-Dimensional Anatomy
NASA Astrophysics Data System (ADS)
Barrett, William A.; Raya, Sai P.; Udupa, Jayaram K.
1989-05-01
A system for interactive definition, automated extraction, and dynamic interactive display of three-dimensional anatomy has been developed and implemented on a low-cost PC-based image workstation. An iconic display is used for staging predefined image sequences through specified increments of tilt and rotation over a solid viewing angle. Use of a fast processor facilitates rapid extraction and rendering of the anatomy into predefined image views. These views are formatted into a display matrix in a large image memory for rapid interactive selection and display of arbitrary spatially adjacent images within the viewing angle, thereby providing motion parallax depth cueing for efficient and accurate perception of true three-dimensional shape, size, structure, and spatial interrelationships of the imaged anatomy. The visual effect is that of holding and rotating the anatomy in the hand.
Storage and distribution of pathology digital images using integrated web-based viewing systems.
Marchevsky, Alberto M; Dulbandzhyan, Ronda; Seely, Kevin; Carey, Steve; Duncan, Raymond G
2002-05-01
Health care providers have expressed increasing interest in incorporating digital images of gross pathology specimens and photomicrographs into routine pathology reports. We describe the multiple technical and logistical challenges involved in integrating the various components needed to develop a system for Web-based viewing, storage, and distribution of digital images in a large health system. An Oracle version 8.1.6 database was developed to store, index, and deploy pathology digital photographs via our Intranet. The database allows retrieval of images by patient demographics or by SNOMED code information, and the Intranet of the large health system is accessible from multiple computers located within the medical center and at distant private physician offices. The images can be viewed using any workstation of the health system that has authorized access to our Intranet, using a standard browser or a browser configured with an external viewer or inexpensive plug-in software, such as Prizm 2.0. The images can be printed on paper or transferred to film using a digital film recorder. Digital images can also be displayed at pathology conferences by using wireless local area network (LAN) and secure remote technologies. The standardization of technologies and the adoption of a Web interface for all our computer systems allow us to distribute digital images from a pathology database to a potentially large group of users distributed in multiple locations throughout a large medical center.
High-Resolution Large Field-of-View FUV Compact Camera
NASA Technical Reports Server (NTRS)
Spann, James F.
2006-01-01
The need for a high-resolution camera with a large field of view that is capable of imaging dim emissions in the far ultraviolet is driven by the widely varying intensities of FUV emissions and the spatial/temporal scales of phenomena of interest in the Earth's ionosphere. In this paper, the concept of a camera is presented that is designed to achieve these goals in a lightweight package with sufficient visible-light rejection to be useful for dayside and nightside emissions. The camera employs the concept of self-filtering to achieve good spectral resolution tuned to specific wavelengths. The large field of view is sufficient to image the Earth's disk at geosynchronous altitudes, and the camera is capable of a spatial resolution of >20 km. The optics and filters are emphasized.
Portable oral cancer detection using a miniature confocal imaging probe with a large field of view
NASA Astrophysics Data System (ADS)
Wang, Youmin; Raj, Milan; McGuff, H. Stan; Bhave, Gauri; Yang, Bin; Shen, Ting; Zhang, Xiaojing
2012-06-01
We demonstrate a MEMS-micromirror-enabled handheld confocal imaging probe for portable oral cancer detection, in which a comparatively large field of view (FOV) is generated through the programmable Lissajous scanning pattern of the MEMS micromirror. A miniaturized handheld MEMS confocal imaging probe was developed and further compared with the desktop confocal prototype in a clinical setting. For the handheld confocal imaging system, optical design simulations using CODE V® show the lateral and axial resolutions to be 0.98 µm and 4.2 µm, while the experimental values were determined to be 3 µm and 5.8 µm, respectively, with a FOV of 280 µm × 300 µm. A fast Lissajous imaging speed of up to 2 fps was realized with improved LabVIEW- and Java-based real-time imaging software. Capabilities such as 3D imaging through autofocusing and mosaic imaging for an extended lateral view (6 mm × 8 mm) were examined for real-time carcinoma pathology. Neoplastic lesion tissues of giant cell fibroma and peripheral ossifying fibroma, fibroma embedded in a paraffin block, and ex vivo gross tissues were imaged by the bench-top and handheld imaging modalities, and further compared with commercial microscope imaging results. The MEMS-scanner-based handheld confocal imaging probe shows great promise as a potential clinical tool for oral cancer diagnosis and treatment.
Large Smoke Plume from Industrial Fires in Miyagi Prefecture, Japan
2011-03-13
This image, acquired on March 12, 2011, by NASA's Terra spacecraft, shows a large smoke plume that appears to be associated with either the Shiogama incident or the Sendai port fires. 3D glasses are necessary to view this image.
Efficient structure from motion on large scenes using UAV with position and pose information
NASA Astrophysics Data System (ADS)
Teng, Xichao; Yu, Qifeng; Shang, Yang; Luo, Jing; Wang, Gang
2018-04-01
In this paper, we exploit prior information from global positioning systems and inertial measurement units to speed up large-scene reconstruction from images acquired by Unmanned Aerial Vehicles. We use the weak pose information and intrinsic parameters to obtain the projection matrix for each view. Since topographic relief can usually be ignored compared with the flight altitude of unmanned aerial vehicles, we assume that the scene is flat and use a weak-perspective camera model to obtain projective transformations between two views. Furthermore, we propose an overlap criterion and select potentially matching view pairs among the projectively transformed views. A robust global structure-from-motion method is used for image-based reconstruction. Our real-world experiments show that the approach is accurate, scalable, and computationally efficient. Moreover, the projective transformations between views can also be used to eliminate false matches.
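An overlap criterion of this kind can be approximated by mapping one view's corners through the inter-view transformation and measuring how much of the other view's frame they cover. The sketch below uses an axis-aligned bounding-box intersection as a coarse stand-in for the paper's (unspecified) criterion; the homography and image size are illustrative:

```python
import numpy as np

def overlap_ratio(H, w, h):
    """Approximate overlap between a w x h view and its neighbor: map the
    image corners through homography H, then intersect the axis-aligned
    bounding box of the result with the neighbor's frame."""
    corners = np.array([[0, 0, 1], [w, 0, 1], [w, h, 1], [0, h, 1]], float).T
    p = H @ corners
    p = p[:2] / p[2]                      # dehomogenize
    x0, y0 = p[0].min(), p[1].min()
    x1, y1 = p[0].max(), p[1].max()
    ix = max(0.0, min(x1, w) - max(x0, 0.0))   # bounding-box intersection
    iy = max(0.0, min(y1, h) - max(y0, 0.0))
    return (ix * iy) / (w * h)

# A pure shift of half the image width should give 50% overlap.
H = np.array([[1.0, 0.0, 320.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
ratio = overlap_ratio(H, 640, 480)
```

View pairs whose ratio exceeds a threshold would be kept as candidates for matching, which is how prior pose information prunes the quadratic pair search.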
Fly-through viewpoint video system for multi-view soccer movie using viewpoint interpolation
NASA Astrophysics Data System (ADS)
Inamoto, Naho; Saito, Hideo
2003-06-01
This paper presents a novel method for virtual view generation that allows viewers to fly through a real soccer scene. A soccer match is captured by multiple cameras at a stadium, and images of arbitrary viewpoints are synthesized by view interpolation between the two real camera images nearest the given viewpoint. In the proposed method, the cameras do not need to be strongly calibrated; the epipolar geometry between the cameras is sufficient for view interpolation. The method can therefore easily be applied to a dynamic event even in a large space, because the effort of camera calibration is reduced. A soccer scene is classified into several regions, and virtual view images are generated based on the epipolar geometry in each region. Superimposition of the images completes the virtual view of the whole soccer scene. An application for fly-through observation of a soccer match is introduced along with the view-synthesis algorithm and experimental results.
NASA Astrophysics Data System (ADS)
Ryu, Inkeon; Kim, Daekeun
2018-04-01
A typical selective plane illumination microscopy (SPIM) image size is fundamentally limited by the field of view, which is a characteristic of the objective lens. If an image larger than the imaging area of the sample is to be obtained, image stitching, which combines step-scanned images into a single panoramic image, is required. However, accurately registering the step-scanned images is very difficult because the SPIM system uses a customized sample mount in which uncertainties in the translational and rotational motions exist. In this paper, an image registration technique based on multiple fluorescent microsphere tracking is proposed, which quantifies the constellations of, and measures the distances between, at least two fluorescent microspheres embedded in the sample. Image stitching results are demonstrated for optically cleared large tissue with various staining methods. Compensation for the sample rotation that occurs during translational motion of the sample mount is also discussed.
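Once the same microspheres have been identified in two overlapping tiles, the registration reduces to estimating the motion that best aligns the matched centroids. The sketch below estimates a pure translation by least squares (a simplification: the paper also compensates rotation, and the bead coordinates here are made up):

```python
import numpy as np

def translation_from_beads(beads_a, beads_b):
    """Least-squares translation aligning matched microsphere centroids:
    for matched points, the optimal shift is the mean displacement."""
    return np.mean(np.asarray(beads_b) - np.asarray(beads_a), axis=0)

# Hypothetical bead centroids (pixels) seen in tile A, and the same beads
# in tile B after a stage step, with small localization noise.
beads_a = np.array([[10.0, 12.0], [40.5, 8.0], [22.0, 30.0]])
true_shift = np.array([100.0, -3.5])
rng = np.random.default_rng(0)
beads_b = beads_a + true_shift + rng.normal(0.0, 0.01, beads_a.shape)

shift = translation_from_beads(beads_a, beads_b)
```

The estimated shift is then applied when placing tile B in the panorama; averaging over several beads suppresses per-bead localization noise.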
NASA Astrophysics Data System (ADS)
Flower, M. A.; Ott, R. J.; Webb, S.; Leach, M. O.; Marsden, P. K.; Clack, R.; Khan, O.; Batty, V.; McCready, V. R.; Bateman, J. E.
1988-06-01
Two clinical trials of the prototype RAL multiwire proportional chamber (MWPC) positron camera were carried out prior to the development of a clinical system with large-area detectors. During the first clinical trial, the patient studies included skeletal imaging using 18F, imaging of brain glucose metabolism using 18F FDG, bone marrow imaging using 52Fe citrate and thyroid imaging with Na 124I. Longitudinal tomograms were produced from the limited-angle data acquisition from the static detectors. During the second clinical trial, transaxial, coronal and sagittal images were produced from the multiview data acquisition. A more detailed thyroid study was performed in which the volume of the functioning thyroid tissue was obtained from the 3D PET image and this volume was used in estimating the radiation dose achieved during radioiodine therapy of patients with thyrotoxicosis. Despite the small field of view of the prototype camera, and the use of smaller than usual amounts of activity administered, the PET images were in most cases comparable with, and in a few cases visually better than, the equivalent planar view using a state-of-the-art gamma camera with a large field of view and routine radiopharmaceuticals.
Large-field-of-view imaging by multi-pupil adaptive optics.
Park, Jung-Hoon; Kong, Lingjie; Zhou, Yifeng; Cui, Meng
2017-06-01
Adaptive optics can correct for optical aberrations. We developed multi-pupil adaptive optics (MPAO), which enables simultaneous wavefront correction over a field of view of 450 × 450 μm² and expands the correction area to nine times that of conventional methods. MPAO's ability to perform spatially independent wavefront control further enables 3D nonplanar imaging. We applied MPAO to in vivo structural and functional imaging in the mouse brain.
A Unified Framework for Street-View Panorama Stitching
Li, Li; Yao, Jian; Xie, Renping; Xia, Menghan; Zhang, Wei
2016-01-01
In this paper, we propose a unified framework to generate a pleasant, high-quality street-view panorama by stitching multiple panoramic images captured by cameras mounted on a mobile platform. Our proposed framework comprises four major steps: image warping, color correction, optimal seam line detection and image blending. Because the input images are captured without a precisely common projection center, and the scene depths vary with respect to the cameras, such images cannot be precisely aligned geometrically. Therefore, an efficient image warping method based on the dense optical flow field is first applied to greatly suppress the influence of large geometric misalignments. Then, to lessen the influence of photometric inconsistencies caused by illumination variations and different exposure settings, we propose an efficient color correction algorithm that matches the extreme points of histograms to greatly decrease color differences between warped images. After that, the optimal seam lines between adjacent input images are detected via a graph-cut energy minimization framework. Finally, the Laplacian pyramid blending algorithm is applied to further eliminate stitching artifacts along the optimal seam lines. Experimental results on a large set of challenging street-view panoramic images captured from the real world illustrate that the proposed system is capable of creating high-quality panoramas. PMID:28025481
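The color-correction idea of matching extreme points of histograms can be approximated by linearly remapping each warped image so that its low and high percentiles match those of a reference image. A hedged sketch (the percentile choices and the simulated exposure difference are illustrative, not the paper's exact algorithm):

```python
import numpy as np

def match_extremes(src, ref, lo=1.0, hi=99.0):
    """Linearly remap src so its low/high percentiles match ref's --
    a crude stand-in for extreme-point histogram matching."""
    s_lo, s_hi = np.percentile(src, [lo, hi])
    r_lo, r_hi = np.percentile(ref, [lo, hi])
    gain = (r_hi - r_lo) / max(s_hi - s_lo, 1e-9)
    out = (src - s_lo) * gain + r_lo
    return np.clip(out, 0.0, 255.0)

rng = np.random.default_rng(0)
ref = rng.uniform(50, 200, size=(64, 64))
src = ref * 0.8 + 10            # simulated exposure/illumination difference
corrected = match_extremes(src, ref)
```

For a purely affine brightness difference, as simulated here, the remapping recovers the reference image exactly; real exposure differences are only approximately affine, which is why the paper follows color correction with seam detection and pyramid blending.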
Storage and retrieval of large digital images
Bradley, J.N.
1998-01-20
Image compression and viewing are implemented with (1) a method for performing DWT-based compression on a large digital image with a computer system possessing a two-level system of memory and (2) a method for selectively viewing areas of the image from its compressed representation at multiple resolutions and, if desired, in a client-server environment. The compression of a large digital image I(x,y) is accomplished by first defining a plurality of discrete tile image data subsets T_ij(x,y) that, upon superposition, form the complete set of image data I(x,y). A seamless wavelet-based compression process is effected on I(x,y) that is comprised of successively inputting the tiles T_ij(x,y) in a selected sequence to a DWT routine, and storing the resulting DWT coefficients in a first primary memory. These coefficients are periodically compressed and transferred to a secondary memory to maintain sufficient memory in the primary memory for data processing. The sequence of DWT operations on the tiles T_ij(x,y) effectively calculates a seamless DWT of I(x,y). Data retrieval consists of specifying a resolution and a region of I(x,y) for display. The subset of stored DWT coefficients corresponding to each requested scene is determined and then decompressed for input to an inverse DWT, the output of which forms the image display. The repeated process whereby image views are specified may take the form of an interaction with a computer pointing device on an image display from a previous retrieval. 6 figs.
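The "seamless tiled DWT" idea is easiest to see with the Haar wavelet, whose two-tap filters never straddle an even-aligned tile boundary; longer wavelet filters require the cross-tile coefficient buffering the patent describes. A minimal NumPy sketch (an illustration of the tiling property only, not the patented method):

```python
import numpy as np

def haar2d(x):
    """One-level 2D Haar DWT; returns (LL, (LH, HL, HH)) subbands."""
    a = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)   # row-wise averages
    d = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)   # row-wise differences
    ll = (a[0::2] + a[1::2]) / np.sqrt(2)
    lh = (a[0::2] - a[1::2]) / np.sqrt(2)
    hl = (d[0::2] + d[1::2]) / np.sqrt(2)
    hh = (d[0::2] - d[1::2]) / np.sqrt(2)
    return ll, (lh, hl, hh)

rng = np.random.default_rng(1)
img = rng.standard_normal((8, 8))
ll_full, details_full = haar2d(img)
# Haar filters never cross an even-aligned tile border, so transforming
# two half-width tiles independently reproduces the whole-image subbands.
ll_left, _ = haar2d(img[:, :4])
ll_right, _ = haar2d(img[:, 4:])
```

Because the transform is orthonormal, the subbands also conserve the image energy, which is what makes coefficient-domain compression of the tiles equivalent to compressing the whole-image transform.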
Toyz: A framework for scientific analysis of large datasets and astronomical images
NASA Astrophysics Data System (ADS)
Moolekamp, F.; Mamajek, E.
2015-11-01
As the size of images and data products derived from astronomical data continues to increase, new tools are needed to visualize and interact with that data in a meaningful way. Motivated by our own astronomical images taken with the Dark Energy Camera (DECam), we present Toyz, an open-source Python package for viewing and analyzing images and data stored on a remote server or cluster. Users connect to the Toyz web application via a web browser, making it a convenient tool for students to visualize and interact with astronomical data without having to install any software on their local machines. In addition, it provides researchers with an easy-to-use tool that allows them to browse files on a server, quickly view very large images (>2 GB) taken with DECam and other large-FOV cameras, and create their own visualization tools that can be added as extensions to the default Toyz framework.
Development of a Large Field of View Shadowgraph System for a 16 Ft. Transonic Wind Tunnel
NASA Technical Reports Server (NTRS)
Talley, Michael A.; Jones, Stephen B.; Goodman, Wesley L.
2000-01-01
A large field of view shadowgraph flow visualization system for the Langley 16 ft. Transonic Tunnel (16 ft. TT) has been developed to provide a fast, low-cost, aerodynamic design-concept evaluation capability to support the development of the next generation of commercial and military aircraft and space launch vehicles. Key features of the 16 ft. TT shadowgraph system are: (1) high-resolution (1280 × 1024) digital snapshots and sequences; (2) video recording of shadowgraph at 30 frames per second; (3) pan, tilt, and zoom to find and observe flow features; (4) one-microsecond flash for freeze-frame images; (5) a large field of view, approximately 12 × 6 ft; and (6) a low-maintenance, high signal-to-noise-ratio, retro-reflective screen that allows shadowgraph imaging while test section lights are on.
NASA Technical Reports Server (NTRS)
Post, R. B.; Welch, R. B.
1996-01-01
Visually perceived eye level (VPEL) was measured while subjects viewed two vertical lines which were either upright or pitched about the horizontal axis. In separate conditions, the display consisted of a relatively large pair of lines viewed at a distance of 1 m, or a display scaled to one third the dimensions and viewed at a distance of either 1 m or 33.3 cm. The small display viewed at 33.3 cm produced a retinal image the same size as that of the large display at 1 m. Pitch of all three displays top-toward and top-away from the observer caused upward and downward VPEL shifts, respectively. These effects were highly similar for the large display and the small display viewed at 33.3 cm (i.e., equal retinal size), but were significantly smaller for the small display viewed at 1 m. In a second experiment, perceived size of the three displays was measured and found to be highly accurate. The results of the two experiments indicate that the effect of optical pitch on VPEL depends on the retinal image size of stimuli rather than on perceived size.
High speed color imaging through scattering media with a large field of view
NASA Astrophysics Data System (ADS)
Zhuang, Huichang; He, Hexiang; Xie, Xiangsheng; Zhou, Jianying
2016-09-01
Optical imaging through complex media has many important applications. Although research progress has been made in recovering optical images through various turbid media, widespread application of the technology is hampered by the recovery speed, requirements on specific illumination, poor image quality and limited field of view. Here we demonstrate that the above-mentioned drawbacks can be essentially overcome. High-speed color imaging through turbid media is successfully carried out by taking into account the medium's memory effect, the point spread function, the exit pupil of the optical system, and the optimized signal-to-noise ratio. By retrieving selected speckles with an enlarged field of view, a high-quality image is recovered at a speed determined only by the frame rate of the image-capturing device. An immediate application of the technique is the registration of static and dynamic images under human skin, recovering information with a wearable device.
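Within the memory-effect range, imaging through a thin scatterer behaves as a convolution of the object with the medium's point spread function, so once the PSF is known a standard non-iterative recovery is Wiener deconvolution. The sketch below uses a Gaussian PSF as a stand-in for a measured speckle PSF and is illustrative only, not the paper's pipeline:

```python
import numpy as np

def wiener_deconv(meas, psf, nsr=1e-6):
    """Wiener deconvolution in the Fourier domain:
    X = Y * conj(H) / (|H|^2 + nsr), a standard non-iterative recovery."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=meas.shape)
    Y = np.fft.fft2(meas)
    return np.real(np.fft.ifft2(Y * np.conj(H) / (np.abs(H) ** 2 + nsr)))

# Two point sources blurred by a known PSF (Gaussian here; in practice a
# speckle pattern calibrated from a point source behind the scatterer).
n = 32
yy, xx = np.mgrid[:n, :n]
psf = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / (2 * 1.5 ** 2))
psf /= psf.sum()
obj = np.zeros((n, n)); obj[10, 12] = 1.0; obj[20, 8] = 0.5
meas = np.real(np.fft.ifft2(np.fft.fft2(obj) *
                            np.fft.fft2(np.fft.ifftshift(psf))))
recovered = wiener_deconv(meas, psf)
```

The noise-to-signal parameter `nsr` regularizes frequencies where the PSF spectrum is weak; in a real speckle measurement it would be tuned to the camera noise level.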
Discriminative Multi-View Interactive Image Re-Ranking.
Li, Jun; Xu, Chang; Yang, Wankou; Sun, Changyin; Tao, Dacheng
2017-07-01
Given unreliable visual patterns and insufficient query information, content-based image retrieval is often suboptimal and requires image re-ranking using auxiliary information. In this paper, we propose a discriminative multi-view interactive image re-ranking (DMINTIR), which integrates user relevance feedback capturing users' intentions and multiple features that sufficiently describe the images. In DMINTIR, heterogeneous property features are incorporated in the multi-view learning scheme to exploit their complementarities. In addition, a discriminatively learned weight vector is obtained to reassign updated scores and target images for re-ranking. Compared with other multi-view learning techniques, our scheme not only generates a compact representation in the latent space from the redundant multi-view features but also maximally preserves the discriminative information in feature encoding by the large-margin principle. Furthermore, the generalization error bound of the proposed algorithm is theoretically analyzed and shown to be improved by the interactions between the latent space and discriminant function learning. Experimental results on two benchmark data sets demonstrate that our approach boosts baseline retrieval quality and is competitive with the other state-of-the-art re-ranking strategies.
Living in a World with Eyeballs: How Women Make Meaning of Body Image in the College Environment
ERIC Educational Resources Information Center
Stanley, Chrystal Ann
2013-01-01
Negative body image is pervasive among traditional, college-age women and takes a heavy toll on women's economic, personal, and political lives. Previous research has indicated that a large percentage of women hold negative views of their body. Women embarking on higher education are not exempt from these negative views. Conversely, college…
2006-05-10
This MOC image shows a summertime view of the south polar residual cap of Mars. In this image, mesas composed largely of solid carbon dioxide are separated from one another by irregularly-shaped depressions.
Icebergs Adrift in the Amundsen Sea
2002-03-27
The Thwaites Ice Tongue is a large sheet of glacial ice extending from the West Antarctic mainland into the southern Amundsen Sea. A large crack in the Thwaites Tongue was discovered in imagery from Terra's Moderate Resolution Imaging SpectroRadiometer (MODIS). Subsequent widening of the crack led to the calving of a large iceberg. The development of this berg, designated B-22 by the National Ice Center, can be observed in these images from the Multi-angle Imaging SpectroRadiometer, also aboard Terra. The two views were acquired by MISR's nadir (vertical-viewing) camera on March 10 and 24, 2002. The B-22 iceberg, located below and to the left of image center, measures approximately 82 kilometers long × 62 kilometers wide. Comparison of the two images shows the berg to have drifted away from the ice shelf edge. The breakup of ice near the shelf edge, in the area surrounding B-22, is also visible in the later image. These natural-color images were acquired during Terra orbits 11843 and 12047, respectively. At the right-hand edge is Pine Island Bay, where the calving of another large iceberg (B-21) occurred in November 2001. B-21 subsequently split into two smaller bergs, both of which are visible to the right of B-22. http://photojournal.jpl.nasa.gov/catalog/PIA03700
2017-08-11
These two views of Saturn's moon Titan exemplify how NASA's Cassini spacecraft has revealed the surface of this fascinating world. Cassini carried several instruments to pierce the veil of hydrocarbon haze that enshrouds Titan. The mission's imaging cameras also have several spectral filters sensitive to specific wavelengths of infrared light that are able to make it through the haze to the surface and back into space. These "spectral windows" have enabled the imaging cameras to map nearly the entire surface of Titan. In addition to Titan's surface, images from both the imaging cameras and VIMS have provided windows into the moon's ever-changing atmosphere, chronicling the appearance and movement of hazes and clouds over the years. A large, bright and feathery band of summer clouds can be seen arcing across high northern latitudes in the view at right. These views were obtained with the Cassini spacecraft narrow-angle camera on March 21, 2017. Images taken using red, green and blue spectral filters were combined to create the natural-color view at left. The false-color view at right was made by substituting an infrared image (centered at 938 nanometers) for the red color channel. The views were acquired at a distance of approximately 613,000 miles (986,000 kilometers) from Titan. Image scale is about 4 miles (6 kilometers) per pixel. https://photojournal.jpl.nasa.gov/catalog/PIA21624
WiseView: Visualizing motion and variability of faint WISE sources
NASA Astrophysics Data System (ADS)
Caselden, Dan; Westin, Paul, III; Meisner, Aaron; Kuchner, Marc; Colin, Guillaume
2018-06-01
WiseView renders image blinks of Wide-field Infrared Survey Explorer (WISE) coadds spanning a multi-year time baseline in a browser. The software allows for easy visual identification of motion and variability for sources far beyond the single-frame detection limit, a key threshold not surmounted by many studies. WiseView transparently gathers small image cutouts drawn from many terabytes of unWISE coadds, facilitating access to this large and unique dataset. Users need only input the coordinates of interest and can interactively tune parameters including the image stretch, colormap and blink rate. WiseView was developed in the context of the Backyard Worlds: Planet 9 citizen science project, and has enabled hundreds of brown dwarf candidate discoveries by citizen scientists and professional astronomers.
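The interactive stretch that a blink viewer exposes is typically a percentile-based linear remap of pixel values. A minimal sketch (the function name, default percentiles, and the synthetic frame are illustrative; this is not WiseView's code):

```python
import numpy as np

def stretch(img, lo=5.0, hi=99.0):
    """Percentile-based linear stretch to [0, 1], the kind of contrast
    control an image-blink viewer exposes interactively."""
    a, b = np.percentile(img, [lo, hi])
    return np.clip((img - a) / max(b - a, 1e-12), 0.0, 1.0)

frame = np.arange(101.0)        # stand-in for a coadd cutout's pixel values
view = stretch(frame)
```

Clipping at chosen percentiles rather than the absolute min/max keeps a few saturated or dead pixels from flattening the display of faint sources.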
The selection of the optimal baseline in the front-view monocular vision system
NASA Astrophysics Data System (ADS)
Xiong, Bincheng; Zhang, Jun; Zhang, Daimeng; Liu, Xiaomao; Tian, Jinwen
2018-03-01
In a front-view monocular vision system, the accuracy of solving the depth field is related to the length of the inter-frame baseline and the accuracy of the image matching result. In general, a longer baseline leads to higher precision in solving the depth field. At the same time, however, the difference between the inter-frame images increases, which makes image matching more difficult, decreases matching accuracy, and may ultimately cause the depth-field computation to fail. A common practice is to use a tracking-and-matching method to improve matching accuracy between images, but this approach is prone to matching drift between images separated by large intervals, producing cumulative matching error, so the accuracy of the recovered depth field remains low. In this paper, we propose a depth field fusion algorithm based on the optimal baseline length. First, we analyze the quantitative relationship between the accuracy of the depth field calculation and the inter-frame baseline length, and find the optimal baseline length through extensive experiments. Second, we introduce the inverse depth filtering technique from sparse SLAM and solve the depth field under the constraint of the optimal baseline length. A large number of experiments show that our algorithm can effectively eliminate mismatches caused by image changes and can still solve the depth field correctly in large-baseline scenes. Our algorithm is superior to the traditional SFM algorithm in time and space complexity. The optimal baseline obtained from extensive experiments serves as a guide for depth field calculation in front-view monocular vision systems.
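The baseline trade-off above follows from pinhole geometry: depth is Z = f·B/d, and first-order error propagation gives σ_Z = Z²/(f·B)·σ_d, so for a fixed matching error σ_d a longer baseline B yields better depth precision. A small sketch (the focal length and pixel-error values are assumed for illustration):

```python
def depth_from_disparity(f_px, baseline, disparity):
    """Pinhole two-view geometry: Z = f * B / d (consistent length units)."""
    return f_px * baseline / disparity

def depth_sigma(f_px, baseline, depth, disparity_sigma):
    """First-order error propagation: sigma_Z = Z^2 / (f * B) * sigma_d."""
    return depth ** 2 / (f_px * baseline) * disparity_sigma

f_px = 800.0      # focal length in pixels (assumed)
sigma_d = 0.5     # matching error in pixels (assumed)
z = 10.0          # scene depth in metres
sigma_short = depth_sigma(f_px, baseline=0.1, depth=z, disparity_sigma=sigma_d)
sigma_long = depth_sigma(f_px, baseline=0.5, depth=z, disparity_sigma=sigma_d)
```

The catch, as the abstract notes, is that σ_d itself grows with the baseline because matching gets harder, which is what makes an intermediate baseline optimal.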
Trans-pulmonary echocardiography as a guide for device closure of patent ductus arteriosus.
Kudo, Yoshiyuki; Suda, Kenji; Yoshimoto, Hironaga; Teramachi, Yozo; Kishimoto, Shintaro; Iemura, Motofumi; Matsuishi, Toyojiro
2015-08-01
The aim of this study was to develop trans-pulmonary echocardiography (TPE) to guide device closure of patent ductus arteriosus (DC-PDA). Aortography requires a large amount of contrast yet may give an inadequate image for evaluating anatomy or residual shunt in patients with a large PDA or dilated vessels, and it is precluded in patients with renal dysfunction. Practically, there is no imaging modality that can monitor the entire procedure except trans-esophageal echocardiography, which requires general anesthesia. Subjects were seven patients, aged 6 to 77 years, with body weight > 15 kg. The size of the PDA ranged from 1.8 to 6.3 mm, with pulmonary-to-systemic flow ratios from 1.2 to 2.2. During DC-PDA using an Amplatzer Duct Occluder or coil, an intra-cardiac echocardiographic (ICE) catheter was advanced into the pulmonary arteries and standard views were developed to guide DC-PDA. We have developed two standard views: the main pulmonary artery view (MPA view) and the left pulmonary artery view (LPA view). The MPA view provided an aortic short-axis view equivalent to that seen by trans-thoracic echocardiography in children. The LPA view, obtained by placing the echo probe in the LPA and turning it upside down, provided a long-axis view of the PDA, allowing more precise anatomical evaluation. TPE allowed us to monitor the entire procedure and determine residual shunts. TPE in the MPA and LPA views can be an effective guide for DC-PDA. This report leads to a new application of this imaging device. © 2015 Wiley Periodicals, Inc.
Rotationally Invariant Image Representation for Viewing Direction Classification in Cryo-EM
Zhao, Zhizhen; Singer, Amit
2014-01-01
We introduce a new rotationally invariant viewing angle classification method for identifying, among a large number of cryo-EM projection images, similar views without prior knowledge of the molecule. Our rotationally invariant features are based on the bispectrum. Each image is denoised and compressed using steerable principal component analysis (PCA) such that rotating an image is equivalent to phase shifting the expansion coefficients. Thus we are able to extend the theory of bispectrum of 1D periodic signals to 2D images. The randomized PCA algorithm is then used to efficiently reduce the dimensionality of the bispectrum coefficients, enabling fast computation of the similarity between any pair of images. The nearest neighbors provide an initial classification of similar viewing angles. In this way, rotational alignment is only performed for images with their nearest neighbors. The initial nearest neighbor classification and alignment are further improved by a new classification method called vector diffusion maps. Our pipeline for viewing angle classification and alignment is experimentally shown to be faster and more accurate than reference-free alignment with rotationally invariant K-means clustering, MSA/MRA 2D classification, and their modern approximations. PMID:24631969
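The key property exploited above is that, after steerable PCA, rotating an image only phase-shifts its expansion coefficients, so the bispectrum of the coefficient sequence is rotation invariant, exactly as the bispectrum of a 1D periodic signal is invariant to circular shifts. A minimal 1D demonstration of that invariance (illustrative only, not the paper's full pipeline):

```python
import numpy as np

def bispectrum(x):
    """Full bispectrum B(k1, k2) = F(k1) F(k2) conj(F(k1 + k2)) of a
    1D periodic signal; invariant to circular shifts of x."""
    F = np.fft.fft(x)
    n = len(x)
    k = np.arange(n)
    return F[:, None] * F[None, :] * np.conj(F[(k[:, None] + k[None, :]) % n])

rng = np.random.default_rng(3)
sig = rng.standard_normal(16)
# A circular shift plays the role of an in-plane rotation of the image
# after resampling onto the steerable (angular Fourier) basis.
shifted = np.roll(sig, 5)
```

Invariance holds because a shift by m multiplies F(k) by exp(-2πi k m / n), and the three phase factors in B(k1, k2) cancel since k1 + k2 - (k1 + k2) = 0.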
Large field of view, fast and low dose multimodal phase-contrast imaging at high x-ray energy.
Astolfo, Alberto; Endrizzi, Marco; Vittoria, Fabio A; Diemoz, Paul C; Price, Benjamin; Haig, Ian; Olivo, Alessandro
2017-05-19
X-ray phase contrast imaging (XPCI) is an innovative imaging technique which extends the contrast capabilities of 'conventional' absorption based x-ray systems. However, so far all XPCI implementations have suffered from one or more of the following limitations: low x-ray energies, small field of view (FOV) and long acquisition times. Those limitations relegated XPCI to a 'research-only' technique with an uncertain future in terms of large scale, high impact applications. We recently succeeded in designing, realizing and testing an XPCI system, which achieves significant steps toward simultaneously overcoming these limitations. Our system combines, for the first time, large FOV, high energy and fast scanning. Importantly, it is capable of providing high image quality at low x-ray doses, compatible with or even below those currently used in medical imaging. This extends the use of XPCI to areas which were unpractical or even inaccessible to previous XPCI solutions. We expect this will enable a long overdue translation into application fields such as security screening, industrial inspections and large FOV medical radiography - all with the inherent advantages of the XPCI multimodality.
Comet Wild 2 - Stardust Approach Image
NASA Technical Reports Server (NTRS)
2004-01-01
This image was taken during the close approach phase of Stardust's Jan 2, 2004 flyby of comet Wild 2. It is a distant side view of the roughly spherical comet nucleus. One hemisphere is in sunlight and the other is in shadow analogous to a view of the quarter moon. Several large depressed regions can be seen. Comet Wild 2 is about five kilometers (3.1 miles) in diameter.
Atmospheric Science Data Center
2013-04-17
article title: St. Petersburg, Russia ... The city in the south eastern portion of the image is Russia's St. Petersburg, which is the most northerly large city in the world at ...
McLeod, Euan; Luo, Wei; Mudanyali, Onur; Greenbaum, Alon
2013-01-01
The development of lensfree on-chip microscopy in the past decade has opened up various new possibilities for biomedical imaging across ultra-large fields of view using compact, portable, and cost-effective devices. However, until recently, its ability to resolve fine features and detect ultra-small particles has not rivalled the capabilities of the more expensive and bulky laboratory-grade optical microscopes. In this Frontier Review, we highlight the developments over the last two years that have enabled computational lensfree holographic on-chip microscopy to compete with and, in some cases, surpass conventional bright-field microscopy in its ability to image nano-scale objects across large fields of view, yielding giga-pixel phase and amplitude images. Lensfree microscopy has now achieved a numerical aperture as high as 0.92, with a spatial resolution as small as 225 nm across a large field of view (e.g., >20 mm²). Furthermore, the combination of lensfree microscopy with self-assembled nanolenses, forming nano-catenoid minimal surfaces around individual nanoparticles, has boosted the image contrast to levels high enough to permit bright-field imaging of individual particles smaller than 100 nm. These capabilities support a number of new applications, including, for example, the detection and sizing of individual virus particles using field-portable computational on-chip microscopes. PMID:23592185
Reference View Selection in DIBR-Based Multiview Coding.
Maugey, Thomas; Petrazzuoli, Giovanni; Frossard, Pascal; Cagnazzo, Marco; Pesquet-Popescu, Beatrice
2016-04-01
Augmented reality, interactive navigation in 3D scenes, multiview video, and other emerging multimedia applications require large sets of images, hence larger data volumes and increased resources compared with traditional video services. The significant increase in the number of images in multiview systems leads to new challenging problems in data representation and data transmission to provide high quality of experience on resource-constrained environments. In order to reduce the size of the data, different multiview video compression strategies have been proposed recently. Most of them use the concept of reference or key views that are used to estimate other images when there is high correlation in the data set. In such coding schemes, the two following questions become fundamental: 1) how many reference views have to be chosen for keeping a good reconstruction quality under coding cost constraints? And 2) where to place these key views in the multiview data set? As these questions are largely overlooked in the literature, we study the reference view selection problem and propose an algorithm for the optimal selection of reference views in multiview coding systems. Based on a novel metric that measures the similarity between the views, we formulate an optimization problem for the positioning of the reference views, such that both the distortion of the view reconstruction and the coding rate cost are minimized. We solve this new problem with a shortest path algorithm that determines both the optimal number of reference views and their positions in the image set. We experimentally validate our solution in a practical multiview distributed coding system and in the standardized 3D-HEVC multiview coding scheme. We show that considering the 3D scene geometry in the reference view positioning problem brings significant rate-distortion improvements and outperforms the traditional coding strategy that simply selects key frames based on the distance between cameras.
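The shortest-path formulation can be sketched as a dynamic program over a 1D camera arrangement: each node marks a view as a reference, and the edge from reference i to reference j pays the reference coding cost of j plus the cost of predicting the intermediate views from i. This toy version (with an assumed quadratic distortion model, not the paper's similarity metric, and with the first and last views forced to be references) illustrates the idea:

```python
def select_references(n_views, ref_cost, pred_cost):
    """Shortest-path selection of reference views along a 1D camera arc.
    Edge i -> j pays for coding view j as a reference plus predicting the
    intermediate views i+1..j-1 from reference i."""
    INF = float("inf")
    best = [INF] * n_views   # best[j]: min cost with view j a reference
    prev = [-1] * n_views
    best[0] = ref_cost       # view 0 is always a reference
    for j in range(1, n_views):
        for i in range(j):
            c = best[i] + ref_cost + sum(pred_cost(i, k) for k in range(i + 1, j))
            if c < best[j]:
                best[j], prev[j] = c, i
    refs, j = [], n_views - 1
    while j != -1:           # backtrack from the (forced) last reference
        refs.append(j)
        j = prev[j]
    return best[-1], refs[::-1]

# Illustrative model: prediction distortion grows quadratically with
# the index distance between a view and its reference.
cost, refs = select_references(5, ref_cost=3, pred_cost=lambda i, k: (k - i) ** 2)
```

With these numbers the optimum places one extra reference in the middle (views 0, 2, 4), balancing reference rate against prediction distortion; the DP also yields the optimal number of references, not just their positions.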
Light-sheet enhanced resolution of light field microscopy for rapid imaging of large volumes
NASA Astrophysics Data System (ADS)
Madrid Wolff, Jorge; Castro, Diego; Arbeláez, Pablo; Forero-Shelton, Manu
2018-02-01
Whole-brain imaging is challenging because it demands microscopes with high temporal and spatial resolution, which are often at odds, especially in the context of large fields of view. We have designed and built a light-sheet microscope with digital micromirror illumination and light-field detection. On the one hand, light sheets provide high resolution optical sectioning on live samples without compromising their viability. On the other hand, light field imaging makes it possible to reconstruct full volumes of relatively large fields of view from a single camera exposure; however, its enhanced temporal resolution comes at the expense of spatial resolution, limiting its applicability. We present an approach to increase the resolution of light field images using DMD-based light sheet illumination. To that end, we develop a method to produce synthetic resolution targets for light field microscopy and a procedure to correct the depth at which planes are refocused with rendering software. We measured the axial resolution as a function of depth and show a three-fold potential improvement with structured illumination, albeit by sacrificing some temporal resolution, also three-fold. This results in an imaging system that may be adjusted to specific needs without having to reassemble and realign it. This approach could be used to image relatively large samples at high rates.
A compact light-sheet microscope for the study of the mammalian central nervous system
Yang, Zhengyi; Haslehurst, Peter; Scott, Suzanne; Emptage, Nigel; Dholakia, Kishan
2016-01-01
Investigation of the transient processes integral to neuronal function demands rapid and high-resolution imaging techniques over a large field of view, which cannot be achieved with conventional scanning microscopes. Here we describe a compact light sheet fluorescence microscope, featuring a 45° inverted geometry and an integrated photolysis laser, that is optimized for applications in neuroscience, in particular fast imaging of sub-neuronal structures in mammalian brain slices. We demonstrate the utility of this design for three-dimensional morphological reconstruction, activation of a single synapse with localized photolysis, and fast imaging of neuronal Ca2+ signalling across a large field of view. The developed system opens up a host of novel applications for the neuroscience community. PMID:27215692
Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System
Lu, Yu; Wang, Keyi; Fan, Gongshu
2016-01-01
A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the focusing effect of uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and used when blending images, so that the panoramas reflect the objective luminance more faithfully. This overcomes the limitation of making stitched images look realistic solely through smoothing. The dynamic range limitation can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens; the dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857
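Correcting the vignetting and dark-current effects measured above is typically done with classic flat-field correction: subtract a dark frame, divide by the gain map estimated from a uniformly lit calibration frame, and rescale. A hedged sketch on synthetic data (the radial falloff model and all constants are illustrative, not the paper's calibration):

```python
import numpy as np

def flat_field_correct(raw, flat, dark):
    """Classic flat-field correction: divide out the per-pixel gain
    estimated from a uniformly lit frame, then restore the mean level."""
    gain = np.maximum(flat - dark, 1e-9)      # avoid division by zero
    return (raw - dark) / gain * gain.mean()

# Synthetic sensor: radial vignetting falloff plus a constant dark offset.
yy, xx = np.mgrid[:64, :64]
vignette = 1.0 - 0.25 * ((xx - 32) ** 2 + (yy - 32) ** 2) / 32 ** 2
dark = np.full((64, 64), 8.0)
raw = 100.0 * vignette + dark   # a uniform scene seen through the lens
flat = raw.copy()               # calibration shot of a uniform target
corrected = flat_field_correct(raw, flat, dark)
```

After correction a uniform scene comes out uniform, which is the precondition for blending the seven sub-images by actual luminance rather than by smoothing alone.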
Riffel, Philipp; Haneder, Stefan; Attenberger, Ulrike I; Brade, Joachim; Schoenberg, Stefan O; Michaely, Henrik J
2012-10-01
Different approaches exist for hybrid MRA of the calf station. So far, the order of acquisition of the focused calf MRA and the large field-of-view MRA has not been scientifically evaluated. Therefore, the aim of this study was to evaluate whether the quality of the combined large field-of-view MRA (CTM MR angiography) and time-resolved MRA with stochastic interleaved trajectories (TWIST MRA) depends on the order of acquisition of the two contrast-enhanced studies. In this retrospective study, 40 consecutive patients (mean age 68.1 ± 8.7 years, 29 male/11 female) who had undergone an MR angiographic protocol consisting of CTM-MRA (TR/TE, 2.4/1.0 ms; 21° flip angle; isotropic resolution 1.2 mm; gadolinium dose, 0.07 mmol/kg) and TWIST-MRA (TR/TE 2.8/1.1 ms; 20° flip angle; isotropic resolution 1.1 mm; temporal resolution 5.5 s; gadolinium dose, 0.03 mmol/kg) were included. In the first group (group 1), TWIST-MRA of the calf station was performed 1-2 min after CTM-MRA. In the second group (group 2), CTM-MRA was performed 1-2 min after TWIST-MRA of the calf station. The image quality of CTM-MRA and TWIST-MRA was evaluated by two independent radiologists in consensus according to a 4-point Likert-like rating scale assessing overall image quality on a segmental basis. Venous overlay was assessed per examination. In the CTM-MRA, 1360 segments were included in the assessment of image quality. CTM-MRA was diagnostic in 95% (1289/1360) of segments. There was a significant difference (p<0.0001) between the groups with regard to the number of segments rated as excellent and moderate. The image quality was rated as excellent in 80% (514/640) of segments in group 1 and in 67% (432/649) in group 2 (p<0.0001). In contrast, the image quality was rated as moderate in 5% (33/640) in group 1 and in 19% (121/649) in group 2 (p<0.0001). The venous overlay was disturbing in 10% in group 1 and 20% in group 2 (p=n.s.).
If a combined hybrid MRA approach with large field-of-view and time-resolved MRA is acquired, the large field-of-view MRA should be acquired first to achieve optimal image quality. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
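The abstract does not state which statistical test produced its p-values, but the reported rating proportions (514/640 segments rated excellent in group 1 vs 432/649 in group 2) can be checked with a standard two-proportion z-test; a minimal sketch, not necessarily the authors' method:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-statistic with a pooled
    standard error, comparing x1/n1 against x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 'excellent' ratings: group 1 vs group 2 from the abstract
z = two_proportion_z(514, 640, 432, 649)
```

A |z| above about 3.89 corresponds to p < 0.0001 (two-sided), consistent with the significance the study reports.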
In-Situ Visualization Experiments with ParaView Cinema in RAGE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kares, Robert John
2015-10-15
A previous paper described some numerical experiments performed using the ParaView/Catalyst in-situ visualization infrastructure deployed in the Los Alamos RAGE radiation-hydrodynamics code to produce images from a running large scale 3D ICF simulation. One challenge of the in-situ approach apparent in these experiments was the difficulty of choosing parameters like isosurface values for the visualizations to be produced from the running simulation without the benefit of prior knowledge of the simulation results, and the resultant cost of recomputing in-situ generated images when parameters are chosen suboptimally. A proposed method of addressing this difficulty is to simply render multiple images at runtime with a range of possible parameter values to produce a large database of images and to provide the user with a tool for managing the resulting database of imagery. Recently, ParaView/Catalyst has been extended to include such a capability via the so-called Cinema framework. Here I describe some initial experiments with the first delivery of Cinema and make some recommendations for future extensions of Cinema's capabilities.
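The "render every parameter combination up front, then browse" idea behind Cinema can be illustrated with a toy database builder. The `render` function below is a stand-in, not the ParaView/Catalyst API; a real pipeline would invoke the in-situ renderer and write actual image files.

```python
import itertools

def render(timestep, isovalue, camera_phi):
    # stand-in for an in-situ render call; returns a filename only
    return f"img_t{timestep}_iso{isovalue}_phi{camera_phi}.png"

def build_cinema_index(timesteps, isovalues, camera_phis):
    """Render every parameter combination and record it in a
    searchable index, mimicking the many-images-per-timestep
    database idea behind Cinema."""
    index = []
    for t, iso, phi in itertools.product(timesteps, isovalues, camera_phis):
        index.append({"timestep": t, "isovalue": iso,
                      "phi": phi, "file": render(t, iso, phi)})
    return index

index = build_cinema_index([0, 10], [0.2, 0.5, 0.8], [0, 90])
# 2 timesteps x 3 isovalues x 2 camera angles = 12 database entries
```

The cost trade-off is explicit here: the database grows as the product of the parameter ranges, which is exactly why a browsing tool over the image database becomes necessary.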
Whole-animal imaging with high spatio-temporal resolution
NASA Astrophysics Data System (ADS)
Chhetri, Raghav; Amat, Fernando; Wan, Yinan; Höckendorf, Burkhard; Lemon, William C.; Keller, Philipp J.
2016-03-01
We developed isotropic multiview (IsoView) light-sheet microscopy in order to image fast cellular dynamics, such as cell movements in an entire developing embryo or neuronal activity throughout an entire brain or nervous system, with high resolution in all dimensions, high imaging speeds, good physical coverage and low photo-damage. To achieve high temporal and high spatial resolution at the same time, IsoView microscopy rapidly images large specimens via simultaneous light-sheet illumination and fluorescence detection along four orthogonal directions. In a post-processing step, these four views are then combined by means of high-throughput multiview deconvolution to yield images with a system resolution of ≤ 450 nm in all three dimensions. Using IsoView microscopy, we performed whole-animal functional imaging of Drosophila embryos and larvae at a spatial resolution of 1.1-2.5 μm and at a temporal resolution of 2 Hz for up to 9 hours. We also performed whole-brain functional imaging in larval zebrafish and multicolor imaging of fast cellular dynamics across entire, gastrulating Drosophila embryos with isotropic, sub-cellular resolution. Compared with conventional (spatially anisotropic) light-sheet microscopy, IsoView microscopy improves spatial resolution at least sevenfold and decreases resolution anisotropy at least threefold. Compared with existing high-resolution light-sheet techniques, such as lattice light-sheet microscopy or diSPIM, IsoView microscopy effectively doubles the penetration depth and provides subsecond temporal resolution for specimens 400-fold larger than could previously be imaged.
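The multiview-deconvolution fusion step can be illustrated with a 1-D toy version of joint Richardson-Lucy deconvolution, one standard way to combine views blurred by different PSFs. This is a generic sketch, not IsoView's actual high-throughput implementation.

```python
import numpy as np

def rl_multiview(views, psfs, n_iter=50):
    """Joint Richardson-Lucy deconvolution: iteratively multiply the
    estimate by the average of each view's back-projected ratio
    y / (h * x), a 1-D analogue of fusing orthogonal light-sheet views."""
    est = np.full_like(views[0], views[0].mean())
    for _ in range(n_iter):
        ratios = []
        for y, h in zip(views, psfs):
            blurred = np.convolve(est, h, mode="same")
            ratio = y / np.maximum(blurred, 1e-12)
            # correlation with the PSF = convolution with its mirror
            ratios.append(np.convolve(ratio, h[::-1], mode="same"))
        est = est * np.mean(ratios, axis=0)
    return est

truth = np.zeros(64)
truth[32] = 1.0                               # a point source
psf_a = np.array([0.25, 0.5, 0.25])           # narrow blur from one view
psf_b = np.array([0.1, 0.2, 0.4, 0.2, 0.1])   # broader blur from another view
views = [np.convolve(truth, p, mode="same") for p in (psf_a, psf_b)]
recovered = rl_multiview(views, [psf_a, psf_b])
```

With complementary PSFs, the joint update recovers a sharper estimate than either view alone, which is the rationale for combining the four orthogonal IsoView detections.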
Colonoscopy tutorial software made with a cadaver's sectioned images.
Chung, Beom Sun; Chung, Min Suk; Park, Hyung Seon; Shin, Byeong-Seok; Kwon, Koojoo
2016-11-01
Novice doctors may watch tutorial videos in training for actual or computed tomographic (CT) colonoscopy. The conventional learning videos can be complemented by virtual colonoscopy software made with a cadaver's sectioned images (SIs). The objective of this study was to assist colonoscopy trainees with the new interactive software. Submucosal segmentation on the SIs was carried out through the whole length of the large intestine. With the SIs and segmented images, a three dimensional model was reconstructed. Six-hundred seventy-one proximal colonoscopic views (conventional views) and corresponding distal colonoscopic views (simulating the retroflexion of a colonoscope) were produced. Not only navigation views showing the current location of the colonoscope tip and its course, but also, supplementary description views were elaborated. The four corresponding views were put into convenient browsing software to be downloaded free from the homepage (anatomy.co.kr). The SI colonoscopy software with the realistic images and supportive tools was available to anybody. Users could readily notice the position and direction of the virtual colonoscope tip and recognize meaningful structures in colonoscopic views. The software is expected to be an auxiliary learning tool to improve technique and related knowledge in actual and CT colonoscopies. Hopefully, the software will be updated using raw images from the Visible Korean project. Copyright © 2016 Elsevier GmbH. All rights reserved.
Image Tiling for Profiling Large Objects
NASA Technical Reports Server (NTRS)
Venkataraman, Ajit; Schock, Harold; Mercer, Carolyn R.
1992-01-01
Three-dimensional surface measurements of large objects are required in a variety of industrial processes. The nature of these measurements is changing as optical instruments are beginning to replace conventional contact probes scanned over the objects. A common characteristic of optical surface profilers is the trade-off between measurement accuracy and field of view. In order to measure a large object with high accuracy, multiple views are required. An accurate transformation between the different views is needed to bring about their registration. In this paper, we demonstrate how the transformation parameters can be obtained precisely by choosing control points which lie in the overlapping regions of the images. A good starting point for the transformation parameters is obtained from knowledge of the scanner position. The selection of the control points is independent of the object geometry. By successively recording multiple views and obtaining transformations with respect to a single coordinate system, a complete physical model of an object can be obtained. Since all data are in the same coordinate system, they can be used for building automatic models of free-form surfaces.
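Estimating a view-to-view transformation from control points in the overlap region is commonly done with a least-squares rigid registration (the Kabsch/Procrustes solution). The 2-D sketch below is a generic formulation under that assumption, not necessarily the authors' exact algorithm.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rotation R and translation t such that
    dst ~ src @ R.T + t, from matched control points (Kabsch)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(u @ vt))        # guard against reflections
    r = (u @ np.diag([1.0, d]) @ vt).T
    t = dst.mean(axis=0) - src.mean(axis=0) @ r.T
    return r, t

theta = np.deg2rad(30)
r_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 3.0]])
dst = src @ r_true.T + np.array([5.0, -2.0])   # control points in the other view
r, t = estimate_rigid_transform(src, dst)
```

A rough transform from the known scanner position would serve as the starting point; the control-point solution then refines it regardless of the object's geometry.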
Song, Shaozhen; Xu, Jingjiang; Wang, Ruikang K
2016-11-01
Current optical coherence tomography (OCT) imaging suffers from short ranging distance and narrow imaging field of view (FOV). There is growing interest in searching for solutions to these limitations in order to further expand in vivo OCT applications. This paper describes a solution in which we utilize an akinetic swept source for OCT implementation to enable ~10 cm ranging distance, combined with a wide-angle camera lens in the sample arm to provide a FOV of ~20 × 20 cm2. The akinetic swept source operates at 1300 nm central wavelength with a bandwidth of 100 nm. We propose an adaptive calibration procedure for the programmable akinetic light source so that the sensitivity of the OCT system over the ~10 cm ranging distance is substantially improved for imaging of large-volume samples. We demonstrate the proposed swept source OCT system for in vivo imaging of entire human hands and faces with an unprecedented FOV (up to 400 cm2). The capability of large-volume OCT imaging with ultra-long ranging and ultra-wide FOV is expected to bring new opportunities for in vivo biomedical applications.
Wide-area phase-contrast X-ray imaging using large X-ray interferometers
NASA Astrophysics Data System (ADS)
Momose, Atsushi; Takeda, Tohoru; Yoneyama, Akio; Koyama, Ichiro; Itai, Yuji
2001-07-01
Large X-ray interferometers are developed for phase-contrast X-ray imaging aiming at medical applications. A monolithic X-ray interferometer and a separate one are studied, and currently a 25 mm × 20 mm view area can be generated. This paper describes the strategy of our research program and some recent developments.
Software for Viewing Landsat Mosaic Images
NASA Technical Reports Server (NTRS)
Watts, Zack; Farve, Catharine L.; Harvey, Craig
2003-01-01
A Windows-based computer program has been written to enable novice users (especially educators and students) to view images of large areas of the Earth (e.g., the continental United States) generated from image data acquired in the Landsat observations performed circa the year 1990. The large-area images are constructed as mosaics from the original Landsat images, which were acquired in several wavelength bands and each of which spans an area (in effect, one tile of a mosaic) of 0.5° in latitude by 0.6° in longitude. Whereas the original Landsat data are registered on a universal transverse Mercator (UTM) grid, the program converts the UTM coordinates of a mouse pointer in the image to latitude and longitude, which are continuously updated and displayed as the pointer is moved. The mosaic image currently on display can be exported as a Windows bitmap file. Other images (e.g., of state boundaries or interstate highways) can be overlaid on Landsat mosaics. The program interacts with the user via standard toolbar, keyboard, and mouse user interfaces. The program is supplied on a compact disk along with tutorial and educational information.
Non-ECG-gated unenhanced MRA of the carotids: optimization and clinical feasibility.
Raoult, H; Gauvrit, J Y; Schmitt, P; Le Couls, V; Bannier, E
2013-11-01
To optimise and assess the clinical feasibility of a carotid non-ECG-gated unenhanced MRA sequence. Sixteen healthy volunteers and 11 patients presenting with internal carotid artery (ICA) disease underwent large field-of-view balanced steady-state free precession (bSSFP) unenhanced MRA at 3T. Sampling schemes acquiring the k-space centre either early (kCE) or late (kCL) in the acquisition window were evaluated. Signal and image quality were scored in comparison to ECG-gated kCE unenhanced MRA and TOF. For patients, computed tomography angiography was used as the reference. In volunteers, kCE sampling yielded higher image quality than kCL and TOF, with fewer flow artefacts and improved signal homogeneity. kCE unenhanced MRA image quality was higher without ECG-gating. Arterial signal and artery/vein contrast were higher with both bSSFP sampling schemes than with TOF. The kCE sequence allowed correct quantification of ten significant stenoses, and it facilitated the identification of an infrapetrous dysplasia, which was outside of the TOF imaging coverage. Non-ECG-gated bSSFP carotid imaging offers high-quality images and is a promising sequence for carotid disease diagnosis in a short acquisition time with high spatial resolution and a large field of view. • Non-ECG-gated unenhanced bSSFP MRA offers high-quality imaging of the carotid arteries. • Sequences using early acquisition of the k-space centre achieve higher image quality. • Non-ECG-gated unenhanced bSSFP MRA allows quantification of significant carotid stenosis. • Short MR acquisition times and ungated sequences are helpful in clinical practice. • High 3D spatial resolution and a large field of view improve diagnostic performance.
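The kCE/kCL distinction above concerns when the contrast-defining centre of k-space is acquired. The sketch below generates illustrative centre-early (centric) and centre-late phase-encode orderings; vendor implementations differ in detail, so treat this purely as a conceptual model.

```python
def phase_encode_order(n, scheme):
    """Illustrative phase-encode line orderings for n lines:
    'kCE' acquires the k-space centre (line 0) first, working
    outward; 'kCL' reaches the centre last, working inward."""
    lines = list(range(-n // 2, n // 2))
    if scheme == "kCE":       # centre-out: 0, -1, 1, -2, 2, ...
        return sorted(lines, key=abs)
    if scheme == "kCL":       # edge-in: centre acquired last
        return sorted(lines, key=abs, reverse=True)
    raise ValueError(f"unknown scheme: {scheme}")

early = phase_encode_order(8, "kCE")
late = phase_encode_order(8, "kCL")
```

Because image contrast is dominated by the low spatial frequencies at the k-space centre, acquiring them early versus late changes which moment of the physiological cycle the image effectively "sees", which is why the two schemes score differently.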
Lensfree on-chip microscopy over a wide field-of-view using pixel super-resolution
Bishara, Waheb; Su, Ting-Wei; Coskun, Ahmet F.; Ozcan, Aydogan
2010-01-01
We demonstrate lensfree holographic microscopy on a chip to achieve ~0.6 µm spatial resolution corresponding to a numerical aperture of ~0.5 over a large field-of-view of ~24 mm2. By using partially coherent illumination from a large aperture (~50 µm), we acquire lower resolution lensfree in-line holograms of the objects with unit fringe magnification. For each lensfree hologram, the pixel size at the sensor chip limits the spatial resolution of the reconstructed image. To circumvent this limitation, we implement a sub-pixel shifting based super-resolution algorithm to effectively recover much higher resolution digital holograms of the objects, permitting sub-micron spatial resolution to be achieved across the entire sensor chip active area, which is also equivalent to the imaging field-of-view (24 mm2) due to unit magnification. We demonstrate the success of this pixel super-resolution approach by imaging patterned transparent substrates, blood smear samples, as well as Caenorhabditis elegans. PMID:20588977
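The sub-pixel-shift idea can be illustrated with the simplest "shift-and-add" scheme: low-resolution frames taken at known fractional-pixel offsets are interleaved onto a finer grid. The paper's actual algorithm solves a more general reconstruction problem; this toy assumes ideal point sampling and exactly known shifts.

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Fuse low-resolution frames with known sub-pixel shifts onto a
    grid `factor` times finer, the simplest pixel super-resolution."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        oy = int(round(dy * factor)) % factor
        ox = int(round(dx * factor)) % factor
        acc[oy::factor, ox::factor] += frame
        cnt[oy::factor, ox::factor] += 1
    cnt[cnt == 0] = 1          # leave unobserved fine-grid cells at zero
    return acc / cnt

# toy: a 6x6 high-res scene sampled by four half-pixel-shifted 3x3 frames
factor = 2
hi = np.arange(36, dtype=float).reshape(6, 6)
shifts = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
frames = [hi[int(2 * dy)::factor, int(2 * dx)::factor] for dy, dx in shifts]
recovered = shift_and_add(frames, shifts, factor)
```

With shifts covering every sub-pixel offset, the fine grid is fully populated and the high-resolution scene is recovered exactly in this idealized setting.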
2007-03-01
front of a large area blackbody as background. The viewing angle, defined as the angle between surface normal and camera line of sight, was varied by...and polarization angle were derived from the Stokes parameters. The dependence of these polarization characteristics on viewing angle was investigated
Lim, Jun; Park, So Yeong; Huang, Jung Yun; Han, Sung Mi; Kim, Hong-Tae
2013-01-01
We developed an off-axis-illuminated zone-plate-based hard x-ray Zernike phase-contrast microscope beamline at the Pohang Light Source. Owing to the condenser-free, off-axis illumination, a large field of view was achieved. The pinhole-type Zernike phase plate affords high-contrast images of a cell with minimal artifacts such as the shade-off and halo effects. The setup, including the optics and alignment, is simple, and allows faster and easier imaging of large bio-samples.
Wide field fluorescence epi-microscopy behind a scattering medium enabled by speckle correlations
NASA Astrophysics Data System (ADS)
Hofer, Matthias; Soeller, Christian; Brasselet, Sophie; Bertolotti, Jacopo
2018-04-01
Fluorescence microscopy is widely used in biological imaging; however, scattering from tissues strongly limits its applicability to a shallow depth. In this work we adapt a methodology inspired by stellar speckle interferometry, and exploit the optical memory effect to enable fluorescence microscopy through a turbid layer. We demonstrate efficient reconstruction of micrometer-size fluorescent objects behind a scattering medium in epi-microscopy, and study the specificities of this imaging modality (magnification, field of view, resolution) as compared to traditional microscopy. Using a modified phase retrieval algorithm to reconstruct fluorescent objects from speckle images, we demonstrate robust reconstructions even in relatively low signal-to-noise conditions. This modality is particularly appropriate for imaging in biological media, which are known to exhibit relatively large optical memory ranges, compatible with fields of view of tens of micrometers, and large spectral bandwidths, compatible with fluorescence emission spectra tens of nanometers wide.
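The key property exploited by such speckle-correlation methods is that, within the memory effect, the autocorrelation of the measured speckle image approximates the autocorrelation of the hidden object, from which a phase retrieval algorithm recovers the object. The sketch below computes only the autocorrelation step, via the Wiener-Khinchin theorem (inverse FFT of the power spectrum), on a noiseless two-point object.

```python
import numpy as np

def autocorrelation(img):
    """Autocorrelation via Wiener-Khinchin: inverse FFT of the power
    spectrum of the mean-subtracted image, shifted so zero lag is
    at the array centre."""
    f = np.fft.fft2(img - img.mean())
    ac = np.fft.ifft2(np.abs(f) ** 2).real
    return np.fft.fftshift(ac)

obj = np.zeros((32, 32))
obj[15, 14] = obj[15, 18] = 1.0   # two point-like fluorescent sources
ac = autocorrelation(obj)
# expect a dominant zero-lag peak at the centre plus side peaks at
# +/- the 4-pixel source separation
```

In the real experiment this autocorrelation is estimated from the raw speckle pattern itself, and the remaining task, inverting an autocorrelation into an object, is exactly where the modified phase retrieval algorithm comes in.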
Diedrichs, Phillippa C; Lee, Christina
2010-06-01
Increasing body size and shape diversity in media imagery may promote positive body image. While research has largely focused on female models and women's body image, men may also be affected by unrealistic images. We examined the impact of average-size and muscular male fashion models on men's and women's body image and perceived advertisement effectiveness. A sample of 330 men and 289 women viewed one of four advertisement conditions: no models, muscular, average-slim or average-large models. Men and women rated average-size models as equally effective in advertisements as muscular models. For men, exposure to average-size models was associated with more positive body image in comparison to viewing no models, but no difference was found in comparison to muscular models. Similar results were found for women. Internalisation of beauty ideals did not moderate these effects. These findings suggest that average-size male models can promote positive body image and appeal to consumers. 2010 Elsevier Ltd. All rights reserved.
Giga-pixel fluorescent imaging over an ultra-large field-of-view using a flatbed scanner.
Göröcs, Zoltán; Ling, Yuye; Yu, Meng Dai; Karahalios, Dimitri; Mogharabi, Kian; Lu, Kenny; Wei, Qingshan; Ozcan, Aydogan
2013-11-21
We demonstrate a new fluorescent imaging technique that can screen for fluorescent micro-objects over an ultra-wide field-of-view (FOV) of ~532 cm(2), i.e., 19 cm × 28 cm, reaching a space-bandwidth product of more than 2 billion. To achieve such a large FOV, we modified the hardware and software of a commercially available flatbed scanner, and added a custom-designed absorbing fluorescent filter and a two-dimensional array of external light sources for computer-controlled, high-angle fluorescent excitation. We also re-programmed the driver of the scanner to take full control of the scanner hardware and achieve the highest possible exposure time, gain and sensitivity for detection of fluorescent micro-objects through the gradient-index self-focusing lens array that is positioned in front of the scanner sensor chip. For example, the large FOV of our imaging platform allows us to screen more than 2.2 mL of undiluted whole blood for detection of fluorescent micro-objects within <5 minutes. This high-throughput fluorescent imaging platform could be useful for rare cell research and cytometry applications by enabling rapid screening of large volumes of optically dense media. Our results constitute the first time that a flatbed scanner has been converted to a fluorescent imaging system, achieving a record large FOV.
Multi-view 3D echocardiography compounding based on feature consistency
NASA Astrophysics Data System (ADS)
Yao, Cheng; Simpson, John M.; Schaeffter, Tobias; Penney, Graeme P.
2011-09-01
Echocardiography (echo) is a widely available method to obtain images of the heart; however, echo can suffer due to the presence of artefacts, high noise and a restricted field of view. One method to overcome these limitations is to use multiple images, using the 'best' parts from each image to produce a higher quality 'compounded' image. This paper describes our compounding algorithm which specifically aims to reduce the effect of echo artefacts as well as improving the signal-to-noise ratio, contrast and extending the field of view. Our method weights image information based on a local feature coherence/consistency between all the overlapping images. Validation has been carried out using phantom, volunteer and patient datasets consisting of up to ten multi-view 3D images. Multiple sets of phantom images were acquired, some directly from the phantom surface, and others by imaging through hard and soft tissue mimicking material to degrade the image quality. Our compounding method is compared to the original, uncompounded echocardiography images, and to two basic statistical compounding methods (mean and maximum). Results show that our method is able to take a set of ten images, degraded by soft and hard tissue artefacts, and produce a compounded image of equivalent quality to images acquired directly from the phantom. Our method on phantom, volunteer and patient data achieves almost the same signal-to-noise improvement as the mean method, while simultaneously almost achieving the same contrast improvement as the maximum method. We show a statistically significant improvement in image quality by using an increased number of images (ten compared to five), and visual inspection studies by three clinicians showed very strong preference for our compounded volumes in terms of overall high image quality, large field of view, high endocardial border definition and low cavity noise.
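The compounding idea, weighting each view's contribution by how well it agrees with the other views, can be sketched with a simple consensus rule. The weighting below (inverse distance to the cross-view median) is a crude stand-in for the paper's local feature coherence measure, chosen only to show how artefact-corrupted views get suppressed.

```python
import numpy as np

def coherence_compound(stack, eps=1e-6):
    """Compound overlapping views: pixels that disagree with the
    cross-view median get small weights, so shadowing artefacts
    present in only one view are suppressed."""
    stack = np.asarray(stack, dtype=float)
    median = np.median(stack, axis=0)
    weights = 1.0 / (np.abs(stack - median) + eps)
    return (weights * stack).sum(axis=0) / weights.sum(axis=0)

# three views of the same 4x4 region; view 3 has a shadowing artefact
views = [np.full((4, 4), 10.0),
         np.full((4, 4), 11.0),
         np.full((4, 4), 0.0)]
fused = coherence_compound(views)
```

A plain mean of these views would return 7, dragged down by the artefact, while the consensus-weighted result stays near the true intensity of 10; the max rule would keep the signal but also keep the noisiest value, which is the trade-off the paper's method balances.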
Mid-level image representations for real-time heart view plane classification of echocardiograms.
Penatti, Otávio A B; Werneck, Rafael de O; de Almeida, Waldir R; Stein, Bernardo V; Pazinato, Daniel V; Mendes Júnior, Pedro R; Torres, Ricardo da S; Rocha, Anderson
2015-11-01
In this paper, we explore mid-level image representations for real-time heart view plane classification of 2D echocardiogram ultrasound images. The proposed representations rely on bags of visual words, successfully used by the computer vision community in visual recognition problems. An important element of the proposed representations is the image sampling with large regions, drastically reducing the execution time of the image characterization procedure. Throughout an extensive set of experiments, we evaluate the proposed approach against different image descriptors for classifying four heart view planes. The results show that our approach is effective and efficient for the target problem, making it suitable for use in real-time setups. The proposed representations are also robust to different image transformations, e.g., downsampling, noise filtering, and different machine learning classifiers, keeping classification accuracy above 90%. Feature extraction can be performed at 30 fps or 60 fps in some cases. This paper also includes an in-depth review of the literature in the area of automatic echocardiogram view classification, giving the reader a thorough comprehension of this field of study. Copyright © 2015 Elsevier Ltd. All rights reserved.
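The bag-of-visual-words representation at the core of the paper maps each local descriptor to its nearest codebook entry and summarizes the image as a word-occurrence histogram. A minimal numpy sketch (toy 2-D descriptors and a hand-picked codebook; real systems learn the codebook, e.g. with k-means, from training descriptors):

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word
    (Euclidean distance) and return the L1-normalized histogram,
    the mid-level image representation fed to a classifier."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])   # 3 visual words
descs = np.array([[0.1, 0.0], [0.9, 1.1], [1.0, 0.9], [0.0, 0.1]])
hist = bovw_histogram(descs, codebook)
```

Sampling descriptors over large regions, as the paper proposes, simply means fewer rows in `descs`, which is why characterization becomes fast enough for real-time use.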
Tiggemann, Marika; Brown, Zoe; Zaccardo, Mia; Thomas, Nicole
2017-06-01
The present experiment aimed to investigate the impact of the addition of disclaimer labels to fashion magazine shoots on women's body dissatisfaction. Participants were 320 female undergraduate students who viewed fashion shoots containing a thin and attractive model with no disclaimer label, or a small, large, or very large disclaimer label, or product images. Although thin-ideal fashion shoot images resulted in greater body dissatisfaction than product images, there was no significant effect of disclaimer label. Internalisation of the thin ideal was found to moderate the effect of disclaimer label, such that internalisation predicted increased body dissatisfaction in the no label and small label conditions, but not in the larger label conditions. Overall, the results showed no benefit for any size of disclaimer label in ameliorating the negative effect of viewing thin-ideal media images. It was concluded that more extensive research is required before the effective implementation of disclaimer labels. Copyright © 2017 Elsevier Ltd. All rights reserved.
SPED light sheet microscopy: fast mapping of biological system structure and function
Tomer, Raju; Lovett-Barron, Matthew; Kauvar, Isaac; Andalman, Aaron; Burns, Vanessa M.; Sankaran, Sethuraman; Grosenick, Logan; Broxton, Michael; Yang, Samuel; Deisseroth, Karl
2016-01-01
The goal of understanding living nervous systems has driven interest in high-speed and large field-of-view volumetric imaging at cellular resolution. Light-sheet microscopy approaches have emerged for cellular-resolution functional brain imaging in small organisms such as larval zebrafish, but remain fundamentally limited in speed. Here we have developed SPED light sheet microscopy, which combines large volumetric field-of-view via an extended depth of field with the optical sectioning of light sheet microscopy, thereby eliminating the need to physically scan detection objectives for volumetric imaging. SPED enables scanning of thousands of volumes-per-second, limited only by camera acquisition rate, through the harnessing of optical mechanisms that normally result in unwanted spherical aberrations. We demonstrate capabilities of SPED microscopy by performing fast sub-cellular resolution imaging of CLARITY mouse brains and cellular-resolution volumetric Ca2+ imaging of entire zebrafish nervous systems. Together, SPED light sheet methods enable high-speed cellular-resolution volumetric mapping of biological system structure and function. PMID:26687363
NASA Astrophysics Data System (ADS)
Hu, Bihe; Bolus, Daniel; Brown, J. Quincy
2018-02-01
Current gold-standard histopathology for cancerous biopsies is destructive, time consuming, and limited to 2D slices, which do not faithfully represent true 3D tumor micro-morphology. Light sheet microscopy has emerged as a powerful tool for 3D imaging of cancer biospecimens. Here, we utilize the versatile dual-view inverted selective plane illumination microscopy (diSPIM) to render digital histological images of cancer biopsies. Dual-view architecture enabled more isotropic resolution in X, Y, and Z; and different imaging modes, such as adding electronic confocal slit detection (eCSD) or structured illumination (SI), can be used to improve degraded image quality caused by background signal of large, scattering samples. To obtain traditional H&E-like images, we used DRAQ5 and eosin (D&E) staining, with 488nm and 647nm laser illumination, and multi-band filter sets. Here, phantom beads and a D&E stained buccal cell sample have been used to verify our dual-view method. We also show that via dual view imaging and deconvolution, more isotropic resolution has been achieved for optical cleared human prostate sample, providing more accurate quantitation of 3D tumor architecture than was possible with single-view SPIM methods. We demonstrate that the optimized diSPIM delivers more precise analysis of 3D cancer microarchitecture in human prostate biopsy than simpler light sheet microscopy arrangements.
How does c-view image quality compare with conventional 2D FFDM?
Nelson, Jeffrey S; Wells, Jered R; Baker, Jay A; Samei, Ehsan
2016-05-01
The FDA approved the use of digital breast tomosynthesis (DBT) in 2011 as an adjunct to 2D full field digital mammography (FFDM) with the constraint that all DBT acquisitions must be paired with a 2D image to assure adequate interpretative information is provided. Recently manufacturers have developed methods to provide a synthesized 2D image generated from the DBT data with the hope of sparing patients the radiation exposure from the FFDM acquisition. While this much needed alternative effectively reduces the total radiation burden, differences in image quality must also be considered. The goal of this study was to compare the intrinsic image quality of synthesized 2D c-view and 2D FFDM images in terms of resolution, contrast, and noise. Two phantoms were utilized in this study: the American College of Radiology mammography accreditation phantom (ACR phantom) and a novel 3D printed anthropomorphic breast phantom. Both phantoms were imaged using a Hologic Selenia Dimensions 3D system. Analysis of the ACR phantom includes both visual inspection and objective automated analysis using in-house software. Analysis of the 3D anthropomorphic phantom includes visual assessment of resolution and Fourier analysis of the noise. Using ACR-defined scoring criteria for the ACR phantom, the FFDM images scored statistically higher than c-view according to both the average observer and automated scores. In addition, between 50% and 70% of c-view images failed to meet the nominal minimum ACR accreditation requirements, primarily due to fiber breaks. Software analysis demonstrated that c-view provided enhanced visualization of medium and large microcalcification objects; however, the benefits diminished for smaller high contrast objects and all low contrast objects. Visual analysis of the anthropomorphic phantom showed a measurable loss of resolution in the c-view image (11 lp/mm FFDM, 5 lp/mm c-view) and loss in detection of small microcalcification objects.
Spectral analysis of the anthropomorphic phantom showed higher total noise magnitude in the FFDM image compared with c-view. Whereas the FFDM image contained approximately white noise texture, the c-view image exhibited marked noise reduction at mid and high frequencies with far less noise suppression at low frequencies, resulting in a mottled noise appearance. This analysis demonstrates many instances where the c-view image quality differs from FFDM. Compared to FFDM, c-view offers a better depiction of objects of certain size and contrast, but provides poorer overall resolution and noise properties. Based on these findings, the utilization of c-view images in the clinical setting requires careful consideration, especially if considering the discontinuation of FFDM imaging. Not explicitly explored in this study is how the combination of DBT + c-view performs relative to DBT + FFDM or FFDM alone.
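The Fourier noise analysis described above amounts to computing a noise power spectrum (NPS). The sketch below contrasts white noise (flat NPS, like the FFDM texture) with low-pass-filtered noise (power concentrated at low frequency, qualitatively like the mottled c-view texture). The 3x3 box filter is just an assumed stand-in for whatever processing the synthesized image applies.

```python
import numpy as np

def nps_1d(noise_image):
    """Row-averaged 1-D noise power spectrum of a zero-mean-adjusted
    noise field (a simplified version of radial NPS averaging)."""
    f = np.fft.fft2(noise_image - noise_image.mean())
    nps2d = np.abs(f) ** 2 / noise_image.size
    return nps2d.mean(axis=0)[: noise_image.shape[1] // 2]

rng = np.random.default_rng(0)
white = rng.normal(size=(128, 128))          # FFDM-like: flat spectrum
# crude low-pass filtering mimics mid/high-frequency noise suppression
kernel = np.ones((3, 3)) / 9.0
smooth = np.real(np.fft.ifft2(np.fft.fft2(white)
                              * np.fft.fft2(kernel, (128, 128))))
nps_white = nps_1d(white)
nps_smooth = nps_1d(smooth)
```

Comparing the low-frequency and high-frequency portions of `nps_smooth` makes the "suppressed at high frequency, retained at low frequency" signature quantitative, which is exactly the kind of evidence the study reports for c-view.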
Clementine Images of Earth and Moon
NASA Technical Reports Server (NTRS)
1997-01-01
During its flight and lunar orbit, the Clementine spacecraft returned images of the planet Earth and the Moon. This collection of UVVIS camera Clementine images shows the Earth as seen from the Moon, plus three additional images of the Earth.
The image on the left shows the Earth as seen across the lunar north pole; the large crater in the foreground is Plaskett. The Earth actually appeared about twice as far above the lunar horizon as shown. The top right image shows the Earth as viewed by the UVVIS camera while Clementine was in transit to the Moon; swirling white cloud patterns indicate storms. The two views of southeastern Africa were acquired by the UVVIS camera while Clementine was in low Earth orbit early in the mission.
Birth of a Large Iceberg in Pine Island Bay, Antarctica
2001-11-14
A large tabular iceberg (42 kilometers x 17 kilometers) broke off Pine Island Glacier, West Antarctica (75°S latitude, 102°W longitude) sometime between November 4 and 12, 2001. Images of the glacier were acquired by the Multi-angle Imaging SpectroRadiometer (MISR) instrument aboard NASA's Terra spacecraft. This event was preceded by the formation of a large crack across the glacier in mid-2000. Data gathered by other imaging instruments revealed the crack to be propagating through the shelf ice at a rate averaging 15 meters per day, accompanied by a slight rotation of about one percent per year at the seaward margin of the rift. The image set shows three views of Pine Island Glacier acquired by MISR's vertical-viewing (nadir) camera. The first was captured in late 2000, early in the development of the crack. The second and third views were acquired in November 2001, just before and just after the new iceberg broke off. The existence of the crack took the glaciological community by surprise, and the rapid rate at which the crack propagated was also not anticipated. Glaciologists predicted that the rift would reach the other side of the glacier sometime in 2002. However, the iceberg detached much sooner than anticipated, and the last 10-kilometer segment that was still attached to the ice shelf snapped off in a matter of days. http://photojournal.jpl.nasa.gov/catalog/PIA03431
Emotions' Impact on Viewing Behavior under Natural Conditions
Kaspar, Kai; Hloucal, Teresa-Maria; Kriz, Jürgen; Canzler, Sonja; Gameiro, Ricardo Ramos; Krapp, Vanessa; König, Peter
2013-01-01
Human overt attention under natural conditions is guided by stimulus features as well as by higher cognitive components, such as task and emotional context. In contrast to the considerable progress regarding the former, insight into the interaction of emotions and attention is limited. Here we investigate the influence of the current emotional context on viewing behavior under natural conditions. In two eye-tracking studies participants freely viewed complex scenes embedded in sequences of emotion-laden images. The latter primes constituted specific emotional contexts for neutral target images. Viewing behavior toward target images embedded into sets of primes was affected by the current emotional context, revealing the intensity of the emotional context as a significant moderator. The primes themselves were not scanned in different ways when presented within a block (Study 1), but when presented individually, negative primes were more actively scanned than positive primes (Study 2). These divergent results suggest an interaction between emotional priming and further context factors. Additionally, in most cases primes were scanned more actively than target images. Interestingly, the mere presence of emotion-laden stimuli in a set of images of different categories slowed down viewing activity overall, but the known effect of image category was not affected. Finally, viewing behavior remained largely constant on single images as well as across the targets' post-prime positions (Study 2). We conclude that the emotional context significantly influences the exploration of complex scenes and the emotional context has to be considered in predictions of eye-movement patterns. PMID:23326353
Elliptical field-of-view PROPELLER imaging.
Devaraj, Ajit; Pipe, James G
2009-09-01
Traditionally, two-dimensional scans are designed to support an isotropic field-of-view (iFOV). When imaging elongated objects, significant savings in scan time can potentially be achieved by supporting an elliptical field-of-view (eFOV). This work presents an empirical closed-form solution to adapt the PROPELLER trajectory for an eFOV. The proposed solution is built on the geometry of the PROPELLER trajectory, permitting the scan prescription and data reconstruction to remain largely similar to standard PROPELLER. The achieved FOV is experimentally validated by the point spread function (PSF) of a phantom scan. The details of potential savings in scan time and the signal-to-noise ratio (SNR) performance in comparison to iFOV scans for both phantom and in-vivo images are also described.
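The sampling trade-off behind an elliptical FOV can be sketched numerically. The snippet below is a toy model, not the paper's empirical closed-form solution: it takes the Nyquist k-space sample spacing for a PROPELLER blade at angle theta as the reciprocal of the elliptical FOV extent along that direction. All dimensions are illustrative.

```python
import math

def ellipse_fov(theta, fov_x, fov_y):
    """Extent of an elliptical FOV along direction theta (toy model)."""
    a, b = fov_x / 2.0, fov_y / 2.0
    r = (a * b) / math.sqrt((b * math.cos(theta))**2 + (a * math.sin(theta))**2)
    return 2.0 * r  # full FOV diameter along theta

def blade_sample_spacing(theta, fov_x, fov_y):
    """Nyquist k-space sample spacing (1/m) for a blade at angle theta."""
    return 1.0 / ellipse_fov(theta, fov_x, fov_y)

# Along the long axis the FOV is larger, so sampling must be finer;
# along the short axis coarser spacing suffices, which is where time is saved.
dx = blade_sample_spacing(0.0, 0.40, 0.20)          # long axis (40 cm FOV)
dy = blade_sample_spacing(math.pi / 2, 0.40, 0.20)  # short axis (20 cm FOV)
```

Blades oriented along the short FOV axis need only half the k-space sampling density of those along the long axis, which is the source of the scan-time savings the abstract describes.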
Wide-field-of-view millimeter-wave telescope design with ultra-low cross-polarization
NASA Astrophysics Data System (ADS)
Bernacki, Bruce E.; Kelly, James F.; Sheen, David; Hatchell, Brian; Valdez, Patrick; Tedeschi, Jonathan; Hall, Thomas; McMakin, Douglas
2012-06-01
As millimeter-wave arrays become available, off-axis imaging performance of the fore optics increases in importance due to the relatively large physical extent of the arrays. Typically, simple optical telescope designs are adapted to millimeter-wave imaging but single-mirror spherical or classic conic designs cannot deliver adequate image quality except near the optical axis. Since millimeter-wave designs are quasi-optical, optical ray tracing and commercial design software can be used to optimize designs to improve off-axis imaging as well as minimize cross-polarization. Methods that obey the Dragone-Mizuguchi condition for the design of reflective millimeter-wave telescopes with low cross-polarization also provide additional degrees of freedom that offer larger fields of view than possible with single-reflector designs. Dragone's graphical design method does not lend itself readily to computer-based optical design approaches, but subsequent authors expanded on Dragone's geometric design approach with analytic expressions that describe the location, shape, off-axis height and tilt of the telescope elements that satisfy Dragone's design rules and can be used as a first-order design for subsequent computer-based design and optimization. We investigate two design variants that obey the Dragone-Mizuguchi conditions that exhibit ultra-low cross-polarization and a large diffraction-limited field of view well suited to millimeter-wave imaging arrays.
3D Cryo-Imaging: A Very High-Resolution View of the Whole Mouse
Roy, Debashish; Steyer, Grant J.; Gargesha, Madhusudhana; Stone, Meredith E.; Wilson, David L.
2009-01-01
We developed the Case Cryo-imaging system that provides information-rich, very high-resolution, color brightfield, and molecular fluorescence images of a whole mouse using a section-and-image block-face imaging technology. The system consists of a mouse-sized, motorized cryo-microtome with special features for imaging, a modified brightfield/fluorescence microscope, and a robotic xyz imaging system positioner, all fully automated by a control system. Using the robotic system, we acquired microscopic tiled images at a pixel size of 15.6 µm over the block face of a whole mouse sectioned at 40 µm, with a total data volume of 55 GB. Viewing 2D images at multiple resolutions, we identified small structures such as cardiac vessels, muscle layers, villi of the small intestine, the optic nerve, and layers of the eye. Cryo-imaging was also suitable for imaging embryo mutants in 3D. A mouse, in which enhanced green fluorescent protein was expressed under the gamma actin promoter in smooth muscle cells, gave clear 3D views of smooth muscle in the urogenital and gastrointestinal tracts. With cryo-imaging, we could obtain 3D vasculature down to 10 µm, over very large regions of mouse brain. Software is fully automated with fully programmable imaging/sectioning protocols, email notifications, and automatic volume visualization. With a unique combination of field-of-view, depth of field, contrast, and resolution, the Case Cryo-imaging system fills the gap between whole-animal in vivo imaging and histology. PMID:19248166
MISR and AirMISR Simultaneously Observe African Grassland Fires
NASA Technical Reports Server (NTRS)
2000-01-01
These images of northeastern South Africa, near Kruger National Park, were acquired on September 7, 2000. The left image shows an 85-kilometer wide x 200-kilometer long area captured by MISR's aftward-viewing 45-degree camera. At lower left are the Drakensberg Mountains; to the east of this range a large burn scar with thin smoke plumes from still-smoldering fires is visible. Near the top of the image another large burn scar with an open-pit mine at its western edge can be seen. Other burn scars are scattered throughout the image.
Just above the center of the lefthand image is a polygonal burn scar with a set of smoke plumes from actively burning fires at its southwestern tip. The righthand image, which is a 'zoomed-in' view of the area, was acquired almost simultaneously by MISR's airborne counterpart, AirMISR, aboard a NASA ER-2 high-altitude aircraft. AirMISR contains a single camera that rotates to different view angles; when this image was acquired the camera was pointed straight downward. Because the ER-2 aircraft flies at an altitude of 20 kilometers, whereas the Terra spacecraft orbits the Earth 700 kilometers above the ground, the AirMISR image has 35 times finer spatial resolution. The AirMISR image covers about 9 kilometers x 9 kilometers. Unlike the MISR view, the AirMISR data are in 'raw' form and processing to remove radiometric and geometric distortions has not yet been performed. Fires such as those shown in the images are deliberately set to burn off dry vegetation, and constitute a widespread agricultural practice in many parts of Africa. These MISR and AirMISR images are part of an international field, aircraft, and satellite data collection and analysis campaign known as SAFARI-2000, the Southern Africa Regional Science Initiative. SAFARI-2000 is designed, in part, to study the effects of large-scale human activities on the regional climate, meteorology, and ecosystems. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Combined multi-spectrum and orthogonal Laplacianfaces for fast CB-XLCT imaging with single-view data
NASA Astrophysics Data System (ADS)
Zhang, Haibo; Geng, Guohua; Chen, Yanrong; Qu, Xuan; Zhao, Fengjun; Hou, Yuqing; Yi, Huangjian; He, Xiaowei
2017-12-01
Cone-beam X-ray luminescence computed tomography (CB-XLCT) is an attractive hybrid imaging modality, which has the potential of monitoring the metabolic processes of nanophosphor-based drugs in vivo. Single-view data reconstruction, a key issue in CB-XLCT imaging, enables the effective study of dynamic XLCT imaging. However, it suffers from serious ill-posedness in the inverse problem. In this paper, a multi-spectrum strategy, based on the third-order simplified spherical harmonic approximation model, is adopted to relieve the ill-posedness of the reconstruction. Then, an orthogonal Laplacianfaces-based method is proposed to reduce the large computational burden without degrading imaging quality. Both simulated data and in vivo experimental data were used to evaluate the efficiency and robustness of the proposed method. The results are satisfactory in terms of both localization and quantitative recovery with computational efficiency, indicating that the proposed method is practical and promising for single-view CB-XLCT imaging.
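The computational-burden argument can be illustrated with a generic reduced-basis reconstruction: projecting the unknown source distribution onto a small orthonormal basis (here a random stand-in for the orthogonal Laplacianfaces basis) shrinks the regularized least-squares problem from the voxel count down to the basis size. All matrices and sizes below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

n_meas, n_vox, n_basis = 60, 200, 10  # measurements, voxels, reduced basis size
A = rng.standard_normal((n_meas, n_vox))                    # forward (system) matrix
B = np.linalg.qr(rng.standard_normal((n_vox, n_basis)))[0]  # orthonormal basis (stand-in)

x_true = B @ rng.standard_normal(n_basis)  # ground truth lying in the basis span
y = A @ x_true                             # simulated boundary measurements

# Reduced, Tikhonov-regularized least squares: solve only for basis coefficients z,
# a 10x10 system instead of a 200-unknown ill-posed problem.
AB = A @ B
lam = 1e-6
z = np.linalg.solve(AB.T @ AB + lam * np.eye(n_basis), AB.T @ y)
x_rec = B @ z

err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

The normal-equation system scales with the basis dimension rather than the voxel count, which is the kind of saving a Laplacianfaces-style projection buys.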
NASA Astrophysics Data System (ADS)
Xie, Hongbo; Ren, Delun; Wang, Chao; Mao, Chensheng; Yang, Lei
2018-02-01
Ultrafast time stretch imaging offers unprecedented imaging speed and enables new discoveries in scientific research and engineering. One challenge in exploiting time stretch imaging in the mid-infrared is the lack of high-quality diffractive optical elements (DOEs), which encode the image information into the mid-infrared optical spectrum. This work reports the design and optimization of a mid-infrared DOE with high diffraction efficiency, broad bandwidth, and large field of view. Using various typical materials with refractive indices ranging from 1.32 to 4.06 in the ? mid-infrared band, diffraction efficiencies of single-layer and double-layer DOEs have been studied in different wavelength bands with different fields of view. More importantly, by replacing the air gap of the double-layer DOE with carefully selected optical materials, one optimized ? triple-layer DOE, with efficiency higher than 95% in the whole ? mid-infrared window and a field of view greater than ?, is designed and analyzed. This new DOE device holds great potential in ultrafast mid-infrared time stretch imaging and spectroscopy.
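For context, the first-order efficiency of a single-layer kinoform DOE in scalar diffraction theory follows a sinc-squared law in the detuning from the design wavelength. The sketch below uses that textbook relation while ignoring material dispersion; the paper's multilayer optimization is considerably more involved, and the wavelengths here are illustrative.

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi x)/(pi x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def doe_efficiency(wavelength, design_wavelength):
    """First-order scalar diffraction efficiency of a single-layer kinoform,
    ignoring material dispersion: eta = sinc^2(1 - lambda0/lambda)."""
    return sinc(1.0 - design_wavelength / wavelength) ** 2

eta_design = doe_efficiency(4.0, 4.0)   # at the design wavelength
eta_detuned = doe_efficiency(5.0, 4.0)  # detuned into the band edge
```

This drop-off away from the design wavelength is exactly why broadband multilayer designs (double-layer, or the gap-filled triple-layer described above) are needed to hold efficiency above 95% across a wide mid-infrared window.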
Preliminary analysis of Dione Regio, Venus: The final Magellan regional imaging gap
NASA Technical Reports Server (NTRS)
Keddie, S. T.
1993-01-01
In Sep. 1992, the Magellan spacecraft filled the final large gap in its coverage of Venus when it imaged an area west of Alpha Regio. F-BIDR's and some test MIDR's of parts of this area were available as of late December. Dione Regio was imaged by the Arecibo Observatory, and a preliminary investigation of Magellan images supports the interpretations based on these earlier images: Dione Regio is a regional highland on which are superposed three large, very distinct volcanic edifices. The superior resolution and different viewing geometry of the Magellan images also clarified some uncertainties and revealed fascinating details about this region.
Planarity constrained multi-view depth map reconstruction for urban scenes
NASA Astrophysics Data System (ADS)
Hou, Yaolin; Peng, Jianwei; Hu, Zhihua; Tao, Pengjie; Shan, Jie
2018-05-01
Multi-view depth map reconstruction is regarded as a suitable approach for 3D generation of large-scale scenes due to its flexibility and scalability. However, challenges arise when this technique is applied to urban scenes, where man-made regular shapes are often present. To address these challenges, this paper proposes a planarity-constrained multi-view depth (PMVD) map reconstruction method. Starting with image segmentation and feature matching for each input image, the main procedure is iterative optimization under the constraints of planar geometry and smoothness. A set of candidate local planes is first generated by an extended PatchMatch method. The image matching costs are then computed and aggregated by an adaptive-manifold filter (AMF), whereby the smoothness constraint is applied to adjacent pixels through belief propagation. Finally, multiple criteria are used to eliminate image matching outliers. (Vertical) aerial images, oblique (aerial) images and ground images are used for qualitative and quantitative evaluations. The experiments demonstrated that PMVD outperforms popular multi-view depth map reconstruction, with accuracy two times better for the aerial datasets, and achieves an outcome comparable to the state-of-the-art for ground images. As expected, PMVD is able to preserve the planarity of piecewise flat structures in urban scenes and restore the edges in depth-discontinuous areas.
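The core of evaluating a PatchMatch-style plane hypothesis is the depth a candidate plane induces at a pixel: intersect the pixel's viewing ray with the plane n·X = d in camera coordinates. A minimal sketch assuming a pinhole camera model; all parameter names and numbers are illustrative, not the paper's formulation.

```python
def plane_depth(nx, ny, nz, d, x, y, fx, fy, cx, cy):
    """Depth induced at pixel (x, y) by the plane n.X = d in camera coordinates,
    for a pinhole camera with focal lengths fx, fy and principal point (cx, cy)."""
    # Viewing ray through the pixel, parameterised so depth equals the Z coordinate
    rx, ry, rz = (x - cx) / fx, (y - cy) / fy, 1.0
    denom = nx * rx + ny * ry + nz * rz
    if abs(denom) < 1e-12:
        return float("inf")  # ray (nearly) parallel to the plane
    return d / denom         # Z-depth of the ray/plane intersection

# A fronto-parallel plane at Z = 5 induces depth 5 at the principal point
z = plane_depth(0.0, 0.0, 1.0, 5.0, x=320, y=240, fx=500, fy=500, cx=320, cy=240)
```

Scoring each candidate plane then reduces to warping the patch by the induced depth and comparing photoconsistency costs, which is where the AMF aggregation enters.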
Development of CAD prototype system for Crohn's disease
NASA Astrophysics Data System (ADS)
Oda, Masahiro; Kitasaka, Takayuki; Furukawa, Kazuhiro; Watanabe, Osamu; Ando, Takafumi; Goto, Hidemi; Mori, Kensaku
2010-03-01
The purpose of this paper is to present a CAD prototype system for Crohn's disease. Crohn's disease causes inflammation or ulcers of the gastrointestinal tract. The number of patients with Crohn's disease is increasing in Japan. Symptoms of Crohn's disease include intestinal stenosis, longitudinal ulcers, and fistulae. An optical endoscope cannot pass through an intestinal stenosis in some cases. We propose a new CAD system using abdominal fecal-tagging CT images for efficient diagnosis of Crohn's disease. The system displays virtual unfolded (VU), virtual endoscopic, curved planar reconstruction, multiplanar reconstruction, and outside views of both the small and large intestines. To generate the VU views, we employ a small and large intestine extraction method followed by a simple electronic cleansing method. The intestine extraction is based on a region growing process, which exploits the characteristic that tagged fluid neighbors air in the intestine. The electronic cleansing enables observation of the intestinal wall under tagged fluid. We change the height of the VU views according to the perimeter of the intestine. In addition, we developed a method to enhance longitudinal ulcers in the views of the system. We enhance concave parts of the intestinal wall, which are caused by longitudinal ulcers, based on local intensity structure analysis. We examined the small and large intestines of eleven CT images with the proposed system. The VU views enabled efficient observation of the intestinal wall. The height change of the VU views helps in finding intestinal stenoses. The concave region enhancement made longitudinal ulcers clear in the views.
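The intestine-extraction step is described as region growing driven by the air/tagged-fluid adjacency. A minimal 2D sketch of threshold-based region growing; the toy "CT slice" and intensity ranges are illustrative, not the paper's parameters.

```python
from collections import deque

def region_grow(image, seed, lo, hi):
    """4-connected region growing: collect pixels whose intensity lies in [lo, hi],
    starting from a seed pixel. A toy 2D analogue of the intestine extraction."""
    rows, cols = len(image), len(image[0])
    region, queue = set(), deque([seed])
    while queue:
        r, c = queue.popleft()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if not (lo <= image[r][c] <= hi):
            continue
        region.add((r, c))
        queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return region

# Toy slice in HU-like units: soft tissue (0), air (-1000), tagged fluid (500)
img = [
    [0,     0,     0,   0],
    [0, -1000, -1000,   0],
    [0,   500,   500,   0],
    [0,     0,     0,   0],
]
air = region_grow(img, (1, 1), -1100, -900)  # grows over the air lumen only
```

In the paper's 3D setting, the air region and the adjacent tagged-fluid region found this way are merged, and the fluid voxels are then relabeled as lumen during electronic cleansing.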
Dao, Lam; Glancy, Brian; Lucotte, Bertrand; Chang, Lin-Ching; Balaban, Robert S; Hsu, Li-Yueh
2015-01-01
This paper investigates a post-processing approach to correct spatial distortion in two-photon fluorescence microscopy images for vascular network reconstruction. It is aimed at in vivo imaging in large field-of-view, deep-tissue studies of vascular structures. Based on simple geometric modeling of the object of interest, a distortion function is directly estimated from the image volume by deconvolution analysis. This distortion function is then applied to subvolumes of the image stack to adaptively adjust for spatially varying distortion and reduce image blurring through blind deconvolution. The proposed technique was first evaluated in phantom imaging of fluorescent microspheres that are comparable in size to the underlying capillary vascular structures. The effectiveness of restoring the three-dimensional spherical geometry of the microspheres using the estimated distortion function was compared with an empirically measured point-spread function. Next, the proposed approach was applied to in vivo vascular imaging of mouse skeletal muscle to reduce the image distortion of the capillary structures. We show that the proposed method effectively improves image quality and reduces the spatially varying distortion that occurs in large field-of-view, deep-tissue vascular datasets. The proposed method will help in the qualitative interpretation and quantitative analysis of vascular structures from fluorescence microscopy images. PMID:26224257
JuxtaView - A tool for interactive visualization of large imagery on scalable tiled displays
Krishnaprasad, N.K.; Vishwanath, V.; Venkataraman, S.; Rao, A.G.; Renambot, L.; Leigh, J.; Johnson, A.E.; Davis, B.
2004-01-01
JuxtaView is a cluster-based application for viewing ultra-high-resolution images on scalable tiled displays. We present in JuxtaView a new parallel computing and distributed memory approach for out-of-core montage visualization, using LambdaRAM, a software-based network-level cache system. The ultimate goal of JuxtaView is to enable a user to interactively roam through potentially terabytes of distributed, spatially referenced image data such as those from electron microscopes, satellites and aerial photographs. In working towards this goal, we describe our first prototype implemented over a local area network, where the image is distributed using LambdaRAM, on the memory of all nodes of a PC cluster driving a tiled display wall. Aggressive pre-fetching schemes employed by LambdaRAM help to reduce the latency involved in remote memory access. We compare LambdaRAM with a more traditional memory-mapped file approach for out-of-core visualization. © 2004 IEEE.
Automatic image database generation from CAD for 3D object recognition
NASA Astrophysics Data System (ADS)
Sardana, Harish K.; Daemi, Mohammad F.; Ibrahim, Mohammad K.
1993-06-01
The development and evaluation of multiple-view 3-D object recognition systems is based on a large set of model images. Due to the various advantages of using CAD, it is becoming more and more practical to use existing CAD data in computer vision systems. Current PC-level CAD systems are capable of providing physical image modelling and rendering involving positional variations in cameras, light sources, etc. We have formulated a modular scheme for automatic generation of various aspects (views) of the objects in a model-based 3-D object recognition system. These views are generated at desired orientations on the unit Gaussian sphere. With a suitable network file sharing system (NFS), the images can be stored directly in a database located on a file server. This paper presents the image modelling solutions using CAD in relation to the multiple-view approach. Our modular scheme for data conversion and automatic image database storage for such a system is discussed. We have used this approach in 3-D polyhedron recognition. An overview of the results, advantages and limitations of using CAD data, and conclusions from using such a scheme are also presented.
Anaglyph Image Technology As a Visualization Tool for Teaching Geology of National Parks
NASA Astrophysics Data System (ADS)
Stoffer, P. W.; Phillips, E.; Messina, P.
2003-12-01
Anaglyphic stereo viewing technology emerged in the mid-1800s. Anaglyphs use offset images in contrasting colors (typically red and cyan) that, when viewed through color filters, produce a three-dimensional (3-D) image. Modern anaglyph image technology has become increasingly easy to use and relatively inexpensive using digital cameras, scanners, color printing, and common image manipulation software. Perhaps the primary drawbacks of anaglyph images include visualization problems with primary colors (such as flowers, bright clothing, or blue sky) and distortion factors in large depth-of-field images. However, anaglyphs are more versatile than polarization techniques since they can be printed, displayed on computer screens (such as on websites), or projected with a single projector (as slides or digital images), and red and cyan viewing glasses cost less than polarization glasses and other 3-D viewing alternatives. Anaglyph images are especially well suited for most natural landscapes, such as views dominated by natural earth tones (grays, browns, greens), and they work well for sepia and black and white images (making the conversion of historic stereo photography into anaglyphs easy). We used a simple stereo camera setup incorporating two digital cameras with a rigid base to photograph landscape features in national parks (including arches, caverns, cactus, forests, and coastlines). We also scanned historic stereographic images. Using common digital image manipulation software we created websites featuring anaglyphs of geologic features from national parks. We used the same images for popular 3-D poster displays at the U.S. Geological Survey Open House 2003 in Menlo Park, CA. Anaglyph photography could easily be used in combined educational outdoor activities and laboratory exercises.
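Composing a red-cyan anaglyph from a stereo pair is essentially a one-line channel operation: take the red channel from the left-eye image and the green/blue (cyan) channels from the right-eye image. A minimal NumPy sketch on tiny synthetic images:

```python
import numpy as np

def make_anaglyph(left, right):
    """Red-cyan anaglyph: red channel from the left image, green and blue
    channels from the right. Inputs are HxWx3 uint8 RGB arrays."""
    out = right.copy()
    out[..., 0] = left[..., 0]  # replace red with the left eye's red channel
    return out

# Tiny synthetic stereo pair: left is pure red, right is pure blue
left = np.zeros((2, 2, 3), dtype=np.uint8);  left[..., 0] = 200
right = np.zeros((2, 2, 3), dtype=np.uint8); right[..., 2] = 150
ana = make_anaglyph(left, right)
```

Viewed through red/cyan glasses, each eye then sees only its own image, which is why saturated primary colors in the scene (noted as a drawback above) leak into the wrong eye.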
Role of stereoscopic imaging in the astronomical study of nearby stars and planetary systems
NASA Astrophysics Data System (ADS)
Mark, David S.; Waste, Corby
1997-05-01
The development of stereoscopic imaging as a 3D spatial mapping tool for planetary science is now beginning to find greater usefulness in the study of stellar atmospheres and planetary systems in general. For the first time, telescopes and accompanying spectrometers have demonstrated the capacity to depict the gyrating motion of nearby stars so precisely as to derive the existence of closely orbiting Jovian-type planets, which are gravitationally influencing the motion of the parent star. Also for the first time, remote spaceborne telescopes, unhindered by atmospheric effects, are recording and tracking the rotational characteristics of our nearby star, the Sun, so accurately as to reveal and identify in great detail the heightened turbulence of the Sun's corona. In order to perform new forms of stereo imaging and 3D reconstruction with such large-scale objects as stars and planets, within solar systems, a set of geometrical parameters must be observed, and these are illustrated here. The behavior of nearby stars can be studied over time using an astrometric approach, making use of the Earth's orbital path as a semi-yearly stereo base for the viewing telescope. As is often the case in this method, the resulting stereo angle becomes too narrow to afford a beneficial stereo view, given the star's distance and the general level of detected noise in the signal. With the advent, though, of new Earth-based and spaceborne interferometers, operating within various wavelengths including IR, the capability of detecting and assembling the full 3-dimensional axes of motion of nearby gyrating stars can be achieved. In addition, the coupling of large interferometers with combined data sets can provide large stereo bases and low signal noise to produce converging 3-dimensional stereo views of nearby planetary systems.
Several groups of new astronomical stereo imaging data sets are presented, including 3D views of the Sun taken by the Solar and Heliospheric Observatory, coincident stereo views of the planet Jupiter during the impact of comet Shoemaker-Levy 9, taken by the Galileo spacecraft and the Hubble Space Telescope, as well as views of nearby stars. Spatial ambiguities arising in singular 2-dimensional viewpoints are shown to be resolvable in twin-perspective, 3-dimensional stereo views. Stereo imaging of this nature therefore occupies a complementary role in astronomical observing, provided the proper fields of view correspond with the path of the orbital geometry of the observing telescope.
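The narrow-stereo-angle point above is simple arithmetic: using Earth's orbit as a semi-yearly stereo base gives a 2 AU baseline, and even for the nearest stars the resulting angle is of order an arcsecond. A quick check, taking the distance to Proxima Centauri as roughly 1.3 pc (an illustrative round figure):

```python
import math

AU_M = 1.495978707e11             # astronomical unit in metres
PARSEC_M = 3.0856775814913673e16  # parsec in metres

def stereo_angle_arcsec(baseline_au, distance_pc):
    """Stereo (parallax) angle subtended by a baseline at a star's distance.
    A 1 AU baseline at 1 pc gives 1 arcsecond, by the definition of the parsec."""
    angle_rad = (baseline_au * AU_M) / (distance_pc * PARSEC_M)
    return math.degrees(angle_rad) * 3600.0

# Earth's orbit as a semi-yearly stereo base (2 AU) viewing a ~1.3 pc star:
angle = stereo_angle_arcsec(2.0, 1.3)  # roughly 1.5 arcsec
```

An angle that small, against typical measurement noise, is why the abstract calls the orbital stereo base too narrow for a beneficial stereo view without interferometric baselines.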
Wolters, Mark A; Dean, C B
2017-01-01
Remote sensing images from Earth-orbiting satellites are a potentially rich data source for monitoring and cataloguing atmospheric health hazards that cover large geographic regions. A method is proposed for classifying such images into hazard and nonhazard regions using the autologistic regression model, which may be viewed as a spatial extension of logistic regression. The method includes a novel and simple approach to parameter estimation that makes it well suited to handling the large and high-dimensional datasets arising from satellite-borne instruments. The methodology is demonstrated on both simulated images and a real application to the identification of forest fire smoke.
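The autologistic model extends the logistic linear predictor with a term coupling each pixel to its neighbours' labels, which is what makes hazard classifications spatially coherent. A toy sketch of the conditional (full-conditional) probability for one pixel; the coefficient values are illustrative, not fitted.

```python
import math

def autologistic_prob(x, beta, neighbours, eta):
    """Conditional probability that a pixel is 'hazard' (1), given its covariates x
    with coefficients beta and the labels of its neighbours weighted by eta
    (toy autologistic model: logistic regression plus a neighbour coupling term)."""
    z = sum(b * v for b, v in zip(beta, x)) + eta * sum(neighbours)
    return 1.0 / (1.0 + math.exp(-z))

# Same spectral covariate, but smoky neighbours raise the hazard probability
p_isolated = autologistic_prob([0.5], beta=[1.0], neighbours=[0, 0, 0, 0], eta=0.8)
p_in_plume = autologistic_prob([0.5], beta=[1.0], neighbours=[1, 1, 1, 1], eta=0.8)
```

Pseudolikelihood estimation, in the spirit of the simple approach the abstract mentions, maximizes the product of exactly these full-conditional probabilities over all pixels, avoiding the intractable joint normalizing constant.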
Clementine Images of Earth and Moon
1999-06-12
During its flight and lunar orbit, NASA’s Clementine spacecraft returned images of the planet Earth and the Moon. This collection of UVVIS camera Clementine images shows the Earth from the Moon and 3 images of the Earth. The image on the left shows the Earth as seen across the lunar north pole; the large crater in the foreground is Plaskett. The Earth actually appeared about twice as far above the lunar horizon as shown. The top right image shows the Earth as viewed by the UVVIS camera while Clementine was in transit to the Moon; swirling white cloud patterns indicate storms. The two views of southeastern Africa were acquired by the UVVIS camera while Clementine was in low Earth orbit early in the mission. http://photojournal.jpl.nasa.gov/catalog/PIA00432
How does C-VIEW image quality compare with conventional 2D FFDM?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Jeffrey S., E-mail: nelson.jeffrey@duke.edu; Wells, Jered R.; Baker, Jay A.
Purpose: The FDA approved the use of digital breast tomosynthesis (DBT) in 2011 as an adjunct to 2D full field digital mammography (FFDM) with the constraint that all DBT acquisitions must be paired with a 2D image to assure adequate interpretative information is provided. Recently manufacturers have developed methods to provide a synthesized 2D image generated from the DBT data with the hope of sparing patients the radiation exposure from the FFDM acquisition. While this much needed alternative effectively reduces the total radiation burden, differences in image quality must also be considered. The goal of this study was to compare the intrinsic image quality of synthesized 2D C-VIEW and 2D FFDM images in terms of resolution, contrast, and noise. Methods: Two phantoms were utilized in this study: the American College of Radiology mammography accreditation phantom (ACR phantom) and a novel 3D printed anthropomorphic breast phantom. Both phantoms were imaged using a Hologic Selenia Dimensions 3D system. Analysis of the ACR phantom includes both visual inspection and objective automated analysis using in-house software. Analysis of the 3D anthropomorphic phantom includes visual assessment of resolution and Fourier analysis of the noise. Results: Using ACR-defined scoring criteria for the ACR phantom, the FFDM images scored statistically higher than C-VIEW according to both the average observer and automated scores. In addition, between 50% and 70% of C-VIEW images failed to meet the nominal minimum ACR accreditation requirements—primarily due to fiber breaks. Software analysis demonstrated that C-VIEW provided enhanced visualization of medium and large microcalcification objects; however, the benefits diminished for smaller high contrast objects and all low contrast objects.
Visual analysis of the anthropomorphic phantom showed a measurable loss of resolution in the C-VIEW image (11 lp/mm FFDM, 5 lp/mm C-VIEW) and a loss in detection of small microcalcification objects. Spectral analysis of the anthropomorphic phantom showed higher total noise magnitude in the FFDM image compared with C-VIEW. Whereas the FFDM image contained an approximately white noise texture, the C-VIEW image exhibited marked noise reduction at mid and high frequencies with far less noise suppression at low frequencies, resulting in a mottled noise appearance. Conclusions: This analysis demonstrates many instances where the C-VIEW image quality differs from FFDM. Compared to FFDM, C-VIEW offers a better depiction of objects of certain size and contrast, but provides poorer overall resolution and noise properties. Based on these findings, the utilization of C-VIEW images in the clinical setting requires careful consideration, especially if considering the discontinuation of FFDM imaging. Not explicitly explored in this study is how the combination of DBT + C-VIEW performs relative to DBT + FFDM or FFDM alone.
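The Fourier noise analysis described here is typically a radially averaged 2D noise power spectrum (NPS). The sketch below is a generic, simplified NPS computation run on a synthetic white-noise patch; the pixel pitch and normalization are illustrative, not the authors' exact procedure.

```python
import numpy as np

def noise_power_spectrum_1d(noise, pixel_mm=0.07):
    """Radially averaged 2D noise power spectrum of a square zero-mean noise
    patch. Minimal sketch: NPS2D = |FFT|^2 * pixel_area / N, then radial bins."""
    n = noise.shape[0]
    nps2d = np.abs(np.fft.fftshift(np.fft.fft2(noise - noise.mean()))) ** 2
    nps2d *= pixel_mm**2 / noise.size
    # Radial binning about the zero-frequency centre
    yy, xx = np.indices(nps2d.shape)
    r = np.hypot(xx - n // 2, yy - n // 2).astype(int)
    radial = np.bincount(r.ravel(), weights=nps2d.ravel()) / np.bincount(r.ravel())
    freqs = np.arange(radial.size) / (n * pixel_mm)  # spatial frequency, cycles/mm
    return freqs, radial

rng = np.random.default_rng(1)
white = rng.standard_normal((128, 128))  # white noise: flat radial NPS profile
freqs, nps = noise_power_spectrum_1d(white)
```

For a genuinely white texture (as found for FFDM) this profile is flat; the mid/high-frequency dip with a low-frequency bump described for C-VIEW would show up directly in this curve.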
Formation of a White-Light Jet Within a Quadrupolar Magnetic Configuration
NASA Astrophysics Data System (ADS)
Filippov, Boris; Koutchmy, Serge; Tavabi, Ehsan
2013-08-01
We analyze multi-wavelength and multi-viewpoint observations of a large-scale event viewed on 7 April 2011, originating from an active-region complex. The activity leads to a white-light jet being formed in the outer corona. The topology and evolution of the coronal structures were imaged in high resolution using the Atmospheric Imaging Assembly (AIA) onboard the Solar Dynamics Observatory (SDO). In addition, large field-of-view images of the corona were obtained using the Sun Watcher using Active Pixel System detector and Image Processing (SWAP) telescope onboard the PRoject for Onboard Autonomy (PROBA2) microsatellite, providing evidence for the connectivity of the coronal structures with outer coronal features that were imaged with the Large Angle Spectrometric Coronagraph (LASCO) C2 on the Solar and Heliospheric Observatory (SOHO). The data sets reveal an Eiffel-tower type jet configuration extending into a narrow jet in the outer corona. The event starts from the growth of a dark area in the central part of the structure. The darkening was also observed in projection on the disk by the Solar TErrestrial RElations Observatory-Ahead (STEREO-A) spacecraft from a different point of view. We assume that the dark volume in the corona descends from a coronal cavity of a flux rope that moved up higher in the corona but still failed to erupt. The quadrupolar magnetic configuration corresponds to a saddle-like shape of the dark volume and provides a possibility for the plasma to escape along the open field lines into the outer corona, forming the white-light jet.
Understanding Clinical Mammographic Breast Density Assessment: a Deep Learning Perspective.
Mohamed, Aly A; Luo, Yahong; Peng, Hong; Jankowitz, Rachel C; Wu, Shandong
2017-09-20
Mammographic breast density has been established as an independent risk marker for developing breast cancer. Breast density assessment is a routine clinical need in breast cancer screening and current standard is using the Breast Imaging and Reporting Data System (BI-RADS) criteria including four qualitative categories (i.e., fatty, scattered density, heterogeneously dense, or extremely dense). In each mammogram examination, a breast is typically imaged with two different views, i.e., the mediolateral oblique (MLO) view and cranial caudal (CC) view. The BI-RADS-based breast density assessment is a qualitative process made by visual observation of both the MLO and CC views by radiologists, where there is a notable inter- and intra-reader variability. In order to maintain consistency and accuracy in BI-RADS-based breast density assessment, gaining understanding on radiologists' reading behaviors will be educational. In this study, we proposed to leverage the newly emerged deep learning approach to investigate how the MLO and CC view images of a mammogram examination may have been clinically used by radiologists in coming up with a BI-RADS density category. We implemented a convolutional neural network (CNN)-based deep learning model, aimed at distinguishing the breast density categories using a large (15,415 images) set of real-world clinical mammogram images. Our results showed that the classification of density categories (in terms of area under the receiver operating characteristic curve) using MLO view images is significantly higher than that using the CC view. This indicates that most likely it is the MLO view that the radiologists have predominately used to determine the breast density BI-RADS categories. Our study holds a potential to further interpret radiologists' reading characteristics, enhance personalized clinical training to radiologists, and ultimately reduce reader variations in breast density assessment.
Fu, Yong; Ji, Zhong; Ding, Wenzheng; Ye, Fanghao; Lou, Cunguang
2014-11-01
Previous studies demonstrated that thermoacoustic imaging (TAI) has great potential for breast tumor detection. However, large field of view (FOV) imaging remains a long-standing challenge for three-dimensional (3D) breast tumor localization. Here, the authors propose a practical TAI system for noninvasive 3D localization of breast tumors with large FOV through the use of ultrashort microwave pulse (USMP). A USMP generator was employed for TAI. The energy density required for quality imaging and the corresponding microwave-to-acoustic conversion efficiency were compared with that of conventional TAI. The microwave energy distribution, imaging depth, resolution, and 3D imaging capabilities were then investigated. Finally, a breast phantom embedded with a laboratory-grown tumor was imaged to evaluate the FOV performance of the USMP TAI system, under a simulated clinical situation. A radiation energy density equivalent to just 1.6%-2.2% of that for conventional submicrosecond microwave TAI was sufficient to obtain a thermoacoustic signal with the required signal-to-noise ratio. This result clearly demonstrated a significantly higher microwave-to-acoustic conversion efficiency of USMP TAI compared to that of conventional TAI. The USMP TAI system achieved 61 mm imaging depth and 12 × 12 cm(2) microwave radiation area. The volumetric image of a copper target measured at depth of 4-6 cm matched well with the actual shape and the resolution reaches 230 μm. The TAI of the breast phantom was precisely localized to an accuracy of 0.1 cm over an 8 × 8 cm(2) FOV. The experimental results demonstrated that the USMP TAI system offered significant potential for noninvasive clinical detection and 3D localization of deep breast tumors, with low microwave radiation dose and high spatial resolution over a sufficiently large FOV.
Towards real-time quantitative optical imaging for surgery
NASA Astrophysics Data System (ADS)
Gioux, Sylvain
2017-07-01
There is a pressing clinical need to provide image guidance during surgery. Currently, assessment of tissue that needs to be resected or avoided is performed subjectively leading to a large number of failures, patient morbidity and increased healthcare cost. Because near-infrared (NIR) optical imaging is safe, does not require contact, and can provide relatively deep information (several mm), it offers unparalleled capabilities for providing image guidance during surgery. In this work, we introduce a novel concept that enables the quantitative imaging of endogenous molecular information over large fields-of-view. Because this concept can be implemented in real-time, it is amenable to provide video-rate endogenous information during surgery.
13-fold resolution gain through turbid layer via translated unknown speckle illumination
Guo, Kaikai; Zhang, Zibang; Jiang, Shaowei; Liao, Jun; Zhong, Jingang; Eldar, Yonina C.; Zheng, Guoan
2017-01-01
Fluorescence imaging through a turbid layer holds great promise for various biophotonics applications. Conventional wavefront shaping techniques aim to create and scan a focus spot through the turbid layer. Finding the correct input wavefront without direct access to the target plane remains a critical challenge. In this paper, we explore a new strategy for imaging through turbid layer with a large field of view. In our setup, a fluorescence sample is sandwiched between two turbid layers. Instead of generating one focus spot via wavefront shaping, we use an unshaped beam to illuminate the turbid layer and generate an unknown speckle pattern at the target plane over a wide field of view. By tilting the input wavefront, we raster scan the unknown speckle pattern via the memory effect and capture the corresponding low-resolution fluorescence images through the turbid layer. Different from the wavefront-shaping-based single-spot scanning, the proposed approach employs many spots (i.e., speckles) in parallel for extending the field of view. Based on all captured images, we jointly recover the fluorescence object, the unknown optical transfer function of the turbid layer, the translated step size, and the unknown speckle pattern. Without direct access to the object plane or knowledge of the turbid layer, we demonstrate a 13-fold resolution gain through the turbid layer using the reported strategy. We also demonstrate the use of this technique to improve the resolution of a low numerical aperture objective lens allowing to obtain both large field of view and high resolution at the same time. The reported method provides insight for developing new fluorescence imaging platforms and may find applications in deep-tissue imaging. PMID:29359102
High-Resolution Large-Field-of-View Ultrasound Breast Imager
2013-06-01
record the display of the AO detector for image processing and storage. The measured resolution is 400 microns. • The noise present in the imaging...l T 4 O igure 7: (Le n cyst thickn ask 3: Inco .a. Incorpor ensitivity (U e have not ideo camera enses. ask 4: Desi .a. Determin ur initial pl
Leveraging Cognitive Context for Object Recognition
2014-06-01
learned from large image databases. We build upon this concept by exploring cognitive context, demonstrating how rich dynamic context provided by...context that people rely upon as they perceive the world. Context in ACT-R/E takes the form of associations between related concepts that are learned ...and accuracy of object recognition. Context is most often viewed as a static concept, learned from large image databases. We build upon this concept by
Imaging detectors and electronics—a view of the future
NASA Astrophysics Data System (ADS)
Spieler, Helmuth
2004-09-01
Imaging sensors and readout electronics have made tremendous strides in the past two decades. The application of modern semiconductor fabrication techniques and the introduction of customized monolithic integrated circuits have made large-scale imaging systems routine in high-energy physics. This technology is now finding its way into other areas, such as space missions, synchrotron light sources, and medical imaging. I review current developments and discuss the promise and limits of new technologies. Several detector systems are described as examples of future trends. The discussion emphasizes semiconductor detector systems, but I also include recent developments for large-scale superconducting detector arrays.
Digital sun sensor multi-spot operation.
Rufino, Giancarlo; Grassi, Michele
2012-11-28
The operation and test of a multi-spot digital sun sensor for precise sun-line determination is described. The image forming system consists of an opaque mask with multiple pinhole apertures producing multiple, simultaneous, spot-like images of the sun on the focal plane. The sun-line precision can be improved by averaging multiple simultaneous measures. Nevertheless, the sensor operation on a wide field of view requires acquiring and processing images in which the number of sun spots and the related intensity level are largely variable. To this end, a reliable and robust image acquisition procedure based on a variable shutter time has been considered as well as a calibration function exploiting also the knowledge of the sun-spot array size. Main focus of the present paper is the experimental validation of the wide field of view operation of the sensor by using a sensor prototype and a laboratory test facility. Results demonstrate that it is possible to keep high measurement precision also for large off-boresight angles.
NASA Astrophysics Data System (ADS)
Guyon, O.; Pluzhnik, E.; Martinache, F.; Ridgway, S.; Galicher, R.
2004-12-01
Using 2 aspheric mirrors, it is possible to achromatically apodize a telescope beam without losing light (Phase-Induced Amplitude Apodization, PIAA). We propose a coronagraph concept using this technique: the telescope pupil is first apodized to yield a high contrast focal plane image, on which an occulting mask is placed. The exit pupil is then de-apodized to regain a large field of view. We show that the PIAAC combines all the qualities needed for efficient exoplanet imaging: full throughput, small inner working angle (1.2 l/d), high angular resolution (l/d), low sensitivity to tip-tilt, and large field of view (more than 200 l/d in diameter). We conclude that PIAAC is well adapted for exoplanet imaging with a 4m to 6m space telescope (TPF mission). This work was carried out under JPL contract numbers 1254445 and 1257767 for Development of Technologies for the Terrestrial Planet Finder Mission, with the support and hospitality of the National Astronomical Observatory of Japan.
Build YOUR All-Sky View with Aladin
NASA Astrophysics Data System (ADS)
Oberto, A.; Fernique, P.; Boch, T.; Bonnarel, F.
2011-07-01
From the need to extend the display outside the boundaries of a single image, the Aladin team recently developed a new feature to visualize wide areas or even all of the sky. This all-sky view is particularly useful for visualization of very large objects and, with coverage of the whole sky, maps from the Planck satellite. To improve on this capability, some catalogs and maps have been built from many surveys (e.g., DSS, IRIS, GLIMPSE, SDSS, 2MASS) in mixed resolutions, allowing progressive display. The maps are constructed by mosaicing individual images. Now, we provide a new tool to build an all-sky view with your own images. From the images you have selected, it will compose a mosaic with several resolutions (HEALPix tessellation), and organize them to allow their progressive display in Aladin. For convenience, you can export it to a HEALPix map, or share it with the community through Aladin from your web site or eventually from the CDS image collection.
The Multi-Spectral Imaging Diagnostic on Alcator C-MOD and TCV
NASA Astrophysics Data System (ADS)
Linehan, B. L.; Mumgaard, R. T.; Duval, B. P.; Theiler, C. G.; TCV Team
2017-10-01
The Multi-Spectral Imaging (MSI) diagnostic is a new instrument that captures simultaneous spectrally filtered images from a common sight view while maintaining a large tendue and high spatial resolution. The system uses a polychromator layout where each image is sequentially filtered. This procedure yields a high transmission for each spectral channel with minimal vignetting and aberrations. A four-wavelength system was installed on Alcator C-Mod and then moved to TCV. The system uses industrial cameras to simultaneously image the divertor region at 95 frames per second at f/# 2.8 via a coherent fiber bundle (C-Mod) or a lens-based relay optic (TCV). The images are absolutely calibrated and spatially registered enabling accurate measurement of atomic line ratios and absolute line intensities. The images will be used to study divertor detachment by imaging impurities and Balmer series emissions. Furthermore, the large field of view and an ability to support many types of detectors opens the door for other novel approaches to optically measuring plasma with high temporal, spatial, and spectral resolution. Such measurements will allow for the study of Stark broadening and divertor turbulence. Here, we present the first measurements taken with this cavity imaging system. USDoE awards DE-FC02-99ER54512 and award DE-AC05-06OR23100, ORISE, administered by ORAU.
Reduction of false-positive recalls using a computerized mammographic image feature analysis scheme
NASA Astrophysics Data System (ADS)
Tan, Maxine; Pu, Jiantao; Zheng, Bin
2014-08-01
The high false-positive recall rate is one of the major dilemmas that significantly reduce the efficacy of screening mammography, which harms a large fraction of women and increases healthcare cost. This study aims to investigate the feasibility of helping reduce false-positive recalls by developing a new computer-aided diagnosis (CAD) scheme based on the analysis of global mammographic texture and density features computed from four-view images. Our database includes full-field digital mammography (FFDM) images acquired from 1052 recalled women (669 positive for cancer and 383 benign). Each case has four images: two craniocaudal (CC) and two mediolateral oblique (MLO) views. Our CAD scheme first computed global texture features related to the mammographic density distribution on the segmented breast regions of four images. Second, the computed features were given to two artificial neural network (ANN) classifiers that were separately trained and tested in a ten-fold cross-validation scheme on CC and MLO view images, respectively. Finally, two ANN classification scores were combined using a new adaptive scoring fusion method that automatically determined the optimal weights to assign to both views. CAD performance was tested using the area under a receiver operating characteristic curve (AUC). The AUC = 0.793 ± 0.026 was obtained for this four-view CAD scheme, which was significantly higher at the 5% significance level than the AUCs achieved when using only CC (p = 0.025) or MLO (p = 0.0004) view images, respectively. This study demonstrates that a quantitative assessment of global mammographic image texture and density features could provide useful and/or supplementary information to classify between malignant and benign cases among the recalled cases, which may eventually help reduce the false-positive recall rate in screening mammography.
2014-11-21
The puzzling, fascinating surface of Jupiter icy moon Europa looms large in this newly-reprocessed [sic] color view, made from images taken by NASA Galileo spacecraft in the late 1990s. This is the color view of Europa from Galileo that shows the largest portion of the moon's surface at the highest resolution. The view was previously released as a mosaic with lower resolution and strongly enhanced color (see PIA02590). To create this new version, the images were assembled into a realistic color view of the surface that approximates how Europa would appear to the human eye. The scene shows the stunning diversity of Europa's surface geology. Long, linear cracks and ridges crisscross the surface, interrupted by regions of disrupted terrain where the surface ice crust has been broken up and re-frozen into new patterns. Color variations across the surface are associated with differences in geologic feature type and location. For example, areas that appear blue or white contain relatively pure water ice, while reddish and brownish areas include non-ice components in higher concentrations. The polar regions, visible at the left and right of this view, are noticeably bluer than the more equatorial latitudes, which look more white. This color variation is thought to be due to differences in ice grain size in the two locations. Images taken through near-infrared, green and violet filters have been combined to produce this view. The images have been corrected for light scattered outside of the image, to provide a color correction that is calibrated by wavelength. Gaps in the images have been filled with simulated color based on the color of nearby surface areas with similar terrain types. This global color view consists of images acquired by the Galileo Solid-State Imaging (SSI) experiment on the spacecraft's first and fourteenth orbits through the Jupiter system, in 1995 and 1998, respectively. Image scale is 1 mile (1.6 kilometers) per pixel. North on Europa is at right. 
http://photojournal.jpl.nasa.gov/catalog/PIA19048
Three-dimensional holographic display of ultrasound computed tomograms
NASA Astrophysics Data System (ADS)
Andre, Michael P.; Janee, Helmar S.; Ysrael, Mariana Z.; Hodler, Jeurg; Olson, Linda K.; Leopold, George R.; Schulz, Raymond
1997-05-01
Breast ultrasound is a valuable adjunct to mammography but is limited by a very small field of view, particularly with high-resolution transducers necessary for breast diagnosis. We have been developing an ultrasound system based on a diffraction tomography method that provides slices through the breast on a large 20-cm diameter circular field of view. Eight to fifteen images are typically produced in sequential coronal planes from the nipple to the chest wall with either 0.25 or 0.5 mm pixels. As a means to simplify the interpretation of this large set of images, we report experience with 3D life-sized displays of the entire breast of human volunteers using a digital holographic technique. The compound 3D holographic images are produced from the digital image matrix, recorded on 14 X 17 inch transparency and projected on a special white-light viewbox. Holographic visualization of the entire breast has proved to be the preferred method for 3D display of ultrasound computed tomography images. It provides a unique perspective on breast anatomy and may prove useful for biopsy guidance and surgical planning.
Active browsing using similarity pyramids
NASA Astrophysics Data System (ADS)
Chen, Jau-Yuen; Bouman, Charles A.; Dalton, John C.
1998-12-01
In this paper, we describe a new approach to managing large image databases, which we call active browsing. Active browsing integrates relevance feedback into the browsing environment, so that users can modify the database's organization to suit the desired task. Our method is based on a similarity pyramid data structure, which hierarchically organizes the database, so that it can be efficiently browsed. At coarse levels, the similarity pyramid allows users to view the database as large clusters of similar images. Alternatively, users can 'zoom into' finer levels to view individual images. We discuss relevance feedback for the browsing process, and argue that it is fundamentally different from relevance feedback for more traditional search-by-query tasks. We propose two fundamental operations for active browsing: pruning and reorganization. Both of these operations depend on a user-defined relevance set, which represents the image or set of images desired by the user. We present statistical methods for accurately pruning the database, and we propose a new 'worm hole' distance metric for reorganizing the database, so that members of the relevance set are grouped together.
Wide-Field-of-View Millimeter-Wave Telescope Design with Ultra-Low Cross-Polarization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernacki, Bruce E.; Kelly, James F.; Sheen, David M.
2012-05-01
As millimeter-wave arrays become available, off-axis imaging performance of the fore optics increases in importance due to the relatively large physical extent of the arrays. Typically, simple optical telescope designs are adapted to millimeter-wave imaging but single-mirror spherical or classic conic designs cannot deliver adequate image quality except near the optical axis. Since most millimeter-wave designs are quasi-optical, optical ray tracing and commercial design software can be used to optimize designs to improve off-axis imaging as well as minimize cross-polarization. Methods that obey the Dragone-Mizuguchi condition for the design of reflective millimeter-wave telescopes with low cross-polarization also provide additional degreesmore » of freedom that offer larger fields of view than possible with single-reflector designs. Dragone’s graphical design method does not lend itself readily to computer-based optical design approaches, but subsequent authors expanded on Dragone’s geometric design approach with analytic expressions that describe the location, shape, off-axis height and tilt of the telescope elements that satisfy Dragone’s design rules and can be used as a first-order design for subsequent computer-based design and optimization. We investigate two design variants that obey the Dragone-Mizuguchi conditions that exhibit ultra-low polarization crosstalk and a large diffraction-limited field of view well suited to millimeter-wave imaging arrays.« less
An imaging vector magnetograph for the next solar maximum
NASA Technical Reports Server (NTRS)
Canfield, Richard C.; Mickey, Donald L.
1988-01-01
Measurements of the vector magnetic field in the solar atmosphere with high spatial and temporal resolution over a large field of view are critical to understanding the nature and evolution of currents in active regions. Such measurements, when combined with the thermal and nonthermal X-ray images from the upcoming Solar-A mission, will reveal the large-scale relationship between these currents and sites of heating and particle acceleration in flaring coronal magnetic flux tubes. The conceptual design of an imaging vector magnetograph that combines a modest solar telescope with a rotating quarter-wave plate, an acousto-optical tunable prefilter as a blocker for a servo-controlled Fabry-Perot etalon, CCD cameras, and a rapid digital tape recorder are described. Its high spatial resolution (1/2 arcsec pixel size) over a large field of view (4 x 5 arcmin) will be sufficient to significantly measure, for the first time, the magnetic energy dissipated in major solar flares. Its millisecond tunability and wide spectra range (5000 to 8000 A) enable nearly simultaneous vector magnetic field measurements in the gas-pressure-dominated photosphere and magnetically dominated chromosphere, as well as effective co-alignment with Solar-A's X-ray images.
Retinal slit lamp video mosaicking.
De Zanet, Sandro; Rudolph, Tobias; Richa, Rogerio; Tappeiner, Christoph; Sznitman, Raphael
2016-06-01
To this day, the slit lamp remains the first tool used by an ophthalmologist to examine patient eyes. Imaging of the retina poses, however, a variety of problems, namely a shallow depth of focus, reflections from the optical system, a small field of view and non-uniform illumination. For ophthalmologists, the use of slit lamp images for documentation and analysis purposes, however, remains extremely challenging due to large image artifacts. For this reason, we propose an automatic retinal slit lamp video mosaicking, which enlarges the field of view and reduces amount of noise and reflections, thus enhancing image quality. Our method is composed of three parts: (i) viable content segmentation, (ii) global registration and (iii) image blending. Frame content is segmented using gradient boosting with custom pixel-wise features. Speeded-up robust features are used for finding pair-wise translations between frames with robust random sample consensus estimation and graph-based simultaneous localization and mapping for global bundle adjustment. Foreground-aware blending based on feathering merges video frames into comprehensive mosaics. Foreground is segmented successfully with an area under the curve of the receiver operating characteristic curve of 0.9557. Mosaicking results and state-of-the-art methods were compared and rated by ophthalmologists showing a strong preference for a large field of view provided by our method. The proposed method for global registration of retinal slit lamp images of the retina into comprehensive mosaics improves over state-of-the-art methods and is preferred qualitatively.
Block Adjustment and Image Matching of WORLDVIEW-3 Stereo Pairs and Accuracy Evaluation
NASA Astrophysics Data System (ADS)
Zuo, C.; Xiao, X.; Hou, Q.; Li, B.
2018-05-01
WorldView-3, as a high-resolution commercial earth observation satellite, which is launched by Digital Global, provides panchromatic imagery of 0.31 m resolution. The positioning accuracy is less than 3.5 meter CE90 without ground control, which can use for large scale topographic mapping. This paper presented the block adjustment for WorldView-3 based on RPC model and achieved the accuracy of 1 : 2000 scale topographic mapping with few control points. On the base of stereo orientation result, this paper applied two kinds of image matching algorithm for DSM extraction: LQM and SGM. Finally, this paper compared the accuracy of the point cloud generated by the two image matching methods with the reference data which was acquired by an airborne laser scanner. The results showed that the RPC adjustment model of WorldView-3 image with small number of GCPs could satisfy the requirement of Chinese Surveying and Mapping regulations for 1 : 2000 scale topographic maps. And the point cloud result obtained through WorldView-3 stereo image matching had higher elevation accuracy, the RMS error of elevation for bare ground area is 0.45 m, while for buildings the accuracy can almost reach 1 meter.
Trelease, R B
1996-01-01
Advances in computer visualization and user interface technologies have enabled development of "virtual reality" programs that allow users to perceive and to interact with objects in artificial three-dimensional environments. Such technologies were used to create an image database and program for studying the human skull, a specimen that has become increasingly expensive and scarce. Stereoscopic image pairs of a museum-quality skull were digitized from multiple views. For each view, the stereo pairs were interlaced into a single, field-sequential stereoscopic picture using an image processing program. The resulting interlaced image files are organized in an interactive multimedia program. At run-time, gray-scale 3-D images are displayed on a large-screen computer monitor and observed through liquid-crystal shutter goggles. Users can then control the program and change views with a mouse and cursor to point-and-click on screen-level control words ("buttons"). For each view of the skull, an ID control button can be used to overlay pointers and captions for important structures. Pointing and clicking on "hidden buttons" overlying certain structures triggers digitized audio spoken word descriptions or mini lectures.
The Propeller Belts in Saturn A Ring
2017-01-30
This image from NASA's Cassini mission shows a region in Saturn's A ring. The level of detail is twice as high as this part of the rings has ever been seen before. The view contains many small, bright blemishes due to cosmic rays and charged particle radiation near the planet. The view shows a section of the A ring known to researchers for hosting belts of propellers -- bright, narrow, propeller-shaped disturbances in the ring produced by the gravity of unseen embedded moonlets. Several small propellers are visible in this view. These are on the order of 10 times smaller than the large, bright propellers whose orbits scientists have routinely tracked (and which are given nicknames for famous aviators). This image is a lightly processed version, with minimal enhancement, preserving all original details present in the image. he image was taken in visible light with the Cassini spacecraft wide-angle camera on Dec. 18, 2016. The view was obtained at a distance of approximately 33,000 miles (54,000 kilometers) from the rings and looks toward the unilluminated side of the rings. Image scale is about a quarter-mile (330 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA21059
A beam-splitter-type 3-D endoscope for front view and front-diagonal view images.
Kamiuchi, Hiroki; Masamune, Ken; Kuwana, Kenta; Dohi, Takeyoshi; Kim, Keri; Yamashita, Hiromasa; Chiba, Toshio
2013-01-01
In endoscopic surgery, surgeons must manipulate an endoscope inside the body cavity to observe a large field-of-view while estimating the distance between surgical instruments and the affected area by reference to the size or motion of the surgical instruments in 2-D endoscopic images on a monitor. Therefore, there is a risk of the endoscope or surgical instruments physically damaging body tissues. To overcome this problem, we developed a Ø7- mm 3-D endoscope that can switch between providing front and front-diagonal view 3-D images by simply rotating its sleeves. This 3-D endoscope consists of a conventional 3-D endoscope and an outer and inner sleeve with a beam splitter and polarization plates. The beam splitter was used for visualizing both the front and front-diagonal view and was set at 25° to the outer sleeve's distal end in order to eliminate a blind spot common to both views. Polarization plates were used to avoid overlap of the two views. We measured signal-to-noise ratio (SNR), sharpness, chromatic aberration (CA), and viewing angle of this 3-D endoscope and evaluated its feasibility in vivo. Compared to the conventional 3-D endoscope, SNR and sharpness of this 3-D endoscope decreased by 20 and 7 %, respectively. No significant difference was found in CA. The viewing angle for both the front and front-diagonal views was about 50°. In the in vivo experiment, this 3-D endoscope can provide clear 3-D images of both views by simply rotating its inner sleeve. The developed 3-D endoscope can provide the front and front-diagonal view by simply rotating the inner sleeve, therefore the risk of damage to fragile body tissues can be significantly decreased.
AlJaroudi, Wael A; Lloyd, Steven G; Chaudhry, Farooq A; Hage, Fadi G
2017-06-01
This review summarizes key imaging studies that were presented in the American Heart Association Scientific Sessions 2016 related to the fields of nuclear cardiology, cardiac computed tomography, cardiac magnetic resonance, and echocardiography. This bird's eye view will inform readers about multiple studies from these different modalities. We hope that this general overview will be useful for those that did not attend the conference as well as to those that did since it is often difficult to get exposure to many abstracts at large meetings. The review, therefore, aims to help readers stay updated on the newest imaging studies presented at the meeting.
NASA Astrophysics Data System (ADS)
Banakh, Viktor A.; Sazanovich, Valentina M.; Tsvik, Ruvim S.
1997-09-01
The influence of diffraction on the object, coherently illuminated and viewed through a random medium from the same point, on the image quality betterment caused by the counter wave correlation is studied experimentally. The measurements were carried out with the use of setup modeling artificial convective turbulence. It is shown that in the case of spatially limited reflector with the Fresnel number of the reflector surface radius r ranging from 3 to 12 the contribution of the counter wave correlation into image intensity distribution is maximal as compared with the point objects (r
NASA Astrophysics Data System (ADS)
Zaki, Farzana; Hou, Isabella; Huang, Qiongdan; Cooper, Denver; Patel, Divya; Liu, Xuan; Yang, Yi
2017-02-01
Optical coherence tomography (OCT) has great potential for the examination of oil paintings, particularly celebrated masterpieces by great artists. We developed an OCT system for large field-of-view (FOV), high-definition (HD) imaging of oil paintings. To achieve a large FOV, we translated the sample with a pair of high-precision linear motors and performed sequential volumetric imaging of adjacent, non-overlapping regions. Through 3D OCT imaging, the surface terrain and subsurface microarchitecture of the paintings were characterized and visualized.
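The tiling scheme described above — sequential volumes acquired on adjacent, non-overlapping regions — reduces to a simple mosaic assembly once each tile is acquired. A minimal sketch (the function name and row-major scan order are assumptions for illustration, not details from the paper):

```python
import numpy as np

def stitch_tiles(tiles, grid_shape):
    """Assemble non-overlapping en-face tiles into one large field of view.

    tiles: list of 2D arrays in row-major scan order, all the same shape.
    grid_shape: (rows, cols) of the scan grid.
    """
    rows, cols = grid_shape
    th, tw = tiles[0].shape
    mosaic = np.zeros((rows * th, cols * tw), dtype=tiles[0].dtype)
    for idx, tile in enumerate(tiles):
        r, c = divmod(idx, cols)  # position of this tile in the grid
        mosaic[r * th:(r + 1) * th, c * tw:(c + 1) * tw] = tile
    return mosaic
```

Because the regions are non-overlapping, no blending or registration is needed; a real system would still correct for stage positioning error.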
AlJaroudi, Wael A; Lloyd, Steven G; Hage, Fadi G
2018-04-01
This review summarizes key imaging studies presented at the American Heart Association Scientific Sessions 2017 in the fields of nuclear cardiology, cardiac computed tomography, cardiac magnetic resonance, and echocardiography. The aim of this bird's-eye view is to inform readers about multiple studies reported at the meeting from these different imaging modalities. While such a review is most useful for those who did not attend the conference, we find that a general overview may also be useful to those who did, since it is often difficult to get exposure to many abstracts at large meetings. The review therefore aims to help readers stay updated on the newest imaging studies presented at the meeting and will hopefully stimulate new ideas for future research in imaging.
Icebergs Adrift in the Amundsen Sea
NASA Technical Reports Server (NTRS)
2002-01-01
The Thwaites Ice Tongue is a large sheet of glacial ice extending from the West Antarctic mainland into the southern Amundsen Sea. A large crack in the Thwaites Tongue was discovered in imagery from Terra's Moderate Resolution Imaging Spectroradiometer (MODIS). Subsequent widening of the crack led to the calving of a large iceberg. The development of this berg, designated B-22 by the National Ice Center, can be observed in these images from the Multi-angle Imaging SpectroRadiometer, also aboard Terra. The two views were acquired by MISR's nadir (vertical-viewing) camera on March 10 and 24, 2002. The B-22 iceberg, located below and to the left of image center, measures approximately 82 kilometers long x 62 kilometers wide. Comparison of the two images shows the berg to have drifted away from the ice shelf edge. The breakup of ice near the shelf edge, in the area surrounding B-22, is also visible in the later image. These natural-color images were acquired during Terra orbits 11843 and 12047, respectively. At the right-hand edge is Pine Island Bay, where the calving of another large iceberg (B-21) occurred in November 2001. B-21 subsequently split into two smaller bergs, both of which are visible to the right of B-22. Antarctic researchers have reported an increase in the frequency of iceberg calvings in recent years. Whether this is the result of a regional climate variation, or connected to the global warming trend, has not yet been established. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology. Image credit: NASA/GSFC/LaRC/JPL, MISR Team.
Commissioning and Science Verification of JAST/T80
NASA Astrophysics Data System (ADS)
Ederoclte, A.; Cenarro, A. J.; Marín-Franch, A.; Cristóbal-Hornillos, D.; Vázquez Ramió, H.; Varela, J.; Hurier, G.; Moles, M.; Lamadrid, J. L.; Díaz-Martín, M. C.; Iglesias Marzoa, R.; Tilve, V.; Rodríguez, S.; Maícas, N.; Abri, J.
2017-03-01
Located at the Observatorio Astrofísico de Javalambre, the "Javalambre Auxiliary Survey Telescope" is an 80 cm telescope with an unvignetted 2-square-degree field of view. The telescope is equipped with T80Cam, a camera with a large-format CCD and two filter wheels which can host 12 filters at any given time. The telescope has been designed to provide optical quality across the whole field of view, which is achieved with a field corrector. In this talk, I will review the commissioning of the telescope. The optical performance in the centre of the field of view has been tested with the lucky imaging technique, yielding a telescope PSF of 0.4'', which is close to that expected from theory. Moreover, the tracking of the telescope does not affect the image quality: stars appear round even in 10-minute exposures obtained without guiding. Most importantly, we present the preliminary results of science verification observations which combine the two main characteristics of this telescope: the large field of view and the special filter set.
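The lucky imaging technique used for the PSF test works by scoring many short exposures for sharpness and stacking only the best ones. A hedged sketch of the core selection step (the sharpness metric and `keep_fraction` are illustrative choices, not the observatory's actual pipeline):

```python
import numpy as np

def lucky_stack(frames, keep_fraction=0.1):
    """Select and average the sharpest frames, as in lucky imaging.

    frames: 3D array (n_frames, h, w). Sharpness is scored here by the
    variance of a simple finite-difference Laplacian (an assumption;
    other focus metrics are common).
    """
    def sharpness(img):
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        return lap.var()

    scores = np.array([sharpness(f) for f in frames])
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[::-1][:n_keep]  # sharpest frames first
    return frames[best].mean(axis=0)
```

Real pipelines also register the selected frames on the brightest speckle before averaging; that alignment step is omitted here.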
WFIRST: Astrometry with the Wide-Field Imager
NASA Astrophysics Data System (ADS)
Bellini, Andrea; WFIRST Astrometry Working Group
2018-01-01
The wide field of view and stable, sharp images delivered by WFIRST's Wide-Field Imager make it an excellent instrument for astrometry, one of five major discovery areas identified in the 2010 Decadal Survey. Compared to the Hubble Space Telescope, WFIRST's wider field of view with similar image quality will provide hundreds more astrometric targets per image as well as background galaxies and stars with precise positions in the Gaia catalog. In addition, WFIRST will operate in the infrared, a wavelength regime where the most precise astrometry has so far been achieved with adaptive optics images from large ground-based telescopes. WFIRST will provide at least a factor of three improvement in astrometry over the current state of the art in this wavelength range, while spanning a field of view thousands of times larger. WFIRST is thus poised to make major contributions to multiple science topics in which astrometry plays an important role, without major alterations to the planned mission or instrument. We summarize a few of the most compelling science cases where WFIRST astrometry could prove transformational.
The ideal imaging AR waveguide
NASA Astrophysics Data System (ADS)
Grey, David J.
2017-06-01
Imaging waveguides are a key development helping to create the Augmented Reality revolution. They have the ability to take a small projector as input and produce a wide field of view, large eyebox, full-colour, see-through image with good contrast and resolution. WaveOptics is at the forefront of this AR technology and has developed and demonstrated an approach which is readily scalable. This paper presents our view of the ideal near-to-eye imaging AR waveguide. This will be a single-layer waveguide which can be manufactured in high volume at low cost, and is suitable for small-form-factor applications and all-day wear. We discuss the requirements of the waveguide for an excellent user experience. When enhanced (AR) viewing is not required, the waveguide should have at least 90% transmission, no distracting artifacts, and should accommodate the user's ophthalmic prescription. When enhanced viewing is required, the waveguide additionally needs excellent imaging performance: resolution to the limit of human acuity, a wide field of view, full colour, and high luminance uniformity and contrast. Imaging waveguides are afocal designs and hence cannot provide ophthalmic correction. If the user requires this correction then they must wear contact lenses, prescription spectacles, or inserts. The ideal imaging waveguide would need to cope with all of these situations, so we believe it must be capable of providing an eyebox at an eye relief suitable for spectacle wear which covers a significant range of population inter-pupillary distances. We describe the current status of our technology and review existing imaging waveguide technologies against the ideal component.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuipers, Jeroen; Boer, Pascal de; Giepmans, Ben N.G., E-mail: b.n.g.giepmans@umcg.nl
Scanning electron microscopy (SEM) is increasingly applied in the life sciences for electron-density measurements of ultrathin sections. These are traditionally analyzed with transmission electron microscopy (TEM); in most labs, SEM analysis is still associated with surface imaging only. Here we report several advantages of SEM over TEM for thin sections, both for structural inspection and for analyzing immuno-targeted labels such as quantum dots (QDs) and gold, where we find that QD labeling is ten times more efficient than gold labeling. Furthermore, we find that omitting post-staining with uranyl and lead makes QDs readily detectable over the ultrastructure, although under these conditions ultrastructural contrast was almost invisible in TEM examination. Importantly, SEM imaging with STEM detection yields both outstanding QD and ultrastructural contrast. STEM imaging is superior to back-scattered electron imaging of these non-contrasted samples, whereas secondary electron detection cannot be used at all. We conclude that examination of ultrathin sections by SEM, optionally immunolabeled with QDs, allows rapid and straightforward analysis of large fields with more efficient labeling than can be achieved with immunogold. The large fields of view routinely achieved with SEM, but not with TEM, allow straightforward raw-data sharing using virtual microscopy, also known as nanotomy when it concerns EM data in the life sciences. - Highlights: • High resolution and large fields of view via nanotomy or virtual microscopy. • Highly relevant for EM datasets where information density is high. • Sample preparation with low contrast is good for STEM, not TEM. • Quantum dots now stand out in STEM-based detection. • 10 times more efficient labeling with quantum dots compared to gold.
Gutman, David A.; Dunn, William D.; Cobb, Jake; Stoner, Richard M.; Kalpathy-Cramer, Jayashree; Erickson, Bradley
2014-01-01
Advances in web technologies now allow direct visualization of imaging data sets without requiring the download of large file sets or the installation of software. This allows centralization of file storage and facilitates image review and analysis. XNATView is a lightweight framework recently developed in our lab to visualize DICOM images stored in The Extensible Neuroimaging Archive Toolkit (XNAT). It consists of a PyXNAT-based framework that wraps the REST application programming interface (API) to query the data in XNAT. XNATView was developed to simplify quality assurance, help organize imaging data, and facilitate data sharing for intra- and inter-laboratory collaborations. Its zero-footprint design allows the user to connect to XNAT from a web browser; navigate through projects, experiments, and subjects; and view DICOM images with accompanying metadata, all within a single viewing instance. PMID:24904399
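The navigation described — projects to subjects to experiments over XNAT's REST API — amounts to walking a hierarchical resource path. A sketch of the URL construction only (the helper name and server are placeholders; XNATView's actual PyXNAT wrapper is not detailed in the abstract):

```python
from urllib.parse import urlencode

def xnat_listing_url(base, project=None, subject=None, fmt="json"):
    """Build an XNAT REST listing URL (projects -> subjects -> experiments).

    Follows the hierarchical /data/projects/... resource layout of the
    XNAT REST API; `base` is a placeholder server address.
    """
    path = "/data/projects"
    if project:
        path += f"/{project}/subjects"
        if subject:
            path += f"/{subject}/experiments"
    return f"{base.rstrip('/')}{path}?{urlencode({'format': fmt})}"
```

A browser-based viewer can issue such GET requests with `format=json` and render the returned listings without any local installation, which is the "zero-footprint" idea.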
Rapid microscopy measurement of very large spectral images.
Lindner, Moshe; Shotan, Zav; Garini, Yuval
2016-05-02
The spectral content of a sample provides important information that cannot be detected by the human eye or by an ordinary RGB camera. The spectrum is typically a fingerprint of the chemical compound, its environmental conditions, phase, and geometry. Measuring the spectrum at each point of a sample is therefore important for a large range of applications, from art preservation through forensics to pathological analysis of tissue sections. To date, however, there has been no system that can measure the spectral image of a large sample in a reasonable time. Here we present a novel method for scanning very large spectral images of microscopy samples, even ones that cannot be viewed in a single field of view of the camera. The system is based on capturing information while the sample is scanned continuously 'on the fly'. Spectral separation implements Fourier spectroscopy using an interferometer mounted along the optical axis. A high spectral resolution of ~5 nm at 500 nm could be achieved with diffraction-limited spatial resolution. The acquisition is fairly fast, taking 6-8 minutes for a 10 mm × 10 mm sample measured under a bright-field microscope at 20X magnification.
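Fourier spectroscopy with an interferometer recovers the spectrum at each pixel as the Fourier transform of the recorded interferogram. A minimal single-pixel sketch (evenly spaced OPD sampling is assumed; the instrument's calibration and apodization steps are omitted):

```python
import numpy as np

def interferogram_to_spectrum(opd, intensity):
    """Recover a spectrum from an interferogram by Fourier transform.

    opd: optical path differences (evenly spaced, in micrometres).
    intensity: detected intensity at each OPD.
    Returns (wavenumber axis in 1/um, spectral magnitude).
    """
    n = len(opd)
    step = opd[1] - opd[0]
    # remove the DC offset, then transform the oscillatory part
    spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
    wavenumber = np.fft.rfftfreq(n, d=step)
    return wavenumber, spectrum
```

For a monochromatic source the interferogram is a cosine in OPD, so the transform peaks at the source's wavenumber; a ~5 nm resolution at 500 nm corresponds to the maximum OPD scanned.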
NASA Technical Reports Server (NTRS)
Jenkins, Luther N.; Yao, Chung-Sheng; Bartram, Scott M.; Harris, Jerome; Allan, Brian; Wong, Oliver; Mace, W. Derry
2009-01-01
A Large Field-of-View Particle Image Velocimetry (LFPIV) system has been developed for rotor wake diagnostics in the 14- by 22-Foot Subsonic Tunnel. The system has been used to measure three components of velocity in a plane as large as 1.524 meters by 0.914 meters in both forward flight and hover tests. Overall, the system performance has exceeded design expectations in terms of accuracy and efficiency. Measurements synchronized with the rotor position during forward flight and hover tests have shown that the system is able to capture the complex interaction of the body and rotor wakes as well as basic details of the blade tip vortex at several wake ages. Measurements obtained with traditional techniques such as multi-hole pressure probes, Laser Doppler Velocimetry (LDV), and 2D Particle Image Velocimetry (PIV) show good agreement with LFPIV measurements.
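PIV systems such as this one typically recover velocity by cross-correlating interrogation windows from successive frames: the location of the correlation peak gives the mean particle displacement. A hedged sketch of that core step (a generic textbook formulation, not the LFPIV system's actual software):

```python
import numpy as np

def piv_displacement(window_a, window_b):
    """Estimate mean particle displacement between two interrogation
    windows via FFT-based cross-correlation, as in 2D PIV."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    # circular cross-correlation; peak sits at the displacement of b vs a
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map FFT indices to signed shifts
    shifts = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
    return tuple(shifts)
```

Dividing the displacement by the inter-frame time and the magnification yields velocity; real implementations add sub-pixel peak fitting and outlier rejection.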
Technologies for imaging neural activity in large volumes
Ji, Na; Freeman, Jeremy; Smith, Spencer L.
2017-01-01
Neural circuitry has evolved to form distributed networks that act dynamically across large volumes. Because it collects data from individual planes, conventional microscopy cannot sample circuitry across large volumes at the temporal resolution relevant to neural circuit function and behaviors. Here, we review emerging technologies for rapid volume imaging of neural circuitry. We focus on two critical challenges: the inertia of optical systems, which limits imaging speed, and aberrations, which restrict the imaging volume. Optical sampling time must be long enough to ensure high-fidelity measurements, but optimized sampling strategies and point-spread-function engineering can facilitate rapid volume imaging of neural activity within this constraint. We also discuss new computational strategies for processing and analyzing volume imaging data of increasing size and complexity. Together, optical and computational advances are providing a broader view of neural circuit dynamics and helping elucidate how brain regions work in concert to support behavior. PMID:27571194
NASA Astrophysics Data System (ADS)
Hansen, C. J.; Ravine, M. A.; Caplinger, M. A.; Orton, G. S.; Ingersoll, A. P.; Jensen, E.; Lipkaman, L.; Krysak, D.; Zimdar, R.; Bolton, S. J.
2016-12-01
JunoCam is a visible imager on the Juno spacecraft in orbit around Jupiter. It is a wide angle camera (58 deg field of view) with 4 color filters: red, green and blue (RGB) and methane at 889 nm, designed for optimal imaging of Jupiter's poles. Juno's elliptical polar orbit will offer unique views of Jupiter's polar regions with a spatial scale of 50 km/pixel. At closest approach the images will have a spatial scale of 3 km/pixel. As a push-frame imager on a rotating spacecraft, JunoCam uses time-delayed integration to take advantage of the spacecraft spin to extend integration time to increase signal. Images of Jupiter's poles reveal a largely uncharted region of Jupiter, as nearly all earlier spacecraft have orbited or flown by in the equatorial plane. Most of the images of Jupiter will be acquired in the +/-2 hours surrounding closest approach. The polar vortex, polar cloud morphology, and winds will be investigated. RGB color images of the aurora will be acquired if detectable. Stereo images and images taken with the methane filter will allow us to estimate cloud-top heights. Images of the cloud-tops will aid in understanding the data collected by other instruments on Juno that probe deeper in the atmosphere. During the two months that Jupiter is too close to the sun for ground-based observers to collect data, JunoCam will take images routinely to monitor large-scale features. Occasional, opportunistic images of the Galilean moons will be acquired.
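Time-delayed integration on a spinning spacecraft amounts to co-adding successive short exposures after shifting each one back by the known line rate, so signal accumulates without smearing. An illustrative sketch (the integer shift-per-frame bookkeeping is a simplification of JunoCam's actual push-frame readout):

```python
import numpy as np

def tdi_coadd(frames, line_shift_per_frame):
    """Co-add push-frame exposures with time-delayed integration.

    Each successive short exposure sees the scene moved by a fixed
    number of detector lines (here, from spacecraft spin); shifting each
    frame back before summing builds up signal without smearing.
    """
    acc = np.zeros_like(frames[0], dtype=float)
    for i, frame in enumerate(frames):
        acc += np.roll(frame, -i * line_shift_per_frame, axis=0)
    return acc
```

With N co-added exposures the signal grows by N while read noise grows only as sqrt(N), which is the point of the technique on a fast-rotating platform.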
NASA Astrophysics Data System (ADS)
Salmon, Neil A.
2017-10-01
Aperture synthesis for passive millimetre-wave imaging provides a means to screen people for concealed threats in the extreme near-field configuration of a portal, a regime where the imager-to-subject distance is of the order of both the required depth of field and the field of view. Due to optical aberrations, focal-plane-array imagers cannot deliver the large depth of field and field of view required in this regime. Active sensors can deliver these but face challenges of illumination, speckle, and multi-path when imaging canyon regions of the body. An aperture synthesis passive millimetre-wave imaging system, by contrast, can deliver a large depth of field and field of view with no speckle effects, as the radiometric emission from the human body is spatially incoherent. Furthermore, since in portal security screening scenarios the aperture synthesis imaging technique delivers a half-wavelength spatial resolution, it can effectively screen the whole of the human body. Recent measurements are presented that demonstrate the three-dimensional imaging capability of extended sources using a 22 GHz aperture synthesis system. A comparison is made between imagery generated via the analytic Fourier transform and a gridding fast Fourier transform method; the analytic Fourier transform enables aliasing in the imagery to be identified more clearly. Initial results are also presented on how the Gerchberg technique, an image-enhancement algorithm used in radio astronomy, is adapted for three-dimensional imaging in security screening. This technique is shown to improve the quality of imagery without adding extra receivers to the imager. The requirements of a walk-through security screening system for use at entrances to airport departure lounges are discussed, concluding that these can be met by an aperture synthesis imager.
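The analytic Fourier transform mentioned above evaluates the image directly from the visibility samples — slower than a gridded FFT, but free of gridding artifacts, which is why it makes aliasing easier to identify. A minimal 1D sketch under idealized assumptions (far-field geometry, noiseless visibilities, no primary beam):

```python
import numpy as np

def dirty_image(u, vis, l_grid):
    """Form a 1D 'dirty image' from visibility samples by a direct
    (analytic) Fourier transform, as contrasted with gridded FFTs.

    u: baseline coordinates in wavelengths; vis: complex visibilities;
    l_grid: direction cosines at which to evaluate the sky brightness.
    """
    # sum of plane waves: I(l) = Re sum_k V(u_k) exp(2*pi*i*u_k*l) / N
    phase = 2j * np.pi * np.outer(l_grid, u)
    return (np.exp(phase) @ vis).real / len(u)
```

A gridding FFT approximates the same sum by binning visibilities onto a regular grid; the direct sum costs O(N_vis × N_pixels) but evaluates the image at arbitrary positions, including the three-dimensional near-field grids used for portal screening.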
Thermalnet: a Deep Convolutional Network for Synthetic Thermal Image Generation
NASA Astrophysics Data System (ADS)
Kniaz, V. V.; Gorbatsevich, V. S.; Mizginov, V. A.
2017-05-01
Deep convolutional neural networks have dramatically changed the landscape of modern computer vision. Methods based on deep neural networks now show the best performance among image recognition and object detection algorithms. While the refinement of network architectures has received a great deal of scholarly attention, from a practical point of view the preparation of a large image dataset for successful training of a neural network has become one of the major challenges. This challenge is particularly profound for image recognition in wavelengths outside the visible spectrum: for example, no infrared or radar image datasets large enough for successful training of a deep neural network are publicly available to date. Recent advances show that deep neural networks are also capable of arbitrary image transformations such as super-resolution image generation, grayscale image colorisation, and imitation of the style of a given artist. Thus a natural question arises: how could deep neural networks be used to augment existing large image datasets? This paper focuses on the development of the Thermalnet deep convolutional neural network for augmenting existing large visible-image datasets with synthetic thermal images. The Thermalnet network architecture is inspired by colorisation deep neural networks.
NASA Technical Reports Server (NTRS)
2006-01-01
15 April 2006 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a mid-summer view of a portion of the south polar residual cap of Mars. The large, relatively flat-lying, puzzle-like pieces in this scene are mesas composed largely of solid carbon dioxide. Location near: 85.5°S, 76.8°W Image width: 3 km (1.9 mi) Illumination from: upper left Season: Southern Summer
GreenView and GreenLand Applications Development on SEE-GRID Infrastructure
NASA Astrophysics Data System (ADS)
Mihon, Danut; Bacu, Victor; Gorgan, Dorian; Mészáros, Róbert; Gelybó, Györgyi; Stefanut, Teodor
2010-05-01
The GreenView and GreenLand applications [1] have been developed through the SEE-GRID-SCI (SEE-GRID eInfrastructure for regional eScience) FP7 project co-funded by the European Commission [2]. The development of environmental applications is a challenge for Grid technologies and software development methodologies. This presentation exemplifies the development of the GreenView and GreenLand applications over the SEE-GRID infrastructure using the Grid Application Development Methodology [3]. Today's environmental applications are used in various domains of Earth science, such as meteorology, ground and atmospheric pollution, ground metal detection, and weather prediction. These applications operate on satellite images (e.g. Landsat, MERIS, MODIS), and the accuracy of the output depends mostly on the quality of these images. The main drawback of such environmental applications is their need for computational power and storage (some images are almost 1 GB in size) in order to process such a large data volume. Most applications requiring high computational resources have therefore been migrated onto the Grid infrastructure, which provides computing power by running the atomic application components on different Grid nodes in sequential or parallel mode. The middleware between the Grid infrastructure and client applications is ESIP (Environment Oriented Satellite Image Processing Platform), which is based on the gProcess platform [4]. In its current form, gProcess is used for launching new processes on the Grid nodes and for monitoring their execution status. This presentation highlights two case studies of Grid-based environmental applications, GreenView and GreenLand [5].
GreenView is used together with MODIS (Moderate Resolution Imaging Spectroradiometer) satellite images and meteorological datasets to produce pseudo-colored temperature and vegetation maps for different geographical regions of Central Eastern Europe (CEE). GreenLand, on the other hand, generates maps of different vegetation indices (e.g. NDVI, EVI, SAVI, GEMI) from Landsat satellite images. Both applications use interpolation and random-value generation algorithms, as well as specific formulas for computing vegetation index values. The GreenView and GreenLand applications have been tested on the SEE-GRID infrastructure, and the performance evaluation is reported in [6]. Future work aims at improving execution time (through better parallelization of jobs), extending the geographical coverage to other parts of the Earth, and adding new user interaction techniques for spatial data and large sets of satellite images. References [1] GreenView application on Wiki, http://wiki.egee-see.org/index.php/GreenView [2] SEE-GRID-SCI Project, http://www.see-grid-sci.eu/ [3] Gorgan D., Stefanut T., Bâcu V., Mihon D., Grid based Environment Application Development Methodology, SCICOM, 7th International Conference on "Large-Scale Scientific Computations", 4-8 June 2009, Sozopol, Bulgaria (to be published by Springer) (2009). [4] Gorgan D., Bacu V., Stefanut T., Rodila D., Mihon D., Grid based Satellite Image Processing Platform for Earth Observation Applications Development. IDAACS'2009 - IEEE Fifth International Workshop on "Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications", 21-23 September 2009, Cosenza, Italy, IEEE Computer Press, 247-252 (2009). [5] Mihon D., Bacu V., Stefanut T., Gorgan D., "Grid Based Environment Application Development - GreenView Application". ICCP2009 - IEEE 5th International Conference on Intelligent Computer Communication and Processing, 27 Aug 2009, Cluj-Napoca. IEEE Computer Press, pp. 275-282 (2009). [6] Mihon D., Bacu V., Gorgan D., Mészáros R., Gelybó G., Stefanut T., Practical Considerations on the GreenView Application Development and Execution over SEE-GRID. SEE-GRID-SCI User Forum, 9-10 Dec 2009, Bogazici University, Istanbul, Turkey, ISBN: 978-975-403-510-0, pp. 167-175 (2009).
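The vegetation indices that GreenLand-type tools compute from Landsat bands are simple per-pixel band combinations. A sketch of NDVI, the most common of the indices named above (the `eps` guard against division by zero is an implementation choice, not part of the index definition):

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """Normalized Difference Vegetation Index from near-infrared and
    red reflectance bands: (NIR - Red) / (NIR + Red).

    Values near +1 indicate dense vegetation; bare soil and water give
    values near or below zero.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

Because the computation is independent per pixel (and per image tile), it parallelizes trivially across Grid nodes, which matches the job-splitting approach the presentation describes.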
Large area, label-free imaging of extracellular matrix using telecentricity
NASA Astrophysics Data System (ADS)
Visbal Onufrak, Michelle A.; Konger, Raymond L.; Kim, Young L.
2017-02-01
Subtle alterations in stromal tissue structures and organization within the extracellular matrix (ECM) have been observed in several types of tissue abnormality, including early skin cancer and wounds. Current microscopic imaging methods often lack the ability to accurately determine the extent of malignancy over a large area, due to their limited field of view. In this research we focus on the development of a simple mesoscopic (i.e. between microscopic and macroscopic) biomedical imaging device for non-invasive assessment of ECM alterations over a large, heterogeneous area. In our technology development, a telecentric lens, commonly used in machine vision systems but rarely in biomedical imaging, serves as the key platform for visualizing alterations in tissue microenvironments in a label-free manner over a clinically relevant area. In general, telecentric imaging is a simple alternative method for reducing unwanted scattered or diffuse light caused by the highly anisotropic scattering properties of biological tissue. In particular, under telecentric imaging the light intensity backscattered from biological tissue is mainly sensitive to the scattering anisotropy factor, possibly associated with the ECM. We demonstrate the inherent advantages of combining telecentric lens systems with hyperspectral imaging for providing optical information on tissue scattering in murine models, as well as on light absorption of hemoglobin in blood-vessel tissue phantoms. We thus envision that telecentric imaging could serve for simple, site-specific, tissue-based assessment of stromal alterations over a clinically relevant field of view in a label-free manner, for studying diseases associated with disruption of homeostasis in the ECM.
A Sensitive Measurement for Estimating Impressions of Image-Contents
NASA Astrophysics Data System (ADS)
Sato, Mie; Matouge, Shingo; Mori, Toshifumi; Suzuki, Noboru; Kasuga, Masao
We have investigated Kansei Content, which conveys the maker's intention to the viewer's kansei (sensibility). The semantic differential (SD) method is a very good way to evaluate subjective impressions of image content. However, because the SD method is applied after subjects view the content, it is difficult to examine impressions of detailed scenes in real time. To measure viewers' impressions of image content in real time, we have developed a Taikan sensor. With the Taikan sensor, we investigate relations among the image content, grip strength, and body temperature. We also explore the sensor's interface to make it easy to use. In our experiment, a horror movie that strongly affects the subjects' emotions was used. Our results show that grip strength may increase when subjects view a tense scene, and that the Taikan sensor is easy to use without the circular base originally installed.
Computer vision research with new imaging technology
NASA Astrophysics Data System (ADS)
Hou, Guangqi; Liu, Fei; Sun, Zhenan
2015-12-01
Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both the intensity values and the directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods perform well only on strongly textured surfaces; the depth map contains numerous holes and large ambiguities on textureless or low-textured regions. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. The depth map is then estimated through the epipolar plane image (EPI) method. Finally, a high-quality 3D face model is recovered via a fusion strategy. We evaluate the effectiveness and robustness on face images captured by a light field camera in different poses.
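Digital refocusing, one of the light-field applications named above, can be sketched as shift-and-sum over sub-aperture views; the EPI depth estimation the paper uses exploits the same parallax structure. This toy version uses integer pixel shifts (real implementations interpolate sub-pixel shifts; the function name and parameterization are illustrative):

```python
import numpy as np

def refocus(subapertures, offsets, alpha):
    """Digitally refocus a light field by shift-and-sum of sub-aperture
    views: each view is translated in proportion to its angular offset.

    subapertures: list of 2D images; offsets: list of (du, dv) angular
    positions on the aperture; alpha selects the synthetic focal plane.
    """
    acc = np.zeros_like(subapertures[0], dtype=float)
    for img, (du, dv) in zip(subapertures, offsets):
        shift = (int(round(alpha * du)), int(round(alpha * dv)))
        acc += np.roll(img, shift, axis=(0, 1))
    return acc / len(subapertures)
```

Points whose parallax matches `alpha` add up coherently and appear sharp; everything else is averaged out — sweeping `alpha` is exactly how a depth-from-focus or EPI-slope cue arises.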
Integrated large view angle hologram system with multi-slm
NASA Astrophysics Data System (ADS)
Yang, ChengWei; Liu, Juan
2017-10-01
Recently, holographic display has attracted much attention for its ability to generate real-time 3D reconstructed images. CGH provides an effective way to produce holograms, and a spatial light modulator (SLM) is used to reconstruct the image. However, the reconstruction system is usually heavy and complex, and the viewing angle is limited by the pixel size and space-bandwidth product (SBP) of the SLM. In this paper, a lightweight, portable holographic display system is proposed by integrating the optical elements and host computer units, which significantly reduces the space occupied in the horizontal direction. The CGH is produced based on Fresnel diffraction and the point-source method. To reduce memory usage and image distortion, we use an optimized accurate compressed look-up table (AC-LUT) method to compute the hologram. In the system, six SLMs are concatenated into a curved plane, each loading a phase-only hologram for a different angle of the object; the horizontal viewing angle of the reconstructed image can be expanded to about 21.8°.
AlJaroudi, Wael A; Einstein, Andrew J; Chaudhry, Farooq A; Lloyd, Steven G; Hage, Fadi G
2015-04-01
A large number of studies were presented at the 2014 American Heart Association Scientific Sessions. In this review, we will summarize key studies in nuclear cardiology, computed tomography, echocardiography, and cardiac magnetic resonance imaging. This brief review will be helpful for readers of the Journal who are interested in being updated on the latest research covering these imaging modalities.
NASA Technical Reports Server (NTRS)
1997-01-01
These two views of Io were acquired by NASA's Galileo spacecraft during its seventh orbit (G7) of Jupiter. The images were designed to view large features on Io at low sun angles when the lighting conditions emphasize the topography or relief of the volcanic satellite. Sun angles are low near the terminator which is the day-night boundary near the left side of the images. These images reveal that the topography is very flat near the active volcanic centers such as Loki Patera (the large dark horseshoe-shaped feature near the terminator in the left-hand image) and that a variety of mountains and plateaus exist elsewhere.
North is to the top of the picture. The resolution is about 6 kilometers per picture element (6.1 for the left-hand image and 5.7 for the right). The images were taken on April 4th, 1997 at ranges of 600,000 kilometers (left image) and 563,000 kilometers (right image) by the solid state imaging (CCD) system on NASA's Galileo spacecraft. The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC. This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo
NASA Astrophysics Data System (ADS)
Cepa, J.; Alfaro, E. J.; Castañeda, H. O.; Gallego, J.; González-Serrano, J. I.; González, J. J.; Jones, D. H.; Pérez-García, A. M.; Sánchez-Portal, M.
2007-06-01
OSIRIS is the Spanish Day One instrument for the GTC 10.4-m telescope. OSIRIS is a general purpose instrument for imaging, low-resolution long slit and multi-object spectroscopy (MOS). OSIRIS has a field of view of 8.6×8.6 arcminutes, which makes it ideal for deep surveys, and operates in the optical wavelength range from 365 through 1000nm. The main characteristic that makes OSIRIS unique amongst other instruments in 8-10m class telescopes is the use of Tunable Filters (Bland-Hawthorn & Jones 1998). These allow a continuous selection of both the central wavelength and the width, thus providing scanning narrow band imaging within the OSIRIS wavelength range. The combination of the large GTC aperture, large OSIRIS field of view and availability of the TFs makes OTELO a truly unique emission line survey.
Detection of large-scale concentric gravity waves from a Chinese airglow imager network
NASA Astrophysics Data System (ADS)
Lai, Chang; Yue, Jia; Xu, Jiyao; Yuan, Wei; Li, Qinzeng; Liu, Xiao
2018-06-01
Concentric gravity waves (CGWs) contain a broad spectrum of horizontal wavelengths and periods due to their instantaneous, localized sources (e.g., deep convection, volcanic eruptions, or earthquakes). However, it is difficult to observe large-scale gravity waves of >100 km wavelength from the ground, owing to the limited field of view of a single camera and local bad weather. Previously, complete large-scale CGW imagery could only be captured by satellite observations. In the present study, we developed a novel method that assembles separate images and applies low-pass filtering to obtain temporal and spatial information about complete large-scale CGWs from a network of all-sky airglow imagers. Coordinated observations from five all-sky airglow imagers in Northern China were assembled and processed to study large-scale CGWs over a wide area (1800 km × 1400 km), focusing on the same two CGW events as Xu et al. (2015). Our algorithms yielded images of large-scale CGWs by filtering out the small-scale CGWs. The wavelengths, wave speeds, and periods of the CGWs were measured from a sequence of consecutive assembled images. Overall, the assembling and low-pass filtering algorithms can expand the airglow imager network to its full capacity regarding the detection of large-scale gravity waves.
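The paper does not specify its filter kernel, but separating large-scale from small-scale waves can be sketched as an FFT low-pass with a cutoff at 100 km horizontal wavelength (the grid spacing and cutoff used here are assumptions):

```python
import numpy as np

def lowpass_large_scale(image, pixel_km, cutoff_km=100.0):
    # Keep only horizontal wavelengths longer than cutoff_km by zeroing
    # all Fourier components with radial frequency above 1/cutoff_km.
    ny, nx = image.shape
    fy = np.fft.fftfreq(ny, d=pixel_km)   # cycles per km
    fx = np.fft.fftfreq(nx, d=pixel_km)
    fr = np.sqrt(fx[None, :]**2 + fy[:, None]**2)
    mask = fr <= 1.0 / cutoff_km          # pass wavelengths > cutoff_km
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mask))
```

Applied to a stitched all-sky mosaic, this suppresses the small-scale CGW ripples while leaving the >100 km wavefronts intact.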
Recent Developments in VSD Imaging of Small Neuronal Networks
ERIC Educational Resources Information Center
Hill, Evan S.; Bruno, Angela M.; Frost, William N.
2014-01-01
Voltage-sensitive dye (VSD) imaging is a powerful technique that can provide, in single experiments, a large-scale view of network activity unobtainable with traditional sharp electrode recording methods. Here we review recent work using VSDs to study small networks and highlight several results from this approach. Topics covered include circuit…
More than Memories: Studying Home Movies and the Families Who Made Them
ERIC Educational Resources Information Center
Uhrich, Andy
2008-01-01
Unfairly viewed as poorly made and unwatchable, home movies actually constitute a wide variety of events and images that make them an invaluable and largely unexplored resource for scholars and researchers. The images captured in home movies--first birthdays, parades, vacations, family gatherings, etc.--were originally made by family members to…
Observatories Combine to Crack Open the Crab Nebula
2017-12-08
Astronomers have produced a highly detailed image of the Crab Nebula, by combining data from telescopes spanning nearly the entire breadth of the electromagnetic spectrum, from radio waves seen by the Karl G. Jansky Very Large Array (VLA) to the powerful X-ray glow as seen by the orbiting Chandra X-ray Observatory. And, in between that range of wavelengths, the Hubble Space Telescope's crisp visible-light view, and the infrared perspective of the Spitzer Space Telescope. This video starts with a composite image of the Crab Nebula, a supernova remnant that was assembled by combining data from five telescopes spanning nearly the entire breadth of the electromagnetic spectrum: the Very Large Array, the Spitzer Space Telescope, the Hubble Space Telescope, the XMM-Newton Observatory, and the Chandra X-ray Observatory. The video dissolves to the red-colored radio-light view that shows how a fierce “wind” of charged particles from the central neutron star energized the nebula, causing it to emit the radio waves. The yellow-colored infrared image includes the glow of dust particles absorbing ultraviolet and visible light. The green-colored Hubble visible-light image offers a very sharp view of hot filamentary structures that permeate this nebula. The blue-colored ultraviolet image and the purple-colored X-ray image show the effect of an energetic cloud of electrons driven by a rapidly rotating neutron star at the center of the nebula. Read more: go.nasa.gov/2r0s8VC Credits: NASA, ESA, J. DePasquale (STScI)
Recent developments in VSD imaging of small neuronal networks
Hill, Evan S.; Bruno, Angela M.
2014-01-01
Voltage-sensitive dye (VSD) imaging is a powerful technique that can provide, in single experiments, a large-scale view of network activity unobtainable with traditional sharp electrode recording methods. Here we review recent work using VSDs to study small networks and highlight several results from this approach. Topics covered include circuit mapping, network multifunctionality, the network basis of decision making, and the presence of variably participating neurons in networks. Analytical tools being developed and applied to large-scale VSD imaging data sets are discussed, and the future prospects for this exciting field are considered. PMID:25225295
NASA Astrophysics Data System (ADS)
Li, Zhengji; Teng, Qizhi; He, Xiaohai; Yue, Guihua; Wang, Zhengyong
2017-09-01
The parameter evaluation of reservoir rocks can help us identify components and calculate permeability and other parameters, and it plays an important role in the petroleum industry. Until now, computed tomography (CT) has remained an irreplaceable way to acquire the microstructure of reservoir rocks. During evaluation and analysis, large samples and high-resolution images are required in order to obtain accurate results. Owing to the inherent limitations of CT, however, a large field of view results in low-resolution images, and high-resolution images entail a smaller field of view. Our method is a promising solution to these data collection limitations. In this study, a framework for sparse-representation-based 3D volumetric super-resolution is proposed to enhance the resolution of 3D voxel images of reservoirs scanned with CT. A single reservoir structure and its downgraded model are divided into a large number of 3D cube voxel pairs, and these cube pairs are used to calculate two overcomplete dictionaries and the sparse-representation coefficients in order to estimate the high-frequency component. Furthermore, to obtain better results, a new feature extraction method combining BM4D with a Laplacian filter is introduced. In addition, we conducted a visual evaluation of the method, and used the PSNR and FSIM to evaluate it quantitatively.
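Of the two metrics used for evaluation, PSNR is straightforward to state; a minimal sketch (FSIM is considerably more involved and omitted here):

```python
import numpy as np

def psnr(reference, estimate, max_val=255.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE),
    # where MSE is the mean squared error against the reference volume.
    diff = reference.astype(np.float64) - estimate.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For 3D voxel data the same formula applies unchanged, since the mean runs over every voxel of the volume.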
NASA Technical Reports Server (NTRS)
West, E. A.
1993-01-01
Magnetographs, which measure polarized light, allow solar astronomers to infer the magnetic field intensity on the Sun. The Marshall Space Flight Center (MSFC) Vector Magnetograph is such an imaging instrument. The instrument requires rapid modulation between polarization states to minimize seeing effects. The accuracy of those polarization measurements is dependent on stable modulators with small field-of-view errors. Although these devices are very important in ground-based telescopes, extending the field of view of electro-optical crystals such as KD*Ps (potassium di-deuterium phosphate) could encourage the development of these devices for other imaging applications. The work that was done at MSFC as part of the Center Director's Discretionary Fund (CDDF) to reduce the field-of-view errors of instruments that use KD*P modulators in their polarimeters is described.
2006-06-30
This MOC image shows dunes in the north polar region of Mars. In this springtime view, the dunes are largely covered by frozen carbon dioxide that was deposited during the winter months in the northern hemisphere
Li, Jianqi; Wang, Yi; Jiang, Yu; Xie, Haibin; Li, Gengying
2009-09-01
An open permanent magnet system with a vertical B(0) field and without self-shielding can be quite susceptible to perturbations from external magnetic sources. The B(0) variation in such a system located close to a subway station was measured to be greater than 0.7 microT by both MRI and a fluxgate magnetometer. This B(0) variation caused image artifacts. A navigator echo approach that monitored and compensated for the view-to-view variation in magnetic resonance signal phase was developed to correct these artifacts. Human brain imaging experiments using a multislice gradient-echo sequence demonstrated that the ghosting and blurring artifacts associated with B(0) variations were effectively removed using the navigator method.
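The navigator correction amounts to removing a per-view phase from k-space before reconstruction. A minimal sketch of that step (the array layout is an assumption; the paper's full pipeline also includes acquiring the navigator echo itself):

```python
import numpy as np

def correct_views(kspace, navigator_phase):
    # kspace: (n_views, n_readout) complex array, one row per phase-encode view.
    # navigator_phase: per-view phase (radians) measured by the navigator echo.
    # Multiplying each view by the conjugate phase undoes the B0-drift term.
    return kspace * np.exp(-1j * navigator_phase)[:, None]
```

After correction, the views share a consistent phase reference, which is what suppresses the ghosting and blurring along the phase-encode direction.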
Fukuta, Shoji; Tsutsui, Takahiko; Amari, Rui; Wada, Keizo; Sairyo, Koichi
2016-07-01
Muscle atrophy and fatty degeneration of the rotator cuff muscles have been reported as negative prognostic indicators after rotator cuff repair. Although the Y-shaped view is widely used for measuring the cross-sectional area of the supraspinatus muscle, the contribution of retraction of the torn tendon as well as muscle atrophy must be considered. The purpose of this study was to clarify the relationship between cross-sectional area and tendon retraction or size of the tear. This study included 76 shoulders that were evaluated arthroscopically for the presence and size of tears. Cross-sectional areas of rotator cuff muscles were measured from the Y-shaped view to 3 more medial slices. The occupation ratio and tangent sign were evaluated on the Y-shaped view. The retraction of torn tendon was also measured on the oblique coronal images. On the Y-shaped view, the cross-sectional area of the supraspinatus and the occupation ratio decreased in conjunction with the increase in tear size. A significant decrease in cross-sectional area was noted only in large and massive tears on more medial slices from the Y-shaped view. Significant decreases in the cross-sectional area of the infraspinatus were observed in large and massive tears on all images. A negative correlation was found between tendon retraction and cross-sectional area, which was strongest on the Y-shaped view. To avoid the influence of retraction of the supraspinatus tendon, sufficient medial slices from the musculotendinous junction should be used for evaluation of muscle atrophy. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Perspective View, SRTM / Landsat, Los Angeles, Calif
NASA Technical Reports Server (NTRS)
2002-01-01
Los Angeles, Calif., is one of the world's largest metropolitan areas, with a population of about 15 million people. The urban areas mostly cover the coastal plains and lie within the inland valleys. The intervening and adjacent mountains are generally too rugged for much urban development, in large part because the mountains are 'young', meaning they are still building (and eroding) in this seismically active (earthquake-prone) region. Earthquake faults commonly lie between the mountains and the lowlands. The San Andreas fault, the largest fault in California, likewise divides the very rugged San Gabriel Mountains from the low-relief Mojave Desert, thus forming a straight topographic boundary between the top center and lower right corner of the image. We present two versions of this perspective image from NASA's Shuttle Radar Topography Mission (SRTM): one with and one without a graphic overlay that maps faults that have been active in Late Quaternary times (white lines). The fault database was provided by the U.S. Geological Survey. The Landsat image used here was acquired on May 4, 2001, about seven weeks before the summer solstice, so natural terrain shading is not particularly strong. It is also not especially apparent given a view direction (northwest) nearly parallel to the sun illumination (shadows generally fall on the backsides of mountains). Consequently, topographic shading derived from the SRTM elevation model was added to the Landsat image, with a false sun illumination from the left (southwest). This synthetic shading enhances the appearance of the topography. Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data matches the 30-meter (98-foot) resolution of most Landsat images and substantially helps in analyzing the large and growing Landsat image archive.
This Landsat 7 Thematic Mapper image was provided to the SRTM project by the United States Geological Survey, Earth Resources Observation Systems (EROS) Data Center, Sioux Falls, S.D. Elevation data used in this image was acquired by the SRTM aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C. Size: View width 134 kilometers (83 miles); view distance 150 kilometers (93 miles) Location: 34.3 degrees North latitude, 118.4 degrees West longitude Orientation: View west-northwest, 1.8 X vertical exaggeration Image Data: Landsat Bands 3, 2+4, 1 as red, green, blue, respectively Original Data Resolution: SRTM 1 arcsecond (30 meters or 98 feet), Landsat 30 meters (98 feet) Graphic Data: earthquake faults active in Late Quaternary times Date Acquired: February 2000 (SRTM), May 4, 2001 (Landsat).
Optical system design of CCD star sensor with large aperture and wide field of view
NASA Astrophysics Data System (ADS)
Wang, Chao; Jiang, Lun; Li, Ying-chao; Liu, Zhuang
2017-10-01
The star sensor is one of the sensors used to determine the spatial attitude of a space vehicle. An optical system for a star sensor with a large aperture and wide field of view was designed in this paper. The effective focal length of the optics is 16 mm, the F-number is 1.2, and the field of view of the optical system is 20°. The working spectrum is 500 to 800 nm. The lens system adopts a modified, more complex Petzval structure and a special glass combination, and achieves high imaging quality over the whole spectral range. For each field-of-view point, the value of the modulation transfer function at 50 cycles/mm is higher than 0.3. On the detecting plane, the encircled energy in a circle of 14 μm diameter reaches 80% of the total energy. Over the whole field of view, the spot diameter in the imaging plane is no larger than 13 μm. The full-field distortion is less than 0.1%, which helps obtain the accurate location of the reference star from the picture taken by the star sensor. The lateral chromatic aberration is less than 2 μm over the whole spectral range.
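The quoted focal length and field of view are mutually consistent under the usual thin-lens relation; a quick check (the ~5.64 mm detector extent below is inferred from the stated numbers, not given in the paper):

```python
import math

def full_fov_deg(detector_size_mm, focal_length_mm):
    # Full field of view across a detector of extent d behind a lens of
    # focal length f: FOV = 2 * arctan(d / (2 * f)).
    return 2.0 * math.degrees(math.atan(detector_size_mm / (2.0 * focal_length_mm)))

# With f = 16 mm (from the paper), an assumed detector extent of 5.64 mm
# reproduces the stated 20 degree field of view.
fov = full_fov_deg(5.64, 16.0)
```

The same relation can be inverted at design time to size the detector for a required field of view.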
NASA Astrophysics Data System (ADS)
Kwee, Edward; Peterson, Alexander; Stinson, Jeffrey; Halter, Michael; Yu, Liya; Majurski, Michael; Chalfoun, Joe; Bajcsy, Peter; Elliott, John
2018-02-01
Induced pluripotent stem cells (iPSCs) are reprogrammed cells that can have heterogeneous biological potential. Quality assurance metrics for reprogrammed iPSCs will be critical to ensure reliable use in cell therapies and personalized diagnostic tests. We present a quantitative phase imaging (QPI) workflow which includes acquiring, processing, and stitching multiple adjacent image tiles across a large field of view (LFOV) of a culture vessel. Low-magnification image tiles (10x) were acquired with a Phasics SID4BIO camera on a Zeiss microscope. iPSC cultures were maintained using a custom stage incubator on an automated stage. We implement an image acquisition strategy that compensates for non-flat illumination wavefronts to enable imaging of an entire well plate, including the meniscus region normally obscured in Zernike phase contrast imaging. Polynomial fitting and background mode correction were implemented to enable comparability and stitching between multiple tiles. LFOV imaging of reference materials indicated that the image acquisition and processing strategies did not affect quantitative phase measurements across the LFOV. Analysis of iPSC colony images demonstrated that mass doubling time was significantly different from area doubling time. These measurements were benchmarked with prototype microsphere beads and etched-glass gratings with specified spatial dimensions, designed to be QPI reference materials with optical path-length shifts suitable for cell microscopy. This QPI workflow and the use of reference materials can provide a non-destructive, traceable imaging method for characterizing iPSC heterogeneity.
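The polynomial background fit used to make tiles comparable before stitching can be sketched as a least-squares fit of a low-order 2D polynomial to each phase tile (the basis, order, and normalization here are assumptions, not the paper's exact procedure):

```python
import numpy as np

def remove_poly_background(phase, order=2):
    # Fit a 2D polynomial of total degree <= order to the tile by least
    # squares, then subtract it to flatten the illumination wavefront.
    ny, nx = phase.shape
    yg, xg = np.mgrid[0:ny, 0:nx]
    x = xg.ravel() / nx            # normalized coordinates for conditioning
    y = yg.ravel() / ny
    cols = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, phase.ravel(), rcond=None)
    return phase - (A @ coeffs).reshape(ny, nx)
```

Once each tile's smooth background is removed, neighboring tiles share a common baseline and can be stitched without step artifacts at the seams.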
Skeletal camera network embedded structure-from-motion for 3D scene reconstruction from UAV images
NASA Astrophysics Data System (ADS)
Xu, Zhihua; Wu, Lixin; Gerke, Markus; Wang, Ran; Yang, Huachao
2016-11-01
Structure-from-Motion (SfM) techniques have been widely used for 3D scene reconstruction from multi-view images. However, due to the large computational costs of SfM methods there is a major challenge in processing highly overlapping images, e.g. images from unmanned aerial vehicles (UAV). This paper embeds a novel skeletal camera network (SCN) into SfM to enable efficient 3D scene reconstruction from a large set of UAV images. First, the flight control data are used within a weighted graph to construct a topologically connected camera network (TCN) to determine the spatial connections between UAV images. Second, the TCN is refined using a novel hierarchical degree bounded maximum spanning tree to generate a SCN, which contains a subset of edges from the TCN and ensures that each image is involved in at least a 3-view configuration. Third, the SCN is embedded into the SfM to produce a novel SCN-SfM method, which allows performing tie-point matching only for the actually connected image pairs. The proposed method was applied in three experiments with images from two fixed-wing UAVs and an octocopter UAV, respectively. In addition, the SCN-SfM method was compared to three other methods for image connectivity determination. The comparison shows a significant reduction in the number of matched images if our method is used, which leads to less computational costs. At the same time the achieved scene completeness and geometric accuracy are comparable.
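The degree-bounded maximum spanning tree at the heart of the SCN can be sketched with a greedy Kruskal-style heuristic (the paper's construction is hierarchical and more elaborate; this only illustrates the degree-capped selection of heavy edges):

```python
def degree_bounded_mst(n, edges, max_degree=3):
    # Greedy heuristic: take edges in decreasing weight order, accept an
    # edge if it joins two components AND neither endpoint has reached the
    # degree cap. edges: (weight, u, v) tuples over nodes 0..n-1.
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    degree = [0] * n
    tree = []
    for w, u, v in sorted(edges, reverse=True):   # heaviest edges first
        ru, rv = find(u), find(v)
        if ru != rv and degree[u] < max_degree and degree[v] < max_degree:
            parent[ru] = rv
            degree[u] += 1
            degree[v] += 1
            tree.append((w, u, v))
    return tree
```

In the SfM setting the edge weights would be image-pair connectivity scores, so the surviving edges are the image pairs on which tie-point matching is actually performed.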
Thin and Slow Smoke Detection by Using Frequency Image
NASA Astrophysics Data System (ADS)
Zheng, Guang; Oe, Shunitiro
In this paper, a new method to detect thin and slow smoke for early fire alarm using a frequency image is proposed. The correlation coefficients of the frequency image between the current stage and the initial stage are calculated, as are the gray-level correlation coefficients of the color image. When thin smoke, close to transparent, enters the camera view, the correlation coefficient of the frequency image becomes small, while the gray-level correlation coefficient of the color image hardly changes and remains large. When something that is not transparent, such as a human being, enters the camera view, the correlation coefficient of the frequency image becomes small, as does that of the color image. Based on the difference in correlation coefficients between the frequency image and the color image in these different situations, thin smoke can be detected. Also, by considering the movement of the thin smoke, misdetection caused by illumination changes or noise can be avoided. Several experiments in different situations were carried out, and the experimental results show the effectiveness of the proposed method.
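The two correlation cues can be sketched as follows, taking the log-magnitude FFT as the "frequency image" (the paper does not specify its exact transform, so that choice is an assumption):

```python
import numpy as np

def freq_image(gray):
    # Log-magnitude spectrum serving as the "frequency image".
    return np.log1p(np.abs(np.fft.fft2(gray)))

def corrcoef2d(a, b):
    # Pearson correlation coefficient between two same-shaped images.
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

def smoke_score(reference_gray, current_gray):
    # Returns (frequency-image correlation, gray-image correlation).
    # Thin smoke blurs fine detail: the first drops while the second
    # stays high, which is the cue the paper exploits.
    return (corrcoef2d(freq_image(reference_gray), freq_image(current_gray)),
            corrcoef2d(reference_gray, current_gray))
```

A detector would flag frames where the frequency correlation falls below a threshold while the gray correlation remains high.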
Vidavsky, Netta; Akiva, Anat; Kaplan-Ashiri, Ifat; Rechav, Katya; Addadi, Lia; Weiner, Steve; Schertel, Andreas
2016-12-01
Many important biological questions can be addressed by studying in 3D large volumes of intact, cryo-fixed hydrated tissues (≥10,000 μm³) at high resolution (5–20 nm). This can be achieved using serial FIB milling and block-face surface imaging under cryo conditions. Here we demonstrate the unique potential of the cryo-FIB-SEM approach using two extensively studied model systems; sea urchin embryos and the tail fin of zebrafish larvae. We focus in particular on the environment of mineral deposition sites. The cellular organelles, including mitochondria, Golgi, ER, nuclei and nuclear pores are made visible by the image contrast created by differences in surface potential of different biochemical components. Auto segmentation and/or volume rendering of the image stacks and 3D reconstruction of the skeleton and the cellular environment, provides a detailed view of the relative distribution in space of the tissue/cellular components, and thus of their interactions. Simultaneous acquisition of secondary and back-scattered electron images adds additional information. For example, a serial view of the zebrafish tail reveals the presence of electron dense mineral particles inside mitochondrial networks extending more than 20 μm in depth in the block. Large volume imaging using cryo-FIB-SEM, as demonstrated here, can contribute significantly to the understanding of the structures and functions of diverse biological tissues. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Bumstead, Jonathan; Côté, Daniel C.; Culver, Joseph P.
2017-02-01
Spontaneous neuronal activity has been measured at cellular resolution in mice, zebrafish, and C. elegans using optical sectioning microscopy techniques, such as light sheet microscopy (LSM) and two-photon microscopy (TPM). Recent improvements in these modalities and in genetically encoded calcium indicators (GECIs) have enabled whole-brain imaging of calcium dynamics in zebrafish and C. elegans. However, these whole-brain microscopy studies have not been extended to mice due to the limited field of view (FOV) of TPM and the cumbersome geometry of LSM. Conventional TPM is restricted to diffraction-limited imaging over a small FOV (around 500 × 500 microns) due to the use of high-magnification objectives (e.g. 1.0 NA; 20X) and the aberrations introduced by relay optics used in scanning the beam across the sample. To overcome these limitations, we have redesigned the entire optical path of the two-photon microscope (scanning optics and objective lens) to support a field of view of Ø7 mm with relatively high spatial resolution (<10 microns). Using the optical engineering software Zemax, we designed our system with commercially available optics that minimize astigmatism, field curvature, chromatic focal shift, and vignetting. Performance of the system was also tested experimentally with fluorescent beads in agarose, fixed samples, and in vivo structural imaging. Our large-FOV TPM provides a modality capable of studying distributed brain networks in mice at cellular resolution.
An iterative algorithm for soft tissue reconstruction from truncated flat panel projections
NASA Astrophysics Data System (ADS)
Langan, D.; Claus, B.; Edic, P.; Vaillant, R.; De Man, B.; Basu, S.; Iatrou, M.
2006-03-01
The capabilities of flat panel interventional x-ray systems continue to expand, enabling a broader array of medical applications to be performed in a minimally invasive manner. Although CT provides pre-operative 3D information, there is a need for 3D imaging of low-contrast soft tissue during interventions in a number of areas including neurology, cardiac electrophysiology, and oncology. Unlike CT systems, interventional angiographic x-ray systems provide real-time large field of view 2D imaging, patient access, and flexible gantry positioning, enabling interventional procedures. However, relative to CT, these C-arm flat panel systems have additional technical challenges in 3D soft tissue imaging, including slower rotation speed, gantry vibration, reduced lateral patient field of view (FOV), and increased scatter. The reduced patient FOV often results in significant data truncation. Reconstruction of truncated (incomplete) data is known as an "interior problem", and it is mathematically impossible to obtain an exact reconstruction. Nevertheless, addressing it is important for 3D imaging on a C-arm, in order to generate a 3D reconstruction representative of the object being imaged with minimal artifacts. In this work we investigate the application of an iterative Maximum Likelihood Transmission (MLTR) algorithm to truncated data. We also consider truncated data with limited views for cardiac imaging, where the views are gated by the electrocardiogram (ECG) to combat motion artifacts.
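MLTR itself is involved, but the maximum-likelihood principle it iterates on can be shown on a toy, non-truncated problem (this is a plain gradient-ascent sketch of the transmission likelihood, not the authors' algorithm):

```python
import numpy as np

def transmission_ml_reconstruct(A, counts, I0, n_iter=500, step=0.1):
    # Transmission model: expected counts yhat_i = I0 * exp(-(A x)_i),
    # where x holds pixel attenuation values and A is the system matrix.
    # Poisson log-likelihood gradient: dL/dx_j = sum_i A_ij (yhat_i - y_i),
    # so gradient ascent drives the predicted counts toward the data.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        yhat = I0 * np.exp(-A @ x)
        x = np.maximum(x + step * (A.T @ (yhat - counts)) / I0, 0.0)
    return x
```

On truncated C-arm data the likelihood is the same; what changes is that rays leaving the panel are missing, which is what makes the interior problem non-unique.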
Single cell magnetic imaging using a quantum diamond microscope
Park, H.; Weissleder, R.; Yacoby, A.; Lukin, M. D.; Lee, H.; Walsworth, R. L.; Connolly, C. B.
2015-01-01
We apply a quantum diamond microscope to detection and imaging of immunomagnetically labeled cells. This instrument uses nitrogen-vacancy (NV) centers in diamond for correlated magnetic and fluorescence imaging. Our device provides single-cell resolution and a two orders of magnitude larger field of view (~1 mm²) than previous NV imaging technologies, enabling practical applications. To illustrate, we quantify cancer biomarkers expressed by rare tumor cells in a large population of healthy cells. PMID:26098019
SRTM Perspective View with Landsat Overlay: Santa Barbara, California
NASA Technical Reports Server (NTRS)
2001-01-01
Santa Barbara, California, is often called 'America's Riviera.' It enjoys a Mediterranean climate, a mountain backdrop, and a long and varied coastline. This perspective view of the Santa Barbara region was generated using data from the Shuttle Radar Topography Mission (SRTM) and an enhanced Landsat satellite image. The view is toward the northeast, from the Goleta Valley in the foreground to a snow-capped Mount Abel (elevation 2526 m or 8286 feet) along the skyline. The coast here generally faces south. Consequently, Fall and Winter sunrises occur over the ocean, which is unusual for the U.S. west coast. The Santa Barbara 'back country' is very rugged and largely remains as undeveloped wilderness and an important watershed for local communities. Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data match the 30-meter resolution of most Landsat images and will substantially help in analyses of the large and growing Landsat image archive. For visualization purposes, topographic heights displayed in this image are exaggerated two times. Colors approximate natural colors.
The elevation data used in this image was acquired by SRTM aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of Earth's land surface. To collect the 3-D SRTM data, engineers added a mast 60 meters (about 200 feet) long, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense, and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C. JPL is a division of the California Institute of Technology in Pasadena. Location (Isla Vista): 34.41 deg. North lat., 119.85 deg. West lon. View: East Scale: Scale Varies in this Perspective Date Acquired: February 16, 2000 SRTM, December 14, 1984 Landsat
Imaging Doppler lidar for wind turbine wake profiling
Bossert, David J.
2015-11-19
An imaging Doppler lidar (IDL) enables the measurement of the velocity distribution of a large volume, in parallel, and at high spatial resolution in the wake of a wind turbine. Because the IDL is non-scanning, it can be orders of magnitude faster than conventional coherent lidar approaches. Scattering can be obtained from naturally occurring aerosol particles. Furthermore, the wind velocity can be measured directly from Doppler shifts of the laser light, so the measurement can be accomplished at large standoff and at wide fields-of-view.
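The direct velocity measurement mentioned here is the standard lidar Doppler relation; a minimal sketch (the 1.55 µm wavelength is an assumed example value, not from the source):

```python
def doppler_velocity(freq_shift_hz, wavelength_m):
    # Line-of-sight wind speed from the Doppler shift of backscattered
    # laser light. The factor of 2 accounts for the round trip:
    # v = lambda * delta_f / 2.
    return wavelength_m * freq_shift_hz / 2.0

# Hypothetical example: a 12.9 MHz shift at 1.55 um corresponds to ~10 m/s.
v = doppler_velocity(12.9e6, 1.55e-6)
```

Because every pixel of the imaging detector yields its own shift, the same relation applied per pixel gives the volumetric velocity map without scanning.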
MR imaging with remote reception using a coil array
NASA Astrophysics Data System (ADS)
Vazquez, F.; Marrufo, O.; Martin, R.; Rodriguez, A. O.
2012-10-01
A strategy for imaging a large field of view has recently been proposed that applies remote detection with a waveguide and single-loop coils. RF coils produce a traveling wave propagating through the bore of the magnet when the bore is large enough that the cutoff frequency is below the Larmor frequency. This assumption has also been taken to require that a human subject be inside the magnet bore. We applied the traveling-wave concept to generate images of a human leg at 3 Tesla. Two circular coils were used as the reception device and a whole-body coil was used for transmission. Images showed a good signal-to-noise ratio along the entire leg. These experimental results contradict the assumption that a whole-body 7 T/65 cm imager or larger is necessary to generate images with this approach.
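The "bore large enough that the cutoff frequency is below the Larmor frequency" condition can be checked against the TE11 cutoff of a circular waveguide; a sketch (treating the bore as an ideal, uniformly filled cylinder is a simplification, and any permittivity value is an assumption):

```python
import math

C = 299792458.0  # speed of light, m/s

def te11_cutoff_mhz(bore_diameter_m, rel_permittivity=1.0):
    # The first root of J1' (~1.8412) sets the lowest cutoff of a circular
    # waveguide: f_c = 1.8412 * c / (pi * d * sqrt(eps_r)).
    return 1.8412 * C / (math.pi * bore_diameter_m
                         * math.sqrt(rel_permittivity)) / 1e6

empty_bore = te11_cutoff_mhz(0.65)  # roughly 270 MHz for an empty 65 cm bore
```

An empty 65 cm bore cuts off near 270 MHz, below the ~298 MHz proton Larmor frequency at 7 T but far above the ~128 MHz Larmor frequency at 3 T; dielectric loading by the subject lowers the effective cutoff (the sqrt(eps_r) factor), which is consistent with the leg images obtained here at 3 T.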
Biomedical imaging and sensing using flatbed scanners.
Göröcs, Zoltán; Ozcan, Aydogan
2014-09-07
In this Review, we provide an overview of flatbed scanner based biomedical imaging and sensing techniques. The extremely large imaging field-of-view (e.g., ~600–700 cm²) of these devices coupled with their cost-effectiveness provide unique opportunities for digital imaging of samples that are too large for regular optical microscopes, and for collection of large amounts of statistical data in various automated imaging or sensing tasks. Here we give a short introduction to the basic features of flatbed scanners also highlighting the key parameters for designing scientific experiments using these devices, followed by a discussion of some of the significant examples, where scanner-based systems were constructed to conduct various biomedical imaging and/or sensing experiments. Along with mobile phones and other emerging consumer electronics devices, flatbed scanners and their use in advanced imaging and sensing experiments might help us transform current practices of medicine, engineering and sciences through democratization of measurement science and empowerment of citizen scientists, science educators and researchers in resource limited settings.
Improved Image-Guided Laparoscopic Prostatectomy
2012-08-01
prevalent technique used in widening the field of view (FOV) of medical ultrasound images. Also referred to as stitching or panorama, the ultrasound mosaic ... tissue which can add valuable features to the B-mode panorama. Many clinical applications deal with large cancerous lesions which expand beyond the ... (1999) 203-233 2. Varghese, T., Zagzebski, J., Lee Jr., F.: Elastographic imaging of thermal lesions in the liver in vivo following radiofrequency
NASA Technical Reports Server (NTRS)
1995-01-01
The 1100C Virtual Window is based on technology developed under NASA Small Business Innovation Research (SBIR) contracts to Ames Research Center. For example, under one contract Dimension Technologies, Inc. developed a large autostereoscopic display for scientific visualization applications. The Virtual Window employs an innovative illumination system to deliver the depth and color of true 3D imaging. Its applications include surgery and Magnetic Resonance Imaging scans, viewing for teleoperated robots, training, and aviation cockpit displays.
Custom Super-Resolution Microscope for the Structural Analysis of Nanostructures
2018-05-29
research community. As part of our validation of the new design approach, we performed two-color imaging of pairs of adjacent oligo probes hybridized ... nanostructures and biological targets. Our microscope features a large field of view and custom optics that facilitate 3D imaging and enhanced contrast in ... our imaging throughput by creating two microscopy platforms for high-throughput, super-resolution materials characterization, with the AO set-up being
The composite classification problem in optical information processing
NASA Technical Reports Server (NTRS)
Hall, Eric B.
1995-01-01
Optical pattern recognition allows objects to be recognized from their images and permits their positional parameters to be estimated accurately in real time. The guiding principle behind optical pattern recognition is that a lens focusing a beam of coherent light modulated with an image produces the two-dimensional Fourier transform of that image. When the resulting output is further transformed by the matched filter corresponding to the original image, one obtains the autocorrelation function of the original image, which has a peak at the origin. Such a device is called an optical correlator and may be used to recognize and locate the image for which it is designed. (From a practical perspective, an approximation to the matched filter must be used, since the spatial light modulator (SLM) on which the filter is implemented usually does not allow one to independently control both the magnitude and phase of the filter.) Generally, one is not concerned with recognizing just a single image but with recognizing a variety of rotated and scaled views of a particular image. In order to recognize these different views using an optical correlator, one may select a subset of these views (whose elements are called training images) and then use a composite filter that is designed to produce a correlation peak for each training image. Presumably, these peaks should be sharp and easily distinguishable from the surrounding correlation plane values. In this report we consider two areas of research regarding composite optical correlators. First, we consider the question of how best to choose the training images that are used to design the composite filter. With regard to quantity, the number of training images should be large enough to adequately represent all possible views of the targeted object yet small enough to ensure that the resolution of the filter is not exhausted.
As for the images themselves, they should be distinct enough to avoid numerical difficulties yet similar enough to avoid gaps in which certain views of the target go unrecognized. One method that we introduce to study this problem, called probing, involves the creation of artificial imagery. The second problem we consider involves the classification of the composite filter's correlation plane data. In particular, we would like to determine not only whether we are viewing a training image but also, if so, which training image is being viewed. This second problem is investigated using traditional M-ary hypothesis testing techniques.
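The matched-filter principle described above can be sketched numerically: correlating an image with its own matched filter (the conjugate of its Fourier transform) yields the autocorrelation, whose peak sits at zero shift. The array size and the random test image below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))          # stand-in for a training image

F = np.fft.fft2(image)
matched_filter = np.conj(F)           # ideal matched filter for this image

# Filtering the image's spectrum and transforming back gives the
# (circular) autocorrelation, which is maximal at the origin.
corr = np.fft.ifft2(F * matched_filter).real
peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)  # (0, 0)
```

A composite filter replaces `matched_filter` with a weighted combination of the spectra of several training images, trading peak sharpness for coverage of the view set.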
2015-08-20
This view from NASA's Cassini spacecraft looks toward Saturn's icy moon Dione, with giant Saturn and its rings in the background, just prior to the mission's final close approach to the moon on August 17, 2015. At lower right is the large, multi-ringed impact basin named Evander, which is about 220 miles (350 kilometers) wide. The canyons of Padua Chasma, features that form part of Dione's bright, wispy terrain, reach into the darkness at left. Imaging scientists combined nine visible light (clear spectral filter) images to create this mosaic view: eight from the narrow-angle camera and one from the wide-angle camera, which fills in an area at lower left. The scene is an orthographic projection centered on terrain at 0.2 degrees north latitude, 179 degrees west longitude on Dione. An orthographic view is most like the view seen by a distant observer looking through a telescope. North on Dione is up. The view was acquired at distances ranging from approximately 106,000 miles (170,000 kilometers) to 39,000 miles (63,000 kilometers) from Dione and at a sun-Dione-spacecraft, or phase, angle of 35 degrees. Image scale is about 1,500 feet (450 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA19650
2015-02-12
Presented here are side-by-side comparisons of a traditional Cassini Synthetic Aperture Radar (SAR) view and one made using a new technique for handling electronic noise that results in clearer views of Titan's surface. The technique, called despeckling, produces images that can be easier for researchers to interpret. The view is a mosaic of SAR swaths over Ligeia Mare, one of the large hydrocarbon seas on Titan. In particular, despeckling improves the visibility of channels flowing down to the sea. http://photojournal.jpl.nasa.gov/catalog/PIA19052
Matsushima, Kyoji; Sonobe, Noriaki
2018-01-01
Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.
... Every year, there are thousands of sleep-related deaths among babies. ... Supporting research to better understand sleep-related deaths and strategies to improve safe sleep practices. Healthcare ...
NASA Astrophysics Data System (ADS)
Graham-Rowe, Duncan
2007-12-01
As the size of handheld gadgets decreases, their displays become harder to view. The solution could lie with integrated projectors that can project crisp, large images from mobile devices onto any chosen surface. Duncan Graham-Rowe reports.
2010-10-07
This enhanced-color view of Saturn's moon Mimas was made from images obtained by NASA's Cassini spacecraft. It highlights the bluish band around the icy moon's equator. The large round gouge on the surface is Herschel Crater.
Okefenokee Swamp Fire, Georgia
2002-05-22
Large smoke plumes were produced by the Blackjack complex fire in southeastern Georgia's Okefenokee Swamp, as seen by the MISR instrument aboard NASA's Terra spacecraft on May 8, 2002. 3D glasses are necessary to view this image.
High-throughput isotropic mapping of whole mouse brain using multi-view light-sheet microscopy
NASA Astrophysics Data System (ADS)
Nie, Jun; Li, Yusha; Zhao, Fang; Ping, Junyu; Liu, Sa; Yu, Tingting; Zhu, Dan; Fei, Peng
2018-02-01
Light-sheet fluorescence microscopy (LSFM) uses an additional laser sheet to illuminate selected planes of the sample, thereby enabling three-dimensional imaging at high spatio-temporal resolution. These advantages make LSFM a promising tool for high-quality brain visualization. However, even with LSFM, the spatial resolution remains insufficient to resolve neural structures in three dimensions across a mesoscale whole mouse brain. At the same time, thick-tissue scattering prevents clear observation of deep brain regions. Here we use a multi-view LSFM strategy to address this challenge, surpassing the resolution limit of a standard light-sheet microscope over a large field-of-view (FOV). As demonstrated by imaging an optically-cleared mouse brain labelled with thy1-GFP, we achieve a brain-wide, isotropic cellular resolution of 3 μm. Besides the resolution enhancement, multi-view brain imaging can also recover signals otherwise lost to deep-tissue scattering and attenuation. As a result, long-distance neural projections across encephalic regions can be identified and annotated.
Design and realization of retina-like three-dimensional imaging based on a MOEMS mirror
NASA Astrophysics Data System (ADS)
Cao, Jie; Hao, Qun; Xia, Wenze; Peng, Yuxin; Cheng, Yang; Mu, Jiaxing; Wang, Peng
2016-07-01
To balance the conflicting demands of high-resolution, large-field-of-view and real-time imaging, a retina-like imaging method based on time-of-flight (TOF) measurement is proposed. Mathematical models of 3D imaging based on a MOEMS mirror are developed. Based on this method, we perform simulations of retina-like scanning properties, including compression of redundant information and rotation and scaling invariance. To validate the theory, we develop a prototype and conduct relevant experiments. The preliminary results agree well with the simulations.
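The rotation and scaling invariance simulated above is a generic property of retina-like (log-polar) sampling: ring radii grow geometrically, so uniformly scaling the scene shifts samples along the ring index while rotating it shifts them along the angular index. A minimal sketch of such a grid; the ring count, growth factor and sector count are illustrative values, not the paper's.

```python
import numpy as np

n_rings, n_sectors = 16, 32
r0, growth = 1.0, 1.2                     # innermost radius, ring growth factor

radii = r0 * growth ** np.arange(n_rings)
angles = 2 * np.pi * np.arange(n_sectors) / n_sectors

# Sample positions on the retina-like grid (dense centre, sparse periphery),
# which is how redundant peripheral information gets compressed.
xs = radii[:, None] * np.cos(angles)[None, :]
ys = radii[:, None] * np.sin(angles)[None, :]

# Scaling the scene by `growth` maps ring i exactly onto ring i + 1,
# i.e. a scale change becomes a simple shift along the radial index.
assert np.allclose(growth * radii[:-1], radii[1:])
```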
Recognizing 3D Objects from 2D Images Using Structural Knowledge Base of Generic Views
1988-08-31
technical report. [BIE85] I. Biederman, "Human image understanding: Recent research and a theory", Computer Vision, Graphics, and Image Processing, vol. ... model bases", Technical Report 87-85, COINS Dept., University of Massachusetts, Amherst, MA 01003, August 1987. [BUR87b] Burns, J. B. and L. J. Kitchen, ... "Recognition in 2D images of 3D objects from large model bases using prediction hierarchies", Proc. IJCAI-10, 1987. [BUR89] J. B. Burns, forthcoming
NASA Technical Reports Server (NTRS)
2001-01-01
Several mountain ranges and a portion of the Amur River are visible in this set of MISR images of Russia's far east Khabarovsk region. The images were acquired on May 13, 2001 during Terra orbit 7452. The view from MISR's 70-degree forward-looking camera is at the top left; the 26-degree forward-looking view is at the top right. The larger image at the bottom is a stereo 'anaglyph' created using the cameras at two intermediate angles. To view the stereo image in 3-D you need red/blue glasses with the red filter placed over your left eye. All of the images are oriented with north to the left to facilitate stereo viewing. Each image covers an area about 345 kilometers x 278 kilometers.The Amur River, in the upper right, and Lake Bolon, at the top center, are most prominent in the 26-degree view due to sunglint (mirror-like reflection of the Sun's rays by the water). The Amur River valley is a primary breeding ground for storks and cranes and a stopover for large numbers of migratory birds. About 20% of the Amur wetlands are protected by official conservation measures, but human development has converted large portions to agricultural uses. Other notable features in these images are several mountain chains, including the Badzhal'skiy to the left of center and the Bureiskiy in the lower left.Smoke plumes from several forest fires can be seen. They are especially apparent in the 70-degree view where the smoke's visibility is accentuated, in part, by the long slant path through the atmosphere. The largest plumes are in the lower left and upper right, with some smaller plumes above and to the right of the image centers. In the upper images the hazy region in the vicinity of these smaller plumes has the appearance of low-altitude smoke, but depth perception provided by the stereo anaglyph shows that it is actually a distinct layer of high-altitude cirrus clouds. Whether the cirrus is related to the fires is uncertain. 
It is possible, however, for the fires to have heated the lower atmosphere enough to create bubbles of hot air. As such bubbles rise, they can force stable, nearly saturated air above them to move even higher, triggering the formation of ice clouds. Visualization of other three-dimensional characteristics of the scene, such as the intermediate-altitude layer of cumulus clouds along the left side, is made possible by the stereo imagery. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Recent advances in multiview distributed video coding
NASA Astrophysics Data System (ADS)
Dufaux, Frederic; Ouaret, Mourad; Ebrahimi, Touradj
2007-04-01
We consider dense networks of surveillance cameras capturing overlapped images of the same scene from different viewing directions, such a scenario being referred to as multi-view. Data compression is paramount in such a system due to the large amount of captured data. In this paper, we propose a Multi-view Distributed Video Coding approach. It allows for low complexity / low power consumption at the encoder side, and the exploitation of inter-view correlation without communications among the cameras. We introduce a combination of temporal intra-view side information and homography inter-view side information. Simulation results show both the improvement of the side information, as well as a significant gain in terms of coding efficiency.
Three-dimensional digital breast histopathology imaging
NASA Astrophysics Data System (ADS)
Clarke, G. M.; Peressotti, C.; Mawdsley, G. E.; Eidt, S.; Ge, M.; Morgan, T.; Zubovits, J. T.; Yaffe, M. J.
2005-04-01
We have developed a digital histology imaging system that has the potential to improve the accuracy of surgical margin assessment in the treatment of breast cancer by providing finer sampling and 3D visualization. The system is capable of producing a 3D representation of histopathology from an entire lumpectomy specimen. We acquire digital photomicrographs of a stack of large (120 x 170 mm) histology slides cut serially through the entire specimen. The images are then registered and displayed in 2D and 3D. This approach dramatically improves sampling and can improve visualization of tissue structures compared to current, small-format histology. The system consists of a brightfield microscope, adapted with a freeze-frame digital video camera and a large, motorized translation stage. The image of each slide is acquired as a mosaic of adjacent tiles, each tile representing one field-of-view of the microscope, and the mosaic is assembled into a seamless composite image. The assembly is done by a program developed to build image sets at six different levels within a multiresolution pyramid. A database-linked viewing program has been created to efficiently register and display the animated stack of images, which occupies about 80 GB of disk space per lumpectomy at full resolution, on a high-resolution (3840 x 2400 pixels) colour monitor. The scanning or tiling approach to digitization is inherently susceptible to two artefacts which disrupt the composite image, and which impose more stringent requirements on system performance. Although non-uniform illumination across any one isolated tile may not be discernible, the eye readily detects this non-uniformity when the entire assembly of tiles is viewed. The pattern is caused by deficiencies in optical alignment, spectrum of the light source, or camera corrections. The imaging task requires that features as small as 3.2 μm in extent be seamlessly preserved.
However, inadequate accuracy in positioning of the translation stage produces visible discontinuities between adjacent features. Both of these effects can distract the viewer from perceiving diagnostically important features. Here we describe the system design and discuss methods for the correction of these artefacts. In addition, we outline our approach to making the processing and display of these large images computationally feasible.
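One standard remedy for the tile-to-tile illumination artefact described above is flat-field correction: divide each raw tile by a normalized image of a blank (uniform) field so the shading pattern cancels before the mosaic is assembled. A hedged sketch, with a synthetic vignetting pattern standing in for the real shading; the abstract does not specify this particular correction, so treat it as a generic illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]

# Synthetic shading: brighter in the centre, darker at the tile edges
shading = 1.0 - 0.4 * (((xx - w / 2) ** 2 + (yy - h / 2) ** 2)
                       / ((w / 2) ** 2 + (h / 2) ** 2))

truth = rng.random((h, w))        # the tissue signal we want to recover
raw_tile = truth * shading        # what the camera records through the optics
flat = shading.copy()             # a blank-field exposure records the shading

# Divide out the shading; scaling by flat.mean() preserves overall brightness
corrected = raw_tile * flat.mean() / flat
assert np.allclose(corrected, truth * flat.mean())
```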
Müllerian duct cyst: diagnosis with MR imaging.
Thurnher, S; Hricak, H; Tanagho, E A
1988-07-01
The value of magnetic resonance (MR) imaging in diagnosing clinically suspected müllerian duct cysts was assessed in six patients. MR imaging correctly demonstrated the abnormality to be intraprostatic, consistent with the diagnosis of müllerian duct cysts in four patients, and allowed the diagnosis to be excluded in the other two. The demonstration of prostatic zonal anatomy, the ability to obtain direct images in all three orthogonal planes, and a large field of view make MR imaging valuable in the study of suspected müllerian duct cysts.
Large-field high-resolution mosaic movies
NASA Astrophysics Data System (ADS)
Hammerschlag, Robert H.; Sliepen, Guus; Bettonvil, Felix C. M.; Jägers, Aswin P. L.; Sütterlin, Peter; Martin, Sara F.
2012-09-01
Movies with fields-of-view larger than normal for high-resolution telescopes will give a better understanding of processes on the Sun, such as filament and active-region developments and their possible interactions. New active regions can, by their emergence, influence their environment to the extent of possibly serving as an igniter of the eruption of a nearby filament. A method to create a large field-of-view is to join several fields-of-view into a mosaic. Fields are imaged quickly one after another using fast telescope pointing. Such a pointing cycle has been automated at the Dutch Open Telescope (DOT), a high-resolution solar telescope located on the Canary Island La Palma. The observer can draw with the computer mouse the desired total field in the guider-telescope image of the whole Sun. The guider telescope is equipped with an H-alpha filter and electronic contrast enhancement for good visibility of filaments and prominences. The number and positions of the subfields are calculated automatically and represented by an array of bright points indicating the subfield centers inside the drawn rectangle of the total field on the computer screen with the whole-Sun image. When the exposures start, the telescope automatically repeats the sequence of subfields. Automatic production of flats is also programmed, including defocusing and fast motion of the image field over the solar disk. For the first time, mosaic movies were produced using stored information on automated telescope motions from one field to the next. The mosaic movies fill the gap between whole-Sun images with limited resolution from synoptic telescopes, including space instruments, and small-field high-cadence movies from high-resolution solar telescopes.
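The automatic subfield computation described above amounts to covering a user-drawn rectangle with a grid of camera fields, with a small overlap so the tiles join seamlessly. A minimal sketch; the field size, overlap fraction and rectangle are hypothetical values, not the DOT's.

```python
import math

def subfield_centers(x0, y0, x1, y1, fov, overlap=0.1):
    """Centers of square camera subfields covering rectangle (x0,y0)-(x1,y1).

    fov     -- width/height of one subfield (e.g. arcseconds)
    overlap -- fraction of fov shared between neighbouring subfields
    """
    step = fov * (1.0 - overlap)
    nx = max(1, math.ceil((x1 - x0 - fov) / step) + 1)
    ny = max(1, math.ceil((y1 - y0 - fov) / step) + 1)
    xs = [x0 + fov / 2 + i * step for i in range(nx)]
    ys = [y0 + fov / 2 + j * step for j in range(ny)]
    return [(x, y) for y in ys for x in xs]

# A 250 x 150 arcsec rectangle tiled by 90 arcsec subfields
centers = subfield_centers(0, 0, 250, 150, fov=90)
print(len(centers))
```

The telescope then visits the centers in sequence, and the same stored pointing list is replayed for every frame of the mosaic movie.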
Biological tissue imaging with a position and time sensitive pixelated detector.
Jungmann, Julia H; Smith, Donald F; MacAleese, Luke; Klinkert, Ivo; Visser, Jan; Heeren, Ron M A
2012-10-01
We demonstrate the capabilities of a highly parallel, active pixel detector for large-area, mass spectrometric imaging of biological tissue sections. A bare Timepix assembly (512 × 512 pixels) is combined with chevron microchannel plates on an ion microscope matrix-assisted laser desorption time-of-flight mass spectrometer (MALDI TOF-MS). The detector assembly registers position- and time-resolved images of multiple m/z species in every measurement frame. We prove the applicability of the detection system to biomolecular mass spectrometry imaging on biologically relevant samples by mass-resolved images from Timepix measurements of a peptide-grid benchmark sample and mouse testis tissue slices. Mass-spectral and localization information of analytes at physiologic concentrations are measured in MALDI-TOF-MS imaging experiments. We show a high spatial resolution (pixel size down to 740 × 740 nm² on the sample surface) and a spatial resolving power of 6 μm with a microscope-mode laser field of view of 100-335 μm. Automated, large-area imaging is demonstrated and the Timepix's potential for fast, large-area image acquisition is highlighted.
Video-rate volumetric neuronal imaging using 3D targeted illumination.
Xiao, Sheng; Tseng, Hua-An; Gritton, Howard; Han, Xue; Mertz, Jerome
2018-05-21
Fast volumetric microscopy is required to monitor large-scale neural ensembles with high spatio-temporal resolution. Widefield fluorescence microscopy can image large 2D fields of view at high resolution and speed while remaining simple and cost-effective. A focal sweep add-on can further extend the capacity of widefield microscopy by enabling extended-depth-of-field (EDOF) imaging, but suffers from an inability to reject out-of-focus fluorescence background. Here, by using a digital micromirror device to target only in-focus sample features, we perform EDOF imaging with greatly enhanced contrast and signal-to-noise ratio, while reducing the light dosage delivered to the sample. Image quality is further improved by the application of a robust deconvolution algorithm. We demonstrate the advantages of our technique for in vivo calcium imaging in the mouse brain.
Atmospheric Science Data Center
2013-04-16
... Birth of a Large Iceberg in Pine Island Bay, Antarctica ... revealed the crack to be propagating through the shelf ice at a rate averaging 15 meters per day, accompanied by a slight rotation of ...
NASA Astrophysics Data System (ADS)
Gaskill, Jack D.; Curtis, Craig H.
1995-10-01
Physical demonstrations of diffraction and image formation for educational purposes have long been hampered by limitations of equipment and viewing facilities: it has usually been possible to demonstrate diffraction and image formation for only a few simple apertures or objects; it has often been time consuming to set up the optical bench used for the demonstration and difficult to keep it aligned; a darkened demonstration room has normally been required; and, it has usually been possible for only small groups of people to view the diffraction patterns and images. In 1990, the Optical Sciences Center was awarded an AT&T Special Purpose Grant to construct a device that would allow diffraction and image formation demonstrations to be conducted while avoiding the limitations noted above. This device, which was completed in the fall of 1992 and is affectionately called 'The Defractionator', makes use of video technology to permit demonstrations of diffraction, image formation and spatial filtering for large audiences in regular classrooms or auditoria. In addition, video tapes of the demonstrations can be recorded for viewing at sites where use of the actual demonstrator is inconvenient. A description of the system will be given, and video tapes will be used to display previously recorded diffraction phenomena and spatial filtering demonstrations.
Generating Artificial Reference Images for Open Loop Correlation Wavefront Sensors
NASA Astrophysics Data System (ADS)
Townson, M. J.; Love, G. D.; Saunter, C. D.
2018-05-01
Shack-Hartmann wavefront sensors for both solar and laser guide star adaptive optics (with elongated spots) need to observe extended objects. Correlation techniques have been successfully employed to measure the wavefront gradient in solar adaptive optics systems and have been proposed for laser guide star systems. In this paper we describe a method for synthesising reference images for correlation Shack-Hartmann wavefront sensors with a larger field of view than individual sub-apertures. We then show how these supersized reference images can increase the performance of correlation wavefront sensors in regimes where large relative shifts are induced between sub-apertures, such as those observed in open-loop wavefront sensors. The technique we describe requires no external knowledge outside of the wavefront-sensor images, making it available as an entirely "software" upgrade to an existing adaptive optics system. For solar adaptive optics we show the supersized reference images extend the magnitude of shifts which can be accurately measured from 12% to 50% of the field of view of a sub-aperture and in laser guide star wavefront sensors the magnitude of centroids that can be accurately measured is increased from 12% to 25% of the total field of view of the sub-aperture.
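The underlying shift measurement is plain image cross-correlation: the correlation peak between a sub-aperture image and the reference gives the local wavefront gradient. A toy sketch with a circularly shifted copy of the reference standing in for a real sub-aperture frame (sizes and the shift are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(2)
reference = rng.random((32, 32))          # extended reference scene
true_shift = (3, 5)                       # (row, col) shift to recover
frame = np.roll(reference, true_shift, axis=(0, 1))

# FFT-based circular cross-correlation; the argmax is the relative shift
corr = np.fft.ifft2(np.fft.fft2(frame)
                    * np.conj(np.fft.fft2(reference))).real
shift = np.unravel_index(np.argmax(corr), corr.shape)
print(shift)  # (3, 5)
```

A supersized reference, as proposed in the paper, enlarges the region over which this peak remains unambiguous, which is what extends the measurable shift range in open loop.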
An open architecture for medical image workstation
NASA Astrophysics Data System (ADS)
Liang, Liang; Hu, Zhiqiang; Wang, Xiangyun
2005-04-01
To deal with the difficulties of integrating various medical image viewing and processing technologies with a variety of clinical and departmental information systems while, at the same time, overcoming the performance constraints in transferring and processing large-scale and ever-increasing image data in the healthcare enterprise, we design and implement a flexible, usable and high-performance architecture for medical image workstations. This architecture is not developed for radiology only, but for any workstation in any application environment that may need medical image retrieving, viewing, and post-processing. This architecture contains an infrastructure named Memory PACS and different kinds of image applications built on it. The Memory PACS is in charge of image data caching, pre-fetching and management. It provides image applications with high-speed image data access and very reliable DICOM network I/O. For the image applications, we use dynamic component technology to separate the performance-constrained modules from the flexibility-constrained modules so that different image viewing or processing technologies can be developed and maintained independently. We also develop a weakly coupled collaboration service through which these image applications can communicate with each other or with third-party applications. We applied this architecture in developing our product line, and it works well. In our clinical sites, this architecture is applied not only in the Radiology Department, but also in Ultrasound, Surgery, Clinics, and the Consultation Center. Given that each department has its particular requirements and business routines, and that they all have different image processing technologies and image display devices, our workstations are still able to maintain high performance and high usability.
Simultaneous fast scanning XRF, dark field, phase-, and absorption contrast tomography
NASA Astrophysics Data System (ADS)
Medjoubi, Kadda; Bonissent, Alain; Leclercq, Nicolas; Langlois, Florent; Mercère, Pascal; Somogyi, Andrea
2013-09-01
Scanning hard X-ray nanoprobe imaging provides a unique tool for probing specimens with high sensitivity and large penetration depth. Moreover, the combination of complementary techniques such as X-ray fluorescence, absorption, phase contrast and dark field imaging gives complete quantitative information on the sample structure, composition and chemistry. The multi-technique "FLYSCAN" data acquisition scheme developed at Synchrotron SOLEIL makes it possible to perform fast continuous scanning imaging and, as such, renders scanning tomography techniques feasible in a time-frame well-adapted to typical user experiments. Here we present recent results of simultaneous fast scanning multi-technique tomography performed at SOLEIL. This fast scanning scheme will be implemented at the Nanoscopium beamline for large field of view 2D and 3D multimodal imaging.
Application of optical coherence tomography based microangiography for cerebral imaging
NASA Astrophysics Data System (ADS)
Baran, Utku; Wang, Ruikang K.
2016-03-01
The requirements of in vivo rodent brain imaging are hard to satisfy using traditional technologies such as magnetic resonance imaging and two-photon microscopy. Optical coherence tomography (OCT) is an emerging tool that can easily reach high speeds and provide high-resolution volumetric images with a relatively large field of view for rodent brain imaging. Here, we provide an overview of recent developments in functional OCT-based imaging techniques for neuroscience applications in rodents. Moreover, a summary of OCT-based microangiography (OMAG) studies of stroke and traumatic brain injury in rodents is provided.
Complementary compressive imaging for the telescopic system
Yu, Wen-Kai; Liu, Xue-Feng; Yao, Xu-Ri; Wang, Chao; Zhai, Yun; Zhai, Guang-Jie
2014-01-01
Conventional single-pixel cameras recover images only from the data recorded in one arm of the digital micromirror device, while the light reflected in the other direction is not collected. In fact, the sampling in these two reflection orientations is correlated with each other, in view of which we propose a sampling concept of complementary compressive imaging, for the first time to our knowledge. We use this method in a telescopic system and acquire images of a target at about 2.0 km range with 20 cm resolution, with the variance of the noise decreasing by half. The influence of the sampling rate and the integration time of the photomultiplier tubes on the image quality is also investigated experimentally. It is evident that this technique has the advantages of a large field of view over a long distance, high resolution, high imaging speed, and high-quality imaging, and needs fewer measurements in total than any single-arm sampling; thus it can be used to improve the performance of all compressive imaging schemes and opens up possibilities for new applications in the remote-sensing area. PMID:25060569
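The complementary-sampling idea can be sketched in a few lines: every DMD pattern splits the scene's light between the two reflection arms, so the two measurement vectors sum to the total intensity, and their difference realizes bipolar (±1) patterns without extra exposures. Dimensions and patterns below are illustrative, not the paper's experimental parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_patterns = 256, 128           # sub-Nyquist sampling
scene = rng.random(n_pixels)              # hypothetical flattened scene

P = rng.integers(0, 2, size=(n_patterns, n_pixels))  # binary DMD patterns
y_a = P @ scene                           # arm A: mirrors tilted one way
y_b = (1 - P) @ scene                     # arm B: the complementary light

# The two arms split the total scene intensity exactly ...
assert np.allclose(y_a + y_b, scene.sum())

# ... and their difference corresponds to bipolar patterns (2P - 1),
# obtained "for free", which is the source of the reported noise reduction.
y_diff = y_a - y_b
assert np.allclose(y_diff, (2 * P - 1) @ scene)
```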
Ultra-large field-of-view two-photon microscopy.
Tsai, Philbert S; Mateo, Celine; Field, Jeffrey J; Schaffer, Chris B; Anderson, Matthew E; Kleinfeld, David
2015-06-01
We present a two-photon microscope that images the full extent of murine cortex with an objective-limited spatial resolution across an 8 mm by 10 mm field. The lateral resolution is approximately 1 µm and the maximum scan speed is 5 mm/ms. The scan pathway employs large diameter compound lenses to minimize aberrations and performs near theoretical limits. We demonstrate the special utility of the microscope by recording resting-state vasomotion across both hemispheres of the murine brain through a transcranial window and by imaging histological sections without the need to stitch.
Patterns of Brain Activation when Mothers View Their Own Child and Dog: An fMRI Study
Stoeckel, Luke E.; Palley, Lori S.; Gollub, Randy L.; Niemi, Steven M.; Evins, Anne Eden
2014-01-01
Neural substrates underlying the human-pet relationship are largely unknown. We examined fMRI brain activation patterns as mothers viewed images of their own child and dog and an unfamiliar child and dog. There was a common network of brain regions involved in emotion, reward, affiliation, visual processing and social cognition when mothers viewed images of both their child and dog. Viewing images of their child resulted in brain activity in the midbrain (ventral tegmental area/substantia nigra involved in reward/affiliation), while a more posterior cortical brain activation pattern involving fusiform gyrus (visual processing of faces and social cognition) characterized a mother's response to her dog. Mothers also rated images of their child and dog as eliciting similar levels of excitement (arousal) and pleasantness (valence), although the difference in the own vs. unfamiliar child comparison was larger than the own vs. unfamiliar dog comparison for arousal. Valence ratings of their dog were also positively correlated with ratings of the attachment to their dog. Although there are similarities in the perceived emotional experience and brain function associated with the mother-child and mother-dog bond, there are also key differences that may reflect variance in the evolutionary course and function of these relationships. PMID:25279788
Distributed health care imaging information systems
NASA Astrophysics Data System (ADS)
Thompson, Mary R.; Johnston, William E.; Guojun, Jin; Lee, Jason; Tierney, Brian; Terdiman, Joseph F.
1997-05-01
We have developed an ATM network-based system to collect and catalogue cardio-angiogram videos from the source at a Kaiser central facility and make them available for viewing by doctors at primary care Kaiser facilities. This is an example of the general problem of diagnostic data being generated at tertiary facilities, while the images, or other large data objects they produce, need to be used from a variety of other locations such as doctors' offices or local hospitals. We describe the use of a highly distributed computing and storage architecture to provide all aspects of collecting, storing, analyzing, and accessing such large data objects in a metropolitan-area ATM network. Our large data-object management system provides the network interface between the object sources, the data management system, and the users of the data. As the data is being stored, a cataloguing system automatically creates and stores condensed versions of the data, textual metadata, and pointers to the original data. The catalogue system provides a Web-based graphical interface to the data. The user is able to view the low-resolution data with a standard Internet connection and Web browser. If high resolution is required, a high-speed connection and special application programs can be used to view the high-resolution original data.
Ultrahigh speed en face OCT capsule for endoscopic imaging
Liang, Kaicheng; Traverso, Giovanni; Lee, Hsiang-Chieh; Ahsen, Osman Oguz; Wang, Zhao; Potsaid, Benjamin; Giacomelli, Michael; Jayaraman, Vijaysekhar; Barman, Ross; Cable, Alex; Mashimo, Hiroshi; Langer, Robert; Fujimoto, James G.
2015-01-01
Depth resolved and en face OCT visualization in vivo may have important clinical applications in endoscopy. We demonstrate a high speed, two-dimensional (2D) distal scanning capsule with a micromotor for fast rotary scanning and a pneumatic actuator for precision longitudinal scanning. Longitudinal position measurement and image registration were performed by optical tracking of the pneumatic scanner. The 2D scanning device enables high resolution imaging over a small field of view and is suitable for OCT as well as other scanning microscopies. Large field of view imaging for screening or surveillance applications can also be achieved by proximally pulling back or advancing the capsule while scanning the distal high-speed micromotor. Circumferential en face OCT was demonstrated in living swine at 250 Hz frame rate and 1 MHz A-scan rate using a MEMS tunable VCSEL light source at 1300 nm. Cross-sectional and en face OCT views of the upper and lower gastrointestinal tract were generated with precision distal pneumatic longitudinal actuation as well as proximal manual longitudinal actuation. These devices could enable clinical studies either as an adjunct to endoscopy, attached to an endoscope, or as a swallowed tethered capsule for non-endoscopic imaging without sedation. The combination of ultrahigh speed imaging and distal scanning capsule technology could enable both screening and surveillance applications. PMID:25909001
Perspective View with Landsat Overlay, Los Angeles Basin
NASA Technical Reports Server (NTRS)
2002-01-01
Most of Los Angeles is visible in this computer-generated north-northeast perspective viewed from above the Pacific Ocean. In the foreground the hilly Palos Verdes peninsula lies to the left of the harbor at Long Beach, and in the middle distance the various communities that comprise the greater Los Angeles area appear as shades of grey and white. In the distance the San Gabriel Mountains rise up to separate the basin from the Mojave Desert, which can be seen near the top of the image.
This 3-D perspective view was generated using topographic data from the Shuttle Radar Topography Mission (SRTM) and an enhanced color Landsat 5 satellite image mosaic. Topographic expression is exaggerated one and one-half times. Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data matches the 30-meter (98-foot) resolution of most Landsat images and will substantially help in analyzing the large and growing Landsat image archive. Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C. Size: View width 70 kilometers (42 miles), View distance 160 kilometers (100 miles) Location: 34.0 deg. North lat., 118.2 deg. West lon. Orientation: View north-northeast Image Data: Landsat Bands 3, 2, 1 as red, green, blue, respectively Date Acquired: February 2000 (SRTM)
Surface composition of Mars: A Viking multispectral view
NASA Technical Reports Server (NTRS)
Adams, John B.; Smith, Milton O.; Arvidson, Raymond E.; Dale-Bannister, Mary; Guinness, Edward A.; Singer, Robert; Adams, John B.
1987-01-01
A new method of analyzing multispectral images takes advantage of the spectral variation from pixel to pixel that is typical for natural planetary surfaces, and treats all pixels as potential mixtures of spectrally distinct materials. For Viking Lander images, mixtures of only three spectral end members (rock, soil, and shade) are sufficient to explain the observed spectral variation to the level of instrumental noise. It was concluded that a large portion of the Martian surface consists of only two spectrally distinct materials, basalt and palagonitic soil. It is emphasized, however, that as viewed through the three broad bandpasses of Viking Orbiter, other materials cannot be distinguished from the mixtures.
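The pixel-mixture model described above can be illustrated with a toy linear-unmixing computation. The end-member spectra and fractions below are invented for illustration; they are not Viking calibration values:

```python
import numpy as np

# Invented 3-band end-member spectra (columns: rock, soil, shade).
E = np.array([[0.30, 0.45, 0.02],
              [0.25, 0.55, 0.02],
              [0.20, 0.60, 0.02]])      # shape (bands, end members)

f_true = np.array([0.2, 0.7, 0.1])      # 20% rock, 70% soil, 10% shade
pixel = E @ f_true                      # observed mixed spectrum for one pixel

# Unconstrained least-squares unmixing; real analyses also enforce
# non-negative fractions that sum to one.
f_hat, *_ = np.linalg.lstsq(E, pixel, rcond=None)
print(np.allclose(f_hat, f_true))       # -> True
```

Applied per pixel, the recovered fraction images show how much of each end member contributes to every location in the scene.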
NASA Astrophysics Data System (ADS)
Anstey, Josephine; Pape, Dave
2013-03-01
In this paper we discuss Mrs. Squandertime, a real-time, persistent simulation of a virtual character, her living room, and the view from her window, designed to be a wall-size, projected art installation. Through her large picture window, the eponymous Mrs. Squandertime watches the sea: boats, clouds, gulls, the tide going in and out, people on the sea wall. The hundreds of images that compose the view are drawn from historical printed sources. The program that assembles and animates these images is driven by weather, time, and tide data constantly updated from a real physical location. The character herself is rendered photographically in a series of slowly dissolving stills which correspond to the character's current behavior.
Laser Light-field Fusion for Wide-field Lensfree On-chip Phase Contrast Microscopy of Nanoparticles
NASA Astrophysics Data System (ADS)
Kazemzadeh, Farnoud; Wong, Alexander
2016-12-01
Wide-field lensfree on-chip microscopy, which leverages holography principles to capture interferometric light-field encodings without lenses, is an emerging imaging modality with widespread interest given the large field-of-view compared to lens-based techniques. In this study, we introduce the idea of laser light-field fusion for lensfree on-chip phase contrast microscopy for detecting nanoparticles, where interferometric laser light-field encodings acquired using a lensfree, on-chip setup with laser pulsations at different wavelengths are fused to produce marker-free phase contrast images of particles at the nanometer scale. As a proof of concept, we demonstrate, for the first time, a wide-field lensfree on-chip instrument successfully detecting 300 nm particles across a large field-of-view of ~30 mm² without any specialized or intricate sample preparation, or the use of synthetic aperture- or shift-based techniques.
Ramos, Rogelio; Zlatev, Roumen; Valdez, Benjamin; Stoytcheva, Margarita; Carrillo, Mónica; García, Juan-Francisco
2013-01-01
A virtual instrumentation (VI) system called VI localized corrosion image analyzer (LCIA), based on LabVIEW 2010, was developed, allowing rapid, automatic, and subjective-error-free determination of the number of pits on large corroded specimens. The VI LCIA synchronously controls the digital microscope image acquisition and its analysis, finally producing a map file containing the coordinates of the detected zones probably containing pits on the investigated specimen. The pit area, traverse length, and density are also determined by the VI using binary large object (blob) analysis. The resulting map file can be used further by a scanning vibrating electrode technique (SVET) system for a rapid (one-pass) "true/false" SVET check of the probable zones only, passing through the pit centers and thus avoiding a scan of the entire specimen. A complete SVET scan over the zones already proved "true" could then determine the corrosion rate in any of the zones.
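The blob-analysis step can be sketched in plain Python: threshold the micrograph, label connected dark regions as candidate pits, and report each pit's area and centroid, which are the coordinates a SVET pass would visit. This is a hedged stand-in for the LabVIEW blob analysis, run on an invented toy image:

```python
def find_pits(image, threshold):
    """Label connected below-threshold regions (4-connectivity) and
    return each blob's area and centroid coordinates."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    pits = []
    for r0 in range(rows):
        for c0 in range(cols):
            if image[r0][c0] < threshold and not seen[r0][c0]:
                # flood fill one blob
                stack, blob = [(r0, c0)], []
                seen[r0][c0] = True
                while stack:
                    r, c = stack.pop()
                    blob.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if (0 <= rr < rows and 0 <= cc < cols
                                and image[rr][cc] < threshold
                                and not seen[rr][cc]):
                            seen[rr][cc] = True
                            stack.append((rr, cc))
                area = len(blob)
                cy = sum(p[0] for p in blob) / area
                cx = sum(p[1] for p in blob) / area
                pits.append({"area": area, "centroid": (cy, cx)})
    return pits

# Bright specimen (1.0) with two dark pits (0.0)
img = [[1.0] * 8 for _ in range(6)]
img[1][1] = img[1][2] = 0.0          # pit 1: area 2
img[4][5] = 0.0                      # pit 2: area 1
print(len(find_pits(img, 0.5)))      # -> 2
```

The returned centroid list plays the role of the map file: a one-pass SVET check visits only these coordinates instead of rastering the whole specimen.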
From ATLASGAL to SEDIGISM: Towards a Complete 3D View of the Dense Galactic Interstellar Medium
NASA Astrophysics Data System (ADS)
Schuller, F.; Urquhart, J.; Bronfman, L.; Csengeri, T.; Bontemps, S.; Duarte-Cabral, A.; Giannetti, A.; Ginsburg, A.; Henning, T.; Immer, K.; Leurini, S.; Mattern, M.; Menten, K.; Molinari, S.; Muller, E.; Sánchez-Monge, A.; Schisano, E.; Suri, S.; Testi, L.; Wang, K.; Wyrowski, F.; Zavagno, A.
2016-09-01
The ATLASGAL survey has provided the first unbiased view of the inner Galactic Plane at sub-millimetre wavelengths. This is the largest ground-based survey of its kind to date, covering 420 square degrees at a wavelength of 870 µm. The reduced data, consisting of images and a catalogue of > 10⁴ compact sources, are available from the ESO Science Archive Facility through the Phase 3 infrastructure. The extremely rich statistics of this survey initiated several follow-up projects, including spectroscopic observations to explore molecular complexity and high angular resolution imaging with the Atacama Large Millimeter/submillimeter Array (ALMA), aimed at resolving individual protostars. The most extensive follow-up project is SEDIGISM, a 3D mapping of the dense interstellar medium over a large fraction of the inner Galaxy. Some notable results of these surveys are highlighted.
Enterprise PACS and image distribution.
Huang, H K
2003-01-01
Around the world, many large-scale healthcare enterprises have been formed to improve operational efficiency and deliver more cost-effective healthcare. Each of these enterprises groups hospitals, medical centers, and clinics together as one enterprise healthcare network. The management of these enterprises recognizes the importance of PACS and image distribution as a key technology in cost-effective healthcare delivery at the enterprise level. As a result, many large-scale enterprise-level PACS/image distribution pilot studies, as well as full designs and implementations, are underway. The purpose of this paper is to give readers an overall view of the current status of enterprise PACS and image distribution. It reviews three large-scale enterprise PACS/image distribution systems in the USA, Germany, and South Korea. The concept of enterprise-level PACS/image distribution, its characteristics, and its ingredients are then discussed. Business models for enterprise-level implementation offered by the private medical imaging and system integration industry are highlighted. One system currently under development, an enterprise-level chest tuberculosis (TB) screening system in Hong Kong, is described in detail. Copyright 2002 Elsevier Science Ltd.
Impact of large field angles on the requirements for deformable mirror in imaging satellites
NASA Astrophysics Data System (ADS)
Kim, Jae Jun; Mueller, Mark; Martinez, Ty; Agrawal, Brij
2018-04-01
For certain imaging satellite missions, a large aperture with a wide field of view is needed. In order to achieve diffraction-limited performance, the mirror surface root mean square (RMS) error has to be less than 0.05 waves; in the case of visible light, less than 30 nm. This requirement is difficult to meet because the large aperture must be segmented in order to fit inside a launch vehicle shroud. To relax this requirement and to compensate for the residual wavefront error, micro-electro-mechanical system (MEMS) deformable mirrors can be considered in the aft optics of the optical system. MEMS deformable mirrors are affordable and consume little power, but are small in size. Due to the major reduction in pupil size at the deformable mirror, the effective field angle is magnified by the diameter ratio of the primary and deformable mirrors. For wide-field-of-view imaging, the required deformable mirror correction is field-angle dependent, which impacts deformable mirror parameters such as size, number of actuators, and actuator stroke. In this paper, a representative telescope and deformable mirror system model is developed, and the deformable mirror correction is simulated to study the impact of large field angles on correcting a wavefront error with a deformable mirror in the aft optics.
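The pupil-demagnification effect described above is simple to quantify: relaying the pupil from the primary mirror down to a small MEMS DM magnifies field angles by the diameter ratio of the two pupils (angular magnification of a pupil relay). All numbers below are assumed for illustration and are not taken from the paper:

```python
# Illustrative pupil-relay angular magnification (assumed values)
d_primary = 3.0          # primary mirror diameter in meters (assumed)
d_dm = 0.01              # MEMS DM diameter in meters (assumed, 10 mm)
field_angle = 0.1        # field angle at the primary in degrees (assumed)

# Shrinking the pupil by a factor D_primary / D_dm magnifies the
# field angle seen by the DM by the same factor.
magnification = d_primary / d_dm
angle_at_dm = field_angle * magnification
print(magnification, round(angle_at_dm, 6))   # -> 300.0 30.0
```

With ratios of this size, even a modest field angle at the primary becomes a large angle at the DM, which is why the required DM size, actuator count, and stroke grow with the field of view.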
Cui, Yang; Hanley, Luke
2015-06-01
ChiMS is an open-source data acquisition and control software program written within LabVIEW for high-speed imaging and depth-profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at a high repetition rate, save data to hard disk at high throughput, and perform high-speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can easily be written based on several included templates. ChiMS is additionally well suited to non-laser-based mass spectrometer imaging and various other experiments in laser physics, physical chemistry, and surface science.
System Performance Simulations of the RatCAP Awake Rat Brain Scanner
NASA Astrophysics Data System (ADS)
Shokouhi, S.; Vaska, P.; Schlyer, D. J.; Stoll, S. P.; Villanueva, A.; Kriplani, A.; Woody, C. L.
2005-10-01
The capability to create high quality images from data acquired by the Rat Conscious Animal PET tomograph (RatCAP) has been evaluated using modified versions of the PET Monte Carlo code Simulation System for Emission Tomography (SimSET). The proposed tomograph consists of lutetium oxyorthosilicate (LSO) crystals arranged in twelve 4 × 8 blocks. The effects of the RatCAP's small ring diameter (~40 mm) and its block detector geometry on image quality for small animal studies have been investigated. Since the field of view will be almost as large as the ring diameter, radial elongation artifacts due to parallax error are expected to degrade the spatial resolution, and thus the image quality, at the edge of the field of view. In addition to Monte Carlo simulations, some preliminary results of experimentally acquired images in both two-dimensional (2-D) and 3-D modes are presented.
NASA Astrophysics Data System (ADS)
Wolf, Ivo; Nolden, Marco; Schwarz, Tobias; Meinzer, Hans-Peter
2010-02-01
The Medical Imaging Interaction Toolkit (MITK) and the eXtensible Imaging Platform (XIP) both aim at facilitating the development of medical imaging applications, but provide support on different levels. MITK offers support at the toolkit level, whereas XIP comes with a visual programming environment. XIP is strongly based on Open Inventor. Open Inventor, with its scene graph-based rendering paradigm, was not specifically designed for medical imaging, but focuses on creating dedicated visualizations. MITK has a visualization concept with a model-view-controller-like design that assists in implementing multiple, consistent views on the same data, which is typically required in medical imaging. In addition, MITK defines a unified means of describing position, orientation, bounds, and (if required) local deformation of data and views, supporting e.g. images acquired with gantry tilt and curved reformations. The actual rendering is largely delegated to the Visualization Toolkit (VTK). This paper presents an approach to integrating the visualization concept of MITK with XIP, especially into the XIP-Builder. This is a first step toward combining the advantages of both platforms. It enables experimenting with algorithms in the XIP visual programming environment without requiring a detailed understanding of Open Inventor. Using MITK-based add-ons to XIP, any number of data objects (images, surfaces, etc.) produced by algorithms can simply be added to an MITK DataStorage object and rendered into any number of slice-based (2D) or 3D views. Both MITK and XIP are open-source C++ platforms. The extensions presented in this paper will be available from www.mitk.org.
Portable optical-resolution photoacoustic microscopy for volumetric imaging of multiscale organisms.
Jin, Tian; Guo, Heng; Yao, Lei; Xie, Huikai; Jiang, Huabei; Xi, Lei
2018-04-01
Photoacoustic microscopy (PAM) provides a fundamentally new tool for a broad range of studies of biological structures and functions. However, the use of PAM has been largely limited to small vertebrates due to the large size/weight and the inconvenience of the equipment. Here, we describe a portable optical-resolution photoacoustic microscopy (pORPAM) system for 3-dimensional (3D) imaging of small-to-large rodents and humans with a high spatiotemporal resolution and a large field of view. We show extensive applications of pORPAM to multiscale animals including mice and rabbits. In addition, we image the 3D vascular networks of human lips, and demonstrate the feasibility of pORPAM to observe the recovery process of oral ulcer and cancer-associated capillary loops in human oral cavities. This technology is promising for broad biomedical studies from fundamental biology to clinical diseases. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Remote gaze tracking system on a large display.
Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun
2013-10-07
We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°~±0.775° and a speed of 5~10 frames/s.
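The auto-focusing step above relies on a focus score computed on the NVC eye image. The abstract does not spell out the metric, so the sketch below uses a common gradient-energy (Tenengrad-like) sharpness measure as a plausible stand-in, tested on a synthetic sharp/blurred pair:

```python
import numpy as np

def focus_score(img):
    # Gradient-energy sharpness: mean squared intensity gradient.
    # Higher values indicate a better-focused image.
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

rng = np.random.default_rng(1)
sharp = rng.random((64, 64))             # stand-in for an in-focus eye image
# 2x2 box average as a crude stand-in for defocus blur
blurred = (sharp[:-1, :-1] + sharp[1:, :-1]
           + sharp[:-1, 1:] + sharp[1:, 1:]) / 4.0

print(focus_score(sharp) > focus_score(blurred))   # -> True
```

In an auto-focus loop, the lens position maximizing this score would be selected for each captured eye image.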
Wide-Field Imaging Interferometry Spatial-Spectral Image Synthesis Algorithms
NASA Technical Reports Server (NTRS)
Lyon, Richard G.; Leisawitz, David T.; Rinehart, Stephen A.; Memarsadeghi, Nargess; Sinukoff, Evan J.
2012-01-01
We have developed an algorithmic approach for wide-field-of-view interferometric spatial-spectral image synthesis. The data collected from the interferometer consist of a set of double-Fourier image data cubes, one cube per baseline. These cubes are three-dimensional, consisting of arrays of two-dimensional detector counts versus delay-line position. For each baseline, a moving delay line allows collection of a large set of interferograms over the 2D wide-field detector grid: one sampled interferogram per detector pixel per baseline. This aggregate set of interferograms is algorithmically processed to construct a single spatial-spectral cube with angular resolution approaching the ratio of the wavelength to the longest baseline. Wide-field imaging is accomplished by ensuring that the range of motion of the delay line encompasses the zero optical path difference fringe for each detector pixel in the desired field of view. Each baseline cube is incoherent relative to all other baseline cubes and thus has only phase information relative to itself. This lost phase information is recovered by having point, or otherwise known, sources within the field of view. The reference source phase is known and utilized as a constraint to recover the coherent phase relation between the baseline cubes, which is key to the image synthesis. We describe the mathematical formalism with phase referencing, and show results using data collected from the NASA/GSFC Wide-Field Imaging Interferometry Testbed (WIIT).
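The core of the per-pixel spectral recovery can be illustrated in isolation: the interferogram recorded as the delay line sweeps through zero optical path difference is Fourier transformed to yield that pixel's spectrum. A toy NumPy example with two synthetic spectral lines, whose wavenumbers are chosen to fall on exact FFT bins (not WIIT data):

```python
import numpy as np

n = 512
delay = np.arange(n)                      # delay-line positions (ZPD at sample 0)
sigma = [0.125, 0.25]                     # line wavenumbers in cycles/sample
weights = [1.0, 0.5]                      # relative line strengths

# Interferogram for one pixel: each spectral line adds a DC term plus a fringe.
interferogram = sum(w * (1.0 + np.cos(2 * np.pi * s * delay))
                    for w, s in zip(weights, sigma))

# Fourier transform of the interferogram recovers the spectrum.
spectrum = np.abs(np.fft.rfft(interferogram))
freqs = np.fft.rfftfreq(n)

# The two strongest non-DC peaks sit at the input wavenumbers.
top2 = np.argsort(spectrum[1:])[-2:] + 1
print(sorted(float(f) for f in freqs[top2]))   # -> [0.125, 0.25]
```

Repeating this per detector pixel and per baseline gives the per-baseline data cubes, which the phase-referencing step then combines coherently into the final spatial-spectral cube.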
Liu, Xinming; Shaw, Chris C; Wang, Tianpeng; Chen, Lingyun; Altunbas, Mustafa C; Kappadath, S Cheenu
2006-02-28
We developed and investigated a scanning sampled measurement (SSM) technique for scatter measurement and correction in cone beam breast CT imaging. A cylindrical polypropylene phantom (water equivalent) was mounted on a rotating table in a stationary-gantry experimental cone beam breast CT imaging system. A 2-D array of lead beads, spaced about 1 cm apart and slightly tilted vertically, was placed between the object and the x-ray source. A series of projection images was acquired as the phantom was rotated 1 degree per projection view and the lead-bead array was shifted vertically from one projection view to the next. A series of lead bars was also placed at the phantom edge to produce better scatter estimation across the phantom edges. Image signals in the lead bead/bar shadows were used to obtain sampled scatter measurements, which were then interpolated to form an estimated scatter distribution across the projection images. The image data behind the lead bead/bar shadows were restored by interpolating image data from two adjacent projection views to form beam-block-free projection images. The estimated scatter distribution was then subtracted from the corresponding restored projection image to obtain the scatter-removed projection images. Our preliminary experiment demonstrated that it is feasible to implement the SSM technique for scatter estimation and correction in cone beam breast CT imaging. Scatter correction was successfully performed on all projection images using the scatter distribution interpolated from SSM and the restored projection image data. The resulting scatter-corrected projection data yielded elevated CT numbers and largely reduced cupping effects.
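The interpolate-and-subtract step can be sketched in one dimension: sparse scatter samples read out in the bead shadows are interpolated across the detector and subtracted from the measured projection, recovering the primary signal. The signal shapes and numbers below are illustrative only:

```python
import numpy as np

x = np.arange(256, dtype=float)
primary = 100.0 * np.exp(-((x - 128.0) / 60.0) ** 2)     # object (primary) signal
scatter = 20.0 + 10.0 * np.sin(np.pi * x / 256.0)        # smooth scatter field
measured = primary + scatter                             # detector reading

# Lead-bead shadows give scatter-only readings at sparse positions
# (roughly every 16 pixels, plus the detector edge).
beads = np.append(np.arange(0, 256, 16), 255)
scatter_est = np.interp(x, beads, scatter[beads])        # interpolated scatter map

corrected = measured - scatter_est
print(float(np.max(np.abs(corrected - primary))) < 0.5)  # -> True
```

Because the scatter field varies slowly across the detector, linear interpolation between the sparse samples recovers it closely, and the subtraction removes the low-frequency offset responsible for cupping.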
Son, Jung-Young; Saveljev, Vladmir V; Kim, Jae-Soon; Kim, Sung-Sik; Javidi, Bahram
2004-09-10
The viewing zone of autostereoscopic imaging systems that use lenticular, parallax-barrier, and microlens-array plates as the viewing-zone-forming optics is analyzed in order to verify the image-quality differences between different locations of the zone. The viewing zone consists of many subzones. The images seen at most of these subzones are composed of at least one image strip selected from the total number of different view images displayed. These different view images are not mixed but patched to form a complete image. This image patching deteriorates the quality of the image seen at different subzones. We attempt to quantify the quality of the image seen at these viewing subzones by taking the inverse of the number of different view images patched together at different subzones. Although the combined viewing zone can be extended to almost all of the front space of the imaging system, in reality it is limited mainly by the image quality.
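The paper's quality metric for a viewing subzone, the inverse of the number of different view images patched together there, is simple enough to state directly. A minimal sketch; the function name is ours:

```python
def subzone_quality(num_patched_views: int) -> float:
    """Image quality at a viewing subzone, per the metric above: the inverse
    of the number of different view images patched together there."""
    if num_patched_views < 1:
        raise ValueError("a subzone shows at least one view image")
    return 1.0 / num_patched_views

# A subzone showing one unmixed view image has quality 1.0; a subzone where
# strips from four different view images are patched together drops to 0.25.
print(subzone_quality(1), subzone_quality(4))  # 1.0 0.25
```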
NASA Astrophysics Data System (ADS)
Angelhed, Jan-Erik; Carlsson, Goeran; Gustavsson, Staffan; Karlsson, Anders; Larsson, Lars E. G.; Svensson, Sune; Tylen, Ulf
1998-07-01
An Image Management And Communication (IMAC) system adapted to the X-ray department at Sahlgrenska University Hospital has been developed using standard components. Two user demands have been considered primary: rapid access to (display of) images and efficient worklist management. To fulfil these demands, a connection between the IMAC system and the existing Radiological Information System (RIS) has been implemented. The functional modules are: a check of information consistency in data exported from image sources, a (logically) central storage of image data, a viewing facility for high-speed, large-volume clinical work, and an efficient interface to the RIS. Also, an image-related database extension has been made to the RIS. The IMAC system has a strictly modular design with a simple structure. The image archive and short-term storage are logically the same and act as one huge disk. Through NFS, all image data are available to all the connected workstations. All patient selection for viewing is through worklists, which are created by selection criteria in the RIS, by the use of barcodes, or, in exceptional cases, by entering the patient ID by hand.
Ultra-widefield retinal MHz-OCT imaging with up to 100 degrees viewing angle.
Kolb, Jan Philip; Klein, Thomas; Kufner, Corinna L.; Wieser, Wolfgang; Neubauer, Aljoscha S.; Huber, Robert
2015-05-01
We evaluate strategies to maximize the field of view (FOV) of in vivo retinal OCT imaging of human eyes. Three imaging modes are tested: single-volume imaging with 85° FOV as well as with 100°, and stitching of five 60° images to a 100° mosaic (measured from the nodal point). We employ a MHz-OCT system based on a 1060 nm Fourier domain mode locked (FDML) laser with a depth scan rate of 1.68 MHz. The high speed is essential for dense isotropic sampling of the large areas. Challenges caused by the wide FOV are discussed and solutions to most issues are presented. Detailed information on the design and characterization of our sample arm optics is given. We investigate the origin of an angle-dependent signal fall-off which we observe towards larger imaging angles. It is present in our 85° and 100° single-volume images, but not in the mosaic. Our results suggest that 100° FOV OCT is possible with current swept source OCT technology. PMID:26137363
2005-09-11
Taking advantage of extra solar energy collected during the day, NASA's Mars Exploration Rover Spirit settled in for an evening of stargazing, photographing the two moons of Mars as they crossed the night sky. The first two images in this sequence show gradual enhancements in the surface detail of Mars' largest moon, Phobos, made possible through a combination technique known as "stacking." In "stacking," scientists use a mathematical process known as Laplacian sharpening to reinforce features that appear consistently in repetitive images and minimize features that show up only intermittently. In this view of Phobos, the large crater named Stickney is just out of sight on the moon's upper right limb. Spirit acquired the first two images with the panoramic camera on the night of sol 585 (Aug. 26, 2005). The far-right image of Phobos, for comparison, was taken by the High Resolution Stereo Camera on Mars Express, a European Space Agency orbiter. The third image in this sequence was derived from the far-right image by making it blurrier for comparison with the panoramic camera images to the left. http://photojournal.jpl.nasa.gov/catalog/PIA06335
Pine Island Glacier, Antarctica, MISR Multi-angle Composite
Atmospheric Science Data Center
2013-12-17
... A large iceberg has finally separated from the calving front ... next due to stereo parallax. This parallax is used in MISR processing to retrieve cloud heights over snow and ice. Additionally, a plume ...
2015-03-02
Some might see a pancake, and others a sand dollar, in this new image from NASA's Dawn mission. Astronomers are puzzling over a mysterious large circular feature located south of the equator and slightly to the right of center in this view.
Atmospheric Science Data Center
2013-04-16
... SpectroRadiometer (MISR) nadir-camera images of eastern China compare a somewhat hazy summer view from July 9, 2000 (left) with a ... arid and sparsely vegetated surfaces of Mongolia and western China pick up large quantities of yellow dust. Airborne dust clouds from the ...
Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping
2017-03-17
A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing.
Milojkovic, Predrag; Christensen, Marc P; Haney, Michael W
2006-07-01
The FAST-Net (Free-space Accelerator for Switching Terabit Networks) concept uses an array of wide-field-of-view imaging lenses to realize a high-density shuffle interconnect pattern across an array of smart-pixel integrated circuits. To simplify the optics we evaluated the efficiency gained in replacing spherical surfaces with aspherical surfaces by exploiting the large disparity between narrow vertical cavity surface emitting laser (VCSEL) beams and the wide field of view of the imaging optics. We then analyzed trade-offs between lens complexity and chip real estate utilization and determined that there exists an optimal numerical aperture for VCSELs that maximizes their area density. The results provide a general framework for the design of wide-field-of-view free-space interconnection systems that incorporate high-density VCSEL arrays.
Fast imaging of live organisms with sculpted light sheets
NASA Astrophysics Data System (ADS)
Chmielewski, Aleksander K.; Kyrsting, Anders; Mahou, Pierre; Wayland, Matthew T.; Muresan, Leila; Evers, Jan Felix; Kaminski, Clemens F.
2015-04-01
Light-sheet microscopy is an increasingly popular technique in the life sciences due to its fast 3D imaging of fluorescent samples with low phototoxicity compared to confocal methods. In this work we present a new, fast, flexible and simple-to-implement method to optimize the illumination light-sheet to the requirements at hand. A telescope composed of two electrically tuneable lenses enables us to define the thickness and position of the light-sheet independently and accurately within milliseconds, and therefore to optimize the image quality of the features of interest interactively. We demonstrated the practical benefit of this technique by 1) assembling large fields of view from tiled single exposures, each with individually optimized illumination settings; and 2) sculpting the light-sheet to trace complex sample shapes within single exposures. This technique proved compatible with confocal line-scanning detection, further improving image contrast and resolution. Finally, we determined the effect of light-sheet optimization in the context of scattering tissue, devising procedures for balancing image quality, field of view and acquisition speed.
Greenland's Coast in Holiday Colors
NASA Technical Reports Server (NTRS)
2003-01-01
Vibrant reds, emerald greens, brilliant whites, and pastel blues adorn this view of the area surrounding the Jakobshavn Glacier on the western coast of Greenland. The image is a false-color (near-infrared, green, blue) view acquired by the Multi-angle Imaging SpectroRadiometer's nadir camera. The brightness of vegetation in the near-infrared contributes to the reddish hues; glacial silt gives rise to the green color of the water; and blue-colored melt ponds are visible in the bright white ice. A scattering of small icebergs in Disko Bay adds a touch of glittery sparkle to the scene.
The large island in the upper left is called Qeqertarsuaq. To the east of this island, and just above image center, is the outlet of the fast-flowing Jakobshavn (or Ilulissat) glacier. Jakobshavn is considered to have the highest iceberg production of all Greenland glaciers and is a major drainage outlet for a large portion of the western side of the ice sheet. Icebergs released from the glacier drift slowly with the ocean currents and pose hazards for shipping along the coast. The Multi-angle Imaging SpectroRadiometer views the daylit Earth continuously, and the entire globe between 82 degrees north and 82 degrees south latitude is observed every 9 days. These data products were generated from a portion of the imagery acquired on June 18, 2003 during Terra orbit 18615. The image covers an area of about 254 kilometers x 210 kilometers, and uses data from blocks 34 to 35 within World Reference System-2 path 10. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
A dual-view digital tomosynthesis imaging technique for improved chest imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhong, Yuncheng; Lai, Chao-Jen; Wang, Tianpeng
Purpose: Digital tomosynthesis (DTS) has been shown to be useful for reducing the overlap of abnormalities with anatomical structures at various depth levels along the posterior–anterior (PA) direction in chest radiography. However, DTS provides crude three-dimensional (3D) images that have poor resolution in the lateral view and can only be displayed with reasonable quality in the PA view. Furthermore, the spillover of high-contrast objects from off-fulcrum planes generates artifacts that may impede the diagnostic use of the DTS images. In this paper, the authors describe and demonstrate the use of a dual-view DTS technique to improve the accuracy of the reconstructed volume image data for more accurate rendition of the anatomy and slice images with improved resolution and reduced artifacts, thus allowing the 3D image data to be viewed in views other than the PA view. Methods: With the dual-view DTS technique, limited-angle scans are performed and projection images are acquired in two orthogonal views: PA and lateral. The dual-view projection data are used together to reconstruct 3D images using the maximum likelihood expectation maximization iterative algorithm. In this study, projection images were simulated or experimentally acquired over 360° using the scanning geometry for cone beam computed tomography (CBCT). While all projections were used to reconstruct CBCT images, selected projections were extracted and used to reconstruct single- and dual-view DTS images for comparison with the CBCT images. For realistic demonstration and comparison, a digital chest phantom derived from clinical CT images was used for the simulation study. An anthropomorphic chest phantom was imaged for the experimental study.
The resultant dual-view DTS images were visually compared with the single-view DTS images and CBCT images for the presence of image artifacts and accuracy of CT numbers and anatomy, and quantitatively compared with root-mean-square-deviation (RMSD) values computed using the digital chest phantom or the CBCT images as the reference in the simulation and experimental study, respectively. High-contrast wires with vertical, oblique, and horizontal orientations in a PA view plane were also imaged to investigate the spatial resolutions and how the wire signals spread in the PA view and lateral view slice images. Results: Both the digital phantom images (simulated) and the anthropomorphic phantom images (experimentally generated) demonstrated that the dual-view DTS technique resulted in improved spatial resolution in the depth (PA) direction, more accurate representation of the anatomy, and significantly reduced artifacts. The RMSD values corroborate the visual observations, with substantially lower RMSD values measured for the dual-view DTS images as compared to those measured for the single-view DTS images. The imaging experiment with the high-contrast wires shows that while the vertical and oblique wires could be resolved in the lateral view in both single- and dual-view DTS images, the horizontal wire could only be resolved in the dual-view DTS images. This indicates that with single-view DTS, the wire signals spread liberally to off-fulcrum planes and generated wire shadows there. Conclusions: The authors have demonstrated both visually and quantitatively that the dual-view DTS technique can be used to achieve more accurate rendition of the anatomy and to obtain slice images with improved resolution and reduced artifacts as compared to the single-view DTS technique, thus allowing the 3D image data to be viewed in views other than the PA view.
These advantages could make the dual-view DTS technique useful in situations where better separation of the objects-of-interest from the off-fulcrum structures or more accurate 3D rendition of the anatomy are required while a regular CT examination is undesirable due to radiation dose considerations.
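The dual-view reconstruction above uses the maximum likelihood expectation maximization (MLEM) algorithm on the combined PA and lateral projection data. A toy sketch on a 2x2 "volume" with orthogonal ray sums illustrates the multiplicative update; the system matrix and scale are illustrative, not the authors' cone-beam geometry, and the toy system is underdetermined, so we check data consistency rather than exact recovery.

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum-likelihood EM for y ≈ A @ x with nonnegative x."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # per-voxel sensitivity
    for _ in range(n_iter):
        fp = A @ x                            # forward projection
        x = x * (A.T @ (y / fp)) / sens       # multiplicative update
    return x

# Toy 2x2 "volume" projected in two orthogonal views: PA row sums and
# lateral column sums, mimicking the dual-view acquisition geometry.
truth = np.array([4.0, 1.0, 2.0, 3.0])        # flattened 2x2 slice
A_pa = np.array([[1, 1, 0, 0],
                 [0, 0, 1, 1]], dtype=float)  # PA view
A_lat = np.array([[1, 0, 1, 0],
                  [0, 1, 0, 1]], dtype=float) # lateral view
A = np.vstack([A_pa, A_lat])                  # combined dual-view system
y = A @ truth
x_rec = mlem(A, y)
print(np.allclose(A @ x_rec, y))              # True: both views are fit consistently
```

Adding the second (lateral) view constrains the solution along the depth direction, which is the mechanism behind the improved depth resolution reported above.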
High-Resolution Large-Field-of-View Three-Dimensional Hologram Display System and Method Thereof
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor); Mintz, Frederick W. (Inventor); Tsou, Peter (Inventor); Bryant, Nevin A. (Inventor)
2001-01-01
A real-time, dynamic, free-space virtual-reality 3-D image display system is enabled by using a unique form of Aerogel as the primary display medium. A preferred embodiment of this system comprises a 3-D mosaic topographic map which is displayed by fusing four projected hologram images. In this embodiment, four holographic images are projected from four separate holograms. Each holographic image subtends a quadrant of the 4π solid angle. By fusing these four holographic images, a static 3-D image such as a featured terrain map would be visible for 360 deg in the horizontal plane and 180 deg in the vertical plane. An input, either acquired by a 3-D image sensor or generated by computer animation, is first converted into a 2-D computer-generated hologram (CGH). This CGH is then downloaded into a large liquid crystal (LC) panel. A laser projector illuminates the CGH-filled LC panel and generates and displays a real 3-D image in the Aerogel matrix.
Generation of High-Resolution Geo-referenced Photo-Mosaics From Navigation Data
NASA Astrophysics Data System (ADS)
Delaunoy, O.; Elibol, A.; Garcia, R.; Escartin, J.; Fornari, D.; Humphris, S.
2006-12-01
Optical images of the ocean floor are a rich source of data for understanding biological and geological processes. However, due to the attenuation of light in sea water, the area covered by optical systems is very small, and a large number of images is then needed to cover an area of interest, as individually they do not provide a global view of the surveyed area. Therefore, generating a composite view (or photo-mosaic) from multiple overlapping images is usually the most practical and flexible solution to visually cover a wide area, allowing the analysis of the site in one single representation of the ocean floor. In most camera surveys carried out nowadays, some sort of positioning information is available (e.g., USBL, DVL, INS, gyros). For a towed camera, an estimate of the tether length and the mother ship's GPS reading could also serve as navigation data. In any case, a photo-mosaic can be built just by taking into account the position and orientation of the camera. On the other hand, most regions of interest to the scientific community are quite large (>1 km²), and since better resolution is always required, the final photo-mosaic can be very large (>1,000,000 × 1,000,000 pixels) and cannot be handled by commonly available software. For this reason, we have developed a software package able to load a navigation file and the sequence of acquired images to automatically build a geo-referenced mosaic. This navigated mosaic provides a global view of the site of interest at the maximum available resolution. The developed package includes a viewer, allowing the user to load, view and annotate these geo-referenced photo-mosaics on a personal computer. A software library has been developed to allow the viewer to manage such very big images. Therefore, the size of the resulting mosaic is now limited only by the size of the hard drive.
Work is being carried out to apply image processing techniques to the navigated mosaic, with the intention of locally improving image alignment. Tests have been conducted using the data acquired during the cruise LUSTRE'96 (LUcky STRike Exploration, 37°17'N 32°17'W) by WHOI. During this cruise, the ARGO-II tethered vehicle acquired ~21,000 images in a ~1 km² area of the seafloor to map the geology of this hydrothermal field at high resolution. The obtained geo-referenced photo-mosaic has a resolution of 1.5 cm per pixel, with a coverage of ~25% of the Lucky Strike area. Data and software will be made publicly available.
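Building a mosaic purely from navigation fixes, as described above, amounts to converting each camera position into a pixel offset and compositing the tiles onto a shared canvas. The sketch below assumes a flat seafloor, a nadir-pointing camera, a known metres-per-pixel scale, and positions given for each tile's top-left corner (with the row index growing with northing for simplicity); none of these names come from the authors' software.

```python
import numpy as np

def build_mosaic(tiles, positions, m_per_px):
    """Composite image tiles on one canvas using navigation fixes only.

    tiles: list of 2-D grey-level images.
    positions: (easting, northing) fixes in metres for each tile's
        top-left corner (a simplification of real camera geometry).
    m_per_px: ground resolution in metres per pixel.
    """
    px = [(int(round(n / m_per_px)), int(round(e / m_per_px)))
          for e, n in positions]               # (row, col) offsets
    h = max(r + t.shape[0] for (r, _), t in zip(px, tiles))
    w = max(c + t.shape[1] for (_, c), t in zip(px, tiles))
    canvas = np.zeros((h, w))
    count = np.zeros((h, w))
    for (r, c), t in zip(px, tiles):
        canvas[r:r + t.shape[0], c:c + t.shape[1]] += t
        count[r:r + t.shape[0], c:c + t.shape[1]] += 1
    # Average overlapping tiles; cells never covered stay at zero.
    return np.where(count > 0, canvas / np.maximum(count, 1), 0.0)

# Two 4x4 tiles whose fixes are 2 m apart east at 1 m/pixel: the mosaic is
# 4x6 pixels and the 2-pixel overlap averages the two exposures.
t1, t2 = np.full((4, 4), 1.0), np.full((4, 4), 2.0)
mosaic = build_mosaic([t1, t2], [(0.0, 0.0), (2.0, 0.0)], m_per_px=1.0)
print(mosaic.shape, mosaic[0, 0], mosaic[0, 3])  # (4, 6) 1.0 1.5
```

The follow-up work mentioned above, locally improving alignment with image processing, would refine these nav-derived offsets by registering overlapping tiles.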
High resolution microphotonic needle for endoscopic imaging (Conference Presentation)
NASA Astrophysics Data System (ADS)
Tadayon, Mohammad Amin; Mohanty, Aseema; Roberts, Samantha P.; Barbosa, Felippe; Lipson, Michal
2017-02-01
GRIN (graded-index) lenses have revolutionized micro-endoscopy, enabling deep tissue imaging with high resolution. The challenges of traditional GRIN lenses are their large size (when compared with the field of view) and their limited resolution, which stems from the relatively low NA of standard graded-index lenses. Here we introduce a novel micro-needle platform for endoscopy with much higher resolution than traditional GRIN lenses and a FOV that corresponds to the whole cross section of the needle. The platform is based on a polymeric (SU-8) waveguide integrated with a microlens, microfabricated on a silicon substrate using a unique molding process. Due to the high index of refraction of the material, the NA of the needle is much higher than that of traditional GRIN lenses. We tested the probe in a fluorescent dye solution (19.6 µM Alexa Fluor 647) and measured a numerical aperture of 0.25, a focal length of about 175 µm and a minimal spot size of about 1.6 µm. We show that the platform can image a sample with a field of view corresponding to the cross-sectional area of the waveguide (80 × 100 µm²). The waveguide size can in principle be modified to vary the size of the imaging field of view. This demonstration, combined with our previous work demonstrating our ability to implant the high-NA needle in a live animal, shows that the proposed system can be used for deep tissue imaging with very high resolution and a high field of view.
Holographic Waveguide Array Rollable Display.
1997-04-01
scale lithography for fabrication. Projection systems offer large images, in the range of 40-60 inches diagonal, and both front-view and rear-view ... Boulder, CO, and a 1-D array of digital micromirrors (DMD) from Texas Instruments. The linear format permits simple driving electronics and high ... TI's DMD, or a CMOS-SLM. A collimated laser beam (combining three colors) or a collimated white-light beam from a high-intensity halogen lamp can be
Use of mobile devices for medical imaging.
Hirschorn, David S; Choudhri, Asim F; Shih, George; Kim, Woojin
2014-12-01
Mobile devices have fundamentally changed personal computing, with many people forgoing the desktop and even laptop computer altogether in favor of a smaller, lighter, and cheaper device with a touch screen. Doctors and patients are beginning to expect medical images to be available on these devices for consultative viewing, if not actual diagnosis. However, this raises serious concerns with regard to the ability of existing mobile devices and networks to quickly and securely move these images. Medical images often come in large sets, which can bog down a network if not conveyed in an intelligent manner, and downloaded data on a mobile device are highly vulnerable to a breach of patient confidentiality should that device become lost or stolen. Some degree of regulation is needed to ensure that the software used to view these images allows all relevant medical information to be visible and manipulated in a clinically acceptable manner. There also needs to be a quality control mechanism to ensure that a device's display accurately conveys the image content without loss of contrast detail. Furthermore, not all mobile displays are appropriate for all types of images. The smaller displays of smart phones, for example, are not well suited for viewing entire chest radiographs, no matter how small and numerous the pixels of the display may be. All of these factors should be taken into account when deciding where, when, and how to use mobile devices for the display of medical images. Copyright © 2014 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Large field-of-view tiled grating structures for X-ray phase-contrast imaging
NASA Astrophysics Data System (ADS)
Schröter, Tobias J.; Koch, Frieder J.; Meyer, Pascal; Kunka, Danays; Meiser, Jan; Willer, Konstantin; Gromann, Lukas; Marco, Fabio D.; Herzen, Julia; Noel, Peter; Yaroshenko, Andre; Hofmann, Andreas; Pfeiffer, Franz; Mohr, Jürgen
2017-01-01
X-ray grating-based interferometry promises unique new diagnostic possibilities in medical imaging and materials analysis. To transfer this method from scientific laboratories or small-animal applications to clinical radiography applications, compact setups with a large field of view (FoV) are required. Currently the FoV is limited by the grating area, which is restricted due to the complex manufacturing process. One possibility to increase the FoV is tiling individual grating tiles to create one large area grating mounted on a carrier substrate. We investigate theoretically the accuracy needed for a tiling process in all degrees of freedom by applying a simulation approach. We show how the resulting precision requirements can be met using a custom-built frame for exact positioning. Precise alignment is achieved by comparing the fringe patterns of two neighboring grating tiles in a grating interferometer. With this method, the FoV can be extended to practically any desired length in one dimension. First results of a phase-contrast scanning setup with a full FoV of 384 mm × 24 mm show the suitability of this method.
Rebling, Johannes; Estrada, Héctor; Gottschalk, Sven; Sela, Gali; Zwack, Michael; Wissmeyer, Georg; Ntziachristos, Vasilis; Razansky, Daniel
2018-04-19
A critical link exists between pathological changes of the cerebral vasculature and diseases affecting brain function. Microscopic techniques have played an indispensable role in the study of neurovascular anatomy and function. Yet, investigations are often hindered by suboptimal trade-offs between the spatiotemporal resolution, field of view (FOV) and type of contrast offered by existing optical microscopy techniques. We present a hybrid dual-wavelength optoacoustic (OA) biomicroscope capable of rapid transcranial visualization of large-scale cerebral vascular networks. The system offers 3-dimensional views of the morphology and oxygenation status of the cerebral vasculature with single-capillary resolution and a FOV exceeding 6 × 8 mm², thus covering the entire cortical vasculature in mice. The large-scale OA imaging capacity is complemented by simultaneously acquired pulse-echo ultrasound (US) biomicroscopy scans of the mouse skull. The new approach holds great potential to provide better insights into cerebrovascular function and to facilitate efficient studies of neurological and vascular abnormalities of the brain. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Branching out with filmless radiology.
Carbajal, R; Honea, R
1999-05-01
Texas Children's Hospital, a 456-bed pediatric hospital located in the Texas Medical Center, has been constructing a large-scale picture archiving and communications system (PACS), including ultrasound (US), computed tomography (CT), magnetic resonance (MR), and computed radiography (CR). Until recently, filmless radiology operations have been confined to the imaging department, the outpatient treatment center, and the emergency center. As filmless services expand to other clinical services, the PACS staff must engage each service in a dialog to determine the appropriate level of support required. The number and type of image examinations, the use of multiple modalities and comparison examinations, and the relationship between viewing and direct patient care activities have a bearing on the number and type of display stations provided. Some of the information about customer services is contained in documentation already maintained by the imaging department. For example, through a custom report from the radiology information system (RIS), we were able to determine the number and type of examinations ordered by each referring physician for the previous 6 months. By compiling these by clinical service, we were able to determine our biggest customers by examination type and volume. Another custom report was used to determine who was requesting old examinations from the film library. More information about imaging usage was gathered by means of a questionnaire. Some customers view images only where patients are also seen, while some services view images independently of the patient. Some services use their conference rooms for critical image viewing such as treatment planning. Additional information was gained from geographical surveys of where films are currently produced, delivered by the film library, and viewed. In some areas, available space dictates the type and configuration of display station that can be used.
Active participation in the decision process by the clinical service is a key element to successful filmless operations.
ASTER Views Large Calving Event at Petermann Glacier, Greenland
2010-08-12
This image of Petermann Glacier and the new iceberg was acquired from NASA Terra spacecraft on Aug. 12, 2010. On Aug. 5, 2010, an enormous chunk of ice broke off the Petermann Glacier along the northwestern coast of Greenland.
2016-03-02
NASA's Dawn spacecraft obtained this view of Azacca Crater on Ceres. Terraces descend from the rim of this crater down to its floor. The crater floor is relatively free of large impact scars. The crater is named for the Haitian god of agriculture.
2000-06-20
A plume from a large brush fire that burned about 15,000 acres in 2000 is visible at the western edge of the Big Cypress Swamp in southern Florida. NASA's Terra satellite acquired this image on April 9, 2000. 3D glasses are necessary.
Quantifying navigational information: The catchment volumes of panoramic snapshots in outdoor scenes
Murray, Trevor; Zeil, Jochen
2017-01-01
Panoramic views of natural environments provide visually navigating animals with two kinds of information: they define locations, because image differences increase smoothly with distance from a reference location, and they provide compass information, because image differences increase smoothly with rotation away from a reference orientation. The range over which a given reference image can provide navigational guidance (its 'catchment area') has to date been quantified from the perspective of walking animals by determining how image differences develop across the ground plane of natural habitats. However, to understand the information available to flying animals, there is a need to characterize the 'catchment volumes' within which panoramic snapshots can provide navigational guidance. We used recently developed camera-based methods for constructing 3D models of natural environments and rendered panoramic views at defined locations within these models with the aim of mapping navigational information in three dimensions. We find that in relatively open woodland habitats, catchment volumes are surprisingly large, extending for metres depending on the sensitivity of the viewer to image differences. The size and shape of catchment volumes depend on the distance of visual features in the environment. Catchment volumes are smaller for reference images close to the ground and become larger for reference images at some distance from the ground and in more open environments. Interestingly, catchment volumes become smaller when only above-horizon views are used and also when views include a 1 km distant panorama. We discuss the current limitations of mapping navigational information in natural environments and the relevance of our findings for our understanding of visual navigation in animals and autonomous robots. PMID:29088300
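The catchment measurements described above rest on an image difference function that grows smoothly with distance from the reference location. A minimal sketch, with hypothetical names, of estimating a catchment extent along one transect of rendered snapshots:

```python
import numpy as np

def rms_image_diff(a, b):
    """Root-mean-square pixel difference between two panoramic snapshots."""
    return np.sqrt(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))

def catchment_extent(reference, transect, threshold):
    """Count how many successive snapshots along a transect away from the
    reference location stay at or below the difference threshold."""
    n = 0
    for snap in transect:
        if rms_image_diff(reference, snap) > threshold:
            break
        n += 1
    return n

# Toy panoramas in which the image difference grows smoothly with distance,
# as the paper describes; the catchment is the prefix below threshold.
rng = np.random.default_rng(0)
ref = rng.random((8, 64))
transect = [ref + 0.05 * d * rng.random((8, 64)) for d in range(1, 6)]
print(catchment_extent(ref, transect, threshold=0.07))  # 2 snapshots lie inside
```

The 3-D catchment volumes in the study generalize this 1-D extent to transects in all directions from the reference snapshot.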
NASA Astrophysics Data System (ADS)
Gong, K.; Fritsch, D.
2018-05-01
Nowadays, multiple-view stereo satellite imagery has become a valuable data source for digital surface model generation and 3D reconstruction. In 2016, a well-organized, publicly available multiple-view stereo benchmark for commercial satellite imagery was released by the Johns Hopkins University Applied Physics Laboratory, USA. This benchmark motivates us to explore methods that can generate accurate digital surface models from a large number of high-resolution satellite images. In this paper, we propose a pipeline for processing the benchmark data into digital surface models. As a pre-processing step, we filter all the possible image pairs according to the incidence angle and capture date. With the selected image pairs, the relative bias-compensated model is applied for relative orientation. After epipolar image pair generation, dense image matching and triangulation, the 3D point clouds and DSMs are acquired. The DSMs are aligned to a quasi-ground plane by the relative bias-compensated model. We apply a median filter to generate the fused point cloud and DSM. By comparison with the reference LiDAR DSM, the accuracy, completeness and robustness are evaluated. The results show that the point cloud reconstructs the surface, including small structures, and that the fused DSM generated by our pipeline is accurate and robust.
The viewpoint-specific failure of modern 3D displays in laparoscopic surgery.
Sakata, Shinichiro; Grove, Philip M; Hill, Andrew; Watson, Marcus O; Stevenson, Andrew R L
2016-11-01
Surgeons conventionally assume the optimal viewing position during 3D laparoscopic surgery and may not be aware of the potential hazards to team members positioned across different suboptimal viewing positions. The first aim of this study was to map the viewing positions within a standard operating theatre where individuals may experience visual ghosting (i.e. double vision images) from crosstalk. The second aim was to characterize the standard viewing positions adopted by instrument nurses and surgical assistants during laparoscopic pelvic surgery and report the associated levels of visual ghosting and discomfort. In experiment 1, 15 participants viewed a laparoscopic 3D display from 176 different viewing positions around the screen. In experiment 2, 12 participants (randomly assigned to four clinically relevant viewing positions) viewed laparoscopic suturing in a simulation laboratory. In both experiments, we measured the intensity of visual ghosting. In experiment 2, participants also completed the Simulator Sickness Questionnaire. We mapped locations within the dimensions of a standard operating theatre at which visual ghosting may result during 3D laparoscopy. Head height relative to the bottom of the image and large horizontal eccentricities away from the surface normal were important contributors to high levels of visual ghosting. Conventional viewing positions adopted by instrument nurses yielded high levels of visual ghosting and severe discomfort. The conventional viewing positions adopted by surgical team members during laparoscopic pelvic operations are suboptimal for viewing 3D laparoscopic displays, and even short periods of viewing can yield high levels of discomfort.
Color and 3D views of the Sierra Nevada mountains
NASA Technical Reports Server (NTRS)
2002-01-01
A stereo 'anaglyph' created using the nadir and 45.6-degree forward-viewing cameras provides a three-dimensional view of the scene when viewed with red/blue glasses. The red filter should be placed over your left eye. To facilitate the stereo viewing, the images have been oriented with north toward the left. Some prominent features are Mono Lake, in the center of the image; Walker Lake, to its left; and Lake Tahoe, near the lower left. This view of the Sierra Nevadas includes Yosemite, Kings Canyon, and Sequoia National Parks. Mount Whitney, the highest peak in the contiguous 48 states (elev. 14,495 feet), is visible near the righthand edge. Above it (to the east), the Owens Valley shows up prominently between the Sierra Nevada and Inyo ranges. Precipitation falling as rain or snow on the Sierras feeds numerous rivers flowing southwestward into the San Joaquin Valley. The abundant fields of this productive agricultural area can be seen along the lower right; a large number of reservoirs that supply water for crop irrigation are apparent in the western foothills of the Sierras. Urban areas in the valley appear as gray patches; among the California cities that are visible are Fresno, Merced, and Modesto.
Ultra-large field-of-view two-photon microscopy
Tsai, Philbert S.; Mateo, Celine; Field, Jeffrey J.; Schaffer, Chris B.; Anderson, Matthew E.; Kleinfeld, David
2015-01-01
We present a two-photon microscope that images the full extent of murine cortex with an objective-limited spatial resolution across an 8 mm by 10 mm field. The lateral resolution is approximately 1 µm and the maximum scan speed is 5 mm/ms. The scan pathway employs large diameter compound lenses to minimize aberrations and performs near theoretical limits. We demonstrate the special utility of the microscope by recording resting-state vasomotion across both hemispheres of the murine brain through a transcranial window and by imaging histological sections without the need to stitch. PMID:26072755
SRTM Perspective View with Landsat Overlay: Caliente Range and Cuyama Valley, California
NASA Technical Reports Server (NTRS)
2001-01-01
Before the arrival of Europeans, California's Cuyama Valley was inhabited by Native Americans who were culturally and politically tied to the Chumash tribes of coastal Santa Barbara County. Centuries later, the area remains the site of noted Native American rock art paintings. In the 1800s, when Europeans established large cattle and horse-breeding ranches in the valley, the early settlers reported the presence of small villages along the Cuyama River. This perspective view looks upstream toward the southeast through the Cuyama Valley. The Caliente Range, with maximum elevations of 1,550 meters (5,085 feet), borders the valley on the left. The Cuyama River, seen as a bright meandering line on the valley floor, enters the valley from headwaters more than 2,438 meters (8,000 feet) above sea level near Mount Abel and flows 154 kilometers (96 miles) before emptying into the Pacific Ocean. The river's course has been determined in large part by displacement along numerous faults.
Today, the Cuyama Valley is the home of large ranches and small farms. The area has a population of 1,120 and is more than an hour and a half drive from the nearest city in the county. This image was generated by draping an enhanced Landsat satellite image over elevation data from the Shuttle Radar Topography Mission (SRTM). Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data matches the 30-meter resolution of most Landsat images and will substantially help in analyses of the large and growing Landsat image archive. For visualization purposes, topographic heights displayed in this image are exaggerated two times. Colors approximate natural colors. The elevation data used in this image was acquired by SRTM aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of Earth's land surface. To collect the 3-D SRTM data, engineers added a mast 60 meters (about 200 feet) long, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense, and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C. JPL is a division of the California Institute of Technology in Pasadena. Location (Center): 34.97 deg. North lat., 119.70 deg. West lon. View: Southeast Scale: Scale varies in this perspective Date Acquired: February 16, 2000 (SRTM); December 14, 1984 (Landsat)
NASA Technical Reports Server (NTRS)
2002-01-01
This Sea-viewing Wide Field-of-view Sensor (SeaWiFS) image of eastern Asia from October 14, 2001, shows large amounts of aerosol in the air. A few possible point sources of smoke, probably fires, are visible north of the Amur River at the very top of the image. One of the larger of these plumes can be seen down river of the confluence of the Songhua and Amur rivers. At lower left, the Yangtze River plume in the East China Sea is also very prominent. Sediment suspended in the ocean water is quite brown near the shore, but becomes much greener as it diffuses into the water. The increasing greenness of the river plume is probably an indication of enhanced phytoplankton growth driven by the nutrients in the river runoff. Image courtesy the SeaWiFS Project, NASA/Goddard Space Flight Center, and ORBIMAGE
Perspective on Kraken Mare Shores
2015-02-12
This Cassini Synthetic Aperture Radar (SAR) image is presented as a perspective view and shows a landscape near the eastern shoreline of Kraken Mare, a hydrocarbon sea in Titan's north polar region. This image was processed using a technique for handling noise that results in clearer views that can be easier for researchers to interpret. The technique, called despeckling, also is useful for producing altimetry data and 3-D views called digital elevation maps. Scientists have used a technique called radargrammetry to determine the altitude of surface features in this view at a resolution of approximately half a mile, or 1 kilometer. The altimetry reveals that the area is smooth overall, with a maximum amplitude of 0.75 mile (1.2 kilometers) in height. The topography also shows that all observed channels flow downhill. The presence of what scientists call "knickpoints" -- locations on a river where a sharp change in slope occurs -- might indicate stratification in the bedrock, erosion mechanisms at work or a particular way the surface responds to runoff events, such as floods following large storms. One such knickpoint is visible just above the lower left corner, where an area of bright slopes is seen. The image was obtained during a flyby of Titan on April 10, 2007. A more traditional radar image of this area on Titan is seen in PIA19046. http://photojournal.jpl.nasa.gov/catalog/PIA19051
Multi-scale approaches for high-speed imaging and analysis of large neural populations
Ahrens, Misha B.; Yuste, Rafael; Peterka, Darcy S.; Paninski, Liam
2017-01-01
Progress in modern neuroscience critically depends on our ability to observe the activity of large neuronal populations with cellular spatial and high temporal resolution. However, two bottlenecks constrain efforts towards fast imaging of large populations. First, the resulting large video data is challenging to analyze. Second, there is an explicit tradeoff between imaging speed, signal-to-noise, and field of view: with current recording technology we cannot image very large neuronal populations with simultaneously high spatial and temporal resolution. Here we describe multi-scale approaches for alleviating both of these bottlenecks. First, we show that spatial and temporal decimation techniques based on simple local averaging provide order-of-magnitude speedups in spatiotemporally demixing calcium video data into estimates of single-cell neural activity. Second, once the shapes of individual neurons have been identified at fine scale (e.g., after an initial phase of conventional imaging with standard temporal and spatial resolution), we find that the spatial/temporal resolution tradeoff shifts dramatically: after demixing we can accurately recover denoised fluorescence traces and deconvolved neural activity of each individual neuron from coarse scale data that has been spatially decimated by an order of magnitude. This offers a cheap method for compressing this large video data, and also implies that it is possible to either speed up imaging significantly, or to “zoom out” by a corresponding factor to image order-of-magnitude larger neuronal populations with minimal loss in accuracy or temporal resolution. PMID:28771570
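The local-averaging decimation described above is easy to sketch. This toy version (our naming, not the authors' released code) averages non-overlapping blocks of `st` frames and `ss` × `ss` pixels of a movie stored as a (T, H, W) array:

```python
import numpy as np

def decimate(movie, st=2, ss=2):
    """Decimate a (T, H, W) imaging movie by simple local averaging:
    st-frame blocks in time, ss x ss blocks in space."""
    T, H, W = movie.shape
    T2, H2, W2 = T // st, H // ss, W // ss
    m = movie[:T2 * st, :H2 * ss, :W2 * ss]          # crop to whole blocks
    m = m.reshape(T2, st, H2, ss, W2, ss)            # expose block axes
    return m.mean(axis=(1, 3, 5))                    # average each block

movie = np.arange(64, dtype=float).reshape(4, 4, 4)  # movie[t, y, x] = 16t + 4y + x
small = decimate(movie)                              # shape (2, 2, 2)
```

Each 2 × 2 × 2 decimation cuts the data volume eightfold, which is where the order-of-magnitude speedups in demixing come from.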
IIPImage: Large-image visualization
NASA Astrophysics Data System (ADS)
Pillay, Ruven
2014-08-01
IIPImage is an advanced, high-performance, feature-rich image server system that enables online access to full-resolution floating-point (as well as other bit-depth) images at terabyte scales. Paired with the VisiOmatic (ascl:1408.010) celestial image viewer, the system can comfortably handle gigapixel-size images as well as advanced image features such as 8-, 16- and 32-bit depths, CIELAB colorimetric images and scientific imagery such as multispectral images. Streaming is tile-based, which enables viewing, navigating and zooming around gigapixel-size images in real time. Source images can be in either TIFF or JPEG2000 format. Whole images or regions within images can also be rapidly and dynamically resized and exported by the server from a single source image without the need to store multiple files in various sizes.
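Tile-based streaming of this kind rests on a resolution pyramid: each level halves the previous one until the coarsest level fits in a single tile. A sketch of the arithmetic only (not IIPImage's actual API; function names are ours):

```python
import math

def pyramid_levels(width, height, tile=256):
    """Number of pyramid levels so that the coarsest level fits in one tile."""
    return max(math.ceil(math.log2(max(width, height) / tile)), 0) + 1

def tile_grid(width, height, level, levels, tile=256):
    """(columns, rows) of tiles at a given level, 0 being the coarsest."""
    scale = 2 ** (levels - 1 - level)
    w, h = math.ceil(width / scale), math.ceil(height / scale)
    return math.ceil(w / tile), math.ceil(h / tile)

# A one-gigapixel image needs 9 levels; the viewer only ever fetches
# the few tiles covering the current viewport at the current zoom.
levels = pyramid_levels(40000, 25000)
```

Because a client requests only visible tiles, the bandwidth cost of panning a terabyte-scale image stays bounded by the viewport size, not the image size.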
Real-time free-viewpoint DIBR for large-size 3DLED
NASA Astrophysics Data System (ADS)
Wang, NengWen; Sang, Xinzhu; Guo, Nan; Wang, Kuiru
2017-10-01
Three-dimensional (3D) display technologies have made great progress in recent years, and lenticular-array-based 3D display is a relatively mature technology that is among the most likely to be commercialized. In naked-eye 3D display, the screen size is one of the most important factors affecting the viewing experience. In order to construct a large-size naked-eye 3D display system, an LED display is used. However, pixel misalignment is an inherent defect of the LED screen, which degrades rendering quality. To address this issue, an efficient image synthesis algorithm is proposed. The Texture-Plus-Depth (T+D) format is chosen for the display content, and a modified Depth Image Based Rendering (DIBR) method is proposed to synthesize new views. To achieve real-time performance, the whole algorithm is implemented on the GPU. With state-of-the-art hardware and the efficient algorithm, a naked-eye 3D display system with an LED screen size of 6 m × 1.8 m is achieved. Experiments show that the algorithm can process 43-view 3D video at 4K × 2K resolution in real time on the GPU, and a vivid 3D experience is perceived.
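The heart of DIBR is warping texture pixels horizontally by a disparity derived from the per-pixel depth. A deliberately naive sketch of this step (no hole filling or occlusion ordering, which a production system such as the one described must add; the depth normalization convention here is an assumption):

```python
import numpy as np

def dibr_shift(texture, depth, baseline=4.0):
    """Warp a texture into a virtual view: each pixel moves left by a
    disparity proportional to its depth value (assumed normalized to 0..1).
    Disoccluded pixels are left at 0; real pipelines inpaint them."""
    h, w = texture.shape
    out = np.zeros_like(texture)
    disparity = np.round(baseline * depth).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = texture[y, x]
    return out

tex = np.arange(1.0, 6.0).reshape(1, 5)   # a 1 x 5 toy texture row
same = dibr_shift(tex, np.zeros((1, 5)))  # zero depth: view unchanged
shifted = dibr_shift(tex, np.ones((1, 5)), baseline=1.0)
```

A multiview system repeats this warp once per output view with a different baseline, which is why a GPU implementation pays off for 43 views at 4K × 2K.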
Magnostics: Image-Based Search of Interesting Matrix Views for Guided Network Exploration.
Behrisch, Michael; Bach, Benjamin; Hund, Michael; Delz, Michael; Von Ruden, Laura; Fekete, Jean-Daniel; Schreck, Tobias
2017-01-01
In this work we address the problem of retrieving potentially interesting matrix views to support the exploration of networks. We introduce Matrix Diagnostics (or Magnostics), following in spirit related approaches for rating and ranking other visualization techniques, such as Scagnostics for scatter plots. Our approach ranks matrix views according to the appearance of specific visual patterns, such as blocks and lines, indicating the existence of topological motifs in the data, such as clusters, bi-graphs, or central nodes. Magnostics can be used to analyze, query, or search for visually similar matrices in large collections, or to assess the quality of matrix reordering algorithms. While many feature descriptors for image analysis exist, there is little evidence of how they perform for detecting patterns in matrices. In order to make an informed choice of feature descriptors for matrix diagnostics, we evaluate 30 feature descriptors (27 existing ones and three new descriptors that we designed specifically for Magnostics) with respect to four criteria: pattern response, pattern variability, pattern sensibility, and pattern discrimination. We conclude with an informed set of six descriptors as most appropriate for Magnostics and demonstrate their application in two scenarios: exploring a large collection of matrices and analyzing temporal networks.
2015-07-06
New Horizons' Long Range Reconnaissance Imager (LORRI) obtained these three images of Pluto between July 1 and 3, 2015, as the spacecraft closed in on its July 14 encounter with the dwarf planet and its moons. The left image shows, on the right side of the disk, a large bright area on the hemisphere opposite Charon; this is the side of Pluto that will be seen in close-up by New Horizons on July 14. The three images together show the full extent of a continuous swath of dark terrain that wraps around Pluto's equatorial region between longitudes 40° and 160°. The western end of the swath, west of longitude 40°, breaks up into a series of striking, regularly spaced dark spots on the anti-Charon hemisphere (right image) that were first noted in New Horizons images taken on Pluto's previous rotation. Intriguing details are beginning to emerge in the bright material north of the dark region, in particular a series of bright and dark patches that are conspicuous just below the center of the disk in the right-hand image. In all three black-and-white views, the apparent jagged bottom edge of Pluto is the result of image processing. http://photojournal.jpl.nasa.gov/catalog/PIA19698
Rajpoot, Kashif; Grau, Vicente; Noble, J Alison; Becher, Harald; Szmigielski, Cezary
2011-08-01
Real-time 3D echocardiography (RT3DE) promises a more objective and complete cardiac functional analysis by dynamic 3D image acquisition. Despite several efforts towards automation of left ventricle (LV) segmentation and tracking, these remain challenging research problems due to the poor-quality nature of acquired images usually containing missing anatomical information, speckle noise, and limited field-of-view (FOV). Recently, multi-view fusion 3D echocardiography has been introduced as acquiring multiple conventional single-view RT3DE images with small probe movements and fusing them together after alignment. This concept of multi-view fusion helps to improve image quality and anatomical information and extends the FOV. We now take this work further by comparing single-view and multi-view fused images in a systematic study. In order to better illustrate the differences, this work evaluates image quality and information content of single-view and multi-view fused images using image-driven LV endocardial segmentation and tracking. The image-driven methods were utilized to fully exploit image quality and anatomical information present in the image, thus purposely not including any high-level constraints like prior shape or motion knowledge in the analysis approaches. Experiments show that multi-view fused images are better suited for LV segmentation and tracking, while relatively more failures and errors were observed on single-view images. Copyright © 2011 Elsevier B.V. All rights reserved.
A New Graduation Algorithm for Color Balance of Remote Sensing Images
NASA Astrophysics Data System (ADS)
Zhou, G.; Liu, X.; Yue, T.; Wang, Q.; Sha, H.; Huang, S.; Pan, Q.
2018-05-01
In order to expand the field of view and obtain more data and information in remote sensing research, workers often need to mosaic images together. However, the mosaicked image often shows large color differences and a visible gap line. Based on a graduation algorithm using trigonometric functions, this paper proposes a new algorithm of Two Quarter-round Curves (TQC). A Gaussian filter is used to address image color noise and the gap line. The experiments used Greenland data compiled in 1963 from the Declassified Intelligence Photography Project (DISP), acquired by the ARGON KH-5 satellite, and Landsat imagery of the North Gulf, China. The experimental results show that the proposed method improves the results in two respects: on the one hand, remote sensing images with large color differences become more balanced; on the other hand, the remote sensing image achieves a smoother transition.
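The abstract does not spell out the TQC formula, but the family of trigonometric graduation it builds on can be illustrated by a cosine-weighted crossfade across the overlap between two image strips (an illustrative stand-in, not the authors' algorithm):

```python
import numpy as np

def cosine_blend(left, right):
    """Blend two equally sized overlap strips with cosine weights so the
    transition runs smoothly from the left image to the right image,
    suppressing the hard gap line a plain cut would leave."""
    assert left.shape == right.shape
    w = left.shape[1]
    t = np.linspace(0.0, 1.0, w)
    weight = 0.5 * (1.0 + np.cos(np.pi * t))  # falls smoothly from 1 to 0
    return weight * left + (1.0 - weight) * right

left = np.full((1, 5), 10.0)    # toy overlap strips with a large
right = np.full((1, 5), 20.0)   # color difference between them
blended = cosine_blend(left, right)
```

At the left edge the output equals the left strip, at the right edge the right strip, and halfway it averages the two, which is the kind of smooth transition the color-balance step is after.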
MISR Sees the Sierra Nevadas in Stereo
NASA Technical Reports Server (NTRS)
2000-01-01
These MISR images of the Sierra Nevada mountains near the California-Nevada border were acquired on August 12, 2000 during Terra orbit 3472. On the left is an image from the vertical-viewing (nadir) camera. On the right is a stereo 'anaglyph' created using the nadir and 45.6-degree forward-viewing cameras, providing a three-dimensional view of the scene when viewed with red/blue glasses. The red filter should be placed over your left eye. To facilitate the stereo viewing, the images have been oriented with north toward the left. Some prominent features are Mono Lake, in the center of the images; Walker Lake, to its left; and Lake Tahoe, near the lower left. This view of the Sierra Nevadas includes Yosemite, Kings Canyon, and Sequoia National Parks. Mount Whitney, the highest peak in the contiguous 48 states (elev. 14,495 feet), is visible near the righthand edge. Above it (to the east), the Owens Valley shows up prominently between the Sierra Nevada and Inyo ranges. Precipitation falling as rain or snow on the Sierras feeds numerous rivers flowing southwestward into the San Joaquin Valley. The abundant fields of this productive agricultural area can be seen along the lower right; a large number of reservoirs that supply water for crop irrigation are apparent in the western foothills of the Sierras. Urban areas in the valley appear as gray patches; among the California cities that are visible are Fresno, Merced, and Modesto. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Earth and Moon as viewed from Mars
NASA Technical Reports Server (NTRS)
2003-01-01
MGS MOC Release No. MOC2-368, 22 May 2003
Globe diagram illustrates the Earth's orientation as viewed from Mars (North and South America were in view). Earth/Moon: This is the first image of Earth ever taken from another planet that actually shows our home as a planetary disk. Because Earth and the Moon are closer to the Sun than Mars, they exhibit phases, just as the Moon, Venus, and Mercury do when viewed from Earth. As seen from Mars by MGS on 8 May 2003 at 13:00 GMT (6:00 AM PDT), Earth and the Moon appeared in the evening sky. The MOC Earth/Moon image has been specially processed to allow both Earth (with an apparent magnitude of -2.5) and the much darker Moon (with an apparent magnitude of +0.9) to be visible together. The bright area at the top of the image of Earth is cloud cover over central and eastern North America. Below that, a darker area includes Central America and the Gulf of Mexico. The bright feature near the center-right of the crescent Earth consists of clouds over northern South America. The image also shows the Earth-facing hemisphere of the Moon, since the Moon was on the far side of Earth as viewed from Mars. The slightly lighter tone of the lower portion of the image of the Moon results from the large and conspicuous ray system associated with the crater Tycho. A note about the coloring process: The MGS MOC high resolution camera only takes grayscale (black-and-white) images. To 'colorize' the image, a Mariner 10 Earth/Moon image taken in 1973 was used to color the MOC Earth and Moon picture. The procedure used was as follows: the Mariner 10 image was converted from 24-bit color to 8-bit color using a JPEG to GIF conversion program. The 8-bit color image was converted to 8-bit grayscale, with an associated lookup table mapping each gray value of the image to a red-green-blue (RGB) color triplet. Each color triplet was root-sum-squared (RSS), and sorted in increasing RSS value.
These sorted lists were brightness-to-color maps for the images. Each brightness-to-color map was then used to convert the 8-bit grayscale MOC image to an 8-bit color image. This 8-bit color image was then converted to a 24-bit color image. The color image was edited to return the background to black.
Characteristics of composite images in multiview imaging and integral photography.
Lee, Beom-Ryeol; Hwang, Jae-Jeong; Son, Jung-Young
2012-07-20
The compositions of images projected to a viewer's eyes from the various viewing regions of the viewing zone formed in one-dimensional integral photography (IP) and multiview imaging (MV) are identified. These compositions indicate that the projected images are made up of pieces from different view images. Comparisons of the composite images with images composited at various regions of the imaging space formed by camera arrays for multiview image acquisition reveal that the composite images do not involve any scene folding in the central viewing zone for either MV or IP. In the IP case, however, compositions from neighboring viewing regions aligned in the horizontal direction have reversed disparities, whereas in the viewing regions between the central and side viewing zones no reversed disparities are expected; MV, by contrast, does exhibit them.
Mudanyali, Onur; Erlinger, Anthony; Seo, Sungkyu; Su, Ting-Wei; Tseng, Derek; Ozcan, Aydogan
2009-12-14
Conventional optical microscopes image cells by use of objective lenses that work together with other lenses and optical components. While quite effective, this classical approach has certain limitations for miniaturization of the imaging platform to make it compatible with the advanced state of the art in microfluidics. In this report, we introduce experimental details of a lensless on-chip imaging concept termed LUCAS (Lensless Ultra-wide field-of-view Cell monitoring Array platform based on Shadow imaging) that does not require any microscope objectives or other bulky optical components to image a heterogeneous cell solution over an ultra-wide field of view that can span as large as approximately 18 cm(2). Moreover, unlike conventional microscopes, LUCAS can image a heterogeneous cell solution of interest over a depth of field of approximately 5 mm without the need for refocusing, which corresponds to a sample volume of up to approximately 9 mL. This imaging platform records the shadows (i.e., lensless digital holograms) of each cell of interest within its field of view, and automated digital processing of these cell shadows can determine the type, the count and the relative positions of cells within the solution. Because it does not require any bulky optical components or mechanical scanning stages, it offers a significantly miniaturized platform that at the same time reduces cost, which is especially important for point-of-care diagnostic tools. Furthermore, the imaging throughput of this platform is orders of magnitude better than conventional optical microscopes, which could be exceedingly valuable for high-throughput cell-biology experiments.
NASA Astrophysics Data System (ADS)
Liu, Ya-Cheng; Chung, Chien-Kai; Lai, Jyun-Yi; Chang, Han-Chao; Hsu, Feng-Yi
2013-06-01
Upper gastrointestinal endoscopies are primarily performed to observe pathologies of the esophagus, stomach, and duodenum. However, when an endoscope is pushed into the esophagus or stomach by the physician, the organs behave like a balloon being gradually inflated. Consequently, their shapes and the depth of field of the images change continually, preventing thorough examination of the inflammation or anabrosis position, which delays the curing period. In this study, a 2.9-mm image-capturing module and a convoluted mechanism were incorporated into a tube like that of a standard 10-mm upper gastrointestinal endoscope. The scale-invariant feature transform (SIFT) algorithm was adopted to implement disease feature extraction on a koala doll. Following feature extraction, the smoothly varying affine stitching (SVAS) method was employed to resolve stitching distortion problems. Subsequently, the real-time splicing software developed in this study was embedded in an upper gastrointestinal endoscope to obtain a panoramic view of stomach inflammation in the captured images. The results showed that the 2.9-mm image-capturing module can provide approximately 50 verified images in one spin cycle, a viewing angle of 120° can be attained, and less than 10% distortion can be achieved in each image. Therefore, these methods can solve the problems encountered when using a standard 10-mm upper gastrointestinal endoscope with a single camera, such as image distortion and partial inflammation displays. The results also showed that the SIFT algorithm provides the highest correct matching rate, and the SVAS method can be employed to resolve the parallax problems caused by stitching together images of different flat surfaces.
Can laptops be left inside passenger bags if motion imaging is used in X-ray security screening?
Mendes, Marcia; Schwaninger, Adrian; Michel, Stefan
2013-01-01
This paper describes a study where a new X-ray machine for security screening featuring motion imaging (i.e., 5 views of a bag are shown as an image sequence) was evaluated and compared to single view imaging available on conventional X-ray screening systems. More specifically, it was investigated whether with this new technology X-ray screening of passenger bags could be enhanced to such an extent that laptops could be left inside passenger bags, without causing a significant impairment in threat detection performance. An X-ray image interpretation test was created in four different versions, manipulating the factors packing condition (laptop and bag separate vs. laptop in bag) and display condition (single vs. motion imaging). There was a highly significant and large main effect of packing condition. When laptops and bags were screened separately, threat item detection was substantially higher. For display condition, a medium effect was observed. Detection could be slightly enhanced through the application of motion imaging. There was no interaction between display and packing condition, implying that the high negative effect of leaving laptops in passenger bags could not be fully compensated by motion imaging. Additional analyses were carried out to examine effects depending on different threat categories (guns, improvised explosive devices, knives, others), the placement of the threat items (in bag vs. in laptop) and viewpoint (easy vs. difficult view). In summary, although motion imaging provides an enhancement, it is not strong enough to allow leaving laptops in bags for security screening.
Can laptops be left inside passenger bags if motion imaging is used in X-ray security screening?
Mendes, Marcia; Schwaninger, Adrian; Michel, Stefan
2013-01-01
This paper describes a study in which a new X-ray machine for security screening featuring motion imaging (i.e., 5 views of a bag shown as an image sequence) was evaluated and compared to the single-view imaging available on conventional X-ray screening systems. More specifically, it was investigated whether, with this new technology, X-ray screening of passenger bags could be enhanced to such an extent that laptops could be left inside passenger bags without causing a significant impairment of threat detection performance. An X-ray image interpretation test was created in four different versions, manipulating the factors packing condition (laptop and bag separate vs. laptop in bag) and display condition (single vs. motion imaging). There was a highly significant and large main effect of packing condition: when laptops and bags were screened separately, threat item detection was substantially higher. For display condition, a medium effect was observed; detection could be slightly enhanced through the application of motion imaging. There was no interaction between display and packing condition, implying that the large negative effect of leaving laptops in passenger bags could not be fully compensated by motion imaging. Additional analyses were carried out to examine effects depending on threat category (guns, improvised explosive devices, knives, others), the placement of the threat items (in bag vs. in laptop) and viewpoint (easy vs. difficult view). In summary, although motion imaging provides an enhancement, it is not strong enough to allow leaving laptops in bags for security screening. PMID:24151457
Variation of MODIS reflectance and vegetation indices with viewing geometry and soybean development.
Breunig, Fábio M; Galvão, Lênio S; Formaggio, Antônio R; Epiphanio, José C N
2012-06-01
Directional effects introduce variability into reflectance and vegetation-index determination, especially when large field-of-view sensors are used (e.g., the Moderate Resolution Imaging Spectroradiometer - MODIS). In this study, we evaluated directional effects on MODIS reflectance and four vegetation indices (Normalized Difference Vegetation Index - NDVI; Enhanced Vegetation Index - EVI; Normalized Difference Water Index - NDWI(1640) and NDWI(2120)) over the course of soybean development in two growing seasons (2004-2005 and 2005-2006). To keep the reproductive stage for a given cultivar constant while varying viewing geometry, pairs of images obtained on close dates and at opposite view angles were analyzed. By using non-parametric statistics with bootstrapping and by normalizing these indices for angular differences among viewing directions, their sensitivities to directional effects were studied. Results showed that the variation in MODIS reflectance between consecutive phenological stages was generally smaller than that resulting from viewing geometry for closed canopies. The opposite was observed for incomplete canopies. The reflectance of the first seven MODIS bands was higher in the backscattering direction. Except for the EVI, the other vegetation indices had larger values in the forward-scattering direction. Directional effects decreased with canopy closure. The NDVI was less affected by directional effects than the other indices, presenting the smallest differences between viewing directions for fixed phenological stages.
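For reference, the four indices compared above are computed from surface reflectances with their standard published formulas; the numeric reflectance values below are hypothetical, chosen only to illustrate a dense green canopy:

```python
def ndvi(nir, red):
    # Normalized Difference Vegetation Index
    return (nir - red) / (nir + red)

def evi(nir, red, blue):
    # Enhanced Vegetation Index with the standard MODIS coefficients
    # (gain G = 2.5, aerosol terms C1 = 6, C2 = 7.5, canopy background L = 1).
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def ndwi(nir, swir):
    # Normalized Difference Water Index; pass the 1640 nm or 2120 nm
    # SWIR band to obtain NDWI(1640) or NDWI(2120), respectively.
    return (nir - swir) / (nir + swir)

# Hypothetical reflectances for a closed soybean canopy (illustrative only).
nir, red, blue, swir1640 = 0.45, 0.05, 0.03, 0.15
print(round(ndvi(nir, red), 2), round(ndwi(nir, swir1640), 2))  # 0.8 0.5
```

The same functions apply elementwise to NumPy arrays of per-pixel reflectances, which is how a whole MODIS scene would be processed.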
Jia, Jia; Chen, Jhensi; Yao, Jun; Chu, Daping
2017-01-01
A high quality 3D display requires a high amount of optical information throughput, which needs an appropriate mechanism to distribute information in space uniformly and efficiently. This study proposes a front-viewing system which is capable of managing the required amount of information efficiently from a high bandwidth source and projecting 3D images with a decent size and a large viewing angle at video rate in full colour. It employs variable gratings to support a high bandwidth distribution. This concept is scalable and the system can be made compact in size. A horizontal parallax only (HPO) proof-of-concept system is demonstrated by projecting holographic images from a digital micro mirror device (DMD) through rotational tiled gratings before they are realised on a vertical diffuser for front-viewing. PMID:28304371
Multibeam interferometric illumination as the primary source of resolution in optical microscopy
NASA Astrophysics Data System (ADS)
Ryu, J.; Hong, S. S.; Horn, B. K. P.; Freeman, D. M.; Mermelstein, M. S.
2006-04-01
High-resolution images of a fluorescent target were obtained using a low-resolution optical detector by illuminating the target with interference patterns produced with 31 coherent beams. The beams were arranged in a cone with 78° half angle to produce illumination patterns consistent with a numerical aperture of 0.98. High-resolution images were constructed from low-resolution images taken with 930 different illumination patterns. Results for optical detectors with numerical apertures of 0.1 and 0.2 were similar, demonstrating that the resolution is primarily determined by the illuminator and not by the low-resolution detector. Furthermore, the long working distance, large depth of field, and large field of view of the low-resolution detector are preserved.
Large-field-of-view wide-spectrum artificial reflecting superposition compound eyes
NASA Astrophysics Data System (ADS)
Huang, Chi-Chieh
The study of the imaging principles of natural compound eyes has become an active area of research and has fueled the advancement of modern optics with many attractive design features beyond those available with conventional technologies. Most prominent among all compound eyes are the reflecting superposition compound eyes (RSCEs) found in some decapods. They are extraordinary imaging systems with numerous optical features such as minimal chromatic aberration, a wide-angle field of view (FOV), high sensitivity to light, and superb acuity to motion. Inspired by this remarkable visual system, we implemented its unique lens-free, reflection-based imaging mechanism in a miniaturized, large-FOV optical imaging device operating over the wide visible spectrum to minimize chromatic aberration without any additional post-image processing. First, two micro-transfer printing methods, a multiple and a shear-assisted transfer printing technique, were studied and discussed to realize life-sized artificial RSCEs. The processes exploited the differential adhesive tendencies of the microstructures formed between a donor and a transfer substrate to accomplish an efficient release and transfer process. These techniques enabled conformal wrapping of three-dimensional (3-D) microstructures, initially fabricated in two-dimensional (2-D) layouts with standard fabrication technology, onto a wide range of surfaces with complex and curvilinear shapes. The final part of this dissertation focused on implementing the key operational features of the natural RSCEs in large-FOV, wide-spectrum artificial RSCEs as an optical imaging device suitable for the wide visible spectrum. Our devices can form real, clear images based on reflection rather than refraction, hence avoiding the chromatic aberration caused by dispersion in optical materials.
Compared to the performance of conventional refractive lenses of comparable size, our devices demonstrated minimal chromatic aberration, an exceptional FOV of up to 165° without distortion, modest spherical aberration, and comparable imaging quality without any post-image processing. Together with an augmenting cruciform pattern surrounding each focused image, our devices possessed enhanced, dynamic motion-tracking capability ideal for diverse applications in military, security, search and rescue, night navigation, medical imaging, and astronomy. In the future, owing to its reflection-based operating principles, this approach can be extended into the mid- and far-infrared for more demanding applications.
Technical Summary of the Half-Degree Imager (HDI)
NASA Astrophysics Data System (ADS)
Richmond, Michael W.
2017-01-01
The Half-Degree Imager (HDI) was first attached to the WIYN 0.9-m Telescope in October, 2013. In the three years since then, it has served a large community of astronomers throughout the WIYN 0.9-m consortium. The large field of view and relatively short readout time, combined with a large selection of broad-band and narrow-band filters, make HDI a powerful tool for large-area surveys. I will provide a summary of the technical features of this CCD camera and its operations, and present statistics on its use -- showing the fraction of time lost due to bad weather and technical problems. I will reserve time to answer questions from the audience, including those who may be interested in using HDI for their own projects.
High Resolution Globe of Jupiter
2001-01-30
This true-color simulated view of Jupiter is composed of 4 images taken by NASA's Cassini spacecraft on December 7, 2000. To illustrate what Jupiter would have looked like if the cameras had a field-of-view large enough to capture the entire planet, the cylindrical map was projected onto a globe. The resolution is about 144 kilometers (89 miles) per pixel. Jupiter's moon Europa is casting the shadow on the planet. https://photojournal.jpl.nasa.gov/catalog/PIA02873
Design of retinal-projection-based near-eye display with contact lens.
Wu, Yuhang; Chen, Chao Ping; Mi, Lantian; Zhang, Wenbo; Zhao, Jingxin; Lu, Yifan; Guo, Weiqian; Yu, Bing; Li, Yang; Maitlo, Nizamuddin
2018-04-30
We propose a design of a retinal-projection-based near-eye display for achieving ultra-large field of view, vision correction, and occlusion. Our solution is highlighted by a contact lens combo, a transparent organic light-emitting diode panel, and a twisted nematic liquid crystal panel. Its design rules are set forth in detail, followed by the results and discussion regarding the field of view, angular resolution, modulation transfer function, contrast ratio, distortion, and simulated imaging.
2016-10-31
Saturn appears as a serene globe amid tranquil rings in this view from NASA's Cassini spacecraft. In reality, the planet's atmosphere is an ever-changing scene of high-speed winds and evolving weather patterns, punctuated by occasional large storms (see PIA14901). The rings consist of countless icy particles, which are continually colliding. Such collisions play a key role in the rings' numerous waves and wakes, which are the manifestation of the subtle influence of Saturn's moons and, indeed, the planet itself. The long duration of the Cassini mission has allowed scientists to study how the atmosphere and rings of Saturn change over time, providing much-needed insights into this active planetary system. The view looks toward the sunlit side of the rings from about 41 degrees above the ring plane. The image was taken with the Cassini spacecraft wide-angle camera on July 16, 2016 using a spectral filter which preferentially admits wavelengths of near-infrared light centered at 752 nanometers. The view was obtained at a distance of approximately 752,000 miles (1.21 million kilometers) from Saturn and at a Sun-Saturn-spacecraft, or phase, angle of 6 degrees. Image scale is 45 miles (72 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20502
RATIO_TOOL - SOFTWARE FOR COMPUTING IMAGE RATIOS
NASA Technical Reports Server (NTRS)
Yates, G. L.
1994-01-01
Geological studies analyze spectral data in order to gain information on surface materials. RATIO_TOOL is an interactive program for viewing and analyzing large multispectral image data sets that have been created by an imaging spectrometer. While the standard approach to classification of multispectral data is to match the spectrum for each input pixel against a library of known mineral spectra, RATIO_TOOL uses ratios of spectral bands in order to spot significant areas of interest within a multispectral image. Each image band can be viewed iteratively, or a selected image band of the data set can be requested and displayed. When the image ratios are computed, the result is displayed as a gray scale image. At this point a histogram option helps in viewing the distribution of values. A thresholding option can then be used to segment the ratio image result into two to four classes. The segmented image is then color coded to indicate threshold classes and displayed alongside the gray scale image. RATIO_TOOL is written in C language for Sun series computers running SunOS 4.0 and later. It requires the XView toolkit and the OpenWindows window manager (version 2.0 or 3.0). The XView toolkit is distributed with Open Windows. A color monitor is also required. The standard distribution medium for RATIO_TOOL is a .25 inch streaming magnetic tape cartridge in UNIX tar format. An electronic copy of the documentation is included on the program media. RATIO_TOOL was developed in 1992 and is a copyrighted work with all copyright vested in NASA. Sun, SunOS, and OpenWindows are trademarks of Sun Microsystems, Inc. UNIX is a registered trademark of AT&T Bell Laboratories.
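The ratio-and-threshold workflow that RATIO_TOOL implements can be sketched in a few lines of modern NumPy. This is a hypothetical re-creation for illustration only; the original is an interactive C/XView program, and the band values below are invented:

```python
import numpy as np

def ratio_image(band_a, band_b, eps=1e-6):
    """Ratio of two spectral bands, rescaled to an 8-bit grayscale image."""
    r = band_a / (band_b + eps)                     # per-pixel band ratio
    r = (r - r.min()) / (r.max() - r.min() + eps)   # normalize to [0, 1]
    return (255 * r).astype(np.uint8)

def segment(gray, thresholds):
    """Segment a grayscale ratio image into len(thresholds)+1 classes."""
    return np.digitize(gray, sorted(thresholds))

# Toy 2x2 "bands" standing in for imaging-spectrometer data.
band_a = np.array([[10.0, 20.0], [40.0, 80.0]])
band_b = np.array([[10.0, 10.0], [10.0, 10.0]])
gray = ratio_image(band_a, band_b)
classes = segment(gray, thresholds=[64, 128, 192])  # up to four classes, 0..3
print(classes)
```

As in RATIO_TOOL, the grayscale ratio image would be inspected with a histogram before choosing the two-to-four class thresholds, and the class labels would then be color coded for display.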
The plant virus microscope image registration method based on mismatches removing.
Wei, Lifang; Zhou, Shucheng; Dong, Heng; Mao, Qianzhuo; Lin, Jiaxiang; Chen, Riqing
2016-01-01
Electron microscopy is one of the major means of observing viruses. The view of virus microscope images is limited by specimen preparation and by the size of the camera's field of view. To address this problem, the virus sample is prepared as multiple slices for information fusion, and image registration techniques are applied to obtain large-field, whole-section images. Image registration techniques have been developed over the past decades to increase the camera's effective field of view. Nevertheless, these approaches typically work in batch mode and rely on motorized microscopes. Alternatively, some methods are conceived only to provide visually pleasant registration for image sequences with high overlap ratios. This work presents a method for virus microscope image registration with detailed visual information and subpixel accuracy, even when the overlap ratio of the image sequence is 10% or less. The proposed method focuses on the correspondence set and the inter-image transformation. A mismatch-removal strategy based on spatial consistency and keypoint components is proposed to enrich the correspondence set, and the translation-model parameters as well as tonal inhomogeneities are corrected by hierarchical estimation and model selection. In the experiments performed, we tested different registration approaches and virus images, confirming that the translation model is not always stationary, despite the fact that the images of the sample come from the same sequence. The mismatch-removal strategy makes registration of virus microscope images at subpixel accuracy easier, and the hierarchical estimation and model-selection strategies make the proposed method precise and reliable for low-overlap-ratio image sequences. Copyright © 2015 Elsevier Ltd. All rights reserved.
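The core idea of estimating a translation between overlapping slices while discarding mismatched keypoints can be caricatured with a robust consensus step. This is a hypothetical stand-in for the paper's spatial-consistency strategy, not its actual algorithm; the point coordinates and tolerance are invented:

```python
import numpy as np

def estimate_translation(src_pts, dst_pts, tol=2.0):
    """Estimate a 2-D translation from matched keypoints, flagging mismatches.

    A median displacement serves as a robust initial estimate; matches whose
    displacement deviates from it by more than `tol` pixels are rejected, and
    the translation is refined on the remaining consistent matches.
    """
    d = dst_pts - src_pts                        # per-match displacement
    med = np.median(d, axis=0)                   # robust initial estimate
    inliers = np.linalg.norm(d - med, axis=1) < tol
    return d[inliers].mean(axis=0), inliers

src = np.array([[0.0, 0.0], [10.0, 5.0], [3.0, 7.0], [8.0, 2.0]])
true_t = np.array([12.5, -4.0])
dst = src + true_t
dst[2] += np.array([30.0, 30.0])                 # inject one gross mismatch
t, inliers = estimate_translation(src, dst)
print(t, inliers)
```

A production registration pipeline would of course add subpixel refinement and tonal correction on top of this geometric step, as the abstract describes.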
Large-area super-resolution optical imaging by using core-shell microfibers
NASA Astrophysics Data System (ADS)
Liu, Cheng-Yang; Lo, Wei-Chieh
2017-09-01
We report, numerically and experimentally, large-area super-resolution optical imaging achieved using core-shell microfibers. The particular spatial electromagnetic waves for different core-shell microfibers are studied using finite-difference time-domain and ray-tracing calculations. The focusing properties of photonic nanojets are evaluated in terms of intensity profile and full width at half-maximum along the propagation and transverse directions. In the experiment, a general optical fiber is chemically etched down to a 6 μm diameter and coated with different metallic thin films by glancing-angle deposition. Direct imaging of the photonic nanojets for different core-shell microfibers is performed with a scanning optical microscope system. We show that the intensity distribution of a photonic nanojet depends strongly on the metallic shell due to surface plasmon polaritons. Furthermore, large-area super-resolution optical imaging is performed using different core-shell microfibers placed over a nano-scale grating with 150 nm line width. The core-shell microfiber-assisted imaging achieves super-resolution with a field of view hundreds of times larger than that of microspheres. Possible applications of these core-shell optical microfibers include real-time large-area micro-fluidics and nano-structure inspection.
The effects of video compression on acceptability of images for monitoring life sciences experiments
NASA Astrophysics Data System (ADS)
Haines, Richard F.; Chuang, Sherry L.
1992-07-01
Future manned space operations for Space Station Freedom will call for a variety of carefully planned multimedia digital communications, including full-frame-rate color video, to support remote operations of scientific experiments. This paper presents the results of an investigation to determine if video compression is a viable solution to transmission bandwidth constraints. It reports on the impact of different levels of compression and associated calculational parameters on image acceptability to investigators in life-sciences research at ARC. Three nonhuman life-sciences disciplines (plant, rodent, and primate biology) were selected for this study. A total of 33 subjects viewed experimental scenes in their own scientific disciplines. Ten plant scientists viewed still images of wheat stalks at various stages of growth. Each image was compressed to four different compression levels using the Joint Photographic Expert Group (JPEG) standard algorithm, and the images were presented in random order. Twelve and eleven staffmembers viewed 30-sec videotaped segments showing small rodents and a small primate, respectively. Each segment was repeated at four different compression levels in random order using an inverse cosine transform (ICT) algorithm. Each viewer made a series of subjective image-quality ratings. There was a significant difference in image ratings according to the type of scene viewed within disciplines; thus, ratings were scene dependent. Image (still and motion) acceptability does, in fact, vary according to compression level. The JPEG still-image-compression levels, even with the large range of 5:1 to 120:1 in this study, yielded equally high levels of acceptability. In contrast, the ICT algorithm for motion compression yielded a sharp decline in acceptability below 768 kb/sec. 
Therefore, if video compression is to be used as a solution for overcoming transmission bandwidth constraints, the effective management of the ratio and compression parameters according to scientific discipline and experiment type is critical to the success of remote experiments.
Hobi, Martina L.; Ginzler, Christian
2012-01-01
Digital surface models (DSMs) are widely used in forest science to model the forest canopy. Stereo pairs of very high resolution satellite and digital aerial images are relatively new, and their absolute accuracy for DSM generation is largely unknown. To assess these input data, two DSMs based on a WorldView-2 stereo pair and an ADS80 DSM were generated with photogrammetric instruments. Rational polynomial coefficients (RPCs) define the orientation of the WorldView-2 satellite images, which can be enhanced with ground control points (GCPs). Thus two WorldView-2 DSMs were distinguished: a WorldView-2 RPCs-only DSM and a WorldView-2 GCP-enhanced RPCs DSM. The accuracy of the three DSMs was estimated with GPS measurements, manual stereo-measurements, and airborne laser scanning (ALS) data. With GCP-enhanced RPCs, the WorldView-2 image orientation could be optimised to a root mean square error (RMSE) of 0.56 m in planimetry and 0.32 m in height. This improvement in orientation allowed for a vertical median error of −0.24 m for the WorldView-2 GCP-enhanced RPCs DSM in flat terrain. Overall, the DSM based on ADS80 images showed the highest accuracy of the three models, with a median error of 0.08 m over bare ground. As the accuracy of a DSM varies with land cover, three classes were distinguished: herb and grass, forests, and artificial areas. The study suggested that the ADS80 DSM best models actual surface height in all three land cover classes, with median errors <1.1 m. The WorldView-2 GCP-enhanced RPCs model achieved good accuracy, too, with median errors of −0.43 m for herb and grass vegetation and −0.26 m for artificial areas. Forested areas emerged as the most difficult land cover type for height modelling; still, with median errors of −1.85 m for the WorldView-2 GCP-enhanced RPCs model and −1.12 m for the ADS80 model, the input data sets evaluated here are quite promising for forest canopy modelling. PMID:22778645
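The two accuracy measures quoted above (signed median error and RMSE) are easy to state precisely; the sketch below recomputes them on a handful of hypothetical spot heights (the numbers are invented, not taken from the study):

```python
import numpy as np

def dsm_errors(dsm_height, reference_height):
    """Vertical error statistics of a DSM against reference heights.

    The median error keeps its sign, so it captures systematic bias;
    the RMSE summarizes the overall spread of the errors.
    """
    err = dsm_height - reference_height
    return float(np.median(err)), float(np.sqrt(np.mean(err ** 2)))

# Hypothetical spot heights (metres): DSM vs. GPS/ALS reference.
dsm = np.array([502.1, 498.7, 510.3, 505.0])
ref = np.array([502.4, 499.0, 510.8, 505.1])
median_err, rmse = dsm_errors(dsm, ref)
print(round(median_err, 2), round(rmse, 2))  # -0.3 0.33
```

In the study itself, such statistics would be computed per land-cover class (herb and grass, forest, artificial) rather than over the whole scene at once.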
Deep neural network features for horses identity recognition using multiview horses' face pattern
NASA Astrophysics Data System (ADS)
Jarraya, Islem; Ouarda, Wael; Alimi, Adel M.
2017-03-01
To monitor the state of horses in the barn, breeders need a surveillance-camera system that can identify and distinguish between horses. We proposed in [5] a method for horse identification at a distance using the frontal facial biometric modality. Face recognition becomes more difficult as the viewpoint changes. In this paper, the number of images in our THoDBRL'2015 database (Tunisian Horses DataBase of Regim Lab) is augmented by adding images from other views: front, right-profile, and left-profile views of the face. Moreover, we propose an approach for multiview face recognition. First, we use the Gabor filter for face characterization. Next, because of the increased number of images and the large number of Gabor features, we use a deep neural network with an auto-encoder to extract the most pertinent features and to reduce the size of the feature vector. Finally, we evaluate the proposed approach on our THoDBRL'2015 database, using a linear SVM for classification.
Large Aperture Camera for the Simons Observatory
NASA Astrophysics Data System (ADS)
Dicker, Simon; Simons Observatory Collaboration
2018-01-01
The Simons Observatory will consist of one large 6 m telescope and three or more smaller telescopes working together, with the goal of measuring the polarization of the Cosmic Microwave Background on angular scales from as small as 1' to larger than 1 degree, at a sensitivity far greater than has ever been reached before. To reach the sensitivities needed for our science goals, we require over 90,000 background-limited TES detectors on the large telescope - hence a very large field of view. The telescope design we have selected is a copy of the CCAT-prime telescope, a crossed Dragone with extra aspheric terms to increase the diffraction-limited field of view. At the secondary focus will be a 2.5 m diameter cryostat containing re-imaging silicon optics which can correct remaining aberrations (mostly astigmatism) at the edge of the field of view and allow this part of the focal plane to be used at higher frequencies. This poster will contain an outline of our optical designs and take a brief look at how they could be scaled to a larger telescope.
Imaging of Stellar Surfaces Using Radio Facilities Including ALMA
NASA Astrophysics Data System (ADS)
O'Gorman, Eamon
2018-04-01
Until very recently, studies focusing on imaging stars at continuum radio wavelengths (here defined as submillimeter, millimeter, and centimeter wavelengths) have been scarce. These studies have mainly been carried out with the Very Large Array on a handful of evolved stars (i.e., Asymptotic Giant Branch and Red Supergiant stars), whose stellar disks have only just been spatially resolved. Some of these results, however, have challenged our historical views on the nature of evolved-star atmospheres. Now, the very long baselines of the Atacama Large Millimeter/submillimeter Array and the newly upgraded Karl G. Jansky Very Large Array provide a new opportunity to image these atmospheres at unprecedented spatial resolution and sensitivity across a much wider portion of the radio spectrum. In this talk I will first provide a history of stellar radio imaging and then discuss some recent exciting ALMA results. Finally I will present some brand new multi-wavelength ALMA and VLA results for the famous red supergiant Antares.
Gaussian vs. Bessel light-sheets: performance analysis in live large sample imaging
NASA Astrophysics Data System (ADS)
Reidt, Sascha L.; Correia, Ricardo B. C.; Donnachie, Mark; Weijer, Cornelis J.; MacDonald, Michael P.
2017-08-01
Lightsheet fluorescence microscopy (LSFM) has rapidly progressed in the past decade from an emerging technology into an established methodology. This progress has largely been driven by its suitability to developmental biology, where it is able to give excellent spatial-temporal resolution over relatively large fields of view with good contrast and low phototoxicity. In many respects it is superseding confocal microscopy. However, it is no magic bullet and still struggles to image deeply in more highly scattering samples. Many solutions to this challenge have been presented, including, Airy and Bessel illumination, 2-photon operation and deconvolution techniques. In this work, we show a comparison between a simple but effective Gaussian beam illumination and Bessel illumination for imaging in chicken embryos. Whilst Bessel illumination is shown to be of benefit when a greater depth of field is required, it is not possible to see any benefits for imaging into the highly scattering tissue of the chick embryo.
Open source database of images DEIMOS: extension for large-scale subjective image quality assessment
NASA Astrophysics Data System (ADS)
Vítek, Stanislav
2014-09-01
DEIMOS (Database of Images: Open Source) is an open-source database of images and video sequences for testing, verification and comparison of various image and/or video processing techniques such as compression, reconstruction and enhancement. This paper deals with an extension of the database that allows large-scale, web-based subjective image quality assessment. The extension implements both an administrative and a client interface. The proposed system is aimed mainly at mobile communication devices and takes advantage of HTML5 technology; this means that participants don't need to install any application, and assessment can be performed using a web browser. The assessment-campaign administrator can select images from the large database and then apply rules defined by various test-procedure recommendations. The standard test procedures may be fully customized and saved as a template. Alternatively, the administrator can define a custom test using images from the pool and other components, such as evaluation forms and ongoing questionnaires. The image sequence is delivered to the online client, e.g. a smartphone or tablet, as a fully automated assessment sequence, or the viewer can decide on the timing of the assessment if required. Environmental data and viewing conditions (e.g. illumination, vibrations, GPS coordinates, etc.) may be collected and subsequently analyzed.
Perspective with Landsat Overlay: Mojave to Ventura, California
NASA Technical Reports Server (NTRS)
2000-01-01
Southern California's dramatic topography plays a critical role in its climate, hydrology, ecology, agriculture, and habitability. This image of Southern California, from the desert at Mojave to the ocean at Ventura, shows a variety of landscapes and environments. Winds usually bring moisture to this area from the west, moving from the ocean, across the coastal plains, to the mountains, and then to the deserts. Most rainfall occurs as the air masses rise over the mountains and cool with altitude. Continuing east, and now drained of their moisture, the air masses drop in altitude and warm as they spread across the desert. The mountain rainfall supports forest and chaparral vegetation, seen here, and also becomes ground water and stream flow that supports citrus, avocado, strawberry, other crops, and a large and growing population on the coastal plains.
This perspective view was generated by draping a Landsat satellite image over a preliminary topographic map from the Shuttle Radar Topography Mission. It shows the Tehachapi Mountains in the right foreground, the city of Ventura on the coast at the distant left, and the easternmost Santa Ynez Mountains forming the skyline at the distant right. Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data matches the 30-meter resolution of most Landsat images and will substantially help in analyses of the large and growing Landsat image archive. The elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense (DoD), and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington DC. Size: 43 kilometers (27 miles) view width, 166 kilometers (103 miles) view distance Location: 34.8 deg. North lat., 118.8 deg. West lon. Orientation: View toward the southwest, 3X vertical exaggeration Image: Landsat bands 1, 2&4, 3 as blue, green, and red, respectively Date Acquired: February 16, 2000 (SRTM), November 11, 1986 (Landsat) Image: NASA/JPL/NIMA
Large-pitch steerable synthetic transmit aperture imaging (LPSSTA)
NASA Astrophysics Data System (ADS)
Li, Ying; Kolios, Michael C.; Xu, Yuan
2016-04-01
A linear ultrasound array system usually has a larger pitch and is less costly than a phased array system, but loses the ability to fully steer the ultrasound beam. In this paper, we propose a system whose hardware is similar to a large-pitch linear array system, but whose ability to steer the beam is similar to that of a phased array system. The motivation is to reduce the total number of measurement channels M (the product of the number of transmissions, nT, and the number of receive channels in each transmission, nR), while maintaining reasonable image quality. We combined adjacent elements (with proper delays introduced) into groups that were used in both the transmit and receive processes of synthetic transmit aperture (STA) imaging. After the M channels of RF data were acquired, a pseudo-inversion was applied to estimate the equivalent signals of traditional STA and reconstruct an STA image. Even with similar M, different choices of nT and nR produce different image quality. The images produced with M = N²/15 in the selected regions of interest (ROI) were demonstrated to be comparable with those of a full phased array, where N is the number of array elements. The disadvantage of the proposed system is that its field of view in one delay configuration is smaller than that of a standard full phased array. However, by adjusting the delay for each element within each group, the beam can be steered to cover the same field of view as the standard fully-filled phased array. The LPSSTA system might be useful for 3D ultrasound imaging.
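The pseudo-inversion step can be illustrated with a toy linear model. This is only a sketch of the idea, not the authors' implementation: the mixing matrix G, the group size of 2, the toy array size, and the omission of per-element delays are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_elements = 8          # N: individual array elements (toy size)
group_size = 2          # adjacent elements combined per group
n_groups = n_elements // group_size

# Toy "true" STA data: one value per transmit/receive element pair.
sta = rng.normal(size=(n_elements, n_elements))

# Mixing matrix: each group sums its member elements (the per-element
# delays of the real system are omitted here for simplicity).
G = np.zeros((n_groups, n_elements))
for g in range(n_groups):
    G[g, g * group_size:(g + 1) * group_size] = 1.0

# Grouped measurements on transmit and receive: M channels instead of N².
measured = G @ sta @ G.T

# Estimate the element-level STA data via pseudo-inversion.
G_pinv = np.linalg.pinv(G)
sta_est = G_pinv @ measured @ G_pinv.T
```

Because G has full row rank, the pseudo-inverse yields the minimum-norm least-squares estimate of the element-level STA data consistent with the grouped measurements, even though the element-level data itself is underdetermined.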
NASA Astrophysics Data System (ADS)
Lu, Yao; Chan, Heang-Ping; Wei, Jun; Hadjiiski, Lubomir M.; Samala, Ravi K.
2017-10-01
In digital breast tomosynthesis (DBT), the high-attenuation metallic clips marking a previous biopsy site in the breast cause errors in the estimation of attenuation along the ray paths intersecting the markers during reconstruction, which result in interplane and in-plane artifacts obscuring the visibility of subtle lesions. We proposed a new metal artifact reduction (MAR) method to improve image quality. Our method uses automatic detection and segmentation to generate a marker location map for each projection view (PV). A voting technique based on the geometric correlation among different PVs is designed to reduce false positives (FPs) and to label the pixels on the PVs and the voxels in the imaged volume that represent the location and shape of the markers. An iterative diffusion method replaces the labeled pixels on the PVs with estimated tissue intensity from the neighboring regions while preserving the original pixel values in the neighboring regions. The inpainted PVs are then used for DBT reconstruction. The markers are repainted on the reconstructed DBT slices for radiologists’ information. The MAR method is independent of reconstruction techniques or acquisition geometry. For the training set, the method achieved 100% success rate with one FP in 19 views. For the test set, the success rate by view was 97.2% for core biopsy microclips and 66.7% for clusters of large post-lumpectomy markers with a total of 10 FPs in 58 views. All FPs were large dense benign calcifications that also generated artifacts if they were not corrected by MAR. For the views with successful detection, the metal artifacts were reduced to a level that was not visually apparent in the reconstructed slices. The visibility of breast lesions obscured by the reconstruction artifacts from the metallic markers was restored.
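The diffusion-based replacement of labeled marker pixels can be sketched as a simple Jacobi-style iteration that repeatedly averages each masked pixel's neighbours while holding all unmasked pixels fixed. This is a hedged illustration, not the authors' algorithm: the toy projection, the 4-neighbour averaging scheme, and the iteration count are assumptions.

```python
import numpy as np

def diffuse_inpaint(image, mask, n_iter=200):
    """Replace masked (marker) pixels by iteratively averaging their
    4-neighbours, keeping all unmasked pixels fixed -- a simple stand-in
    for diffusion-based inpainting."""
    out = image.astype(float).copy()
    out[mask] = out[~mask].mean()          # crude initial fill
    for _ in range(n_iter):
        # 4-neighbour average via shifted copies of the image
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = avg[mask]              # update only the marker pixels
    return out

# Toy projection: smooth background with a bright "metal clip".
yy, xx = np.mgrid[0:32, 0:32]
pv = (yy + xx).astype(float)
marker = np.zeros((32, 32), dtype=bool)
marker[12:16, 12:16] = True
pv_metal = pv.copy()
pv_metal[marker] = 1000.0                  # high-attenuation marker

inpainted = diffuse_inpaint(pv_metal, marker)
# The marker region is pulled back towards the smooth background,
# while all unmasked pixels keep their original values.
```

Because the toy background is linear (hence harmonic), the iteration converges to the exact background values inside the masked region; on real tissue it produces a smooth interpolation from the surrounding intensities.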
Ramos, Rogelio; Zlatev, Roumen; Valdez, Benjamin; Stoytcheva, Margarita; Carrillo, Mónica; García, Juan-Francisco
2013-01-01
A virtual instrumentation (VI) system called VI localized corrosion image analyzer (LCIA), based on LabVIEW 2010, was developed, allowing rapid, automatic determination of the number of pits on large-sized corroded specimens, free of subjective error. The VI LCIA synchronously controls the digital microscope image acquisition and its analysis, finally producing a map file containing the coordinates of the detected zones probably containing pits on the investigated specimen. The pit area, traverse length, and density are also determined by the VI using binary large object (blob) analysis. The resulting map file can be used further by a scanning vibrating electrode technique (SVET) system for a rapid (one-pass) “true/false” SVET check of the probable zones only, passing through the pits' centers and thus avoiding a scan of the entire specimen. A complete SVET scan over the already proven “true” zones could then determine the corrosion rate in any of the zones. PMID:23691434
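The blob analysis used to count the pits and locate their centers can be sketched with connected-component labeling. This is an illustrative stand-in for the LabVIEW implementation: the toy binary micrograph and the use of `scipy.ndimage` are assumptions.

```python
import numpy as np
from scipy import ndimage

# Toy binary micrograph: 1 = probable pit pixel, 0 = intact surface.
img = np.zeros((20, 20), dtype=int)
img[2:5, 3:6] = 1      # pit 1 (3x3)
img[10:12, 10:14] = 1  # pit 2 (2x4)
img[15:18, 2:4] = 1    # pit 3 (3x2)

# Blob (connected-component) analysis: label each pit and measure it.
labels, n_pits = ndimage.label(img)
idx = list(range(1, n_pits + 1))
centers = ndimage.center_of_mass(img, labels, idx)  # pit centroids
areas = ndimage.sum(img, labels, idx)               # pit areas in pixels
```

The centroid list plays the role of the map file of probable pit coordinates that the one-pass SVET scan then visits, and the per-blob areas correspond to the pit area statistics computed by the VI.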
Matsuo, Toshihiko; Gochi, Akira; Hirakawa, Tsuyoshi; Ito, Tadashi; Kohno, Yoshihisa
2010-10-01
General electronic medical records systems remain insufficient for ophthalmology outpatient clinics, particularly in handling the many ophthalmic examinations and images generated for a large number of patients. Filing systems for documents and images based on Yahgee Document View (Yahgee, Inc.) were introduced on the platform of a general electronic medical records system (Fujitsu, Inc.). An outpatient flow management system and an electronic medical records system for ophthalmology were constructed. All images from ophthalmic appliances were transferred to Yahgee Image by the MaxFile gateway system (P4 Medic, Inc.). The flow of outpatients through examinations such as visual acuity testing was monitored in the "Ophthalmology Outpatients List" of Yahgee Workflow, in addition to the "Patients Reception List" of Fujitsu. Patients' identification numbers were scanned with bar code readers attached to the ophthalmic appliances. Dual monitors were placed in doctors' rooms to show Fujitsu Medical Records on the left-hand monitor and ophthalmic charts of Yahgee Document on the right-hand monitor. Manually input visual acuity data and automatically exported autorefractometry and non-contact tonometry data, collected on a new template, MaxFile ED, were automatically transferred to designated boxes on the ophthalmic charts of Yahgee Document. Images such as fundus photographs, fluorescein angiograms, and optical coherence tomographic and ultrasound scans were viewed in Yahgee Image and copied and pasted into assigned boxes on the ophthalmic charts. Orders such as appointments, drug prescriptions, fees and diagnosis input, central laboratory tests, and surgical theater and ward room reservations were placed through the Fujitsu electronic medical records system. The combination of the Fujitsu electronic medical records and Yahgee Document View systems enabled the University Hospital to examine the same number of outpatients as before the implementation of the computerized filing system.
NASA Astrophysics Data System (ADS)
Winfield, J. M.; Douglas, N. H. M.; deSouza, N. M.; Collins, D. J.
2014-05-01
We present the development and application of a phantom for assessment and optimization of fat suppression over a large field-of-view in diffusion-weighted magnetic resonance imaging at 1.5 T and 3 T. A Perspex cylinder (inner diameter 185 mm, height 300 mm) which contains a second cylinder (inner diameter 140 mm) was constructed. The inner cylinder was filled with water doped with copper sulphate and sodium chloride and the annulus was filled with corn oil, which closely matches the spectrum and longitudinal relaxation times of subcutaneous abdominal fat. Placement of the phantom on the couch at 45° to the z-axis presented an elliptical cross-section, which was of a similar size and shape to axial abdominal images. The use of a phantom for optimization of fat suppression allowed quantitative comparison between studies without the differences introduced by variability between human subjects. We have demonstrated that the phantom is suitable for selection of inversion delay times, spectral adiabatic inversion recovery delays and assessment of combinatorial methods of fat suppression. The phantom is valuable in protocol development and the assessment of new techniques, particularly in multi-centre trials.
Extended Source/Galaxy All Sky 1
2003-03-27
This panoramic view of the entire sky reveals the distribution of galaxies beyond our Milky Way galaxy, which astronomers call extended sources, as observed by the Two Micron All-Sky Survey. The image is constructed from a database of over 1.6 million galaxies listed in the survey's Extended Source Catalog; more than half of the galaxies have never before been catalogued. The image is a representation of the relative brightnesses of these million-plus galaxies, all observed at a wavelength of 2.2 microns. The brightest and nearest galaxies are represented in blue, and the faintest, most distant ones are in red. This color scheme gives insights into the three-dimensional large-scale structure of the nearby universe, with the brightest, closest clusters and superclusters showing up as the blue and bluish-white features. The dark band in this image shows the area of the sky where our Milky Way galaxy blocks our view of distant objects, which, in this projection, lies predominantly along the edges of the image. http://photojournal.jpl.nasa.gov/catalog/PIA04252
High-speed large angle mammography tomosynthesis system
NASA Astrophysics Data System (ADS)
Eberhard, Jeffrey W.; Staudinger, Paul; Smolenski, Joe; Ding, Jason; Schmitz, Andrea; McCoy, Julie; Rumsey, Michael; Al-Khalidy, Abdulrahman; Ross, William; Landberg, Cynthia E.; Claus, Bernhard E. H.; Carson, Paul; Goodsitt, Mitchell; Chan, Heang-Ping; Roubidoux, Marilyn; Thomas, Jerry A.; Osland, Jacqueline
2006-03-01
A new mammography tomosynthesis prototype system that acquires 21 projection images over a 60 degree angular range in approximately 8 seconds has been developed and characterized. Fast imaging sequences are facilitated by a high-power tube and generator for faster delivery of the x-ray exposure and a high-speed detector read-out. An enhanced a-Si/CsI flat panel digital detector provides greater DQE at low exposure, enabling tomo image sequence acquisitions at total patient dose levels between 150% and 200% of the dose of a standard mammographic view. For clinical scenarios where a single MLO tomographic acquisition per breast may replace the standard CC and MLO views, total tomosynthesis breast dose is comparable to or below the dose in standard mammography. The system supports co-registered acquisition of x-ray tomosynthesis and 3-D ultrasound data sets by incorporating an ultrasound transducer scanning system that flips into position above the compression paddle for the ultrasound exam. Initial images acquired with the system are presented.
NASA Technical Reports Server (NTRS)
Stanfill, D. F.
1994-01-01
Pixel Pusher is a Macintosh application used for viewing and performing minor enhancements on imagery. It will read image files in JPL's two primary image formats- VICAR and PDS - as well as the Macintosh PICT format. VICAR (NPO-18076) handles an array of image processing capabilities which may be used for a variety of applications including biomedical image processing, cartography, earth resources, and geological exploration. Pixel Pusher can also import VICAR format color lookup tables for viewing images in pseudocolor (256 colors). This program currently supports only eight bit images but will work on monitors with any number of colors. Arbitrarily large image files may be viewed in a normal Macintosh window. Color and contrast enhancement can be performed with a graphical "stretch" editor (as in contrast stretch). In addition, VICAR images may be saved as Macintosh PICT files for exporting into other Macintosh programs, and individual pixels can be queried to determine their locations and actual data values. Pixel Pusher is written in Symantec's Think C and was developed for use on a Macintosh SE30, LC, or II series computer running System Software 6.0.3 or later and 32 bit QuickDraw. Pixel Pusher will only run on a Macintosh which supports color (whether a color monitor is being used or not). The standard distribution medium for this program is a set of three 3.5 inch Macintosh format diskettes. The program price includes documentation. Pixel Pusher was developed in 1991 and is a copyrighted work with all copyright vested in NASA. Think C is a trademark of Symantec Corporation. Macintosh is a registered trademark of Apple Computer, Inc.
New three-dimensional visualization system based on angular image differentiation
NASA Astrophysics Data System (ADS)
Montes, Juan D.; Campoy, Pascual
1995-03-01
This paper presents a new auto-stereoscopic system capable of reproducing static or moving 3D images by projection, with horizontal parallax or with both horizontal and vertical parallax. The working principle is based on the angular differentiation of the images, which are projected onto the back side of the new patented screen. The most important features of this new system are: (1) Images can be seen by the naked eye, without glasses or any other aid. (2) The 3D view angle is not restricted by the angle of the optics making up the screen. (3) Fine tuning is not necessary, independently of the parallax and of the size of the 3D view angle. (4) Coherent light is necessary neither in capturing the image nor in reproducing it; standard cameras and projectors suffice. (5) Since the images are projected, the size and depth of the reproduced scene are unrestricted. (6) Manufacturing cost is not excessive, owing to the use of optics of large focal length, the lack of fine tuning, and the use of the same screen for several reproduction systems. (7) This technology can be used for any projection system: slides, movies, TV projectors, etc. A first prototype for static images has been developed and tested, with a 3D view angle of 90 degrees and a photographic resolution over a planar screen of 900 mm diagonal length. Present developments have succeeded in dramatically reducing the size and cost of the projecting system. In parallel, work has been carried out on a prototype for 3D moving images.
ESR paper on the proper use of mobile devices in radiology.
2018-04-01
Mobile devices (smartphones, tablets, etc.) have become key methods of communication, data access and data sharing for the population in the past decade. The technological capabilities of these devices have expanded very rapidly; for example, their in-built cameras have largely replaced conventional cameras. Their processing power is often sufficient to handle the large data sets of radiology studies and to manipulate images and studies directly on hand-held devices. Thus, they can be used to transmit and view radiology studies, often in locations remote from the source of the imaging data. They are not recommended for primary interpretation of radiology studies, but they facilitate sharing of studies for second opinions, viewing of studies and reports by clinicians at the bedside, etc. Other potential applications include remote participation in educational activity (e.g. webinars) and consultation of online educational content, e-books, journals and reference sources. Social-networking applications can be used for exchanging professional information and teaching. Users of mobile devices must be aware of the vulnerabilities and dangers of their use, in particular regarding the potential for inappropriate sharing of confidential patient information, and must take appropriate steps to protect confidential data. • Mobile devices have revolutionized communication in the past decade, and are now ubiquitous. • Mobile devices have sufficient processing power to manipulate and display large data sets of radiological images. • Mobile devices allow transmission and sharing of radiologic studies for purposes of second opinions, bedside review of images, teaching, etc. • Mobile devices are currently not recommended as tools for primary interpretation of radiologic studies. • The use of mobile devices for image and data transmission carries risks, especially regarding confidentiality, which must be considered.
NASA's Great Observatories Celebrate International Year of Astronomy
NASA Astrophysics Data System (ADS)
2009-11-01
A never-before-seen view of the turbulent heart of our Milky Way galaxy is being unveiled by NASA on Nov. 10. This event will commemorate the 400 years since Galileo first turned his telescope to the heavens in 1609. In celebration of this International Year of Astronomy, NASA is releasing images of the galactic center region as seen by its Great Observatories to more than 150 planetariums, museums, nature centers, libraries, and schools across the country. The sites will unveil a giant, 6-foot-by-3-foot print of the bustling hub of our galaxy that combines a near-infrared view from the Hubble Space Telescope, an infrared view from the Spitzer Space Telescope, and an X-ray view from the Chandra X-ray Observatory into one multiwavelength picture. Experts from all three observatories carefully assembled the final image from large mosaic photo surveys taken by each telescope. This composite image provides one of the most detailed views ever of our galaxy's mysterious core. Participating institutions also will display a matched trio of Hubble, Spitzer, and Chandra images of the Milky Way's center on a second large panel measuring 3 feet by 4 feet. Each image shows the telescope's different wavelength view of the galactic center region, illustrating not only the unique science each observatory conducts, but also how far astronomy has come since Galileo. The composite image features the spectacle of stellar evolution: from vibrant regions of star birth, to young hot stars, to old cool stars, to seething remnants of stellar death called black holes. This activity occurs against a fiery backdrop in the crowded, hostile environment of the galaxy's core, the center of which is dominated by a supermassive black hole nearly four million times more massive than our Sun. 
Permeating the region is a diffuse blue haze of X-ray light from gas that has been heated to millions of degrees by outflows from the supermassive black hole as well as by winds from massive stars and by stellar explosions. Infrared light reveals more than a hundred thousand stars along with glowing dust clouds that create complex structures including compact globules, long filaments, and finger-like "pillars of creation," where newborn stars are just beginning to break out of their dark, dusty cocoons. The unveilings will take place at 152 institutions nationwide, reaching both big cities and small towns. Each institution will conduct an unveiling celebration involving the public, schools, and local media. The Astrophysics Division of NASA's Science Mission Directorate supports the International Year of Astronomy Great Observatories image unveiling. The project is a collaboration among the Space Telescope Science Institute in Baltimore, Md., the Spitzer Science Center in Pasadena, Calif., and the Chandra X-ray Center in Cambridge, Mass. Images of the Milky Way galactic center region and a list of places exhibiting these images can be found at: http://hubblesite.org/news/2009/28 & http://www.nasa.gov/hubble http://spitzer.caltech.edu & http://www.nasa.gov/spitzer http://chandra.harvard.edu & http://www.nasa.gov/chandra http://astronomy2009.nasa.gov
Perspective View with Landsat Overlay, Salt Lake City, Utah
NASA Technical Reports Server (NTRS)
2002-01-01
Most of the population of Utah lives just west of the Wasatch Mountains in the north central part of the state. This broad east-northeastward view shows that region with the cities of Ogden, Salt Lake City, and Provo seen from left to right. The Great Salt Lake (left) and Utah Lake (right) are quite shallow and appear greenish in this enhanced natural color view. Thousands of years ago ancient Lake Bonneville covered all of the lowlands seen here. Its former shoreline is clearly seen as a wave-cut bench and/or light colored 'bathtub ring' at several places along the base of the mountain front - evidence seen from space of our ever-changing planet. This 3-D perspective view was generated using topographic data from the Shuttle Radar Topography Mission (SRTM), a Landsat 5 satellite image mosaic, and a false sky. Topographic expression is exaggerated four times. Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data matches the 30-meter (98-foot) resolution of most Landsat images and will substantially help in analyzing the large and growing Landsat image archive, managed by the U.S. Geological Survey (USGS). Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies. 
It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C. Size: View width 147 kilometers (91 miles), View distance 38 kilometers (24 miles) Location: 40.7 deg. North lat., 112.0 deg. West lon. Orientation: View 19.5 deg. North of East, 20 degrees below horizontal Image Data: Landsat Bands 3, 2, 1 as red, green, blue, respectively. Original Data Resolution: SRTM 1 arcsecond (30 meters or 98 feet), Thematic Mapper 30 meters (98 feet) Date Acquired: February 2000 (SRTM), 1990s (Landsat 5 image mosaic)
SRTM Perspective View with Landsat Overlay: San Fernando Valley, California
NASA Technical Reports Server (NTRS)
2000-01-01
The San Fernando Valley (lower right of center) is part of Los Angeles and includes well over one million people. Two major disasters have occurred here in the last few decades: the 1971 Sylmar earthquake and the 1994 Northridge earthquake. Both quakes caused major damage to homes, freeways, and other structures and involved major injuries and fatalities. The Northridge earthquake was one of the costliest natural disasters in United States history. Understanding earthquake risks requires understanding a location's geophysical setting, and topographic data are of substantial benefit in that regard. Landforms are often characteristic of specific tectonic processes, such as ground movement along faults. Elevation models, such as those produced by the Shuttle Radar Topography Mission (SRTM), are particularly useful in visualizing regional-scale landforms that are too large to be seen directly on-site. They can also be used to model the propagation of damaging seismic waves, which helps in urban planning. In recent years, elevation models have also been a critical input to radar interferometric studies, which reveal detailed patterns of ground deformation from earthquakes that had never before been seen. This perspective view was generated by draping a Landsat satellite image over a preliminary topographic map from SRTM. Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data matches the 30-meter resolution of most Landsat images and will substantially help in analyses of the large and growing Landsat image archive. The elevation data used in this image was acquired by SRTM aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. 
To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense (DoD), and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC. Size: 33 kilometers (20 miles) view width, 88 kilometers (55 miles) view distance Location: 34.2 deg. North lat., 118.5 deg. West lon. Orientation: View toward the northeast, 3X vertical exaggeration Image: Landsat bands 1, 2&4, 3 as blue, green, and red, respectively Date Acquired: February 16, 2000 (SRTM), November 11, 1986 (Landsat)
Spring Dust Storm Smothers Beijing
NASA Technical Reports Server (NTRS)
2002-01-01
A few days earlier than usual, a large, dense plume of dust blew southward and eastward from the desert plains of Mongolia, smothering the residents of Beijing. Citizens of northeastern China call this annual event the 'shachenbao,' or 'dust cloud tempest.' The tempest normally occurs during springtime, however. The dust storm hit Beijing on Friday night, March 15, and began coating everything with a fine, pale brown layer of grit. The region is quite dry, a problem some believe has been exacerbated by decades of deforestation. According to Chinese government estimates, roughly 1 million tons of desert dust and sand blow into Beijing each year. This true-color image was made using two adjacent swaths of data from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS), flying aboard the OrbView-2 satellite, on March 17, 2002. The massive dust storm (brownish pixels) can easily be distinguished from clouds (bright white pixels) as it blows across northern Japan and eastward toward the open Pacific Ocean. The black regions are gaps between SeaWiFS' viewing swaths and represent areas where no data were collected. Image courtesy the SeaWiFS Project, NASA/Goddard Space Flight Center, and ORBIMAGE
Hubble's High-Definition Panoramic View of the Andromeda Galaxy
2017-12-08
Zoom into the Andromeda galaxy. The largest NASA Hubble Space Telescope image ever assembled, this sweeping view of a portion of the Andromeda galaxy (M31) is the sharpest large composite image ever taken of our galactic neighbor. Though the galaxy is over 2 million light-years away, the Hubble telescope is powerful enough to resolve individual stars in a 61,000-light-year-long section of the galaxy's pancake-shaped disk. It's like photographing a beach and resolving individual grains of sand. And, there are lots of stars in this sweeping view — over 100 million, with some of them in thousands of star clusters seen embedded in the disk. This ambitious photographic cartography of the Andromeda galaxy represents a new benchmark for precision studies of large spiral galaxies which dominate the universe's population of over 100 billion galaxies. Never before have astronomers been able to see individual stars over a major portion of an external spiral galaxy. Most of the stars in the universe live inside such majestic star cities, and this is the first data that reveal populations of stars in context to their home galaxy. Credit: NASA, ESA, and G. Bacon (STScI)
Concept, design and analysis of a large format autostereoscopic display system
NASA Astrophysics Data System (ADS)
Knocke, F.; de Jongh, R.; Frömel, M.
2005-09-01
Autostereoscopic display devices with large visual field are of importance in a number of applications such as computer aided design projects, technical education, and military command systems. Typical requirements for such systems are, aside from the large visual field, a large viewing zone, a high level of image brightness, and an extended depth of field. Additional appliances such as specialized eyeglasses or head-trackers are disadvantageous for the aforementioned applications. We report on the design and prototyping of an autostereoscopic display system on the basis of projection-type one-step unidirectional holography. The prototype consists of a hologram holder, an illumination unit, and a special direction-selective screen. Reconstruction light is provided by a 2 W frequency-doubled Nd:YVO4 laser. The production of stereoscopic hologram stripes on photopolymer is carried out on a special origination setup. The prototype has a screen size of 180 cm × 90 cm and provides a visual field of 29° when viewed from 3.6 meters. Due to the coherent reconstruction, a depth of field of several meters is achievable. Up to 18 hologram stripes can be arranged on the holder to permit a rapid switch between a series of motifs or views. Both computer generated image sequences and digital camera photos may serve as input frames. However, a comprehensive pre-distortion must be performed in order to account for optical distortion and several other geometrical factors. The corresponding computations are briefly summarized below. The performance of the system is analyzed, aspects of beam-shaping and mechanical design are discussed and photographs of early reconstructions are presented.
Slater, Amy; Varsani, Neesha; Diedrichs, Phillippa C
2017-09-01
This study experimentally examined the impact of exposure to fitspiration images and self-compassion quotes on social media on young women's body satisfaction, body appreciation, self-compassion, and negative mood. Female undergraduate students (N=160) were randomly assigned to view either Instagram images of fitspiration, self-compassion quotes, a combination of both, or appearance-neutral images. Results showed no differences between viewing fitspiration images compared to viewing neutral images, except for poorer self-compassion among those who viewed fitspiration images. However, women who viewed self-compassion quotes showed greater body satisfaction, body appreciation, self-compassion, and reduced negative mood compared to women who viewed neutral images. Further, viewing a combination of fitspiration images and self-compassion quotes led to positive outcomes compared to viewing only fitspiration images. Trait levels of thin-ideal internalisation moderated some effects. The findings suggest that self-compassion might offer a novel avenue for attenuating the negative impact of social media on women's body satisfaction.
Jamaludin, Juliza; Rahim, Ruzairi Abdul; Fazul Rahiman, Mohd Hafiz; Mohd Rohani, Jemmy
2018-04-01
Optical tomography (OPT) is a method for capturing a cross-sectional image based on data obtained by sensors distributed around the periphery of the analyzed system. The system is based on measuring the final light attenuation, or absorption of radiation, after it crosses the measured objects. The number of sensor views affects the results of image reconstruction: a high number of sensor views per projection gives high image quality. This research presents an application of a charge-coupled device (CCD) linear sensor and a laser diode in an OPT system. Experiments on detecting solid and transparent objects in crystal-clear water were conducted. Two numbers of sensor views, 160 and 320, were evaluated for reconstructing the images. The image reconstruction algorithm used was the filtered linear back-projection algorithm. Analysis comparing the simulated and experimental image results shows that 320 views give a smaller area error than 160 views, suggesting that a higher number of views yields higher-resolution image reconstruction.
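The view-count effect the abstract reports can be illustrated with a minimal numpy sketch. This is an unfiltered parallel-beam back projection of a point object — a simplification of the paper's filtered back-projection pipeline; the grid size, angles, and point location are illustrative, not the paper's values:

```python
import numpy as np

def project(img, angles):
    """Parallel-beam forward projection (nearest-bin sinogram)."""
    size = img.shape[0]
    c = (size - 1) / 2
    y, x = np.mgrid[0:size, 0:size]
    sino = []
    for th in angles:
        t = (x - c) * np.cos(th) + (y - c) * np.sin(th)  # detector coordinate
        idx = np.clip(np.round(t + c).astype(int), 0, size - 1)
        sino.append(np.bincount(idx.ravel(), weights=img.ravel(),
                                minlength=size)[:size])
    return np.array(sino)

def backproject(sino, angles, size):
    """Unfiltered back projection onto a size x size grid."""
    c = (size - 1) / 2
    y, x = np.mgrid[0:size, 0:size]
    recon = np.zeros((size, size))
    for proj, th in zip(sino, angles):
        t = (x - c) * np.cos(th) + (y - c) * np.sin(th)
        idx = np.clip(np.round(t + c).astype(int), 0, size - 1)
        recon += proj[idx]     # smear each projection back along its rays
    return recon / len(angles)

def recon_error(n_views, size=81, point=(40, 54)):
    """Mean squared error of reconstructing a single point object."""
    img = np.zeros((size, size))
    img[point] = 1.0
    angles = np.linspace(0, np.pi, n_views, endpoint=False)
    recon = backproject(project(img, angles), angles, size)
    return np.mean((recon - img) ** 2)
```

With more views, the backprojected streaks are spread over more directions, so the error of the sparse-view reconstruction drops — the same trend behind the paper's 160-versus-320-view comparison.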
NASA Astrophysics Data System (ADS)
Vogl, Raimund
2001-08-01
In 1997, a large PACS was first introduced at Innsbruck University Hospital in the context of a new traumatology centre. In the subsequent years, this initial PACS setting covering only one department was expanded to most of the hospital campus, with currently some 250 viewing stations attached. Constantly connecting new modalities and viewing stations created the demand for several redesigns from the original PACS configuration to cope with the increasing data load. We give an account of these changes necessary to develop a multi hospital PACS and the considerations that lead us there. Issues of personnel for running a large scale PACS are discussed and we give an outlook to the new information systems currently under development for archiving and communication of general medical imaging data and for simple telemedicine networking between several large university hospitals.
2018-04-23
Saturn's rings display their subtle colors in this view captured on Aug. 22, 2009, by NASA's Cassini spacecraft. The particles that make up the rings range in size from smaller than a grain of sand to as large as mountains, and are mostly made of water ice. The exact nature of the material responsible for bestowing color on the rings remains a matter of intense debate among scientists. Images taken using red, green and blue spectral filters were combined to create this natural color view. Cassini's narrow-angle camera took the images at a distance of approximately 1.27 million miles (2.05 million kilometers) from the center of the rings. The Cassini spacecraft ended its mission on Sept. 15, 2017. https://photojournal.jpl.nasa.gov/catalog/PIA22418
MISR Images Forest Fires and Hurricane
NASA Technical Reports Server (NTRS)
2000-01-01
These images show forest fires raging in Montana and Hurricane Hector swirling in the Pacific. These two unrelated, large-scale examples of nature's fury were captured by the Multi-angle Imaging SpectroRadiometer (MISR) during a single orbit of NASA's Terra satellite on August 14, 2000.
In the left image, huge smoke plumes rise from devastating wildfires in the Bitterroot Mountain Range near the Montana-Idaho border. Flathead Lake is near the upper left, and the Great Salt Lake is at the bottom right. Smoke accumulating in the canyons and plains is also visible. This image was generated from the MISR camera that looks forward at a steep angle (60 degrees); the instrument has nine different cameras viewing Earth at different angles. The smoke is far more visible when seen at this highly oblique angle than it would be in a conventional, straight-downward (nadir) view. The wide extent of the smoke is evident from comparison with the image on the right, a view of Hurricane Hector acquired from MISR's nadir-viewing camera. Both images show an area of approximately 400 kilometers (250 miles) in width and about 850 kilometers (530 miles) in length. When this image of Hector was taken, the eastern Pacific tropical cyclone was located approximately 1,100 kilometers (680 miles) west of the southern tip of Baja California, Mexico. The eye is faintly visible and measures 25 kilometers (16 miles) in diameter. The storm was beginning to weaken, and 24 hours later the National Weather Service downgraded Hector from a hurricane to a tropical storm. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology. For more information: http://www-misr.jpl.nasa.gov
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nazareth, D; Malhotra, H; French, S
Purpose: Breast radiotherapy, particularly electronic compensation, may involve large dose gradients and difficult patient positioning problems. We have developed a simple self-calibrating augmented-reality system, which assists in accurately and reproducibly positioning the patient by displaying her live image from a single camera superimposed on the correct perspective projection of her 3D CT data. Our method requires only a standard digital camera capable of live-view mode, installed in the treatment suite at an approximately known orientation and position (rotation R; translation T). Methods: A 10-sphere calibration jig was constructed and CT imaged to provide a 3D model. The (R,T) relating the camera to the CT coordinate system were determined by acquiring a photograph of the jig and optimizing an objective function, which compares the true image points to points calculated with a given candidate R and T geometry. Using this geometric information, the 3D CT patient data, viewed from the camera's perspective, are plotted using a Matlab routine. This image data is superimposed onto the real-time patient image acquired by the camera and displayed using standard live-view software. This enables the therapists to view both the patient's current and desired positions, and guide the patient into assuming the correct position. The method was evaluated using an in-house developed bolus-like breast phantom, mounted on a supporting platform that could be tilted at various angles to simulate treatment-like geometries. Results: Our system allowed breast phantom alignment with an accuracy of about 0.5 cm and 1 ± 0.5 degree. Better resolution may be possible using a camera with higher-zoom capabilities. Conclusion: We have developed an augmented-reality system which combines a perspective projection of a CT image with a patient's real-time optical image. This system has the potential to improve patient setup accuracy during breast radiotherapy, and could possibly be used for other disease sites as well.
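The calibration step described in Methods — finding the (R, T) that makes the projected jig model match the photographed sphere centers — can be sketched as a reprojection objective over a pinhole camera model. This is a generic numpy sketch, not the authors' Matlab routine; the focal length, jig coordinates, and pose values are made-up stand-ins:

```python
import numpy as np

def rot_xyz(rx, ry, rz):
    """Rotation matrix from Euler angles (radians)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def pinhole_project(pts3d, R, T, f=1000.0):
    """Project CT-space points into the image plane of a camera at (R, T)."""
    cam = pts3d @ R.T + T                # CT frame -> camera frame
    return f * cam[:, :2] / cam[:, 2:3]  # perspective divide

def reprojection_error(params, pts3d, observed_2d, f=1000.0):
    """Objective: squared distance between photographed and predicted points."""
    R = rot_xyz(*params[:3])
    T = np.asarray(params[3:])
    pred = pinhole_project(pts3d, R, T, f)
    return float(np.sum((pred - observed_2d) ** 2))
```

The objective vanishes at the true geometry and grows for any perturbed candidate pose, so handing it to a general-purpose optimizer recovers (R, T) from a single photograph of the jig.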
Surface features of central North America: a synoptic view from computer graphics
Pike, R.J.
1991-01-01
A digital shaded-relief image of the 48 contiguous United States shows the details of large- and small-scale landforms, including several linear trends. The features faithfully reflect tectonism, continental glaciation, fluvial activity, volcanism, and other surface-shaping events and processes. The new map not only depicts topography accurately and in its true complexity, but does so in one synoptic view that provides a regional context for geologic analysis unobscured by clouds, culture, vegetation, or artistic constraints. -Author
Rapid portal imaging with a high-efficiency, large field-of-view detector.
Mosleh-Shirazi, M A; Evans, P M; Swindell, W; Symonds-Tayler, J R; Webb, S; Partridge, M
1998-12-01
The design, construction, and performance evaluation of an electronic portal imaging device (EPID) are described. The EPID has the same imaging geometry as the current mirror-based systems except for the x-ray detection stage, where a two-dimensional (2D) array of 1 cm thick CsI(Tl) detector elements are utilized. The approximately 18% x-ray quantum efficiency of the scintillation detector and its 30 x 40 cm2 field-of-view at the isocenter are greater than other area-imaging EPIDs. The imaging issues addressed are theoretical and measured signal-to-noise ratio, linearity of the imaging chain, influence of frame-summing on image quality and image calibration. Portal images of test objects and a humanoid phantom are used to measure the performance of the system. An image quality similar to the current devices is achieved but with a lower dose. With approximately 1 cGy dose delivered by a 6 MV beam, a 2 mm diam structure of 1.3% contrast and an 18 mm diam object of 0.125% contrast can be resolved without using image-enhancement methods. A spatial resolution of about 2 mm at the isocenter is demonstrated. The capability of the system to perform fast sequential imaging, synchronized with the radiation pulses, makes it suitable for patient motion studies and verification of intensity-modulated beams as well as its application in cone-beam megavoltage computed tomography.
Long-term, high-resolution imaging in the mouse neocortex through a chronic cranial window
Holtmaat, Anthony; Bonhoeffer, Tobias; Chow, David K; Chuckowree, Jyoti; De Paola, Vincenzo; Hofer, Sonja B; Hübener, Mark; Keck, Tara; Knott, Graham; Lee, Wei-Chung A; Mostany, Ricardo; Mrsic-Flogel, Tom D; Nedivi, Elly; Portera-Cailliau, Carlos; Svoboda, Karel; Trachtenberg, Joshua T; Wilbrecht, Linda
2011-01-01
To understand the cellular and circuit mechanisms of experience-dependent plasticity, neurons and their synapses need to be studied in the intact brain over extended periods of time. Two-photon excitation laser scanning microscopy (2PLSM), together with expression of fluorescent proteins, enables high-resolution imaging of neuronal structure in vivo. In this protocol we describe a chronic cranial window to obtain optical access to the mouse cerebral cortex for long-term imaging. A small bone flap is replaced with a coverglass, which is permanently sealed in place with dental acrylic, providing a clear imaging window with a large field of view (∼0.8–12 mm2). The surgical procedure can be completed within ∼1 h. The preparation allows imaging over time periods of months with arbitrary imaging intervals. The large size of the imaging window facilitates imaging of ongoing structural plasticity of small neuronal structures in mice, with low densities of labeled neurons. The entire dendritic and axonal arbor of individual neurons can be reconstructed. PMID:19617885
In situ calibration of an infrared imaging video bolometer in the Large Helical Device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukai, K., E-mail: mukai.kiyofumi@LHD.nifs.ac.jp; Peterson, B. J.; Pandya, S. N.
The InfraRed imaging Video Bolometer (IRVB) is a powerful diagnostic to measure multi-dimensional radiation profiles in plasma fusion devices. In the Large Helical Device (LHD), four IRVBs have been installed with different fields of view to reconstruct three-dimensional profiles using a tomography technique. For the application of the measurement to plasma experiments using deuterium gas in LHD in the near future, the long-term effect of the neutron irradiation on the heat characteristics of an IRVB foil should be taken into account by regular in situ calibration measurements. Therefore, in this study, an in situ calibration system was designed.
A complete catalog of stellar mass maps for PHANGS
NASA Astrophysics Data System (ADS)
Munoz-Mateos, Juan-Carlos; Querejeta, Miguel; Schinnerer, Eva; Leroy, Adam; Sun, Jiayi; Blanc, Guillermo; Kruijssen, Diederik; Emsellem, Eric; Bigiel, Frank
2018-05-01
We request IRAC 3.6 and 4.5 μm imaging of four galaxies that have been mapped in molecular gas by ALMA as part of its first large program targeting nearby galaxies (PHANGS: Physics at High Angular resolution in Nearby GalaxieS). IRAC provides a uniquely robust view of the stellar mass distribution, which in turn plays a key role in regulating the properties and behavior of the molecular gas. These are the only targets of our ALMA large program without such imaging. A modest investment of Spitzer time will allow us to measure the drivers of molecular cloud and star formation in these targets.
How to Study History: The View from Sociology.
ERIC Educational Resources Information Center
Goldstone, Jack A.
1986-01-01
Reviews two recent books: Charles Tilly's 1985 work, "Big Structures, Large Processes, Huge Comparisons," and the 1984 volume edited by Theda Skocpol, "Vision and Method in Historical Sociology." Concludes that historians who still harbor negative images of historical sociologists would benefit by gaining a more accurate…
NASA Technical Reports Server (NTRS)
2001-01-01
This set of images from the Multi-angle Imaging SpectroRadiometer highlights coastal areas of four states along the Gulf of Mexico: Louisiana, Mississippi, Alabama and part of the Florida panhandle. The images were acquired on October 15, 2001 (Terra orbit 9718) and represent an area of 345 kilometers x 315 kilometers. The two smaller images on the right are (top) a natural color view comprised of red, green, and blue band data from MISR's nadir (vertical-viewing) camera, and (bottom) a false-color view comprised of near-infrared, red, and blue band data from the same camera. The predominantly red color of the false-color image is due to the presence of vegetation, which is bright at near-infrared wavelengths. Cities appear as grey patches, with New Orleans visible at the southern edge of Lake Pontchartrain, along the left-hand side of the images. The Lake Pontchartrain Bridge runs approximately north-south across the middle of the lake. The distinctive shape of the Mississippi River Delta can be seen to the southeast of New Orleans. Other coastal cities are visible east of the Mississippi, including Biloxi, Mobile and Pensacola. The large image is similar to the true-color nadir view, except that red band data from the 60-degree backward-looking camera has been substituted into the red channel; the blue and green data from the nadir camera have been preserved. In this visualization, green hues appear somewhat subdued, and a number of areas with a reddish color are present, particularly near the mouths of the Mississippi, Pascagoula, Mobile-Tensaw, and Escambia Rivers. Here, the red color is highlighting differences in surface texture. This combination of angular and spectral information differentiates areas with aquatic vegetation associated with poorly drained bottom lands, marshes, and/or estuaries from the surrounding surface vegetation.
These wetland regions are not as well differentiated in the conventional nadir views. Variations in ocean color are apparent in all three views, and represent the outflow of suspended sediment from the seabed shelf to the open waters of the Gulf of Mexico. Major features include the Mississippi Delta, where large amounts of land-derived sediments have been deposited in shallow coastal waters. These deltaic environments form a complex, interconnected web of estuarine channels and extensive coastal wetlands that provide important habitat for fisheries. The city of New Orleans is prone to flooding, with about 45% of the metropolitan core situated at or below sea level. The city is protected by levees, but the wetlands which also function as a buffer from storm surges have been disappearing. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
NOAA Photo Library - NOAA's Ark/Animals Album
A Method to Recognize Anatomical Site and Image Acquisition View in X-ray Images.
Chang, Xiao; Mazur, Thomas; Li, H Harold; Yang, Deshan
2017-12-01
A method was developed to automatically recognize the anatomical site and image acquisition view in the 2D X-ray images used in image-guided radiation therapy. The purpose is to enable site- and view-dependent automation and optimization in image processing tasks, including 2D-2D image registration, 2D image contrast enhancement, and independent treatment-site confirmation. X-ray images for 180 patients across six disease sites (brain, head-neck, breast, lung, abdomen, and pelvis) were included in this study, with 30 patients per site and two images of orthogonal views per patient. A hierarchical multiclass recognition model was developed to recognize the general site first and then the specific site. Each node of the hierarchical model recognized the images using a feature extraction step based on principal component analysis followed by a binary classification step based on a support vector machine. Given two images in known orthogonal views, the site recognition model achieved a 99% average F1 score across the six sites. If the views were unknown, the average F1 score was 97%. If only one image was taken, either with or without view information, the average F1 score was 94%. The accuracy of the site-specific view recognition models was 100%.
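A single node of the hierarchy — PCA feature extraction followed by a binary classifier — can be sketched in plain numpy. The SVM here is a minimal linear SVM trained by subgradient descent on the hinge loss (a simple stand-in for a library SVM), and the two-cluster data is synthetic, not the paper's X-ray features:

```python
import numpy as np

def pca_fit(X, k):
    """Top-k principal components via SVD of the centered data."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=300):
    """Binary linear SVM (labels +/-1) by subgradient descent on hinge loss."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margin = y * (X @ w + b)
        viol = margin < 1                                  # hinge violators
        gw = lam * w - (X[viol] * y[viol, None]).sum(axis=0) / n
        gb = -y[viol].sum() / n
        w, b = w - lr * gw, b - lr * gb
    return w, b

def node_classify(Xtr, ytr, Xte, k=3):
    """One hierarchy node: PCA features, then a binary SVM decision."""
    mu, comps = pca_fit(Xtr, k)
    w, b = train_linear_svm((Xtr - mu) @ comps.T, ytr)
    return np.sign((Xte - mu) @ comps.T @ w + b)
```

Stacking such nodes in a tree (general site at the root, specific sites at the leaves) reproduces the hierarchical multiclass structure the abstract describes.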
Pose-Invariant Face Recognition via RGB-D Images.
Sang, Gaoli; Li, Jing; Zhao, Qijun
2016-01-01
Three-dimensional (3D) face models can intrinsically handle the large-pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measurement via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on the Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information improves the performance of face recognition with large pose variations and under even more challenging conditions.
The Wide Field Imager for Athena
NASA Astrophysics Data System (ADS)
Rau, A.; Nandra, K.; Meidinger, N.; Plattner, M.
2017-10-01
The Wide Field Imager (WFI) is one of the two scientific instruments of Athena, ESA's next large X-ray observatory, with launch in 2028. The instrument will provide two defining capabilities to the mission: sensitive wide-field imaging spectroscopy and excellent high-count-rate performance. It will do so with the use of two separate detector systems, the Large Detector Array (LDA), optimized for its field of view (40'×40') with a 100-fold survey speed increase compared to existing X-ray missions, and the Fast Detector (FD), tweaked for high throughput and low pile-up for point sources as bright as the Crab. In my talk I will present the key performance parameters of the instrument and their links to the scientific goals of Athena, and summarize the status of the ongoing development activities.
Zhang, Geng; Wang, Shuang; Li, Libo; Hu, Xiuqing; Hu, Bingliang
2016-11-01
The lunar spectrum has been used in radiometric calibration and sensor-stability monitoring for spaceborne optical sensors. A ground-based large-aperture static image spectrometer (LASIS) can be used to acquire lunar spectral images for lunar radiance model improvement as the Moon passes through its field of view. The lunar orbital motion, however, is not consistent with the desired scanning speed and direction of LASIS. To correctly extract interferograms from the obtained data, a translation correction method based on image correlation is proposed. This method registers the frames to a reference frame to reduce cumulative errors. Furthermore, we propose a circle-matching-based approach to achieve even higher accuracy during observation of the full moon. To demonstrate the effectiveness of our approaches, experiments were run on true lunar observation data. The results show that the proposed approaches outperform state-of-the-art methods.
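Image-correlation registration of the kind the abstract describes can be sketched with FFT-based phase correlation, which recovers the integer translation between a frame and the reference. This is a generic numpy sketch, not the authors' exact implementation:

```python
import numpy as np

def phase_correlation_shift(ref, frame):
    """Estimate the (dy, dx) translation that maps ref onto frame."""
    R = np.fft.fft2(frame) * np.conj(np.fft.fft2(ref))
    R /= np.abs(R) + 1e-12                    # keep phase only
    corr = np.fft.ifft2(R).real               # peak marks the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape                          # wrap to signed shifts
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)
```

Registering every frame against one reference frame, rather than chaining frame-to-frame estimates, is what keeps the cumulative error the abstract mentions from building up.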
Global, Energy-Dependent Ring Current Response During Two Large Storms
NASA Astrophysics Data System (ADS)
Goldstein, J.; Angelopoulos, V.; Burch, J. L.; De Pascuale, S.; Fuselier, S. A.; Genestreti, K. J.; Kurth, W. S.; LLera, K.; McComas, D. J.; Reeves, G. D.; Spence, H. E.; Valek, P. W.
2015-12-01
Two recent large (~200 nT) geomagnetic storms occurred during 17--18 March 2015 and 22--23 June 2015. The global, energy-dependent ring current response to these two extreme events is investigated using both global imaging and multi-point in situ observations. Energetic neutral atom (ENA) imaging by the Two Wide-angle Imaging Neutral-atom Spectrometers (TWINS) mission provides a global view of ring current ions. Local measurements are provided by two multi-spacecraft missions. The two Van Allen Probes measure in situ plasma (including ion composition) and fields at ring current and plasmaspheric L values. The recently launched Magnetospheric Multiscale (MMS) comprises four spacecraft that have just begun to measure particles (including ion composition) and fields at outer magnetospheric L-values. We analyze the timing and energetics of the stormtime evolution of ring current ions, both trapped and precipitating, using TWINS ENA images and in situ data by the Van Allen Probes and MMS.
'Lyell' Panorama inside Victoria Crater (False Color)
NASA Technical Reports Server (NTRS)
2008-01-01
During four months prior to the fourth anniversary of its landing on Mars, NASA's Mars Exploration Rover Opportunity examined rocks inside an alcove called 'Duck Bay' in the western portion of Victoria Crater. The main body of the crater appears in the upper right of this stereo panorama, with the far side of the crater lying about 800 meters (half a mile) away. Bracketing that part of the view are two promontories on the crater's rim at either side of Duck Bay. They are 'Cape Verde,' about 6 meters (20 feet) tall, on the left, and 'Cabo Frio,' about 15 meters (50 feet) tall, on the right. The rest of the image, other than sky and portions of the rover, is ground within Duck Bay. Opportunity's targets of study during the last quarter of 2007 were rock layers within a band exposed around the interior of the crater, about 6 meters (20 feet) from the rim. Bright rocks within the band are visible in the foreground of the panorama. The rover science team assigned informal names to three subdivisions of the band: 'Steno,' 'Smith,' and 'Lyell.' This view combines many images taken by Opportunity's panoramic camera (Pancam) from the 1,332nd through 1,379th Martian days, or sols, of the mission (Oct. 23 to Dec. 11, 2007). Images taken through Pancam filters centered on wavelengths of 753 nanometers, 535 nanometers and 432 nanometers were mixed to produce this view, which is presented in a false-color stretch to bring out subtle color differences in the scene. Some visible patterns in dark and light tones are the result of combining frames that were affected by dust on the front sapphire window of the rover's camera. Opportunity landed on Jan. 25, 2004, Universal Time, (Jan. 24, Pacific Time) inside a much smaller crater about 6 kilometers (4 miles) north of Victoria Crater, to begin a surface mission designed to last 3 months and drive about 600 meters (0.4 mile).
Stereoscopic Integrated Imaging Goggles for Multimodal Intraoperative Image Guidance
Mela, Christopher A.; Patterson, Carrie; Thompson, William K.; Papay, Francis; Liu, Yang
2015-01-01
We have developed novel stereoscopic wearable multimodal intraoperative imaging and display systems entitled Integrated Imaging Goggles for guiding surgeries. The prototype systems offer real time stereoscopic fluorescence imaging and color reflectance imaging capacity, along with in vivo handheld microscopy and ultrasound imaging. With the Integrated Imaging Goggle, both wide-field fluorescence imaging and in vivo microscopy are provided. The real time ultrasound images can also be presented in the goggle display. Furthermore, real time goggle-to-goggle stereoscopic video sharing is demonstrated, which can greatly facilitate telemedicine. In this paper, the prototype systems are described, characterized and tested in surgeries in biological tissues ex vivo. We have found that the system can detect fluorescent targets with as low as 60 nM indocyanine green and can resolve structures down to 0.25 mm with large FOV stereoscopic imaging. The system has successfully guided simulated cancer surgeries in chicken. The Integrated Imaging Goggle is novel in 4 aspects: it is (a) the first wearable stereoscopic wide-field intraoperative fluorescence imaging and display system, (b) the first wearable system offering both large FOV and microscopic imaging simultaneously, (c) the first wearable system that offers both ultrasound imaging and fluorescence imaging capacities, and (d) the first demonstration of goggle-to-goggle communication to share stereoscopic views for medical guidance. PMID:26529249
Use of Vertical Aerial Images for Semi-Oblique Mapping
NASA Astrophysics Data System (ADS)
Poli, D.; Moe, K.; Legat, K.; Toschi, I.; Lago, F.; Remondino, F.
2017-05-01
The paper proposes a methodology for the use of the oblique sections of images from large-format photogrammetric cameras, exploiting the central-perspective geometry in the lateral parts of nadir images ("semi-oblique" images). The starting point of the investigation was the execution of a photogrammetric flight over Norcia (Italy), which was seriously damaged by the earthquake of 30/10/2016. Contrary to the original plan of oblique acquisitions, the flight was executed on 15/11/2016 using an UltraCam Eagle camera with a focal length of 80 mm, combining two flight plans rotated by 90° ("crisscross" flight). The images (GSD 5 cm) were used to extract a 2.5D DSM, sampled to an XY-grid size of 2 GSD, a 3D point cloud with a mean spatial resolution of 1 GSD, and a 3D mesh model at a resolution of 10 cm of the historic centre of Norcia for a quantitative assessment of the damages. From the acquired nadir images, the "semi-oblique" images (forward, backward, left and right views) could be extracted and processed in a modified version of the GEOBLY software for measurement and restitution purposes. The potential of such semi-oblique image acquisitions from nadir-view cameras is hereafter shown and commented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schollmeier, Marius S.; Geissel, Matthias; Shores, Jonathon E.
We present calculations for the field of view (FOV), image fluence, image monochromaticity, spectral acceptance, and image aberrations for spherical crystal microscopes, which are used as self-emission imaging or backlighter systems at large-scale high-energy-density physics facilities. Our analytic results are benchmarked with ray-tracing calculations as well as with experimental measurements from the 6.151 keV backlighter system at Sandia National Laboratories. Furthermore, the analytic expressions can be used for x-ray source positions anywhere between the Rowland circle and the object plane. We discovered that this enables quick optimization of the performance of proposed but untested bent-crystal microscope systems, to find the best compromise between FOV, image fluence, and spatial resolution for a particular application.
Super-resolution optical telescopes with local light diffraction shrinkage
Wang, Changtao; Tang, Dongliang; Wang, Yanqin; Zhao, Zeyu; Wang, Jiong; Pu, Mingbo; Zhang, Yudong; Yan, Wei; Gao, Ping; Luo, Xiangang
2015-01-01
Suffering from the giant size of objective lenses and the infeasibility of manipulating distant targets, telescopes cannot draw on present super-resolution imaging methods such as scanning near-field optical microscopy, the perfect lens, and stimulated emission depletion microscopy. In this paper, local light diffraction shrinkage associated with the optical super-oscillatory phenomenon is proposed for real-time, optical restoration of super-resolution imaging information in a telescope system. It is found that fine target features concealed in the diffraction-limited optical images of a telescope can be observed in a small local field of view, benefiting from relayed metasurface-based super-oscillatory imaging optics in which some local Fourier components beyond the cut-off frequency of the telescope can be restored. As experimental examples, a minimal resolution of 0.55 of the Rayleigh criterion is obtained, and imaging of complex targets and of large targets by superimposing multiple local fields of view is demonstrated as well. This investigation provides an avenue toward real-time, incoherent, super-resolution telescopes without manipulation of distant targets. More importantly, it gives counterintuitive evidence against the common belief that relayed optics cannot deliver more imaging detail than the objective system. PMID:26677820
Recovering the fine structures in solar images
NASA Technical Reports Server (NTRS)
Karovska, Margarita; Habbal, S. R.; Golub, L.; Deluca, E.; Hudson, Hugh S.
1994-01-01
Several examples are presented of the capability of the blind iterative deconvolution (BID) technique to recover the real point spread function when limited a priori information is available about its characteristics. To demonstrate the potential of image post-processing for probing the fine-scale and temporal variability of the solar atmosphere, the BID technique is applied to different samples of solar observations from space. The BID technique was originally proposed for correcting the effects of atmospheric turbulence on optical images. The processed images provide a detailed view of the spatial structure of the solar atmosphere at different heights, in regions with different large-scale magnetic field structures.
Alali, Sanaz; Gribble, Adam; Vitkin, I Alex
2016-03-01
A new polarimetry method is demonstrated to image the entire Mueller matrix of a turbid sample using four photoelastic modulators (PEMs) and a charge coupled device (CCD) camera, with no moving parts. Accurate wide-field imaging is enabled with a field-programmable gate array (FPGA) optical gating technique and an evolutionary algorithm (EA) that optimizes imaging times. This technique accurately and rapidly measured the Mueller matrices of air, polarization elements, and turbid phantoms. The system should prove advantageous for Mueller matrix analysis of turbid samples (e.g., biological tissues) over large fields of view, in less than a second.
Image-Enhancement Aid For The Partially Sighted
NASA Technical Reports Server (NTRS)
Lawton, T. A.; Gennery, D. B.
1989-01-01
Digital filtering enhances ability to read and to recognize objects. Possible to construct portable vision aid by combining miniature video equipment, to observe scene and display images, with very-large-scale integrated circuits to implement real-time digital image-data processing. Afflicted observer views scene through magnifier, which shifts spatial frequencies downward and thereby improves perceived image. However, the less magnification needed, the larger the scene that can be observed. Thus, one measure of effectiveness of new system is amount of magnification required with and without it. In series of tests, 27 to 70 percent more magnification was found necessary for afflicted observers to recognize unfiltered words than to recognize filtered words.
Wide-field high spatial frequency domain imaging of tissue microstructure
NASA Astrophysics Data System (ADS)
Lin, Weihao; Zeng, Bixin; Cao, Zili; Zhu, Danfeng; Xu, M.
2018-02-01
Wide-field tissue imaging is usually not capable of resolving tissue microstructure. We present High Spatial Frequency Domain Imaging (HSFDI), a noncontact imaging modality that spatially maps the microscopic scattering structures of tissue over a large field of view. Based on an analytical reflectance model of sub-diffusive light in forward-peaked, highly scattering media, HSFDI quantifies the spatially resolved parameters of the light-scattering phase function from the reflectance of structured light modulated at high spatial frequencies. We demonstrate with ex vivo cancerous tissue that HSFDI yields significant contrast and differentiation of the microstructural parameters between different tissue types and disease states.
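Spatial frequency domain methods typically recover the modulated (AC) reflectance amplitude from three images taken under phase-shifted sinusoidal illumination. The standard three-phase demodulation formula, sketched below, is a common building block of such pipelines; the paper's exact processing may differ:

```python
import numpy as np

def demodulate_ac(i1, i2, i3):
    """AC (modulated) reflectance amplitude from three images acquired under
    sinusoidal illumination phase-shifted by 0, 120 and 240 degrees."""
    return (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
```

Applied pixel-wise to three camera frames, this isolates the amplitude of the projected spatial frequency independently of the DC reflectance and the local phase.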
Kim, Joowhan; Min, Sung-Wook; Lee, Byoungho
2007-10-01
Integral floating display is a recently proposed three-dimensional (3D) display method which provides a dynamic 3D image in the vicinity of an observer. It has a viewing window, and only through this window can correct 3D images be observed. However, the positional difference between the viewing window and the floating image causes a limited viewing zone in the integral floating system. In this paper, we present the principle and experimental results of adjusting the location of the viewing window of the integral floating display system by modifying the elemental image region used for integral imaging. We explain the characteristics of the viewing window and propose how to move it so as to maximize the viewing zone.
Minimal resin embedding of multicellular specimens for targeted FIB-SEM imaging.
Schieber, Nicole L; Machado, Pedro; Markert, Sebastian M; Stigloher, Christian; Schwab, Yannick; Steyer, Anna M
2017-01-01
Correlative light and electron microscopy (CLEM) is a powerful tool for ultrastructural analysis of targeted tissues or cells. The large field of view of the light microscope (LM) enables quick and efficient surveys of the whole specimen. It is also compatible with live imaging, giving access to functional assays. CLEM protocols take advantage of these features to efficiently retrace the position of targeted sites when switching from one modality to the other. They most often rely on anatomical cues that are visible by both light and electron microscopy. We present here a simple workflow in which multicellular specimens are embedded in minimal amounts of resin, exposing their surface topology so that it can be imaged by scanning electron microscopy (SEM). LM and SEM both benefit from a large field of view that can cover whole model organisms. As a result, targeting specific anatomical locations by focused ion beam-SEM (FIB-SEM) tomography becomes straightforward. We illustrate this application on three different model organisms used in our laboratory: the zebrafish embryo Danio rerio, the marine worm Platynereis dumerilii, and the dauer larva of the nematode Caenorhabditis elegans. Here we focus on the experimental steps to reduce the amount of resin covering the samples and to image the specimens inside an FIB-SEM. We expect this approach to have widespread applications for volume electron microscopy on multiple model organisms. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Yang, Jie; Messinger, David W.; Dube, Roger R.
2018-03-01
Bloodstain detection and discrimination from nonblood substances on various substrates are critical in forensic science, as bloodstains are a key source for confirmatory DNA tests. Conventional bloodstain detection methods often involve time-consuming sample preparation, a chance of harm to investigators, the possibility of destruction of blood samples, and acquisition of too little data at crime scenes, whether in the field or in the laboratory. An imaging method has the advantages of being nondestructive, noncontact, and real-time, and of covering a large field of view. The abundant spectral information provided by multispectral imaging makes it a potential presumptive bloodstain detection and discrimination method. This article proposes an interference filter (IF) based, area-scanning, three-spectral-band crime scene imaging system for forensic bloodstain detection and discrimination. The impact of large angles of view on the spectral shift of the calibrated IFs is determined, for both detecting and discriminating bloodstains from visually similar substances on multiple substrates. Spectral features in the visible and near-infrared portion of the spectrum are employed by the relative band depth method. This study shows that a 1 ml bloodstain on black felt, gray felt, red felt, white cotton, white polyester, and raw wood can be detected. Bloodstains on these substrates can be discriminated from cola, coffee, ketchup, orange juice, red wine, and green tea.
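The relative band depth method mentioned above compares the reflectance at an absorption band center with a continuum interpolated between two shoulder bands. A minimal sketch follows; the interpolation weight and any band choices are illustrative, not those of the paper:

```python
def relative_band_depth(r_left, r_center, r_right, w=0.5):
    """Relative band depth of an absorption feature.

    The continuum is linearly interpolated between the two shoulder-band
    reflectances; w is the weight of the left shoulder (0.5 = midpoint).
    Returns 1 - R_center / R_continuum: larger means a deeper feature."""
    continuum = w * r_left + (1.0 - w) * r_right
    return 1.0 - r_center / continuum
```

Computed per pixel across the three bands, this yields a map in which deep hemoglobin-like absorption features stand out against substrates and visually similar liquids.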
NASA Technical Reports Server (NTRS)
2004-01-01
8 January 2004 This is how Mars appeared to the Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) wide angle system on 25 December 2003, the day that Beagle 2 and Mars Express reached the red planet. The large, dark region just left of center is Syrtis Major, a persistent low albedo terrain known to astronomers for nearly four centuries before the first spacecraft went to Mars. Immediately to the right (east) of Syrtis Major is the somewhat circular plain, Isidis Planitia. Beagle 2 arrived in Isidis Planitia only about 18 minutes before Mars Global Surveyor flew over the region and acquired a portion of this global view. Relative to other global images of Mars acquired by MGS over the past several martian years, the surface features were not as sharp and distinct on 25 December 2003 because of considerable haze kicked up by large dust storms in the western and southern hemispheres during the previous two weeks. The picture is a composite of several MGS MOC red and blue daily global images that have been map-projected and digitally wrapped to a sphere. Although the effect here is minor, inspection of this mosaic shows zones that appear smudged or blurry. The high dust opacity on 25 December impacted MOC's oblique viewing geometry toward the edges of each orbit's daily global mapping image, thus emphasizing the 'blurry' zones between images acquired on successive orbits.
Enjilela, Esmaeil; Lee, Ting-Yim; Hsieh, Jiang; Wisenberg, Gerald; Teefy, Patrick; Yadegari, Andrew; Bagur, Rodrigo; Islam, Ali; Branch, Kelley; So, Aaron
2018-03-01
We implemented and validated a compressed sensing (CS) based algorithm for reconstructing dynamic contrast-enhanced (DCE) CT images of the heart from sparsely sampled X-ray projections. DCE CT imaging of the heart was performed on five normal and ischemic pigs after contrast injection. DCE images were reconstructed with filtered backprojection (FBP) and CS from all projections (984-view) and 1/3 of all projections (328-view), and with CS from 1/4 of all projections (246-view). Myocardial perfusion (MP) measurements with each protocol were compared to those with the reference 984-view FBP protocol. Both the 984-view CS and 328-view CS protocols were in good agreement with the reference protocol. The Pearson correlation coefficients of 984-view CS and 328-view CS determined from linear regression analyses were 0.98 and 0.99, respectively. The corresponding mean biases of MP measurement determined from Bland-Altman analyses were 2.7 and 1.2 ml/min/100g. When only 328 projections were used for image reconstruction, CS was more accurate than FBP for MP measurement with respect to 984-view FBP. However, CS failed to generate MP maps comparable to those of 984-view FBP when only 246 projections were used for image reconstruction. DCE heart images reconstructed from one-third of a full projection set with CS were minimally affected by aliasing artifacts, leading to accurate MP measurements with the effective dose reduced to just 33% of the conventional full-view FBP method. The proposed CS sparse-view image reconstruction method could facilitate the implementation of sparse-view dynamic acquisition for ultra-low-dose CT MP imaging. Copyright © 2017 Elsevier B.V. All rights reserved.
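The agreement statistics quoted above, Pearson correlation plus Bland-Altman mean bias and limits of agreement, can be computed as follows. This is a generic sketch of the standard definitions, not the authors' analysis code:

```python
import numpy as np

def agreement_stats(reference, test):
    """Pearson correlation and Bland-Altman statistics between paired
    measurements (e.g., reference 984-view FBP vs sparse-view CS perfusion)."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    diff = test - reference
    r = np.corrcoef(reference, test)[0, 1]        # Pearson correlation
    bias = diff.mean()                            # Bland-Altman mean bias
    half = 1.96 * diff.std(ddof=1)                # 95% limits of agreement half-width
    return r, bias, (bias - half, bias + half)
```

A small bias with narrow limits of agreement, as reported for the 328-view CS protocol, indicates the sparse-view measurements can substitute for the full-view reference.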
New Tools for Viewing Spectrally and Temporally-Rich Remote Sensing Imagery
NASA Astrophysics Data System (ADS)
Bradley, E. S.; Toomey, M. P.; Roberts, D. A.; Still, C. J.
2010-12-01
High-frequency, temporally extensive remote sensing datasets (GOES: every 30 minutes; Santa Cruz Island webcam: nearly 5 years at 10-minute intervals) and airborne imaging spectrometry (AVIRIS, with 224 spectral bands) present exciting opportunities for education, synthesis, and analysis. However, the large file volume and size can make holistic review and exploration difficult. In this research, we explore two options for visualization: (1) a web-based portal for time-series analysis, PanOpt, and (2) Google Earth-based timestamped image overlays. PanOpt is an interactive website (http://zulu.geog.ucsb.edu/panopt/) which integrates high-frequency (GOES) and multispectral (MODIS) satellite imagery with webcam ground-based repeat photography. Side-by-side comparison of satellite imagery with webcam images supports analysis of atmospheric and environmental phenomena. In this proof of concept, we have integrated four years of imagery for a multi-view FogCam on Santa Cruz Island off the coast of Southern California with two years of GOES-11 and four years of MODIS Aqua imagery subsets for the area (14,000 km2). From the PHP-based website, users can search the data (date, time of day, etc.), specify timestep and display size, and then view the image stack as animations or in matrix form. Extracted metrics for regions of interest (ROIs) can be viewed in different formats, including time-series and scatter plots. Through click and mouseover actions on the hyperlink-enabled data points, users can view the corresponding images. This directly melds the quantitative and qualitative aspects and could be particularly effective both for education and for anomaly interpretation. We have also extended this project to Google Earth with timestamped GOES and MODIS image overlays, which can be controlled using the temporal slider and linked to a screen chart of ancillary meteorological data.
The automated ENVI/IDL script for generating KMZ overlays was also applied for generating same-day visualization of AVIRIS acquisitions as part of the Gulf of Mexico oil spill response. This supports location-focused imagery review and synthesis, which is critical for successfully imaging moving targets, such as oil slicks.
A Unified Taxonomic Approach to the Laboratory Assessment of Visionic Devices
2006-09-01
the ratification stage with member nations. Marasco and Task [4] presented a large array of tests applicable to image intensification-based visionic...aircraft.” In print. 4. Marasco, P. L., and Task, H. L. 1999. “Optical characterization of wide field-of-view night vision devices,” in
ERIC Educational Resources Information Center
Harwood, Kenneth
Professors of rhetoric are viewing with concern the prospect of 15 years of declining enrollment in higher education. It has been speculated that the decline in enrollment will be less severe in large, well-established, urban institutions with low tuition, a clear mission or well-defined public image, well-planned modest growth, established…
Spectral Radiance of a Large-Area Integrating Sphere Source
Walker, James H.; Thompson, Ambler
1995-01-01
The radiance and irradiance calibration of large field-of-view scanning and imaging radiometers for remote sensing and surveillance applications has resulted in the development of novel calibration techniques. One of these techniques is the employment of large-area integrating sphere sources as radiance or irradiance secondary standards. To assist the National Aeronautics and Space Administration’s space-based ozone measurement program, the spectral radiance of a commercially available large-area, internally illuminated integrating sphere source was characterized in the wavelength region from 230 nm to 400 nm at the National Institute of Standards and Technology. Spectral radiance determinations and spatial mappings of the source indicate that carefully designed large-area integrating sphere sources can be measured with a 1 % to 2 % expanded uncertainty (two standard deviation estimate) in the near ultraviolet, with spatial nonuniformities of 0.6 % or smaller across a 20 cm diameter exit aperture. A method is proposed for the calculation of the final radiance uncertainties of the source which includes the field of view of the instrument being calibrated. PMID:29151725
An automated 3D reconstruction method of UAV images
NASA Astrophysics Data System (ADS)
Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping
2015-10-01
In this paper a novel, fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle system (UAV) is presented, which requires neither prior camera calibration nor any other external knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction of low-altitude UAV images, and has great potential for the acquisition of spatial information in large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
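The image-topology idea, pruning the quadratic matching problem with flight-control data, can be sketched as a simple distance filter over camera positions. The input format here is hypothetical, and the paper's topology map may be more elaborate:

```python
import itertools
import math

def candidate_pairs(positions, max_dist):
    """Keep only image pairs whose camera positions are within max_dist.

    positions: list of (x, y) camera locations in metres, e.g. from the
    UAV flight log (hypothetical input). Returns index pairs to match."""
    pairs = []
    for (i, p), (j, q) in itertools.combinations(enumerate(positions), 2):
        if math.hypot(p[0] - q[0], p[1] - q[1]) <= max_dist:
            pairs.append((i, j))
    return pairs
```

Feature matching then runs only on the returned pairs instead of all n(n-1)/2 combinations, which is where the reported speed-up comes from.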
Application of Remote Sensing in Building Damages Assessment after Moderate and Strong Earthquake
NASA Astrophysics Data System (ADS)
Tian, Y.; Zhang, J.; Dou, A.
2003-04-01
Earthquakes are a major natural disaster in modern society. However, we still cannot predict the time and place of their occurrence accurately. It is therefore of great importance to survey damage information when an earthquake occurs, which can help us mitigate losses and implement fast damage evaluation. In this paper, we use remote sensing techniques for these purposes. Remotely sensed satellite images often view a large area of land at a time. There are several kinds of satellite images, of different spatial and spectral resolutions. The Landsat-4/5 TM sensor views the ground at 30 m resolution, while Landsat-7 ETM Plus has a resolution of 15 m in the panchromatic waveband. The SPOT satellite can provide images with higher resolutions. Images obtained pre- and post-earthquake can help greatly in identifying damage to moderate- and large-size buildings. In this paper, we propose a method to implement quick damage assessment by analyzing both pre- and post-earthquake satellite images. First, the images are geographically registered together with low RMS (Root Mean Square) error. Then, we clip out residential areas by overlaying the images with existing vector layers through Geographic Information System (GIS) software. We present a new change detection algorithm to quantitatively identify the degree of damage. An empirical or semi-empirical model is then established by analyzing the real damage degree and the changes in pixel values of the same ground objects. Experimental results show that there is a good linear relationship between changes in pixel values and ground damage, which proves the potential of remote sensing in post-quake fast damage assessment. Keywords: Damages Assessment, Earthquake Hazard, Remote Sensing
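A minimal version of the workflow described above might compute a normalized change index on co-registered pre- and post-event images, then fit the empirical linear model relating change to damage degree. This is an illustrative sketch, not the authors' algorithm:

```python
import numpy as np

def change_index(pre, post):
    """Normalized absolute change in pixel values between co-registered
    pre- and post-earthquake images (a simple change-detection index)."""
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    return np.abs(post - pre) / np.maximum(pre + post, 1e-9)

def fit_damage_model(change_values, damage_degrees):
    """Least-squares fit of the empirical linear model damage = a*change + b,
    calibrated against ground-surveyed damage degrees."""
    a, b = np.polyfit(change_values, damage_degrees, 1)
    return a, b
```

Once calibrated on surveyed sites, the linear model can be applied image-wide to map predicted damage in unvisited residential blocks.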
A standardised protocol for texture feature analysis of endoscopic images in gynaecological cancer.
Neofytou, Marios S; Tanos, Vasilis; Pattichis, Marios S; Pattichis, Constantinos S; Kyriacou, Efthyvoulos C; Koutsouris, Dimitris D
2007-11-29
In the development of tissue classification methods, classifiers rely on significant differences between texture features extracted from normal and abnormal regions. Yet, significant differences can arise due to variations in the image acquisition method. For endoscopic imaging of the endometrium, we propose a standardized image acquisition protocol to eliminate significant statistical differences due to variations in: (i) the distance from the tissue (panoramic vs close up), (ii) differences in viewing angles and (iii) color correction. We investigate texture feature variability for a variety of targets encountered in clinical endoscopy. All images were captured at clinically optimum illumination and focus using 720 x 576 pixels and 24-bit color for: (i) a variety of testing targets from a color palette with a known color distribution, (ii) different viewing angles, and (iii) two different distances from a calf endometrium and from a chicken cavity. Also, human images from the endometrium were captured and analysed. For texture feature analysis, three different sets were considered: (i) Statistical Features (SF), (ii) Spatial Gray Level Dependence Matrices (SGLDM), and (iii) Gray Level Difference Statistics (GLDS). All images were gamma corrected and the extracted texture feature values were compared against the texture feature values extracted from the uncorrected images. Statistical tests were applied to compare images from different viewing conditions so as to determine any significant differences. For the proposed acquisition procedure, results indicate that there is no significant difference in texture features between the panoramic and close up views and between angles. For a calibrated target image, gamma correction provided an acquired image that was a significantly better approximation to the original target image. In turn, this implies that the texture features extracted from the corrected images provided better approximations to the original images.
Within the proposed protocol, for human ROIs, we have found that there is a large number of texture features that showed significant differences between normal and abnormal endometrium. This study provides a standardized protocol for avoiding any significant texture feature differences that may arise due to variability in the acquisition procedure or the lack of color correction. After applying the protocol, we have found that significant differences in texture features will only be due to the fact that the features were extracted from different types of tissue (normal vs abnormal).
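Of the three feature sets, Gray Level Difference Statistics (GLDS) are the simplest to sketch: a histogram of absolute gray-level differences at a fixed pixel displacement, summarized by contrast, mean, energy and entropy. This is an illustrative implementation of the textbook definitions, not the study's code:

```python
import numpy as np

def glds_features(image, dx=1, dy=0):
    """GLDS features for one displacement (dx, dy), dx, dy >= 0."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    # Absolute gray-level differences between each pixel and its neighbor.
    diff = np.abs(img[0:h - dy, 0:w - dx] - img[dy:h, dx:w])
    # Normalized histogram of the differences.
    p, _ = np.histogram(diff, bins=int(diff.max()) + 1, range=(0, diff.max() + 1))
    p = p / p.sum()
    levels = np.arange(p.size)
    return {
        "contrast": float(np.sum(levels ** 2 * p)),
        "mean": float(np.sum(levels * p)),
        "energy": float(np.sum(p ** 2)),            # angular second moment
        "entropy": float(-np.sum(p[p > 0] * np.log2(p[p > 0]))),
    }
```

Smooth tissue yields a difference histogram concentrated near zero (low contrast, high energy); coarse or abnormal texture spreads the histogram out.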
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lanekoff, Ingela T.; Heath, Brandi S.; Liyu, Andrey V.
2012-10-02
An automated platform has been developed for acquisition and visualization of mass spectrometry imaging (MSI) data using nanospray desorption electrospray ionization (nano-DESI). The new system enables robust operation of the nano-DESI imaging source over many hours. This is achieved by controlling the distance between the sample and the probe by mounting the sample holder onto an automated XYZ stage and defining the tilt of the sample plane. This approach is useful for imaging of relatively flat samples such as thin tissue sections. Custom software called MSI QuickView was developed for visualization of large data sets generated in imaging experiments. MSI QuickView enables fast visualization of the imaging data during data acquisition and detailed processing after the entire image is acquired. The performance of the system is demonstrated by imaging rat brain tissue sections. High-resolution mass analysis combined with MS/MS experiments enabled identification of lipids and metabolites in the tissue section. In addition, the high dynamic range and sensitivity of the technique allowed us to generate ion images of low-abundance isobaric lipids. A high-spatial-resolution image acquired over a small region of the tissue section revealed the spatial distribution of an abundant brain metabolite, creatine, in the white and gray matter that is consistent with literature data obtained using magnetic resonance spectroscopy.
NASA Technical Reports Server (NTRS)
Leberl, Franz; Karspeck, Milan; Millot, Michel; Maurice, Kelly; Jackson, Matt
1992-01-01
This final report summarizes the work done from mid-1989 until January 1992 to develop a prototype set of tools for the analysis of EOS-type images. Such images are characterized by great multiplicity and quantity. A single 'snapshot' of EOS-type imagery may contain several hundred component images, so that at a particular pixel one finds multiple gray values. A prototype EOS-sensor, AVIRIS, has 224 gray values at each pixel. The work focused on the ability to utilize very large images and continuously roam through those images, zoom, and hold more than one black-and-white or color image, for example for stereo viewing or for image comparisons. A second focus was the utilization of so-called 'image cubes', where multiple images need to be co-registered and then jointly analyzed, viewed, and manipulated. The target computer platform selected was a high-performance graphics superworkstation, the Stardent 3000. This particular platform offered many specialized graphics tools, such as the Application Visualization System (AVS) and Dore, but it lacked commercial third-party software for relational databases, image processing, etc. The project was able to cope with these limitations, and a phase-3 activity is currently being negotiated to port the software and enhance it for use with a novel graphics superworkstation to be introduced into the market in the spring of 1993.
NASA Astrophysics Data System (ADS)
Mazlin, Viacheslav; Xiao, Peng; Dalimier, Eugénie; Grieve, Kate; Irsch, Kristina; Sahel, José; Fink, Mathias; Boccara, Claude
2018-02-01
Despite obvious improvements in visualization of the in vivo cornea through faster imaging speeds and higher axial resolutions, cellular imaging remains an unresolved task for OCT, as en face viewing with high lateral resolution is required. The latter is possible with FFOCT, a method that relies on a camera, moderate numerical aperture (NA) objectives and an incoherent light source to provide en face images with micrometer-level resolution. Recently, we demonstrated for the first time the ability of FFOCT to capture images from the in vivo human cornea [1]. In the current paper we present an extensive study of the appearance of healthy in vivo human corneas under FFOCT examination. En face corneal images with micrometer-level resolution were obtained from three healthy subjects. For each subject it was possible to acquire images through the entire corneal depth and to visualize the epithelium structures, Bowman's layer, sub-basal nerve plexus (SNP) fibers, anterior, middle and posterior stroma, and endothelial cells with nuclei. Dimensions and densities of the structures visible with FFOCT are in agreement with those seen by other cornea imaging methods. The cellular-level detail in the images obtained, together with the relatively large field of view (FOV) and contactless imaging, make this device a promising candidate to become a new tool in ophthalmological diagnostics.
Spatial imaging of UV emission from Jupiter and Saturn
NASA Technical Reports Server (NTRS)
Clarke, J. T.; Moos, H. W.
1981-01-01
Spatial imaging with the IUE is accomplished both by moving one of the apertures in a series of exposures and within the large aperture in a single exposure. The image of the field of view subtended by the large aperture is focused directly onto the detector camera face at each wavelength; since the spatial resolution of the instrument is 5 to 6 arc sec and the aperture extends 23.0 by 10.3 arc sec, imaging both parallel and perpendicular to the dispersion is possible in a single exposure. The correction for the sensitivity variation along the slit at 1216 A is obtained from exposures of diffuse geocoronal H Ly alpha emission. The relative size of the aperture superimposed on the apparent discs of Jupiter and Saturn in a typical observation is illustrated. By moving the planet image 10 to 20 arc sec along the major axis of the aperture (which is constrained to point roughly north-south), maps of the discs of these planets are obtained with 6 arc sec spatial resolution.
Perspective View with Landsat Overlay, Sacramento, Calif.
NASA Technical Reports Server (NTRS)
2002-01-01
California's state capital, Sacramento, can be seen clustered along the American and Sacramento Rivers in this computer-generated perspective viewed from the west. Folsom Lake is in the center and the Sierra Nevada is above, with the edge of Lake Tahoe just visible at top center.
This 3-D perspective view was generated using topographic data from the Shuttle Radar Topography Mission (SRTM) and an enhanced color Landsat 5 satellite image. Topographic expression is exaggerated two times. Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data matches the 30-meter (98-foot) resolution of most Landsat images and will substantially help in analyzing the large and growing Landsat image archive. Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C. Size: scale varies in this perspective image. Location: 38.6 deg. North lat., 121.3 deg. West lon. Orientation: looking east. Image Data: Landsat Bands 3, 2, 1 as red, green, blue, respectively. Original Data Resolution: SRTM 1 arcsecond (30 meters or 98 feet), Thematic Mapper 1 arcsecond (30 meters or 98 feet). Date Acquired: February 2000 (SRTM)
Perspective View with Landsat Overlay, Mount Shasta, Calif.
NASA Technical Reports Server (NTRS)
2002-01-01
At more than 4,300 meters (14,000 feet), Mount Shasta is California's tallest volcano and part of the Cascade chain of volcanoes extending south from Washington. This computer-generated perspective viewed from the west also includes Shastina, a slightly smaller volcanic cone left of Shasta's summit, and Black Butte, another volcano in the right foreground.
This 3-D perspective view was generated using topographic data from the Shuttle Radar Topography Mission (SRTM) and an enhanced color Landsat 5 satellite image. Topographic expression is exaggerated two times. Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data matches the 30-meter (98-foot) resolution of most Landsat images and will substantially help in analyzing the large and growing Landsat image archive. Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C. Size: scale varies in this perspective image. Location: 41.4 deg. North lat., 122.3 deg. West lon. Orientation: looking east. Image Data: Landsat Bands 3, 2, 1 as red, green, blue, respectively. Original Data Resolution: SRTM 1 arcsecond (30 meters or 98 feet), Thematic Mapper 1 arcsecond (30 meters or 98 feet). Date Acquired: February 2000 (SRTM)
NASA Astrophysics Data System (ADS)
Murphy, B. S.; Egbert, G. D.
2017-12-01
In addition to its broadband seismic component, the USArray has also been collecting long-period magnetotelluric (MT) data across the continental United States. These data allow for an unprecedented three-dimensional view of the lithospheric geoelectric structure of the continent. As electrical conductivity and seismic properties provide complementary views of the Earth, synthesizing seismic and MT images can reduce the ambiguity inherent in each technique and can thereby allow for tighter constraints on lithospheric properties. In the western US, comparison of MT and seismic results has clarified some issues (e.g., with regard to fluids and volatiles) and has raised some new questions, but for the most part the two techniques provide views that generally mesh well together. In sharp contrast, MT and seismic results in the eastern US lead to seemingly contradictory conclusions about lithospheric properties. The most striking example is the Piedmont region of the southeastern United States; here seismic images suggest a relatively thin, warm Phanerozoic lithosphere, while MT images show a large, deep, highly resistive body that seems to require thick, cold, even cratonic lithosphere. While these MT results shed intriguing new light on the enigmatic post-Paleozoic history of eastern North America, the strong anticorrelation with seismic images remains a mystery. A similar anticorrelation appears to also exist in the Northern Appalachians, and preliminary views of the geoelectric signature of the well-studied Northern Appalachian Anomaly suggest that synthesizing the seismic and MT images of that region may be nontrivial. Clearly, a major challenge in continued analysis of USArray data is the reconciliation of seemingly contradictory seismic and MT images.
The path forward in addressing this problem will require closer collaboration between seismologists and MT scientists and will likely require a careful reconsideration of how each group interprets the physical meaning of their respective anomalies.
Perspective View with Landsat Overlay, Salt Lake City Olympics Venues, Utah
NASA Technical Reports Server (NTRS)
2002-01-01
The 2002 Winter Olympics are hosted by Salt Lake City at several venues within the city, in nearby cities, and within the adjacent Wasatch Mountains. This computer generated perspective image provides a northward looking 'view from space' that includes all of these Olympic sites. In the south, next to Utah Lake, Provo hosts the ice hockey competition. In the north, northeast of the Great Salt Lake, Ogden hosts curling, and the nearby Snow Basin ski area hosts the downhill events. In between, southeast of the Great Salt Lake, Salt Lake City hosts the Olympic Village and the various skating events. Further east, across the Wasatch Mountains, the Park City area ski resorts host the bobsled, ski jumping, and snowboarding events. The Winter Olympics are always hosted in mountainous terrain. This view shows the dramatic landscape that makes the Salt Lake City region a world-class center for winter sports.
This 3-D perspective view was generated using topographic data from the Shuttle Radar Topography Mission (SRTM) and a Landsat 5 satellite image mosaic. Topographic expression is exaggerated four times. For a full-resolution, annotated version of this image, please select Figure 1, below: [figure removed for brevity, see original site] Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data matches the 30-meter (98-foot) resolution of most Landsat images and will substantially help in analyzing the large and growing Landsat image archive, managed by the U.S. Geological Survey (USGS). Elevation data used in this image were acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C. Size: View width 48.8 kilometers (30.2 miles), View distance 177 kilometers (110 miles) Location: 41 deg. North lat., 112.0 deg. West lon. Orientation: View North, 20 degrees below horizontal Image Data: Landsat Bands 3, 2, 1 as red, green, blue, respectively. Original Data Resolution: SRTM 1 arcsecond (30 meters or 98 feet), Thematic Mapper 30 meters (98 feet) Date Acquired: February 2000 (SRTM), 1990s (Landsat 5 image mosaic)
Stereoscopic wide field of view imaging system
NASA Technical Reports Server (NTRS)
Prechtl, Eric F. (Inventor); Sedwick, Raymond J. (Inventor); Jonas, Eric M. (Inventor)
2011-01-01
A stereoscopic imaging system incorporates a plurality of imaging devices or cameras to generate a high resolution, wide field of view image database from which images can be combined in real time to provide wide field of view or panoramic or omni-directional still or video images.
MIGHTEE: The MeerKAT International GHz Tiered Extragalactic Exploration
NASA Astrophysics Data System (ADS)
Taylor, A. Russ; Jarvis, Matt
2017-05-01
The MeerKAT telescope is the precursor of the Square Kilometre Array mid-frequency dish array to be deployed later this decade on the African continent. MIGHTEE is one of the MeerKAT large survey projects designed to pathfind SKA key science in cosmology and galaxy evolution. Through a tiered radio continuum deep imaging project including several fields totaling 20 square degrees to microJy sensitivities and an ultra-deep image of a single 1 square degree field of view, MIGHTEE will explore dark matter and large-scale structure, the evolution of galaxies, including AGN activity and star formation as a function of cosmic time and environment, the emergence and evolution of magnetic fields in galaxies, and the magnetic counterpart to the large-scale structure of the universe.
Design and deployment of a large brain-image database for clinical and nonclinical research
NASA Astrophysics Data System (ADS)
Yang, Guo Liang; Lim, Choie Cheio Tchoyoson; Banukumar, Narayanaswami; Aziz, Aamer; Hui, Francis; Nowinski, Wieslaw L.
2004-04-01
An efficient database is an essential component of organizing diverse information on image metadata and patient information for research in medical imaging. This paper describes the design, development and deployment of a large database system serving as a brain image repository that can be used across different platforms in various medical research studies. It forms the infrastructure that links hospitals and institutions together and shares data among them. The database contains patient-, pathology-, image-, research- and management-specific data. The functionalities of the database system include image uploading, storage, indexing, downloading and sharing as well as database querying and management, with security and data anonymization concerns addressed. The database has a multi-tier client-server architecture comprising a Relational Database Management System, Security Layer, Application Layer and User Interface. An image source adapter has been developed to handle most of the popular image formats. The database has a user interface based on web browsers and is easy to use. We used the Java programming language for its platform independence and vast function libraries. The brain image database can sort data according to clinically relevant information, which can be used effectively in research from the clinicians' point of view. The database is suitable for validating algorithms on large populations of cases. Medical images for processing can be identified and organized based on information in image metadata. Clinical research in various pathologies can thus be performed with greater efficiency, and large image repositories can be managed more effectively. A prototype of the system has been installed in a few hospitals and is working to the satisfaction of the clinicians.
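As a rough illustration of the kind of relational design the abstract describes (patient-, pathology- and image-specific tables queried by clinically relevant metadata), the following sketch uses SQLite. All table and column names are assumptions for illustration, not the actual schema of the described system.

```python
import sqlite3

# Illustrative schema only: table and column names are assumptions,
# not the actual design of the system described in the abstract.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE patient  (patient_id INTEGER PRIMARY KEY, age INTEGER, sex TEXT);
CREATE TABLE pathology(pathology_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE image    (image_id INTEGER PRIMARY KEY,
                       patient_id INTEGER REFERENCES patient,
                       pathology_id INTEGER REFERENCES pathology,
                       modality TEXT, voxel_size_mm REAL, anonymized INTEGER);
""")
cur.execute("INSERT INTO patient VALUES (1, 54, 'F')")
cur.execute("INSERT INTO pathology VALUES (10, 'glioma')")
cur.execute("INSERT INTO image VALUES (100, 1, 10, 'MR-T1', 1.0, 1)")
conn.commit()

# Select images by clinically relevant metadata, as the abstract describes.
rows = cur.execute("""
    SELECT i.image_id, p.name, i.modality
    FROM image i JOIN pathology p ON i.pathology_id = p.pathology_id
    WHERE p.name = 'glioma' AND i.anonymized = 1
""").fetchall()
print(rows)  # → [(100, 'glioma', 'MR-T1')]
```

The join-on-pathology pattern is what lets such a repository "sort data according to clinically relevant information" for algorithm validation on large case populations.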
Toshiba TDF-500 High Resolution Viewing And Analysis System
NASA Astrophysics Data System (ADS)
Roberts, Barry; Kakegawa, M.; Nishikawa, M.; Oikawa, D.
1988-06-01
A high resolution, operator interactive, medical viewing and analysis system has been developed by Toshiba and Bio-Imaging Research. This system provides many advanced features including high resolution displays, a very large image memory and advanced image processing capability. In particular, the system provides CRT frame buffers capable of update in one frame period, an array processor capable of image processing at operator interactive speeds, and a memory system capable of updating multiple frame buffers at frame rates whilst supporting multiple array processors. The display system provides 1024 x 1536 display resolution at 40Hz frame and 80Hz field rates. In particular, the ability to provide whole or partial update of the screen at the scanning rate is a key feature. This allows multiple viewports or windows in the display buffer with both fixed and cine capability. To support image processing features such as windowing, pan, zoom, minification, filtering, ROI analysis, multiplanar and 3D reconstruction, a high performance CPU is integrated into the system. This CPU is an array processor capable of up to 400 million instructions per second. To support the multiple viewers' and array processors' instantaneous high memory bandwidth requirements, an ultra fast memory system is used. This memory system has a bandwidth capability of 400MB/sec and a total capacity of 256MB. This bandwidth is more than adequate to support several high resolution CRTs and also the fast processing unit. This fully integrated approach allows effective real time image processing. The integrated design of the viewing system, memory system and array processor is key to the imaging system. This paper describes the architecture of the imaging system.
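A back-of-envelope check of the quoted figures (1024 x 1536 resolution, 40 Hz frame rate, 400 MB/s memory bandwidth) confirms the bandwidth claim. The 8-bits-per-pixel depth assumed here is illustrative; the actual pixel depth is not stated in the abstract.

```python
# Back-of-envelope check of the quoted figures. The 8 bits/pixel
# depth is an assumption; the abstract does not state the pixel depth.
width, height = 1024, 1536
frame_rate_hz = 40
bytes_per_pixel = 1

per_display = width * height * bytes_per_pixel * frame_rate_hz  # bytes/s
memory_bw = 400e6  # quoted memory system bandwidth, bytes/s

print(per_display / 1e6)          # ≈ 62.9 MB/s per full-rate display stream
print(memory_bw // per_display)   # whole display streams the memory can feed
```

At roughly 63 MB/s per display stream, the 400 MB/s memory system can indeed feed several high-resolution CRTs with headroom left for the array processor, consistent with the abstract's claim.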
Extratropical Cyclone in the Southern Ocean
NASA Technical Reports Server (NTRS)
2002-01-01
These images from the Multi-angle Imaging SpectroRadiometer (MISR) portray an occluded extratropical cyclone situated in the Southern Ocean, about 650 kilometers south of the Eyre Peninsula, South Australia. The left-hand image, a true-color view from MISR's nadir (vertical-viewing) camera, shows clouds just south of the Yorke Peninsula and the Murray-Darling river basin in Australia. Retrieved cloud-tracked wind velocities are indicated by the superimposed arrows. The image on the right displays cloud-top heights. Areas where cloud heights could not be retrieved are shown in black. Both the wind vectors and the cloud heights were derived using data from multiple MISR cameras within automated computer processing algorithms. The stereoscopic algorithms used to generate these results are still being refined, and future versions of these products may show modest changes. Extratropical cyclones are the dominant weather system at midlatitudes, and the term is used generically for regional low-pressure systems in the mid- to high-latitudes. In the southern hemisphere, cyclonic rotation is clockwise. These storms obtain their energy from temperature differences between air masses on either side of warm and cold fronts, and their characteristic pattern is of warm and cold fronts radiating out from a migrating low pressure center which forms, deepens, and dissipates as the fronts fold and collapse on each other. The center of this cyclone has started to decay, with the band of cloud to the south most likely representing the main front that was originally connected with the cyclonic circulation. These views were acquired on October 11, 2001, and the large view represents an area of about 380 kilometers x 1900 kilometers. Image courtesy NASA/GSFC/LaRC/JPL, MISR Team.
Fiducial marker for correlating images
Miller, Lisa Marie [Rocky Point, NY; Smith, Randy J [Wading River, NY; Warren, John B [Port Jefferson, NY; Elliott, Donald [Hampton Bays, NY
2011-06-21
The invention relates to a fiducial marker having a marking grid that is used to correlate and view images produced by different imaging modalities or different imaging and viewing modalities. More specifically, the invention relates to a fiducial marking grid that has a grid pattern for producing a viewing image and/or a first analytical image that can be overlaid with at least one other, second analytical image in order to view a light path or to correlate images from different imaging modalities. Depending on the analysis, the grid pattern has a single layer of a certain thickness or at least two layers of certain thicknesses. In either case, the grid pattern is imageable by each imaging or viewing modality used in the analysis. Further, when viewing a light path, the light path of the analytical modality cannot be visualized by the viewing modality (e.g., a light microscope objective). By correlating these images, the ability to analyze a thin sample that is, for example, biological in nature yet contains trace metal ions is enhanced. Specifically, it is desired to analyze both the organic matter of the biological sample and the trace metal ions contained within it without adding or using extrinsic labels or stains.
Oblique Aerial Photography Tool for Building Inspection and Damage Assessment
NASA Astrophysics Data System (ADS)
Murtiyoso, A.; Remondino, F.; Rupnik, E.; Nex, F.; Grussenmeyer, P.
2014-11-01
Aerial photography has a long history of being employed for mapping purposes due to some of its main advantages, including large-area imaging from above and minimization of field work. In recent years, multi-camera aerial systems have become a practical sensor technology in a growing geospatial market, complementary to traditional vertical views. Multi-camera aerial systems capture not only the conventional nadir views, but also tilted images at the same time. In this paper, a particular use of such imagery in the field of building inspection as well as disaster assessment is addressed. The main idea is to inspect a building from four cardinal directions by using monoplotting functionalities. The developed application allows building heights and distances to be measured and man-made structures to be digitized, creating 3D surfaces and building models. The realized GUI is capable of identifying a building from several oblique points of view, computing approximate building heights and ground distances, and performing basic vectorization. The geometric accuracy of the results remains a function of several parameters, namely image resolution, quality of available parameters (DEM, calibration and orientation values), user expertise, and measuring capability.
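The height measurement that monoplotting enables can be sketched with simple oblique-view geometry. This is a minimal flat-terrain model, not the paper's implementation: it assumes a known camera height and a ground distance to the building footprint (e.g. from a DEM), both of which are illustrative inputs.

```python
import math

def building_height(cam_height_m, ground_dist_m, depression_top_rad):
    """Monoplotting-style height estimate from one oblique view.

    Assumes flat terrain, a known camera height above ground, and the
    horizontal ground distance to the building (e.g. from a DEM). The
    building top at height h is seen under a depression angle t with
    tan(t) = (cam_height - h) / ground_dist, so h follows directly.
    """
    return cam_height_m - ground_dist_m * math.tan(depression_top_rad)

# Synthetic check: camera 1000 m up, building 500 m away, 30 m tall.
t_top = math.atan((1000.0 - 30.0) / 500.0)
h = building_height(1000.0, 500.0, t_top)
print(round(h, 6))  # → 30.0
```

As the abstract notes, in practice the accuracy of such a measurement hinges on the DEM quality and the calibration and orientation values of the oblique camera.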
X-ray cargo container inspection system with few-view projection imaging
NASA Astrophysics Data System (ADS)
Duan, Xinhui; Cheng, Jianping; Zhang, Li; Xing, Yuxiang; Chen, Zhiqiang; Zhao, Ziran
2009-01-01
An X-ray cargo inspection system with few-view projection imaging has been developed for detecting contraband in air containers. This paper describes the inspection system under development, including its configuration and the inspection process using three imaging modalities: digital radiography (DR), few-view imaging and computed tomography (CT). Few-view imaging can provide 3D images at a much faster scanning speed than CT and greatly helps in quickly locating suspicious cargo in a container. An algorithm to reconstruct tomographic images from the severely sparse projection data of few-view imaging is discussed. A cooperative workflow combining the three modalities is presented to make inspection more convenient and effective. Numerous performance tests and modality comparisons were performed on our system for inspecting air containers. The results demonstrate the effectiveness of our methods and the practicality of few-view imaging in inspection systems.
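The difficulty of reconstructing from severely sparse projection data can be illustrated on a toy problem. This sketch is not the paper's algorithm: it poses a 4x4 image observed along only four directions (rows, columns, and both diagonal families) as a linear system and solves it by least squares. The solution reproduces every measurement, yet such few-view data need not determine the image uniquely, which is exactly why dedicated iterative, regularized algorithms are used in practice.

```python
import numpy as np

# Toy few-view setup (not the paper's algorithm): a 4x4 image observed
# along four directions -- rows, columns, and both diagonal families.
n = 4
phantom = np.array([[0, 0, 0, 0],
                    [0, 5, 2, 0],
                    [0, 1, 4, 0],
                    [0, 0, 0, 0]], float)

rows_eq, b = [], []
def add_sums(groups):
    for cells in groups:
        a = np.zeros(n * n)
        for (i, j) in cells:
            a[i * n + j] = 1.0
        rows_eq.append(a)
        b.append(sum(phantom[i, j] for (i, j) in cells))

add_sums([[(i, j) for j in range(n)] for i in range(n)])               # row sums
add_sums([[(i, j) for i in range(n)] for j in range(n)])               # column sums
add_sums([[(i, j) for i in range(n) for j in range(n) if i + j == k]
          for k in range(2 * n - 1)])                                  # anti-diagonals
add_sums([[(i, j) for i in range(n) for j in range(n) if i - j == k]
          for k in range(-(n - 1), n)])                                # diagonals

A = np.vstack(rows_eq)
bb = np.array(b)
x, *_ = np.linalg.lstsq(A, bb, rcond=None)

# The least-squares solution is consistent with every measured projection...
print(np.allclose(A @ x, bb))  # → True
# ...but sparse-view data alone need not pin down the image uniquely,
# which is why iterative, prior-regularized reconstruction is needed.
```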
Omniview motionless camera orientation system
NASA Technical Reports Server (NTRS)
Zimmermann, Steven D. (Inventor); Martin, H. Lee (Inventor)
1999-01-01
A device for omnidirectional image viewing providing pan-and-tilt orientation, rotation, and magnification within a hemispherical field-of-view, utilizing no moving parts. The imaging device is based on the principle that the image from a fisheye lens, which produces a circular image of an entire hemispherical field-of-view, can be mathematically corrected using high-speed electronic circuitry. More specifically, an incoming fisheye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video image signal for viewing, recording, or analysis. As a result, this device can accomplish the functions of pan, tilt, rotation, and zoom throughout a hemispherical field-of-view without the need for any mechanical mechanisms. The preferred embodiment of the image transformation device can provide corrected images at real-time rates, compatible with standard video equipment. The device can be used for any application where a conventional pan-and-tilt or orientation mechanism might be considered, including inspection, monitoring, surveillance, and target acquisition.
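The core of such a transformation is a mapping from each output (perspective-view) pixel back to a source location in the fisheye image. The sketch below assumes an equidistant fisheye model (r = f·θ), which is a common idealization; the patented device's actual mapping is not specified in the abstract, and real lenses need a calibrated projection curve.

```python
import math

def fisheye_source_pixel(u, v, pan, tilt, f_out, f_fish, cx, cy):
    """Map an output (perspective) pixel to equidistant-fisheye coordinates.

    (u, v): pixel offsets from the output image centre; f_out sets zoom.
    The equidistant model r = f_fish * theta is an assumption; real lenses
    require a calibrated projection curve.
    """
    # Ray through the output pixel in the virtual camera frame.
    norm = math.sqrt(u * u + v * v + f_out * f_out)
    x, y, z = u / norm, v / norm, f_out / norm
    # Rotate by tilt (about the x-axis), then pan (about the y-axis).
    y, z = (y * math.cos(tilt) - z * math.sin(tilt),
            y * math.sin(tilt) + z * math.cos(tilt))
    x, z = (x * math.cos(pan) + z * math.sin(pan),
            -x * math.sin(pan) + z * math.cos(pan))
    theta = math.acos(max(-1.0, min(1.0, z)))   # angle from the optical axis
    phi = math.atan2(y, x)                      # azimuth in the fisheye plane
    r = f_fish * theta                          # equidistant mapping
    return cx + r * math.cos(phi), cy + r * math.sin(phi)

# Looking straight ahead, the view centre maps to the fisheye image centre.
sx, sy = fisheye_source_pixel(0, 0, pan=0.0, tilt=0.0,
                              f_out=500.0, f_fish=300.0, cx=512.0, cy=512.0)
print(sx, sy)  # → 512.0 512.0
```

Evaluating this mapping for every output pixel (with interpolation in the fisheye image) yields the pan/tilt/zoom behaviour of the device with no moving parts.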
Measurements of WDS Objects Found in Images Taken for Detecting CPM Pairs in the LSPM Catalog
NASA Astrophysics Data System (ADS)
Knapp, Wilfried; Nanson, John
2017-10-01
During our research on CPM objects in the LSPM catalog not yet included in the WDS catalog, part II (Knapp and Nanson 2017), we found by chance a surprisingly large number of WDS objects in the field of view of several images taken for this project. To use the existing image material in the best possible way, we decided to take measurements of these objects and to examine other existing catalog data allowing a check for potential common proper motion. This report presents the findings of this research.
2016-11-09
Relatively young craters, with sharp crater rims and streaks of bright material, are the focus of this view of Ceres from NASA's Dawn spacecraft. The large, ancient and quite degraded crater Fluusa is seen at top center. The younger craters are Kupalo, at lower right, and Juling, to its left. Dawn took this image on Oct. 17, 2016, from its second extended-mission science orbit (XMO2), at a distance of about 920 miles (1,480 kilometers) above the surface. The image resolution is about 460 feet (140 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA21223
1994-07-01
Photo artwork composite by JPL. This depiction of comet Shoemaker-Levy 9 impacting Jupiter is shown from several perspectives. IMAGE D depicts a generic view from Jupiter's south pole. For visual appeal, most of the large cometary fragments are shown close to one another in this image. At the time of Jupiter impact, the fragments will be separated from one another by several times the distances shown. This image was created by D.A. Seal of JPL's Mission Design Section using orbital computations provided by P.W. Chodas and D.K. Yeomans of JPL's Navigation Section.
Wang, Chen; Ji, Na
2012-06-01
The intrinsic aberrations of high-NA gradient refractive index (GRIN) lenses limit their image quality as well as field of view. Here we used a pupil-segmentation-based adaptive optical approach to correct the inherent aberrations in a two-photon fluorescence endoscope utilizing a 0.8 NA GRIN lens. By correcting the field-dependent aberrations, we recovered diffraction-limited performance across a large imaging field. The consequent improvements in imaging signal and resolution allowed us to detect fine structures that were otherwise invisible inside mouse brain slices.
Dual-axis confocal microscope for high-resolution in vivo imaging
Wang, Thomas D.; Mandella, Michael J.; Contag, Christopher H.; Kino, Gordon S.
2007-01-01
We describe a novel confocal microscope that uses separate low-numerical-aperture objectives with the illumination and collection axes crossed at angle θ from the midline. This architecture collects images in scattering media with high transverse and axial resolution, long working distance, large field of view, and reduced noise from scattered light. We measured transverse and axial (FWHM) resolution of 1.3 and 2.1 μm, respectively, in free space, and confirmed subcellular resolution in excised esophageal mucosa. The optics may be scaled to millimeter dimensions and fiber coupled for collection of high-resolution images in vivo. PMID:12659264
Torfeh, Tarraf; Hammoud, Rabih; McGarry, Maeve; Al-Hammadi, Noora; Perkins, Gregory
2015-09-01
To develop and validate a large field of view phantom and quality assurance software tool for the assessment and characterization of geometric distortion in MRI scanners commissioned for radiation therapy planning. A purpose-built phantom was developed consisting of 357 rods (6 mm in diameter) of polymethyl methacrylate separated by 20 mm intervals, providing a three-dimensional array of control points at known spatial locations covering a large field of view up to a diameter of 420 mm. An in-house software module was developed to allow automatic geometric distortion assessment. This software module was validated against a virtual dataset of the phantom that reproduced the exact geometry of the physical phantom, but with known translational and rotational displacements and warping. For validation experiments, clinical MRI sequences were acquired with and without the application of a commercial 3D distortion correction algorithm (Gradwarp™). The software module was used to characterize and assess system-related geometric distortion in the sequences relative to a benchmark CT dataset, and the efficacy of the vendor geometric distortion correction algorithm (GDC) was also assessed. Results from the validation of the software against virtual images demonstrate the algorithm's ability to accurately calculate geometric distortion with sub-pixel precision through the extraction of rods and quantification of displacements. Geometric distortion was assessed for the typical sequences used in radiotherapy applications over a clinically relevant 420 mm field of view (FOV). As expected, distortion increased toward the edges of the FOV. For all assessed sequences, the vendor GDC was able to reduce the mean distortion to below 1 mm over fields of view of 5, 10, 15 and 20 cm radius. Results from the application of the developed phantom and algorithms demonstrate a high level of precision. 
The results indicate that this platform represents an important, robust and objective tool to perform routine quality assurance of MR-guided therapeutic applications, where spatial accuracy is paramount. Copyright © 2015 Elsevier Inc. All rights reserved.
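The displacement analysis at the heart of such a QA tool can be sketched in a few lines: compare detected control-point centroids against their nominal grid positions at the known 20 mm spacing and summarize the displacement magnitudes. The 0.5 mm shift below is a synthetic stand-in for the rod positions a real analysis would extract from the MR images.

```python
import math

# Nominal control-point grid: 20 mm spacing, as in the phantom described above.
spacing = 20.0
nominal = [(i * spacing, j * spacing) for i in range(10) for j in range(10)]

# Synthetic "measured" centroids: a uniform 0.5 mm x-shift stands in for
# the rod centroids a real analysis would extract from the MR images.
measured = [(x + 0.5, y) for (x, y) in nominal]

displacements = [math.hypot(mx - nx, my - ny)
                 for (nx, ny), (mx, my) in zip(nominal, measured)]
mean_dist = sum(displacements) / len(displacements)
max_dist = max(displacements)
print(mean_dist, max_dist)  # → 0.5 0.5
```

Reporting these statistics per FOV radius, before and after the vendor distortion correction, reproduces the kind of "mean distortion below 1 mm" summary quoted in the abstract.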
NASA Astrophysics Data System (ADS)
Liew, Soo Chin; Gupta, Avijit; Chia, Aik Song; Ang, Wu Chye
2016-06-01
The paper illustrates the application of satellite images for studying the anatomy of a long-duration, extensive, and slow flood on the Chao Phraya River in 2011 that inundated Bangkok in its lower reach. The spread of floods in the valley was mapped with MODIS, month by month, from July 2011 to February 2012. A subsampled WorldView-2 mosaic was used to observe part of the valley in detail. The flood in Bangkok was studied with four higher-resolution images from the SPOT 4, WorldView-2, and GeoEye-1 satellites. We suspect that the floodwaters jumped the banks of the Chao Phraya south of Chai Nat, and then travelled overland and along river channels. The overland passage made it difficult to protect settlements. We also studied sedimentation from the images of this shallow overland flow across the country, which was complicated by the presence of preexisting embankments, other anthropogenic structures, and smaller stream channels. This is a descriptive study, but it highlights the nature of flooding that is likely to be repeated in this low flat valley from high rainfall. The pattern of flooding was similar to that of a previous large flood in 1996 recorded in a SPOT 2 image. These floods periodically impact Bangkok, a city of about 10 million people, which started on a levee in a low flat delta, then expanded into backswamps, and is marked with local depressions from groundwater extraction. These slow extensive floods can be mapped from satellite images and properly recorded as an early step in the analysis of large floods. Mapping of such floods on the ground is logistically impossible. Slow, extensive, and long-lasting floods affect the lower valleys and deltas of a number of major rivers, impacting agricultural fields and large populations. These floods are especially disastrous for cities located on low deltas. 
We submit that basic exercises on satellite images provide valuable introductory information for understanding the geomorphology of such floods, and also for structuring plans for flood amelioration. Satellite images at very high resolutions, also used in this study, provide data complementary to mapping and ground observation. Basin environments that are inundated by large shallow extensive floods are not unusual. In the future, climate change is expected to raise the frequency of floods in the lower parts of a number of river valleys and deltas, so that slow extensive floods may become common in such environments and need to be studied. In that sense, this study is a template for studying large slow floods, which are arguably likely to become more frequent.
NASA Astrophysics Data System (ADS)
Wang, Cuihuan; Kim, Leonard; Barnard, Nicola; Khan, Atif; Pierce, Mark C.
2016-02-01
Our long-term goal is to develop a high-resolution imaging method for comprehensive assessment of tissue removed during lumpectomy procedures. By identifying regions of high-grade disease within the excised specimen, we aim to develop patient-specific post-operative radiation treatment regimens. We have assembled a benchtop spectral-domain optical coherence tomography (SD-OCT) system with 1320 nm center wavelength. Automated beam scanning enables "sub-volumes" spanning 5 mm x 5 mm x 2 mm (500 A-lines x 500 B-scans x 2 mm in depth) to be collected in under 15 seconds. A motorized sample positioning stage enables multiple sub-volumes to be acquired across an entire tissue specimen. Sub-volumes are rendered from individual B-scans in 3D Slicer software and en face (XY) images are extracted at specific depths. These images are then tiled together using MosaicJ software to produce a large-area en face view (up to 40 mm x 25 mm). After OCT imaging, specimens were sectioned and stained with H&E, allowing comparison between OCT image features and disease markers on histopathology. This manuscript describes the technical aspects of image acquisition and reconstruction, and reports an initial qualitative comparison between large-area en face OCT images and H&E-stained tissue sections. Future goals include developing image reconstruction algorithms for mapping an entire sample, and registering OCT image volumes with clinical CT and MRI images for post-operative treatment planning.
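The en face extraction and tiling workflow described above (fixed-depth XY planes from each sub-volume, assembled into one large-area view) can be sketched with array operations. The shapes and the random data below are illustrative stand-ins, not the system's actual acquisition geometry.

```python
import numpy as np

# Illustrative shapes only: each sub-volume is (depth, B-scans, A-lines).
depth_px, n_b, n_a = 64, 50, 50
grid_rows, grid_cols = 2, 3   # 2 x 3 raster of stage positions

rng = np.random.default_rng(0)
subvolumes = [[rng.random((depth_px, n_b, n_a)) for _ in range(grid_cols)]
              for _ in range(grid_rows)]

def en_face_mosaic(vols, z):
    """Extract the en face (XY) plane at depth index z from every
    sub-volume and tile the planes into one large-area image."""
    return np.block([[v[z] for v in row] for row in vols])

mosaic = en_face_mosaic(subvolumes, z=10)
print(mosaic.shape)  # → (100, 150)
```

A real mosaic additionally needs overlap handling and registration between neighbouring sub-volumes, which is what tools like MosaicJ provide.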
Anaglyph of Perspective View with Aerial Photo Overlay Pasadena, California
NASA Technical Reports Server (NTRS)
2000-01-01
This anaglyph is a perspective view that shows the western part of the city of Pasadena, California, looking north toward the San Gabriel Mountains. Red-blue glasses are required to see the 3-D effect. Portions of the cities of Altadena and La Canada-Flintridge are also shown. The image was created from two datasets: the Shuttle Radar Topography Mission (SRTM) supplied the elevation data and U. S. Geological Survey digital aerial photography provided the image detail. The Jet Propulsion Laboratory is the cluster of large buildings left of center, at the base of the mountains. This image shows the power of combining data from different sources to create planning tools to study problems that affect large urban areas. In addition to the well-known earthquake hazards, Southern California is affected by a natural cycle of fire and mudflows. Wildfires can strip the mountains of vegetation, increasing the hazards from flooding and mudflows. Data shown in this image can be used to predict both how wildfires spread over the terrain and how mudflows are channeled down the canyons.
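Anaglyph generation of the kind described here can be sketched by shifting each pixel horizontally in proportion to its elevation, in opposite directions for the two eyes, then packing the two views into red and cyan channels. The parallax scale and the nearest-pixel shift are illustrative simplifications, not the production pipeline used for these images.

```python
import numpy as np

def anaglyph(image, elevation, parallax_scale=0.02):
    """Build a red/cyan anaglyph from one grayscale image plus elevation.

    Each pixel is shifted horizontally in proportion to its elevation,
    in opposite directions for the two eyes; parallax_scale (pixels per
    elevation unit) is an illustrative constant, not a calibrated value.
    """
    h, w = image.shape
    cols = np.arange(w)
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    for r in range(h):
        shift = np.round(parallax_scale * elevation[r]).astype(int)
        left[r] = image[r, np.clip(cols - shift, 0, w - 1)]
        right[r] = image[r, np.clip(cols + shift, 0, w - 1)]
    # Red channel carries the left-eye view; green/blue carry the right.
    return np.stack([left, right, right], axis=-1)

img = np.arange(100.0).reshape(10, 10)
flat = np.zeros((10, 10))          # flat terrain -> zero parallax
out = anaglyph(img, flat)
print(np.array_equal(out[..., 0], out[..., 2]))  # → True: no 3-D effect
```

With real elevation data the two channels diverge where the terrain is high, which is what the red/blue glasses fuse into apparent depth.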
This anaglyph was generated using topographic data from the Shuttle Radar Topography Mission to create two differing perspectives of a single image, one perspective for each eye. Each point in the image is shifted slightly, depending on its elevation. When viewed through special glasses, the result is a view of the Earth's surface in its full three dimensions. Anaglyph glasses cover the left eye with a red filter and cover the right eye with a blue filter. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, uses the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission is designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) and the German (DLR) and Italian (ASI) space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC. Size: 5.8 km (3.6 miles) x 10 km (6.2 miles) Location: 34.16 deg. North lat., 118.16 deg. West lon. Orientation: Looking North Original Data Resolution: SRTM, 30 m; aerial photo, 3 m; no vertical exaggeration Date Acquired: February 16, 2000 Image: NASA/JPL/NIMA
Comparison of a single-view and a double-view aerosol optical depth retrieval algorithm
NASA Astrophysics Data System (ADS)
Henderson, Bradley G.; Chylek, Petr
2003-11-01
We compare the results of a single-view and a double-view aerosol optical depth (AOD) retrieval algorithm applied to image pairs acquired over NASA Stennis Space Center, Mississippi. The image data were acquired by the Department of Energy's (DOE) Multispectral Thermal Imager (MTI), a pushbroom satellite imager with 15 bands from the visible to the thermal infrared. MTI has the ability to acquire imagery in pairs in which the first image is a near-nadir view and the second image is off-nadir with a zenith angle of approximately 60°. A total of 15 image pairs were used in the analysis. For a given image pair, AOD retrieval is performed twice: once using a single-view algorithm applied to the near-nadir image, then again using a double-view algorithm. Errors for both retrievals are computed by comparing the results to AERONET AOD measurements obtained at the same time and place. The single-view algorithm showed an RMS error about the mean of 0.076 in AOD units, whereas the double-view algorithm showed a modest improvement with an RMS error of 0.06. The single-view errors show a positive bias which is presumed to be a result of the empirical relationship used to determine ground reflectance in the visible. A plot of AOD error of the double-view algorithm versus time shows a noticeable trend which is interpreted to be a calibration drift. When this trend is removed, the RMS error of the double-view algorithm drops to 0.030. The single-view algorithm qualitatively appears to perform better during the spring and summer whereas the double-view algorithm seems to be less sensitive to season.
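The two error statistics quoted above (RMS error about the mean, and the smaller RMS after removing a linear calibration drift) can be sketched on synthetic numbers. The drift and scatter values below are illustrative, not the MTI data.

```python
import numpy as np

# Synthetic AOD retrieval errors: a slow linear "calibration drift"
# plus deterministic scatter (numbers are illustrative, not MTI data).
t = np.linspace(0.0, 3.0, 15)                 # acquisition time, years
scatter = 0.02 * np.sin(17.0 * t)             # stand-in for retrieval noise
errors = 0.04 * t - 0.06 + scatter            # drift + scatter

def rms_about_mean(e):
    return float(np.sqrt(np.mean((e - e.mean()) ** 2)))

raw_rms = rms_about_mean(errors)

# Fit and remove a linear trend in time, as done for the double-view results.
slope, intercept = np.polyfit(t, errors, 1)
detrended = errors - (slope * t + intercept)
detrended_rms = rms_about_mean(detrended)

print(detrended_rms < raw_rms)  # → True: removing the drift shrinks the RMS
```

The same computation on the real double-view errors is what takes the quoted RMS from 0.06 down to 0.030 once the drift is fitted out.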
NASA Technical Reports Server (NTRS)
2000-01-01
Honolulu, on the island of Oahu, is a large and growing urban area. This stereoscopic image pair, combining a Landsat image with topography measured by the Shuttle Radar Topography Mission (SRTM), shows how topography controls the urban pattern. This color image can be viewed in 3-D by viewing the left image with the right eye and the right image with the left eye (cross-eyed viewing), or by downloading and printing the image pair, and viewing them with a stereoscope.
Features of interest in this scene include Diamond Head (an extinct volcano near the bottom of the image), Waikiki Beach (just above Diamond Head), the Punchbowl National Cemetery (another extinct volcano, near the image center), downtown Honolulu and Honolulu harbor (image left-center), and offshore reef patterns. The slopes of the Koolau mountain range are seen in the right half of the image. Clouds commonly hang above ridges and peaks of the Hawaiian Islands, but in this synthesized stereo rendition appear draped directly on the mountains. The clouds are actually about 1000 meters (3300 feet) above sea level. This stereoscopic image pair was generated using topographic data from the Shuttle Radar Topography Mission, combined with a Landsat 7 Thematic Mapper image collected at the same time as the SRTM flight. The topography data were used to create two differing perspectives, one for each eye. When stereoscopically merged, the result is a vertically exaggerated view of the Earth's surface in its full three dimensions. The United States Geological Survey's Earth Resources Observation Systems (EROS) Data Center, Sioux Falls, South Dakota, provided the Landsat data. The Shuttle Radar Topography Mission (SRTM), launched on February 11, 2000, used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. The mission was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, an additional C-band imaging antenna and improved tracking and navigation devices. The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) and the German (DLR) and Italian (ASI) space agencies. 
It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC. Size: 11 by 20 kilometers (7 by 13 miles) Location: 21.3 deg. North lat., 157.9 deg. West lon. Orientation: North toward upper right Original Data Resolution: SRTM, 30 meters (99 feet); Landsat, 15 meters (50 feet) Date Acquired: SRTM, February 18, 2000; Landsat, February 12, 2000 Image: NASA/JPL/NIMA
NASA Astrophysics Data System (ADS)
Sramek, Benjamin Koerner
The ability to deliver conformal dose distributions in radiation therapy through intensity modulation and the potential for tumor dose escalation to improve treatment outcome have necessitated an increase in the localization accuracy of inter- and intra-fractional patient geometry. Megavoltage cone-beam CT imaging using the treatment beam and onboard electronic portal imaging device is one option currently being studied for implementation in image-guided radiation therapy. However, routine clinical use is predicated upon continued improvements in image quality and reductions in the patient dose delivered during acquisition. The formal statement of hypothesis for this investigation was that the conformity of planned to delivered dose distributions in image-guided radiation therapy could be further enhanced through the application of kilovoltage scatter correction and intermediate view estimation techniques to megavoltage cone-beam CT imaging, and that normalized dose measurements could be acquired and inter-compared between multiple imaging geometries.
The specific aims of this investigation were to: (1) incorporate the Feldkamp, Davis and Kress filtered backprojection algorithm into a program to reconstruct a voxelized linear attenuation coefficient dataset from a set of acquired megavoltage cone-beam CT projections, (2) characterize the effects on megavoltage cone-beam CT image quality resulting from the application of Intermediate View Interpolation and Intermediate View Reprojection techniques to limited-projection datasets, (3) incorporate the Scatter and Primary Estimation from Collimator Shadows (SPECS) algorithm into megavoltage cone-beam CT image reconstruction and determine the set of SPECS parameters that maximize image quality and quantitative accuracy, and (4) evaluate the normalized axial dose distributions received during megavoltage cone-beam CT image acquisition using radiochromic film and thermoluminescent dosimeter measurements in anthropomorphic pelvic and head-and-neck phantoms. The conclusions of this investigation were: (1) the application of intermediate view estimation techniques to megavoltage cone-beam CT produced improvements in image quality, with the largest impact occurring for smaller numbers of initially-acquired projections, (2) the SPECS scatter correction algorithm could be successfully incorporated into projection data acquired using an electronic portal imaging device during megavoltage cone-beam CT image reconstruction, (3) a large range of SPECS parameters was shown to reduce cupping artifacts as well as improve reconstruction accuracy, with application to anthropomorphic phantom geometries improving the percent difference in reconstructed electron density for soft tissue from -13.6% to -2.0%, and for cortical bone from -9.7% to 1.4%, (4) dose measurements in the anthropomorphic phantoms showed consistent agreement between planar measurements using radiochromic film and point measurements using thermoluminescent dosimeters, and (5) a comparison of normalized dose
measurements acquired with radiochromic film to those calculated using multiple treatment planning systems, accelerator-detector combinations, patient geometries and accelerator outputs produced relatively good agreement.
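The Feldkamp-Davis-Kress algorithm named in aim (1) is the cone-beam generalization of filtered backprojection. As a hedged illustration of the underlying idea (not the dissertation's code), the 2D parallel-beam analogue can be sketched with NumPy; the function names and the nearest-neighbor interpolation are choices of this sketch:

```python
import numpy as np

def ramp_filter(sinogram):
    """Apply a Ram-Lak (ramp) filter along the detector axis in Fourier space."""
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

def fbp_reconstruct(sinogram, angles):
    """Parallel-beam filtered backprojection; FDK extends this idea to
    cone-beam geometry with an extra depth-dependent weighting."""
    n = sinogram.shape[1]
    filtered = ramp_filter(sinogram)
    coords = np.arange(n) - n / 2.0
    xx, yy = np.meshgrid(coords, coords)
    recon = np.zeros((n, n))
    for proj, theta in zip(filtered, angles):
        # Detector bin seen by each image pixel at this projection angle
        t = np.round(xx * np.cos(theta) + yy * np.sin(theta) + n / 2.0).astype(int)
        np.clip(t, 0, n - 1, out=t)
        recon += proj[t]
    return recon * np.pi / len(angles)
```

Feeding in the analytic sinogram of a centered disk (which is the same at every angle) recovers a bright disk against a near-zero background, which is a quick self-check for this kind of reconstructor.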
Micro-valve pump light valve display
Yeechun Lee.
1993-01-19
A flat panel display incorporates a plurality of micro-pump light valves (MLVs) to form pixels for recreating an image. Each MLV consists of a dielectric drop sandwiched between substrates, at least one of which is transparent, a holding electrode for maintaining the drop outside a viewing area, and a switching electrode for accelerating the drop from a location within the holding electrode to a location within the viewing area. The substrates may further define non-wetting surface areas to create potential energy barriers that assist in controlling movement of the drop. The forces acting on the drop are quadratic in nature, providing a nonlinear response for increased image contrast. A crossed electrode structure can be used to activate the pixels, whereby a large flat panel display is formed without active driver components at each pixel.
Synthesized view comparison method for no-reference 3D image quality assessment
NASA Astrophysics Data System (ADS)
Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun
2018-04-01
We develop a no-reference image quality assessment metric to evaluate the quality of synthesized views rendered from the Multi-view Video plus Depth (MVD) format. Our metric, named Synthesized View Comparison (SVC), is designed for real-time quality monitoring at the receiver side of a 3D-TV system. The metric utilizes intermediate virtual views warped from the left and right views by the depth-image-based rendering (DIBR) algorithm, and compares the difference between the virtual views rendered from different cameras using the Structural SIMilarity (SSIM) index, a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.
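The SSIM comparison at the heart of SVC can be illustrated with a minimal, single-window ("global") variant of the index. This is a simplification: the SSIM used in practice averages the same statistic over sliding windows, and `ssim_global` is a name invented for this sketch:

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Global (single-window) SSIM between two grayscale images.

    Uses the standard stabilizing constants C1 = (0.01*L)^2 and
    C2 = (0.03*L)^2, where L is the dynamic range of the pixel values.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

An identical image pair scores exactly 1, and any structural difference between the two warped virtual views pulls the score below 1, which is the signal SVC exploits.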
NASA Astrophysics Data System (ADS)
Rusch, D. W.; Thomas, G. E.; McClintock, W.; Merkel, A. W.; Bailey, S. M.; Russell, J. M., III; Randall, C. E.; Jeppesen, C.; Callan, M.
2009-03-01
The Aeronomy of Ice in the Mesosphere (AIM) mission was launched from Vandenberg Air Force Base in California at 4:26:03 EDT on April 25, 2007, becoming the first satellite mission dedicated to the study of noctilucent clouds (NLCs), also known as polar mesospheric clouds (PMCs) when viewed from space. We present the first results from one of the three instruments on board the satellite, the Cloud Imaging and Particle Size (CIPS) instrument. CIPS has produced detailed morphology of the Northern 2007 and Southern 2007/2008 PMC seasons with 5 km horizontal spatial resolution. CIPS, with its very large angular field of view, images cloud structures at multiple scattering angles within a narrow spectral bandpass centered at 265 nm. Spatial coverage is 100% above about 70° latitude, where camera views overlap from orbit to orbit, and terminates at about 82°. Spatial coverage decreases to about 50% at the lowest latitudes where data are collected (35°). Cloud structures have for the first time been mapped out over nearly the entire summertime polar region. These structures include 'ice rings', spatially small but bright clouds, and large regions ('ice-free regions') in the heart of the cloud season essentially devoid of ice particles. The ice rings bear a close resemblance to tropospheric convective outflow events, suggesting a point source of mesospheric convection. These rings (often circular arcs) are most likely Type IV NLCs ('whirls' in the standard World Meteorological Organization (WMO) nomenclature).
Banavar, Spoorthi Ravi; Chippagiri, Prashanthi; Pandurangappa, Rohit; Annavajjula, Saileela; Rajashekaraiah, Premalatha Bidadi
2016-01-01
Background. Microscopes are omnipresent throughout the field of biological research. With microscopes one can see in detail what is going on at the cellular level in tissues. Though the microscope is a ubiquitous tool, its limitation is that high magnification comes with a small field of view. It is often advantageous to see an entire sample at high magnification. Over the years, technological advancements in optics have helped to address this limitation by creating the so-called dedicated "slide scanners," which can provide a "whole slide digital image." These scanners can provide a seamless, large-field-of-view, high-resolution image of an entire tissue section. The only disadvantage of such a complete slide imaging system is its outrageous cost, which hinders its practical use by most laboratories, especially in developing and low-resource countries. Methods. In a quest for a substitute, we tried the commonly used image editing software Adobe Photoshop, along with a basic image capturing device attached to a trinocular microscope, to create a digital pathology slide. Results. The seamless image created using Adobe Photoshop maintained its diagnostic quality. Conclusion. With time and effort, photomicrographs obtained from a basic camera-microscope setup can be combined and merged in Adobe Photoshop to create a whole slide digital image of practically usable quality at negligible cost. PMID:27747147
Compression and information recovery in ptychography
NASA Astrophysics Data System (ADS)
Loetgering, L.; Treffer, D.; Wilhein, T.
2018-04-01
Ptychographic coherent diffraction imaging (PCDI) is a scanning microscopy modality that allows for simultaneous recovery of object and illumination information. This ability renders PCDI a suitable technique for x-ray lensless imaging and optics characterization. Its potential for information recovery typically relies on large amounts of data redundancy. However, the field of view in ptychography is practically limited by the memory and the computational facilities available. We describe techniques that achieve robust ptychographic information recovery at high compression rates. The techniques are compared and tested with experimental data.
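Ptychographic information recovery alternates between enforcing the measured diffraction magnitudes and updating the object estimate. A minimal, hedged sketch of one ePIE-style step, simplified to a single scan position (the function name and simplifications are this sketch's, not the authors' implementation):

```python
import numpy as np

def fourier_magnitude_step(obj, probe, measured_amp, alpha=1.0):
    """One PIE-style update at a single scan position: enforce the measured
    diffraction magnitudes in Fourier space, then correct the object."""
    psi = probe * obj                                   # exit wave
    Psi = np.fft.fft2(psi)
    Psi = measured_amp * np.exp(1j * np.angle(Psi))     # magnitude constraint
    dpsi = np.fft.ifft2(Psi) - psi
    # ePIE-style object correction, weighted by the conjugate probe
    return obj + alpha * np.conj(probe) * dpsi / (np.abs(probe) ** 2).max()
```

In a real ptychographic reconstruction this step is cycled over many overlapping scan positions, and the data redundancy mentioned in the abstract is what makes joint object and probe recovery well posed.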
Stereo study as an aid to visual analysis of ERTS and Skylab images
NASA Technical Reports Server (NTRS)
Vangenderen, J. L. (Principal Investigator)
1973-01-01
The author has identified the following significant results. The parallax on ERTS and Skylab images is sufficiently large for exploitation by human photointerpreters. The ability to view the imagery stereoscopically improves the effective signal-to-noise ratio. Stereoscopic examination of orbital data can contribute to studies of spatial, spectral, and temporal variations on the imagery. The combination of true stereo parallax and shadow parallax offers many possibilities to human interpreters for making meaningful analyses of orbital imagery.
Synthetic Foveal Imaging Technology
NASA Technical Reports Server (NTRS)
Hoenk, Michael; Monacos, Steve; Nikzad, Shouleh
2009-01-01
Synthetic Foveal Imaging Technology (SyFT) is an emerging discipline of image capture and image-data processing that offers the prospect of greatly increased capabilities for real-time processing of large, high-resolution images (including mosaic images) for such purposes as automated recognition and tracking of moving objects of interest. SyFT offers a solution to the image-data-processing problem arising from the proposed development of gigapixel mosaic focal-plane image-detector assemblies for very wide field-of-view imaging with high resolution for detecting and tracking sparse objects or events within narrow subfields of view. Without the means of dynamic adaptation afforded by SyFT, identifying and tracking the objects or events would require post-processing an image-data space consisting of terabytes of data. Such post-processing would be time-consuming and, as a consequence, could result in missing significant events that could not be observed at all due to the time evolution of such events, or could not be observed at required levels of fidelity without such real-time adaptations as adjusting focal-plane operating conditions or aiming the focal plane in different directions to track such events. The basic concept of foveal imaging is straightforward: in imitation of a natural eye, a foveal-vision image sensor is designed to offer higher resolution in a small region of interest (ROI) within its field of view. Foveal vision reduces the amount of unwanted information that must be transferred from the image sensor to external image-data-processing circuitry. This basic concept is not new in itself: indeed, image sensors based on it have been described in several previous NASA Tech Briefs articles. Active-pixel integrated-circuit image sensors that can be programmed in real time to effect foveal artificial vision on demand are one such example.
What is new in SyFT is a synergistic combination of recent advances in foveal imaging, computing, and related fields, along with a generalization of the basic foveal-vision concept to admit a synthetic fovea that is not restricted to one contiguous region of an image.
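The data reduction that foveal readout buys can be illustrated with a toy NumPy sketch (names and parameters are illustrative, not from SyFT): a full-resolution window around the region of interest plus a coarsely subsampled periphery.

```python
import numpy as np

def foveated_readout(frame, center, fovea=64, stride=8):
    """Return a full-resolution fovea window around `center` plus a
    coarsely subsampled periphery, mimicking the reduced data volume
    of a foveal image sensor."""
    r, c = center
    h = fovea // 2
    fovea_win = frame[max(r - h, 0):r + h, max(c - h, 0):c + h].copy()
    periphery = frame[::stride, ::stride].copy()
    return fovea_win, periphery
```

For a 4096x4096 frame with an 8x subsampled periphery and a 64x64 fovea, roughly 1.6% of the pixels leave the sensor, which is the kind of bandwidth reduction that makes gigapixel mosaics tractable in real time.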
Large Area MEMS Based Ultrasound Device for Cancer Detection.
Wodnicki, Robert; Thomenius, Kai; Hooi, Fong Ming; Sinha, Sumedha P; Carson, Paul L; Lin, Der-Song; Zhuang, Xuefeng; Khuri-Yakub, Pierre; Woychik, Charles
2011-08-21
We present image results obtained using a prototype ultrasound array which demonstrates the fundamental architecture for a large area MEMS based ultrasound device for detection of breast cancer. The prototype array consists of a tiling of capacitive Micro-Machined Ultrasound Transducers (cMUTs) which have been flip-chip attached to a rigid organic substrate. The pitch of the cMUT elements is 185 µm and the operating frequency is nominally 9 MHz. The spatial resolution of the new probe is comparable to production PZT probes; however, the sensitivity is reduced by conditions that should be correctable. Simulated opposed-view image registration and speed-of-sound volume reconstruction results for ultrasound in the mammographic geometry are also presented.
Retinal image mosaicing using the radial distortion correction model
NASA Astrophysics Data System (ADS)
Lee, Sangyeol; Abràmoff, Michael D.; Reinhardt, Joseph M.
2008-03-01
Fundus camera imaging can be used to examine the retina to detect disorders. Similar to looking through a small keyhole into a large room, imaging the fundus with an ophthalmologic camera allows only a limited view at a time. Thus, the generation of a retinal montage from multiple images has the potential to increase diagnostic accuracy by providing a larger field of view. A method of mosaicing multiple retinal images using the radial distortion correction (RADIC) model is proposed in this paper. Our method determines the inter-image connectivity by detecting feature correspondences. The connectivity information is converted to a tree structure that describes the spatial relationships between the reference and target images for pairwise registration. The montage is generated by a cascading pairwise registration scheme, starting from the anchor image and proceeding downward through the connectivity tree hierarchy. The RADIC model corrects the radial distortion that is due to the spherical-to-planar projection during retinal imaging. Therefore, after radial distortion correction, individual images can be properly mapped onto a montage space by a linear geometric transformation, e.g., an affine transform. Compared to most existing montaging methods, our method is unique in that only a single registration per image is required because of the distortion correction property of the RADIC model. As a final step, distance-weighted intensity blending is employed to correct the inter-image differences in illumination encountered when forming the montage. Visual inspection of the experimental results for three mosaicing cases shows our method can produce satisfactory montages.
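A common single-parameter radial distortion model can be inverted by fixed-point iteration before the linear montage transform is applied. This is a generic sketch of the idea, not the RADIC model's exact parameterization; the function name and the single coefficient `k1` are assumptions of this illustration:

```python
import numpy as np

def undistort_points(pts, k1, center):
    """Invert the radial model p_distorted = p * (1 + k1 * r^2),
    where r is the distance from the distortion center, by
    fixed-point iteration on the undistorted coordinates."""
    pts = np.asarray(pts, float) - center
    und = pts.copy()
    for _ in range(20):                       # fixed-point refinement
        r2 = (und ** 2).sum(axis=1, keepdims=True)
        und = pts / (1.0 + k1 * r2)
    return und + center
```

Distorting a set of points forward with the same model and then undistorting them recovers the originals to sub-pixel accuracy, which is the property that lets a single registration per image suffice.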
The power of Kawaii: viewing cute images promotes a careful behavior and narrows attentional focus.
Nittono, Hiroshi; Fukushima, Michiko; Yano, Akihiro; Moriya, Hiroki
2012-01-01
Kawaii (a Japanese word meaning "cute") things are popular because they produce positive feelings. However, their effect on behavior remains unclear. In this study, three experiments were conducted to examine the effects of viewing cute images on subsequent task performance. In the first experiment, university students performed a fine motor dexterity task before and after viewing images of baby or adult animals. Performance indexed by the number of successful trials increased after viewing cute images (puppies and kittens; M ± SE=43.9 ± 10.3% improvement) more than after viewing images that were less cute (dogs and cats; 11.9 ± 5.5% improvement). In the second experiment, this finding was replicated by using a non-motor visual search task. Performance improved more after viewing cute images (15.7 ± 2.2% improvement) than after viewing less cute images (1.4 ± 2.1% improvement). Viewing images of pleasant foods was ineffective in improving performance (1.2 ± 2.1%). In the third experiment, participants performed a global-local letter task after viewing images of baby animals, adult animals, and neutral objects. In general, global features were processed faster than local features. However, this global precedence effect was reduced after viewing cute images. Results show that participants performed tasks requiring focused attention more carefully after viewing cute images. This is interpreted as the result of a narrowed attentional focus induced by the cuteness-triggered positive emotion that is associated with approach motivation and the tendency toward systematic processing. For future applications, cute objects may be used as an emotion elicitor to induce careful behavioral tendencies in specific situations, such as driving and office work.
A four-alternative forced choice (4AFC) software for observer performance evaluation in radiology
NASA Astrophysics Data System (ADS)
Zhang, Guozhi; Cockmartin, Lesley; Bosmans, Hilde
2016-03-01
The four-alternative forced choice (4AFC) test is a psychophysical method that can be adopted for observer performance evaluation in radiological studies. While the concept of this method is well established, difficulties in handling large image data, performing unbiased sampling, and keeping track of the choices made by the observer have restricted its application in practice. In this work, we propose an easy-to-use software package that can help perform 4AFC tests with DICOM images. The software suits any experimental design that follows the 4AFC approach. It has a powerful image viewing system that closely simulates the clinical reading environment. The graphical interface allows the observer to adjust various viewing parameters and perform the selection with very simple operations. The sampling process involved in 4AFC, as well as the speed and accuracy of the choices made by the observer, is precisely monitored in the background and can be easily exported for test analysis. The software also has a defensive mechanism for data management and operation control that minimizes the possibility of mistakes by the user during the test. This software can largely facilitate the use of the 4AFC approach in radiological observer studies and is expected to have widespread applicability.
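The core 4AFC trial logic is simple to sketch (illustrative code, not the described software): one signal-present image is shuffled among three noise-only alternatives, the observer's pick is scored, and the proportion correct (chance level 25%) feeds the analysis.

```python
import random

def run_4afc_trial(signal_image, noise_images, respond, rng=random):
    """One 4AFC trial: shuffle one signal-present image among three
    noise-only alternatives and score the observer's choice.

    `respond` is a callable that receives the four images in presentation
    order and returns the chosen index (0-3)."""
    options = [("signal", signal_image)] + [("noise", im) for im in noise_images]
    rng.shuffle(options)                       # unbiased placement of the signal
    choice = respond([im for _, im in options])
    return options[choice][0] == "signal"
```

A toy "ideal observer" that always picks the most conspicuous alternative scores 100% on an easy stimulus, while a guessing observer converges to the 25% chance level over many trials.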
Einhäuser, Wolfgang; Nuthmann, Antje
2016-09-01
During natural scene viewing, humans typically attend and fixate selected locations for about 200-400 ms. Two variables characterize such "overt" attention: the probability of a location being fixated, and the fixation's duration. Both variables have been widely researched, but little is known about their relation. We use a two-step approach to investigate the relation between fixation probability and duration. In the first step, we use a large corpus of fixation data. We demonstrate that fixation probability (empirical salience) predicts fixation duration across different observers and tasks. Linear mixed-effects modeling shows that this relation is explained neither by joint dependencies on simple image features (luminance, contrast, edge density) nor by spatial biases (central bias). In the second step, we experimentally manipulate some of these features. We find that fixation probability from the corpus data still predicts fixation duration for this new set of experimental data. This holds even if stimuli are deprived of low-level image features, as long as higher-level scene structure remains intact. Together, this shows a robust relation between fixation duration and probability, which does not depend on simple image features. Moreover, the study exemplifies the combination of empirical research on a large corpus of data with targeted experimental manipulations.
2015-06-15
The two large craters on Tethys, near the line where day fades to night, almost resemble two giant eyes observing Saturn. The location of these craters on Tethys' terminator throws their topography into sharp relief. Both are large craters, but the larger and southernmost of the two shows a more complex structure. The angle of the lighting highlights a central peak in this crater. Central peaks are the result of the surface reacting to the violent post-impact excavation of the crater. The northern crater does not show a similar feature. Possibly the impact was too small to form a central peak, or the composition of the material in the immediate vicinity couldn't support the formation of a central peak. In this image Tethys is significantly closer to the camera, while the planet is in the background. Yet the moon is still utterly dwarfed by the giant Saturn. This view looks toward the anti-Saturn side of Tethys. North on Tethys is up and rotated 42 degrees to the right. The image was taken in visible light with the Cassini spacecraft wide-angle camera on April 11, 2015. The view was obtained at a distance of approximately 75,000 miles (120,000 kilometers) from Tethys. Image scale at Tethys is 4 miles (7 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/pia18318
Earth Observation taken by the Expedition 20 crew
2009-10-06
ISS020-E-047807 (6 Oct. 2009) --- Thunderstorms on the Brazilian horizon are featured in this image photographed by an Expedition 20 crew member on the International Space Station. A picturesque line of thunderstorms and numerous circular cloud patterns filled the view as the station crew members looked out at the limb and atmosphere (blue line on the horizon) of Earth. This region displayed in the photograph (top) includes an unstable, active atmosphere forming a large area of cumulonimbus clouds in various stages of development. The crew was looking west southwestward from the Amazon Basin, along the Rio Madeira, toward Bolivia when the image was taken. The distinctive circular patterns of the clouds in this view are likely caused by the aging of thunderstorms. Such ring structures often form during the final stages of a storm's development as their centers collapse. Sunglint is visible on the waters of the Rio Madeira and Lago Acara in the Amazon Basin. Widespread haze over the basin gives the reflected light an orange hue. The Rio Madeira flows northward and joins the Amazon River on its path to the Atlantic Ocean. Scientists believe that a large smoke plume near the bottom center of the image may explain one source of the haze.
Ultra-broadband ptychography with self-consistent coherence estimation from a high harmonic source
NASA Astrophysics Data System (ADS)
Odstrčil, M.; Baksh, P.; Kim, H.; Boden, S. A.; Brocklesby, W. S.; Frey, J. G.
2015-09-01
With the aim of improving imaging using table-top extreme ultraviolet sources, we demonstrate coherent diffraction imaging (CDI) with a relative bandwidth of 20%. The coherence properties of the illumination probe are identified using the same imaging setup. The presented method allows for the use of fewer monochromating optics, obtaining higher flux at the sample and thus reaching higher resolution or shorter exposure times. This is important in the case of ptychography, when a large number of diffraction patterns need to be collected. Our microscopy setup was tested on the reconstruction of an extended sample to show the quality of the reconstruction. We show that a high-harmonic-generation-based EUV tabletop microscope can provide reconstructions of samples with a large field of view and high resolution without additional prior knowledge about the sample or illumination.
The comparative effectiveness of conventional and digital image libraries.
McColl, R I; Johnson, A
2001-03-01
Before introducing a hospital-wide image database to improve access, navigation and retrieval speed, a comparative study between a conventional slide library and a matching image database was undertaken to assess its relative benefits. Paired time trials and personal questionnaires revealed faster retrieval rates, higher image quality, and easier viewing for the pilot digital image database. Analysis of confidentiality, copyright and data protection exposed similar issues for both systems, thus concluding that the digital image database is a more effective library system. The authors suggest that in the future, medical images will be stored on large, professionally administered, centrally located file servers, allowing specialist image libraries to be tailored locally for individual users. The further integration of the database with web technology will enable cheap and efficient remote access for a wide range of users.
Recovering of images degraded by atmosphere
NASA Astrophysics Data System (ADS)
Lin, Guang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting
2017-08-01
Remote sensing images are seriously degraded by multiple scattering and bad weather. Through an analysis of the radiative transfer procedure in the atmosphere, an image atmospheric degradation model considering the influence of atmospheric absorption, multiple scattering, and non-uniform distribution is proposed in this paper. Based on the proposed model, a novel recovery method is presented to eliminate atmospheric degradation. Mean-shift image segmentation and block-wise deconvolution are used to reduce the time cost while retaining a good result. The recovery results indicate that the proposed method can significantly remove atmospheric degradation and effectively improve contrast compared with other removal methods. The results also illustrate that our method is suitable for various degraded remote sensing images, including images with a large field of view (FOV), images taken in side-glance situations, images degraded by non-uniform atmospheric distribution, and images with various forms of clouds.
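The paper's degradation model additionally handles absorption, multiple scattering, and non-uniform distribution, but the basic inversion step can be shown with the standard single-scattering haze model I = J*t + A*(1 - t), where t is the transmission and A the airlight (a generic sketch, not the authors' method):

```python
import numpy as np

def dehaze(image, transmission, airlight, t_min=0.1):
    """Invert the single-scattering model I = J*t + A*(1 - t) to
    recover the scene radiance J. The transmission is clamped to
    t_min so that near-opaque regions do not amplify noise."""
    t = np.maximum(transmission, t_min)
    return (image - airlight) / t + airlight
```

Given the true transmission and airlight, this inversion is exact; in practice both must be estimated from the image, which is where model refinements like the paper's non-uniform term come in.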
Zooming in on Pluto Pattern of Pits
2015-12-10
On July 14, 2015, the telescopic camera on NASA's New Horizons spacecraft took the highest resolution images ever obtained of the intricate pattern of "pits" across a section of Pluto's prominent heart-shaped region, informally named Tombaugh Regio. Mission scientists believe these mysterious indentations may form through a combination of ice fracturing and evaporation. The scarcity of overlying impact craters in this area also leads scientists to conclude that these pits -- typically hundreds of yards across and tens of yards deep -- formed relatively recently. Their alignment provides clues about the ice flow and the exchange of nitrogen and other volatile materials between the surface and the atmosphere. The image is part of a sequence taken by New Horizons' Long Range Reconnaissance Imager (LORRI) as the spacecraft passed within 9,550 miles (15,400 kilometers) of Pluto's surface, just 13 minutes before the time of closest approach. The small box on the global view shows the section of the region imaged in the southeast corner of the giant ice sheet informally named Sputnik Planum. The magnified view is 50-by-50 miles (80-by-80 kilometers) across. The large ring-like structure near the bottom right of the magnified view -- and the smaller one near the bottom left -- may be remnant craters. The upper-left quadrant of the image shows the border between the relatively smooth Sputnik Planum ice sheet and the pitted area, with a series of hills forming slightly inside this unusual "shoreline." http://photojournal.jpl.nasa.gov/catalog/PIA20212
NASA Astrophysics Data System (ADS)
Drass, Holger; Vanzi, Leonardo; Torres-Torriti, Miguel; Dünner, Rolando; Shen, Tzu-Chiang; Belmar, Francisco; Dauvin, Lousie; Staig, Tomás.; Antognini, Jonathan; Flores, Mauricio; Luco, Yerko; Béchet, Clémentine; Boettger, David; Beard, Steven; Montgomery, David; Watson, Stephen; Cabral, Alexandre; Hayati, Mahmoud; Abreu, Manuel; Rees, Phil; Cirasuolo, Michele; Taylor, William; Fairley, Alasdair
2016-08-01
The Multi-Object Optical and Near-infrared Spectrograph (MOONS) will cover the Very Large Telescope's (VLT) field of view with 1000 fibres. The fibres will be mounted on fibre positioning units (FPUs) implemented as two-degree-of-freedom robot arms to ensure homogeneous coverage of the 500 square arcmin field of view. A metrology system has been designed to determine the positions of the 1000 fibres quickly and accurately. This paper presents the hardware and software design and the performance of the metrology system. The system is based on the analysis of images taken by a circular array of 12 cameras located close to the VLT's derotator ring around the Nasmyth focus, and includes 24 individually adjustable lamps. The fibre positions are measured through dedicated metrology targets mounted on top of the FPUs and fiducial markers attached to the FPU support plate, which are imaged at the same time. A flexible pipeline based on VLT standards is used to process the images. The position accuracy was determined to be 5 μm in the central region of the images; including the outer regions, the overall positioning accuracy is 25 μm. The MOONS metrology system is fully set up with a working prototype. The results in parts of the images are already excellent, and by using upcoming hardware and improving the calibration we expect to fulfil the accuracy requirement over the complete field of view for all metrology cameras.
Phytoplankton off the West Coast of Africa
NASA Technical Reports Server (NTRS)
2002-01-01
Just off the coast of West Africa, persistent northeasterly trade winds often churn up deep ocean water. When the nutrients in these deep waters reach the ocean's surface, they often give rise to large blooms of phytoplankton. This image of the Mauritanian coast shows swirls of phytoplankton fed by the upwelling of nutrient-rich water. The scene was acquired by the Medium Resolution Imaging Spectrometer (MERIS) aboard the European Space Agency's ENVISAT. MERIS will monitor changes in phytoplankton across Earth's oceans and seas, both for the purpose of managing fisheries and conducting global change research. NASA scientists will use data from this European instrument in the Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) program. The mission of SIMBIOS is to construct a consistent long-term dataset of ocean color (phytoplankton abundance) measurements made by multiple satellite instruments, including the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) and the Moderate-Resolution Imaging Spectroradiometer (MODIS). For more information about MERIS and ENVISAT, visit the ENVISAT home page. Image copyright European Space Agency
Mismatch removal via coherent spatial relations
NASA Astrophysics Data System (ADS)
Chen, Jun; Ma, Jiayi; Yang, Changcai; Tian, Jinwen
2014-07-01
We propose a method for removing mismatches from given putative point correspondences in image pairs based on "coherent spatial relations." Under the Bayesian framework, we formulate our approach as a maximum likelihood problem and solve for a coherent spatial relation between the putative point correspondences using an expectation-maximization (EM) algorithm. Our approach associates each point correspondence with a latent variable indicating whether it is an inlier or an outlier, and alternately estimates the inlier set and recovers the coherent spatial relation. It can handle not only image pairs with rigid motions but also image pairs with nonrigid motions. To parameterize the coherent spatial relation, we choose two-view geometry and thin-plate splines as the models for the rigid and nonrigid cases, respectively. The mismatches can be successfully removed via the coherent spatial relations after the EM algorithm converges. Quantitative results on various experimental data demonstrate that our method outperforms many state-of-the-art methods, is not affected by low initial correct-match percentages, and is robust to most geometric transformations, including a large viewing angle, image rotation, and affine transformation.
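A minimal sketch of the latent inlier/outlier idea, assuming a Gaussian model for inlier residuals and a uniform model for outliers; the paper's actual formulation, which alternates this E-step with recovery of the two-view or thin-plate-spline relation, is richer than this toy:

```python
import numpy as np

def em_inliers(residuals, area=1.0, gamma0=0.9, iters=50):
    """Posterior inlier probabilities for correspondences, given their 2-D
    residual vectors, via EM on a Gaussian (inlier) + uniform (outlier)
    mixture. `area` is the area of the uniform outlier domain."""
    r2 = np.sum(residuals ** 2, axis=1)            # squared residual norms
    sigma2, gamma = np.mean(r2) + 1e-9, gamma0     # init variance, mixing
    for _ in range(iters):
        # E-step: posterior probability that each match is an inlier
        g = gamma * np.exp(-r2 / (2 * sigma2)) / (2 * np.pi * sigma2)
        p = g / (g + (1 - gamma) / area)
        # M-step: re-estimate inlier variance and mixing weight
        sigma2 = np.sum(p * r2) / (2 * np.sum(p) + 1e-12)
        gamma = np.mean(p)
    return p
```

Thresholding `p` at 0.5 then yields the estimated inlier set; in the full method the residuals themselves are recomputed from the refitted spatial relation at each iteration.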
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, X; Mazur, T; Yang, D
Purpose: To investigate an approach for automatically recognizing anatomical sites and imaging views (the orientation of the image acquisition) in 2D X-ray images. Methods: A hierarchical (binary-tree) multiclass recognition model was developed to recognize the treatment sites and views in x-ray images. From the top of the tree to the bottom, the treatment sites are grouped hierarchically from more general to more specific. Each node in the hierarchical model was designed to assign images to one of two categories of anatomical sites. The binary image classification function of each node is implemented using a PCA transformation and a support vector machine (SVM) model. The optimal PCA transformation matrices and SVM models are obtained by learning from a set of sample images. Alternatives of the hierarchical model were developed to support three scenarios of site recognition that may occur in radiotherapy clinics: two or one X-ray images, with or without view information. The performance of the approach was tested with images of 120 patients from six treatment sites -- brain, head-neck, breast, lung, abdomen and pelvis -- with 20 patients per site and two views (AP and RT) per patient. Results: Given two images in known orthogonal views (AP and RT), the hierarchical model achieved a 99% average F1 score in recognizing the six sites. Site-specific view recognition models achieved 100% accuracy. The computation time to process a new patient case (preprocessing, site and view recognition) is 0.02 seconds. Conclusion: The proposed hierarchical model of site and view recognition is effective and computationally efficient. It could be used to automatically and independently confirm the treatment sites and views in daily-setup 2D x-ray images. It could also be applied to guide subsequent image processing tasks, e.g., site- and view-dependent contrast enhancement and image registration.
The senior author received research grants from ViewRay Inc. and Varian Medical System.
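As a hedged illustration of one node of such a hierarchical model, the sketch below pairs a numpy PCA projection with a nearest-class-mean rule standing in for the paper's SVM; the class name `NodeClassifier`, the component count, and the decision rule are illustrative assumptions, not the authors' code:

```python
import numpy as np

class NodeClassifier:
    """One node of a hierarchical site-recognition tree: PCA projection
    followed by a binary decision. A nearest-class-mean rule is used here
    in place of the paper's SVM, which would be trained on the same
    PCA features."""
    def fit(self, X, y, n_comp=5):
        self.mu = X.mean(axis=0)
        # PCA basis from the top right-singular vectors of the centered data
        _, _, Vt = np.linalg.svd(X - self.mu, full_matrices=False)
        self.W = Vt[:n_comp].T
        Z = (X - self.mu) @ self.W
        self.c0 = Z[y == 0].mean(axis=0)
        self.c1 = Z[y == 1].mean(axis=0)
        return self
    def predict(self, X):
        Z = (X - self.mu) @ self.W
        d0 = np.linalg.norm(Z - self.c0, axis=1)
        d1 = np.linalg.norm(Z - self.c1, axis=1)
        return (d1 < d0).astype(int)
```

A full tree would chain such nodes, routing each image from general site groups down to a specific site.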
Islands in the Midst of the World
NASA Technical Reports Server (NTRS)
2002-01-01
The Greek islands of the Aegean Sea, scattered across 800 kilometers from north to south between Greece and western Turkey, are uniquely situated at the intersection of Europe, Asia and Africa. This image from the Multi-angle Imaging SpectroRadiometer includes many of the islands of the East Aegean, Sporades, Cyclades, Dodecanese and Crete, as well as part of mainland Turkey. Many sites important to ancient and modern history can be found here. The largest modern city on the Aegean coast is Izmir, which can be located as a bright coastal area near the greenish waters of Izmir Bay, about one quarter of the image length from the top, southeast of the large three-pronged island of Lesvos. The coastal areas around this cosmopolitan Turkish city were a center of Ionian culture from the 11th century BC, and at the top of the image (north of Lesvos) once stood the ancient city of Troy. The image was acquired before the onset of the winter rains, on September 30, 2001, but dense vegetation is never very abundant in the arid Mediterranean climate. The sharpness and clarity of the view also indicate dry, clear air. Some vegetative differences can be detected between the western or southern islands, such as Crete (the large island along the bottom of the image), and those closer to the Turkish coast, which appear comparatively green. Volcanic activity is evident in the form of the islands of Santorini, a small group of islands shaped like a broken ring situated to the right of and below the image center. Santorini's Thera volcano erupted around 1640 BC, and the rim of the caldera collapsed, forming the shape of the islands as they exist today. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously from pole to pole, and views almost the entire globe every 9 days.
This natural-color image was acquired by MISR's nadir (vertical-viewing) camera, and is a portion of the data acquired during Terra orbit 9495. The image covers an area of 369 kilometers x 567 kilometers, and utilizes data from blocks 58 to 64 within World Reference System-2 path 181. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Application of digital image correlation for long-distance bridge deflection measurement
NASA Astrophysics Data System (ADS)
Tian, Long; Pan, Bing; Cai, Youfa; Liang, Hui; Zhao, Yan
2013-06-01
Due to its advantages of non-contact, full-field and high-resolution measurement, the digital image correlation (DIC) method has gained wide acceptance and found numerous applications in the field of experimental mechanics. In this paper, the application of DIC to real-time long-distance bridge deflection detection in outdoor environments is studied. Bridge deflection measurement using DIC in outdoor environments is more challenging than regular DIC measurements performed under laboratory conditions. First, much more image noise due to variations in ambient light is present in images recorded outdoors. Second, selecting the target area becomes a key factor because long-distance imaging results in a large field of view of the test object. Finally, the image acquisition speed of the camera must be high enough (greater than 100 fps) to capture the real-time dynamic motion of a bridge. In this work, these challenging issues are addressed and several improvements are made to the DIC method. The applicability is demonstrated by field experiments. Experimental results indicate that the DIC method has great potential for motion measurement in a variety of large building structures.
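The core DIC matching step can be illustrated with integer-pixel zero-normalized cross-correlation; real DIC implementations add subpixel interpolation and subset shape functions, and the function name and parameters below are illustrative assumptions:

```python
import numpy as np

def ncc_displacement(ref, cur, y, x, half=8, search=10):
    """Track a subset centered at (y, x) from `ref` to `cur` by maximizing
    zero-normalized cross-correlation over an integer search window."""
    t = ref[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    t = (t - t.mean()) / (t.std() + 1e-12)         # normalize the template
    best, dyx = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            w = cur[y + dy - half:y + dy + half + 1,
                    x + dx - half:x + dx + half + 1].astype(float)
            w = (w - w.mean()) / (w.std() + 1e-12)
            score = np.mean(t * w)                  # ZNCC similarity
            if score > best:
                best, dyx = score, (dy, dx)
    return dyx
```

For a bridge, the returned displacement of a fixed target area between frames, multiplied by the object-space scale, gives the deflection at that point.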
SRTM Perspective View with Landsat Overlay: Santa Barbara Coastline, California
NASA Technical Reports Server (NTRS)
2001-01-01
This image of the Santa Barbara, California, region provides a beautiful snapshot of the area's rugged mountains and long and varied coastline. Generated using data acquired from the Shuttle Radar Topography Mission (SRTM) and an enhanced Landsat image, this is a perspective view toward the northeast, from the Goleta Valley in the foreground to a snow-capped Mount Abel (elevation 2,526 m or 8,286 feet) along the skyline at the left. On a clear day, a pilot might see a similar view shortly before touching down on the east-west runway of the Santa Barbara Airport, seen just to the left of the coastline near the center of the image. This area is one of the few places along the U.S. West Coast where, because of a south-facing beach, fall and winter sunrises occur over the ocean.
Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data match the 30-meter (98-foot) resolution of most Landsat images and will substantially help in analyses of the large and growing Landsat image archive. For visualization purposes, topographic heights displayed in this image are exaggerated two times. Colors approximate natural colors. The elevation data used in this image were acquired by SRTM aboard Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of Earth's land surface. To collect the 3-D SRTM data, engineers added a mast 60 meters (about 200 feet) long, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense, and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C. Location: 34.5 deg. North lat., 119.75 deg. West lon. View: Northeast Scale: Scale varies in this perspective Date Acquired: February 16, 2000 (SRTM); December 14, 1984 (Landsat)
In vivo high resolution human corneal imaging using full-field optical coherence tomography.
Mazlin, Viacheslav; Xiao, Peng; Dalimier, Eugénie; Grieve, Kate; Irsch, Kristina; Sahel, José-Alain; Fink, Mathias; Boccara, A Claude
2018-02-01
We present the first full-field optical coherence tomography (FFOCT) device capable of in vivo imaging of the human cornea. We obtained images of the epithelial structures, Bowman's layer, sub-basal nerve plexus (SNP), anterior and posterior stromal keratocytes, stromal nerves, Descemet's membrane and endothelial cells with visible nuclei. Images were acquired with a high lateral resolution of 1.7 µm and a relatively large field of view of 1.26 mm x 1.26 mm, a combination which, to the best of our knowledge, has not been possible with other in vivo human eye imaging methods. These capabilities, together with contactless operation, make FFOCT a promising candidate for becoming a new tool in ophthalmic diagnostics.
2017-01-09
Shadows cast across Mimas' defining feature, Herschel Crater, provide an indication of the size of the crater's towering walls and central peak. Named after the icy moon's discoverer, astronomer William Herschel, the crater stretches 86 miles (139 kilometers) wide -- almost one-third of the diameter of Mimas (246 miles or 396 kilometers) itself. Large impact craters often have peaks in their center -- see Tethys' large crater Odysseus in PIA08400. Herschel's peak stands nearly as tall as Mount Everest on Earth. This view looks toward the anti-Saturn hemisphere of Mimas. North on Mimas is up and rotated 21 degrees to the left. The image was taken with the Cassini spacecraft narrow-angle camera on Oct. 22, 2016 using a combination of spectral filters which preferentially admits wavelengths of ultraviolet light centered at 338 nanometers. The view was acquired at a distance of approximately 115,000 miles (185,000 kilometers) from Mimas and at a Sun-Mimas-spacecraft, or phase, angle of 20 degrees. Image scale is 3,300 feet (1 kilometer) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20515
Stereo Pair, Salt Lake City, Utah
NASA Technical Reports Server (NTRS)
2002-01-01
The 2002 Winter Olympics are hosted by Salt Lake City at several venues within the city, in nearby cities, and within the adjacent Wasatch Mountains. This image pair provides a stereoscopic map view of north central Utah that includes all of these Olympic sites. In the south, next to Utah Lake, Provo hosts the ice hockey competition. In the north, northeast of the Great Salt Lake, Ogden hosts curling and the nearby Snowbasin ski area hosts the downhill events. In between, southeast of the Great Salt Lake, Salt Lake City hosts the Olympic Village and the various skating events. Further east, across the Wasatch Mountains, the Park City ski resort hosts the bobsled, ski jumping, and snowboarding events. The Winter Olympics are always hosted in mountainous terrain. This view shows the dramatic landscape that makes the Salt Lake City region a world-class center for winter sports.
This stereoscopic image was generated by draping a Landsat satellite image over a Shuttle Radar Topography Mission digital elevation model. Two differing perspectives were then calculated, one for each eye. They can be seen in 3-D by viewing the left image with the right eye and the right image with the left eye (cross-eyed viewing), or by downloading and printing the image pair and viewing them with a stereoscope. When stereoscopically merged, the result is a vertically exaggerated view of Earth's surface in its full three dimensions. Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data match the 30-meter (98-foot) resolution of most Landsat images and will substantially help in analyzing the large and growing Landsat image archive, managed by the U.S. Geological Survey (USGS). Elevation data used in this image were acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C. Size: 222 x 93.8 kilometers (138 x 58.2 miles) Location: 40.0 to 42.0 deg. North lat., 111.25 to 112.25 deg. West lon. (exactly) Orientation: North at top Image Data: Landsat Bands 3, 2, 1 as red, green, blue, respectively.
Original Data Resolution: SRTM 1 arcsecond (30 meters or 98 feet), Thematic Mapper 30 meters (98 feet) Date Acquired: February 2000 (SRTM), 1990s (Landsat 5 image mosaic)
2D/3D Visual Tracker for Rover Mast
NASA Technical Reports Server (NTRS)
Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria
2006-01-01
A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field of view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window, and the best correlation results indicate the appropriate match.
The program could be a core for building application programs for systems that require coordination of vision and robotic motion.
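Two of the geometric steps described above admit compact illustrations: depth from stereo disparity, and the template scale ratio used in the view-based matching. The function names and the rectified-camera assumption below are ours, not taken from the program itself:

```python
def triangulate(xl, xr, f, baseline):
    """Depth (along the optical axis) of a feature from its column
    coordinates in rectified left/right images: Z = f * B / disparity."""
    disparity = xl - xr          # in pixels; positive for points in front
    return f * baseline / disparity

def template_scale(z_template, z_new):
    """Factor by which to scale a feature template after rover motion:
    a feature appears larger in the image as its depth decreases."""
    return z_template / z_new
```

With focal length f = 500 px and baseline B = 0.1 m, a 5-pixel disparity corresponds to 10 m depth; halving the depth doubles the template's apparent size, which is exactly the ratio applied before correlation.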
Qu, Bin; Huang, Ying; Wang, Weiyuan; Sharma, Prateek; Kuhls-Gilcrist, Andrew T.; Cartwright, Alexander N.; Titus, Albert H.; Bednarek, Daniel R.; Rudin, Stephen
2011-01-01
Use of an extensible array of Electron Multiplying CCDs (EMCCDs) in medical x-ray imager applications was demonstrated for the first time. The large variable electronic-gain (up to 2000) and small pixel size of EMCCDs provide effective suppression of readout noise compared to signal, as well as high resolution, enabling the development of an x-ray detector with far superior performance compared to conventional x-ray image intensifiers and flat panel detectors. We are developing arrays of EMCCDs to overcome their limited field of view (FOV). In this work we report on an array of two EMCCD sensors running simultaneously at a high frame rate and optically focused on a mammogram film showing calcified ducts. The work was conducted on an optical table with a pulsed LED bar used to provide a uniform diffuse light onto the film to simulate x-ray projection images. The system can be selected to run at up to 17.5 frames per second or even higher frame rate with binning. Integration time for the sensors can be adjusted from 1 ms to 1000 ms. Twelve-bit correlated double sampling AD converters were used to digitize the images, which were acquired by a National Instruments dual-channel Camera Link PC board in real time. A user-friendly interface was programmed using LabVIEW to save and display 2K × 1K pixel matrix digital images. The demonstration tiles a 2 × 1 array to acquire increased-FOV stationary images taken at different gains and fluoroscopic-like videos recorded by scanning the mammogram simultaneously with both sensors. The results show high resolution and high dynamic range images stitched together with minimal adjustments needed. The EMCCD array design allows for expansion to an M×N array for arbitrarily larger FOV, yet with high resolution and large dynamic range maintained. PMID:23505330
Low-temperature and conventional scanning electron microscopy of human urothelial neoplasms.
Hopkins, D M; Morris, J A; Oates, K; Huddart, H; Staff, W G
1989-05-01
The appearance of neoplastic human urothelium viewed by low-temperature scanning electron microscopy (LTSEM) and conventional scanning electron microscopy (CSEM) was compared. Fixed, dehydrated neoplastic cells viewed by CSEM had well-defined, often raised cell junctions; no intercellular gaps; and varying degrees of pleomorphic surface microvilli. The frozen hydrated material viewed by LTSEM, however, was quite different. The cells had a flat or dimpled surface, but no microvilli. There were labyrinthine lateral processes which interdigitated with those of adjacent cells and outlined large intercellular gaps. The process of fixation and dehydration will inevitably distort cell contours and on theoretical grounds, the images of frozen hydrated material should more closely resemble the in vivo appearance.
High-Throughput Light Sheet Microscopy for the Automated Live Imaging of Larval Zebrafish
NASA Astrophysics Data System (ADS)
Baker, Ryan; Logan, Savannah; Dudley, Christopher; Parthasarathy, Raghuveer
The zebrafish is a model organism with a variety of useful properties; it is small and optically transparent, it reproduces quickly, it is a vertebrate, and there are a large variety of transgenic animals available. Because of these properties, the zebrafish is well suited to study using a variety of optical technologies including light sheet fluorescence microscopy (LSFM), which provides high-resolution three-dimensional imaging over large fields of view. Research progress, however, is often not limited by optical techniques but instead by the number of samples one can examine over the course of an experiment, which in the case of light sheet imaging has so far been severely limited. Here we present an integrated fluidic circuit and microscope which provides rapid, automated imaging of zebrafish using several imaging modes, including LSFM, Hyperspectral Imaging, and Differential Interference Contrast Microscopy. Using this system, we show that we can increase our imaging throughput by a factor of 10 compared to previous techniques. We also show preliminary results visualizing zebrafish immune response, which is sensitive to gut microbiota composition, and which shows a strong variability between individuals that highlights the utility of high throughput imaging. National Science Foundation, Award No. DBI-1427957.
Intact skull chronic windows for mesoscopic wide-field imaging in awake mice
Silasi, Gergely; Xiao, Dongsheng; Vanni, Matthieu P.; Chen, Andrew C. N.; Murphy, Timothy H.
2016-01-01
Background: Craniotomy-based window implants are commonly used for microscopic imaging in head-fixed rodents; however, their field of view is typically small and incompatible with mesoscopic functional mapping of cortex. New Method: We describe a reproducible and simple procedure for chronic through-bone wide-field imaging in awake head-fixed mice, providing stable optical access for chronic imaging over large areas of the cortex for months. Results: The preparation is produced by applying clear-drying dental cement to the intact mouse skull, followed by a glass coverslip to create a partially transparent imaging surface. Surgery takes about 30 minutes. A single set-screw provides a stable means of attachment for mesoscale assessment without obscuring the cortical field of view. Comparison with Existing Methods: We demonstrate the utility of this method by showing seed-pixel functional connectivity maps generated from spontaneous cortical activity of GCaMP6 signals in both awake and anesthetized mice. Conclusions: We propose that the intact skull preparation described here may be used for most longitudinal studies that do not require micron-scale resolution and where cortical neural or vascular signals are recorded with intrinsic sensors. PMID:27102043
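A seed-pixel functional connectivity map of the kind described can be sketched as the Pearson correlation of one pixel's time course against every other pixel; this is a generic illustration, and the function name and (time, height, width) data layout are assumptions:

```python
import numpy as np

def seed_pixel_map(stack, sy, sx):
    """Correlation map of the seed pixel's time course against all pixels.
    `stack` is a (time, height, width) wide-field fluorescence recording."""
    T, H, W = stack.shape
    flat = stack.reshape(T, -1).astype(float)
    flat -= flat.mean(axis=0)                 # remove each pixel's mean
    seed = flat[:, sy * W + sx]
    num = flat.T @ seed                       # covariances with the seed
    den = np.linalg.norm(flat, axis=0) * np.linalg.norm(seed) + 1e-12
    return (num / den).reshape(H, W)
```

Pixels whose spontaneous activity co-fluctuates with the seed appear with correlation near 1, tracing out functionally connected cortical regions.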
Popocatepetl from the Space Station
NASA Technical Reports Server (NTRS)
2002-01-01
Popocatepetl, or Popo, the active volcano located about 70 km southeast of Mexico City, sends a plume south on January 23, 2001. The astronaut crew on the International Space Station Alpha observed and recorded this image as they orbited to the northeast of the volcano. Popo has been frequently active for six years. On this day, the eruption plume reportedly rose to more than 9 km above sea level [for reference, Popo's summit elevation is 5426 m (17,800 feet)]. Note the smaller ash plume below the main plume (arrow). The perspective from the ISS allowed the astronauts this unique three-dimensional view. Popo is situated between two large population centers: Mexico City (more than 18 million people, just off the image to the right) and Puebla (about 1.2 million people). The region's dense population creates the potential for extreme impacts from volcanic hazards. Recent eruptions have been frequent and have resulted in evacuations around the mountain. The image ISS01-ESC-5316 is provided and archived by the Earth Sciences and Image Analysis Laboratory, Johnson Space Center. Additional images taken by astronauts can be viewed at NASA-JSC's Gateway to Astronaut Photography of Earth at http://eol.jsc.nasa.gov/
Expansion of the visual angle of a car rear-view image via an image mosaic algorithm
NASA Astrophysics Data System (ADS)
Wu, Zhuangwen; Zhu, Liangrong; Sun, Xincheng
2015-05-01
The rear-view image system is one of the active safety devices in cars and is widely applied in all types of vehicles and traffic-safety areas. However, previous studies by both domestic and foreign researchers were based on a single image-capture device, so a blind area remained for drivers while reversing. Even when multiple cameras were used to expand the visual angle of the car's rear-view image, the blind area remained because the different source images were not mosaicked together. To acquire an expanded visual angle of a car rear-view image, two charge-coupled device cameras with optical axes angled at 30 deg were mounted below the left and right fenders of a car, and rear-view heterologous images were captured in three lighting conditions: sunny outdoors, cloudy outdoors, and an underground garage. These rear-view heterologous images were rapidly registered using the scale-invariant feature transform (SIFT) algorithm. Combined with the random sample consensus (RANSAC) algorithm, the two heterologous images were finally mosaicked using the linear weighted gradated in-and-out fusion algorithm, and a seamless, visual-angle-expanded rear-view image was acquired. The four-index test results showed that the algorithms mosaicked rear-view images well even in the underground-garage condition, where the average rate of correct matching was the lowest among the three conditions. Compared to the mean value method (MVM) and the segmental fusion method (SFM), the presented rear-view image mosaic algorithm had the best information preservation, the shortest computation time, and the most complete preservation of image detail features; it also performed better in real time and provided more comprehensive image details.
The method introduced in this paper provides the basis for research on expanding the visual angle of a car rear-view image in all-weather conditions.
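The final fusion step named in the abstract, linear weighted gradated in-and-out blending, can be sketched for two already-registered images that share an overlap strip; the registration itself (SIFT matching plus RANSAC) is omitted, and the helper below is an illustrative assumption rather than the authors' implementation:

```python
import numpy as np

def blend_overlap(left, right, overlap):
    """Mosaic two row-aligned images whose last/first `overlap` columns
    depict the same scene, using a linear gradated in-and-out weight
    that fades the left image out as the right image fades in."""
    w = np.linspace(1.0, 0.0, overlap)                 # left-image weight
    seam = left[:, -overlap:] * w + right[:, :overlap] * (1 - w)
    return np.hstack([left[:, :-overlap], seam, right[:, overlap:]])
```

The linear ramp removes the visible seam that simple cut-and-paste stitching would leave at the overlap boundary.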
The Origin of Clusters and Large-Scale Structures: Panoramic View of the High-z Universe
NASA Astrophysics Data System (ADS)
Ouchi, Masami
We will report results of our on-going survey for proto-clusters and large-scale structures at z=3-6. We carried out very wide and deep optical imaging down to i=27 for a 1 deg^2 field of the Subaru/XMM Deep Field with 8.2m Subaru Telescope. We obtain maps of the Universe traced by ~1,000 Ly-a galaxies at z=3, 4, and 6 and by ~10,000 Lyman break galaxies at z=3-6. These cosmic maps have a transverse dimension of ~150 Mpc x 150 Mpc in comoving units at these redshifts, and provide us, for the first time, a panoramic view of the high-z Universe from the scales of galaxies, clusters to large-scale structures. Major results and implications will be presented in our talk. (Part of this work is subject to press embargo.)
Concurrent multiscale imaging with magnetic resonance imaging and optical coherence tomography
NASA Astrophysics Data System (ADS)
Liang, Chia-Pin; Yang, Bo; Kim, Il Kyoon; Makris, George; Desai, Jaydev P.; Gullapalli, Rao P.; Chen, Yu
2013-04-01
We develop a novel platform based on a tele-operated robot to perform high-resolution optical coherence tomography (OCT) imaging under continuous large field-of-view magnetic resonance imaging (MRI) guidance. Intra-operative MRI (iMRI) is a promising guidance tool for high-precision surgery, but it may not have sufficient resolution or contrast to visualize certain small targets. To address these limitations, we develop an MRI-compatible OCT needle probe, which is capable of providing microscale tissue architecture in conjunction with macroscale MRI tissue morphology in real time. Coregistered MRI/OCT images on ex vivo chicken breast and human brain tissues demonstrate that the complementary imaging scales and contrast mechanisms have great potential to improve the efficiency and the accuracy of iMRI procedure.
Narayanan, Shrikanth
2009-01-01
We describe a method for unsupervised region segmentation of an image using its spatial frequency domain representation. The algorithm was designed to process large sequences of real-time magnetic resonance (MR) images containing the 2-D midsagittal view of a human vocal tract airway. The segmentation algorithm uses an anatomically informed object model, whose fit to the observed image data is hierarchically optimized using a gradient descent procedure. The goal of the algorithm is to automatically extract the time-varying vocal tract outline and the position of the articulators to facilitate the study of the shaping of the vocal tract during speech production. PMID:19244005
An electronic pan/tilt/zoom camera system
NASA Technical Reports Server (NTRS)
Zimmermann, Steve; Martin, H. Lee
1991-01-01
A camera system for omnidirectional image viewing applications that provides pan, tilt, zoom, and rotational orientation within a hemispherical field of view (FOV) using no moving parts was developed. The imaging device is based on the principle that the distorted image from a fisheye lens, which produces a circular image of an entire hemispherical FOV, can be mathematically corrected using high-speed electronic circuitry. An incoming fisheye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video signal for viewing, recording, or analysis. As a result, this device can accomplish the functions of pan, tilt, rotation, and zoom throughout a hemispherical FOV without the need for any mechanical mechanisms. A programmable transformation processor provides flexible control over viewing situations. Multiple images, each with different magnifications and pan, tilt, and rotation parameters, can be obtained from a single camera. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment.
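The electronic pan/tilt/zoom correction described above amounts to mapping each output-view direction back to a source pixel in the circular fisheye image. A minimal sketch of that inverse mapping, assuming an idealized equidistant fisheye model (radius proportional to the angle off the optical axis) rather than the device's actual correction circuitry; `fisheye_pixel` and its parameters are illustrative names, not from the paper:

```python
import numpy as np

def fisheye_pixel(direction, f, cx, cy):
    """Map a unit view direction (x, y, z), with z along the optical
    axis, to pixel coordinates in an equidistant-projection fisheye
    image centered at (cx, cy) with focal scale f (pixels/radian)."""
    x, y, z = direction
    alpha = np.arccos(z)        # angle from the optical axis
    r = f * alpha               # equidistant model: r = f * alpha
    phi = np.arctan2(y, x)      # azimuth in the image plane
    return cx + r * np.cos(phi), cy + r * np.sin(phi)
```

Dewarping a view then consists of evaluating this mapping for every pixel's direction in the desired pan/tilt/zoom window and resampling the fisheye image at the returned coordinates.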
2011-06-11
ISS028-E-008604 (11 June 2011) --- A night view of the southern Italian Peninsula is featured in this image photographed by an Expedition 28 crew member on the International Space Station. The Earth’s surface at night is covered with a delicate tracery of lights, particularly in regions – such as Europe – that have a long history of urban development. Large urban areas are recognizable from orbit due to extensive electric lighting and distinct street patterns; with smaller urban areas spread across the land surface and coastlines, the outlines of continental landmasses are easily discernable at night. This photograph highlights the night time appearance of the southern Italian Peninsula; the toe and heel of Italy’s “boot” are clearly defined by the lights of large cities such as Naples, Bari, and Brindisi as well as numerous smaller urban areas. The bordering Adriatic, Tyrrhenian, and Ionian Seas appear as dark regions to the east, west, and south of the boot. The city lights of Palermo and Catania on the island of Sicily are visible at image bottom center. The space station was located over an area of Romania close to the capital city of Bucharest – approximately 945 kilometers to the northeast—at the time this image was taken. Part of a docked Russian spacecraft solar panel array is visible in the foreground at left. The distance between the image subject area and the position of the photographer, as well as the viewing angle looking outwards from the space station, contributes to the distorted appearance of the Italian Peninsula and Sicily in the image.
Trigger and Readout System for the Ashra-1 Detector
NASA Astrophysics Data System (ADS)
Aita, Y.; Aoki, T.; Asaoka, Y.; Morimoto, Y.; Motz, H. M.; Sasaki, M.; Abiko, C.; Kanokohata, C.; Ogawa, S.; Shibuya, H.; Takada, T.; Kimura, T.; Learned, J. G.; Matsuno, S.; Kuze, S.; Binder, P. M.; Goldman, J.; Sugiyama, N.; Watanabe, Y.
A highly sophisticated trigger and readout system has been developed for the All-sky Survey High Resolution Air-shower (Ashra) detector. The Ashra-1 detector has a 42-degree-diameter field of view. Detecting Cherenkov and fluorescence light against the large background in such a wide field of view requires a finely segmented, high-speed trigger and readout system. The system is composed of an optical fiber image transmission system, a 64 × 64 channel trigger sensor, and an FPGA-based trigger logic processor. The system typically processes the image within 10 to 30 ns and opens the shutter on the fine CMOS sensor. The 64 × 64 coarse split image is transferred via a 64 × 64 precisely aligned optical fiber bundle to a photon sensor. Current signals from the photon sensor are discriminated by custom-made trigger amplifiers. The FPGA-based processor processes the 64 × 64 hit pattern, and the corresponding partial area of the fine image is acquired. A commissioning earth-skimming tau neutrino observational search was carried out with this trigger system. In addition to the geometrical advantage of the Ashra observational site, the excellent tau shower axis measurement based on the fine imaging and the night-sky background rejection based on the fine and fast imaging allow a zero-background tau shower search. Adoption of the optical fiber bundle and the trigger LSI realized a 4k-channel trigger system at low cost. Detectability of tau showers is also confirmed by simultaneously observed Cherenkov air showers. Reducing the trigger threshold appears to enhance the effective area, especially in the PeV tau neutrino energy region. A new two-dimensional trigger LSI was introduced and the trigger threshold was lowered. A new calibration system for the trigger was recently developed and installed on the Ashra detector.
Human Cortical Activity Evoked by the Assignment of Authenticity when Viewing Works of Art
Huang, Mengfei; Bridge, Holly; Kemp, Martin J.; Parker, Andrew J.
2011-01-01
The expertise of others is a major social influence on our everyday decisions and actions. Many viewers of art, whether expert or naïve, are convinced that the full esthetic appreciation of an artwork depends upon the assurance that the work is genuine rather than fake. Rembrandt portraits provide an interesting image set for testing this idea, as there is a large number of them and recent scholarship has determined that quite a few fakes and copies exist. Use of this image set allowed us to separate the brain’s response to images of genuine and fake pictures from the brain’s response to external advice about the authenticity of the paintings. Using functional magnetic resonance imaging, viewing of artworks assigned as “copy,” rather than “authentic,” evoked stronger responses in frontopolar cortex (FPC), and right precuneus, regardless of whether the portrait was actually genuine. Advice about authenticity had no direct effect on the cortical visual areas responsive to the paintings, but there was a significant psycho-physiological interaction between the FPC and the lateral occipital area, which suggests that these visual areas may be modulated by FPC. We propose that the activation of brain networks rather than a single cortical area in this paradigm supports the art scholars’ view that esthetic judgments are multi-faceted and multi-dimensional in nature. PMID:22164139
Perspective View with Landsat Overlay, San Jose, Costa Rica
NASA Technical Reports Server (NTRS)
2002-01-01
This perspective view shows the capital city of San Jose, Costa Rica, the gray area in the center of the image. The view is toward the northwest with the Pacific Ocean in the distance and shows a portion of the Meseta Central (Central Valley), home to about a third of Costa Rica's population.
Like much of Central America, Costa Rica is generally cloud covered, so very little satellite imagery is available. The ability of the Shuttle Radar Topography Mission (SRTM) instrument to penetrate clouds and make three-dimensional measurements will allow generation of the first complete high-resolution topographic map of the entire region. These data were used to generate the image. This three-dimensional perspective view was generated using elevation data from SRTM and an enhanced false-color Landsat 7 satellite image. Colors are from Landsat bands 5, 4, and 2 as red, green and blue, respectively. Topographic expression is exaggerated two times. Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data matches the 30-meter resolution of most Landsat images and will substantially help in analyses of the large and growing Landsat image archive. The Landsat 7 Thematic Mapper image used here was provided to the SRTM by the United States Geological Survey, Earth Resources Observation Systems (EROS) Data Center, Sioux Falls, S.D. Elevation data used in this image was acquired by the SRTM aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense (DoD), and the German and Italian space agencies.
It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C. Size: scale varies in this perspective image Location: 10.0 degrees North latitude, 83.8 degrees West longitude Orientation: looking Northwest Image Data: Landsat Bands 5, 4, 3 as red, green, blue respectively Original Data Resolution: SRTM 30 meters (99 feet) Date Acquired: February 2000 (SRTM)
Multi-viewer tracking integral imaging system and its viewing zone analysis.
Park, Gilbae; Jung, Jae-Hyun; Hong, Keehoon; Kim, Yunhee; Kim, Young-Hoon; Min, Sung-Wook; Lee, Byoungho
2009-09-28
We propose a multi-viewer tracking integral imaging system that improves the viewing angle and viewing zone. In the tracking integral imaging system, the pickup angle of each elemental lens in the lens array is determined by the viewers' positions, which means the elemental image can be generated for each viewer to provide a wider viewing angle and a larger viewing zone. Our tracking integral imaging system is implemented with an infrared camera and infrared light-emitting diodes, which track the viewers' exact positions robustly. For multiple viewers to watch integrated three-dimensional images in the tracking integral imaging system, it is necessary to formulate the relationship between the viewers' positions and the elemental images. We analyzed this relationship and the conditions for multiple viewers, and verified them by implementing a two-viewer tracking integral imaging system.
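The relationship between viewer position and elemental images formulated above is, at its core, projective geometry: each elemental image should be centered where the line from the viewer through its lens center meets the display plane. A hedged one-dimensional sketch under a pinhole/similar-triangles assumption; the function name, units, and parameterization are illustrative, not the paper's:

```python
def elemental_image_center(lens_x, gap, viewer_x, viewer_dist):
    """Project the viewer's position through a lens centre onto the
    display plane to find where that lens's elemental image should be
    centred. All positions in mm along one axis; `gap` is the
    lens-array-to-display distance, `viewer_dist` the lens-array-to-
    viewer distance.

    Similar triangles: (shift / gap) = (lens_x - viewer_x) / viewer_dist
    """
    return lens_x + gap * (lens_x - viewer_x) / viewer_dist
```

An on-axis viewer leaves the elemental image centered under its lens; an off-axis viewer shifts every elemental image slightly away from the viewer, which is what steers the reconstructed rays toward the tracked position.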
NASA Astrophysics Data System (ADS)
Spiridonov, I.; Shopova, M.; Boeva, R.; Nikolov, M.
2012-05-01
One of the biggest problems in color reproduction processes is the color shift that occurs when images are viewed under different illuminants. Process ink colors and their combinations that match under one light source will often appear different under another light source. This problem is referred to as color balance failure or color inconstancy. The main goals of the present study are to investigate and determine the color balance failure (color inconstancy) of offset-printed images, expressed by color difference and color gamut changes, for three of the illuminants most commonly used in practice: CIE D50, CIE F2, and CIE A. The results obtained are important from both a scientific and a practical point of view. For the first time, a methodology is suggested and implemented for the examination and estimation of color shifts by studying a large number of color and gamut changes in various ink combinations under different illuminants.
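Color inconstancy of the kind measured here is typically quantified as the CIELAB distance between the same printed patch's coordinates computed under two illuminants. A minimal sketch using the CIE76 color-difference formula; the abstract does not state which ΔE formula the study used, so treating it as CIE76 is an assumption for illustration:

```python
import numpy as np

def delta_e_ab(lab_illum_1, lab_illum_2):
    """CIE76 colour difference Delta E*ab between two CIELAB triples
    (L*, a*, b*), e.g. the same ink patch evaluated under CIE D50 and
    CIE A. Euclidean distance in Lab space."""
    return float(np.linalg.norm(np.asarray(lab_illum_1, dtype=float)
                                - np.asarray(lab_illum_2, dtype=float)))
```

A patch whose Lab coordinates move from (50, 0, 0) under D50 to (50, 3, 4) under illuminant A has ΔE*ab = 5, well above the roughly 2-3 units usually taken as noticeable.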
2016-10-03
Two tiny moons of Saturn, almost lost amid the planet's enormous rings, are seen orbiting in this image. Pan, visible within the Encke Gap near lower-right, is in the process of overtaking the slower Atlas, visible at upper-left. All orbiting bodies, large and small, follow the same basic rules. In this case, Pan (17 miles or 28 kilometers across) orbits closer to Saturn than Atlas (19 miles or 30 kilometers across). According to the rules of planetary motion deduced by Johannes Kepler over 400 years ago, Pan orbits the planet faster than Atlas does. This view looks toward the sunlit side of the rings from about 39 degrees above the ring plane. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on July 9, 2016. The view was acquired at a distance of approximately 3.4 million miles (5.5 million kilometers) from Atlas and at a Sun-Atlas-spacecraft, or phase, angle of 71 degrees. Image scale is 21 miles (33 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20501
Wide field imaging problems in radio astronomy
NASA Astrophysics Data System (ADS)
Cornwell, T. J.; Golap, K.; Bhatnagar, S.
2005-03-01
The new generation of synthesis radio telescopes now being proposed, designed, and constructed face substantial problems in making images over wide fields of view. Such observations are required either to achieve the full sensitivity limit in crowded fields or for surveys. The Square Kilometre Array (SKA Consortium, Tech. Rep., 2004), now being developed by an international consortium of 15 countries, will require advances well beyond the current state of the art. We review the theory of synthesis radio telescopes for large fields of view. We describe a new algorithm, W projection, for correcting the non-coplanar baselines aberration. This algorithm has improved performance over those previously used (typically an order of magnitude in speed). Despite the advent of W projection, the computing hardware required for SKA wide field imaging is estimated to cost up to $500M (2015 dollars). This is about half the target cost of the SKA. Reconfigurable computing is one way in which the costs can be decreased dramatically.
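W projection corrects the non-coplanar baselines aberration by convolving each visibility's gridding kernel with the Fourier transform of a w-dependent phase screen. A sketch of that phase term as it appears in the W-projection literature (Cornwell et al.); the sign convention varies between implementations, so this is one common choice rather than a definitive form:

```python
import numpy as np

def w_kernel(l, m, w):
    """Non-coplanar baseline phase screen
        G(l, m; w) = exp(-2*pi*i * w * (sqrt(1 - l^2 - m^2) - 1)),
    the term whose Fourier transform W projection folds into the
    gridding kernel. (l, m) are direction cosines, w in wavelengths."""
    n = np.sqrt(1.0 - l**2 - m**2)
    return np.exp(-2j * np.pi * w * (n - 1.0))
```

At the field center (l = m = 0) or for a coplanar array (w = 0) the screen reduces to unity, recovering the ordinary 2-D Fourier relationship; far from the center with large w it oscillates rapidly, which is exactly the wide-field distortion the algorithm removes.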
Chip-based wide field-of-view nanoscopy
NASA Astrophysics Data System (ADS)
Diekmann, Robin; Helle, Øystein I.; Øie, Cristina I.; McCourt, Peter; Huser, Thomas R.; Schüttpelz, Mark; Ahluwalia, Balpreet S.
2017-04-01
Present optical nanoscopy techniques use a complex microscope for imaging and a simple glass slide to hold the sample. Here, we demonstrate the inverse: the use of a complex, but mass-producible optical chip, which hosts the sample and provides a waveguide for the illumination source, and a standard low-cost microscope to acquire super-resolved images via two different approaches. Waveguides composed of a material with high refractive-index contrast provide a strong evanescent field that is used for single-molecule switching and fluorescence excitation, thus enabling chip-based single-molecule localization microscopy. Additionally, multimode interference patterns induce spatial fluorescence intensity variations that enable fluctuation-based super-resolution imaging. As chip-based nanoscopy separates the illumination and detection light paths, total-internal-reflection fluorescence excitation is possible over a large field of view, with up to 0.5 mm × 0.5 mm being demonstrated. Using multicolour chip-based nanoscopy, we visualize fenestrations in liver sinusoidal endothelial cells.
The Advanced Gamma-ray Imaging System (AGIS): Real Time Stereoscopic Array Trigger
NASA Astrophysics Data System (ADS)
Byrum, K.; Anderson, J.; Buckley, J.; Cundiff, T.; Dawson, J.; Drake, G.; Duke, C.; Haberichter, B.; Krawzcynski, H.; Krennrich, F.; Madhavan, A.; Schroedter, M.; Smith, A.
2009-05-01
Future large arrays of Imaging Atmospheric Cherenkov Telescopes (IACTs) such as AGIS and CTA are conceived to comprise 50-100 individual telescopes, each having a camera with 10^3 to 10^4 pixels. To maximize the capabilities of such IACT arrays with a low energy threshold, a wide field of view, and a low background rate, a sophisticated array trigger is required. We describe the design of a stereoscopic array trigger that calculates image parameters and then correlates them across a subset of telescopes. Fast Field Programmable Gate Array (FPGA) technology allows the use of lookup tables at the array trigger level to form a real-time pattern-recognition trigger that capitalizes on the multiple view points of the shower at different shower core distances. A proof-of-principle system is currently under construction. It is based on 400 MHz FPGAs, and the goal is camera trigger rates of up to 10 MHz and tunable cosmic-ray background suppression at the array level.
Boot of Italy taken during Expedition Six
2003-02-25
ISS006-E-33736 (25 February 2003) --- The boot of Italy crosses the image in this southwest-looking view taken by an Expedition Six crewmember onboard the International Space Station (ISS). The spine of Italy is highlighted with snow and the largely cloud-covered Mediterranean Sea is at the top. The Adriatic Sea traverses most of the bottom of the image and Sicily appears top left beyond the toe of the boot. The heel lies out of the left side of the image. Corsica and Sardinia appear right of center, partly under cloud. The floor of the Po River valley, lower right, is obscured by haze. Experience gained from similar haze events, in which atmospheric pressure, humidity, visibility, and atmospheric chemistry were known, suggests that the haze is industrial smog. Industrial haze from the urban region of the central and upper Po valley accumulates to visible concentrations under conditions of high atmospheric pressure, and the surrounding mountains prevent easy dispersal. This view illustrates the markedly different color and texture of cloud versus industrial aerosol haze.
NOAA Photo Library - Navigating the Collection
Normalized distance aggregation of discriminative features for person reidentification
NASA Astrophysics Data System (ADS)
Hou, Li; Han, Kang; Wan, Wanggen; Hwang, Jenq-Neng; Yao, Haiyan
2018-03-01
We propose an effective person reidentification method based on normalized distance aggregation of discriminative features. Our framework is built on the integration of three high-performance discriminative feature extraction models, including local maximal occurrence (LOMO), feature fusion net (FFN), and a concatenation of LOMO and FFN called LOMO-FFN, through two fast and discriminant metric learning models, i.e., cross-view quadratic discriminant analysis (XQDA) and large-scale similarity learning (LSSL). More specifically, we first represent all the cross-view person images using LOMO, FFN, and LOMO-FFN, respectively, and then apply each extracted feature representation to train XQDA and LSSL, respectively, to obtain the optimized individual cross-view distance metric. Finally, the cross-view person matching is computed as the sum of the optimized individual cross-view distance metric through the min-max normalization. Experimental results have shown the effectiveness of the proposed algorithm on three challenging datasets (VIPeR, PRID450s, and CUHK01).
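The final aggregation step described above, min-max normalization of each metric's cross-view distance matrix followed by summation, can be sketched directly; the helper name and the list-of-matrices input layout are illustrative assumptions:

```python
import numpy as np

def aggregate_distances(distance_matrices):
    """Min-max normalise each cross-view distance matrix to [0, 1] and
    sum them, so that no single feature/metric combination dominates
    the final ranking by virtue of its raw distance scale."""
    total = np.zeros_like(np.asarray(distance_matrices[0], dtype=float))
    for d in distance_matrices:
        d = np.asarray(d, dtype=float)
        total += (d - d.min()) / (d.max() - d.min())
    return total
```

Ranking gallery identities by the row-wise order of the aggregated matrix then gives the final matching, with each of the six feature/metric pairings contributing equally.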
High-performance web viewer for cardiac images
NASA Astrophysics Data System (ADS)
dos Santos, Marcelo; Furuie, Sergio S.
2004-04-01
With the advent of digital devices for medical diagnosis, the use of regular film in radiology has decreased. Thus, the management and handling of medical images in digital format has become an important and critical task. In Cardiology, for example, the main difficulty is displaying dynamic images with the appropriate color palette and frame rate used in the acquisition process by Cath, Angio, and Echo systems. Another difficulty is handling large images in memory on any existing personal computer, including thin clients. In this work we present a web-based application that carries out these tasks with robustness and excellent performance, without burdening the server and network. This application provides near-diagnostic-quality display of cardiac images stored as DICOM 3.0 files via a web browser and provides a set of resources for viewing both still and dynamic images. It can access image files from local disks or a network connection. Its features include real-time playback, dynamic thumbnail viewing during loading, access to patient database information, image processing tools, linear and angular measurements, on-screen annotations, image printing, exporting DICOM images to other image formats, and many others, all within a pleasant, user-friendly interface presented inside a web browser by means of a Java application. This approach offers several advantages over most medical image viewers, such as ease of installation, integration with other systems by means of public and standardized interfaces, platform independence, and efficient manipulation and display of medical images, all with high performance.
Automated Leaf Tracking using Multi-view Image Sequences of Maize Plants for Leaf-growth Monitoring
NASA Astrophysics Data System (ADS)
Das Choudhury, S.; Awada, T.; Samal, A.; Stoerger, V.; Bashyam, S.
2017-12-01
Extraction of phenotypes with botanical importance by analyzing plant image sequences has the desirable advantage of non-destructive temporal phenotypic measurement of a large number of plants with little or no manual intervention in a relatively short period of time. The health of a plant is best interpreted from the emergence timing and temporal growth of individual leaves. For automated leaf-growth monitoring, it is essential to track each leaf throughout the life cycle of the plant. Plants are constantly changing organisms whose architecture grows more complex due to variations in self-occlusion and phyllotaxy, i.e., the arrangement of leaves around the stem. Leaf cross-overs make it challenging to track each leaf accurately using a single-view image sequence. Thus, we introduce a novel automated leaf tracking algorithm using a graph-theoretic approach to multi-view image sequence analysis, based on the determination of leaf tips and leaf junctions in 3D space. The basis of the leaf tracking algorithm is that the leaves of a maize plant emerge bottom-up and that leaf emergence strictly alternates in direction. The algorithm involves labeling the individual parts of a plant, i.e., leaves and stem, following a graphical representation of the plant skeleton, i.e., the one-pixel-wide connected line obtained from the binary image. The length of a leaf is measured by the number of pixels in the leaf skeleton. To evaluate the performance of the algorithm, a benchmark dataset is indispensable. Thus, we publicly release the University of Nebraska-Lincoln Component Plant Phenotyping dataset-2 (UNL-CPPD-2), consisting of images of 20 maize plants captured by the visible-light camera of the Lemnatec Scanalyzer 3D high-throughput plant phenotyping facility once daily for 60 days from 10 different views. The dataset is intended to facilitate the development and evaluation of leaf tracking algorithms and their uniform comparison.
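The stated basis of the tracking algorithm, bottom-up emergence with strictly alternating direction, suggests a simple identity-assignment rule once leaf junctions have been located in 3D. A hedged sketch, with the input representation (one junction height per detected leaf) and the function name assumed purely for illustration; the paper's full graph-theoretic matching is not reproduced here:

```python
def assign_leaf_ids(junction_heights, first_side="left"):
    """Order detected leaf junctions bottom-up along the stem and
    assign sequential leaf IDs with strictly alternating emergence
    direction (the maize phyllotaxy assumption stated in the paper).

    Returns {input_index: (leaf_id, side)}."""
    order = sorted(range(len(junction_heights)),
                   key=lambda i: junction_heights[i])
    ids, side = {}, first_side
    for leaf_id, idx in enumerate(order, start=1):
        ids[idx] = (leaf_id, side)
        side = "right" if side == "left" else "left"
    return ids
```

Because the ordering depends only on junction height along the stem, the same IDs are recovered in each day's frame, which is what lets per-leaf growth curves be accumulated over the 60-day sequence.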
Imaging and identification of waterborne parasites using a chip-scale microscope.
Lee, Seung Ah; Erath, Jessey; Zheng, Guoan; Ou, Xiaoze; Willems, Phil; Eichinger, Daniel; Rodriguez, Ana; Yang, Changhuei
2014-01-01
We demonstrate a compact portable imaging system for the detection of waterborne parasites in resource-limited settings. The previously demonstrated sub-pixel sweeping microscopy (SPSM) technique is a lens-less imaging scheme that can achieve high-resolution (<1 µm) bright-field imaging over a large field-of-view (5.7 mm×4.3 mm). A chip-scale microscope system, based on the SPSM technique, can be used for automated and high-throughput imaging of protozoan parasite cysts for the effective diagnosis of waterborne enteric parasite infection. We successfully imaged and identified three major types of enteric parasite cysts, Giardia, Cryptosporidium, and Entamoeba, which can be found in fecal samples from infected patients. We believe that this compact imaging system can serve well as a diagnostic device in challenging environments, such as rural settings or emergency outbreaks.
Attractive celebrity and peer images on Instagram: Effect on women's mood and body image.
Brown, Zoe; Tiggemann, Marika
2016-12-01
A large body of research has documented that exposure to images of thin fashion models contributes to women's body dissatisfaction. The present study aimed to experimentally investigate the impact of attractive celebrity and peer images on women's body image. Participants were 138 female undergraduate students who were randomly assigned to view either a set of celebrity images, a set of equally attractive unknown peer images, or a control set of travel images. All images were sourced from public Instagram profiles. Results showed that exposure to celebrity and peer images increased negative mood and body dissatisfaction relative to travel images, with no significant difference between celebrity and peer images. This effect was mediated by state appearance comparison. In addition, celebrity worship moderated an increased effect of celebrity images on body dissatisfaction. It was concluded that exposure to attractive celebrity and peer images can be detrimental to women's body image. Copyright © 2016 Elsevier Ltd. All rights reserved.
Influence of speckle image reconstruction on photometric precision for large solar telescopes
NASA Astrophysics Data System (ADS)
Peck, C. L.; Wöger, F.; Marino, J.
2017-11-01
Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.
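The reconstruction step, deconvolving the observed image with the model speckle transfer function, is commonly implemented as a regularized division in the Fourier domain. A sketch of a Wiener-style variant, assuming the transfer function is supplied already sampled on the image's Fourier grid; the paper's exact estimator and regularization are not specified in the abstract, so this is illustrative only:

```python
import numpy as np

def deconvolve(observed, stf, eps=1e-3):
    """Deconvolve an observed image by a speckle transfer function
    given on the same FFT grid. The small floor eps keeps frequencies
    where |stf| is tiny from amplifying noise (Wiener-style)."""
    obs_f = np.fft.fft2(observed)
    rec_f = obs_f * np.conj(stf) / (np.abs(stf) ** 2 + eps)
    return np.real(np.fft.ifft2(rec_f))
```

The sensitivity analysis in the paper then corresponds to perturbing `stf` (via its model input parameters) and comparing the recovered intensities against the original image.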
Perspective View with Landsat Overlay, Palm Springs, Calif.
NASA Technical Reports Server (NTRS)
2002-01-01
The city of Palm Springs nestles at the base of Mount San Jacinto in this computer-generated perspective viewed from the east. The many golf courses in the area show up as irregular green areas while the two prominent lines passing through the middle of the image are Interstate 10 and the adjacent railroad tracks. The San Andreas Fault passes through the middle of the sandy Indio Hills in the foreground.
This 3-D perspective view was generated using topographic data from the Shuttle Radar Topography Mission (SRTM) and an enhanced color Landsat 5 satellite image. Topographic expression is exaggerated two times. Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data matches the 30-meter (98-foot) resolution of most Landsat images and will substantially help in analyzing the large and growing Landsat image archive. Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C. Size: scale varies in this perspective image Location: 33.8 deg. North lat., 116.3 deg. West lon. Orientation: looking west Image Data: Landsat Bands 3, 2, 1 as red, green, blue, respectively Original Data Resolution: SRTM 1 arcsecond (30 meters or 98 feet), Thematic Mapper 1 arcsecond (30 meters or 98 feet) Date Acquired: February 2000 (SRTM)
Volga Delta and the Caspian Sea
NASA Technical Reports Server (NTRS)
2002-01-01
Russia's Volga River is the largest river system in Europe, draining over 1.3 million square kilometers of catchment area into the Caspian Sea. The brackish Caspian is Earth's largest landlocked water body, and its isolation from the world's oceans has enabled the preservation of several unique animal and plant species. The Volga provides most of the Caspian's fresh water and nutrients, and also discharges large amounts of sediment and industrial waste into the relatively shallow northern part of the sea. These images of the region were captured by the Multi-angle Imaging SpectroRadiometer on October 5, 2001, during Terra orbit 9567. Each image represents an area of approximately 275 kilometers x 376 kilometers. The left-hand image is from MISR's nadir (vertical-viewing) camera, and shows how light is reflected at red, green, and blue wavelengths. The right-hand image is a false color composite of red-band imagery from MISR's 60-degree backward, nadir, and 60-degree forward-viewing cameras, displayed as red, green, and blue, respectively. Here, color variations indicate how light is reflected at different angles of view. Water appears blue in the right-hand image, for example, because sun glitter makes smooth, wet surfaces look brighter at the forward camera's view angle. The rougher-textured vegetated wetlands near the coast exhibit preferential backscattering, and consequently appear reddish. A small cloud near the center of the delta separates into red, green, and blue components due to geometric parallax associated with its elevation above the surface. Other notable features within the images include several linear features located near the Volga Delta shoreline. These long, thin lines are artificially maintained shipping channels, dredged to depths of at least 2 meters.
The crescent-shaped Kulaly Island, also known as Seal Island, is visible near the right-hand edge of the images. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
JHelioviewer: Open-Source Software for Discovery and Image Access in the Petabyte Age (Invited)
NASA Astrophysics Data System (ADS)
Mueller, D.; Dimitoglou, G.; Langenberg, M.; Pagel, S.; Dau, A.; Nuhn, M.; Garcia Ortiz, J. P.; Dietert, H.; Schmidt, L.; Hughitt, V. K.; Ireland, J.; Fleck, B.
2010-12-01
The unprecedented torrent of data returned by the Solar Dynamics Observatory is both a blessing and a barrier: a blessing for making available data with significantly higher spatial and temporal resolution, but a barrier for scientists to access, browse and analyze them. With such staggering data volume, the data is bound to be accessible only from a few repositories, and users will have to deal with data sets that are effectively immobile and practically difficult to download. From a scientist's perspective this poses three challenges: accessing, browsing and finding interesting data while avoiding the proverbial search for a needle in a haystack. To address these challenges, we have developed JHelioviewer, open-source visualization software that lets users browse large data volumes both as still images and movies. We did so by deploying an efficient image encoding, storage, and dissemination solution using the JPEG 2000 standard. This solution enables users to access remote images at different resolution levels as a single data stream. Users can view, manipulate, pan, zoom, and overlay JPEG 2000 compressed data quickly, without severe network bandwidth penalties. Besides viewing data, the browser provides third-party metadata and event catalog integration to quickly locate data of interest, as well as an interface to the Virtual Solar Observatory to download science-quality data. As part of the Helioviewer Project, JHelioviewer offers intuitive ways to browse large amounts of heterogeneous data remotely and provides an extensible and customizable open-source platform for the scientific community.
Cross calibration of GF-1 satellite wide field of view sensor with Landsat 8 OLI and HJ-1A HSI
NASA Astrophysics Data System (ADS)
Liu, Li; Gao, Hailiang; Pan, Zhiqiang; Gu, Xingfa; Han, Qijin; Zhang, Xuewen
2018-01-01
This paper focuses on cross calibrating the GaoFen (GF-1) satellite wide field of view (WFV) sensor using the Landsat 8 Operational Land Imager (OLI) and HuanJing-1A (HJ-1A) hyperspectral imager (HSI) as reference sensors. Two methods are proposed to calculate the spectral band adjustment factor (SBAF). One is based on the HJ-1A HSI image and the other is based on ground-measured reflectance. However, the HSI image and the ground-measured reflectance were acquired on dates different from those of the WFV and OLI overpasses. Three groups of regions of interest (ROIs) were chosen for cross calibration, based on different selection criteria. Cross-calibration gains with nonzero and zero offsets were both calculated. The results confirmed that the gains with zero offset were better, as they were more consistent across different groups of ROIs and SBAF calculation methods. The uncertainty of this cross calibration was analyzed, and the influence of SBAF was calculated based on different HSI images and ground reflectance spectra. The results showed that the uncertainty of SBAF was <3% for bands 1 to 3. Two other large uncertainties in this cross calibration were atmospheric variation and low ground reflectance.
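An SBAF of the kind described here is typically a ratio of band-averaged reflectances, obtained by weighting a hyperspectral profile (from the HSI image or from ground measurements) with each sensor's relative spectral response (RSR). The sketch below is a hedged illustration of that calculation: the spectral curves are made up, and the reference/target sign convention varies between papers.

```python
import numpy as np

def band_reflectance(rho, rsr):
    """Band-averaged reflectance: RSR-weighted mean of a spectrum sampled
    on a uniform wavelength grid."""
    return np.average(rho, weights=rsr)

def sbaf(rho, rsr_ref, rsr_target):
    """Spectral band adjustment factor relating a reference band (e.g. OLI)
    to a target band (e.g. WFV). Convention here: ref / target."""
    return band_reflectance(rho, rsr_ref) / band_reflectance(rho, rsr_target)

# Toy hyperspectral profile, e.g. resampled from one HSI pixel -- illustrative values.
wl = np.linspace(450, 520, 71)                    # nm
rho = 0.10 + 0.001 * (wl - 450) / 70              # slowly varying reflectance
rsr_ref = np.exp(-0.5 * ((wl - 480) / 10) ** 2)   # hypothetical reference band RSR
rsr_tgt = np.exp(-0.5 * ((wl - 490) / 12) ** 2)   # hypothetical target band RSR
f = sbaf(rho, rsr_ref, rsr_tgt)
```

For a spectrally flat target the factor is exactly 1; the closer the two RSRs, the closer the factor stays to unity, which is why SBAF uncertainty depends on both the spectrum and the band mismatch.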
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osin, D.; Schindler, T., E-mail: dosin@trialphaenergy.com
2016-11-15
A dual wavelength imaging system has been developed and installed on C-2U to capture 2D images of a He jet in the Scrape-Off Layer (SOL) of an advanced beam-driven Field-Reversed Configuration (FRC) plasma. The system was designed to optically split two identical images and pass them through 1 nm FWHM filters. Dual wavelength images are focused adjacent on a large format CCD chip and recorded simultaneously with a time resolution down to 10 μs using a gated micro-channel plate. The relatively compact optical system images a 10 cm plasma region with a spatial resolution of 0.2 cm and can be used in a harsh environment with high electro-magnetic noise and high magnetic field. The dual wavelength imaging system provides 2D images of either electron density or temperature by observing spectral line pairs emitted by He jet atoms in the SOL. A large field of view, combined with good space and time resolution of the imaging system, allows visualization of macro-flows in the SOL. First 2D images of the electron density and temperature observed in the SOL of the C-2U FRC are presented.
A compact large-format streak tube for imaging lidar
NASA Astrophysics Data System (ADS)
Hui, Dandan; Luo, Duan; Tian, Liping; Lu, Yu; Chen, Ping; Wang, Junfeng; Sai, Xiaofeng; Wen, Wenlong; Wang, Xing; Xin, Liwei; Zhao, Wei; Tian, Jinshou
2018-04-01
The streak tubes with a large effective photocathode area, large effective phosphor screen area, and high photocathode radiant sensitivity are essential for improving the field of view, depth of field, and detectable range of the multiple-slit streak tube imaging lidar. In this paper, a high spatial resolution, large photocathode area, and compact meshless streak tube with a spherically curved cathode and screen is designed and tested. Its spatial resolution reaches 20 lp/mm over the entire Φ28 mm photocathode working area, and the simulated physical temporal resolution is better than 30 ps. The temporal distortion in our large-format streak tube, which is shown to be a non-negligible factor, has a minimum value as the radius of curvature of the photocathode varies. Furthermore, the photocathode radiant sensitivity and radiant power gain reach 41 mA/W and 18.4 at the wavelength of 550 nm, respectively. Most importantly, the external dimensions of our streak tube are no more than Φ60 mm × 110 mm.
Chan, K L Andrew; Kazarian, Sergei G
2008-10-01
Attenuated total reflection-Fourier transform infrared (ATR-FT-IR) imaging is a very useful tool for capturing chemical images of various materials due to the simple sample preparation and the ability to measure wet samples or samples in an aqueous environment. However, the size of the array detector used for image acquisition is often limited and there is usually a trade-off between spatial resolution and the field of view (FOV). The combination of mapping and imaging can be used to acquire images with a larger FOV without sacrificing spatial resolution. Previous attempts have demonstrated this using an infrared microscope and a germanium hemispherical ATR crystal to achieve images of up to 2.5 mm x 2.5 mm but with varying spatial resolution and depth of penetration across the imaged area. In this paper, we demonstrate a combination of mapping and imaging with a different approach using an external optics housing for large ATR accessories and inverted ATR prisms to achieve ATR-FT-IR images with a large FOV and reasonable spatial resolution. The results have shown that a FOV of 10 mm x 14 mm can be obtained with a spatial resolution of approximately 40-60 μm when using an accessory that gives no magnification. A FOV of 1.3 mm x 1.3 mm can be obtained with spatial resolution of approximately 15-20 μm when using a diamond ATR imaging accessory with 4x magnification. No significant change in image quality such as spatial resolution or depth of penetration has been observed across the whole FOV with this method and the measurement time was approximately 15 minutes for an image consisting of 16 image tiles.
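On the data side, the mapping-plus-imaging scheme reduces to stitching the individually acquired detector tiles into one large-FOV chemical image. A toy NumPy sketch of that mosaicking step, assuming non-overlapping tiles in row-major scan order (real data would also need flat-fielding and registration):

```python
import numpy as np

def stitch_tiles(tiles, grid):
    """Assemble individually measured image tiles (one per mapping-stage
    position) into one large-field-of-view image.

    tiles: list of equal-shape 2-D arrays in row-major scan order.
    grid:  (rows, cols) of the raster; assumes tiles abut exactly,
           i.e. the stage step equals the tile field of view.
    """
    rows, cols = grid
    th, tw = tiles[0].shape
    mosaic = np.empty((rows * th, cols * tw))
    for k, tile in enumerate(tiles):
        r, c = divmod(k, cols)
        mosaic[r * th:(r + 1) * th, c * tw:(c + 1) * tw] = tile
    return mosaic

# 16 tiles in a 4x4 raster, as in the measurement described above
tiles = [np.full((8, 8), k, dtype=float) for k in range(16)]
big = stitch_tiles(tiles, (4, 4))
print(big.shape)  # (32, 32)
```

Because each tile is measured at full detector resolution, the mosaic enlarges the FOV by the raster factor without degrading per-pixel spatial resolution.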
Parallel-multiplexed excitation light-sheet microscopy (Conference Presentation)
NASA Astrophysics Data System (ADS)
Xu, Dongli; Zhou, Weibin; Peng, Leilei
2017-02-01
Laser scanning light-sheet imaging allows fast 3D imaging of live samples with minimal bleaching and photo-toxicity. Existing light-sheet techniques have very limited capability for multi-label imaging. Hyper-spectral imaging is needed to unmix commonly used fluorescent proteins with large spectral overlaps. The challenge, however, is how to perform hyper-spectral imaging without sacrificing imaging speed, so that dynamic and complex events can be captured live. We report wavelength-encoded structured illumination light-sheet imaging (λ-SIM light-sheet), a novel light-sheet technique that is capable of parallel multiplexing across multiple excitation-emission spectral channels. λ-SIM light-sheet captures images of all possible excitation-emission channels in true parallel. It does not compromise imaging speed and can distinguish labels by both excitation and emission spectral properties, which facilitates unmixing fluorescent labels with overlapping spectral peaks and will allow more labels to be used together. We built a hyper-spectral light-sheet microscope that combines λ-SIM with an extended field of view through Bessel beam illumination. The system has a 250-micron-wide field of view and confocal-level resolution. The microscope, equipped with multiple laser lines and an unlimited number of spectral channels, can potentially image up to 6 commonly used fluorescent proteins from blue to red. Results from in vivo imaging of live zebrafish embryos expressing various genetic markers and sensors will be shown. Hyper-spectral images from λ-SIM light-sheet will allow multiplexed and dynamic functional imaging in live tissue and animals.
VizieR Online Data Catalog: Mission Accessible Near-Earth Objects Survey (Thirouin+, 2016)
NASA Astrophysics Data System (ADS)
Thirouin, A.; Moskovitz, N.; Binzel, R. P.; Christensen, E.; DeMeo, F. E.; Person, M. J.; Polishook, D.; Thomas, C. A.; Trilling, D.; Willman, M.; Hinkle, M.; Burt, B.; Avner, D.; Aceituno, F. J.
2017-06-01
The data were obtained with the 4.3m Lowell Discovery Channel Telescope (DCT), the 4.1m Southern Astrophysical Research (SOAR) telescope, the 4m Nicholas U. Mayall Telescope, the 2.1m at Kitt Peak Observatory, the 1.8m Perkins telescope, the 1.5m Sierra Nevada Observatory (OSN), and the 1.3m SMARTS telescope between 2013 August and 2015 October. The DCT is forty miles southeast of Flagstaff at the Happy Jack site (Arizona, USA). Images were obtained using the Large Monolithic Imager (LMI), which is a 6144*6160 CCD. The total field of view is 12.5'*12.5' with a plate scale of 0.12''/pixel (unbinned). Images were obtained using the 3*3 or 2*2 binning modes. Observations were carried out in situ. The SOAR telescope is located on Cerro Pachon, Chile. Images were obtained using the Goodman High Throughput Spectrograph (Goodman-HTS) instrument in its imaging mode. The instrument consists of a 4096*4096 Fairchild CCD, with a 7.2' diameter circular field of view and a plate scale of 0.15''/pixel. Images were obtained using the 2*2 binning mode. Observations were conducted remotely. The Mayall telescope is a 4m telescope located at the Kitt Peak National Observatory (Tucson, Arizona, USA). The National Optical Astronomy Observatory (NOAO) CCD Mosaic-1.1 is a wide field imager composed of an array of eight CCD chips. The field of view is 36'*36', and the plate scale is 0.26''/pixel. Observations were performed remotely. The 2.1m at Kitt Peak Observatory was operated with the STA3 2k*4k CCD, which has a plate scale of 0.305''/pixel and a field of view of 10.2'*6.6'. The instrument was binned 2*2 and the observations were conducted in situ. The Perkins 72'' telescope is located at the Anderson Mesa station at Lowell Observatory (Flagstaff, Arizona, USA). We used the Perkins ReImaging SysteM (PRISM) instrument, a 2k*2k Fairchild CCD. The PRISM plate scale is 0.39''/pixel for a field of view of 13'*13'. Observations were performed in situ. 
The 1.5m telescope located at the OSN at Loma de Dilar in the National Park of Sierra Nevada (Granada, Spain) was operated in situ. Observations were carried out with a 2k*2k CCD, with a total field of view of 7.8'*7.8'. We used 2*2 binning mode, resulting in an effective plate scale of 0.46''/pixel. The 1.3m SMARTS telescope is located at the Cerro Tololo Inter-American Observatory (Coquimbo region, Chile). This telescope is equipped with a camera called ANDICAM (A Novel Dual Imaging CAMera). ANDICAM is a Fairchild 2048*2048 CCD. The pixel scale is 0.371''/pixel, and the field of view is 6'*6'. Observations were carried out in queue mode. (2 data files).
SU-F-I-47: Optimizing Protocols for Image Quality and Dose in Abdominal CT of Large Patients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, L; Yester, M
Purpose: Newer CT scanners are able to use scout views to adjust mA throughout the scan in order to achieve a given noise level. However, given constraints of radiologist preferences for kVp and rotation time, it may not be possible to achieve an acceptable noise level for large patients. A study was initiated to determine for which patients kVp and/or rotation time should be changed in order to achieve acceptable image quality. Methods: Patient scans were reviewed on two new Emergency Department scanners (Philips iCT) to identify patients over a large range of sizes. These iCTs were set with a limit of 500 mA to safeguard against a failure that might cause a CT scan to be (incorrectly) obtained at too-high mA. Scout views of these scans were assessed for both AP and LAT patient width and AP and LAT standard deviation in an ROI over the liver. Effective diameter and the product of the scout standard deviations over the liver were both studied as possible metrics for identifying patients who would need kVp and/or rotation time changed. The mA used for the liver in the CT was compared to these metrics for those patients whose CT scans showed acceptable image quality. Results: Both effective diameter and the product of the scout standard deviations over the liver result in similar predictions for which patients will require the kVp and/or rotation time to be changed to achieve an optimal combination of image quality and dose. Conclusion: Two mechanisms are described by which CT technologists can determine, based on scout characteristics, what kVp, mA limit, and rotation time to use when DoseRight with our physicians' preferred kVp and rotation time will not yield adequate image quality.
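As a rough illustration of the two candidate metrics, the sketch below computes an effective diameter from scout AP/LAT widths (the common sqrt(AP x LAT) form) and the product of the scout standard deviations, then flags patients against placeholder thresholds. The actual cut-off values are not given in the abstract; in practice they would be derived from the scanners' 500 mA ceiling and the accepted noise level.

```python
import math

def effective_diameter(ap_cm, lat_cm):
    """Effective diameter from scout-view AP and LAT widths (cm),
    using the common sqrt(AP * LAT) size metric."""
    return math.sqrt(ap_cm * lat_cm)

def needs_protocol_change(ap_cm, lat_cm, sd_ap, sd_lat,
                          diam_limit=38.0, sd_product_limit=400.0):
    """Flag a patient for a kVp / rotation-time change using either metric.

    The threshold values here are hypothetical placeholders, not the
    study's: they would be fitted to where the mA needed for acceptable
    image quality exceeds the scanner's mA limit.
    """
    size_metric = effective_diameter(ap_cm, lat_cm)
    noise_metric = sd_ap * sd_lat  # product of scout standard deviations
    return size_metric > diam_limit or noise_metric > sd_product_limit

print(needs_protocol_change(30.0, 45.0, 15.0, 20.0))  # False with these placeholder limits
```

Either metric gives the technologist a single number from the scout alone, before the diagnostic scan, which is what makes it usable for choosing kVp, mA limit, and rotation time up front.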
NASA Technical Reports Server (NTRS)
2004-01-01
4 March 2004 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a spectacular suite of large and small polygons in the south polar region. On Earth, polygons such as these would be indicators of the presence of ground ice. Whether this is true for Mars remains to be determined, but it is interesting to note that these polygons do occur in a region identified by the Mars Odyssey Gamma Ray Spectrometer (GRS) team as a place with possible ground ice. The polygons are in an old impact crater located near 62.9°S, 281.4°W. This 1.5 meter (5 ft.) per pixel view covers an area 3 km (1.9 mi) wide and is illuminated by sunlight from the upper left. The smaller set of polygons is visible only in the full-resolution image.
Micro- and nano-tomography at the DIAMOND beamline I13L imaging and coherence
NASA Astrophysics Data System (ADS)
Rau, C.; Bodey, A.; Storm, M.; Cipiccia, S.; Marathe, S.; Zdora, M.-C.; Zanette, I.; Wagner, U.; Batey, D.; Shi, X.
2017-10-01
The Diamond Beamline I13L is dedicated to imaging on the micro- and nano-length scale, operating in the energy range between 6 and 30 keV. For this purpose two independently operating branchlines and endstations have been built. The imaging branch is fully operational for micro-tomography and in-line phase contrast imaging with micrometre resolution. Grating interferometry is currently being implemented, adding the capability of measuring phase and small-angle information. For tomography with increased resolution, a full-field microscope providing 50 nm spatial resolution with a field of view of 100 μm is being tested. The instrument provides a large working distance between optics and sample to accommodate a wide range of customised sample environments. On the coherence branch, coherent diffraction imaging techniques such as ptychography and coherent X-ray diffraction (CXRD) are currently being developed for three-dimensional imaging with the highest resolution. The imaging branch is operated in collaboration with Manchester University and is therefore called the Diamond-Manchester Branchline. The scientific applications cover a large area including bio-medicine, materials science, chemistry, geology and more. The present paper provides an overview of the current status of the beamline and the science addressed.
Topography within Europa's Mannann'an crater
NASA Technical Reports Server (NTRS)
1998-01-01
This three dimensional effect is created by superimposing images of Jupiter's icy moon, Europa, which were taken from slightly different perspectives. When viewed through red (left eye) and blue (right eye) filters, this product, a stereo anaglyph, shows variations in height of surface features.
This view shows the rim and interior of the impact crater Mannann'an, on Jupiter's moon Europa. The stereo image reveals the rim of the crater, which appears as a tall ridge near the left edge of the image, as well as numerous small hills on the bottom of the crater. One of the most striking features is the large pit surrounded by circular cracks on the right side of the image, with dark radiating fractures in its center. The right (blue) image is a high resolution image (20 meters per picture element) taken through a clear filter. The left (red) image is composed of lower resolution (80 meters per picture element) color images taken through violet, green, and near-infrared filters and averaged to approximate an unfiltered view. North is to the top of the picture and the sun illuminates the scene from the east (right). The image, centered at 3 degrees north latitude and 120 degrees west longitude, covers an area approximately 18 by 4 kilometers (11 by 2.5 miles). The finest details that can be discerned in this picture are about 40 meters (44 yards) across. The images were taken on March 29th, 1998 at 13 hours, 17 minutes, 29 seconds Universal Time at a range of 1934 kilometers by the Solid State Imaging (SSI) system on NASA's Galileo spacecraft. The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech). This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo
NASA Astrophysics Data System (ADS)
Fei, Peng; Lee, Juhyun; Packard, René R. Sevag; Sereti, Konstantina-Ioanna; Xu, Hao; Ma, Jianguo; Ding, Yichen; Kang, Hanul; Chen, Harrison; Sung, Kevin; Kulkarni, Rajan; Ardehali, Reza; Kuo, C.-C. Jay; Xu, Xiaolei; Ho, Chih-Ming; Hsiai, Tzung K.
2016-03-01
Light Sheet Fluorescence Microscopy (LSFM) enables multi-dimensional and multi-scale imaging via illuminating specimens with a separate thin sheet of laser. It allows rapid plane illumination for reduced photo-damage and superior axial resolution and contrast. We hereby demonstrate cardiac LSFM (c-LSFM) imaging to assess the functional architecture of zebrafish embryos with a retrospective cardiac synchronization algorithm for four-dimensional reconstruction (3-D space + time). By combining our approach with tissue clearing techniques, we reveal the entire cardiac structures and hypertrabeculation of adult zebrafish hearts in response to doxorubicin treatment. By integrating the resolution enhancement technique with c-LSFM to increase the resolving power under a large field-of-view, we demonstrate the use of low power objective to resolve the entire architecture of large-scale neonatal mouse hearts, revealing the helical orientation of individual myocardial fibers. Therefore, our c-LSFM imaging approach provides multi-scale visualization of architecture and function to drive cardiovascular research with translational implication in congenital heart diseases.
Simulation of the hybrid Tunka Advanced International Gamma-ray and Cosmic ray Astrophysics (TAIGA)
NASA Astrophysics Data System (ADS)
Kunnas, M.; Astapov, I.; Barbashina, N.; Beregnev, S.; Bogdanov, A.; Bogorodskii, D.; Boreyko, V.; Brückner, M.; Budnev, N.; Chiavassa, A.; Chvalaev, O.; Dyachok, A.; Epimakhov, S.; Eremin, T.; Gafarov, A.; Gorbunov, N.; Grebenyuk, V.; Gress, O.; Gress, T.; Grinyuk, A.; Grishin, O.; Horns, D.; Ivanova, A.; Karpov, N.; Kalmykov, N.; Kazarina, Y.; Kindin, V.; Kirichkov, N.; Kiryuhin, S.; Kokoulin, R.; Kompaniets, K.; Konstantinov, E.; Korobchenko, A.; Korosteleva, E.; Kozhin, V.; Kuzmichev, L.; Lenok, V.; Lubsandorzhiev, B.; Lubsandorzhiev, N.; Mirgazov, R.; Mirzoyan, R.; Monkhoev, R.; Nachtigall, R.; Pakhorukov, A.; Panasyuk, M.; Pankov, L.; Perevalov, A.; Petrukhin, A.; Platonov, V.; Poleschuk, V.; Popescu, M.; Popova, E.; Porelli, A.; Porokhovoy, S.; Prosin, V.; Ptuskin, V.; Romanov, V.; Rubtsov, G. I.; Müger; Rybov, E.; Samoliga, V.; Satunin, P.; Saunkin, A.; Savinov, V.; Semeney, Yu; Shaibonov, B. (Jr.); Silaev, A.; Silaev, A. (Jr.); Skurikhin, A.; Slunecka, M.; Spiering, C.; Sveshnikova, L.; Tabolenko, V.; Tkachenko, A.; Tkachev, L.; Tluczykont, M.; Veslopopov, A.; Veslopopova, E.; Voronov, D.; Wischnewski, R.; Yashin, I.; Yurin, K.; Zagorodnikov, A.; Zirakashvili, V.; Zurbanov, V.
2015-08-01
Up to several tens of TeV, Imaging Air Cherenkov Telescopes (IACTs) have proven to be the instruments of choice for GeV/TeV gamma-ray astronomy due to their good reconstruction quality and gamma-hadron separation power. However, sensitive observations at and above 100 TeV require very large effective areas (10 km² and more), which is difficult and expensive to achieve. The alternative to IACTs are shower front sampling arrays (non-imaging technique or timing arrays) with a large area and a wide field of view. Such experiments provide good core position, energy and angular resolution, but only poor gamma-hadron separation. Combining both experimental approaches, using the strengths of both techniques, could optimize the sensitivity at the highest energies. The TAIGA project plans to combine the non-imaging HiSCORE [8] array with small (∼10 m²) imaging telescopes. This paper covers simulation results of this hybrid approach.
1989-08-25
P-34764 Voyager 2 obtained this high resolution color image of Neptune's large satellite Triton during its close flyby. Approximately a dozen individual images were combined to produce this comprehensive view of the Neptune-facing hemisphere of Triton. Fine detail is provided by high resolution, clear-filter images, with color information added from lower resolution frames. The large south polar cap at the bottom of the image is highly reflective and slightly pink in color, and may consist of a slowly evaporating layer of nitrogen ice deposited during the previous winter. From the ragged edge of the polar cap northward, the satellite's face is generally darker and redder in color. This coloring may be produced by the action of ultraviolet light and magnetospheric radiation upon methane in the atmosphere and surface. Running across this darker region, approximately parallel to the edge of the polar cap, is a band of brighter white material that is almost bluish in color. The underlying topography in this bright band is similar, however, to that in the darker, redder regions surrounding it.
Star Formation in the DR21 Region A
2004-04-13
Hidden behind a shroud of dust in the constellation Cygnus is a stellar nursery called DR21, which is giving birth to some of the most massive stars in our galaxy. Visible light images reveal no trace of this interstellar cauldron because of heavy dust obscuration. In fact, visible light is attenuated in DR21 by a factor of more than 10,000,000,000,000,000,000,000,000,000,000,000,000,000 (ten duodecillion). New images from NASA's Spitzer Space Telescope allow us to peek behind the cosmic veil and pinpoint one of the most massive natal stars yet seen in our Milky Way galaxy. The never-before-seen star is 100,000 times as bright as the Sun. Also revealed for the first time is a powerful outflow of hot gas emanating from this star and bursting through a giant molecular cloud. The colorful image (top panel) is a large-scale composite mosaic assembled from data collected at a variety of different wavelengths. Views at visible wavelengths appear blue, near-infrared light is depicted as green, and mid-infrared data from the InfraRed Array Camera (IRAC) aboard NASA's Spitzer Space Telescope is portrayed as red. The result is a contrast between structures seen in visible light (blue) and those observed in the infrared (yellow and red). A quick glance shows that most of the action in this image is revealed to the unique eyes of Spitzer. The image covers an area about two times that of a full moon. Each of the constituent images is shown below the large mosaic. The Digital Sky Survey (DSS) image (lower left) provides a familiar view of deep space, with stars scattered around a dark field. The reddish hue is from gas heated by foreground stars in this region. This fluorescence fades away in the near-infrared Two-Micron All-Sky Survey (2MASS) image (lower center), but other features start to appear through the obscuring clouds of dust, now increasingly transparent. 
Many more stars are discerned in this image because near-infrared light pierces through some of the obscuration of the interstellar dust. Note that some stars seen as very bright in the visible image are muted in the near-infrared image, whereas other stars become more prominent. Embedded nebulae revealed in the Spitzer image are only hinted at in this picture. The Spitzer image (lower right) provides a vivid contrast to the other component images, revealing star-forming complexes and large-scale structures otherwise hidden from view. The Spitzer image is composed of photographs obtained at four wavelengths: 3.6 microns (blue), 4.5 microns (green), 5.8 microns (orange) and 8 microns (red). The brightest infrared cloud near the top center corresponds to DR21, which presumably contains a cluster of newly forming stars at a distance of nearly 10,000 light-years. The red filaments stretching across the Spitzer image denote the presence of polycyclic aromatic hydrocarbons. These organic molecules, comprised of carbon and hydrogen, are excited by surrounding interstellar radiation and become luminescent at wavelengths near 8 microns. The complex pattern of filaments is caused by an intricate combination of radiation pressure, gravity, and magnetic fields. The result is a tapestry in which winds, outflows, and turbulence move and shape the interstellar medium. http://photojournal.jpl.nasa.gov/catalog/PIA05735
Star Formation in the DR21 Region (A)
NASA Technical Reports Server (NTRS)
2004-01-01
Preferences for female body size in Britain and the South Pacific.
Swami, Viren; Knight, Daniel; Tovée, Martin J; Davies, Patrick; Furnham, Adrian
2007-06-01
To assess current attitudes to body weight and shape in the South Pacific, a region characterised by relatively high levels of obesity and traditionally positive views of large bodies, 38 high socio-economic status (SES) adolescent males and 38 low SES adolescent males in Independent Samoa were asked to rate a set of images of real women for physical attractiveness. Participants in both SES settings preferred women with a slender figure, as did a comparison group in Britain, suggesting that the traditional veneration of large bodies is no longer apparent in Samoa. However, the results also showed that low SES adolescents were more likely to view overweight figures as attractive, which suggests that the veneration of slim figures may be associated with increasing SES. Implications of this finding are discussed in conclusion.
MUSE field splitter unit: fan-shaped separator for 24 integral field units
NASA Astrophysics Data System (ADS)
Laurent, Florence; Renault, Edgard; Anwand, Heiko; Boudon, Didier; Caillier, Patrick; Kosmalski, Johan; Loupias, Magali; Nicklas, Harald; Seifert, Walter; Salaun, Yves; Xu, Wenli
2014-07-01
MUSE (Multi Unit Spectroscopic Explorer) is a second generation Very Large Telescope (VLT) integral field spectrograph developed for the European Southern Observatory (ESO). It combines a 1' x 1' field of view sampled at 0.2 arcsec for its Wide Field Mode (WFM) and a 7.5"x7.5" field of view for its Narrow Field Mode (NFM). Both modes will operate with the improved spatial resolution provided by GALACSI (Ground Atmospheric Layer Adaptive Optics for Spectroscopic Imaging), which will use the VLT deformable secondary mirror and 4 Laser Guide Stars (LGS) foreseen in 2015. MUSE operates in the visible wavelength range (0.465-0.93 μm). A consortium of seven institutes is currently commissioning MUSE at the Very Large Telescope for the Preliminary Acceptance in Chile, scheduled for September 2014. MUSE is composed of several subsystems, each under the responsibility of one institute. The Fore Optics derotates and anamorphoses the image at the focal plane. Splitting and Relay Optics feed the 24 identical Integral Field Units (IFU), which are mounted within a large monolithic instrument mechanical structure. Each IFU incorporates an image slicer, a fully refractive spectrograph with VPH grating and a detector system connected to a global vacuum and cryogenic system. During 2012 and 2013, all MUSE subsystems were integrated, aligned and tested at the P.I. institute in Lyon. After a successful PAE in September 2013, the MUSE instrument was shipped to the Very Large Telescope in Chile, where it was aligned and tested in the ESO integration hall at Paranal. MUSE was then transferred as a monolithic unit onto the VLT, where first light was achieved. This paper describes the main optical component of MUSE, the Field Splitter Unit, which splits the VLT image into 24 subfields and provides the first separation of the beam for the 24 Integral Field Units. It also describes its manufacture at Winlight Optics and its alignment within the MUSE instrument. 
The success of the MUSE alignment is demonstrated by the excellent results obtained for MUSE positioning, image quality and throughput on sky. MUSE commissioning at the VLT is planned for September 2014.
High throughput analysis of samples in flowing liquid
Ambrose, W. Patrick; Grace, W. Kevin; Goodwin, Peter M.; Jett, James H.; Orden, Alan Van; Keller, Richard A.
2001-01-01
Apparatus and method enable imaging multiple fluorescent sample particles in a single flow channel. A flow channel defines a flow direction for samples in a flow stream and has a viewing plane perpendicular to the flow direction. A laser beam is formed as a ribbon having a width effective to cover the viewing plane. Imaging optics are arranged to view the viewing plane to form an image of the fluorescent sample particles in the flow stream, and a camera records the image formed by the imaging optics.
EyeMIAS: a cloud-based ophthalmic image reading and auxiliary diagnosis system
NASA Astrophysics Data System (ADS)
Wu, Di; Zhao, Heming; Yu, Kai; Chen, Xinjian
2018-03-01
Ophthalmic equipment alone cannot meet present health needs; an efficient way to provide quick screening and early diagnosis of diabetic retinopathy and other ophthalmic diseases is urgently required. The purpose of this study is to develop a cloud-based system for medical images, especially ophthalmic images, to be stored, viewed and processed, and thereby to accelerate screening and diagnosis. To this end, a system comprising a web application, an upload client, a storage back end and algorithm support was implemented. Over five alpha tests, the system sustained thousands of accesses under heavy traffic and generated hundreds of reports with diagnoses.
Logarithmic profile mapping multi-scale Retinex for restoration of low illumination images
NASA Astrophysics Data System (ADS)
Shi, Haiyan; Kwok, Ngaiming; Wu, Hongkun; Li, Ruowei; Liu, Shilong; Lin, Ching-Feng; Wong, Chin Yeow
2018-04-01
Images are valuable information sources for many scientific and engineering applications. However, images captured in poor illumination conditions have a large portion of dark regions that can heavily degrade image quality. In order to improve the quality of such images, a restoration algorithm is developed here that transforms the low input brightness to a higher value using a modified Multi-Scale Retinex approach. The algorithm is further improved by an entropy-based weighting between the input and the processed results to refine the necessary amplification in regions of low brightness. Moreover, fine details in the image are preserved by applying the Retinex principles to extract and then re-insert object edges to obtain an enhanced image. Results from experiments using low and normal illumination images have shown satisfactory performance with regard to the improvement in information content and the mitigation of viewing artifacts.
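The Retinex backbone of the algorithm described above can be sketched in a few lines. This is only the classical multi-scale Retinex core (log-ratios of the image against Gaussian illumination estimates at several scales); the paper's logarithmic profile mapping, entropy-based weighting and edge re-insertion stages are omitted, and the scale values are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(img, sigmas=(15, 80, 250), eps=1e-6):
    """Classical multi-scale Retinex: average of log-ratios between the
    image and Gaussian-smoothed illumination estimates at several scales,
    then a linear stretch back to the displayable [0, 1] range."""
    img = img.astype(np.float64) + eps
    msr = np.zeros_like(img)
    for s in sigmas:
        illumination = gaussian_filter(img, sigma=s) + eps
        msr += np.log(img) - np.log(illumination)
    msr /= len(sigmas)
    lo, hi = msr.min(), msr.max()
    return (msr - lo) / (hi - lo + eps)
```

Because the log-ratio is largest where the local illumination estimate is low, dark regions receive the strongest amplification, which is the effect the abstract describes.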
View-interpolation of sparsely sampled sinogram using convolutional neural network
NASA Astrophysics Data System (ADS)
Lee, Hoyeon; Lee, Jongha; Cho, Suengryong
2017-02-01
Sparse-view sampling and its associated iterative image reconstruction in computed tomography have been actively investigated. The sparse-view CT technique is a viable route to low-dose CT, particularly in cone-beam CT (CBCT) applications, although advanced iterative image reconstructions still leave varying degrees of image artifacts. One of the artifacts that may occur in sparse-view CT is the streak artifact in the reconstructed images. Another approach to sparse-view CT imaging fills in the missing view data with interpolation methods and then reconstructs the image with an analytic reconstruction algorithm. In this study, we developed an interpolation method using a convolutional neural network (CNN), one of the most widely used deep-learning methods, to estimate the missing projection data, and compared its performance with other interpolation techniques.
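The classical interpolation baseline the CNN is compared against can be sketched as straightforward 1-D interpolation along the angular axis of the sinogram. The function name and the linear scheme are illustrative assumptions, not the paper's network:

```python
import numpy as np

def interpolate_views(sparse_sino, sparse_angles, dense_angles):
    """Fill missing projection views by linear interpolation along the
    angular axis, independently for each detector bin.
    sparse_sino: (n_sparse_views, n_bins) measured sinogram."""
    n_bins = sparse_sino.shape[1]
    dense = np.empty((len(dense_angles), n_bins))
    for b in range(n_bins):
        dense[:, b] = np.interp(dense_angles, sparse_angles, sparse_sino[:, b])
    return dense
```

The interpolated full-view sinogram can then be fed to an analytic reconstruction such as filtered backprojection; a learned interpolator replaces the per-bin `np.interp` with a network trained on sinogram patches.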
Imaging electric field dynamics with graphene optoelectronics.
Horng, Jason; Balch, Halleh B; McGuire, Allister F; Tsai, Hsin-Zon; Forrester, Patrick R; Crommie, Michael F; Cui, Bianxiao; Wang, Feng
2016-12-16
The use of electric fields for signalling and control in liquids is widespread, spanning bioelectric activity in cells to electrical manipulation of microstructures in lab-on-a-chip devices. However, an appropriate tool to resolve the spatio-temporal distribution of electric fields over a large dynamic range has yet to be developed. Here we present a label-free method to image local electric fields in real time and under ambient conditions. Our technique combines the unique gate-variable optical transitions of graphene with a critically coupled planar waveguide platform that enables highly sensitive detection of local electric fields with a voltage sensitivity of a few microvolts, a spatial resolution of tens of micrometres and a frequency response over tens of kilohertz. Our imaging platform enables parallel detection of electric fields over a large field of view and can be tailored to broad applications spanning lab-on-a-chip device engineering to analysis of bioelectric phenomena.
The Haskins Optically Corrected Ultrasound System
ERIC Educational Resources Information Center
Whalen, D. H.; Iskarous, Khalil; Tiede, Mark K.; Ostry, David J.; Lehnert-LeHouillier, Heike; Vatikiotis-Bateson, Eric; Hailey, Donald S.
2005-01-01
The tongue is critical in the production of speech, yet its nature has made it difficult to measure. Not only does its ability to attain complex shapes make it difficult to track, it is also largely hidden from view during speech. The present article describes a new combination of optical tracking and ultrasound imaging that allows for a…
Evaluating the Potential of the GeoWall for Geographic Education
ERIC Educational Resources Information Center
Slocum, Terry A.; Dunbar, Matthew D.; Egbert, Stephen L.
2007-01-01
This article discusses modern stereoscopic displays for geographic education, focusing on a large-format display--the GeoWall. To evaluate the potential of the GeoWall, geography instructors were asked to express their reactions to images viewed on the GeoWall during a focus group experiment. Instructors overwhelmingly supported using the GeoWall,…
Large area MEMS based ultrasound device for cancer detection
NASA Astrophysics Data System (ADS)
Wodnicki, Robert; Thomenius, Kai; Ming Hooi, Fong; Sinha, Sumedha P.; Carson, Paul L.; Lin, Der-Song; Zhuang, Xuefeng; Khuri-Yakub, Pierre; Woychik, Charles
2011-08-01
We present image results obtained using a prototype ultrasound array that demonstrates the fundamental architecture for a large area MEMS based ultrasound device for detection of breast cancer. The prototype array consists of a tiling of capacitive Micromachined Ultrasound Transducers (cMUTs) that have been flip-chip attached to a rigid organic substrate. The pitch on the cMUT elements is 185 μm and the operating frequency is nominally 9 MHz. The spatial resolution of the new probe is comparable to those of production PZT probes; however the sensitivity is reduced by conditions that should be correctable. Simulated opposed-view image registration and Speed of Sound volume reconstruction results for ultrasound in the mammographic geometry are also presented.
Maximizing the Science Output of GOES-R SUVI during Operations
NASA Astrophysics Data System (ADS)
Shaw, M.; Vasudevan, G.; Mathur, D. P.; Mansir, D.; Shing, L.; Edwards, C. G.; Seaton, D. B.; Darnel, J.; Nwachuku, C.
2017-12-01
Regular manual calibrations are an often-unavoidable demand on ground operations personnel during long-term missions. This paper describes a set of features built into the instrument control software and the techniques employed by the Solar Ultraviolet Imager (SUVI) team to automate a large fraction of regular on-board calibration activities, allowing SUVI to be operated with little manual commanding from the ground and little interruption to nominal sequencing. SUVI is a Generalized Cassegrain telescope with a large field of view that images the Sun in six extreme ultraviolet (EUV) narrow bandpasses centered at 9.4, 13.1, 17.1, 19.5, 28.4 and 30.4 nm. It is part of the payload of the Geostationary Operational Environmental Satellite (GOES-R) mission.
Perspective View with Landsat Overlay, Metro Los Angeles, Calif.: Malibu to Mount Baldy
NASA Technical Reports Server (NTRS)
2002-01-01
Mount San Antonio (more commonly known as Mount Baldy) crowns the San Gabriel Mountains northeast of Los Angeles in this computer-generated east-northeast perspective viewed from above the Malibu coastline. On the right, the Pacific Ocean and Santa Monica are in the foreground. Further away are downtown Los Angeles (appearing grey) and then the San Gabriel Valley, which lies adjacent to the mountain front. The San Fernando Valley appears in the left foreground, separated from the ocean by the Santa Monica Mountains. At 3,068 meters (10,064 feet) Mount Baldy rises above the tree line, exposing bright white rocks that are not snow capped in this early autumn scene.
This 3-D perspective view was generated using topographic data from the Shuttle Radar Topography Mission (SRTM), an enhanced color Landsat 7 satellite image, and a false sky. Topographic expression is exaggerated one and one-half times. Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data matches the 30-meter (98-foot) resolution of most Landsat images and will substantially help in analyzing the large and growing Landsat image archive. The Landsat 7 Thematic Mapper image used here was provided to the SRTM project by the United States Geological Survey, Earth Resources Observation Systems (EROS) Data Center, Sioux Falls, S.D. Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C. Size: View width 26 kilometers (16 miles), View distance 85 kilometers (53 miles) Location: 34.2 deg. North lat., 118.2 deg. West lon.
Orientation: View east-northeast, 3 degrees below horizontal Image Data: Landsat Bands 3, 2+4, 1 as red, green, blue, respectively, sharpened with Band 8 panchromatic detail Original Data Resolution: SRTM 1 arcsecond (30 meters or 98 feet), Thematic Mapper 30 meters color plus 15 meters sharpening (98 and 49 feet, respectively) Date Acquired: February 2000 (SRTM), 20 September 1999 (Landsat)
A calibration method based on virtual large planar target for cameras with large FOV
NASA Astrophysics Data System (ADS)
Yu, Lei; Han, Yangyang; Nie, Hong; Ou, Qiaofeng; Xiong, Bangshu
2018-02-01
In order to obtain high precision in camera calibration, the target should be large enough to cover the whole field of view (FOV). For cameras with a large FOV, using a small target seriously reduces calibration precision, yet a large target is difficult to manufacture, carry and deploy. To solve this problem, a calibration method based on a virtual large planar target (VLPT), virtually constructed from multiple small targets (STs), is proposed for cameras with a large FOV. In the VLPT-based calibration method, first, the positions and directions of the STs are changed several times to obtain a number of calibration images. Second, the VLPT for each calibration image is created by finding the virtual points corresponding to the feature points of the STs. Finally, the intrinsic and extrinsic parameters of the camera are calculated using the VLPTs. Experimental results show that the proposed method not only achieves calibration precision similar to that of methods employing a large target, but also has good stability over the whole measurement area. Thus, the difficulties of accurately calibrating cameras with a large FOV can be effectively tackled by the proposed method, with good operability.
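One natural building block for relating feature points on a small planar target to virtual points in a common image frame is a planar homography. The direct linear transform (DLT) sketch below is an illustrative assumption about how such virtual points could be computed, not the paper's exact procedure:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (both Nx2, N >= 4)
    with the direct linear transform: stack two linear constraints per
    correspondence and take the SVD null vector."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary scale

def map_points(H, pts):
    """Apply a homography to Nx2 points (homogeneous multiply + dehomogenize)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]
```

With such a mapping, feature points detected on each ST in each calibration image can be projected into one common plane, after which standard intrinsic/extrinsic estimation proceeds as with a single large target.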
An Insect Eye Inspired Miniaturized Multi-Camera System for Endoscopic Imaging.
Cogal, Omer; Leblebici, Yusuf
2017-02-01
In this work, we present a miniaturized high-definition vision system inspired by insect eyes, with a distributed illumination method, which can work in dark environments for proximity imaging applications such as endoscopy. Our approach is based on modeling biological systems with off-the-shelf miniaturized cameras combined with digital circuit design for real-time image processing. We built a 5 mm radius hemispherical compound eye imaging a 180° × 180° field of view while providing more than 1.1 megapixels (emulated ommatidia) as real-time video with an inter-ommatidial angle Δϕ = 0.5° at 18 mm radial distance. We made an FPGA implementation of the image processing system which is capable of generating 25 fps video at 1080 × 1080 pixel resolution with a 120 MHz processing clock frequency. Compared to similar-size insect-eye-mimicking systems in the literature, the system proposed in this paper features a 1000× resolution increase. To the best of our knowledge, this is the first time that a compound eye with built-in illumination has been reported. We offer our miniaturized imaging system for endoscopic applications such as colonoscopy or laparoscopic surgery, where there is a need for large field-of-view, high-definition imagery. For that purpose we tested our system inside a human colon model, and we present the resulting images and videos in this paper.
Lee, Yong-Beom; Yang, Cheol-Jung; Li, Cheng Zhen; Zhuan, Zhong; Kwon, Seung-Cheol; Noh, Kyu-Cheol
2018-03-01
This study aimed to investigate whether fatty infiltration (FI) measured on a single sagittal magnetic resonance imaging (MRI) slice can represent FI of the whole supraspinatus muscle. This study retrospectively reviewed the MRIs of 106 patients (age 50-79 years) divided into three rotator cuff tear-size groups: medium, large, and massive. Fat mass and muscle mass on all T1-weighted sagittal MRI scans (FA and MA) were measured. Of the total MRI scans, the Y-view was defined as the most lateral image of the junction of the scapular spine with the scapular body on the oblique sagittal T1-weighted image. Fat mass and muscle mass seen on this Y-view single slice were recorded as F1 and M1, respectively. Fat mass and muscle mass were also assessed on MRI scans lateral and medial to the Y-view. The means of fat mass and muscle mass on these three slices were recorded as F3 and M3, respectively. Average FI ratios (fat mass/muscle mass) of the three assessment methods (F1/M1, FA/MA, and F3/M3) were compared. Intraclass correlation coefficients (ICCs) were calculated for inter- and intraobserver reliability. ICCs showed higher reliability (> 0.8) for all measurements. F1/M1 values were not statistically different from FA/MA and F3/M3 values ( p > 0.05), except in males with medium and large tears. F3/M3 and FA/MA were not statistically different. The difference between F1/M1 and FA/MA did not exceed 2%. A single sagittal MRI slice can represent the whole FI in chronic rotator cuff tears, except in some patient groups. We recommend measurement of FI using a single sagittal MRI slice, given the effort required for repeated measurements.
NetVLAD: CNN Architecture for Weakly Supervised Place Recognition.
Arandjelovic, Relja; Gronat, Petr; Torii, Akihiko; Pajdla, Tomas; Sivic, Josef
2018-06-01
We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following four principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the "Vector of Locally Aggregated Descriptors" image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we create a new weakly supervised ranking loss, which enables end-to-end learning of the architecture's parameters from images depicting the same places over time downloaded from Google Street View Time Machine. Third, we develop an efficient training procedure which can be applied on very large-scale weakly labelled tasks. Finally, we show that the proposed architecture and training procedure significantly outperform non-learnt image representations and off-the-shelf CNN descriptors on challenging place recognition and image retrieval benchmarks.
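The core VLAD aggregation that the NetVLAD layer generalizes can be sketched in NumPy. Here the soft-assignment weights are computed directly from distances to the cluster centres with a sharpness parameter `alpha`; the paper's trainable decoupled convolutional form and the backpropagation machinery are omitted, so this is a forward-pass sketch only:

```python
import numpy as np

def netvlad_aggregate(X, centers, alpha=10.0):
    """VLAD-style aggregation with soft assignment.
    X: (N, D) local descriptors; centers: (K, D) cluster centres.
    Returns an L2-normalized (K*D,) image descriptor."""
    # Soft-assign each descriptor to clusters via a softmax over
    # negative squared distances, sharpened by alpha.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (N, K)
    logits = -alpha * d2
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    a = np.exp(logits)
    a /= a.sum(axis=1, keepdims=True)
    # Accumulate weighted residuals per cluster: V[k] = sum_i a_ik (x_i - c_k)
    V = (a[:, :, None] * (X[:, None, :] - centers[None, :, :])).sum(axis=0)
    # Intra-normalization (per cluster), then flatten and L2-normalize.
    V /= np.linalg.norm(V, axis=1, keepdims=True) + 1e-12
    v = V.ravel()
    return v / (np.linalg.norm(v) + 1e-12)
```

In the full architecture this layer sits on top of a CNN feature map (each spatial location contributing one descriptor), and the assignment parameters are learned end-to-end with the weakly supervised ranking loss.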
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mooney, K; Zhao, T; Green, O
Purpose: To assess the performance of the deformable image registration algorithm used for MRI-guided adaptive radiation therapy using image feature analysis. Methods: MR images were collected from five patients treated on the MRIdian (ViewRay, Inc., Oakwood Village, OH), a three-head Cobalt-60 therapy machine with a 0.35 T MR system. The images were acquired immediately prior to treatment with a uniform 1.5 mm resolution. Treatment sites were as follows: head/neck, lung, breast, stomach, and bladder. Deformable image registration was performed using the ViewRay software between the first-fraction MRI and the final-fraction MRI, and the DICE similarity coefficient (DSC) for the skin contours was reported. The SIFT and Harris feature detection and matching algorithms identified point features in each image separately, then found matching features in the other image. The target registration error (TRE) was defined as the vector distance between matched features on the two image sets. Each deformation was evaluated based on comparison of average TRE and DSC. Results: Image feature analysis produced between 2000 and 9500 points for evaluation on the patient images. The average (± standard deviation) TRE for all patients was 3.3 mm (±3.1 mm), and the passing rate of TRE < 3 mm was 60% on the images. The head/neck patient had the best average TRE (1.9 mm ± 2.3 mm) and the best passing rate (80%). The lung patient had the worst average TRE (4.8 mm ± 3.3 mm) and the worst passing rate (37.2%). DSC was not significantly correlated with either TRE (p=0.63) or passing rate (p=0.55). Conclusions: Feature matching provides a quantitative assessment of deformable image registration, with a large number of data points for analysis. The TRE of matched features can be used to evaluate the registration of many objects throughout the volume, whereas DSC mainly provides a measure of gross overlap. We have a research agreement with ViewRay Inc.
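The TRE metric defined above (vector distance between matched features) reduces to a few lines once the matched point sets are in hand. The 3 mm passing threshold follows the abstract; the array layout is an assumption:

```python
import numpy as np

def target_registration_error(pts_fixed, pts_registered, threshold_mm=3.0):
    """Per-feature TRE: Euclidean distance between each matched feature's
    position in the fixed image and in the registered moving image.
    Both inputs are (N, 3) arrays of coordinates in mm.
    Returns (mean TRE, std TRE, fraction passing TRE < threshold)."""
    tre = np.linalg.norm(pts_fixed - pts_registered, axis=1)
    return tre.mean(), tre.std(), float(np.mean(tre < threshold_mm))
```

Unlike the DSC, which summarizes gross contour overlap with a single number, this yields thousands of localized error samples distributed through the volume.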
SU-E-I-27: Establishing Target Exposure Index Values for Computed Radiography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, N; Tchou, P; Belcher, K
2014-06-01
Purpose: To develop a standard set of target exposure index (TEI) values to be applied to Agfa Computed Radiography (CR) readers in accordance with International Electrotechnical Commission (IEC) standard 62494-1 (ed. 1.0). Methods: A large data cohort was collected from six USAF Medical Treatment Facilities that exclusively use Agfa CR readers. Dose monitoring statistics were collected from each reader. The data were analyzed based on anatomic region, view, and processing speed class. The Agfa-specific exposure metric, logarithmic mean (LGM), was converted to exposure index (EI) for each data set. The optimum TEI value was determined by minimizing the number of studies that fell outside the acceptable deviation index (DI) range of +/-2 for phototimed techniques or +/-3 for fixed techniques. An anthropomorphic radiographic phantom was used to corroborate the TEI recommendations. Images were acquired of several anatomic regions and views using standard techniques. The images were then evaluated by two radiologists as either acceptable or unacceptable. The acceptable image with the lowest exposure and EI value was compared to the recommended TEI values using a passing DI range. Results: Target EI values were determined for a comprehensive list of anatomic regions and views. Conclusion: Target EI values must be established on each CR unit in order to provide a positive feedback system for the technologist. This system will serve as a mechanism to prevent under- or overexposure of patients. The TEI recommendations are a first attempt at a large-scale process improvement with the goal of setting reasonable and standardized TEI values. The implementation and effectiveness of the recommended TEI values should be monitored and adjustments made as necessary.
Imaging simulation of active EO-camera
NASA Astrophysics Data System (ADS)
Pérez, José; Repasi, Endre
2018-04-01
A modeling scheme for active imaging through atmospheric turbulence is presented. The model consists of two parts: in the first, the illumination laser beam is propagated to a target described by its reflectance properties, using the well-known split-step Fourier method for wave propagation. In the second, the reflected intensity distribution imaged on a camera is computed using an empirical model developed for passive imaging through atmospheric turbulence. The split-step Fourier method requires carefully chosen simulation parameters. These requirements, together with the need to produce dynamic scenes with a large number of frames, led us to implement the model on a GPU. Validation of this implementation is shown for two different metrics. This model is well suited for gated-viewing applications. Examples of imaging simulation results are presented here.
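The first stage, split-step Fourier propagation, alternates free-space angular-spectrum steps with thin phase screens representing turbulence. The paraxial (Fresnel) transfer function and the interface below (precomputed screens, one per step) are illustrative assumptions, since the paper's screen statistics and sampling choices are not given here:

```python
import numpy as np

def split_step_propagate(field, wavelength, dx, dz, phase_screens):
    """Split-step Fourier propagation of a complex field on an n x n grid:
    alternate vacuum steps of length dz (applied in the spatial-frequency
    domain) with thin turbulence phase screens (applied in the spatial
    domain)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    # Fresnel (paraxial) transfer function for one vacuum step of length dz.
    H = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
    for screen in phase_screens:
        field = np.fft.ifft2(np.fft.fft2(field) * H)  # propagate one step
        field = field * np.exp(1j * screen)           # apply phase screen
    return field
```

Both the transfer function and the screens are unimodular, so total beam power is conserved, which gives a quick sanity check on the grid sampling.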
NASA Technical Reports Server (NTRS)
Rinehart, S. A.; Armstrong, T.; Frey, Bradley J.; Jung, J.; Kirk, J.; Leisawitz, David T.; Leviton, Douglas B.; Lyon, R.; Maher, Stephen; Martino, Anthony J.;
2007-01-01
The Wide-Field Imaging Interferometry Testbed (WIIT) was designed to develop techniques for wide field-of-view imaging interferometry using "double-Fourier" methods. These techniques will be important for a wide range of future space-based interferometry missions. We have already provided simple demonstrations of the methodology, and continuing development of the testbed will lead to higher data rates, improved data quality, and refined algorithms for image reconstruction. At present, the testbed effort includes five lines of development: automation of the testbed, operation in an improved environment, acquisition of large high-quality datasets, development of image reconstruction algorithms, and analytical modeling of the testbed. We discuss the progress made towards the first four of these goals; the analytical modeling is discussed in a separate paper within this conference.
Performance of bent-crystal x-ray microscopes for high energy density physics research
Schollmeier, Marius S.; Geissel, Matthias; Shores, Jonathon E.; ...
2015-05-29
We present calculations for the field of view (FOV), image fluence, image monochromaticity, spectral acceptance, and image aberrations for spherical crystal microscopes, which are used as self-emission imaging or backlighter systems at large-scale high energy density physics facilities. Our analytic results are benchmarked with ray-tracing calculations as well as with experimental measurements from the 6.151 keV backlighter system at Sandia National Laboratories. Furthermore, the analytic expressions can be used for x-ray source positions anywhere between the Rowland circle and object plane. We discovered that this enables quick optimization of the performance of proposed but untested bent-crystal microscope systems to find the best compromise between FOV, image fluence, and spatial resolution for a particular application.
X-ray mosaic nanotomography of large microorganisms.
Mokso, R; Quaroni, L; Marone, F; Irvine, S; Vila-Comamala, J; Blanke, A; Stampanoni, M
2012-02-01
Full-field X-ray microscopy is a valuable tool for 3D observation of biological systems. In the soft X-ray domain organelles can be visualized in individual cells, while hard X-ray microscopes excel in imaging of larger complex biological tissue. The field of view of these instruments is typically 10³ times the spatial resolution. We exploit the assets of hard X-ray sub-micrometer imaging and extend the standard approach by widening the effective field of view to match the size of the sample. We show that global tomography of biological systems exceeding several times the field of view is feasible at the nanoscale with moderate radiation dose. We address the performance issues and limitations of the TOMCAT full-field microscope and, more generally, of Zernike phase contrast imaging. Two biologically relevant systems were investigated: the first is the largest known bacterium (Thiomargarita namibiensis); the second is a small myriapod species (Pauropoda sp.). Both examples illustrate the capacity of the unique, structured-condenser-based broad-band full-field microscope to access the 3D structural details of biological systems at the nanoscale while avoiding complicated sample preparation, or even keeping the sample environment close to the natural state.
Majewski, Stanislaw [Yorktown, VA; Proffitt, James [Newport News, VA
2011-12-06
A compact, mobile, dedicated SPECT brain imager that can be easily moved to the patient to provide in-situ imaging, especially when the patient cannot be moved to the Nuclear Medicine imaging center. As a result of the widespread availability of single photon labeled biomarkers, the SPECT brain imager can be used in many locations, including remote locations away from medical centers. The SPECT imager improves the detection of gamma emission from the patient's head and neck area with a large field of view. Two identical lightweight gamma imaging detector heads are mounted to a rotating gantry and precisely mechanically co-registered to each other at 180 degrees. A unique imaging algorithm combines the co-registered images from the detector heads and provides several SPECT tomographic reconstructions of the imaged object thereby improving the diagnostic quality especially in the case of imaging requiring higher spatial resolution and sensitivity at the same time.
NASA Astrophysics Data System (ADS)
Laurent, Florence; Renault, Edgard; Boudon, Didier; Caillier, Patrick; Daguisé, Eric; Dupuy, Christophe; Jarno, Aurélien; Lizon, Jean-Louis; Migniau, Jean-Emmanuel; Nicklas, Harald; Piqueras, Laure
2014-07-01
MUSE (Multi Unit Spectroscopic Explorer) is a second-generation Very Large Telescope (VLT) integral field spectrograph developed for the European Southern Observatory (ESO). It combines a 1' x 1' field of view sampled at 0.2 arcsec for its Wide Field Mode (WFM) and a 7.5"x7.5" field of view for its Narrow Field Mode (NFM). Both modes will operate with the improved spatial resolution provided by GALACSI (Ground Atmospheric Layer Adaptive Optics for Spectroscopic Imaging), which will use the VLT deformable secondary mirror and 4 Laser Guide Stars (LGS) foreseen in 2015. MUSE operates in the visible wavelength range (0.465-0.93 μm). A consortium of seven institutes is currently commissioning MUSE at the Very Large Telescope for the Preliminary Acceptance in Chile, scheduled for September 2014. MUSE is composed of several subsystems, each under the responsibility of one institute. The Fore Optics derotates and anamorphoses the image at the focal plane. A Splitting and Relay Optics feeds the 24 identical Integral Field Units (IFU), which are mounted within a large monolithic structure. Each IFU incorporates an image slicer, a fully refractive spectrograph with VPH grating, and a detector system connected to a global vacuum and cryogenic system. During 2012 and 2013, all MUSE subsystems were integrated, aligned and tested at the P.I. institute in Lyon. After a successful PAE in September 2013, the MUSE instrument was shipped to the Very Large Telescope in Chile, where it was aligned and tested in the ESO integration hall at Paranal. MUSE was then transported, fully aligned and without any optomechanical dismounting, directly onto the VLT telescope, where first light was achieved on 7 February 2014. This paper describes the alignment procedure of the whole MUSE instrument with respect to the Very Large Telescope (VLT). It describes how 6 tons could be moved with an accuracy better than 0.025 mm and 0.25 arcmin in order to meet the alignment requirements.
The success of the MUSE alignment is demonstrated by the excellent results obtained for MUSE image quality and throughput directly on sky.
SRTM Stereo Pair: Bhuj, India, Two Weeks After earthquake
NASA Technical Reports Server (NTRS)
2001-01-01
On January 26, 2001, the city of Bhuj suffered the most deadly earthquake in India's history. About 20,000 people were killed, and more than one million homes were damaged or destroyed. Shortly after the quake, geologists conducted field investigations to inventory and analyze the natural effects of the event. Stereoscopic views, similar to this image, aided the geologists in locating landforms indicative of long-term (and possibly ongoing) deformation. Soon, elevation data from the Shuttle Radar Topography Mission (SRTM) will be used in the study of a wide variety of natural hazards worldwide.
In this image, the city of Bhuj appears as a gray area at the scene center, and the city airport is toward the north (top). Vegetation appears green. Rugged but low-relief hills of previously folded and faulted bedrock appear south (bottom) and northwest (upper left) of the city. This stereoscopic image was generated by draping a Landsat satellite image (taken just two weeks after the earthquake) over a preliminary SRTM elevation model. Two differing perspectives were then calculated, one for each eye. They can be seen in 3-D by viewing the left image with the right eye and the right image with the left eye (cross-eyed viewing) or by downloading and printing the image pair and viewing them with a stereoscope. When stereoscopically merged, the result is a vertically exaggerated view of the Earth's surface in its full three dimensions. Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data matches the 30-meter resolution of most Landsat images and will substantially help in analyses of the large and growing Landsat image archive. The Landsat 7 Thematic Mapper image used here was provided to the SRTM project by the United States Geological Survey, Earth Resources Observation Systems (EROS) Data Center, Sioux Falls, South Dakota. Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on February 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect three-dimensional measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter-long (200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices.
The mission is a cooperative project between the National Aeronautics and Space Administration (NASA), the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense (DoD), and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Earth Science Enterprise, Washington, DC. Size: 13.5 x 20.6 kilometers (8.4 x 12.8 miles) Location: 23.3 deg. North lat., 69.7 deg. East lon. Orientation: North toward the top Image Data: Landsat Bands 1, 2+4, 3 as blue, green, red, respectively Date Acquired: February 2000 (SRTM), February 9, 2001 (Landsat)
Mapping the human atria with optical coherence tomography
NASA Astrophysics Data System (ADS)
Lye, Theresa H.; Gan, Yu; Hendon, Christine P.
2017-02-01
Atrial structure plays an important role in the mechanisms of atrial disease. However, detailed imaging of human atria remains limited due to many imaging modalities lacking sufficient resolution. We propose the use of optical coherence tomography (OCT), which has micrometer resolution and millimeter-scale imaging depth well-suited for the atria, combined with image stitching algorithms, to develop large, detailed atria image maps. Human atria samples (n = 7) were obtained under approved protocols from the National Disease Research Interchange (NDRI). One right atria sample was imaged using an ultrahigh-resolution spectral domain OCT system, with 5.52 and 2.72 μm lateral and axial resolution in air, respectively, and 1.78 mm imaging depth. Six left atria and five pulmonary vein samples were imaged using the spectral domain OCT system, Telesto I (Thorlabs GmbH, Germany) with 15 and 6.5 μm lateral and axial resolution in air, respectively, and 2.51 mm imaging depth. Overlapping image volumes were obtained from areas of the human left and right atria and the pulmonary veins. Regions of collagen, adipose, and myocardium could be identified within the OCT images. Image stitching was applied to generate fields of view with side dimensions up to about 3 cm. This study established steps towards mapping large regions of the human atria and pulmonary veins in high resolution using OCT.
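Stitching overlapping image volumes, as done above, relies on estimating the translation between neighboring tiles. A minimal sketch of one common building block, phase correlation (an assumption here for illustration, not necessarily the registration method used in this study):

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the integer (dy, dx) circular shift such that
    moved == np.roll(ref, (dy, dx), axis=(0, 1)), via phase correlation."""
    F = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    F /= np.maximum(np.abs(F), 1e-12)       # keep phase only
    peak = np.unravel_index(np.argmax(np.fft.ifft2(F).real), ref.shape)
    # indices above N/2 wrap around to negative shifts
    dy = peak[0] if peak[0] <= ref.shape[0] // 2 else peak[0] - ref.shape[0]
    dx = peak[1] if peak[1] <= ref.shape[1] // 2 else peak[1] - ref.shape[1]
    return dy, dx
```

Once the offset between two overlapping tiles is known, the tiles can be pasted into a common mosaic canvas at the estimated positions.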
Kernel-aligned multi-view canonical correlation analysis for image recognition
NASA Astrophysics Data System (ADS)
Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao
2016-09-01
Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, a single kernel is usually insufficient to characterize the nonlinear distribution information of a view. To solve this problem, we transform each original feature vector into a 2-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that correlation features learned by KAMCCA have good discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on these datasets demonstrate the effectiveness of our proposed method.
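As background to the multi-view, kernel-aligned variant proposed here: classical two-view CCA reduces to a singular-value problem on the whitened cross-covariance matrix. A minimal numerical sketch of that baseline (standard CCA only, not KAMCCA itself; the regularization constant is an illustrative assumption):

```python
import numpy as np

def canonical_correlations(X, Y, reg=1e-6):
    """Classical two-view CCA: the canonical correlations are the singular
    values of Sxx^(-1/2) Sxy Syy^(-1/2), computed here via Cholesky whitening."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    Wx = np.linalg.inv(np.linalg.cholesky(Sxx))   # whitening transforms
    Wy = np.linalg.inv(np.linalg.cholesky(Syy))
    return np.linalg.svd(Wx @ Sxy @ Wy.T, compute_uv=False)
```

Two views generated from a shared latent signal yield canonical correlations near one, while unrelated views yield values near zero; KAMCCA generalizes this machinery to multiple kernel-aligned views.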
Viewing zone duplication of multi-projection 3D display system using uniaxial crystal.
Lee, Chang-Kun; Park, Soon-Gi; Moon, Seokil; Lee, Byoungho
2016-04-18
We propose a novel multiplexing technique for increasing the viewing zone of a multi-view based multi-projection 3D display system by employing double refraction in a uniaxial crystal. When linearly polarized images from the projector pass through the uniaxial crystal, two optical paths are possible according to the polarization state of the image. Therefore, the optical path of the image can be changed, and the viewing zone is shifted in a lateral direction. The polarization modulation of the image from a single projection unit enables us to generate two viewing zones at different positions. For realizing full-color images at each viewing zone, a polarization-based temporal multiplexing technique is adopted with a conventional polarization switching device of a liquid crystal (LC) display. Through experiments, a prototype of a ten-view multi-projection 3D display system presenting full-colored view images is implemented by combining five laser scanning projectors, an optically clear calcite (CaCO3) crystal, and an LC polarization rotator. For each time sequence of temporal multiplexing, the luminance distribution of the proposed system is measured and analyzed.
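The lateral viewing-zone shift exploited here comes from the walk-off between the ordinary and extraordinary rays in the crystal. A back-of-the-envelope sketch using the textbook uniaxial walk-off relation and approximate calcite indices (the 45° propagation angle and 10 mm thickness are illustrative assumptions, not the prototype's parameters):

```python
import math

def walkoff_angle(theta_deg, n_o, n_e):
    """Walk-off angle (degrees) between the ordinary and extraordinary rays
    in a uniaxial crystal, for wave propagation at theta to the optic axis:
    tan(rho) = (n_o^2 - n_e^2) tan(theta) / (n_e^2 + n_o^2 tan^2(theta))."""
    t = math.tan(math.radians(theta_deg))
    return math.degrees(math.atan((n_o**2 - n_e**2) * t / (n_e**2 + n_o**2 * t * t)))

# Approximate calcite indices (n_o ~ 1.658, n_e ~ 1.486):
rho = walkoff_angle(45.0, 1.658, 1.486)         # about 6.2 degrees
shift_mm = 10.0 * math.tan(math.radians(rho))   # lateral ray separation after 10 mm
```

A millimeter-scale ray separation at the crystal exit translates, through the projection optics, into the laterally displaced viewing zone described in the abstract.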
NASA Astrophysics Data System (ADS)
Abeytunge, Sanjee; Li, Yongbiao; Larson, Bjorg; Peterson, Gary; Toledo-Crow, Ricardo; Rajadhyaksha, Milind
2013-03-01
Surgical oncology is guided by examining pathology that is prepared during or after surgery. The preparation time for Mohs surgery in skin is 20-45 minutes; for head-and-neck and breast cancer surgery it is hours to days. Often this results in incomplete tumor removal such that positive margins remain. However, high-resolution images of excised tissue taken within a few minutes can provide a way to assess the margins for residual tumor. Current high-resolution imaging methods such as confocal microscopy are limited to small fields of view and require assembling a mosaic of images in two dimensions (2D) to cover a large area, which requires long acquisition times and produces artifacts. To overcome this limitation we developed a confocal microscope that scans strips of images with high aspect ratios and stitches the acquired strip-images in one dimension (1D). Our "Strip Scanner" can image a 10 x 10 mm² area of excised tissue with sub-cellular detail in about one minute. The strip scanner was tested on 17 Mohs excisions and the mosaics were read by a Mohs surgeon blinded to the pathology. After this initial trial, we built a mobile strip scanner that can be moved into different surgical settings. A tissue fixture capable of scanning up to 6 x 6 cm² of tissue was also built. Freshly excised breast and head-and-neck tissues were imaged in the pathology lab. The strip-images were registered and displayed simultaneously with image acquisition, resulting in large, high-resolution confocal mosaics of fresh surgical tissue in a clinical setting.
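Stitching strip-images in 1D amounts to finding how many edge columns two adjacent strips share and concatenating once. A toy sketch using a sum-of-squared-differences search over candidate overlaps (an illustrative method; the actual strip scanner's registration algorithm may differ):

```python
import numpy as np

def find_overlap(strip_a, strip_b, max_overlap=20):
    """Number of overlapping columns between two adjacent strips, found by
    minimizing the mean squared difference of their facing edge columns."""
    best, best_err = 1, np.inf
    for k in range(1, max_overlap + 1):
        err = np.mean((strip_a[:, -k:] - strip_b[:, :k]) ** 2)
        if err < best_err:
            best, best_err = k, err
    return best

def stitch(strip_a, strip_b, overlap):
    """Concatenate two strips, dropping strip_b's duplicated overlap columns."""
    return np.hstack([strip_a, strip_b[:, overlap:]])
```

Because each high-aspect-ratio strip only needs one such 1D registration against its neighbor, the mosaic grows with far fewer seams than a 2D tile mosaic of the same area.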
Content metamorphosis in synthetic holography
NASA Astrophysics Data System (ADS)
Desbiens, Jacques
2013-02-01
A synthetic hologram is an optical system made of hundreds of images amalgamated in a structure of holographic cells. Each of these images represents a point of view on a three-dimensional space, which leads us to consider synthetic holography as a multiple-point-of-view perspective system. In the composition of a computer graphics scene for a synthetic hologram, the field of view of the holographic image can be divided into several viewing zones. We can attribute these divisions to any object or image feature independently and operate different transformations on image content. In computer-generated holography, we tend to consider content variations as a continuous animation, much like a short movie. However, by composing sequential variations of image features in relation with spatial divisions, we can build new narrative forms distinct from linear cinematographic narration. When observers move freely and change their viewing positions, they travel from one field-of-view division to another. In synthetic holography, metamorphoses of image content are within the observer's path. In all imaging media, the transformation of image features in synchronisation with the observer's position is a rare occurrence. However, this is a predominant characteristic of synthetic holography. This paper describes some of my experimental works in the development of metamorphic holographic images.
Effects of task and image properties on visual-attention deployment in image-quality assessment
NASA Astrophysics Data System (ADS)
Alers, Hani; Redi, Judith; Liu, Hantao; Heynderickx, Ingrid
2015-03-01
It is important to understand how humans view images and how their behavior is affected by changes in the properties of the viewed images and the task they are given, particularly the task of scoring image quality (IQ). This is a complex behavior that holds great importance for the field of image-quality research. This work builds on four years of research spanning three databases that study image-viewing behavior. Using eye-tracking equipment, it was possible to collect information on human viewing behavior for different kinds of stimuli and under different experimental settings. This work performs a cross-analysis of the results from all these databases using state-of-the-art similarity measures. The results strongly show that asking viewers to score the IQ significantly changes their viewing behavior. Muting the color saturation also appears to affect the saliency of the images. However, a change in IQ was not consistently found to modify visual-attention deployment, neither under free looking nor during scoring. These results are helpful in gaining a better understanding of image-viewing behavior under different conditions. They also have important implications for work that collects subjective image-quality scores from human observers.
Partially-overlapped viewing zone based integral imaging system with super wide viewing angle.
Xiong, Zhao-Long; Wang, Qiong-Hua; Li, Shu-Li; Deng, Huan; Ji, Chao-Chao
2014-09-22
In this paper, we analyze the relationship between the viewer and the viewing zones of an integral imaging (II) system and present a partially-overlapped viewing zone (POVZ) based integral imaging system with a super wide viewing angle. In the proposed system, the viewing angle can be wider than that of the conventional tracking-based II system. In addition, the POVZ eliminates the flipping and time delay of the 3D scene as well. The proposed II system achieves a super wide viewing angle of 120°, about twice that of the conventional system, without the flipping effect.
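For context on why conventional II viewing angles are narrow: in a lens-array display, the textbook viewing angle is set by the lens pitch and the gap between the lens array and the display panel. A small sketch of that standard relation (a general textbook formula, not taken from this paper; the example pitch and gap are made up):

```python
import math

def ii_viewing_angle_deg(lens_pitch, gap):
    """Textbook viewing angle of a lens-array integral-imaging display:
    theta = 2 * arctan(pitch / (2 * gap)); both arguments in the same unit."""
    return math.degrees(2.0 * math.atan(lens_pitch / (2.0 * gap)))

# Hypothetical 1 mm pitch with a 3 mm gap gives roughly a 19 degree zone,
# which is why zone-overlapping schemes like POVZ are attractive.
theta = ii_viewing_angle_deg(1.0, 3.0)
```

Shrinking the gap or enlarging the pitch widens the zone but trades off resolution, which motivates multiplexing viewing zones instead.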
NASA Technical Reports Server (NTRS)
2004-01-01
13 August 2004 This red wide angle Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a view of the retreating seasonal south polar cap in the most recent spring in late 2003. Bright areas are covered with frost; dark areas are those from which the solid carbon dioxide has sublimed away. The center of this image is located near 76.5°S, 28.2°W. The scene is large; it covers an area about 250 km (155 mi) across. The scene is illuminated by sunlight from the upper left.
2015-04-15
Analysis of radio tracking data have enabled maps of the gravity field of Mercury to be derived. In this image, overlain on a mosaic obtained by MESSENGER's Mercury Dual Imaging System and illuminated with a shape model determined from stereo-photoclinometry, Mercury's gravity anomalies are depicted in colors. Red tones indicate mass concentrations, centered on the Caloris basin (center) and the Sobkou region (right limb). Such large-scale gravitational anomalies are signatures of subsurface structure and evolution. The north pole is near the top of the sunlit area in this view. http://photojournal.jpl.nasa.gov/catalog/PIA19285
NASA Astrophysics Data System (ADS)
Xiong, Ming; Davies, Jackie A.; Li, Bo; Yang, Liping; Liu, Ying D.; Xia, Lidong; Harrison, Richard A.; Keiji, Hayashi; Li, Huichao
2017-07-01
Interplanetary corotating interaction regions (CIRs) can be remotely imaged in white light (WL), as demonstrated by the Solar Mass Ejection Imager (SMEI) on board the Coriolis spacecraft and Heliospheric Imagers (HIs) on board the twin Solar TErrestrial RElations Observatory (STEREO) spacecraft. The interplanetary WL intensity, due to Thomson scattering of incident sunlight by free electrons, is jointly determined by the 3D distribution of electron number density and line-of-sight (LOS) weighting factors of the Thomson-scattering geometry. The 2D radiance patterns of CIRs in WL sky maps look very different from different 3D viewpoints. Because of the in-ecliptic locations of both the STEREO and Coriolis spacecraft, the longitudinal dimension of interplanetary CIRs has, up to now, always been integrated in WL imagery. To synthesize the WL radiance patterns of CIRs from an out-of-ecliptic (OOE) vantage point, we perform forward magnetohydrodynamic modeling of the 3D inner heliosphere during Carrington Rotation CR1967 at solar maximum. The mixing effects associated with viewing 3D CIRs are significantly minimized from an OOE viewpoint. Our forward modeling results demonstrate that OOE WL imaging from a latitude greater than 60° can (1) enable the garden-hose spiral morphology of CIRs to be readily resolved, (2) enable multiple coexisting CIRs to be differentiated, and (3) enable the continuous tracing of any interplanetary CIR back toward its coronal source. In particular, an OOE view in WL can reveal where nascent CIRs are formed in the extended corona and how these CIRs develop in interplanetary space. Therefore, a panoramic view from a suite of wide-field WL imagers in a solar polar orbit would be invaluable in unambiguously resolving the large-scale longitudinal structure of CIRs in the 3D inner heliosphere.
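The statement that WL intensity is a line-of-sight integral of electron density weighted by scattering factors can be illustrated with a deliberately simplified toy: a steady r⁻² solar wind and an r⁻² scattering weight standing in for the full Thomson-scattering geometry (every modeling choice below is an illustrative assumption, not the paper's MHD forward model):

```python
import math

def wl_brightness(elongation_deg, n_steps=2000, s_max_au=5.0):
    """Toy white-light brightness: integrate n_e(r) * G(r) along the line of
    sight from an observer at 1 AU, with n_e ~ r^-2 (steady wind) and a
    simplified scattering weight G ~ r^-2 (not the full Thomson geometry)."""
    eps = math.radians(elongation_deg)
    ds = s_max_au / n_steps
    total = 0.0
    for i in range(n_steps):
        s = (i + 0.5) * ds                          # distance along LOS, AU
        r2 = 1.0 + s * s - 2.0 * s * math.cos(eps)  # heliocentric r^2 (law of cosines)
        total += ds / (r2 * r2)                     # n_e * G ~ r^-4
    return total
```

Even this toy reproduces the qualitative behavior exploited in heliospheric imaging: brightness falls steeply with elongation, and any density enhancement (such as a CIR) crossing the line of sight perturbs the integral.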
Bernhardt, Sylvain; Nicolau, Stéphane A; Agnus, Vincent; Soler, Luc; Doignon, Christophe; Marescaux, Jacques
2016-05-01
The use of augmented reality in minimally invasive surgery has been the subject of much research for more than a decade. The endoscopic view of the surgical scene is typically augmented with a 3D model extracted from a preoperative acquisition. However, the organs of interest often present major changes in shape and location because of the pneumoperitoneum and patient displacement. There have been numerous attempts to compensate for this distortion between the pre- and intraoperative states. Some have attempted to recover the visible surface of the organ through image analysis and register it to the preoperative data, but this has proven insufficiently robust and may be problematic with large organs. A second approach is to introduce an intraoperative 3D imaging system as a transition. Hybrid operating rooms are becoming more and more popular, so this seems to be a viable solution, but current techniques require yet another external and constraining piece of apparatus such as an optical tracking system to determine the relationship between the intraoperative images and the endoscopic view. In this article, we propose a new approach to automatically register the reconstruction from an intraoperative CT acquisition with the static endoscopic view, by locating the endoscope tip in the volume data. We first describe our method to localize the endoscope orientation in the intraoperative image using standard image processing algorithms. Secondly, we highlight that the axis of the endoscope needs a specific calibration process to ensure proper registration accuracy. In the last section, we present quantitative and qualitative results proving the feasibility and the clinical potential of our approach. Copyright © 2016 Elsevier B.V. All rights reserved.
A Multi-Source Inverse-Geometry CT system: Initial results with an 8 spot x-ray source array
Baek, Jongduk; De Man, Bruno; Uribe, Jorge; Longtin, Randy; Harrison, Daniel; Reynolds, Joseph; Neculaes, Bogdan; Frutschy, Kristopher; Inzinna, Louis; Caiafa, Antonio; Senzig, Robert; Pelc, Norbert J.
2014-01-01
We present initial experimental results of a rotating-gantry multi-source inverse-geometry CT (MS-IGCT) system. The MS-IGCT system was built with a single module of 2×4 x-ray sources and a 2D detector array. It produced a 75 mm in-plane field-of-view (FOV) with 160 mm axial coverage in a single gantry rotation. To evaluate system performance, a 2.5 inch diameter uniform PMMA cylinder phantom, a 200 μm diameter tungsten wire, and a euthanized rat were scanned. Each scan acquired 125 views per source and the gantry rotation time was 1 second per revolution. Geometric calibration was performed using a bead phantom. The scanning parameters were 80 kVp, 125 mA, and a 5.4 μs pulse per source location per view. A data normalization technique was applied to the acquired projection data, and beam hardening and spectral nonlinearities of each detector channel were corrected. For image reconstruction, the projection data of each source row were rebinned into a full cone beam data set, and the FDK algorithm was used. The reconstructed volumes from upper and lower source rows shared an overlap volume which was combined in image space. The images of the uniform PMMA cylinder phantom showed good uniformity and no apparent artefacts. The measured in-plane MTF showed 13 lp/cm at 10% cutoff, in good agreement with expectations. The rat data were also reconstructed reliably. The initial experimental results from this rotating-gantry MS-IGCT system demonstrated its ability to image a complex anatomical object without any significant image artefacts and to achieve high image resolution and large axial coverage in a single gantry rotation. PMID:24556567
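The quoted resolution figure (13 lp/cm at 10% MTF cutoff) can be illustrated by how such a cutoff is typically extracted: Fourier-transform a measured point-spread function and find where the normalized magnitude drops to the threshold. A sketch with a made-up Gaussian PSF (the width and sampling below are illustrative assumptions, not the tungsten-wire data):

```python
import numpy as np

def mtf_cutoff_from_psf(psf, dx_cm, level=0.1):
    """Frequency (lp/cm) at which the MTF (normalized magnitude of the PSF's
    Fourier transform) first drops below `level`, by linear interpolation."""
    mtf = np.abs(np.fft.rfft(psf))
    mtf /= mtf[0]                                  # unity at DC
    freqs = np.fft.rfftfreq(len(psf), dx_cm)
    i = np.nonzero(mtf < level)[0][0]              # first bin below threshold
    frac = (mtf[i - 1] - level) / (mtf[i - 1] - mtf[i])
    return freqs[i - 1] + frac * (freqs[i] - freqs[i - 1])

# Made-up Gaussian PSF, sigma = 0.5 mm, sampled every 50 um:
dx = 0.005                                         # cm
x = (np.arange(401) - 200) * dx
psf = np.exp(-x**2 / (2 * 0.05**2))
cutoff = mtf_cutoff_from_psf(psf, dx)              # analytic value ~6.8 lp/cm
```

For a Gaussian PSF the analytic MTF is exp(-2 pi^2 sigma^2 f^2), so the numerical cutoff can be checked against sqrt(ln 10) / (sqrt(2) pi sigma).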
Mulrane, Laoighse; Rexhepaj, Elton; Smart, Valerie; Callanan, John J; Orhan, Diclehan; Eldem, Türkan; Mally, Angela; Schroeder, Susanne; Meyer, Kirstin; Wendt, Maria; O'Shea, Donal; Gallagher, William M
2008-08-01
The widespread use of digital slides has only recently come to the fore with the development of high-throughput scanners and high-performance viewing software. This development, along with the optimisation of compression standards and image transfer techniques, has allowed the technology to be used in wide-reaching applications including integration of images into hospital information systems and histopathological training, as well as the development of automated image analysis algorithms for prediction of histological aberrations and quantification of immunohistochemical stains. Here, the use of this technology in the creation of a comprehensive library of images of preclinical toxicological relevance is demonstrated. The images, acquired using the Aperio ScanScope CS and XT slide acquisition systems, form part of the ongoing EU FP6 Integrated Project, Innovative Medicines for Europe (InnoMed). In more detail, PredTox (abbreviation for Predictive Toxicology) is a subproject of InnoMed and comprises a consortium of 15 industrial (13 large pharma, 1 technology provider and 1 SME) and three academic partners. The primary aim of this consortium is to assess the value of combining data generated from 'omics technologies (proteomics, transcriptomics, metabolomics) with the results from more conventional toxicology methods, to facilitate further informed decision making in preclinical safety evaluation. A library of 1709 scanned images was created of full-face sections of liver and kidney tissue specimens from male Wistar rats treated with 16 proprietary and reference compounds of known toxicity; additional biological materials from these treated animals were separately used to create 'omics data, that will ultimately be used to populate an integrated toxicological database. 
In respect to assessment of the digital slides, a web-enabled digital slide management system, Digital SlideServer (DSS), was employed to enable integration of the digital slide content into the 'omics database and to facilitate remote viewing by pathologists connected with the project. DSS also facilitated manual annotation of digital slides by the pathologists, specifically in relation to marking particular lesions of interest. Tissue microarrays (TMAs) were constructed from the specimens for the purpose of creating a repository of tissue from animals used in the study with a view to later-stage biomarker assessment. As the PredTox consortium itself aims to identify new biomarkers of toxicity, these TMAs will be a valuable means of validation. In summary, a large repository of histological images was created enabling the subsequent pathological analysis of samples through remote viewing and, along with the utilisation of TMA technology, will allow the validation of biomarkers identified by the PredTox consortium. The population of the PredTox database with these digitised images represents the creation of the first toxicological database integrating 'omics and preclinical data with histological images.
Huang, Kuo-Wei; Su, Ting-Wei; Ozcan, Aydogan; Chiou, Pei-Yu
2013-06-21
We demonstrate an optoelectronic tweezer (OET) coupled to a lensfree holographic microscope for real-time interactive manipulation of cells and micro-particles over a large field-of-view (FOV). This integrated platform can record the holographic images of cells and particles over the entire active area of a CCD sensor array, perform digital image reconstruction to identify target cells, dynamically track the positions of cells and particles, and project light beams to trigger light-induced dielectrophoretic forces to pattern and sort cells on a chip. OET technology has been previously shown to be capable of performing parallel single-cell manipulation over a large area. However, its throughput has been bottlenecked by the number of cells that can be imaged within the limited FOV of a conventional microscope objective lens. Integrating lensfree holographic imaging with OET solves this fundamental FOV barrier, while also creating a compact on-chip cell/particle manipulation platform. Using this unique platform, we have successfully demonstrated real-time interactive manipulation of thousands of single cells and micro-particles over an ultra-large area of, e.g., 240 mm² (i.e., 17.96 mm × 13.52 mm).
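The "digital image reconstruction" step in lensfree holography typically back-propagates the recorded hologram to the object plane, and the angular spectrum method is the standard numerical tool for that propagation. A minimal sketch (the grid size, wavelength, and distances below are illustrative assumptions, not this platform's parameters):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field a distance z (meters) using the
    angular spectrum method; z < 0 back-propagates toward the object plane.
    `dx` is the pixel pitch in meters; the field is assumed square."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, dx)
    fx2 = fx[:, None] ** 2 + fx[None, :] ** 2
    arg = 1.0 - (wavelength ** 2) * fx2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)     # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

In a lensfree setup, applying this transfer function with a negative `z` to the measured hologram refocuses cells and particles anywhere in the sensor's full active area, which is what removes the objective-lens FOV bottleneck.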
Little Eyes on Large Solar Motions
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2017-10-01
Images taken during the solar eclipse in 2012. The central color composite of the eclipsed solar surface was captured by SDO, the white-light view of the solar corona around it was taken by the authors, and the background, wide-field black-and-white view is from LASCO. The white arrows mark the atypical structure. [Alzate et al. 2017]
It seems like science is increasingly being done with advanced detectors on enormous ground- and space-based telescopes. One might wonder: is there anything left to learn from observations made with digital cameras mounted on 10-cm telescopes? The answer is yes: plenty! Illustrating this point, a new study using such equipment recently reports on the structure and dynamics of the Sun's corona during two solar eclipses.
A Full View of the Corona
The solar corona is the upper part of the Sun's atmosphere, extending millions of kilometers into space. This plasma is dynamic, with changing structures that arise in response to activity on the Sun's surface such as enormous ejections of energy known as coronal mass ejections (CMEs). Studying the corona is therefore important for understanding what drives its structure and how energy is released from the Sun. Though there exist a number of space-based telescopes that observe the Sun's corona, they often have limited fields of view. The Solar Dynamics Observatory's AIA, for instance, has spectacular resolution but only images out to 1/3 of a solar radius above the Sun's limb. The space-based coronagraph LASCO C2, on the other hand, provides a broad view of the outer regions of the corona, but it only images down to 2.2 solar radii above the Sun's limb. Piecing together observations from these telescopes therefore leaves a gap that prevents a full picture of the large-scale corona and how it connects to activity at the solar surface.
Same as the previous figure, but for the eclipse in 2013. [Alzate et al. 2017]
To provide this broad, continuous picture, a team of scientists used digital cameras mounted on 10-cm telescopes to capture white-light images from the solar surface out to several solar radii using a natural coronagraph: a solar eclipse. The team made two sets of observations: one during an eclipse in 2012 in Australia, and one during an eclipse in 2013 in Gabon, Africa. In a recent publication led by Nathalia Alzate (Honolulu Community College), the team now reports what they learned from these observations.
Building Atypical Structures
The authors' image processing revealed two atypical large-scale structures with sharp edges, somewhat similar in appearance to what is seen near the boundaries of rapidly expanding polar coronal holes. But these structures, visible in the southeast quadrant of the images taken during both eclipses, were not located near the poles. By analyzing their images along with space-based images taken at the same time, Alzate and collaborators were able to determine that the shape the structures took was instead a direct consequence of a series of sudden brightenings due to low-level flaring events on the solar surface. These events were followed by small jets, and then very faint, puff-like CMEs that might otherwise have gone unnoticed.
Impact of the passage of a series of puff-like CMEs (shown in the LASCO time sequence in the bottom panels) on coronal structures. [Alzate et al. 2017]
The fact that such innocuous transient events in the Sun's lower atmosphere can be enough to influence the corona's large-scale structure for timescales of 12-48 hours is a significant discovery. There are roughly 3 CMEs per day during solar maximum, suggesting that atypical structures like the ones discovered in these images are likely very common. These results therefore have a significant impact on our understanding of the solar corona, which goes to show that there's still a lot we can learn with small telescopes!
Citation
Nathalia Alzate et al 2017 ApJ 848 84.
doi:10.3847/1538-4357/aa8cd2
Ash and Steam, Soufriere Hills Volcano, Montserrat
NASA Technical Reports Server (NTRS)
2002-01-01
International Space Station crew members are regularly alerted to dynamic events on the Earth's surface. On request from scientists on the ground, the ISS crew observed and recorded activity from the summit of Soufriere Hills on March 20, 2002. These two images provide a context view of the island (bottom) and a detailed view of the summit plume (top). When the images were taken, the eastern side of the summit region experienced continued lava growth, and reports posted on the Smithsonian Institution's Weekly Volcanic Activity Report indicate that 'large (50-70 m high), fast-growing, spines developed on the dome's summit. These spines periodically collapsed, producing pyroclastic flows down the volcano's east flank that sometimes reached the Tar River fan. Small ash clouds produced from these events reached roughly 1 km above the volcano and drifted westward over Plymouth and Richmond Hill. Ash predominately fell into the sea. Sulfur dioxide emission rates remained high. Theodolite measurements of the dome taken on March 20 yielded a dome height of 1,039 m.' Other photographs by astronauts of Montserrat have been posted on the Earth Observatory: digital photograph number ISS002-E-9309, taken on July 9, 2001; and a recolored and reprojected version of the same image. Digital photograph numbers ISS004-E-8972 and 8973 were taken 20 March, 2002 from Space Station Alpha and were provided by the Earth Sciences and Image Analysis Laboratory at Johnson Space Center. Additional images taken by astronauts and cosmonauts can be viewed at the NASA-JSC Gateway to Astronaut Photography of Earth.
Atmospheric Science Data Center
2013-04-16
article title: Unique Views of a Shattered Ice Shelf ... views of the breakup of the northern section of the Larsen B ice shelf are shown in this image pair from the Multi-angle Imaging ...
2016-06-30
NASA's Juno spacecraft obtained this color view on June 28, 2016, at a distance of 3.9 million miles (6.2 million kilometers) from Jupiter. As Juno nears its destination, features on the giant planet are increasingly visible, including the Great Red Spot. The spacecraft is approaching over Jupiter's north pole, providing a unique perspective on the Jupiter system, including its four large moons. The scene was captured by the mission's imaging camera, called JunoCam, which is designed to acquire high resolution views of features in Jupiter's atmosphere from very close to the planet. http://photojournal.jpl.nasa.gov/catalog/PIA20705
Surface coil proton MR imaging at 2 T.
Röschmann, P; Tischler, R
1986-10-01
We describe the design and application of surface coils for magnetic resonance (MR) imaging at high resonance frequencies (85 MHz). Circular, rectangular-frame, and reflector-type surface coils were used in the transmit-and-receive mode. With these coils, the required radio frequency power is reduced by factors of two to 100 with respect to head and body coils. With the small, circular coils, high-resolution images of a small region of interest can be obtained that are free of foldback and motion artifacts originating outside the field of interest. With the rectangular-frame and reflector coils, large fields of view are also accessible. As examples of applications, single- and multiple-section images of the eye, knee, head and shoulder, and spinal cord are provided.
Automatic computation of 2D cardiac measurements from B-mode echocardiography
NASA Astrophysics Data System (ADS)
Park, JinHyeong; Feng, Shaolei; Zhou, S. Kevin
2012-03-01
We propose a robust and fully automatic algorithm which computes the 2D echocardiography measurements recommended by the American Society of Echocardiography. The algorithm employs knowledge-based imaging technologies which can learn expert knowledge from training images and expert annotations. Based on the models constructed in the learning stage, the algorithm searches for the initial locations of the landmark points for the measurements by utilizing the heart structure of the left ventricle, including the mitral valve and aortic valve. It employs a pseudo anatomic M-mode image, generated by accumulating the line images in the 2D parasternal long-axis view along time, to refine the measurement landmark points. Experimental results with a large volume of data show that the algorithm runs fast and is robust, with performance comparable to experts.
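The pseudo anatomic M-mode step (accumulating the same scan line across frames over time) can be sketched as below. This is a minimal illustration of the accumulation only; the paper's landmark refinement on the resulting image is not shown, and the fixed-column assumption is a simplification.

```python
import numpy as np

def pseudo_mmode(frames, column):
    """Stack one scan line (a fixed column) from every B-mode frame along
    time, giving a depth-by-time pseudo M-mode image."""
    return np.stack([frame[:, column] for frame in frames], axis=1)
```

Structures that move along the chosen line (e.g. a valve leaflet) trace out curves in the depth-time image, which is what makes the M-mode view convenient for refining landmark positions.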
Saying Goodbye to 'Bonneville' Crater
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site] Annotated Image NASA's Mars Exploration Rover Spirit took this panoramic camera image on sol 86 (March 31, 2004) before driving 36 meters (118 feet) on sol 87 toward its future destination, the Columbia Hills. This is probably the last panoramic camera image that Spirit will take from the high rim of 'Bonneville' crater, and provides an excellent view of the ejecta-covered path the rover has journeyed thus far. The lander can be seen toward the upper right of the frame and is approximately 321 meters (1060 feet) away from Spirit's current location. The large hill on the horizon is Grissom Hill. The Columbia Hills, located to the left, are not visible in this image.
2015-02-12
This montage of Cassini Synthetic Aperture Radar (SAR) images of the surface of Titan shows four examples of how a newly developed technique for handling noise results in clearer, easier to interpret views. The top row of images was produced in the manner used since the mission arrived in the Saturn system a decade ago; the row at bottom was produced using the new technique. The three leftmost image pairs show bays and spits of land in Ligea Mare, one of Titan's large hydrocarbon seas. The rightmost pair shows a valley network along Jingpo Lacus, one of Titan's larger northern lakes. North is toward the left in these images. Each thumbnail represents an area 70 miles (112 kilometers) wide. http://photojournal.jpl.nasa.gov/catalog/PIA19053
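Although the specific despeckling technique behind the clearer bottom-row images is not reproduced here, the motivation for SAR noise handling is the statistics of fully developed speckle: single-look intensity has unit contrast (std/mean = 1), and averaging N independent looks reduces it to 1/sqrt(N). A small numerical check of that classic result (the exponential intensity model is the standard speckle assumption, not Cassini-specific processing):

```python
import numpy as np

def multilook(intensity_looks):
    """Average N independent speckle 'looks' pixel-by-pixel; fully developed
    speckle has unit contrast, and N-look averaging cuts it to 1/sqrt(N)."""
    return np.mean(intensity_looks, axis=0)

rng = np.random.default_rng(42)
looks = rng.exponential(scale=1.0, size=(16, 100_000))  # 16 looks of speckle
averaged = multilook(looks)
contrast = averaged.std() / averaged.mean()             # ~ 1/sqrt(16) = 0.25
```

More sophisticated despeckling methods aim for a similar variance reduction without the resolution loss that plain multilooking incurs.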
Predictive searching algorithm for Fourier ptychography
NASA Astrophysics Data System (ADS)
Li, Shunkai; Wang, Yifan; Wu, Weichen; Liang, Yanmei
2017-12-01
By capturing a set of low-resolution images under different illumination angles and stitching them together in the Fourier domain, Fourier ptychography (FP) is capable of providing a high-resolution image with a large field of view. Despite its validity, long acquisition time limits its real-time application. We propose an incomplete sampling scheme in this paper, termed the predictive searching algorithm, to shorten the acquisition and recovery time. Informative sub-regions of the sample's spectrum are searched, and the corresponding images of the most informative directions are captured for spectrum expansion. Its effectiveness is validated by both simulated and experimental results, in which the data requirement is reduced by ~64% to ~90% without sacrificing image reconstruction quality compared with the conventional FP method.
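The FP acquisition model that such a search operates on can be sketched as follows: each oblique illumination shifts the object spectrum, the objective pupil low-pass filters it, and the camera records intensity only. A toy forward model of one capture (the pupil radius and shift values are illustrative assumptions; the predictive search and phase-retrieval recovery themselves are not shown):

```python
import numpy as np

def fp_capture(obj, pupil_radius, shift):
    """Simulate one Fourier-ptychography capture: oblique illumination shifts
    the object spectrum by `shift` (pixels), the circular pupil low-pass
    filters it, and the camera records the intensity of the result."""
    spec = np.fft.fftshift(np.fft.fft2(obj))
    spec = np.roll(spec, shift, axis=(0, 1))          # tilted illumination
    n = obj.shape[0]
    y, x = np.mgrid[:n, :n] - n // 2
    pupil = (x**2 + y**2) <= pupil_radius**2          # coherent transfer function
    lowres_field = np.fft.ifft2(np.fft.ifftshift(spec * pupil))
    return np.abs(lowres_field) ** 2                  # camera sees intensity only
```

Conventional FP tiles the whole spectrum with many such captures; the predictive searching algorithm instead captures only the shifts it judges most informative, which is where the quoted data savings come from.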
Two Perspectives on Forest Fire
NASA Technical Reports Server (NTRS)
2002-01-01
Multi-angle Imaging Spectroradiometer (MISR) images of smoke plumes from wildfires in western Montana acquired on August 14, 2000. A portion of Flathead Lake is visible at the top, and the Bitterroot Range traverses the images. The left view is from MISR's vertical-viewing (nadir) camera. The right view is from the camera that looks forward at a steep angle (60 degrees). The smoke location and extent are far more visible when seen at this highly oblique angle. However, vegetation is much darker in the forward view. A brown burn scar is located nearly in the exact center of the nadir image, while in the high-angle view it is shrouded in smoke. Also visible in the center and upper right of the images, and more obvious in the clearer nadir view, are checkerboard patterns on the surface associated with land ownership boundaries and logging. Compare these images with the high resolution infrared imagery captured nearby by Landsat 7 half an hour earlier. Images by NASA/GSFC/JPL, MISR Science Team.