A dual-view digital tomosynthesis imaging technique for improved chest imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhong, Yuncheng; Lai, Chao-Jen; Wang, Tianpeng
Purpose: Digital tomosynthesis (DTS) has been shown to be useful for reducing the overlapping of abnormalities with anatomical structures at various depth levels along the posterior–anterior (PA) direction in chest radiography. However, DTS provides crude three-dimensional (3D) images that have poor resolution in the lateral view and can only be displayed with reasonable quality in the PA view. Furthermore, the spillover of high-contrast objects from off-fulcrum planes generates artifacts that may impede the diagnostic use of the DTS images. In this paper, the authors describe and demonstrate the use of a dual-view DTS technique to improve the accuracy of the reconstructed volume image data for more accurate rendition of the anatomy and slice images with improved resolution and reduced artifacts, thus allowing the 3D image data to be viewed in views other than the PA view. Methods: With the dual-view DTS technique, limited angle scans are performed and projection images are acquired in two orthogonal views: PA and lateral. The dual-view projection data are used together to reconstruct 3D images using the maximum likelihood expectation maximization iterative algorithm. In this study, projection images were simulated or experimentally acquired over 360° using the scanning geometry for cone beam computed tomography (CBCT). While all projections were used to reconstruct CBCT images, selected projections were extracted and used to reconstruct single- and dual-view DTS images for comparison with the CBCT images. For realistic demonstration and comparison, a digital chest phantom derived from clinical CT images was used for the simulation study. An anthropomorphic chest phantom was imaged for the experimental study.
The resultant dual-view DTS images were visually compared with the single-view DTS images and CBCT images for the presence of image artifacts and accuracy of CT numbers and anatomy and quantitatively compared with root-mean-square-deviation (RMSD) values computed using the digital chest phantom or the CBCT images as the reference in the simulation and experimental study, respectively. High-contrast wires with vertical, oblique, and horizontal orientations in a PA view plane were also imaged to investigate the spatial resolutions and how the wire signals spread in the PA view and lateral view slice images. Results: Both the digital phantom images (simulated) and the anthropomorphic phantom images (experimentally generated) demonstrated that the dual-view DTS technique resulted in improved spatial resolution in the depth (PA) direction, more accurate representation of the anatomy, and significantly reduced artifacts. The RMSD values corroborate the visual observations, with substantially lower RMSD values measured for the dual-view DTS images than for the single-view DTS images. The imaging experiment with the high-contrast wires shows that while the vertical and oblique wires could be resolved in the lateral view in both single- and dual-view DTS images, the horizontal wire could only be resolved in the dual-view DTS images. This indicates that with single-view DTS, the wire signals spread liberally to off-fulcrum planes and generated wire shadows there. Conclusions: The authors have demonstrated both visually and quantitatively that the dual-view DTS technique can be used to achieve more accurate rendition of the anatomy and to obtain slice images with improved resolution and reduced artifacts as compared to the single-view DTS technique, thus allowing the 3D image data to be viewed in views other than the PA view.
These advantages could make the dual-view DTS technique useful in situations where better separation of the objects-of-interest from the off-fulcrum structures or more accurate 3D rendition of the anatomy is required while a regular CT examination is undesirable due to radiation dose considerations.
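The RMSD figure of merit used in the study above is straightforward to compute; a minimal sketch (with toy volumes standing in for the phantom and its CBCT reference, not the paper's data) could be:

```python
import numpy as np

def rmsd(recon, reference):
    """Root-mean-square deviation between a reconstructed volume and a reference."""
    recon = np.asarray(recon, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((recon - reference) ** 2)))

# Toy 3D volumes standing in for a DTS reconstruction and its CBCT reference.
reference = np.zeros((4, 4, 4))
reference[1:3, 1:3, 1:3] = 100.0   # a high-contrast cube
blurred = reference * 0.5          # crude stand-in for off-fulcrum signal loss
print(rmsd(blurred, reference))    # larger deviation -> worse reconstruction
print(rmsd(reference, reference))  # identical volumes -> 0.0
```

A lower RMSD against the reference is what the authors use to argue that the dual-view reconstruction is closer to the truth than the single-view one.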
Son, Jung-Young; Saveljev, Vladmir V; Kim, Jae-Soon; Kim, Sung-Sik; Javidi, Bahram
2004-09-10
The viewing zone of autostereoscopic imaging systems that use lenticular, parallax-barrier, and microlens-array plates as the viewing-zone-forming optics is analyzed in order to verify the image-quality differences between different locations of the zone. The viewing zone consists of many subzones. The images seen at most of these subzones are composed of at least one image strip selected from the total number of different view images displayed. These different view images are not mixed but patched to form a complete image. This image patching deteriorates the quality of the image seen at different subzones. We attempt to quantify the quality of the image seen at these viewing subzones by taking the inverse of the number of different view images patched together at different subzones. Although the combined viewing zone can be extended to almost all of the front space of the imaging system, in reality it is limited mainly by the image quality.
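The quantification described, taking the inverse of the number of patched view images as the quality of a subzone, can be sketched as follows (the patch counts are hypothetical illustrations, not measured values):

```python
def subzone_quality(num_patched_views):
    """Quality index of a viewing subzone, taken (as in the abstract) as the
    inverse of the number of different view images patched together there;
    1.0 means a pure single-view image is seen."""
    if num_patched_views < 1:
        raise ValueError("a subzone shows at least one view image strip")
    return 1.0 / num_patched_views

# Hypothetical patch counts across five subzones of the viewing zone:
counts = [1, 2, 3, 2, 1]
print([subzone_quality(c) for c in counts])  # quality drops as more strips mix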
Rajpoot, Kashif; Grau, Vicente; Noble, J Alison; Becher, Harald; Szmigielski, Cezary
2011-08-01
Real-time 3D echocardiography (RT3DE) promises a more objective and complete cardiac functional analysis by dynamic 3D image acquisition. Despite several efforts towards automation of left ventricle (LV) segmentation and tracking, these remain challenging research problems due to the poor-quality nature of acquired images usually containing missing anatomical information, speckle noise, and limited field-of-view (FOV). Recently, multi-view fusion 3D echocardiography has been introduced, in which multiple conventional single-view RT3DE images are acquired with small probe movements and fused together after alignment. This concept of multi-view fusion helps to improve image quality and anatomical information and extends the FOV. We now take this work further by comparing single-view and multi-view fused images in a systematic study. In order to better illustrate the differences, this work evaluates image quality and information content of single-view and multi-view fused images using image-driven LV endocardial segmentation and tracking. The image-driven methods were utilized to fully exploit image quality and anatomical information present in the image, thus purposely not including any high-level constraints like prior shape or motion knowledge in the analysis approaches. Experiments show that multi-view fused images are better suited for LV segmentation and tracking, while relatively more failures and errors were observed on single-view images. Copyright © 2011 Elsevier B.V. All rights reserved.
Characteristics of composite images in multiview imaging and integral photography.
Lee, Beom-Ryeol; Hwang, Jae-Jeong; Son, Jung-Young
2012-07-20
The compositions of images projected to a viewer's eyes from the various viewing regions of the viewing zone formed in one-dimensional integral photography (IP) and multiview imaging (MV) are identified. These compositions indicate that they are made up of pieces from different view images. Comparisons of the composite images with images composited at various regions of imaging space formed by camera arrays for multiview image acquisition reveal that the composite images do not involve any scene folding in the central viewing zone for either MV or IP. However, in the IP case, compositions from neighboring viewing regions aligned in the horizontal direction have reversed disparities, but in the viewing regions between the central and side viewing zones, no reversed disparities are expected. However, MV does exhibit them.
Kim, Joowhan; Min, Sung-Wook; Lee, Byoungho
2007-10-01
Integral floating display is a recently proposed three-dimensional (3D) display method which provides a dynamic 3D image in the vicinity to an observer. It has a viewing window only through which correct 3D images can be observed. However, the positional difference between the viewing window and the floating image causes limited viewing zone in integral floating system. In this paper, we provide the principle and experimental results of the location adjustment of the viewing window of the integral floating display system by modifying the elemental image region for integral imaging. We explain the characteristics of the viewing window and propose how to move the viewing window to maximize the viewing zone.
Enjilela, Esmaeil; Lee, Ting-Yim; Hsieh, Jiang; Wisenberg, Gerald; Teefy, Patrick; Yadegari, Andrew; Bagur, Rodrigo; Islam, Ali; Branch, Kelley; So, Aaron
2018-03-01
We implemented and validated a compressed sensing (CS) based algorithm for reconstructing dynamic contrast-enhanced (DCE) CT images of the heart from sparsely sampled X-ray projections. DCE CT imaging of the heart was performed on five normal and ischemic pigs after contrast injection. DCE images were reconstructed with filtered backprojection (FBP) and CS from all projections (984-view) and 1/3 of all projections (328-view), and with CS from 1/4 of all projections (246-view). Myocardial perfusion (MP) measurements with each protocol were compared to those with the reference 984-view FBP protocol. Both the 984-view CS and 328-view CS protocols were in good agreements with the reference protocol. The Pearson correlation coefficients of 984-view CS and 328-view CS determined from linear regression analyses were 0.98 and 0.99 respectively. The corresponding mean biases of MP measurement determined from Bland-Altman analyses were 2.7 and 1.2ml/min/100g. When only 328 projections were used for image reconstruction, CS was more accurate than FBP for MP measurement with respect to 984-view FBP. However, CS failed to generate MP maps comparable to those with 984-view FBP when only 246 projections were used for image reconstruction. DCE heart images reconstructed from one-third of a full projection set with CS were minimally affected by aliasing artifacts, leading to accurate MP measurements with the effective dose reduced to just 33% of conventional full-view FBP method. The proposed CS sparse-view image reconstruction method could facilitate the implementation of sparse-view dynamic acquisition for ultra-low dose CT MP imaging. Copyright © 2017 Elsevier B.V. All rights reserved.
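The two agreement statistics reported above (Pearson correlation from linear regression, mean bias from Bland-Altman analysis) can be computed as sketched below; the myocardial perfusion values are made up for illustration, not taken from the study:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between paired measurements."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def bland_altman_bias(x, y):
    """Mean of the paired differences (the Bland-Altman mean bias)."""
    return sum(a - b for a, b in zip(x, y)) / len(x)

# Hypothetical MP values (ml/min/100g): sparse-view CS vs. full-view FBP reference.
cs  = [82.0, 95.5, 110.2, 74.8]
fbp = [80.1, 94.0, 108.9, 73.5]
print(round(pearson_r(cs, fbp), 4), round(bland_altman_bias(cs, fbp), 2))
```

A correlation near 1 with a small bias is the pattern the authors report for the 984-view and 328-view CS protocols against the 984-view FBP reference.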
Comparison of a single-view and a double-view aerosol optical depth retrieval algorithm
NASA Astrophysics Data System (ADS)
Henderson, Bradley G.; Chylek, Petr
2003-11-01
We compare the results of a single-view and a double-view aerosol optical depth (AOD) retrieval algorithm applied to image pairs acquired over NASA Stennis Space Center, Mississippi. The image data were acquired by the Department of Energy's (DOE) Multispectral Thermal Imager (MTI), a pushbroom satellite imager with 15 bands from the visible to the thermal infrared. MTI has the ability to acquire imagery in pairs in which the first image is a near-nadir view and the second image is off-nadir with a zenith angle of approximately 60°. A total of 15 image pairs were used in the analysis. For a given image pair, AOD retrieval is performed twice---once using a single-view algorithm applied to the near-nadir image, then again using a double-view algorithm. Errors for both retrievals are computed by comparing the results to AERONET AOD measurements obtained at the same time and place. The single-view algorithm showed an RMS error about the mean of 0.076 in AOD units, whereas the double-view algorithm showed a modest improvement with an RMS error of 0.06. The single-view errors show a positive bias which is presumed to be a result of the empirical relationship used to determine ground reflectance in the visible. A plot of AOD error of the double-view algorithm versus time shows a noticeable trend which is interpreted to be a calibration drift. When this trend is removed, the RMS error of the double-view algorithm drops to 0.030. The single-view algorithm qualitatively appears to perform better during the spring and summer whereas the double-view algorithm seems to be less sensitive to season.
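The drop in RMS error after removing the calibration-drift trend can be illustrated with a simple linear detrend; the time stamps and error values below are hypothetical stand-ins for the MTI image-pair series:

```python
def rms_after_linear_detrend(t, err):
    """RMS of retrieval errors after removing a straight-line trend in time,
    mimicking the removal of the presumed calibration drift."""
    n = len(t)
    mt, me = sum(t) / n, sum(err) / n
    slope = (sum((a - mt) * (b - me) for a, b in zip(t, err))
             / sum((a - mt) ** 2 for a in t))
    resid = [b - (me + slope * (a - mt)) for a, b in zip(t, err)]
    return (sum(r * r for r in resid) / n) ** 0.5

# Hypothetical AOD errors that drift linearly over acquisition time:
t = [0, 1, 2, 3, 4]
err = [0.00, 0.01, 0.02, 0.03, 0.04]
print(rms_after_linear_detrend(t, err))  # near zero: the drift explains the error
```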
Slater, Amy; Varsani, Neesha; Diedrichs, Phillippa C
2017-09-01
This study experimentally examined the impact of exposure to fitspiration images and self-compassion quotes on social media on young women's body satisfaction, body appreciation, self-compassion, and negative mood. Female undergraduate students (N=160) were randomly assigned to view either Instagram images of fitspiration, self-compassion quotes, a combination of both, or appearance-neutral images. Results showed no differences between viewing fitspiration images compared to viewing neutral images, except for poorer self-compassion among those who viewed fitspiration images. However, women who viewed self-compassion quotes showed greater body satisfaction, body appreciation, self-compassion, and reduced negative mood compared to women who viewed neutral images. Further, viewing a combination of fitspiration images and self-compassion quotes led to positive outcomes compared to viewing only fitspiration images. Trait levels of thin-ideal internalisation moderated some effects. The findings suggest that self-compassion might offer a novel avenue for attenuating the negative impact of social media on women's body satisfaction. Copyright © 2017 Elsevier Ltd. All rights reserved.
Jamaludin, Juliza; Rahim, Ruzairi Abdul; Fazul Rahiman, Mohd Hafiz; Mohd Rohani, Jemmy
2018-04-01
Optical tomography (OPT) is a method to capture a cross-sectional image based on the data obtained by sensors distributed around the periphery of the analyzed system. This system is based on the measurement of the final light attenuation or absorption of radiation after crossing the measured objects. The number of sensor views affects the results of image reconstruction, where a high number of sensor views per projection gives a high image quality. This research presents an application of a charge-coupled device linear sensor and a laser diode in an OPT system. Experiments in detecting solid and transparent objects in crystal clear water were conducted. Two sensor-view counts, 160 and 320 views, are evaluated in this research for reconstructing the images. The image reconstruction algorithm used was the filtered linear back projection algorithm. Comparison of the simulation and experimental image results shows that 320 image views give a smaller area error than 160 views, suggesting that a higher number of views yields a higher-resolution image reconstruction.
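The back projection step at the core of such reconstructions can be sketched minimally. This is a bare-bones unfiltered version on a tiny grid, standing in for the filtered linear back projection named in the abstract; the geometry (parallel rays, nearest-detector lookup) is an assumption for illustration:

```python
import numpy as np

def linear_back_projection(sinogram, angles, size):
    """Minimal unfiltered back projection onto a square grid; a simplified
    stand-in for the filtered linear back projection algorithm."""
    ys, xs = np.mgrid[:size, :size] - (size - 1) / 2.0
    recon = np.zeros((size, size))
    for proj, theta in zip(sinogram, angles):
        t = xs * np.cos(theta) + ys * np.sin(theta)  # detector coordinate per pixel
        idx = np.clip(np.round(t + (len(proj) - 1) / 2.0).astype(int),
                      0, len(proj) - 1)
        recon += np.asarray(proj, dtype=float)[idx]
    return recon / len(angles)

# Two orthogonal views of a single bright pixel at the centre:
recon = linear_back_projection([[0, 1, 0], [0, 1, 0]], [0.0, np.pi / 2], 3)
print(recon)  # the two back-projected strips intersect at the centre
```

More views sharpen the intersection, which is the intuition behind 320 views outperforming 160 in the study.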
NOAA Photo Library - NOAA's Ark/Animals Album
Synthesized view comparison method for no-reference 3D image quality assessment
NASA Astrophysics Data System (ADS)
Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun
2018-04-01
We develop a no-reference image quality assessment metric to evaluate the quality of synthesized view rendered from the Multi-view Video plus Depth (MVD) format. Our metric is named Synthesized View Comparison (SVC), which is designed for real-time quality monitoring at the receiver side in a 3D-TV system. The metric utilizes the virtual views in the middle which are warped from left and right views by Depth-image-based rendering algorithm (DIBR), and compares the difference between the virtual views rendered from different cameras by Structural SIMilarity (SSIM), a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for the synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.
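The SSIM comparison at the heart of the SVC metric can be sketched with a single-window (global) SSIM; real implementations use a sliding Gaussian window, so this is a simplification, and the "virtual view" arrays below are hypothetical:

```python
def global_ssim(x, y, data_range=255.0):
    """Single-window SSIM over two equally sized grayscale images (flat lists);
    a simplification of the windowed SSIM the SVC metric relies on."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

# SVC idea in miniature: compare two middle views warped (by DIBR) from the
# left and right cameras; high similarity suggests a well-synthesized view.
virtual_from_left  = [10, 20, 30, 40]
virtual_from_right = [10, 20, 30, 40]
print(global_ssim(virtual_from_left, virtual_from_right))  # identical -> 1.0
```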
Partially-overlapped viewing zone based integral imaging system with super wide viewing angle.
Xiong, Zhao-Long; Wang, Qiong-Hua; Li, Shu-Li; Deng, Huan; Ji, Chao-Chao
2014-09-22
In this paper, we analyze the relationship between the viewer and the viewing zones of an integral imaging (II) system and present a partially-overlapped viewing zone (POVZ) based integral imaging system with a super wide viewing angle. In the proposed system, the viewing angle can be wider than that of the conventional tracking-based II system. In addition, the POVZ eliminates the flipping and time delay of the 3D scene as well. The proposed II system has a super wide viewing angle of 120°, about twice as wide as that of the conventional system, without the flipping effect.
A Method to Recognize Anatomical Site and Image Acquisition View in X-ray Images.
Chang, Xiao; Mazur, Thomas; Li, H Harold; Yang, Deshan
2017-12-01
A method was developed to recognize anatomical site and image acquisition view automatically in 2D X-ray images that are used in image-guided radiation therapy. The purpose is to enable site and view dependent automation and optimization in the image processing tasks including 2D-2D image registration, 2D image contrast enhancement, and independent treatment site confirmation. The X-ray images for 180 patients of six disease sites (the brain, head-neck, breast, lung, abdomen, and pelvis) were included in this study with 30 patients per site and two images of orthogonal views per patient. A hierarchical multiclass recognition model was developed to recognize general site first and then specific site. Each node of the hierarchical model recognized the images using a feature extraction step based on principal component analysis followed by a binary classification step based on support vector machine. Given two images in known orthogonal views, the site recognition model achieved a 99% average F1 score across the six sites. If the views were unknown in the images, the average F1 score was 97%. If only one image was taken either with or without view information, the average F1 score was 94%. The accuracy of the site-specific view recognition models was 100%.
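The routing logic of such a hierarchical (binary-tree) recognition model can be sketched in a few lines. The per-node classifiers below are toy lambdas standing in for the paper's PCA-feature extraction plus SVM step, and the dictionary keys are hypothetical:

```python
class Node:
    """One node of a hierarchical (binary-tree) recognition model.
    `classify` returns 0 (go left) or 1 (go right); in the paper this
    decision comes from a PCA projection followed by an SVM."""
    def __init__(self, classify, left, right):
        self.classify, self.left, self.right = classify, left, right

def recognize(node, image):
    """Route an image down the tree until a leaf (a site label) is reached."""
    while isinstance(node, Node):
        node = node.left if node.classify(image) == 0 else node.right
    return node

# Toy two-level tree: first split head vs. torso, then refine to a site.
tree = Node(lambda img: 0 if img["region"] == "head" else 1,
            Node(lambda img: 0 if img["fine"] == "brain" else 1,
                 "brain", "head-neck"),
            Node(lambda img: 0 if img["fine"] == "lung" else 1,
                 "lung", "pelvis"))
print(recognize(tree, {"region": "head", "fine": "brain"}))  # -> brain
```

Grouping sites from general to specific this way means each node only has to solve a binary problem, which is what makes the PCA+SVM building block sufficient.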
The power of Kawaii: viewing cute images promotes a careful behavior and narrows attentional focus.
Nittono, Hiroshi; Fukushima, Michiko; Yano, Akihiro; Moriya, Hiroki
2012-01-01
Kawaii (a Japanese word meaning "cute") things are popular because they produce positive feelings. However, their effect on behavior remains unclear. In this study, three experiments were conducted to examine the effects of viewing cute images on subsequent task performance. In the first experiment, university students performed a fine motor dexterity task before and after viewing images of baby or adult animals. Performance indexed by the number of successful trials increased after viewing cute images (puppies and kittens; M ± SE=43.9 ± 10.3% improvement) more than after viewing images that were less cute (dogs and cats; 11.9 ± 5.5% improvement). In the second experiment, this finding was replicated by using a non-motor visual search task. Performance improved more after viewing cute images (15.7 ± 2.2% improvement) than after viewing less cute images (1.4 ± 2.1% improvement). Viewing images of pleasant foods was ineffective in improving performance (1.2 ± 2.1%). In the third experiment, participants performed a global-local letter task after viewing images of baby animals, adult animals, and neutral objects. In general, global features were processed faster than local features. However, this global precedence effect was reduced after viewing cute images. Results show that participants performed tasks requiring focused attention more carefully after viewing cute images. This is interpreted as the result of a narrowed attentional focus induced by the cuteness-triggered positive emotion that is associated with approach motivation and the tendency toward systematic processing. For future applications, cute objects may be used as an emotion elicitor to induce careful behavioral tendencies in specific situations, such as driving and office work.
Omniview motionless camera orientation system
NASA Technical Reports Server (NTRS)
Zimmermann, Steven D. (Inventor); Martin, H. Lee (Inventor)
1999-01-01
A device for omnidirectional image viewing providing pan-and-tilt orientation, rotation, and magnification within a hemispherical field-of-view that utilizes no moving parts. The imaging device is based on the fact that the image from a fisheye lens, which produces a circular image of the entire hemispherical field-of-view, can be mathematically corrected using high-speed electronic circuitry. More specifically, an incoming fisheye image from any image acquisition source is captured in memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video image signal for viewing, recording, or analysis. As a result, this device can accomplish the functions of pan, tilt, rotation, and zoom throughout a hemispherical field-of-view without the need for any mechanical mechanisms. The preferred embodiment of the image transformation device can provide corrected images at real-time rates, compatible with standard video equipment. The device can be used for any application where a conventional pan-and-tilt or orientation mechanism might be considered including inspection, monitoring, surveillance, and target acquisition.
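The mathematical correction rests on knowing where each viewing direction lands in the circular fisheye image. A sketch of that mapping, assuming an equidistant lens model (radius proportional to the angle from the optical axis; the patent itself does not fix a lens model):

```python
import math

def fisheye_pixel(pan, tilt, r_max):
    """Map a viewing direction (pan, tilt in radians, tilt measured up from
    the horizon) to fisheye image coordinates, assuming an equidistant
    projection for an upward-pointing hemispherical lens."""
    theta = math.pi / 2 - tilt           # angle from the optical axis (zenith)
    r = r_max * theta / (math.pi / 2)    # equidistant: r proportional to theta
    return (r * math.cos(pan), r * math.sin(pan))

print(fisheye_pixel(0.0, math.pi / 2, 100.0))  # zenith -> image centre
print(fisheye_pixel(0.0, 0.0, 100.0))          # horizon -> rim of the circle
```

Inverting this mapping per output pixel is what lets the device pan, tilt, and zoom with no moving parts.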
X-ray cargo container inspection system with few-view projection imaging
NASA Astrophysics Data System (ADS)
Duan, Xinhui; Cheng, Jianping; Zhang, Li; Xing, Yuxiang; Chen, Zhiqiang; Zhao, Ziran
2009-01-01
An X-ray cargo inspection system with few-view projection imaging is developed for detecting contraband in air containers. This paper describes this developing inspection system, including its configuration and the process of inspection using three imaging modalities: digital radiography (DR), few-view imaging and computed tomography (CT). The few-view imaging can provide 3D images with a much faster scanning speed than CT and greatly helps to quickly locate suspicious cargo in a container. An algorithm to reconstruct tomographic images from severely sparse projection data of few-view imaging is discussed. A cooperative work manner of the three modalities is presented to make the inspection more convenient and effective. Numerous experiments of performance tests and modality comparisons were performed on our system for inspecting air containers. Results demonstrate the effectiveness of our methods and the implementation of few-view imaging in practical inspection systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, X; Mazur, T; Yang, D
Purpose: To investigate an approach of automatically recognizing anatomical sites and imaging views (the orientation of the image acquisition) in 2D X-ray images. Methods: A hierarchical (binary tree) multiclass recognition model was developed to recognize the treatment sites and views in x-ray images. From top to bottom of the tree, the treatment sites are grouped hierarchically from more general to more specific. Each node in the hierarchical model was designed to assign images to one of two categories of anatomical sites. The binary image classification function of each node in the hierarchical model is implemented by using a PCA transformation and a support vector machine (SVM) model. The optimal PCA transformation matrices and SVM models are obtained by learning from a set of sample images. Alternatives of the hierarchical model were developed to support three scenarios of site recognition that may happen in radiotherapy clinics, including two or one X-ray images with or without view information. The performance of the approach was tested with images of 120 patients from six treatment sites – brain, head-neck, breast, lung, abdomen and pelvis – with 20 patients per site and two views (AP and RT) per patient. Results: Given two images in known orthogonal views (AP and RT), the hierarchical model achieved a 99% average F1 score to recognize the six sites. Site-specific view recognition models achieved 100% accuracy. The computation time to process a new patient case (preprocessing, site and view recognition) is 0.02 seconds. Conclusion: The proposed hierarchical model of site and view recognition is effective and computationally efficient. It could be useful to automatically and independently confirm the treatment sites and views in daily setup x-ray 2D images. It could also be applied to guide subsequent image processing tasks, e.g. site and view dependent contrast enhancement and image registration.
The senior author received research grants from ViewRay Inc. and Varian Medical Systems.
Fiducial marker for correlating images
Miller, Lisa Marie [Rocky Point, NY; Smith, Randy J [Wading River, NY; Warren, John B [Port Jefferson, NY; Elliott, Donald [Hampton Bays, NY
2011-06-21
The invention relates to a fiducial marker having a marking grid that is used to correlate and view images produced by different imaging modalities or different imaging and viewing modalities. More specifically, the invention relates to the fiducial marking grid that has a grid pattern for producing either a viewing image and/or a first analytical image that can be overlaid with at least one second analytical image in order to view a light path or to image different imaging modalities. Depending on the analysis, the grid pattern has a single layer of a certain thickness or at least two layers of certain thicknesses. In either case, the grid pattern is imageable by each imaging or viewing modality used in the analysis. Further, when viewing a light path, the light path of the analytical modality cannot be visualized by the viewing modality (e.g., a light microscope objective). By correlating these images, the ability to analyze a thin sample that is, for example, biological in nature but yet contains trace metal ions is enhanced. Specifically, it is desired to analyze both the organic matter of the biological sample and the trace metal ions contained within the biological sample without adding or using extrinsic labels or stains.
NOAA Photo Library - Navigating the Collection
Stereoscopic wide field of view imaging system
NASA Technical Reports Server (NTRS)
Prechtl, Eric F. (Inventor); Sedwick, Raymond J. (Inventor); Jonas, Eric M. (Inventor)
2011-01-01
A stereoscopic imaging system incorporates a plurality of imaging devices or cameras to generate a high resolution, wide field of view image database from which images can be combined in real time to provide wide field of view or panoramic or omni-directional still or video images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Huiqiao; Yang, Yi; Tang, Xiangyang
2015-06-15
Purpose: Optimization-based reconstruction has been proposed and investigated for reconstructing CT images from sparse views, such that the radiation dose can be substantially reduced while maintaining acceptable image quality. The investigation has so far focused on reconstruction from evenly distributed sparse views. Recognizing the clinical situations wherein only unevenly sparse views are available, e.g., image guided radiation therapy, CT perfusion and multi-cycle cardiovascular imaging, we investigate the performance of optimization-based image reconstruction from unevenly sparse projection views in this work. Methods: The investigation is carried out using the FORBILD and anthropomorphic head phantoms. In the study, 82 views, which are evenly sorted out from a full (360°) axial CT scan consisting of 984 views, form sub-scan I. Another 82 views are sorted out in a similar manner to form sub-scan II. As such, a CT scan with sparse (164) views at a 1:6 ratio is formed. By shifting the two sub-scans relative to each other in view angulation, a CT scan with unevenly distributed sparse (164) views at a 1:6 ratio is formed. An optimization-based method is implemented to reconstruct images from the unevenly distributed views. By taking the FBP reconstruction from the full scan (984 views) as the reference, the root mean square (RMS) deviation between the reference and the optimization-based reconstruction is used to evaluate the performance quantitatively. Results: On visual inspection, the optimization-based method outperforms the FBP substantially in the reconstruction from unevenly distributed views, which is quantitatively verified by the RMS gauged globally and in ROIs in both the FORBILD and anthropomorphic head phantoms. The RMS increases with increasing severity of the uneven angular distribution, especially in the case of the anthropomorphic head phantom.
Conclusion: The optimization-based image reconstruction can save radiation dose up to 12-fold while providing acceptable image quality for advanced clinical applications wherein only unevenly distributed sparse views are available. Research Grants: W81XWH-12-1-0138 (DoD), Sinovision Technologies.
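The evenly and unevenly distributed 164-view sets described in the Methods can be generated by interleaving two shifted sub-scans; a sketch of the index bookkeeping (not the reconstruction itself), with the shift amounts chosen for illustration:

```python
def sub_scan(n_full=984, n_views=82, shift=0):
    """One evenly spaced sub-scan: n_views view indices picked from an
    n_full-view axial scan, shifted by `shift` views in angulation."""
    step = n_full // n_views                                # 984 // 82 = 12
    return [(i * step + shift) % n_full for i in range(n_views)]

even   = sorted(sub_scan(shift=0) + sub_scan(shift=6))      # uniform 1:6 sampling
uneven = sorted(sub_scan(shift=0) + sub_scan(shift=2))      # sub-scan II shifted
gaps = {b - a for a, b in zip(uneven, uneven[1:])}
print(len(even), gaps)  # 164 views; uneven gaps alternate between 2 and 10
```

Shifting sub-scan II by exactly half the step recovers the even distribution; any other shift produces the alternating short/long angular gaps the study stresses the reconstruction with.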
Improved integral images compression based on multi-view extraction
NASA Astrophysics Data System (ADS)
Dricot, Antoine; Jung, Joel; Cagnazzo, Marco; Pesquet, Béatrice; Dufaux, Frédéric
2016-09-01
Integral imaging is a technology based on plenoptic photography that captures and samples the light-field of a scene through a micro-lens array. It provides views of the scene from several angles and therefore is foreseen as a key technology for future immersive video applications. However, integral images have a large resolution and a structure based on micro-images which is challenging to encode. A compression scheme for integral images based on view extraction has previously been proposed, with average BD-rate gains of 15.7% (up to 31.3%) reported over HEVC when using one single extracted view. As the efficiency of the scheme depends on a tradeoff between the bitrate required to encode the view and the quality of the image reconstructed from the view, it is proposed to increase the number of extracted views. Several configurations are tested with different positions and different number of extracted views. Compression efficiency is increased with average BD-rate gains of 22.2% (up to 31.1%) reported over the HEVC anchor, with a realistic runtime increase.
Viewing zone duplication of multi-projection 3D display system using uniaxial crystal.
Lee, Chang-Kun; Park, Soon-Gi; Moon, Seokil; Lee, Byoungho
2016-04-18
We propose a novel multiplexing technique for increasing the viewing zone of a multi-view, multi-projection 3D display system by employing double refraction in a uniaxial crystal. When linearly polarized images from the projector pass through the uniaxial crystal, two optical paths are possible depending on the polarization state of the image. The optical path of the image can therefore be switched, shifting the viewing zone laterally. Polarization modulation of the image from a single projection unit thus generates two viewing zones at different positions. To realize full-color images in each viewing zone, a polarization-based temporal multiplexing technique is adopted, using a conventional liquid crystal (LC) polarization switching device. In experiments, a prototype ten-view multi-projection 3D display system presenting full-color view images is implemented by combining five laser scanning projectors, an optically clear calcite (CaCO3) crystal, and an LC polarization rotator. For each time sequence of the temporal multiplexing, the luminance distribution of the proposed system is measured and analyzed.
Ash from Kilauea Eruption Viewed by NASA's MISR
Atmospheric Science Data Center
2018-06-07
The Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite captured this view of the island as it passed overhead.
View generation for 3D-TV using image reconstruction from irregularly spaced samples
NASA Astrophysics Data System (ADS)
Vázquez, Carlos
2007-02-01
Three-dimensional television (3D-TV) will become the next big step in the development of advanced TV systems. One of the major challenges for the deployment of 3D-TV systems is the diversity of display technologies and the high cost of capturing multi-view content. Depth image-based rendering (DIBR) has been identified as a key technology for the generation of new views for stereoscopic and multi-view displays from a small number of views captured and transmitted. We propose a disparity compensation method for DIBR that does not require spatial interpolation of the disparity map. We use a forward-mapping disparity compensation with real precision. The proposed method deals with the irregularly sampled image resulting from this disparity compensation process by applying a re-sampling algorithm based on a bi-cubic spline function space that produces smooth images. The fact that no approximation is made on the position of the samples implies that geometrical distortions in the final images due to approximations in sample positions are minimized. We also paid attention to the occlusion problem. Our algorithm detects the occluded regions in the newly generated images and uses simple depth-aware inpainting techniques to fill the gaps created by newly exposed areas. We tested the proposed method in the context of generation of views needed for viewing on SynthaGram™ auto-stereoscopic displays. We used as input either a 2D image plus a depth map or a stereoscopic pair with the associated disparity map. Our results show that this technique provides high quality images to be viewed on different display technologies such as stereoscopic viewing with shutter glasses (two views) and lenticular auto-stereoscopic displays (nine views).
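Forward-mapping disparity compensation followed by resampling of the resulting irregular samples can be illustrated in one dimension; a hedged sketch using linear resampling in place of the paper's bi-cubic spline space (function name illustrative):

```python
import numpy as np

def forward_warp_row(row, disparity, width):
    """Forward-map one scanline's pixels to real-valued target positions
    (no rounding of sample positions), then resample the irregularly
    spaced samples back onto the integer pixel grid."""
    row = np.asarray(row, float)
    x = np.arange(len(row), dtype=float)
    xw = x + np.asarray(disparity, float)   # real-precision positions
    order = np.argsort(xw)                  # interp needs ascending x
    grid = np.arange(width, dtype=float)
    return np.interp(grid, xw[order], row[order])
```

A constant disparity simply shifts the scanline; a spatially varying disparity produces the irregular sample spacing that the resampling step must handle.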
View-interpolation of sparsely sampled sinogram using convolutional neural network
NASA Astrophysics Data System (ADS)
Lee, Hoyeon; Lee, Jongha; Cho, Suengryong
2017-02-01
Sparse-view sampling and its associated iterative image reconstruction in computed tomography have been actively investigated. The sparse-view CT technique is a viable option for low-dose CT, particularly in cone-beam CT (CBCT) applications, when paired with advanced iterative image reconstruction, although varying degrees of image artifacts remain. One of the artifacts that may occur in sparse-view CT is streaking in the reconstructed images. An alternative approach to sparse-view CT imaging uses interpolation to fill in the missing view data and then reconstructs the image with an analytic reconstruction algorithm. In this study, we developed an interpolation method using a convolutional neural network (CNN), one of the most widely used deep-learning methods, to estimate the missing projection data, and compared its performance with that of other interpolation techniques.
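The classical fill-in baseline such a CNN is compared against can be sketched as linear interpolation along the angular (view) axis of the sinogram; a minimal NumPy illustration (a simplification of the interpolation techniques evaluated in the paper):

```python
import numpy as np

def interpolate_missing_views(sparse_sino, keep_idx, n_views):
    """Fill missing projection views of a sinogram by linear
    interpolation along the view axis. `sparse_sino` holds the measured
    views (rows) at the sorted view indices `keep_idx`."""
    sparse_sino = np.asarray(sparse_sino, float)
    full = np.empty((n_views, sparse_sino.shape[1]))
    angles = np.arange(n_views)
    for det in range(sparse_sino.shape[1]):      # per detector channel
        full[:, det] = np.interp(angles, keep_idx, sparse_sino[:, det])
    return full
```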
Wentink, M; Jakimowicz, J J; Vos, L M; Meijer, D W; Wieringa, P A
2002-08-01
Compared to open surgery, minimally invasive surgery (MIS) relies heavily on advanced technology, such as endoscopic viewing systems and innovative instruments. The aim of the study was to objectively compare three technologically advanced laparoscopic viewing systems with the standard viewing system currently used in most Dutch hospitals. We evaluated the following advanced laparoscopic viewing systems: a Thin Film Transistor (TFT) display, a stereo endoscope, and an image projection display. The standard viewing system comprised a monocular endoscope and a high-resolution monitor. Task completion time served as the measure of performance. Eight surgeons with laparoscopic experience participated in the experiment. The average task time was significantly greater (p < 0.05) with the stereo viewing system than with the standard viewing system. The average task times with the TFT display and the image projection display did not differ significantly from the standard viewing system. Although the stereo viewing system promises improved depth perception and the TFT and image projection displays are supposed to improve hand-eye coordination, none of these systems provided better task performance than the standard viewing system in this pelvi-trainer experiment.
A 3D Freehand Ultrasound System for Multi-view Reconstructions from Sparse 2D Scanning Planes
2011-01-01
Background A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, the users have very limited control over the geometry of the 2D scanning planes. Methods We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse-to-fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Results Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single-view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for a calibrated phantom.
In clinical in-vivo cardiac experiments, volume estimates of the left ventricle from multi-view reconstructions are found to be in better agreement with clinical measures than those from single-view reconstructions. Conclusions Multi-view 3D reconstruction from sparse 2D freehand B-mode images leads to more accurate volume quantification compared to single-view systems. The flexibility and low cost of the proposed system allow for fine control of the image acquisition planes for optimal 3D reconstructions from multiple views. PMID:21251284
A 3D freehand ultrasound system for multi-view reconstructions from sparse 2D scanning planes.
Yu, Honggang; Pattichis, Marios S; Agurto, Carla; Beth Goens, M
2011-01-20
A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, the users have very limited control over the geometry of the 2D scanning planes. We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow for 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimum edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse-to-fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single-view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for a calibrated phantom.
In clinical in-vivo cardiac experiments, volume estimates of the left ventricle from multi-view reconstructions are found to be in better agreement with clinical measures than those from single-view reconstructions. Multi-view 3D reconstruction from sparse 2D freehand B-mode images leads to more accurate volume quantification compared to single-view systems. The flexibility and low cost of the proposed system allow for fine control of the image acquisition planes for optimal 3D reconstructions from multiple views.
Presence and preferable viewing conditions when using an ultrahigh-definition large-screen display
NASA Astrophysics Data System (ADS)
Masaoka, Kenichiro; Emoto, Masaki; Sugawara, Masayuki; Okano, Fumio
2005-01-01
We are investigating psychological aspects to obtain guidelines for the design of TVs aimed at future high-presence broadcasting. In this study, we performed subjective assessment tests to examine the psychological effects of different combinations of viewing conditions obtained by varying the viewing distance, screen size, and picture resolution (between 1000 and 4000 scan lines). The evaluation images were presented in the form of two-minute programs comprising a sequence of 10 still images, and the test subjects were asked to complete a questionnaire consisting of 20 items relating to psychological effects such as "presence", "adverse effects", and "preferability". It was found that the test subjects reported a higher feeling of presence for 1000-line images when viewed at around a distance of 1.5H (less than the standard viewing distance of 3H, which is recommended as a viewing distance for subjective evaluation of image quality for HDTV), and reported a higher feeling of presence for 4000-line images than for 1000-line images. The adverse effects such as "difficulty of viewing" did not differ significantly with resolution, but were evaluated to be lower as the viewing distance increased and tended to saturate at viewing distances above 2H. The viewing conditions were evaluated as being more preferable as the screen size increased, showing that it is possible to broadcast comfortable high-presence pictures using high-resolution large-screen displays.
Expansion of the visual angle of a car rear-view image via an image mosaic algorithm
NASA Astrophysics Data System (ADS)
Wu, Zhuangwen; Zhu, Liangrong; Sun, Xincheng
2015-05-01
The rear-view image system is one of the active safety devices in cars and is widely applied in all types of vehicles and traffic safety areas. However, studies by both domestic and foreign researchers were based on a single image capture device while reversing, so a blind area still remained for drivers. Even when multiple cameras were used to expand the visual angle of the car's rear-view image, the blind area remained because the different source images were not mosaicked together. To acquire an expanded visual angle of a car rear-view image, two charge-coupled device cameras with optical axes angled at 30 deg were mounted below the left and right fenders of a car in three light conditions (sunny outdoors, cloudy outdoors, and an underground garage) to capture rear-view heterologous images of the car. These rear-view heterologous images were then rapidly registered using the scale invariant feature transform (SIFT) algorithm. Combined with the random sample consensus (RANSAC) algorithm, the two heterologous images were finally mosaicked using the linear weighted gradated in-and-out fusion algorithm, and a seamless, visual-angle-expanded rear-view image was acquired. The four-index test results showed that the algorithms can mosaic rear-view images well in the underground garage condition, where the average rate of correct matching was the lowest among the three conditions. The rear-view image mosaic algorithm presented had the best information preservation, the shortest computation time, and the most complete preservation of image detail features compared to the mean value method (MVM) and segmental fusion method (SFM); it also performed better in real time and provided more comprehensive image details than MVM and SFM. In addition, it had the most complete image preservation from source images among the three algorithms.
The method introduced by this paper provided the basis for researching the expansion of the visual angle of a car rear-view image in all-weather conditions.
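The final fusion step, which blends the two registered images across their overlap with linearly graded weights, can be sketched as follows (a horizontal-overlap simplification; the actual pipeline first aligns the images with SIFT and RANSAC, and the function name is illustrative):

```python
import numpy as np

def gradated_in_out_blend(left, right, overlap):
    """Mosaic two aligned, equal-height image strips: copy the
    non-overlapping parts, and in the overlap fade the left image out
    while fading the right image in (linear weighted fusion)."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    h, wl = left.shape
    wr = right.shape[1]
    out = np.zeros((h, wl + wr - overlap))
    out[:, :wl - overlap] = left[:, :wl - overlap]
    out[:, wl:] = right[:, overlap:]
    w = np.linspace(1.0, 0.0, overlap)        # left weight fades out
    out[:, wl - overlap:wl] = (w * left[:, wl - overlap:]
                               + (1 - w) * right[:, :overlap])
    return out
```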
An electronic pan/tilt/zoom camera system
NASA Technical Reports Server (NTRS)
Zimmermann, Steve; Martin, H. Lee
1991-01-01
A camera system for omnidirectional image viewing applications was developed that provides pan, tilt, zoom, and rotational orientation within a hemispherical field of view (FOV) using no moving parts. The imaging device is based on the fact that the circular image of an entire hemispherical FOV produced by a fisheye lens can be mathematically corrected using high-speed electronic circuitry. An incoming fisheye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video signal for viewing, recording, or analysis. As a result, this device can accomplish the functions of pan, tilt, rotation, and zoom throughout a hemispherical FOV without the need for any mechanical mechanisms. A programmable transformation processor provides flexible control over viewing situations. Multiple images, each with different image magnifications and pan, tilt, and rotation parameters, can be obtained from a single camera. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment.
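The fisheye-to-perspective correction can be illustrated with a simple equidistant projection model (radius r = f·θ); this is an illustrative textbook model under stated assumptions, not the device's actual patented transform:

```python
import numpy as np

def fisheye_pixel(pan, tilt, f, cx, cy):
    """Map a requested viewing direction (pan/tilt in radians) to the
    pixel in the circular fisheye image that must be sampled, assuming
    an equidistant fisheye projection r = f * theta about center (cx, cy)."""
    # Unit ray for the view direction; z is the lens optical axis.
    d = np.array([np.sin(pan) * np.cos(tilt),
                  np.sin(tilt),
                  np.cos(pan) * np.cos(tilt)])
    theta = np.arccos(np.clip(d[2], -1.0, 1.0))  # angle off the axis
    phi = np.arctan2(d[1], d[0])                 # azimuth in image plane
    r = f * theta                                # equidistant radius
    return cx + r * np.cos(phi), cy + r * np.sin(phi)
```

Dewarping a whole view amounts to evaluating this mapping for the ray of every output pixel and sampling the fisheye image there.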
High throughput analysis of samples in flowing liquid
Ambrose, W. Patrick; Grace, W. Kevin; Goodwin, Peter M.; Jett, James H.; Orden, Alan Van; Keller, Richard A.
2001-01-01
Apparatus and method enable imaging multiple fluorescent sample particles in a single flow channel. A flow channel defines a flow direction for samples in a flow stream and has a viewing plane perpendicular to the flow direction. A laser beam is formed as a ribbon having a width effective to cover the viewing plane. Imaging optics are arranged to view the viewing plane to form an image of the fluorescent sample particles in the flow stream, and a camera records the image formed by the imaging optics.
Multi-viewer tracking integral imaging system and its viewing zone analysis.
Park, Gilbae; Jung, Jae-Hyun; Hong, Keehoon; Kim, Yunhee; Kim, Young-Hoon; Min, Sung-Wook; Lee, Byoungho
2009-09-28
We propose a multi-viewer tracking integral imaging system for viewing angle and viewing zone improvement. In the tracking integral imaging system, the pickup angles of each elemental lens in the lens array are determined by the positions of the viewers, which means the elemental image can be made for each viewer to provide a wider viewing angle and larger viewing zone. Our tracking integral imaging system is implemented with an infrared camera and infrared light emitting diodes, which can track the viewers' exact positions robustly. For multiple viewers to watch integrated three-dimensional images in the tracking integral imaging system, it is necessary to formulate the relationship between the multiple viewers' positions and the elemental images. We analyzed this relationship and the conditions for multiple viewers, and verified them by implementing a two-viewer tracking integral imaging system.
Jiao, Leizi; Dong, Daming; Zhao, Xiande; Han, Pengcheng
2016-12-01
In this study, we propose an animal surface temperature measurement method based on a Kinect sensor and an infrared thermal imager to facilitate the screening of animals with febrile diseases. Because of the random motion and small surface temperature variation of animals, the influence of the angle of view on temperature measurement is significant. The proposed method compensates for the temperature measurement error caused by the angle of view. First, we analyzed the relationship between measured temperature and angle of view and established a mathematical model for compensating the influence of the angle of view, with a correlation coefficient above 0.99. Second, a fusion method for depth and infrared thermal images was established for synchronous image capture with the Kinect sensor and infrared thermal imager, and the angle of view of each pixel was calculated. According to the experimental results, without compensation, the temperature image measured at an angle of view of 74° to 76° differed by more than 2°C from that measured at 0°; after compensation, the difference was only 0.03-1.2°C. This method is applicable for real-time compensation of errors caused by the angle of view during temperature measurement with an infrared thermal imager.
Atmospheric Science Data Center
2013-04-16
Unique Views of a Shattered Ice Shelf: views of the breakup of the northern section of the Larsen B ice shelf are shown in this image pair from the Multi-angle Imaging SpectroRadiometer (MISR).
VIEW-Station software and its graphical user interface
NASA Astrophysics Data System (ADS)
Kawai, Tomoaki; Okazaki, Hiroshi; Tanaka, Koichiro; Tamura, Hideyuki
1992-04-01
VIEW-Station is a workstation-based image processing system which merges the state-of-the-art software environment of Unix with the computing power of a fast image processor. VIEW-Station has a hierarchical software architecture, which facilitates device independence when porting across various hardware configurations and provides extensibility in the development of application systems. The core image computing language is V-Sugar. V-Sugar provides a set of image-processing datatypes and allows image processing algorithms to be expressed simply, using a functional notation. VIEW-Station provides a hardware-independent window system extension called VIEW-Windows. In terms of GUI (graphical user interface), VIEW-Station has two notable aspects. One is to provide various types of GUI as visual environments for image processing execution: three interpreters, called μV-Sugar, VS-Shell, and VPL, are provided, and users may choose whichever they prefer based on their experience and tasks. The other notable aspect is to provide facilities to create GUIs for new applications on the VIEW-Station system: a set of widgets is available for the construction of task-oriented GUIs, and a GUI builder called VIEW-Kid was developed for WYSIWYG interactive interface design.
The utility of multiple synthesized views in the recognition of unfamiliar faces.
Jones, Scott P; Dwyer, Dominic M; Lewis, Michael B
2017-05-01
The ability to recognize an unfamiliar individual on the basis of prior exposure to a photograph is notoriously poor and prone to errors, but recognition accuracy is improved when multiple photographs are available. In applied situations, when only limited real images are available (e.g., from a mugshot or CCTV image), the generation of new images might provide a technological prosthesis for otherwise fallible human recognition. We report two experiments examining the effects of providing computer-generated additional views of a target face. In Experiment 1, provision of computer-generated views supported better target face recognition than exposure to the target image alone and equivalent performance to that for exposure of multiple photograph views. Experiment 2 replicated the advantage of providing generated views, but also indicated an advantage for multiple viewings of the single target photograph. These results strengthen the claim that identifying a target face can be improved by providing multiple synthesized views based on a single target image. In addition, our results suggest that the degree of advantage provided by synthesized views may be affected by the quality of synthesized material.
How does c-view image quality compare with conventional 2D FFDM?
Nelson, Jeffrey S; Wells, Jered R; Baker, Jay A; Samei, Ehsan
2016-05-01
The FDA approved the use of digital breast tomosynthesis (DBT) in 2011 as an adjunct to 2D full field digital mammography (FFDM) with the constraint that all DBT acquisitions must be paired with a 2D image to assure adequate interpretative information is provided. Recently, manufacturers have developed methods to provide a synthesized 2D image generated from the DBT data with the hope of sparing patients the radiation exposure from the FFDM acquisition. While this much-needed alternative effectively reduces the total radiation burden, differences in image quality must also be considered. The goal of this study was to compare the intrinsic image quality of synthesized 2D c-view and 2D FFDM images in terms of resolution, contrast, and noise. Two phantoms were utilized in this study: the American College of Radiology mammography accreditation phantom (ACR phantom) and a novel 3D printed anthropomorphic breast phantom. Both phantoms were imaged using a Hologic Selenia Dimensions 3D system. Analysis of the ACR phantom includes both visual inspection and objective automated analysis using in-house software. Analysis of the 3D anthropomorphic phantom includes visual assessment of resolution and Fourier analysis of the noise. Using ACR-defined scoring criteria for the ACR phantom, the FFDM images scored statistically higher than c-view according to both the average observer and automated scores. In addition, between 50% and 70% of c-view images failed to meet the nominal minimum ACR accreditation requirements, primarily due to fiber breaks. Software analysis demonstrated that c-view provided enhanced visualization of medium and large microcalcification objects; however, the benefits diminished for smaller high contrast objects and all low contrast objects. Visual analysis of the anthropomorphic phantom showed a measurable loss of resolution in the c-view image (11 lp/mm FFDM, 5 lp/mm c-view) and loss in detection of small microcalcification objects.
Spectral analysis of the anthropomorphic phantom showed a higher total noise magnitude in the FFDM image compared with c-view. Whereas the FFDM image contained an approximately white noise texture, the c-view image exhibited marked noise reduction at mid and high frequencies with far less noise suppression at low frequencies, resulting in a mottled noise appearance. This analysis demonstrates many instances where c-view image quality differs from FFDM. Compared to FFDM, c-view offers a better depiction of objects of certain size and contrast, but provides poorer overall resolution and noise properties. Based on these findings, the utilization of c-view images in the clinical setting requires careful consideration, especially if considering the discontinuation of FFDM imaging. Not explicitly explored in this study is how the combination of DBT + c-view performs relative to DBT + FFDM or FFDM alone.
All-around viewing display system for group activity on life review therapy
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Okumura, Mitsuru
2009-10-01
This paper describes a 360-degree viewing display system that can be viewed from any direction. A conventional monitor is viewed from one direction; that is, the display has a narrow viewing angle, and observers cannot view the screen from the opposite side. To solve this problem, we developed a 360-degree viewing display for collaborative tasks at a round table. The developed system has a liquid crystal display screen mounted on a motorized 360-degree rotating table. The principle is very simple: the monitor screen rotates at a uniform speed, supplemented by optical techniques. Moreover, we have developed a floating 360-degree viewing display that can be viewed from any direction. This new viewing system has a display screen, a rotating table, and dual parabolic mirrors. To float only the image of the screen above the table, the rotating mechanism operates inside the parabolic mirrors. Because the dual parabolic mirrors generate a "mirage" image above the upper mirror, observers can view a floating 2D image on the virtual screen in front of them, and can view the monitor screen from any position around the round table.
Two Perspectives on Forest Fire
NASA Technical Reports Server (NTRS)
2002-01-01
Multi-angle Imaging Spectroradiometer (MISR) images of smoke plumes from wildfires in western Montana acquired on August 14, 2000. A portion of Flathead Lake is visible at the top, and the Bitterroot Range traverses the images. The left view is from MISR's vertical-viewing (nadir) camera. The right view is from the camera that looks forward at a steep angle (60 degrees). The smoke location and extent are far more visible when seen at this highly oblique angle. However, vegetation is much darker in the forward view. A brown burn scar is located nearly in the exact center of the nadir image, while in the high-angle view it is shrouded in smoke. Also visible in the center and upper right of the images, and more obvious in the clearer nadir view, are checkerboard patterns on the surface associated with land ownership boundaries and logging. Compare these images with the high resolution infrared imagery captured nearby by Landsat 7 half an hour earlier. Images by NASA/GSFC/JPL, MISR Science Team.
Multiple-viewing-zone integral imaging using a dynamic barrier array for three-dimensional displays.
Choi, Heejin; Min, Sung-Wook; Jung, Sungyong; Park, Jae-Hyeung; Lee, Byoungho
2003-04-21
In spite of many advantages of integral imaging, the viewing zone in which an observer can see three-dimensional images is limited within a narrow range. Here, we propose a novel method to increase the number of viewing zones by using a dynamic barrier array. We prove our idea by fabricating and locating the dynamic barrier array between a lens array and a display panel. By tilting the barrier array, it is possible to distribute images for each viewing zone. Thus, the number of viewing zones can be increased with an increment of the states of the barrier array tilt.
Kernel-aligned multi-view canonical correlation analysis for image recognition
NASA Astrophysics Data System (ADS)
Su, Shuzhi; Ge, Hongwei; Yuan, Yun-Hao
2016-09-01
Existing kernel-based correlation analysis methods mainly adopt a single kernel in each view. However, a single kernel is usually insufficient to characterize the nonlinear distribution information of a view. To solve this problem, we transform each original feature vector into a two-dimensional feature matrix by means of kernel alignment, and then propose a novel kernel-aligned multi-view canonical correlation analysis (KAMCCA) method on the basis of the feature matrices. Our proposed method can simultaneously employ multiple kernels to better capture the nonlinear distribution information of each view, so that the correlation features learned by KAMCCA have good discriminating power in real-world image recognition. Extensive experiments are designed on five real-world image datasets, including NIR face images, thermal face images, visible face images, handwritten digit images, and object images. Promising experimental results on these datasets demonstrate the effectiveness of our proposed method.
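KAMCCA builds on canonical correlation analysis; a minimal two-view linear CCA sketch (the classical building block that kernel and multi-view variants generalize; the small ridge term `reg` is an added numerical-stability assumption):

```python
import numpy as np

def cca(X, Y, reg=1e-8):
    """Classical two-view CCA. Rows of X and Y are paired samples.
    Returns the canonical correlations (singular values of the
    whitened cross-covariance)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X)
    Sxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / n

    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)          # symmetric eigendecomposition
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(M, compute_uv=False)
```

When one view is an invertible linear transform of the other, all canonical correlations approach 1.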
Streamlining emergent hand and wrist radiography with a modified four-view protocol.
Chou, Henry Y; Steenburg, Scott D; Dunkle, Jeffrey W; Gussick, Sean D; Petersen, Matthew J; Kohli, Marc D; Shen, Changyu; Lin, Hongbo
2016-08-01
This study aims to determine whether a modified four-view hand and wrist study performs comparably to the traditional seven views in the evaluation of acute hand and wrist fractures. This retrospective study was approved by the institutional review board with waiver of informed consent. Two hundred forty patients (50% male; ages 18-92 years) with unilateral three-view hand (posteroanterior, oblique, and lateral) and four-view wrist (posteroanterior, oblique, lateral, and ulnar deviation) radiographs obtained concurrently following trauma were included in this study. Four emergency radiologists interpreted the original seven images, with two radiologists independently evaluating each study. The patients' radiographs were then recombined into four-view series using the three hand images and the ulnar deviated wrist image. These were interpreted by the same radiologists following an 8-week delay. Kappa statistics were generated to measure inter-observer and inter-method agreement. Generalized linear mixed model analysis was performed between the seven- and four-view methods. Of the 480 reports generated in each of the seven- and four-view image sets, 142 (29.6%) of the seven-view and 126 (26.2%) of the four-view reports conveyed certain or suspected acute osseous findings. Average inter-observer kappa coefficients were 0.7845 and 0.8261 for the seven- and four-view protocols, respectively. The average inter-method kappa was 0.823. The odds ratio of diagnosing injury using the four-view compared to the seven-view algorithm was 0.69 (CI 0.45-1.06, P = 0.0873). The modified four-view hand and wrist radiographic series produces diagnostic results comparable to the traditional seven views for acute fracture evaluation.
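The kappa coefficients reported above are chance-corrected agreement measures; a minimal Cohen's-kappa sketch for two raters (data and function name illustrative):

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa between two raters scoring the same cases:
    observed agreement corrected for the agreement expected by chance."""
    assert len(r1) == len(r2) and len(r1) > 0
    n = len(r1)
    cats = set(r1) | set(r2)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed
    pe = sum((r1.count(c) / n) * (r2.count(c) / n)        # chance
             for c in cats)
    return (po - pe) / (1 - pe)
```

Perfect agreement yields kappa = 1; agreement no better than chance yields kappa = 0.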
Content metamorphosis in synthetic holography
NASA Astrophysics Data System (ADS)
Desbiens, Jacques
2013-02-01
A synthetic hologram is an optical system made of hundreds of images amalgamated in a structure of holographic cells. Each of these images represents a point of view on a three-dimensional space, which makes us consider synthetic holography a multiple-point-of-view perspective system. In the composition of a computer graphics scene for a synthetic hologram, the field of view of the holographic image can be divided into several viewing zones. We can attribute these divisions to any object or image feature independently and operate different transformations on image content. In computer-generated holography, we tend to consider content variations as a continuous animation, much like a short movie. However, by composing sequential variations of image features in relation with spatial divisions, we can build new narrative forms distinct from linear cinematographic narration. When observers move freely and change their viewing positions, they travel from one field-of-view division to another. In synthetic holography, metamorphoses of image content lie within the observer's path. In all imaging media, the transformation of image features in synchronisation with the observer's position is a rare occurrence. However, it is a predominant characteristic of synthetic holography. This paper describes some of my experimental works in the development of metamorphic holographic images.
Effects of task and image properties on visual-attention deployment in image-quality assessment
NASA Astrophysics Data System (ADS)
Alers, Hani; Redi, Judith; Liu, Hantao; Heynderickx, Ingrid
2015-03-01
It is important to understand how humans view images and how their behavior is affected by changes in the properties of the viewed images and the task they are given, particularly the task of scoring the image quality (IQ). This is a complex behavior that holds great importance for the field of image-quality research. This work builds upon 4 years of research work spanning three databases studying image-viewing behavior. Using eye-tracking equipment, it was possible to collect information on human viewing behavior of different kinds of stimuli and under different experimental settings. This work performs a cross-analysis on the results from all these databases using state-of-the-art similarity measures. The results strongly show that asking the viewers to score the IQ significantly changes their viewing behavior. Also muting the color saturation seems to affect the saliency of the images. However, a change in IQ was not consistently found to modify visual attention deployment, neither under free looking nor during scoring. These results are helpful in gaining a better understanding of image viewing behavior under different conditions. They also have important implications on work that collects subjective image-quality scores from human observers.
How does C-VIEW image quality compare with conventional 2D FFDM?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Jeffrey S., E-mail: nelson.jeffrey@duke.edu; Wells, Jered R.; Baker, Jay A.
Purpose: The FDA approved the use of digital breast tomosynthesis (DBT) in 2011 as an adjunct to 2D full field digital mammography (FFDM) with the constraint that all DBT acquisitions must be paired with a 2D image to assure adequate interpretative information is provided. Recently, manufacturers have developed methods to provide a synthesized 2D image generated from the DBT data with the hope of sparing patients the radiation exposure from the FFDM acquisition. While this much-needed alternative effectively reduces the total radiation burden, differences in image quality must also be considered. The goal of this study was to compare the intrinsic image quality of synthesized 2D C-VIEW and 2D FFDM images in terms of resolution, contrast, and noise. Methods: Two phantoms were utilized in this study: the American College of Radiology mammography accreditation phantom (ACR phantom) and a novel 3D-printed anthropomorphic breast phantom. Both phantoms were imaged using a Hologic Selenia Dimensions 3D system. Analysis of the ACR phantom includes both visual inspection and objective automated analysis using in-house software. Analysis of the 3D anthropomorphic phantom includes visual assessment of resolution and Fourier analysis of the noise. Results: Using ACR-defined scoring criteria for the ACR phantom, the FFDM images scored statistically higher than C-VIEW according to both the average observer and automated scores. In addition, between 50% and 70% of C-VIEW images failed to meet the nominal minimum ACR accreditation requirements, primarily due to fiber breaks. Software analysis demonstrated that C-VIEW provided enhanced visualization of medium and large microcalcification objects; however, the benefits diminished for smaller high-contrast objects and all low-contrast objects.
Visual analysis of the anthropomorphic phantom showed a measurable loss of resolution in the C-VIEW image (11 lp/mm FFDM, 5 lp/mm C-VIEW) and a loss in detection of small microcalcification objects. Spectral analysis of the anthropomorphic phantom showed higher total noise magnitude in the FFDM image compared with C-VIEW. Whereas the FFDM image contained approximately white noise texture, the C-VIEW image exhibited marked noise reduction at mid and high frequencies with far less noise suppression at low frequencies, resulting in a mottled noise appearance. Conclusions: This analysis demonstrates many instances where the C-VIEW image quality differs from FFDM. Compared to FFDM, C-VIEW offers a better depiction of objects of certain size and contrast, but provides poorer overall resolution and noise properties. Based on these findings, the utilization of C-VIEW images in the clinical setting requires careful consideration, especially if considering the discontinuation of FFDM imaging. Not explicitly explored in this study is how the combination of DBT + C-VIEW performs relative to DBT + FFDM or FFDM alone.
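The spectral comparison above rests on computing a noise power spectrum from mean-subtracted image data; a minimal 1D analogue of that analysis (the study itself uses 2D Fourier analysis of phantom images, and the function name is ours) is:

```python
import cmath

def noise_power_spectrum(samples):
    """Magnitude-squared DFT of a mean-subtracted noise trace:
    a 1D stand-in for the 2D spectral noise analysis described above."""
    n = len(samples)
    mean = sum(samples) / n
    x = [s - mean for s in samples]  # remove the DC component
    return [
        abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))) ** 2 / n
        for k in range(n)
    ]
```

White noise yields a roughly flat spectrum, while the C-VIEW behavior described above would show suppressed mid- and high-frequency bins relative to the low-frequency ones.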
Thomas, W P; Gaber, C E; Jacobs, G J; Kaplan, P M; Lombard, C W; Moise, N S; Moses, B L
1993-01-01
Recommendations are presented for standardized imaging planes and display conventions for two-dimensional echocardiography in the dog and cat. Three transducer locations ("windows") provide access to consistent imaging planes: the right parasternal location, the left caudal (apical) parasternal location, and the left cranial parasternal location. Recommendations for image display orientations are very similar to those for comparable human cardiac images, with the heart base or cranial aspect of the heart displayed to the examiner's right on the video display. From the right parasternal location, standard views include a long-axis four-chamber view and a long-axis left ventricular outflow view, and short-axis views at the levels of the left ventricular apex, papillary muscles, chordae tendineae, mitral valve, aortic valve, and pulmonary arteries. From the left caudal (apical) location, standard views include long-axis two-chamber and four-chamber views. From the left cranial parasternal location, standard views include a long-axis view of the left ventricular outflow tract and ascending aorta (with variations to image the right atrium and tricuspid valve, and the pulmonary valve and pulmonary artery), and a short-axis view of the aortic root encircled by the right heart. These images are presented by means of idealized line drawings. Adoption of these standards should facilitate consistent performance, recording, teaching, and communicating results of studies obtained by two-dimensional echocardiography.
Fly-through viewpoint video system for multi-view soccer movie using viewpoint interpolation
NASA Astrophysics Data System (ADS)
Inamoto, Naho; Saito, Hideo
2003-06-01
This paper presents a novel method for virtual view generation that allows viewers to fly through a real soccer scene. A soccer match is captured by multiple cameras at a stadium, and images of arbitrary viewpoints are synthesized by view interpolation of two real camera images near the given viewpoint. In the proposed method, the cameras do not need to be strongly calibrated; epipolar geometry between the cameras is sufficient for the view interpolation. Therefore, the method can easily be applied to a dynamic event even in a large space, because the effort of camera calibration is reduced. A soccer scene is classified into several regions, and virtual view images are generated based on the epipolar geometry in each region. Superimposition of these images completes the virtual view of the whole soccer scene. An application for fly-through observation of a soccer match is introduced, along with the view-synthesis algorithm and experimental results.
Stereo View of Phoenix Test Sample Site
2008-06-02
This anaglyph image, acquired by NASA’s Phoenix Lander’s Surface Stereo Imager on June 1, 2008, shows a stereoscopic 3D view of the so-called Knave of Hearts first-dig test area to the north of the lander. 3D glasses are necessary to view this image.
Three-dimensional face model reproduction method using multiview images
NASA Astrophysics Data System (ADS)
Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio
1991-11-01
This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images and to synthesize images of the model as viewed virtually from different angles, with natural shading to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front- and side-view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from the front and side views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique-view image is taken by TV camera. The feature points of the oblique-view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique-view image. The modified boundary of the personal face model is determined using the face direction, namely the rotation angle, which is detected based on the extracted feature points. After the 3D model is established, new images are synthesized by mapping facial texture onto the model.
NASA Astrophysics Data System (ADS)
Sahiner, Berkman; Petrick, Nicholas; Chan, Heang-Ping; Paquerault, Sophie; Helvie, Mark A.; Hadjiiski, Lubomir M.
2001-07-01
We used the correspondence of detected structures on two views of the same breast for false-positive (FP) reduction in computerized detection of mammographic masses. For each initially detected object on one view, we considered all possible pairings with objects on the other view that fell within a radial band defined by the nipple-to-object distances. We designed a 'correspondence classifier' to classify these pairs as either the same mass (a TP-TP pair) or a mismatch (a TP-FP, FP-TP or FP-FP pair). For each pair, similarity measures of morphological and texture features were derived and used as input features in the correspondence classifier. Two-view mammograms from 94 cases were used as a preliminary data set. Initial detection provided 6.3 FPs/image at 96% sensitivity. Further FP reduction in single view resulted in 1.9 FPs/image at 80% sensitivity and 1.1 FPs/image at 70% sensitivity. By combining single-view detection with the correspondence classifier, detection accuracy improved to 1.5 FPs/image at 80% sensitivity and 0.7 FPs/image at 70% sensitivity. Our preliminary results indicate that the correspondence of geometric, morphological, and textural features of a mass on two different views provides valuable additional information for reducing FPs.
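The radial-band pairing step described above can be illustrated as follows; the coordinates, band width, and function names are hypothetical, and the real system compares nipple-to-object distances between two mammographic views of the same breast before passing similarity features to the correspondence classifier:

```python
import math

def candidate_pairs(objs_view1, objs_view2, nipple1, nipple2, band):
    """Pair detected objects across two views whose nipple-to-object
    distances differ by at most `band` (all inputs are illustrative)."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    pairs = []
    for i, a in enumerate(objs_view1):
        r_a = dist(a, nipple1)
        for j, b in enumerate(objs_view2):
            # Keep the pairing only if b falls inside the radial band.
            if abs(r_a - dist(b, nipple2)) <= band:
                pairs.append((i, j))
    return pairs
```

Each surviving pair would then be classified as a true mass correspondence or a mismatch using the morphological and texture similarity measures.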
Zhu, S; Yang, Y; Khambay, B
2017-03-01
Clinicians are accustomed to viewing conventional two-dimensional (2D) photographs and assume that viewing three-dimensional (3D) images is similar. Facial images captured in 3D are not viewed in true 3D; this may alter clinical judgement. The aim of this study was to evaluate the reliability of using conventional photographs, 3D images, and stereoscopic projected 3D images to rate the severity of the deformity in pre-surgical class III patients. Forty adult patients were recruited. Eight raters assessed facial height, symmetry, and profile using the three different viewing media and a 100-mm visual analogue scale (VAS), and appraised the most informative viewing medium. Inter-rater consistency was above good for all three media. Intra-rater reliability was not significantly different for rating facial height using 2D (P=0.704), symmetry using 3D (P=0.056), and profile using projected 3D (P=0.749). Using projected 3D for rating profile and symmetry resulted in significantly lower median VAS scores than either 3D or 2D images (all P<0.05). For 75% of the raters, stereoscopic 3D projection was the preferred method for rating. The reliability of assessing specific characteristics was dependent on the viewing medium. Clinicians should be aware that the visual information provided when viewing 3D images is not the same as when viewing 2D photographs, especially for facial depth, and this may change the clinical impression. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Schooler, Deborah; Daniels, Elizabeth A
2014-01-01
Using a quasi-experimental design, 118 Latina girls, ages 13-18, viewed five color photographs of White women. Girls viewed either images of sexualized women or images of non-sexualized women. After viewing the images, girls were asked to complete the sentence stem, "I am…" 20 times. Thirty percent of girls spontaneously described their ethnicity in one of their sentence completions. Spontaneous use of ethnicity was taken as an indicator of the salience of ethnic identity. Among girls who viewed sexualized, thin-ideal White media images, spontaneously using an ethnic descriptor was related to more positive descriptions of one's own body and appearance. Analyses supported the premise that ethnic identity may act as a protective factor, buffering Latina girls from the negative effects of viewing sexualized, thin-ideal White media images. Copyright © 2013 Elsevier Ltd. All rights reserved.
BigView Image Viewing on Tiled Displays
NASA Technical Reports Server (NTRS)
Sandstrom, Timothy
2007-01-01
BigView allows for interactive panning and zooming of images of arbitrary size on desktop PCs running Linux. Additionally, it can work in a multi-screen environment where multiple PCs cooperate to view a single, large image. Using this software, one can explore on relatively modest machines images such as the Mars Orbiter Camera mosaic (92,160 × 33,280 pixels). The images must first be converted into paged format, where the image is stored in 256 × 256-pixel pages to allow rapid movement of pixels into texture memory. The format contains an image pyramid: a set of scaled versions of the original image. Each scaled image is 1/2 the size of the previous, starting with the original down to the smallest, which fits into a single 256 × 256 page.
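The pyramid described above halves the image until it fits a single page; a sketch of the level computation (the 256-pixel page size is from the text, the function name is ours):

```python
def pyramid_levels(width, height, page=256):
    """List the (width, height) of each 2x-downsampled pyramid level,
    from the original image down to one that fits a single page."""
    levels = [(width, height)]
    w, h = width, height
    while w > page or h > page:
        w, h = max(1, w // 2), max(1, h // 2)
        levels.append((w, h))
    return levels
```

For the Mars Orbiter Camera mosaic, nine halvings bring 92,160 × 33,280 pixels down to a 180 × 65 top level that fits one 256 × 256 page.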
Fisheye camera around view monitoring system
NASA Astrophysics Data System (ADS)
Feng, Cong; Ma, Xinjun; Li, Yuanyuan; Wu, Chenchen
2018-04-01
The 360-degree around-view monitoring system is a key technology of advanced driver assistance systems; it helps the driver cover blind spots and has high application value. In this paper, we study the transformation relationships between multiple coordinate systems in order to generate a panoramic image in a unified car coordinate system. First, the panoramic image is divided into four regions. Using the parameters obtained by calibration, the pixels of the four fisheye images corresponding to the four sub-regions are mapped into the constructed panoramic image. On the basis of the 2D around-view monitoring system, a 3D version is realized by reconstructing the projection surface. We then compare the 2D and 3D around-view schemes in the unified coordinate system; the 3D scheme overcomes shortcomings of the traditional 2D scheme, such as a small field of view and pronounced deformation of ground objects. Finally, the images collected by the fisheye cameras installed around the car body can be stitched into a 360-degree panoramic image.
Three-dimensional imaging from a unidirectional hologram: wide-viewing-zone projection type.
Okoshi, T; Oshima, K
1976-04-01
In ordinary holography reconstructing a virtual image, the hologram must be wider than either the visual field or the viewing zone. In this paper, an economical method of recording a wide-viewing-zone wide-visual-field 3-D holographic image is proposed. In this method, many mirrors are used to collect object waves onto a small hologram. In the reconstruction, a real image from the hologram is projected onto a horizontally direction-selective stereoscreen through the same mirrors. In the experiment, satisfactory 3-D images have been observed from a wide viewing zone. The optimum design and information reduction techniques are also discussed.
Sensitivity images for multi-view ultrasonic array inspection
NASA Astrophysics Data System (ADS)
Budyn, Nicolas; Bevan, Rhodri; Croxford, Anthony J.; Zhang, Jie; Wilcox, Paul D.; Kashubin, Artem; Cawley, Peter
2018-04-01
The multi-view total focusing method (TFM) is an imaging technique for ultrasonic full matrix array data that typically exploits ray paths with zero, one, or two internal reflections in the inspected object, for all combinations of longitudinal and transverse modes. The fusion of this vast quantity of views is expected to increase the reliability of ultrasonic inspection; however, it is not trivial to determine which views and which areas are best suited for the detection of a given type and orientation of defect. This work introduces sensitivity images that give the expected response of a defect in any part of the inspected object and for any view. These images are based on a ray-based analytical forward model. They can be used to determine which views and which areas lead to the highest probability of detection of the defect. They can also be used for quantitatively analyzing the effects of the parameters of the inspection (probe angle and position, for example) on the overall probability of detection. Finally, they can be used to rescale TFM images so that the different views have comparable amplitudes. This methodology is applied to experimental data and discussed.
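The TFM image behind each view is a delay-and-sum over the full matrix of transmit-receive traces; the sketch below shows the direct-path (zero-reflection, single-mode) case for one image point, with the array geometry, sampling rate, and names all assumed for illustration:

```python
import math

def tfm_pixel(fmc, elems, px, c, fs):
    """Total focusing method amplitude at one image point.
    fmc[i][j] is the sampled time trace for transmitter i / receiver j,
    elems the array element coordinates, c the wave speed, fs the
    sampling rate. Direct-path view only; not the multi-view pipeline."""
    amp = 0.0
    for i, e_tx in enumerate(elems):
        t_tx = math.dist(e_tx, px) / c          # transmit time of flight
        for j, e_rx in enumerate(elems):
            t = t_tx + math.dist(e_rx, px) / c  # total time of flight
            k = int(round(t * fs))              # nearest sample index
            trace = fmc[i][j]
            if 0 <= k < len(trace):
                amp += trace[k]
    return amp
```

Multi-view TFM repeats this sum with ray paths that include one or two internal reflections and mode conversions, which is where the sensitivity images above guide the choice of view.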
NASA Astrophysics Data System (ADS)
Yan, Zhiqiang; Yan, Xingpeng; Jiang, Xiaoyu; Gao, Hui; Wen, Jun
2017-11-01
An integral imaging based light field display method is proposed by use of holographic diffuser, and enhanced viewing resolution is gained over conventional integral imaging systems. The holographic diffuser is fabricated with controlled diffusion characteristics, which interpolates the discrete light field of the reconstructed points to approximate the original light field. The viewing resolution can thus be improved and independent of the limitation imposed by Nyquist sampling frequency. An integral imaging system with low Nyquist sampling frequency is constructed, and reconstructed scenes of high viewing resolution using holographic diffuser are demonstrated, verifying the feasibility of the method.
2016-09-01
Ceres' lonely mountain, Ahuna Mons, is seen in this simulated perspective view. The elevation has been exaggerated by a factor of two. The view was made using enhanced-color images from NASA's Dawn mission. Images taken using blue (440 nanometers), green (750 nanometers) and infrared (960 nanometers) spectral filters were combined to create the view. The spacecraft's framing camera took the images from Dawn's low-altitude mapping orbit, from an altitude of 240 miles (385 kilometers) in August 2016. The resolution of the component images is 120 feet (35 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20915
The effects of gender stereotypic and counter-stereotypic textbook images on science performance.
Good, Jessica J; Woodzicka, Julie A; Wingfield, Lylan C
2010-01-01
We investigated the effect of gender stereotypic and counter-stereotypic images on male and female high school students' science comprehension and anxiety. We predicted stereotypic images to induce stereotype threat in females and impair science performance. Counter-stereotypic images were predicted to alleviate threat and enhance female performance. Students read one of three chemistry lessons, each containing the same text, with photograph content varied according to stereotype condition. Participants then completed a comprehension test and anxiety measure. Results indicate that female students had higher comprehension after viewing counter-stereotypic images (female scientists) than after viewing stereotypic images (male scientists). Male students had higher comprehension after viewing stereotypic images than after viewing counter-stereotypic images. Implications for alleviating the gender gap in science achievement are discussed.
McDonald, R W; Rice, M J; Reller, M D; Marcella, C P; Sahn, D J
1996-01-01
Sinus venosus atrial septal defects are frequently missed and difficult to visualize with conventional two-dimensional echocardiographic views. Using modified subcostal and right parasternal longitudinal views, nine patients were found to have a sinus venosus atrial septal defect. The modified subcostal view showed a sinus venosus atrial septal defect in all nine patients; three patients had secundum atrial septal defects as well. The right parasternal view detected only six patients with sinus venosus atrial septal defect. Partial anomalous pulmonary venous return was diagnosed in seven patients using these views. The combination of subcostal and right parasternal longitudinal imaging views will improve the detection of sinus venosus atrial septal defects.
Chen, Yan; James, Jonathan J; Turnbull, Anne E; Gale, Alastair G
2015-10-01
To establish whether lower resolution, lower cost viewing devices have the potential to deliver mammographic interpretation training. On three occasions over eight months, fourteen consultant radiologists and reporting radiographers read forty challenging digital mammography screening cases on three different displays: a digital mammography workstation, a standard LCD monitor, and a smartphone. Standard image manipulation software was available for use on all three devices. Receiver operating characteristic (ROC) analysis and ANOVA (Analysis of Variance) were used to determine the significance of differences in performance between the viewing devices with/without the application of image manipulation software. The effect of reader's experience was also assessed. Performance was significantly higher (p < .05) on the mammography workstation compared to the other two viewing devices. When image manipulation software was applied to images viewed on the standard LCD monitor, performance improved to mirror levels seen on the mammography workstation with no significant difference between the two. Image interpretation on the smartphone was uniformly poor. Film reader experience had no significant effect on performance across all three viewing devices. Lower resolution standard LCD monitors combined with appropriate image manipulation software are capable of displaying mammographic pathology, and are potentially suitable for delivering mammographic interpretation training. • This study investigates potential devices for training in mammography interpretation. • Lower resolution standard LCD monitors are potentially suitable for mammographic interpretation training. • The effect of image manipulation tools on mammography workstation viewing is insignificant. • Reader experience had no significant effect on performance in all viewing devices. • Smart phones are not suitable for displaying mammograms.
A beam-splitter-type 3-D endoscope for front view and front-diagonal view images.
Kamiuchi, Hiroki; Masamune, Ken; Kuwana, Kenta; Dohi, Takeyoshi; Kim, Keri; Yamashita, Hiromasa; Chiba, Toshio
2013-01-01
In endoscopic surgery, surgeons must manipulate an endoscope inside the body cavity to observe a large field of view while estimating the distance between surgical instruments and the affected area by reference to the size or motion of the surgical instruments in 2-D endoscopic images on a monitor. Therefore, there is a risk of the endoscope or surgical instruments physically damaging body tissues. To overcome this problem, we developed a Ø7-mm 3-D endoscope that can switch between providing front and front-diagonal view 3-D images by simply rotating its sleeves. This 3-D endoscope consists of a conventional 3-D endoscope and an outer and inner sleeve with a beam splitter and polarization plates. The beam splitter was used for visualizing both the front and front-diagonal views and was set at 25° to the outer sleeve's distal end in order to eliminate a blind spot common to both views. Polarization plates were used to avoid overlap of the two views. We measured the signal-to-noise ratio (SNR), sharpness, chromatic aberration (CA), and viewing angle of this 3-D endoscope and evaluated its feasibility in vivo. Compared to the conventional 3-D endoscope, the SNR and sharpness of this 3-D endoscope decreased by 20% and 7%, respectively. No significant difference was found in CA. The viewing angle for both the front and front-diagonal views was about 50°. In the in vivo experiment, this 3-D endoscope provided clear 3-D images of both views by simply rotating its inner sleeve. Because the developed 3-D endoscope can provide the front and front-diagonal views by simply rotating the inner sleeve, the risk of damage to fragile body tissues can be significantly decreased.
NASA Astrophysics Data System (ADS)
Betancur, Julián.; Simon, Antoine; Schnell, Frédéric; Donal, Erwan; Hernández, Alfredo; Garreau, Mireille
2013-11-01
The acquisition of ECG-gated cine magnetic resonance images of the heart is routinely performed in apnea in order to suppress the motion artifacts caused by breathing. However, many factors, including the 2D nature of the acquisition and the use of different heartbeats to acquire the multiple-view cine images, cause such artifacts to appear. This paper presents the qualitative evaluation of a method aiming to remove motion artifacts in multiple-view cine images acquired from patients with a hypertrophic cardiomyopathy diagnosis. The approach uses iconic registration to reduce in-plane artifacts in the long-axis-view image stacks and both in-plane and out-of-plane motion artifacts in the short-axis-view image stack. Four similarity measures were evaluated: the normalized correlation, the normalized mutual information, the sum of absolute voxel differences, and the metric proposed by Slomka et al. The qualitative evaluation assessed the misalignment of different anatomical structures of the left ventricle as follows: the misalignment of the interventricular septum and the lateral wall for short-axis-view acquisitions, and the misalignment between the short-axis-view image and the long-axis-view images. Results showed that correction using the normalized correlation was the most appropriate, with an 80% success rate.
Topographic View of Ceres Mountain
2015-09-30
This view, made using images taken by NASA's Dawn spacecraft, features a tall conical mountain on Ceres. Elevations span a range of about 5 miles (8 kilometers) from the lowest places in this region to the highest terrains. Blue represents the lowest elevation, and brown is the highest. The white streaks seen running down the side of the mountain are especially bright parts of the surface. The image was generated using two components: images of the surface taken during Dawn's High Altitude Mapping Orbit (HAMO) phase, where it viewed the surface at a resolution of about 450 feet (140 meters) per pixel, and a shape model generated using images taken at varying sun and viewing angles during Dawn's lower-resolution Survey phase. The image of the region is color-coded according to elevation, and then draped over the shape model to give this view. http://photojournal.jpl.nasa.gov/catalog/PIA19976
Large-viewing-angle electroholography by space projection
NASA Astrophysics Data System (ADS)
Sato, Koki; Obana, Kazuki; Okumura, Toshimichi; Kanaoka, Takumi; Nishikawa, Satoko; Takano, Kunihiko
2004-06-01
The hologram image specified here is a full-parallax 3D image, which appears more natural because focusing and convergence coincide with each other. We aim at a practical electro-holography system, since in conventional electro-holography the viewing angle of the image is very small owing to the limited display pixel size. We are developing a new method that achieves a large viewing angle by space projection: white laser light illuminates a single DMD panel (displaying a time-shared CGH of the three RGB colors), and a 3D space screen formed of very small water particles reconstructs the 3D image with a large viewing angle through scattering from the particles.
View synthesis using parallax invariance
NASA Astrophysics Data System (ADS)
Dornaika, Fadi
2001-06-01
View synthesis has become a focus of attention for both the computer vision and computer graphics communities. It consists of creating novel images of a scene as it would appear from novel viewpoints. View synthesis can be used in a wide variety of applications such as video compression, graphics generation, virtual reality, and entertainment. This paper addresses the following problem: given a dense disparity map between two reference images, we would like to synthesize a novel view of the same scene associated with a novel viewpoint. Most of the existing work relies on building a set of 3D meshes which are then projected onto the new image (rendering is performed using texture mapping). The advantages of our view synthesis approach are as follows. First, the novel view is specified by a rotation and a translation, which are the most natural way to express the virtual location of the camera. Second, the approach is able to synthesize highly realistic images whose viewing position is significantly far from the reference viewpoints. Third, the approach is able to handle the visibility problem during the synthesis process. Our framework has two main steps. The first step (analysis) consists of computing the homography at infinity, the epipoles, and thus the parallax field associated with the reference images. The second step (synthesis) consists of warping the reference image into a new one, based on the invariance of the computed parallax field. The analysis step works directly on the reference views and only needs to be performed once. Examples of synthesizing novel views using either feature correspondences or a dense disparity map have demonstrated the feasibility of the proposed approach.
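The warping step can be sketched under a plane-plus-parallax formulation: map each pixel through the homography at infinity, then shift it toward the epipole by an amount governed by its parallax. This is an illustrative formulation with hypothetical names, not the paper's exact equations:

```python
def apply_homography(H, p):
    """Map image point p = (x, y) through a 3x3 homography H
    (homogeneous coordinates followed by perspective division)."""
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (
        (H[0][0] * x + H[0][1] * y + H[0][2]) / w,
        (H[1][0] * x + H[1][1] * y + H[1][2]) / w,
    )

def transfer_point(H_inf, epipole, p, parallax):
    """Plane-plus-parallax transfer: warp p by the homography at
    infinity, then move it toward the epipole in proportion to its
    parallax (an illustrative version of the synthesis warp)."""
    qx, qy = apply_homography(H_inf, p)
    ex, ey = epipole
    return (qx + parallax * (ex - qx), qy + parallax * (ey - qy))
```

With the parallax field computed once in the analysis step, synthesizing a new view amounts to applying such a transfer to every reference pixel and resolving visibility.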
mPano: cloud-based mobile panorama view from single picture
NASA Astrophysics Data System (ADS)
Li, Hongzhi; Zhu, Wenwu
2013-09-01
Panorama views provide an informative and natural user experience for representing a whole scene. Advances in mobile augmented reality, mobile-cloud computing, and the mobile internet enable panorama views on mobile phones with new functionalities, such as querying, anytime and anywhere, where a landmark picture was taken and what the whole scene looks like. Generating and exploring panorama views on mobile devices faces significant challenges due to the limited computing capacity, battery life, and memory size of mobile phones, as well as the bandwidth of the mobile Internet connection. To address these challenges, this paper presents a novel cloud-based mobile panorama view system, named "mPano", that can generate and view a panorama on mobile devices from a single picture. In our system, first, we propose a novel iterative multi-modal image retrieval (IMIR) approach to retrieve spatially adjacent images, using both tag and content information from the single picture. Second, we propose a parallel server-side approach that synthesizes the panorama view in the cloud, in contrast to today's client-side synthesis, which is practically infeasible on mobile phones. Third, we propose a predictive-cache solution to reduce the latency of image delivery from the cloud server to the mobile client. We have built a real mobile panorama view system and performed experiments. The experimental results demonstrate the effectiveness of our system and of the proposed key component technologies, especially for landmark images.
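The retrieval step fuses two modalities. A toy sketch of such tag-plus-content score fusion (the Jaccard and histogram-intersection measures and the equal weighting are our illustrative assumptions, not necessarily the paper's IMIR formulation):

```python
def tag_similarity(tags_a, tags_b):
    # Jaccard overlap between two tag sets
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def content_similarity(hist_a, hist_b):
    # histogram intersection on L1-normalized feature histograms
    sa, sb = sum(hist_a), sum(hist_b)
    return sum(min(x / sa, y / sb) for x, y in zip(hist_a, hist_b))

def rank_candidates(query, candidates, w_tag=0.5):
    # weighted fusion of the two modalities; the weight is illustrative
    scored = [(w_tag * tag_similarity(query["tags"], c["tags"])
               + (1 - w_tag) * content_similarity(query["hist"], c["hist"]),
               c["id"])
              for c in candidates]
    return [cid for _, cid in sorted(scored, reverse=True)]
```

In an iterative scheme, the top-ranked results would be fed back as new queries to grow the set of spatially adjacent images around the original picture.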
High-resolution, continuous field-of-view (FOV), non-rotating imaging system
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance L. (Inventor); Stirbl, Robert C. (Inventor); Aghazarian, Hrand (Inventor); Padgett, Curtis W. (Inventor)
2010-01-01
A high-resolution CMOS imaging system especially suitable for use in a periscope head. The imaging system includes a sensor head for scene acquisition and a control apparatus comprising distributed processors and software for device control, data handling, and display. The sensor head encloses a combination of wide field-of-view and narrow field-of-view CMOS imagers. Each bank of imagers is controlled by a dedicated processing module that handles information flow and image analysis of the camera outputs. The imaging system also includes an automated or manually controlled display system and software providing an interactive graphical user interface (GUI) that displays a full 360-degree field of view and allows the user, or an automated ATR (automatic target recognition) system, to select regions for higher-resolution inspection.
NIH Seeks Input on In-patient Clinical Research Areas | Division of Cancer Prevention
The Next Generation of Infrared Views
2009-11-17
The image on the left shows an infrared view of the center of our Milky Way galaxy as seen by the 1983 Infrared Astronomical Satellite, which surveyed the whole sky with only 62 pixels. The image on the right shows an infrared view similar to what NASA
Interior view of the Flight Deck looking forward, the Commander's ...
Interior view of the Flight Deck looking forward, the Commander's seat and controls are on the left and the pilot's seat and controls are on the right of the view. Note that the flight deck windows have protective covers over them in this view. This image can be digitally stitched with image HAER No. TX-116-A-20 to expand the view to include the overhead control panels of the flight deck. This view was taken in the Orbiter Processing Facility at the Kennedy Space Center. - Space Transportation System, Orbiter Discovery (OV-103), Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX
NASA Astrophysics Data System (ADS)
Zhang, Ka; Sheng, Yehua; Wang, Meizhen; Fu, Suxia
2018-05-01
The traditional multi-view vertical line locus (TMVLL) matching method is an object-space-based method that is commonly used to directly acquire spatial 3D coordinates of ground objects in photogrammetry. However, the TMVLL method can only obtain one elevation and lacks an accurate means of validating the matching results. In this paper, we propose an enhanced multi-view vertical line locus (EMVLL) matching algorithm based on positioning consistency for aerial or space images. The algorithm involves three components: confirming candidate pixels of the ground primitive in the base image, multi-view image matching based on the object space constraints for all candidate pixels, and validating the consistency of the object space coordinates with the multi-view matching result. The proposed algorithm was tested using actual aerial images and space images. Experimental results show that the EMVLL method successfully solves the problems associated with the TMVLL method, and has greater reliability, accuracy and computing efficiency.
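The object-space idea behind vertical line locus matching can be sketched as follows: candidate elevations along the vertical line at a ground position (X, Y) are projected into every view, and the elevation whose image patches agree best across views is kept. This is a simplified toy (plain NCC, nearest-pixel sampling, invented names), not the EMVLL algorithm with its positioning-consistency validation:

```python
import numpy as np

def ncc(a, b):
    # normalized cross-correlation of two equal-size patches
    a = a - a.mean()
    b = b - b.mean()
    d = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / d) if d > 0 else 0.0

def project(P, X):
    # pinhole projection of a 3-D point X by a 3x4 matrix P
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def vll_match(X, Y, z_candidates, images, P_list, patch=3):
    """Sweep elevations along the vertical line locus at (X, Y); the Z
    whose reprojected patches agree best across all views wins."""
    r = patch // 2
    best_z, best_score = None, -np.inf
    for z in z_candidates:
        patches = []
        for img, P in zip(images, P_list):
            u, v = project(P, np.array([X, Y, z]))
            u, v = int(round(u)), int(round(v))
            if r <= v < img.shape[0] - r and r <= u < img.shape[1] - r:
                patches.append(img[v - r:v + r + 1, u - r:u + r + 1])
        if len(patches) < 2:
            continue  # point not visible in enough views
        score = np.mean([ncc(patches[0], p) for p in patches[1:]])
        if score > best_score:
            best_score, best_z = score, z
    return best_z, best_score
```

The EMVLL refinement described in the abstract would then reproject the matched result and check that the recovered object-space coordinates are consistent across views before accepting the elevation.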
2015-08-20
NASA Cassini spacecraft captured this parting view showing the rough and icy crescent of Saturn moon Dione following the spacecraft last close flyby of the moon on Aug. 17, 2015. Cassini obtained a similar crescent view in 2005 (see PIA07745). The earlier view has an image scale about four times higher, but does not show the moon's full crescent as this view does. Five visible light (clear spectral filter), narrow-angle camera images were combined to create this mosaic view. The scene is an orthographic projection centered on terrain at 0.4 degrees north latitude, 30.6 degrees west longitude on Dione. An orthographic view is most like the view seen by a distant observer looking through a telescope. The view was acquired at distances ranging from approximately 37,000 miles (59,000 kilometers) to 47,000 miles (75,000 kilometers) from Dione and at a sun-Dione-spacecraft, or phase, angle of 145 degrees. Image scale is about 1,300 feet (400 meters) per pixel. North on Dione is up and rotated 34 degrees to the right. http://photojournal.jpl.nasa.gov/catalog/PIA19649
Evaluation of target coverage and margins adequacy during CyberKnife Lung Optimized Treatment.
Ricotti, Rosalinda; Seregni, Matteo; Ciardo, Delia; Vigorito, Sabrina; Rondi, Elena; Piperno, Gaia; Ferrari, Annamaria; Zerella, Maria Alessia; Arculeo, Simona; Francia, Claudia Maria; Sibio, Daniela; Cattani, Federica; De Marinis, Filippo; Spaggiari, Lorenzo; Orecchia, Roberto; Riboldi, Marco; Baroni, Guido; Jereczek-Fossa, Barbara Alicja
2018-04-01
Evaluation of target coverage and verification of safety margins in the motion management strategies implemented by the Lung Optimized Treatment (LOT) module of the CyberKnife system. Three fiducial-less motion management strategies provided by LOT can be selected according to tumor visibility in the X-ray images acquired during treatment. In 2-view modality the tumor is visible in both X-ray images and full motion tracking is performed. In 1-view modality the tumor is visible in a single X-ray image; therefore, motion tracking is combined with an internal target volume (ITV)-based margin expansion. In 0-view modality the lesion is not visible, so the treatment relies entirely on an ITV-based approach. Data from 30 patients treated in 2-view modality were selected, providing the three-dimensional tumor position at the time of each X-ray image. Treatments in 1-view and 0-view modalities were simulated by processing log files and planning volumes. Planning target volume (PTV) margins were defined according to the tracking modality: end-exhale clinical target volume (CTV) + 3 mm in 2-view and ITV + 5 mm in 0-view. In the 1-view scenario, the ITV encompasses only tumor motion along the non-visible direction; non-uniform ITV-to-PTV margins were then applied: 3 mm and 5 mm in the visible and non-visible directions, respectively. We defined the coverage of each voxel of the CTV as the percentage of X-ray images in which that voxel was included in the PTV. In 2-view modality coverage was calculated as the intersection between the CTV centred on the imaged target position and the PTV centred on the predicted target position, as recorded in log files. In 1-view modality, coverage was calculated as the intersection between the CTV centred on the imaged target position and the PTV centred on the projected predictor data. In 0-view modality coverage was calculated as the intersection between the CTV centred on the imaged target position and the non-moving PTV.
Analogous to a dose-volume histogram, CTV coverage-volume histograms (CVHs) were derived for each patient and treatment modality. The geometric coverage of 90% and 95% of the CTV volume (C90 and C95, respectively) was evaluated. Patient-specific optimal margins (ensuring C95 ≥ 95%) were computed retrospectively. The median ± interquartile range of C90 and C95 was 99.1 ± 0.6% and 99.0 ± 3.1% for upper lobe lesions, and 98.9 ± 4.2% and 97.8 ± 7.5% for lower and middle lobe tumors. In 2-view, 1-view and 0-view modalities, the adopted margins ensured C95 ≥ 95% in 70%, 85% and 63% of cases and C95 ≥ 90% in 90%, 88% and 83% of cases, respectively. In 2-view, 1-view and 0-view, a reduction in margins still ensured C95 ≥ 95% in 33%, 78% and 59% of cases, respectively. The CTV coverage analysis provided an a posteriori evaluation of the geometric accuracy of treatment and allowed quantitative verification of the adequacy of the PTV margins applied in CyberKnife LOT treatments, offering guidance in the selection of CTV margins. © 2018 American Association of Physicists in Medicine.
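Reading a coverage value off such a coverage-volume histogram parallels reading Dx off a DVH: sort the per-voxel coverages and take the value reached by the requested fraction of the volume. A small sketch (the array layout and function name are our assumptions):

```python
import numpy as np

def coverage_at_volume(inclusion, volume_fraction):
    """inclusion: (n_voxels, n_images) boolean array, True where a CTV
    voxel fell inside the PTV on that X-ray image. Returns the coverage
    (%) achieved by at least `volume_fraction` of the CTV volume."""
    per_voxel = inclusion.mean(axis=1) * 100.0  # % of images covering each voxel
    sorted_cov = np.sort(per_voxel)[::-1]       # best-covered voxels first
    idx = int(np.ceil(volume_fraction * len(sorted_cov))) - 1
    return float(sorted_cov[idx])
```

With this helper, C95 is `coverage_at_volume(inclusion, 0.95)`, and the margin adequacy check in the abstract corresponds to testing whether that value is at least 95%.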
PTSD in Limb Trauma and Recovery
2008-10-16
The Samsung display provides a wider field of view, much greater image fidelity, and more comfortable viewing than the eMagin head-mounted display (HMD), and is well suited to deployment. The software will also run on display platforms other than the eMagin HMD, including Brown University's Cave, an eight-foot immersive VR environment.
NASA Technical Reports Server (NTRS)
Chamberlain, F. R. (Inventor)
1980-01-01
A system for generating, within a single frame of photographic film, a quadrified image including images of angularly (including orthogonally) related fields of view of a near field three dimensional object is described. It is characterized by three subsystems each of which includes a plurality of reflective surfaces for imaging a different field of view of the object at a different quadrant of the quadrified image. All of the subsystems have identical path lengths to the object photographed.
Colonoscopy tutorial software made with a cadaver's sectioned images.
Chung, Beom Sun; Chung, Min Suk; Park, Hyung Seon; Shin, Byeong-Seok; Kwon, Koojoo
2016-11-01
Novice doctors may watch tutorial videos when training for actual or computed tomographic (CT) colonoscopy. These conventional learning videos can be complemented by virtual colonoscopy software made with a cadaver's sectioned images (SIs). The objective of this study was to assist colonoscopy trainees with the new interactive software. Submucosal segmentation on the SIs was carried out through the whole length of the large intestine. From the SIs and segmented images, a three-dimensional model was reconstructed. Six hundred seventy-one proximal colonoscopic views (conventional views) and corresponding distal colonoscopic views (simulating retroflexion of a colonoscope) were produced. Navigation views showing the current location of the colonoscope tip and its course, as well as supplementary description views, were elaborated. The four corresponding views were assembled into convenient browsing software, downloadable free from the homepage (anatomy.co.kr). The SI colonoscopy software, with its realistic images and supportive tools, is available to anyone. Users can readily see the position and direction of the virtual colonoscope tip and recognize meaningful structures in the colonoscopic views. The software is expected to be an auxiliary learning tool to improve technique and related knowledge in actual and CT colonoscopies. Hopefully, the software will be updated using raw images from the Visible Korean project. Copyright © 2016 Elsevier GmbH. All rights reserved.
Initial experience with a nuclear medicine viewing workstation
NASA Astrophysics Data System (ADS)
Witt, Robert M.; Burt, Robert W.
1992-07-01
Graphical user interface (GUI) workstations are now available from commercial vendors. We recently installed a GUI workstation in our nuclear medicine reading room for the exclusive use of staff and resident physicians. The system is built on a Macintosh platform and has been available as a DELTAmanager from MedImage and, more recently, as an ICON V from Siemens Medical Systems. The workstation provides only display functions and connects to our existing nuclear medicine imaging system via Ethernet. The system has some processing capabilities to create oblique, sagittal and coronal views from transverse tomographic views. Hard-copy output is via a screen-save device and a thermal color printer. The DELTAmanager replaced a MicroDELTA workstation which had both processing and viewing functions. The mouse-activated GUI has brought remarkable changes to physicians' use of the nuclear medicine viewing system. Training time to view and review studies has been reduced from hours to about 30 minutes. Generation of oblique views and display of brain and heart tomographic studies has been reduced from about 30 minutes of a technician's time to about 5 minutes of a physician's time. Overall functionality has increased so that resident physicians with little prior computer experience can access all images on the image server and display pertinent patient images when consulting with other staff.
Table screen 360-degree holographic display using circular viewing-zone scanning.
Inoue, Tatsuaki; Takaki, Yasuhiro
2015-03-09
A table screen 360-degree holographic display is proposed, with an increased screen size and a viewing zone expanded over all horizontal directions around the table screen. It consists of a microelectromechanical systems spatial light modulator (MEMS SLM), a magnifying imaging system, and a rotating screen. The MEMS SLM generates hologram patterns at a high frame rate, the magnifying imaging system enlarges the screen of the MEMS SLM, and the reduced viewing zones are scanned circularly by the rotating screen. The viewing zones are localized to practically realize wavefront reconstruction. An experimental system was constructed. The generation of 360-degree three-dimensional (3D) images was achieved by circularly scanning 800 reduced and localized viewing zones. The table screen had a diameter of 100 mm, and the frame rate of 3D image generation was 28.4 Hz.
Nakashima, Ryoichi; Komori, Yuya; Maeda, Eriko; Yoshikawa, Takeharu; Yokosawa, Kazuhiko
2016-01-01
Although viewing multiple stacks of medical images presented on a display is a relatively new but useful medical task, little is known about it. In particular, it is unclear how radiologists search for lesions in this type of image reading. When viewing cluttered and dynamic displays, continuous motion itself does not capture attention. Target detection is therefore aided when observers' attention is captured by the onset signal of a suddenly appearing target among continuously moving distractors (i.e., a passive viewing strategy). This applies to stack viewing tasks, because lesions often show up as transient signals in medical images that are presented sequentially, simulating a dynamic, smoothly transforming progression through the organs. However, it is unclear whether observers can detect a target when it appears at the beginning of a sequential presentation, where the global apparent-motion onset signal (i.e., the signal marking the initiation of apparent motion by sequential presentation) occurs. We investigated the ability of radiologists to detect lesions during such tasks by comparing the performance of radiologists and novices. Results show that the overall performance of radiologists is better than that of novices. Furthermore, the temporal location of a lesion in a CT image sequence (i.e., when the lesion appears in the sequence) does not affect the performance of radiologists, whereas it does affect the performance of novices. Novices have greater difficulty detecting a lesion that appears early rather than late in the image sequence. We suggest that radiologists have mechanisms, which novices lack, for detecting lesions in medical images with little attention. This ability is critically important when viewing rapid sequential presentations of multiple CT images, such as stack viewing tasks.
Generalization between canonical and non-canonical views in object recognition
Ghose, Tandra; Liu, Zili
2013-01-01
Viewpoint generalization in object recognition is the process that allows recognition of a given 3D object from many different viewpoints despite variations in its 2D projections. We used the canonical view effects as a foundation to empirically test the validity of a major theory in object recognition, the view-approximation model (Poggio & Edelman, 1990). This model predicts that generalization should be better when an object is first seen from a non-canonical view and then a canonical view than when seen in the reversed order. We also manipulated object similarity to study the degree to which this view generalization was constrained by shape details and task instructions (object vs. image recognition). Old-new recognition performance for basic and subordinate level objects was measured in separate blocks. We found that for object recognition, view generalization between canonical and non-canonical views was comparable for basic level objects. For subordinate level objects, recognition performance was more accurate from non-canonical to canonical views than the other way around. When the task was changed from object recognition to image recognition, the pattern of the results reversed. Interestingly, participants responded “old” to “new” images of “old” objects with a substantially higher rate than to “new” objects, despite instructions to the contrary, thereby indicating involuntary view generalization. Our empirical findings are incompatible with the prediction of the view-approximation theory, and argue against the hypothesis that views are stored independently. PMID:23283692
NASA Astrophysics Data System (ADS)
Smith, Brandon M.; Stork, David G.; Zhang, Li
2009-01-01
The problem of reconstructing a three-dimensional scene from single or multiple views has been thoroughly studied in the computer vision literature, and recently has been applied to problems in the history of art. Criminisi pioneered the application of single-view metrology to reconstructing the fictive spaces in Renaissance paintings, such as the vault in Masaccio's Trinità and the plaza in Piero della Francesca's Flagellazione. While the vast majority of realist paintings provide but a single view, some provide multiple views, through mirrors depicted within their tableaus. The contemporary American realist Scott Fraser's Three way vanitas is a highly realistic still-life containing three mirrors; each mirror provides a new view of the objects in the tableau. We applied multiple-view reconstruction methods to the direct image and the images reflected by these mirrors to reconstruct the three-dimensional tableau. Our methods estimate virtual viewpoints for each view using the geometric constraints provided by the direct view of the mirror frames, along with the reflected images themselves. Moreover, our methods automatically discover inconsistencies between the different views, including ones that might elude careful scrutiny by eye, for example the fact that the height of the water in the glass differs between the direct view and that in the mirror at the right. We believe our work provides the first application of multiple-view reconstruction to a single painting and will have application to other paintings and questions in the history of art.
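Each mirror supplies an extra virtual camera, so points in the tableau can be recovered by standard two-view triangulation between the direct view and a reflected view. A minimal linear (DLT) sketch under ideal, noise-free projection matrices (the example setup is ours, not the authors' calibration):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.
    P1, P2 are 3x4 projection matrices (e.g. the real camera and a
    mirror-reflected virtual camera); x1, x2 are image coordinates."""
    # each view contributes two homogeneous linear constraints on X
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]               # null vector of A = homogeneous 3-D point
    return X[:3] / X[3]
```

Triangulating the same feature from different mirror pairs and comparing the results is one way the inconsistencies mentioned in the abstract (such as the differing water level) could be exposed quantitatively.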
2017-08-11
These two views of Saturn's moon Titan exemplify how NASA's Cassini spacecraft has revealed the surface of this fascinating world. Cassini carried several instruments to pierce the veil of hydrocarbon haze that enshrouds Titan. The mission's imaging cameras also have several spectral filters sensitive to specific wavelengths of infrared light that are able to make it through the haze to the surface and back into space. These "spectral windows" have enabled the imaging cameras to map nearly the entire surface of Titan. In addition to Titan's surface, images from both the imaging cameras and VIMS have provided windows into the moon's ever-changing atmosphere, chronicling the appearance and movement of hazes and clouds over the years. A large, bright and feathery band of summer clouds can be seen arcing across high northern latitudes in the view at right. These views were obtained with the Cassini spacecraft narrow-angle camera on March 21, 2017. Images taken using red, green and blue spectral filters were combined to create the natural-color view at left. The false-color view at right was made by substituting an infrared image (centered at 938 nanometers) for the red color channel. The views were acquired at a distance of approximately 613,000 miles (986,000 kilometers) from Titan. Image scale is about 4 miles (6 kilometers) per pixel. https://photojournal.jpl.nasa.gov/catalog/PIA21624
Yeo, Lami; Romero, Roberto; Jodicke, Cristiano; Oggè, Giovanna; Lee, Wesley; Kusanovic, Juan Pedro; Vaisbuch, Edi; Hassan, Sonia S.
2010-01-01
Objective To describe a novel and simple algorithm (FAST Echo: Four chamber view And Swing Technique) to visualize standard diagnostic planes of fetal echocardiography from dataset volumes obtained with spatiotemporal image correlation (STIC) and applying a new display technology (OmniView). Methods We developed an algorithm to image standard fetal echocardiographic planes by drawing four dissecting lines through the longitudinal view of the ductal arch contained in a STIC volume dataset. Three of the lines are locked to provide simultaneous visualization of targeted planes, and the fourth line (unlocked) “swings” through the ductal arch image (“swing technique”), providing an infinite number of cardiac planes in sequence. Each line generated the following plane(s): 1) Line 1: three-vessels and trachea view; 2) Line 2: five-chamber view and long axis view of the aorta (obtained by rotation of the five-chamber view on the y-axis); 3) Line 3: four-chamber view; and 4) “Swing” line: three-vessels and trachea view, five-chamber view and/or long axis view of the aorta, four-chamber view, and stomach. The algorithm was then tested in 50 normal hearts (15.3 – 40 weeks of gestation) and visualization rates for cardiac diagnostic planes were calculated. To determine if the algorithm could identify planes that departed from the normal images, we tested the algorithm in 5 cases with proven congenital heart defects. Results In normal cases, the FAST Echo algorithm (3 locked lines and rotation of the five-chamber view on the y-axis) was able to generate the intended planes (longitudinal view of the ductal arch, pulmonary artery, three-vessels and trachea view, five-chamber view, long axis view of the aorta, four-chamber view): 1) individually in 100% of cases [except for the three-vessel and trachea view, which was seen in 98% (49/50)]; and 2) simultaneously in 98% (49/50). 
The “swing technique” was able to generate the three-vessels and trachea view, five-chamber view and/or long axis view of the aorta, four-chamber view, and stomach in 100% of normal cases. In the abnormal cases, the FAST Echo algorithm demonstrated the cardiac defects and displayed views that deviated from what was expected from the examination of normal hearts. The “swing technique” was useful in demonstrating the specific diagnosis due to visualization of an infinite number of cardiac planes in sequence. Conclusions This novel and simple algorithm can be used to visualize standard fetal echocardiographic planes in normal fetal hearts. The FAST Echo algorithm may simplify examination of the fetal heart and could reduce operator dependency. Using this algorithm, the inability to obtain expected views or the appearance of abnormal views in the generated planes should raise the index of suspicion for congenital heart disease. PMID:20878671
Yeo, L; Romero, R; Jodicke, C; Oggè, G; Lee, W; Kusanovic, J P; Vaisbuch, E; Hassan, S
2011-04-01
To describe a novel and simple algorithm (four-chamber view and 'swing technique' (FAST) echo) for visualization of standard diagnostic planes of fetal echocardiography from dataset volumes obtained with spatiotemporal image correlation (STIC) and applying a new display technology (OmniView). We developed an algorithm to image standard fetal echocardiographic planes by drawing four dissecting lines through the longitudinal view of the ductal arch contained in a STIC volume dataset. Three of the lines are locked to provide simultaneous visualization of targeted planes, and the fourth line (unlocked) 'swings' through the ductal arch image (swing technique), providing an infinite number of cardiac planes in sequence. Each line generates the following plane(s): (a) Line 1: three-vessels and trachea view; (b) Line 2: five-chamber view and long-axis view of the aorta (obtained by rotation of the five-chamber view on the y-axis); (c) Line 3: four-chamber view; and (d) 'swing line': three-vessels and trachea view, five-chamber view and/or long-axis view of the aorta, four-chamber view and stomach. The algorithm was then tested in 50 normal hearts in fetuses at 15.3-40 weeks' gestation and visualization rates for cardiac diagnostic planes were calculated. To determine whether the algorithm could identify planes that departed from the normal images, we tested the algorithm in five cases with proven congenital heart defects. In normal cases, the FAST echo algorithm (three locked lines and rotation of the five-chamber view on the y-axis) was able to generate the intended planes (longitudinal view of the ductal arch, pulmonary artery, three-vessels and trachea view, five-chamber view, long-axis view of the aorta, four-chamber view) individually in 100% of cases (except for the three-vessels and trachea view, which was seen in 98% (49/50)) and simultaneously in 98% (49/50). 
The swing technique was able to generate the three-vessels and trachea view, five-chamber view and/or long-axis view of the aorta, four-chamber view and stomach in 100% of normal cases. In the abnormal cases, the FAST echo algorithm demonstrated the cardiac defects and displayed views that deviated from what was expected from the examination of normal hearts. The swing technique was useful for demonstrating the specific diagnosis due to visualization of an infinite number of cardiac planes in sequence. This novel and simple algorithm can be used to visualize standard fetal echocardiographic planes in normal fetal hearts. The FAST echo algorithm may simplify examination of the fetal heart and could reduce operator dependency. Using this algorithm, inability to obtain expected views or the appearance of abnormal views in the generated planes should raise the index of suspicion for congenital heart disease. Copyright © 2011 ISUOG. Published by John Wiley & Sons, Ltd.
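The idea of a dissecting line generating a plane from a volume can be illustrated with a toy resampler: a line drawn on one longitudinal slice, swept along the remaining axis, defines the extracted plane. This is only a nearest-neighbour sketch with invented names; OmniView's actual reslicing of a STIC volume uses proper interpolation and arbitrary plane orientations:

```python
import numpy as np

def plane_through_line(volume, p0, p1, n_samples):
    """Extract the plane defined by sweeping the line p0 -> p1 (drawn on
    a y-z longitudinal slice, in voxel coordinates) along the x axis,
    using nearest-neighbour sampling."""
    t = np.linspace(0.0, 1.0, n_samples)
    # sample points along the drawn line, rounded to voxel indices
    ys = np.rint(p0[0] + t * (p1[0] - p0[0])).astype(int)
    zs = np.rint(p0[1] + t * (p1[1] - p0[1])).astype(int)
    return volume[:, ys, zs]  # shape (n_x, n_samples)
```

Locking three such lines and letting a fourth sweep ("swing") through the ductal arch corresponds to calling this extraction repeatedly with a rotating line, producing the sequence of cardiac planes described in the abstract.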
NASA Astrophysics Data System (ADS)
Wang, Tonghe; Zhu, Lei
2016-09-01
Conventional dual-energy CT (DECT) reconstruction requires two full-size projection datasets with two different energy spectra. In this study, we propose an iterative algorithm to enable a new data acquisition scheme that requires one full scan and a second sparse-view scan, for a potential reduction in imaging dose and engineering cost of DECT. A bilateral filter is calculated as a similarity matrix from the first full-scan CT image to quantify the similarity between any two pixels, and is assumed unchanged on the second CT image since the DECT scans are performed on the same object. The second CT image, from reduced projections, is reconstructed by an iterative algorithm which updates the image by minimizing the total variation of the difference between the image and its filtered version under the similarity matrix, subject to a data fidelity constraint. As the redundant structural information of the two CT images is contained in the similarity matrix, we refer to the algorithm as structure-preserving iterative reconstruction (SPIR). The proposed method is evaluated on both digital and physical phantoms, and is compared with the filtered-backprojection (FBP) method, the conventional total-variation-regularization-based algorithm (TVR), and prior-image-constrained compressed sensing (PICCS). SPIR with a second 10-view scan reduces the image noise standard deviation by an order of magnitude while maintaining the same spatial resolution as the full-view FBP image. SPIR substantially improves on TVR in the reconstruction accuracy of a 10-view scan, decreasing the reconstruction error from 6.18% to 1.33%, and outperforms TVR in spatial resolution at 50- and 20-view scans, with the frequency at which the modulation transfer function drops to 10% higher by an average factor of 4. Compared with the 20-view-scan PICCS result, the SPIR image has 7 times lower noise standard deviation with similar spatial resolution.
The electron density map obtained from the SPIR-based DECT images with a second 10-view scan has an average error of less than 1%.
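A 1-D toy version of the SPIR idea can be sketched as alternating a data-fidelity gradient step with a relaxation toward the image filtered by the similarity matrix built from the first-pass scan. Everything here (the dense similarity matrix, the relaxation in place of the TV minimization, the parameter values) is an illustrative simplification of the paper's formulation:

```python
import numpy as np

def similarity_matrix(prior, sigma):
    # row-normalized Gaussian similarity of prior-image pixel values,
    # a dense 1-D stand-in for the paper's bilateral filter
    d = prior[:, None] - prior[None, :]
    W = np.exp(-d ** 2 / (2 * sigma ** 2))
    return W / W.sum(axis=1, keepdims=True)

def spir_sketch(A, b, prior, n_iter=300, step=0.5, lam=0.5, sigma=0.1):
    """Alternate a gradient step on ||Au - b||^2 (data fidelity from the
    sparse-view scan) with a pull toward W @ u (structure carried over
    from the full scan). A toy analogue, not the SPIR algorithm itself."""
    W = similarity_matrix(np.asarray(prior, dtype=float), sigma)
    u = np.zeros(len(prior))
    for _ in range(n_iter):
        u = u - step * A.T @ (A @ u - b)    # data fidelity
        u = (1 - lam) * u + lam * (W @ u)   # structure preservation
    return u
```

Even with only two sampled pixels, the similarity weights propagate the measured values across each region that was homogeneous in the prior image, which is the mechanism behind SPIR's noise reduction at heavily reduced view counts.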
Efficient fabrication method of nano-grating for 3D holographic display with full parallax views.
Wan, Wenqiang; Qiao, Wen; Huang, Wenbin; Zhu, Ming; Fang, Zongbao; Pu, Donglin; Ye, Yan; Liu, Yanhua; Chen, Linsen
2016-03-21
Without any special glasses, multiview 3D displays based on diffractive optics can present high-resolution, full-parallax 3D images over an ultra-wide viewing angle. The enabling optical component, namely the phase plate, can produce arbitrarily distributed view zones by careful design of the orientation and period of each nano-grating pixel. However, such 3D display screens have been restricted to a limited size owing to the time-consuming process of fabricating nano-gratings on the phase plate. In this paper, we propose and develop a lithography system that can fabricate the phase plate efficiently. We made two phase plates with full nano-grating pixel coverage at a speed of 20 mm²/min, a 500-fold increase in efficiency compared to E-beam lithography. One 2.5-inch phase plate generated 9-view 3D images with horizontal parallax, while the other 6-inch phase plate produced 64-view 3D images with full parallax. The angular divergence in the horizontal and vertical axes was 1.5 degrees and 1.25 degrees, respectively, slightly larger than the value of 1.2 degrees simulated by the Finite-Difference Time-Domain (FDTD) method. The intensity variation was less than 10% for each viewpoint, consistent with the simulation results. On top of each phase plate, a high-resolution binary masking pattern containing the amplitude information of all viewing zones was precisely aligned. We achieved a resolution of 400 pixels/inch and a viewing angle of 40 degrees for 9-view 3D images with horizontal parallax. In another prototype, the resolution of each view was 160 pixels/inch and the viewing angle was 50 degrees for 64-view 3D images with full parallax. As demonstrated in the experiments, the homemade lithography system provides the key fabrication technology for multiview 3D holographic displays.
A Low-Cost PC-Based Image Workstation for Dynamic Interactive Display of Three-Dimensional Anatomy
NASA Astrophysics Data System (ADS)
Barrett, William A.; Raya, Sai P.; Udupa, Jayaram K.
1989-05-01
A system for interactive definition, automated extraction, and dynamic interactive display of three-dimensional anatomy has been developed and implemented on a low-cost PC-based image workstation. An iconic display is used for staging predefined image sequences through specified increments of tilt and rotation over a solid viewing angle. Use of a fast processor facilitates rapid extraction and rendering of the anatomy into predefined image views. These views are formatted into a display matrix in a large image memory for rapid interactive selection and display of arbitrary spatially adjacent images within the viewing angle, thereby providing motion parallax depth cueing for efficient and accurate perception of true three-dimensional shape, size, structure, and spatial interrelationships of the imaged anatomy. The visual effect is that of holding and rotating the anatomy in the hand.
Digital 3D holographic display using scattering layers for enhanced viewing angle and image size
NASA Astrophysics Data System (ADS)
Yu, Hyeonseung; Lee, KyeoReh; Park, Jongchan; Park, YongKeun
2017-05-01
In digital 3D holographic displays, the generation of realistic 3D images has been hindered by limited viewing angle and image size. Here we demonstrate a digital 3D holographic display using volume speckle fields produced by scattering layers in which both the viewing angle and the image size are greatly enhanced. Although volume speckle fields exhibit random distributions, the transmitted speckle fields have a linear and deterministic relationship with the input field. By modulating the incident wavefront with a digital micro-mirror device, volume speckle patterns are controlled to generate 3D images of micrometer-size optical foci with 35° viewing angle in a volume of 2 cm × 2 cm × 2 cm.
MyFreePACS: a free web-based radiology image storage and viewing tool.
de Regt, David; Weinberger, Ed
2004-08-01
We developed an easy-to-use method for central storage and subsequent viewing of radiology images for use on any PC equipped with Internet Explorer. We developed MyFreePACS, a program that uses a DICOM server to receive and store images and transmit them over the Web to the MyFreePACS Web client. The MyFreePACS Web client is a Web page that uses an ActiveX control for viewing and manipulating images. The client contains many of the tools found in modern image viewing stations including 3D localization and multiplanar reformation. The system is built entirely with free components and is freely available for download and installation from the Web at www.myfreepacs.com.
NASA Astrophysics Data System (ADS)
Díaz, Elkin; Arguello, Henry
2016-05-01
Urban ecosystem studies require monitoring, controlling, and planning to analyze building density, urban density, urban planning, atmospheric modeling, and land use. In urban planning, there are many methods for building height estimation using optical remote sensing images. These methods, however, depend heavily on sun illumination and cloud-free weather. In contrast, high-resolution synthetic aperture radar provides images independent of daytime and weather conditions, although these images rely on special hardware and expensive acquisition. Most of the biggest cities around the world have been photographed by Google Street View under different conditions. Thus, thousands of images from the principal streets of a city can be accessed online. The availability of this and similar rich city imagery, such as StreetSide from Microsoft, represents a huge opportunity in computer vision, because these images can be used as input in many applications such as 3D modeling, segmentation, recognition, and stereo correspondence. This paper proposes a novel algorithm to estimate building heights using public Google Street View imagery. The objective of this work is to obtain thousands of geo-referenced images from Google Street View using a representational state transfer system and to estimate the average building height using single-view metrology. Furthermore, the resulting measurements and image metadata are used to derive a layer of heights in a Google map available online. The experimental results show that the proposed algorithm can estimate an accurate average building height map from thousands of Google Street View images of any city.
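Single-view metrology recovers the heights of vertical objects standing on a flat ground plane from one image, given the horizon line and one reference object of known height. The paper's exact formulation is not given in the abstract; the sketch below is the standard cross-ratio relation under those assumptions, with all image y-coordinates measured in one common frame. The function name and argument names are illustrative.

```python
def building_height(ref_height_m, ref_base_y, ref_top_y,
                    bld_base_y, bld_top_y, horizon_y):
    """Estimate a building's height from one image (single-view metrology).

    Assumes a flat ground plane and vertical objects: each object's image
    height, relative to the distance from its base to the horizon line,
    is proportional to its real height divided by the camera height. The
    reference object of known height calibrates that proportionality.
    """
    ref = (ref_top_y - ref_base_y) * (horizon_y - bld_base_y)
    bld = (bld_top_y - bld_base_y) * (horizon_y - ref_base_y)
    return ref_height_m * bld / ref
```

For instance, with a 2 m reference and coordinates consistent with a camera 2.5 m above the ground, a 10 m building is recovered exactly; real Street View imagery would additionally require detecting the horizon and the base/top points.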
Intra- and Interobserver Variability of Cochlear Length Measurements in Clinical CT.
Iyaniwura, John E; Elfarnawany, Mai; Riyahi-Alam, Sadegh; Sharma, Manas; Kassam, Zahra; Bureau, Yves; Parnes, Lorne S; Ladak, Hanif M; Agrawal, Sumit K
2017-07-01
The cochlear A-value measurement exhibits significant inter- and intraobserver variability, and its accuracy is dependent on the visualization method in clinical computed tomography (CT) images of the cochlea. An accurate estimate of the cochlear duct length (CDL) can be used to determine electrode choice and to frequency-map the cochlea based on the Greenwood equation. Studies have described estimating the CDL using a single A-value measurement; however, the observer variability had not been assessed. Clinical and micro-CT images of 20 cadaveric cochleae were acquired. Four specialists measured A-values on clinical CT images using both standard views and multiplanar reconstructed (MPR) views. Measurements were repeated to assess intraobserver variability. Observer variabilities were evaluated using intra-class correlation and absolute differences. Accuracy was evaluated by comparison to the gold-standard micro-CT images of the same specimens. Interobserver variability was good (average absolute difference: 0.77 ± 0.42 mm) using standard views and fair (average absolute difference: 0.90 ± 0.31 mm) using MPR views. Intraobserver variability had an average absolute difference of 0.31 ± 0.09 mm for the standard views and 0.38 ± 0.17 mm for the MPR views. MPR view measurements were more accurate than standard views, with average relative errors of 9.5% and 14.5%, respectively. There was significant observer variability in A-value measurements using both the standard and MPR views. Creating the MPR views increased variability between experts; however, MPR views yielded more accurate results. Automated A-value measurement algorithms may help to reduce variability and increase accuracy in the future.
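The Greenwood equation mentioned above maps a position along the cochlear duct to its characteristic frequency, which is how a CDL estimate supports frequency mapping. A minimal sketch follows, using the commonly cited human constants (A = 165.4 Hz, a = 2.1, k = 0.88) and a hypothetical electrode span; the conversion from an A-value measurement to CDL is outside this sketch and is not shown.

```python
# Greenwood function: f(x) = A * (10**(a*x) - k), where x is the relative
# distance from the cochlear apex (0 = apex, 1 = base). The constants below
# are the commonly cited human values; treat them as assumptions here.
def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Characteristic frequency (Hz) at relative cochlear position x."""
    return A * (10 ** (a * x) - k)

# Example: frequency map for a hypothetical electrode array whose contacts
# span 40%-90% of the duct, measured from the apex.
electrode_positions = [0.4 + 0.1 * i for i in range(6)]
frequency_map = [greenwood_frequency(x) for x in electrode_positions]
```

The map runs from roughly 1 kHz near the apical contact up past 10 kHz toward the base, matching the tonotopic ordering the abstract relies on.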
Esthetic smile preferences and the orientation of the maxillary occlusal plane.
Kattadiyil, Mathew T; Goodacre, Charles J; Naylor, W Patrick; Maveli, Thomas C
2012-12-01
The anteroposterior orientation of the maxillary occlusal plane has an important role in the creation, assessment, and perception of an esthetic smile. However, the effect of the angle at which this plane is visualized (the viewing angle) in a broad smile has not been quantified. The purpose of this study was to assess the esthetic preferences of dental professionals and nondentists by using 3 viewing angles of the anteroposterior orientation of the maxillary occlusal plane. After Institutional Review Board approval, standardized digital photographic images of the smiles of 100 participants were recorded by simultaneously triggering 3 cameras set at different viewing angles. The top camera was positioned 10 degrees above the occlusal plane (camera #1, Top view); the center camera was positioned at the level of the occlusal plane (camera #2, Center view); and the bottom camera was located 10 degrees below the occlusal plane (camera #3, Bottom view). Forty-two dental professionals and 31 nondentists (persons from the general population) independently evaluated digital images of each participant's smile captured from the Top view, Center view, and Bottom view. The 73 evaluators were asked individually through a questionnaire to rank the 3 photographic images of each patient as 'most pleasing,' 'somewhat pleasing,' or 'least pleasing,' with most pleasing being the most esthetic view and the preferred orientation of the occlusal plane. The resulting esthetic preferences were statistically analyzed by using the Friedman test. In addition, the participants were asked to rank their own images from the 3 viewing angles as 'most pleasing,' 'somewhat pleasing,' and 'least pleasing.' The 73 evaluators found statistically significant differences in the esthetic preferences between the Top and Bottom views and between the Center and Bottom views (P<.001). No significant differences were found between the Top and Center views. 
The Top position was marginally preferred over the Center, and both were significantly preferred over the Bottom position. When the participants evaluated their own smiles, a significantly greater number (P< .001) preferred the Top view over the Center or the Bottom views. No significant differences were found in preferences based on the demographics of the evaluators when comparing age, education, gender, profession, and race. The esthetic preference for the maxillary occlusal plane was influenced by the viewing angle with the higher (Top) and center views preferred by both dental and nondental evaluators. The participants themselves preferred the higher view of their smile significantly more often than the center or lower angle views (P<.001). Copyright © 2012 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
Collecting and Animating Online Satellite Images.
ERIC Educational Resources Information Center
Irons, Ralph
1995-01-01
Describes how to generate automated classroom resources from the Internet. Topics covered include viewing animated satellite weather images using file transfer protocol (FTP); sources of images on the Internet; shareware available for viewing images; software for automating image retrieval; procedures for animating satellite images; and storing…
Optics of wide-angle panoramic viewing system-assisted vitreous surgery.
Chalam, Kakarla V; Shah, Vinay A
2004-01-01
The purpose of this article is to describe the optics of a contact wide-angle lens system with a stereo reinverter for vitreous surgery. A panoramic viewing system is made up of two components: an indirect ophthalmoscopy lens system for fundus image viewing, placed on the patient's cornea as a contact lens, and a separate removable prism system for reinversion of the image, mounted on the microscope above the zooming system. The system provides a 104-degree field of view in a phakic emmetropic eye with minification, which can be magnified by the operating microscope. It permits a binocular stereoptic view even through a small pupil (3 mm or larger). In an air-filled phakic eye, the field of view increases to approximately 130 degrees. The image of the patient's fundus is reinverted to form a true, erect, stereoscopic image by the reinversion system. In conclusion, this system permits a wide-angle panoramic view of the surgical field. The contact lens neutralizes the optical irregularities of the corneal surface and allows improved visualization in eyes with irregular astigmatism induced by corneal scars. Excellent visualization is achieved in complex clinical situations such as miotic pupils, lenticular opacities, and air-filled phakic eyes.
Dawn Color Topography of Ahuna Mons on Ceres
2016-03-11
These color topographic views show variations in surface height around Ahuna Mons, a mysterious mountain on Ceres. The views are colorized versions of PIA20348 and PIA20349. They represent an update to the view in PIA19976, which showed the mountain using data from an earlier, higher orbit. Both views were made using images taken by NASA's Dawn spacecraft during its low-altitude mapping orbit, at a distance of about 240 miles (385 kilometers) from the surface. The resolution of the component images is about 120 feet (35 meters) per pixel. Elevations span a range of about 5.5 miles (9 kilometers) from the lowest places in the region to the highest terrains. Blue represents the lowest elevation, and brown is the highest. The streaks running down the side of the mountain, which appear white in the grayscale view, are especially bright parts of the surface (the brightness does not relate to elevation). The elevations are from a shape model generated using images taken at varying sun and viewing angles during Dawn's lower-resolution, high-altitude mapping orbit (HAMO) phase. The side perspective view was generated by draping the image mosaics over the shape model. http://photojournal.jpl.nasa.gov/catalog/PIA20399
View subspaces for indexing and retrieval of 3D models
NASA Astrophysics Data System (ADS)
Dutagaci, Helin; Godil, Afzal; Sankur, Bülent; Yemez, Yücel
2010-02-01
View-based indexing schemes for 3D object retrieval are gaining popularity since they provide good retrieval results. These schemes are coherent with the theory that humans recognize objects based on their 2D appearances. The view-based techniques also allow users to search with various queries such as binary images, range images, and even 2D sketches. Previous view-based techniques use classical 2D shape descriptors such as Fourier invariants, Zernike moments, Scale Invariant Feature Transform-based local features, and 2D Digital Fourier Transform coefficients. These methods describe each object independently of the others. In this work, we explore data-driven subspace models, such as Principal Component Analysis, Independent Component Analysis, and Nonnegative Matrix Factorization, to describe the shape information of the views. We treat the depth images obtained from various points of the view sphere as 2D intensity images and train a subspace to extract the inherent structure of the views within a database. We also show the benefit of categorizing shapes according to their eigenvalue spread. Both the shape categorization and data-driven feature set conjectures are tested on the PSB database and compared with competing view-based 3D shape retrieval algorithms.
Intermediate view synthesis algorithm using mesh clustering for rectangular multiview camera system
NASA Astrophysics Data System (ADS)
Choi, Byeongho; Kim, Taewan; Oh, Kwan-Jung; Ho, Yo-Sung; Choi, Jong-Soo
2010-02-01
A multiview video-based three-dimensional (3-D) video system offers a realistic impression and free view navigation to the user. Efficient compression and intermediate view synthesis are key technologies, since 3-D video systems deal with multiple views. We propose an intermediate view synthesis method using a rectangular multiview camera system that is suitable for realizing 3-D video systems. The rectangular multiview camera system not only offers free view navigation both horizontally and vertically but also provides three reference views, namely left, right, and bottom, for intermediate view synthesis. The proposed view synthesis method first represents each reference view as a mesh and then finds the best disparity for each mesh element by stereo matching between reference views. Before stereo matching, we separate the virtual image to be synthesized into several regions to enhance the accuracy of the disparities. The mesh elements are classified into foreground and background groups by disparity values and then affine transformed. Experiments confirm that the proposed method synthesizes high-quality images and is suitable for 3-D video systems.
Test Image of Earth Rocks by Mars Camera Stereo
2010-11-16
This stereo view of terrestrial rocks combines two images taken by a test twin of the Mars Hand Lens Imager (MAHLI) camera on NASA's Mars Science Laboratory. 3D glasses are necessary to view this image.
Social relevance drives viewing behavior independent of low-level salience in rhesus macaques
Solyst, James A.; Buffalo, Elizabeth A.
2014-01-01
Quantifying attention to social stimuli during the viewing of complex social scenes with eye tracking has proven to be a sensitive method in the diagnosis of autism spectrum disorders years before average clinical diagnosis. Rhesus macaques provide an ideal model for understanding the mechanisms underlying social viewing behavior, but to date no comparable behavioral task has been developed for use in monkeys. Using a novel scene-viewing task, we monitored the gaze of three rhesus macaques while they freely viewed well-controlled composed social scenes and analyzed the time spent viewing objects and monkeys. In each of six behavioral sessions, monkeys viewed a set of 90 images (540 unique scenes) with each image presented twice. In two-thirds of the repeated scenes, either a monkey or an object was replaced with a novel item (manipulated scenes). When viewing a repeated scene, monkeys made longer fixations and shorter saccades, shifting from a rapid orienting to global scene contents to a more local analysis of fewer items. In addition to this repetition effect, in manipulated scenes, monkeys demonstrated robust memory by spending more time viewing the replaced items. By analyzing attention to specific scene content, we found that monkeys strongly preferred to view conspecifics and that this was not related to their salience in terms of low-level image features. A model-free analysis of viewing statistics found that monkeys that were viewed earlier and longer had direct gaze and redder sex skin around their face and rump, two important visual social cues. These data provide a quantification of viewing strategy, memory and social preferences in rhesus macaques viewing complex social scenes, and they provide an important baseline with which to compare to the effects of therapeutics aimed at enhancing social cognition. PMID:25414633
NASA Astrophysics Data System (ADS)
Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno
2015-09-01
For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction, by introducing sparse view CT data acquisition. We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 s. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical Feldkamp, Davis and Kress (FDK) algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-square error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction, with little or no measurable effect on the PET image.
For the four ultra-low dose levels simulated, sparse view protocols with 41 and 24 views best balanced the tradeoff between electronic noise and aliasing artifacts. In terms of lesion activity error and ensemble RMSE of the PET images, these two protocols, when combined with MBIR, are able to provide results that are comparable to the baseline full-dose CT scan. View interpolation significantly improves the performance of FDK reconstruction but was not necessary for MBIR. With the more technically feasible continuous exposure data acquisition, the CT images show an increase in azimuthal blur compared to tube pulsing. However, this blurring generally does not have a measurable impact on PET reconstructed images. Our simulations demonstrated that ultra-low-dose CT-based attenuation correction can be achieved at dose levels on the order of 0.044 mAs with little impact on PET image quality. Highly sparse 41- or 24-view ultra-low dose CT scans are feasible for PET attenuation correction, providing the best tradeoff between electronic noise and view aliasing artifacts. The continuous exposure acquisition mode could potentially be implemented in current commercially available scanners, thus enabling sparse view data acquisition without requiring x-ray tubes capable of operating in a pulsing mode.
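The MBIR cost described above, penalized weighted least squares with a TV-type penalty, can be illustrated on a toy 1D problem. This is not the authors' implementation: the system matrix, weights, step size, iteration count, and smoothing epsilon below are all illustrative assumptions, and the TV term uses a standard smoothed approximation so that plain gradient descent applies.

```python
import numpy as np

def pwls_tv(A, y, w, beta=0.05, step=5e-3, iters=1000):
    """Toy PWLS-TV reconstruction of a 1D signal by gradient descent.

    Minimizes  (Ax - y)^T W (Ax - y) / 2  +  beta * TV_eps(x),
    where TV_eps is the smoothed total variation sum_i sqrt(d_i^2 + eps)
    with d_i = x_{i+1} - x_i. All parameters are illustrative choices.
    """
    x = np.zeros(A.shape[1])
    eps = 1e-6
    for _ in range(iters):
        grad_fid = A.T @ (w * (A @ x - y))     # weighted data-fidelity gradient
        d = np.diff(x)
        s = d / np.sqrt(d ** 2 + eps)          # derivative of smoothed |d_i|
        tv_grad = np.zeros_like(x)
        tv_grad[:-1] -= s                      # d TV / d x_i gets -s_i ...
        tv_grad[1:] += s                       # ... and +s_{i-1}
        x -= step * (grad_fid + beta * tv_grad)
    return x
```

With fewer measurement rows than unknowns (the analogue of a sparse-view scan), the TV penalty steers the solution toward piecewise-constant signals, which is the same mechanism that suppresses view-aliasing streaks in sparse-view CT.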
Sliding window adaptive histogram equalization of intraoral radiographs: effect on image quality.
Sund, T; Møystad, A
2006-05-01
To investigate whether contrast enhancement by non-interactive, sliding window adaptive histogram equalization (SWAHE) can enhance the image quality of intraoral radiographs in the dental clinic. Three dentists read 22 periapical and 12 bitewing storage phosphor (SP) radiographs. For the periapical readings they graded the quality of the examination with regard to visually locating the root apex. For the bitewing readings they registered all occurrences of approximal caries on a confidence scale. Each reading was first done on an unprocessed radiograph ("single-view"), and then re-done with the image processed with SWAHE displayed beside the unprocessed version ("twin-view"). The processing parameters for SWAHE were the same for all the images. For the periapical examinations, twin-view was judged to raise the image quality for 52% of those cases where the single-view quality was below the maximum. For the bitewing radiographs, there was a change of caries classification (both positive and negative) with twin-view in 19% of the cases, but with only a 3% net increase in the total number of caries registrations. For both examinations interobserver variance was unaffected. Non-interactive SWAHE applied to dental SP radiographs produces a supplemental contrast enhanced image which in twin-view reading improves the image quality of periapical examinations. SWAHE also affects caries diagnosis of bitewing images, and further study using a gold standard is warranted.
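SWAHE computes, for each pixel, an equalization mapping from the histogram of a window sliding over the image. The sketch below is a deliberately naive reading of that idea, not the paper's implementation (real SWAHE updates the window histogram incrementally for speed): each pixel is mapped to its rank within its local window, i.e. the local cumulative histogram evaluated at the pixel value. The window radius is an arbitrary assumption.

```python
import numpy as np

def swahe(img, radius=8):
    """Naive sliding-window adaptive histogram equalization.

    Maps each pixel through the empirical CDF of its local window, so
    low-contrast regions are stretched toward the full [0, 1] range.
    Illustrative only; production code uses incremental histograms.
    """
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            win = img[y0:y1, x0:x1]
            # rank of the centre pixel within its window = local CDF value
            out[y, x] = (win <= img[y, x]).mean()
    return out
```

Applied to a low-contrast radiograph, this stretches local grey-level differences (such as the bone/background boundary near a root apex) toward the display's full range, which is the effect the twin-view reading exploits.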
MISR Stereo Imaging Distinguishes Smoke from Cloud
NASA Technical Reports Server (NTRS)
2000-01-01
These views of western Alaska were acquired by MISR on June 25, 2000, during Terra orbit 2775. The images cover an area of about 150 kilometers x 225 kilometers and have been oriented with north to the left. The left image is from the vertical-viewing (nadir) camera, whereas the right image is a stereo 'anaglyph' that combines data from the forward-viewing 45-degree and 60-degree cameras. This image appears three-dimensional when viewed through red/blue glasses with the red filter over the left eye. It may help to darken the room lights when viewing the image on a computer screen. The Yukon River is seen wending its way from upper left to lower right. A forest fire in the Kaiyuh Mountains produced the long smoke plume that originates below and to the right of image center. In the nadir view, the high cirrus clouds at the top of the image and the smoke plume are similar in appearance, and the lack of vertical information makes them hard to differentiate. Viewing the right-hand image with stereo glasses, on the other hand, demonstrates that the scene consists of several vertically stratified layers, including the surface terrain, the smoke, some scattered cumulus clouds, and streaks of high, thin cirrus. This added dimensionality is one of the ways MISR data helps scientists identify and classify various components of terrestrial scenes. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Atmospheric Science Data Center
2013-04-17
Coccoliths in the Celtic Sea
This image is a natural-color view of the Celtic Sea and English Channel regions, and was acquired by the Multi-angle Imaging ...
Assessing the Relationships Among Forgiveness by God, God Images, and Death Anxiety.
Krause, Neal; Hill, Peter C
2018-01-01
Previous research suggests that people who feel forgiven by God may experience lower levels of death anxiety. The purpose of the current study is to contribute to this work by assessing whether the relationship between forgiveness by God and death anxiety varies according to how people view God. Three images of God are assessed: a pantheistic view of God, a theistic view of God, and a deistic view of God. Data from a nationwide survey conducted in 2014 (N = 2,650) suggest that the relationship between forgiveness by God and death anxiety is strongest among people with a theistic view of God, significantly weaker among people with a pantheistic view of God, and not significant among individuals with a deistic view of God. The findings point to the importance of taking views of God into account when assessing the relationship between forgiveness by God and death anxiety.
Scalable screen-size enlargement by multi-channel viewing-zone scanning holography.
Takaki, Yasuhiro; Nakaoka, Mitsuki
2016-08-08
Viewing-zone scanning holographic displays can enlarge both the screen size and the viewing zone. However, limitations exist in the screen size enlargement process even if the viewing zone is effectively enlarged. This study proposes a multi-channel viewing-zone scanning holographic display comprising multiple projection systems and a planar scanner to enable the scalable enlargement of the screen size. Each projection system produces an enlarged image of the screen of a MEMS spatial light modulator. The multiple enlarged images produced by the multiple projection systems are seamlessly tiled on the planar scanner. This screen size enlargement process reduces the viewing zones of the projection systems, which are horizontally scanned by the planar scanner comprising a rotating off-axis lens and a vertical diffuser to enlarge the viewing zone. A screen size of 7.4 in. and a viewing-zone angle of 43.0° are demonstrated.
Radial line method for rear-view mirror distortion detection
NASA Astrophysics Data System (ADS)
Rahmah, Fitri; Kusumawardhani, Apriani; Setijono, Heru; Hatta, Agus M.; Irwansyah, .
2015-01-01
An image of an object can be distorted by a defect in a mirror. A rear-view mirror is an important component for vehicle safety, and one of its standard parameters is the distortion factor. This paper presents a radial line method for distortion detection in rear-view mirrors. The rear-view mirror was tested using a system consisting of a webcam sensor and an image-processing unit. In the image-processing unit, the captured image from the webcam was pre-processed using smoothing and sharpening techniques, and then the radial line method was used to determine the distortion factor. It was successfully demonstrated that the radial line method can be used to determine the distortion factor. This detection system is useful for implementation in, for example, Indonesia's automotive component industry, where manual inspection is still used.
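The abstract above does not spell out how the distortion factor is computed, but the idea of checking an imaged radial line for straightness can be sketched as follows. The metric, function name, and sample data are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def distortion_factor(points):
    """Illustrative distortion metric for one imaged radial line
    (hypothetical; the paper does not give its exact formula).

    points: (N, 2) array of (x, y) pixel coordinates detected along a
    line that is straight on the test target. A total-least-squares
    line is fitted, and the maximum perpendicular deviation, as a
    fraction of the line's length, is reported."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)   # principal direction fit
    direction = vt[0]
    normal = np.array([-direction[1], direction[0]])
    deviation = np.abs((pts - centroid) @ normal)
    along = (pts - centroid) @ direction
    length = along.max() - along.min()
    return deviation.max() / length

t = np.linspace(0.0, 100.0, 50)
straight = np.stack([t, 2.0 * t], axis=1)                         # ideal mirror
bowed = np.stack([t, 2.0 * t + 0.001 * (t - 50.0) ** 2], axis=1)  # defective one
df_straight = distortion_factor(straight)   # essentially zero
df_bowed = distortion_factor(bowed)         # clearly nonzero
```

A mirror would then pass or fail by comparing this factor against a tolerance from the applicable standard.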
Kim, Jin-su; Moon, Yong-ju; Choi, Yun Sun; Park, Young Uk; Park, Seung Min; Lee, Kyung Tai
2012-01-01
The purpose of the present study was to clarify the usefulness of the oblique axial scan parallel to the course of the anterior talofibular ligament in magnetic resonance imaging of the anterior talofibular ligament in patients with chronic ankle instability. We evaluated this anterior talofibular ligament view and routine axial magnetic resonance imaging planes of 115 ankles. We diagnosed the grade of the anterior talofibular ligament injury and confirmed full-length views of the anterior talofibular ligament. Associated lesions were also checked. The subjective diagnostic convenience of associated problems was determined. The full-length view of the anterior talofibular ligament was checked in 85 (73.9%) patients in the routine axial view and 112 (97.4%) patients in the anterior talofibular ligament view. The grade of injury increased in the anterior talofibular ligament view in 26 (22.6%) patients compared with the routine axial view. There were 64 associated injuries. The anterior inferior tibiofibular ligament, posterior inferior tibiofibular ligament, and posterior tibialis tendinitis were more easily diagnosed on the routine axial view than on the anterior talofibular ligament view. An additional anterior talofibular ligament view is useful in the evaluation of the anterior talofibular ligament in patients with chronic ankle instability. Copyright © 2012 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
Detail view of the interior of the flight deck looking ...
Detail view of the interior of the flight deck looking forward showing the overhead control panels. Note that the flight deck windows have protective covers over them in this view. This image can be digitally stitched with image HAER No. TX-116-A-19 to expand the view to include the Commander and Pilot positions during ascent and reentry and landing. This view was taken in the Orbiter Processing Facility at the Kennedy Space Center. - Space Transportation System, Orbiter Discovery (OV-103), Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX
GeoEye(TradeMark) Corporate Overview
NASA Technical Reports Server (NTRS)
Jones, Dennis
2007-01-01
This viewgraph presentation gives a corporate overview of GeoEye, the world's largest commercial remote sensing company. The contents include: 1) About GeoEye; 2) GeoEye Mission; 3) The Company; 4) Company Summary; 5) U.S. Government Commitment; 6) GeoEye Constellation; 7) Other Imaging Resources; 8) OrbView-3 & OrbView-2; 9) OrbView-3 System Architecture; 10) OrbView-3; 11) OrbView-2; 12) IKONOS; 13) Largest Image Archive in the World; 14) GeoEye-1; 15) Best-In-Class Development Team; 16) Highest Performance Available in the Commercial Market; and 17) Key Themes
Atmospheric Science Data Center
2013-04-19
article title: MISR Global Images See the Light of Day ... camera and combines data from the red, green and blue spectral bands to create a natural color image. The central view combines ...
Atmospheric Science Data Center
2014-05-15
article title: Unique Views of Gulf Oil Slick ... image. The red symbol indicates the former location of the drilling platform. The image dimensions are 346 by 258 kilometers (215 by 160 ...
View compensated compression of volume rendered images for remote visualization.
Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S
2009-07-01
Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real-time viewing experience. One remote visualization model that can accomplish this transmits rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high-quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme in conjunction with JPEG2000 performs better than AVC, the state-of-the-art video compression standard.
Whole-animal imaging with high spatio-temporal resolution
NASA Astrophysics Data System (ADS)
Chhetri, Raghav; Amat, Fernando; Wan, Yinan; Höckendorf, Burkhard; Lemon, William C.; Keller, Philipp J.
2016-03-01
We developed isotropic multiview (IsoView) light-sheet microscopy in order to image fast cellular dynamics, such as cell movements in an entire developing embryo or neuronal activity throughout an entire brain or nervous system, with high resolution in all dimensions, high imaging speeds, good physical coverage and low photo-damage. To achieve high temporal resolution and high spatial resolution at the same time, IsoView microscopy rapidly images large specimens via simultaneous light-sheet illumination and fluorescence detection along four orthogonal directions. In a post-processing step, these four views are then combined by means of high-throughput multiview deconvolution to yield images with a system resolution of ≤450 nm in all three dimensions. Using IsoView microscopy, we performed whole-animal functional imaging of Drosophila embryos and larvae at a spatial resolution of 1.1-2.5 μm and at a temporal resolution of 2 Hz for up to 9 hours. We also performed whole-brain functional imaging in larval zebrafish and multicolor imaging of fast cellular dynamics across entire, gastrulating Drosophila embryos with isotropic, sub-cellular resolution. Compared with conventional (spatially anisotropic) light-sheet microscopy, IsoView microscopy improves spatial resolution at least sevenfold and decreases resolution anisotropy at least threefold. Compared with existing high-resolution light-sheet techniques, such as lattice light-sheet microscopy or diSPIM, IsoView microscopy effectively doubles the penetration depth and provides subsecond temporal resolution for specimens 400-fold larger than could previously be imaged.
Enceladus Setting Behind Saturn (Image & Movie)
2017-09-15
Saturn's active, ocean-bearing moon Enceladus sinks behind the giant planet in a farewell portrait from NASA's Cassini spacecraft. This view of Enceladus was taken by NASA's Cassini spacecraft on Sept. 13, 2017. It is among the last images Cassini sent back. The view is part of a movie sequence of images taken over a period of 40 minutes as the icy moon passed behind Saturn from the spacecraft's point of view. Images taken using red, green and blue spectral filters were assembled to create the natural color view. (A monochrome version of the image, taken using a clear spectral filter, is also available.) The images were taken using Cassini's narrow-angle camera at a distance of 810,000 miles (1.3 million kilometers) from Enceladus and about 620,000 miles (1 million kilometers) from Saturn. Image scale on Enceladus is 5 miles (8 kilometers) per pixel. A movie is available at https://photojournal.jpl.nasa.gov/catalog/PIA21889
The pupil's response to affective pictures: Role of image duration, habituation, and viewing mode
O'Farrell, Katherine R.; Burley, Daniel; Erichsen, Jonathan T.; Newton, Naomi V.; Gray, Nicola S.
2016-01-01
Abstract The pupil has been shown to be sensitive to the emotional content of stimuli. We examined this phenomenon by comparing fearful and neutral images carefully matched in the domains of luminance, image contrast, image color, and complexity of content. The pupil was more dilated after viewing affective pictures, and this effect was (a) shown to be independent of the presentation time of the images (from 100–3,000 ms), (b) not diminished by repeated presentations of the images, and (c) not affected by actively naming the emotion of the stimuli in comparison to passive viewing. Our results show that the emotional modulation of the pupil is present over a range of variables that typically vary from study to study (image duration, number of trials, free viewing vs. task), and encourages the use of pupillometry as a measure of emotional processing in populations where alternative techniques may not be appropriate. PMID:27172997
2015-07-25
Four images from NASA's New Horizons' Long Range Reconnaissance Imager (LORRI) were combined with color data from the Ralph instrument to create this global view of Pluto. (The lower right edge of Pluto in this view currently lacks high-resolution color coverage.) The images, taken when the spacecraft was 280,000 miles (450,000 kilometers) away, show features as small as 1.4 miles (2.2 kilometers), twice the resolution of the single-image view taken on July 13. http://photojournal.jpl.nasa.gov/catalog/PIA19857
Automatic view synthesis by image-domain-warping.
Stefanoski, Nikolce; Wang, Oliver; Lang, Manuel; Greisen, Pierre; Heinzle, Simon; Smolic, Aljosa
2013-09-01
Today, stereoscopic 3D (S3D) cinema is already mainstream, and almost all new display devices for the home support S3D content. S3D distribution infrastructure to the home is already established, in part in the form of 3D Blu-ray discs, video-on-demand services, or television channels. The necessity of wearing glasses is, however, often considered an obstacle that hinders broader acceptance of this technology in the home. Multiview autostereoscopic displays enable glasses-free perception of S3D content for several observers simultaneously, and support head-motion parallax in a limited range. To support multiview autostereoscopic displays in an already established S3D distribution infrastructure, a synthesis of new views from S3D video is needed. In this paper, a view synthesis method based on image-domain-warping (IDW) is presented that synthesizes new views directly from S3D video and functions completely automatically. IDW relies on an automatic and robust estimation of sparse disparities and image saliency information, and enforces target disparities in synthesized images using an image warping framework. Two configurations of the view synthesizer in the scope of a transmission and view synthesis framework are analyzed and evaluated. A transmission and view synthesis system that uses IDW was recently submitted to MPEG's call for proposals on 3D video technology, where it was ranked among the four best-performing proposals.
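As a rough, hypothetical illustration of warping in the image domain to synthesize an intermediate view (the actual IDW method solves for a smooth warp from sparse disparities and saliency information, which is not reproduced here):

```python
import numpy as np

def warp_scanline(row, disparity, alpha):
    """Backward-warp one scanline to a virtual view at position
    alpha in [0, 1] between the two input views: resample at
    x - alpha * d(x) with linear interpolation.

    This is only a toy stand-in for IDW; the real method computes a
    smooth warp that enforces sparse target disparities while
    protecting salient image regions."""
    x = np.arange(row.size, dtype=float)
    src = np.clip(x - alpha * disparity, 0.0, row.size - 1.0)
    return np.interp(src, x, row)

row = np.sin(np.linspace(0.0, np.pi, 64))    # synthetic scanline
disp = np.full(64, 4.0)                      # constant 4-pixel disparity
middle_view = warp_scanline(row, disp, 0.5)  # view halfway between inputs
left_view = warp_scanline(row, disp, 0.0)    # alpha = 0 reproduces input
```

Because each output pixel is resampled (rather than scattered forward), the synthesized scanline has no holes, which mirrors the paper's argument against point-based schemes.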
Raspberry Pi: a 35-dollar device for viewing DICOM images.
Paiva, Omir Antunes; Moreira, Renata de Oliveira
2014-01-01
Raspberry Pi is a low-cost computer created for educational purposes. It runs Linux and, in most cases, freeware applications, including software for viewing DICOM images. With an external monitor, the supported resolution (1920 × 1200 pixels) allows simple viewing workstations to be set up at a reduced cost.
ERIC Educational Resources Information Center
Cupchik, Gerald C.; Vartanian, Oshin; Crawley, Adrian; Mikulis, David J.
2009-01-01
When we view visual images in everyday life, our perception is oriented toward object identification. In contrast, when viewing visual images "as artworks", we also tend to experience subjective reactions to their stylistic and structural properties. This experiment sought to determine how cognitive control and perceptual facilitation contribute…
Understanding Clinical Mammographic Breast Density Assessment: a Deep Learning Perspective.
Mohamed, Aly A; Luo, Yahong; Peng, Hong; Jankowitz, Rachel C; Wu, Shandong
2017-09-20
Mammographic breast density has been established as an independent risk marker for developing breast cancer. Breast density assessment is a routine clinical need in breast cancer screening, and the current standard uses the Breast Imaging Reporting and Data System (BI-RADS) criteria, comprising four qualitative categories (i.e., fatty, scattered density, heterogeneously dense, or extremely dense). In each mammogram examination, a breast is typically imaged with two different views, i.e., the mediolateral oblique (MLO) view and the craniocaudal (CC) view. BI-RADS-based breast density assessment is a qualitative process made by visual observation of both the MLO and CC views by radiologists, with notable inter- and intra-reader variability. In order to maintain consistency and accuracy in BI-RADS-based breast density assessment, gaining an understanding of radiologists' reading behaviors will be instructive. In this study, we proposed to leverage the newly emerged deep learning approach to investigate how the MLO and CC view images of a mammogram examination may have been used clinically by radiologists in arriving at a BI-RADS density category. We implemented a convolutional neural network (CNN)-based deep learning model aimed at distinguishing the breast density categories using a large (15,415 images) set of real-world clinical mammogram images. Our results showed that the classification of density categories (in terms of area under the receiver operating characteristic curve) using MLO view images is significantly higher than that using the CC view. This indicates that it is most likely the MLO view that radiologists have predominantly used to determine the breast density BI-RADS categories. Our study holds potential to further interpret radiologists' reading characteristics, enhance personalized clinical training for radiologists, and ultimately reduce reader variation in breast density assessment.
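The shape of such a four-category CNN classifier can be sketched with an untrained toy pipeline. Everything here (filter counts, sizes, random weights, the 32×32 input) is a placeholder assumption, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(2)

def conv2d_valid(img, kern):
    """Naive valid-mode 2-D cross-correlation; enough for a sketch."""
    H, W = img.shape
    h, w = kern.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + h, j:j + w] * kern).sum()
    return out

def density_scores(view, kernels, weights):
    """Untrained toy pipeline: conv -> ReLU -> global average pool ->
    linear layer -> softmax over the four BI-RADS density categories.
    All weights are random placeholders, not the paper's model."""
    feats = np.array([np.maximum(conv2d_valid(view, k), 0.0).mean()
                      for k in kernels])
    logits = weights @ feats
    e = np.exp(logits - logits.max())
    return e / e.sum()

mlo_view = rng.random((32, 32))           # stand-in for an MLO image
kernels = rng.normal(size=(8, 3, 3))      # 8 random 3x3 filters
weights = rng.normal(size=(4, 8))         # fatty ... extremely dense
probs = density_scores(mlo_view, kernels, weights)
```

Training such a model separately on MLO and CC inputs, then comparing the two AUCs, is the comparison the abstract describes.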
Alnaghy, S; Cutajar, D L; Bucci, J A; Enari, K; Safavi-Naeini, M; Favoino, M; Tartaglia, M; Carriero, F; Jakubek, J; Pospisil, S; Lerch, M; Rosenfeld, A B; Petasecca, M
2017-02-01
BrachyView is a novel in-body imaging system which aims to provide LDR brachytherapy seed position reconstruction within the prostate in real time. The first prototype is presented in this study: the probe consists of a gamma camera featuring three single-cone pinhole collimators embedded in a tungsten tube, above three high-resolution pixelated detectors (Timepix). The prostate was imaged with a TRUS system using a sagittal crystal with a 2.5 mm slice thickness. Eleven needles containing a total of thirty 0.508 U 125I seeds were implanted under ultrasound guidance. A CT scan was used to localise the seed positions, as well as to provide a reference when performing the image co-registration between the BrachyView coordinate system and the TRUS coordinate system. An in-house visualisation software interface was developed to provide a quantitative 3D reconstructed prostate based on the TRUS images and co-registered with the LDR seeds in situ. A rigid body image registration was performed between the BrachyView and TRUS systems, and the BrachyView and CT-derived source locations were compared. The reconstructed seed positions determined by the BrachyView probe showed a maximum discrepancy of 1.78 mm, with 75% of the seeds reconstructed within 1 mm of their nominal locations. An accurate co-registration between the BrachyView and TRUS coordinate systems was established. The BrachyView system has shown its ability to reconstruct all implanted LDR seeds within a tissue-equivalent prostate gel phantom, providing both anatomical and seed position information in a single interface. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
Template match using local feature with view invariance
NASA Astrophysics Data System (ADS)
Lu, Cen; Zhou, Gang
2013-10-01
Matching a template image in a target image is a fundamental task in the field of computer vision. Aiming at the deficiencies of traditional image matching methods and inaccurate matching in scene images with rotation, illumination, and view changes, a novel matching algorithm using local features is proposed in this paper. The local histograms of the edge pixels (LHoE) are extracted as an invariant feature to resist view and brightness changes. The merit of the LHoE is that edge points are little affected by view changes, and the LHoE resists not only illumination variance but also the pollution of noise. Because matching is executed only on the edge points, the computational burden is greatly reduced. Additionally, our approach is conceptually simple, easy to implement, and does not need a training phase. A view change can be considered as the combination of rotation, illumination, and shear transformations. Experimental results on simulated and real data demonstrate that the proposed approach is superior to NCC (normalized cross-correlation) and histogram-based methods under view changes.
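A minimal sketch of the idea of a histogram feature over edge pixels follows. The abstract does not define the LHoE descriptor's exact layout, so this simplified stand-in (a normalized histogram of edge-pixel gradient orientations, compared by histogram intersection) is an illustrative assumption:

```python
import numpy as np

def edge_orientation_hist(img, bins=8, thresh=0.1):
    """Simplified stand-in for the LHoE feature: a normalized
    histogram over the gradient orientations of edge pixels. (The
    paper's exact descriptor is not given here, so this is an
    illustrative assumption.)"""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                     # in [-pi, pi]
    edges = mag > thresh * mag.max()             # keep only edge pixels
    hist, _ = np.histogram(ang[edges], bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)

def hist_intersection(h1, h2):
    """Similarity score: 1.0 for identical normalized histograms."""
    return float(np.minimum(h1, h2).sum())

template = np.zeros((32, 32))
template[8:24, 8:24] = 1.0                       # a bright square
h = edge_orientation_hist(template)
score = hist_intersection(h, h)                  # self-match scores 1.0
```

Restricting the computation to edge pixels, as the abstract notes, is what keeps the matching cost low.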
Optic for industrial endoscope/borescope with narrow field of view and low distortion
Stone, Gary F.; Trebes, James E.
2005-08-16
An optic for the imaging optics on the distal end of a flexible fiberoptic endoscope or rigid borescope inspection tool. The image coverage is over a narrow (<20 degrees) field of view with very low optical distortion (<5% pincushion or barrel distortion), compared to the typical <20% distortion. The optic will permit non-contact surface roughness measurements using optical techniques. It will also permit simultaneous collection of selected image plane data, which can then be optically processed. The image analysis will yield non-contact surface topology data for inspection where access to the surface does not permit mechanical stylus profilometer verification of surface topology. The optic allows a very broad spectral band or range of optical inspection. It is capable of spectroscopic imaging and fluorescence-induced imaging when a scanning illumination source is used. The total viewing angle for this optic is 10 degrees full field of view, compared to the 40-70 degree full-angle field of view of conventional gradient index (GRIN) lens systems.
Discriminative Multi-View Interactive Image Re-Ranking.
Li, Jun; Xu, Chang; Yang, Wankou; Sun, Changyin; Tao, Dacheng
2017-07-01
Given unreliable visual patterns and insufficient query information, content-based image retrieval is often suboptimal and requires image re-ranking using auxiliary information. In this paper, we propose discriminative multi-view interactive image re-ranking (DMINTIR), which integrates user relevance feedback capturing users' intentions and multiple features that sufficiently describe the images. In DMINTIR, heterogeneous property features are incorporated in the multi-view learning scheme to exploit their complementarity. In addition, a discriminatively learned weight vector is obtained to reassign updated scores and target images for re-ranking. Compared with other multi-view learning techniques, our scheme not only generates a compact representation in the latent space from the redundant multi-view features but also maximally preserves the discriminative information in feature encoding by the large-margin principle. Furthermore, the generalization error bound of the proposed algorithm is theoretically analyzed and shown to be improved by the interactions between the latent space and discriminant function learning. Experimental results on two benchmark data sets demonstrate that our approach boosts baseline retrieval quality and is competitive with other state-of-the-art re-ranking strategies.
Servo-controlled intravital microscope system
NASA Technical Reports Server (NTRS)
Mansour, M. N.; Wayland, H. J.; Chapman, C. P. (Inventor)
1975-01-01
A microscope system is described for viewing an area of living body tissue that is rapidly moving, by maintaining the same area in the field of view and in focus. A focus sensing portion of the system includes two video cameras onto which the viewed image is projected, one camera slightly in front of the image plane and the other slightly behind it. A focus sensing circuit for each camera differentiates certain high-frequency components of the video signal, then detects them and passes them through a low-pass filter to provide a dc focus signal whose magnitude represents the degree of focus. An error signal, equal to the difference between the two focus signals, drives a servo that moves the microscope objective so that an in-focus view is delivered to an image viewing/recording camera.
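The differentiate/detect/low-pass chain above can be sketched digitally. As an assumption, the mean squared derivative of a video line stands in for the analog circuit's high-frequency energy measurement:

```python
import numpy as np

def focus_signal(line):
    """Focus measure for one video line. The patent's analog chain
    (differentiate high-frequency components, detect, low-pass) is
    approximated here by the mean squared derivative."""
    return float((np.diff(line) ** 2).mean())

def focus_error(front_line, behind_line):
    """Servo error: its sign indicates which of the two cameras (one
    ahead of the image plane, one behind) sees the sharper image,
    and hence which way to drive the objective (sign convention
    assumed for this sketch)."""
    return focus_signal(front_line) - focus_signal(behind_line)

# A step edge carries more high-frequency energy than its blurred
# copy, so the error signal has a definite sign.
sharp = np.repeat([0.0, 1.0], 50)
blurred = np.convolve(sharp, np.ones(9) / 9.0, mode="same")
err = focus_error(sharp, blurred)   # positive: the 'front' view is sharper
```

When the specimen is in focus midway between the two sensor planes, the two focus signals balance and the error, and hence the servo drive, goes to zero.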
Inoue, Makoto; Noda, Toru; Ohnuma, Kazuhiko; Bissen-Miyajima, Hiroko; Hirakata, Akito
2011-11-01
To determine the quality of the image of a grating target placed in the vitreous of isolated pig eyes and photographed through implanted refractive and diffractive multifocal intraocular lenses (IOL). Refractive multifocal (NXG1, PY60MV), diffractive multifocal (ZM900, SA60D3) and monofocal (SA60AT, ZA9003) IOL were implanted in the capsular bag of isolated pig eyes. A grating target was placed in the vitreous and photographed through a flat or a wide-field viewing contact lens. The contrast of grating targets of different spatial frequencies was measured. With the flat corneal contact lens, the gratings appeared clear and undistorted when viewed through the optics of the NXG1 and PY60MV for far vision but were distorted with reduced contrast when viewed through the optical zone for near vision. The images through the diffractive zone of the ZM900 and SA60D3 were more defocused than with the monofocal IOL (p < 0.005). Ghost images oriented centrifugally relative to the original image were seen with the ZM900, resulting in lower contrast at higher spatial frequencies than with the SA60D3, which showed less defocused images only in the central area. With the wide-field viewing contact lens, the images were less defocused and the contrast was comparable for both refractive and diffractive multifocal IOL. Both refractive and diffractive multifocal IOL reduced the contrast of the retinal image when viewed through a flat corneal contact lens, but the images were less defocused when viewed through a wide-field viewing contact lens. © 2011 The Authors. Acta Ophthalmologica © 2011 Acta Ophthalmologica Scandinavica Foundation.
Stereo matching and view interpolation based on image domain triangulation.
Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce
2013-09-01
This paper presents a new approach to the stereo matching and view interpolation problems based on triangular tessellations, suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities, generating a full 3D mesh for each camera (view); these meshes are used to generate new synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes, which require some kind of post-processing to fill holes.
Optic for an endoscope/borescope having high resolution and narrow field of view
Stone, Gary F.; Trebes, James E.
2003-10-28
An optic having optimized high spatial resolution and minimal nonlinear magnification distortion while at the same time having a limited chromatic focal shift or chromatic aberration. The optic, located at the distal end of an endoscopic inspection tool, permits a high-resolution, narrow field of view image for medical diagnostic applications, compared to conventional optics for endoscopic instruments, which provide a wide field of view, low-resolution image. The image coverage is over a narrow (<20 degrees) field of view with very low optical distortion (<5% pincushion or barrel distortion). The optic is also optimized for best color correction as well as to aid medical diagnostics.
Optimization-based image reconstruction from sparse-view data in offset-detector CBCT
NASA Astrophysics Data System (ADS)
Bian, Junguo; Wang, Jiong; Han, Xiao; Sidky, Emil Y.; Shao, Lingxiong; Pan, Xiaochuan
2013-01-01
The field of view (FOV) of a cone-beam computed tomography (CBCT) unit in a single-photon emission computed tomography (SPECT)/CBCT system can be increased by offsetting the CBCT detector. Analytic-based algorithms have been developed for image reconstruction from data collected at a large number of densely sampled views in offset-detector CBCT. However, the radiation dose involved in a large number of projections can be of health concern to the imaged subject. CBCT imaging dose can be reduced by lowering the number of projections. As analytic-based algorithms are unlikely to reconstruct accurate images from sparse-view data, we investigate and characterize in this work optimization-based algorithms, including an adaptive steepest descent-weighted projection onto convex sets (ASD-WPOCS) algorithm, for image reconstruction from sparse-view data collected in offset-detector CBCT. Using simulated data and real data collected from a physical pelvis phantom and a patient, we verify and characterize properties of the algorithms under study. Results of our study suggest that optimization-based algorithms such as ASD-WPOCS may be developed to yield images of potential utility from a number of projections substantially smaller than those used currently in clinical SPECT/CBCT imaging, thus leading to a dose reduction in CBCT imaging.
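The skeleton that such optimization-based reconstruction builds on (steepest descent on a data-fidelity term alternated with projection onto a convex set) can be sketched on a toy system. The random matrix below is a stand-in for real CBCT projection geometry, and the full ASD-WPOCS algorithm additionally includes TV minimization, data weighting, and adaptive steps, all omitted here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy underdetermined 'sparse-view' system: fewer ray measurements
# than unknowns, with a nonnegative ground-truth image.
n_rays, n_pix = 10, 16
A = rng.normal(size=(n_rays, n_pix))     # stand-in system matrix
x_true = np.abs(rng.normal(size=n_pix))
b = A @ x_true                           # simulated sparse-view data

# Projected steepest descent: gradient step on the data term, then
# projection onto the nonnegativity convex set (the POCS part).
x = np.zeros(n_pix)
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(2000):
    x = x - step * (A.T @ (A @ x - b))   # gradient of 0.5*||Ax - b||^2
    x = np.maximum(x, 0.0)               # POCS projection: x >= 0
residual = np.linalg.norm(A @ x - b)     # data are (nearly) honored
```

Because the system is underdetermined, the data term alone does not pick a unique image; the convex constraints (and, in the real algorithm, TV minimization) supply the missing regularization.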
Escott, Edward J; Rubinstein, David
2004-01-01
It is often necessary for radiologists to use digital images in presentations and conferences. Most imaging modalities produce images in the Digital Imaging and Communications in Medicine (DICOM) format. The image files tend to be large and thus cannot be directly imported into most presentation software, such as Microsoft PowerPoint; the large files also consume storage space. There are many free programs that allow viewing and processing of these files on a personal computer, including conversion to more common file formats such as the Joint Photographic Experts Group (JPEG) format. Free DICOM image viewing and processing software for computers running on the Microsoft Windows operating system has already been evaluated. However, many people use the Macintosh (Apple Computer) platform, and a number of programs are available for these users. The World Wide Web was searched for free DICOM image viewing or processing software that was designed for the Macintosh platform or is written in Java and is therefore platform independent. The features of these programs and their usability were evaluated. There are many free programs for the Macintosh platform that enable viewing and processing of DICOM images. (c) RSNA, 2004.
Rotationally Invariant Image Representation for Viewing Direction Classification in Cryo-EM
Zhao, Zhizhen; Singer, Amit
2014-01-01
We introduce a new rotationally invariant viewing angle classification method for identifying, among a large number of cryo-EM projection images, similar views without prior knowledge of the molecule. Our rotationally invariant features are based on the bispectrum. Each image is denoised and compressed using steerable principal component analysis (PCA) such that rotating an image is equivalent to phase shifting the expansion coefficients. Thus we are able to extend the theory of bispectrum of 1D periodic signals to 2D images. The randomized PCA algorithm is then used to efficiently reduce the dimensionality of the bispectrum coefficients, enabling fast computation of the similarity between any pair of images. The nearest neighbors provide an initial classification of similar viewing angles. In this way, rotational alignment is only performed for images with their nearest neighbors. The initial nearest neighbor classification and alignment are further improved by a new classification method called vector diffusion maps. Our pipeline for viewing angle classification and alignment is experimentally shown to be faster and more accurate than reference-free alignment with rotationally invariant K-means clustering, MSA/MRA 2D classification, and their modern approximations. PMID:24631969
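The rotation invariance claimed above follows directly from the phase-shift property the abstract states: rotating an image multiplies each steerable expansion coefficient c_k by exp(i*k*alpha), and in the bispectrum those phases cancel. A numerical check (with random placeholder coefficients):

```python
import numpy as np

rng = np.random.default_rng(1)

# Steerable-PCA expansion coefficients c_k of an image: rotating the
# image by alpha multiplies c_k by exp(i*k*alpha), as the abstract
# notes. The coefficients here are random placeholders.
K = 6
k = np.arange(-K, K + 1)
c = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)

def bispectrum(coeffs):
    """B(k1, k2) = c_{k1} c_{k2} conj(c_{k1+k2}) over all pairs with
    k1 + k2 in range. Under rotation, B picks up the phase factor
    exp(i*(k1 + k2 - (k1 + k2))*alpha) = 1, so it is invariant."""
    idx = {int(kk): i for i, kk in enumerate(k)}
    vals = [coeffs[idx[k1]] * coeffs[idx[k2]] * np.conj(coeffs[idx[k1 + k2]])
            for k1 in k for k2 in k if -K <= k1 + k2 <= K]
    return np.array(vals)

alpha = 0.7
rotated = c * np.exp(1j * k * alpha)     # coefficients after rotation
same = np.allclose(bispectrum(c), bispectrum(rotated))
```

Comparing these invariant vectors lets similar viewing directions be found without aligning every image pair, which is the speedup the paper exploits.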
Ground-based full-sky imaging polarimeter based on liquid crystal variable retarders.
Zhang, Ying; Zhao, Huijie; Song, Ping; Shi, Shaoguang; Xu, Wujian; Liang, Xiao
2014-04-07
A ground-based full-sky imaging polarimeter based on liquid crystal variable retarders (LCVRs) is proposed in this paper. The proposed method can be used to realize rapid detection of skylight polarization information over a hemispheric field of view in the visible band. The characteristics of the incidence angle of light on the LCVR are investigated, based on electrically controlled birefringence. The imaging polarimeter with a hemispheric field of view is then designed. Furthermore, a polarization calibration method with field-of-view multiplexing and piecewise linear fitting is proposed, based on the rotational symmetry of the polarimeter. The polarization calibration of the polarimeter is implemented over the hemispheric field of view. The imaging polarimeter is investigated in an experiment detecting the skylight image. The consistency between the obtained experimental distribution of the polarization angle and that predicted by the Rayleigh scattering model is 90%, which confirms the effectiveness of the proposed imaging polarimeter.
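In general, such a polarimeter recovers the Stokes parameters at each pixel by taking intensities at several retarder states and inverting a linear system. The measurement matrix below is hypothetical, chosen only to be well conditioned; a real instrument obtains its rows from calibration (here, the paper's field-of-view-multiplexed piecewise linear calibration):

```python
import numpy as np

# Each LCVR retardance state measures an intensity I_k = a_k . S,
# where S = (S0, S1, S2) collects the linear-polarization Stokes
# parameters and a_k is the analyzer row for that state.
A = 0.5 * np.array([
    [1.0,  1.0,  0.0],
    [1.0, -1.0,  0.0],
    [1.0,  0.0,  1.0],
    [1.0,  0.0, -1.0],
])

S_true = np.array([1.0, 0.3, -0.2])      # partially polarized skylight
I = A @ S_true                           # the four measured intensities

# Least-squares inversion recovers the Stokes vector; the angle of
# polarization (compared against the Rayleigh model in the paper)
# then follows from S1 and S2.
S_est, *_ = np.linalg.lstsq(A, I, rcond=None)
aop = 0.5 * np.arctan2(S_est[2], S_est[1])
```

Repeating this inversion per pixel over the hemispheric image yields the polarization-angle map that the paper compares against the Rayleigh sky model.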
Clinical Study of Orthogonal-View Phase-Matched Digital Tomosynthesis for Lung Tumor Localization.
Zhang, You; Ren, Lei; Vergalasova, Irina; Yin, Fang-Fang
2017-01-01
Compared to cone-beam computed tomography, digital tomosynthesis imaging has the benefits of shorter scanning time, less imaging dose, and better mechanical clearance for tumor localization in radiation therapy. However, for lung tumors, the localization accuracy of the conventional digital tomosynthesis technique is affected by the lack of depth information and the existence of lung tumor motion. This study investigates the clinical feasibility of using an orthogonal-view phase-matched digital tomosynthesis technique to improve the accuracy of lung tumor localization. The proposed orthogonal-view phase-matched digital tomosynthesis technique benefits from 2 major features: (1) it acquires orthogonal-view projections to improve the depth information in reconstructed digital tomosynthesis images and (2) it applies respiratory phase-matching to incorporate patient motion information into the synthesized reference digital tomosynthesis sets, which helps to improve the localization accuracy of moving lung tumors. A retrospective study enrolling 14 patients was performed to evaluate the accuracy of the orthogonal-view phase-matched digital tomosynthesis technique. Phantom studies were also performed using an anthropomorphic phantom to investigate the feasibility of using intratreatment aggregated kV and beam's eye view cine MV projections for orthogonal-view phase-matched digital tomosynthesis imaging. The localization accuracy of the orthogonal-view phase-matched digital tomosynthesis technique was compared to that of the single-view digital tomosynthesis techniques and the digital tomosynthesis techniques without phase-matching. The orthogonal-view phase-matched digital tomosynthesis technique outperforms the other digital tomosynthesis techniques in tumor localization accuracy for both the patient study and the phantom study.
For the patient study, the orthogonal-view phase-matched digital tomosynthesis technique localizes the tumor to an average (± standard deviation) error of 1.8 (0.7) mm for a 30° total scan angle. For the phantom study using aggregated kV-MV projections, the orthogonal-view phase-matched digital tomosynthesis localizes the tumor to an average error within 1 mm for varying magnitudes of scan angles. The pilot clinical study shows that the orthogonal-view phase-matched digital tomosynthesis technique enables fast and accurate localization of moving lung tumors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miao, J; Fan, J; Gopinatha Pillai, A
Purpose: To further reduce CT dose, a practical sparse-view acquisition scheme is proposed to provide the same attenuation estimation as a higher-dose scan for PET imaging in the extended scan field-of-view. Methods: CT scans are often used for PET attenuation correction and can be acquired at very low CT radiation dose. Low-dose techniques often employ low tube voltage/current accompanied by a smooth filter before backprojection to reduce CT image noise. These techniques can introduce bias in the conversion from HU to attenuation values, especially in the extended CT scan field-of-view (FOV). In this work, we propose an ultra-low-dose CT technique for PET attenuation correction based on sparse-view acquisition. That is, instead of acquiring the full set of views, only a fraction of the views are acquired. We tested this technique on a 64-slice GE CT scanner using multiple phantoms. CT scan FOV truncation completion was performed based on the published water-cylinder extrapolation algorithm. Several numbers of contiguous views per rotation were tested: 984 (full), 246, 123, 82, and 62, corresponding to CT dose reductions of none, 4x, 8x, 12x, and 16x. We also simulated sparse-view acquisition by skipping views from the fully acquired view data. Results: FBP reconstruction with the Q.AC filter on reduced views in the full extended scan field-of-view possesses image quality similar to reconstruction from the full view data. The results showed further potential for dose reduction compared to the full acquisition, without sacrificing any significant attenuation support to the PET. Conclusion: With the proposed sparse-view method, one can potentially achieve at least 2x more CT dose reduction compared to the current Ultra-Low Dose (ULD) PET/CT protocol. A pre-scan based dose modulation scheme can be combined with the above sparse-view approaches to further reduce the CT scan dose during a PET/CT exam.
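The view-skipping simulation mentioned in the Methods can be sketched in a few lines (a minimal illustration; the array shapes and function name are assumptions, not the scanner's actual data format):

```python
import numpy as np

def sparse_view_subsample(sinogram, keep_every):
    """Simulate sparse-view acquisition by retaining every k-th view
    of a fully sampled sinogram (rows = views, columns = detectors)."""
    sparse = sinogram[::keep_every]
    # Nominal dose reduction scales with the fraction of views dropped.
    dose_reduction = sinogram.shape[0] / sparse.shape[0]
    return sparse, dose_reduction

# 984 views per rotation is the full acquisition cited in the study.
full = np.random.rand(984, 888)
sparse, factor = sparse_view_subsample(full, 4)  # keeps 246 views
```

Keeping every 4th of 984 views leaves 246 views, matching the study's 4x dose-reduction configuration.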
Effects of affective picture viewing on postural control.
Stins, John F; Beek, Peter J
2007-10-04
Emotion theory holds that unpleasant events prime withdrawal actions, whereas pleasant events prime approach actions. Recent studies have suggested that passive viewing of emotion eliciting images results in postural adjustments, which become manifest as changes in body center of pressure (COP) trajectories. From those studies it appears that posture is modulated most when viewing pictures with negative valence. The present experiment was conducted to test the hypothesis that pictures with negative valence have a greater impact on postural control than neutral or positive ones. Thirty-four healthy subjects passively viewed a series of emotion eliciting images, while standing either in a bipedal or unipedal stance on a force plate. The images were adopted from the International Affective Picture System (IAPS). We analysed mean and variability of the COP and the length of the associated sway path as a function of emotion. The mean position of the COP was unaffected by emotion, but unipedal stance resulted in overall greater body sway than bipedal stance. We found a modest effect of emotion on COP: viewing pictures of mutilation resulted in a smaller sway path, but only in unipedal stance. We obtained valence and arousal ratings of the images with an independent sample of viewers. These subjects rated the unpleasant images as significantly less pleasant than neutral images, and the pleasant images as significantly more pleasant than neutral images. However, the subjects rated the images as overall less pleasant and less arousing than viewers in a closely comparable American study, pointing to unknown differences in viewer characteristics. Overall, viewing emotion eliciting images had little effect on body sway. Our finding of a reduction in sway path length when viewing pictures of mutilation was indicative of a freezing strategy, i.e. fear bradycardia. 
The results are consistent with current knowledge about the neuroanatomical organization of the emotion system and the neural control of behavior.
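The sway-path measure analysed above reduces to summing the distances between successive COP samples. A minimal sketch, assuming the force-plate output is an (n, 2) array of COP coordinates in cm:

```python
import numpy as np

def sway_path_length(cop_xy):
    """Length of the COP sway path: sum of Euclidean distances
    between successive center-of-pressure samples."""
    steps = np.diff(cop_xy, axis=0)
    return float(np.hypot(steps[:, 0], steps[:, 1]).sum())

# A closed 1 cm square traced by the COP has a sway path of 4 cm.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]], dtype=float)
```

A shorter path for the same recording duration indicates reduced postural activity, which is how the freezing interpretation above is quantified.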
Trans-pulmonary echocardiography as a guide for device closure of patent ductus arteriosus.
Kudo, Yoshiyuki; Suda, Kenji; Yoshimoto, Hironaga; Teramachi, Yozo; Kishimoto, Shintaro; Iemura, Motofumi; Matsuishi, Toyojiro
2015-08-01
The aim of this study was to develop trans-pulmonary echocardiography (TPE) to guide device closure of patent ductus arteriosus (DC-PDA). Aortography requires a large amount of contrast, may give an inadequate image for evaluating the anatomy or residual shunt in patients with a large PDA or dilated vessels, and is precluded in patients with renal dysfunction. Practically, there is no imaging modality to monitor the entire procedure except trans-esophageal echocardiography, which requires general anesthesia. Subjects were seven patients whose ages ranged from 6 to 77 years and whose body weight exceeded 15 kg. The size of the PDA ranged from 1.8 to 6.3 mm, with pulmonary-to-systemic flow ratios from 1.2 to 2.2. During DC-PDA using an Amplatzer Duct Occluder or coil, an intra-cardiac echocardiographic (ICE) catheter was advanced into the pulmonary arteries and standard views were developed to guide DC-PDA. We have developed two standard views: the main pulmonary artery view (MPA view) and the left pulmonary artery view (LPA view). The MPA view provided an aortic short-axis view equivalent to that seen by trans-thoracic echocardiography in children. The LPA view, obtained by placing the echo probe in the LPA and turning it upside down, provided a long-axis view of the PDA allowing more precise anatomical evaluation. TPE allowed us to monitor the entire procedure and determine residual shunts. TPE in the MPA and LPA views can be an effective guide for DC-PDA. This report leads to a new application of this imaging device. © 2015 Wiley Periodicals, Inc.
Toward Simultaneous Real-Time Fluoroscopic and Nuclear Imaging in the Intervention Room.
Beijst, Casper; Elschot, Mattijs; Viergever, Max A; de Jong, Hugo W A M
2016-01-01
To investigate the technical feasibility of hybrid simultaneous fluoroscopic and nuclear imaging. An x-ray tube, an x-ray detector, and a gamma camera were positioned in one line, enabling imaging of the same field of view. Since a straightforward combination of these elements would block the lines of view, a gamma camera setup was developed to be able to view around the x-ray tube. A prototype was built by using a mobile C-arm and a gamma camera with a four-pinhole collimator. By using the prototype, test images were acquired and sensitivity, resolution, and coregistration error were analyzed. Nuclear images (two frames per second) were acquired simultaneously with fluoroscopic images. Depending on the distance from point source to detector, the system resolution was 1.5-1.9 cm full width at half maximum, the sensitivity was (0.6-1.5) × 10^-5 counts per decay, and the coregistration error was -0.13 to 0.15 cm. With good spatial and temporal alignment of both modalities throughout the field of view, fluoroscopic images can be shown in grayscale and corresponding nuclear images in color overlay. Measurements obtained with the hybrid imaging prototype device that combines simultaneous fluoroscopic and nuclear imaging of the same field of view have demonstrated the feasibility of real-time simultaneous hybrid imaging in the intervention room. © RSNA, 2015
Morimoto, Takuma; Mizokami, Yoko; Yaguchi, Hirohisa; Buck, Steven L
2017-01-01
There has been debate about how and why color constancy may be better in three-dimensional (3-D) scenes than in two-dimensional (2-D) scenes. Although some studies have shown better color constancy for 3-D conditions, the role of specific cues remains unclear. In this study, we compared color constancy for a 3-D miniature room (a real scene consisting of actual objects) and 2-D still images of that room presented on a monitor using three viewing methods: binocular viewing, monocular viewing, and head movement. We found that color constancy was better for the 3-D room; however, color constancy for the 2-D image improved when the viewing method caused the scene to be perceived more like a 3-D scene. Separate measurements of the perceptual 3-D effect of each viewing method also supported these results. An additional experiment comparing a miniature room and its image with and without texture suggested that surface texture of scene objects contributes to color constancy.
Kreplin, Ute; Fairclough, Stephen H
2015-05-01
The medial area of the rostral prefrontal cortex (rPFC) has been implicated in self-relevant processing, autobiographical memory and emotional processing, including the processing of pleasure during aesthetic experiences. The goal of this study was to investigate changes in rPFC activity using functional near-infrared spectroscopy (fNIRS) in response to affective stimuli viewed in a self-relevant or other-relevant context. Positive and negative images were displayed to 20 participants under two viewing conditions where participants were asked to think of their own emotions (self) or think about the emotions of the artist who created the work (other). The results revealed an increase of HbO when participants viewed images during the other-condition compared to the self-condition. It was concluded that viewing stimuli from the perspective of another was associated with an increase of cognitive demand. The analysis of deoxygenated haemoglobin (HHb) at right hemispheric areas revealed that activation of the rPFC during the other-condition was specific to the negative images. When images were viewed from the perspective of the self, activation of the rPFC significantly increased at the right-medial area of the rPFC for positive images. Our findings indicate that the influence of valence on rPFC activation during aesthetic experience is contingent on the context of the viewing experience and there is a bias towards positive emotion when images are viewed from the context of the self. Copyright © 2015 Elsevier Ltd. All rights reserved.
The effect of viewing distance on observer performance in skeletal radiographs
NASA Astrophysics Data System (ADS)
Butler, M. L.; Lowe, J.; Toomey, R. J.; Maher, M.; Evanoff, M. E.; Rainford, L.
2013-03-01
A number of different viewing distances are recommended by international agencies, but none with specific reference to radiologist performance. The purpose of this study was to ascertain the extent to which radiologists' performance in softcopy skeletal reporting is affected by viewing distance. Eighty dorsi-palmar (DP) wrist radiographs, half of which featured one or more fractures, were viewed by seven observers at two viewing distances, 30 cm and 70 cm. Observers rated the images as normal or not on a scale of 1 to 5 and could mark multiple locations on the images when they visualised a fracture. Viewing distance was measured from the centre of the face plate to the outer canthus of the eye. The DBM MRMC analysis showed no statistically significant difference between the areas under the curve for the two distances (p = 0.482). The JAFROC analysis, however, demonstrated a statistically significantly higher area under the curve at the 30 cm viewing distance than at the 70 cm distance (p = 0.035). This suggests that while observers were able to decide whether an image contained a fracture equally well at both viewing distances, they may have been less reliable in terms of fracture localisation or detection of multiple fractures. The impact of viewing distance warrants further attention from both clinical and scientific perspectives.
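The ROC area underlying the DBM analysis can be estimated nonparametrically from the 1-to-5 ratings: it equals the probability that a randomly chosen fracture case receives a higher rating than a randomly chosen normal case, with ties counted as half. A minimal sketch with hypothetical ratings (not the study's data):

```python
def auc_from_ratings(abnormal, normal):
    """Empirical ROC area from ordinal ratings: the probability that
    a randomly chosen abnormal case outscores a normal one
    (ties contribute 1/2), i.e. the Mann-Whitney U statistic
    normalized by the number of case pairs."""
    wins = 0.0
    for a in abnormal:
        for n in normal:
            if a > n:
                wins += 1.0
            elif a == n:
                wins += 0.5
    return wins / (len(abnormal) * len(normal))

# Hypothetical ratings on the study's 1-5 scale.
auc = auc_from_ratings([5, 4, 4, 3], [2, 1, 3, 2])
```

JAFROC differs in that it also scores the marked fracture locations, which is why the two analyses above can disagree.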
Reference View Selection in DIBR-Based Multiview Coding.
Maugey, Thomas; Petrazzuoli, Giovanni; Frossard, Pascal; Cagnazzo, Marco; Pesquet-Popescu, Beatrice
2016-04-01
Augmented reality, interactive navigation in 3D scenes, multiview video, and other emerging multimedia applications require large sets of images, hence larger data volumes and increased resources compared with traditional video services. The significant increase in the number of images in multiview systems leads to new challenging problems in data representation and data transmission to provide high quality of experience on resource-constrained environments. In order to reduce the size of the data, different multiview video compression strategies have been proposed recently. Most of them use the concept of reference or key views that are used to estimate other images when there is high correlation in the data set. In such coding schemes, two questions become fundamental: 1) how many reference views have to be chosen to keep a good reconstruction quality under coding cost constraints? and 2) where should these key views be placed in the multiview data set? As these questions are largely overlooked in the literature, we study the reference view selection problem and propose an algorithm for the optimal selection of reference views in multiview coding systems. Based on a novel metric that measures the similarity between the views, we formulate an optimization problem for the positioning of the reference views, such that both the distortion of the view reconstruction and the coding rate cost are minimized. We solve this new problem with a shortest path algorithm that determines both the optimal number of reference views and their positions in the image set. We experimentally validate our solution in a practical multiview distributed coding system and in the standardized 3D-HEVC multiview coding scheme. We show that considering the 3D scene geometry in the reference view positioning problem brings significant rate-distortion improvements and outperforms the traditional coding strategy that simply selects key frames based on the distance between cameras.
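The shortest-path formulation can be illustrated with a toy version: views lie on a line, the first and last are forced to be references, and each edge cost combines a per-reference coding rate with an assumed distortion model for the views synthesized between two references. All names and the quadratic cost model below are hypothetical stand-ins for the paper's actual similarity metric:

```python
import heapq

def select_reference_views(n_views, rate_per_ref, distortion):
    """Choose reference views among 0..n-1 (first and last forced)
    minimizing total coding rate + reconstruction distortion, as a
    shortest path over view indices. distortion(i, j) models the
    error of synthesizing the views strictly between refs i and j."""
    dist = {0: rate_per_ref}
    prev = {0: None}
    heap = [(rate_per_ref, 0)]
    while heap:
        d, i = heapq.heappop(heap)
        if d > dist.get(i, float("inf")):
            continue  # stale queue entry
        if i == n_views - 1:
            break
        for j in range(i + 1, n_views):
            nd = d + rate_per_ref + distortion(i, j)
            if nd < dist.get(j, float("inf")):
                dist[j], prev[j] = nd, i
                heapq.heappush(heap, (nd, j))
    # Backtrack the chosen reference positions.
    refs, i = [], n_views - 1
    while i is not None:
        refs.append(i)
        i = prev[i]
    return refs[::-1], dist[n_views - 1]
```

With a distortion that grows quadratically in the gap between references, the optimizer trades off adding references (rate) against wider synthesis gaps (distortion), exactly the tension described above.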
NASA Astrophysics Data System (ADS)
Hu, Bihe; Bolus, Daniel; Brown, J. Quincy
2018-02-01
Current gold-standard histopathology for cancerous biopsies is destructive, time consuming, and limited to 2D slices, which do not faithfully represent true 3D tumor micro-morphology. Light sheet microscopy has emerged as a powerful tool for 3D imaging of cancer biospecimens. Here, we utilize the versatile dual-view inverted selective plane illumination microscopy (diSPIM) to render digital histological images of cancer biopsies. The dual-view architecture enables more isotropic resolution in X, Y, and Z, and different imaging modes, such as adding electronic confocal slit detection (eCSD) or structured illumination (SI), can be used to improve the degraded image quality caused by the background signal of large, scattering samples. To obtain traditional H&E-like images, we used DRAQ5 and eosin (D&E) staining, with 488 nm and 647 nm laser illumination, and multi-band filter sets. Here, phantom beads and a D&E-stained buccal cell sample have been used to verify our dual-view method. We also show that, via dual-view imaging and deconvolution, more isotropic resolution has been achieved for an optically cleared human prostate sample, providing more accurate quantitation of 3D tumor architecture than was possible with single-view SPIM methods. We demonstrate that the optimized diSPIM delivers more precise analysis of 3D cancer microarchitecture in human prostate biopsy than simpler light sheet microscopy arrangements.
JPEG vs. JPEG 2000: an objective comparison of image encoding quality
NASA Astrophysics Data System (ADS)
Ebrahimi, Farzad; Chamik, Matthieu; Winkler, Stefan
2004-11-01
This paper describes an objective comparison of the image quality of different encoders. Our approach is based on estimating the visual impact of compression artifacts on perceived quality. We present a tool that measures these artifacts in an image and uses them to compute a prediction of the Mean Opinion Score (MOS) obtained in subjective experiments. We show that the MOS predictions by our proposed tool are a better indicator of perceived image quality than PSNR, especially for highly compressed images. For the encoder comparison, we compress a set of 29 test images with two JPEG encoders (Adobe Photoshop and IrfanView) and three JPEG2000 encoders (JasPer, Kakadu, and IrfanView) at various compression ratios. We compute blockiness, blur, and MOS predictions as well as PSNR of the compressed images. Our results show that the IrfanView JPEG encoder produces consistently better images than the Adobe Photoshop JPEG encoder at the same data rate. The differences between the JPEG2000 encoders in our test are less pronounced; JasPer comes out as the best codec, closely followed by IrfanView and Kakadu. Comparing the JPEG- and JPEG2000-encoding quality of IrfanView, we find that JPEG has a slight edge at low compression ratios, while JPEG2000 is the clear winner at medium and high compression ratios.
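PSNR, the baseline metric the MOS predictor is compared against, takes only a few lines (a standard textbook definition, not the authors' tool):

```python
import numpy as np

def psnr(reference, compressed, max_value=255.0):
    """Peak signal-to-noise ratio in dB between a reference image
    and its compressed version (arrays of identical shape)."""
    mse = np.mean((reference.astype(float) - compressed.astype(float)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)
```

Because PSNR is a pixel-wise error measure, it is blind to structured artifacts such as blockiness and blur, which is exactly the gap the MOS-prediction tool above is meant to close.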
Method for accurately positioning a device at a desired area of interest
Jones, Gary D.; Houston, Jack E.; Gillen, Kenneth T.
2000-01-01
A method for positioning a first device utilizing a surface having a viewing translation stage, the surface being movable between a first position where the viewing stage is in operational alignment with a first device and a second position where the viewing stage is in operational alignment with a second device. The movable surface is placed in the first position and an image is produced with the first device of an identifiable characteristic of a calibration object on the viewing stage. The movable surface is then placed in the second position and only the second device is moved until an image of the identifiable characteristic in the second device matches the image from the first device. The calibration object is then replaced on the stage of the surface with a test object, and the viewing translation stage is adjusted until the second device images the area of interest. The surface is then moved to the first position where the test object is scanned with the first device to image the area of interest. An alternative embodiment where the devices move is also disclosed.
Video-based teleradiology for intraosseous lesions. A receiver operating characteristic analysis.
Tyndall, D A; Boyd, K S; Matteson, S R; Dove, S B
1995-11-01
Private dental practitioners may lack immediate access to off-site expert diagnostic consultants regarding unusual radiographic findings or radiographic quality assurance issues. Teleradiology, a system for transmitting radiographic images, offers a potential solution to this problem. Although much research has been done to evaluate the feasibility and utilization of teleradiology systems in medical imaging, little research on dental applications has been performed. In this investigation 47 panoramic films, with an equal distribution of images with intraosseous jaw lesions and no disease, were viewed by a panel of observers with teleradiology and conventional viewing methods. The teleradiology system consisted of an analog video-based system simulating remote radiographic consultation between a general dentist and a dental imaging specialist. Conventional viewing consisted of traditional viewbox methods. Observers were asked to identify the presence or absence of 24 intraosseous lesions and to determine their locations. No statistically significant differences between modalities or observers were identified at the 0.05 level. The results indicate that viewing intraosseous lesions on video-based panoramic images is comparable to conventional light-box viewing.
Software for Displaying Data from Planetary Rovers
NASA Technical Reports Server (NTRS)
Powell, Mark; Backes, Paul; Norris, Jeffrey; Vona, Marsette; Steinke, Robert
2003-01-01
Science Activity Planner (SAP) DownlinkBrowser is a computer program that assists in the visualization of processed telemetric data [principally images, image cubes (that is, multispectral images), and spectra] that have been transmitted to Earth from exploratory robotic vehicles (rovers) on remote planets. It is undergoing adaptation to (1) the Field Integrated Design and Operations (FIDO) rover (a prototype Mars-exploration rover operated on Earth as a test bed) and (2) the Mars Exploration Rover (MER) mission. This program has evolved from its predecessor - the Web Interface for Telescience (WITS) software - and surpasses WITS in the processing, organization, and plotting of data. SAP DownlinkBrowser creates Extensible Markup Language (XML) files that organize data files, on the basis of content, into a sortable, searchable product database, without the overhead of a relational database. The data-display components of SAP DownlinkBrowser (descriptively named ImageView, 3DView, OrbitalView, PanoramaView, ImageCubeView, and SpectrumView) are designed to run in a memory footprint of at least 256MB on computers that utilize the Windows, Linux, and Solaris operating systems.
Exploring adolescent views of body image: the influence of media.
Spurr, Shelley; Berry, Lois; Walker, Keith
2013-01-01
The purpose of this article is to present findings from two parallel qualitative studies that used focus groups to explore adolescent views of psychological wellness and healthy bodies. Nine focus groups were held with 46 adolescents aged 16-19 years from two Mid-Western Canadian high schools. Both studies were designed with an interpretive humanist perspective and then a 6-step thematic approach was used to analyze the data. Common themes emerging in the focus group discussions in both studies included the negative impact of media on adolescent body image and pressure to conform to the Western views of physical appearance. These findings illustrate the need for nurses to understand the influence of the media on adolescents' views of their body image and to incorporate protocols for assessment, education, and counseling of adolescents on the healthy usage of media into their pediatric clinical practice. Through consistent participation in the development and implementation of health policies, nurses play a critical role in supporting adolescents to develop healthy views of body image.
Group sparse multiview patch alignment framework with view consistency for image classification.
Gui, Jie; Tao, Dacheng; Sun, Zhenan; Luo, Yong; You, Xinge; Tang, Yuan Yan
2014-07-01
No single feature can satisfactorily characterize the semantic concepts of an image. Multiview learning aims to unify different kinds of features to produce a consensual and efficient representation. This paper redefines part optimization in the patch alignment framework (PAF) and develops a group sparse multiview patch alignment framework (GSM-PAF). The new part optimization considers not only the complementary properties of different views, but also view consistency. In particular, view consistency models the correlations between all possible combinations of any two kinds of view. In contrast to conventional dimensionality reduction algorithms that perform feature extraction and feature selection independently, GSM-PAF performs joint feature extraction and feature selection by exploiting the l(2,1)-norm of the projection matrix to achieve row sparsity, which leads to the simultaneous selection of relevant features and learning of the transformation, and thus makes the algorithm more discriminative. Experiments on two real-world image data sets demonstrate the effectiveness of GSM-PAF for image classification.
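The row-sparsity penalty at the heart of GSM-PAF, the l(2,1)-norm, is simple to state: the sum over matrix rows of each row's Euclidean norm. A minimal sketch (the example matrix is illustrative):

```python
import numpy as np

def l21_norm(W):
    """l(2,1)-norm: sum over rows of each row's Euclidean norm.
    Penalizing it drives entire rows of the projection matrix to
    zero, so a feature is kept or dropped jointly across all
    output dimensions."""
    return float(np.linalg.norm(W, axis=1).sum())

W = np.array([[3.0, 4.0],   # row norm 5
              [0.0, 0.0],   # zeroed row: this feature is dropped
              [1.0, 0.0]])  # row norm 1
```

Unlike an element-wise l1 penalty, which can leave a feature partially active in some projection dimensions, the l(2,1)-norm couples each row's entries, which is what makes the selection in the abstract "joint".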
A flexible new method for 3D measurement based on multi-view image sequences
NASA Astrophysics Data System (ADS)
Cui, Haihua; Zhao, Zhimin; Cheng, Xiaosheng; Guo, Changye; Jia, Huayu
2016-11-01
Three-dimensional measurement is a foundational part of reverse engineering. This paper develops a new flexible and fast optical measurement method based on multi-view geometry theory. First, feature points are detected and matched with an improved SIFT algorithm; the Hellinger kernel is used to estimate the histogram distance instead of the traditional Euclidean distance, which makes the matching robust to weakly textured images. Then a new three-principle filter for the essential-matrix calculation is designed, and the essential matrix is computed with an improved a contrario RANSAC method. A single-view point cloud is constructed accurately from two view images; after this, the overlapped features are used to eliminate the accumulated errors caused by added view images, which improves the precision of the camera positions. Finally, the method is verified in a dental-restoration CAD/CAM application; experimental results show that the proposed method is fast, accurate, and flexible for 3D tooth measurement.
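The Hellinger comparison of descriptor histograms used above can be sketched as follows (a minimal illustration of the distance itself, not the full SIFT matching pipeline):

```python
import numpy as np

def hellinger_distance(p, q):
    """Hellinger distance between two histograms: the Euclidean
    distance between their element-wise square roots after L1
    normalization, scaled by 1/sqrt(2) so the result lies in [0, 1].
    Unlike plain Euclidean distance, it damps the influence of a
    few dominant bins, which helps on weakly textured images."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    return float(np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2.0))
```

Identical histograms give 0; histograms with no overlapping support give 1, so the measure is directly usable as a match score threshold.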
Emotions' Impact on Viewing Behavior under Natural Conditions
Kaspar, Kai; Hloucal, Teresa-Maria; Kriz, Jürgen; Canzler, Sonja; Gameiro, Ricardo Ramos; Krapp, Vanessa; König, Peter
2013-01-01
Human overt attention under natural conditions is guided by stimulus features as well as by higher cognitive components, such as task and emotional context. In contrast to the considerable progress regarding the former, insight into the interaction of emotions and attention is limited. Here we investigate the influence of the current emotional context on viewing behavior under natural conditions. In two eye-tracking studies participants freely viewed complex scenes embedded in sequences of emotion-laden images. The latter primes constituted specific emotional contexts for neutral target images. Viewing behavior toward target images embedded into sets of primes was affected by the current emotional context, revealing the intensity of the emotional context as a significant moderator. The primes themselves were not scanned in different ways when presented within a block (Study 1), but when presented individually, negative primes were more actively scanned than positive primes (Study 2). These divergent results suggest an interaction between emotional priming and further context factors. Additionally, in most cases primes were scanned more actively than target images. Interestingly, the mere presence of emotion-laden stimuli in a set of images of different categories slowed down viewing activity overall, but the known effect of image category was not affected. Finally, viewing behavior remained largely constant on single images as well as across the targets' post-prime positions (Study 2). We conclude that the emotional context significantly influences the exploration of complex scenes and the emotional context has to be considered in predictions of eye-movement patterns. PMID:23326353
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, R. L.
1976-06-14
Program GRAY is written to perform the matrix manipulations necessary to convert black-body radiation heat-transfer view factors to gray-body view factors as required by thermal analyzer codes. The black-body view factors contain only geometric relationships. Program GRAY allows the effects of multiple gray-body reflections to be included. The resulting effective gray-body factors can then be used with the corresponding fourth-power temperature differences to obtain the net radiative heat flux. The program is written to accept a matrix input or the card image output generated by the black-body view factor program CNVUFAC. The resulting card image output generated by GRAY is in a form usable by the TRUMP thermal analyzer.
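The abstract does not give GRAY's exact formulation, but a standard way to fold multiple diffuse reflections into black-body (purely geometric) view factors is the Gebhart method, sketched below (function name and interface are illustrative, not GRAY's actual input format):

```python
import numpy as np

def gray_body_factors(F, emissivity):
    """Convert black-body view factors F (geometry only) into
    gray-body (Gebhart) factors B that include multiple diffuse
    reflections, via  B = F E + F (I - E) B  with E = diag(eps),
    solved as the linear system  (I - F (I - E)) B = F E."""
    F = np.asarray(F, dtype=float)
    E = np.diag(np.asarray(emissivity, dtype=float))
    I = np.eye(F.shape[0])
    return np.linalg.solve(I - F @ (I - E), F @ E)

# Two facing plates forming an enclosure: each row of F sums to 1.
F = np.array([[0.0, 1.0],
              [1.0, 0.0]])
B = gray_body_factors(F, [0.5, 0.5])
```

For black surfaces (emissivity 1) B reduces to F, and for a closed enclosure each row of B still sums to 1, reflecting that all emitted energy is eventually absorbed somewhere.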
Hirano, Yutaka; Ikuta, Shin-Ichiro; Nakano, Manabu; Akiyama, Seita; Nakamura, Hajime; Nasu, Masataka; Saito, Futoshi; Nakagawa, Junichi; Matsuzaki, Masashi; Miyazaki, Shunichi
2007-02-01
Assessment of deterioration of regional wall motion by echocardiography is not only subjective but also suffers from difficulties with interobserver agreement. Progress in digital communication technology has made it possible to send video images from a distant location via the Internet. The possibility of evaluating left ventricular wall motion using video images sent via the Internet to distant institutions was investigated. Twenty-two subjects were randomly selected. Four sets of video images (parasternal long-axis view, parasternal short-axis view, apical four-chamber view, and apical two-chamber view) were taken for one cardiac cycle. The images were sent via the Internet to two institutions (observer C in facility A and observers D and E in facility B) for evaluation. Great care was taken to prevent disclosure of patient information to these observers. Parasternal long-axis images were divided into four segments, and the parasternal short-axis view, apical four-chamber view, and apical two-chamber view were divided into six segments each. One of the following assessments, normokinesis, hypokinesis, akinesis, or dyskinesis, was assigned to each segment. The interobserver rates of agreement between observers C and D, between observers D and E, and between observers C and E, as well as the intraobserver agreement rate (for observer D), were calculated. The rate of interobserver agreement was 85.7% (394/460 segments; Kappa = 0.65) between observers C and D, 76.7% (353/460 segments; Kappa = 0.39) between observers D and E, and 76.3% (351/460 segments; Kappa = 0.36) between observers C and E, and the intraobserver agreement was 94.3% (434/460; Kappa = 0.86). Segments with differing judgments between observers C and D were normokinesis-hypokinesis in 62.1%, hypokinesis-akinesis in 33.3%, akinesis-dyskinesis in 3.0%, and normokinesis-akinesis in 1.5%. Wall motion can be evaluated at remote institutions via the Internet.
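The kappa values quoted above correct raw percent agreement for agreement expected by chance. A minimal sketch of Cohen's kappa from an inter-observer confusion matrix (the counts below are illustrative, not the study's data):

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square inter-observer confusion matrix
    (rows: observer 1's categories, columns: observer 2's).
    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e the agreement expected from the marginal frequencies."""
    n = sum(sum(row) for row in confusion)
    k = len(confusion)
    p_o = sum(confusion[i][i] for i in range(k)) / n
    p_e = sum(
        sum(confusion[i]) * sum(confusion[j][i] for j in range(k))
        for i in range(k)
    ) / (n * n)
    return (p_o - p_e) / (1.0 - p_e)
```

This is why 85.7% raw agreement maps to a kappa of only 0.65: much of the agreement on a four-category scale dominated by normokinesis would occur by chance.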
2014-11-21
The puzzling, fascinating surface of Jupiter's icy moon Europa looms large in this newly reprocessed color view, made from images taken by NASA's Galileo spacecraft in the late 1990s. This is the color view of Europa from Galileo that shows the largest portion of the moon's surface at the highest resolution. The view was previously released as a mosaic with lower resolution and strongly enhanced color (see PIA02590). To create this new version, the images were assembled into a realistic color view of the surface that approximates how Europa would appear to the human eye. The scene shows the stunning diversity of Europa's surface geology. Long, linear cracks and ridges crisscross the surface, interrupted by regions of disrupted terrain where the surface ice crust has been broken up and re-frozen into new patterns. Color variations across the surface are associated with differences in geologic feature type and location. For example, areas that appear blue or white contain relatively pure water ice, while reddish and brownish areas include non-ice components in higher concentrations. The polar regions, visible at the left and right of this view, are noticeably bluer than the more equatorial latitudes, which look more white. This color variation is thought to be due to differences in ice grain size in the two locations. Images taken through near-infrared, green and violet filters have been combined to produce this view. The images have been corrected for light scattered outside of the image, to provide a color correction that is calibrated by wavelength. Gaps in the images have been filled with simulated color based on the color of nearby surface areas with similar terrain types. This global color view consists of images acquired by the Galileo Solid-State Imaging (SSI) experiment on the spacecraft's first and fourteenth orbits through the Jupiter system, in 1995 and 1998, respectively. Image scale is 1 mile (1.6 kilometers) per pixel. North on Europa is at right.
http://photojournal.jpl.nasa.gov/catalog/PIA19048
Impact of audit of routine second-trimester cardiac images using a novel image-scoring method.
Sairam, S; Awadh, A M A; Cook, K; Papageorghiou, A T; Carvalho, J S
2009-05-01
To assess the impact of using an objective scoring method to audit cardiac images obtained as part of the routine 21-23-week anomaly scan. A prospective audit and re-audit (6 months later) were conducted on cardiac images obtained by sonographers during the routine anomaly scan. A new image-scoring method was devised based on expected features in the four-chamber and outflow tract views. For each patient, scores were awarded for documentation and quality of individual views. These were called 'Documentation Scores' and 'View Scores' and were added to give a 'Patient Score' which represented the quality of screening provided by the sonographer for that particular patient (maximum score, 15). In order to assess the overall performance of sonographers, an 'Audit Score' was calculated for each by averaging his or her Patient Scores. In addition, to assess each sonographer's performance in relation to particular aspects of the various views, each was given their own 'Sonographer View Scores', derived from image documentation and details of four-chamber view (magnification, valve offset and septum) and left and right outflow tract views. All images were scored by two reviewers, jointly in the primary audit and independently in the re-audit. The scores from primary and re-audit were compared to assess the impact of feedback from the primary audit. Eight sonographers participated in the study. The median Audit Score increased significantly (P < 0.01), from 10.8 (range, 9.8-12.4) in the primary audit to 12.4 (range, 10.4-13.6) in the re-audit. Scores allocated by the two reviewers in the re-audit were not significantly different (P = 0.08). Objective scoring of fetal heart images is feasible and has a positive impact on the quality of cardiac images acquired at the time of the routine anomaly scan. This audit tool has the potential to be applied in every obstetric scanning unit and may improve the effectiveness of screening for congenital heart defects.
Views of Hartley 2 Nucleus and Inner Coma
2010-11-18
NASA's EPOXI mission spacecraft obtained these views of the icy particle cloud around comet Hartley 2. The image on the left is the full image of comet Hartley 2 for context, and the image on the right was enlarged and cropped.
BOREAS RSS-2 Level-1B ASAS Image Data: At-Sensor Radiance in BSQ Format
NASA Technical Reports Server (NTRS)
Russell, C.; Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Dabney, P. W.; Kovalick, W.; Graham, D.; Bur, Michael; Irons, James R.; Tierney, M.
2000-01-01
The BOREAS RSS-2 team used the ASAS instrument, mounted on the NASA C-130 aircraft, to create at-sensor radiance images of various sites as a function of spectral wavelength, view geometry (combinations of view zenith angle, view azimuth angle, solar zenith angle, and solar azimuth angle), and altitude. The level-1b ASAS images of the BOREAS study areas were collected from April to September 1994 and March to July 1996.
2015-03-30
After a couple of years in high-inclination orbits that limited its ability to encounter Saturn's moons, NASA's Cassini spacecraft returned to Saturn's equatorial plane in March 2015. As a prelude to its return to the realm of the icy satellites, the spacecraft had its first relatively close flyby of an icy moon (apart from Titan) in almost two years on Feb. 9. During this encounter Cassini's cameras captured images of the icy moon Rhea, as shown in these two image mosaics. The views were taken about an hour and a half apart as Cassini drew closer to Rhea. Images taken using clear, green, infrared and ultraviolet spectral filters were combined to create these enhanced color views, which offer an expanded range of the colors visible to human eyes in order to highlight subtle color differences across Rhea's surface. The moon's surface is fairly uniform in natural color. The image at right represents one of the highest resolution color views of Rhea released to date. A larger, monochrome mosaic is available in PIA07763. Both views are orthographic projections facing toward terrain on the trailing hemisphere of Rhea. An orthographic view is most like the view seen by a distant observer looking through a telescope. The views have been rotated so that north on Rhea is up. The smaller view at left is centered at 21 degrees north latitude, 229 degrees west longitude. Resolution in this mosaic is 450 meters (1,476 feet) per pixel. The images were acquired at a distance that ranged from about 51,200 to 46,600 miles (82,100 to 74,600 kilometers) from Rhea. The larger view at right is centered at 9 degrees north latitude, 254 degrees west longitude. Resolution in this mosaic is 300 meters (984 feet) per pixel. The images were acquired at a distance that ranged from about 36,000 to 32,100 miles (57,900 to 51,700 kilometers) from Rhea.
The mosaics each consist of multiple narrow-angle camera (NAC) images with data from the wide-angle camera used to fill in areas where NAC data was not available. The image was produced by Heike Rosenberg and Tilmann Denk at Freie Universität in Berlin, Germany. http://photojournal.jpl.nasa.gov/catalog/PIA19057
3D reconstruction from multi-view VHR-satellite images in MicMac
NASA Astrophysics Data System (ADS)
Rupnik, Ewelina; Pierrot-Deseilligny, Marc; Delorme, Arthur
2018-05-01
This work addresses the generation of high-quality digital surface models by fusing multiple depth maps calculated with the dense image matching method. The algorithm is adapted to very high resolution multi-view satellite images, and the main contributions of this work are in the multi-view fusion. The algorithm is insensitive to outliers, takes into account the matching quality indicators, handles non-correlated zones (e.g., occlusions), and is solved with a multi-directional dynamic programming approach. No geometric constraints (e.g., surface planarity) or auxiliary data in the form of ground control points are required for its operation. Prior to the fusion procedure, the RPC geolocation parameters of all images are improved in a bundle block adjustment routine. The performance of the algorithm is evaluated on two VHR (Very High Resolution) satellite image datasets (Pléiades, WorldView-3), revealing its good performance in reconstructing non-textured areas, repetitive patterns, and surface discontinuities.
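As a rough illustration of outlier-insensitive multi-view fusion (a per-pixel simplification, not the paper's multi-directional dynamic-programming formulation), the sketch below rejects depths far from the cross-view median and averages the surviving inliers weighted by a matching-quality score. The 1 m inlier tolerance and the array layout are assumptions:

```python
import numpy as np

def fuse_depth_maps(depth_stack, quality_stack, tol=1.0, min_views=2):
    """Fuse n co-registered depth maps (n, h, w) into one surface model.
    NaN marks pixels a view did not observe; quality_stack holds per-pixel
    matching-quality weights in [0, 1]."""
    d = np.asarray(depth_stack, dtype=float)
    q = np.where(np.isnan(d), 0.0, np.asarray(quality_stack, dtype=float))
    med = np.nanmedian(d, axis=0)            # robust central depth per pixel
    d_safe = np.nan_to_num(d, nan=np.inf)    # NaN -> inf: never an inlier
    inlier = np.abs(d_safe - med) <= tol     # reject gross outliers
    w = q * inlier
    wsum = w.sum(axis=0)
    fused = np.nansum(w * np.nan_to_num(d), axis=0) / np.where(wsum > 0, wsum, 1.0)
    enough = (~np.isnan(d)).sum(axis=0) >= min_views
    return np.where(enough & (wsum > 0), fused, np.nan)
```

Pixels observed by too few views, or with no inlier support, are returned as NaN rather than guessed.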
Wide Field-of-View Soft X-Ray Imaging for Solar Wind-Magnetosphere Interactions
NASA Technical Reports Server (NTRS)
Walsh, B. M.; Collier, M. R.; Kuntz, K. D.; Porter, F. S.; Sibeck, D. G.; Snowden, S. L.; Carter, J. A.; Collado-Vega, Y.; Connor, H. K.; Cravens, T. E.;
2016-01-01
Soft X-ray imagers can be used to study the mesoscale and macroscale density structures that occur whenever and wherever the solar wind encounters neutral atoms at comets, the Moon, and both magnetized and unmagnetized planets. Charge exchange between high charge state solar wind ions and exospheric neutrals results in the isotropic emission of soft X-ray photons with energies from 0.1 to 2.0 keV. At Earth, this process occurs primarily within the magnetosheath and cusps. Through providing a global view, wide field-of-view imaging can determine the significance of the various proposed solar wind-magnetosphere interaction mechanisms by evaluating their global extent and occurrence patterns. A summary of wide field-of-view (several to tens of degrees) soft X-ray imaging is provided including slumped micropore microchannel reflectors, simulated images, and recent flight results.
Limited Angle Dual Modality Breast Imaging
NASA Astrophysics Data System (ADS)
More, Mitali J.; Li, Heng; Goodale, Patricia J.; Zheng, Yibin; Majewski, Stan; Popov, Vladimir; Welch, Benjamin; Williams, Mark B.
2007-06-01
We are developing a dual modality breast scanner that can obtain x-ray transmission and gamma ray emission images in succession at multiple viewing angles with the breast held under mild compression. These views are reconstructed and fused to obtain three-dimensional images that combine structural and functional information. Here, we describe the dual modality system and present results of phantom experiments designed to test the system's ability to obtain fused volumetric dual modality data sets from a limited number of projections, acquired over a limited (less than 180 degrees) angular range. We also present initial results from phantom experiments conducted to optimize the acquisition geometry for gamma imaging. The optimization parameters include the total number of views and the angular range over which these views should be spread, while keeping the total number of detected counts fixed. We have found that in general, for a fixed number of views centered around the direction perpendicular to the direction of compression, in-plane contrast and SNR are improved as the angular range of the views is decreased. The improvement in contrast and SNR with decreasing angular range is much greater for deeper lesions and for a smaller number of views. However, the z-resolution of the lesion is significantly reduced with decreasing angular range. Finally, we present results from limited angle tomography scans using a system with dual, opposing heads.
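The in-plane contrast and SNR figures of merit used in such phantom studies are commonly defined against a background region; a minimal sketch of one common definition (the abstract does not state the authors' exact definitions, so these formulas are an assumption):

```python
import numpy as np

def lesion_contrast_snr(image, lesion_mask, background_mask):
    """Contrast = (S - B) / B and SNR = (S - B) / sigma_B, where S and B are
    mean intensities in the lesion and background regions and sigma_B is the
    background standard deviation."""
    img = np.asarray(image, dtype=float)
    s = img[lesion_mask].mean()
    b = img[background_mask].mean()
    return (s - b) / b, (s - b) / img[background_mask].std()
```

Applied slice by slice to reconstructions from different angular ranges, such metrics make the contrast/SNR versus z-resolution trade-off described above quantitative.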
Hologram generation by horizontal scanning of a high-speed spatial light modulator.
Takaki, Yasuhiro; Okada, Naoya
2009-06-10
In order to increase the image size and the viewing-zone angle of a hologram, a high-speed spatial light modulator (SLM) is imaged as a vertically elongated image by an anamorphic imaging system, and this image is scanned horizontally by a galvano scanner. The reduced horizontal pixel pitch of the SLM provides a wide viewing-zone angle, while the increased image height, combined with the horizontal scanning, enlarges the image. We demonstrated the generation of a hologram having a 15-degree horizontal viewing-zone angle and an image size of 3.4 inches at a frame rate of 60 Hz, using a digital micromirror device with a frame rate of 13.333 kHz as the high-speed SLM.
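The link between horizontal pixel pitch and viewing-zone angle follows from the grating equation: a hologram on an SLM of pitch p diffracts light into a full angle of 2·arcsin(λ/2p). A quick check, assuming a visible-laser wavelength (the abstract does not state the wavelength or effective pitch, so the numbers are illustrative):

```python
import math

def viewing_zone_angle_deg(wavelength_m, pixel_pitch_m):
    """Full viewing-zone angle (degrees) of a hologram on an SLM with the
    given pixel pitch: theta = 2 * arcsin(lambda / (2 * pitch))."""
    return 2.0 * math.degrees(math.asin(wavelength_m / (2.0 * pixel_pitch_m)))

# A 532 nm source with an effective pitch of ~2 um gives roughly 15 degrees,
# consistent with the reduced-pitch design described above.
angle = viewing_zone_angle_deg(532e-9, 2.0e-6)
```

This is why the anamorphic demagnification of the horizontal pitch, rather than the native DMD pitch, sets the achievable viewing zone.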
Ultraviolet Viewing with a Television Camera.
ERIC Educational Resources Information Center
Eisner, Thomas; And Others
1988-01-01
Reports on a portable video color camera that is fully suited for seeing ultraviolet images and offers some expanded viewing possibilities. Discusses the basic technique, specialized viewing, and the instructional value of this system of viewing reflectance patterns of flowers and insects that are invisible to the unaided eye. (CW)
NASA Astrophysics Data System (ADS)
Tate, Tyler H.; McGregor, Davis; Barton, Jennifer K.
2017-02-01
The optical design for a dual-modality endoscope based on piezo scanning fiber technology is presented, including a novel technique to combine forward-viewing navigation and side-viewing OCT. Potential applications include navigating body lumens such as the fallopian tubes, biliary ducts, and cardiovascular system. A custom cover plate provides a rotationally symmetric double reflection of the OCT beam to deviate and focus it out the side of the endoscope for cross-sectional imaging of the tubal lumen. Considerations in the choice of the scanning fiber are explored, and a new technique to increase the divergence angle of the scanning fiber to improve system performance is presented. Resolution and the scanning density required to achieve Nyquist sampling of the full image are considered. The novel optical design lays the groundwork for a new approach to integrating side-viewing OCT into multimodality endoscopes for small-lumen imaging.
Impact Site: Cassini's Final Image
2017-09-15
This monochrome view is the last image taken by the imaging cameras on NASA's Cassini spacecraft. It looks toward the planet's night side, lit by reflected light from the rings, and shows the location at which the spacecraft would enter the planet's atmosphere hours later. A natural color view, created using images taken with red, green and blue spectral filters, is also provided (Figure 1). The imaging cameras obtained this view at approximately the same time that Cassini's visual and infrared mapping spectrometer made its own observations of the impact area in the thermal infrared. This location -- the site of Cassini's atmospheric entry -- was at this time on the night side of the planet, but would rotate into daylight by the time Cassini made its final dive into Saturn's upper atmosphere, ending its remarkable 13-year exploration of Saturn. The view was acquired on Sept. 14, 2017 at 19:59 UTC (spacecraft event time). The view was taken in visible light using the Cassini spacecraft wide-angle camera at a distance of 394,000 miles (634,000 kilometers) from Saturn. Image scale is about 11 miles (17 kilometers). The original image has a size of 512x512 pixels. A movie is available at https://photojournal.jpl.nasa.gov/catalog/PIA21895
Positive media portrayals of obese persons: impact on attitudes and image preferences.
Pearl, Rebecca L; Puhl, Rebecca M; Brownell, Kelly D
2012-11-01
The purpose of this research was to assess the impact of nonstereotypical, positive media portrayals of obese persons on biased attitudes, as well as propose a change in media practices that could reduce public weight bias and consequent negative health outcomes for those who experience weight stigma. Two online experiments were conducted in which participants viewed either a stigmatizing or a positive photograph of an obese model. In Experiment 1 (N = 146), participants viewed a photograph of either a Caucasian or African American obese woman; in Experiment 2 (N = 145), participants viewed either a Caucasian male or female obese model. Multiple linear regression models were used to analyze outcomes for social distance attitudes toward the obese models depicted in the images, in addition to other negative attitudes and image preferences. Participants who viewed the stigmatizing images endorsed stronger social distance attitudes and more negative attitudes toward obese persons than participants who viewed the positive images, and there was a stronger preference for the positive images than the stigmatizing images. These results were consistent regardless of the race or gender of the obese model pictured. The findings indicate that more positive media portrayals of obese individuals may help reduce weight stigma and its associated negative health outcomes.
Surface Stereo Imager on Mars, Side View
NASA Technical Reports Server (NTRS)
2008-01-01
This image is a view of NASA's Phoenix Mars Lander's Surface Stereo Imager (SSI) as seen by the lander's Robotic Arm Camera. This image was taken on the afternoon of the 116th Martian day, or sol, of the mission (September 22, 2008). The mast-mounted SSI, which provided the images used in the 360 degree panoramic view of Phoenix's landing site, is about 4 inches tall and 8 inches long. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
NOAA Photo Library - Sanctuaries
The word sanctuary evokes images of a sacred place, a refuge from the dangers of the world. Images in the collection are arranged by themed albums.
Viewing experience and naturalness of 3D images
NASA Astrophysics Data System (ADS)
Seuntiëns, Pieter J.; Heynderickx, Ingrid E.; IJsselsteijn, Wijnand A.; van den Avoort, Paul M. J.; Berentsen, Jelle; Dalm, Iwan J.; Lambooij, Marc T.; Oosting, Willem
2005-11-01
The term 'image quality' is often used to measure the performance of an imaging system. Recent research showed however that image quality may not be the most appropriate term to capture the evaluative processes associated with experiencing 3D images. The added value of depth in 3D images is clearly recognized when viewers judge image quality of unimpaired 3D images against their 2D counterparts. However, when viewers are asked to rate image quality of impaired 2D and 3D images, the image quality results for both 2D and 3D images are mainly determined by the introduced artefacts, and the addition of depth in the 3D images is hardly accounted for. In this experiment we applied and tested the more general evaluative concepts of 'naturalness' and 'viewing experience'. It was hypothesized that these concepts would better reflect the added value of depth in 3D images. Four scenes were used varying in dimension (2D and 3D) and noise level (6 levels of white gaussian noise). Results showed that both viewing experience and naturalness were rated higher in 3D than in 2D when the same noise level was applied. Thus, the added value of depth is clearly demonstrated when the concepts of viewing experience and naturalness are being evaluated. The added value of 3D over 2D, expressed in noise level, was 2 dB for viewing experience and 4 dB for naturalness, indicating that naturalness appears the more sensitive evaluative concept for demonstrating the psychological impact of 3D displays.
HISTORIC IMAGE: AERIAL VIEW OF THE CEMETERY AND ITS ENVIRONS. ...
HISTORIC IMAGE: AERIAL VIEW OF THE CEMETERY AND ITS ENVIRONS. PHOTOGRAPH TAKEN ON 18 MAY 1948. NCA HISTORY COLLECTION. - Knoxville National Cemetery, 939 Tyson Street, Northwest, Knoxville, Knox County, TN
Atmospheric Science Data Center
2013-04-15
... These images of northeastern South Africa, near Kruger National ... Unlike the MISR view, the AirMISR data are in "raw" form and processing to remove radiometric and geometric distortions has not yet been ...
Atmospheric Science Data Center
2013-04-16
article title: Icebergs in the Ross Sea Two ... (MISR) nadir camera view of the Ross Ice Shelf and Ross Sea in Antarctica. The image was acquired on December 10, 2000 during Terra ...
Atmospheric Science Data Center
2014-05-15
article title: Los Alamos, New Mexico ... Multi-angle views of the fire in Los Alamos, New Mexico, May 9, 2000. These true-color images covering north-central New Mexico ...
NASA Astrophysics Data System (ADS)
Sahoo, Sujit Kumar; Tang, Dongliang; Dang, Cuong
2018-02-01
Large field of view multispectral imaging through a scattering medium is a fundamental quest in the optics community. It has gained special attention from researchers in recent years for its wide range of potential applications. However, the main bottlenecks of current imaging systems are their requirements for specific illumination, poor image quality, and limited field of view. In this work, we demonstrated single-shot, high-resolution colour imaging through scattering media using a monochromatic camera. This novel imaging technique is enabled by the spatial and spectral decorrelation properties and the optical memory effect of the scattering media. Moreover, the use of deconvolution image processing further removes the above-mentioned drawbacks arising from iterative refocusing, scanning, or phase-retrieval procedures.
NASA Astrophysics Data System (ADS)
Wang, Yihan; Lu, Tong; Wan, Wenbo; Liu, Lingling; Zhang, Songhe; Li, Jiao; Zhao, Huijuan; Gao, Feng
2018-02-01
To fully realize the potential of photoacoustic tomography (PAT) in preclinical and clinical applications, rapid measurements and robust reconstructions are needed. Sparse-view measurements have been adopted effectively to accelerate data acquisition. However, since reconstruction from sparse-view sampling data is challenging, both the measurement scheme and the reconstruction algorithm must be considered. In this study, we present an iterative sparse-view PAT reconstruction scheme in which a virtual parallel-projection concept matched to the proposed measurement condition helps to realize the "compressive sensing" character of the reconstruction, while spatially adaptive filtering, exploiting the a priori information of mutually similar blocks in natural images, effectively recovers the partially unknown coefficients in the transformed domain. The sparse-view PAT images can therefore be reconstructed with higher quality than the results obtained by the universal back-projection (UBP) algorithm in the same sparse-view cases. The proposed approach has been validated by simulation experiments and exhibits desirable performance in image fidelity even with a small number of measuring positions.
Spirit Beside 'Home Plate,' Sol 1809 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11803 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11803 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images assembled into this stereo, 120-degree view southward after a short drive during the 1,809th Martian day, or sol, of Spirit's mission on the surface of Mars (February 3, 2009). By combining images from the left-eye and right-eye sides of the navigation camera, the view appears three-dimensional when viewed through red-blue glasses with the red lens on the left. Spirit had driven about 2.6 meters (8.5 feet) that sol, continuing a clockwise route around a low plateau called 'Home Plate.' In this image, the rocks visible above the rover's solar panels are on the slope at the northern edge of Home Plate. This view is presented as a cylindrical-perspective projection with geometric seam correction.
Morimoto, Takuma; Mizokami, Yoko; Yaguchi, Hirohisa; Buck, Steven L.
2017-01-01
There has been debate about how and why color constancy may be better in three-dimensional (3-D) scenes than in two-dimensional (2-D) scenes. Although some studies have shown better color constancy for 3-D conditions, the role of specific cues remains unclear. In this study, we compared color constancy for a 3-D miniature room (a real scene consisting of actual objects) and 2-D still images of that room presented on a monitor using three viewing methods: binocular viewing, monocular viewing, and head movement. We found that color constancy was better for the 3-D room; however, color constancy for the 2-D image improved when the viewing method caused the scene to be perceived more like a 3-D scene. Separate measurements of the perceptual 3-D effect of each viewing method also supported these results. An additional experiment comparing a miniature room and its image with and without texture suggested that surface texture of scene objects contributes to color constancy. PMID:29238513
2017-09-15
As it glanced around the Saturn system one final time, NASA's Cassini spacecraft captured this view of the planet's giant moon Titan. Interest in mysterious Titan was a major motivating factor to return to Saturn with Cassini-Huygens following the Voyager mission flybys of the early 1980s. Cassini and its Huygens probe, supplied by European Space Agency, revealed the moon to be every bit as fascinating as scientists had hoped. These views were obtained by Cassini's narrow-angle camera on Sept. 13, 2017. They are among the last images Cassini sent back to Earth. This natural color view, made from images taken using red, green and blue spectral filters, shows Titan much as Voyager saw it -- a mostly featureless golden orb, swathed in a dense atmospheric haze. An enhanced-color view (Figure 1) adds to this color a separate view taken using a spectral filter (centered at 938 nanometers) that can partially see through the haze. The views were acquired at a distance of 481,000 miles (774,000 kilometers) from Titan. The image scale is about 3 miles (5 kilometers) per pixel. https://photojournal.jpl.nasa.gov/catalog/PIA21890
Thin plate spline feature point matching for organ surfaces in minimally invasive surgery imaging
NASA Astrophysics Data System (ADS)
Lin, Bingxiong; Sun, Yu; Qian, Xiaoning
2013-03-01
Robust feature point matching for images with large view-angle changes in Minimally Invasive Surgery (MIS) is a challenging task due to low texture and specular reflections in these images. This paper presents a new approach that improves feature matching performance by exploiting the inherent geometric properties of organ surfaces. Recently, intensity-based template image tracking using a Thin Plate Spline (TPS) model has been extended to 3D surface tracking with stereo cameras. Intensity-based tracking is also used here for 3D reconstruction of internal organ surfaces; to overcome its small-displacement requirement, feature point correspondences are used for proper initialization of the nonlinear optimization in the intensity-based method. Next, we generate simulated images from the reconstructed 3D surfaces under all potential view positions and orientations, and then extract feature points from these simulated images. The obtained feature points are filtered and re-projected to the common reference image, and the descriptors of the feature points under different view angles are stored to ensure that the proposed method can tolerate a large range of view angles. We evaluate the proposed method with silicone phantoms and in vivo images. The experimental results show that our method is much more robust with respect to view-angle changes than other state-of-the-art methods.
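The TPS model referenced above represents a smooth mapping as an affine part plus radial basis terms U(r) = r² log r². A minimal 2-D scalar TPS interpolator, offered as a sketch of the underlying model rather than the authors' tracking pipeline:

```python
import numpy as np

def _U(r):
    """TPS radial basis U(r) = r^2 log(r^2), with U(0) = 0."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz] ** 2)
    return out

def tps_fit(points, values):
    """Fit f(x, y) interpolating `values` at control `points` (n, 2)."""
    p = np.asarray(points, dtype=float)
    v = np.asarray(values, dtype=float)
    n = len(p)
    K = _U(np.linalg.norm(p[:, None] - p[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), p])          # affine basis [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T  # bending terms + side conditions
    coef = np.linalg.solve(A, np.concatenate([v, np.zeros(3)]))
    w, a = coef[:n], coef[n:]

    def f(q):
        q = np.atleast_2d(np.asarray(q, dtype=float))
        k = _U(np.linalg.norm(q[:, None] - p[None, :], axis=-1))
        return k @ w + np.hstack([np.ones((len(q), 1)), q]) @ a
    return f
```

A full 2-D warp applies one such spline per coordinate. Interpolation is exact at the control points, and purely affine displacement fields are reproduced with zero bending energy, which is why the model suits smoothly deforming organ surfaces.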
NASA Astrophysics Data System (ADS)
Agrawal, Arun; Koff, David; Bak, Peter; Bender, Duane; Castelli, Jane
2015-03-01
The deployment of regional and national Electronic Health Record solutions has been a focus of many countries throughout the past decade. A major challenge for these deployments has been support for ubiquitous image viewing. More specifically, they require an imaging solution that can work over the Internet, leverage any point-of-service device (desktop, tablet, phone), and access imaging data from any source seamlessly. Whereas standards exist to enable ubiquitous image viewing, few if any solutions leverage these standards and meet the challenge; rather, most currently available web-based diagnostic imaging (DI) viewing solutions are either proprietary or require special plugins. We developed a true zero-footprint, browser-based DI viewing solution based on the Web Access to DICOM Objects (WADO) and Cross-enterprise Document Sharing for Imaging (XDS-I.b) standards to (a) demonstrate that a truly ubiquitous image viewer can be deployed and (b) identify the gaps in the current standards and the design challenges in developing such a solution. The objective was a viewer that works in all modern browsers on both desktop and mobile devices. The implementation provides the basic viewing functions of scroll, zoom, pan, and (limited) window leveling. The major gap identified in the current DICOM WADO standard is the lack of support for 3D reconstruction or MPR views. Other design challenges explored include optimizing the solution for response time and a low memory footprint.
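For illustration, the kind of WADO-URI retrieval such a viewer issues is a plain HTTP GET whose query parameters identify the study, series, and object. A sketch, where the server URL and UIDs are hypothetical and the parameter names follow the DICOM WADO-URI service:

```python
from urllib.parse import urlencode

def wado_uri(base_url, study_uid, series_uid, object_uid,
             content_type="image/jpeg", **render_params):
    """Build a WADO-URI GET URL for one DICOM object. Optional render_params
    may carry WADO keys such as rows, columns, windowCenter, windowWidth."""
    params = {"requestType": "WADO",
              "studyUID": study_uid,
              "seriesUID": series_uid,
              "objectUID": object_uid,
              "contentType": content_type}
    params.update(render_params)
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and UIDs, for illustration only
url = wado_uri("https://pacs.example.org/wado",
               "1.2.840.1", "1.2.840.1.2", "1.2.840.1.2.3", rows=512)
```

Requesting `contentType=image/jpeg` with server-side scaling parameters is what lets a zero-footprint browser client display images without a DICOM parser or plugin.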
Protective laser beam viewing device
Neil, George R.; Jordan, Kevin Carl
2012-12-18
A protective laser beam viewing system or device including a camera selectively sensitive to laser light wavelengths and a viewing screen receiving images from the laser-sensitive camera. According to a preferred embodiment of the invention, the camera is worn on the head of the user or incorporated into a goggle-type viewing display so that it is always aimed at the area of interest, and the viewing screen is incorporated into a video display worn as goggles over the user's eyes.
2017-01-16
No Earth-based telescope could ever capture a view quite like this. Earth-based views can only show Saturn's daylit side, from within about 25 degrees of Saturn's equatorial plane. A spacecraft in orbit, like Cassini, can capture stunning scenes that would be impossible from our home planet. This view looks toward the sunlit side of the rings from about 25 degrees above the ring plane. The image was taken in violet light with the Cassini spacecraft wide-angle camera on Oct. 28, 2016. The view was obtained at a distance of approximately 810,000 miles (1.3 million kilometers) from Saturn. Image scale is 50 miles (80 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20517
WiseView: Visualizing motion and variability of faint WISE sources
NASA Astrophysics Data System (ADS)
Caselden, Dan; Westin, Paul, III; Meisner, Aaron; Kuchner, Marc; Colin, Guillaume
2018-06-01
WiseView renders image blinks of Wide-field Infrared Survey Explorer (WISE) coadds spanning a multi-year time baseline in a browser. The software allows for easy visual identification of motion and variability for sources far beyond the single-frame detection limit, a key threshold not surmounted by many studies. WiseView transparently gathers small image cutouts drawn from many terabytes of unWISE coadds, facilitating access to this large and unique dataset. Users need only input the coordinates of interest and can interactively tune parameters including the image stretch, colormap and blink rate. WiseView was developed in the context of the Backyard Worlds: Planet 9 citizen science project, and has enabled hundreds of brown dwarf candidate discoveries by citizen scientists and professional astronomers.
Azlan, C A; Ng, K H; Anandan, S; Nizam, M S
2006-09-01
The illuminance level in a softcopy image viewing room is an important factor in optimizing productivity in radiological diagnosis. In today's radiological environment, illuminance measurements are normally performed annually as part of the quality control procedure. Although viewing rooms are equipped with dimmer switches, radiologists have no objective means of setting the illuminance to recommended levels. The aim of this study was to develop a simple real-time illuminance detector system to assist radiologists in selecting an adequate illuminance level during radiological image viewing. The system indicates illuminance in a very simple visual form using light-emitting diodes. By employing the device in the viewing room, the illuminance level can be monitored and adjusted effectively.
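The LED indicator logic can be as simple as banding the lux reading against a target range; a sketch in which the 20-40 lux band is an assumed target for softcopy reading rooms, not a value quoted by the authors (substitute locally applicable guideline values):

```python
def illuminance_indicator(lux, low=20.0, high=40.0):
    """Map an illuminance reading (lux) to an LED colour for the viewing
    room. The 20-40 lux target band is an assumption, not a standard
    stated in the abstract."""
    if lux < low:
        return "blue"    # too dark: raise ambient lighting
    if lux > high:
        return "red"     # too bright: dim the lights
    return "green"       # within the target band
```

Sampling a photodiode periodically and driving the LEDs from this classification gives the real-time feedback the abstract describes.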
Multi-view 3D echocardiography compounding based on feature consistency
NASA Astrophysics Data System (ADS)
Yao, Cheng; Simpson, John M.; Schaeffter, Tobias; Penney, Graeme P.
2011-09-01
Echocardiography (echo) is a widely available method to obtain images of the heart; however, echo can suffer due to the presence of artefacts, high noise and a restricted field of view. One method to overcome these limitations is to use multiple images, using the 'best' parts from each image to produce a higher quality 'compounded' image. This paper describes our compounding algorithm which specifically aims to reduce the effect of echo artefacts as well as improving the signal-to-noise ratio, contrast and extending the field of view. Our method weights image information based on a local feature coherence/consistency between all the overlapping images. Validation has been carried out using phantom, volunteer and patient datasets consisting of up to ten multi-view 3D images. Multiple sets of phantom images were acquired, some directly from the phantom surface, and others by imaging through hard and soft tissue mimicking material to degrade the image quality. Our compounding method is compared to the original, uncompounded echocardiography images, and to two basic statistical compounding methods (mean and maximum). Results show that our method is able to take a set of ten images, degraded by soft and hard tissue artefacts, and produce a compounded image of equivalent quality to images acquired directly from the phantom. Our method on phantom, volunteer and patient data achieves almost the same signal-to-noise improvement as the mean method, while simultaneously almost achieving the same contrast improvement as the maximum method. We show a statistically significant improvement in image quality by using an increased number of images (ten compared to five), and visual inspection studies by three clinicians showed very strong preference for our compounded volumes in terms of overall high image quality, large field of view, high endocardial border definition and low cavity noise.
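A much-simplified version of consistency-weighted compounding, contrasted with the mean and maximum baselines the paper compares against, can be sketched as follows; the Gaussian agreement weight is a stand-in for the paper's local feature coherence measure, not its actual algorithm.

```python
import numpy as np

def compound(images, sigma=10.0):
    """Consistency-weighted compounding: pixels that agree with the
    cross-image median get higher weight (a simplified stand-in for the
    paper's local feature-coherence weighting)."""
    stack = np.stack(images).astype(float)          # (N, H, W)
    med = np.median(stack, axis=0)                  # consensus estimate
    w = np.exp(-((stack - med) ** 2) / (2 * sigma ** 2))
    return (w * stack).sum(axis=0) / w.sum(axis=0)

# Three overlapping views; the third simulates a shadowing/dropout artefact.
imgs = [np.full((4, 4), 100.0), np.full((4, 4), 102.0), np.full((4, 4), 0.0)]
fused = compound(imgs)
mean_f = np.mean(np.stack(imgs), axis=0)   # dragged down by the dropout
max_f = np.max(np.stack(imgs), axis=0)     # ignores inter-image agreement
```

The weighted result stays close to the two consistent views, illustrating why agreement-based weighting can suppress artefacts that a plain mean averages in and a plain maximum cannot distinguish from signal.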
65. View of rare torreya looking from the southwest (closer view of springtime image, HALS no. LA-1-37) - Briarwood: The Caroline Dormon Nature Preserve, 216 Caroline Dormon Road, Saline, Bienville Parish, LA
Demonstration of a real-time implementation of the ICVision holographic stereogram display
NASA Astrophysics Data System (ADS)
Kulick, Jeffrey H.; Jones, Michael W.; Nordin, Gregory P.; Lindquist, Robert G.; Kowel, Stephen T.; Thomsen, Axel
1995-07-01
There is increasing interest in real-time autostereoscopic 3D displays. Such systems allow 3D objects or scenes to be viewed by one or more observers with correct motion parallax without the need for glasses or other viewing aids. Potential applications of such systems include mechanical design, training and simulation, medical imaging, virtual reality, and architectural design. One approach to the development of real-time autostereoscopic display systems has been to develop real-time holographic display systems. The approach taken by most of these systems is to compute and display a number of holographic lines at one time, and then use a scanning system to replicate the images throughout the display region. The approach taken in the ICVision system being developed at the University of Alabama in Huntsville is very different. In the ICVision display, a set of discrete viewing regions called virtual viewing slits is created by the display. Each pixel is required to fill every viewing slit with different image data. When two virtual viewing slits separated by an interocular distance are filled with stereoscopic pair images, the observer sees a 3D image. The images are computed so that a different stereo pair is presented each time the viewer moves one eye pupil diameter (approximately mm), thus providing a series of stereo views. Each pixel is subdivided into smaller regions, called partial pixels. Each partial pixel is filled with a diffraction grating designed to fill an individual virtual viewing slit. The sum of all the partial pixels in a pixel then fills all the virtual viewing slits. The final version of the ICVision system will form diffraction gratings in a liquid crystal layer on the surface of VLSI chips in real time. Processors embedded in the VLSI chips will compute the display in real time. In the current version of the system, a commercial AMLCD is sandwiched with a diffraction grating array.
This paper will discuss the design details of a portable 3D display based on the integration of a diffractive optical element with a commercial off-the-shelf AMLCD. The diffractive optic contains several hundred thousand partial-pixel gratings, and the AMLCD modulates the light diffracted by the gratings.
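The partial-pixel idea rests on the grating equation d sin(theta) = m*lambda: each partial pixel needs a grating period matched to the angle of its target viewing slit. A small sketch under assumed wavelength and slit angles (not the ICVision design values):

```python
import math

def grating_period(theta_deg, wavelength_nm=550.0, order=1):
    """First-order grating period (nm) needed to diffract light toward a
    viewing slit at angle theta from the surface normal, via
    d * sin(theta) = m * lambda. Values here are illustrative assumptions."""
    return order * wavelength_nm / math.sin(math.radians(theta_deg))

# Steeper slit angles require finer (smaller-period) gratings.
periods = [grating_period(t) for t in (2.0, 5.0, 10.0)]
```

This is why the outermost viewing slits place the tightest demands on the lithography (or, in the final liquid-crystal version, on the electrode pitch) of the partial pixels.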
Reduction of false-positive recalls using a computerized mammographic image feature analysis scheme
NASA Astrophysics Data System (ADS)
Tan, Maxine; Pu, Jiantao; Zheng, Bin
2014-08-01
The high false-positive recall rate is one of the major dilemmas that significantly reduce the efficacy of screening mammography, which harms a large fraction of women and increases healthcare cost. This study aims to investigate the feasibility of helping reduce false-positive recalls by developing a new computer-aided diagnosis (CAD) scheme based on the analysis of global mammographic texture and density features computed from four-view images. Our database includes full-field digital mammography (FFDM) images acquired from 1052 recalled women (669 positive for cancer and 383 benign). Each case has four images: two craniocaudal (CC) and two mediolateral oblique (MLO) views. Our CAD scheme first computed global texture features related to the mammographic density distribution on the segmented breast regions of four images. Second, the computed features were given to two artificial neural network (ANN) classifiers that were separately trained and tested in a ten-fold cross-validation scheme on CC and MLO view images, respectively. Finally, two ANN classification scores were combined using a new adaptive scoring fusion method that automatically determined the optimal weights to assign to both views. CAD performance was tested using the area under a receiver operating characteristic curve (AUC). The AUC = 0.793 ± 0.026 was obtained for this four-view CAD scheme, which was significantly higher at the 5% significance level than the AUCs achieved when using only CC (p = 0.025) or MLO (p = 0.0004) view images, respectively. This study demonstrates that a quantitative assessment of global mammographic image texture and density features could provide useful and/or supplementary information to classify between malignant and benign cases among the recalled cases, which may eventually help reduce the false-positive recall rate in screening mammography.
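One simple way to realize a view-weighted score fusion like the one described is a grid search over the CC/MLO weight that maximizes AUC, sketched below on synthetic scores; the paper's adaptive scheme determines the weights differently, so this is only an illustrative stand-in.

```python
import numpy as np

def auc(scores, labels):
    """Mann-Whitney AUC: probability a positive case outscores a negative."""
    pos = scores[labels == 1]; neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def fuse(cc, mlo, labels, grid=np.linspace(0, 1, 101)):
    """Pick the CC-vs-MLO weight maximizing AUC on this set (a simple
    stand-in for the paper's adaptive scoring fusion)."""
    best = max(grid, key=lambda w: auc(w * cc + (1 - w) * mlo, labels))
    return best, auc(best * cc + (1 - best) * mlo, labels)

# Synthetic per-view classifier scores for 50 positive and 50 negative cases.
rng = np.random.default_rng(0)
labels = np.repeat([1, 0], 50)
cc = labels + rng.normal(0, 1.0, 100)
mlo = labels + rng.normal(0, 1.2, 100)
w, fused_auc = fuse(cc, mlo, labels)
```

Because the grid includes the pure-CC (w = 1) and pure-MLO (w = 0) endpoints, the fused AUC on the search set can never fall below either single-view AUC, mirroring the paper's finding that the combined score outperforms each view alone.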
Exploring the World using Street View 360 Images
NASA Astrophysics Data System (ADS)
Bailey, J.
2016-12-01
The phrase "A Picture is Worth a Thousand Words" is an idiom of unknown 20th century origin. There is some belief that the modern use of the phrase stems from an article in a 1921 issue of a popular trade journal that used "One Look is Worth A Thousand Words" to promote the use of images in advertisements on the sides of streetcars. There is a certain irony to this, as nearly a century later the camera technologies on "Street View cars" are collecting images that look everywhere at once. However, while it can be fun to drive along the world's streets, it was the development of Street View imaging systems that could be mounted on other modes of transport or capture platforms (Street View Special Collects) that opened the door for these 360 images to become a tool for exploration and storytelling. Using Special Collect imagery captured in "off-road" and extreme locations, scientists are now using Street View images to assess changes to species habitats, show the impact of natural disasters, and even perform "armchair" geology. A powerful example is the imagery captured before and after the 2011 earthquake and tsunami that devastated Japan. However, it is the immersive nature of 360 images that truly allows them to create wonder and awe, especially when combined with Virtual Reality (VR) viewers. Combined with the Street View App or Google Expeditions, VR provides insight into what it is like to swim with sea lions in the Galapagos or climb El Capitan in Yosemite National Park. While these images could never replace experiencing these locations in real life, they can inspire the viewer to explore and learn more about the many wonders of our planet. https://www.google.com/streetview/ https://www.google.com/expeditions/
Immersive Photography Renders 360 degree Views
NASA Technical Reports Server (NTRS)
2008-01-01
An SBIR contract through Langley Research Center helped Interactive Pictures Corporation, of Knoxville, Tennessee, create an innovative imaging technology. This technology is a video imaging process that allows real-time control of live video data and can provide users with interactive, panoramic 360 views. The camera system can see in multiple directions, provide up to four simultaneous views, each with its own tilt, rotation, and magnification, yet it has no moving parts, is noiseless, and can respond faster than the human eye. In addition, it eliminates the distortion caused by a fisheye lens, and provides a clear, flat view of each perspective.
NASA Astrophysics Data System (ADS)
Schonlau, William J.
2006-05-01
An immersive viewing engine providing basic telepresence functionality for a variety of application types is presented. Augmented reality, teleoperation and virtual reality applications all benefit from the use of head mounted display devices that present imagery appropriate to the user's head orientation at full frame rates. Our primary application is the viewing of remote environments, as with a camera equipped teleoperated vehicle. The conventional approach where imagery from a narrow field camera onboard the vehicle is presented to the user on a small rectangular screen is contrasted with an immersive viewing system where a cylindrical or spherical format image is received from a panoramic camera on the vehicle, resampled in response to sensed user head orientation and presented via wide field eyewear display, approaching 180 degrees of horizontal field. Of primary interest is the user's enhanced ability to perceive and understand image content, even when image resolution parameters are poor, due to the innate visual integration and 3-D model generation capabilities of the human visual system. A mathematical model for tracking user head position and resampling the panoramic image to attain distortion free viewing of the region appropriate to the user's current head pose is presented and consideration is given to providing the user with stereo viewing generated from depth map information derived using stereo from motion algorithms.
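The core resampling step, mapping an equirectangular panorama to a perspective view for a given head yaw and pitch, can be sketched as follows. This nearest-neighbour version omits the interpolation, distortion correction, and stereo generation a real viewer would need, and all sizes are illustrative.

```python
import numpy as np

def view_from_pano(pano, yaw, pitch, fov_deg=90.0, out=(64, 64)):
    """Resample a perspective view from an equirectangular panorama for a
    given head yaw/pitch (radians). Nearest-neighbour lookup only."""
    H, W = pano.shape
    h, w = out
    f = (w / 2) / np.tan(np.radians(fov_deg) / 2)   # pinhole focal length
    xs = np.arange(w) - w / 2
    ys = np.arange(h) - h / 2
    x, y = np.meshgrid(xs, ys)
    # Ray direction per output pixel, then rotate by pitch (about x) and yaw (about y).
    d = np.stack([x, -y, np.full_like(x, f)], axis=-1)
    d = d / np.linalg.norm(d, axis=-1, keepdims=True)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    dx, dy, dz = d[..., 0], d[..., 1] * cp - d[..., 2] * sp, d[..., 1] * sp + d[..., 2] * cp
    dx, dz = dx * cy + dz * sy, -dx * sy + dz * cy
    lon = np.arctan2(dx, dz)                 # [-pi, pi]
    lat = np.arcsin(np.clip(dy, -1, 1))      # [-pi/2, pi/2]
    u = ((lon + np.pi) / (2 * np.pi) * (W - 1)).round().astype(int)
    v = ((np.pi / 2 - lat) / np.pi * (H - 1)).round().astype(int)
    return pano[v, u]

# A panorama with a single bright landmark at its center (lon = lat = 0).
pano = np.zeros((181, 361)); pano[90, 180] = 1.0
view = view_from_pano(pano, yaw=0.0, pitch=0.0)
```

Re-running the lookup each frame with the sensed head pose is what gives the user a distortion-free window into the panorama, as the model in the paper formalizes.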
2016-11-21
Surface features are visible on Saturn's moon Prometheus in this view from NASA's Cassini spacecraft. Most of Cassini's images of Prometheus are too distant to resolve individual craters, making views like this a rare treat. Saturn's narrow F ring, which makes a diagonal line beginning at top center, appears bright and bold in some Cassini views, but not here. Since the sun is nearly behind Cassini in this image, most of the light hitting the F ring is being scattered away from the camera, making it appear dim. Light-scattering behavior like this is typical of rings composed of small particles, such as the F ring. This view looks toward the unilluminated side of the rings from about 14 degrees below the ring plane. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on Sept. 24, 2016. The view was acquired at a distance of approximately 226,000 miles (364,000 kilometers) from Prometheus and at a sun-Prometheus-spacecraft, or phase, angle of 51 degrees. Image scale is 1.2 miles (2 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20508
The utility of indocyanine green fluorescence imaging during robotic adrenalectomy.
Colvin, Jennifer; Zaidi, Nisar; Berber, Eren
2016-08-01
Indocyanine green (ICG) has been used for medical imaging since the 1950s, but has more recently become available for use in minimally invasive surgery owing to improvements in technology. This study investigates the use of ICG fluorescence to guide an accurate dissection by delineating the borders of adrenal tumors during robotic adrenalectomy (RA). This prospective study compared the conventional robotic view with ICG fluorescence imaging in 40 consecutive patients undergoing RA. Independent, non-blinded observers assessed how accurately ICG fluorescence delineated the borders of adrenal tumors compared to the conventional robotic view. A total of 40 patients underwent 43 adrenalectomies. ICG imaging was superior, equivalent, or inferior to the conventional robotic view in 46.5% (n = 20), 25.6% (n = 11), and 27.9% (n = 12) of the procedures, respectively. On univariate analysis, the only parameter that predicted the superiority of ICG imaging over the conventional robotic view was tumor type, with adrenocortical tumors being delineated more accurately on ICG imaging. This study demonstrates the utility of ICG to guide the dissection and removal of adrenal tumors during RA. A simple reproducible method is reported, with a detailed description of the utility based on tumor type, approach, and side. J. Surg. Oncol. 2016;114:153-156. © 2016 Wiley Periodicals, Inc.
Task-based optimization of image reconstruction in breast CT
NASA Astrophysics Data System (ADS)
Sanchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan
2014-03-01
We demonstrate a task-based assessment of image quality in dedicated breast CT in order to optimize the number of projection views acquired. The methodology we employ is based on the Hotelling Observer (HO) and its associated metrics. We consider two tasks: the Rayleigh task of discerning between two resolvable objects and a single larger object, and the signal detection task of classifying an image as belonging to either a signal-present or signal-absent hypothesis. HO SNR values are computed for 50, 100, 200, 500, and 1000 projection view images, with the total imaging radiation dose held constant. We use the conventional fan-beam FBP algorithm and investigate the effect of varying the width of a Hanning window used in the reconstruction, since this affects both the noise properties of the image and the under-sampling artifacts which can arise in the case of sparse-view acquisitions. Our results demonstrate that fewer projection views should be used in order to increase HO performance, which in this case constitutes an upper-bound on human observer performance. However, the impact on HO SNR of using fewer projection views, each with a higher dose, is not as significant as the impact of employing regularization in the FBP reconstruction through a Hanning filter.
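For a known signal difference Δs and data covariance K, the HO detection metric reduces to SNR² = Δsᵀ K⁻¹ Δs. A toy numerical sketch (image size and noise level are illustrative, not the study's values):

```python
import numpy as np

def hotelling_snr(delta_s, K):
    """Hotelling observer SNR for a known mean signal difference delta_s
    and data covariance K: SNR^2 = delta_s^T K^{-1} delta_s."""
    return float(np.sqrt(delta_s @ np.linalg.solve(K, delta_s)))

# Toy detection task: 16-pixel images, white noise of variance 4.
delta_s = np.zeros(16); delta_s[5:8] = 2.0   # mean signal difference
K = 4.0 * np.eye(16)                         # noise covariance
snr = hotelling_snr(delta_s, K)              # = ||delta_s|| / 2 for white noise
```

In the study, K additionally encodes the view-count- and filter-dependent noise correlations of the FBP reconstruction, which is how the Hanning window width enters the figure of merit.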
Wegleitner, Eric J.; Isermann, Daniel A.
2017-01-01
Many biologists use digital images for estimating ages of fish, but the use of images could lead to differences in age estimates and precision because image capture can produce changes in light and clarity compared to directly viewing structures through a microscope. We used sectioned sagittal otoliths from 132 Largemouth Bass Micropterus salmoides and sectioned dorsal spines and otoliths from 157 Walleyes Sander vitreus to determine whether age estimates and among‐reader precision were similar when annuli were enumerated directly through a microscope or from digital images. Agreement of ages between viewing methods for three readers was highest for Largemouth Bass otoliths (75–89% among readers), followed by Walleye otoliths (63–70%) and Walleye dorsal spines (47–64%). Most discrepancies (72–96%) were ±1 year, and differences were more prevalent for age‐5 and older fish. With few exceptions, mean ages estimated from digital images were similar to ages estimated via directly viewing the structures through the microscope, and among‐reader precision did not vary between viewing methods for each structure. However, the number of disagreements we observed suggests that biologists should assess potential differences in age structure that could arise if images of calcified structures are used in the age estimation process.
Mercator Projection of Huygens View
2006-05-04
This poster shows a flattened (Mercator) projection of the Huygens probe's view from 10 kilometers altitude (6 miles). The images that make up this view were taken on Jan. 14, 2005, with the descent imager/spectral radiometer onboard the European Space Agency's Huygens probe. The Huygens probe was delivered to Saturn's moon Titan by the Cassini spacecraft, which is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif. NASA supplied two instruments on the probe, the descent imager/spectral radiometer and the gas chromatograph mass spectrometer. http://photojournal.jpl.nasa.gov/catalog/PIA08113
Kruger, David G; Riederer, Stephen J; Rossman, Phillip J; Mostardi, Petrice M; Madhuranthakam, Ananth J; Hu, Houchun H
2005-09-01
MR images formed using extended FOV continuously moving table data acquisition can have signal falloff and loss of lateral spatial resolution at localized, periodic positions along the direction of table motion. In this work we identify the origin of these artifacts and provide a means for correction. The artifacts are due to a mismatch of the phase of signals acquired from contiguous sampling fields of view and are most pronounced when the central k-space views are being sampled. Correction can be performed using the phase information from a periodically sampled central view to adjust the phase of all other views of that view cycle, making the net phase uniform across each axial plane. Results from experimental phantom and contrast-enhanced peripheral MRA studies show that the correction technique substantially eliminates the artifact for a variety of phase encode orders. Copyright (c) 2005 Wiley-Liss, Inc.
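The correction described, removing the reference phase measured on a periodically sampled central view from every view in its cycle, can be sketched in 1-D as follows (a deliberate simplification of the actual k-space processing):

```python
import numpy as np

def correct_view_cycle(views, ref_index):
    """Remove the phase measured on the central reference view from every
    view in the cycle, making the net phase uniform across the cycle
    (simplified 1-D sketch of the moving-table phase correction)."""
    phi = np.angle(views[ref_index])                # per-sample reference phase
    return views * np.exp(-1j * phi)[None, :]

# Five views of one cycle sharing a common, position-dependent phase error.
rng = np.random.default_rng(1)
mags = rng.uniform(0.5, 1.5, size=(5, 32))          # true (real) view data
err = np.exp(1j * np.linspace(0, np.pi / 2, 32))    # table-motion phase ramp
views = mags * err[None, :]
fixed = correct_view_cycle(views, ref_index=2)
```

Because the phase error is common to the cycle, referencing every view to the central one cancels it exactly in this toy case, which is the mechanism that removes the periodic signal falloff described above.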
Yoon, Ki-Hyuk; Kang, Min-Koo; Lee, Hwasun; Kim, Sung-Kyu
2018-01-01
We study optical technologies for viewer-tracked autostereoscopic 3D display (VTA3D), which provides improved 3D image quality and an extended viewing range. In particular, we utilize a technique, the so-called dynamic fusion of viewing zone (DFVZ), for each 3D optical line to realize image quality equivalent to that achievable at the optimal viewing distance, even when a viewer is moving in the depth direction. In addition, we examine quantitative properties of the viewing zones provided by the VTA3D system that adopted DFVZ, revealing that the optimal viewing zone can be formed at the viewer's position. Lastly, we show that the comfort zone is extended due to DFVZ. This is demonstrated by viewers' subjective evaluation of the 3D display system that employs both multiview autostereoscopic 3D display and DFVZ.
Video streaming technologies using ActiveX and LabVIEW
NASA Astrophysics Data System (ADS)
Panoiu, M.; Rat, C. L.; Panoiu, C.
2015-06-01
The goal of this paper is to present the possibilities of remote image processing through data exchange between two programming technologies: LabVIEW and ActiveX. ActiveX refers to the process of controlling one program from another via an ActiveX component, where one program acts as the client and the other as the server. LabVIEW can be either client or server. Both programs (client and server) exist independently of each other but are able to share information. The client communicates with the ActiveX objects that the server opens to allow the sharing of information [7]. In the case of video streaming [1] [2], most ActiveX controls can only display the data, being incapable of transforming it into a data type that LabVIEW can process. This becomes problematic when the system is used for remote image processing. The LabVIEW environment itself provides few, if any, possibilities for video streaming, and the methods it does offer are usually not high performance; however, it possesses high-performance toolkits and modules specialized in image processing, making it ideal for processing the captured data. Therefore, we chose to use existing software specialized in video streaming along with LabVIEW, capturing the data it provides for further use within LabVIEW. The software we studied (the ActiveX controls of a series of media players that utilize streaming technology) provides high-quality data and a very small transmission delay, ensuring the reliability of the image processing results.
The viewpoint-specific failure of modern 3D displays in laparoscopic surgery.
Sakata, Shinichiro; Grove, Philip M; Hill, Andrew; Watson, Marcus O; Stevenson, Andrew R L
2016-11-01
Surgeons conventionally assume the optimal viewing position during 3D laparoscopic surgery and may not be aware of the potential hazards to team members positioned across different suboptimal viewing positions. The first aim of this study was to map the viewing positions within a standard operating theatre where individuals may experience visual ghosting (i.e. double vision images) from crosstalk. The second aim was to characterize the standard viewing positions adopted by instrument nurses and surgical assistants during laparoscopic pelvic surgery and report the associated levels of visual ghosting and discomfort. In experiment 1, 15 participants viewed a laparoscopic 3D display from 176 different viewing positions around the screen. In experiment 2, 12 participants (randomly assigned to four clinically relevant viewing positions) viewed laparoscopic suturing in a simulation laboratory. In both experiments, we measured the intensity of visual ghosting. In experiment 2, participants also completed the Simulator Sickness Questionnaire. We mapped locations within the dimensions of a standard operating theatre at which visual ghosting may result during 3D laparoscopy. Head height relative to the bottom of the image and large horizontal eccentricities away from the surface normal were important contributors to high levels of visual ghosting. Conventional viewing positions adopted by instrument nurses yielded high levels of visual ghosting and severe discomfort. The conventional viewing positions adopted by surgical team members during laparoscopic pelvic operations are suboptimal for viewing 3D laparoscopic displays, and even short periods of viewing can yield high levels of discomfort.
Mulgrew, Kate E; Tiggemann, Marika
2018-01-01
We examined whether shifting young women's (N = 322) attention toward functionality components of media-portrayed idealized images would protect against body dissatisfaction. Image type was manipulated via images of models in either an objectified body-as-object form or an active body-as-process form; viewing focus was manipulated via questions about the appearance or functionality of the models. Social comparison was examined as a moderator. Negative outcomes were most pronounced within the process-related conditions (body-as-process images or functionality viewing focus) and for women who reported greater functionality comparison. Results suggest that functionality-based depictions, reflections, and comparisons may actually produce worse outcomes than those based on appearance.
Cohen, Leeber; Mangers, Kristie; Grobman, William A; Platt, Lawrence D
2009-12-01
The purpose of this study was to determine the frequency with which 3 standard screening views of the fetal heart (4-chamber, left ventricular outflow tract [LVOT], and right ventricular outflow tract [RVOT]) can be obtained satisfactorily with the spatiotemporal image correlation (STIC) technique. A prospective study of 111 patients undergoing anatomic surveys at 18 to 22 weeks was performed. Two ultrasound machines with fetal cardiac settings were used. The best volume set that could be obtained from each patient during a 45-minute examination was graded by 2 sonologists with regard to whether the 4-chamber, LVOT, and RVOT images were satisfactory for screening. All 3 views were judged satisfactory for screening in most patients: 1 sonologist graded the views as satisfactory in 70% of the patients, whereas the other found the views to be satisfactory in 83%. The position of the placenta did not alter the probability of achieving a satisfactory view, but a fetus in the spine anterior position was associated with a significantly lower probability that the views were regarded as satisfactory for screening (odds ratio, 0.28; 95% confidence interval, 0.09-0.70; P < .05). This study suggests that STIC may assist with screening for cardiac anomalies at 18 to 22 weeks' gestation.
Clinical decision making using teleradiology in urology.
Lee, B R; Allaf, M; Moore, R; Bohlman, M; Wang, G M; Bishoff, J T; Jackman, S V; Cadeddu, J A; Jarrett, T W; Khazan, R; Kavoussi, L R
1999-01-01
Using a personal computer-based teleradiology system, we compared accuracy, confidence, and diagnostic ability in the interpretation of digitized radiographs to determine if teleradiology-imported studies convey sufficient information to make relevant clinical decisions involving urology. Variables of diagnostic accuracy, confidence, image quality, interpretation, and the impact of clinical decisions made after viewing digitized radiographs were compared with those of original radiographs. We evaluated 956 radiographs that included 94 IV pyelograms, four voiding cystourethrograms, and two nephrostograms. The radiographs were digitized and transferred over an Ethernet network to a remote personal computer-based viewing station. The digitized images were viewed by urologists and graded according to confidence in making a diagnosis, image quality, diagnostic difficulty, clinical management based on the image itself, and brief patient history. The hard-copy radiographs were then interpreted immediately afterward, and diagnostic decisions were reassessed. All analog radiographs were reviewed by an attending radiologist. Ninety-seven percent of the decisions made from the digitized radiographs did not change after reviewing conventional radiographs of the same case. When comparing the variables of clinical confidence, quality of the film on the teleradiology system versus analog films, and diagnostic difficulty, we found no statistical difference (p > .05) between the two techniques. Overall accuracy in interpreting the digitized images on the teleradiology system was 88% by urologists compared with that of the attending radiologist's interpretation of the analog radiographs. However, urologists detected findings on five (5%) analog radiographs that had been previously unreported by the radiologist. 
Viewing radiographs transmitted to a personal computer-based viewing station is an appropriate means of reviewing films with sufficient quality on which to base clinical decisions. Our focus was whether decisions made after viewing the transmitted radiographs would change after viewing the hard-copy images of the same case. In 97% of the cases, the decision did not change. In those cases in which management was altered, recommendation of further imaging studies was the most common factor.
True 3-D View of 'Columbia Hills' from an Angle
NASA Technical Reports Server (NTRS)
2004-01-01
This mosaic of images from NASA's Mars Exploration Rover Spirit shows a panorama of the 'Columbia Hills' without any adjustment for rover tilt. When viewed through 3-D glasses, depth is much more dramatic and easier to see, compared with a tilt-adjusted version. This is because stereo views are created by producing two images, one corresponding to the view from the panoramic camera's left-eye camera, the other corresponding to the view from the panoramic camera's right-eye camera. The brain processes the visual input more accurately when the two images do not have any vertical offset. In this view, the vertical alignment is nearly perfect, but the horizon appears to curve because of the rover's tilt (because the rover was parked on a steep slope, it was tilted approximately 22 degrees to the west-northwest). Spirit took the images for this 360-degree panorama while en route to higher ground in the 'Columbia Hills.' The highest point visible in the hills is 'Husband Hill,' named for space shuttle Columbia Commander Rick Husband. To the right are the rover's tracks through the soil, where it stopped to perform maintenance on its right front wheel in July. In the distance, below the hills, is the floor of Gusev Crater, where Spirit landed Jan. 3, 2004, before traveling more than 3 kilometers (1.8 miles) to reach this point. This vista comprises 188 images taken by Spirit's panoramic camera from its 213th day, or sol, on Mars to its 223rd sol (Aug. 9 to 19, 2004). Team members at NASA's Jet Propulsion Laboratory and Cornell University spent several weeks processing images and producing geometric maps to stitch all the images together in this mosaic. The 360-degree view is presented in a cylindrical-perspective map projection with geometric seam correction.
COMPLEX ADAPTIVE HIERARCHICAL SYSTEMS
One of the most powerful images of our time, an image that has changed the way we think of ourselves and the way we think about our relationship to our environment, is the image of Earth viewed from the surface of the moon. As we view "spaceship Earth" we sense that the complexit...
Opportunity Surroundings on Sol 1687 Stereo
2009-01-05
NASA's Mars Exploration Rover Opportunity combined images into this stereo, 360-degree view of the rover's surroundings on Oct. 22, 2008. Opportunity's position was about 300 meters southwest of Victoria. 3D glasses are necessary to view this image.
2017-10-02
Stunning views like this image of Saturn's night side are only possible thanks to our robotic emissaries like Cassini. Until future missions are sent to Saturn, Cassini's image-rich legacy must suffice. Because Earth is closer to the Sun than Saturn, observers on Earth only see Saturn's day side. With spacecraft, we can capture views (and data) that are simply not possible from Earth, even with the largest telescopes. This view looks toward the sunlit side of the rings from about 7 degrees above the ring plane. The image was taken in visible light with the wide-angle camera on NASA's Cassini spacecraft on June 7, 2017. The view was obtained at a distance of approximately 751,000 miles (1.21 million kilometers) from Saturn. Image scale is 45 miles (72 kilometers) per pixel. The Cassini spacecraft ended its mission on Sept. 15, 2017. https://photojournal.jpl.nasa.gov/catalog/PIA21350
Voyager: Neptune Encounter Highlights
NASA Technical Reports Server (NTRS)
1989-01-01
Voyager encounter data are presented in computer animation (CA) and real (R) animation. The highlights include a view of two full rotations of Neptune. It shows the spacecraft trajectory 'diving' over Neptune and intercepting Triton's orbit, depicting radiation and occultation zones. Also shown are a renegade orbit of Triton and Voyager's encounter with Neptune's magnetopause. A model of the spacecraft's complex maneuvers during close encounters of Neptune and Triton is presented. A view from Earth of Neptune's occultation experiment is shown, as well as a recreation of Voyager's final pass. There is detail of Voyager's image compensation technique, which produces Voyager images. Eighteen images were produced on June 22-23, 1989, from 57 million miles away. A 68-day sequence provides a stroboscopic view; colorization approximates what is seen by the human eye. Real-time images recorded live from Voyager on 8/24/89 are presented. Photoclinometry produced the topography of Triton. Three images are used to create a sequence of Neptune's rings. The globe of Neptune and two views of the south pole are shown, as well as Neptune rotating. The rotation of a scooter is frozen in images showing differential motion. There is a view of the rotation of the Great Dark Spot about its own axis. Photoclinometry provides a 3-dimensional perspective using a color mosaic of Triton images. The globe is used to indicate the orientation of Neptune's crescent. The east and west plumes on Triton are shown.
Display of high dynamic range images under varying viewing conditions
NASA Astrophysics Data System (ADS)
Borer, Tim
2017-09-01
Recent demonstrations of high dynamic range (HDR) television have shown that superb images are possible. With the emergence of an HDR television production standard (ITU-R Recommendation BT.2100) last year, HDR television production is poised to take off. However, research to date has focused principally on HDR image display under "dark" viewing conditions. HDR television will need to be displayed at varying brightness and under varying illumination (for example, to view sport in daytime or on mobile devices). We know, from common practice with conventional TV, that the rendering intent (gamma) should change under brighter conditions, although this is poorly quantified. For HDR, the need to render images under varying conditions is all the more acute. This paper seeks to explore the issues surrounding image display under varying conditions. It also describes how visual adaptation is affected by display brightness, surround illumination, screen size and viewing distance. Existing experimental results are presented and extended to try to quantify these effects. Using these results, we describe how HDR images may be displayed so that they are perceptually equivalent under different viewing conditions. A new interpretation of the experimental results is reported, yielding a new, luminance-invariant model for the appropriate display "gamma". In this way, the consistency of HDR image reproduction should be improved, thereby better maintaining "creative intent" in television.
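The paper's luminance-invariant gamma model is not reproduced here, but the general idea of varying rendering intent with surround brightness can be sketched as follows. The linear interpolation rule, the 200-nit pivot, and both endpoint gammas are illustrative assumptions, not values from the paper:

```python
import numpy as np

def apply_system_gamma(luminance, surround_nits, gamma_dark=1.2, gamma_bright=1.0):
    """Apply an end-to-end system gamma that relaxes from a dark-surround
    value (~1.2, in line with conventional TV practice) toward ~1.0 as the
    viewing surround brightens. The blend rule below is a hypothetical
    placeholder, not the model proposed in the paper."""
    # Normalised display luminance in [0, 1]
    l = np.clip(luminance, 0.0, 1.0)
    # Blend factor: 0 for a dark room, 1 at/above an assumed 200-nit surround
    t = np.clip(surround_nits / 200.0, 0.0, 1.0)
    gamma = gamma_dark + (gamma_bright - gamma_dark) * t
    return l ** gamma
```

In a dark room a mid-grey input is rendered slightly darker (gamma 1.2); in a bright surround it passes through unchanged.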
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cary, Theodore W.; Sultan, Laith R.; Sehgal, Chandra M., E-mail: sehgalc@uphs.upenn.edu
Purpose: To use feed-forward active contours (snakes) to track and measure brachial artery vasomotion on ultrasound images recorded in both transverse and longitudinal views; and to compare the algorithm's performance in each view. Methods: Longitudinal and transverse view ultrasound image sequences of 45 brachial arteries were segmented by feed-forward active contour (FFAC). The segmented regions were used to measure vasomotion artery diameter, cross-sectional area, and distention both as peak-to-peak diameter and as area. ECG waveforms were also simultaneously extracted frame-by-frame by thresholding a running finite-difference image between consecutive images. The arterial and ECG waveforms were compared as they traced each phase of the cardiac cycle. Results: FFAC successfully segmented arteries in longitudinal and transverse views in all 45 cases. The automated analysis took significantly less time than manual tracing, but produced superior, well-behaved arterial waveforms. Automated arterial measurements also had lower interobserver variability as measured by correlation, difference in mean values, and coefficient of variation. Although FFAC successfully segmented both the longitudinal and transverse images, transverse measurements were less variable. The cross-sectional area computed from the longitudinal images was 27% lower than the area measured from transverse images, possibly due to the compression of the artery along the image depth by transducer pressure. Conclusions: FFAC is a robust and sensitive vasomotion segmentation algorithm in both transverse and longitudinal views. Transverse imaging may offer advantages over longitudinal imaging: transverse measurements are more consistent, possibly because the method is less sensitive to variations in transducer pressure during imaging.
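The ECG extraction step described above (thresholding a running finite-difference image between consecutive frames) can be sketched minimally; the threshold value is an assumption, not taken from the paper:

```python
import numpy as np

def extract_overlay_masks(frames, thresh=30):
    """Locate the moving ECG trace overlay in an ultrasound cine loop by
    thresholding the absolute finite difference between consecutive frames.
    `frames` is a list of 2-D uint8 arrays; returns one boolean mask per
    consecutive frame pair. The threshold of 30 grey levels is an assumed
    value for illustration."""
    masks = []
    for prev, cur in zip(frames[:-1], frames[1:]):
        # Widen the dtype before subtracting to avoid uint8 wraparound
        diff = np.abs(cur.astype(np.int16) - prev.astype(np.int16))
        masks.append(diff > thresh)
    return masks
```

Static anatomy cancels in the difference image, so only the advancing trace (and other frame-to-frame motion) survives the threshold.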
VIEWDEX: an efficient and easy-to-use software for observer performance studies.
Håkansson, Markus; Svensson, Sune; Zachrisson, Sara; Svalkvist, Angelica; Båth, Magnus; Månsson, Lars Gunnar
2010-01-01
The development of investigation techniques, image processing, workstation monitors, analysis tools, etc., within the field of radiology is vast, and efficient tools for evaluating and optimising image and investigation quality are important. ViewDEX (Viewer for Digital Evaluation of X-ray images) is an image viewer and task manager suitable for research and optimisation tasks in medical imaging. ViewDEX is DICOM compatible, and the features of the interface (tasks, image handling and functionality) are general and flexible. The configuration of a study and its output (for example, the answers given) can be edited in any text editor. ViewDEX is developed in Java and can run from any disc area connected to a computer. It is free to use for non-commercial purposes and can be downloaded from http://www.vgregion.se/sas/viewdex. In the present work, an evaluation of the efficiency of ViewDEX for receiver operating characteristic (ROC) studies, free-response ROC (FROC) studies and visual grading (VG) studies was conducted. For VG studies, the total scoring rate depended on the number of criteria per case. A scoring rate of approximately 150 cases h⁻¹ can be expected for a typical VG study using single images and five anatomical criteria. For ROC and FROC studies using clinical images, the scoring rate was approximately 100 cases h⁻¹ using single images and approximately 25 cases h⁻¹ using image stacks (approximately 50 images per case). In conclusion, ViewDEX is efficient and easy-to-use software for observer performance studies.
2017-05-25
This sequence of enhanced-color images shows how quickly the viewing geometry changes for NASA's Juno spacecraft as it swoops by Jupiter. The images were obtained by JunoCam. Once every 53 days the Juno spacecraft swings close to Jupiter, speeding over its clouds. In just two hours, the spacecraft travels from a perch over Jupiter's north pole through its closest approach (perijove), then passes over the south pole on its way back out. This sequence shows 14 enhanced-color images. The first image on the left shows the entire half-lit globe of Jupiter, with the north pole approximately in the center. As the spacecraft gets closer to Jupiter, the horizon moves in and the range of visible latitudes shrinks. The third and fourth images in this sequence show the north polar region rotating away from our view while a band of wavy clouds at northern mid-latitudes comes into view. By the fifth image of the sequence the band of turbulent clouds is nicely centered in the image. The seventh and eighth images were taken just before the spacecraft was at its closest point to Jupiter, near Jupiter's equator. Even though these two pictures were taken just four minutes apart, the view is changing quickly. As the spacecraft crossed into the southern hemisphere, the bright "south tropical zone" dominates the ninth, 10th and 11th images. The white ovals in a feature nicknamed Jupiter's "String of Pearls" are visible in the 12th and 13th images. In the 14th image Juno views Jupiter's south poles. https://photojournal.jpl.nasa.gov/catalog/PIA21645
Correction-free pyrometry in radiant wall furnaces
NASA Technical Reports Server (NTRS)
Thomas, Andrew S. W. (Inventor)
1994-01-01
A specular, spherical, or near-spherical target is located within a furnace having inner walls and a viewing window. A pyrometer located outside the furnace 'views' the target through pyrometer optics and the window, and it is positioned so that its detector sees only the image of the viewing window on the target. Since this image is free of any image of the furnace walls, it is free from wall radiance, and correction-free target radiance is obtained. The pyrometer location is determined through a nonparaxial optical analysis employing differential optical ray tracing methods to derive a series of exact relations for the image location.
Study on Hybrid Image Search Technology Based on Texts and Contents
NASA Astrophysics Data System (ADS)
Wang, H. T.; Ma, F. L.; Yan, C.; Pan, H.
2018-05-01
Text-based and content-based image search are first studied separately. A text-based image feature extraction method is proposed that integrates statistical and topic features, addressing the limitation of extracting keywords from statistical word features alone. A search-by-image method based on multi-feature fusion is then proposed, in view of the imprecision of content-based image search using a single feature. Because the text-based and content-based methods differ and are difficult to fuse directly, a layered search method is proposed that relies primarily on text-based search and secondarily on content-based search. The feasibility and effectiveness of the hybrid search algorithm are verified experimentally.
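The layered search idea (text-based ranking first, content-based re-ranking of the shortlist second) might be sketched as follows; the term-overlap and Euclidean-distance scores are simplified stand-ins for the paper's statistical/topic text features and fused visual features:

```python
import numpy as np

def layered_search(query_terms, query_feat, index, text_top=100, final_top=10):
    """Two-stage hybrid search. `index` is a list of
    (doc_id, term_set, feature_vector) records; both scoring functions are
    hypothetical simplifications of the paper's features.
    Stage 1 (primary): rank all documents by text-term overlap.
    Stage 2 (secondary): re-rank the stage-1 shortlist by visual-feature
    distance to the query image."""
    # Stage 1: text-based shortlist by term overlap with the query
    scored = [(len(query_terms & terms), doc_id, feat)
              for doc_id, terms, feat in index]
    scored.sort(key=lambda x: -x[0])
    shortlist = scored[:text_top]
    # Stage 2: content-based re-ranking of the shortlist only
    reranked = sorted(
        shortlist,
        key=lambda x: np.linalg.norm(np.asarray(x[2]) - query_feat))
    return [doc_id for _, doc_id, _ in reranked[:final_top]]
```

Restricting the (more expensive) content comparison to the text shortlist is what makes the layered scheme cheaper than fusing both scores over the whole collection.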
Design of a single projector multiview 3D display system
NASA Astrophysics Data System (ADS)
Geng, Jason
2014-03-01
Multiview three-dimensional (3D) display is able to provide horizontal parallax to viewers, with high-resolution, full-color images presented to each view. Most multiview 3D display systems are designed and implemented using multiple projectors, each generating images for one view. Although this multi-projector design strategy is conceptually straightforward, its implementation often leads to a very expensive system and complicated calibration procedures. Even for a multiview system with a moderate number of projectors (e.g., 32 or 64), the cost of a multi-projector 3D display system may become prohibitive due to the cost and complexity of integrating multiple projectors. In this article, we describe an optical design technique for a class of multiview 3D display systems that use only a single projector. In this single-projector multiview (SPM) design, the multiple views are generated in a time-multiplexed fashion by a single high-speed projector with specially designed optical components, a scanning mirror, and a reflective mirror array. Images of all views are generated sequentially and projected via the specially designed optical system from different viewing directions toward a 3D display screen. The single projector is thus able to generate the equivalent number of multiview images from multiple viewing directions, fulfilling the tasks of multiple projectors. An obvious advantage of the proposed SPM technique is the significant reduction of cost, size, and complexity, especially when the number of views is high. The SPM strategy also avoids the time-consuming procedures of multi-projector calibration. The design method is flexible and scalable and can accommodate systems with different numbers of views.
Opportunity View During Exploration in 'Duck Bay,' Sols 1506-1510 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
Left-eye and right-eye views of a color stereo pair for PIA11787 [figures removed for brevity, see original site]. NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo, full-circle view of the rover's surroundings on the 1,506th through 1,510th Martian days, or sols, of Opportunity's mission on Mars (April 19-23, 2008). North is at the top. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The site is within an alcove called 'Duck Bay' in the western portion of Victoria Crater. Victoria Crater is about 800 meters (half a mile) wide. Opportunity had descended into the crater at the top of Duck Bay 7 months earlier. By the time the rover acquired this view, it had examined rock layers inside the rim. Opportunity was headed for a closer look at the base of a promontory called 'Cape Verde,' the cliff at about the 2-o'clock position of this image, before leaving Victoria. The face of Cape Verde is about 6 meters (20 feet) tall. Just clockwise from Cape Verde is the main bowl of Victoria Crater, with sand dunes at the bottom. A promontory called 'Cabo Frio,' at the southern side of Duck Bay, stands near the 6-o'clock position of the image. This view is presented as a cylindrical-perspective projection with geometric seam correction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Y; Olsen, J.; Parikh, P.
2014-06-01
Purpose: Evaluate commonly used segmentation algorithms on a commercially available real-time MR image guided radiotherapy (MR-IGRT) system (ViewRay), compare the strengths and weaknesses of each method, with the purpose of improving motion tracking for more accurate radiotherapy. Methods: MR motion images of bladder, kidney, duodenum, and liver tumor were acquired for three patients using a commercial on-board MR imaging system and an imaging protocol used during MR-IGRT. A series of 40 frames were selected for each case to cover at least 3 respiratory cycles. Thresholding, Canny edge detection, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE), along with the ViewRay treatment planning and delivery system (TPDS) were included in the comparisons. To evaluate the segmentation results, an expert manual contouring of the organs or tumor from a physician was used as a ground-truth. Metrics of sensitivity, specificity, Jaccard similarity, and Dice coefficient were computed for comparison. Results: In the segmentation of single image frames, all methods successfully segmented the bladder and kidney, but only FKM, KHM and TPDS were able to segment the liver tumor and the duodenum. For segmenting motion image series, the TPDS method had the highest sensitivity, Jaccard, and Dice coefficients in segmenting bladder and kidney, while FKM and KHM had a slightly higher specificity. A similar pattern was observed when segmenting the liver tumor and the duodenum. The Canny method is not suitable for consistently segmenting motion frames in an automated process, while thresholding and RD-LSE cannot consistently segment a liver tumor and the duodenum. Conclusion: The study compared six different segmentation methods and showed the effectiveness of the ViewRay TPDS algorithm in segmenting motion images during MR-IGRT.
Future studies include a selection of conformal segmentation methods based on image/organ-specific information, different filtering methods and their influences on the segmentation results. Parag Parikh receives a research grant from ViewRay. Sasa Mutic has consulting and research agreements with ViewRay. Yanle Hu receives travel reimbursement from ViewRay. Iwan Kawrakow and James Dempsey are ViewRay employees.
Saturn B Ring, Finer Than Ever
2017-01-30
This image shows a region in Saturn's outer B ring. NASA's Cassini spacecraft viewed this area at a level of detail twice as high as it had ever been observed before. And from this view, it is clear that there are still finer details to uncover. Researchers have yet to determine what generated the rich structure seen in this view, but they hope detailed images like this will help them unravel the mystery. In order to preserve the finest details, this image has not been processed to remove the many small bright blemishes, which are created by cosmic rays and charged particle radiation near the planet. The image was taken in visible light with the Cassini spacecraft wide-angle camera on Dec. 18, 2016. The view was obtained at a distance of approximately 32,000 miles (51,000 kilometers) from the rings, and looks toward the unilluminated side of the rings. Image scale is about a quarter-mile (360 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA21058
Drill Bit Tip on Mars Rover Curiosity, Head-on View
2013-02-04
This head-on view shows the tip of the drill bit on NASA's Mars rover Curiosity. The view merges two exposures taken by the remote micro-imager in the rover's ChemCam instrument at different focus settings.
Posteroanterior versus anteroposterior lumbar spine radiology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsuno, M.M.; Shu, G.J.
The posteroanterior view of the lumbar spine offers important advantages, including radiation protection and image quality, that have been studied by various investigators. Investigators have shown that sensitive tissues receive a lower radiation dose in the posteroanterior view of the spine for scoliosis screening and intracranial tomography, without altering image quality. This paper emphasizes the importance of the radiation-safety aspect of the posteroanterior view and shows the improvement in shape distortion in the lumbar vertebrae.
A multi-directional backlight for a wide-angle, glasses-free three-dimensional display.
Fattal, David; Peng, Zhen; Tran, Tho; Vo, Sonny; Fiorentino, Marco; Brug, Jim; Beausoleil, Raymond G
2013-03-21
Multiview three-dimensional (3D) displays can project the correct perspectives of a 3D image in many spatial directions simultaneously. They provide a 3D stereoscopic experience to many viewers at the same time with full motion parallax and do not require special glasses or eye tracking. None of the leading multiview 3D solutions is particularly well suited to mobile devices (watches, mobile phones or tablets), which require the combination of a thin, portable form factor, a high spatial resolution and a wide full-parallax view zone (for short viewing distance from potentially steep angles). Here we introduce a multi-directional diffractive backlight technology that permits the rendering of high-resolution, full-parallax 3D images in a very wide view zone (up to 180 degrees in principle) at an observation distance of up to a metre. The key to our design is a guided-wave illumination technique based on light-emitting diodes that produces wide-angle multiview images in colour from a thin planar transparent lightguide. Pixels associated with different views or colours are spatially multiplexed and can be independently addressed and modulated at video rate using an external shutter plane. To illustrate the capabilities of this technology, we use simple ink masks or a high-resolution commercial liquid-crystal display unit to demonstrate passive and active (30 frames per second) modulation of a 64-view backlight, producing 3D images with a spatial resolution of 88 pixels per inch and full-motion parallax in an unprecedented view zone of 90 degrees. We also present several transparent hand-held prototypes showing animated sequences of up to six different 200-view images at a resolution of 127 pixels per inch.
GF-7 Imaging Simulation and Dsm Accuracy Estimate
NASA Astrophysics Data System (ADS)
Yue, Q.; Tang, X.; Gao, X.
2017-05-01
GF-7 is a two-line-array stereo imaging satellite for surveying and mapping, scheduled for launch in 2018. Its resolution is about 0.8 m at the subastral point, corresponding to a swath width of 20 km, and the viewing angles of its forward and backward cameras are 5 and 26 degrees. This paper proposes an imaging simulation method for GF-7 stereo images. WorldView-2 stereo images were used as the basic data for the simulation; that is, rather than using a DSM and DOM as basic data (an "ortho-to-stereo" method), we used a "stereo-to-stereo" method, which better reflects the geometric and radiometric differences between looking angles. The drawback is that geometric error is introduced by two factors: differences in looking angle between the basic and simulated images, and inaccurate or absent ground reference data. We generated a DSM from the WorldView-2 stereo images. This WorldView-2 DSM was used both as the reference DSM for estimating the accuracy of the DSM generated from simulated GF-7 stereo images and as "ground truth" for establishing the relationship between WorldView-2 image points and simulated image points. Static MTF was simulated by filtering on the instantaneous focal-plane "image". SNR was simulated in the electronic sense: the digital value of a WorldView-2 image point was converted to radiance and used as the radiance seen by the simulated GF-7 camera. This radiance was converted to an electron count n according to the physical parameters of the GF-7 camera, and the noise electron count n1 was drawn as a random number between -√n and √n. The overall electron count obtained by the TDI CCD was accumulated and converted to the digital value of the simulated GF-7 image. Sinusoidal curves with different amplitudes, frequencies, and initial phases were used as attitude curves. Geometric installation errors of the CCD tiles were also simulated, considering rotation and translation factors.
An accuracy estimate was made for DSM generated from simulated images.
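The SNR recipe in the abstract (radiance to an electron count n, noise drawn between -√n and √n, accumulation over TDI stages, conversion back to a digital value) can be sketched as follows; the gain and stage count are assumed illustrative values, not actual GF-7 camera parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_tdi_pixel(radiance, gain=5000.0, stages=16):
    """Simulate one pixel of a TDI CCD following the abstract's noise model.
    Per stage: signal electrons n = radiance * gain (gain is an assumed
    radiance-to-electrons factor), plus a noise count drawn uniformly in
    [-sqrt(n), sqrt(n)]. The electrons accumulate over the TDI stages and
    are converted back to a normalised digital value."""
    total = 0.0
    for _ in range(stages):
        n = radiance * gain                      # signal electrons this stage
        noise = rng.uniform(-np.sqrt(n), np.sqrt(n))
        total += n + noise
    # Assumed linear electron-to-DN conversion, normalised by full signal
    return total / (gain * stages)
```

Because independent noise draws accumulate over the stages while the signal adds coherently, the relative noise shrinks roughly as 1/√stages, which is the usual motivation for TDI imaging.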
Schneider, Matthias; Pistritto, Anna Maria; Gerges, Christian; Gerges, Mario; Binder, Christina; Lang, Irene; Maurer, Gerald; Binder, Thomas; Goliasch, Georg
2018-05-01
Pulmonary hypertension (PH) is a disease with severe morbidity and mortality. Echocardiography plays an essential role in the screening of PH. The quality of the acquired continuous-wave Doppler signal is the major limitation of the method and can greatly affect the accuracy of estimated pulmonary pressures. The aim of this study was to evaluate the clinical need to image from multiple ultrasound windows in patients with suspected pulmonary hypertension. We prospectively evaluated 65 patients (43% male, mean age 67.2 years) with echocardiography and right heart catheterization. 17% had invasively normal pulmonary pressures; 83% had pulmonary hypertension. Peak tricuspid regurgitation (TR) velocity was imaged in five echocardiographic views. A sufficient Doppler signal was recorded in 94% of the patients. The correlation of overall peak TR velocity with invasively measured systolic pulmonary artery pressure was r = 0.83 (p < 0.001). Considering all five imaging windows resulted in a sensitivity of 87% and a specificity of 91% for the correct diagnosis of PH, with an AUC of 0.89, significantly better than sole imaging from the right ventricular modified apical four-chamber view (AUC 0.85, p = 0.0395). Additional imaging from atypical views changed the overall peak TR velocity in 32% of the patients. A multiple-view approach changed the echocardiographic diagnosis of PH in 11% of the patients as opposed to sole imaging from an apical four-chamber view. This study comprehensively assessed the impact on clinical decision making of evaluating patients with an echocardiographic multiplane approach for suspected PH. This approach substantially increased sensitivity without a decrease in specificity.
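For context, echocardiographic screening of this kind typically converts the peak TR velocity to a pressure estimate with the standard simplified Bernoulli equation; a minimal sketch, with an assumed constant right atrial pressure (the study itself compared velocities directly against catheterisation):

```python
def estimate_spap(tr_velocity_ms, ra_pressure_mmhg=5.0):
    """Simplified-Bernoulli estimate of systolic pulmonary artery pressure:
    sPAP ≈ 4 * v**2 + RAP, with v the peak TR jet velocity in m/s and
    pressures in mmHg. The fixed 5 mmHg right atrial pressure is an assumed
    default; in practice RAP is itself estimated from IVC size/collapse."""
    return 4.0 * tr_velocity_ms ** 2 + ra_pressure_mmhg
```

The quadratic dependence on velocity is why Doppler signal quality matters so much: a small underestimate of the peak TR velocity produces a disproportionately large underestimate of pressure.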
2013-12-23
The globe of Saturn, seen here in natural color, is reminiscent of a holiday ornament in this wide-angle view from NASA's Cassini spacecraft. The characteristic hexagonal shape of Saturn's northern jet stream, somewhat yellow here, is visible. At the pole lies a Saturnian version of a high-speed hurricane, eye and all. This view is centered on terrain at 75 degrees north latitude, 120 degrees west longitude. Images taken using red, green and blue spectral filters were combined to create this natural-color view. The images were taken with the Cassini spacecraft wide-angle camera on July 22, 2013. This view was acquired at a distance of approximately 611,000 miles (984,000 kilometers) from Saturn. Image scale is 51 miles (82 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA17175
JuxtaView - A tool for interactive visualization of large imagery on scalable tiled displays
Krishnaprasad, N.K.; Vishwanath, V.; Venkataraman, S.; Rao, A.G.; Renambot, L.; Leigh, J.; Johnson, A.E.; Davis, B.
2004-01-01
JuxtaView is a cluster-based application for viewing ultra-high-resolution images on scalable tiled displays. We present in JuxtaView, a new parallel computing and distributed memory approach for out-of-core montage visualization, using LambdaRAM, a software-based network-level cache system. The ultimate goal of JuxtaView is to enable a user to interactively roam through potentially terabytes of distributed, spatially referenced image data such as those from electron microscopes, satellites and aerial photographs. In working towards this goal, we describe our first prototype implemented over a local area network, where the image is distributed using LambdaRAM, on the memory of all nodes of a PC cluster driving a tiled display wall. Aggressive pre-fetching schemes employed by LambdaRAM help to reduce latency involved in remote memory access. We compare LambdaRAM with a more traditional memory-mapped file approach for out-of-core visualization. © 2004 IEEE.
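The on-demand loading and aggressive prefetching described above can be sketched as a toy LRU tile cache; the `loader` callable and the 8-neighbourhood prefetch policy are illustrative assumptions, not LambdaRAM's actual distributed design:

```python
from collections import OrderedDict

class TileCache:
    """Toy out-of-core tile cache in the spirit of JuxtaView/LambdaRAM:
    tiles are fetched on demand via a hypothetical `loader` callable mapping
    (row, col) -> tile data, kept in an LRU cache of bounded capacity, and
    the neighbours of each requested tile are prefetched so that roaming to
    an adjacent tile hits the cache instead of remote storage."""

    def __init__(self, loader, capacity=64):
        self.loader = loader
        self.capacity = capacity
        self.cache = OrderedDict()

    def _load(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)          # mark as recently used
        else:
            self.cache[key] = self.loader(key)
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict least-recently-used
        return self.cache[key]

    def get(self, row, col):
        tile = self._load((row, col))
        # Prefetch the 8-neighbourhood to hide latency while the user roams
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0):
                    self._load((row + dr, col + dc))
        return tile
```

A real system would prefetch asynchronously and weight the policy by the user's pan direction; the synchronous version above only shows the cache-plus-prefetch structure.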
Anaglyph Image Technology As a Visualization Tool for Teaching Geology of National Parks
NASA Astrophysics Data System (ADS)
Stoffer, P. W.; Phillips, E.; Messina, P.
2003-12-01
Anaglyphic stereo viewing technology emerged in the mid-1800s. Anaglyphs use offset images in contrasting colors (typically red and cyan) that, when viewed through color filters, produce a three-dimensional (3-D) image. Modern anaglyph image technology has become increasingly easy to use and relatively inexpensive using digital cameras, scanners, color printing, and common image manipulation software. Perhaps the primary drawbacks of anaglyph images are visualization problems with primary colors (such as flowers, bright clothing, or blue sky) and distortion factors in large depth-of-field images. However, anaglyphs are more versatile than polarization techniques since they can be printed, displayed on computer screens (such as on websites), or projected with a single projector (as slides or digital images), and red and cyan viewing glasses cost less than polarization glasses and other 3-D viewing alternatives. Anaglyph images are especially well suited for most natural landscapes, such as views dominated by natural earth tones (grays, browns, greens), and they work well for sepia and black-and-white images (making the conversion of historic stereo photography into anaglyphs easy). We used a simple stereo camera setup incorporating two digital cameras with a rigid base to photograph landscape features in national parks (including arches, caverns, cactus, forests, and coastlines). We also scanned historic stereographic images. Using common digital image manipulation software we created websites featuring anaglyphs of geologic features from national parks. We used the same images for popular 3-D poster displays at the U.S. Geological Survey Open House 2003 in Menlo Park, CA. Anaglyph photography could easily be used in combined educational outdoor activities and laboratory exercises.
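The red/cyan combination described above is simple to sketch digitally: take the red channel from the left-eye image and the cyan (green and blue) channels from the right-eye image, matching the red-lens-on-the-left viewing convention:

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Build a red/cyan anaglyph from two equally sized H x W x 3 uint8
    arrays: red comes from the left-eye image, green and blue (cyan) from
    the right-eye image. Viewed through red/cyan glasses (red lens on the
    left eye), each eye sees only its intended image."""
    out = right_rgb.copy()          # keep green and blue from the right eye
    out[..., 0] = left_rgb[..., 0]  # replace red with the left eye's red
    return out
```

This channel swap is exactly why saturated primary colors cause trouble: a pure-red object vanishes for the eye behind the cyan filter, breaking the stereo fusion the technique depends on.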
Atmospheric Science Data Center
2014-05-15
... the Multi-angle Imaging SpectroRadiometer (MISR). On the left, a natural-color view acquired by MISR's vertical-viewing (nadir) camera ... Gunnison River at the city of Grand Junction. The striking "L" shaped feature in the lower image center is a sandstone monocline known as ...
2010-01-14
This image, produced from instrument data aboard NASA Space Shuttle Endeavour, is a stereoscopic view of the topography of Port-au-Prince, Haiti where a magnitude 7.0 earthquake occurred on January 12, 2010. You need 3-D glasses to view this image.
Sharon, Jeffrey D; Northcutt, Benjamin G; Aygun, Nafi; Francis, Howard W
2016-10-01
To study the quality and usability of magnetic resonance imaging (MRI) obtained with a cochlear implant magnet in situ. Retrospective chart review. Tertiary care center. All patients who underwent brain MRI with a cochlear implant magnet in situ from 2007 to 2016. None. Grade of view of the ipsilateral internal auditory canal (IAC) and cerebellopontine angle (CPA). Inclusion criteria were met by 765 image sequences in 57 MRI brain scans. For the ipsilateral IAC, significant predictors of a grade 1 (normal) view included: absence of fat saturation algorithm (p = 0.001), nonaxial plane of imaging (p = 0.01), and contrast administration (p = 0.001). For the ipsilateral CPA, significant predictors of a grade 1 view included: absence of fat saturation algorithm (p = 0.001), high-resolution images (p = 0.001), and nonaxial plane of imaging (p = 0.001). Overall, coronal T1 high-resolution images produced the highest percentage of grade 1 views (89%). Fat saturation also caused a secondary ring-shaped distortion artifact, which impaired the view of the contralateral CPA 52.7% of the time, and the contralateral IAC 42.8% of the time. MRI scans without any usable (grade 1) sequences had fewer overall sequences (N = 4.3) than scans with at least one usable sequence (N = 7.1, p = 0.001). MRI image quality with a cochlear implant magnet in situ depends on several factors, which can be modified to maximize image quality in this unique patient population.
3D multi-view convolutional neural networks for lung nodule classification
Kang, Guixia; Hou, Beibei; Zhang, Ningbo
2017-01-01
The 3D convolutional neural network (CNN) is able to make full use of the spatial 3D context information of lung nodules, and the multi-view strategy has been shown to be useful for improving the performance of 2D CNN in classifying lung nodules. In this paper, we explore the classification of lung nodules using the 3D multi-view convolutional neural networks (MV-CNN) with both chain architecture and directed acyclic graph architecture, including 3D Inception and 3D Inception-ResNet. All networks employ the multi-view-one-network strategy. We conduct a binary classification (benign and malignant) and a ternary classification (benign, primary malignant and metastatic malignant) on Computed Tomography (CT) images from Lung Image Database Consortium and Image Database Resource Initiative database (LIDC-IDRI). All results are obtained via 10-fold cross validation. As regards the MV-CNN with chain architecture, results show that the performance of 3D MV-CNN surpasses that of 2D MV-CNN by a significant margin. Finally, a 3D Inception network achieved an error rate of 4.59% for the binary classification and 7.70% for the ternary classification, both of which represent superior results for the corresponding task. We compare the multi-view-one-network strategy with the one-view-one-network strategy. The results reveal that the multi-view-one-network strategy can achieve a lower error rate than the one-view-one-network strategy. PMID:29145492
Chen, Guang-Hong; Li, Yinsheng
2015-08-01
In x-ray computed tomography (CT), a violation of the Tuy data sufficiency condition leads to limited-view artifacts. In some applications, it is desirable to use data corresponding to a narrow temporal window to reconstruct images with reduced temporal-average artifacts. However, the need to reduce temporal-average artifacts in practice may result in a violation of the Tuy condition and thus undesirable limited-view artifacts. In this paper, the authors present a new iterative reconstruction method, synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON), to eliminate limited-view artifacts using data acquired within an ultranarrow temporal window that severely violates the Tuy condition. In time-resolved contrast enhanced CT acquisitions, image contrast dynamically changes during data acquisition. Each image reconstructed from data acquired in a given temporal window represents one time frame and can be denoted as an image vector. Conventionally, each individual time frame is reconstructed independently. In this paper, all image frames are grouped into a spatial-temporal image matrix and are reconstructed together. Rather than the spatial and/or temporal smoothing regularizers commonly used in iterative image reconstruction, the nuclear norm of the spatial-temporal image matrix is used in SMART-RECON to regularize the reconstruction of all image time frames. This regularizer exploits the low-dimensional structure of the spatial-temporal image matrix to mitigate limited-view artifacts when an ultranarrow temporal window is desired in some applications to reduce temporal-average artifacts. Both numerical simulations in two dimensional image slices with known ground truth and in vivo human subject data acquired in a contrast enhanced cone beam CT exam have been used to validate the proposed SMART-RECON algorithm and to demonstrate the initial performance of the algorithm. 
Reconstruction errors and temporal fidelity of the reconstructed images were quantified using the relative root mean square error (rRMSE) and the universal quality index (UQI) in numerical simulations. The performance of the SMART-RECON algorithm was compared with that of the prior image constrained compressed sensing (PICCS) reconstruction, quantitatively in simulations and qualitatively in the human subject exam. In numerical simulations, the 240° short-scan angular span was divided into four consecutive 60° angular subsectors. SMART-RECON yielded four high-temporal-fidelity images without limited-view artifacts. The average rRMSE is 16%, and the UQIs are 0.96 and 0.95 for the two local regions of interest, respectively. In contrast, the corresponding average rRMSE and UQIs are 25%, 0.78, and 0.81, respectively, for the PICCS reconstruction. Note that only one filtered backprojection image can be reconstructed from the same data set, with an average rRMSE and UQIs of 45%, 0.71, and 0.79, respectively, to benchmark reconstruction accuracies. For in vivo contrast-enhanced cone beam CT data acquired from a short-scan angular span of 200°, three 66° angular subsectors were used in SMART-RECON. The results demonstrated clear contrast differences in the three SMART-RECON reconstructed image volumes, without limited-view artifacts. In contrast, for the same angular sectors, PICCS could not reconstruct images free of limited-view artifacts and with clear contrast differences among the three reconstructed image volumes. In time-resolved CT, the proposed SMART-RECON method provides a new way to eliminate limited-view artifacts using data acquired in an ultranarrow temporal window, corresponding to approximately 60° angular subsectors.
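The nuclear-norm regularizer at the heart of SMART-RECON is commonly handled with singular value thresholding, the proximal operator of the nuclear norm applied to the spatial-temporal matrix whose columns are the flattened time frames. A minimal sketch of that one step (the toy matrix, the column arrangement, and the threshold value are illustrative assumptions, not the authors' full iterative reconstruction):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of
    tau * ||X||_* (nuclear norm), which promotes a low-rank
    spatial-temporal matrix whose columns are the time frames."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_thr = np.maximum(s - tau, 0.0)      # soft-threshold the singular values
    return U @ np.diag(s_thr) @ Vt

# Toy spatial-temporal matrix: 2 time frames (columns), nearly rank-1
X = np.array([[1.0, 1.0],
              [2.0, 2.1],
              [3.0, 2.9]])
X_lowrank = svt(X, tau=0.5)
```

In a full algorithm this step alternates with a data-fidelity update enforcing consistency with the measured projections.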
Dual-surface dielectric depth detector for holographic millimeter-wave security scanners
NASA Astrophysics Data System (ADS)
McMakin, Douglas L.; Keller, Paul E.; Sheen, David M.; Hall, Thomas E.
2009-05-01
The Transportation Security Administration (TSA) is presently deploying millimeter-wave whole body scanners at over 20 airports in the United States. Threats that may be concealed on a person are displayed to the security operator of this scanner. "Passenger privacy is ensured through the anonymity of the image. The officer attending the passenger cannot view the image, and the officer viewing the image is remotely located and cannot see the passenger. Additionally, the image cannot be stored, transmitted or printed and is deleted immediately after being viewed. Finally, the facial area of the image has been blurred to further ensure privacy." Pacific Northwest National Laboratory (PNNL) originated research into this novel security technology which has been independently commercialized by L-3 Communications, SafeView, Inc. PNNL continues to perform fundamental research into improved software techniques which are applicable to the field of holographic security screening technology. This includes performing significant research to remove human features from the imagery. Both physical and software imaging techniques have been employed. The physical imaging techniques include polarization diversity illumination and reception, dual frequency implementation, and high frequency imaging at 100 GHz. This paper will focus on a software privacy technique using a dual surface dielectric depth detector method.
64. VIEW FROM THE NORTHEAST IN THE NORTHEAST QUADRANT. DETAIL ...
64. VIEW FROM THE NORTHEAST IN THE NORTHEAST QUADRANT. DETAIL VIEW OF THE RIGHT FACE. A PORTION OF THE RIGHT SHOULDER ANGLE IS INCLUDED ON THE LEFT-SIDE OF THE IMAGE, WITH SCALE. - Fort Sumter, Charleston, Charleston County, SC
Europa Global Views in Natural and Enhanced Colors
1998-05-08
This color composite view combines violet, green, and infrared images of Jupiter's intriguing moon Europa, showing the moon in natural color (left) and in enhanced color designed to bring out subtle color differences in the surface (right).
View Ahead After Spirit's Sol 1861 Drive (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11977 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11977 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images combined into this stereo, 210-degree view of the rover's surroundings during the 1,861st to 1,863rd Martian days, or sols, of Spirit's surface mission (March 28 to 30, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The center of the scene is toward the south-southwest. East is on the left. West-northwest is on the right. The rover had driven 22.7 meters (74 feet) southwestward on Sol 1861 before beginning to take the frames in this view. The drive brought Spirit past the northwestern corner of Home Plate. In this view, the western edge of Home Plate is on the portion of the horizon farthest to the left. A mound in middle distance near the center of the view is called 'Tsiolkovsky' and is about 40 meters (about 130 feet) from the rover's position. This view is presented as a cylindrical-perspective projection with geometric seam correction.
ERIC Educational Resources Information Center
Kiang, Lisa; Blumenthal, Terry D.; Carlson, Erika N.; Lawson, Yolanda N.; Shell, J. Clark
2009-01-01
Physiologic reactivity to racially rejecting images was assessed in 35 young adults (10 males, 25 female) from African-American backgrounds using the startle probe paradigm. In a laboratory setting, participants viewed 16 images depicting racial rejection, racial acceptance, nonracial negative, and nonracial positive themes. While viewing these…
Optical cross-talk and visual comfort of a stereoscopic display used in a real-time application
NASA Astrophysics Data System (ADS)
Pala, S.; Stevens, R.; Surman, P.
2007-02-01
Many 3D systems work by presenting to the observer stereoscopic pairs of images that are combined to give the impression of a 3D image. Discomfort experienced when viewing for extended periods may be due to several factors, including the presence of optical crosstalk between the stereo image channels. In this paper we use two video cameras and two LCD panels viewed via a Helmholtz arrangement of mirrors, to display a stereoscopic image inherently free of crosstalk. Simple depth discrimination tasks are performed whilst viewing the 3D image and controlled amounts of image crosstalk are introduced by electronically mixing the video signals. Error monitoring and skin conductance are used as measures of workload as well as traditional subjective questionnaires. We report qualitative measurements of user workload under a variety of viewing conditions. This pilot study revealed a decrease in task performance and increased workload as crosstalk was increased. The observations will assist in the design of further trials planned to be conducted in a medical environment.
Fire and Smoke in Western Russia on August 4, 2010
2017-12-08
NASA image acquired August 4, 2010. Intense fires continued to rage in western Russia on August 4, 2010. Burning in dry peat bogs and forests, the fires produced a dense plume of smoke that reached across hundreds of kilometers. The Moderate Resolution Imaging Spectroradiometer (MODIS) captured this view of the fires and smoke in three consecutive overpasses on NASA's Terra satellite. The smooth gray-brown smoke hangs over the Russian landscape, completely obscuring the ground in places. The top image provides a close view of the fires immediately southeast of Moscow, while the lower image shows the full extent of the smoke plume. To read more about this image go to: earthobservatory.nasa.gov/IOTD/view.php?id=45046 NASA image courtesy Jeff Schmaltz, MODIS Rapid Response Team at NASA GSFC.
Jensen, Chad D; Kirwan, C Brock
2015-03-01
Research conducted with adults suggests that successful weight losers demonstrate greater activation in brain regions associated with executive control in response to viewing high-energy foods. No previous studies have examined these associations in adolescents. Functional neuroimaging was used to assess brain response to food images among groups of overweight (OW), normal-weight (NW), and successful weight-losing (SWL) adolescents. Eleven SWL, 12 NW, and 11 OW participants underwent functional magnetic resonance imaging while viewing images of high- and low-energy foods. When viewing high-energy food images, SWLs demonstrated greater activation in the dorsolateral prefrontal cortex (DLPFC) compared with OW and NW controls. Compared with NW and SWL groups, OW individuals demonstrated greater activation in the ventral striatum and anterior cingulate in response to food images. Adolescent SWLs demonstrated greater neural activation in the DLPFC compared with OW/NW controls when viewing high-energy food stimuli, which may indicate enhanced executive control. OW individuals' brain responses to food stimuli may indicate greater reward incentive processes than either SWL or NW groups. © 2015 The Obesity Society.
3-D movies using microprocessor-controlled optoelectronic spectacles
NASA Astrophysics Data System (ADS)
Jacobs, Ken; Karpf, Ron
2012-02-01
Despite rapid advances in technology, 3-D movies are impractical for general movie viewing. A new approach that opens all content for casual 3-D viewing is needed. 3Deeps--advanced microprocessor controlled optoelectronic spectacles--provides such a new approach to 3-D. 3Deeps works on a different principle than other methods for 3-D. 3-D movies typically use the asymmetry of dual images to produce stereopsis, necessitating costly dual-image content, complex formatting and transmission standards, and viewing via a corresponding selection device. In contrast, all 3Deeps requires to view movies in realistic depth is an illumination asymmetry--a controlled difference in optical density between the lenses. When a 2-D movie has been projected for viewing, 3Deeps converts every scene containing lateral motion into realistic 3-D. Put on 3Deeps spectacles for 3-D viewing, or remove them for viewing in 2-D. 3Deeps works for all analogue and digital 2-D content, by any mode of transmission, and for projection screens, digital or analogue monitors. An example using aerial photography is presented. A movie consisting of successive monoscopic aerial photographs appears in realistic 3-D when viewed through 3Deeps spectacles.
The composite classification problem in optical information processing
NASA Technical Reports Server (NTRS)
Hall, Eric B.
1995-01-01
Optical pattern recognition allows objects to be recognized from their images and permits their positional parameters to be estimated accurately in real time. The guiding principle behind optical pattern recognition is that a lens focusing a beam of coherent light modulated with an image produces the two-dimensional Fourier transform of that image. When the resulting output is further transformed by the matched filter corresponding to the original image, one obtains the autocorrelation function of the original image, which has a peak at the origin. Such a device is called an optical correlator and may be used to recognize and locate the image for which it is designed. (From a practical perspective, an approximation to the matched filter must be used, since the spatial light modulator (SLM) on which the filter is implemented usually does not allow one to independently control both the magnitude and phase of the filter.) Generally, one is not concerned with recognizing just a single image but is interested in recognizing a variety of rotated and scaled views of a particular image. In order to recognize these different views using an optical correlator, one may select a subset of these views (whose elements are called training images) and then use a composite filter that is designed to produce a correlation peak for each training image. Presumably, these peaks should be sharp and easily distinguishable from the surrounding correlation-plane values. In this report we consider two areas of research regarding composite optical correlators. First, we consider the question of how best to choose the training images that are used to design the composite filter. With regard to quantity, the number of training images should be large enough to adequately represent all possible views of the targeted object yet small enough to ensure that the resolution of the filter is not exhausted.
As for the images themselves, they should be distinct enough to avoid numerical difficulties yet similar enough to avoid gaps in which certain views of the target go unrecognized. One method that we introduce to study this problem, called probing, involves the creation of artificial imagery. The second problem we consider involves the classification of the composite filter's correlation-plane data. In particular, we would like to determine not only whether we are viewing a training image but also, if so, which training image is being viewed. This second problem is investigated using traditional M-ary hypothesis testing techniques.
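The matched-filter principle described above is easy to emulate digitally: multiplying the scene's Fourier spectrum by the conjugate spectrum of the target and inverse-transforming yields the cross-correlation, whose peak locates the target. A small NumPy sketch (the synthetic scene and template are illustrative, not the report's imagery):

```python
import numpy as np

def matched_filter_correlate(scene, template):
    """Digital analogue of the optical correlator: correlate a scene
    with the matched filter (conjugate Fourier spectrum) of a template.
    The correlation peak marks where the template appears."""
    F_scene = np.fft.fft2(scene)
    F_templ = np.fft.fft2(template, s=scene.shape)  # zero-pad to scene size
    return np.fft.ifft2(F_scene * np.conj(F_templ)).real

# Embed a small pattern in a larger empty scene and locate it
rng = np.random.default_rng(0)
template = rng.random((4, 4))
scene = np.zeros((16, 16))
scene[5:9, 7:11] = template
corr = matched_filter_correlate(scene, template)
peak = np.unravel_index(np.argmax(corr), scene.shape)
# peak is the (row, col) offset at which the template sits
```

A composite filter would replace the single template spectrum with a designed combination of training-image spectra, so that each training view produces its own correlation peak.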
Characteristics of mist 3D screen for projection type electro-holography
NASA Astrophysics Data System (ADS)
Sato, Koki; Okumura, Toshimichi; Kanaoka, Takumi; Koizumi, Shinya; Nishikawa, Satoko; Takano, Kunihiko
2006-01-01
A hologram reconstructs a full-parallax 3D image. Such an image appears more natural because focusing (accommodation) and convergence coincide with each other. In conventional electro-holography, however, the viewing angle of the reconstructed image is very small because of the limited pixel pitch of the display. We are developing a new space-projection method to obtain a large viewing angle: a white-color laser illuminates a single DMD panel displaying a time-shared computer-generated hologram (CGH) of the three RGB colors, and a 3D space screen composed of very fine water particles reconstructs the 3D image with a large viewing angle through scattering from the water particles.
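The pixel-pitch limit on the viewing angle follows from the grating equation: a pixelated hologram can only deflect light up to the first-order diffraction angle of its pixel grid. A back-of-the-envelope sketch (the 10 µm pitch and 532 nm wavelength are illustrative values, not this system's parameters):

```python
import math

def viewing_angle_deg(wavelength_m, pixel_pitch_m):
    """Maximum diffraction (viewing) angle of a pixelated hologram,
    from the grating equation: theta = 2 * asin(lambda / (2 * p))."""
    return math.degrees(2 * math.asin(wavelength_m / (2 * pixel_pitch_m)))

# Green light on a DMD-like panel with ~10 um pixel pitch:
angle = viewing_angle_deg(532e-9, 10e-6)   # only a few degrees
```

The formula makes the motivation concrete: halving the pixel pitch roughly doubles the viewing angle, which is why projection-based schemes are attractive when finer pixels are not available.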
Compression for radiological images
NASA Astrophysics Data System (ADS)
Wilson, Dennis L.
1992-07-01
The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.
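The core of a DCT-based scheme like the one described can be sketched in a few lines: transform a block, discard the smallest coefficients, and invert. This toy version (the 8×8 block size and the keep-the-largest-coefficients rule are illustrative simplifications of JPEG-style quantization, not the paper's exact variation):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: C[i, j] = cos(pi*(2j+1)*i / (2n)),
    scaled so that C @ C.T = I."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def compress_block(block, keep=8):
    """Toy lossy compression of a square block: 2D DCT, keep only the
    `keep` largest-magnitude coefficients, inverse DCT."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return C.T @ coeffs @ C

# A smooth brightness gradient compresses almost losslessly this way
block = np.outer(np.linspace(0, 1, 8), np.ones(8)) * 255
approx = compress_block(block, keep=8)
err = np.abs(approx - block).max()
```

Smooth regions concentrate their energy in a few low-frequency coefficients, which is exactly why local brightness changes compress well under the DCT.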
Surface Stereo Imager on Mars, Face-On
NASA Technical Reports Server (NTRS)
2008-01-01
This image is a view of NASA's Phoenix Mars Lander's Surface Stereo Imager (SSI) as seen by the lander's Robotic Arm Camera. This image was taken on the afternoon of the 116th Martian day, or sol, of the mission (September 22, 2008). The mast-mounted SSI, which provided the images used in the 360 degree panoramic view of Phoenix's landing site, is about 4 inches tall and 8 inches long. The two 'eyes' of the SSI seen in this image can take photos to create three-dimensional views of the landing site. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
Alam, Md Ashraful; Piao, Mei-Lan; Bang, Le Thanh; Kim, Nam
2013-10-01
Viewing-zone control of integral imaging (II) displays using a directional projection and elemental image (EI) resizing method is proposed. Directional projection of EIs with the same size of microlens pitch causes an EI mismatch at the EI plane. In this method, EIs are generated computationally using a newly introduced algorithm: the directional elemental image generation and resizing algorithm considering the directional projection geometry of each pixel as well as an EI resizing method to prevent the EI mismatch. Generated EIs are projected as a collimated projection beam with a predefined directional angle, either horizontally or vertically. The proposed II display system allows reconstruction of a 3D image within a predefined viewing zone that is determined by the directional projection angle.
NASA Technical Reports Server (NTRS)
2002-01-01
Korea and the Sea of Japan are obscured by swirls of pollution in this image taken by the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) on November 23, 2001. Image courtesy the SeaWiFS Project, NASA/Goddard Space Flight Center, and ORBIMAGE
[Dry view laser imager--a new economical photothermal imaging method].
Weberling, R
1996-11-01
Hard copies are currently produced by laser imagers with wet film processing, either in systems attached directly to the laser imager or in a darkroom. Variations in image quality resulting from not-always-optimal wet film development are frequent. A newly developed thermographic developing process for laser films, which requires no liquid or powdered chemicals, is environmentally preferable and reduces operating costs. The completely dry developing process provides permanent image documentation meeting the quality and safety requirements of RöV and BAK. One of the currently available systems of this type, the DryView Laser Imager, is inexpensive and easy to install. Its selective connection principle can be expanded as required and accepts digital and/or analog interfaces to all imaging systems (CT, MR, DR, US, NM) from the various manufacturers.
Wide-angle vision for road views
NASA Astrophysics Data System (ADS)
Huang, F.; Fehrs, K.-K.; Hartmann, G.; Klette, R.
2013-03-01
The field-of-view of a wide-angle image is greater than (say) 90 degrees, and so contains more information than available in a standard image. A wide field-of-view is more advantageous than standard input for understanding the geometry of 3D scenes, and for estimating the poses of panoramic sensors within such scenes. Thus, wide-angle imaging sensors and methodologies are commonly used in various road-safety, street surveillance, street virtual touring, or street 3D modelling applications. The paper reviews related wide-angle vision technologies by focusing on mathematical issues rather than on hardware.
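For a conventional pinhole-model camera, the horizontal field of view follows directly from sensor width and focal length, which makes the "greater than 90 degrees" threshold concrete. A quick sketch (the 36 mm sensor / 14 mm lens example is an illustrative assumption):

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal field of view of a pinhole-model camera:
    fov = 2 * atan(w / (2 * f))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# A 36 mm-wide sensor behind a 14 mm lens already exceeds the 90-degree mark
fov = horizontal_fov_deg(36, 14)
```

Fisheye and catadioptric panoramic sensors go beyond this model (the pinhole projection cannot reach 180 degrees), which is why wide-angle vision needs the specialized calibration methods the paper reviews.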
Kimport, Katrina; Weitz, Tracy A; Foster, Diana Greene
2014-12-01
In the United States, abortion opponents have supported legislation requiring that abortion patients be offered the opportunity to view their preprocedure ultrasound. Little research has examined women's interest in and emotional response to such viewing. Data from 702 women who received abortions at 30 facilities throughout the United States between 2008 and 2010 were analyzed. Mixed-effects multinomial logistic regression analysis was used to determine which characteristics were associated with being offered and choosing to view ultrasounds, and with reporting positive or negative emotional responses to viewing. Grounded theory analytic techniques were used to qualitatively describe women's reports of their emotional responses. Forty-eight percent of participants were offered the opportunity to view their ultrasound, and nulliparous women were more likely than others to receive an offer (odds ratio, 2.3). Sixty-five percent of these women (31% overall) chose to view the image; nulliparous women and those living in a state that regulates viewing were more likely than their counterparts to do so (1.7 and 2.5, respectively). Some 213 women reported emotional responses to viewing; neutral emotions (fine, nothing) were the most commonly reported ones, followed by negative emotions (sad, guilty, upset) and then positive emotions (happy, excited). Women who visited clinics with a policy of offering viewing had increased odds of reporting a negative emotion (2.6). Ultrasound viewing appears not to have a singular emotional effect. The presence of state regulation and facility policies matters for women's interest in and responses to viewing. Copyright © 2014 by the Guttmacher Institute.
MUSIC algorithm for imaging of a sound-hard arc in limited-view inverse scattering problem
NASA Astrophysics Data System (ADS)
Park, Won-Kwang
2017-07-01
The MUltiple SIgnal Classification (MUSIC) algorithm for non-iterative imaging of a sound-hard arc in the limited-view inverse scattering problem is considered. In order to uncover the mathematical structure of MUSIC, we derive a relationship between MUSIC and an infinite series of Bessel functions of integer order. This structure enables us to examine some properties of MUSIC in the limited-view problem. Numerical simulations are performed to support the identified structure of MUSIC.
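In discretized form, the MUSIC imaging function is a small linear-algebra routine: split the data matrix into signal and noise subspaces via the SVD, then plot the reciprocal of each test vector's noise-subspace norm; peaks flag the scatterers. A toy sketch (the synthetic rank-2 matrix stands in for the multistatic response matrix and is not the paper's sound-hard-arc model):

```python
import numpy as np

def music_spectrum(K, test_vectors, n_signal):
    """MUSIC imaging function: project test (steering) vectors onto the
    noise subspace of the data matrix K; 1 / ||P_noise a|| blows up at
    the true scatterer vectors."""
    U, s, Vt = np.linalg.svd(K)
    U_noise = U[:, n_signal:]                # noise subspace
    proj = U_noise.conj().T @ test_vectors   # noise-subspace components
    return 1.0 / np.linalg.norm(proj, axis=0)

# Toy example: a rank-2 data matrix built from two known vectors
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 2))              # the two "true" vectors
K = A @ A.T
grid = np.hstack([A, rng.standard_normal((8, 3))])  # 2 true + 3 random tests
spec = music_spectrum(K, grid, n_signal=2)
# spec is largest at the two true vectors (first two entries)
```

In limited-view problems the data matrix is built from restricted incident/observation directions, which degrades the subspace separation this routine relies on; that degradation is exactly what the Bessel-series structure characterizes.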
Yoon, Ki-Hyuk; Ju, Heongkyu; Kwon, Hyunkyung; Park, Inkyu; Kim, Sung-Kyu
2016-02-22
We present the optical characteristics of view images provided by a high-density multi-view autostereoscopic 3D display (HD-MVA3D) with a parallax barrier (PB). Diffraction effects, which become very important in a display system that uses a PB, are considered in a one-dimensional model of the 3D display, in which light from the display panel pixels is numerically propagated through the PB slits to the viewing zone. The simulation results are then compared to the corresponding experimental measurements and discussed. We demonstrate that, as a main parameter for view-image quality evaluation, the Fresnel number can be used to determine the PB slit aperture for the best performance of the display system. It is revealed that a set of display parameters giving a Fresnel number of ∼0.7 offers maximized brightness of the view images, while a set corresponding to a Fresnel number of 0.4∼0.5 offers minimized image crosstalk. The compromise between brightness and crosstalk enables optimization of the relative magnitude of the brightness to the crosstalk and leads to the choice of a display parameter set for the HD-MVA3D with a PB that satisfies the condition where the Fresnel number lies between 0.4 and 0.7.
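The Fresnel number used as the design parameter above has a standard definition in terms of aperture size, wavelength, and propagation distance. How the display's slit width and panel-to-barrier gap map onto these quantities is our illustrative assumption here, since the abstract does not spell it out:

```python
def fresnel_number(aperture_half_width_m, wavelength_m, distance_m):
    """Standard Fresnel number N_F = a^2 / (lambda * L), with a the
    aperture half-width and L the propagation distance."""
    return aperture_half_width_m ** 2 / (wavelength_m * distance_m)

# Illustrative numbers only: a 68 um barrier slit (a = 34 um), green
# light, and a 3 mm panel-to-barrier gap land near N_F ~ 0.7
n_f = fresnel_number(34e-6, 550e-9, 3e-3)
```

Large N_F means near-geometrical (shadow-like) propagation through the slit, small N_F means strongly diffracted light, so a mid-range value trading brightness against crosstalk is plausible.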
Hobi, Martina L.; Ginzler, Christian
2012-01-01
Digital surface models (DSMs) are widely used in forest science to model the forest canopy. Stereo pairs of very high resolution satellite and digital aerial images are relatively new, and their absolute accuracy for DSM generation is largely unknown. For an assessment of these input data, two DSMs based on a WorldView-2 stereo pair and an ADS80 DSM were generated with photogrammetric instruments. Rational polynomial coefficients (RPCs) define the orientation of the WorldView-2 satellite images, which can be enhanced with ground control points (GCPs). Thus two WorldView-2 DSMs were distinguished: a WorldView-2 RPCs-only DSM and a WorldView-2 GCP-enhanced RPCs DSM. The accuracy of the three DSMs was estimated with GPS measurements, manual stereo-measurements, and airborne laser scanning (ALS) data. With GCP-enhanced RPCs, the WorldView-2 image orientation could be optimised to a root mean square error (RMSE) of 0.56 m in planimetry and 0.32 m in height. This improvement in orientation allowed for a vertical median error of −0.24 m for the WorldView-2 GCP-enhanced RPCs DSM in flat terrain. Overall, the DSM based on ADS80 images showed the highest accuracy of the three models, with a median error of 0.08 m over bare ground. As the accuracy of a DSM varies with land cover, three classes were distinguished: herb and grass, forests, and artificial areas. The study suggested the ADS80 DSM to best model actual surface height in all three land cover classes, with median errors <1.1 m. The WorldView-2 GCP-enhanced RPCs model achieved good accuracy, too, with median errors of −0.43 m for the herb and grass vegetation and −0.26 m for artificial areas. Forested areas emerged as the most difficult land cover type for height modelling; still, with median errors of −1.85 m for the WorldView-2 GCP-enhanced RPCs model and −1.12 m for the ADS80 model, the input data sets evaluated here are quite promising for forest canopy modelling. PMID:22778645
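The accuracy figures quoted (RMSE, signed median error) are simple to reproduce for any set of DSM heights paired with reference measurements. A minimal sketch with made-up sample heights:

```python
import numpy as np

def dsm_errors(dsm, reference):
    """Vertical accuracy metrics used in DSM validation: RMSE and
    signed median error over corresponding height samples."""
    diff = np.asarray(dsm, dtype=float) - np.asarray(reference, dtype=float)
    return {"rmse": float(np.sqrt(np.mean(diff ** 2))),
            "median": float(np.median(diff))}

# Illustrative heights in metres (DSM vs. GPS/ALS reference)
stats = dsm_errors([10.2, 11.1, 9.7, 10.6], [10.0, 11.0, 10.0, 10.5])
```

The signed median reveals systematic bias (e.g. a DSM that consistently undercuts the canopy gives a negative median), while the RMSE also captures scatter; reporting both, as the study does, separates the two effects.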
Target recognition of log-polar ladar range images using moment invariants
NASA Astrophysics Data System (ADS)
Xia, Wenze; Han, Shaokun; Cao, Jie; Yu, Haoyong
2017-01-01
The ladar range image has received considerable attention in the automatic target recognition field. However, previous research does not cover target recognition using log-polar ladar range images. Therefore, we construct a target recognition system based on log-polar ladar range images in this paper. In this system, combined moment invariants and a backpropagation neural network are selected as the shape descriptor and shape classifier, respectively. In order to fully analyze the effect of the log-polar sampling pattern on recognition results, several comparative experiments based on simulated and real range images are carried out. Eventually, several important conclusions are drawn: (i) if combined moments are computed directly from log-polar range images, the translation, rotation, and scaling invariance of the combined moments is lost; (ii) when the object is located in the center of the field of view, the recognition rate of log-polar range images is less sensitive to changes in the field of view; (iii) as the object moves from the center to the edge of the field of view, the recognition performance of log-polar range images declines dramatically; (iv) log-polar range images have better noise robustness than Cartesian range images. Finally, we suggest that it is better to divide the field of view into a recognition area and a searching area in real applications.
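Conclusion (i) above can be illustrated with the first Hu moment invariant, which is translation invariant when computed on the original Cartesian grid; that property is exactly what is lost if the moments are computed directly on a log-polar resampling. A minimal pure-Python sketch; the abstract does not specify which combined moments the authors used:

```python
def hu_first_invariant(img):
    """First Hu invariant phi1 = eta20 + eta02 of a 2D intensity grid."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            m00 += v; m10 += x * v; m01 += y * v
    cx, cy = m10 / m00, m01 / m00          # centroid
    mu20 = mu02 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            mu20 += (x - cx) ** 2 * v      # central moments
            mu02 += (y - cy) ** 2 * v
    # Normalized central moments (order p+q = 2 divides by m00^2).
    return mu20 / m00 ** 2 + mu02 / m00 ** 2

square = [[1, 1], [1, 1]]
shifted = [[0, 0, 0], [0, 1, 1], [0, 1, 1]]  # same square, translated
assert abs(hu_first_invariant(square) - hu_first_invariant(shifted)) < 1e-12
```

Translating the object leaves phi1 unchanged because the moments are taken about the centroid; under a log-polar sampling centered on the sensor axis, a translation warps the object's shape nonlinearly, so the same computation no longer yields equal values.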
Boffeli, Troy J; Collier, Rachel C; Gervais, Samuel J
Assessing ankle stability in nondisplaced Lauge-Hansen supination external rotation type II injuries requires stress imaging. Gravity stress mortise imaging is routinely used as an alternative to manual stress imaging to assess deltoid integrity with the goal of differentiating type II from type IV injuries in cases without a posterior or medial fracture. A type II injury with a nondisplaced fibula fracture is typically treated with cast immobilization, and a type IV injury is considered unstable and often requires operative repair. The present case series (two patients) highlights a standardized 2-view gravity stress imaging protocol and introduces the gravity stress cross-table lateral view. The gravity stress cross-table lateral view provides a more thorough evaluation of the posterior malleolus owing to the slight external rotation and posteriorly directed stress. External rotation also creates less bony overlap between the tibia and fibula, allowing for better visualization of the fibula fracture. Gravity stress imaging confirmed medial-sided injury in both cases, confirming the presence of supination external rotation type IV or bimalleolar equivalent fractures. Open reduction and internal fixation was performed, and both patients achieved radiographic union. No further treatment was required at 21 and 33 months postoperatively. Copyright © 2017 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
Opportunity's Surroundings on Sol 1818 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11846 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11846 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,818th Martian day, or sol, of Opportunity's surface mission (March 5, 2009). South is at the center; north at both ends. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 80.3 meters (263 feet) southward earlier on that sol. Tracks from the drive recede northward in this view. The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.
Opportunity's Surroundings on Sol 1798 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11850 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11850 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this stereo 180-degree view of the rover's surroundings during the 1,798th Martian day, or sol, of Opportunity's surface mission (Feb. 13, 2009). North is on top. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 111 meters (364 feet) southward on the preceding sol. Tracks from that drive recede northward in this view. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.
Image quality prediction - An aid to the Viking lander imaging investigation on Mars
NASA Technical Reports Server (NTRS)
Huck, F. O.; Wall, S. D.
1976-01-01
Image quality criteria and image quality predictions are formulated for the multispectral panoramic cameras carried by the Viking Mars landers. Image quality predictions are based on expected camera performance, Mars surface radiance, and lighting and viewing geometry (fields of view, Mars lander shadows, solar day-night alternation), and are needed in diagnosing camera performance, in arriving at a preflight imaging strategy, and in revising that strategy should the need arise. Landing considerations, camera control instructions, camera control logic, aspects of the imaging process (spectral response, spatial response, sensitivity), and likely problems are discussed. Major concerns include: degradation of camera response by isotope radiation, uncertainties in lighting and viewing geometry and in landing site local topography, contamination of the camera window by dust abrasion, and initial errors in assigning camera dynamic ranges (gains and offsets).
Shortcomings of low-cost imaging systems for viewing computed radiographs.
Ricke, J; Hänninen, E L; Zielinski, C; Amthauer, H; Stroszczynski, C; Liebig, T; Wolf, M; Hosten, N
2000-01-01
To assess potential advantages of a new PC-based viewing tool featuring image post-processing for viewing computed radiographs on low-cost hardware (PC) with a common display card and color monitor, and to evaluate the effect of using color versus monochrome monitors. Computed radiographs of a statistical phantom were viewed on a PC, with and without post-processing (spatial frequency and contrast processing), employing a monochrome or a color monitor. Findings were compared with viewing on a radiological workstation and evaluated with ROC analysis. Image post-processing improved the perception of low-contrast details significantly irrespective of the monitor used. No significant difference in perception was observed between monochrome and color monitors. Review at the radiological workstation was superior to review on the PC with image processing. Lower quality hardware (graphics card and monitor) used in low-cost PCs negatively affects perception of low-contrast details in computed radiographs. In this situation, it is highly recommended to use spatial frequency and contrast processing. No significant quality gain was observed for the high-end monochrome monitor compared with the color display. However, the color monitor was more strongly affected by high ambient illumination.
Copple, Susan S.; Jaskowski, Troy D.; Giles, Rashelle; Hill, Harry R.
2014-01-01
Objective. To evaluate NOVA View with a focus on reading archived images versus microscope-based manual interpretation of ANA HEp-2 slides by an experienced, certified medical technologist. Methods. 369 well defined sera from: 44 rheumatoid arthritis, 50 systemic lupus erythematosus, 35 scleroderma, 19 Sjögren's syndrome, and 10 polymyositis patients as well as 99 healthy controls were examined. In addition, 12 defined sera from the Centers for Disease Control and 100 random patient sera sent to ARUP Laboratories for ANA HEp-2 IIF testing were included. Samples were read using the archived images on NOVA View and compared to results obtained from manual reading. Results. At a 1 : 40/1 : 80 dilution the resulting comparison demonstrated 94.8%/92.9% positive, 97.4%/97.4% negative, and 96.5%/96.2% total agreements between manual IIF and NOVA View archived images. Agreement of identifiable patterns between methods was 97%, with PCNA and mixed patterns undetermined. Conclusion. Excellent agreement was obtained between reading archived images on NOVA View and reading manually on a fluorescence microscope. In addition, workflow benefits were observed which need to be analyzed in future studies. PMID:24741573
High resolution, wide field of view, real time 340GHz 3D imaging radar for security screening
NASA Astrophysics Data System (ADS)
Robertson, Duncan A.; Macfarlane, David G.; Hunter, Robert I.; Cassidy, Scott L.; Llombart, Nuria; Gandini, Erio; Bryllert, Tomas; Ferndahl, Mattias; Lindström, Hannu; Tenhunen, Jussi; Vasama, Hannu; Huopana, Jouni; Selkälä, Timo; Vuotikka, Antti-Jussi
2017-05-01
The EU FP7 project CONSORTIS (Concealed Object Stand-Off Real-Time Imaging for Security) is developing a demonstrator system for next generation airport security screening which will combine passive and active submillimeter wave imaging sensors. We report on the development of the 340 GHz 3D imaging radar which achieves high volumetric resolution over a wide field of view with high dynamic range and a high frame rate. A sparse array of 16 radar transceivers is coupled with high speed mechanical beam scanning to achieve a field of view of 1 × 1 × 1 m³ and a 10 Hz frame rate.
The media's representation of the ideal male body: a cause for muscle dysmorphia?
Leit, Richard A; Gray, James J; Pope, Harrison G
2002-04-01
This study sought to examine the effects of media images on men's attitudes toward their body appearance. A group of college men viewed advertisements showing muscular men, whereas a control group viewed neutral advertisements. Immediately thereafter, participants performed a computerized test of body image perception while unaware of the hypotheses being tested in the study. The students exposed to the muscular images showed a significantly greater discrepancy between their own perceived muscularity and the level of muscularity that they ideally wanted to have. These findings suggest that media images, even in a brief presentation, can affect men's views of their bodies. Copyright 2002 by Wiley Periodicals, Inc.
Finding Intrinsic and Extrinsic Viewing Parameters from a Single Realist Painting
NASA Astrophysics Data System (ADS)
Jordan, Tadeusz; Stork, David G.; Khoo, Wai L.; Zhu, Zhigang
In this paper we studied the geometry of a three-dimensional tableau from a single realist painting - Scott Fraser's Three way vanitas (2006). The tableau contains a carefully chosen complex arrangement of objects including a moth, an egg, a cup, a strand of string, a glass of water, a bone, and a hand mirror. Each of the three plane mirrors presents a different view of the tableau from a virtual camera behind each mirror and symmetric to the artist's viewing point. Our new contribution was to incorporate single-view geometric information extracted from the direct image of the wooden mirror frames in order to obtain the camera models of both the real camera and the three virtual cameras. Both the intrinsic and extrinsic parameters are estimated for the direct image and the images in the three plane mirrors depicted within the painting.
Atmospheric Science Data Center
2013-04-18
... continent. The region in this image includes the southern end of Peru, the northern portion of Chile, and the western part of Bolivia, ... feet, it is said to be the highest navigable lake in the world. In the 3-D view afforded by the stereo anaglyph image (viewed with ...
Earth and Moon as Seen from Mars
2008-03-03
The High Resolution Imaging Science Experiment (HiRISE) camera would make a great backyard telescope for viewing Mars, and we can also use it at Mars to view other planets. This is an image of Earth and the Moon, acquired on October 3, 2007.
Mid-level image representations for real-time heart view plane classification of echocardiograms.
Penatti, Otávio A B; Werneck, Rafael de O; de Almeida, Waldir R; Stein, Bernardo V; Pazinato, Daniel V; Mendes Júnior, Pedro R; Torres, Ricardo da S; Rocha, Anderson
2015-11-01
In this paper, we explore mid-level image representations for real-time heart view plane classification of 2D echocardiogram ultrasound images. The proposed representations rely on bags of visual words, successfully used by the computer vision community in visual recognition problems. An important element of the proposed representations is the image sampling with large regions, drastically reducing the execution time of the image characterization procedure. Throughout an extensive set of experiments, we evaluate the proposed approach against different image descriptors for classifying four heart view planes. The results show that our approach is effective and efficient for the target problem, making it suitable for use in real-time setups. The proposed representations are also robust to different image transformations (e.g., downsampling and noise filtering) and to different machine learning classifiers, keeping classification accuracy above 90%. Feature extraction can be performed at 30 fps, or 60 fps in some cases. This paper also includes an in-depth review of the literature in the area of automatic echocardiogram view classification, giving the reader a thorough comprehension of this field of study. Copyright © 2015 Elsevier Ltd. All rights reserved.
Linearization of an annular image by using a diffractive optic
NASA Technical Reports Server (NTRS)
Matthys, Donald R.
1996-01-01
The goal for this project is to develop the algorithms for fracturing the zones defined by the mapping transformation, and to actually produce the binary optic in an appropriate setup. In 1984 a side-viewing panoramic viewing system was patented, consisting of a single piece of glass with spherical surfaces which produces a 360 degree view of the region surrounding the lens, extending about 25 degrees in front of and 20 degrees behind the lens. The system not only produces images of good quality but is also afocal, i.e., images stay in focus for objects located right next to the lens as well as those located far from the lens. The lens produces a panoramic view in an annular shaped image, and so the lens was called a PAL (panoramic annular lens). When applying traditional measurements to PAL images, it is found advantageous to linearize the annular image. This can easily be done with a computer, and such a linearized image can be produced within about 40 seconds on current microcomputers. However, this process requires a frame grabber and a computer, and is not real-time. Therefore, it was decided to try to perform this linearization optically by using a diffractive optic.
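The digital linearization described above amounts to resampling the annulus along radius and azimuth into a rectangular strip. A minimal nearest-neighbour sketch, assuming images as nested lists and an annulus given by its centre and inner/outer radii (all names are illustrative, not from the project):

```python
import math

def unwrap_annulus(img, cx, cy, r_in, r_out, width, height):
    """Nearest-neighbour unwrap of an annular (PAL) image into a
    width x height rectangular strip: columns sweep 360 degrees of
    azimuth, rows sweep radius from r_in to r_out."""
    out = [[0] * width for _ in range(height)]
    for row in range(height):
        r = r_in + (r_out - r_in) * row / (height - 1)
        for col in range(width):
            theta = 2 * math.pi * col / width
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            out[row][col] = img[y][x]
    return out

# 9x9 toy frame; unwrap the ring between radii 2 and 3 around (4, 4).
frame = [[10 * y + x for x in range(9)] for y in range(9)]
strip = unwrap_annulus(frame, 4, 4, 2, 3, 8, 2)
```

Every output pixel costs one trigonometric mapping and one lookup, which is why a software implementation of the era needed tens of seconds per frame and motivated the optical (diffractive) approach.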
High speed color imaging through scattering media with a large field of view
NASA Astrophysics Data System (ADS)
Zhuang, Huichang; He, Hexiang; Xie, Xiangsheng; Zhou, Jianying
2016-09-01
Optical imaging through complex media has many important applications. Although research progress has been made in recovering optical images through various turbid media, the widespread application of the technology is hampered by the recovery speed, requirements on specific illumination, poor image quality, and a limited field of view. Here we demonstrate that the above-mentioned drawbacks can be essentially overcome. High speed color imaging through turbid media is successfully carried out by taking into account the memory effect of the media, the point spread function, the exit pupil of the optical system, and the optimized signal-to-noise ratio. By retrieving selected speckles with an enlarged field of view, a high quality image is recovered with a response speed determined only by the frame rate of the image capturing device. An immediate application of the technique is expected to be static and dynamic imaging under human skin, recovering information with a wearable device.
NASA Astrophysics Data System (ADS)
Guan, Huifeng; Anastasio, Mark A.
2017-03-01
It is well-known that properly designed image reconstruction methods can facilitate reductions in imaging doses and data-acquisition times in tomographic imaging. The ability to do so is particularly important for emerging modalities such as differential X-ray phase-contrast tomography (D-XPCT), which are currently limited by these factors. An important application of D-XPCT is high-resolution imaging of biomedical samples. However, reconstructing high-resolution images from few-view tomographic measurements remains a challenging task. In this work, a two-step sub-space reconstruction strategy is proposed and investigated for use in few-view D-XPCT image reconstruction. It is demonstrated that the resulting iterative algorithm can mitigate the high-frequency information loss caused by data incompleteness and produce images that have better preserved high spatial frequency content than those produced by use of a conventional penalized least squares (PLS) estimator.
Evaluative Processing of Food Images: A Conditional Role for Viewing in Preference Formation
Wolf, Alexandra; Ounjai, Kajornvut; Takahashi, Muneyoshi; Kobayashi, Shunsuke; Matsuda, Tetsuya; Lauwereyns, Johan
2018-01-01
Previous research suggested a role of gaze in preference formation, not merely as an expression of preference, but also as a causal influence. According to the gaze cascade hypothesis, the longer subjects look at an item, the more likely they are to develop a preference for it. However, to date the connection between viewing and liking has been investigated predominantly with self-paced viewing conditions in which the subjects were required to select certain items from simultaneously presented stimuli on the basis of perceived visual attractiveness. Such conditions might promote a default, but non-mandatory connection between viewing and liking. To explore whether the connection is separable, we examined the evaluative processing of single naturalistic food images in a 2 × 2 design, conducted completely within subjects, in which we varied both the type of exposure (self-paced versus time-controlled) and the type of evaluation (non-exclusive versus exclusive). In the self-paced exclusive evaluation, longer viewing was associated with a higher likelihood of a positive evaluation. However, in the self-paced non-exclusive evaluation, the trend reversed such that longer viewing durations were associated with lesser ratings. Furthermore, in the time-controlled tasks, both with non-exclusive and exclusive evaluation, there was no significant relationship between the viewing duration and the evaluation. The overall pattern of results was consistent for viewing times measured in terms of exposure duration (i.e., the duration of stimulus presentation on the screen) and in terms of actual gaze duration (i.e., the amount of time the subject effectively gazed at the stimulus on the screen). The data indicated that viewing does not intrinsically lead to a higher evaluation when evaluating single food images; instead, the relationship between viewing duration and evaluation depends on the type of task. 
We suggest that self-determination of exposure duration may be a prerequisite for any influence from viewing time on evaluative processing, regardless of whether the influence is facilitative. Moreover, the purported facilitative link between viewing and liking appears to be limited to exclusive evaluation, when only a restricted number of items can be included in a chosen set. PMID:29942273
Efficient structure from motion on large scenes using UAV with position and pose information
NASA Astrophysics Data System (ADS)
Teng, Xichao; Yu, Qifeng; Shang, Yang; Luo, Jing; Wang, Gang
2018-04-01
In this paper, we exploit prior information from global positioning systems and inertial measurement units to speed up large scene reconstruction from images acquired by unmanned aerial vehicles. We utilize weak pose information and intrinsic parameters to obtain the projection matrix for each view. Because topographic relief is usually negligible compared with the flight altitude of unmanned aerial vehicles, we assume that the scene is flat and use a weak perspective camera model to obtain projective transformations between two views. Furthermore, we propose an overlap criterion and select potentially matching view pairs among the projectively transformed views. A robust global structure from motion method is used for image-based reconstruction. Our real-world experiments show that the approach is accurate, scalable, and computationally efficient. Moreover, the projective transformations between views can also be used to eliminate false matches.
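The abstract does not give the exact overlap criterion, so the sketch below only shows the underlying machinery implied by the flat-scene assumption: mapping a point through a 3×3 projective transformation (homography), plus a crude corner-count stand-in for the overlap test. Function names and the example matrix are illustrative:

```python
def apply_homography(H, pt):
    """Map a 2D point through a 3x3 homography (row-major nested lists)."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def overlap_fraction(H, width, height):
    """Crude overlap criterion: fraction of view A's corners that land
    inside view B's image bounds after the projective transformation H."""
    corners = [(0, 0), (width - 1, 0), (0, height - 1), (width - 1, height - 1)]
    inside = sum(1 for c in corners
                 if 0 <= apply_homography(H, c)[0] < width
                 and 0 <= apply_homography(H, c)[1] < height)
    return inside / len(corners)

# Pure translation by half the image width: half the corners stay visible.
H = [[1, 0, 320], [0, 1, 0], [0, 0, 1]]
```

View pairs whose overlap fraction exceeds a threshold would be kept as candidates for feature matching, avoiding the quadratic cost of matching every pair exhaustively.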
A Dying Star in a Different Light
2010-11-17
This image composite shows two views of a puffy, dying star, or planetary nebula, known as NGC 1514. At left is a view from a ground-based, visible-light telescope; the view on the right shows the object in infrared light from NASA's WISE telescope.
NASA's MISR Instrument Captures Stereo View of Mountain Fire Near Idyllwild, Calif.
Atmospheric Science Data Center
2016-09-27
... been produced. The image is best viewed with standard "red/blue" 3-D glasses with the red lens over the left eye. The image is oriented ... 2.5 to 3 miles (4 to 5 kilometers) above sea level with very light winds at this time. The image extends from about 34.8 degrees north ...
This view of Jupiter was taken by Voyager 1
NASA Technical Reports Server (NTRS)
1998-01-01
This view of Jupiter was taken by Voyager 1 through color filters and recombined to produce the color image. The photo was assembled from three black-and-white negatives by the Image Processing Lab at the Jet Propulsion Laboratory. JPL manages and controls the Voyager project for NASA's Office of Space Science.
2016-10-07
NASA's Dawn spacecraft views Oxo Crater (6 miles, 10 kilometers wide) in this view from Ceres. Dawn took this image on June 4, 2016, from its low-altitude mapping orbit, at a distance of about 240 miles (385 kilometers) above the surface. The image resolution is 120 feet (35 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20950
"Butterfly under a Pin": An Emergent Teacher Image amid Mandated Curriculum Reform
ERIC Educational Resources Information Center
Craig, Cheryl J.
2012-01-01
The author examines 1 experienced teacher's image of teaching and how it was purposely changed--through external intervention and against the individual's will--from the view of teacher as curriculum maker to the view of teacher as curriculum implementer. Laura's account of the "butterfly under a pin" image, a version of the…
Photogrammetry Toolbox Reference Manual
NASA Technical Reports Server (NTRS)
Liu, Tianshu; Burner, Alpheus W.
2014-01-01
Specialized photogrammetric and image processing MATLAB functions useful for wind tunnel and other ground-based testing of aerospace structures are described. These functions include single view and multi-view photogrammetric solutions, basic image processing to determine image coordinates, 2D and 3D coordinate transformations and least squares solutions, spatial and radiometric camera calibration, epipolar relations, and various supporting utility functions.
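One of the multi-view utilities mentioned, a coordinate transformation with a least squares solution, can be illustrated by the closed-form (Procrustes-style) least-squares fit of a 2D rigid transform between matched point sets. This is a generic sketch of the technique, not code from the toolbox:

```python
import math

def fit_rigid_2d(src, dst):
    """Least-squares 2D rigid transform (rotation theta, translation
    tx, ty) mapping src points onto dst, via the closed-form solution:
    center both sets, recover theta from dot/cross accumulations."""
    n = len(src)
    scx = sum(p[0] for p in src) / n; scy = sum(p[1] for p in src) / n
    dcx = sum(p[0] for p in dst) / n; dcy = sum(p[1] for p in dst) / n
    dot = cross = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay = x - scx, y - scy
        bx, by = u - dcx, v - dcy
        dot += ax * bx + ay * by
        cross += ax * by - ay * bx
    theta = math.atan2(cross, dot)
    tx = dcx - (scx * math.cos(theta) - scy * math.sin(theta))
    ty = dcy - (scx * math.sin(theta) + scy * math.cos(theta))
    return theta, tx, ty

# Three points rotated 90 degrees about the origin, no translation.
theta, tx, ty = fit_rigid_2d([(0, 0), (1, 0), (0, 1)],
                             [(0, 0), (0, 1), (-1, 0)])
```

With noisy measured image coordinates, the same accumulation yields the rotation and translation that minimize the sum of squared residuals, which is the usual building block for target registration in wind tunnel imagery.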
PMC's Florida Bay & Adjacent Marine Systems Science Program
A LabVIEW Platform for Preclinical Imaging Using Digital Subtraction Angiography and Micro-CT.
Badea, Cristian T; Hedlund, Laurence W; Johnson, G Allan
2013-01-01
CT and digital subtraction angiography (DSA) are ubiquitous in the clinic. Their preclinical equivalents are valuable imaging methods for studying disease models and treatment. We have developed a dual source/detector X-ray imaging system that we have used for both micro-CT and DSA studies in rodents. The control of such a complex imaging system requires substantial software development for which we use the graphical language LabVIEW (National Instruments, Austin, TX, USA). This paper focuses on a LabVIEW platform that we have developed to enable anatomical and functional imaging with micro-CT and DSA. Our LabVIEW applications integrate and control all the elements of our system including a dual source/detector X-ray system, a mechanical ventilator, a physiological monitor, and a power microinjector for the vascular delivery of X-ray contrast agents. Various applications allow cardiac- and respiratory-gated acquisitions for both DSA and micro-CT studies. Our results illustrate the application of DSA for cardiopulmonary studies and vascular imaging of the liver and coronary arteries. We also show how DSA can be used for functional imaging of the kidney. Finally, the power of 4D micro-CT imaging using both prospective and retrospective gating is shown for cardiac imaging.
Phytoplankton off the Coast of Washington State
NASA Technical Reports Server (NTRS)
2002-01-01
Clear weather over the Pacific Northwest yesterday gave the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) a good view of this mountain region of the United States. Also, there are several phytoplankton blooms visible offshore. The white areas hugging the California coastline toward the bottom of the image are low-level stratus clouds. SeaWiFS acquired this true-color scene on October 3, 2001. Image courtesy the SeaWiFS Project, NASA/Goddard Space Flight Center, and ORBIMAGE
Galileo spacecraft solid-state imaging system view of Antarctica
NASA Technical Reports Server (NTRS)
1990-01-01
Galileo spacecraft solid-state imaging system view of Antarctica was taken during its first encounter with the Earth. This color picture of Antarctica is part of a mosaic of pictures covering the entire polar continent showing the Ross Ice Shelf and its border with the sea and mountains poking through the ice near the McMurdo Station. From top to bottom, the frame looks across about half of Antarctica. View provided by the Jet Propulsion Laboratory (JPL) with alternate number P-37297.
Using Virtual Observatory Services in Sky View
NASA Technical Reports Server (NTRS)
McGlynn, Thomas A.
2007-01-01
For over a decade SkyView has provided astronomers and the public with easy access to survey and imaging data from all wavelength regimes. SkyView has pioneered many of the concepts that underlie the Virtual Observatory. Recently SkyView has been released as a distributable package which uses VO protocols to access image and catalog services. This chapter describes how to use SkyView as a local service and how to customize it to access additional VO services and local data.
Video Mosaicking for Inspection of Gas Pipelines
NASA Technical Reports Server (NTRS)
Magruder, Darby; Chien, Chiun-Hong
2005-01-01
A vision system that includes a specially designed video camera and an image-data-processing computer is under development as a prototype of robotic systems for visual inspection of the interior surfaces of pipes and especially of gas pipelines. The system is capable of providing both forward views and mosaicked radial views that can be displayed in real time or after inspection. To avoid the complexities associated with moving parts and to provide simultaneous forward and radial views, the video camera is equipped with a wide-angle (>165°) fish-eye lens aimed along the axis of a pipe to be inspected. Nine white-light-emitting diodes (LEDs) placed just outside the field of view of the lens (see Figure 1) provide ample diffuse illumination for a high-contrast image of the interior pipe wall. The video camera contains a 2/3-in. (1.7-cm) charge-coupled-device (CCD) photodetector array and functions according to the National Television Standards Committee (NTSC) standard. The video output of the camera is sent to an off-the-shelf video capture board (frame grabber) by use of a peripheral component interconnect (PCI) interface in the computer, which is of the 400-MHz, Pentium II (or equivalent) class. Prior video-mosaicking techniques are applicable to narrow-field-of-view (low-distortion) images of evenly illuminated, relatively flat surfaces viewed along approximately perpendicular lines by cameras that do not rotate and that move approximately parallel to the viewed surfaces. One such technique for real-time creation of mosaic images of the ocean floor involves the use of visual correspondences based on area correlation, during both the acquisition of separate images of adjacent areas and the consolidation (equivalently, integration) of the separate images into a mosaic image, in order to ensure that there are no gaps in the mosaic image. 
The data-processing technique used for mosaicking in the present system also involves area correlation, but with several notable differences: Because the wide-angle lens introduces considerable distortion, the image data must be processed to effectively unwarp the images (see Figure 2). The computer executes special software that includes an unwarping algorithm that takes explicit account of the cylindrical pipe geometry. To reduce the processing time needed for unwarping, parameters of the geometric mapping between the circular view of a fisheye lens and pipe wall are determined in advance from calibration images and compiled into an electronic lookup table. The software incorporates the assumption that the optical axis of the camera is parallel (rather than perpendicular) to the direction of motion of the camera. The software also compensates for the decrease in illumination with distance from the ring of LEDs.
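The precomputed-lookup-table step described above can be sketched as follows. This is a minimal illustration, not the system's actual calibration: it assumes an equidistant fisheye model (image radius = focal length × off-axis angle) and a camera centered on the pipe axis, whereas the real system determines its mapping from calibration images. All function names and parameters here are illustrative.

```python
import numpy as np

def build_unwarp_lut(out_w, out_h, cx, cy, f_pix, pipe_radius, z_near, z_far):
    """Precompute source-pixel coordinates for unwarping a fisheye view of a
    cylindrical pipe interior into a flat (azimuth x axial-distance) image.

    Assumes an equidistant fisheye model (r = f * alpha) and a camera on the
    pipe axis; a real system would calibrate this mapping instead.
    """
    # Output columns sweep azimuth around the pipe wall; rows sweep axial
    # distance along the pipe.
    theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    z = np.linspace(z_near, z_far, out_h)
    zz, tt = np.meshgrid(z, theta, indexing="ij")

    # Off-axis angle of a wall point at (pipe_radius, z), then the
    # equidistant-fisheye image radius in pixels.
    alpha = np.arctan2(pipe_radius, zz)
    r = f_pix * alpha

    # Source pixel for every output pixel -- this pair of arrays is the
    # lookup table consulted at run time instead of redoing the trigonometry.
    src_x = cx + r * np.cos(tt)
    src_y = cy + r * np.sin(tt)
    return src_x, src_y

def unwarp(fisheye_img, src_x, src_y):
    """Nearest-neighbour resampling through the precomputed table."""
    h, w = fisheye_img.shape
    xi = np.clip(np.rint(src_x).astype(int), 0, w - 1)
    yi = np.clip(np.rint(src_y).astype(int), 0, h - 1)
    return fisheye_img[yi, xi]
```

Because the table is fixed for a given pipe geometry, each video frame needs only the final array-indexing step, which is what makes real-time unwarping feasible.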
Weinstein, Susan P.; McDonald, Elizabeth S.; Conant, Emily F.
2016-01-01
Digital breast tomosynthesis (DBT) represents a valuable addition to breast cancer screening by decreasing recall rates while increasing cancer detection rates. The increased accuracy achieved with DBT is due to the quasi–three-dimensional format of the reconstructed images and the ability to “scroll through” breast tissue in the reconstructed images, thereby reducing the effect of tissue superimposition found with conventional planar digital mammography. The margins of both benign and malignant lesions are more conspicuous at DBT, which allows improved lesion characterization, increased reader confidence, and improved screening outcomes. However, even with the improvements in accuracy achieved with DBT, there remain differences in breast cancer conspicuity by mammographic view. Early data suggest that breast cancers may be more conspicuous on craniocaudal (CC) views than on mediolateral oblique (MLO) views. While some very laterally located breast cancers may be visualized on only the MLO view, the increased conspicuity of cancers on the CC view compared with the MLO view suggests that DBT screening should be performed with two-view imaging. Even with the improved conspicuity of lesions at DBT, there may still be false-negative studies. Subtle lesions seen on only one view may be discounted, and dense and/or complex tissue patterns may make some cancers occult or extremely difficult to detect. Therefore, radiologists should be cognizant of both perceptual and cognitive errors to avoid potential pitfalls in lesion detection and characterization. ©RSNA, 2016 Online supplemental material is available for this article. PMID:27715711
Spirit's View Beside 'Home Plate' on Sol 1823 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11971 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11971 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,823rd Martian day, or sol, of Spirit's surface mission (Feb. 17, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The center of the view is toward the south-southwest. The rover had driven 7 meters (23 feet) eastward earlier on Sol 1823, part of maneuvering to get Spirit into a favorable position for climbing onto the low plateau called 'Home Plate.' However, after two driving attempts with negligible progress during the following three sols, the rover team changed its strategy for getting to destinations south of Home Plate. The team decided to drive Spirit at least partway around Home Plate, instead of ascending the northern edge and taking a shorter route across the top of the plateau. Layered rocks forming part of the northern edge of Home Plate can be seen near the center of the image. Rover wheel tracks are visible at the lower edge. This view is presented as a cylindrical-perspective projection with geometric seam correction.
The ideal imaging AR waveguide
NASA Astrophysics Data System (ADS)
Grey, David J.
2017-06-01
Imaging waveguides are a key development helping to create the Augmented Reality revolution. They have the ability to use a small projector as an input and produce a wide field of view, large eyebox, full colour, see-through image with good contrast and resolution. WaveOptics is at the forefront of this AR technology and has developed and demonstrated an approach which is readily scalable. This paper presents our view of the ideal near-to-eye imaging AR waveguide. This will be a single-layer waveguide which can be manufactured in high volume and at low cost, and is suitable for small form factor applications and all-day wear. We discuss the requirements of the waveguide for an excellent user experience. When enhanced (AR) viewing is not required, the waveguide should have at least 90% transmission, no distracting artifacts, and should accommodate the user's ophthalmic prescription. When enhanced viewing is required, the waveguide additionally needs excellent imaging performance; this includes resolution to the limit of human acuity, a wide field of view, full colour, and high luminance uniformity and contrast. Imaging waveguides are afocal designs and hence cannot provide ophthalmic correction. If the user requires this correction, they must wear contact lenses, prescription spectacles, or inserts. The ideal imaging waveguide would need to cope with all of these situations, so we believe it must be capable of providing an eyebox at an eye relief suitable for spectacle wear which covers a significant range of population inter-pupillary distances. We describe the current status of our technology and review existing imaging waveguide technologies against the ideal component.
HISTORIC IMAGE: AERIAL VIEW OF THE CEMETERY AND ITS ENVIRONS. ...
HISTORIC IMAGE: AERIAL VIEW OF THE CEMETERY AND ITS ENVIRONS. PHOTOGRAPH TAKEN ON 6 APRIL 1968. NCA HISTORY COLLECTION. - Rock Island National Cemetery, Rock Island Arsenal, 0.25 mile north of southern tip of Rock Island, Rock Island, Rock Island County, IL
Atmospheric Science Data Center
2013-04-16
article title: Tropical Northern Australia … Water appears in blue shades in the right image, for example, because glitter makes the water look brighter at the aft camera's view … The MISR data were obtained from the NASA Langley Research Center Atmospheric Science Data Center in Hampton, VA.
Martin, Elizabeth A.; Karcher, Nicole R.; Bartholow, Bruce D.; Siegle, Greg J.; Kerns, John G.
2017-01-01
Both extreme levels of social anhedonia (SocAnh) and perceptual aberration/magical ideation (PerMag) are associated with risk for schizophrenia-spectrum disorders and with emotional abnormalities. Yet, the nature of any psychophysiological-measured affective abnormality, including the role of automatic/controlled processes, is unclear. We examined the late positive potential (LPP) during passive viewing (to assess automatic processing) and during cognitive reappraisal (to assess controlled processing) in three groups: SocAnh, PerMag, and controls. The SocAnh group exhibited an increased LPP when viewing negative images. Further, SocAnh exhibited greater reductions in the LPP for negative images when told to use strategies to alter negative emotion. Similar to SocAnh, PerMag exhibited an increased LPP when viewing negative images. However, PerMag also exhibited an increased LPP when viewing positive images as well as an atypical decreased LPP when increasing positive emotion. Overall, these results suggest that at-risk groups are associated with shared and unique automatic and controlled abnormalities. PMID:28174121
Color multiplexing method to capture front and side images with a capsule endoscope.
Tseng, Yung-Chieh; Hsu, Hsun-Ching; Han, Pin; Tsai, Cheng-Mu
2015-10-01
This paper proposes a capsule endoscope (CE), based on color multiplexing, to simultaneously record front and side images. Only one lens, associated with an X-cube prism, is employed to capture the front and side view profiles in the CE. Three color filters and polarizers are placed on three sides of the X-cube prism. When objects are located at one of the X-cube's three sides, front and side view profiles of different colors are captured through the proposed lens and recorded at the color image sensor. The proposed color multiplexing CE (CMCE) is designed with a field of view of up to 210 deg and a resolution of 180 lp/mm, with an f-number of 2.8 and an overall length of 13.323 mm. A ray-tracing simulation of the CMCE with the color multiplexing mechanism verifies that the CMCE not only records the front and side view profiles at the same time but also delivers high image quality in a small package.
Filter Function for Wavefront Sensing Over a Field of View
NASA Technical Reports Server (NTRS)
Dean, Bruce H.
2007-01-01
A filter function has been derived as a means of optimally weighting the wavefront estimates obtained in image-based phase retrieval performed at multiple points distributed over the field of view of a telescope or other optical system. When the data obtained in wavefront sensing and, more specifically, image-based phase retrieval, are used for controlling the shape of a deformable mirror or other optic used to correct the wavefront, the control law obtained by use of the filter function gives a more balanced optical performance over the field of view than does a wavefront-control law obtained by use of a wavefront estimate obtained from a single point in the field of view.
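The field-weighted control law described above can be illustrated with a generic weighted least-squares sketch. This is an assumption-laden stand-in, not the paper's filter-function derivation: the influence-matrix formulation, array shapes, and example weights below are all hypothetical.

```python
import numpy as np

def weighted_dm_command(influence, residuals, weights):
    """Solve for deformable-mirror actuator commands that best cancel
    wavefront residuals estimated at several field points.

    influence : (n_points, n_pix, n_act) phase response of each actuator
                at each field point (hypothetical model)
    residuals : (n_points, n_pix) wavefront estimates from phase retrieval
    weights   : (n_points,) field-weighting values (the "filter function"
                role in this sketch)
    """
    n_points, n_pix, n_act = influence.shape
    # Stack all field points into one linear system, scaling each point by
    # sqrt(weight) so its squared error is weighted in the solution.
    sw = np.sqrt(np.asarray(weights, dtype=float))
    A = (influence * sw[:, None, None]).reshape(n_points * n_pix, n_act)
    b = (residuals * sw[:, None]).reshape(n_points * n_pix)
    # Command minimising the weighted residual; the sign flip corrects
    # rather than reproduces the measured wavefront error.
    cmd, *_ = np.linalg.lstsq(A, -b, rcond=None)
    return cmd
```

With equal weights this reduces to the single-estimate control law; unequal weights trade off correction quality across the field, which is the balancing effect the abstract attributes to the filter function.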
OrbView-3 Initial On-Orbit Characterization
NASA Technical Reports Server (NTRS)
Ross, Kent; Blonski, Slawomir; Holekamp, Kara; Pagnutti, Mary; Zanoni, Vicki; Carver, David; Fendley, Debbie; Smith, Charles
2004-01-01
NASA at Stennis Space Center (SSC) established a Space Act Agreement with Orbital Sciences Corporation (OSC) and ORBIMAGE Inc. to collaborate on the characterization of the OrbView-3 system and its imagery products and to develop characterization techniques further. In accordance with the agreement, NASA performed an independent radiometric, spatial, and geopositional accuracy assessment of OrbView-3 imagery acquired before completion of the system's initial on-orbit checkout. OSC acquired OrbView-3 imagery over SSC from July 2003 through January 2004, and NASA collected ground reference information coincident with many of these acquisitions. After evaluating all acquisitions, NASA deemed two multispectral images and five panchromatic images useful for characterization. NASA then performed radiometric, spatial, and geopositional characterizations.
Takaki, Yasuhiro; Hayashi, Yuki
2008-07-01
The narrow viewing zone angle is one of the problems associated with electronic holography. We propose a technique that enables the ratio of horizontal and vertical resolutions of a spatial light modulator (SLM) to be altered. This technique increases the horizontal resolution of a SLM several times, so that the horizontal viewing zone angle is also increased several times. A SLM illuminated by a slanted point light source array is imaged by a 4f imaging system in which a horizontal slit is located on the Fourier plane. We show that the horizontal resolution was increased four times and that the horizontal viewing zone angle was increased approximately four times.
[Lateral chest X-rays. Radiographic anatomy].
García Villafañe, C; Pedrosa, C S
2014-01-01
Lateral chest views constitute an essential part of chest X-ray examinations, so it is fundamental to know the anatomy on these images and to be able to detect the variations manifested on these images in different diseases. The aim of this article is to review the normal anatomy and main normal variants seen on lateral chest views. For teaching purposes, we divide the thorax into different spaces and analyze each in an orderly way, especially emphasizing the anatomic details that are most helpful for locating lesions that have already been detected in the posteroanterior view or for detecting lesions that can be missed in the posteroanterior view. Copyright © 2013 SERAM. Published by Elsevier Espana. All rights reserved.
Web-based CERES Clouds QC Property Viewing Tool
NASA Astrophysics Data System (ADS)
Smith, R. A.; Chu, C.; Sun-Mack, S.; Chen, Y.; Heckert, E.; Minnis, P.
2014-12-01
This presentation will display the capabilities of a web-based CERES cloud property viewer. Terra data will be chosen for examples. It will demonstrate viewing of cloud properties in gridded global maps, histograms, time series displays, latitudinal zonal images, binned data charts, data frequency graphs, and ISCCP plots. Images can be manipulated by the user to narrow boundaries of the map as well as color bars and value ranges, compare datasets, view data values, and more. Other atmospheric studies groups will be encouraged to put their data into the underlying NetCDF data format and view their data with the tool. A laptop will hopefully be available to allow conference attendees to try navigating the tool.
Panoramic cone beam computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang Jenghwa; Zhou Lili; Wang Song
2012-05-15
Purpose: Cone-beam computed tomography (CBCT) is the main imaging tool for image-guided radiotherapy, but its functionality is limited by a small imaging volume and restricted image position (imaged at the central instead of the treatment position for peripheral lesions to avoid collisions). In this paper, the authors present the concept of "panoramic CBCT," which can image patients at the treatment position with an imaging volume as large as practically needed. Methods: In this novel panoramic CBCT technique, the target is scanned sequentially from multiple view angles. For each view angle, a half scan (180° + θ_cone, where θ_cone is the cone angle) is performed with the imaging panel positioned in any location along the beam path. The panoramic projection images of all views for the same gantry angle are then stitched together with the direct image stitching method (i.e., according to the reported imaging position) and full-fan, half-scan CBCT reconstruction is performed using the stitched projection images. To validate this imaging technique, the authors simulated cone-beam projection images of the Mathematical Cardiac Torso (MCAT) thorax phantom for three panoramic views. Gaps, repeated/missing columns, and different exposure levels were introduced between adjacent views to simulate imperfect image stitching due to uncertainties in imaging position or output fluctuation. A modified simultaneous algebraic reconstruction technique (modified SART) was developed to reconstruct CBCT images directly from the stitched projection images. As a gold standard, full-fan, full-scan (360° gantry rotation) CBCT reconstructions were also performed using projection images of one imaging panel large enough to encompass the target. Contrast-to-noise ratio (CNR) and geometric distortion were evaluated to quantify the quality of reconstructed images.
Monte Carlo simulations were performed to evaluate the effect of scattering on the image quality and imaging dose for both standard and panoramic CBCT. Results: Truncated images with artifacts were observed for the CBCT reconstruction using projection images of the central view only. When the image stitching was perfect, complete reconstruction was obtained for the panoramic CBCT using the modified SART, with image quality similar to the gold standard (full-scan, full-fan CBCT using one large imaging panel). Imperfect image stitching, on the other hand, leads to (streak, line, or ring) reconstruction artifacts, reduced CNR, and/or distorted geometry. Results from Monte Carlo simulations showed that, for identical imaging quality, the imaging dose was lower for the panoramic CBCT than that acquired with one large imaging panel. For the same imaging dose, the CNR of the three-view panoramic CBCT was 50% higher than that of the regular CBCT using one big panel. Conclusions: The authors have developed a panoramic CBCT technique and demonstrated with simulation data that it can image tumors of any location for patients of any size at the treatment position with comparable or less imaging dose and time. However, the image quality of this CBCT technique is sensitive to the reconstruction artifacts caused by imperfect image stitching. Better algorithms are therefore needed to improve the accuracy of image stitching for panoramic CBCT.
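The "direct image stitching" step above, placing each panel's projection into one wide projection according to its reported position, can be sketched as follows. This is a simplified illustration only: the offsets, averaging of overlaps, and array layout are assumptions, and the paper's actual method also handles gaps and exposure differences that this sketch ignores.

```python
import numpy as np

def stitch_projections(panels, col_offsets, out_width):
    """Stitch per-view projection images into one wide projection image,
    placing each panel at its reported starting column.

    panels      : list of (rows, cols) projection arrays, one per panel view
    col_offsets : reported starting column of each panel in the canvas
    out_width   : total column count of the stitched projection
    """
    rows = panels[0].shape[0]
    canvas = np.zeros((rows, out_width))
    hits = np.zeros((rows, out_width))   # how many panels cover each pixel
    for p, off in zip(panels, col_offsets):
        canvas[:, off:off + p.shape[1]] += p
        hits[:, off:off + p.shape[1]] += 1
    # Average where adjacent views overlap; uncovered pixels stay zero.
    covered = hits > 0
    canvas[covered] /= hits[covered]
    return canvas
```

Errors in the reported offsets shift columns in the stitched projection, which is exactly the imperfect-stitching condition the abstract links to streak, line, and ring artifacts after reconstruction.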
Viewing-zone scanning holographic display using a MEMS spatial light modulator.
Takaki, Yasuhiro; Fujii, Keisuke
2014-10-06
Horizontally scanning holography using a spatial light modulator based on a microelectromechanical system, which we previously proposed for enlarging both the screen size and the viewing zone, utilized a screen scanning system in which elementary holograms are scanned horizontally on the screen. In this study, to enlarge the screen size and the viewing zone, we propose a viewing-zone scanning system with an enlarged hologram screen and a horizontally scanned, reduced viewing zone. The reduced viewing zone is localized using converging light emitted from the screen, and the entire screen can be viewed from the localized viewing zone. An experimental system was constructed, and we demonstrated the generation of reconstructed images with a screen size of 2.0 in., a viewing zone width of 437 mm at a distance of 600 mm from the screen, and a frame rate of 60 Hz.
Development of scanning holographic display using MEMS SLM
NASA Astrophysics Data System (ADS)
Takaki, Yasuhiro
2016-10-01
Holography is an ideal three-dimensional (3D) display technique, because it produces 3D images that naturally satisfy human 3D perception including physiological and psychological factors. However, its electronic implementation is quite challenging because ultra-high resolution is required for display devices to provide sufficient screen size and viewing zone. We have developed holographic display techniques to enlarge the screen size and the viewing zone by use of microelectromechanical systems spatial light modulators (MEMS-SLMs). Because MEMS-SLMs can generate hologram patterns at a high frame rate, the time-multiplexing technique is utilized to virtually increase the resolution. Three kinds of scanning systems have been combined with MEMS-SLMs: the screen scanning system, the viewing-zone scanning system, and the 360-degree scanning system. The screen scanning system reduces the hologram size to enlarge the viewing zone, and the reduced hologram patterns are scanned on the screen to increase the screen size: a color display system with a screen size of 6.2 in. and a viewing zone angle of 11° was demonstrated. The viewing-zone scanning system increases the screen size, and the reduced viewing zone is scanned to enlarge the viewing zone: a screen size of 2.0 in. and a viewing zone angle of 40° were achieved. The two-channel system increased the screen size to 7.4 in. The 360-degree scanning system increases the screen size, and the reduced viewing zone is scanned circularly: a display system having a flat screen with a diameter of 100 mm was demonstrated, which generates 3D images viewed from any direction around the flat screen.
Angular relational signature-based chest radiograph image view classification.
Santosh, K C; Wendling, Laurent
2018-01-22
In a computer-aided diagnosis (CAD) system, especially for chest radiograph or chest X-ray (CXR) screening, CXR image view information is required. Automatically separating CXR images into frontal and lateral views can ease the subsequent screening process, since screening techniques may not work equally well for both views. We present a novel technique to classify frontal and lateral CXR images, where we introduce an angular relational signature through the force histogram to extract features, and apply three different state-of-the-art classifiers: multi-layer perceptron, random forest, and support vector machine to make a decision. We validated our fully automatic technique on a set of 8100 images hosted by the U.S. National Library of Medicine (NLM), National Institutes of Health (NIH), and achieved an accuracy close to 100%. Our method outperforms the state-of-the-art methods in terms of processing time (less than or close to 2 s for the whole test data) while achieving comparable accuracy, which justifies its practicality. Graphical Abstract: Interpreting chest X-ray (CXR) through the angular relational signature.
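To make the frontal-vs-lateral distinction concrete, here is a deliberately simple stand-in, not the paper's force-histogram angular signature: frontal chest X-rays are roughly mirror symmetric about the midline, lateral views are not, so a left-right correlation score alone can separate easy cases. The threshold and function names are ad hoc assumptions.

```python
import numpy as np

def symmetry_score(img):
    """Left-right symmetry of an intensity image, as a correlation in [-1, 1].

    A toy stand-in for the paper's angular relational signature: it only
    illustrates the frontal/lateral cue; the real method uses force
    histograms and trained classifiers.
    """
    h, w = img.shape
    half = w // 2
    left = img[:, :half].astype(float)
    right = img[:, w - half:][:, ::-1].astype(float)  # mirrored right half
    l = left - left.mean()
    r = right - right.mean()
    denom = np.sqrt((l * l).sum() * (r * r).sum())
    return float((l * r).sum() / denom) if denom else 0.0

def classify_view(img, threshold=0.5):
    """'frontal' if symmetric enough, else 'lateral' (threshold is ad hoc)."""
    return "frontal" if symmetry_score(img) >= threshold else "lateral"
```

A production classifier would replace the score with richer directional features and a trained decision rule, as the abstract describes.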
Fast imaging diagnostics on the C-2U advanced beam-driven field-reversed configuration device
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granstedt, E. M., E-mail: egranstedt@trialphaenergy.com; Petrov, P.; Knapp, K.
2016-11-15
The C-2U device employed neutral beam injection, end-biasing, and various particle fueling techniques to sustain a Field-Reversed Configuration (FRC) plasma. As part of the diagnostic suite, two fast imaging instruments with radial and nearly axial plasma views were developed using a common camera platform. To achieve the necessary viewing geometry, imaging lenses were mounted behind re-entrant viewports attached to welded bellows. During gettering, the vacuum optics were retracted and isolated behind a gate valve permitting their removal if cleaning was necessary. The axial view incorporated a stainless-steel mirror in a protective cap assembly attached to the vacuum-side of the viewport. For each system, a custom lens-based, high-throughput optical periscope was designed to relay the plasma image about half a meter to a high-speed camera. Each instrument also contained a remote-controlled filter wheel, set between shots to isolate a particular hydrogen or impurity emission line. The design of the camera platform, imaging performance, and sample data for each view is presented.
A Spacebird-eye View of the Grand Canyon from NASA Terra Spacecraft
2011-10-14
NASA's Terra spacecraft provided this view of the eastern part of Grand Canyon National Park in northern Arizona on July 14, 2011. The view looks to the west, with the tourist facilities of Grand Canyon Village visible at upper left.
Perspective View with Landsat Overlay, Los Angeles Basin
NASA Technical Reports Server (NTRS)
2002-01-01
Most of Los Angeles is visible in this computer-generated north-northeast perspective viewed from above the Pacific Ocean. In the foreground the hilly Palos Verdes peninsula lies to the left of the harbor at Long Beach, and in the middle distance the various communities that comprise the greater Los Angeles area appear as shades of grey and white. In the distance the San Gabriel Mountains rise up to separate the basin from the Mojave Desert, which can be seen near the top of the image.
This 3-D perspective view was generated using topographic data from the Shuttle Radar Topography Mission (SRTM) and an enhanced color Landsat 5 satellite image mosaic. Topographic expression is exaggerated one and one-half times. Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data matches the 30-meter (98-foot) resolution of most Landsat images and will substantially help in analyzing the large and growing Landsat image archive. Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C. Size: View width 70 kilometers (42 miles), View distance 160 kilometers (100 miles) Location: 34.0 deg. North lat., 118.2 deg. West lon. Orientation: View north-northeast Image Data: Landsat Bands 3, 2, 1 as red, green, blue, respectively Date Acquired: February 2000 (SRTM)
3D Cryo-Imaging: A Very High-Resolution View of the Whole Mouse
Roy, Debashish; Steyer, Grant J.; Gargesha, Madhusudhana; Stone, Meredith E.; Wilson, David L.
2009-01-01
We developed the Case Cryo-imaging system that provides information-rich, very high-resolution, color brightfield, and molecular fluorescence images of a whole mouse using a section-and-image block-face imaging technology. The system consists of a mouse-sized, motorized cryo-microtome with special features for imaging, a modified brightfield/fluorescence microscope, and a robotic xyz imaging system positioner, all fully automated by a control system. Using the robotic system, we acquired microscopic tiled images at a pixel size of 15.6 µm over the block face of a whole mouse sectioned at 40 µm, with a total data volume of 55 GB. Viewing 2D images at multiple resolutions, we identified small structures such as cardiac vessels, muscle layers, villi of the small intestine, the optic nerve, and layers of the eye. Cryo-imaging was also suitable for imaging embryo mutants in 3D. A mouse, in which enhanced green fluorescent protein was expressed under the gamma actin promoter in smooth muscle cells, gave clear 3D views of smooth muscle in the urogenital and gastrointestinal tracts. With cryo-imaging, we could obtain 3D vasculature down to 10 µm over very large regions of mouse brain. Software is fully automated, with fully programmable imaging/sectioning protocols, email notifications, and automatic volume visualization. With a unique combination of field-of-view, depth of field, contrast, and resolution, the Case Cryo-imaging system fills the gap between whole animal in vivo imaging and histology. PMID:19248166
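The tiled-acquisition step above ends with tiles being assembled into one block-face image per section. A minimal sketch of that assembly, assuming perfectly abutting row-major tiles with no overlap or stage error (a real system would correct residual misalignment with registration; all names here are illustrative):

```python
import numpy as np

def assemble_tiles(tiles, grid_shape):
    """Assemble row-major microscope tiles into one block-face image.

    tiles      : list of equally sized 2D arrays, row-major acquisition order
    grid_shape : (rows, cols) of the tiling grid
    """
    rows, cols = grid_shape
    th, tw = tiles[0].shape
    out = np.zeros((rows * th, cols * tw), dtype=tiles[0].dtype)
    for i, tile in enumerate(tiles):
        # Place tile i at its grid cell (row-major ordering).
        r, c = divmod(i, cols)
        out[r * th:(r + 1) * th, c * tw:(c + 1) * tw] = tile
    return out
```

Stacking the per-section montages along the cutting axis then yields the 3D volume the abstract describes.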
Real-time 3-D X-ray and gamma-ray viewer
NASA Technical Reports Server (NTRS)
Yin, L. I. (Inventor)
1983-01-01
A multi-pinhole aperture lead screen forms an equal plurality of invisible mini-images having dissimilar perspectives of an X-ray and gamma-ray emitting object (ABC) onto a nearby phosphor layer. This layer provides visible light mini-images directly into a visible light image intensifier. A viewing screen having an equal number of dissimilar perspective apertures distributed across its face in a geometric pattern identical to the lead screen provides a viewer with a real, pseudoscopic image (A'B'C') of the object with full horizontal and vertical parallax. Alternatively, a third screen identical to the viewing screen and spaced apart from a second visible light image intensifier may be positioned between the first image intensifier and the viewing screen, thereby providing the viewer with a virtual, orthoscopic image (A"B"C") of the object (ABC) with full horizontal and vertical parallax.
Aberration analyses for improving the frontal projection three-dimensional display.
Gao, Xin; Sang, Xinzhu; Yu, Xunbo; Wang, Peng; Cao, Xuemei; Sun, Lei; Yan, Binbin; Yuan, Jinhui; Wang, Kuiru; Yu, Chongxiu; Dou, Wenhua
2014-09-22
Crosstalk severely affects the viewing experience of auto-stereoscopic 3D displays based on a frontal projection lenticular sheet. Unclear stereo vision and ghosts are observed in the marginal viewing zones (MVZs); to suppress them, the aberration of the lenticular sheet combined with the frontal projector is analyzed and the design optimized. Theoretical and experimental results show that increasing the radius of curvature (ROC) or decreasing the aperture of the lenticular sheet can suppress the aberration and reduce the crosstalk. A projector array with 20 micro-projectors is used to frontally project 20 parallax images onto one lenticular sheet with a ROC of 10 mm and a size of 1.9 m × 1.2 m. A high-quality 3D image is experimentally demonstrated in both the mid-viewing zone and the MVZs in the optimal viewing plane. A clear 3D depth of 1.2 m can be perceived. To provide an excellent 3D image and enlarge the field of view at the same time, a novel lenticular sheet structure is presented to reduce aberration, and the crosstalk is well suppressed.
System and method for attitude determination based on optical imaging
NASA Technical Reports Server (NTRS)
Junkins, John L. (Inventor); Pollock, Thomas C. (Inventor); Mortari, Daniele (Inventor)
2003-01-01
A method and apparatus are provided for receiving a first set of optical data from a first field of view and receiving a second set of optical data from a second field of view. A portion of the first set of optical data is communicated and a portion of the second set of optical data is reflected, both toward an optical combiner. The optical combiner then focuses the portions onto the image plane such that information at the image plane that is associated with the first and second fields of view is received by an optical detector and used to determine an attitude characteristic.
NASA Technical Reports Server (NTRS)
Kast, J. W.
1975-01-01
We consider the design of a Kirkpatrick-Baez grazing-incidence X-ray telescope to be used in a scan of the sky and analyze the distribution of both properly reflected rays and spurious images over the field of view. To obtain maximum effective area over the field of view, it is necessary to increase the spacing between plates for a scanning telescope as compared to a pointing telescope. Spurious images are necessarily present in this type of lens, but they can be eliminated from the field of view by adding properly located baffles or collimators. Results of a computer design are presented.
Planarity constrained multi-view depth map reconstruction for urban scenes
NASA Astrophysics Data System (ADS)
Hou, Yaolin; Peng, Jianwei; Hu, Zhihua; Tao, Pengjie; Shan, Jie
2018-05-01
Multi-view depth map reconstruction is regarded as a suitable approach for 3D generation of large-scale scenes due to its flexibility and scalability. However, there are challenges when this technique is applied to urban scenes, where man-made regular shapes are prevalent. To address these challenges, this paper proposes a planarity constrained multi-view depth (PMVD) map reconstruction method. Starting with image segmentation and feature matching for each input image, the main procedure is iterative optimization under the constraints of planar geometry and smoothness. A set of candidate local planes is first generated by an extended PatchMatch method. The image matching costs are then computed and aggregated by an adaptive-manifold filter (AMF), whereby the smoothness constraint is applied to adjacent pixels through belief propagation. Finally, multiple criteria are used to eliminate image matching outliers. (Vertical) aerial images, oblique (aerial) images, and ground images are used for qualitative and quantitative evaluations. The experiments demonstrated that PMVD outperforms the popular multi-view depth map reconstruction with twice the accuracy for the aerial datasets, and achieves an outcome comparable to the state-of-the-art for ground images. As expected, PMVD is able to preserve the planarity of piecewise flat structures in urban scenes and restore the edges in depth-discontinuous areas.
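The cost-computation-and-aggregation idea in the pipeline above can be illustrated with a much simpler stand-in: real PMVD scores slanted local planes proposed by PatchMatch and aggregates with an adaptive-manifold filter, whereas this sketch uses plain fronto-parallel horizontal shifts (a plane-sweep over disparities) and box-window aggregation. All names and parameters are illustrative.

```python
import numpy as np

def sad_cost_volume(ref, neighbor, disparities, patch=3):
    """Aggregate SAD matching costs between a reference and a neighbor image
    over candidate horizontal disparities (fronto-parallel simplification).
    """
    h, w = ref.shape
    half = patch // 2
    cost = np.full((len(disparities), h, w), np.inf)
    for k, d in enumerate(disparities):
        # Shift the neighbor horizontally by the candidate disparity.
        shifted = np.full_like(neighbor, np.nan, dtype=float)
        if d >= 0:
            shifted[:, d:] = neighbor[:, :w - d]
        else:
            shifted[:, :w + d] = neighbor[:, -d:]
        diff = np.abs(ref.astype(float) - shifted)
        # Box-window aggregation over the patch; undefined borders stay inf.
        for y in range(half, h - half):
            for x in range(half, w - half):
                win = diff[y - half:y + half + 1, x - half:x + half + 1]
                if not np.isnan(win).any():
                    cost[k, y, x] = win.sum()
    return cost

def winner_take_all(cost, disparities):
    """Per-pixel disparity minimizing the aggregated cost."""
    return np.asarray(disparities)[np.argmin(cost, axis=0)]
```

Replacing the box window with edge-aware filtering and the disparity sweep with slanted-plane candidates recovers, in spirit, the AMF and PatchMatch components of PMVD.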
Atmospheric Science Data Center
2014-05-15
article title: Tornado Cuts Through La Plata, Maryland. A category F4 tornado tore through La Plata, Maryland on April 28, 2002, killing 5 and ... illustrates the strip of flattened vegetation left by the tornado. The lower image was acquired by MISR's nadir (vertical-viewing) ...
Titan: Kraken and Ligeia In Sharper Focus
2017-03-15
As it sped away from a relatively distant encounter with Titan on Feb. 17, 2017, NASA's Cassini spacecraft captured this mosaic view of the moon's northern lakes and seas. Cassini's viewing angle over Kraken Mare and Ligeia Mare was better during this flyby than previous encounters, providing increased contrast for viewing these seas. Because the spacecraft is peering through less of Titan's haze toward Kraken and Ligeia, more details on their shorelines are visible, compared to earlier maps. This was one of several "non-targeted" Cassini Titan flybys in 2017 that allow the mission to image the moon's north polar region and track clouds there. ("Non-targeted" means Cassini did not have to use any rocket-thruster firings to steer itself toward the flyby.) Several prominent cloud streaks are visible at mid-latitudes between 45 and 55 degrees north latitude, on the right side of the image. Smaller bright clouds are seen just above the sea called Punga Mare (roughly at center). Scientists are seeing increasing cloud activity in Titan's north polar region as the seasons continue to change from spring to summer there, though not as much as predicted by models of Titan's atmosphere. The images in this mosaic were taken with the Cassini spacecraft narrow-angle camera using a spectral filter sensitive to wavelengths of near-infrared light centered at 938 nanometers. The view was obtained at a distance of approximately 150,700 miles (242,500 kilometers) from Titan. Image scale is about 1.6 miles (2.6 kilometers) per pixel. The view is an orthographic projection centered on 68 degrees north latitude, 225 degrees west longitude. An orthographic view is most like the view seen by a distant observer looking through a telescope. http://photojournal.jpl.nasa.gov/catalog/PIA21434
Saenz, Daniel L.; Paliwal, Bhudatt R.; Bayouth, John E.
2014-01-01
ViewRay, a novel technology providing soft-tissue imaging during radiotherapy, is investigated for its treatment planning capabilities, assessing treatment plan dose homogeneity and conformity compared with linear accelerator plans. ViewRay offers both adaptive radiotherapy and image guidance. The combination of cobalt-60 (Co-60) with 0.35 Tesla magnetic resonance imaging (MRI) allows for magnetic resonance (MR)-guided intensity-modulated radiation therapy (IMRT) delivery with multiple beams. This study investigated head and neck, lung, and prostate treatment plans to understand what is possible on ViewRay and to narrow the focus toward sites with optimal dosimetry. The goal is not to provide a rigorous assessment of planning capabilities, but rather a first-order demonstration of ViewRay planning abilities. Images, structure sets, points, and dose from treatment plans created in Pinnacle for patients in our clinic were imported into ViewRay. The same objectives were used to assess plan quality, and all critical structures were treated as similarly as possible. Homogeneity index (HI), conformity index (CI), and volume receiving ≥20% of prescription dose (DRx) were calculated to assess the plans. The 95% confidence intervals were recorded for all measurements and presented as error bars in the graphs. The homogeneity index (D5/D95) had a 1-5% inhomogeneity increase for head and neck, 3-8% for lung, and 4-16% for prostate. CI revealed a modest conformity increase for lung. The volume receiving 20% of the prescription dose increased 2-8% for head and neck and up to 4% for lung and prostate. Overall, for head and neck, Co-60 ViewRay treatments planned with its Monte Carlo treatment planning software were comparable with 6 MV plans computed with the convolution superposition algorithm on the Pinnacle treatment planning system. PMID:24872603
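The plan metrics used in this study are simple to state in code. A minimal sketch, assuming HI = D5/D95 with D5 and D95 taken as the 95th and 5th dose percentiles of the target, and the low-dose spill metric taken as the fraction of volume at or above a percentage of the prescription (function names are illustrative):

```python
import numpy as np

def homogeneity_index(dose):
    """HI = D5 / D95 over a target dose array; values near 1 mean a
    homogeneous plan. D5 is exceeded by only 5% of voxels (95th
    percentile); D95 is exceeded by 95% of voxels (5th percentile)."""
    d5 = np.percentile(dose, 95)
    d95 = np.percentile(dose, 5)
    return d5 / d95

def volume_fraction_receiving(dose, prescription, pct=20.0):
    """Fraction of the sampled volume receiving at least `pct` percent
    of the prescription dose (the spill metric used in the study)."""
    return float(np.mean(dose >= prescription * pct / 100.0))
```

A perfectly uniform dose gives HI = 1; any hot or cold spots push it above 1, matching the reported 1-16% inhomogeneity increases.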
Kreplin, Ute; Fairclough, Stephen H
2013-01-01
The contemplation of visual art requires attention to be directed to external stimulus properties and internally generated thoughts. It has been proposed that the medial rostral prefrontal cortex (rPFC; BA10) plays a role in the maintenance of attention on external stimuli whereas the lateral area of the rPFC is associated with the preservation of attention on internal cognitions. An alternative hypothesis associates activation of medial rPFC with internal cognitions related to the self during emotion regulation. The aim of the current study was to differentiate activation within rPFC using functional near infrared spectroscopy (fNIRS) during the viewing of visual art selected to induce positive and negative valence, which were viewed under two conditions: (1) emotional introspection and (2) external object identification. Thirty participants (15 female) were recruited. Sixteen pre-rated images that represented either positive or negative valence were selected from an existing database of visual art. In one condition, participants were directed to engage in emotional introspection during picture viewing. The second condition involved a spot-the-difference task where participants compared two almost identical images, a viewing strategy that directed attention to external properties of the stimuli. The analysis revealed a significant increase of oxygenated blood in the medial rPFC during viewing of positive images compared to negative images. This finding suggests that the rPFC is involved during positive evaluations of visual art that may be related to judgment of pleasantness or attraction. The fNIRS data revealed no significant main effect between the two viewing conditions, which seemed to indicate that the emotional impact of the stimuli remained unaffected by the two viewing conditions.
General view of the flight deck of the Orbiter Discovery ...
General view of the flight deck of the Orbiter Discovery looking forward along the approximate center line of the orbiter at the center console. The Multifunction Electronic Display System (MEDS) is evident in the mid-ground center of this image; this system was a major upgrade from the previous analog display system. The commander's station is on the port side, or left in this view, and the pilot's station is on the starboard side, or right in this view. Note the grab bar in the upper center of the image, which was primarily used for commander and pilot ingress with the orbiter in a vertical position on the launch pad. Also note that the forward observation windows have protective covers over them. This image was taken at Kennedy Space Center. - Space Transportation System, Orbiter Discovery (OV-103), Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX
The effects of the ideal of female beauty on mood and body satisfaction.
Pinhas, L; Toner, B B; Ali, A; Garfinkel, P E; Stuckless, N
1999-03-01
The present study examined changes in women's mood states resulting from their viewing pictures in fashion magazines of models who represent a thin ideal. Female university students completed the Profile of Mood States (POMS), the Body Parts Satisfaction Scale (BPSS), and the Eating Disorder Inventory (EDI). They were then exposed to 20 slides; the experimental group (N = 51) viewed images of female fashion models and a control group (N = 67) viewed slides containing no human figures. All subjects then completed the POMS and the BPSS again. Women were more depressed (R2 = 0.745, p < .05) and more angry (R2 = 0.73, p < .01) following exposure to slides of female fashion models. Viewing images of female fashion models had an immediate negative effect on women's mood. This study, therefore, supports the hypothesis that media images do play a role in disordered eating.
Flow visualization and characterization of evaporating liquid drops
NASA Technical Reports Server (NTRS)
Chao, David F. (Inventor); Zhang, Nengli (Inventor)
2004-01-01
An optical system, consisting of drop-reflection imaging, reflection-refracted shadowgraphy, and top-view photography, is used to measure the spreading and instantaneous dynamic contact angle of a volatile-liquid drop on a non-transparent substrate. The drop-reflection image and the shadowgraph are formed by projecting onto a screen the portions of a collimated laser beam partially reflected by the drop and partially passing through it, while the top view is separately recorded with a video camera, recorder, and monitor. For a transparent liquid on a reflective solid surface, thermocapillary convection in the drop, induced by evaporation, can be viewed nonintrusively, and the drop's real-time profile data are synchronously recorded by video recording systems. Experimental results obtained from this technique clearly reveal that evaporation and thermocapillary convection greatly affect the spreading process and the characteristics of the dynamic contact angle of the drop.
Investigation of cone-beam CT image quality trade-off for image-guided radiation therapy
NASA Astrophysics Data System (ADS)
Bian, Junguo; Sharp, Gregory C.; Park, Yang-Kyun; Ouyang, Jinsong; Bortfeld, Thomas; El Fakhri, Georges
2016-05-01
It is well-known that projections acquired over an angular range slightly over 180° (so-called short scan) are sufficient for fan-beam reconstruction. However, due to practical imaging conditions (projection data and reconstruction image discretization, physical factors, and data noise), the short-scan reconstructions may have different appearances and properties from the full-scan (scans over 360°) reconstructions. Nevertheless, short-scan configurations have been used in applications such as cone-beam CT (CBCT) for head-neck-cancer image-guided radiation therapy (IGRT) that only requires a small field of view due to the potential reduced imaging time and dose. In this work, we studied the image quality trade-off for full, short, and full/short scan configurations with both conventional filtered-backprojection (FBP) reconstruction and iterative reconstruction algorithms based on total-variation (TV) minimization for head-neck-cancer IGRT. Anthropomorphic and Catphan phantoms were scanned at different exposure levels with a clinical scanner used in IGRT. Both visualization- and numerical-metric-based evaluation studies were performed. The results indicate that the optimal exposure level and number of views are in the middle range for both FBP and TV-based iterative algorithms and the optimization is object-dependent and task-dependent. The optimal view numbers decrease with the total exposure levels for both FBP and TV-based algorithms. The results also indicate there are slight differences between FBP and TV-based iterative algorithms for the image quality trade-off: FBP seems to be more in favor of larger number of views while the TV-based algorithm is more robust to different data conditions (number of views and exposure levels) than the FBP algorithm. The studies can provide a general guideline for image-quality optimization for CBCT used in IGRT and other applications.
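The "short scan" condition quoted at the start (slightly more than 180°) is, for fan-beam geometry, 180° plus the full fan angle. A small sketch of that bookkeeping, with hypothetical helper names:

```python
import math

def short_scan_range_deg(fan_angle_deg):
    """Minimum angular range for a fan-beam short scan:
    180 degrees plus the full fan angle."""
    return 180.0 + fan_angle_deg

def n_views(range_deg, angular_step_deg):
    """Number of projections needed to cover an angular range at a
    fixed angular step between views."""
    return math.ceil(range_deg / angular_step_deg)
```

So a system with a 20° fan needs at least a 200° arc, and at 0.5° per view that is 400 projections versus 720 for a full scan at the same step, which is the dose/time saving that motivates short-scan CBCT in IGRT.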
Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display.
Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu
2015-05-18
We present an image-quality improvement in a parallax barrier (PB)-based multiview autostereoscopic 3D display system under real-time tracking of the positions of a viewer's eyes. The system exploits a parallax barrier engineered to offer significantly improved quality of three-dimensional images for a moving viewer without eyewear under dynamic eye tracking. The improved image quality includes enhanced uniformity of image brightness, reduced point crosstalk, and no pseudoscopic effects. We control the relative ratio between two parameters, i.e., the pixel size and the aperture of a parallax barrier slit, to improve the uniformity of image brightness in a viewing zone. The eye tracking that monitors the positions of a viewer's eyes enables pixel data control software to turn on only the pixels for view images near the viewer's eyes (the other pixels turned off), thus reducing point crosstalk. The eye-tracking-combined software delivers the correct image to each eye, therefore producing no pseudoscopic effects at zone boundaries. The viewing zone can span an area larger than the central viewing zone offered by a conventional PB-based multiview autostereoscopic 3D display (no eye tracking). Our 3D display system also provides multiple views for motion parallax under eye tracking. More importantly, we demonstrate a substantial reduction of the point crosstalk of images in the viewing zone, its level being comparable to that of a commercialized eyewear-assisted 3D display system. The multiview autostereoscopic 3D display presented can greatly resolve the point crosstalk problem, which is one of the critical factors that has made it difficult for previous multiview autostereoscopic 3D display technologies to replace their eyewear-assisted counterparts.
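For orientation, the first-order geometry of a parallax barrier follows from similar triangles: a slit acts as a pinhole, so a pixel offset p at gap g behind the barrier maps to a viewing-window offset p·D/g at viewing distance D, and the slit pitch must be slightly less than N pixel pitches so that all view windows converge on the viewer. These are generic textbook relations, not the engineered design of this paper; treat the functions below as an assumption-laden sketch.

```python
def barrier_pitch(n_views, pixel_pitch, viewing_distance, gap):
    """Slit pitch making all n view windows converge at the viewing
    distance: slightly less than n_views * pixel_pitch."""
    return n_views * pixel_pitch * viewing_distance / (viewing_distance + gap)

def barrier_gap(pixel_pitch, window_pitch, viewing_distance):
    """Barrier-to-pixel gap so adjacent view pixels project to viewing
    windows `window_pitch` apart (e.g. one eye separation)."""
    return pixel_pitch * viewing_distance / window_pitch
```

For example, a 4-view design with 0.1 mm pixels at a 600 mm viewing distance needs a slit pitch just under 0.4 mm, and a sub-millimeter gap to place adjacent windows 65 mm (one eye separation) apart.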
Global view of Venus from Magellan, Pioneer, and Venera data
1991-10-29
This global view of Venus, centered at 270 degrees east longitude, is a compilation of data from several sources. Magellan synthetic aperture radar mosaics from the first cycle of Magellan mapping are mapped onto a computer-simulated globe to create the image. Data gaps are filled with Pioneer-Venus orbiter data, or a constant mid-range value. Simulated color is used to enhance small-scale structure. The simulated hues are based on color images recorded by the Soviet Venera 13 and 14 spacecraft. The image was produced at the Jet Propulsion Laboratory (JPL) Multimission Image Processing Laboratory and is a single frame from a video released at the JPL news conference, 10-29-91. View provided by JPL with alternate number P-39225 MGN81.
2018-05-16
This extraordinary view of Jupiter was captured by NASA's Juno spacecraft on the outbound leg of its 12th close flyby of the gas giant planet. This new perspective of Jupiter from the south makes the Great Red Spot appear as though it is in northern territory. This view is unique to Juno and demonstrates how different our view is when we step off the Earth and experience the true nature of our three-dimensional universe. Juno took the images used to produce this color-enhanced image on April 1 between 3:04 a.m. PDT (6:04 a.m. EDT) and 3:36 a.m. PDT (6:36 a.m. EDT). At the time the images were taken, the spacecraft was between 10,768 miles (17,329 kilometers) to 42,849 miles (68,959 kilometers) from the tops of the clouds of the planet at a southern latitude spanning 34.01 to 71.43 degrees. Citizen scientists Gerald Eichstädt and Seán Doran created this image using data from the spacecraft's JunoCam imager. The view is a composite of several separate JunoCam images that were re-projected, blended, and healed. https://photojournal.jpl.nasa.gov/catalog/PIA22421
Automatic image database generation from CAD for 3D object recognition
NASA Astrophysics Data System (ADS)
Sardana, Harish K.; Daemi, Mohammad F.; Ibrahim, Mohammad K.
1993-06-01
The development and evaluation of multiple-view 3-D object recognition systems is based on a large set of model images. Due to the various advantages of using CAD, it is becoming more and more practical to use existing CAD data in computer vision systems. Current PC-level CAD systems are capable of physical image modelling and rendering involving positional variations in cameras, light sources, etc. We have formulated a modular scheme for automatic generation of various aspects (views) of the objects in a model-based 3-D object recognition system. These views are generated at desired orientations on the unit Gaussian sphere. With a suitable network file system (NFS), the images can be stored directly in a database located on a file server. This paper presents the image modelling solutions using CAD in relation to the multiple-view approach. Our modular scheme for data conversion and automatic image database storage for such a system is discussed. We have used this approach in 3-D polyhedron recognition. An overview of the results, the advantages and limitations of using CAD data, and conclusions from using such a scheme are also presented.
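Generating views "at desired orientations on the unit Gaussian sphere" can be approximated with a Fibonacci-spiral sampling, which spreads n view directions roughly uniformly over the sphere. This is a generic sketch of one common sampling scheme, not the authors' specific view-generation module:

```python
import math

def fibonacci_sphere(n):
    """Return n roughly uniform unit view directions (x, y, z) on the
    Gaussian sphere, laid out along a golden-angle spiral."""
    golden = math.pi * (3.0 - math.sqrt(5.0))  # golden angle in radians
    pts = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n          # even steps in z
        r = math.sqrt(1.0 - z * z)             # radius of that latitude
        th = golden * i                        # spiral azimuth
        pts.append((r * math.cos(th), r * math.sin(th), z))
    return pts
```

Each direction would then drive one CAD render (camera placed along the direction, looking at the object center) to populate the model-image database.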
2016-09-05
Saturn's rings appear to bend as they pass behind the planet's darkened limb due to refraction by Saturn's upper atmosphere. The effect is the same as that seen in an earlier Cassini view (see PIA20491), except this view looks toward the unlit face of the rings, while the earlier image viewed the rings' sunlit side. The difference in illumination brings out some noticeable differences. The A ring is much darker here, on the rings' unlit face, since its larger particles primarily reflect light back toward the sun (and away from Cassini's cameras in this view). The narrow F ring (at bottom), which was faint in the earlier image, appears brighter than all of the other rings here, thanks to the microscopic dust that is prevalent within that ring. Small dust tends to scatter light forward (meaning close to its original direction of travel), making it appear bright when backlit. (A similar effect has plagued many a driver with a dusty windshield when driving toward the sun.) This view looks toward the unilluminated side of the rings from about 19 degrees below the ring plane. The image was taken in red light with the Cassini spacecraft narrow-angle camera on July 24, 2016. The view was acquired at a distance of approximately 527,000 miles (848,000 kilometers) from Saturn and at a sun-Saturn-spacecraft, or phase, angle of 169 degrees. Image scale is 3 miles (5 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20497
NASA Astrophysics Data System (ADS)
Liu, Tao; Abd-Elrahman, Amr
2018-05-01
A deep convolutional neural network (DCNN) requires massive training datasets to trigger its image classification power, while collecting training samples for remote sensing applications is usually an expensive process. When a DCNN is simply implemented with traditional object-based image analysis (OBIA) for classification of Unmanned Aerial Systems (UAS) orthoimagery, its power may be undermined if the number of training samples is relatively small. This research aims to develop a novel OBIA classification approach that can take advantage of DCNN by enriching the training dataset automatically using multi-view data. Specifically, this study introduces a Multi-View Object-based classification using Deep convolutional neural network (MODe) method to process UAS images for land cover classification. MODe conducts the classification on multi-view UAS images instead of directly on the orthoimage, and obtains the final results via a voting procedure. 10-fold cross validation results show the mean overall classification accuracy increasing substantially, from 65.32% when DCNN was applied to the orthoimage to 82.08% when MODe was implemented. This study also compared the performance of the support vector machine (SVM) and random forest (RF) classifiers with DCNN under the traditional OBIA and the proposed multi-view OBIA frameworks. The results indicate that the advantage of DCNN over traditional classifiers in terms of accuracy is more obvious when these classifiers were applied within the proposed multi-view OBIA framework than within the traditional OBIA framework.
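The voting step of MODe reduces, per image object, to a majority vote over the class labels predicted in each view. A minimal sketch (the tie-breaking rule here, first-seen label wins, is arbitrary and not taken from the paper):

```python
from collections import Counter

def majority_vote(view_labels):
    """Fuse per-view class predictions for one object by majority vote.
    `view_labels` is a sequence of class labels, one per view."""
    return Counter(view_labels).most_common(1)[0][0]
```

Because each object appears in many oblique views, the vote both enlarges the effective evidence per object and suppresses single-view misclassifications, which is the mechanism behind the reported accuracy gain.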
Scanning laser ophthalmoscope fundus cyclometry in near-natural viewing conditions.
Ehrt, O; Boergen, K P
2001-09-01
For a better understanding of motor and sensory adaptations in cyclodeviations, subjective and objective ocular torsion have to be measured under the same conditions. The search coil technique and videooculography allow natural viewing but only assess relative cycloduction, the dynamics of torsion over a short period of time. Cycloposition, on the other hand, can be measured by analysing the position of the foveola relative to the optic disc with fundus photographs but only in nonphysiological viewing. The aim of the study was to develop a technique that allows natural viewing conditions during fundus cyclometry. The scanning laser beam of the SLO was deflected by 90 degrees with a semitransparent mirror in front of the patient's eyes. The patient was able to look through the semitransparent mirror with both eyes into the room, e.g. at Harms' tangent screen. The infrared SLO images the central retina via the mirror through the undilated pupil. Digital image analysis quantifies the cycloposition of the eye. Controlled head movements while fixating the centre of Harms' tangent screen allow measurements in reproducible gaze positions. The semitransparent mirror reduces SLO image brightness, but image quality is sufficient for cyclometry after contrast enhancement. The laser light can be vaguely perceived by the patient but does not interfere with natural viewing. Reproducibility of the measurement is within +/- 1 degree SD. Our modification of SLO fundus cyclometry allows direct measurements of cycloposition in natural viewing conditions. This opens a new field for investigations of cyclodeviations and their sensory and motor adaptations.
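Fundus cyclometry reduces cycloposition to the angle of the disc-foveola axis in the image. A minimal sketch of that measurement from two image coordinates (a generic illustration of the geometry, not the SLO image-analysis pipeline):

```python
import math

def cycloposition_deg(fovea_xy, disc_xy):
    """Angle (degrees) of the foveola-to-disc axis relative to the image
    horizontal; deviation from a normative value indicates a
    cyclodeviation of the eye."""
    dx = disc_xy[0] - fovea_xy[0]
    dy = disc_xy[1] - fovea_xy[1]
    return math.degrees(math.atan2(dy, dx))
```

Comparing this angle across reproducible gaze positions (e.g. while fixating points on Harms' tangent screen) yields cycloposition as a function of gaze, within the ±1 degree reproducibility reported.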
The effect of music video clips on adolescent boys' body image, mood, and schema activation.
Mulgrew, Kate E; Volcevski-Kostas, Diana; Rendell, Peter G
2014-01-01
There is limited research that has examined experimentally the effects of muscular images on adolescent boys' body image, with no research specifically examining the effects of music television. The aim of the current study was to examine the effects of viewing muscular and attractive singers in music video clips on early, mid, and late adolescent boys' body image, mood, and schema activation. Participants were 180 boys in grade 7 (mean age = 12.73 years), grade 9 (mean age = 14.40 years) or grade 11 (mean age = 16.15 years) who completed pre- and post-test measures of mood and body satisfaction after viewing music videos containing male singers of muscular or average appearance. They also completed measures of schema activation and social comparison after viewing the clips. The results showed that the boys who viewed the muscular clips reported poorer upper body satisfaction, lower appearance satisfaction, lower happiness, and more depressive feelings compared to boys who viewed the clips depicting singers of average appearance. There was no evidence of increased appearance schema activation but the boys who viewed the muscular clips did report higher levels of social comparison to the singers. The results suggest that music video clips are a powerful form of media in conveying information about the male ideal body shape and that negative effects are found in boys as young as 12 years.
Iuculano, Ambra; Zoppi, Maria Angelica; Piras, Alessandra; Arras, Maurizio; Monni, Giovanni
2014-09-10
Abstract Objective: To assess the brain stem depth/brain stem-occipital bone distance (BS/BSOB) ratio and the four-line view in images obtained for nuchal translucency (NT) screening in fetuses with open spina bifida (OSB). Methods: Single-center, retrospective study based on the assessment of NT screening images of fetuses with OSB. The ratio between the BS depth and the BSOB distance was calculated (BS/BSOB ratio) and the four-line view observed; the sensitivity of a BS/BSOB ratio greater than or equal to 1, and of the failure to detect the four-line view, were calculated. Results: There were 17 cases of prenatally diagnosed OSB. In six cases the suspicion of OSB was raised during NT screening, in six cases the diagnosis was made before 20 weeks, and in five cases during the anomaly scan. The BS/BSOB ratio was greater than or equal to 1 in all 17 cases, and only three lines were visualized in 15/17 images of the OSB cases, yielding sensitivities of 100% (95% CI, 81 to 100%) and 88% (95% CI, 65 to 96%), respectively. Conclusion: Assessment of the BS/BSOB ratio and the four-line view in NT images is feasible and detects fetuses affected by OSB with high sensitivity. The presence of associated anomalies or of an enlarged NT enhances early detection.
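The screening rule itself is a one-liner: compute BS/BSOB and flag ratios at or above 1, the criterion that was positive in all 17 OSB cases. A sketch with illustrative names:

```python
def osb_screen(bs_depth_mm, bsob_mm):
    """Return (BS/BSOB ratio, screen-positive flag) from the brain stem
    depth and brain stem-occipital bone distance measured on the NT
    screening image. Ratio >= 1 is the screen-positive criterion."""
    ratio = bs_depth_mm / bsob_mm
    return ratio, ratio >= 1.0
```

In an unaffected fetus the brain stem sits well forward of the occipital bone (ratio below 1); caudal displacement in OSB compresses the BSOB distance and pushes the ratio to 1 or above.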
Single DMD time-multiplexed 64-views autostereoscopic 3D display
NASA Astrophysics Data System (ADS)
Loreti, Luigi
2013-03-01
Based on a previous prototype of the real-time 3D holographic display developed last year, we developed a new concept for an auto-stereoscopic, multiview (64 views), wide-angle (90°), 3D full-color display. The display is based on an RGB laser light source illuminating a DMD (Discovery 4100, 0.7") at 24,000 fps, with an image deflection system made with an AOD (acousto-optic deflector) driven by a piezo-electric transducer that generates a variable standing acoustic wave in the crystal, which acts as a phase grating. The DMD projects in fast sequence 64 points of view of the image onto the crystal cube. Depending on the frequency of the standing wave, the input picture sent by the DMD is deflected to a different angle of view. A holographic screen at the proper distance diffuses the rays in the vertical direction (60°) and horizontally selects (1°) only the rays directed toward the observer. A telescope optical system enlarges the image to the right dimension. A VHDL firmware to render in real time (16 ms) 64 views (16-bit 4:2:2) of a CAD model (obj, dxf or 3Ds) and depth-map-encoded video images was developed in the resident Virtex5 FPGA of the Discovery 4100 SDK, thus eliminating the need for image transfer and high-speed links.
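The timing budget implied by these numbers is easy to check: one DMD running at 24,000 frames per second, time-multiplexed over 64 views, leaves 375 Hz of refresh per view, and the 16 ms render budget matches a ~60 Hz scene update. A trivial sketch:

```python
def per_view_refresh_hz(dmd_fps, n_views):
    """Refresh rate left for each view when a single DMD
    time-multiplexes all views in sequence."""
    return dmd_fps / n_views
```

Any per-view rate comfortably above flicker fusion (~60 Hz) is viable, so 375 Hz leaves headroom for bit-plane sequencing of color and gray levels.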
Imaging of particles with 3D full parallax mode with two-color digital off-axis holography
NASA Astrophysics Data System (ADS)
Kara-Mohammed, Soumaya; Bouamama, Larbi; Picart, Pascal
2018-05-01
This paper proposes an approach based on two orthogonal views and two wavelengths for recording off-axis two-color holograms. The approach permits discrimination of particles aligned along the sight-view axis. The experimental set-up is based on a double Mach-Zehnder architecture in which two different wavelengths provide the reference and the object beams. The digital processing to get images of the particles is based on convolution so as to obtain images with no wavelength dependence. The spatial bandwidth of the angular spectrum transfer function is adapted in order to increase the maximum reconstruction distance, which is generally limited to a few tens of millimeters. In order to get the images of particles in the 3D volume, a calibration process is proposed, based on the modulation theorem, to perfectly superimpose the two views in a common XYZ axis. The experimental set-up is applied to two-color hologram recording of moving non-calibrated opaque particles with an average diameter of about 150 μm. After processing the two-color holograms with image reconstruction and view calibration, the locations of particles in the 3D volume can be obtained. In particular, the ambiguity between close particles, which generates hidden particles in a single-view scheme, can be removed to determine the exact number of particles in the region of interest.
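The angular spectrum transfer function mentioned here is standard scalar diffraction. A compact numpy sketch of angular-spectrum propagation with evanescent components suppressed (the bandwidth adaptation the paper applies to extend the reconstruction distance is omitted; this is the textbook baseline):

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a sampled complex field by distance z using the
    angular-spectrum transfer function H = exp(i 2*pi z sqrt(1/lambda^2
    - fx^2 - fy^2)); evanescent components are zeroed."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2j * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(kz * z) * (arg > 0)          # propagating waves only
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Running this at each candidate depth z and locating in-focus particle silhouettes is the basic reconstruction loop; doing it for both orthogonal views resolves particles hidden along a single line of sight.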
A technique for automatically extracting useful field of view and central field of view images.
Pandey, Anil Kumar; Sharma, Param Dev; Aheer, Deepak; Kumar, Jay Prakash; Sharma, Sanjay Kumar; Patel, Chetan; Kumar, Rakesh; Bal, Chandra Sekhar
2016-01-01
It is essential to ensure the uniform response of a single photon emission computed tomography gamma camera system before using it for clinical studies by exposing it to a uniform flood source. Vendor-specific acquisition and processing protocols provide for studying flood source images along with quantitative uniformity parameters such as integral and differential uniformity. However, a significant difficulty is that the time required to acquire a flood source image varies from 10 to 35 min, depending both on the activity of the Cobalt-57 flood source and on the counts prespecified in the vendor's protocol (usually 4000K-10,000K counts). If the acquired total counts are less than the prespecified total counts, the vendor's uniformity processing protocol does not proceed with the computation of the quantitative uniformity parameters. In this study, we have developed and verified a technique for reading the flood source image, removing unwanted information, and automatically extracting and saving the useful field of view and central field of view images for the calculation of the uniformity parameters. This was implemented using MATLAB R2013b running on the Ubuntu operating system and was verified by subjecting it to simulated and real flood source images. The accuracy of the technique was found to be encouraging, especially in view of the practical difficulties with vendor-specific protocols. It may be used as a preprocessing step while calculating uniformity parameters of the gamma camera in less time and with fewer constraints.
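Once the UFOV and CFOV images are extracted, the headline metric is integral uniformity, 100·(max − min)/(max + min) over the field of view. A sketch in Python rather than the authors' MATLAB, with an illustrative central-crop helper standing in for the CFOV extraction (the NEMA convention takes the CFOV as the central 75% of the UFOV):

```python
import numpy as np

def integral_uniformity(img):
    """Integral uniformity (%) of a flood image region:
    100 * (max - min) / (max + min) over the pixel counts."""
    mx, mn = float(img.max()), float(img.min())
    return 100.0 * (mx - mn) / (mx + mn)

def central_crop(img, frac=0.75):
    """Crop the central `frac` fraction of each axis, e.g. to take a
    CFOV-like region from a UFOV image."""
    ny, nx = img.shape
    cy, cx = int(round(ny * frac)), int(round(nx * frac))
    y0, x0 = (ny - cy) // 2, (nx - cx) // 2
    return img[y0:y0 + cy, x0:x0 + cx]
```

Computing the metric directly on the cropped arrays is what lets uniformity be reported even when the vendor protocol refuses to process an under-count flood acquisition.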
Evaluation of collimation and imaging configuration in scintimammography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsui, B.M.W.; Frey, E.C.; Wessell, D.E.
1996-12-31
Conventional scintimammography (SM) with {sup 99m}Tc sestamibi has been limited to taking a single lateral view of the breast using a parallel-hole low-energy high-resolution (LEHR) collimator. The collimator is placed close to the breast for the best possible spatial resolution. However, the collimator geometry precludes imaging the breast from other views. We evaluated using a pinhole collimator instead of a LEHR collimator in SM for improved spatial resolution and detection efficiency, and to allow additional imaging views. Results from theoretical calculations indicated that pinhole collimators could be designed with higher spatial resolution and detection efficiency than LEHR collimators when imaging small to medium size breasts. The geometrical shape of the pinhole collimator allows imaging of the breasts from both the lateral and craniocaudal views. The dual-view images allow better determination of the location of tumors within the breast and improved detection of tumors located in the medial region of the breast. A breast model that simulates the shape and composition of the breast and breast tumors with different sizes and locations was added to an existing 3D mathematical cardiac-torso (MCAT) phantom. A cylindrically shaped phantom with 10 cm diameter and spherical inserts with different sizes and {sup 99m}Tc sestamibi uptakes with respect to the background provided physical models of breasts with tumors. Simulation studies using the breast and MCAT phantoms and experimental studies using the cylindrical phantom confirmed the utility of the pinhole collimator in SM for improved breast tumor detection.
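The resolution/efficiency trade-off evaluated here can be explored with the standard textbook collimator expressions; these are generic formulas (symbols and values are ours), not design parameters from the study:

```python
import math

def pinhole_resolution_mm(d, z, f):
    """Geometric resolution of a pinhole collimator at the object plane:
    R = d * (f + z) / f, with aperture diameter d (mm), source-to-aperture
    distance z (mm), and aperture-to-detector distance f (mm)."""
    return d * (f + z) / f

def pinhole_sensitivity(d, z, theta_deg=0.0):
    """On-axis geometric efficiency of a pinhole aperture:
    g = d^2 * cos^3(theta) / (16 * z^2)."""
    t = math.radians(theta_deg)
    return d**2 * math.cos(t)**3 / (16.0 * z**2)

def parallel_hole_resolution_mm(d, l_eff, z):
    """Geometric resolution of a parallel-hole collimator:
    R = d * (l_eff + z) / l_eff, with hole diameter d and
    effective hole length l_eff."""
    return d * (l_eff + z) / l_eff
```

These expressions show why a pinhole can win for small objects held close to the aperture: resolution improves as z shrinks, while a parallel-hole collimator's resolution degrades linearly with distance regardless of hole diameter.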
Median prior constrained TV algorithm for sparse view low-dose CT reconstruction.
Liu, Yi; Shangguan, Hong; Zhang, Quan; Zhu, Hongqing; Shu, Huazhong; Gui, Zhiguo
2015-05-01
It is known that lowering the X-ray tube current (mAs) or tube voltage (kVp) while simultaneously reducing the total number of X-ray views (sparse view) is an effective means of achieving a low dose in computed tomography (CT) scans. However, the associated image quality from conventional filtered back-projection (FBP) usually degrades due to excessive quantum noise. Although sparse-view CT reconstruction via total variation (TV), under a scanning protocol of reduced X-ray tube current, has been demonstrated to achieve significant radiation dose reduction while maintaining image quality, noticeable patchy artifacts still exist in the reconstructed images. In this study, to address the problem of patchy artifacts, we proposed a median prior constrained TV regularization that retains image quality by introducing an auxiliary vector m in register with the object. Specifically, the approximate action of m is to draw, in each iteration, an object voxel toward its own local median, aiming to improve low-dose image quality with sparse-view projection measurements. An alternating optimization algorithm is then adopted to optimize the associated objective function. We refer to the median prior constrained TV regularization as "TV_MP" for simplicity. Experimental results on digital phantoms and a clinical phantom demonstrated that the proposed TV_MP with appropriate control parameters can ensure not only a higher signal-to-noise ratio (SNR) in the reconstructed image but also better resolution compared with the original TV method. Copyright © 2015 Elsevier Ltd. All rights reserved.
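The action of the auxiliary vector m, drawing each voxel toward its own local median in every iteration, can be sketched as follows; the weighting scheme here is illustrative, not the authors' exact objective function:

```python
import numpy as np

def local_median(x):
    """3x3 local median of a 2D image (edges replicated)."""
    p = np.pad(x, 1, mode='edge')
    h, w = x.shape
    stack = np.stack([p[i:i + h, j:j + w]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

def median_prior_update(x, beta=0.5):
    """One median-prior step: pull each voxel toward its local median m,
    as the auxiliary vector does in each TV_MP iteration. beta controls
    the strength of the pull (illustrative value)."""
    m = local_median(x)
    return (x + beta * m) / (1.0 + beta)
```

In the full algorithm this step alternates with a TV-regularized data-fidelity update; the median pull is what suppresses the isolated patchy artifacts that plain TV leaves behind.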
Weather Movie, Mars South Polar Region, March-April 2009 Close-up View
2009-04-16
This image shows the southern high-latitudes region of Mars from March 19 through April 14, 2009, a period when regional dust storms occurred along the retreating edge of carbon-dioxide frost in the seasonal south polar cap. Compared with a full-hemisphere view (see PIA11987), this view shows more details of where the dust clouds formed and how they moved around the planet. The movie combines hundreds of images from the Mars Color Imager (MARCI) camera on NASA's Mars Reconnaissance Orbiter. In viewing the movie, it helps to understand some of the artifacts produced by the nature of MARCI images when seen in animation. MARCI acquires images in swaths from pole to pole during the dayside portion of each orbit. The camera can cover the entire planet in just over 12 orbits, taking about 1 day to accumulate this coverage. The individual swaths are assembled into a mosaic, and that mosaic is shown here wrapped onto a sphere. The blurry portions of the mosaic, seen to be "pinwheeling" around the planet in the movie, are the portions of adjacent images viewing obliquely through the hazy atmosphere. Portions with sharper-looking details are the central part of an image, viewing more directly downward through less atmosphere than the obliquely viewed portions. MARCI has a 180-degree field of view, and Mars fills about 78 percent of that field of view when the camera is pointed down at the planet. However, the Mars Reconnaissance Orbiter often is pointed to one side or the other of its orbital track in order to acquire targeted observations by the higher-resolution imaging systems on the spacecraft. When such rolls exceed about 20 degrees, gaps occur in the mosaic of MARCI swaths. Also, dark gaps appear when data are missing, either because of irrecoverable data drops or because not all the data have yet been transmitted from the spacecraft. It isn't easy to see the actual dust motion in the atmosphere in these images, owing to the apparent motion of these artifacts.
However, by concentrating on specific surface features (craters, prominent ice deposits, etc.) and looking for the brownish clouds of dust, it is possible to see where the storms start and how they move around the planet. In addition to tracking the storms, it is also interesting to watch how the seasonal cap shrinks from the beginning to the end of the animation. This shrinkage results from sublimation of the carbon-dioxide frost from the surface as the frost absorbs southern hemisphere mid-spring sunlight. The temperature contrast between the warm sunlit ground just north of the cap's edge and the cold carbon-dioxide frost generates strong winds, enhanced by the excess carbon dioxide subliming off the cap. These winds create the conditions that lead to the dust storms. http://photojournal.jpl.nasa.gov/catalog/PIA11988
Slant Perception Under Stereomicroscopy.
Horvath, Samantha; Macdonald, Kori; Galeotti, John; Klatzky, Roberta L
2017-11-01
Objective These studies used threshold and slant-matching tasks to quantitatively measure human perception of 3-D planar images viewed through a stereomicroscope. The results are intended for use in developing augmented-reality surgical aids. Background Substantial research demonstrates that slant perception is performed with high accuracy from monocular and binocular cues, but less research concerns the effects of magnification. Viewing through a microscope affects the utility of monocular and stereo slant cues, but its impact is as yet unknown. Method Participants performed a threshold slant-detection task and matched the slant of a tool to a surface. Different stimuli and monocular versus binocular viewing conditions were implemented to isolate stereo cues alone, stereo with perspective cues, the accommodation cue only, and cues intrinsic to optical-coherence-tomography images. Results At a magnification of 5x, slant thresholds with stimuli providing stereo cues approximated those reported for direct viewing, about 12°. Most participants (75%) who passed a stereoacuity pretest could match a tool to the slant of a surface viewed with stereo at 5x magnification, with a mean compressive error of about 20% for optimized surfaces. Slant matching to optical coherence tomography images of the cornea viewed under the microscope was also demonstrated. Conclusion Despite the distortions and cue loss introduced by viewing under the stereomicroscope, most participants were able to detect and interact with slanted surfaces. Application The experiments demonstrated sensitivity to surface slant that supports the development of augmented-reality aids for microscope-guided surgery.
Opportunity's Surroundings After Sol 1820 Drive (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11841 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11841 NASA's Mars Exploration Rover Opportunity used its navigation camera to take the images combined into this full-circle view of the rover's surroundings during the 1,820th to 1,822nd Martian days, or sols, of Opportunity's surface mission (March 7 to 9, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 20.6 meters toward the northwest on Sol 1820 before beginning to take the frames in this view. Tracks from that drive recede southwestward. For scale, the distance between the parallel wheel tracks is about 1 meter (about 40 inches). The terrain in this portion of Mars' Meridiani Planum region includes dark-toned sand ripples and small exposures of lighter-toned bedrock. This view is presented as a cylindrical-perspective projection with geometric seam correction.
Dust Devil in Spirit's View Ahead on Sol 1854 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[figure removed for brevity, see original site] Left-eye view of a color stereo pair for PIA11960 [figure removed for brevity, see original site] Right-eye view of a color stereo pair for PIA11960 NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,854th Martian day, or sol, of Spirit's surface mission (March 21, 2009). This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 13.79 meters (45 feet) westward earlier on Sol 1854. West is at the center, where a dust devil is visible in the distance. North is on the right, where Husband Hill dominates the horizon; Spirit was on top of Husband Hill in September and October 2005. South is on the left, where lighter-toned rock lines the edge of the low plateau called 'Home Plate.' This view is presented as a cylindrical-perspective projection with geometric seam correction.
Digital mammography: comparative performance of color LCD and monochrome CRT displays.
Samei, Ehsan; Poolla, Ananth; Ulissey, Michael J; Lewin, John M
2007-05-01
To evaluate the comparative performance of high-fidelity liquid crystal display (LCD) and cathode ray tube (CRT) devices for mammography applications, and to assess the impact of LCD viewing angle on detection accuracy. Ninety 1 k x 1 k images were selected from a database of digital mammograms: 30 without any abnormality present, 30 with subtle masses, and 30 with subtle microcalcifications. The images were used with waived informed consent, Health Insurance Portability and Accountability Act compliance, and Institutional Review Board approval. With postprocessing presentation identical to that of the commercial mammography system used, 1 k x 1 k sections of images were viewed on a monochrome CRT and a color LCD in native grayscale, and with a grayscale representative of images viewed from a 30-degree or 50-degree off-normal viewing angle. Randomized images were independently scored by four experienced breast radiologists for the presence of lesions using a 0-100 grading scale. To compare the diagnostic performance of the display modes, observer scores were analyzed using receiver operating characteristic (ROC) analysis and analysis of variance. For masses and microcalcifications, the detection rate in terms of the area under the ROC curve (A(z)) showed a 2% increase and a 4% decrease from CRT to LCD, respectively. However, the differences were not statistically significant (P > .05). The viewing angle data showed better microcalcification detection but lower mass detection at the 30-degree viewing orientation. The overall results varied notably from observer to observer, yielding no statistically discernible trends across all observers, suggesting that within the 0-50-degree viewing angle range and in a controlled observer experiment, the variation in the contrast response of the LCD has little or no impact on the detection of mammographic lesions.
Although CRTs and LCDs differ in terms of angular response, resolution, noise, and color, these characteristics seem to have little influence on the detection of mammographic lesions. The results suggest comparable performance of the two devices in clinical applications.
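The area under the ROC curve used to compare the displays can be computed nonparametrically from 0-100 observer scores via the Mann-Whitney formulation; a minimal sketch (the study itself may have used a fitted binormal model instead):

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """Empirical area under the ROC curve (Az) from confidence scores.
    scores_pos: scores on lesion-present images; scores_neg: on normals.
    Equivalent to the Mann-Whitney U statistic; ties count half."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

An Az of 0.5 corresponds to chance performance and 1.0 to perfect separation, so a 2-4% change in Az between displays is small relative to inter-observer variation.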
Phoenix Lander on Mars with Surrounding Terrain, Vertical Projection
NASA Technical Reports Server (NTRS)
2008-01-01
This view is a vertical projection that combines more than 500 exposures taken by the Surface Stereo Imager camera on NASA's Mars Phoenix Lander and projects them as if looking down from above. The black circle on the spacecraft is where the camera itself is mounted on the lander, out of view in images taken by the camera. North is toward the top of the image. The height of the lander's meteorology mast, extending toward the southwest, appears exaggerated because that mast is taller than the camera mast. This view in approximately true color covers an area about 30 meters by 30 meters (about 100 feet by 100 feet). The landing site is at 68.22 degrees north latitude, 234.25 degrees east longitude on Mars. The ground surface around the lander has polygonal patterning similar to patterns in permafrost areas on Earth. This view comprises more than 100 different Surface Stereo Imager pointings, with images taken through three different filters at each pointing. The images were taken throughout the period from the 13th Martian day, or sol, after landing to the 47th sol (June 5 through July 12, 2008). The lander's Robotic Arm is cut off in this mosaic view because component images were taken when the arm was out of the frame. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
NASA Technical Reports Server (NTRS)
Graff, Paige Valderrama; Baker, Marshalyn (Editor); Graff, Trevor (Editor); Lindgren, Charlie (Editor); Mailhot, Michele (Editor); McCollum, Tim (Editor); Runco, Susan (Editor); Stefanov, William (Editor); Willis, Kim (Editor)
2010-01-01
Scientists from the Image Science and Analysis Laboratory (ISAL) at NASA's Johnson Space Center (JSC) work with astronauts onboard the International Space Station (ISS) who take images of Earth. Astronaut photographs, sometimes referred to as Crew Earth Observations, are taken using hand-held digital cameras onboard the ISS. These digital images allow scientists to study our Earth from the unique perspective of space. Astronauts have taken images of Earth since the 1960s. There is a database of over 900,000 astronaut photographs available at http://eol.jsc.nasa.gov . Images are requested by ISAL scientists at JSC and astronauts in space personally frame and acquire them from the Destiny Laboratory or other windows in the ISS. By having astronauts take images, they can specifically frame them according to a given request and need. For example, they can choose to use different lenses to vary the amount of area (field of view) an image will cover. Images can be taken at different times of the day which allows different lighting conditions to bring out or highlight certain features. The viewing angle at which an image is acquired can also be varied to show the same area from different perspectives. Pointing the camera straight down gives you a nadir shot. Pointing the camera at an angle to get a view across an area would be considered an oblique shot. Being able to change these variables makes astronaut photographs a unique and useful data set. Astronaut photographs are taken from the ISS from altitudes of 300 - 400 km (185 to 250 miles). One of the current cameras being used, the Nikon D3X digital camera, can take images using a 50, 100, 250, 400 or 800mm lens. These different lenses allow for a wider or narrower field of view. The higher the focal length (800mm for example) the narrower the field of view (less area will be covered). Higher focal lengths also show greater detail of the area on the surface being imaged. 
There are four major systems or spheres of Earth: the Atmosphere, Biosphere, Hydrosphere, and Litho/Geosphere.
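The relationship between focal length and ground coverage described above follows a simple scale relation; a sketch assuming a full-frame (36 mm wide) sensor, as on the Nikon D3X:

```python
def ground_coverage_km(altitude_km, focal_length_mm, sensor_width_mm=36.0):
    """Approximate ground swath width of a nadir photograph using the
    thin-lens scale relation: coverage = altitude * sensor_width / focal.
    (Full-frame 36 mm sensor assumed; oblique shots cover more ground.)"""
    return altitude_km * sensor_width_mm / focal_length_mm

# From 400 km altitude, a 50 mm lens covers roughly 288 km of ground,
# while an 800 mm lens covers only about 18 km -- illustrating why a
# higher focal length gives a narrower field of view with more detail.
```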
Macy, Jonathan T; Chassin, Laurie; Presson, Clark C; Yeung, Ellen
2016-01-01
To test the effect of exposure to the US Food and Drug Administration's proposed graphic images with text warning statements for cigarette packages on implicit and explicit attitudes towards smoking. A two-session web-based study was conducted with 2192 young adults 18-25-years-old. During session one, demographics, smoking behaviour, and baseline implicit and explicit attitudes were assessed. Session two, completed on average 18 days later, contained random assignment to viewing one of three sets of cigarette packages, graphic images with text warnings, text warnings only, or current US Surgeon General's text warnings. Participants then completed post-exposure measures of implicit and explicit attitudes. ANCOVAs tested the effect of condition on the outcomes, controlling for baseline attitudes. Smokers who viewed packages with graphic images plus text warnings demonstrated more negative implicit attitudes compared to smokers in the other conditions (p = .004). For the entire sample, explicit attitudes were more negative for those who viewed graphic images plus text warnings compared to those who viewed current US Surgeon General's text warnings (p = .014), but there was no difference compared to those who viewed text-only warnings. Graphic health warnings on cigarette packages can influence young adult smokers' implicit attitudes towards smoking.
Examining the effect of task on viewing behavior in videos using saliency maps
NASA Astrophysics Data System (ADS)
Alers, Hani; Redi, Judith A.; Heynderickx, Ingrid
2012-03-01
Research has shown that when viewing still images, people will look at these images in a different manner if instructed to evaluate their quality. They will tend to focus less on the main features of the image and, instead, scan the entire image area looking for clues to its level of quality. It is questionable, however, whether this finding can be extended to videos, considering their dynamic nature. One can argue that when watching a video the viewer will always focus on the dynamically changing features of the video regardless of the given task. To test whether this is true, an experiment was conducted in which half of the participants viewed videos with the task of quality evaluation while the other half were simply told to watch the videos as if they were watching a movie on TV or a video downloaded from the internet. The videos contained content that was degraded with compression artifacts over a wide range of quality. An eye-tracking device was used to record the viewing behavior in both conditions. By comparing the behavior during each task, it was possible to observe a systematic difference in viewing behavior that correlated with the quality of the videos.
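Eye-tracking fixations are commonly turned into saliency maps for such comparisons by accumulating a Gaussian at each fixation point; a minimal sketch (the spread parameter is illustrative, not the value used in the experiment):

```python
import numpy as np

def fixation_saliency_map(fixations, shape, sigma=25.0):
    """Build a normalized saliency (fixation-density) map from a list of
    (x, y) fixation points by summing Gaussians, a common way to compare
    viewing behavior between task conditions."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    sal = np.zeros(shape)
    for (x, y) in fixations:
        sal += np.exp(-((xs - x)**2 + (ys - y)**2) / (2 * sigma**2))
    if sal.max() > 0:
        sal /= sal.max()            # normalize to [0, 1]
    return sal
```

Maps built per condition (free viewing vs. quality evaluation) can then be compared, for example by correlating them frame by frame.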
High Density Aerial Image Matching: State-of-the-Art and Future Prospects
NASA Astrophysics Data System (ADS)
Haala, N.; Cavegn, S.
2016-06-01
Ongoing innovations in matching algorithms are continuously improving the quality of geometric surface representations generated automatically from aerial images. This development motivated the launch of the joint ISPRS/EuroSDR project "Benchmark on High Density Aerial Image Matching", which aims at the evaluation of photogrammetric 3D data capture in view of the current developments in dense multi-view stereo-image matching. Originally, the test aimed at image-based DSM computation from conventional aerial image flights for different land-use and image block configurations. The second phase then put an additional focus on high-quality, high-resolution 3D geometric data capture in complex urban areas. This includes both the extension of the test scenario to oblique aerial image flights and the generation of filtered point clouds as additional output of the respective multi-view reconstruction. The paper uses the preliminary outcomes of the benchmark to demonstrate the state of the art in airborne image matching, with a special focus on high-quality geometric data capture in urban scenarios.
Elson, D S; Jo, J A
2007-01-01
We report a side-viewing fibre-based endoscope that is compatible with intravascular imaging and fluorescence lifetime imaging microscopy (FLIM). The instrument has been validated through testing with fluorescent dyes and collagen and elastin powders, using the Laguerre expansion deconvolution technique to calculate the fluorescence lifetimes. The instrument has also been tested on freshly excised unstained animal vascular tissues. PMID:19503759
6. VIEW LOOKING NORTHWEST FROM THE IMAGE LEFT TO THE ...
6. VIEW LOOKING NORTHWEST FROM THE IMAGE LEFT TO THE IMAGE RIGHT IS THE CHARCOAL HOUSE, THE RETORT SHED IN THE BACKGROUND, THE MILL ANNEX, THE MACHINE SHOP, AND THE ELECTRIC MOTOR ROOM. THE MILL BUILDING IS IN THE BACKGROUND CENTER RIGHT AND ONE OF THE ORE DELIVERY TRESTLES EXTENDING FROM THE MILL BUILDING TO RIGHT IMAGE EDGE. - Standard Gold Mill, East of Bodie Creek, Northeast of Bodie, Bodie, Mono County, CA
2016-07-12
This color view from NASA's Juno spacecraft is made from some of the first images taken by JunoCam after the spacecraft entered orbit around Jupiter on July 5th (UTC). The view shows that JunoCam survived its first pass through Jupiter's extreme radiation environment, and is ready to collect images of the giant planet as Juno begins its mission. The image was taken on July 10, 2016 at 5:30 UTC, when the spacecraft was 2.7 million miles (4.3 million kilometers) from Jupiter on the outbound leg of its initial 53.5-day capture orbit. The image shows atmospheric features on Jupiter, including the Great Red Spot, and three of Jupiter's four largest moons. JunoCam will continue to image Jupiter during Juno's capture orbits. The first high-resolution images of the planet will be taken on August 27 when the Juno spacecraft makes its next close pass to Jupiter. http://photojournal.jpl.nasa.gov/catalog/PIA20707
Super long viewing distance light homogeneous emitting three-dimensional display
NASA Astrophysics Data System (ADS)
Liao, Hongen
2015-04-01
Three-dimensional (3D) display technology has continuously attracted public attention with the progress of today's 3D television and mature display technologies. The primary characteristics of conventional glasses-free autostereoscopic displays, such as spatial resolution, image depth, and viewing angle, are often limited due to the use of optical lenses or optical gratings. We present a 3D display using a MEMS-scanning-mechanism-based light homogeneous emitting (LHE) approach and demonstrate that the display can directly generate an autostereoscopic 3D image without the need for optical lenses or gratings. The generated 3D image is free of aberration and has high-definition spatial resolution, making this the first display to exhibit animated 3D images with an image depth of six meters. Our LHE 3D display approach can be used to build a natural flat-panel 3D display with a super long viewing distance and real-time image update.
Image quality improvement in MDCT cardiac imaging via SMART-RECON method
NASA Astrophysics Data System (ADS)
Li, Yinsheng; Cao, Ximiao; Xing, Zhanfeng; Sun, Xuguang; Hsieh, Jiang; Chen, Guang-Hong
2017-03-01
Coronary CT angiography (CCTA) is a challenging imaging task, currently limited by the achievable temporal resolution of modern Multi-Detector CT (MDCT) scanners. In this paper, the recently proposed SMART-RECON method is applied to MDCT-based CCTA imaging to improve image quality without any prior knowledge of cardiac motion. After prospective ECG-gated data acquisition over a short-scan angular span, the acquired data were sorted into several view-angle sub-sectors, each corresponding to one quarter of the short-scan angular range. Information about the cardiac motion was thus encoded into the data in each view-angle sub-sector. The SMART-RECON algorithm was then applied to jointly reconstruct several image volumes, each of which is temporally consistent with the data acquired in the corresponding view-angle sub-sector. Extensive numerical simulations were performed to validate the proposed technique and investigate its performance dependence.
Retinal projection type super multi-view head-mounted display
NASA Astrophysics Data System (ADS)
Takahashi, Hideya; Ito, Yutaka; Nakata, Seigo; Yamada, Kenji
2014-02-01
We propose a retinal projection type super multi-view head-mounted display (HMD). The smooth motion parallax provided by the super multi-view technique enables precise superposition of virtual 3D images on the real scene. Moreover, if the viewer focuses his or her eyes on the displayed 3D image, the stimulus for the accommodation of the human eye is produced naturally. Therefore, although the proposed HMD is monocular, it provides observers with natural 3D images. The proposed HMD consists of an image projection optical system and a holographic optical element (HOE). The HOE is used as a combiner and also works as a condenser lens to implement the Maxwellian view. Parallax images are projected onto the HOE, converged on the pupil, and then projected onto the retina. To verify the effectiveness of the proposed HMD, we constructed a prototype. In the prototype, the number of parallax images and the number of convergent points on the pupil is three. The distance between adjacent convergent points is 2 mm. We displayed virtual images at distances from 20 cm to 200 cm in front of the pupil and confirmed the accommodation response. This paper describes the principle of the proposed HMD and presents the experimental results.
Durkin, Sarah J; Paxton, Susan J
2002-11-01
Predictors of change in body satisfaction, depressed mood, anxiety and anger were examined following exposure to idealized female advertising images in Grade 7 and Grade 10 girls. Stable body dissatisfaction, physical appearance comparison tendency, internalization of the thin ideal, self-esteem, depression, identity confusion and body mass index (BMI) were assessed. One week later, participants viewed magazine images, before and after which they completed assessments of state body satisfaction, state depression, state anxiety and state anger. Participants were randomly allocated to view either images of idealized females (experimental condition) or fashion accessories (control condition). For both grades, there was a significant decrease in state body satisfaction and a significant increase in state depression attributable to viewing the female images. In Grade 7 girls in the experimental condition, decrease in state body satisfaction was predicted by stable body dissatisfaction and BMI, while significant predictors of decreases in the measures of negative affect included internalization of the thin ideal and appearance comparison. In Grade 10 girls, reduction in state body satisfaction and increase in state depression were predicted by internalization of the thin ideal, appearance comparison and stable body dissatisfaction. These findings indicate the importance of individual differences in short-term reactions to viewing idealized media images. Copyright 2002 Elsevier Science Inc.
3D GeoWall Analysis System for Shuttle External Tank Foreign Object Debris Events
NASA Technical Reports Server (NTRS)
Brown, Richard; Navard, Andrew; Spruce, Joseph
2010-01-01
An analytical, advanced imaging method has been developed for the initial monitoring and identification of foam debris and similar anomalies that occur post-launch on the space shuttle's external tank (ET). Remote sensing technologies have been used to perform image enhancement and analysis on high-resolution, true-color images collected with the DCS 760 Kodak digital camera located in the right umbilical well of the space shuttle. Improvements to the camera, using filters, have added sharpness/definition to the image sets; however, image review/analysis of the ET has been limited by the fact that the images acquired by umbilical cameras during launch are two-dimensional, and are usually nonreferenceable between frames due to rotation and translation of the ET as it falls away from the space shuttle. Use of stereo pairs of these images can provide strong visual indicators that immediately portray depth perception of damaged areas or movement of fragments between frames that is not perceivable in two-dimensional images. A stereoscopic image visualization system has been developed to allow 3D depth perception of stereo-aligned image pairs taken from in-flight umbilical and handheld digital shuttle cameras. This new system has been developed to augment and optimize existing 2D monitoring capabilities. Using this system, candidate sequential image pairs are identified for transformation into stereo viewing pairs. Image orientation is corrected using control points (similar points) between frames to place the two images in proper X-Y viewing perspective. The images are then imported into the WallView stereo viewing software package. The collected control points are used to generate a transformation equation that is used to re-project one image and effectively co-register it to the other image. The co-registered, oriented image pairs are imported into a WallView image set and are used as a 3D stereo analysis slide show.
Multiple sequential image pairs can be used to allow forensic review of temporal phenomena between pairs. The observer, while wearing linear polarized glasses, is able to review image pairs in passive 3D stereo.
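The control-point workflow described above (collect matched points between frames, fit a transformation equation, re-project one image to co-register it with the other) can be sketched with a least-squares affine fit. This is a minimal illustration only; the function names and the synthetic translation example are assumptions, not part of the NASA system.

```python
import numpy as np

def fit_affine(src, dst):
    """Fit a 2D affine transform mapping src -> dst control points.

    src, dst: (N, 2) arrays of corresponding point coordinates (N >= 3).
    Returns a 2x3 matrix A such that dst ~= A @ [x, y, 1].
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    ones = np.ones((src.shape[0], 1))
    X = np.hstack([src, ones])           # (N, 3) homogeneous source points
    # Solve X @ A.T ~= dst in the least-squares sense
    A_T, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A_T.T                         # (2, 3) affine matrix

def apply_affine(A, pts):
    """Re-project (N, 2) points through the fitted affine transform."""
    pts = np.asarray(pts, float)
    ones = np.ones((pts.shape[0], 1))
    return np.hstack([pts, ones]) @ A.T

# Control points related by a pure shift of (5, -2): the fit recovers it.
src = np.array([[0, 0], [10, 0], [0, 10], [10, 10]])
dst = src + np.array([5, -2])
A = fit_affine(src, dst)
print(np.allclose(apply_affine(A, src), dst))  # True
```

With more than three point pairs the least-squares solve averages out small point-picking errors, which is why several control points per frame pair are collected in practice.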
NASA Technical Reports Server (NTRS)
1999-01-01
This video gives a brief history of the Jet Propulsion Laboratory, current missions, and what the future may hold. Scenes include various planets in the solar system, robotic exploration of space, discussions on the Hubble Space Telescope, the source of life, and solar winds. This video was narrated by Jodie Foster. Animations include: close-up image of the Moon; close-up images of the surface of Mars; robotic exploration of Mars; the first mapping assignment of Mars; animated views of Jupiter; animated views of Saturn; and views of a Giant Storm on Neptune called the Great Dark Spot.
STEREO's Extreme UltraViolet Imager (EUVI)
NASA Technical Reports Server (NTRS)
2007-01-01
At a pixel resolution of 2048x2048, the STEREO EUVI instrument provides views of the Sun in ultraviolet light that rival the full-disk views of SOHO/EIT. This image is through the 171 Angstrom (ultraviolet) filter, which is characteristic of iron ions (missing eight and nine electrons) at 1 million degrees. There is a short data gap in the latter half of the movie that creates a freeze and then a jump in the data view. This is a movie of the Sun in 171 Angstrom ultraviolet light. The time frame is late January 2007.
Image Sciences Division activities in bldg 8 and 424 for presentation
1994-01-25
Views of Image Sciences Division activities in bldg 8 and 424 for use in presentation by George Abbey, Deputy Center Director. Views include Taft Broadcasting employee Dexter Herbert in television editing suite in bldg 8 (26624); RMS Photographic Services employee Kelly St. Germaine at IAMS viewing station in the lobby of bldg 8 (26625); RMS employee Irene Jenkins standing in front of automated files used for negative storage in bldg 424 (26626); RMS employee Irma Rodriguez at barcoding and checkout station in bldg 424 (26627).
Xu, Q; Yang, D; Tan, J; Anastasio, M
2012-06-01
To improve image quality and reduce imaging dose in CBCT for radiation therapy applications, and to realize near real-time image reconstruction based on use of a fast-convergence iterative algorithm accelerated by multiple GPUs. An iterative image reconstruction that sought to minimize a weighted least squares cost function that employed total variation (TV) regularization was employed to mitigate projection data incompleteness and noise. To achieve rapid 3D image reconstruction (< 1 min), a highly optimized multiple-GPU implementation of the algorithm was developed. The convergence rate and reconstruction accuracy were evaluated using a modified 3D Shepp-Logan digital phantom and a Catphan-600 physical phantom. The reconstructed images were compared with the clinical FDK reconstruction results. Digital phantom studies showed that only 15 iterations and 60 iterations are needed to achieve algorithm convergence for the 360-view and 60-view cases, respectively. The RMSE was reduced to 10^-4 and 10^-2, respectively, by using 15 iterations for each case. Our algorithm required 5.4 s to complete one iteration for the 60-view case using one Tesla C2075 GPU. The few-view study indicated that our iterative algorithm has great potential to reduce the imaging dose and preserve good image quality. For the physical Catphan studies, the images obtained from the iterative algorithm possessed better spatial resolution and higher SNRs than those obtained by use of a clinical FDK reconstruction algorithm. We have developed a fast-convergence iterative algorithm for CBCT image reconstruction. The developed algorithm yielded images with better spatial resolution and higher SNR than those produced by a commercial FDK tool. In addition, from the few-view study, the iterative algorithm has shown great potential for significantly reducing imaging dose.
We expect that the developed reconstruction approach will facilitate applications including IGART and patient daily CBCT-based treatment localization. © 2012 American Association of Physicists in Medicine.
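The cost function described above, a weighted least-squares data term plus total variation regularization minimized iteratively, can be illustrated on a toy 1D problem. This is a plain gradient-descent sketch with made-up dimensions and parameters, not the authors' GPU implementation or their specific solver.

```python
import numpy as np

def tv_grad(x, eps=1e-8):
    """Gradient of the smoothed 1D total variation sum_i sqrt((x[i+1]-x[i])^2 + eps)."""
    d = np.diff(x)
    s = d / np.sqrt(d**2 + eps)
    g = np.zeros_like(x)
    g[:-1] -= s
    g[1:] += s
    return g

def wls_tv_reconstruct(A, b, w, lam=0.02, step=1e-3, iters=2000):
    """Minimize sum_i w_i (A x - b)_i^2 + lam * TV(x) by gradient descent.

    A toy 1D stand-in for the weighted least-squares + TV cost used in
    iterative CBCT reconstruction (all names and parameters illustrative).
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        r = A @ x - b
        grad = 2 * A.T @ (w * r) + lam * tv_grad(x)
        x -= step * grad
    return x

rng = np.random.default_rng(0)
x_true = np.zeros(40); x_true[10:25] = 1.0        # piecewise-constant "image"
A = rng.standard_normal((60, 40))                  # over-determined "projections"
b = A @ x_true + 0.01 * rng.standard_normal(60)    # noisy measurements
w = np.ones(60)                                    # uniform statistical weights
x_hat = wls_tv_reconstruct(A, b, w)
print(float(np.max(np.abs(x_hat - x_true))))       # small reconstruction error
```

The TV term is what favors piecewise-constant solutions and suppresses noise in the few-view regime; the real algorithm distributes the matched forward/backprojection work across GPUs rather than forming an explicit matrix.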
Walkowski, Slawomir; Lundin, Mikael; Szymas, Janusz; Lundin, Johan
2015-01-01
The way of viewing whole slide images (WSI) can be tracked and analyzed. In particular, it can be useful to learn how medical students view WSIs during exams and how their viewing behavior is correlated with the correctness of the answers they give. We used a software-based view path tracking method that enabled gathering data about the viewing behavior of multiple simultaneous WSI users. This approach was implemented and applied during two practical exams in oral pathology in 2012 (88 students) and 2013 (91 students), which were based on questions with attached WSIs. Gathered data were visualized and analyzed in multiple ways. As a part of extended analysis, we tried to use machine learning approaches to predict the correctness of students' answers based on how they viewed WSIs. We compared the results of analyses for years 2012 and 2013 - done for a single question, for student groups, and for a set of questions. The overall patterns were generally consistent across the two exams. Moreover, viewing behavior data appeared to have certain potential for predicting answers' correctness, and some outcomes of machine learning approaches were in the right direction. However, general prediction results were not satisfactory in terms of precision and recall. Our work confirmed that the view path tracking method is useful for discovering the viewing behavior of students analyzing WSIs. It provided multiple useful insights in this area, and the general results of our analyses were consistent across the two exams. On the other hand, predicting answers' correctness appeared to be a difficult task - students' answers seem to be often unpredictable.
ERIC Educational Resources Information Center
Hofer, Mark; Swan, Kathleen Owings
2005-01-01
With the importance of imagery in our culture and the increasing access to both digital images and the tools used to manipulate them, it is important that social studies teacher educators prepare preservice teachers to provide their students with opportunities to develop a critical lens through which to view images. As we strive to encourage the…
Critical Viewing and the Significance of the Emotional Response.
ERIC Educational Resources Information Center
Rood, Carrie
Within the scholarly debate about the value of visual literacy is the belief that visual literacy bestows the skill of critical viewing, or conscious appreciation of artistry along with the ability to see through manipulative uses and ideological implications of visual images. Critical thinking is commonly viewed as argument skills, cognitive…
NASA Technical Reports Server (NTRS)
2003-01-01
Dark smoke from oil fires extends for about 60 kilometers south of Iraq's capital city of Baghdad in these images acquired by the Multi-angle Imaging SpectroRadiometer (MISR) on April 2, 2003. The thick, almost black smoke is apparent near image center and contains chemical and particulate components hazardous to human health and the environment. The top panel is from MISR's vertical-viewing (nadir) camera. Vegetated areas appear red here because this display is constructed using near-infrared, red and blue band data, displayed as red, green and blue, respectively, to produce a false-color image. The bottom panel is a combination of two camera views of the same area and is a 3-D stereo anaglyph in which red band nadir camera data are displayed as red, and red band data from the 60-degree backward-viewing camera are displayed as green and blue. Both panels are oriented with north to the left in order to facilitate stereo viewing. Viewing the 3-D anaglyph with red/blue glasses (with the red filter placed over the left eye and the blue filter over the right) makes it possible to see the rising smoke against the surface terrain. This technique helps to distinguish features in the atmosphere from those on the surface. In addition to the smoke, several high, thin cirrus clouds (barely visible in the nadir view) are readily observed using the stereo image. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17489. The panels cover an area of about 187 kilometers x 123 kilometers, and use data from blocks 63 to 65 within World Reference System-2 path 168. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC.
The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong
2005-01-01
Two efficient workflows are developed for the reconstruction of a 3D full-color building model. One uses a point-wise sensing device to sample an unknown object densely and attaches color textures from a digital camera separately. The other uses an image-based approach to reconstruct the model with color texture attached automatically. The point-wise sensing device reconstructs the CAD model using a modified best-view algorithm that collects the maximum number of construction faces in one view. The partial views of the point cloud data are then glued together using a common face between two consecutive views. Typical overlapping-mesh removal and coarsening procedures are adapted to generate a unified 3D mesh shell structure. A post-processing step is then taken to combine the digital image content from a separate camera with the 3D mesh shell surfaces. An indirect uv-mapping procedure first divides the model faces into groups within which every face shares the same normal direction. The corresponding images of the faces in a group are then adjusted using the uv map as guidance. The final assembled image is then glued back to the 3D mesh to present a fully colored building model. The result is a virtual building that reflects the true dimensions and surface material conditions of a real-world campus building. The image-based modeling procedure uses a commercial photogrammetry package to reconstruct the 3D model. A novel view-planning algorithm is developed to guide the photo-taking procedure. This algorithm generates a minimum set of view angles. The set of pictures taken at these view angles guarantees that each model face shows up in at least two of the pictures and no more than three. The 3D model can then be reconstructed with a minimum amount of labor spent correlating picture pairs. The finished model is compared with the original object in both topological and dimensional aspects.
All the test cases show the exact same topology and a reasonably low dimensional error ratio, again proving the applicability of the algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Guang-Hong, E-mail: gchen7@wisc.edu; Li, Yinsheng
Purpose: In x-ray computed tomography (CT), a violation of the Tuy data sufficiency condition leads to limited-view artifacts. In some applications, it is desirable to use data corresponding to a narrow temporal window to reconstruct images with reduced temporal-average artifacts. However, the need to reduce temporal-average artifacts in practice may result in a violation of the Tuy condition and thus undesirable limited-view artifacts. In this paper, the authors present a new iterative reconstruction method, synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON), to eliminate limited-view artifacts using data acquired within an ultranarrow temporal window that severely violates the Tuy condition. Methods: In time-resolved contrast enhanced CT acquisitions, image contrast dynamically changes during data acquisition. Each image reconstructed from data acquired in a given temporal window represents one time frame and can be denoted as an image vector. Conventionally, each individual time frame is reconstructed independently. In this paper, all image frames are grouped into a spatial–temporal image matrix and are reconstructed together. Rather than the spatial and/or temporal smoothing regularizers commonly used in iterative image reconstruction, the nuclear norm of the spatial–temporal image matrix is used in SMART-RECON to regularize the reconstruction of all image time frames. This regularizer exploits the low-dimensional structure of the spatial–temporal image matrix to mitigate limited-view artifacts when an ultranarrow temporal window is desired in some applications to reduce temporal-average artifacts. Both numerical simulations in two dimensional image slices with known ground truth and in vivo human subject data acquired in a contrast enhanced cone beam CT exam have been used to validate the proposed SMART-RECON algorithm and to demonstrate the initial performance of the algorithm.
Reconstruction errors and temporal fidelity of the reconstructed images were quantified using the relative root mean square error (rRMSE) and the universal quality index (UQI) in numerical simulations. The performance of the SMART-RECON algorithm was compared with that of the prior image constrained compressed sensing (PICCS) reconstruction quantitatively in simulations and qualitatively in the human subject exam. Results: In numerical simulations, the 240° short scan angular span was divided into four consecutive 60° angular subsectors. SMART-RECON enables four high temporal fidelity images without limited-view artifacts. The average rRMSE is 16% and the UQIs are 0.96 and 0.95 for the two local regions of interest, respectively. In contrast, the corresponding average rRMSE and UQIs are 25%, 0.78, and 0.81, respectively, for the PICCS reconstruction. Note that only one filtered backprojection image can be reconstructed from the same data set, with an average rRMSE of 45% and UQIs of 0.71 and 0.79, respectively, to benchmark reconstruction accuracies. For in vivo contrast enhanced cone beam CT data acquired from a short scan angular span of 200°, three 66° angular subsectors were used in SMART-RECON. The results demonstrated clear contrast differences in three SMART-RECON reconstructed image volumes without limited-view artifacts. In contrast, for the same angular sectors, PICCS cannot reconstruct images without limited-view artifacts and with clear contrast difference in the three reconstructed image volumes. Conclusions: In time-resolved CT, the proposed SMART-RECON method provides a new way to eliminate limited-view artifacts using data acquired in an ultranarrow temporal window, which corresponds to approximately 60° angular subsectors.
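Nuclear-norm regularization of a spatial–temporal image matrix, as described above, is typically enforced with singular value thresholding, the proximal operator of the nuclear norm, inside an iterative solver. A minimal sketch on a synthetic low-rank-plus-noise matrix follows; the shapes and threshold are illustrative, and this is not the SMART-RECON solver itself.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox of tau * nuclear norm.

    Shrinks each singular value of M toward zero by tau, promoting a
    low-rank solution (the core step in nuclear-norm-regularized
    reconstruction; a sketch, not the published algorithm).
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# A rank-1 "spatial-temporal" matrix (one spatial pattern shared by four
# time frames) plus noise: thresholding suppresses the noise subspace.
rng = np.random.default_rng(1)
u = rng.standard_normal(50)            # stand-in spatial pattern
v = rng.standard_normal(4)             # stand-in temporal weights
M = np.outer(u, v) + 0.1 * rng.standard_normal((50, 4))
M_low = svt(M, tau=1.5)
print(np.linalg.matrix_rank(M_low, tol=1e-6))
```

Because the dominant singular value of the true structure is far larger than those contributed by noise, thresholding keeps the shared structure while discarding components that only fit noise or limited-view streaks.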
Speirs, Calandra; Belchev, Zorry; Fernandez, Amanda; Korol, Stephanie; Sears, Christopher
2017-10-30
Two experiments examined age differences in the effect of a sad mood induction (MI) on attention to emotional images. Younger and older adults viewed sets of four images while their eye gaze was tracked throughout an 8-s presentation. Images were viewed before and after a sad MI to assess the effect of a sad mood on attention to positive and negative scenes. Younger and older adults exhibited positively biased attention after the sad MI, significantly increasing their attention to positive images, with no evidence of an age difference in either experiment. A test of participants' recognition memory for the images indicated that the sad MI reduced memory accuracy for sad images for younger and older adults. The results suggest that heightened attention to positive images following a sad MI reflects an affect regulation strategy related to mood repair. The implications for theories of the positivity effect are discussed.
Wide field-of-view, multi-region two-photon imaging of neuronal activity in the mammalian brain
Stirman, Jeffrey N.; Smith, Ikuko T.; Kudenov, Michael W.; Smith, Spencer L.
2016-01-01
Two-photon calcium imaging provides an optical readout of neuronal activity in populations of neurons with subcellular resolution. However, conventional two-photon imaging systems are limited in their field of view to ~1 mm², precluding the visualization of multiple cortical areas simultaneously. Here, we demonstrate a two-photon microscope with an expanded field of view (>9.5 mm²) for rapidly reconfigurable simultaneous scanning of widely separated populations of neurons. We custom designed and assembled an optimized scan engine, objective, and two independently positionable, temporally multiplexed excitation pathways. We used this new microscope to measure activity correlations between two cortical visual areas in mice during visual processing. PMID:27347754
Image processing for cryogenic transmission electron microscopy of symmetry-mismatched complexes.
Huiskonen, Juha T
2018-02-08
Cryogenic transmission electron microscopy (cryo-TEM) is a high-resolution biological imaging method, whereby biological samples, such as purified proteins, macromolecular complexes, viral particles, organelles and cells, are embedded in vitreous ice, preserving their native structures. Due to the sensitivity of biological materials to the electron beam of the microscope, only relatively low electron doses can be applied during imaging. As a result, the signal arising from the structure of interest is overpowered by noise in the images. To increase the signal-to-noise ratio, different image processing-based strategies that aim at coherent averaging of signal have been devised. In such strategies, images are generally assumed to arise from multiple identical copies of the structure. Prior to averaging, the images must be grouped according to the view of the structure they represent, and images representing the same view must be simultaneously aligned relative to each other. For computational reconstruction of the three-dimensional structure, images must contain different views of the original structure. Structures with multiple symmetry-related substructures are advantageous in averaging approaches because each image provides multiple views of the substructures. However, the symmetry assumption may be valid for only parts of the structure, leading to incoherent averaging of the other parts. Several image processing approaches have been adapted to tackle symmetry-mismatched substructures with increasing success. Such structures are ubiquitous in nature, and further computational method development is needed to understand their biological functions. ©2018 The Author(s).
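The coherent-averaging principle above, combining many aligned, identical copies of a structure so that signal adds while noise cancels, can be demonstrated numerically: averaging N images reduces the noise standard deviation by roughly sqrt(N). A toy sketch with a synthetic 1D "structure" standing in for an image (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
signal = np.sin(np.linspace(0, 2 * np.pi, 256))   # stand-in for one noise-free view
sigma = 5.0                                        # noise dominates any single image
n_images = 400

# Simulate n_images noisy, pre-aligned copies of the same structure
stack = signal + sigma * rng.standard_normal((n_images, signal.size))
average = stack.mean(axis=0)

# Residual noise in the average shrinks roughly by sqrt(n_images)
noise_single = np.std(stack[0] - signal)
noise_avg = np.std(average - signal)
print(noise_single / noise_avg)   # approximately sqrt(400) = 20
```

This is why misclassified views or symmetry-mismatched substructures are damaging in practice: copies that do not actually share the assumed structure add incoherently and dilute the averaged signal rather than reinforcing it.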
Perspective View with Landsat Overlay, Salt Lake City, Utah
NASA Technical Reports Server (NTRS)
2002-01-01
Most of the population of Utah lives just west of the Wasatch Mountains in the north central part of the state. This broad east-northeastward view shows that region with the cities of Ogden, Salt Lake City, and Provo seen from left to right. The Great Salt Lake (left) and Utah Lake (right) are quite shallow and appear greenish in this enhanced natural color view. Thousands of years ago ancient Lake Bonneville covered all of the lowlands seen here. Its former shoreline is clearly seen as a wave-cut bench and/or light colored 'bathtub ring' at several places along the base of the mountain front - evidence seen from space of our ever-changing planet. This 3-D perspective view was generated using topographic data from the Shuttle Radar Topography Mission (SRTM), a Landsat 5 satellite image mosaic, and a false sky. Topographic expression is exaggerated four times. Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data matches the 30-meter (98-foot) resolution of most Landsat images and will substantially help in analyzing the large and growing Landsat image archive, managed by the U.S. Geological Survey (USGS). Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies.
It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C. Size: View width 147 kilometers (91 miles), View distance 38 kilometers (24 miles) Location: 40.7 deg. North lat., 112.0 deg. West lon. Orientation: View 19.5 deg North of East, 20 degrees below horizontal Image Data: Landsat Bands 3, 2, 1 as red, green, blue, respectively. Original Data Resolution: SRTM 1 arcsecond (30 meters or 98 feet), Thematic Mapper 30 meters (98 feet) Date Acquired: February 2000 (SRTM), 1990s (Landsat 5 image mosaic)
Three dimensional perspective view of portion of western Galapagos Islands
NASA Technical Reports Server (NTRS)
1994-01-01
This is a three dimensional perspective view of Isla Isabela in the western Galapagos Islands. It was taken by the L-band radar in HH polarization from the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar on the 40th orbit of the Shuttle Endeavour. This view was constructed by overlaying a SIR-C radar image on a U.S. Geological Survey digital elevation map. The image is centered at about 0.5 degrees south latitude and 91 degrees West longitude and covers an area of 75 km by 60 km. This SIR-C/X-SAR image of Alcedo and Sierra Negra volcanoes shows the rougher lava flows as bright features, while ash deposits and smooth Pahoehoe lava flows appear dark. The Jet Propulsion Laboratory alternative photo number is P-43938.
Prediction of Viking lander camera image quality
NASA Technical Reports Server (NTRS)
Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.
1976-01-01
Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.
A Summer View of Russia's Lena Delta and Olenek
NASA Technical Reports Server (NTRS)
2004-01-01
These views of the Russian Arctic were acquired by NASA's Multi-angle Imaging SpectroRadiometer (MISR) instrument on July 11, 2004, when the brief arctic summer had transformed the frozen tundra and the thousands of lakes, channels, and rivers of the Lena Delta into a fertile wetland, and when the usual blanket of thick snow had melted from the vast plains and taiga forests. This set of three images covers an area in the northern part of the Eastern Siberian Sakha Republic. The Olenek River wends northeast from the bottom of the images to the upper left, and the top portions of the images are dominated by the delta into which the mighty Lena River empties when it reaches the Laptev Sea. At left is a natural color image from MISR's nadir (vertical-viewing) camera, in which the rivers appear murky due to the presence of sediment, and photosynthetically-active vegetation appears green. The center image is also from MISR's nadir camera, but is a false color view in which the predominant red color is due to the brightness of vegetation at near-infrared wavelengths. The most photosynthetically active parts of this area are the Lena Delta, in the lower half of the image, and throughout the great stretch of land that curves across the Olenek River and extends northeast beyond the relatively barren ranges of the Volyoi mountains (the pale tan-colored area to the right of image center). The right-hand image is a multi-angle false-color view made from the red band data of the 60° backward, nadir, and 60° forward cameras, displayed as red, green and blue, respectively. Water appears blue in this image because sun glitter makes smooth, wet surfaces look brighter at the forward camera's view angle. Much of the landscape and many low clouds appear purple since these surfaces are both forward and backward scattering, and clouds that are further from the surface appear in a different spot for each view angle, creating a rainbow-like appearance.
However, the vegetated region that is darker green in the natural color nadir image also appears to exhibit a faint greenish hue in the multi-angle composite. A possible explanation for this subtle green effect is that the taiga forest trees (or dwarf-shrubs) are not too dense here. Since the nadir camera is more likely to observe any gaps between the trees or shrubs, and since the vegetation is not as bright (in the red band) as the underlying soil or surface, the brighter underlying surface results in an area that is relatively brighter at the nadir view angle. Accurate maps of vegetation structural units are an essential part of understanding the seasonal exchanges of energy and water at the Earth's surface, and of preserving the biodiversity in these regions. The Multiangle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82° north and 82° south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 24273. The panels cover an area of about 230 kilometers x 420 kilometers, and utilize data from blocks 30 to 34 within World Reference System-2 path 134. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Left Limb of North Pole of the Sun, March 20, 2007 (Anaglyph)
NASA Technical Reports Server (NTRS)
2007-01-01
Figure 1: Left eye view of a stereo pair. Figure 2: Right eye view of a stereo pair. Figure 1: This image was taken by the SECCHI Extreme UltraViolet Imager (EUVI) mounted on the STEREO-B spacecraft. STEREO-B is located behind the Earth, and follows the Earth in orbit around the Sun. This location enables us to view the Sun from the position of a virtual left eye in space. Figure 2: This image was taken by the SECCHI Extreme UltraViolet Imager (EUVI) mounted on the STEREO-A spacecraft. STEREO-A is located ahead of the Earth, and leads the Earth in orbit around the Sun. This location enables us to view the Sun from the position of a virtual right eye in space. NASA's Solar TErrestrial RElations Observatory (STEREO) satellites have provided the first three-dimensional images of the Sun. For the first time, scientists will be able to see structures in the Sun's atmosphere in three dimensions. The new view will greatly aid scientists' ability to understand solar physics and thereby improve space weather forecasting. This image is a composite of left and right eye color image pairs taken by the SECCHI Extreme UltraViolet Imager (EUVI) mounted on the STEREO-B and STEREO-A spacecraft. STEREO-B is located behind the Earth, and follows the Earth in orbit around the Sun. This location enables us to view the Sun from the position of a virtual left eye in space. STEREO-A is located ahead of the Earth, and leads the Earth in orbit around the Sun. This location enables us to view the Sun from the position of a virtual right eye in space. The EUVI imager is sensitive to wavelengths of light in the extreme ultraviolet portion of the spectrum.
EUVI bands at wavelengths of 304, 171 and 195 Angstroms have been mapped to the red, blue, and green visible portions of the spectrum and processed to emphasize the three-dimensional structure of the solar material. STEREO, a two-year mission launched in October 2006, will provide a unique and revolutionary view of the Sun-Earth system. The two nearly identical observatories -- one ahead of Earth in its orbit, the other trailing behind -- will trace the flow of energy and matter from the Sun to Earth. They will reveal the 3D structure of coronal mass ejections: violent eruptions of matter from the Sun that can disrupt satellites and power grids, and help us understand why they happen. STEREO will become a key addition to the fleet of space weather detection satellites by providing more accurate alerts for the arrival time of Earth-directed solar ejections with its unique side-viewing perspective. STEREO is the third mission in NASA's Solar Terrestrial Probes program within NASA's Science Mission Directorate, Washington. The Goddard Science and Exploration Directorate manages the mission, instruments, and science center. The Johns Hopkins University Applied Physics Laboratory, Laurel, Md., designed and built the spacecraft and is responsible for mission operations. The imaging and particle detecting instruments were designed and built by scientific institutions in the U.S., UK, France, Germany, Belgium, Netherlands, and Switzerland. JPL is a division of the California Institute of Technology in Pasadena.
NASA Astrophysics Data System (ADS)
Do, Dukho; Kang, Dongkyun; Ikuta, Mitsuhiro; Tearney, Guillermo J.
2016-03-01
Spectrally encoded endoscopy (SEE) is a miniature endoscopic technology that can acquire images of internal organs through a hair-thin probe. While most previously described SEE probes have been side-viewing, forward-view (FV)-SEE is advantageous in certain clinical applications as it provides more natural navigation of the probe and has the potential to provide a wider field of view. Prior implementations of FV-SEE used multiple optical elements that increase fabrication complexity and may diminish the robustness of the device. In this paper, we present a new design that uses a monolithic optical element to realize FV-SEE imaging. The optical element is a specially designed spacer, fabricated from a 500-μm glass rod, that has a mirror surface on one side and a grating stamped on its distal end. The mirror surface is used to change the incident angle on the grating to diffract the shortest wavelength of the spectrum so that it is parallel to the optical axis. Rotating the SEE optics creates a circular FV-SEE image. Custom-designed software processes FV-SEE images into circular images, which are displayed in real time. In order to demonstrate this new design, we constructed the FV-SEE optical element using a 1379 lines/mm diffraction grating. When illuminated with a source with a spectral bandwidth of 420-820 nm, the FV-SEE optical element provides 678 resolvable points per line. The imaging performance of the FV-SEE device was tested by imaging a USAF resolution target. SEE images showed that this new approach generates high quality images in the forward field with a field of view of 58°. Results from this preliminary study demonstrate that we can realize FV-SEE imaging with a simple, monolithic, miniature optical element. The characteristics of this FV-SEE configuration will facilitate the development of robust miniature endoscopes for a variety of medical imaging applications.
Detail view of the port side of the payload bay ...
Detail view of the port side of the payload bay of the Orbiter Discovery. This view shows Remote Manipulator System, Canadarm, sensors in the center of the image and a close-up view of a small segment of the orbiter's radiator panel. This photograph was taken in the Orbiter Processing Facility at the Kennedy Space Center. - Space Transportation System, Orbiter Discovery (OV-103), Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX
Investigation of Prognostic Ability of Novel Imaging Markers for Traumatic Brain Injury (TBI)
2011-10-01
Recent advances in multiview distributed video coding
NASA Astrophysics Data System (ADS)
Dufaux, Frederic; Ouaret, Mourad; Ebrahimi, Touradj
2007-04-01
We consider dense networks of surveillance cameras capturing overlapped images of the same scene from different viewing directions, such a scenario being referred to as multi-view. Data compression is paramount in such a system due to the large amount of captured data. In this paper, we propose a Multi-view Distributed Video Coding approach. It allows for low complexity / low power consumption at the encoder side, and the exploitation of inter-view correlation without communications among the cameras. We introduce a combination of temporal intra-view side information and homography inter-view side information. Simulation results show both the improvement of the side information, as well as a significant gain in terms of coding efficiency.
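The homography inter-view side information described above amounts to warping one camera's frame into a neighboring view through a 3x3 plane-induced homography. A minimal numpy sketch of that warp; the matrix values below are illustrative assumptions, not calibration data from the paper:

```python
import numpy as np

def warp_points(H, pts):
    """Map Nx2 pixel coordinates through a 3x3 homography H."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # dehomogenize

# Illustrative homography: a pure translation by (5, -3) pixels.
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
corners = np.array([[0.0, 0.0], [100.0, 50.0]])
print(warp_points(H, corners))  # each corner shifted by (5, -3)
```

In a full multi-view coder, the warped frame from a neighboring camera would serve as side information for Wyner-Ziv decoding alongside the temporal intra-view prediction.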
Real Image Visual Display System
1992-12-01
[Report front-matter fragment: list-of-figures entries for "DTI-100M autostereoscopic display," "Lenticular screen," "Lenticular screen parameters and pixel position," "General viewing of the stereoscopic couple," and "Viewing zones for lenticular ..."] The display involves using a lenticular screen for imaging. Lenticular screens are probably most familiar in the form of "3-D postcards," which consist of an
27. View looking to port from ship's centerline toward main ...
27. View looking to port from ship's centerline toward main electrical control panel, behind which is DC-AC motor-generator set. DC dynamo appears at lower right of image, waste water overflow pipe from hot well appears in upper right of image. - Ferry TICONDEROGA, Route 7, Shelburne, Chittenden County, VT
Block Adjustment and Image Matching of WORLDVIEW-3 Stereo Pairs and Accuracy Evaluation
NASA Astrophysics Data System (ADS)
Zuo, C.; Xiao, X.; Hou, Q.; Li, B.
2018-05-01
WorldView-3, a high-resolution commercial Earth observation satellite launched by DigitalGlobe, provides panchromatic imagery at 0.31 m resolution. Its positioning accuracy is better than 3.5 m CE90 without ground control, which can be used for large-scale topographic mapping. This paper presents block adjustment for WorldView-3 based on the RPC model and achieves the accuracy required for 1 : 2000 scale topographic mapping with few control points. On the basis of the stereo orientation result, two image matching algorithms were applied for DSM extraction: LQM and SGM. Finally, the accuracy of the point clouds generated by the two image matching methods was compared against reference data acquired by an airborne laser scanner. The results showed that the RPC adjustment model for WorldView-3 imagery with a small number of GCPs could satisfy the requirements of Chinese surveying and mapping regulations for 1 : 2000 scale topographic maps, and that the point cloud obtained through WorldView-3 stereo image matching had high elevation accuracy: the RMS elevation error for bare ground is 0.45 m, while for buildings the accuracy is close to 1 m.
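The elevation figures quoted in that evaluation are RMS errors against the laser-scanner reference; once DSM and reference heights are sampled at common check points, the computation is a one-liner. A small sketch (the height values are made up for illustration, not from the study):

```python
import numpy as np

def rms_error(dsm_z, ref_z):
    """Root-mean-square elevation difference between a DSM and reference heights."""
    d = np.asarray(dsm_z) - np.asarray(ref_z)
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical heights (metres) at five check points.
dsm = [101.2, 98.7, 105.1, 99.9, 102.4]
ref = [101.0, 99.0, 104.8, 100.2, 102.0]
print(round(rms_error(dsm, ref), 3))  # → 0.307
```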
Trelease, R B
1996-01-01
Advances in computer visualization and user interface technologies have enabled development of "virtual reality" programs that allow users to perceive and to interact with objects in artificial three-dimensional environments. Such technologies were used to create an image database and program for studying the human skull, a specimen that has become increasingly expensive and scarce. Stereoscopic image pairs of a museum-quality skull were digitized from multiple views. For each view, the stereo pairs were interlaced into a single, field-sequential stereoscopic picture using an image processing program. The resulting interlaced image files are organized in an interactive multimedia program. At run-time, gray-scale 3-D images are displayed on a large-screen computer monitor and observed through liquid-crystal shutter goggles. Users can then control the program and change views with a mouse and cursor to point-and-click on screen-level control words ("buttons"). For each view of the skull, an ID control button can be used to overlay pointers and captions for important structures. Pointing and clicking on "hidden buttons" overlying certain structures triggers digitized audio spoken word descriptions or mini lectures.
The Propeller Belts in Saturn A Ring
2017-01-30
This image from NASA's Cassini mission shows a region in Saturn's A ring. The level of detail is twice as high as this part of the rings has ever been seen before. The view contains many small, bright blemishes due to cosmic rays and charged particle radiation near the planet. The view shows a section of the A ring known to researchers for hosting belts of propellers -- bright, narrow, propeller-shaped disturbances in the ring produced by the gravity of unseen embedded moonlets. Several small propellers are visible in this view. These are on the order of 10 times smaller than the large, bright propellers whose orbits scientists have routinely tracked (and which are given nicknames for famous aviators). This image is a lightly processed version, with minimal enhancement, preserving all original details present in the image. The image was taken in visible light with the Cassini spacecraft wide-angle camera on Dec. 18, 2016. The view was obtained at a distance of approximately 33,000 miles (54,000 kilometers) from the rings and looks toward the unilluminated side of the rings. Image scale is about a quarter-mile (330 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA21059
Gao, Yang; Bian, Zhaoying; Huang, Jing; Zhang, Yunwan; Niu, Shanzhou; Feng, Qianjin; Chen, Wufan; Liang, Zhengrong; Ma, Jianhua
2014-06-16
To realize low-dose imaging in X-ray computed tomography (CT) examination, lowering milliampere-seconds (low-mAs) or reducing the required number of projection views (sparse-view) per rotation around the body has been widely studied as an easy and effective approach. In this study, we are focusing on low-dose CT image reconstruction from the sinograms acquired with a combined low-mAs and sparse-view protocol and propose a two-step image reconstruction strategy. Specifically, to suppress significant statistical noise in the noisy and insufficient sinograms, an adaptive sinogram restoration (ASR) method is first proposed with consideration of the statistical property of sinogram data, and then to further acquire a high-quality image, a total variation based projection onto convex sets (TV-POCS) method is adopted with a slight modification. For simplicity, the present reconstruction strategy was termed as "ASR-TV-POCS." To evaluate the present ASR-TV-POCS method, both qualitative and quantitative studies were performed on a physical phantom. Experimental results have demonstrated that the present ASR-TV-POCS method can achieve promising gains over other existing methods in terms of the noise reduction, contrast-to-noise ratio, and edge detail preservation.
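The TV-POCS half of the strategy above alternates two operations: a projection onto the set of images consistent with the measured sinogram data, and a descent step that reduces total variation. A toy 1D sketch of that alternation (numpy only; the tiny linear system stands in for the real CT projector, and this is a simplification of the published ASR-TV-POCS method, not a reimplementation):

```python
import numpy as np

def total_variation(x):
    """TV of a 1D signal: sum of absolute differences between neighbors."""
    return float(np.abs(np.diff(x)).sum())

def tv_pocs_step(x, A, b, tv_step=0.05):
    """One simplified TV-POCS iteration:
    (1) subgradient descent on total variation,
    (2) POCS: project back onto the data-consistency set {x : A x = b}."""
    g = np.sign(np.diff(x))
    grad = np.zeros_like(x)
    grad[:-1] -= g          # d(TV)/dx[i] = sign(x[i]-x[i-1]) - sign(x[i+1]-x[i])
    grad[1:] += g
    x = x - tv_step * grad
    # Minimum-norm correction restoring A x = b (underdetermined system).
    x = x + np.linalg.lstsq(A, b - A @ x, rcond=None)[0]
    return x

# Toy problem: two linear "projection" measurements of a 6-pixel signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 6))
x_true = np.array([1.0, 1.0, 1.0, 3.0, 3.0, 3.0])
b = A @ x_true
x = np.zeros(6)
for _ in range(50):
    x = tv_pocs_step(x, A, b)
print(np.allclose(A @ x, b))  # → True: data consistency holds after projection
```

The real method replaces the least-squares projection with ART-like sinogram-consistency updates and precedes everything with the adaptive sinogram restoration (ASR) step to suppress low-mAs noise.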
Martin, Elizabeth A; Karcher, Nicole R; Bartholow, Bruce D; Siegle, Greg J; Kerns, John G
2017-03-01
Both extreme levels of social anhedonia (SocAnh) and perceptual aberration/magical ideation (PerMag) are associated with risk for schizophrenia-spectrum disorders and with emotional abnormalities. Yet, the nature of any psychophysiological-measured affective abnormality, including the role of automatic/controlled processes, is unclear. We examined the late positive potential (LPP) during passive viewing (to assess automatic processing) and during cognitive reappraisal (to assess controlled processing) in three groups: SocAnh, PerMag, and controls. The SocAnh group exhibited an increased LPP when viewing negative images. Further, SocAnh exhibited greater reductions in the LPP for negative images when told to use strategies to alter negative emotion. Similar to SocAnh, PerMag exhibited an increased LPP when viewing negative images. However, PerMag also exhibited an increased LPP when viewing positive images as well as an atypical decreased LPP when increasing positive emotion. Overall, these results suggest that at-risk groups are associated with shared and unique automatic and controlled abnormalities. Copyright © 2017 Elsevier B.V. All rights reserved.
Reconstruction of initial pressure from limited view photoacoustic images using deep learning
NASA Astrophysics Data System (ADS)
Waibel, Dominik; Gröhl, Janek; Isensee, Fabian; Kirchner, Thomas; Maier-Hein, Klaus; Maier-Hein, Lena
2018-02-01
Quantification of tissue properties with photoacoustic (PA) imaging typically requires a highly accurate representation of the initial pressure distribution in tissue. Almost all PA scanners reconstruct the PA image only from a partial scan of the emitted sound waves. Especially handheld devices, which have become increasingly popular due to their versatility and ease of use, only provide limited view data because of their geometry. Owing to such limitations in hardware as well as to the acoustic attenuation in tissue, state-of-the-art reconstruction methods deliver only approximations of the initial pressure distribution. To overcome the limited view problem, we present a machine learning-based approach to the reconstruction of initial pressure from limited view PA data. Our method involves a fully convolutional deep neural network based on a U-Net-like architecture with pixel-wise regression loss on the acquired PA images. It is trained and validated on in silico data generated with Monte Carlo simulations. In an initial study we found an increase in accuracy over the state-of-the-art when reconstructing simulated linear-array scans of blood vessels.
Mulgrew, Kate E; McCulloch, Karen; Farren, Emily; Prichard, Ivanka; Lim, Megan S C
2018-03-01
We tested the effectiveness of exposure to two functionality-focused media campaigns, This Girl Can and #jointhemovement, in improving state appearance and physical functionality satisfaction, exercise intent, and protecting against exposure to idealised imagery. Across two studies, 339 (M age =24.94, SD=4.98) and 256 (M age =26.46, SD=5.50) women viewed the campaign or control video, followed by images of models who were posed or physically active, or images of landscapes. State satisfaction and exercise intent were measured at pre-test, post-video, post-images, and 1-week follow-up. Social comparison was measured at post-images. Viewing either campaign produced higher appearance satisfaction and exercise intentions than the control video. Effects were not maintained after viewing idealised imagery or at the 1-week follow-up. Further, the campaigns did not decrease social comparisons when viewing idealised imagery. Results can inform agencies about campaign effectiveness and suggest that women benefit from campaigns that feature non-idealised depictions of women exercising. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Viale, Alberto; Villa, Dario
2011-03-01
Stereoscopy has recently grown greatly in popularity, and various technologies that allow the observation of stereoscopic images and movies are spreading in theaters and homes, becoming affordable even for home users. However, there are some golden rules that users should follow to ensure better enjoyment of stereoscopic images; first of all, the viewing conditions should not be too different from the ideal ones that were assumed during the production process. To allow the user to perceive stereo depth instead of a flat image, two different views of the same scene are shown to the subject: one is seen only through the left eye and the other only through the right. The visual process merges the two images into a virtual three-dimensional scene, giving the user the perception of depth. The two images presented to the user were created, either by image synthesis or by more traditional techniques, following the rules of perspective. These rules require some boundary conditions to be made explicit, such as eye separation, field of view, parallax distance, and viewer position and orientation. In this paper we study how varying the viewer's position and orientation from the ideal ones, expressed as parameters in the image creation process, affects the correctness of the reconstruction of the three-dimensional virtual scene.
Multiview echocardiography fusion using an electromagnetic tracking system.
Punithakumar, Kumaradevan; Hareendranathan, Abhilash R; Paakkanen, Riitta; Khan, Nehan; Noga, Michelle; Boulanger, Pierre; Becher, Harald
2016-08-01
Three-dimensional ultrasound is an emerging modality for the assessment of complex cardiac anatomy and function. The advantages of this modality include lack of ionizing radiation, portability, low cost, and high temporal resolution. Major limitations include limited field-of-view, reliance on frequently limited acoustic windows, and poor signal-to-noise ratio. This study proposes a novel approach to combine multiple views into a single image using an electromagnetic tracking system in order to improve the field-of-view. The novel method has several advantages: 1) it does not rely on image information for alignment, and therefore does not require image overlap; 2) the alignment accuracy of the proposed approach is not affected by poor image quality, as it is in image registration based approaches; 3) in contrast to previous optical tracking based systems, the proposed approach does not suffer from line-of-sight limitations; and 4) it does not require any initial calibration. In this pilot project, we were able to show using a heart phantom that our method can fuse multiple echocardiographic images and improve the field-of-view. Quantitative evaluations showed that the proposed method yielded a nearly optimal alignment of image data sets in three-dimensional space. The proposed method demonstrates that an electromagnetic tracking system can be used for the fusion of multiple echocardiographic images with seamless integration of sensors into the transducer.
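The core of such tracker-based fusion is applying each probe pose reported by the electromagnetic sensor (a rotation R and translation t) to bring every view's samples into one world frame, then compounding where views overlap. A minimal numpy sketch; the poses and landmark coordinates are made-up values chosen so the two views observe the same physical point:

```python
import numpy as np

def to_world(points, R, t):
    """Transform Nx3 probe-frame points into the tracker's world frame."""
    return points @ R.T + t

# Hypothetical tracked poses for two views.
R1, t1 = np.eye(3), np.array([0.0, 0.0, 0.0])
# View 2: probe rotated 90 degrees about z and shifted along x.
R2 = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
t2 = np.array([10.0, 0.0, 0.0])

landmark_view1 = np.array([[5.0, 2.0, 1.0]])   # landmark as seen by view 1
landmark_view2 = np.array([[2.0, 5.0, 1.0]])   # same point in view 2's frame

w1 = to_world(landmark_view1, R1, t1)
w2 = to_world(landmark_view2, R2, t2)
fused = 0.5 * (w1 + w2)    # simple average compounding in the overlap region
print(np.allclose(w1, w2))  # → True: both views land on the same world point
```

Real systems add a fixed sensor-to-image calibration transform and more careful compounding (e.g., maximum or weighted averaging of voxel intensities), but the pose-driven alignment step is as above.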
Close up view of the Commander's Seat on the Flight ...
Close up view of the Commander's Seat on the Flight Deck of the Orbiter Discovery. It appears the Orbiter is in the roll out / launch pad configuration. A protective cover is over the Rotational Hand Controller to protect it during the commander's ingress. Most notable in this view are the Speed Brake/Thrust Controller in the center right in this view and the Translational Hand Controller in the center top of the view. This image was taken at Kennedy Space Center. - Space Transportation System, Orbiter Discovery (OV-103), Lyndon B. Johnson Space Center, 2101 NASA Parkway, Houston, Harris County, TX
Bebko, Genna M; Franconeri, Steven L; Ochsner, Kevin N; Chiao, Joan Y
2014-06-01
According to appraisal theories of emotion, cognitive reappraisal is a successful emotion regulation strategy because it involves cognitively changing our thoughts, which, in turn, change our emotions. However, recent evidence has challenged the importance of cognitive change and, instead, has suggested that attentional deployment may at least partly explain the emotion regulation success of cognitive reappraisal. The purpose of the current study was to examine the causal relationship between attentional deployment and emotion regulation success. We examined 2 commonly used emotion regulation strategies--cognitive reappraisal and expressive suppression-because both depend on attention but have divergent behavioral, experiential, and physiological outcomes. Participants were either instructed to regulate emotions during free-viewing (unrestricted image viewing) or gaze-controlled (restricted image viewing) conditions and to self-report negative emotional experience. For both emotion regulation strategies, emotion regulation success was not altered by changes in participant control over the (a) direction of attention (free-viewing vs. gaze-controlled) during image viewing and (b) valence (negative vs. neutral) of visual stimuli viewed when gaze was controlled. Taken together, these findings provide convergent evidence that attentional deployment does not alter subjective negative emotional experience during either cognitive reappraisal or expressive suppression, suggesting that strategy-specific processes, such as cognitive appraisal and response modulation, respectively, may have a greater impact on emotional regulation success than processes common to both strategies, such as attention.
Smoke from Fires in Southern Mexico
NASA Technical Reports Server (NTRS)
2002-01-01
On May 2, 2002, numerous fires in southern Mexico sent smoke drifting northward over the Gulf of Mexico. These views from the Multi-angle Imaging SpectroRadiometer illustrate the smoke extent over parts of the Gulf and the southern Mexican states of Tabasco, Campeche and Chiapas. At the same time, dozens of other fires were also burning in the Yucatan Peninsula and across Central America. A similar situation occurred in May and June of 1998, when Central American fires resulted in air quality warnings for several U.S. states. The image on the left is a natural color view acquired by MISR's vertical-viewing (nadir) camera. Smoke is visible, but sunglint in some ocean areas makes detection difficult. The middle image, on the other hand, is a natural color view acquired by MISR's 70-degree backward-viewing camera; its oblique view angle simultaneously suppresses sunglint and enhances the smoke. A map of aerosol optical depth, a measurement of the abundance of atmospheric particulates, is provided on the right. This quantity is retrieved using an automated computer algorithm that takes advantage of MISR's multi-angle capability. Areas where no retrieval occurred are shown in black. The images each represent an area of about 380 kilometers x 1550 kilometers and were captured during Terra orbit 12616. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
Rigorous analysis of an electric-field-driven liquid crystal lens for 3D displays
NASA Astrophysics Data System (ADS)
Kim, Bong-Sik; Lee, Seung-Chul; Park, Woo-Sang
2014-08-01
We numerically analyzed the optical performance of an electric-field-driven liquid crystal (ELC) lens adopted for 3-dimensional liquid crystal displays (3D-LCDs) through rigorous ray tracing. For the calculation, we first obtain the director distribution profile of the liquid crystals by using the Ericksen-Leslie equation of motion; then, we calculate the transmission of light through the ELC lens by using the extended Jones matrix method. The simulation was carried out for a 9-view 3D-LCD with a diagonal of 17.1 inches, where the ELC lens was slanted to achieve natural stereoscopic images. The results show that each view exists separately according to the viewing position at an optimum viewing distance of 80 cm. In addition, our simulation results provide a quantitative explanation for the ghost or blurred images between views observed from a 3D-LCD with an ELC lens. The numerical simulations are also shown to be in good agreement with the experimental results. The present simulation method is expected to provide optimum design conditions for obtaining natural 3D images by rigorously analyzing the optical functionalities of an ELC lens.
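In its basic (normal-incidence) 2x2 form, the Jones matrix formalism used above reduces to multiplying element matrices along the ray; the extended method generalizes this to oblique incidence through the LC director profile. A minimal standard-Jones sketch, not the extended method itself: a half-wave retarder at 45 degrees between crossed polarizers transmits fully.

```python
import numpy as np

def polarizer(theta):
    """Jones matrix of an ideal linear polarizer with axis at angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s], [c * s, s * s]], dtype=complex)

def retarder(delta, theta):
    """Jones matrix of a wave plate with retardance delta, fast axis at theta."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    core = np.diag([np.exp(-1j * delta / 2), np.exp(1j * delta / 2)])
    return rot @ core @ rot.T

e_in = np.array([1.0, 0.0], dtype=complex)  # x-polarized input field
# Crossed polarizers with a half-wave plate (delta = pi) at 45 degrees between.
e_out = polarizer(np.pi / 2) @ retarder(np.pi, np.pi / 4) @ e_in
intensity = float(np.sum(np.abs(e_out) ** 2))
print(round(intensity, 6))  # → 1.0
```

For an ELC lens, one would evaluate `retarder` with a spatially varying retardance derived from the computed director distribution at each ray position.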
2015-08-20
This view from NASA's Cassini spacecraft looks toward Saturn's icy moon Dione, with giant Saturn and its rings in the background, just prior to the mission's final close approach to the moon on August 17, 2015. At lower right is the large, multi-ringed impact basin named Evander, which is about 220 miles (350 kilometers) wide. The canyons of Padua Chasma, features that form part of Dione's bright, wispy terrain, reach into the darkness at left. Imaging scientists combined nine visible light (clear spectral filter) images to create this mosaic view: eight from the narrow-angle camera and one from the wide-angle camera, which fills in an area at lower left. The scene is an orthographic projection centered on terrain at 0.2 degrees north latitude, 179 degrees west longitude on Dione. An orthographic view is most like the view seen by a distant observer looking through a telescope. North on Dione is up. The view was acquired at distances ranging from approximately 106,000 miles (170,000 kilometers) to 39,000 miles (63,000 kilometers) from Dione and at a sun-Dione-spacecraft, or phase, angle of 35 degrees. Image scale is about 1,500 feet (450 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA19650
Two-Camera Acquisition and Tracking of a Flying Target
NASA Technical Reports Server (NTRS)
Biswas, Abhijit; Assad, Christopher; Kovalik, Joseph M.; Pain, Bedabrata; Wrigley, Chris J.; Twiss, Peter
2008-01-01
A method and apparatus have been developed to solve the problem of automated acquisition and tracking, from a location on the ground, of a luminous moving target in the sky. The method involves the use of two electronic cameras: (1) a stationary camera having a wide field of view, positioned and oriented to image the entire sky; and (2) a camera that has a much narrower field of view (a few degrees wide) and is mounted on a two-axis gimbal. The wide-field-of-view stationary camera is used to initially identify the target against the background sky. So that the approximate position of the target can be determined, pixel locations on the image-detector plane in the stationary camera are calibrated with respect to azimuth and elevation. The approximate target position is used to initially aim the gimballed narrow-field-of-view camera in the approximate direction of the target. Next, the narrow-field-of-view camera locks onto the target image, and thereafter the gimbals are actuated as needed to maintain lock and thereby track the target with precision greater than that attainable by use of the stationary camera.
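The calibration step described above, mapping pixel locations in the all-sky camera to azimuth and elevation, can be sketched as a fitted model over reference sightings. For illustration only, a least-squares affine fit is used below; a real all-sky lens would need a nonlinear (polynomial or fisheye) model, and the calibration points are invented:

```python
import numpy as np

def fit_affine(pixels, angles):
    """Least-squares affine map from pixel (u, v) to (azimuth, elevation)."""
    X = np.hstack([pixels, np.ones((len(pixels), 1))])
    coeffs, *_ = np.linalg.lstsq(X, angles, rcond=None)
    return coeffs  # 3x2 coefficient matrix

def pixel_to_azel(coeffs, pixel):
    u, v = pixel
    return np.array([u, v, 1.0]) @ coeffs

# Hypothetical calibration sightings at known azimuth/elevation (degrees).
pixels = np.array([[100.0, 100.0], [500.0, 100.0],
                   [100.0, 400.0], [500.0, 400.0]])
angles = np.array([[10.0, 80.0], [50.0, 80.0],
                   [10.0, 50.0], [50.0, 50.0]])
C = fit_affine(pixels, angles)
print(np.round(pixel_to_azel(C, (300.0, 250.0)), 3))  # → [30. 65.]
```

The resulting az/el estimate is what would be handed to the gimbal controller to point the narrow-field-of-view camera before it locks on.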
Wide-Field-of-View, High-Resolution, Stereoscopic Imager
NASA Technical Reports Server (NTRS)
Prechtl, Eric F.; Sedwick, Raymond J.
2010-01-01
A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through the use of a head tracking device, allowing the user to turn his or her head side-to-side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image: head mounted displays are one likely implementation, and 3D projection technologies are another under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package down for better mobility. Power savings studies can be performed to enable unattended, remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.
New Record Five-Wheel Drive, Spirit's Sol 1856 (Stereo)
NASA Technical Reports Server (NTRS)
2009-01-01
[figures removed for brevity, see original site: left-eye and right-eye views of a color stereo pair for PIA11962] NASA's Mars Exploration Rover Spirit used its navigation camera to take the images that have been combined into this stereo, 180-degree view of the rover's surroundings during the 1,856th Martian day, or sol, of Spirit's surface mission (March 23, 2009). The center of the view is toward the west-southwest. This view combines images from the left-eye and right-eye sides of the navigation camera. It appears three-dimensional when viewed through red-blue glasses with the red lens on the left. The rover had driven 25.82 meters (84.7 feet) west-northwestward earlier on Sol 1856. This is the longest drive on Mars so far by a rover using only five wheels. Spirit lost the use of its right-front wheel in March 2006. Before Sol 1856, the farthest Spirit had covered in a single sol's five-wheel drive was 24.83 meters (81.5 feet), on Sol 1363 (Nov. 3, 2007). The Sol 1856 drive made progress on a route planned for taking Spirit around the western side of the low plateau called 'Home Plate.' A portion of the northwestern edge of Home Plate is prominent in the left quarter of this image, toward the south. This view is presented as a cylindrical-perspective projection with geometric seam correction.
2008-03-01
dual view mammography with anticipated increased image contrast; and (4) expectedly improved positive predictive value, especially for...ray source allows for reduced radiation dose as compared to standard dual-view mammography and additionally improves image contrast between soft...clear signal enhancing ~2 cm diameter, detailed volume of tracer anterior to the chest wall which corresponded to that seen in the contrast enhanced
NASA Technical Reports Server (NTRS)
2006-01-01
Parallax gives depth to life. Simultaneous viewing from slightly different vantage points makes binocular humans superior to monocular cyclopes, and fixes us in the third dimension of the Universe. We've been stunned by 3-d images of Venus and Mars (along with more familiar views of earth). Now astronomers plan to give us the best view of all, 3-d images of the dynamic Sun. That's one of the prime goals of NASA's Solar Terrestrial Relations Observatories, also known as STEREO. STEREO is a pair of spacecraft observatories, one placed in orbit in front of earth, and one to be placed in an earth-trailing orbit. Simultaneous observations of the Sun with the two STEREO spacecraft will provide extraordinary 3-d views of all types of solar activity, especially the dramatic events called coronal mass ejections which send high energy particles from the outer solar atmosphere hurtling towards earth. The image above is the first image of the Sun from the two STEREO spacecraft: an extreme ultraviolet shot of the Sun's million-degree corona, taken by the Extreme Ultraviolet Imager on the Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI) instrument package. STEREO's first 3-d solar images should be available in April if all goes well. Put on your red and blue glasses!
Fast and low-dose computed laminography using compressive sensing based technique
NASA Astrophysics Data System (ADS)
Abbas, Sajid; Park, Miran; Cho, Seungryong
2015-03-01
Computed laminography (CL) is well known for inspecting microstructures in materials, weldments, and soldering defects in high-density packed components or multilayer printed circuit boards. The overload problem on the x-ray tube and gross failure of radio-sensitive electronic devices during a scan are among the important issues in CL that need to be addressed. Sparse-view CL is one viable option to overcome such issues. In this work a numerical aluminum welding phantom was simulated to collect sparsely sampled projection data at only 40 views using a conventional CL scanning scheme, i.e., an oblique scan. A compressive-sensing-inspired total-variation (TV) minimization algorithm was utilized to reconstruct the images. The images reconstructed from sparse-view data are visually comparable with those reconstructed from the full scan data set, i.e., 360 views at regular intervals. We have quantitatively confirmed that tiny structures such as copper and tungsten slags and copper flakes in the images reconstructed from sparsely sampled data are comparable with the corresponding structures in the fully sampled case. A blurring effect can be seen near the edges of a few pores at the bottom of the images reconstructed from sparsely sampled data, although the overall image quality is reasonable for fast and low-dose NDT.
Space Radar Image of Long Valley, California - 3-D view
1999-05-01
This is a three-dimensional perspective view of Long Valley, California by the Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar on board the space shuttle Endeavour. This view was constructed by overlaying a color composite SIR-C image on a digital elevation map. The digital elevation map was produced using radar interferometry, a process in which radar data acquired on different passes of the space shuttle are compared to obtain elevation information. The data were acquired on April 13, 1994 and on October 3, 1994, during the first and second flights of the SIR-C/X-SAR radar instrument. The color composite radar image was produced by assigning red to the C-band (horizontally transmitted and vertically received) polarization; green to the C-band (vertically transmitted and received) polarization; and blue to the ratio of the two data sets. Blue areas in the image are smooth and yellow areas are rock outcrops with varying amounts of snow and vegetation. The view is looking north along the northeastern edge of the Long Valley caldera, a volcanic collapse feature created 750,000 years ago and the site of continued subsurface activity. Crowley Lake is off the image to the left. http://photojournal.jpl.nasa.gov/catalog/PIA01757
Combination of CT scanning and fluoroscopy imaging on a flat-panel CT scanner
NASA Astrophysics Data System (ADS)
Grasruck, M.; Gupta, R.; Reichardt, B.; Suess, Ch.; Schmidt, B.; Stierstorfer, K.; Popescu, S.; Brady, T.; Flohr, T.
2006-03-01
We developed and evaluated a prototype flat-panel detector based Volume CT (fpVCT) scanner. The fpVCT scanner consists of a Varian 4030CB a-Si flat-panel detector mounted in a multi-slice CT gantry (Siemens Medical Solutions). It provides a 25 cm field of view with 18 cm z-coverage at the isocenter. In addition to standard tomographic scanning, fpVCT allows two new scan modes: (1) fluoroscopic imaging from any arbitrary rotation angle, and (2) continuous, time-resolved tomographic scanning of a dynamically changing viewing volume. Fluoroscopic imaging is feasible by modifying the standard CT gantry so that the imaging chain can be oriented along any user-selected rotation angle. Scanning with a stationary gantry, after it has been oriented, is equivalent to a conventional fluoroscopic examination. This scan mode enables combined use of high-resolution tomography and real-time fluoroscopy with a clinically usable field of view in the z direction. The second scan mode allows continuous observation of a time-evolving process such as perfusion. The gantry can be continuously rotated for up to 80 sec, with the rotation time ranging from 3 to 20 sec, to gather projection images of a dynamic process. The projection data, which provide a temporal log of the viewing volume, are then converted into multiple image stacks that capture the temporal evolution of a dynamic process. Studies using phantoms, ex vivo specimens, and live animals have confirmed that these new scanning modes are clinically usable and offer a unique view of the anatomy and physiology that heretofore has not been feasible using static CT scanning. At the current level of image quality and temporal resolution, several clinical applications such as dynamic angiography, tumor enhancement pattern and vascularity studies, organ perfusion, and interventional applications are in reach.
Comparative performance evaluation of a new a-Si EPID that exceeds quad high-definition resolution.
McConnell, Kristen A; Alexandrian, Ara; Papanikolaou, Niko; Stathakis, Sotiri
2018-01-01
Electronic portal imaging devices (EPIDs) are an integral part of the radiation oncology workflow for treatment setup verification. Several commercial EPID implementations are currently available, each with varying capabilities. To standardize performance evaluation, Task Group Report 58 (TG-58) and TG-142 outline specific image quality metrics to be measured. A LinaTech Image Viewing System (IVS), with the highest commercially available pixel matrix (2688x2688 pixels), was independently evaluated and compared to an Elekta iViewGT (1024x1024 pixels) and a Varian aSi-1000 (1024x768 pixels) using a PTW EPID QC Phantom. The IVS, iViewGT, and aSi-1000 were each used to acquire 20 images of the PTW QC Phantom. The QC phantom was placed on the couch and aligned at isocenter. The images were exported and analyzed using the epidSoft image quality assurance (QA) software. The reported metrics were signal linearity, isotropy of signal linearity, signal-to-noise ratio (SNR), low-contrast resolution, and high-contrast resolution. These values were compared between the three EPID solutions. Computed metrics demonstrated comparable results between the EPID solutions, with the IVS outperforming the aSi-1000 and iViewGT in the low- and high-contrast resolution analysis. The performance of three commercial EPID solutions has been quantified, evaluated, and compared using results from the PTW QC Phantom. The IVS outperformed the other panels in low- and high-contrast resolution, but to fully realize the benefits of the IVS, the monitor on which the high-resolution images are viewed must be chosen carefully to prevent downsampling and visual loss of resolution.
NASA Astrophysics Data System (ADS)
Henri, Christopher J.; Pike, Gordon; Collins, D. Louis; Peters, Terence M.
1990-07-01
We present two methods for acquiring and viewing integrated 3-D images of cerebral vasculature and cortical anatomy. The aim of each technique is to provide the neurosurgeon or radiologist with a 3-D image containing information that cannot ordinarily be obtained from a single imaging modality. The first approach employs recent developments in MR, which is now capable of imaging flowing blood as well as static tissue. Here, true 3-D data are acquired and displayed using volume or surface rendering techniques. The second approach is based on the integration of x-ray projection angiograms and tomographic image data, allowing a composite image of anatomy and vasculature to be viewed in 3-D. This is accomplished by superimposing an angiographic stereo-pair onto volume-rendered images of either CT or MR data created from matched viewing geometries. The two approaches are outlined and compared. Results are presented for each technique, and potential clinical applications are discussed.
PlenoPatch: Patch-Based Plenoptic Image Manipulation.
Zhang, Fang-Lue; Wang, Jue; Shechtman, Eli; Zhou, Zi-Ye; Shi, Jia-Xin; Hu, Shi-Min
2017-05-01
Patch-based image synthesis methods have been successfully applied for various editing tasks on still images, videos and stereo pairs. In this work we extend patch-based synthesis to plenoptic images captured by consumer-level lenselet-based devices for interactive, efficient light field editing. In our method the light field is represented as a set of images captured from different viewpoints. We decompose the central view into different depth layers, and present it to the user for specifying the editing goals. Given an editing task, our method performs patch-based image synthesis on all affected layers of the central view, and then propagates the edits to all other views. Interaction is done through a conventional 2D image editing user interface that is familiar to novice users. Our method correctly handles object boundary occlusion with semi-transparency, thus can generate more realistic results than previous methods. We demonstrate compelling results on a wide range of applications such as hole-filling, object reshuffling and resizing, changing object depth, light field upscaling and parallax magnification.
Schaefer, Evelyn G; Halldorson, Michael K; Dizon-Reynante, Cheryl
2011-04-01
The flashbulb accounts of 38 participants concerning the September 11th 2001 terrorist attack reported at both 28 hours and 6 months following the event were examined for quantity, quality, and consistency as a function of the time lapse between first learning of the event and initial viewing of media images. The flashbulb accounts of those who reported seeing images at least an hour after learning of the event differed qualitatively, but not quantitatively, from accounts of participants who reported seeing images at the same time as or within minutes of learning of the event. Delayed viewing of images resulted in less elaborate and generally less consistent accounts across the 6-month interval. The results are discussed in terms of factors affecting flashbulb memory formation and individual differences in connectedness to the event.
1999-08-24
One wide-angle and eight narrow-angle camera images of Miranda, taken by NASA Voyager 2, were combined in this view. The controlled mosaic was transformed to an orthographic view centered on the south pole.
NASA Astrophysics Data System (ADS)
Wu, Xiongxiong; Wang, Xiaorui; Zhang, Jianlei; Yuan, Ying; Chen, Xiaoxiang
2017-04-01
To realize a large field of view (FOV) and high-resolution dynamic gaze tracking of a moving target, this paper proposes the monocentric multiscale foveated (MMF) imaging system based on monocentric multiscale design and foveated imaging. We first present the MMF imaging system concept. We then analyze the large field curvature and distortion of the secondary image when the spherical intermediate image produced by the primary monocentric objective lens is relayed by the microcameras. A type of zoom endoscope objective lens is then selected as the initial structure and optimized with ZEMAX optical design software to minimize field curvature and distortion. The simulation results show that the maximum field curvature in the full field of view is below 0.25 mm and the maximum distortion in the full field of view is below 0.6%, which meets the requirements of the microcamera in the proposed MMF imaging system. In addition, a simple doublet is used to design the foveated imaging system. Together, the microcamera and foveated-imager designs constitute the whole MMF imaging system.
Polarimetric Imaging for the Detection of Disturbed Surfaces
2009-06-01
List-of-figures fragment; the recoverable captions are: Figure 4, Rayleigh roughness criterion as a function of incident angle; Figure 5, definition of geometrical terms (after Egan & Hallock, 1966); Figure 6, Haleakala ash depolarization at 0° and 60° viewing angles (from Egan et al., 1968); Figure 7, basalt depolarization at 0° viewing angle.
27. AERIAL VIEW OF ARVFS FIELD TEST SITE AS IT ...
27. AERIAL VIEW OF ARVFS FIELD TEST SITE AS IT LOOKED IN 1983. OBLIQUE VIEW FACING EAST. BUNKER IS IN FOREGROUND, PROTECTIVE SHED FOR WFRP AT TOP OF IMAGE. INEL PHOTO NUMBER 83-574-12-1, TAKEN IN 1983. PHOTOGRAPHER: ROMERO. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID
18. VIEW LOOKING FORWARD FROM ENGINE ROOM INTO GALLEY. STARBOARD ...
18. VIEW LOOKING FORWARD FROM ENGINE ROOM INTO GALLEY. STARBOARD ENGINE, FUEL TANK AND BATTERIES SHOWN IN RIGHT SIDE OF IMAGE. OIL-FIRED GALLEY STOVE AND FORWARD COMPANIONWAY LADDER IS IN VIEW THROUGH DOORWAY BEYOND. (HAER FIELD TEAM MEMBER CHRISTOPHER CYZEWSKI IN GALLEY) - Pilot Schooner "Alabama", Moored in harbor at Vineyard Haven, Vineyard Haven, Dukes County, MA
Juno View of Jupiter Southern Lights
2016-09-02
This infrared image gives an unprecedented view of the southern aurora of Jupiter, as captured by NASA's Juno spacecraft on August 27, 2016. The planet's southern aurora can hardly be seen from Earth due to our home planet's position with respect to Jupiter's south pole. Juno's unique polar orbit provides the first opportunity to observe this region of the gas-giant planet in detail. Juno's Jovian Infrared Auroral Mapper (JIRAM) camera acquired the view at wavelengths ranging from 3.3 to 3.6 microns -- the wavelengths of light emitted by excited hydrogen ions in the polar regions. The view is a mosaic of three images taken just minutes apart from each other, about four hours after the perijove pass while the spacecraft was moving away from Jupiter. http://photojournal.jpl.nasa.gov/catalog/PIA21033
Smith, Richard W.
1979-01-01
An acoustic imaging system for displaying an object viewed by a moving array of transducers as the array is pivoted about a fixed point within a given plane. A plurality of transducers are fixedly positioned and equally spaced within a laterally extending array and operatively directed to transmit and receive acoustic signals along substantially parallel transmission paths. The transducers are sequentially activated along the array to transmit and receive acoustic signals according to a preestablished sequence. Means are provided for generating output voltages for each reception of an acoustic signal, corresponding to the coordinate position of the object viewed as the array is pivoted. Receptions from each of the transducers are presented on the same display at coordinates corresponding to the actual position of the object viewed to form a plane view of the object scanned.
Lanning, Sharon K; Best, Al M; Temple, Henry J; Richards, Philip S; Carey, Allison; McCauley, Laurie K
2006-02-01
Accurate and consistent radiographic interpretation among clinical instructors is needed for assessment of teaching, student performance, and patient care. The purpose of this investigation was to determine if the method of radiographic viewing affects accuracy and consistency of instructors' determinations of bone loss. Forty-one clinicians who provide instruction in a dental school clinical teaching program (including periodontists, general dentists, periodontal graduate students, and dental hygienists) quantified bone loss for up to twenty-five teeth into four descriptive categories using a view box for plain film viewing or a projection system for digitized image viewing. Ratings were compared to the correct category as determined by direct measurement using the Schei ruler. Agreement with the correct choice for the view box and projection system was 70.2 percent and 64.5 percent, respectively. The mean difference was better for a projection system due to small rater error by graduate students. Projection system ratings were slightly less consistent than view box ratings. Dental hygiene faculty ratings were the most consistent but least accurate. Although the projection system resulted in slightly reduced accuracy and consistency among instructors, training sessions utilizing a single method for projecting digitized radiographic images have their advantages and may positively influence dental education and patient care by enhancing accuracy and consistency of radiographic interpretation among instructors.
Variation of MODIS reflectance and vegetation indices with viewing geometry and soybean development.
Breunig, Fábio M; Galvão, Lênio S; Formaggio, Antônio R; Epiphanio, José C N
2012-06-01
Directional effects introduce variability into reflectance and vegetation index determination, especially when large field-of-view sensors are used (e.g., the Moderate Resolution Imaging Spectroradiometer - MODIS). In this study, we evaluated directional effects on MODIS reflectance and four vegetation indices (Normalized Difference Vegetation Index - NDVI; Enhanced Vegetation Index - EVI; Normalized Difference Water Index - NDWI(1640) and NDWI(2120)) over soybean development in two growing seasons (2004-2005 and 2005-2006). To keep the reproductive stage for a given cultivar constant while varying the viewing geometry, pairs of images obtained on close dates and at opposite view angles were analyzed. Using non-parametric statistics with bootstrapping and normalizing these indices for angular differences among viewing directions, their sensitivities to directional effects were studied. Results showed that the variation in MODIS reflectance between consecutive phenological stages was generally smaller than that resulting from viewing geometry for closed canopies; the contrary was observed for incomplete canopies. The reflectance of the first seven MODIS bands was higher in the backscattering direction. Except for the EVI, the vegetation indices had larger values in the forward-scattering direction. Directional effects decreased with canopy closure. The NDVI was less affected by directional effects than the other indices, presenting the smallest differences between viewing directions for fixed phenological stages.
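The four indices above are simple band combinations of surface reflectance. As a sketch, they can be computed as follows; the standard MODIS EVI coefficients (G = 2.5, C1 = 6, C2 = 7.5, L = 1) and the band-to-argument mapping are assumed from the index definitions, not taken from this abstract:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index with the standard MODIS coefficients."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

def ndwi(nir, swir):
    """Normalized Difference Water Index; swir is the reflectance of the
    shortwave-infrared band centered near 1640 nm or 2120 nm, matching
    the NDWI(1640) and NDWI(2120) variants used in the study."""
    return (nir - swir) / (nir + swir)
```

With reflectances given as fractions (0-1), these functions apply pixel-wise; with NumPy arrays they vectorize unchanged.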
Lee, Su Hyun; Cho, Nariya; Chang, Jung Min; Koo, Hye Ryoung; Kim, Jin You; Kim, Won Hwa; Bae, Min Sun; Yi, Ann; Moon, Woo Kyung
2013-10-28
Purpose To determine whether two-view shear-wave elastography (SWE) improves the performance of radiologists in differentiating benign from malignant breast masses compared with single-view SWE. Materials and Methods This prospective study was conducted with institutional review board approval, and written informed consent was obtained. B-mode ultrasonographic (US) and orthogonal SWE images were obtained for 219 breast masses (136 benign and 83 malignant; mean size, 14.8 mm) in 219 consecutive women (mean age, 47.9 years; range, 20-78 years). Five blinded radiologists independently assessed the likelihood of malignancy for three data sets: B-mode US alone, B-mode US and single-view SWE, and B-mode US and two-view SWE. Interobserver agreement regarding Breast Imaging Reporting and Data System (BI-RADS) category and the area under the receiver operating characteristic curve (AUC) of each data set were compared. Results Interobserver agreement was moderate (κ = 0.560 ± 0.015 [standard error of the mean]) for BI-RADS category assessment with B-mode US alone. When SWE was added to B-mode US, five readers showed substantial interobserver agreement (κ = 0.629 ± 0.017 for single-view SWE; κ = 0.651 ± 0.014 for two-view SWE). The mean AUC of B-mode US was 0.870 (range, 0.855-0.884). The AUC of B-mode US and two-view SWE (average, 0.928; range, 0.904-0.941) was higher than that of B-mode US and single-view SWE (average, 0.900; range, 0.890-0.920), with statistically significant differences for three readers (P ≤ .003). Conclusion The performance of radiologists in differentiating benign from malignant breast masses was improved when B-mode US was combined with two-view SWE compared with that when B-mode US was combined with single-view SWE. © RSNA, 2013 Supplemental material: S1.
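The reader comparison above rests on the area under the ROC curve. A minimal sketch of the empirical AUC (the Mann-Whitney statistic), assuming per-case malignancy scores rather than the paper's BI-RADS-based ratings:

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC: the probability that a randomly chosen positive
    (malignant) case scores higher than a randomly chosen negative
    (benign) case, with ties counted as half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which is the scale on which the 0.900 vs 0.928 comparison above is made.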
Macy, Jonathan T.; Chassin, Laurie; Presson, Clark C.; Yeung, Ellen
2015-01-01
Objective: Test the effect of exposure to the U.S. Food and Drug Administration's proposed graphic images with text warning statements for cigarette packages on implicit and explicit attitudes toward smoking. Design and methods: A two-session web-based study was conducted with 2192 young adults 18-25 years old. During session one, demographics, smoking behavior, and baseline implicit and explicit attitudes were assessed. Session two, completed on average 18 days later, contained random assignment to viewing one of three sets of cigarette packages: graphic images with text warnings, text warnings only, or current U.S. Surgeon General's text warnings. Participants then completed post-exposure measures of implicit and explicit attitudes. ANCOVAs tested the effect of condition on the outcomes, controlling for baseline attitudes. Results: Smokers who viewed packages with graphic images plus text warnings demonstrated more negative implicit attitudes compared to smokers in the other conditions (p=.004). For the entire sample, explicit attitudes were more negative for those who viewed graphic images plus text warnings compared to those who viewed current U.S. Surgeon General's text warnings (p=.014), but there was no difference compared to those who viewed text-only warnings. Conclusion: Graphic health warnings on cigarette packages can influence young adult smokers' implicit attitudes toward smoking. PMID:26442992
Negative stimulus-response compatibility observed with a briefly displayed image of a hand.
Vainio, Lari
2011-12-01
Manual responses can be primed by viewing an image of a hand. The left-right identity of the viewed hand reflexively facilitates responses of the hand that corresponds to the identity. Previous research also suggests that when the response activation is triggered by an arrow, which is backward-masked and presented briefly, the activation manifests itself in the negative priming effect. The present study showed that response activation, which is produced by an identity of a briefly presented image of a hand, can be similarly associated with a negative priming effect. However, in contrast to the previously reported negative priming effects, the hand stimuli produced negative priming even when the hand was not backward-masked and did not contain task-relevant information. The study supports the view that the automatic inhibition of motor activation triggered by briefly viewed objects is a general and basic functional principle in exogenous motor control processes. Copyright © 2011 Elsevier Inc. All rights reserved.
2017-08-11
NASA's Cassini spacecraft looks toward the night side of Saturn's moon Titan in a view that highlights the extended, hazy nature of the moon's atmosphere. During its long mission at Saturn, Cassini has frequently observed Titan at viewing angles like this, where the atmosphere is backlit by the Sun, in order to make visible the structure of the hazes. Titan's high-altitude haze layer appears blue here, whereas the main atmospheric haze is orange. The difference in color could be due to particle sizes in the haze. The blue haze likely consists of smaller particles than the orange haze. Images taken using red, green and blue spectral filters were combined to create this natural-color view. The image was taken with the Cassini spacecraft narrow-angle camera on May 29, 2017. The view was acquired at a distance of approximately 1.2 million miles (2 million kilometers) from Titan. Image scale is 5 miles (9 kilometers) per pixel. https://photojournal.jpl.nasa.gov/catalog/PIA21625
Dual-view-zone tabletop 3D display system based on integral imaging.
He, Min-Yang; Zhang, Han-Le; Deng, Huan; Li, Xiao-Wei; Li, Da-Hai; Wang, Qiong-Hua
2018-02-01
In this paper, we propose a dual-view-zone tabletop 3D display system based on integral imaging by using a multiplexed holographic optical element (MHOE) that has the optical properties of two sets of microlens arrays. The MHOE is recorded with a reference beam using the single-exposure method. The reference beam records the wavefronts of a microlens array from two different directions. Thus, when the display beam is projected onto the MHOE, two wavefronts with different directions are rebuilt and the 3D virtual images can be reconstructed in two viewing zones. The MHOE has angle and wavelength selectivity: its diffraction efficiency is greatest when the wavelength and angle of the display beam are matched. Because unmatched light simply passes through the MHOE, the system has the advantage of a see-through display. The experimental results confirm the feasibility of the dual-view-zone tabletop 3D display system.
3D digital image correlation using single color camera pseudo-stereo system
NASA Astrophysics Data System (ADS)
Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang
2017-10-01
Three-dimensional digital image correlation (3D-DIC) has been widely used in industry to measure 3D contours and whole-field displacement/strain. In this paper, a novel single-color-camera 3D-DIC setup using a reflection-based pseudo-stereo system is proposed. Compared to the conventional single-camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system achieves both views using the whole CCD chip and without reducing the spatial resolution. In addition, as in a conventional 3D-DIC system, the centers of the two views coincide with the center of the CCD chip, which minimizes image distortion relative to the conventional pseudo-stereo system. The two views, overlapped on the CCD, are separated in the color domain, and the standard 3D-DIC algorithm can be applied directly for the evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.
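The two steps the system implies can be sketched minimally: separating the overlapped views via color channels, and the zero-normalized cross-correlation (ZNCC) that is the standard subset-matching metric in DIC. The channel assignment (view 1 in red, view 2 in blue) is an assumption for illustration, not the paper's calibration:

```python
import numpy as np

def split_color_views(rgb):
    """Separate two stereo views overlapped on one color sensor,
    assuming view 1 was captured through the red channel and
    view 2 through the blue channel. rgb: H x W x 3 array."""
    view1 = rgb[:, :, 0]  # red channel
    view2 = rgb[:, :, 2]  # blue channel
    return view1, view2

def zncc(f, g):
    """Zero-normalized cross-correlation between two equally sized
    grayscale subsets -- the similarity metric at the core of DIC
    matching; invariant to linear intensity changes."""
    f = f - f.mean()
    g = g - g.mean()
    return float((f * g).sum() / np.sqrt((f * f).sum() * (g * g).sum()))
```

A real DIC evaluation would maximize the ZNCC over subset translations (and shape-function parameters); only the metric itself is shown here.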
A multi-view face recognition system based on cascade face detector and improved Dlib
NASA Astrophysics Data System (ADS)
Zhou, Hongjun; Chen, Pei; Shen, Wei
2018-03-01
In this research, we present a framework for a multi-view face detection and recognition system based on a cascade face detector and improved Dlib. This method aims to solve the problems of low efficiency and low accuracy in multi-view face recognition, to build a multi-view face recognition system, and to discover a suitable monitoring scheme. For face detection, the cascade face detector extracts Haar-like features from the training samples, which are used to train a cascade classifier with the AdaBoost algorithm. For face recognition, we propose an improved distance model based on Dlib to improve the accuracy of multi-view face recognition. Furthermore, we apply the proposed method to face images taken from different viewing directions, including horizontal, overhead, and looking-up views, and investigate a suitable monitoring scheme. The method works well for multi-view face recognition, and simulations and tests show satisfactory experimental results.
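Dlib's face recognition model conventionally represents a face as a 128-dimensional embedding compared by Euclidean distance. A minimal sketch of such a distance model over a small gallery; the 0.6 threshold is dlib's commonly cited operating point, not the paper's improved model, and the gallery layout is illustrative:

```python
import numpy as np

def match_face(probe, gallery, threshold=0.6):
    """Nearest-neighbor matching of a 128-D face embedding against a
    gallery dict {name: embedding}. Returns (name, distance), with
    name = None when no gallery face is within the threshold."""
    best_name, best_dist = None, float("inf")
    for name, emb in gallery.items():
        d = float(np.linalg.norm(probe - emb))  # Euclidean distance
        if d < best_dist:
            best_name, best_dist = name, d
    return (best_name, best_dist) if best_dist < threshold else (None, best_dist)
```

An improved distance model, as the abstract describes, would replace the plain Euclidean comparison while keeping this matching structure.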
How much articular displacement can be detected using fluoroscopy for tibial plateau fractures?
Haller, Justin M; O'Toole, Robert; Graves, Matthew; Barei, David; Gardner, Michael; Kubiak, Erik; Nascone, Jason; Nork, Sean; Presson, Angela P; Higgins, Thomas F
2015-11-01
While there is conflicting evidence regarding the importance of anatomic reduction for tibial plateau fractures, there are currently no studies that analyse our ability to grade reduction based on fluoroscopic imaging. The purpose of this study was to determine the accuracy of fluoroscopy in judging tibial plateau articular reduction. Ten embalmed human cadavers were selected. The lateral plateau was sagittally sectioned, and the joint was reduced under direct visualization. Lateral, anterior-posterior (AP), and joint-line fluoroscopic views were obtained. The same fluoroscopic views were obtained with 2 mm displacement and 5 mm displacement. The images were randomised, and eight orthopaedic traumatologists were asked whether the plateau was reduced. Within each pair of conditions (view and displacement from 0 mm to 5 mm), sensitivity, specificity, and intraclass correlations (ICC) were evaluated. The AP-lateral view with 5 mm displacement yielded the highest accuracy for detecting reduction, at 90% (95% CI: 83-94%). For the other conditions, accuracy ranged from 37% to 83%. Sensitivity was highest for the reduced lateral view (79%, 95% CI: 57-91%). Specificity was highest in the AP-lateral view, at 98% (95% CI: 93-99%) for 5 mm step-off. The ICC was perfect for the AP-lateral view with 5 mm displacement, but otherwise agreement ranged from poor to moderate (ICC = 0.09-0.46). Finally, there was no additional benefit to including the joint-line view with the AP and lateral views. Using both AP and lateral views for 5 mm displacement had the highest accuracy, specificity, and ICC. Outside of this scenario, agreement was poor to moderate and accuracy was low. Applying this clinically, direct visualization of the articular surface may be necessary to detect malreduction of less than 5 mm. Copyright © 2015 Elsevier Ltd. All rights reserved.
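The sensitivity and specificity figures reported above come from tallying reader calls against the known cadaveric displacement. A minimal sketch of that computation (variable names are illustrative; True means "called reduced"):

```python
def sens_spec(truth, calls):
    """Sensitivity and specificity of reader calls against ground truth.
    truth, calls: equal-length sequences of booleans (True = 'reduced')."""
    tp = sum(t and c for t, c in zip(truth, calls))            # true positives
    tn = sum((not t) and (not c) for t, c in zip(truth, calls))  # true negatives
    fn = sum(t and (not c) for t, c in zip(truth, calls))      # missed reductions
    fp = sum((not t) and c for t, c in zip(truth, calls))      # missed displacements
    return tp / (tp + fn), tn / (tn + fp)
```

Sensitivity here is the fraction of truly reduced joints called reduced; specificity is the fraction of displaced joints correctly called displaced.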
Atmospheric Science Data Center
2014-05-15
Multi-angle views of the Appalachian Mountains, March 6, 2000. Atmospheric Science Data Center in Hampton, VA. Photo credit: NASA/GSFC/LaRC/JPL, MISR Science Team.
An extensive dataset of eye movements during viewing of complex images.
Wilming, Niklas; Onat, Selim; Ossandón, José P; Açık, Alper; Kietzmann, Tim C; Kaspar, Kai; Gameiro, Ricardo R; Vormberg, Alexandra; König, Peter
2017-01-31
We present a dataset of free-viewing eye-movement recordings that contains more than 2.7 million fixation locations from 949 observers on more than 1000 images from different categories. This dataset aggregates and harmonizes data from 23 different studies conducted at the Institute of Cognitive Science at Osnabrück University and the University Medical Center in Hamburg-Eppendorf. Trained personnel recorded all studies under standard conditions with homogeneous equipment and parameter settings. All studies allowed for free eye-movements, and differed in the age range of participants (~7-80 years), stimulus sizes, stimulus modifications (phase scrambled, spatial filtering, mirrored), and stimuli categories (natural and urban scenes, web sites, fractal, pink-noise, and ambiguous artistic figures). The size and variability of viewing behavior within this dataset presents a strong opportunity for evaluating and comparing computational models of overt attention, and furthermore, for thoroughly quantifying strategies of viewing behavior. This also makes the dataset a good starting point for investigating whether viewing strategies change in patient groups.
Secure information display with limited viewing zone by use of multi-color visual cryptography.
Yamamoto, Hirotsugu; Hayasaki, Yoshio; Nishida, Nobuo
2004-04-05
We propose a display technique that ensures security of visual information by use of visual cryptography. A displayed image appears as a completely random pattern unless viewed through a decoding mask. The display has a limited viewing zone with the decoding mask. We have developed a multi-color encryption code set. Eight colors are represented in combinations of a displayed image composed of red, green, blue, and black subpixels and a decoding mask composed of transparent and opaque subpixels. Furthermore, we have demonstrated secure information display by use of an LCD panel.
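The authors' eight-color code set is not specified here, but the underlying idea follows classic visual cryptography. A minimal sketch of the standard binary (2,2) scheme, in which each secret pixel expands into a 2x2 subpixel block and stacking the shares reveals the image (this simplification is not the paper's multi-color code):

```python
import random

# Two complementary 2x2 subpixel patterns (1 = opaque), each half-opaque.
PATTERNS = [((1, 0), (0, 1)), ((0, 1), (1, 0))]

def make_shares(secret):
    """Classic (2,2) visual cryptography for a binary image given as a
    list of rows of 0/1 (1 = black). Each share alone is a random
    pattern; stacking the two reveals the secret."""
    h, w = len(secret), len(secret[0])
    s1 = [[0] * (2 * w) for _ in range(2 * h)]
    s2 = [[0] * (2 * w) for _ in range(2 * h)]
    for y in range(h):
        for x in range(w):
            rows = random.choice(PATTERNS)  # random pattern per pixel
            for dy in range(2):
                for dx in range(2):
                    bit = rows[dy][dx]
                    s1[2 * y + dy][2 * x + dx] = bit
                    # white pixel: identical block; black: complementary block
                    s2[2 * y + dy][2 * x + dx] = bit ^ secret[y][x]
    return s1, s2

def stack(s1, s2):
    """Overlaying transparencies: a subpixel is opaque if either share is."""
    return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]
```

After stacking, black pixels appear fully opaque (4 of 4 subpixels) and white pixels half-opaque (2 of 4), which is the contrast mechanism the display's decoding mask exploits.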
An Analysis of Image Segmentation Time in Beam’s-Eye-View Treatment Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Chun; Spelbring, D.R.; Chen, George T.Y.
In this work we tabulate and histogram the image segmentation time for beam's-eye-view (BEV) treatment planning in our center. The average time needed to generate contours on CT images delineating normal structures and treatment target volumes is calculated using a database containing over 500 patients' BEV plans. The average number of contours and the total image segmentation time needed for BEV plans in three common treatment sites, namely head/neck, lung/chest, and prostate, were estimated.
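The per-site tabulation described can be sketched as a simple aggregation over plan records; the record fields below are hypothetical, not the study's database schema:

```python
from collections import defaultdict
from statistics import mean

def summarize(plans):
    """Tabulate mean contour count and mean segmentation time per
    treatment site from records like
    {"site": "prostate", "n_contours": 42, "seg_minutes": 95}."""
    by_site = defaultdict(list)
    for p in plans:
        by_site[p["site"]].append(p)
    return {site: {"mean_contours": mean(p["n_contours"] for p in ps),
                   "mean_minutes": mean(p["seg_minutes"] for p in ps)}
            for site, ps in by_site.items()}
```

Histogramming the times is then a matter of binning each site's `seg_minutes` values.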
A Macintosh-Based Scientific Images Video Analysis System
NASA Technical Reports Server (NTRS)
Groleau, Nicolas; Friedland, Peter (Technical Monitor)
1994-01-01
A set of experiments was designed at MIT's Man-Vehicle Laboratory in order to evaluate the effects of zero gravity on the human orientation system. During many of these experiments, the movements of the eyes are recorded on high-quality video cassettes. The images must be analyzed off-line to calculate the position of the eyes at every moment in time. To this end, I have implemented a simple, inexpensive computerized system which measures the angle of rotation of the eye from digitized video images. The system is implemented on a desktop Macintosh computer, processes one playback frame per second, and exhibits adequate levels of accuracy and precision. The system uses LabVIEW, a digital output board, and a video input board to control a VCR, digitize video images, analyze them, and provide a user-friendly interface for the various phases of the process. The system uses the Concept Vi LabVIEW library (Graftek's Image, Meudon la Foret, France) for image grabbing and displaying as well as translation to and from LabVIEW arrays. Graftek's software layer drives an Image Grabber board from Neotech (Eastleigh, United Kingdom). A Colour Adapter box from Neotech provides adequate video signal synchronization. The system also requires a LabVIEW-driven digital output board (MacADIOS II from GW Instruments, Cambridge, MA) controlling a slightly modified VCR remote control used mainly to advance the video tape frame by frame.
The Distal Humerus Axial View: Assessment of Displacement in Medial Epicondyle Fractures.
Souder, Christopher D; Farnsworth, Christine L; McNeil, Natalie P; Bomar, James D; Edmonds, Eric W
2015-01-01
The assessment and treatment of childhood medial epicondyle humerus fractures continues to be associated with significant debate. Several studies demonstrate that standard radiographic views are unable to accurately portray the true displacement. Without reliable ways to assess the amount of displacement, how can we debate treatment and outcomes? This study introduces a novel imaging technique for the evaluation of medial epicondyle fractures. An osteotomy of a cadaveric humerus was performed to simulate a medial epicondyle fracture. Plain radiographs were obtained with the fracture fragment displaced anteriorly in 2-mm increments between 0 and 18 mm. Anteroposterior (AP), internal oblique (IR), lateral (LAT), and distal humerus axial (AXIAL) views were performed. Axial images were obtained by positioning the central ray above the shoulder at 15 to 20 degrees from the long axis of the humerus, centered on the distal humerus. Displacement (mm) was measured by 7 orthopaedic surgeons on digital radiographs. At 10 mm displacement, AP views underestimated displacement by 5.5±0.6 mm and IR views underestimated by 3.8±2.1 mm. On LAT views, readers were not able to visualize fragments with <10 mm displacement. Displacement ≥10 mm from LAT views was overestimated by 1 reader by up to 4.6 mm and underestimated by others by up to 18.0 mm. AXIAL images more closely estimated the true amount of displacement, with a mean 1.5±1.1 mm error in measurement for <10 mm displacement and a mean 0.8±0.7 mm error for displacements of ≥10 mm. AXIAL measurements correlated strongly with the actual displacement (r=0.998, P<0.05); AP measurements did not (r=0.655, P=0.55). Intraclass correlation coefficient (ICC) was 0.257 for AP and IR measurements; ICC was 0.974 for AXIAL measurements. Standard imaging, consisting of AP, IR, and LAT radiographs, consistently underestimates the actual displacement of medial epicondyle humerus fractures. 
The newly described AXIAL projection more accurately and reliably demonstrated the true displacement while reducing the need for advanced imaging such as computed tomography. This simple view can be easily obtained at a clinic visit, enhancing the surgeon's ability to determine the true displacement.
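For readers reproducing this kind of measurement-accuracy analysis, the reported correlation and bias statistics can be computed directly from paired true/measured displacements. A minimal sketch with hypothetical readings (not the study's data):

```python
import statistics

def pearson_r(xs, ys):
    # Pearson correlation between true and measured displacements
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical readings (mm): an AXIAL-like view tracking the truth closely
true_mm  = [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
axial_mm = [0.5, 2.2, 3.8, 6.3, 7.6, 10.4, 11.7, 14.2, 15.8, 18.1]

r = pearson_r(true_mm, axial_mm)
bias = statistics.fmean(m - t for m, t in zip(axial_mm, true_mm))
print(round(r, 3), round(bias, 2))
```

A near-unity r with a small mean bias is the pattern the AXIAL view showed; the AP view would give a lower r and a large negative bias.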
Who's that Girl: Television's Role in the Body Image Development of Young White and Black Women
ERIC Educational Resources Information Center
Schooler, Deborah; Ward, L. Monique; Merriwether, Ann; Caruthers, Allison
2004-01-01
Although findings indicate a connection between frequent media use and greater body dissatisfaction, little attention has focused on the role of race. Accordingly, this study investigates the relation between television viewing and body image among 87 Black and 584 White women. Participants reported monthly viewing amounts of mainstream and…
Volcanoes of the Alaska Peninsula and Aleutian Islands, Alaska: selected photographs
Neal, Christina A.; McGimsey, Robert G.
2002-01-01
This CD-ROM contains 97 digital images of volcanoes along the Aleutian volcanic arc in Alaska. Perspectives include distant aerial shots, ground views of volcanic products and processes, and dramatic views of eruptions in progress. Each image is stored as a .PCD file in five resolutions. Brief captions, a location map, and glossary are included.
Don't Look down: Emotional Arousal Elevates Height Perception
ERIC Educational Resources Information Center
Stefanucci, Jeanine K.; Storbeck, Justin
2009-01-01
In a series of experiments, it was found that emotional arousal can influence height perception. In Experiment 1, participants viewed either arousing or nonarousing images before estimating the height of a 2-story balcony and the size of a target on the ground below the balcony. People who viewed arousing images overestimated height and target…
NASA Technical Reports Server (NTRS)
Morey, W. W.
1984-01-01
This report covers the development and testing of a prototype combustor viewing system. The system allows one to see and record images from the inside of an operating gas turbine combustor. The program proceeded through planned phases of conceptual design, preliminary testing to resolve problem areas, prototype design and fabrication, and rig testing. Successful tests were completed with the viewing system in the laboratory, in a high-pressure combustor rig, and on a Pratt and Whitney PW2037 jet engine. Both film and video recordings were made during the tests. Digital image analysis techniques were used to enhance images and bring out special effects. The use of pulsed laser illumination was also demonstrated as a means for observing liner surfaces in the presence of luminous flame.
Elliptical field-of-view PROPELLER imaging.
Devaraj, Ajit; Pipe, James G
2009-09-01
Traditionally two-dimensional scans are designed to support an isotropic field-of-view (iFOV). When imaging elongated objects, significant savings in scan time can potentially be achieved by supporting an elliptical field-of-view (eFOV). This work presents an empirical closed-form solution to adapt the PROPELLER trajectory for an eFOV. The proposed solution is built on the geometry of the PROPELLER trajectory permitting the scan prescription and data reconstruction to remain largely similar to standard PROPELLER. The achieved FOV is experimentally validated by the point spread function (PSF) of a phantom scan. The details of potential savings in scan time and the signal-to-noise ratio (SNR) performance in comparison to iFOV scans for both phantom and in-vivo images are also described.
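The paper's closed-form blade design is not reproduced here, but the underlying idea can be illustrated with the basic Nyquist relation FOV = 1/Δk applied per blade angle: an elliptical support needs dense k-space sampling only along its long axis. A sketch under that assumption (dimensions are hypothetical):

```python
import math

def efov_extent(theta, a, b):
    """Object extent (full width) along direction theta for an ellipse
    with semi-axes a and b; aliasing-free sampling needs spacing 1/extent."""
    return 2.0 * math.sqrt((a * math.cos(theta)) ** 2 + (b * math.sin(theta)) ** 2)

def blade_spacing(theta, a, b):
    # Nyquist k-space sample spacing (cycles per unit length) for this blade angle
    return 1.0 / efov_extent(theta, a, b)

# Elongated object: 24 cm along x, 16 cm along y (semi-axes 12 and 8 cm)
for deg in (0, 45, 90):
    th = math.radians(deg)
    print(deg, round(blade_spacing(th, 12.0, 8.0), 5))
```

Blades sampling across the short axis tolerate coarser spacing, which is where the scan-time savings come from.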
Chen, S C; Shao, C L; Liang, C K; Lin, S W; Huang, T H; Hsieh, M C; Yang, C H; Luo, C H; Wuo, C M
2004-01-01
In this paper, we present a text input system for the seriously disabled that uses lip-image recognition based on LabVIEW. The system is divided into a software subsystem and a hardware subsystem. In the software subsystem, we adopt image-processing techniques to recognize whether the mouth is open or closed, depending on the relative distance between the upper lip and the lower lip. In the hardware subsystem, the parallel port built into the PC transmits the recognized mouth status to the Morse-code text input system. Integrating the software subsystem with the hardware subsystem, we implement a text input system using lip-image recognition programmed in the LabVIEW language. We hope the system can help the seriously disabled to communicate with normal people more easily.
Augmented Reality Imaging System: 3D Viewing of a Breast Cancer.
Douglas, David B; Boone, John M; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene
2016-01-01
To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. A case of breast cancer imaged using contrast-enhanced breast CT (computed tomography) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and a joystick control interface. The augmented reality system demonstrated 3D viewing of the breast mass with head-position tracking, stereoscopic depth perception, focal-point convergence, and a 3D cursor; a joystick-enabled fly-through allowed visualization of the spiculations extending from the breast cancer. The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine its utility in clinical practice.
Feeling Is Believing: Evaluative Conditioning and the Ethics of Pharmaceutical Advertising.
Biegler, Paul; Vargas, Patrick
2016-06-01
A central goal in regulating direct-to-consumer advertising of prescription pharmaceuticals (DTCA) is to ensure that explicit drug claims are truthful. Yet imagery can also alter viewer attitudes, and the degree to which this occurs in DTCA is uncertain. Addressing this data gap, we provide evidence that positive feelings produced by images can promote favourable beliefs about pharmaceuticals. We had participants view a fictitious anti-influenza drug paired with unrelated images that elicited either positive, neutral or negative feelings. Participants who viewed positive images rated the influenza drug as significantly more effective, safe, and beneficial than did participants who viewed negative images. This effect, known as evaluative conditioning, is well described in experimental social psychology but has not previously been shown with pharmaceuticals. We discuss how evaluative conditioning in DTCA may compromise viewer autonomy, and canvass possible regulatory responses.
Three dimensional perspective view of false-color image of eastern Hawaii
NASA Technical Reports Server (NTRS)
1994-01-01
This is a three-dimensional perspective view of a false-color image of the eastern part of the Big Island of Hawaii. It was produced using all three radar frequencies: C-band, L-band, and X-band. This view was constructed by overlaying a SIR-C radar image on a U.S. Geological Survey digital elevation map. The image was acquired on April 12, 1994 during the 52nd orbit of the Shuttle Endeavour by the Spaceborne Imaging Radar-C and X-Band Synthetic Aperture Radar (SIR-C/X-SAR). The area shown is approximately 34 by 57 kilometers, with the top of the image pointing toward the northwest. The image is centered at about 155.25 degrees west longitude and 19.5 degrees north latitude. Visible in the center of the image in blue are the summit crater (Kilauea Caldera), which contains the smaller Halemaumau Crater, and the line of collapsed craters below them that form the Chain of Craters Road. The rain forest appears bright in the image, while green areas correspond to lower vegetation. The lava flows have different…
Correction And Use Of Jitter In Television Images
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Fender, Derek H.; Fender, Antony R. H.
1989-01-01
Proposed system stabilizes jittering television image and/or measures jitter to extract information on motions of objects in image. Alternative version, system controls lateral motion on camera to generate stereoscopic views to measure distances to objects. In another version, motion of camera controlled to keep object in view. Heart of system is digital image-data processor called "jitter-miser", which includes frame buffer and logic circuits to correct for jitter in image. Signals from motion sensors on camera sent to logic circuits and processed into corrections for motion along and across line of sight.
2016-10-18
This perspective view of Charon's informally named "Serenity Chasm" consists of topography generated from stereo reconstruction of images taken by New Horizons' Long Range Reconnaissance Imager (LORRI) and Multispectral Visible Imaging Camera (MVIC), supplemented by a "shape-from-shading" algorithm. The topography is then overlain with the PIA21128 image mosaic and the perspective view is rendered. The MVIC image was taken from a distance of 45,458 miles (73,159 kilometers), while the LORRI picture was taken from 19,511 miles (31,401 kilometers) away, both on July 14, 2015. http://photojournal.jpl.nasa.gov/catalog/PIA21129
Busettini, C; Miles, F A; Schwarz, U; Carl, J R
1994-01-01
Recent experiments on monkeys have indicated that the eye movements induced by brief translation of either the observer or the visual scene are a linear function of the inverse of the viewing distance. For the movements of the observer, the room was dark and responses were attributed to a translational vestibulo-ocular reflex (TVOR) that senses the motion through the otolith organs; for the movements of the scene, which elicit ocular following, the scene was projected and adjusted in size and speed so that the retinal stimulation was the same at all distances. The shared dependence on viewing distance was consistent with the hypothesis that the TVOR and ocular following are synergistic and share central pathways. The present experiments looked for such dependencies on viewing distance in human subjects. When briefly accelerated along the interaural axis in the dark, human subjects generated compensatory eye movements that were also a linear function of the inverse of the viewing distance to a previously fixated target. These responses, which were attributed to the TVOR, were somewhat weaker than those previously recorded from monkeys using similar methods. When human subjects faced a tangent screen onto which patterned images were projected, brief motion of those images evoked ocular following responses that showed statistically significant dependence on viewing distance only with low-speed stimuli (10 degrees/s). This dependence was at best weak and in the reverse direction of that seen with the TVOR, i.e., responses increased as viewing distance increased. We suggest that in generating an internal estimate of viewing distance subjects may have used a confounding cue in the ocular-following paradigm--the size of the projected scene--which was varied directly with the viewing distance in these experiments (in order to preserve the size of the retinal image). 
When movements of the subject were randomly interleaved with the movements of the scene--to encourage the expectation of ego-motion--the dependence of ocular following on viewing distance altered significantly: with higher speed stimuli (40 degrees/s) many responses (63%) now increased significantly as viewing distance decreased, though less vigorously than the TVOR. We suggest that the expectation of motion results in the subject placing greater weight on cues such as vergence and accommodation that provide veridical distance information in our experimental situation: cue selection is context specific.
Pandey, Anil Kumar; Saroha, Kartik; Sharma, Param Dev; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-01-01
In this study, we developed a simple image-processing application in MATLAB that uses suprathreshold stochastic resonance (SSR) and helps the user visualize abdominopelvic tumors on exported prediuretic positron emission tomography/computed tomography (PET/CT) images. A brainstorming session was conducted for requirement analysis for the program. It was decided that the program should load the screen-captured PET/CT images and then produce output images in a window with a slider control that enables the user to view the image that best visualizes the tumor, if present. The program was implemented on a personal computer using Microsoft Windows and MATLAB R2013b. The program has an option for the user to select the input image. For the selected image, it displays output images generated using SSR in a separate window with a slider control. The slider control enables the user to view the images and select the one that seems to provide the best visualization of the area(s) of interest. The developed application enables the user to select, process, and view output images in the process of utilizing SSR to detect the presence of abdominopelvic tumor on a prediuretic PET/CT image.
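For readers unfamiliar with SSR, the core mechanism is simple to sketch: each pixel drives an array of noisy binary threshold units, and averaging their outputs recovers sub-threshold detail. This is a generic illustration, not the MATLAB application described above; the noise level plays the role of the slider.

```python
import random

def ssr_enhance(pixels, noise_sigma, n_units=64, threshold=0.5):
    """Suprathreshold stochastic resonance: each pixel (normalized to [0, 1])
    drives n_units binary threshold detectors with independent Gaussian noise;
    the averaged detector output is the enhanced pixel value."""
    rng = random.Random(0)  # fixed seed for reproducibility
    out = []
    for p in pixels:
        fired = sum(1 for _ in range(n_units)
                    if p + rng.gauss(0.0, noise_sigma) > threshold)
        out.append(fired / n_units)
    return out

# Faint structure entirely below threshold is invisible without noise
faint = [0.40, 0.42, 0.44, 0.46, 0.48]
print(ssr_enhance(faint, 0.0))   # every pixel maps to 0.0
print(ssr_enhance(faint, 0.15))  # added noise yields a graded response
```

Sweeping `noise_sigma`, as the slider does, lets the user pick the level at which the structure of interest is best rendered.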
Large-field-of-view imaging by multi-pupil adaptive optics.
Park, Jung-Hoon; Kong, Lingjie; Zhou, Yifeng; Cui, Meng
2017-06-01
Adaptive optics can correct for optical aberrations. We developed multi-pupil adaptive optics (MPAO), which enables simultaneous wavefront correction over a field of view of 450 × 450 μm² and expands the correction area to nine times that of conventional methods. MPAO's ability to perform spatially independent wavefront control further enables 3D nonplanar imaging. We applied MPAO to in vivo structural and functional imaging in the mouse brain.
NASA's Great Observatories Celebrate the International Year of Astronomy
NASA Technical Reports Server (NTRS)
2009-01-01
In 1609, Galileo improved the newly invented telescope, turned it toward the heavens, and revolutionized our view of the universe. In celebration of the 400th anniversary of this milestone, 2009 has been designated as the International Year of Astronomy. Today, NASA's Great Observatories are continuing Galileo's legacy with stunning images and breakthrough science from the Hubble Space Telescope, the Spitzer Space Telescope, and the Chandra X-ray Observatory. While Galileo observed the sky using visible light seen by the human eye, technology now allows us to observe in many wavelengths, including Spitzer's infrared view and Chandra's view in X-rays. Each wavelength region shows different aspects of celestial objects and often reveals new objects that could not otherwise be studied. This image of the spiral galaxy Messier 101 is a composite of views from Spitzer, Hubble, and Chandra. The red color shows Spitzer's view in infrared light. It highlights the heat emitted by dust lanes in the galaxy where stars can form. The yellow color is Hubble's view in visible light. Most of this light comes from stars, and they trace the same spiral structure as the dust lanes. The blue color shows Chandra's view in X-ray light. Sources of X-rays include million-degree gas, exploded stars, and material colliding around black holes. Such composite images allow astronomers to see how features seen in one wavelength match up with those seen in another wavelength. It's like seeing with a camera, night vision goggles, and X-ray vision all at once. In the four centuries since Galileo, astronomy has changed dramatically. Yet our curiosity and quest for knowledge remain the same. So, too, does our wonder at the splendor of the universe. The International Year of Astronomy Great Observatories Image Unveiling is supported by the NASA Science Mission Directorate Astrophysics Division.
The project is a collaboration between the Space Telescope Science Institute, the Spitzer Science Center, and the Chandra X-ray Center.
3-D Perspective View, Kamchatka Peninsula, Russia
2000-03-23
This perspective view shows the western side of the volcanically active Kamchatka Peninsula in eastern Russia. The image was generated using the first data collected during NASA's Shuttle Radar Topography Mission (SRTM).
2011-04-04
This image layout shows two views of the same baby star from NASA's Spitzer Space Telescope. The Spitzer view shows that this star has a second, identical jet shooting off in the opposite direction from the first.
Combined multi-spectrum and orthogonal Laplacianfaces for fast CB-XLCT imaging with single-view data
NASA Astrophysics Data System (ADS)
Zhang, Haibo; Geng, Guohua; Chen, Yanrong; Qu, Xuan; Zhao, Fengjun; Hou, Yuqing; Yi, Huangjian; He, Xiaowei
2017-12-01
Cone-beam X-ray luminescence computed tomography (CB-XLCT) is an attractive hybrid imaging modality, which has the potential of monitoring the metabolic processes of nanophosphor-based drugs in vivo. Single-view data reconstruction, a key issue in CB-XLCT imaging, promotes the effective study of dynamic XLCT imaging. However, it suffers from serious ill-posedness in the inverse problem. In this paper, a multi-spectrum strategy is adopted to relieve the ill-posedness of the reconstruction. The strategy is based on the third-order simplified spherical harmonics approximation model. Then, an orthogonal Laplacianfaces-based method is proposed to reduce the large computational burden without degrading the imaging quality. Both simulated data and in vivo experimental data were used to evaluate the efficiency and robustness of the proposed method. The results are satisfactory in terms of both localization and quantitative recovery with computational efficiency, indicating that the proposed method is practical and promising for single-view CB-XLCT imaging.
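To make the subspace idea concrete: restricting the unknown image to a low-dimensional basis turns an underdetermined inverse problem into a small, well-posed one. The sketch below uses a random orthonormal basis as a stand-in for the orthogonal Laplacianfaces basis; the basis choice and the Tikhonov term are assumptions for illustration only.

```python
import numpy as np

def subspace_reconstruct(A, b, V, alpha=1e-3):
    """Solve the ill-posed system A x ~= b by restricting x to the column
    space of V (x = V y) with a small Tikhonov regularizer on y."""
    AV = A @ V
    y = np.linalg.solve(AV.T @ AV + alpha * np.eye(V.shape[1]), AV.T @ b)
    return V @ y

rng = np.random.default_rng(0)
n, m, k = 200, 40, 5                               # unknowns, measurements, basis size
V, _ = np.linalg.qr(rng.standard_normal((n, k)))   # orthonormal basis (stand-in)
x_true = V @ rng.standard_normal(k)                # ground truth lies in the subspace
A = rng.standard_normal((m, n))                    # underdetermined forward model
b = A @ x_true
x_hat = subspace_reconstruct(A, b, V)
print(round(float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)), 4))
```

With far fewer measurements than unknowns (m = 40, n = 200), the plain least-squares problem is hopeless, yet the 5-dimensional subspace problem recovers the truth almost exactly.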
Want, Stephen C; Saiphoo, Alyssa
2017-03-01
The present study investigated whether social comparisons with media images are cognitively efficient (demanding minimal mental effort) or cognitively effortful processes, in a sample of female undergraduate students (N=151) who reported feeling pressure from the media regarding their appearance. Two groups were shown 12 images of thin and attractive female models. One group was asked to memorize a complex 8-digit number during exposure to the images (Cognitively Busy condition), while the other memorized a much simpler number (Free View condition). A third group (Control condition) viewed images without people. Participants in the Free View condition demonstrated significantly increased negative mood and lowered appearance satisfaction from before to after exposure, while participants in the Cognitively Busy and Control conditions did not. We argue that these results suggest social comparisons with media images are at least somewhat cognitively effortful even among women who say they feel pressure from the media. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Xie, Hongbo; Ren, Delun; Wang, Chao; Mao, Chensheng; Yang, Lei
2018-02-01
Ultrafast time stretch imaging offers unprecedented imaging speed and enables new discoveries in scientific research and engineering. One challenge in exploiting time stretch imaging in the mid-infrared is the lack of high-quality diffractive optical elements (DOEs), which encode the image information into the mid-infrared optical spectrum. This work reports the design and optimization of mid-infrared DOEs with high diffraction efficiency, broad bandwidth, and large field of view. Using various typical materials with refractive indices ranging from 1.32 to 4.06 in the mid-infrared band, diffraction efficiencies of single-layer and double-layer DOEs have been studied in different wavelength bands with different fields of view. More importantly, by replacing the air gap of the double-layer DOE with carefully selected optical materials, an optimized triple-layer DOE, with efficiency higher than 95% across the whole mid-infrared window and a large field of view, is designed and analyzed. This new DOE device holds great potential in ultrafast mid-infrared time stretch imaging and spectroscopy.
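As background for why single-layer DOEs struggle over a broad band, scalar diffraction theory gives a simple efficiency formula for a kinoform operated away from its design wavelength. This textbook relation (not the paper's optimized triple-layer model) can be sketched as:

```python
import math

def sinc(x):
    # normalized sinc: sin(pi x) / (pi x)
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def kinoform_efficiency(lam, lam0, n, n0):
    """First-order scalar diffraction efficiency of a single-layer kinoform
    etched for design wavelength lam0, with refractive index n at lam and
    n0 at lam0. Standard scalar theory, ignoring shadowing effects."""
    phase = (lam0 / lam) * (n - 1.0) / (n0 - 1.0)
    return sinc(1.0 - phase) ** 2

# Dispersionless illustration (n held fixed): efficiency peaks only at lam0,
# wavelengths are hypothetical mid-IR values in micrometers
for lam in (3.0, 4.0, 5.0):
    print(lam, round(kinoform_efficiency(lam, 4.0, 1.5, 1.5), 3))
```

The efficiency falls off quickly away from the design wavelength, which is the drop that multilayer designs with matched materials are meant to suppress.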
X-RAY IMAGING Achieving the third dimension using coherence
Robinson, Ian; Huang, Xiaojing
2017-01-25
X-ray imaging is extensively used in medicine and materials science. Traditionally, the depth dimension is obtained by turning the sample to gain different views. The famous penetrating properties of X-rays mean that projection views of the subject sample can be readily obtained in the linear absorption regime. 180 degrees of projections can then be combined using computed tomography (CT) methods to obtain a full 3D image, a technique extensively used in medical imaging. In the work now presented in Nature Materials, Stephan Hruszkewycz and colleagues have demonstrated genuine 3D imaging by a new method called 3D Bragg projection ptychography. Their approach combines the 'side view' capability of Bragg diffraction from a crystalline sample with the coherence capabilities of ptychography. Thus, it results in a 3D image from a 2D raster scan of a coherent beam across a sample that does not have to be rotated.
Perspective View with Landsat Overlay, Salt Lake City Olympics Venues, Utah
NASA Technical Reports Server (NTRS)
2002-01-01
The 2002 Winter Olympics are hosted by Salt Lake City at several venues within the city, in nearby cities, and within the adjacent Wasatch Mountains. This computer generated perspective image provides a northward looking 'view from space' that includes all of these Olympic sites. In the south, next to Utah Lake, Provo hosts the ice hockey competition. In the north, northeast of the Great Salt Lake, Ogden hosts curling, and the nearby Snow Basin ski area hosts the downhill events. In between, southeast of the Great Salt Lake, Salt Lake City hosts the Olympic Village and the various skating events. Further east, across the Wasatch Mountains, the Park City area ski resorts host the bobsled, ski jumping, and snowboarding events. The Winter Olympics are always hosted in mountainous terrain. This view shows the dramatic landscape that makes the Salt Lake City region a world-class center for winter sports.
This 3-D perspective view was generated using topographic data from the Shuttle Radar Topography Mission (SRTM) and a Landsat 5 satellite image mosaic. Topographic expression is exaggerated four times. Landsat has been providing visible and infrared views of the Earth since 1972. SRTM elevation data matches the 30-meter (98-foot) resolution of most Landsat images and will substantially help in analyzing the large and growing Landsat image archive, managed by the U.S. Geological Survey (USGS). Elevation data used in this image was acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense, and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C. Size: view width 48.8 kilometers (30.2 miles), view distance 177 kilometers (110 miles). Location: 41 deg. North lat., 112.0 deg. West lon. Orientation: view north, 20 degrees below horizontal. Image data: Landsat bands 3, 2, 1 as red, green, blue, respectively. Original data resolution: SRTM 1 arcsecond (30 meters or 98 feet), Thematic Mapper 30 meters (98 feet). Date acquired: February 2000 (SRTM), 1990s (Landsat 5 image mosaic)
The Blue Marble 43 Years Later
Atmospheric Science Data Center
2017-05-16
article title: The Blue Marble 43 Years Later ... points over more than four decades apart. The iconic "Blue Marble" view on the left was taken 43 years ago on December 7, 1972 from ...
ERIC Educational Resources Information Center
Nielsen, Dorte Guldbrand; Gotzsche, Ole; Sonne, Ole; Eika, Berit
2012-01-01
Two major views on the relationship between basic science knowledge and clinical knowledge stand out: the two-world view, which sees basic science and clinical science as two separate knowledge bases, and the encapsulated-knowledge view, which states that basic science knowledge plays an overt role, being encapsulated in the clinical knowledge. However, recent…
Color and 3D views of the Sierra Nevada mountains
NASA Technical Reports Server (NTRS)
2002-01-01
A stereo 'anaglyph' created using the nadir and 45.6-degree forward-viewing cameras provides a three-dimensional view of the scene when viewed with red/blue glasses. The red filter should be placed over your left eye. To facilitate the stereo viewing, the images have been oriented with north toward the left. Some prominent features are Mono Lake, in the center of the image; Walker Lake, to its left; and Lake Tahoe, near the lower left. This view of the Sierra Nevadas includes Yosemite, Kings Canyon, and Sequoia National Parks. Mount Whitney, the highest peak in the contiguous 48 states (elev. 14,495 feet), is visible near the righthand edge. Above it (to the east), the Owens Valley shows up prominently between the Sierra Nevada and Inyo ranges. Precipitation falling as rain or snow on the Sierras feeds numerous rivers flowing southwestward into the San Joaquin Valley. The abundant fields of this productive agricultural area can be seen along the lower right; a large number of reservoirs that supply water for crop irrigation are apparent in the western foothills of the Sierras. Urban areas in the valley appear as gray patches; among the California cities that are visible are Fresno, Merced, and Modesto.
NASA Technical Reports Server (NTRS)
Post, R. B.; Welch, R. B.
1996-01-01
Visually perceived eye level (VPEL) was measured while subjects viewed two vertical lines that were either upright or pitched about the horizontal axis. In separate conditions, the display consisted of a relatively large pair of lines viewed at a distance of 1 m, or a display scaled to one third the dimensions and viewed at a distance of either 1 m or 33.3 cm. The small display viewed at 33.3 cm produced a retinal image the same size as that of the large display at 1 m. Pitching each of the three displays top-toward or top-away from the observer caused upward or downward VPEL shifts, respectively. These effects were highly similar for the large display and the small display viewed at 33.3 cm (i.e., equal retinal size), but were significantly smaller for the small display viewed at 1 m. In a second experiment, perceived size of the three displays was measured and found to be highly accurate. The results of the two experiments indicate that the effect of optical pitch on VPEL depends on the retinal image size of stimuli rather than on perceived size.
Image of God: effect on coping and psychospiritual outcomes in early breast cancer survivors.
Schreiber, Judith A
2011-05-01
To examine the effect of breast cancer survivors' views of God on religious coping strategies, depression, anxiety, stress, concerns about recurrence, and psychological well-being. Exploratory, cross-sectional, comparative survey. Outpatients from community and university oncology practices in the southeastern United States. 130 early breast cancer survivors (6-30 months postdiagnosis). Self-report written survey packets were mailed to practice-identified survivors. Image of God, religious coping strategies, depression, anxiety, stress, concerns about recurrence, and psychological well-being. Women who viewed God as highly engaged used more coping strategies to promote spiritual conservation in proportion to coping strategies that reflect spiritual struggle. Women who viewed God as highly engaged maintained psychological well-being when either spiritual conservation or spiritual struggle coping styles were used. No differences in variables were noted for women who viewed God as more or less angry. The belief in an engaged God is significantly related to increased psychological well-being, decreased psychological distress, and decreased concern about recurrence. Addressing survivors' issues related to psychological adjustment and concern about recurrence within their world view would allow for more personalized and effective interventions. Future research should be conducted to establish how the view that God is engaged affects coping and psychological adjustment across diverse groups of cancer survivors and groups with monotheistic, polytheistic, and naturalistic world views. This could lead to a practical method for examining the influence of these world views on individuals' responses to cancer diagnosis, treatment, and survivorship.
"The swarming of life": moving images, education, and views through the microscope.
Gaycken, Oliver
2011-09-01
Discussions of the scientific uses of moving-image technologies have emphasized applications that culminated in static images, such as the chronophotographic decomposition of movement into discrete and measurable instants. The projection of movement, however, was also an important capability of moving-image technologies that scientists employed in a variety of ways. Views through the microscope provide a particularly sustained and prominent instance of the scientific uses of the moving image. The category of "education" subsumes these various scientific uses, providing a means by which to bridge the cultures of scientific and popular scientific moving images.
Is there a preference for linearity when viewing natural images?
NASA Astrophysics Data System (ADS)
Kane, David; Bertalmío, Marcelo
2015-01-01
The system gamma of the imaging pipeline, defined as the product of the encoding and decoding gammas, is typically greater than one and is stronger for images viewed with a dark background (e.g. cinema) than those viewed in lighter conditions (e.g. office displays) [1-3]. However, for high dynamic range (HDR) images reproduced on a low dynamic range (LDR) monitor, subjects often prefer a system gamma of less than one [4], presumably reflecting the greater need for histogram equalization in HDR images. In this study we ask subjects to rate the perceived quality of images presented on a LDR monitor using various levels of system gamma. We reveal that the optimal system gamma is below one for images with a HDR and approaches or exceeds one for images with a LDR. Additionally, the highest quality scores occur for images where a system gamma of one is optimal, suggesting a preference for linearity (where possible). We find that subjective image quality scores can be predicted by computing the degree of histogram equalization of the lightness distribution. Accordingly, an optimal, image dependent system gamma can be computed that maximizes perceived image quality.
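The predictor described in the last sentence can be sketched as follows; the specific metric (normalized histogram entropy of lightness) is an assumption standing in for the paper's measure of histogram equalization:

```python
import math

def apply_system_gamma(lightness, gamma):
    """Apply an end-to-end (encoding x decoding) gamma to lightness in [0, 1]."""
    return [v ** gamma for v in lightness]

def equalization_score(values, bins=8):
    """Normalized Shannon entropy of the lightness histogram (1 = fully
    equalized); a stand-in for the paper's histogram-equalization measure."""
    counts = [0] * bins
    for v in values:
        counts[min(int(v * bins), bins - 1)] += 1
    n = len(values)
    h = -sum(c / n * math.log(c / n) for c in counts if c)
    return h / math.log(bins)

# An HDR-like image crowded into the shadows: a system gamma below one
# spreads the histogram out and scores as better equalized
hdr = [0.003 * i for i in range(1, 101)]          # values packed into (0, 0.3]
scores = {g: equalization_score(apply_system_gamma(hdr, g)) for g in (0.5, 1.0, 1.5)}
print({g: round(s, 3) for g, s in scores.items()})
```

Scanning gamma for the value that maximizes this score mirrors the paper's idea of an image-dependent optimal system gamma.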
View angle effect in LANDSAT imagery
NASA Technical Reports Server (NTRS)
Kaneko, T.; Engvall, J. L.
1977-01-01
The view angle effect in LANDSAT 2 imagery was investigated. The LANDSAT multispectral scanner scans over a range of view angles of -5.78 to 5.78 degrees. The view angle effect, which is caused by differing view angles, could be studied by comparing data collected at different view angles over a fixed location at a fixed time. Since such LANDSAT data is not available, consecutive day acquisition data were used as a substitute: they were collected over the same geographical location, acquired 24 hours apart, with a view angle change of 7 to 8 degrees at a latitude of 35 to 45 degrees. It is shown that there is approximately a 5% reduction in the average sensor response on the second-day acquisitions as compared with the first-day acquisitions, and that the view angle effect differs field to field and crop to crop. On false infrared color pictures the view angle effect causes changes primarily in brightness and to a lesser degree in color (hue and saturation). An implication is that caution must be taken when images with different view angles are combined for classification and a signature extension technique needs to take the view angle effect into account.
MISR Sees the Sierra Nevadas in Stereo
NASA Technical Reports Server (NTRS)
2000-01-01
These MISR images of the Sierra Nevada mountains near the California-Nevada border were acquired on August 12, 2000 during Terra orbit 3472. On the left is an image from the vertical-viewing (nadir) camera. On the right is a stereo 'anaglyph' created using the nadir and 45.6-degree forward-viewing cameras, providing a three-dimensional view of the scene when viewed with red/blue glasses. The red filter should be placed over your left eye. To facilitate the stereo viewing, the images have been oriented with north toward the left. Some prominent features are Mono Lake, in the center of the images; Walker Lake, to its left; and Lake Tahoe, near the lower left. This view of the Sierra Nevadas includes Yosemite, Kings Canyon, and Sequoia National Parks. Mount Whitney, the highest peak in the contiguous 48 states (elev. 14,495 feet), is visible near the right-hand edge. Above it (to the east), the Owens Valley shows up prominently between the Sierra Nevada and Inyo ranges. Precipitation falling as rain or snow on the Sierras feeds numerous rivers flowing southwestward into the San Joaquin Valley. The abundant fields of this productive agricultural area can be seen along the lower right; a large number of reservoirs that supply water for crop irrigation are apparent in the western foothills of the Sierras. Urban areas in the valley appear as gray patches; among the California cities that are visible are Fresno, Merced, and Modesto. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
2013-01-01
Introduction This prospective study aimed to assess whether use of the subxiphoid acoustic window in transthoracic echocardiography (TTE) can be an accurate alternative in the absence of an apical view to assess hemodynamic parameters. Methods This prospective study took place in a teaching hospital medical ICU. Over a 4-month period, TTE was performed in patients admitted for more than 24 hours. Two operators rated the quality of parasternal, apical, and subxiphoid acoustic windows as Excellent, Good, Acceptable, Poor, or No image. In the subpopulation presenting adequate (rated as acceptable or higher) apical and subxiphoid views, we compared the left ventricular ejection fraction (LVEF), the ratio between right and left ventricular end-diastolic areas (RVEDA/LVEDA), the ratio between early and late mitral inflow on pulsed Doppler (E/A ratio), the aortic velocity time integral (Ao VTI), and the ratio between early mitral inflow and displacement of the mitral annulus on tissue Doppler imaging (E/Ea ratio). Results An adequate apical view was obtained in 80%, and an adequate subxiphoid view was obtained in 63% of the 107 patients included. Only 5% of patients presented an adequate subxiphoid view without an adequate apical view. In the subpopulation of patients with adequate apical and subxiphoid windows (n = 65), LVEF, E/A, and RVEDA/LVEDA were comparable on both views, and were strongly correlated (r > 0.80) with acceptable biases and precision. However, the Ao VTI and the E/Ea ratio were lower on the subxiphoid view than on the apical view (18 ± 5 versus 16 ± 5 cm and 9.6 ± 4.6 versus 7.6 ± 4 cm, respectively, P = 0.001 for both). Conclusions An adequate TTE subxiphoid window was obtained in fewer than two thirds of ICU patients. 
In addition to the classic indication for the subxiphoid window to study the vena cava and pericardium, this view can be used to study right and left ventricular morphology and function, but it does not provide accurate hemodynamic Doppler information. ICU echocardiographers should therefore record both apical and subxiphoid views to comprehensively assess cardiac function and hemodynamic status. PMID:24004960
Biocular vehicle display optical designs
NASA Astrophysics Data System (ADS)
Chu, H.; Carter, Tom
2012-06-01
A biocular vehicle display optic is a fast collimating lens (f/# < 0.9) that presents the image of the display at infinity to both eyes of the viewer. Each eye captures the scene independently, and the brain merges the two images into one through the overlapping portions of the images. With the recent conversion from analog CRT-based displays to lighter, more compact active-matrix organic light-emitting diode (AMOLED) digital image sources, display optical designs have evolved to take advantage of the higher-resolution AMOLED image sources. To maximize the field of view of the display optics and fully resolve the smaller pixels, the digital image source is pre-magnified by relay optics or a coherent taper fiber optics plate. Coherent taper fiber optics plates are used extensively to: 1. Convert plano focal planes to spherical focal planes in order to eliminate Petzval field curvature; this elimination enables faster lens speed and/or a larger field of view for eyepieces and display optics. 2. Provide pre-magnification to lighten the workload of the optics, further increasing the numerical aperture and/or field of view. 3. Improve light flux collection efficiency and field of view by collecting all the light emitted by the image source and guiding imaging light bundles toward the lens aperture stop. 4. Reduce the complexity of the optical design and the overall packaging volume by replacing pre-magnification optics with a compact taper fiber optics plate. This paper will review and compare the performance of biocular vehicle display designs with and without a taper fiber optics plate.
2012-08-17
The Curiosity engineering team created this cylindrical projection view from images taken by NASA's Curiosity rover's front hazard avoidance cameras underneath the rover deck on Sol 0. Pictured here are the rover's pigeon-toed wheels.
2010-05-26
NASA's Cassini spacecraft looks toward the limb of Saturn and, on the right of this image, views part of the rings through the planet's atmosphere. Saturn's atmosphere can distort the view of the rings from some angles.
2014-12-05
From about three times the distance from Earth to the moon, NASA's Dawn spacecraft spies its final destination -- the dwarf planet Ceres. The resolution of this image does not yet exceed the best views of Ceres, which were obtained by the Hubble Space Telescope (see PIA10235). Nonetheless, Ceres' spherical shape is clearly revealed here. Sunlight illuminates the dwarf planet from the right, leaving a sliver of the surface in shadow at left. A zoomed-in view is provided in Figure 1, along with the original unmagnified, uncropped view. The image was taken on Dec. 1, 2014 with the Dawn spacecraft's framing camera, using a clear spectral filter. Dawn was about 740,000 miles (1.2 million kilometers) from Ceres at the time. Ceres is 590 miles (950 kilometers) across and was discovered in 1801. http://photojournal.jpl.nasa.gov/catalog/PIA19049
NASA Technical Reports Server (NTRS)
West, E. A.
1993-01-01
Magnetographs, which measure polarized light, allow solar astronomers to infer the magnetic field intensity on the Sun. The Marshall Space Flight Center (MSFC) Vector Magnetograph is such an imaging instrument. The instrument requires rapid modulation between polarization states to minimize seeing effects. The accuracy of those polarization measurements is dependent on stable modulators with small field-of-view errors. Although these devices are very important in ground-based telescopes, extending the field of view of electro-optical crystals such as KD*Ps (potassium di-deuterium phosphate) could encourage the development of these devices for other imaging applications. The work that was done at MSFC as part of the Center Director's Discretionary Fund (CDDF) to reduce the field-of-view errors of instruments that use KD*P modulators in their polarimeters is described.
Web-based CERES Clouds QC Property Viewing Tool
NASA Astrophysics Data System (ADS)
Smith, R. A.
2015-12-01
Churngwei Chu (1), Rita Smith (1), Sunny Sun-Mack (1), Yan Chen (1), Elizabeth Heckert (1), Patrick Minnis (2); (1) Science Systems and Applications, Inc., Hampton, Virginia; (2) NASA Langley Research Center, Hampton, Virginia. This presentation will display the capabilities of a web-based CERES cloud property viewer. Aqua/Terra/NPP data will be chosen for examples. It will demonstrate viewing of cloud properties in gridded global maps, histograms, time series displays, latitudinal zonal images, binned data charts, data frequency graphs, and ISCCP plots. Images can be manipulated by the user to narrow boundaries of the map as well as color bars and value ranges, compare datasets, view data values, and more. Other atmospheric studies groups will be encouraged to put their data into the underlying NetCDF data format and view their data with the tool.
Crosstalk in automultiscopic 3-D displays: blessing in disguise?
NASA Astrophysics Data System (ADS)
Jain, Ashish; Konrad, Janusz
2007-02-01
Most 3-D displays suffer from interocular crosstalk, i.e., the perception of an unintended view in addition to the intended one. The resulting "ghosting" at high-contrast object boundaries is objectionable and interferes with depth perception. In automultiscopic (no glasses, multiview) displays using microlenses or a parallax barrier, the effect is compounded since several unintended views may be perceived at once. However, we recently discovered that crosstalk in automultiscopic displays can also be beneficial. Since the spatial multiplexing of views that prepares a composite image for automultiscopic viewing involves sub-sampling, prior anti-alias filtering is required. To date, anti-alias filter design has ignored the presence of crosstalk in automultiscopic displays. In this paper, we propose a simple multiplexing model that takes crosstalk into account. Using this model we derive a mathematical expression for the spectrum of a single view with crosstalk, and we show that it leads to reduced spectral aliasing compared to the crosstalk-free case. We then propose a new criterion for the characterization of the ideal anti-alias pre-filter. In the experimental part, we describe a simple method to measure optical crosstalk between views using a digital camera. We use the measured crosstalk parameters to find the ideal frequency response of the anti-alias filter, and we design practical digital filters approximating this response. Having applied the designed filters to a number of multiview images prior to multiplexing, we conclude that, due to their increased bandwidth, the filters lead to visibly sharper 3-D images without increasing aliasing artifacts.
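The claim that crosstalk reduces spectral aliasing can be illustrated with a toy model: if crosstalk mixes each view with its two spatial neighbours via a symmetric kernel [c, 1-2c, c], the mixing acts like a low-pass pre-filter whose response is weakest near the highest representable frequency, exactly where aliases originate after sub-sampling. The kernel and the normalized-frequency convention below are assumptions for illustration, not the paper's actual multiplexing model:

```python
import math

def crosstalk_response(c, f):
    """Magnitude response at normalized frequency f (cycles per sample)
    of the symmetric crosstalk kernel [c, 1 - 2c, c] that blends a view
    with its two neighbours during spatial multiplexing."""
    return abs((1 - 2 * c) + 2 * c * math.cos(2 * math.pi * f))

# Near the sub-sampling Nyquist frequency (f = 0.5), where aliasing
# originates, even 10% crosstalk attenuates the signal noticeably,
# while DC (f = 0) passes unchanged.
print(crosstalk_response(0.0, 0.5))  # no crosstalk: unity response
print(crosstalk_response(0.1, 0.5))  # about 0.6
print(crosstalk_response(0.1, 0.0))  # DC preserved
```

Because the replicated spectra created by sub-sampling now carry attenuated high-frequency content, they overlap less, which is why the anti-alias pre-filter can afford the wider passband (and hence sharper images) the abstract reports.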
Multi-image CAD employing features derived from ipsilateral mammographic views
NASA Astrophysics Data System (ADS)
Good, Walter F.; Zheng, Bin; Chang, Yuan-Hsiang; Wang, Xiao Hui; Maitz, Glenn S.; Gur, David
1999-05-01
On mammograms, certain kinds of features related to masses (e.g., location, texture, degree of spiculation, and integrated density difference) tend to be relatively invariant, or at least predictable, with respect to breast compression. Thus, ipsilateral pairs of mammograms may contain information not available from analyzing single views separately. To demonstrate the feasibility of incorporating multi-view features into a CAD algorithm, 'single-image' CAD was applied to each individual image in a set of 60 ipsilateral studies, after which all possible pairs of suspicious regions, consisting of one from each view, were formed. For these 402 pairs we defined and evaluated 'multi-view' features such as: (1) relative position of the centers of regions; (2) ratio of the lengths of region projections parallel to the nipple axis lines; (3) ratio of integrated contrast difference; (4) ratio of the sizes of the suspicious regions; and (5) a measure of the relative complexity of region boundaries. Each pair was identified either as a 'true positive/true positive' (T) pair (i.e., two regions which are projections of the same actual mass) or as a falsely associated pair (F). Distributions for each feature were calculated. A Bayesian network was trained and tested to classify pairs of suspicious regions based exclusively on the multi-view features described above. Distributions for all features were significantly different for T versus F pairs, as indicated by likelihood ratios. Performance of the Bayesian network, measured by ROC analysis, indicates a significant ability to distinguish between T pairs and F pairs (Az equals 0.82 +/- 0.03), using information that is attributed to the multi-view content. This study is the first demonstration that a significant amount of spatial information can be derived from ipsilateral pairs of mammograms.
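The five multi-view feature classes can be sketched as simple symmetric ratios computed from per-region measurements in the two views. The field names, the scalar "center" coordinate (distance along the nipple axis), and the exact ratio definitions below are illustrative stand-ins, not the paper's definitions:

```python
def pair_features(region_cc, region_mlo):
    """Multi-view features for one (CC, MLO) pair of suspicious regions.

    Each region is a dict of per-view measurements. The ratios are made
    symmetric (min/max) so that swapping the two views gives the same
    feature vector.
    """
    def ratio(a, b):
        return min(a, b) / max(a, b)
    return {
        "center_offset": abs(region_cc["center"] - region_mlo["center"]),
        "projection_ratio": ratio(region_cc["axis_projection"],
                                  region_mlo["axis_projection"]),
        "contrast_ratio": ratio(region_cc["contrast"], region_mlo["contrast"]),
        "size_ratio": ratio(region_cc["size"], region_mlo["size"]),
        "complexity_ratio": ratio(region_cc["complexity"],
                                  region_mlo["complexity"]),
    }

# Hypothetical measurements for one suspicious region in each view.
cc  = {"center": 42.0, "axis_projection": 18.0, "contrast": 0.30, "size": 110, "complexity": 1.4}
mlo = {"center": 45.0, "axis_projection": 20.0, "contrast": 0.27, "size": 125, "complexity": 1.5}
print(pair_features(cc, mlo))
```

A T pair (projections of the same mass) would tend to have ratios near 1 and a small center offset, which is what lets a classifier such as the Bayesian network separate T from F pairs.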
Adjustable-Viewing-Angle Endoscopic Tool for Skull Base and Brain Surgery
NASA Technical Reports Server (NTRS)
Bae, Youngsam; Liao, Anna; Manohara, Harish; Shahinian, Hrayr
2008-01-01
The term Multi-Angle and Rear Viewing Endoscopic tooL (MARVEL) denotes an auxiliary endoscope, now undergoing development, that a surgeon would use in conjunction with a conventional endoscope to obtain additional perspective. The role of the MARVEL in endoscopic brain surgery would be similar to the role of a mouth mirror in dentistry. Such a tool is potentially useful for in-situ planetary geology applications for the close-up imaging of unexposed rock surfaces in cracks or those not in the direct line of sight. A conventional endoscope provides mostly a frontal view, that is, a view along its longitudinal axis and, hence, along a straight line extending from an opening through which it is inserted. The MARVEL could be inserted through the same opening as that of the conventional endoscope, but could be adjusted to provide a view from almost any desired angle. The MARVEL camera image would be displayed, on the same monitor as that of the conventional endoscopic image, as an inset within the conventional endoscopic image. For example, while viewing a tumor from the front in the conventional endoscopic image, the surgeon could simultaneously view the tumor from the side or the rear in the MARVEL image, and could thereby gain additional visual cues that would aid in precise three-dimensional positioning of surgical tools to excise the tumor. Indeed, a side or rear view through the MARVEL could be essential in a case in which the object of surgical interest was not visible from the front. The conceptual design of the MARVEL exploits the surgeon's familiarity with endoscopic surgical tools. The MARVEL would include a miniature electronic camera and miniature radio transmitter mounted on the tip of a surgical tool derived from an endo-scissor (see figure). The inclusion of the radio transmitter would eliminate the need for wires, which could interfere with manipulation of this and other surgical tools.
The handgrip of the tool would be connected to a linkage similar to that of an endo-scissor, but the linkage would be configured to enable adjustment of the camera angle instead of actuation of a scissor blade. It is envisioned that thicknesses of the tool shaft and the camera would be less than 4 mm, so that the camera-tipped tool could be swiftly inserted and withdrawn through a dime-size opening. Electronic cameras having dimensions of the order of millimeters are already commercially available, but their designs are not optimized for use in endoscopic brain surgery. The variety of potential endoscopic, thoracoscopic, and laparoscopic applications can be expected to increase as further development of electronic cameras yields further miniaturization and improvements in imaging performance.
Timmers, Janine; Voorde, Marloes Ten; Engen, Ruben E van; Landsveld-Verhoeven, Cary van; Pijnappel, Ruud; Greve, Kitty Droogh-de; Heeten, Gerard J den; Broeders, Mireille J M
2015-10-01
To compare projected breast area, image quality, pain experience and radiation dose between mammography performed with and without radiolucent positioning sheets. 184 women screened in the Dutch breast screening programme (May-June 2012) provided written informed consent to have one additional image taken with positioning sheets. 5 cases were excluded (missing data). Pain was scored using the Numeric Rating Scale. Radiation dose was estimated using the Dance model and projected breast area using computer software. Two radiologists and two radiographers assessed image quality. With positioning sheets significantly more pectoral muscle, lateral and medial breast tissue was projected (CC-views) and more and deeper depicted pectoral muscle (MLO-views). In contrast, visibility of white and darker areas was better on images without positioning sheets, radiologists were therefore better able to detect abnormalities (MLO-views). Women experienced more pain with positioning sheets (MLO-views only, mean difference NRS 0.98; SD 1.71; p = 0.00). Mammograms with positioning sheets showed more breast tissue. Increased breast thickness after compression with sheets resulted in less visibility of white and darker areas and thus reduced detection of abnormalities. Also, women experienced more pain (MLO-views) due to the sheet material. A practical consideration is the fact that more subcutaneous fat tissue and skin are being pulled forward leading to folds in the nipple area. On balance, improvement to the current design is required before implementation in screening practice can be considered. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Earth Observations taken by Expedition 38 crewmember
2013-11-16
ISS038-E-005515 (16 Nov. 2013) --- Activity at Kliuchevskoi Volcano on Kamchatka Peninsula in the Russian Federation is featured in this image photographed by an Expedition 38 crew member on the International Space Station. When viewing conditions are favorable, crew members onboard the space station can take unusual and striking images of Earth. This photograph provides a view of an eruption plume emanating from Kliuchevskoi Volcano, one of the many active volcanoes on the Kamchatka Peninsula. Nadir views – looking “straight down”—that are typical of orbital satellite imagery tend to flatten the appearance of the landscape by reducing the sense of three dimensions of the topography. In contrast, this image was taken from the ISS with a very oblique viewing angle that gives a strong sense of three dimensions, which is accentuated by the shadows cast by the volcanic peaks. This resulted in a view similar to what a person might see from a low-altitude airplane. The image was taken when the space station was located over a ground position more than 1,500 kilometers to the southwest. The plume – likely a combination of steam, volcanic gases, and ash – is extended to the east-southeast by prevailing winds; the dark region to the north-northwest of the plume is likely a product of both shadow and ash settling out. Several other volcanoes are visible in the image, including Ushkovsky, Tolbachik, Zimina, and Udina. To the south-southwest of Kliuchevskoi lies Bezymianny Volcano which appears to be emitting a small steam plume (visible at center).
NASA Astrophysics Data System (ADS)
van der Linde, Ian; Rajashekar, Umesh; Cormack, Lawrence K.; Bovik, Alan C.
2005-03-01
Recent years have seen a resurgent interest in eye movements during natural scene viewing. Aspects of eye movements that are driven by low-level image properties are of particular interest due to their applicability to biologically motivated artificial vision and surveillance systems. In this paper, we report an experiment in which we recorded observers' eye movements while they viewed calibrated greyscale images of natural scenes. Immediately after viewing each image, observers were shown a test patch and asked to indicate if they thought it was part of the image they had just seen. The test patch was either randomly selected from a different image from the same database or, unbeknownst to the observer, selected from either the first or last location fixated on the image just viewed. We find that several low-level image properties differed significantly relative to the observers' ability to successfully designate each patch. We also find that the differences between patch statistics for first and last fixations are small compared to the differences between hit and miss responses. The goal of the paper was to measure, in a non-cognitive natural setting, the image properties that facilitate visual memory, additionally observing the role that the temporal location (first or last fixation) of the test patch played. We propose that a memorability map of a complex natural scene may be constructed to represent the low-level memorability of local regions, in a similar fashion to the familiar saliency map, which records bottom-up fixation attractors.
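The "low-level image properties" compared between hit and miss patches can be illustrated with two common choices, mean luminance and RMS contrast; the abstract does not enumerate its exact statistic set, so these two are illustrative assumptions:

```python
def patch_stats(patch):
    """Mean luminance and RMS contrast of a greyscale patch, given as a
    list of rows of luminance values. RMS contrast is the standard
    deviation of luminance divided by the mean."""
    values = [v for row in patch for v in row]
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return mean, sd / mean

flat  = [[100, 100], [100, 100]]   # uniform patch: no contrast
edged = [[50, 150], [150, 50]]     # high-contrast checker pattern
print(patch_stats(flat))   # (100.0, 0.0)
print(patch_stats(edged))  # (100.0, 0.5)
```

Computed over a grid of local regions, such statistics are exactly the kind of per-location values from which the proposed memorability map could be assembled, analogous to a saliency map.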
NASA Technical Reports Server (NTRS)
Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.
2012-01-01
As imagery is collected from an airborne platform, an individual viewing the images wants to know from where on the Earth the images were collected. To do this, some information about the camera needs to be known, such as its position and orientation relative to the Earth. This can be provided by common inertial navigation systems (INS). Once the location of the camera is known, it is useful to project an image onto some representation of the Earth. Due to the non-smooth terrain of the Earth (mountains, valleys, etc.), this projection is highly non-linear. Thus, to ensure accurate projection, one needs to project onto a digital elevation map (DEM). This allows one to view the images overlaid onto a representation of the Earth. A code has been developed that takes an image, a model of the camera used to acquire that image, the pose of the camera during acquisition (as provided by an INS), and a DEM, and outputs an image that has been geo-rectified. The world coordinate of the bounds of the image are provided for viewing purposes. The code finds a mapping from points on the ground (DEM) to pixels in the image. By performing this process for all points on the ground, one can "paint" the ground with the image, effectively performing a projection of the image onto the ground. In order to make this process efficient, a method was developed for finding a region of interest (ROI) on the ground to where the image will project. This code is useful in any scenario involving an aerial imaging platform that moves and rotates over time. Many other applications are possible in processing aerial and satellite imagery.
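The core mapping the abstract describes, from ground points on the DEM to pixels in the image, is a pinhole projection through the camera pose. A minimal sketch assuming a downward-looking camera with yaw-only orientation (a real INS pose carries full roll/pitch/yaw plus a geodetic datum, and all names here are illustrative):

```python
import math

def project_to_pixel(point, cam_pos, yaw_deg, focal_px, cx, cy):
    """Project a world point (x, y, elevation) into the image of a
    nadir-looking pinhole camera. Returns (u, v) pixel coordinates,
    or None if the point is not in front of the camera."""
    # World -> camera frame: translate to the camera, rotate by yaw.
    dx, dy, dz = (p - c for p, c in zip(point, cam_pos))
    yaw = math.radians(yaw_deg)
    xc = math.cos(yaw) * dx + math.sin(yaw) * dy
    yc = -math.sin(yaw) * dx + math.cos(yaw) * dy
    zc = -dz  # camera looks straight down: depth = height above the point
    if zc <= 0:
        return None
    u = focal_px * xc / zc + cx  # perspective division + principal point
    v = focal_px * yc / zc + cy
    return u, v

def georectify(dem, cam_pos, yaw_deg, focal_px, cx, cy):
    """Map every DEM post to a pixel: the loop that 'paints' the ground
    with the image."""
    return {(x, y): project_to_pixel((x, y, z), cam_pos, yaw_deg,
                                     focal_px, cx, cy)
            for (x, y), z in dem.items()}

# Two DEM posts; the camera hovers 1000 m above the first one.
dem = {(0.0, 0.0): 10.0, (5.0, 0.0): 12.0}
mapping = georectify(dem, cam_pos=(0.0, 0.0, 1010.0), yaw_deg=0.0,
                     focal_px=1000.0, cx=320.0, cy=240.0)
print(mapping[(0.0, 0.0)])  # directly below the camera -> principal point
```

Restricting the loop to a region of interest, as the abstract notes, simply means iterating only over the DEM posts whose projections can fall inside the image bounds.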
Young, Rachel; Hinnant, Amanda; Leshner, Glenn
2016-07-01
Antiobesity health communication campaigns often target individual behavior, but these ads might inflate the role of individual responsibility at the expense of other health determinants. In a 2 × 2 full-factorial, randomized, online experiment, 162 American adults viewed antiobesity advertisements that varied in emphasizing social or individual causation for obesity through text and images. Locus for attribution of responsibility for obesity causes and solutions was measured, as was how these responses were moderated by political ideology. Participants who viewed text emphasizing individual responsibility were less likely to agree that genetic factors caused obesity. Conservative participants who viewed images of overweight individuals were less likely than liberal participants to agree that social factors were responsible for causing obesity. In addition, among conservative participants who viewed images of fast food versus images of overweight individuals, agreement that the food industry bore some responsibility mediated support for policy solutions to obesity. These findings, among others, demonstrate that awareness of multilevel determinants of health outcomes can be a precursor of support for policy solutions to obesity among those not politically inclined to support antiobesity policy. In addition, stigmatizing images of overweight individuals in antiobesity campaigns might overemphasize the role of individual behavior in obesity at the expense of other factors.
America National Parks Viewed in 3D by NASA MISR Anaglyph 2
2016-08-25
Just in time for the U.S. National Park Service's Centennial celebration on Aug. 25, NASA's Multiangle Imaging SpectroRadiometer (MISR) instrument aboard NASA's Terra satellite is releasing four new anaglyphs that showcase 33 of our nation's national parks, monuments, historical sites and recreation areas in glorious 3D. Shown in the annotated image are Grand Teton National Park, John D. Rockefeller Memorial Parkway, Yellowstone National Park, and parts of Craters of the Moon National Monument. MISR views Earth with nine cameras pointed at different angles, giving it the unique capability to produce anaglyphs, stereoscopic images that allow the viewer to experience the landscape in three dimensions. The anaglyphs were made by combining data from MISR's vertical-viewing and 46-degree forward-pointing camera. You will need red-blue glasses in order to experience the 3D effect; ensure you place the red lens over your left eye. The images have been rotated so that north is to the left in order to enable 3D viewing because the Terra satellite flies from north to south. All of the images are 235 miles (378 kilometers) from west to east. These data were acquired June 25, 2016, Orbit 87876. http://photojournal.jpl.nasa.gov/catalog/PIA20890
Mindful attention reduces neural and self-reported cue-induced craving in smokers
Creswell, John David; Tabibnia, Golnaz; Julson, Erica; Kober, Hedy; Tindle, Hilary A.
2013-01-01
An emerging body of research suggests that mindfulness-based interventions may be beneficial for smoking cessation and the treatment of other addictive disorders. One way that mindfulness may facilitate smoking cessation is through the reduction of craving to smoking cues. The present work considers whether mindful attention can reduce self-reported and neural markers of cue-induced craving in treatment seeking smokers. Forty-seven (n = 47) meditation-naïve treatment-seeking smokers (12-h abstinent from smoking) viewed and made ratings of smoking and neutral images while undergoing functional magnetic resonance imaging (fMRI). Participants were trained and instructed to view these images passively or with mindful attention. Results indicated that mindful attention reduced self-reported craving to smoking images, and reduced neural activity in a craving-related region of subgenual anterior cingulate cortex (sgACC). Moreover, a psychophysiological interaction analysis revealed that mindful attention reduced functional connectivity between sgACC and other craving-related regions compared to passively viewing smoking images, suggesting that mindfulness may decouple craving neurocircuitry when viewing smoking cues. These results provide an initial indication that mindful attention may describe a ‘bottom-up’ attention to one’s present moment experience in ways that can help reduce subjective and neural reactivity to smoking cues in smokers. PMID:22114078
Gao, Yang; Bian, Zhaoying; Huang, Jing; Zhang, Yunwan; Niu, Shanzhou; Feng, Qianjin; Chen, Wufan; Liang, Zhengrong; Ma, Jianhua
2014-01-01
To realize low-dose imaging in X-ray computed tomography (CT) examination, lowering milliampere-seconds (low-mAs) or reducing the required number of projection views (sparse-view) per rotation around the body has been widely studied as an easy and effective approach. In this study, we focus on low-dose CT image reconstruction from the sinograms acquired with a combined low-mAs and sparse-view protocol and propose a two-step image reconstruction strategy. Specifically, to suppress significant statistical noise in the noisy and insufficient sinograms, an adaptive sinogram restoration (ASR) method is first proposed with consideration of the statistical property of sinogram data, and then to further acquire a high-quality image, a total variation based projection onto convex sets (TV-POCS) method is adopted with a slight modification. For simplicity, the present reconstruction strategy was termed "ASR-TV-POCS." To evaluate the present ASR-TV-POCS method, both qualitative and quantitative studies were performed on a physical phantom. Experimental results have demonstrated that the present ASR-TV-POCS method can achieve promising gains over other existing methods in terms of noise reduction, contrast-to-noise ratio, and edge detail preservation. PMID:24977611
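The TV-POCS half of the strategy alternates total-variation descent with projection onto a convex data-consistency set. A toy 1-D sketch, in which a per-sample tolerance band stands in for the actual sinogram-consistency projection (the real method enforces consistency with the measured projections, not with the image itself):

```python
def tv_pocs_step(image, observed, step=0.1, tol=0.5):
    """One iteration of a toy TV-POCS scheme on a 1-D signal.

    1. TV descent: move each sample against a subgradient of
       TV(x) = sum_i |x[i+1] - x[i]|, smoothing noise while the
       sign-based subgradient preserves large edges.
    2. POCS projection: clip each sample back into the convex band
       |x_i - observed_i| <= tol, a simplified data-consistency set.
    """
    n = len(image)
    def sign(v):
        return (v > 0) - (v < 0)
    grad = []
    for i in range(n):
        g = 0
        if i > 0:
            g += sign(image[i] - image[i - 1])
        if i < n - 1:
            g -= sign(image[i + 1] - image[i])
        grad.append(g)
    smoothed = [x - step * g for x, g in zip(image, grad)]
    return [min(max(x, o - tol), o + tol) for x, o in zip(smoothed, observed)]

# A noisy step edge: small fluctuations around 0, then around 5.
noisy = [0.0, 0.4, -0.3, 0.2, 5.0, 5.3, 4.8, 5.1]
x = noisy
for _ in range(50):
    x = tv_pocs_step(x, noisy)
print([round(v, 2) for v in x])
```

After a few iterations the flat regions are smoothed while the step at index 4 survives, which is the edge-preserving behavior that motivates TV regularization in sparse-view CT.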
America National Parks Viewed in 3D by NASA MISR Anaglyph 4
2016-08-25
Just in time for the U.S. National Park Service's Centennial celebration on Aug. 25, NASA's Multiangle Imaging SpectroRadiometer (MISR) instrument aboard NASA's Terra satellite is releasing four new anaglyphs that showcase 33 of our nation's national parks, monuments, historical sites and recreation areas in glorious 3D. Shown in the annotated image are Sequoia National Park, Kings Canyon National Park, Manzanar National Historic Site, Devils Postpile National Monument, Yosemite National Park, and parts of Death Valley National Park. MISR views Earth with nine cameras pointed at different angles, giving it the unique capability to produce anaglyphs, stereoscopic images that allow the viewer to experience the landscape in three dimensions. The anaglyphs were made by combining data from MISR's vertical-viewing and 46-degree forward-pointing camera. You will need red-blue glasses in order to experience the 3D effect; ensure you place the red lens over your left eye. The images have been rotated so that north is to the left in order to enable 3D viewing because the Terra satellite flies from north to south. All of the images are 235 miles (378 kilometers) from west to east. These data were acquired July 7, 2016, Orbit 88051. http://photojournal.jpl.nasa.gov/catalog/PIA20892
System integration and DICOM image creation for PET-MR fusion.
Hsiao, Chia-Hung; Kao, Tsair; Fang, Yu-Hua; Wang, Jiunn-Kuen; Guo, Wan-Yuo; Chao, Liang-Hsiao; Yen, Sang-Hue
2005-03-01
This article demonstrates a gateway system for converting image fusion results to Digital Imaging and Communications in Medicine (DICOM) objects. For the purpose of standardization and integration, we have followed the guidelines of the Integrating the Healthcare Enterprise technical framework and developed a DICOM gateway. The gateway system combines data from the hospital information system, image fusion results, and information it generates itself to constitute new DICOM objects. All the mandatory tags defined in the standard DICOM object were generated in the gateway system. The gateway system generates two series of SOP (Service Object Pair) instances for each PET-MR fusion result: one for the reconstructed magnetic resonance (MR) images and the other for the positron emission tomography (PET) images. The size, resolution, spatial coordinates, and number of frames are the same in both series of SOP instances. Every newly generated MR image exactly fits with one of the reconstructed PET images. Those DICOM images are stored to the picture archiving and communication system (PACS) server by means of standard DICOM protocols. When those images are retrieved and viewed by standard DICOM viewing systems, both images can be viewed at the same anatomical location. This system is useful for precise diagnosis and therapy.
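The pairing of matched MR and PET SOP instances can be sketched as plain metadata construction: both instances share the geometry attributes and a Frame of Reference UID (so viewers align them anatomically) while receiving distinct Series and SOP Instance UIDs. Tag names are spelled out as dictionary keys for clarity, and the 2.25 UUID-derived UID root is an assumption; a production gateway would use its own registered root:

```python
import uuid

def new_uid():
    """A DICOM UID under the 2.25 root, derived from a random UUID."""
    return "2.25." + str(uuid.uuid4().int)

def fused_pair(shared_geometry, study_uid):
    """Build metadata for one matched (MR, PET) pair of SOP instances.

    Both instances copy the same size/resolution/position attributes and
    share one Frame of Reference UID, so a standard viewer displays them
    at the same anatomical location; each gets its own series and
    instance identity.
    """
    frame_of_reference = new_uid()  # same frame => same patient coordinates
    def instance(modality, series_uid):
        ds = dict(shared_geometry)  # Rows, Columns, PixelSpacing, position
        ds.update({
            "StudyInstanceUID": study_uid,
            "SeriesInstanceUID": series_uid,
            "SOPInstanceUID": new_uid(),
            "FrameOfReferenceUID": frame_of_reference,
            "Modality": modality,
        })
        return ds
    return instance("MR", new_uid()), instance("PT", new_uid())

geometry = {"Rows": 256, "Columns": 256, "PixelSpacing": [1.0, 1.0],
            "ImagePositionPatient": [-128.0, -128.0, 0.0]}
mr, pet = fused_pair(geometry, study_uid=new_uid())
print(mr["Modality"], pet["Modality"])
```

Serializing these attribute sets into real DICOM files would be done with a library such as pydicom, which maps the same keyword names onto (group, element) tags.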
Negative Stimulus-Response Compatibility Observed with a Briefly Displayed Image of a Hand
ERIC Educational Resources Information Center
Vainio, Lari
2011-01-01
Manual responses can be primed by viewing an image of a hand. The left-right identity of the viewed hand reflexively facilitates responses of the hand that corresponds to the identity. Previous research also suggests that when the response activation is triggered by an arrow, which is backward-masked and presented briefly, the activation manifests…
Living in a World with Eyeballs: How Women Make Meaning of Body Image in the College Environment
ERIC Educational Resources Information Center
Stanley, Chrystal Ann
2013-01-01
Negative body image is pervasive among traditional, college-age women and takes a heavy toll on women's economic, personal, and political lives. Previous research has indicated that a large percentage of women hold negative views of their body. Women embarking on higher education are not exempt from these negative views. Conversely, college…
Two Mirrors: Infinite Images of DiCaprio
ERIC Educational Resources Information Center
Fadeev, Pavel
2015-01-01
Movies are mostly viewed for entertainment. Mixing entertainment and physics gets students excited as we look at a famous movie scene from a different point of view. The following is a link to a fragment from the 2010 motion picture "Inception": http://www.youtube.com/watch?v=q3tBBhYJeAw. The following problem, based on images in facing…
Ouldarbi, L; Talbi, M; Coëtmellec, S; Lebrun, D; Gréhan, G; Perret, G; Brunel, M
2016-11-10
We realize simplified-tomography experiments on irregular rough particles using interferometric out-of-focus imaging. Using two angles of view, we determine the global 3D-shape, the dimensions, and the 3D-orientation of irregular rough particles whose morphologies belong to families such as sticks, plates, and crosses.
Copy image of "'Under the Great Arch' of Refectory Bridge ...
Copy image of "'Under the Great Arch' of Refectory Bridge connecting the Dining Room with the Practice House, Delta, and the Villa. The Refectory Cloister is seen beyond the arch"; a similar, but recent, view can be seen in MD-1109-A-16. (NPS view book, p. 25) - National Park Seminary, Main, Linden Lane, Silver Spring, Montgomery County, MD