Underwater video enhancement using multi-camera super-resolution
NASA Astrophysics Data System (ADS)
Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.
2017-12-01
Image spatial resolution is critical in several fields, such as medicine, communications, satellite imaging, and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a Multi-Camera environment that enhances the quality of underwater video sequences without significantly increasing computation. To compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results show that the proposed method enhances the objective quality of several underwater sequences with respect to basic fusion Super-Resolution algorithms, while avoiding the appearance of undesirable artifacts.
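As a small illustration of the evaluation described above, the following is a minimal sketch of the two metrics, assuming 8-bit grayscale frames and using scikit-image; the paper does not specify an implementation, so the function name here is not the authors'.

```python
# Minimal sketch: PSNR and SSIM between a reference frame and an enhanced frame.
# Assumes 8-bit grayscale images; uses scikit-image (>= 0.16) metric functions.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def frame_quality(reference: np.ndarray, enhanced: np.ndarray):
    """Return (PSNR in dB, SSIM index) for two frames of identical shape."""
    psnr = peak_signal_noise_ratio(reference, enhanced, data_range=255)
    ssim = structural_similarity(reference, enhanced, data_range=255)
    return psnr, ssim
```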
Compact full-motion video hyperspectral cameras: development, image processing, and applications
NASA Astrophysics Data System (ADS)
Kanaev, A. V.
2015-10-01
Emergence of spectral pixel-level color filters has enabled development of hyper-spectral Full Motion Video (FMV) sensors operating in visible (EO) and infrared (IR) wavelengths. The new class of hyper-spectral cameras opens broad possibilities for military and industrial use. Indeed, such cameras are able to classify materials as well as detect and track spectral signatures continuously in real time, while simultaneously providing an operator the benefit of enhanced-discrimination-color video. Supporting these extensive capabilities requires significant computational processing of the collected spectral data. In general, two processing streams are envisioned for mosaic array cameras. The first is spectral computation, which provides essential spectral content analysis, e.g. detection or classification. The second is presentation of the video to an operator, offering the best display of the content for the task at hand, e.g. spatial resolution enhancement or color coding of the spectral analysis. These processing streams can be executed in parallel, or they can utilize each other's results. Spectral analysis algorithms have been developed extensively; however, demosaicking of more than three equally-sampled spectral bands has scarcely been explored. We present a unique approach to demosaicking based on multi-band super-resolution and show the trade-off between spatial resolution and spectral content. Using imagery collected with the developed 9-band SWIR camera, we demonstrate several of its concepts of operation, including detection and tracking. We also compare the demosaicking results to those of multi-frame super-resolution as well as to combined multi-frame and multi-band processing.
A scalable multi-DLP pico-projector system for virtual reality
NASA Astrophysics Data System (ADS)
Teubl, F.; Kurashima, C.; Cabral, M.; Fels, S.; Lopes, R.; Zuffo, M.
2014-03-01
Virtual Reality (VR) environments can offer immersion, interaction and realistic images to users. A VR system is usually expensive and requires special equipment in a complex setup. One approach to reducing the cost of VR systems without significantly degrading the visual experience is to use Commodity-Off-The-Shelf (COTS) desktop multi-projector setups, calibrated manually or with a camera. Additionally, for non-planar screen shapes, special optics such as lenses and mirrors are required, further increasing costs. We propose a low-cost, scalable, flexible and mobile solution for building complex VR systems that project images onto a variety of arbitrary surfaces, such as planar, cylindrical and spherical screens. This approach combines three key aspects: 1) clusters of DLP pico-projectors to provide homogeneous and continuous pixel density on arbitrary surfaces without additional optics; 2) LED lighting technology for energy efficiency and light control; 3) a smaller physical footprint for flexibility. The proposed system is therefore scalable in terms of pixel density, energy and physical space. To achieve these goals, we developed a multi-projector software library called FastFusion that calibrates all projectors into a uniform image presented to viewers. FastFusion uses a camera to automatically calibrate geometric and photometric correction of images projected from ad-hoc positioned projectors; the only requirement is a few pixels of overlap between them. We present results with eight 7-lumen LED pico-projectors based on the DLP 0.17 HVGA chipset.
Design of a single projector multiview 3D display system
NASA Astrophysics Data System (ADS)
Geng, Jason
2014-03-01
Multiview three-dimensional (3D) display is able to provide horizontal parallax to viewers, with high-resolution and full-color images presented to each view. Most multiview 3D display systems are designed and implemented using multiple projectors, each generating images for one view. Although this multi-projector design strategy is conceptually straightforward, its implementation often leads to a very expensive system and complicated calibration procedures. Even for a multiview system with a moderate number of projectors (e.g., 32 or 64), the cost of a multi-projector 3D display system may become prohibitive due to the cost and complexity of integrating multiple projectors. In this article, we describe an optical design technique for a class of multiview 3D display systems that use only a single projector. In this single projector multiview (SPM) design, the multiple views are generated in a time-multiplexed fashion by a single high-speed projector with specially designed optical components, a scanning mirror, and a reflective mirror array. Images of all views are generated sequentially and projected via the specially designed optical system from different viewing directions towards a 3D display screen. The single projector is thus able to generate the equivalent number of multiview images from multiple viewing directions, fulfilling the tasks of multiple projectors. An obvious advantage of the proposed SPM technique is the significant reduction of cost, size, and complexity, especially when the number of views is high. The SPM strategy also alleviates the time-consuming procedures for multi-projector calibration. The design method is flexible and scalable and can accommodate systems with different numbers of views.
Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network.
Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di
2018-03-06
Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited in exploiting multi-scale contextual information for image reconstruction due to the fixed convolutional kernels in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build a shallow network under limited computational resources. The proposed network has two advantages: (1) the multi-scale convolutional kernels provide multiple contexts for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the proposed network outperforms state-of-the-art methods.
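A minimal sketch of the competition idea described in the abstract, assuming a PyTorch implementation with illustrative channel counts and kernel sizes; the authors' exact architecture is not reproduced here.

```python
# Minimal sketch: parallel multi-scale convolutions whose responses compete
# through an element-wise maximum, so each position keeps its strongest scale.
import torch
import torch.nn as nn

class MultiScaleCompetitiveBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Identical output shapes so the per-pixel maximum is well defined.
        self.conv3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.conv7 = nn.Conv2d(channels, channels, kernel_size=7, padding=3)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Maximum competition across the three receptive-field scales.
        m = torch.max(torch.max(self.conv3(x), self.conv5(x)), self.conv7(x))
        return self.act(m)
```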
NASA Astrophysics Data System (ADS)
Qin, Chen; Ren, Bin; Guo, Longfei; Dou, Wenhua
2014-11-01
Multi-projector three-dimensional (3D) display is a promising multi-view, glasses-free 3D display technology that can produce full-colour, high-definition 3D images on its screen. One key problem of multi-projector 3D display is how to acquire the source images for the projector array while avoiding the pseudoscopic problem. This paper first analyses the display characteristics of multi-projector 3D displays and then proposes a projector content synthesis method using a tetrahedral transform. A 3D video format based on a stereo image pair and an associated disparity map is presented; it is well suited to any type of multi-projector 3D display and offers savings in storage. Experimental results show that our method solves the pseudoscopic problem.
NASA Astrophysics Data System (ADS)
Stavroulakis, Petros I.; Chen, Shuxiao; Sims-Waterhouse, Danny; Piano, Samanta; Southon, Nicholas; Bointon, Patrick; Leach, Richard
2017-06-01
In non-rigid fringe projection 3D measurement systems, where either the camera or projector setup can change significantly between measurements or the object needs to be tracked, self-calibration has to be carried out frequently to keep the measurements accurate. In fringe projection systems, it is common to use methods developed initially for photogrammetry to calibrate the camera(s) in terms of extrinsic and intrinsic parameters. To calibrate the projector(s), an extra correspondence between a pre-calibrated camera and an image created by the projector is performed. These recalibration steps are usually time consuming and involve measuring calibrated patterns on planes whenever a camera or projector is moved, before measurement of the actual object can resume; they therefore do not facilitate fast 3D measurement when frequent changes to the experimental setup are necessary. By employing and combining a priori information via inverse rendering, on-board sensors, deep learning and a graphics processing unit (GPU), we assess a fine camera pose estimation method based on optimising the rendering of a model of the scene and the object to match the view from the camera. We find that the success of this calibration pipeline can be greatly improved by using adequate a priori information from the aforementioned sources.
Multi-frame super-resolution with quality self-assessment for retinal fundus videos.
Köhler, Thomas; Brost, Alexander; Mogalle, Katja; Zhang, Qianyi; Köhler, Christiane; Michelson, Georg; Hornegger, Joachim; Tornow, Ralf P
2014-01-01
This paper proposes a novel super-resolution framework to reconstruct high-resolution fundus images from multiple low-resolution video frames in retinal fundus imaging. Natural eye movements during an examination are used as a cue for super-resolution in a robust maximum a-posteriori scheme. To compensate for heterogeneous illumination on the fundus, we integrate retrospective illumination correction for photometric registration into the underlying imaging model. Our method utilizes quality self-assessment to provide objective quality scores for reconstructed images as well as to select regularization parameters automatically. In our evaluation on real data acquired from six human subjects with a low-cost video camera, the proposed method achieved considerable enhancement of low-resolution frames and improved noise and sharpness characteristics by 74%. In terms of image analysis, we demonstrate the importance of our method for the improvement of automatic blood vessel segmentation as an example application, where sensitivity was increased by 13% using super-resolution reconstruction.
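A minimal sketch of one gradient step of such a maximum a-posteriori reconstruction, assuming the low-resolution frames are already registered (geometrically and photometrically) to the high-resolution grid and that degradation is plain decimation; the paper's actual imaging model, robust error norm, and quality self-assessment are omitted.

```python
# Minimal sketch: one gradient-descent step on
#   sum_k 0.5*||D(hr) - lr_k||^2 + 0.5*lam*||grad(hr)||^2
# where D is decimation by `scale`. All parameters are illustrative.
import numpy as np

def map_sr_step(hr, lr_frames, scale, lam=0.01, step=0.1):
    grad = np.zeros_like(hr)
    for lr in lr_frames:
        residual = hr[::scale, ::scale] - lr      # D(hr) - y_k under the decimation model
        up = np.zeros_like(hr)
        up[::scale, ::scale] = residual           # D^T applied to the residual
        grad += up
    # Discrete Laplacian: gradient of the smoothness prior is -lam * laplacian(hr).
    lap = (np.roll(hr, 1, 0) + np.roll(hr, -1, 0) +
           np.roll(hr, 1, 1) + np.roll(hr, -1, 1) - 4 * hr)
    return hr - step * (grad - lam * lap)
```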
Sugimura, Daisuke; Kobayashi, Suguru; Hamamoto, Takayuki
2017-11-01
Light field imaging is an emerging technique employed to realize various applications such as multi-viewpoint imaging, focal-point changing, and depth estimation. In this paper, we propose the concept of a dual-resolution light field imaging system to synthesize super-resolved multi-viewpoint images. The key novelty of this study is the use of an organic photoelectric conversion film (OPCF), a device that converts the spectral content of incoming light within a certain wavelength range into an electrical signal (pixel value), for light field imaging. In our imaging system, we place an OPCF with green spectral sensitivity onto the micro-lens array of a conventional light field camera. The OPCF allows us to acquire the green spectral information only at the center viewpoint, at the full resolution of the image sensor. In contrast, the optical system of the light field camera captures the other spectral information (red and blue) at multiple viewpoints (sub-aperture images), but at low resolution. Thus, our dual-resolution light field imaging system enables us to simultaneously capture information about the target scene at high spatial resolution as well as the direction information of the incoming light. By exploiting these advantages, our proposed method enables the synthesis of full-resolution multi-viewpoint images. We perform experiments using synthetic images, and the results demonstrate that our method outperforms previous methods.
Takeshima, T; Takahashi, T; Yamashita, J; Okada, Y; Watanabe, S
2018-05-25
Multi-emitter fitting algorithms have been developed to improve the temporal resolution of single-molecule switching nanoscopy, but the molecular density range they can analyse is narrow and the computation required is intensive, significantly limiting their practical application. Here, we propose a computationally fast method, wedged template matching (WTM), an algorithm that uses a template matching technique to localise molecules at any overlapping molecular density, from sparse to ultrahigh, with subdiffraction resolution. WTM achieves the localization of overlapping molecules at densities up to 600 molecules μm⁻² with high detection sensitivity and fast computational speed. WTM also shows localization precision comparable with that of DAOSTORM (an algorithm for high-density super-resolution microscopy) at densities up to 20 molecules μm⁻², and better than DAOSTORM at higher molecular densities. The application of WTM to a high-density biological sample image demonstrated that it resolved protein dynamics from live cell images with subdiffraction resolution and a temporal resolution of several hundred milliseconds or less, through a significant reduction in the number of camera images required for a high-density reconstruction. WTM is a computationally fast multi-emitter fitting algorithm that can analyse a wide range of molecular densities. The algorithm is available at https://doi.org/10.17632/bf3z6xpn5j.1.
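A minimal sketch of the template-matching core, assuming scikit-image and a known PSF template; the wedged-template construction and density-specific refinements of WTM are not reproduced.

```python
# Minimal sketch: localize emitter candidates by correlating a frame with a PSF
# template and taking local maxima of the correlation score above a threshold.
import numpy as np
from skimage.feature import match_template, peak_local_max

def localize(frame: np.ndarray, psf_template: np.ndarray, threshold: float = 0.5):
    """Return (row, col) candidate emitter positions above a correlation threshold."""
    score = match_template(frame, psf_template, pad_input=True)  # same-size score map
    return peak_local_max(score, min_distance=1, threshold_abs=threshold)
```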
DMD-based LED-illumination super-resolution and optical sectioning microscopy.
Dan, Dan; Lei, Ming; Yao, Baoli; Wang, Wen; Winterhalder, Martin; Zumbusch, Andreas; Qi, Yujiao; Xia, Liang; Yan, Shaohui; Yang, Yanlong; Gao, Peng; Ye, Tong; Zhao, Wei
2013-01-01
Super-resolution three-dimensional (3D) optical microscopy has incomparable advantages over other high-resolution microscopic technologies, such as electron microscopy and atomic force microscopy, in the study of biological molecules, pathways and events in live cells and tissues. We present a novel approach to structured illumination microscopy (SIM) that uses a digital micromirror device (DMD) for fringe projection and a low-coherence LED light for illumination. A lateral resolution of 90 nm and an optical sectioning depth of 120 μm were achieved. The maximum acquisition speed for 3D imaging in the optical sectioning mode was 1.6×10⁷ pixels/second, limited mainly by the sensitivity and speed of the CCD camera. In contrast to other SIM techniques, the DMD-based LED-illumination SIM is cost-effective, easily switchable between multiple wavelengths, and free of speckle noise. The 2D super-resolution and 3D optical sectioning modalities can be easily switched and applied to either fluorescent or non-fluorescent specimens.
Demosaicking for full motion video 9-band SWIR sensor
NASA Astrophysics Data System (ADS)
Kanaev, Andrey V.; Rawhouser, Marjorie; Kutteruf, Mary R.; Yetzbacher, Michael K.; DePrenger, Michael J.; Novak, Kyle M.; Miller, Corey A.; Miller, Christopher W.
2014-05-01
Short wave infrared (SWIR) spectral imaging systems are vital for Intelligence, Surveillance, and Reconnaissance (ISR) applications because of their ability to autonomously detect targets and classify materials. Typically, spectral imagers are incapable of providing Full Motion Video (FMV) because of their reliance on line scanning. We enable FMV capability for a SWIR multi-spectral camera by creating a repeating pattern of 3x3 spectral filters on a staring focal plane array (FPA). In this paper we present imagery from an FMV SWIR camera with nine discrete bands and discuss the image processing algorithms necessary for its operation. The main task of image processing in this case is demosaicking of the spectral bands, i.e. reconstructing full spectral images at the original FPA resolution from the spatially subsampled and incomplete spectral data acquired with the chosen filter array pattern. To the best of the authors' knowledge, demosaicking algorithms for nine or more equally sampled bands have not been reported before. Moreover, existing algorithms developed for demosaicking visible color filter arrays with fewer than nine colors assume either a certain relationship between the visible colors, which is not valid for SWIR imaging, or the presence of one color band with a higher sampling rate than the rest, which does not conform to our spectral filter pattern. We discuss and present results for two novel approaches to demosaicking: interpolation using multi-band edge information, and application of multi-frame super-resolution to single-frame resolution enhancement of multi-spectral, spatially multiplexed images.
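A minimal sketch of the baseline problem stated above: per-band linear interpolation of a 3x3 mosaic, assuming a row-major assignment of the nine bands to tile positions; the paper's edge-aware and super-resolution approaches go well beyond this.

```python
# Minimal sketch: naive per-band demosaicking for a repeating 3x3 spectral filter
# pattern. Each band is sampled once per 3x3 tile; missing pixels are filled by
# linear interpolation over the sampled locations.
import numpy as np
from scipy.interpolate import griddata

def demosaic_3x3(mosaic: np.ndarray) -> np.ndarray:
    """mosaic: (H, W) raw frame; returns an (H, W, 9) interpolated spectral cube."""
    h, w = mosaic.shape
    rows, cols = np.mgrid[0:h, 0:w]
    cube = np.empty((h, w, 9), dtype=float)
    for band in range(9):
        br, bc = divmod(band, 3)                    # band's offset within the tile
        mask = (rows % 3 == br) & (cols % 3 == bc)  # pixels carrying this band
        pts = np.column_stack([rows[mask], cols[mask]])
        cube[..., band] = griddata(pts, mosaic[mask], (rows, cols),
                                   method='linear', fill_value=0.0)
    return cube
```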
Video-rate nanoscopy enabled by sCMOS camera-specific single-molecule localization algorithms
Huang, Fang; Hartwich, Tobias M. P.; Rivera-Molina, Felix E.; Lin, Yu; Duim, Whitney C.; Long, Jane J.; Uchil, Pradeep D.; Myers, Jordan R.; Baird, Michelle A.; Mothes, Walther; Davidson, Michael W.; Toomre, Derek; Bewersdorf, Joerg
2013-01-01
Newly developed scientific complementary metal–oxide–semiconductor (sCMOS) cameras have the potential to dramatically accelerate data acquisition in single-molecule switching nanoscopy (SMSN) while simultaneously increasing the effective quantum efficiency. However, sCMOS-intrinsic pixel-dependent readout noise substantially reduces the localization precision and introduces localization artifacts. Here we present algorithms that overcome these limitations and provide unbiased, precise localization of single molecules at the theoretical limit. In combination with a multi-emitter fitting algorithm, we demonstrate single-molecule localization super-resolution imaging at up to 32 reconstructed images/second (recorded at 1,600–3,200 camera frames/second) in both fixed and living cells.
Wavelength scanning achieves pixel super-resolution in holographic on-chip microscopy
NASA Astrophysics Data System (ADS)
Luo, Wei; Göröcs, Zoltan; Zhang, Yibo; Feizi, Alborz; Greenbaum, Alon; Ozcan, Aydogan
2016-03-01
Lensfree holographic on-chip imaging is a potent solution for high-resolution and field-portable bright-field imaging over a wide field-of-view. Previous lensfree imaging approaches utilize a pixel super-resolution technique, which relies on sub-pixel lateral displacements between the lensfree diffraction patterns and the image sensor's pixel array, to achieve sub-micron resolution under unit magnification using state-of-the-art CMOS imager chips, commonly used in, e.g., mobile phones. Here we report, for the first time, a wavelength-scanning-based pixel super-resolution technique in lensfree holographic imaging. We developed an iterative super-resolution algorithm, which generates high-resolution reconstructions of the specimen from low-resolution (i.e., under-sampled) diffraction patterns recorded at multiple wavelengths within a narrow spectral range (e.g., 10-30 nm). Compared with lateral-shift-based pixel super-resolution, this wavelength scanning approach does not require any physical shifts in the imaging setup, and the resolution improvement is uniform in all directions across the sensor array. Our wavelength scanning super-resolution approach can also be integrated with multi-height and/or multi-angle on-chip imaging techniques to obtain even higher resolution reconstructions. For example, using wavelength scanning together with multi-angle illumination, we achieved a half-pitch resolution of 250 nm, corresponding to a numerical aperture of 1. In addition to pixel super-resolution, the small scanning steps in wavelength also enable us to robustly unwrap phase, revealing the specimen's optical path length in our reconstructed images. We believe that this new wavelength-scanning-based pixel super-resolution approach can provide competitive microscopy solutions for high-resolution and field-portable imaging needs, potentially impacting tele-pathology applications in resource-limited settings.
NASA Astrophysics Data System (ADS)
Descloux, A.; Grußmayer, K. S.; Bostan, E.; Lukes, T.; Bouwens, A.; Sharipov, A.; Geissbuehler, S.; Mahul-Mellier, A.-L.; Lashuel, H. A.; Leutenegger, M.; Lasser, T.
2018-03-01
Super-resolution fluorescence microscopy provides unprecedented insight into cellular and subcellular structures. However, going 'beyond the diffraction barrier' comes at a price, since most far-field super-resolution imaging techniques trade temporal for spatial super-resolution. We propose the combination of a novel label-free white light quantitative phase imaging with fluorescence to provide high-speed imaging and spatial super-resolution. The non-iterative phase retrieval relies on the acquisition of single images at each z-location and thus enables straightforward 3D phase imaging using a classical microscope. We realized multi-plane imaging using a customized prism for the simultaneous acquisition of eight planes. This allowed us to not only image live cells in 3D at up to 200 Hz, but also to integrate fluorescence super-resolution optical fluctuation imaging within the same optical instrument. The 4D microscope platform unifies the sensitivity and high temporal resolution of phase imaging with the specificity and high spatial resolution of fluorescence microscopy.
ERIC Educational Resources Information Center
Flory, John
Although there have been great developments in motion picture technology, such as super 8mm film, magnetic sound, low cost color film, simpler projectors and movie cameras, and cartridge-loading projectors, there is still only limited use of audiovisual materials in the classroom today. This paper suggests some of the possible reasons for the lack…
Brunstein, Maia; Wicker, Kai; Hérault, Karine; Heintzmann, Rainer; Oheim, Martin
2013-11-04
Most structured illumination microscopes use a physical or synthetic grating that is projected into the sample plane to generate a periodic illumination pattern. Albeit simple and cost-effective, this arrangement hampers fast or multi-color acquisition, which is a critical requirement for time-lapse imaging of cellular and sub-cellular dynamics. In this study, we designed and implemented an interferometric approach allowing large-field, fast, dual-color imaging at an isotropic 100-nm resolution, based on a sub-diffraction fringe pattern generated by the interference of two colliding evanescent waves. Our all-mirror-based system generates illumination patterns of arbitrary orientation and period, limited only by the illumination aperture (NA = 1.45), the response time of a fast, piezo-driven tip-tilt mirror (10 ms) and the available fluorescence signal. At low µW laser powers suitable for long-term observation of live cells and with a camera exposure time of 20 ms, our system permits the acquisition of super-resolved 50 µm by 50 µm images at 3.3 Hz. The possibility it offers for rapidly adjusting the pattern between images is particularly advantageous for experiments that require multi-scale and multi-color information. We demonstrate the performance of our instrument by imaging mitochondrial dynamics in cultured cortical astrocytes. As an illustration of dual-color excitation and dual-color detection, we also resolve interaction sites between near-membrane mitochondria and the endoplasmic reticulum. Our TIRF-SIM microscope provides a versatile, compact and cost-effective arrangement for super-resolution imaging, allowing the investigation of co-localization and dynamic interactions between organelles: important questions in both cell biology and neurophysiology.
Scalable large format 3D displays
NASA Astrophysics Data System (ADS)
Chang, Nelson L.; Damera-Venkata, Niranjan
2010-02-01
We present a general framework for the modeling and optimization of scalable large format 3-D displays using multiple projectors. Based on this framework, we derive algorithms that can robustly optimize the visual quality of an arbitrary combination of projectors (e.g. tiled, superimposed, combinations of the two) without manual adjustment. The framework creates for the first time a new unified paradigm that is agnostic to a particular configuration of projectors yet robustly optimizes for the brightness, contrast, and resolution of that configuration. In addition, we demonstrate that our algorithms support high resolution stereoscopic video at real-time interactive frame rates achieved on commodity graphics hardware. Through complementary polarization, the framework creates high quality multi-projector 3-D displays at low hardware and operational cost for a variety of applications including digital cinema, visualization, and command-and-control walls.
Supporting lander and rover operation: a novel super-resolution restoration technique
NASA Astrophysics Data System (ADS)
Tao, Yu; Muller, Jan-Peter
2015-04-01
Higher resolution imaging data is always desirable for critical rover engineering operations, such as landing site selection, path planning, and optical localisation. For current Mars missions, 25 cm HiRISE images have been widely used by the MER & MSL engineering teams for rover path planning and location registration/adjustment. However, 25 cm resolution is not high enough to view individual rocks (≤2 m in size) or to visualise the types of sedimentary features that rover onboard cameras might observe. Nevertheless, due to various physical constraints of the imaging instruments themselves (e.g. telescope size and mass), one needs to trade off spatial resolution against bandwidth. This means that future imaging systems are likely to remain limited to resolving features larger than 25 cm. We have developed a novel super-resolution algorithm/pipeline able to restore a higher resolution image from the non-redundant sub-pixel information contained in multiple lower resolution raw images [Tao & Muller 2015]. We demonstrate, with experiments using 5-10 overlapping 25 cm HiRISE images for MER-A, MER-B & MSL, the resolution of 5-10 cm super-resolution images; these can be directly compared to rover imagery taken at a range of 5 metres from the rover cameras, but in our case can be used to visualise features many kilometres away from the actual rover traverse. We show how these super-resolution images, together with image understanding software, can be used to quantify rock size-frequency distributions as well as measure sedimentary rock layers for several critical sites, with comparison against rover orthorectified image mosaics demonstrating the value of our super-resolution images for better supporting future lander and rover operations. We present the potential of super-resolution for virtual exploration of the ~400 HiRISE areas which have been viewed 5 or more times, and the potential application of this technique to all of the ESA ExoMars Trace Gas Orbiter CaSSiS stereo, multi-angle and colour camera images from 2017 onwards. Acknowledgements: The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement No.312377 PRoViDE.
Multicolor Super-Resolution Fluorescence Imaging via Multi-Parameter Fluorophore Detection
Bates, Mark; Dempsey, Graham T; Chen, Kok Hao; Zhuang, Xiaowei
2012-01-01
Understanding the complexity of the cellular environment will benefit from the ability to unambiguously resolve multiple cellular components, simultaneously and with nanometer-scale spatial resolution. Multicolor super-resolution fluorescence microscopy techniques have been developed to achieve this goal, yet challenges remain in terms of the number of targets that can be simultaneously imaged and the crosstalk between color channels. Herein, we demonstrate multicolor stochastic optical reconstruction microscopy (STORM) based on a multi-parameter detection strategy, which uses both the fluorescence activation wavelength and the emission color to discriminate between photo-activatable fluorescent probes. First, we obtained two-color super-resolution images using the near-infrared cyanine dye Alexa 750 in conjunction with a red cyanine dye Alexa 647, and quantified color crosstalk levels and image registration accuracy. Combinatorial pairing of these two switchable dyes with fluorophores which enhance photo-activation enabled multi-parameter detection of six different probes. Using this approach, we obtained six-color super-resolution fluorescence images of a model sample. The combination of multiple fluorescence detection parameters for improved fluorophore discrimination promises to substantially enhance our ability to visualize multiple cellular targets with sub-diffraction-limit resolution.
Multi-dimensional super-resolution imaging enables surface hydrophobicity mapping
NASA Astrophysics Data System (ADS)
Bongiovanni, Marie N.; Godet, Julien; Horrocks, Mathew H.; Tosatto, Laura; Carr, Alexander R.; Wirthensohn, David C.; Ranasinghe, Rohan T.; Lee, Ji-Eun; Ponjavic, Aleks; Fritz, Joelle V.; Dobson, Christopher M.; Klenerman, David; Lee, Steven F.
2016-12-01
Super-resolution microscopy allows biological systems to be studied at the nanoscale, but has been restricted to providing only positional information. Here, we show that it is possible to perform multi-dimensional super-resolution imaging to determine both the position and the environmental properties of single-molecule fluorescent emitters. The method presented here exploits the solvatochromic and fluorogenic properties of nile red to extract both the emission spectrum and the position of each dye molecule simultaneously, enabling mapping of the hydrophobicity of biological structures. We validated this by studying synthetic lipid vesicles of known composition. We then applied the method to super-resolve both the hydrophobicity of amyloid aggregates implicated in neurodegenerative diseases and the hydrophobic changes in mammalian cell membranes. Our technique is easily implemented by inserting a transmission diffraction grating into the optical path of a localization-based super-resolution microscope, enabling all the information to be extracted simultaneously from a single image plane.
DeCicco, Anthony E; Sokil, Alexis B; Marhefka, Gregary D; Reist, Kirk; Hansen, Christopher L
2015-04-01
Obesity is not only associated with an increased risk of coronary artery disease; it also decreases the accuracy of many diagnostic modalities pertinent to this disease. Advances in myocardial perfusion imaging (MPI) have somewhat mitigated the effects of obesity, although the feasibility of MPI in the super-obese (defined as a BMI > 50) is currently untested. We undertook this study to assess the practicality of MPI in the super-obese using a multi-headed solid-state gamma camera with attenuation correction. We retrospectively identified consecutive super-obese patients referred for MPI at our institution. The images were interpreted by 3 blinded, experienced readers, who graded them for quality and diagnosis and subjectively evaluated the contribution of attenuation correction. Clinical follow-up was obtained from review of medical records. 72 consecutive super-obese patients were included. Their BMI ranged from 50 to 67 (55.7 ± 5.1). Stress image quality was considered good or excellent in 45 (63%), satisfactory in 24 (33%), poor in 3 (4%), and uninterpretable in 0 patients. Rest images were considered good or excellent in 34 (49%), satisfactory in 23 (33%), poor in 13 (19%), and uninterpretable in 0 patients. Attenuation correction changed the interpretation in 34 (47%) of studies. MPI is feasible and provides acceptable image quality for super-obese patients, although results may be camera and protocol dependent.
Jiang, Hongzhi; Zhao, Huijie; Li, Xudong; Quan, Chenggen
2016-03-07
We propose a novel hyper-thin 3D edge measurement technique to measure the profile of the 3D outer envelope of honeycomb core structures, whose edges are less than 0.1 mm wide. We introduce a triangular layout consisting of two cameras and one projector to measure the hyper-thin 3D edges and eliminate data interference from the walls. A phase-shifting algorithm and the multi-frequency heterodyne phase-unwrapping principle are applied for phase retrieval on the edges. A new stereo matching method based on phase mapping and the epipolar constraint is presented to solve correspondence searching on the edges and to remove false matches that produce 3D outliers. Experimental results demonstrate the effectiveness of the proposed method for measuring the 3D profile of honeycomb core structures.
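A minimal sketch of the two phase steps named in the abstract, assuming N equally spaced phase shifts and a two-frequency heterodyne scheme; fringe frequencies, edge masking, and the stereo-matching stage are omitted.

```python
# Minimal sketch: N-step phase-shifting retrieval and two-frequency heterodyne
# unwrapping, the standard building blocks behind such fringe-projection systems.
import numpy as np

def wrapped_phase(images: np.ndarray) -> np.ndarray:
    """images: (N, H, W) fringe frames with shifts 2*pi*k/N; phase in (-pi, pi]."""
    n = images.shape[0]
    k = np.arange(n).reshape(-1, 1, 1)
    num = np.sum(images * np.sin(2 * np.pi * k / n), axis=0)
    den = np.sum(images * np.cos(2 * np.pi * k / n), axis=0)
    return -np.arctan2(num, den)

def heterodyne_unwrap(phi_hi, phi_lo, freq_ratio):
    """Unwrap a high-frequency phase using a continuous low (beat) frequency phase."""
    k = np.round((freq_ratio * phi_lo - phi_hi) / (2 * np.pi))  # fringe order
    return phi_hi + 2 * np.pi * k
```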
NASA Astrophysics Data System (ADS)
Wu, Wei; Zhao, Dewei; Zhang, Huan
2015-12-01
Super-resolution image reconstruction is an effective method to improve image quality and has important research significance in the field of image processing. However, the choice of dictionary directly affects the efficiency of image reconstruction. Sparse representation theory is introduced into the nearest-neighbor selection problem. Building on sparse-representation-based super-resolution, a super-resolution reconstruction algorithm based on a multi-class dictionary is analyzed. This method avoids the redundancy of training a single overcomplete dictionary, makes the sub-dictionaries more representative, and replaces the traditional Euclidean distance computation to improve the quality of the reconstructed image. In addition, non-local self-similarity regularization is introduced to address the ill-posed problem. Experimental results show that the algorithm achieves much better results than state-of-the-art algorithms in terms of both PSNR and visual perception.
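A minimal sketch of the reconstruction step under a multi-class dictionary, assuming paired low/high-resolution sub-dictionaries, a nearest-centroid class choice standing in for the paper's selection rule, and scikit-learn's orthogonal matching pursuit for the sparse code; all names and sizes are illustrative.

```python
# Minimal sketch: pick a class-specific sub-dictionary, sparse-code the LR patch,
# and synthesize the HR patch with the paired HR sub-dictionary.
import numpy as np
from sklearn.linear_model import orthogonal_mp

def sr_patch(lr_patch, dicts_lr, dicts_hr, centroids, k=5):
    """dicts_lr/dicts_hr: lists of paired (d_lr x K) / (d_hr x K) sub-dictionaries."""
    c = int(np.argmin([np.linalg.norm(lr_patch - mu) for mu in centroids]))  # class
    alpha = orthogonal_mp(dicts_lr[c], lr_patch, n_nonzero_coefs=k)          # sparse code
    return dicts_hr[c] @ alpha                                               # HR synthesis
```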
High-accuracy 3D measurement system based on multi-view and structured light
NASA Astrophysics Data System (ADS)
Li, Mingyue; Weng, Dongdong; Li, Yufeng; Zhang, Longbin; Zhou, Haiyun
2013-12-01
3D surface reconstruction is one of the most important topics in Spatial Augmented Reality (SAR). Using structured light is a simple and rapid way to reconstruct objects. To improve the precision of 3D reconstruction, we present a high-accuracy multi-view 3D measurement system based on Gray-code and phase-shift patterns. We use a camera and a light projector that casts structured light patterns onto the objects. In this system, we use only one camera, taking photos on the left and right sides of the object respectively. In addition, we use VisualSFM to recover the relationships between the perspectives, so camera calibration can be omitted and the positions at which the camera may be placed are no longer limited. We also set an appropriate exposure time to make the scenes covered by the Gray-code patterns more recognizable. All of the above makes the reconstruction more precise. We conducted experiments on different kinds of objects, and a large number of experimental results verify the feasibility and high accuracy of the system.
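As a small illustration of the Gray-code half of such systems, the sketch below decodes a per-pixel stack of Gray-code bits into projector column indices; thresholding of the captured patterns into bits is assumed to have been done already, and the array layout is an illustrative choice.

```python
# Minimal sketch: decode per-pixel Gray-code bits (MSB first) into binary indices,
# as used in Gray-code structured-light correspondence finding.
import numpy as np

def gray_stack_to_index(bits: np.ndarray) -> np.ndarray:
    """bits: (K, H, W) boolean array, MSB first; returns (H, W) integer indices."""
    binary = np.zeros_like(bits, dtype=bool)
    binary[0] = bits[0]
    for k in range(1, bits.shape[0]):
        binary[k] = binary[k - 1] ^ bits[k]   # Gray-to-binary: b_k = b_{k-1} XOR g_k
    weights = 1 << np.arange(bits.shape[0] - 1, -1, -1)
    return np.tensordot(weights, binary.astype(np.int64), axes=1)
```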
Multi-projector auto-calibration and placement optimization for non-planar surfaces
NASA Astrophysics Data System (ADS)
Li, Dong; Xie, Jinghui; Zhao, Lu; Zhou, Lijing; Weng, Dongdong
2015-10-01
Non-planar projection has been widely applied in virtual reality, digital entertainment, and exhibitions because of its flexible layout and immersive display effects. Compared with planar projection, non-planar projection is more difficult to achieve because projector calibration and image distortion correction are difficult processes. This paper uses a cylindrical screen as an example to present a new method for automatically calibrating a multi-projector system in a non-planar environment without using 3D reconstruction. The method corrects the geometric calibration error caused by the screen's manufacturing imperfections, such as an undulating surface or a slant in the vertical plane. In addition, based on actual projection demands, this paper presents overall performance evaluation criteria for the multi-projector system. According to these criteria, we determined the optimal placement for the projectors. The method also extends to surfaces that can be parameterized, such as spheres, ellipsoids, and paraboloids, demonstrating broad applicability.
Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.
Huang, Yan; Wang, Wei; Wang, Liang
2018-04-01
Super-resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently and ignores the intrinsic temporal dependency of video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often shows high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN named bidirectional recurrent convolutional network for efficient multi-frame SR. Different from vanilla RNNs: 1) the commonly used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, which greatly reduce the number of network parameters and model the temporal dependency at a finer, patch-based rather than frame-based, level; and 2) connections from input layers at previous timesteps to the current hidden layer are added via 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns for short-term, fast-varying motions in local adjacent frames. Due to the cheap convolutional operations, our model has low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With this powerful temporal dependency modeling, our model can super-resolve videos with complex motions and achieve good performance.
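A minimal sketch of the recurrent-convolution idea, assuming PyTorch, a single hidden layer, and illustrative channel counts; the bidirectional fusion, 3D feedforward convolutions, and output upsampler of the full network are only hinted at in the comments.

```python
# Minimal sketch: a recurrent cell whose feedforward and recurrent connections are
# weight-shared convolutions, the core idea of a recurrent convolutional video-SR net.
import torch
import torch.nn as nn

class RecurrentConvCell(nn.Module):
    def __init__(self, ch: int = 32):
        super().__init__()
        self.in_conv = nn.Conv2d(1, ch, 3, padding=1)    # per-frame feedforward conv
        self.rec_conv = nn.Conv2d(ch, ch, 3, padding=1)  # recurrent conv, shared in time

    def forward(self, frames):
        """frames: list of (B, 1, H, W) tensors; returns one hidden state per frame."""
        hidden, states = None, []
        for x in frames:
            h = self.in_conv(x)
            if hidden is not None:
                h = h + self.rec_conv(hidden)            # patch-level temporal dependency
            hidden = torch.relu(h)
            states.append(hidden)
        return states

# A bidirectional variant runs a second cell over reversed(frames) and fuses the
# two state sequences before a final reconstruction/upsampling convolution.
```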
A detailed comparison of single-camera light-field PIV and tomographic PIV
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.
2018-03-01
This paper presents a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, extensively examining the difference between the two techniques by varying key parameters such as pixel to microlens ratio (PMR), light-field camera to Tomo-camera pixel ratio (LTPR), particle seeding density, and tomographic camera number. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires a greater overall number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.
Enhancing multi-spot structured illumination microscopy with fluorescence difference
NASA Astrophysics Data System (ADS)
Ward, Edward N.; Torkelsen, Frida H.; Pal, Robert
2018-03-01
Structured illumination microscopy is a super-resolution technique used extensively in biological research. However, this technique is limited in the maximum possible resolution increase. Here we report the results of simulations of a novel enhanced multi-spot structured illumination technique. This method combines the super-resolution technique of difference microscopy with structured illumination deconvolution. Initial results give at minimum a 1.4-fold increase in resolution over conventional structured illumination in a low-noise environment. This new technique also has the potential to be expanded to further enhance axial resolution with three-dimensional difference microscopy. The requirement for precise pattern determination in this technique also led to the development of a new pattern estimation algorithm which proved more efficient and reliable than other methods tested.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Guang; Wei, Jie; Kadbi, Mo
Purpose: To develop and evaluate a super-resolution approach to reconstruct time-resolved 4-dimensional magnetic resonance imaging (TR-4DMRI) with a high spatiotemporal resolution for multi-breathing-cycle motion assessment. Methods and Materials: A super-resolution approach was developed to combine fast 3-dimensional (3D) cine MRI with low resolution during free breathing (FB) and high-resolution 3D static MRI during breath hold (BH) using deformable image registration. A T1-weighted turbo field echo sequence, coronal 3D cine acquisition, partial Fourier approximation, and SENSitivity Encoding parallel acceleration were used. The same MRI pulse sequence, field of view, and acceleration techniques were applied in both FB and BH acquisitions; the intensity-based Demons deformable image registration method was used. Under an institutional review board-approved protocol, 7 volunteers were studied with a 3D cine FB scan (voxel size: 5 × 5 × 5 mm³) at 2 Hz for 40 seconds and a 3D static BH scan (2 × 2 × 2 mm³). To examine the image fidelity of 3D cine and super-resolution TR-4DMRI, a mobile gel phantom with multiple internal targets was scanned at 3 speeds and compared with the 3D static image. Image similarity among 3D cine, 4DMRI, and 3D static was evaluated visually using difference images and quantitatively using voxel intensity correlation and the Dice index (phantom only). Multi-breathing-cycle waveforms were extracted and compared in both phantom and volunteer images using the 3D cine as the reference. Results: Mild imaging artifacts were found in the 3D cine and TR-4DMRI of the mobile gel phantom, with a Dice index of >0.95. Among 7 volunteers, the super-resolution TR-4DMRI yielded high voxel-intensity correlation (0.92 ± 0.05) and low voxel-intensity difference (<0.05). The detected motion differences between TR-4DMRI and 3D cine were -0.2 ± 0.5 mm (phantom) and -0.2 ± 1.9 mm (diaphragms). Conclusion: Super-resolution TR-4DMRI has been reconstructed with adequate temporal (2 Hz) and spatial (2 × 2 × 2 mm³) resolutions. Further TR-4DMRI characterization and improvement are necessary before clinical applications. Multi-breathing cycles can be examined, providing patient-specific breathing irregularities and motion statistics for future 4D radiation therapy.
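A minimal sketch of the deformable-registration step, assuming SimpleITK's Demons filter as a stand-in for the authors' intensity-based Demons implementation; iteration count and smoothing are illustrative, and the images are assumed to be resampled to a common grid.

```python
# Minimal sketch: register a breath-hold high-resolution volume (moving) to a
# free-breathing cine volume (fixed) with an intensity-based Demons filter,
# then warp the moving volume. Parameters are illustrative, not the paper's.
import SimpleITK as sitk

def demons_warp(fixed: sitk.Image, moving: sitk.Image) -> sitk.Image:
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(100)
    demons.SetStandardDeviations(1.0)            # Gaussian smoothing of the field
    field = demons.Execute(fixed, moving)        # dense displacement field image
    transform = sitk.DisplacementFieldTransform(field)
    return sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```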
Multiple Sensor Camera for Enhanced Video Capturing
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko
Camera resolution has improved drastically in response to the demand for high-quality digital images. For example, digital still cameras have several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet, capturing higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.
Zhu, Hong; Tang, Xinming; Xie, Junfeng; Song, Weidong; Mo, Fan; Gao, Xiaoming
2018-02-07
There are many problems with existing reconstruction-based super-resolution algorithms, such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is regarded as the reference image. Subsequently, spatio-temporal remote-sensing images are processed using phase normalization, which reduces the time-phase difference of the image data and enhances the complementarity of information. The multi-scale image information is then decomposed using the L₀ gradient minimization model; the non-redundant information is processed by difference calculation, and the non-redundant and redundant layers are expanded by the iterative back-projection (IBP) technique. The different-scale non-redundant information is adaptively weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, and the peak signal-to-noise ratio (PSNR) is used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real results show an average gain in entropy of up to 0.42 dB for an up-scaling of 2 and a significant gain in enhancement measure evaluation for an up-scaling of 2. The experimental results show that the performance of the AMDE-SR method is better than that of existing super-resolution reconstruction methods in terms of visual and accuracy improvements.
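A minimal sketch of the first step, picking the reference frame by information entropy, assuming 8-bit single-band imagery; the remaining AMDE-SR stages are not reproduced.

```python
# Minimal sketch: Shannon entropy of an 8-bit image, used to choose the
# maximum-entropy frame as the reference image in AMDE-SR's first step.
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                                  # drop empty bins before the log
    return float(-np.sum(p * np.log2(p)))

# reference = max(frames, key=image_entropy)     # frames: iterable of 2-D arrays
```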
Secoli, R; Zondervan, D; Reinkensmeyer, D
2012-01-01
For children with a severe disability, such as can arise from cerebral palsy, becoming independent in mobility is a critical goal. Currently, however, driver training for powered wheelchair use is labor-intensive, requiring hand-over-hand assistance from a skilled therapist to keep the trainee safe. This paper describes the design of a mixed reality environment for semi-autonomous training of wheelchair driving skills. In this system, the wheelchair is used as the gaming input device, and users train driving skills by maneuvering through floor-projected games created with a multi-projector system and a multi-camera tracking system. A force-feedback joystick assists in steering and enhances safety.
NASA Astrophysics Data System (ADS)
McMackin, Lenore; Herman, Matthew A.; Weston, Tyler
2016-02-01
We present the design of a multi-spectral imager built using the architecture of the single-pixel camera. The architecture is enabled by the novel sampling theory of compressive sensing, implemented optically using the Texas Instruments DLP™ micro-mirror array. The array not only implements the spatial modulation necessary for compressive imaging but also provides unique diffractive spectral features that result in a multi-spectral, high-spatial-resolution imager design. The new camera design provides multi-spectral imagery over a wavelength range that extends from the visible to the shortwave infrared without reduction in spatial resolution. In addition to the compressive imaging spectrometer design, we present a diffractive model of the architecture that allows us to predict a variety of detailed functional spatial and spectral design features. We present modeling results, the architectural design, and experimental results that prove the concept.
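A minimal sketch of the compressive-sensing recovery behind a single-pixel measurement model, assuming random ±1 DMD patterns and an l1 prior solved by iterative soft thresholding; the actual camera uses calibrated patterns and a more sophisticated reconstruction.

```python
# Minimal sketch: single-pixel measurements y = A x, recovered by ISTA, i.e.
# minimizing 0.5*||Ax - y||^2 + lam*||x||_1. Sizes and sparsity prior are
# illustrative assumptions, not the camera's actual pipeline.
import numpy as np

def ista(y, A, lam=0.05, iters=200):
    step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - step * A.T @ (A @ x - y)             # gradient step on data term
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m = 64 * 64, 1200                                 # image pixels, DMD measurements
A = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
# x_rec = ista(y, A)   # y: (m,) detector readings; reshape x_rec to (64, 64)
```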
A novel super-resolution camera model
NASA Astrophysics Data System (ADS)
Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli
2015-05-01
To achieve super-resolution (SR) reconstruction of single images and video, a super-resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this, a driving device such as a piezoelectric ceramic is placed in the camera. By controlling the driving device, a set of consecutive low-resolution (LR) images can be obtained and stored instantly, reflecting the randomness of the displacements and the real-time performance of the storage. The LR image sequences contain different redundant information and particular prior information, making it possible to restore a super-resolution image faithfully and effectively. A sampling analysis is used to derive the SR reconstruction principle and the theoretically possible degree of resolution improvement. A learning-based super-resolution algorithm is used to reconstruct single images, and a variational Bayesian algorithm is simulated to reconstruct the low-resolution images with random displacements, modeling the unknown high-resolution image, motion parameters, and unknown model parameters in one hierarchical Bayesian framework. Using a sub-pixel registration method, a super-resolution image of the scene can be reconstructed. Results of reconstruction from 16 images show that this camera model can double the image resolution, obtaining higher-resolution images with currently available hardware.
Development of infrared scene projectors for testing fire-fighter cameras
NASA Astrophysics Data System (ADS)
Neira, Jorge E.; Rice, Joseph P.; Amon, Francine K.
2008-04-01
We have developed two types of infrared scene projectors for hardware-in-the-loop testing of thermal imaging cameras such as those used by fire-fighters. In one, direct projection, images are projected directly into the camera. In the other, indirect projection, images are projected onto a diffuse screen, which is then viewed by the camera. Both projectors use a digital micromirror array as the spatial light modulator, in the form of a Micromirror Array Projection System (MAPS) engine having a resolution of 800 x 600, with aluminum-coated mirrors on a 17 micrometer pitch and a ZnSe protective window. Fire-fighter cameras are often based upon uncooled microbolometer arrays and typically have resolutions of 320 x 240 or lower. For direct projection, we use an argon-arc source, which provides spectral radiance equivalent to a 10,000 Kelvin blackbody over the 7 micrometer to 14 micrometer wavelength range, to illuminate the micromirror array. For indirect projection, an expanded 4 watt CO2 laser beam at a wavelength of 10.6 micrometers illuminates the micromirror array, and the scene formed by the first-order diffracted light from the array is projected onto a diffuse aluminum screen. In both projectors, a well-calibrated reference camera is used to provide non-uniformity correction and brightness calibration of the projected scenes, and the fire-fighter cameras alternately view the same scenes. In this paper, we compare the two methods for this application and report on our quantitative results. Indirect projection has the advantage of more easily filling the wide field of view of the fire-fighter cameras, which is typically about 50 degrees. Direct projection utilizes the available light more efficiently, which will become important in emerging multispectral and hyperspectral applications.
Enhancing multi-spot structured illumination microscopy with fluorescence difference
Torkelsen, Frida H.
2018-01-01
Structured illumination microscopy is a super-resolution technique used extensively in biological research. However, this technique is limited in the maximum possible resolution increase. Here we report the results of simulations of a novel enhanced multi-spot structured illumination technique. This method combines the super-resolution technique of difference microscopy with structured illumination deconvolution. Initial results give at minimum a 1.4-fold increase in resolution over conventional structured illumination in a low-noise environment. This new technique also has the potential to be expanded to further enhance axial resolution with three-dimensional difference microscopy. The requirement for precise pattern determination in this technique also led to the development of a new pattern estimation algorithm which proved more efficient and reliable than other methods tested. PMID:29657751
Toward the light field display: autostereoscopic rendering via a cluster of projectors.
Yang, Ruigang; Huang, Xinyu; Li, Sifang; Jaynes, Christopher
2008-01-01
Ultimately, a display device should be capable of reproducing the visual effects observed in reality. In this paper we introduce an autostereoscopic display that uses a scalable array of digital light projectors and a projection screen augmented with microlenses to simulate a light field for a given three-dimensional scene. Physical objects emit or reflect light in all directions to create a light field that can be approximated by the light field display. The display can simultaneously provide many viewers from different viewpoints a stereoscopic effect without head tracking or special viewing glasses. This work focuses on two important technical problems related to the light field display: calibration and rendering. We present a solution to automatically calibrate the light field display using a camera and introduce two efficient algorithms to render the special multi-view images by exploiting their spatial coherence. The effectiveness of our approach is demonstrated with a four-projector prototype that can display dynamic imagery with full parallax.
NASA Astrophysics Data System (ADS)
Kim, Min Young; Cho, Hyung Suck; Kim, Jae H.
2002-10-01
In recent years, intelligent autonomous mobile robots have drawn tremendous interest as service robots for serving humans and as industrial robots for replacing human labor. To carry out such tasks, robots must be able to sense and recognize the 3D space in which they live or work. In this paper, we deal with a 3D sensing system for the environment recognition of mobile robots. Structured lighting is utilized as the basis of the 3D visual sensor system because of its robustness to the nature of the navigation environment and the easy extraction of the feature information of interest. The proposed sensing system is configured as a trinocular vision system, composed of a flexible multi-stripe laser projector and two cameras. The principle of extracting the 3D information is the optical triangulation method. By modeling the projector as another camera and using the epipolar constraints that the cameras together define, the point-to-point correspondence between the line feature points in each image is established. In this work, the principle of this sensor is described in detail, and a series of experimental tests is performed to show the simplicity, efficiency, and accuracy of this sensor system for 3D environment sensing and recognition.
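For a rectified camera/projector pair, the optical triangulation at the core of such a sensor reduces to the classic relation z = b·f / d between baseline b, focal length f, and disparity d. A back-of-envelope sketch (the baseline and focal values below are invented for illustration, not the paper's calibration):

```python
# Triangulation depth from disparity for a rectified pair (toy values).
import numpy as np

def depth_from_disparity(disparity_px, baseline_m=0.12, focal_px=800.0):
    d = np.asarray(disparity_px, dtype=float)
    z = np.full_like(d, np.inf)          # zero disparity -> point at infinity
    valid = d > 0
    z[valid] = baseline_m * focal_px / d[valid]
    return z

print(depth_from_disparity([8.0, 16.0, 32.0]))  # farther targets, smaller disparity
```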
Super-resolution in a defocused plenoptic camera: a wave-optics-based approach.
Sahin, Erdem; Katkovnik, Vladimir; Gotchev, Atanas
2016-03-01
Plenoptic cameras enable the capture of a light field with a single device. However, with traditional light field rendering procedures, they can provide only low-resolution two-dimensional images. Super-resolution is considered to overcome this drawback. In this study, we present a super-resolution method for the defocused plenoptic camera (Plenoptic 1.0), where the imaging system is modeled using wave optics principles and utilizing low-resolution depth information of the scene. We are particularly interested in super-resolution of in-focus and near in-focus scene regions, which constitute the most challenging cases. The simulation results show that the employed wave-optics model makes super-resolution possible for such regions as long as sufficiently accurate depth information is available.
NASA Astrophysics Data System (ADS)
Rothmund, Sabrina; Niethammer, Uwe; Walter, Marco; Joswig, Manfred
2013-04-01
In recent years, high-resolution, multi-temporal 3D mapping of the Earth's surface using terrestrial laser scanning (TLS), ground-based optical images, and especially low-cost UAV-based (Unmanned Aerial Vehicle) aerial images has grown in importance. This development results from the progressive technical improvement of imaging systems and from freely available multi-view stereo (MVS) software packages. These different methods of data acquisition for generating accurate, high-resolution digital surface models (DSMs) were applied as part of an eight-week field campaign at the Super-Sauze landslide (South French Alps). An area of approximately 10,000 m² with long-term average displacement rates greater than 0.01 m/day was investigated. TLS-based point clouds were acquired from different viewpoints, with an average point spacing between 10 and 40 mm, on different dates. On these days, more than 50 optical images were taken with a low-cost digital compact camera at points along a predefined line on the side of the landslide. Additionally, aerial images were taken by a radio-controlled mini quad-rotor UAV equipped with another low-cost digital compact camera. The flight altitude ranged between 20 m and 250 m, producing a corresponding ground resolution between 0.6 cm and 7 cm. DGPS measurements were carried out as well in order to geo-reference and validate the point cloud data. To generate unscaled photogrammetric 3D point clouds from a disordered and tilted image set, we use the widespread open-source software packages Bundler and PMVS2 (University of Washington). The resulting multi-temporal DSMs are required, on the one hand, to determine the three-dimensional surface deformations and, on the other hand, for the differential correction in orthophoto production. Drawing on the data acquired at the Super-Sauze landslide, we demonstrate the potential but also the limitations of photogrammetric point clouds. To determine their quality, the photogrammetric point clouds are compared with the TLS-based DSMs. The comparison shows that the photogrammetric point accuracies are in the cm-to-dm range and therefore do not reach the quality of the high-resolution TLS-based DSMs. Furthermore, the validation reveals that some of the photogrammetric point clouds exhibit internal curvature effects. The advantages of photogrammetric 3D data acquisition are the use of low-cost equipment and less time-consuming data collection in the field. While the accuracy of the photogrammetric point clouds is not as high as that of TLS-based DSMs, the former method is advantageous in areas where dm-range accuracy is sufficient.
Super-resolved refocusing with a plenoptic camera
NASA Astrophysics Data System (ADS)
Zhou, Zhiliang; Yuan, Yan; Bin, Xiangli; Qian, Lulu
2011-03-01
This paper presents an approach to enhance the resolution of refocused images by super-resolution methods. In plenoptic imaging, we demonstrate that the raw sensor image can be divided into a number of low-resolution angular images with sub-pixel shifts between each other. The sub-pixel shift, which determines the super-resolving ability, is mathematically derived by modeling the plenoptic camera as an equivalent camera array. We implement a simulation to demonstrate the imaging process of a plenoptic camera. A high-resolution image is then reconstructed using a maximum a posteriori (MAP) super-resolution algorithm. Without other degradation effects in simulation, the super-resolved image achieves a resolution as high as predicted by the proposed model. We also build an experimental setup to acquire light fields. With traditional refocusing methods, the image is rendered at a rather low resolution. In contrast, we implement the super-resolved refocusing method and recover an image with more spatial details. To evaluate the performance of the proposed method, we finally compare the reconstructed images using image quality metrics such as the peak signal-to-noise ratio (PSNR).
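The decomposition described above, slicing the raw lenslet image into mutually shifted low-resolution angular views, is easy to sketch for an idealized plenoptic camera whose lenslets cover exactly u x u sensor pixels (real devices need lenslet-center calibration first; that is an assumption of this sketch):

```python
# Slice a plenoptic raw image into sub-aperture (angular) views.
import numpy as np

def sub_aperture_views(raw, u):
    """raw: (H*u, W*u) sensor image; returns a (u, u, H, W) stack of views."""
    Hu, Wu = raw.shape
    H, W = Hu // u, Wu // u
    views = raw[:H * u, :W * u].reshape(H, u, W, u)
    return views.transpose(1, 3, 0, 2)   # views[a, b] = image at lenslet pixel (a, b)
```

Each `views[a, b]` sees the scene through a different sub-aperture, and the sub-pixel disparities between these views are what a MAP super-resolution step can exploit.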
NASA Astrophysics Data System (ADS)
Feng, Zhixin
2018-02-01
Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which has one camera and one projector. In this paper, a novel projector calibration method based on digital image correlation is proposed. In the method, the projector is viewed as an inverse camera, and a plane calibration board with feature points is used to calibrate the projector. During calibration, a random speckle pattern is projected onto the calibration board at different orientations to establish the correspondences between projector images and camera images. Thereby, datasets for projector calibration are generated. The projector can then be calibrated using a well-established camera calibration algorithm. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.
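Once the speckle-correlation step has produced projector-pixel coordinates for the board's feature points in several poses, the "inverse camera" view of the projector means the final step is ordinary camera calibration. A minimal sketch using OpenCV (the array layouts and the 1920x1080 panel size are assumptions for illustration, not values from the paper):

```python
# Calibrate a projector as an inverse camera with OpenCV.
import numpy as np
import cv2

# obj_pts: list over poses of (N, 3) float32 board coordinates (Z = 0 plane)
# proj_pts: list over poses of (N, 1, 2) float32 projector-pixel coordinates
def calibrate_projector(obj_pts, proj_pts, proj_size=(1920, 1080)):
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, proj_pts, proj_size, None, None)
    return rms, K, dist   # reprojection error, intrinsics, distortion
```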
NASA Astrophysics Data System (ADS)
Gwinner, K.; Jaumann, R.; Hauber, E.; Hoffmann, H.; Heipke, C.; Oberst, J.; Neukum, G.; Ansan, V.; Bostelmann, J.; Dumke, A.; Elgner, S.; Erkeling, G.; Fueten, F.; Hiesinger, H.; Hoekzema, N. M.; Kersten, E.; Loizeau, D.; Matz, K.-D.; McGuire, P. C.; Mertens, V.; Michael, G.; Pasewaldt, A.; Pinet, P.; Preusker, F.; Reiss, D.; Roatsch, T.; Schmidt, R.; Scholten, F.; Spiegel, M.; Stesky, R.; Tirsch, D.; van Gasselt, S.; Walter, S.; Wählisch, M.; Willner, K.
2016-07-01
The High Resolution Stereo Camera (HRSC) of ESA's Mars Express is designed to map and investigate the topography of Mars. The camera, in particular its Super Resolution Channel (SRC), also obtains images of Phobos and Deimos on a regular basis. As HRSC is a push broom scanning instrument with nine CCD line detectors mounted in parallel, its unique feature is the ability to obtain along-track stereo images and four colors during a single orbital pass. The sub-pixel accuracy of 3D points derived from stereo analysis allows producing DTMs with grid size of up to 50 m and height accuracy on the order of one image ground pixel and better, as well as corresponding orthoimages. Such data products have been produced systematically for approximately 40% of the surface of Mars so far, while global shape models and a near-global orthoimage mosaic could be produced for Phobos. HRSC is also unique because it bridges between laser altimetry and topography data derived from other stereo imaging instruments, and provides geodetic reference data and geological context to a variety of non-stereo datasets. This paper, in addition to an overview of the status and evolution of the experiment, provides a review of relevant methods applied for 3D reconstruction and mapping, and respective achievements. We will also review the methodology of specific approaches to science analysis based on joint analysis of DTM and orthoimage information, or benefitting from high accuracy of co-registration between multiple datasets, such as studies using multi-temporal or multi-angular observations, from the fields of geomorphology, structural geology, compositional mapping, and atmospheric science. Related exemplary results from analysis of HRSC data will be discussed. After 10 years of operation, HRSC covered about 70% of the surface by panchromatic images at 10-20 m/pixel, and about 97% at better than 100 m/pixel. As the areas with contiguous coverage by stereo data are increasingly abundant, we also present original data related to the analysis of image blocks and address methodology aspects of newly established procedures for the generation of multi-orbit DTMs and image mosaics. The current results suggest that multi-orbit DTMs with grid spacing of 50 m can be feasible for large parts of the surface, as well as brightness-adjusted image mosaics with co-registration accuracy of adjacent strips on the order of one pixel, and at the highest image resolution available. These characteristics are demonstrated by regional multi-orbit data products covering the MC-11 (East) quadrangle of Mars, representing the first prototype of a new HRSC data product level.
LWIR NUC using an uncooled microbolometer camera
NASA Astrophysics Data System (ADS)
Laveigne, Joe; Franks, Greg; Sparkman, Kevin; Prewarski, Marcus; Nehring, Brian; McHugh, Steve
2010-04-01
Performing a good non-uniformity correction (NUC) is a key part of achieving optimal performance from an infrared scene projector (IRSP). Ideally, NUC will be performed in the same band in which the scene projector will be used. Cooled, large-format MWIR cameras are readily available and have been successfully used to perform NUC; however, cooled large-format LWIR cameras are not as common and are prohibitively expensive. Large-format uncooled cameras are far more available and affordable, but present a range of challenges in practical use for performing NUC on an IRSP. Santa Barbara Infrared, Inc. reports progress on a continuing development program to use a microbolometer camera to perform LWIR NUC on an IRSP. Camera instability, temporal response, and thermal resolution are the main difficulties. A discussion of processes developed to mitigate these issues follows.
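For reference, the classic two-point correction that such a program builds on computes a per-pixel gain and offset from flat-field views at two known source levels. The sketch below shows that baseline step only; the paper's actual difficulty, drift and temporal response of the uncooled reference camera, is ignored by this toy version:

```python
# Two-point non-uniformity correction from two flat-field frames.
import numpy as np

def two_point_nuc(frame, flat_lo, flat_hi, level_lo=0.0, level_hi=1.0):
    """flat_lo/flat_hi: per-pixel responses to uniform sources at the two levels."""
    gain = (level_hi - level_lo) / np.clip(flat_hi - flat_lo, 1e-6, None)
    offset = level_lo - gain * flat_lo
    return gain * frame + offset   # maps each pixel onto the common response line
```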
Three-dimensional super-resolved live cell imaging through polarized multi-angle TIRF.
Zheng, Cheng; Zhao, Guangyuan; Liu, Wenjie; Chen, Youhua; Zhang, Zhimin; Jin, Luhong; Xu, Yingke; Kuang, Cuifang; Liu, Xu
2018-04-01
Measuring three-dimensional nanoscale cellular structures is challenging, especially when the structure is dynamic. Owing to informative total internal reflection fluorescence (TIRF) imaging under varied illumination angles, multi-angle (MA) TIRF has been shown to offer nanoscale axial and sub-second temporal resolution. However, conventional MA-TIRF still performs poorly in lateral resolution and fails to characterize the depth image in densely distributed regions. Here, we improve the lateral resolution of MA-TIRF simply by introducing polarization modulation into the illumination procedure. Equipped with a sparsity-based and accelerated proximal algorithm, we recover a more precise 3D sample structure than previous methods, enabling live cell imaging with a temporal resolution of 2 s and recovering high-resolution mitochondria fission and fusion processes. We also share the recovery program, which is, to the best of our knowledge, the first open-source recovery code for MA-TIRF.
Adjustment of multi-CCD-chip-color-camera heads
NASA Astrophysics Data System (ADS)
Guyenot, Volker; Tittelbach, Guenther; Palme, Martin
1999-09-01
The principle of beam-splitter multi-chip cameras consists of splitting an image into multiple differential images of different spectral ranges and distributing these onto separate black-and-white CCD sensors. The resulting electrical signals from the chips are recombined to produce a high-quality color picture on the monitor. Because this principle guarantees higher resolution and sensitivity in comparison to conventional single-chip camera heads, the greater effort is acceptable. Furthermore, multi-chip cameras obtain the complete spectral information for each individual object point, while single-chip systems must rely on interpolation. In a joint project, Fraunhofer IOF and STRACON GmbH (and, in the future, COBRA electronic GmbH) are developing methods for designing the optics and dichroic mirror system of such prism color beam splitter devices. Additionally, techniques and equipment for the alignment and assembly of color-beam-splitter multi-CCD devices on the basis of gluing with UV-curable adhesives have been developed.
Sobieranski, Antonio C; Inci, Fatih; Tekin, H Cumhur; Yuksekkaya, Mehmet; Comunello, Eros; Cobra, Daniel; von Wangenheim, Aldo; Demirci, Utkan
2017-01-01
In this paper, an irregular-displacement-based lensless wide-field microscopy imaging platform is presented, combining digital in-line holography and computational pixel super-resolution using multi-frame processing. The samples are illuminated by a nearly coherent illumination system, and the hologram shadows are projected onto a complementary metal-oxide-semiconductor (CMOS) imaging sensor. To increase the resolution, a multi-frame pixel super-resolution approach is employed to produce a single holographic image from multiple frame observations of the scene with small planar displacements. Displacements are resolved by a hybrid approach: (i) alignment of the LR images by a fast feature-based registration method, and (ii) fine adjustment of the sub-pixel information using a continuous optimization approach designed to find the globally optimal solution. A numerical phase-retrieval method is applied to decode the signal and reconstruct the morphological details of the analyzed sample. The presented approach was evaluated with various biological samples, including sperm and platelets, whose dimensions are on the order of a few microns. The obtained results demonstrate a spatial resolution of 1.55 µm over a field-of-view of ≈30 mm². PMID:29657866
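The coarse alignment step can be approximated with standard phase correlation. The sketch below uses scikit-image's `phase_cross_correlation` as a stand-in for the paper's feature-based registration; the continuous sub-pixel refinement the paper adds afterwards is omitted here:

```python
# Estimate sub-pixel planar shifts between low-resolution frames.
import numpy as np
from skimage.registration import phase_cross_correlation

def estimate_shifts(frames, upsample=20):
    """Returns the (dy, dx) shift of each frame relative to frames[0]."""
    ref = frames[0]
    return [phase_cross_correlation(ref, f, upsample_factor=upsample)[0]
            for f in frames]
```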
Multi-pinhole collimator design for small-object imaging with SiliSPECT: a high-resolution SPECT
NASA Astrophysics Data System (ADS)
Shokouhi, S.; Metzler, S. D.; Wilson, D. W.; Peterson, T. E.
2009-01-01
We have designed a multi-pinhole collimator for a dual-headed, stationary SPECT system that incorporates high-resolution silicon double-sided strip detectors. The compact camera design of our system enables imaging at source-collimator distances between 20 and 30 mm. Our analytical calculations show that using knife-edge pinholes with small-opening angles or cylindrically shaped pinholes in a focused, multi-pinhole configuration in combination with this camera geometry can generate narrow sensitivity profiles across the field of view that can be useful for imaging small objects at high sensitivity and resolution. The current prototype system uses two collimators each containing 127 cylindrically shaped pinholes that are focused toward a target volume. Our goal is imaging objects such as a mouse brain, which could find potential applications in molecular imaging.
The research on multi-projection correction based on color coding grid array
NASA Astrophysics Data System (ADS)
Yang, Fan; Han, Cheng; Bai, Baoxing; Zhang, Chao; Zhao, Yunxiu
2017-10-01
Multi-channel projection systems suffer from disadvantages such as poor timeliness and substantial manual intervention. To solve these problems, this paper proposes a multi-projector correction technique based on a color-coded grid array. Firstly, a color structured light stripe pattern is generated using De Bruijn sequences, and the feature information of the color structured light stripe image is meshed. White solid circles centered on the grid intersections of the colored mesh are constructed as the feature sample set of the projected images. This makes the feature sample set both precisely localizable and robust to noise. Secondly, we establish the sub-pixel geometric mapping between the projection screen and the individual projectors by encoding and decoding the structured light based on the color array, and this geometric mapping is used to solve the homography matrix of each projector. Lastly, because brightness inconsistency in the overlap regions of multi-channel projection seriously degrades the corrected image and leaves it poorly matched to the observer's visual needs, a luminance fusion correction algorithm is used to obtain a projected display image with consistent visual appearance. The experimental results show that this method not only effectively solves the problems of distortion of the multi-projection screen and of luminance interference in the overlapping regions, but also improves the calibration efficiency of multi-channel projection systems and reduces the maintenance cost of intelligent multi-projection systems.
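The decodability of the colored grid rests on the De Bruijn property: every window of n consecutive symbols occurs exactly once in the sequence. A generator using the usual recursive construction (mapping symbols to stripe colors is left implicit) might look like this:

```python
# k-ary De Bruijn sequence B(k, n): every length-n window is unique.
def de_bruijn(k, n):
    a = [0] * k * n
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

print(de_bruijn(3, 3))   # 3 colors, unique windows of length 3, length 27
```

Mapping the k symbols to distinct stripe colors yields a pattern in which any observed run of n stripes identifies its position in the pattern uniquely.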
Full-field 3D shape measurement of specular object having discontinuous surfaces
NASA Astrophysics Data System (ADS)
Zhang, Zonghua; Huang, Shujun; Gao, Nan; Gao, Feng; Jiang, Xiangqian
2017-06-01
This paper presents a novel Phase Measuring Deflectometry (PMD) method to measure specular objects having discontinuous surfaces. A mathematical model is established to directly relate the absolute phase and depth, instead of the phase and gradient. Based on the model, a hardware measuring system has been set up, which consists of a precise translating stage, a projector, a diffuser and a camera. The stage positions the projector and the diffuser together at a known location during measurement. By using model-based and machine-vision methods, system calibration is accomplished to provide the required parameters and conditions. Verification tests are given to evaluate the effectiveness of the developed system. 3D (three-dimensional) shapes of a concave mirror and a monolithic multi-mirror array having multiple specular surfaces have been measured. Experimental results show that the proposed method can effectively obtain the 3D shape of specular objects having discontinuous surfaces.
NASA Astrophysics Data System (ADS)
Mao, Cuili; Lu, Rongsheng; Liu, Zhijian
2018-07-01
In fringe projection profilometry, the phase errors caused by the nonlinear intensity response of digital projectors need to be correctly compensated. In this paper, a multi-frequency inverse-phase method is proposed. The theoretical model of the periodical phase errors is analyzed. The periodical phase errors can be adaptively compensated in the wrapped phase maps by using a set of fringe patterns. The compensated phase is then unwrapped with the multi-frequency method. Compared with conventional methods, the proposed method can greatly reduce the periodical phase error without calibrating the measurement system. Simulation and experimental results are presented to demonstrate the validity of the proposed approach.
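One stage of the multi-frequency unwrapping that follows the compensation step can be sketched as follows: a unit-frequency phase map (one fringe period across the whole field) selects the fringe order of the dense wrapped phase. This is the standard two-frequency formula, not code from the paper:

```python
# Two-frequency temporal phase unwrapping.
import numpy as np

def unwrap_two_freq(phi_high, phi_unit, freq_ratio):
    """phi_high: wrapped phase of the dense fringes; phi_unit: phase of the
    single-period fringes; freq_ratio: dense periods across the field."""
    k = np.round((freq_ratio * phi_unit - phi_high) / (2 * np.pi))  # fringe order
    return phi_high + 2 * np.pi * k
```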
Super-resolution mapping using multi-viewing CHRIS/PROBA data
NASA Astrophysics Data System (ADS)
Dwivedi, Manish; Kumar, Vinay
2016-04-01
High-spatial-resolution Remote Sensing (RS) data provide detailed information which enables high-definition visual image analysis of earth surface features. These data sets also support improved information extraction capabilities at a fine scale. In order to improve the spatial resolution of coarser-resolution RS data, the Super Resolution Reconstruction (SRR) technique, which focuses on multi-angular image sequences, has become widely acknowledged. In this study, multi-angle CHRIS/PROBA data of the Kutch area are used for SR image reconstruction to enhance the spatial resolution from 18 m to 6 m, in the hope of obtaining a better land cover classification. Various SR approaches, including Projection onto Convex Sets (POCS), Robust SR, Iterative Back Projection (IBP), Non-Uniform Interpolation and Structure-Adaptive Normalized Convolution (SANC), were chosen for this study. Subjective assessment through visual interpretation shows substantial improvement in land cover details. Quantitative measures including the peak signal-to-noise ratio and structural similarity are used for the evaluation of image quality. It was observed that the SANC SR technique, using the Vandewalle algorithm for low-resolution image registration, outperformed the other techniques. An SVM-based classifier was then used to classify the SRR data and data resampled to 6 m spatial resolution using bi-cubic interpolation. A comparative analysis was carried out between the classified bicubic-interpolated and SR-derived images of CHRIS/PROBA, and the SR-derived classified data showed a significant improvement of 10-12% in overall accuracy. The results demonstrate that SR methods are able to improve the spatial detail of multi-angle images as well as the classification accuracy.
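Of the compared methods, IBP is the easiest to sketch: simulate the low-resolution frames from the current high-resolution estimate and back-project the residuals. The toy version below assumes integer sub-pixel shifts and a box-average degradation model, far simpler than a realistic sensor model for CHRIS/PROBA data:

```python
# Iterative back-projection (IBP) with a shift + box-average forward model.
import numpy as np

def degrade(hr, dy, dx, s):
    """Forward model: shift by (dy, dx) HR pixels, then s x s box-average."""
    shifted = np.roll(np.roll(hr, -dy, axis=0), -dx, axis=1)
    H, W = shifted.shape
    return shifted.reshape(H // s, s, W // s, s).mean(axis=(1, 3))

def ibp(lr_frames, shifts, s, iters=50):
    # initialize with the pixel-replicated average of the LR frames
    hr = np.kron(sum(lr_frames) / len(lr_frames), np.ones((s, s)))
    for _ in range(iters):
        for lr, (dy, dx) in zip(lr_frames, shifts):
            residual = np.kron(lr - degrade(hr, dy, dx, s), np.ones((s, s)))
            hr += np.roll(np.roll(residual, dx, axis=1), dy, axis=0) / len(lr_frames)
    return hr
```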
Texton-based super-resolution for achieving high spatiotemporal resolution in hybrid camera system
NASA Astrophysics Data System (ADS)
Kamimura, Kenji; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi
2010-05-01
Many super-resolution methods have been proposed to enhance the spatial resolution of images by using iteration and multiple input images. In a previous paper, we proposed an example-based super-resolution method to enhance an image through pixel-based texton substitution in order to reduce the computational cost. In that method, however, we only considered the enhancement of a texture image. In this study, we modified this texton substitution method for a hybrid camera to reduce the required bandwidth of a high-resolution video camera. We applied our algorithm to pairs of high- and low-spatiotemporal-resolution videos, which were synthesized to simulate a hybrid camera. The results showed that the fine detail of the low-resolution video could be reproduced, unlike with bicubic interpolation, and that the required bandwidth of the video camera could be reduced to about one-fifth. It was also shown that the peak signal-to-noise ratios (PSNRs) of the images improved by about 6 dB in a trained frame and by 1.0-1.5 dB in a test frame, as determined by comparison with images processed using bicubic interpolation, and the average PSNRs were higher than those obtained by the well-known Freeman's patch-based super-resolution method. Compared with Freeman's patch-based super-resolution method, the computational time of our method was reduced to almost one-tenth.
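The PSNR figures quoted above follow the standard definition; for completeness, a minimal implementation (assuming both images share the same dynamic range `peak`):

```python
# Peak signal-to-noise ratio in dB.
import numpy as np

def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

The reported gains correspond to comparing `psnr(ref, bicubic)` against `psnr(ref, super_resolved)` on the same reference frame.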
Super-resolution for imagery from integrated microgrid polarimeters.
Hardie, Russell C; LeMaster, Daniel A; Ratliff, Bradley M
2011-07-04
Imagery from microgrid polarimeters is obtained by using a mosaic of pixel-wise micropolarizers on a focal plane array (FPA). Each distinct polarization image is obtained by subsampling the full FPA image. Thus, the effective pixel pitch for each polarization channel is increased and the sampling frequency is decreased. As a result, aliasing artifacts from such undersampling can corrupt the true polarization content of the scene. Here we present the first multi-channel multi-frame super-resolution (SR) algorithms designed specifically for the problem of image restoration in microgrid polarization imagers. These SR algorithms can be used to address aliasing and other degradations, without sacrificing field of view or compromising optical resolution with an anti-aliasing filter. The new SR methods are designed to exploit correlation between the polarimetric channels. One of the new SR algorithms uses a form of regularized least squares and has an iterative solution. The other is based on the faster adaptive Wiener filter SR method. We demonstrate that the new multi-channel SR algorithms are capable of providing significant enhancement of polarimetric imagery and that they outperform their independent channel counterparts.
NASA Astrophysics Data System (ADS)
Neukum, Gerhard; Jaumann, Ralf; Scholten, Frank; Gwinner, Klaus
2017-11-01
At the Institute of Space Sensor Technology and Planetary Exploration of the German Aerospace Center (DLR), the High Resolution Stereo Camera (HRSC) has been designed for international missions to the planet Mars. For more than three years, an airborne version of this camera, the HRSC-A, has been successfully applied in many flight campaigns and in a variety of different applications. It combines 3D capabilities and high resolution with multispectral data acquisition. Variable resolutions can be generated depending on the camera control settings. A high-end GPS/INS system, in combination with the multi-angle image information, yields precise and high-frequency orientation data for the acquired image lines. In order to handle these data, a completely automated photogrammetric processing system has been developed, which allows the generation of multispectral 3D image products for large areas with planimetric and height accuracies in the decimeter range. This accuracy has been confirmed by detailed investigations.
Transmission function properties for multi-layered structures: application to super-resolution.
Mattiucci, N; D'Aguanno, G; Scalora, M; Bloemer, M J; Sibilia, C
2009-09-28
We discuss the properties of the transmission function in the k-space for a generic multi-layered structure. In particular we analytically demonstrate that a transmission greater than one in the evanescent spectrum (amplification of the evanescent modes) can be directly linked to the guided modes supported by the structure. Moreover we show that the slope of the phase of the transmission function in the propagating spectrum is inversely proportional to the ability of the structure to compensate the diffraction of the propagating modes. We apply these findings to discuss several examples where super-resolution is achieved thanks to the simultaneous availability of the amplification of the evanescent modes and the diffraction compensation of the propagating modes.
NASA Astrophysics Data System (ADS)
Ichikawa, Takashi; Obata, Tomokazu
2016-08-01
A design of the wide-field infrared camera (AIRC) for the Antarctic 2.5 m infrared telescope (AIRT) is presented. The off-axis design provides a 7′.5 × 7′.5 field of view with 0″.22 pixel⁻¹ in the wavelength range of 1 to 5 μm for three simultaneous color bands, using cooled optics and three 2048×2048 InSb focal plane arrays. Good image quality is obtained over the entire field of view with practically no chromatic aberration. The image size corresponds to the diffraction limit of a 2.5 m telescope at 2 μm and longer. To exploit the stable atmosphere with extremely low precipitable water vapor (PWV), superb seeing quality, and the cadence of the polar winter at Dome Fuji on the Antarctic plateau, the camera will be dedicated to transit observations of exoplanets. A multi-object spectroscopic mode with low spectral resolution (R ≈ 50-100) will be added for spectroscopic transit observations at 1-5 μm. The spectroscopic capability in the extremely low-PWV environment of Antarctica will be very effective for the study of water vapor in the atmospheres of super-earths.
Field-Portable Pixel Super-Resolution Colour Microscope
Greenbaum, Alon; Akbari, Najva; Feizi, Alborz; Luo, Wei; Ozcan, Aydogan
2013-01-01
Based on partially-coherent digital in-line holography, we report a field-portable microscope that can render lensfree colour images over a wide field-of-view of e.g., >20 mm2. This computational holographic microscope weighs less than 145 grams with dimensions smaller than 17×6×5 cm, making it especially suitable for field settings and point-of-care use. In this lensfree imaging design, we merged a colorization algorithm with a source shifting based multi-height pixel super-resolution technique to mitigate ‘rainbow’ like colour artefacts that are typical in holographic imaging. This image processing scheme is based on transforming the colour components of an RGB image into YUV colour space, which separates colour information from brightness component of an image. The resolution of our super-resolution colour microscope was characterized using a USAF test chart to confirm sub-micron spatial resolution, even for reconstructions that employ multi-height phase recovery to handle dense and connected objects. To further demonstrate the performance of this colour microscope Papanicolaou (Pap) smears were also successfully imaged. This field-portable and wide-field computational colour microscope could be useful for tele-medicine applications in resource poor settings. PMID:24086742
Venkataramani, Varun; Kardorff, Markus; Herrmannsdörfer, Frank; Wieneke, Ralph; Klein, Alina; Tampé, Robert; Heilemann, Mike; Kuner, Thomas
2018-04-03
With continuing advances in the resolving power of super-resolution microscopy, the inefficient labeling of proteins with suitable fluorophores becomes a limiting factor. For example, the low labeling density achieved with antibodies or small molecule tags limits attempts to reveal local protein nano-architecture of cellular compartments. On the other hand, high laser intensities cause photobleaching within and nearby an imaged region, thereby further reducing labeling density and impairing multi-plane whole-cell 3D super-resolution imaging. Here, we show that both labeling density and photobleaching can be addressed by repetitive application of trisNTA-fluorophore conjugates reversibly binding to a histidine-tagged protein by a novel approach called single-epitope repetitive imaging (SERI). For single-plane super-resolution microscopy, we demonstrate that, after multiple rounds of labeling and imaging, the signal density is increased. Using the same approach of repetitive imaging, washing and re-labeling, we demonstrate whole-cell 3D super-resolution imaging compensated for photobleaching above or below the imaging plane. This proof-of-principle study demonstrates that repetitive labeling of histidine-tagged proteins provides a versatile solution to break the 'labeling barrier' and to bypass photobleaching in multi-plane, whole-cell 3D experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, G; Zakian, K; Deasy, J
Purpose: To develop a novel super-resolution time-resolved 4DMRI technique to evaluate multi-breath, irregular and complex organ motion without a respiratory surrogate for radiotherapy planning. Methods: The super-resolution time-resolved (TR) 4DMRI approach combines a series of low-resolution 3D cine MRI images acquired during free breathing (FB) with a high-resolution breath-hold (BH) 3DMRI via deformable image registration (DIR). Five volunteers participated in the study under an IRB-approved protocol. The 3D cine images with voxel size of 5×5×5 mm³ at two volumes per second (2 Hz) were acquired coronally using a T1 fast field echo sequence, half-scan (0.8) acceleration, and SENSE (3) parallel imaging. Phase-encoding was set in the lateral direction to minimize motion artifacts. The BH image with voxel size of 2×2×2 mm³ was acquired using the same sequence within 10 seconds. A demons-based DIR program was employed to produce super-resolution 2 Hz 4DMRI. Registration quality was visually assessed using difference images between TR 4DMRI and 3D cine, and quantitatively assessed using average voxel correlation. The fidelity of the 3D cine images was assessed using a gel phantom and a 1D motion platform by comparing mobile and static images. Results: Owing to the voxel intensity similarity obtained by using the same MRI scanning sequence, accurate DIR between FB and BH images is achieved. The voxel correlations between 3D cine and TR 4DMRI are greater than 0.92 in all cases, and the difference images show minimal residual error with few systematic patterns. The 3D cine images of the mobile gel phantom preserve object geometry with minimal scanning artifacts. Conclusion: The super-resolution time-resolved 4DMRI technique has been achieved via DIR, providing a potential solution for multi-breath motion assessment. Accurate DIR mapping has been achieved to map high-resolution BH images to low-resolution FB images, producing 2 Hz volumetric high-resolution 4DMRI. Further validation and improvement are still required prior to clinical applications. This study is in part supported by the NIH (U54CA137788/U54CA132378).
Jing, Xiao-Yuan; Zhu, Xiaoke; Wu, Fei; Hu, Ruimin; You, Xinge; Wang, Yunhong; Feng, Hui; Yang, Jing-Yu
2017-03-01
Person re-identification has been widely studied due to its importance in surveillance and forensics applications. In practice, gallery images are high resolution (HR), while probe images are usually low resolution (LR) in identification scenarios with large variations of illumination, weather, or camera quality. Person re-identification in this kind of scenario, which we call super-resolution (SR) person re-identification, has not been well studied. In this paper, we propose a semi-coupled low-rank discriminant dictionary learning (SLD²L) approach for the SR person re-identification task. With the HR and LR dictionary pair and mapping matrices learned from the features of HR and LR training images, SLD²L can convert the features of the LR probe images into HR features. To ensure that the converted features have favorable discriminative capability and that the learned dictionaries can well characterize the intrinsic feature spaces of the HR and LR images, we design a discriminant term and a low-rank regularization term for SLD²L. Moreover, considering that low resolution results in different degrees of loss for different types of visual appearance features, we propose a multi-view SLD²L (MVSLD²L) approach, which can learn the type-specific dictionary pair and mappings for each type of feature. Experimental results on multiple publicly available data sets demonstrate the effectiveness of our proposed approaches for the SR person re-identification task.
An Example-Based Super-Resolution Algorithm for Selfie Images
William, Jino Hans; Venkateswaran, N.; Narayanan, Srinath; Ramachandran, Sandeep
2016-01-01
A selfie is typically a self-portrait captured using the front camera of a smartphone. Most state-of-the-art smartphones are equipped with a high-resolution (HR) rear camera and a low-resolution (LR) front camera. As selfies are captured by the front camera with limited pixel resolution, their fine details are largely missing. This paper aims to improve the resolution of selfies by exploiting the fine details in HR images captured by the rear camera, using an example-based super-resolution (SR) algorithm. HR images captured by the rear camera carry significant fine details and are used as exemplars to train an optimal matrix-value regression (MVR) operator. The MVR operator serves as an image-pair prior which learns the correspondence between the LR-HR patch pairs and is effectively used to super-resolve LR selfie images. The proposed MVR algorithm avoids vectorization of image patch pairs and preserves image-level information during both the learning and recovery processes. The proposed algorithm is evaluated for its efficiency and effectiveness, both qualitatively and quantitatively, against other state-of-the-art SR algorithms. The results validate that the proposed algorithm is efficient, as it requires less than 3 seconds to super-resolve an LR selfie, and is effective, as it preserves sharp details without introducing counterfeit fine details. PMID:27064500
NASA Astrophysics Data System (ADS)
Scopatz, Stephen D.; Mendez, Michael; Trent, Randall
2015-05-01
The projection of controlled moving targets is key to the quantitative testing of video capture and post-processing for Motion Imagery. This presentation discusses several implementations of target projectors with moving targets, or apparently moving targets, creating motion to be captured by the camera under test. The targets presented are broadband (UV-VIS-IR) and move in a predictable, repeatable and programmable way; several short videos are included in the presentation. Among the technical approaches are targets that move independently in the camera's field of view, as well as targets that change size and shape. The development of a rotating IR and VIS 4-bar target projector with programmable rotational velocity and acceleration control for testing hyperspectral cameras is discussed. A related issue for motion imagery is evaluated by simulating a blinding flash, an impulse of broadband photons lasting less than 2 milliseconds, to assess the camera's reaction to a large, fast change in signal. A traditional approach of gimbal-mounting the camera in combination with the moving target projector is discussed as an alternative to high-priced flight simulators. Based on the use of the moving target projector, several standard tests are proposed to provide counterparts to MTF (resolution), SNR, and minimum detectable signal at velocity. Several unique metrics are suggested for Motion Imagery, including Maximum Velocity Resolved (a measure of the greatest velocity that is accurately tracked by the camera system) and Missing Object Tolerance (a measurement of tracking ability when the target is obscured in the images). These metrics are applicable to UV-VIS-IR wavelengths and can be used to assist in camera and algorithm development, as well as in comparing various systems by presenting exactly the same scenes to the cameras in a repeatable way.
High-precision real-time 3D shape measurement based on a quad-camera system
NASA Astrophysics Data System (ADS)
Tao, Tianyang; Chen, Qian; Feng, Shijie; Hu, Yan; Zhang, Minliang; Zuo, Chao
2018-01-01
Phase-shifting profilometry (PSP) based 3D shape measurement is well established in various applications due to its high accuracy, simple implementation, and robustness to environmental illumination and surface texture. In PSP, higher depth resolution generally requires a higher fringe density in the projected patterns, which, in turn, leads to severe phase ambiguities that must be resolved with additional information from phase coding and/or geometric constraints. However, in order to guarantee the reliability of phase unwrapping, available techniques usually come at the cost of an increased number of patterns, reduced fringe amplitude, and complicated post-processing algorithms. In this work, we demonstrate that by using a quad-camera multi-view fringe projection system and carefully arranging the relative spatial positions between the cameras and the projector, it becomes possible to completely eliminate the phase ambiguities of conventional high-fringe-density three-step PSP patterns without projecting any additional patterns or embedding any auxiliary signals. Benefiting from the position-optimized quad-camera system, stereo phase unwrapping can be performed efficiently and reliably through flexible phase consistency checks. Besides, the redundant information of multiple phase consistency checks is fully exploited through a weighted phase difference scheme to further enhance the reliability of phase unwrapping. This paper explains the 3D measurement principle and the basic design of the quad-camera system, and finally demonstrates that, in a large measurement volume of 200 mm × 200 mm × 400 mm, the resulting dynamic 3D sensing system can realize real-time 3D reconstruction at 60 frames per second with a depth precision of 50 μm.
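For the three-step patterns the system keeps unmodified, the wrapped phase comes from the standard arctangent formula: with intensities I = A + B·cos(φ + δ) at shifts δ = -2π/3, 0, +2π/3, the phase is φ = atan2(√3·(I₋ - I₊), 2·I₀ - I₋ - I₊). A one-line sketch of that retrieval step (the unwrapping and stereo consistency checks are the paper's contribution and are not shown):

```python
# Wrapped-phase retrieval for three-step phase-shifting profilometry.
import numpy as np

def three_step_phase(i_minus, i_zero, i_plus):
    """Images captured at phase shifts -2*pi/3, 0, +2*pi/3."""
    return np.arctan2(np.sqrt(3.0) * (i_minus - i_plus),
                      2.0 * i_zero - i_minus - i_plus)
```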
Wang, Yunlong; Liu, Fei; Zhang, Kunbo; Hou, Guangqi; Sun, Zhenan; Tan, Tieniu
2018-09-01
The low spatial resolution of light-field images poses significant difficulties in exploiting their advantages. To mitigate the dependency on accurate depth or disparity information as priors for light-field image super-resolution, we propose an implicitly multi-scale fusion scheme to accumulate contextual information from multiple scales for super-resolution reconstruction. The implicitly multi-scale fusion scheme is then incorporated into a bidirectional recurrent convolutional neural network, which aims to iteratively model the spatial relations between horizontally or vertically adjacent sub-aperture images of light-field data. Within the network, the recurrent convolutions are modified to be more effective and flexible in modeling the spatial correlations between neighboring views. A horizontal sub-network and a vertical sub-network of the same structure are ensembled for the final output via stacked generalization. Experimental results on synthetic and real-world data sets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in peak signal-to-noise ratio and gray-scale structural similarity indexes, and also achieves superior quality for human visual systems. Furthermore, the proposed method can enhance the performance of light-field applications such as depth estimation.
Demosaiced pixel super-resolution for multiplexed holographic color imaging
Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan
2016-01-01
To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., at red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor-chip can also be used to demultiplex three wavelengths that simultaneously illuminate the sample and digitally retrieve individual sets of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging using simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress the artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging by 3-fold. The D-PSR method is broadly applicable to holographic microscopy applications, where high-resolution imaging and multi-wavelength illumination are desired. PMID:27353242
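The de-multiplexing starting point is the Bayer sub-mosaics themselves: each channel is a 2x-undersampled hologram of one illumination band, which is exactly why naive demosaicing produces artifacts and why the pixel-super-resolution step is merged in. A sketch assuming an RGGB layout (the layout is an assumption, not stated in the abstract):

```python
# Split a Bayer raw frame into its four undersampled channel mosaics.
import numpy as np

def split_bayer(raw):
    """raw: (H, W) sensor frame with an assumed RGGB mosaic."""
    return {"R":  raw[0::2, 0::2], "G1": raw[0::2, 1::2],
            "G2": raw[1::2, 0::2], "B":  raw[1::2, 1::2]}
```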
High-Resolution Surface Reconstruction from Imagery for Close Range Cultural Heritage Applications
NASA Astrophysics Data System (ADS)
Wenzel, K.; Abdel-Wahab, M.; Cefalu, A.; Fritsch, D.
2012-07-01
The recording of high-resolution point clouds with sub-mm resolution is a demanding and cost-intensive task, especially with current equipment like handheld laser scanners. We present an image-based approach, where techniques of image matching and dense surface reconstruction are combined with a compact and affordable rig of off-the-shelf industrial cameras. Such cameras provide high spatial resolution with low radiometric noise, which enables a one-shot solution and thus efficient data acquisition while satisfying high accuracy requirements. However, the largest drawback of image-based solutions is often the acquisition of surfaces with low texture, where the image matching process might fail. Thus, an additional structured light projector is employed, represented here by the pseudo-random pattern projector of the Microsoft Kinect. Its strong infrared laser projects speckles of different sizes. By using dense image matching techniques on the acquired images, a 3D point can be derived for almost every pixel. The use of multiple cameras enables the acquisition of a high-resolution point cloud with high accuracy for each shot. With the proposed system, up to 3.5 million 3D points with sub-mm accuracy can be derived per shot. The registration of multiple shots is performed by Structure and Motion reconstruction techniques, where feature points are used to derive the camera positions and rotations automatically without initial information.
Multi-pulse pumping for far-field super-resolution imaging
NASA Astrophysics Data System (ADS)
Requena, Sebastian; Raut, Sangram; Doan, Hung; Kimball, Joe; Fudala, Rafal; Borejdo, Julian; Gryczynski, Ignacy; Strzhemechny, Yuri; Gryczynski, Zygmunt
2016-02-01
Recently, far-field optical imaging with a resolution significantly beyond the diffraction limit has attracted tremendous attention, allowing for high-resolution imaging in living objects. Various methods have been proposed, divided into two basic approaches: deterministic super-resolution, like STED or RESOLFT, and stochastic super-resolution, like PALM or STORM. We propose to achieve super-resolution in far-field fluorescence imaging by the use of controllable (on-demand) bursts of pulses that can change the fluorescence signal of the long-lived component by over an order of magnitude. We demonstrate that two beads, one labeled with a long-lived dye and another with a short-lived dye, separated by a distance below 100 nm, can be easily resolved in a single experiment. The proposed method can be used to separate two biological structures in a cell by targeting them with two antibodies labeled with long-lived and short-lived fluorophores.
Autocalibration of a projector-camera system.
Okatani, Takayuki; Deguchi, Koichiro
2005-12-01
This paper presents a method for calibrating a projector-camera system that consists of multiple projectors (or multiple poses of a single projector), a camera, and a planar screen. We consider the problem of estimating the homography between the screen and the image plane of the camera or the screen-camera homography, in the case where there is no prior knowledge regarding the screen surface that enables the direct computation of the homography. It is assumed that the pose of each projector is unknown while its internal geometry is known. Subsequently, it is shown that the screen-camera homography can be determined from only the images projected by the projectors and then obtained by the camera, up to a transformation with four degrees of freedom. This transformation corresponds to arbitrariness in choosing a two-dimensional coordinate system on the screen surface and when this coordinate system is chosen in some manner, the screen-camera homography as well as the unknown poses of the projectors can be uniquely determined. A noniterative algorithm is presented, which computes the homography from three or more images. Several experimental results on synthetic as well as real images are shown to demonstrate the effectiveness of the method.
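The building block throughout is the homography induced by the planar screen between projector pixels and camera pixels. Estimating one from observed point correspondences is routine with OpenCV; the point values below are invented for illustration, and the paper's actual contribution lies in composing such homographies into the screen-camera homography without known screen geometry:

```python
# Estimate a projector-to-camera homography from point correspondences.
import numpy as np
import cv2

proj_pts = np.float32([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5]]) * 800
cam_pts = np.float32([[102, 95], [713, 120], [690, 642], [88, 610], [398, 367]])
H, mask = cv2.findHomography(proj_pts, cam_pts, cv2.RANSAC, 3.0)
print(H)   # 3x3 matrix mapping projected points into the camera image
```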
Autocalibration of multiprojector CAVE-like immersive environments.
Sajadi, Behzad; Majumder, Aditi
2012-03-01
In this paper, we present the first method for the geometric autocalibration of multiple projectors on a set of CAVE-like immersive display surfaces including truncated domes and 4 or 5-wall CAVEs (three side walls, floor, and/or ceiling). All such surfaces can be categorized as swept surfaces and multiple projectors can be registered on them using a single uncalibrated camera without using any physical markers on the surface. Our method can also handle nonlinear distortion in the projectors, common in compact setups where a short throw lens is mounted on each projector. Further, when the whole swept surface is not visible from a single camera view, we can register the projectors using multiple pan and tilted views of the same camera. Thus, our method scales well with different size and resolution of the display. Since we recover the 3D shape of the display, we can achieve registration that is correct from any arbitrary viewpoint appropriate for head-tracked single-user virtual reality systems. We can also achieve wallpapered registration, more appropriate for multiuser collaborative explorations. Though much more immersive than common surfaces like planes and cylinders, general swept surfaces are used today only for niche display environments. Even the more popular 4 or 5-wall CAVE is treated as a piecewise planar surface for calibration purposes and hence projectors are not allowed to be overlapped across the corners. Our method opens up the possibility of using such swept surfaces to create more immersive VR systems without compromising the simplicity of having a completely automatic calibration technique. Such calibration allows completely arbitrary positioning of the projectors in a 5-wall CAVE, without respecting the corners.
The Development of a Computer Controlled Super 8 Motion Picture Projector.
ERIC Educational Resources Information Center
Reynolds, Eldon J.
Instructors in Child Development at the University of Texas at Austin selected sound motion pictures as the most effective medium to simulate the observation of children in nursery laboratories. A computer controlled projector was designed for this purpose. An interface and control unit controls the Super 8 projector from a time-sharing computer…
Conceptual design for an AIUC multi-purpose spectrograph camera using DMD technology
NASA Astrophysics Data System (ADS)
Rukdee, S.; Bauer, F.; Drass, H.; Vanzi, L.; Jordan, A.; Barrientos, F.
2017-02-01
Current and upcoming massive astronomical surveys are expected to discover a torrent of objects, which need ground-based follow-up observations to characterize their nature. For transient objects in particular, rapid, early and efficient spectroscopic identification is needed. A small-field Integral Field Unit (IFU), for instance, would mitigate traditional slit losses and acquisition time. To this end, we present the design of a Digital Micromirror Device (DMD) multi-purpose spectrograph camera capable of running in several modes: traditional longslit, small-field patrol IFU, multi-object and full-field IFU mode via Hadamard spectra reconstruction. The AIUC Optical multi-purpose CAMera (AIUCOCAM) is a low-resolution spectrograph camera of R ≈ 1,600 covering the spectral range of 0.45-0.85 μm. We employ a VPH grating as a disperser, which is removable to allow an imaging mode. This spectrograph is envisioned for use on a 1-2 m class telescope in Chile to take advantage of good site conditions. We present design decisions and challenges for a cost-effective robotized spectrograph. The resulting instrument is remarkably versatile, capable of addressing a wide range of scientific topics.
You, Wei; Cretu, Edmond; Rohling, Robert
2013-11-01
This paper investigates a low-computational-cost super-resolution ultrasound imaging method that leverages the asymmetric vibration mode of CMUTs. Instead of focusing on the broadband signal received on the entire CMUT membrane, we utilize the differential signal received on the left and right parts of the membrane, obtained with a multi-electrode CMUT structure. The differential signal reflects the asymmetric vibration mode of the CMUT cell excited by the nonuniform acoustic pressure field impinging on the membrane, and has a resonant component in immersion. To improve the resolution, we propose the following imaging method: a set of manifold matrices of CMUT responses for multiple focal directions is constructed off-line from a grid of hypothetical point targets. During the subsequent imaging process, the array sequentially steers to multiple angles, and the amplitudes (weights) of all hypothetical targets at each angle are estimated in a maximum a posteriori (MAP) process with the manifold matrix corresponding to that angle. Then, the weight vector undergoes a directional pruning process to remove false estimates at other angles caused by side-lobe energy. Ultrasound imaging simulation is performed on ring and linear arrays with a simulation program extended with a multi-electrode CMUT structure capable of obtaining both average and differential received signals. Because the differential signals from all receiving channels form a more distinctive temporal pattern than the average signals, better MAP estimation results are expected than with the average signals. The imaging simulation shows that using differential signals alone, or in combination with the average signals, produces better lateral resolution than the traditional phased array or the average signals alone. This study is an exploration of the potential benefits of asymmetric CMUT responses for super-resolution imaging.
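The per-angle MAP estimation step can be sketched compactly: with a Gaussian noise model and a zero-mean Gaussian prior on the target amplitudes, the MAP solution reduces to ridge-regularized least squares. The manifold matrix, noise level, and prior width below are illustrative placeholders, not values from the paper.

```python
import numpy as np

def map_weights(A, y, sigma_noise=0.1, sigma_prior=1.0):
    """MAP amplitude estimate for y = A @ w + noise.

    A : (M, N) manifold matrix of responses to N hypothetical point targets
    y : (M,) received data at one steering angle
    With Gaussian likelihood and prior, MAP reduces to ridge regression:
    (A^T A + lam * I) w = A^T y, where lam = (sigma_noise / sigma_prior)^2.
    """
    lam = (sigma_noise / sigma_prior) ** 2
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

# Synthetic check with two true targets among 50 grid points.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
w_true = np.zeros(50)
w_true[[7, 23]] = 1.0
y = A @ w_true + 0.05 * rng.standard_normal(200)
print(np.argsort(map_weights(A, y))[-2:])   # indices of the two strongest weights
```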
Center for Coastline Security Technology, Year 3
2008-05-01
[Report excerpt; recovered from section and figure titles:] Describes a stereo imaging and projection system that combines a pair of FAU's HDMAX video cameras with a pair of Sony SRX-R105 digital cinema projectors, covering polarization control for 3D imaging, the HDMAX camera and SRX-R105 projector configuration for 3D display, and the effect of camera rotation on the projected overlay image.
Super-Resolution in Plenoptic Cameras Using FPGAs
Pérez, Joel; Magdaleno, Eduardo; Pérez, Fernando; Rodríguez, Manuel; Hernández, David; Corrales, Jaime
2014-01-01
Plenoptic cameras are a new type of sensor that extends the possibilities of current commercial cameras, allowing 3D refocusing or the capture of 3D depths. One of the limitations of plenoptic cameras is their limited spatial resolution. In this paper we describe a fast, specialized hardware implementation of a super-resolution algorithm for plenoptic cameras. The algorithm has been designed for field-programmable gate array (FPGA) devices using VHDL (very high speed integrated circuit (VHSIC) hardware description language). With this technology, we obtain an acceleration of several orders of magnitude, exploiting the FPGA's extremely high-performance signal processing capability through parallelism and a pipeline architecture. The system has been developed using generics of the VHDL language, which allows a very versatile and parameterizable system. The system user can easily modify parameters such as the data width, the number of microlenses of the plenoptic camera, their size and shape, and the super-resolution factor. The speed of the algorithm in the FPGA has been successfully compared with execution on a conventional computer for several image sizes and different 3D refocusing planes.
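For context on what such an FPGA pipeline accelerates, a generic software shift-and-add super-resolution over sub-aperture views is sketched below. This is a common scheme for plenoptic super-resolution under assumed per-view subpixel shifts; it is not claimed to be the exact algorithm implemented in the paper.

```python
import numpy as np

def shift_and_add(views, shifts, factor):
    """Fuse low-resolution views onto a finer grid by subpixel shift-and-add.

    views  : list of (H, W) sub-aperture images from the microlens array
    shifts : per-view (dy, dx) subpixel shifts, in low-resolution pixels
    factor : integer super-resolution factor
    """
    h, w = views[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    ys, xs = np.mgrid[0:h, 0:w]
    for img, (dy, dx) in zip(views, shifts):
        # Scatter each low-res sample to its nearest high-res grid cell.
        hy = np.clip(np.round((ys + dy) * factor).astype(int), 0, h * factor - 1)
        hx = np.clip(np.round((xs + dx) * factor).astype(int), 0, w * factor - 1)
        np.add.at(acc, (hy, hx), img)
        np.add.at(cnt, (hy, hx), 1.0)
    return acc / np.maximum(cnt, 1.0)   # average where several samples landed
```

The independent per-view scatter/accumulate structure is one reason such algorithms parallelize well in hardware.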
Super-resolution with a positive epsilon multi-quantum-well super-lens
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bak, A. O.; Giannini, V.; Maier, S. A.
2013-12-23
We design an anisotropic and dichroic quantum metamaterial that achieves super-resolution without the need for a negative permittivity. When exploring the parameters of the structure, we take into account the limits of semiconductor fabrication technology based on quantum well stacks. By heavily doping the structure with free electrons, we obtain an anisotropic effective medium with a prolate ellipsoidal dispersion curve, which allows near-diffractionless propagation of light (similar to an epsilon-near-zero hyperbolic lens). This, coupled with low absorption, allows us to resolve images at the sub-wavelength scale at distances 6 times greater than with equivalent natural materials.
Hybrid Image Fusion for Sharpness Enhancement of Multi-Spectral Lunar Images
NASA Astrophysics Data System (ADS)
Awumah, Anna; Mahanti, Prasun; Robinson, Mark
2016-10-01
Image fusion enhances the sharpness of a multi-spectral (MS) image by incorporating spatial details from a higher-resolution panchromatic (Pan) image [1,2]. Known applications of image fusion for planetary images are rare, although image fusion is well known for its applications to Earth-based remote sensing. In a recent work [3], six different image fusion algorithms were implemented and their performances were verified with images from the Lunar Reconnaissance Orbiter (LRO) Camera. The image fusion procedure obtained a high-resolution multi-spectral (HRMS) product from the LRO Narrow Angle Camera (used as Pan) and LRO Wide Angle Camera (used as MS) images. The results showed that the Intensity-Hue-Saturation (IHS) algorithm yields a product of high spatial quality, while the wavelet-based image fusion algorithm best preserves spectral quality among all the algorithms. In this work we show the results of a hybrid IHS-wavelet image fusion algorithm applied to LROC MS images. The hybrid method provides the best HRMS product, both in terms of spatial resolution and preservation of spectral details. Results from hybrid image fusion can enable new science and increase the science return from existing LROC images.
[1] Pohl, C., and J. L. Van Genderen. "Review article: Multisensor image fusion in remote sensing: concepts, methods and applications." International Journal of Remote Sensing 19.5 (1998): 823-854.
[2] Zhang, Y. "Understanding image fusion." Photogrammetric Engineering & Remote Sensing 70.6 (2004): 657-661.
[3] Mahanti, P., et al. "Enhancement of spatial resolution of the LROC Wide Angle Camera images." XXIII ISPRS Congress Archives (2016).
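As a sketch of the IHS component of such a hybrid scheme (generic additive IHS pan-sharpening with equal band weights, assumed here; not the authors' exact LROC pipeline): the upsampled MS bands receive the spatial detail of the Pan band as the difference between Pan and the MS intensity.

```python
import numpy as np

def ihs_pansharpen(ms, pan):
    """Generic additive IHS pan-sharpening.

    ms  : (3, H, W) multispectral image, upsampled to the Pan grid
    pan : (H, W) higher-resolution panchromatic image
    Each band k becomes ms[k] + (pan - I), with I the mean of the MS bands.
    """
    intensity = ms.mean(axis=0)            # I component, equal band weights
    detail = pan - intensity               # high-frequency spatial detail
    return np.clip(ms + detail[None, :, :], 0.0, None)
```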
Detailed analysis of an optimized FPP-based 3D imaging system
NASA Astrophysics Data System (ADS)
Tran, Dat; Thai, Anh; Duong, Kiet; Nguyen, Thanh; Nehmetallah, Georges
2016-05-01
In this paper, we present a detailed analysis and a step-by-step implementation of an optimized fringe projection profilometry (FPP) based 3D shape measurement system. First, we propose a multi-frequency and multi-phase-shifting sinusoidal fringe pattern reconstruction approach to increase the accuracy and sensitivity of the system. Second, compensation of the phase error caused by the nonlinear transfer function of the projector and camera is performed through polynomial approximation. Third, phase unwrapping is performed using spatial and temporal techniques, and the trade-off between processing speed and high accuracy is discussed in detail. Fourth, generalized camera and system calibration are developed for the phase-to-real-world coordinate transformation. The calibration coefficients are estimated accurately using a reference plane and several gauge blocks with precisely known heights, employing a nonlinear least-squares fitting method. Fifth, a texture is attached to the height profile by registering a 2D photograph to the 3D height map. The last step is to perform 3D image fusion and registration using an iterative closest point (ICP) algorithm for a full-field-of-view reconstruction. The system is experimentally constructed using compact, portable, and low-cost off-the-shelf components. A MATLAB® based GUI is developed to control and synchronize the whole system.
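For reference, the core of the N-step phase-shifting scheme in the first step can be written compactly. This is the textbook estimator for equally spaced phase shifts; the frame count and fringe model are generic, not the paper's specific parameters.

```python
import numpy as np

def phase_from_shifts(frames):
    """Wrapped phase from N fringe images I_n = A + B*cos(phi + 2*pi*n/N).

    frames : (N, H, W) array of captured intensities
    returns the wrapped phase phi in (-pi, pi]
    """
    n = len(frames)
    deltas = 2 * np.pi * np.arange(n) / n
    num = sum(I * np.sin(d) for I, d in zip(frames, deltas))
    den = sum(I * np.cos(d) for I, d in zip(frames, deltas))
    return -np.arctan2(num, den)   # follows from orthogonality of sin/cos sums
```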
Konduru, Anil Reddy; Yelikar, Balasaheb R; Sathyashree, K V; Kumar, Ankur
2018-01-01
Open-source technologies and mobile innovations have radically changed the way people interact with technology. These innovations and advancements have been used across various disciplines and already have a significant impact. Microscopy, with its focus on visually contrasting colors for better appreciation of morphology, forms the core of disciplines such as pathology, microbiology, and anatomy. Here, learning happens with the aid of multi-head microscopes and digital camera systems for teaching larger groups and for organizing interactive sessions for students or faculty of other departments. The cost of the original equipment manufacturer (OEM) camera systems is a limiting factor in bringing this useful technology to all locations. To avoid this, we have used low-cost technologies like the Raspberry Pi, Mobile High-Definition Link, and 3D printing of adapters to create portable camera systems. Adopting these open-source technologies enabled us to connect any binocular or trinocular microscope to a projector or HD television at a fraction of the cost of the OEM camera systems, with comparable quality. These systems, in addition to being cost-effective, have also provided the added advantage of portability, thus providing much-needed flexibility at various teaching locations.
Liquid crystal light valve technologies for display applications
NASA Astrophysics Data System (ADS)
Kikuchi, Hiroshi; Takizawa, Kuniharu
2001-11-01
The liquid crystal (LC) light valve, a spatial light modulator that uses LC material, is a very important device in the areas of display development, image processing, optical computing, holograms, etc. In particular, there have been dramatic developments in the past few years in the application of the LC light valve to projectors and other display technologies. Various LC operating modes have been developed, including thin-film transistors, MOS-FETs and other active-matrix drive techniques, to meet the requirements for higher resolution, and substantial improvements have been achieved in the performance of optical systems, resulting in brighter display images. Given this background, the number of applications for the LC light valve has greatly increased. The resolution has increased from QVGA (320 x 240) to QXGA (2048 x 1536), or even super-high resolutions of eight million pixels. In the area of optical output, projectors of 600 to 13,000 lm are now available, and they are used for presentations, home theatres, electronic cinema and other diverse applications. Projectors using the LC light valve can display high-resolution images on large screens, and they are expected to be developed further as part of hyper-reality visual systems. This paper provides an overview of the needs for large-screen displays, human factors related to visual effects, the ways in which LC light valves are applied to projectors, improvements in moving-picture quality, and the results of the latest studies aimed at increasing the quality of still and moving images.
How to Choose--and Use--Motion Picture Projectors
ERIC Educational Resources Information Center
Training, 1976
1976-01-01
Suggests techniques for selecting super 8 and 16mm movie projectors for various training and communication needs. Charts list various characteristics for 17 models of 8mm projectors with built-in screen, 7 models without screen, and 33 models of 16mm projectors. (WL)
Optical super-resolution effect induced by nonlinear characteristics of graphene oxide films
NASA Astrophysics Data System (ADS)
Zhao, Yong-chuang; Nie, Zhong-quan; Zhai, Ai-ping; Tian, Yan-ting; Liu, Chao; Shi, Chang-kun; Jia, Bao-hua
2018-01-01
In this work, we focus on the optical super-resolution effect induced by the strong nonlinear saturation absorption (NSA) of graphene oxide (GO) membranes. The third-order optical nonlinearities are characterized by the canonical Z-scan technique under femtosecond laser (wavelength: 800 nm, pulse width: 100 fs) excitation. By controlling the applied femtosecond laser energy, the NSA of the GO films can be tuned continuously. The GO film is placed at the focal plane as a unique amplitude filter to improve the resolution of the focused field. A multi-layer system model is proposed to describe the generation of a deep sub-wavelength spot associated with the nonlinearity of GO films. Moreover, the parameter conditions to achieve the best resolution (~λ/6) are fully determined. The demonstrated results are useful for high-density optical recording and storage, nanolithography, and super-resolution optical imaging.
NASA Astrophysics Data System (ADS)
Li, Ke; Chen, Jianping; Sofia, Giulia; Tarolli, Paolo
2014-05-01
Moon surface features have great significance in understanding and reconstructing the lunar geological evolution. Linear structures like rilles and ridges are closely related to internally forced tectonic movement. The craters widely distributed on the Moon are also key research targets for externally forced geological evolution. The extremely rare availability of samples and the difficulty of field work make remote sensing the most important approach for planetary studies. New and advanced lunar probes launched by China, the U.S., Japan and India nowadays provide a lot of high-quality data, especially in the form of high-resolution Digital Terrain Models (DTMs), bringing new opportunities and challenges for feature extraction on the Moon. The aim of this study is to recognize and extract lunar features using geomorphometric analysis based on multi-scale parameters and multi-resolution DTMs. The considered digital datasets include CE1-LAM (Chang'E One, Laser AltiMeter) data with a resolution of 500 m/pix, LRO-WAC (Lunar Reconnaissance Orbiter, Wide Angle Camera) data with a resolution of 100 m/pix, LRO-LOLA (Lunar Reconnaissance Orbiter, Lunar Orbiter Laser Altimeter) data with a resolution of 60 m/pix, and LRO-NAC (Lunar Reconnaissance Orbiter, Narrow Angle Camera) data with a resolution of 2-5 m/pix. We considered surface derivatives to recognize the linear structures, including rilles and ridges. Different window scales and thresholds are considered for feature extraction. We also calculated the roughness index to identify erosion/deposit areas within craters. The results underline the suitability of the adopted methods for feature recognition on the Moon's surface. The roughness index is found to be a useful tool to distinguish new craters, with higher roughness, from old craters, which present a smoother surface.
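The abstract does not spell out its roughness definition; a common choice, assumed here, is the local standard deviation of elevation in a moving window:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def roughness(dtm, window=9):
    """Roughness as the local standard deviation of elevation.

    dtm    : 2D elevation array (e.g., a LOLA- or NAC-derived DTM)
    window : side length of the square moving window, in pixels
    """
    mean = uniform_filter(dtm, size=window)
    mean_sq = uniform_filter(dtm * dtm, size=window)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))  # var = E[x^2]-E[x]^2
```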
Portable and cost-effective pixel super-resolution on-chip microscope for telemedicine applications.
Bishara, Waheb; Sikora, Uzair; Mudanyali, Onur; Su, Ting-Wei; Yaglidere, Oguzhan; Luckhart, Shirley; Ozcan, Aydogan
2011-01-01
We report a field-portable lensless on-chip microscope with a lateral resolution of <1 μm and a large field-of-view of ~24 mm². This microscope is based on digital in-line holography and a pixel super-resolution algorithm that processes multiple lensfree holograms to obtain a single high-resolution hologram. In its compact and cost-effective design, we utilize 23 light-emitting diodes butt-coupled to 23 multi-mode optical fibers and a simple optical filter, with no moving parts. Weighing only ~95 grams, this field-portable microscope is demonstrated by imaging various objects, including human malaria parasites in thin blood smears.
Fast 3D NIR systems for facial measurement and lip-reading
NASA Astrophysics Data System (ADS)
Brahm, Anika; Ramm, Roland; Heist, Stefan; Rulff, Christian; Kühmstedt, Peter; Notni, Gunther
2017-05-01
Structured-light projection is a well-established optical method for the non-destructive contactless three-dimensional (3D) measurement of object surfaces. In particular, there is a great demand for accurate and fast 3D scans of human faces or facial regions of interest in medicine, safety, face modeling, games, virtual life, or entertainment. New developments of facial expression detection and machine lip-reading can be used for communication tasks, future machine control, or human-machine interactions. In such cases, 3D information may offer more detailed information than 2D images which can help to increase the power of current facial analysis algorithms. In this contribution, we present new 3D sensor technologies based on three different methods of near-infrared projection technologies in combination with a stereo vision setup of two cameras. We explain the optical principles of an NIR GOBO projector, an array projector and a modified multi-aperture projection method and compare their performance parameters to each other. Further, we show some experimental measurement results of applications where we realized fast, accurate, and irritation-free measurements of human faces.
NASA Astrophysics Data System (ADS)
Takeuchi, Eric B.; Flint, Graham W.; Bergstedt, Robert; Solone, Paul J.; Lee, Dicky; Moulton, Peter F.
2001-03-01
Electronic cinema projectors are being developed that use a digital micromirror device (DMD™) to produce the image. Photera Technologies has developed a new architecture that produces truly digital imagery using discrete pulse trains of red, green, and blue light in combination with a DMD™, wherein the number of pulses delivered to the screen during a given frame can be defined in a purely digital fashion. To achieve this, a pulsed RGB laser technology pioneered by Q-Peak is combined with a novel projection architecture that we refer to as Laser Digital Camera™. This architecture provides imagery wherein, during the time interval of each frame, individual pixels on the screen receive between zero and 255 discrete pulses of each color, a circumstance which yields 24-bit color. Greater color depth or increased frame rate is achievable by increasing the pulse rate of the laser. Additionally, in the context of multi-screen theaters, a similar architecture permits our synchronously pulsed RGB source to simultaneously power three screens in a color-sequential manner, thereby providing an efficient use of photons, together with the simplifications that derive from using a single DMD™ chip in each projector.
Viewing zone duplication of multi-projection 3D display system using uniaxial crystal.
Lee, Chang-Kun; Park, Soon-Gi; Moon, Seokil; Lee, Byoungho
2016-04-18
We propose a novel multiplexing technique for increasing the viewing zone of a multi-view, multi-projection 3D display system by employing double refraction in a uniaxial crystal. When linearly polarized images from the projector pass through the uniaxial crystal, two optical paths are possible according to the polarization state of the image. The optical path of the image can therefore be switched, shifting the viewing zone in the lateral direction. Polarization modulation of the image from a single projection unit enables us to generate two viewing zones at different positions. To realize full-color images at each viewing zone, a polarization-based temporal multiplexing technique is adopted with a conventional liquid crystal (LC) polarization-switching device. Through experiments, a prototype ten-view multi-projection 3D display system presenting full-color view images is implemented by combining five laser scanning projectors, an optically clear calcite (CaCO3) crystal, and an LC polarization rotator. For each time sequence of the temporal multiplexing, the luminance distribution of the proposed system is measured and analyzed.
Kottner, Sören; Ebert, Lars C; Ampanozi, Garyfalia; Braun, Marcel; Thali, Michael J; Gascho, Dominic
2017-03-01
Injuries such as bite marks or boot prints can leave distinct patterns on the body's surface and can be used for 3D reconstructions. Although various systems for 3D surface imaging have been introduced in the forensic field, most techniques are both cost-intensive and time-consuming. In this article, we present the VirtoScan, a mobile multi-camera rig based on close-range photogrammetry. The system can be integrated into automated PMCT scanning procedures or used manually together with lifting carts, autopsy tables and examination couches. The VirtoScan is based on a movable frame that carries 7 digital single-lens reflex cameras. A remote control attached to each camera allows the simultaneous triggering of the shutter release of all cameras. Data acquisition in combination with the PMCT scanning procedure took 3:34 min for the 3D surface documentation of one side of the body, compared to 20:20 min of acquisition time using our in-house standard. A surface model comparison between the high-resolution output of our in-house standard and a high-resolution model from the multi-camera rig showed a mean surface deviation of 0.36 mm for the whole-body scan and 0.13 mm for a second comparison of a detailed section of the scan. The use of the multi-camera rig reduces the acquisition time for whole-body surface documentation in medico-legal examinations and provides a low-cost 3D surface scanning alternative for forensic investigations.
Numerical analysis of wavefront measurement characteristics by using plenoptic camera
NASA Astrophysics Data System (ADS)
Lv, Yang; Ma, Haotong; Zhang, Xuanzhe; Ning, Yu; Xu, Xiaojun
2016-01-01
To take advantage of a large-diameter telescope for high-resolution imaging of extended targets, it is necessary to detect and compensate the wave-front aberrations induced by atmospheric turbulence. Data recorded by plenoptic cameras can be used to extract the wave-front phases associated with atmospheric turbulence in astronomical observations. In order to recover the wave-front phase tomographically, a method for simultaneous large field-of-view (FOV), multi-perspective wave-front detection is urgently needed, and the plenoptic camera possesses this unique advantage. This paper focuses on the capability of the plenoptic camera to extract the wave-front from different perspectives simultaneously. We built a theoretical model and simulation system to study the wave-front measurement characteristics of a plenoptic camera used as a wave-front sensor, and we evaluated its performance for the types of wave-front aberration corresponding to different applications. Finally, we performed multi-perspective wave-front sensing in simulation, employing the plenoptic camera as the wave-front sensor. This analysis is helpful for selecting and designing the parameters of a plenoptic camera used as a multi-perspective, large-FOV wave-front sensor, which is expected to solve the problem of large-FOV wave-front detection and can be used for adaptive optics (AO) in giant telescopes.
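A generic way to turn per-perspective slope measurements from such a sensor into a phase estimate is a modal least-squares fit; the paper does not detail its reconstructor, so the following is only a standard sketch with user-supplied basis derivatives (e.g., Zernike polynomial gradients).

```python
import numpy as np

def fit_modes(sx, sy, basis_dx, basis_dy):
    """Least-squares modal wave-front reconstruction from measured slopes.

    sx, sy             : measured x/y slopes at M sample points
    basis_dx, basis_dy : (M, K) x/y derivatives of K modal basis functions
    returns the K modal coefficients minimizing the slope residual.
    """
    A = np.vstack([basis_dx, basis_dy])    # (2M, K) design matrix
    b = np.concatenate([sx, sy])           # (2M,) stacked slope data
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```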
MEGARA: the new multi-object and integral field spectrograph for GTC
NASA Astrophysics Data System (ADS)
Carrasco, E.; Páez, G.; Izazaga-Pére, R.; Gil de Paz, A.; Gallego, J.; Iglesias-Páramo, J.
2017-07-01
MEGARA is an optical integral-field unit and multi-object spectrograph for the 10.4 m Gran Telescopio Canarias. Both observational modes will provide identical spectral resolutions of R_FWHM ≈ 6,000, 12,000 and 18,700. The spectrograph is a collimator-camera system. The unique characteristics of MEGARA in terms of throughput and versatility make this instrument the most efficient tool to date to analyze astrophysical objects at intermediate spectral resolutions. The instrument is currently at the telescope for on-sky commissioning. Here we describe the main as-built characteristics of the instrument.
Multi-Angle Snowflake Camera Instrument Handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stuefer, Martin; Bailey, J.
2016-07-01
The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36°. Each camera's field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' field of view within a 10° angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.
Perspectives in Super-resolved Fluorescence Microscopy: What comes next?
NASA Astrophysics Data System (ADS)
Cremer, Christoph; Birk, Udo
2016-04-01
The Nobel Prize in Chemistry 2014 was awarded to three scientists involved in the development of the STED and PALM super-resolution fluorescence microscopy (SRM) methods. They have proven that it is possible to overcome the hundred-year-old theoretical limit on the resolution potential of light microscopy (about 200 nm for visible light), which for decades precluded a direct glimpse of the molecular machinery of life. None of the present-day super-resolution techniques have invalidated the Abbe limit for light-optical detection; rather, they have found clever ways around it. In this report, we discuss some of the challenges still to be resolved before emerging SRM approaches are fit to bring about the envisaged revolution in Biology and Medicine. Among the challenges discussed are the applicability to imaging live and/or large samples, the further enhancement of resolution, future developments of labels, and multi-spectral approaches.
Super-resolved all-refocused image with a plenoptic camera
NASA Astrophysics Data System (ADS)
Wang, Xiang; Li, Lin; Hou, Guangqi
2015-12-01
This paper proposes an approach to produce super-resolution all-refocused images with a plenoptic camera. A plenoptic camera can be produced by putting a micro-lens array between the lens and the sensor of a conventional camera. This kind of camera captures both the angular and spatial information of the scene in a single shot. A sequence of digitally refocused images, focused at different depths, can be produced by processing the 4D light field captured by the plenoptic camera. The number of pixels in a refocused image equals the number of micro-lenses in the array, so the limited micro-lens count results in low-resolution refocused images lacking detail. The lost details, often high-frequency information, are important for the in-focus part of the refocused image, so we super-resolve these in-focus parts. An image segmentation method based on random walks, operating on the depth map produced from the 4D light field data, is used to separate the foreground and background in the refocused image, and a focusing evaluation function is employed to determine which refocused image has the clearest foreground and which has the clearest background. Subsequently, we apply a single-image super-resolution method based on sparse signal representation to the in-focus parts of these selected refocused images. Eventually, we obtain the super-resolved all-focus image by merging the in-focus background and foreground parts through digital image processing, preserving more spatial detail in the output images. Our method enhances the resolution of the refocused image, and only the refocused images with the clearest foreground and background need to be super-resolved.
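The focusing evaluation function is not specified in detail; a standard choice, assumed here, is the variance of the Laplacian, evaluated over the segmented region:

```python
import numpy as np
from scipy.ndimage import laplace

def focus_measure(img, mask=None):
    """Variance-of-Laplacian sharpness score; higher means more in focus."""
    lap = laplace(img.astype(float))
    return lap[mask].var() if mask is not None else lap.var()

def sharpest(stack, mask=None):
    """Index of the sharpest image in an (N, H, W) refocused stack."""
    return int(np.argmax([focus_measure(s, mask) for s in stack]))
```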
Wang, Yan; Li, Jingwen; Sun, Bing; Yang, Jian
2016-01-01
The azimuth resolution of airborne stripmap synthetic aperture radar (SAR) is restricted by the azimuth antenna size. Conventionally, a higher azimuth resolution is achieved by employing alternative modes that steer the beam in azimuth to enlarge the synthetic antenna aperture. However, if a data set of a certain region, consisting of multiple tracks of airborne stripmap SAR data, is available, the azimuth resolution of a specific small region of interest (ROI) can be conveniently improved by the novel azimuth super-resolution method introduced in this paper. The proposed method synthesizes the azimuth bandwidth of the data selected from multiple discontinuous tracks and acts like a magnifier, allowing the ROI to be zoomed in with a higher azimuth resolution than that of the original stripmap images. A detailed derivation of the azimuth super-resolution method, including the steps of two-dimensional dechirping, residual video phase (RVP) removal, data stitching and data correction, is provided. The restrictions of the proposed method are also discussed. Lastly, the presented approach is evaluated via both single- and multi-target computer simulations.
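The dechirping step can be illustrated with a 1D example (all chirp parameters below are synthetic choices of mine): multiplying the received chirp by the conjugate of a reference chirp collapses a scatterer into a constant-frequency tone whose frequency encodes its delay.

```python
import numpy as np

K = 1.0e10                 # chirp rate, Hz/s (illustrative)
fs = 20.0e6                # sample rate, Hz
t = np.arange(-5e-4, 5e-4, 1 / fs)
tau = 1.0e-4               # scatterer delay relative to scene center, s

received = np.exp(1j * np.pi * K * (t - tau) ** 2)   # echo chirp
reference = np.exp(1j * np.pi * K * t ** 2)          # reference chirp
dechirped = received * np.conj(reference)            # tone at f = -K * tau

spectrum = np.abs(np.fft.fftshift(np.fft.fft(dechirped)))
freqs = np.fft.fftshift(np.fft.fftfreq(len(t), 1 / fs))
print(freqs[np.argmax(spectrum)], -K * tau)          # peak lands near -K*tau
```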
Design of tangential multi-energy SXR cameras for tokamak plasmas
NASA Astrophysics Data System (ADS)
Yamazaki, H.; Delgado-Aparicio, L. F.; Pablant, N.; Hill, K.; Bitter, M.; Takase, Y.; Ono, M.; Stratton, B.
2017-10-01
A new synthetic diagnostic capability has been built to study the response of tangential multi-energy soft x-ray pinhole cameras for arbitrary plasma densities (n_e, n_D), temperatures (T_e) and ion concentrations (n_Z). For tokamaks and future facilities to operate safely in high-pressure, long-pulse discharges, it is imperative to address key issues associated with impurity sources, core transport and high-Z impurity accumulation. Multi-energy soft x-ray imaging provides a unique opportunity for measuring, simultaneously, a variety of important plasma properties (e.g. T_e, n_Z and ΔZ_eff). These systems are designed to sample the continuum and line emission from low- to high-Z impurities (e.g. C, O, Al, Si, Ar, Ca, Fe, Ni and Mo) in multiple energy ranges. These x-ray cameras will be installed in the MST RFP, as well as the NSTX-U and DIII-D tokamaks, measuring the radial structure of the photon emissivity with a radial resolution below 1 cm at a 500 Hz frame rate and a photon-energy resolution of 500 eV. The layout and expected response of the new systems are shown for different plasma conditions and impurity concentrations. The effect of toroidal rotation driving poloidal asymmetries in the core radiation is also addressed for the case of NSTX-U.
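One textbook relation behind such multi-energy measurements (a simplification, not the paper's full synthetic diagnostic): the bremsstrahlung continuum falls off roughly as exp(-E/Te), so the brightness ratio of two energy bands yields the electron temperature.

```python
import numpy as np

def te_from_band_ratio(i1, i2, e1, e2):
    """Electron temperature (keV) from continuum brightness in two bands.

    Assumes emissivity proportional to exp(-E/Te) at band centers e1 < e2 (keV).
    From i1/i2 = exp((e2 - e1)/Te):  Te = (e2 - e1) / ln(i1/i2).
    """
    return (e2 - e1) / np.log(i1 / i2)

# Synthetic check: a Te = 2 keV plasma seen through 4 and 6 keV bands.
te = 2.0
i1, i2 = np.exp(-4.0 / te), np.exp(-6.0 / te)
print(te_from_band_ratio(i1, i2, 4.0, 6.0))   # -> 2.0
```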
NASA Technical Reports Server (NTRS)
1992-01-01
The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. The IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single-lens, reflex-viewing design with a 15-perforation-per-frame horizontal pull-across. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.
Nomad devices for interactions in immersive virtual environments
NASA Astrophysics Data System (ADS)
George, Paul; Kemeny, Andras; Merienne, Frédéric; Chardonnet, Jean-Rémy; Thouvenin, Indira Mouttapa; Posselt, Javier; Icart, Emmanuel
2013-03-01
Renault is currently setting up a new CAVE™, a virtual reality room with five rear-projected walls and a combined 3D resolution of 100 Mpixels, distributed over sixteen 4k projectors and two 2k projectors, as well as an additional 3D HD collaborative powerwall. Renault's CAVE™ aims at answering the needs of the various vehicle conception steps [1]. Starting from vehicle design, through the subsequent engineering steps, ergonomic evaluation and perceived quality control, Renault has built up a list of use cases and carried out an early software evaluation in the four-sided CAVE™ of Institute Image, called MOVE. One goal of the project is to study interactions in a CAVE™, especially with nomad devices such as the iPhone or iPad, to manipulate virtual objects and to develop visualization possibilities. Inspired by current uses of nomad devices (multi-touch gestures, the iPhone UI look and feel, and AR applications), we have implemented an early feature set taking advantage of these popular input devices. In this paper, we present its performance through measurement data collected in our test platform, a four-sided homemade low-cost virtual reality room, powered by ultra-short-throw and standard HD home projectors.
Visualization in aerospace research with a large wall display system
NASA Astrophysics Data System (ADS)
Matsuo, Yuichi
2002-05-01
The National Aerospace Laboratory of Japan has built a large-scale visualization system with a large wall-type display. The system has been operational since April 2001 and comprises a 4.6 x 1.5-meter (15 x 5-foot) rear-projection screen with 3 BARCO 812 high-resolution CRT projectors. The reasons we adopted the 3-gun CRT projectors are their support for stereoscopic viewing, ease of color/luminosity matching and accuracy of edge blending. The system is driven by a new SGI Onyx 3400 server of distributed shared-memory architecture with 32 CPUs, 64 GB of memory, a 1.5 TB FC RAID disk and 6 IR3 graphics pipelines. Software is another important issue in making full use of the system. We have introduced applications available in a multi-projector environment, such as AVS/MPE, EnSight Gold and COVISE, and have been developing software tools that create volumetric images using SGI graphics libraries. The system is mainly used for the visualization of computational fluid dynamics (CFD) simulations in aerospace research. Visualized CFD results help in designing improved configurations of aerospace vehicles and in analyzing their aerodynamic performance. We also use the system for various collaborations among researchers.
Measuring the performance of super-resolution reconstruction algorithms
NASA Astrophysics Data System (ADS)
Dijk, Judith; Schutte, Klamer; van Eekeren, Adam W. M.; Bijl, Piet
2012-06-01
For many military operations situational awareness is of great importance. Situational awareness and related tasks such as target acquisition can be achieved using cameras, of which the resolution is an important characteristic. Super-resolution reconstruction algorithms can be used to improve the effective sensor resolution. In order to judge these algorithms and the conditions under which they operate best, performance evaluation methods are necessary. This evaluation, however, is not straightforward for several reasons. First, frequency-based evaluation techniques alone will not provide a correct answer, because they are unable to discriminate between structure-related and noise-related effects. Second, most super-resolution packages perform additional image enhancement such as noise reduction and edge enhancement; because these steps are integrated into the result, they cannot be evaluated separately. Third, a single high-resolution ground truth is rarely available, so evaluating the differences between the estimated high-resolution image and its ground truth is not straightforward. Fourth, super-resolution reconstruction can introduce artifacts that are not known beforehand and hence are difficult to evaluate. In this paper we present a set of new evaluation techniques to assess super-resolution reconstruction algorithms. Some of these evaluation techniques are derived from processing on dedicated (synthetic) imagery. Others can be applied to both synthetic and natural images (real camera data). The result is a balanced set of evaluation algorithms that can be used to assess the performance of super-resolution reconstruction algorithms.
An Overview of the CBERS-2 Satellite and Comparison of the CBERS-2 CCD Data with the L5 TM Data
NASA Technical Reports Server (NTRS)
Chandler, Gyanesh
2007-01-01
The CBERS satellite carries on board a multi-sensor payload with different spatial resolutions and collection frequencies: the HRCCD (High-Resolution CCD Camera), the IRMSS (Infrared Multispectral Scanner), and the WFI (Wide-Field Imager). The CCD and WFI cameras operate in the VNIR region, while the IRMSS operates in the SWIR and thermal regions. In addition to the imaging payload, the satellite carries a Data Collection System (DCS) and a Space Environment Monitor (SEM).
Multiple-aperture optical design for micro-level cameras using 3D-printing method
NASA Astrophysics Data System (ADS)
Peng, Wei-Jei; Hsu, Wei-Yao; Cheng, Yuan-Chieh; Lin, Wen-Lung; Yu, Zong-Ru; Chou, Hsiao-Yu; Chen, Fong-Zhi; Fu, Chien-Chung; Wu, Chong-Syuan; Huang, Chao-Tsung
2018-02-01
The design of an ultra-miniaturized camera using 3D-printing technology, printed directly onto the complementary metal-oxide semiconductor (CMOS) imaging sensor, is presented in this paper. The 3D-printed micro-optics is manufactured using femtosecond two-photon direct laser writing, and the figure error, which can achieve submicron accuracy, is suitable for the optical system. Because the size of the micro-level camera is approximately several hundred micrometers, the resolution is greatly reduced and is limited by the Nyquist frequency of the pixel pitch. To recover resolution, a single lens can be replaced by multiple-aperture lenses with dissimilar fields of view (FOV); stitching sub-images with different FOVs then achieves high resolution within the central region of the image. The reason is that the angular resolution of a lens with a smaller FOV is higher than that of one with a larger FOV, so after stitching, the angular resolution of the central area can be several times that of the outer area. For the same image circle, the image quality of the central area of the multi-lens system is significantly superior to that of a single lens. The foveated image obtained by stitching FOVs breaks the resolution limitation of the ultra-miniaturized imaging system, enabling applications such as biomedical endoscopy, optical sensing, and machine vision. In this study, the ultra-miniaturized camera with multi-aperture optics is designed and simulated for optimum optical performance.
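To make the pixel-pitch limit concrete (with illustrative numbers, not the paper's design values): the sensor Nyquist frequency is f_N = 1/(2p) for pixel pitch p, and a narrower-FOV channel spends the same pixel count on a smaller angle, raising angular sampling.

```python
# Sensor Nyquist limit: f_N = 1 / (2 * pixel_pitch).
pitch_mm = 1.12e-3                      # hypothetical 1.12 um pixel pitch, in mm
f_nyquist = 1.0 / (2.0 * pitch_mm)      # cycles per mm
print(f"{f_nyquist:.0f} cycles/mm")     # ~446 cycles/mm

# Same pixel count over half the field of view doubles angular sampling:
pixels, fov_wide, fov_narrow = 200, 60.0, 30.0       # pixels, degrees
print(pixels / fov_wide, pixels / fov_narrow)        # ~3.3 vs ~6.7 px/degree
```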
Cloud Forecasting and 3-D Radiative Transfer Model Validation using Citizen-Sourced Imagery
NASA Astrophysics Data System (ADS)
Gasiewski, A. J.; Heymsfield, A.; Newman Frey, K.; Davis, R.; Rapp, J.; Bansemer, A.; Coon, T.; Folsom, R.; Pfeufer, N.; Kalloor, J.
2017-12-01
Cloud radiative feedback mechanisms are one of the largest sources of uncertainty in global climate models. Variations in local 3D cloud structure impact the interpretation of NASA CERES and MODIS data for top-of-atmosphere radiation studies over clouds. Much of this uncertainty results from lack of knowledge of cloud vertical and horizontal structure. Surface-based data on 3D cloud structure from a multi-sensor array of low-latency ground-based cameras can be used to intercompare radiative transfer models based on MODIS and other satellite data with CERES data, improving the 3D cloud parameterizations. Closely related, forecasting of solar insolation and associated cloud cover on time scales out to 1 hour and with a spatial resolution of 100 meters is valuable for stabilizing power grids with high solar photovoltaic penetration. Cloud-advection-based solar insolation forecasting data, obtained from a bottom-up perspective with the spatial resolution and latency needed to predict high-ramp-rate events, is strongly correlated with cloud-induced fluctuations. The development of grid management practices for improved integration of renewable solar energy thus also benefits from a multi-sensor camera array. The data needs for both 3D cloud radiation modelling and solar forecasting are being addressed using a network of low-cost, upward-looking visible-light CCD sky cameras positioned at 2 km spacing over an area 30-60 km in size, acquiring imagery at 30-second intervals. Such cameras can be manufactured in quantity and deployed by citizen volunteers at a marginal cost of $200-400 and operated unattended using existing communications infrastructure. A trial phase to understand the potential utility of up-looking multi-sensor visible imagery is underway within this NASA Citizen Science project. To develop the initial data sets necessary to optimally design a multi-sensor cloud camera array, a team of 100 citizen scientists using self-owned PDA cameras is being organized to collect distributed cloud data sets suitable for MODIS-CERES cloud radiation science and solar forecasting algorithm development. A low-cost and robust sensor design suitable for large-scale fabrication and long-term deployment has been developed during the project prototyping phase.
Nuclear medicine imaging system
Bennett, Gerald W.; Brill, A. Bertrand; Bizais, Yves J.; Rowe, R. Wanda; Zubal, I. George
1986-01-07
A nuclear medicine imaging system having two large field of view scintillation cameras mounted on a rotatable gantry and being movable diametrically toward or away from each other is disclosed. In addition, each camera may be rotated about an axis perpendicular to the diameter of the gantry. The movement of the cameras allows the system to be used for a variety of studies, including positron annihilation, and conventional single photon emission, as well as static orthogonal dual multi-pinhole tomography. In orthogonal dual multi-pinhole tomography, each camera is fitted with a seven pinhole collimator to provide seven views from slightly different perspectives. By using two cameras at an angle to each other, improved sensitivity and depth resolution is achieved. The computer system and interface acquires and stores a broad range of information in list mode, including patient physiological data, energy data over the full range detected by the cameras, and the camera position. The list mode acquisition permits the study of attenuation as a result of Compton scatter, as well as studies involving the isolation and correlation of energy with a range of physiological conditions.
Image system for three-dimensional, 360°, time-sequence surface mapping of moving objects
Lu, Shin-Yee
1998-01-01
A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras, all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another.
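The depth recovery described here is standard active triangulation. A minimal rectified-geometry sketch follows (the calibration numbers are placeholders, and the real system matches line intersections along epipolar lines rather than raw disparities):

```python
import numpy as np

def depth_from_disparity(d_px, focal_px, baseline_m):
    """Rectified triangulation: z = f * b / d.

    d_px       : disparity (pixels) between matched features in the two views
    focal_px   : focal length expressed in pixels
    baseline_m : camera separation in meters
    """
    return focal_px * baseline_m / d_px

# Placeholder calibration: 1200 px focal length, 30 cm baseline.
disparities = np.array([180.0, 240.0])
print(depth_from_disparity(disparities, 1200.0, 0.3))   # -> [2.0, 1.5] meters
```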
Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions
NASA Astrophysics Data System (ADS)
Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.
2005-03-01
The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system can also focus its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects moving objects using change detection techniques. The detected objects are tracked over time and their positions are indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of the object detected in the image-plane reference system is translated into coordinates referred to the same area map. In the map's common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' tracks and to perform face detection and tracking. The novelty and strength of this work reside in the cooperative multi-sensor approach, in high-resolution long-distance tracking, and in the automatic collection of biometric data, such as a face clip of a person, for recognition purposes.
An overview of instrumentation for the Large Binocular Telescope
NASA Astrophysics Data System (ADS)
Wagner, R. Mark
2006-06-01
An overview of instrumentation for the Large Binocular Telescope is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27' × 27') mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the straight-through F/15 Gregorian focus incorporating multiple slit masks for multi-object spectroscopy over a 6' field and spectral resolutions of up to 8000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCIFER), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at a bent interior focal station and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks and diffraction-limited (FOV: 0.5' × 0.5') imaging and long-slit spectroscopy. Strategic instruments under development for the remaining two combined focal stations include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench near-infrared beam combiner utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC-NIRVANA). In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra high resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support.
An overview of instrumentation for the Large Binocular Telescope
NASA Astrophysics Data System (ADS)
Wagner, R. Mark
2004-09-01
An overview of instrumentation for the Large Binocular Telescope is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27' × 27') UB/VRI optimized mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the straight-through F/15 Gregorian focus incorporating multiple slit masks for multi-object spectroscopy over a 6' field and spectral resolutions of up to 8000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCIFER), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at a bent interior focal station and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks and diffraction-limited (FOV: 0.5' × 0.5') imaging and long-slit spectroscopy. Strategic instruments under development for the remaining two combined focal stations include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench beam combiner with visible and near-infrared imagers utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC/NIRVANA). In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra high resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support.
An overview of instrumentation for the Large Binocular Telescope
NASA Astrophysics Data System (ADS)
Wagner, R. Mark
2008-07-01
An overview of instrumentation for the Large Binocular Telescope is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27' × 27') mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the straight-through F/15 Gregorian focus incorporating multiple slit masks for multi-object spectroscopy over a 6' field and spectral resolutions of up to 8000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCIFER), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at a bent interior focal station and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks and diffraction-limited (FOV: 0.5' × 0.5') imaging and long-slit spectroscopy. Strategic instruments under development for the remaining two combined focal stations include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench near-infrared beam combiner utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC-NIRVANA). In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra high resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support.
Long-term monitoring on environmental disasters using multi-source remote sensing technique
NASA Astrophysics Data System (ADS)
Kuo, Y. C.; Chen, C. F.
2017-12-01
Environmental disasters are extreme events within the earth system that cause deaths and injuries to humans, as well as damage and loss of valuable assets such as buildings, communication systems, farmland, and forests. In disaster management, a large amount of multi-temporal spatial data is required. Multi-source remote sensing data with different spatial, spectral and temporal resolutions is widely applied to environmental disaster monitoring. With multi-source and multi-temporal high-resolution images, we conduct rapid, systematic and continuous observations of economic damage and environmental disasters on Earth, based on three monitoring platforms: remote sensing, UAS (Unmanned Aircraft Systems) and ground investigation. The advantages of using UAS technology include great mobility, real-time availability, and operation under more flexible weather conditions. The system can produce long-term spatial distribution information on environmental disasters, obtaining high-resolution remote sensing data and field verification data in key monitoring areas. It also supports the prevention and control of ocean pollution, illegally disposed waste, and pine pests at different scales. Meanwhile, digital photogrammetry can be applied, using the camera's interior and exterior orientation parameters, to produce Digital Surface Model (DSM) data. The latest terrain environment information is simulated using DSM data and can serve as a reference for disaster recovery in the future.
Joint estimation of high resolution images and depth maps from light field cameras
NASA Astrophysics Data System (ADS)
Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki
2014-03-01
Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenslet-based light field cameras is their limited resolution. This limitation comes from the structure in which a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution; angular resolution and positional resolution trade off under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher resolution image from low resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible; this registration is equivalent to depth estimation. Therefore, we propose a method in which super-resolution and depth refinement are performed alternately. Most of our method is implemented with image processing operations. We present several experimental results using a Lytro camera, where we increased the resolution of a sub-aperture image threefold horizontally and vertically. Our method produces clearer images than the original sub-aperture images and than the case without depth refinement.
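The alternation described above lends itself to a compact sketch: register the sub-aperture views with the current disparity estimate, fuse them onto a finer grid, then refine the disparity against the sharper result. The code below is a minimal illustration under simplifying assumptions (a linear warp model, plain shift-and-add fusion, and a hypothetical refine_depth helper), not the authors' implementation.

```python
# Minimal sketch of depth-guided fusion of light-field sub-aperture views.
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def warp_to_centre(view, du, dv, disparity):
    """Warp one sub-aperture image toward the centre view; (du, dv) is the
    view's angular offset, disparity the per-pixel shift per unit offset."""
    h, w = view.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    coords = np.array([yy + dv * disparity, xx + du * disparity])
    return map_coordinates(view, coords, order=1, mode='nearest')

def shift_and_add_sr(views, offsets, disparity, scale=3):
    """Fuse registered views onto a grid upsampled by `scale` (3x, as in the paper)."""
    acc = np.zeros((views[0].shape[0] * scale, views[0].shape[1] * scale))
    for img, (du, dv) in zip(views, offsets):
        acc += zoom(warp_to_centre(img, du, dv, disparity), scale, order=3)
    return acc / len(views)

# Alternation skeleton: super-resolution and depth refinement feed each other.
# for _ in range(n_iters):
#     disparity = refine_depth(views, offsets, sr)  # hypothetical helper
#     sr = shift_and_add_sr(views, offsets, disparity)
```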
Methods for increasing the sensitivity of gamma-ray imagers
Mihailescu, Lucian [Pleasanton, CA; Vetter, Kai M [Alameda, CA; Chivers, Daniel H [Fremont, CA
2012-02-07
Methods are presented that increase the position resolution and granularity of double sided segmented semiconductor detectors. These methods increase the imaging resolution capability of such detectors, either used as Compton cameras, or as position sensitive radiation detectors in imagers such as SPECT, PET, coded apertures, multi-pinhole imagers, or other spatial or temporal modulated imagers.
Systems for increasing the sensitivity of gamma-ray imagers
Mihailescu, Lucian; Vetter, Kai M.; Chivers, Daniel H.
2012-12-11
Systems that increase the position resolution and granularity of double sided segmented semiconductor detectors are provided. These systems increase the imaging resolution capability of such detectors, either used as Compton cameras, or as position sensitive radiation detectors in imagers such as SPECT, PET, coded apertures, multi-pinhole imagers, or other spatial or temporal modulated imagers.
Wavelet Filter Banks for Super-Resolution SAR Imaging
NASA Technical Reports Server (NTRS)
Sheybani, Ehsan O.; Deshpande, Manohar; Memarsadeghi, Nargess
2011-01-01
This paper discusses innovative wavelet-based filter banks designed to enhance the analysis of super-resolution Synthetic Aperture Radar (SAR) images using parametric spectral methods and signal classification algorithms. SAR finds applications in many of NASA's earth science fields, such as deformation, ecosystem structure, the dynamics of ice, snow and cold land processes, and surface water and ocean topography. Traditionally, standard methods such as the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) have been used to extract images from SAR radar data. Due to the non-parametric nature of these methods, their resolution limitations, and their dependence on observation time, the use of spectral estimation and wavelet-based signal pre- and post-processing techniques for SAR radar data has been proposed. Multi-resolution wavelet transforms and advanced spectral estimation techniques have proven to offer efficient solutions to this problem.
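As a concrete illustration of the wavelet filter-bank idea, the sketch below decomposes a SAR amplitude image into multi-resolution sub-bands and soft-thresholds the detail coefficients before reconstruction. It assumes the PyWavelets package; the 'db4' basis and the universal-threshold rule are choices made for the example, not ones reported in the paper.

```python
# Hedged sketch of wavelet filter-bank pre-processing for SAR imagery.
import numpy as np
import pywt

def wavelet_despeckle(img, wavelet='db4', levels=3):
    coeffs = pywt.wavedec2(img, wavelet, level=levels)
    # Noise estimate from the finest diagonal sub-band (robust MAD rule).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(img.size))  # universal threshold
    out = [coeffs[0]]  # keep the coarse approximation untouched
    for details in coeffs[1:]:
        out.append(tuple(pywt.threshold(d, thr, mode='soft') for d in details))
    return pywt.waverec2(out, wavelet)
```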
Introduction to the virtual special issue on super-resolution imaging techniques
NASA Astrophysics Data System (ADS)
Cao, Liangcai; Liu, Zhengjun
2017-12-01
Until quite recently, the resolution of optical imaging instruments, including telescopes, cameras and microscopes, was considered to be limited by the diffraction of light and by image sensors. In the past few years, many exciting super-resolution approaches have emerged that demonstrate intriguing ways to bypass the classical limit in optics and detectors. More and more research groups are engaged in the study of advanced super-resolution schemes, devices, algorithms, systems, and applications [1-6]. Super-resolution techniques involve new methods in the science and engineering of optics [7,8], measurement [9,10], chemistry [11,12] and information [13,14]. Promising applications, particularly in biomedical research and the semiconductor industry, have been successfully demonstrated.
NASA Astrophysics Data System (ADS)
Hayashida, K.; Kawabata, T.; Nakajima, H.; Inoue, S.; Tsunemi, H.
2017-10-01
The best angular resolution of 0.5 arcsec is realized with the X-ray mirror onboard the Chandra satellite. Nevertheless, comparable or better resolution is anticipated to be difficult to achieve in the near future; in fact, the goal of the ATHENA telescope is an angular resolution of 5 arcsec. We propose a new type of X-ray interferometer consisting simply of an X-ray absorption grating and an X-ray spectral imaging detector, such as X-ray CCDs or new-generation CMOS detectors, stacking the multiple images created by the Talbot interference (Hayashida et al. 2016). This system, which we call the Multi Image X-ray Interferometer Module (MIXIM), enables arcsecond resolution with very small satellites of 50 cm size, and sub-arcsecond resolution with small satellites. We have performed ground experiments in which a micro-focus X-ray source, a grating with a pitch of 4.8 μm, and a 30 μm pixel detector were placed about 1 m from the source. We obtained the self-image (interferometric fringe) of the grating for a wide band pass around 10 keV. This result corresponds to about 2 arcsec resolution for parallel-beam incidence. MIXIM is useful for high angular resolution imaging of relatively bright sources. Searches for supermassive black holes and resolving AGN tori would be targets for this system.
NASA Astrophysics Data System (ADS)
Kishimoto, A.; Kataoka, J.; Nishiyama, T.; Fujita, T.; Takeuchi, K.; Okochi, H.; Ogata, H.; Kuroshima, H.; Ohsuka, S.; Nakamura, S.; Hirayanagi, M.; Adachi, S.; Uchiyama, T.; Suzuki, H.
2014-11-01
After the nuclear disaster in Fukushima, radiation decontamination has become particularly urgent. To help identify radiation hotspots and ensure effective decontamination operations, we have developed a novel Compton camera based on Ce-doped Gd3Al2Ga3O12 scintillators and multi-pixel photon counter (MPPC) arrays. Although its sensitivity is already several times better than that of other cameras being tested in Fukushima, we introduce a depth-of-interaction (DOI) method to further improve the angular resolution. For gamma rays, the DOI information, in addition to the 2-D position, is obtained by measuring the pulse-height ratio of the MPPC arrays coupled to both ends of the scintillator. We present the detailed performance and results of various field tests conducted in Fukushima with the prototype 2-D and DOI Compton cameras. Moreover, we demonstrate stereo measurement of gamma rays that enables measurement of not only the direction but also the approximate distance to radioactive hotspots.
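Under a simple linear light-sharing assumption, the pulse-height-ratio principle behind the DOI measurement reduces to a one-line estimate; the sketch below is illustrative only, since the actual camera relies on a calibrated (and generally nonlinear) response.

```python
# Illustrative depth-of-interaction estimate from the pulse heights read
# out by the MPPC arrays at the two ends of the scintillator bar.
# Assumes linear light sharing, unlike the calibrated camera response.
def interaction_depth(ph_top, ph_bottom, crystal_length_mm):
    ratio = ph_bottom / (ph_top + ph_bottom)  # 0 at top end, 1 at bottom end
    return ratio * crystal_length_mm
```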
Guffei, Amanda; Sarkar, Rahul; Klewes, Ludger; Righolt, Christiaan; Knecht, Hans; Mai, Sabine
2010-12-01
Hodgkin's lymphoma is characterized by the presence of mono-nucleated Hodgkin cells and bi- to multi-nucleated Reed-Sternberg cells. We have recently shown telomere dysfunction and aberrant synchronous/asynchronous cell divisions during the transition of Hodgkin cells to Reed-Sternberg cells [1]. To determine whether overall changes in nuclear architecture affect genomic instability during the transition of Hodgkin cells to Reed-Sternberg cells, we investigated the nuclear organization of chromosomes in these cells. Three-dimensional fluorescent in situ hybridization revealed irregular nuclear positioning of individual chromosomes in Hodgkin cells and, more so, in Reed-Sternberg cells. We characterized an increasingly unequal distribution of chromosomes as mono-nucleated cells became multi-nucleated cells, some of which also contained chromosome-poor 'ghost' cell nuclei. Measurements of nuclear chromosome positions suggested chromosome overlaps in both types of cells. Spectral karyotyping then revealed both aneuploidy and complex chromosomal rearrangements: multiple breakage-bridge-fusion cycles were at the origin of the multiple rearranged chromosomes. This conclusion was challenged by super-resolution three-dimensional structured illumination imaging of Hodgkin and Reed-Sternberg nuclei. Three-dimensional super-resolution microscopy data documented inter-nuclear DNA bridges in multi-nucleated cells but not in mono-nucleated cells. These bridges consisted of chromatids and chromosomes shared by two Reed-Sternberg nuclei. The complexity of chromosomal rearrangements increased as Hodgkin cells developed into multi-nucleated cells, thus indicating tumor progression and evolution in Hodgkin's lymphoma, with Reed-Sternberg cells representing the highest complexity in chromosomal rearrangements in this disease. This is the first study to demonstrate nuclear remodeling and associated genomic instability leading to the generation of Reed-Sternberg cells of Hodgkin's lymphoma. We defined nuclear remodeling as a key feature of Hodgkin's lymphoma, highlighting the relevance of nuclear architecture in cancer.
An attentive multi-camera system
NASA Astrophysics Data System (ADS)
Napoletano, Paolo; Tisato, Francesco
2014-03-01
Intelligent multi-camera systems that integrate computer vision algorithms are not error free, and thus both false positive and false negative detections need to be reviewed by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying videos from each camera of the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of a video camera from all the available contents in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe that this is the first work that attempts to use a model of human visual attention for the dynamic selection of the camera view of a multi-camera system. The proposed method has been evaluated in a given scenario and has demonstrated its effectiveness with respect to other methods and to manually generated ground truth. The effectiveness has been evaluated in terms of the number of correct best-views generated by the method with respect to the camera views manually selected by a human operator.
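A bottom-up cue of the kind such an attention model integrates can be approximated with spectral-residual saliency (Hou & Zhang, 2007); ranking camera feeds by mean saliency then gives a crude best-view selector. This is a stand-in sketch, not the authors' model, which also integrates top-down cues.

```python
# Spectral-residual saliency as a simple bottom-up cue for view selection.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray):
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - uniform_filter(log_amp, size=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(sal, sigma=2.5)

def best_view(frames):
    """Pick the camera whose current frame carries the most salient content."""
    scores = [spectral_residual_saliency(f.astype(float)).mean() for f in frames]
    return int(np.argmax(scores))
```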
Multi-Angle Snowflake Camera Value-Added Product
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shkurko, Konstantin; Garrett, T.; Gaustad, K
The Multi-Angle Snowflake Camera (MASC) addresses a need for high-resolution multi-angle imaging of hydrometeors in freefall with simultaneous measurement of fallspeed. As illustrated in Figure 1, the MASC consists of three cameras, separated by 36°, each pointing at an identical focal point approximately 10 cm away. Located immediately above each camera, a light aims directly at the center of depth of field for its corresponding camera. The focal point at which the cameras are aimed lies within a ring through which hydrometeors fall. The ring houses a system of near-infrared emitter-detector pairs, arranged in two arrays separated vertically by 32 mm. When hydrometeors pass through the lower array, they simultaneously trigger all cameras and lights. Fallspeed is calculated from the time it takes to traverse the distance between the upper and lower triggering arrays. The trigger electronics filter out ambient light fluctuations associated with varying sunlight and shadows. The microprocessor onboard the MASC controls the camera system and communicates with the personal computer (PC). The image data is sent via a FireWire 800 line, and fallspeed (and camera control) is sent via a Universal Serial Bus (USB) line that relies on RS232-over-USB serial conversion. See Table 1 for specific details on the MASC located at the Oliktok Point Mobile Facility on the North Slope of Alaska. The value-added product (VAP) detailed in this documentation analyzes the raw data (Section 2.0) using Python: images rely on the OpenCV image processing library and derived aggregated statistics rely on some clever averaging. See Sections 4.1 and 4.2 for more details on what variables are computed.
Determining fast orientation changes of multi-spectral line cameras from the primary images
NASA Astrophysics Data System (ADS)
Wohlfeil, Jürgen
2012-01-01
Fast orientation changes of airborne and spaceborne line cameras cannot always be avoided. In such cases it is essential to measure them with high accuracy to ensure a good quality of the resulting imagery products. Several approaches exist to support the orientation measurement by using optical information received through the main objective/telescope. In this article an approach is proposed that allows the determination of non-systematic orientation changes between every captured line. It does not require any additional camera hardware or onboard processing capabilities but the payload images and a rough estimate of the camera's trajectory. The approach takes advantage of the typical geometry of multi-spectral line cameras with a set of linear sensor arrays for different spectral bands on the focal plane. First, homologous points are detected within the heavily distorted images of different spectral bands. With their help a connected network of geometrical correspondences can be built up. This network is used to calculate the orientation changes of the camera with the temporal and angular resolution of the camera. The approach was tested with an extensive set of aerial surveys covering a wide range of different conditions and achieved precise and reliable results.
Projector-Camera Systems for Immersive Training
2006-01-01
[Fragmentary abstract: only excerpts survive, describing camera calibration with the OpenCV library, averaging over a sequence of 100 captured, distortion-corrected images, and a rendering application adapted to account for differing matrix conventions in a projector-camera system.]
Wide-field depth-sectioning fluorescence microscopy using projector-generated patterned illumination
NASA Astrophysics Data System (ADS)
Delica, Serafin; Mar Blanca, Carlo
2007-10-01
We present a simple and cost-effective wide-field, depth-sectioning, fluorescence microscope utilizing a commercial multimedia projector to generate excitation patterns on the sample. Highly resolved optical sections of fluorescent pollen grains at 1.9 μm axial resolution are constructed using the structured illumination technique. This requires grid excitation patterns to be scanned across the sample, which is straightforwardly implemented by creating slideshows of gratings at different phases, projecting them onto the sample, and synchronizing camera acquisition with slide transition. In addition to rapid dynamic pattern generation, the projector provides high illumination power and spectral excitation selectivity. We exploit these properties by imaging mouse neural cells in cultures multistained with Alexa 488 and Cy3. The spectral and structural neural information is effectively resolved in three dimensions. The flexibility and commercial availability of this light source is envisioned to open multidimensional imaging to a broader user base.
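The grid-scanning scheme described here matches the classic three-phase structured-illumination sectioning of Neil et al. (1997): three frames, with the grid shifted by a third of a period each, are demodulated by a square-law formula. A minimal sketch, assuming that three-phase scheme:

```python
# Three-phase structured-illumination optical sectioning (Neil et al. 1997).
# i1, i2, i3: frames with the grid at relative phases 0, 2*pi/3 and 4*pi/3.
import numpy as np

def optical_section(i1, i2, i3):
    # Square-law demodulation: modulated (in-focus) light survives while
    # unmodulated (out-of-focus) background cancels.
    return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
```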
Caetano, Fabiana A; Dirk, Brennan S; Tam, Joshua H K; Cavanagh, P Craig; Goiko, Maria; Ferguson, Stephen S G; Pasternak, Stephen H; Dikeakos, Jimmy D; de Bruyn, John R; Heit, Bryan
2015-12-01
Our current understanding of the molecular mechanisms which regulate cellular processes such as vesicular trafficking has been enabled by conventional biochemical and microscopy techniques. However, these methods often obscure the heterogeneity of the cellular environment, thus precluding a quantitative assessment of the molecular interactions regulating these processes. Herein, we present Molecular Interactions in Super Resolution (MIiSR) software which provides quantitative analysis tools for use with super-resolution images. MIiSR combines multiple tools for analyzing intermolecular interactions, molecular clustering and image segmentation. These tools enable quantification, in the native environment of the cell, of molecular interactions and the formation of higher-order molecular complexes. The capabilities and limitations of these analytical tools are demonstrated using both modeled data and examples derived from the vesicular trafficking system, thereby providing an established and validated experimental workflow capable of quantitatively assessing molecular interactions and molecular complex formation within the heterogeneous environment of the cell.
Cheng, Victor S; Bai, Jinfen; Chen, Yazhu
2009-11-01
As the needs for various kinds of body surface information are wide-ranging, we developed an imaging-sensor-integrated system that can synchronously acquire high-resolution three-dimensional (3D) far-infrared (FIR) thermal and true-color images of the body surface. The proposed system integrates one FIR camera and one color camera with a 3D structured light binocular profilometer. To avoid discomfort caused by intense light from the LCD projector shining directly into the subject's eyes, we developed a gray encoding strategy based on an optimized fringe projection layout. A self-heated checkerboard was employed to calibrate the different types of cameras. We then calibrated the structured light emitted by the LCD projector, based on a stereo-vision approach and a least-squares quadric surface-fitting algorithm. Afterwards, the precise 3D surface can be fused with the undistorted thermal and color images. To enhance medical applications, the region-of-interest (ROI) in the temperature or color image representing the surface area of clinical interest can be located at the corresponding position in the other images through coordinate system transformation. System evaluation demonstrated a mapping error between FIR and visual images of three pixels or less. Experiments show that this work is useful in certain disease diagnoses.
A Coded Structured Light System Based on Primary Color Stripe Projection and Monochrome Imaging
Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano
2013-01-01
Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy.
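The time-multiplexing gray-code strategy used as the comparison baseline decodes, for each camera pixel, a projector column index from a sequence of binary stripe patterns. A minimal sketch, assuming full-on and full-off reference frames for per-pixel thresholding:

```python
# Temporal gray-code decoding for structured light (baseline technique).
import numpy as np

def decode_gray(frames, white, black):
    """frames: list of N images under successive gray-code stripe patterns;
    white/black: reference frames under full-on / full-off illumination."""
    thresh = (white.astype(float) + black.astype(float)) / 2.0
    bits = [(f > thresh).astype(np.uint32) for f in frames]
    # Gray -> binary: b[0] = g[0]; b[k] = b[k-1] XOR g[k].
    acc = bits[0].copy()
    code = bits[0].copy()
    for g in bits[1:]:
        acc ^= g
        code = (code << 1) | acc
    return code  # per-pixel projector column index
```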
An overview of instrumentation for the Large Binocular Telescope
NASA Astrophysics Data System (ADS)
Wagner, R. Mark
2010-07-01
An overview of instrumentation for the Large Binocular Telescope is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27' × 27') mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the straight-through F/15 Gregorian focus incorporating multiple slit masks for multi-object spectroscopy over a 6' field and spectral resolutions of up to 8000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCIFER), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at a bent interior focal station and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks, as well as diffraction-limited (FOV: 0.5' × 0.5') imaging and long-slit spectroscopy. Strategic instruments under development for the remaining two combined focal stations include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench near-infrared beam combiner utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC-NIRVANA). In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra-high-resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support. Over the past two years the LBC and the first LUCIFER instrument have been brought into routine scientific operation, and MODS1 commissioning is set to begin in the fall of 2010.
SuperSpec, The On-Chip Spectrometer: Improved NEP and Antenna Performance
NASA Astrophysics Data System (ADS)
Wheeler, Jordan; Hailey-Dunsheath, S.; Shirokoff, E.; Barry, P. S.; Bradford, C. M.; Chapman, S.; Che, G.; Doyle, S.; Glenn, J.; Gordon, S.; Hollister, M.; Kovács, A.; LeDuc, H. G.; Mauskopf, P.; McGeehan, R.; McKenney, C.; Reck, T.; Redford, J.; Ross, C.; Shiu, C.; Tucker, C.; Turner, J.; Walker, S.; Zmuidzinas, J.
2018-05-01
SuperSpec is a new technology for mm and sub-mm spectroscopy. It is an on-chip spectrometer being developed for multi-object, moderate-resolution (R ~ 300), large-bandwidth survey spectroscopy of high-redshift galaxies in the 1 mm atmospheric window. This band accesses the CO ladder in the redshift range z = 0-4 and the [CII] 158 μm line from redshift z = 5-9. SuperSpec employs a novel architecture in which detectors are coupled to a series of resonant filters along a single microwave feedline instead of using dispersive optics. This construction allows for the creation of a full spectrometer occupying only ~10 cm^2 of silicon, a reduction in size of several orders of magnitude when compared to standard grating spectrometers. This small profile enables the production of future multi-beam spectroscopic instruments envisioned for the millimeter band to measure the redshifts of dusty galaxies efficiently. The SuperSpec collaboration is currently pushing toward the deployment of a SuperSpec demonstration instrument in fall 2018. The progress with the latest SuperSpec prototype devices is presented, reporting increased responsivity via a reduced inductor volume (2.6 μm^3) and the incorporation of a new broadband antenna. A detector NEP of 3-4 × 10^{-18} W/Hz^{0.5} is obtained, sufficient for background-limited observation at mountaintop sites. In addition, beam maps and efficiency measurements of a new wide-band dual bow-tie slot antenna are shown.
Jiang, Tingting; Raviram, Ramya; Snetkova, Valentina; Rocha, Pedro P; Proudhon, Charlotte; Badri, Sana; Bonneau, Richard; Skok, Jane A; Kluger, Yuval
2016-10-14
Use of low resolution single cell DNA FISH and population based high resolution chromosome conformation capture techniques have highlighted the importance of pairwise chromatin interactions in gene regulation. However, it is unlikely that associations involving regulatory elements act in isolation of other interacting partners that also influence their impact. Indeed, the influence of multi-loci interactions remains something of an enigma as beyond low-resolution DNA FISH we do not have the appropriate tools to analyze these. Here we present a method that uses standard 4C-seq data to identify multi-loci interactions from the same cell. We demonstrate the feasibility of our method using 4C-seq data sets that identify known pairwise and novel tri-loci interactions involving the Tcrb and Igk antigen receptor enhancers. We further show that the three Igk enhancers, MiEκ, 3'Eκ and Edκ, interact simultaneously in this super-enhancer cluster, which add to our previous findings showing that loss of one element decreases interactions between all three elements as well as reducing their transcriptional output. These findings underscore the functional importance of simultaneous interactions and provide new insight into the relationship between enhancer elements. Our method opens the door for studying multi-loci interactions and their impact on gene regulation in other biological settings.
Optimizing Imaging Conditions for Demanding Multi-Color Super Resolution Localization Microscopy
Nahidiazar, Leila; Agronskaia, Alexandra V.; Broertjes, Jorrit; van den Broek, Bram; Jalink, Kees
2016-01-01
Single Molecule Localization super-resolution Microscopy (SMLM) has become a powerful tool to study cellular architecture at the nanometer scale. In SMLM, single fluorophore labels are made to repeatedly switch on and off (“blink”), and their exact locations are determined by mathematically finding the centers of individual blinks. The image quality obtainable by SMLM critically depends on efficacy of blinking (brightness, fraction of molecules in the on-state) and on preparation longevity and labeling density. Recent work has identified several combinations of bright dyes and imaging buffers that work well together. Unfortunately, different dyes blink optimally in different imaging buffers, and acquisition of good quality 2- and 3-color images has therefore remained challenging. In this study we describe a new imaging buffer, OxEA, that supports 3-color imaging of the popular Alexa dyes. We also describe incremental improvements in preparation technique that significantly decrease lateral- and axial drift, as well as increase preparation longevity. We show that these improvements allow us to collect very large series of images from the same cell, enabling image stitching, extended 3D imaging as well as multi-color recording.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ajami, N K; Duan, Q; Gao, X
2005-04-11
This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), the Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
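For concreteness, the two simplest schemes compared, the Simple Multi-model Average and a weighted combination fit on a training period (the flavor underlying WAM and the superensemble variants), can be sketched as below. The array shapes and the least-squares-with-intercept weighting are assumptions made for the illustration.

```python
# Hedged sketch of multi-model combination: SMA and a regression-weighted
# average with training-period bias correction (superensemble-style).
import numpy as np

def sma(preds):
    """preds: (n_models, n_times) array of simulated flows."""
    return preds.mean(axis=0)

def weighted_combination(train_preds, train_obs, preds):
    # Fit least-squares weights plus an intercept on the training period;
    # the intercept acts as a simple bias correction.
    X = np.vstack([train_preds, np.ones(train_preds.shape[1])]).T
    w, *_ = np.linalg.lstsq(X, train_obs, rcond=None)
    X_new = np.vstack([preds, np.ones(preds.shape[1])]).T
    return X_new @ w
```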
Design of tangential multi-energy soft x-ray camera for NSTX-U
NASA Astrophysics Data System (ADS)
Delgado-Aparicio, Luis F.; Maddox, J.; Pablant, N.; Hill, K.; Bitter, M.; Stratton, B.; Efthimion, Phillip
2016-10-01
For tokamaks and future facilities to operate safely in high-pressure, long-pulse discharges, it is imperative to address key issues associated with impurity sources, core transport and high-Z impurity accumulation. Multi-energy SXR imaging provides a unique opportunity for measuring, simultaneously, a variety of important plasma properties (Te, nZ and ΔZeff). A new tangential multi-energy soft x-ray pin-hole camera is being designed to sample the continuum- and line-emission from low-, medium- and high-Z impurities. This new x-ray diagnostic will be installed on an equatorial midplane port of the NSTX-U tokamak and will measure the radial structure of the photon emissivity with a radial resolution below 1 cm at a 500 Hz frame rate and a photon-energy resolution of 500 eV. The expected layout and response of the new system are shown for different plasma conditions and impurity concentrations. The effect of toroidal rotation driving poloidal asymmetries in the core radiation is also addressed. This effort is designed to contribute to the near- and long-term highest priority research goals for NSTX-U, which will integrate non-inductive operation at reduced collisionality, long energy-confinement times and a transition to a divertor solution with metal walls.
NASA Astrophysics Data System (ADS)
Lu, Chieh Han; Chen, Peilin; Chen, Bi-Chang
2017-02-01
Optical imaging techniques provide much important information for understanding life science, especially cellular structure and morphology, because "seeing is believing". However, the resolution of optical imaging is limited by the diffraction limit discovered by Ernst Abbe, i.e. λ/(2NA), where NA is the numerical aperture of the objective lens. Fluorescence super-resolution microscopy techniques such as stimulated emission depletion microscopy (STED), photoactivated localization microscopy (PALM), and stochastic optical reconstruction microscopy (STORM) were invented to allow biological entities to be seen down to the molecular level, below the diffraction limit (around 200 nm in lateral resolution). These techniques do not physically violate the Abbe limit of resolution but exploit the photoluminescence properties and labeling specificity of fluorescent molecules to achieve super-resolution imaging. However, these super-resolution techniques are mostly limited to 2D imaging of fixed or dead samples due to the high laser power needed or the slow localization process. Beyond 2D imaging, light sheet microscopy has been shown to have many applications in 3D imaging at much better spatiotemporal resolution due to its intrinsic optical sectioning and high imaging speed. Herein, we combine the advantages of localization microscopy and light-sheet microscopy to achieve super-resolved cellular imaging in 3D across a large field of view. With high-density labeling of spontaneously blinking fluorophores and the wide-field detection of light-sheet microscopy, we can construct 3D super-resolution multi-cellular images at high speed (minutes) by light-sheet single-molecule localization microscopy.
Calibration Method for IATS and Application in Multi-Target Monitoring Using Coded Targets
NASA Astrophysics Data System (ADS)
Zhou, Yueyin; Wagner, Andreas; Wunderlich, Thomas; Wasmeier, Peter
2017-06-01
The technique of Image Assisted Total Stations (IATS) has been studied for over ten years and comprises two major parts: the calibration procedure, which establishes the relationship between the camera system and the theodolite system, and automatic target detection on the image by various methods from photogrammetry or computer vision. Several calibration methods have been developed, mostly using prototypes with an add-on camera rigidly mounted on the total station. However, these prototypes are not commercially available. This paper proposes a calibration method based on the Leica MS50, which has two built-in cameras, each with a resolution of 2560 × 1920 px: an overview camera and a telescope (on-axis) camera. Our work in this paper is based on the on-axis camera, which uses the 30× magnification of the telescope. The calibration involves estimating 7 parameters. We use coded targets, which are common tools for orientation in photogrammetry, to detect different targets in IATS images instead of prisms and traditional ATR functions. We test and verify the efficiency and stability of this monitoring method with multiple targets.
Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping
2017-04-03
Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. Simulations of plenoptic camera models can be used prior to an experiment to improve experimental efficiency and reduce cost. In this work, microlens arrays, based on an established light field camera model, are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and of a flame is simulated using the calibrated parameters of a Raytrix camera (R29). The optimized model improves the image resolution, the imaging-screen utilization, and the usable depth-of-field range.
Detection of non-classical space-time correlations with a novel type of single-photon camera.
Just, Felix; Filipenko, Mykhaylo; Cavanna, Andrea; Michel, Thilo; Gleixner, Thomas; Taheri, Michael; Vallerga, John; Campbell, Michael; Tick, Timo; Anton, Gisela; Chekhova, Maria V; Leuchs, Gerd
2014-07-14
During the last decades, multi-pixel detectors have been developed that are capable of registering single photons. The newly developed hybrid photon detector camera has the remarkable property of offering not only spatial but also temporal resolution. In this work, we apply this device to the detection of non-classical light from spontaneous parametric down-conversion and use two-photon correlations for the absolute calibration of its quantum efficiency.
Multi-color pyrometry imaging system and method of operating the same
Estevadeordal, Jordi; Nirmalan, Nirm Velumylum; Tralshawala, Nilesh; Bailey, Jeremy Clyde
2017-03-21
A multi-color pyrometry imaging system for a high-temperature asset includes at least one viewing port in optical communication with at least one high-temperature component of the high-temperature asset. The system also includes at least one camera device in optical communication with the at least one viewing port. The at least one camera device includes a camera enclosure and at least one camera aperture defined in the camera enclosure. The at least one camera aperture is in optical communication with the at least one viewing port. The at least one camera device also includes a multi-color filtering mechanism coupled to the enclosure. The multi-color filtering mechanism is configured to sequentially transmit photons within a first predetermined wavelength band and transmit photons within a second predetermined wavelength band that is different from the first predetermined wavelength band.
NASA Astrophysics Data System (ADS)
Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele; Pernechele, Claudio; Dionisio, Cesare
2017-11-01
This paper presents an innovative algorithm developed for attitude determination of a space platform. The algorithm exploits images taken from a multi-purpose panoramic camera equipped with a hyper-hemispheric lens and used as a star tracker. The sensor architecture is also original, since state-of-the-art star trackers accurately image as many stars as possible within a narrow or medium-size field of view, while the considered sensor observes an extremely large portion of the celestial sphere but with observation capabilities limited by the features of the optical system. The proposed approach combines algorithmic concepts, like template matching and point cloud registration, inherited from the computer vision and robotics research fields, to carry out star identification. The final aim is to provide a robust and reliable initial attitude solution (lost-in-space mode), with a satisfactory accuracy level in view of the multi-purpose functionality of the sensor and considering its limitations in terms of resolution and sensitivity. Performance evaluation is carried out within a simulation environment in which the panoramic camera operation is realistically reproduced, including perturbations in the imaged star pattern. Results show that the presented algorithm is able to estimate attitude with accuracy better than 1° and a success rate of around 98%, evaluated by densely covering the entire space of parameters representing the camera pointing in the inertial space.
The High Resolution Stereo Camera (HRSC): 10 Years of Imaging Mars
NASA Astrophysics Data System (ADS)
Jaumann, R.; Neukum, G.; Tirsch, D.; Hoffmann, H.
2014-04-01
The HRSC Experiment: Imagery is the major source of our current understanding of the geologic evolution of Mars in qualitative and quantitative terms. Imaging is required to enhance our knowledge of Mars with respect to geological processes occurring on local, regional and global scales and is an essential prerequisite for detailed surface exploration. The High Resolution Stereo Camera (HRSC) of ESA's Mars Express mission (MEx) is designed to simultaneously map the morphology, topography, structure and geologic context of the surface of Mars as well as atmospheric phenomena [1]. The HRSC directly addresses two of the main scientific goals of the Mars Express mission: (1) high-resolution three-dimensional photogeologic surface exploration and (2) the investigation of surface-atmosphere interactions over time; it also significantly supports (3) the study of atmospheric phenomena by multi-angle coverage and limb sounding as well as (4) multispectral mapping by providing high-resolution three-dimensional color context information. In addition, the stereoscopic imagery especially characterizes landing sites and their geologic context [1]. The HRSC surface resolution and the digital terrain models bridge the gap in scale between the highest-ground-resolution images (e.g., HiRISE) and global coverage observations (e.g., Viking). This is also the case with respect to DTMs (e.g., MOLA and local high-resolution DTMs). HRSC is also used as a cartographic basis to correlate panchromatic and multispectral stereo data. The unique multi-angle imaging technique of the HRSC supports its stereo capability by providing not only a stereo triplet but a stereo quintuplet, making the photogrammetric processing very robust [1, 3]. The capabilities for three-dimensional orbital reconnaissance of the Martian surface are ideally met by HRSC, making this camera unique in the international Mars exploration effort.
Single-Molecule Real-Time 3D Imaging of the Transcription Cycle by Modulation Interferometry.
Wang, Guanshi; Hauver, Jesse; Thomas, Zachary; Darst, Seth A; Pertsinidis, Alexandros
2016-12-15
Many essential cellular processes, such as gene control, employ elaborate mechanisms involving the coordination of large, multi-component molecular assemblies. Few structural biology tools presently have the combined spatial-temporal resolution and molecular specificity required to capture the movement, conformational changes, and subunit association-dissociation kinetics, three fundamental elements of how such intricate molecular machines work. Here, we report a 3D single-molecule super-resolution imaging study using modulation interferometry and phase-sensitive detection that achieves <2 nm axial localization precision, well below the size of the few-nanometer individual protein components. To illustrate the capability of this technique in probing the dynamics of complex macromolecular machines, we visualize the movement of individual multi-subunit E. coli RNA polymerases through the complete transcription cycle, dissect the kinetics of the initiation-elongation transition, and determine the fate of σ70 initiation factors during promoter escape. Modulation interferometry sets the stage for single-molecule studies of several hitherto difficult-to-investigate multi-molecular transactions that underlie genome regulation.
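The phase-sensitive detection at the heart of the method maps a measured modulation phase to an axial position. Below is a minimal sketch of a generic three-bucket phase estimate; the linear phase-to-position mapping through an assumed modulation period is an illustration, not the paper's calibrated pipeline.

```python
# Generic three-step phase-shifting estimate (illustrative only).
# i1..i3: intensities at modulation phase offsets of 0, +120 and +240 degrees.
import numpy as np

def axial_position(i1, i2, i3, period_nm):
    phase = np.arctan2(np.sqrt(3.0) * (i3 - i2), 2.0 * i1 - i2 - i3)
    return (phase / (2.0 * np.pi)) * period_nm  # assumed linear mapping
```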
NASA Astrophysics Data System (ADS)
Yi, Shengzhen; Zhang, Zhe; Huang, Qiushi; Zhang, Zhong; Wang, Zhanshan; Wei, Lai; Liu, Dongxiao; Cao, Leifeng; Gu, Yuqiu
2018-03-01
Multi-channel Kirkpatrick-Baez (KB) microscopes, which have better resolution and collection efficiency than pinhole cameras, have been widely used in laser inertial confinement fusion to diagnose the time evolution of the target implosion. In this study, a tandem multi-channel KB microscope was developed with sixteen imaging channels and precise control of the spatial resolution and image intervals. This precise control was achieved through a coarse assembly of mirror pairs using high-accuracy optical prisms, followed by fine adjustment during real-time x-ray imaging experiments. The multilayers coated on the KB mirrors were designed to have substantially the same reflectivity, to obtain uniform brightness across the different images for laser-plasma temperature analysis. The study provides a practicable method to achieve optimum performance of the microscope for future high-resolution applications in inertial confinement fusion experiments.
A Visual Servoing-Based Method for ProCam Systems Calibration
Berry, Francois; Aider, Omar Ait; Mosnier, Jeremie
2013-01-01
Projector-camera systems are currently used in a wide field of applications, such as 3D reconstruction and augmented reality, and can provide accurate measurements, depending on the configuration and calibration. Frequently, the calibration task is divided into two steps: camera calibration followed by projector calibration. The latter still poses certain problems that are not easy to solve, such as the difficulty in obtaining a set of 2D–3D points to compute the projection matrix between the projector and the world. Existing methods are either not sufficiently accurate or not flexible. We propose an easy and automatic method to calibrate such systems that consists in projecting a calibration pattern and superimposing it automatically on a known printed pattern. The projected pattern is provided by a virtual camera observing a virtual pattern in an OpenGL model. The projector displays what the virtual camera visualizes. Thus, the projected pattern can be controlled and superimposed on the printed one with the aid of visual servoing. Our experimental results compare favorably with those of other methods considering both usability and accuracy.
Time-lapse photogrammetry in geomorphic studies
NASA Astrophysics Data System (ADS)
Eltner, Anette; Kaiser, Andreas
2017-04-01
Image-based approaches to reconstructing the earth's surface (Structure from Motion - SfM) are becoming established as a standard technology for high-resolution topographic data. This is due, amongst other advantages, to the comparative ease of use and flexibility of data generation. Furthermore, the increased spatial resolution has led to its implementation across a vast range of applications from sub-mm to tens-of-km scales. Almost fully automatic calculation of referenced digital elevation models also allows for a significant increase in temporal resolution, potentially up to sub-second scales. This requires the setup of a time-lapse multi-camera system, and different aspects need to be considered: The camera array has to be temporarily stable, or potential movements need to be compensated by temporarily stable reference targets/areas. The stability of the internal camera geometry has to be considered because of a usually significantly lower number of images of the scene, and thus less redundancy for parameter estimation, compared to more common SfM applications. Depending on the speed of surface change, synchronization has to be very accurate. Because such systems are usually deployed in the field, changing environmental conditions affecting lighting and visual range are also crucial factors to keep in mind. Besides these important considerations, time-lapse photogrammetry holds much potential. The integration of multi-sensor systems, e.g. using thermal cameras, enables the detection of processes not visible in RGB images alone. Furthermore, the use of low-cost sensors allows for a significant increase in areal coverage and for setups at locations where a loss of the system cannot be ruled out. The use of micro-computers offers smart camera triggering, e.g. acquiring images with increased frequency controlled by a rainfall-triggered sensor. In addition, these micro-computers can enable on-site data processing, e.g. recognition of increased surface movement, and thus might be used as a warning system in the case of natural hazards. A large variety of applications are suitable for time-lapse photogrammetry, i.e. change detection of all sorts, e.g. volumetric alterations, movement tracking or roughness changes. The multi-camera systems can be used for slope investigations, soil studies, glacier observation, snow cover measurement, volcanic surveillance or plant growth monitoring. A conceptual workflow is introduced, highlighting the limits and potentials of time-lapse photogrammetry.
A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology
Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi
2015-01-01
Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing demand for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is also essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results of the flight experiments illustrate that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between a traditional system (MADC II) and the proposed MDCS demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also argue that using higher-accuracy TSCs in the new MDCS should further improve the accuracy of derived photogrammetric products.
NASA Astrophysics Data System (ADS)
Ikeda, Sei; Sato, Tomokazu; Kanbara, Masayuki; Yokoya, Naokazu
2004-05-01
Technology that enables users to experience a remote site virtually is called telepresence. Telepresence systems using real-environment images are expected to be used in fields such as entertainment, medicine and education. This paper describes a novel telepresence system that enables users to walk through a photorealistic virtualized environment by actually walking. To realize such a system, a wide-angle high-resolution movie is projected on an immersive multi-screen display to present users with the virtualized environment, and a treadmill is controlled according to the user's detected locomotion. In this study, we use an omnidirectional multi-camera system to acquire images of a real outdoor scene. The proposed system provides users with a rich sense of walking in a remote site.
Color image guided depth image super resolution using fusion filter
NASA Astrophysics Data System (ADS)
He, Jin; Liang, Bin; He, Ying; Yang, Jun
2018-04-01
Depth cameras currently play an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images. Color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide is an efficient way to obtain an HR depth image. In this paper, we propose a depth image super resolution (SR) algorithm which uses an HR color image as a guide and an LR depth image as input. We use a fusion of the guided filter and an edge-based joint bilateral filter to obtain the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method provides HR depth images of better quality, both numerically and visually.
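A minimal sketch of color-guided depth upsampling in this spirit, assuming OpenCV's contrib module (cv2.ximgproc): bicubically upsample the LR depth, filter it once with the guided filter and once with a joint bilateral filter, then blend the two with an edge-based weight. The parameters and the Canny-based blend are assumptions made for the illustration, not the paper's exact fusion filter.

```python
# Color-guided depth super-resolution sketch (requires opencv-contrib-python).
import cv2
import numpy as np

def guided_depth_sr(lr_depth, hr_color, radius=8, eps=1e-4):
    """lr_depth: float32 LR depth map; hr_color: 8-bit BGR guide image."""
    h, w = hr_color.shape[:2]
    up = cv2.resize(lr_depth, (w, h), interpolation=cv2.INTER_CUBIC)
    up = up.astype(np.float32)
    g = cv2.ximgproc.guidedFilter(hr_color, up, radius, eps)
    jb = cv2.ximgproc.jointBilateralFilter(hr_color, up, 9, 25.0, 9.0)
    # Weight toward the edge-preserving bilateral result near color edges.
    edges = cv2.Canny(cv2.cvtColor(hr_color, cv2.COLOR_BGR2GRAY), 50, 150)
    wgt = cv2.GaussianBlur(edges.astype(np.float32) / 255.0, (0, 0), 3)
    return wgt * jb + (1.0 - wgt) * g
```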
NASA Astrophysics Data System (ADS)
Petrou, Zisis I.; Xian, Yang; Tian, YingLi
2018-04-01
Estimation of sea ice motion at fine scales is important for a number of regional and local applications, including modeling of sea ice distribution, ocean-atmosphere and climate dynamics, as well as safe navigation and sea operations. In this study, we propose an optical flow and super-resolution approach to accurately estimate motion from remote sensing images at a higher spatial resolution than the original data. First, an external example learning-based super-resolution method is applied to the original images to generate higher resolution versions. Then, an optical flow approach is applied to the higher resolution images, identifying sparse correspondences and interpolating them to extract a dense motion vector field with continuous values and subpixel accuracy. Our proposed approach is successfully evaluated on passive microwave, optical, and Synthetic Aperture Radar data, proving appropriate for multi-sensor applications and different spatial resolutions. The approach estimates motion with similar or higher accuracy than on the original data, while increasing the spatial resolution by up to eight times. In addition, the adopted optical flow component outperforms a state-of-the-art pattern matching method. Overall, the proposed approach yields accurate motion vectors at unprecedented spatial resolutions of up to 1.5 km for passive microwave data covering the entire Arctic and 20 m for radar data, and proves promising for numerous scientific and operational applications.
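The flow step can be illustrated with an off-the-shelf dense method applied after upsampling; here bicubic interpolation stands in for the learned super-resolution stage, and Farneback flow replaces the paper's sparse-correspondence-plus-interpolation scheme.

```python
# Motion estimation on upsampled image pairs (illustrative substitute).
import cv2

def sea_ice_motion(img0, img1, scale=4):
    """img0, img1: single-channel 8-bit images of the same scene at two times."""
    up0 = cv2.resize(img0, None, fx=scale, fy=scale,
                     interpolation=cv2.INTER_CUBIC)
    up1 = cv2.resize(img1, None, fx=scale, fy=scale,
                     interpolation=cv2.INTER_CUBIC)
    flow = cv2.calcOpticalFlowFarneback(up0, up1, None, pyr_scale=0.5,
                                        levels=4, winsize=21, iterations=3,
                                        poly_n=7, poly_sigma=1.5, flags=0)
    return flow  # HxWx2 displacements in upsampled pixels
```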
Novel 3D imaging techniques for improved understanding of planetary surface geomorphology.
NASA Astrophysics Data System (ADS)
Muller, Jan-Peter
2015-04-01
Understanding the role of different planetary surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the past decade for Mars and the Moon, especially in 3D imaging of surface shape (down to resolutions of 75 cm) and in the subsequent correction of orbital imagery for terrain relief and its co-registration with lander and rover robotic images. We present some of the recent highlights, including 3D modelling of surface shape from the ESA Mars Express HRSC (High Resolution Stereo Camera) at 30-100 m grid spacing [1, 2], co-registered to HRSC using a resolution cascade of 20 m DTMs from NASA MRO stereo-CTX and 0.75 m DTMs from MRO stereo-HiRISE [3]. This has opened our eyes to formation mechanisms such as megaflooding events (e.g., the formation of Iani Vallis and the upstream blocky terrain), crater lakes and receding valley cuts [4]. A comparable set of products is now available for the Moon from LROC-WA at 100 m [5] and LROC-NA at 1 m [6]. Recently, a very novel technique for the super-resolution restoration (SRR) of stacks of images has been developed at UCL [7]. The first examples shown will cover the entire MER-A Spirit rover traverse, taking a stack of 25 cm HiRISE images to generate a corridor of 5 cm SRR imagery along the traverse, resolving previously unresolved features such as rocks (created as a consequence of meteoritic bombardment) and ridge and valley features. This SRR technique will allow us, for ~400 areas on Mars (where 5 or more HiRISE images have been captured) and similar numbers on the Moon, to resolve sub-pixel features. Examples will be shown of how these SRR images can be employed to assist with a better understanding of surface geomorphology. Acknowledgements: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under PRoViDE grant agreement no. 312377. Partial support is also provided by the STFC 'MSSL Consolidated Grant' ST/K000977/1. References: [1] Gwinner, K., et al. (2010). Topography of Mars from global mapping by HRSC high-resolution digital terrain models and orthoimages: characteristics and performance. Earth and Planetary Science Letters, 294, 506-519. doi:10.1016/j.epsl.2009.11.007. [2] Gwinner, K., et al. (2015). Mars Express High Resolution Stereo Camera (HRSC) multi-orbit data products: methodology, mapping concepts and performance for the first quadrangle (MC-11E). Geophysical Research Abstracts, Vol. 17, EGU2015-13832. [3] Kim, J., & Muller, J. (2009). Multi-resolution topographic data extraction from Martian stereo imagery. Planetary and Space Science, 57, 2095-2112. doi:10.1016/j.pss.2009.09.024. [4] Warner, N. H., Gupta, S., Kim, J.-R., Muller, J.-P., Le Corre, L., Morley, J., et al. (2011). Constraints on the origin and evolution of Iani Chaos, Mars. Journal of Geophysical Research, 116(E6), E06003. doi:10.1029/2010JE003787. [5] Fok, H. S., Shum, C. K., Yi, Y., Araki, H., Ping, J., Williams, J. G., et al. (2011). Accuracy assessment of lunar topography models. Earth Planets Space, 63, 15-23. doi:10.5047/eps.2010.08.005. [6] Haase, I., Oberst, J., Scholten, F., Wählisch, M., Gläser, P., Karachevtseva, I., & Robinson, M. S. (2012). Mapping the Apollo 17 landing site area based on Lunar Reconnaissance Orbiter Camera images and Apollo surface photography. Journal of Geophysical Research, 117, E00H20. doi:10.1029/2011JE003908. [7] Tao, Y., & Muller, J.-P. (2015). Supporting lander and rover operation: a novel super-resolution restoration technique. Geophysical Research Abstracts, Vol. 17, EGU2015-6925.
A High Spatial Resolution Depth Sensing Method Based on Binocular Structured Light
Yao, Huimin; Ge, Chenyang; Xue, Jianru; Zheng, Nanning
2017-01-01
Since the release of the Microsoft Kinect, depth information has been used in many fields because of its low cost and easy availability. However, Kinect and Kinect-like RGB-D sensors show limited performance in applications that place high demands on the accuracy and robustness of depth information. In this paper, we propose a depth sensing system that contains a laser projector similar to that used in the Kinect, and two infrared cameras located on either side of the laser projector, to obtain higher spatial resolution depth information. We apply the block-matching algorithm to estimate the disparity. To improve the spatial resolution, we reduce the size of the matching blocks, but smaller matching blocks yield lower matching precision. To address this problem, we combine two matching modes (binocular mode and monocular mode) in the disparity estimation process. Experimental results show that, compared with the Kinect, our method obtains higher spatial resolution depth without loss of range image quality. Furthermore, our algorithm is implemented on a low-cost hardware platform, and the system supports depth image sequences at a resolution of 1280 × 960 and at speeds of up to 60 frames per second. PMID:28397759
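To make the core matching step concrete, the following is a minimal sketch of disparity estimation by block matching with a sum-of-absolute-differences (SAD) cost, in the spirit of the step described above; it is an illustration only, and the paper's binocular/monocular mode combination is not reproduced. The function name and parameters are hypothetical.

```python
import numpy as np

def block_match_disparity(left, right, block=9, max_disp=64):
    """Estimate a disparity map by block matching with a SAD cost.
    'left' and 'right' are rectified grayscale images as 2-D arrays."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = np.inf, 0
            # search candidate blocks shifted leftward in the right image
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(ref.astype(np.int32) - cand.astype(np.int32)).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Shrinking `block` raises the attainable spatial resolution but makes the SAD cost noisier, which is exactly the precision trade-off the two-mode scheme above is designed to offset.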
Super-Joule heating in graphene and silver nanowire network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maize, Kerry; Das, Suprem R.; Sadeque, Sajia
Transistors, sensors, and transparent conductors based on randomly assembled nanowire networks rely on multi-component percolation for unique and distinctive applications in flexible electronics, biochemical sensing, and solar cells. While conduction models for 1-D and 1-D/2-D networks have been developed, typically assuming linear electronic transport and self-heating, these models have not been validated by direct high-resolution characterization of coupled electronic pathways and thermal response. In this letter, we show the occurrence of nonlinear "super-Joule" self-heating at the transport bottlenecks in networks of silver nanowires and silver nanowire/single-layer graphene hybrids using high-resolution thermoreflectance (TR) imaging. TR images of the microscopic self-heating hotspots within nanowire network and nanowire/graphene hybrid network devices, with submicron spatial resolution, are used to infer electrical current pathways. The results encourage a fundamental reevaluation of transport models for network-based percolating conductors.
NASA Astrophysics Data System (ADS)
Vedyaykin, A. D.; Gorbunov, V. V.; Sabantsev, A. V.; Polinovskaya, V. S.; Vishnyakov, I. E.; Melnikov, A. S.; Serdobintsev, P. Yu; Khodorkovskii, M. A.
2015-11-01
Localization microscopy allows visualization of biological structures with resolution well below the diffraction limit. Localization microscopy has previously been used in combination with fluorescent protein labeling to study FtsZ organization in Escherichia coli, but the fact that the fluorescent chimeric protein was unable to rescue temperature-sensitive ftsZ mutants suggests that the obtained images may not represent native FtsZ structures faithfully. Indirect immunolabeling of FtsZ not only overcomes this problem, but also allows the use of the powerful arsenal of visualization methods available for different structures in fixed cells. In this work we simultaneously obtained super-resolution images of FtsZ structures and diffraction-limited or super-resolution images of DNA and the cell surface in E. coli, which allows for the study of the spatial arrangement of FtsZ structures with respect to nucleoid positions and septum formation.
Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji
2016-01-01
For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have therefore developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between estimated and measured values was small, with root-mean-square errors (RMSE) for leaf width/length and stem height/diameter of 1.65 mm (R2 = 0.98) and 0.57 mm (R2 = 0.99), respectively. The accuracy of the 3D model reconstruction of leaf and stem by a 28-mm lens at the first and third camera positions was the highest, and this configuration also produced the largest number of fine-scale 3D surface reconstructions of leaves and stems. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and accurate estimation of plant parameters. They also showed that our system can capture high-resolution 3D images of nursery plants with high efficiency. PMID:27314348
Li, Yiming; Ishitsuka, Yuji; Hedde, Per Niklas; Nienhaus, G Ulrich
2013-06-25
In localization-based super-resolution microscopy, individual fluorescent markers are stochastically photoactivated and subsequently localized within a series of camera frames, yielding a final image with a resolution far beyond the diffraction limit. Yet, before localization can be performed, the subregions within the frames where the individual molecules are present have to be identified, oftentimes in the presence of high background. In this work, we address the importance of reliable molecule identification for the quality of the final reconstructed super-resolution image. We present a fast and robust algorithm (a-livePALM) that vastly improves the molecule detection efficiency while minimizing false assignments that can lead to image artifacts.
A multi-channel setup to study fractures in scintillators
NASA Astrophysics Data System (ADS)
Tantot, A.; Bouard, C.; Briche, R.; Lefèvre, G.; Manier, B.; Zaïm, N.; Deschanel, S.; Vanel, L.; Di Stefano, P. C. F.
2016-12-01
To investigate fractoluminescence in scintillating crystals used for particle detection, we have developed a multi-channel setup built around samples of double-cleavage drilled compression (DCDC) geometry in a controllable atmosphere. The setup allows the continuous digitization over hours of various parameters, including the applied load and the compressive strain of the sample, as well as the acoustic emission. Emitted visible light is recorded with nanosecond resolution, and crack propagation is monitored using infrared lighting and a camera. An example of application to Bi4Ge3O12 (BGO) is provided.
A multi-channel coronal spectrophotometer.
NASA Technical Reports Server (NTRS)
Landman, D. A.; Orrall, F. Q.; Zane, R.
1973-01-01
We describe a new multi-channel coronal spectrophotometer system, presently being installed at Mees Solar Observatory, Mount Haleakala, Maui. The apparatus is designed to record and interpret intensities from many sections of the visible and near-visible spectral regions simultaneously, with relatively high spatial and temporal resolution. The detector, a thermoelectrically cooled silicon vidicon camera tube, has its central target area divided into a rectangular array of about 100,000 pixels and is read out in a slow-scan (about 2 sec/frame) mode. Instrument functioning is entirely under PDP 11/45 computer control, and interfacing is via the CAMAC system.
Obstacle Detection and Avoidance of a Mobile Robotic Platform Using Active Depth Sensing
2014-06-01
At nearly one tenth the price of a laser range finder, the Xbox Kinect uses an infrared projector and camera to capture images of its environment in three dimensions.
Multi-Image Registration for an Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn
2002-01-01
An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
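As a sketch of the first registration method, matching fields of view and resolutions can be done by comparing each sensor's angular pixel size and resampling accordingly. The illustration below assumes simple rectilinear optics; the function and its parameters are hypothetical, not the EVS implementation.

```python
import numpy as np
from scipy.ndimage import zoom

def register_by_specs(img, fov_deg, res, ref_fov_deg, ref_res):
    """Resample 'img' (from a sensor with field of view 'fov_deg' and
    pixel grid 'res' = (rows, cols)) onto the pixel scale of a reference
    sensor, so that both images cover the same angular extent per pixel."""
    # angular size of one pixel for each sensor (degrees per pixel)
    ifov = np.asarray(fov_deg, dtype=float) / np.asarray(res, dtype=float)
    ref_ifov = np.asarray(ref_fov_deg, dtype=float) / np.asarray(ref_res, dtype=float)
    # zoom factor that equalizes the instantaneous fields of view
    factors = ifov / ref_ifov
    return zoom(img, factors, order=1)  # bilinear resampling
```

After such spec-based resampling, only residual offsets (e.g., from bore-sighting inaccuracies) remain, which is what the second, control-point-based method addresses.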
SuperHERO: Design of a New Hard X-Ray Focusing Telescope
NASA Technical Reports Server (NTRS)
Gaskin, Jessica; Elsner, Ronald; Ramsey, Brian; Wilson-Hodge, Colleen; Tennant, Allyn; Christe, Steven; Shih, Albert; Kiranmayee, Kilaru; Swartz, Douglas; Seller, Paul;
2015-01-01
SuperHERO is a hard x-ray (20-75 keV) balloon-borne telescope, currently in its proposal phase, that will utilize high angular-resolution grazing-incidence optics, coupled to novel CdTe multi-pixel, fine-pitch (250 micrometers) detectors. The high-resolution electroformed-nickel, grazing-incidence optics were developed at MSFC, and the detectors were developed at the Rutherford Appleton Laboratory in the UK, and are being readied for flight at GSFC. SuperHERO will use two active pointing systems; one for carrying out astronomical observations and another for solar observations during the same flight. The telescope will reside on a light-weight, carbon-composite structure that will integrate the Wallops Arc Second Pointer into its frame, for arcsecond or better pointing. This configuration will allow for Long Duration Balloon flights that can last up to 4 weeks. This next generation design, which is based on the High Energy Replicated Optics (HERO) and HERO to Explore the Sun (HEROES) payloads, will be discussed, with emphasis on the core telescope components.
Orthogonal strip HPGe planar SmartPET detectors in Compton configuration
NASA Astrophysics Data System (ADS)
Boston, H. C.; Gillam, J.; Boston, A. J.; Cooper, R. J.; Cresswell, J.; Grint, A. N.; Mather, A. R.; Nolan, P. J.; Scraggs, D. P.; Turk, G.; Hall, C. J.; Lazarus, I.; Berry, A.; Beveridge, T.; Lewis, R.
2007-10-01
The evolution of Germanium detector technology over the last decade has led to the possibility that such detectors can be employed in medical and security imaging. The excellent energy resolution coupled with the good position information that Germanium affords removes the necessity for the mechanical collimators that would be required in a conventional gamma camera system. By removing this constraint, the overall dose to the patient can be reduced or the throughput of the system can be increased. An additional benefit of excellent energy resolution is that tight gates can be placed on energies from either a multi-line gamma source or from multi-nuclide sources, increasing the number of sources that can be used in medical imaging. In terms of security imaging, segmented Germanium gives directionality and excellent spectroscopic information.
Pulsed-neutron imaging by a high-speed camera and center-of-gravity processing
NASA Astrophysics Data System (ADS)
Mochiki, K.; Uragaki, T.; Koide, J.; Kushima, Y.; Kawarabayashi, J.; Taketani, A.; Otake, Y.; Matsumoto, Y.; Su, Y.; Hiroi, K.; Shinohara, T.; Kai, T.
2018-01-01
Pulsed-neutron imaging is an attractive technique in the research fields of energy-resolved neutron radiography; RANS (RIKEN) and RADEN (J-PARC/JAEA) are small and large accelerator-driven pulsed-neutron facilities for such imaging, respectively. To overcome the insufficient spatial resolution of counting-type imaging detectors such as the μNID, nGEM and pixelated detectors, camera detectors combined with a neutron color image intensifier were investigated. At RANS, a center-of-gravity technique was applied to spot images obtained by a CCD camera, and the technique was confirmed to be effective for improving spatial resolution. At RADEN, a high-frame-rate CMOS camera was used, a super-resolution technique was applied, and the spatial resolution was further improved.
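The center-of-gravity step amounts to segmenting each scintillation spot and replacing it with its intensity-weighted centroid, which localizes the event to sub-pixel precision. A minimal sketch follows; the threshold-based segmentation is an assumption, not the published processing chain.

```python
import numpy as np
from scipy import ndimage

def centroid_events(frame, threshold):
    """Locate scintillation spots in a camera frame and return their
    sub-pixel centres of gravity as (row, col) pairs."""
    mask = frame > threshold
    labels, n = ndimage.label(mask)           # segment connected spots
    # intensity-weighted centroid of each spot
    return ndimage.center_of_mass(frame, labels, range(1, n + 1))
```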
SuperHERO: the next generation hard x-ray HEROES telescope
NASA Astrophysics Data System (ADS)
Gaskin, Jessica A.; Christe, Steven D.; Elsner, Ronald F.; Kilaru, Kiranmayee; Ramsey, Brian D.; Seller, Paul; Shih, Albert Y.; Stuchlik, David W.; Swartz, Douglas A.; Tennant, Allyn F.; Weddendorf, Bruce; Wilson, Matthew D.; Wilson-Hodge, Colleen A.
2014-07-01
SuperHERO is a new high-resolution, Long Duration Balloon-capable, hard-x-ray (20-75 keV) focusing telescope for making novel astrophysics and heliophysics observations. The SuperHERO payload, currently in its proposal phase, is being developed jointly by the Astrophysics Office at NASA Marshall Space Flight Center and the Solar Physics Laboratory and the Wallops Flight Facility at NASA Goddard Space Flight Center. SuperHERO is a follow-on payload to the High Energy Replicated Optics to Explore the Sun (HEROES) balloon-borne telescope that recently flew from Fort Sumner, NM in September of 2013, and will utilize many of the same features. Significant enhancements to the HEROES payload will be made, including the addition of optics, novel solid-state multi-pixel CdTe detectors, integration of the Wallops Arc-Second Pointer and a significantly lighter gondola suitable for Long Duration Flights.
The Role of the Modern Planetarium as an Effective Tool in Astronomy Education and Public Outreach
NASA Astrophysics Data System (ADS)
Albin, Edward F.
2016-01-01
As the planetarium approaches its 100th anniversary, today's planetarium educator must reflect on the role of such technology in contemporary astronomy education and outreach. The projection planetarium saw "first light" in 1923 at the Carl Zeiss factory in Jena, Germany. During the 20th century, the concept of a star projector beneath a dome flourished as an extraordinary device for the teaching of astronomy. The evolution of digital technology over the past twenty years has dramatically changed the perception / utilization of the planetarium. The vast majority of modern star theaters have shifted entirely to fulldome digital projection systems, abandoning the once ubiquitous electromechanical star projector altogether. These systems have evolved into ultra-high resolution theaters, capable of projecting imagery, videos, and any web-based media onto the dome. Such capability has rendered the planetarium a multi-disciplinary tool, broadening its educational appeal to a wide variety of fields -- including life sciences, the humanities, and even entertainment venues. However, we suggest that what is at the heart of the planetarium's appeal is having a theater adept at projecting a beautiful / accurate star-field. To this end, our facility chose to keep / maintain its aging Zeiss V star projector while adding fulldome digital capability. Such a hybrid approach provides an excellent compromise, presenting state-of-the-art multimedia while maintaining the ability to render a stunning night sky. In addition, our facility maintains two portable StarLab planetariums for outreach purposes, one unit with a classic electromechanical star projector and the other having a relatively inexpensive fulldome projection system. With a combination of these technologies, it is possible for the planetarium to be an effective tool for astronomical education / outreach well into the 21st century.
NASA Astrophysics Data System (ADS)
Chen, Xuanze; Liu, Yujia; Yang, Xusan; Wang, Tingting; Alonas, Eric; Santangelo, Philip J.; Ren, Qiushi; Xi, Peng
2013-02-01
Fluorescence microscopy has become an essential tool to study biological molecules, pathways and events in living cells, tissues and animals. Meanwhile, even the most advanced confocal microscopy can only yield optical resolution approaching the Abbe diffraction limit of 200 nm. This is still larger than many subcellular structures, which are too small to be resolved in detail. These limitations have driven the development of super-resolution optical imaging methodologies over the past decade. In stimulated emission depletion (STED) microscopy, the excitation focus is overlapped with an intense doughnut-shaped spot that instantly de-excites markers from their fluorescent state to the ground state by stimulated emission. This effectively eliminates the periphery of the Point Spread Function (PSF), resulting in a narrower focal region, or super-resolution. Scanning the sharpened spot through the specimen renders images with sub-diffraction resolution. Multi-color STED imaging can provide important structural and functional information on protein-protein interactions. In this work, we present a two-color, synchronization-free STED microscope based on a Ti:Sapphire oscillator. The excitation wavelengths were 532 nm and 635 nm, respectively. With a pump power of 4.6 W and sample irradiance of 310 mW, we achieved super-resolution as high as 71 nm. Human respiratory syncytial virus (hRSV) proteins were imaged with our two-color CW STED for co-localization analysis.
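The resolution gain of STED is commonly summarized by a square-root extension of Abbe's formula; a standard textbook form (not quoted from this work) is

```latex
d \approx \frac{\lambda}{2\,\mathrm{NA}\,\sqrt{1 + I/I_{\mathrm{sat}}}}
```

where λ is the wavelength, NA the numerical aperture, I the depletion beam intensity and I_sat the dye's saturation intensity; increasing I/I_sat sharpens the effective PSF below the diffraction limit, consistent with the 71 nm figure reported above.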
Low-cost structured-light based 3D capture system design
NASA Astrophysics Data System (ADS)
Dong, Jing; Bengtson, Kurt R.; Robinson, Barrett F.; Allebach, Jan P.
2014-03-01
Most of the 3D capture products currently in the market are high-end and pricey. They are not targeted for consumers, but rather for research, medical, or industrial usage. Very few aim to provide a solution for home and small business applications. Our goal is to fill in this gap by only using low-cost components to build a 3D capture system that can satisfy the needs of this market segment. In this paper, we present a low-cost 3D capture system based on the structured-light method. The system is built around the HP TopShot LaserJet Pro M275. For our capture device, we use the 8.0 Mpixel camera that is part of the M275. We augment this hardware with two 3M MPro 150 VGA (640 × 480) pocket projectors. We also describe an analytical approach to predicting the achievable resolution of the reconstructed 3D object based on differentials and small signal theory, and an experimental procedure for validating that the system under test meets the specifications for reconstructed object resolution that are predicted by our analytical model. By comparing our experimental measurements from the camera-projector system with the simulation results based on the model for this system, we conclude that our prototype system has been correctly configured and calibrated. We also conclude that with the analytical models, we have an effective means for specifying system parameters to achieve a given target resolution for the reconstructed object.
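As one example of the differential reasoning such an analysis rests on, a generic triangulation sensitivity relation (an assumption here, not the paper's exact model) links depth uncertainty to correspondence uncertainty:

```latex
\delta z \approx \frac{z^{2}}{f\,b}\,\delta d
```

where z is the object distance, f the camera focal length in pixels, b the camera-projector baseline, and δd the uncertainty in locating the projected pattern on the sensor; reconstructed-object resolution therefore degrades quadratically with distance for fixed optics.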
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-02
... specific to the Carrier Super Modular Multi-System (SMMSi) variable refrigerant flow (VRF) multi-split... in this notice to test and rate its SMMSi VRF multi-split commercial heat pumps. DATES: This Decision... its SMMSi VRF multi-split products. Carrier must use the alternate test procedure provided in this...
MISR at 15: Multiple Perspectives on Our Changing Earth
NASA Astrophysics Data System (ADS)
Diner, D. J.; Ackerman, T. P.; Braverman, A. J.; Bruegge, C. J.; Chopping, M. J.; Clothiaux, E. E.; Davies, R.; Di Girolamo, L.; Garay, M. J.; Jovanovic, V. M.; Kahn, R. A.; Kalashnikova, O.; Knyazikhin, Y.; Liu, Y.; Marchand, R.; Martonchik, J. V.; Muller, J. P.; Nolin, A. W.; Pinty, B.; Verstraete, M. M.; Wu, D. L.
2014-12-01
Launched aboard NASA's Terra satellite in December 1999, the Multi-angle Imaging SpectroRadiometer (MISR) instrument has opened new vistas in remote sensing of our home planet. Its 9 pushbroom cameras provide as many view angles ranging from 70 degrees forward to 70 degrees backward along Terra's flight track, in four visible and near-infrared spectral bands. MISR's well-calibrated, accurately co-registered, and moderately high spatial resolution radiance images have been coupled with novel data processing algorithms to mine the information content of angular reflectance anisotropy and multi-camera stereophotogrammetry, enabling new perspectives on the 3-D structure and dynamics of Earth's atmosphere and surface in support of climate and environmental research. Beginning with "first light" in February 2000, the nearly 15-year (and counting) MISR observational record provides an unprecedented data set with applications to multiple disciplines, documenting regional, global, short-term, and long-term changes in aerosol optical depths, aerosol type, near-surface particulate pollution, spectral top-of-atmosphere and surface albedos, aerosol plume-top and cloud-top heights, height-resolved cloud fractions, atmospheric motion vectors, and the structure of vegetated and ice-covered terrains. Recent computational advances include aerosol retrievals at finer spatial resolution than previously possible, and production of near-real time tropospheric winds with a latency of less than 3 hours, making possible for the first time the assimilation of MISR data into weather forecast models. In addition, recent algorithmic and technological developments provide the means of using and acquiring multi-angular data in new ways, such as the application of optical tomography to map 3-D atmospheric structure; building smaller multi-angle instruments in the future; and extending the multi-angular imaging methodology to the ultraviolet, shortwave infrared, and polarimetric realms. Such advances promise further enhancements to the observational power of the remote sensing approaches that MISR has pioneered.
NASA Astrophysics Data System (ADS)
Chung, C.; Nagol, J. R.; Tao, X.; Anand, A.; Dempewolf, J.
2015-12-01
Increasing agricultural production while at the same time preserving the environment has become a challenging task. There is a need for new approaches to the use of multi-scale and multi-source remote sensing data, as well as ground-based measurements, for mapping and monitoring crop and ecosystem state to support decision making by governmental and non-governmental organizations for sustainable agricultural development. High-resolution sub-meter imagery plays an important role in such an integrative framework of landscape monitoring. It helps link the ground-based data to more easily available coarser resolution data, facilitating calibration and validation of derived remote sensing products. Here we present a hierarchical Object Based Image Analysis (OBIA) approach to classify sub-meter imagery. The primary reason for choosing OBIA is to accommodate pixel sizes smaller than the object or class of interest. Especially in the non-homogeneous savannah regions of Tanzania, this is an important concern, and the traditional pixel-based spectral signature approach often fails. Ortho-rectified, calibrated, pan-sharpened 0.5 meter resolution data acquired from DigitalGlobe's WorldView-2 satellite sensor were used for this purpose. Multi-scale hierarchical segmentation was performed using a multi-resolution segmentation approach to facilitate the use of texture, neighborhood context, and the relationship between super- and sub-objects for training and classification. eCognition, a commonly used OBIA software program, was used for this purpose. Both decision tree and random forest approaches to classification were tested. The Kappa index of agreement for both algorithms surpassed 85%. The results demonstrate that hierarchical OBIA can effectively and accurately discriminate classes even at the LCCS-3 legend level.
Photo-elastic stress analysis of initial alignment archwires.
Badran, Serene A; Orr, John F; Stevenson, Mike; Burden, Donald J
2003-04-01
Photo-elastic models replicating a lower arch with a moderate degree of lower incisor crowding and a palatally displaced maxillary canine were used to evaluate the stresses transmitted to the roots of the teeth by initial alignment archwires. Six initial alignment archwires were compared, two multi-strand stainless steel wires, two non-super-elastic (stabilized martensitic form) nickel titanium wires, and two stress-induced super-elastic (austenitic active) nickel titanium wires. Three specimens of each archwire type were tested. Analysis of the photo-elastic fringe patterns, in the medium supporting the teeth, revealed that the non-super-elastic nickel titanium archwires produced the highest shear stresses (P = 0.001). However, the shear stresses generated by the super-elastic alignment archwires and the multi-strand stainless steel archwires were very similar (P = 1.00). These results show that even in situations where large deflections of initial alignment archwires are required, super-elastic archwires do not appear to have any marked advantage over multi-strand stainless steel alignment archwires in terms of the stresses transferred to the roots of the teeth.
Image quality assessment for selfies with and without super resolution
NASA Astrophysics Data System (ADS)
Kubota, Aya; Gohshi, Seiichi
2018-04-01
With the advent of cellphone cameras, in particular on smartphones, many people now take photos of themselves, alone or with others in the frame; such photos are popularly known as "selfies". Most smartphones are equipped with two cameras: a front-facing and a rear camera. The camera located on the back of the smartphone is referred to as the "out-camera," whereas the one located on the front is called the "in-camera." In-cameras are mainly used for selfies. Some smartphones feature high-resolution cameras. However, the original image quality cannot be obtained because smartphone cameras often have low-performance lenses. Super resolution (SR) is one of the recent technological advancements that has increased image resolution. We developed a new SR technology that can be processed on smartphones. Smartphones with the new SR technology are currently available in the market and have already registered sales. However, the effective use of the new SR technology has not yet been verified. Comparing the image quality with and without SR on the smartphone display is necessary to confirm the usefulness of this new technology. Methods based on objective and subjective assessment are required to quantitatively measure image quality. It is known that typical objective assessment values, such as the Peak Signal-to-Noise Ratio (PSNR), do not always agree with how we perceive image and video quality. When digital broadcasting started, the standard was determined using subjective assessment. Although subjective assessment usually comes at high cost because of personnel expenses for observers, the results are highly reproducible when the tests are conducted under proper conditions and analyzed statistically. In this study, the subjective assessment results for selfie images are reported.
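For reference, the objective metric mentioned above is straightforward to compute; a minimal PSNR implementation (peak value assumed to be 255 for 8-bit images) is:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference image and a
    test image (e.g., a selfie with and without super resolution)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```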
2015-08-18
techniques of measuring energy loss due to envelope inefficiencies from the built environment. A multi-sensor hardware device attached to the roof of a...at this installation, recommends specific energy conservation measures (ECMs), and quantifies significant potential return on investment. ERDC/CERL...to several thousand square feet, total building square feet was used as a metric to measure the cost effectiveness of handheld versus mobile
NASA Technical Reports Server (NTRS)
1998-01-01
Under a Jet Propulsion Laboratory SBIR (Small Business Innovative Research) contract, Cambridge Research and Instrumentation, Inc. developed a new class of filters for the construction of small, low-cost multispectral imagers. The VariSpec liquid crystal filter enables users to obtain multi-spectral, ultra-high resolution images using a monochrome CCD (charge-coupled device) camera. Application areas include biomedical imaging, remote sensing, and machine vision.
An intelligent space for mobile robot localization using a multi-camera system.
Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel
2014-08-15
This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.
NASA Astrophysics Data System (ADS)
Strömberg, Tomas; Saager, Rolf B.; Kennedy, Gordon T.; Fredriksson, Ingemar; Salerud, Göran; Durkin, Anthony J.; Larsson, Marcus
2018-02-01
Spatial frequency domain imaging (SFDI) utilizes a digital light processing (DLP) projector for illuminating turbid media with sinusoidal patterns. The tissue absorption coefficient (μa) and reduced scattering coefficient (μs′) are calculated by analyzing the modulation transfer function for at least two spatial frequencies. We evaluated different illumination strategies with red, green and blue light emitting diodes (LEDs) in the DLP, while imaging with a filter-mosaic camera, XiSpec, with 16 different multi-wavelength-sensitive pixels in the 470-630 nm wavelength range. Data were compared to SFDI by a multispectral camera setup (MSI) consisting of four cameras with bandpass filters centered at 475, 560, 580 and 650 nm. A pointwise system for comprehensive microcirculation analysis (EPOS) was used for comparison. A 5-min arterial occlusion and release protocol on the forearm of a Caucasian male with fair skin was analyzed by fitting the absorption spectra of the chromophores HbO2, Hb and melanin to the estimated μa. The tissue fractions of red blood cells (fRBC) and melanin (fmel) and the Hb oxygenation (SO2) were calculated at baseline, at the end of occlusion, early after release and late after release. EPOS results showed a decrease in SO2 during the occlusion and hyperemia after release (SO2 = 40%, 5%, 80% and 51%). The fRBC showed an increase during the occlusion and release phases. The best MSI resemblance to the EPOS was for green LED illumination (SO2 = 53%, 9%, 82%, 65%). Several illumination and analysis strategies using the XiSpec gave unphysiological results (e.g. negative SO2). XiSpec with green LED illumination gave the expected change in fRBC, while the dynamics in SO2 were smaller than those for the EPOS. These results may be explained by the calculation of modulation using an illumination and detector setup with a broad spectral transmission bandwidth, with considerable variation in μa of the included chromophores. Approaches for either reducing the effective bandwidth of the XiSpec filters or including their characteristics in a light transport model for SFDI modulation are proposed.
Evaluating RGB photogrammetry and multi-temporal digital surface models for detecting soil erosion
NASA Astrophysics Data System (ADS)
Anders, Niels; Keesstra, Saskia; Seeger, Manuel
2013-04-01
Photogrammetry is a widely used tool for generating high-resolution digital surface models. Unmanned Aerial Vehicles (UAVs) equipped with a Red Green Blue (RGB) camera have great potential for quickly acquiring multi-temporal high-resolution orthophotos and surface models. Such datasets would ease the monitoring of geomorphological processes, such as local soil erosion and rill formation after heavy rainfall events. In this study we test a photogrammetric setup to determine data requirements for soil erosion studies with UAVs. We used a rainfall simulator (5 m2) with a rig mounted above it carrying a Panasonic GX1 16-megapixel digital camera with a 20 mm lens. The soil material in the simulator consisted of loamy sand at an angle of 5 degrees. Stereo-pair images were taken before and after rainfall simulation with 75-85% overlap. Acquired images were automatically mosaicked to create high-resolution orthorectified images and digital surface models (DSMs). We resampled the DSM to different spatial resolutions to analyze the effect of cell size on the accuracy of measured rill depth and soil loss estimations, and determined an optimal cell size (and thus flight altitude). Furthermore, the high spatial accuracy of the acquired surface models allows further analysis of rill formation and channel initiation related to, e.g., surface roughness. We suggest implementing near-infrared and temperature sensors to combine soil moisture and soil physical properties with surface morphology in future investigations.
Multi-Angle View of the Canary Islands
NASA Technical Reports Server (NTRS)
2000-01-01
A multi-angle view of the Canary Islands in a dust storm, 29 February 2000. At left is a true-color image taken by the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite. This image was captured by the MISR camera looking at a 70.5-degree angle to the surface, ahead of the spacecraft. The middle image was taken by the MISR downward-looking (nadir) camera, and the right image is from the aftward 70.5-degree camera. The images are reproduced using the same radiometric scale, so variations in brightness, color, and contrast represent true variations in surface and atmospheric reflectance with angle. Windblown dust from the Sahara Desert is apparent in all three images, and is much brighter in the oblique views. This illustrates how MISR's oblique imaging capability makes the instrument a sensitive detector of dust and other particles in the atmosphere. Data for all channels are presented in a Space Oblique Mercator map projection to facilitate their co-registration. The images are about 400 km (250 miles)wide, with a spatial resolution of about 1.1 kilometers (1,200 yards). North is toward the top. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu Feipeng; Shi Hongjian; Bai Pengxiang
In fringe projection, the CCD camera and the projector are often placed at equal height. In this paper, we study the calibration of an unequal arrangement of the CCD camera and the projector. The principle of fringe projection with two-dimensional digital image correlation to acquire the profile of an object surface is described in detail. By formula derivation and experiment, a linear relationship between the out-of-plane calibration coefficient and the y coordinate is clearly found. To acquire the three-dimensional (3D) information of an object correctly, this paper presents an effective calibration method based on linear least-squares fitting, which is very simple in principle and in calibration. Experiments are implemented to validate the validity and reliability of the calibration method.
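The calibration described reduces to a one-dimensional linear least-squares fit of the out-of-plane coefficient against the image y coordinate. A minimal sketch follows; the sample values are made up for illustration.

```python
import numpy as np

# Hypothetical calibration data: out-of-plane coefficient measured at
# several y coordinates on the reference plane (values are invented).
y = np.array([100.0, 300.0, 500.0, 700.0, 900.0])   # pixel row
c = np.array([0.412, 0.398, 0.385, 0.371, 0.357])   # coefficient

# Linear least-squares fit c(y) = a*y + b, matching the linear
# relationship reported above; np.polyfit solves the normal equations.
a, b = np.polyfit(y, c, 1)
calib = lambda yy: a * yy + b   # coefficient at any image row
```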
Super-resolved Parallel MRI by Spatiotemporal Encoding
Schmidt, Rita; Baishya, Bikash; Ben-Eliezer, Noam; Seginer, Amir; Frydman, Lucio
2016-01-01
Recent studies described an alternative “ultrafast” scanning method based on spatiotemporal (SPEN) principles. SPEN demonstrates numerous potential advantages over EPI-based alternatives, at no additional expense in experimental complexity. An important aspect that SPEN still needs to achieve for providing a competitive acquisition alternative entails exploiting parallel imaging algorithms, without compromising its proven capabilities. The present work introduces a combination of multi-band frequency-swept pulses simultaneously encoding multiple, partial fields-of-view; together with a new algorithm merging a Super-Resolved SPEN image reconstruction and SENSE multiple-receiving methods. The ensuing approach enables one to reduce both the excitation and acquisition times of ultrafast SPEN acquisitions by the customary acceleration factor R, without compromises in either the ensuing spatial resolution, SAR deposition, or the capability to operate in multi-slice mode. The performance of these new single-shot imaging sequences and their ancillary algorithms were explored on phantoms and human volunteers at 3T. The gains of the parallelized approach were particularly evident when dealing with heterogeneous systems subject to major T2/T2* effects, as is the case upon single-scan imaging near tissue/air interfaces. PMID:24120293
Multiframe super resolution reconstruction method based on light field angular images
NASA Astrophysics Data System (ADS)
Zhou, Shubo; Yuan, Yan; Su, Lijuan; Ding, Xiaomin; Wang, Jichao
2017-12-01
The plenoptic camera can directly obtain 4-dimensional light field information from a 2-dimensional sensor. However, based on the sampling theorem, the spatial resolution is greatly limited by the microlenses. In this paper, we present a method of reconstructing high-resolution images from the angular images. First, the ray tracing method is used to model the telecentric-based light field imaging process. Then, we analyze the subpixel shifts between the angular images extracted from the defocused light field data and the blur in the angular images. According to the analysis above, we construct the observation model from the ideal high-resolution image to the angular images. Applying the regularized super resolution method, we can obtain the super resolution result with a magnification ratio of 8. The results demonstrate the effectiveness of the proposed observation model.
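A standard way to write such an observation model from the ideal high-resolution image to the k-th observed angular image, using the conventional symbols of the multiframe SR literature rather than the paper's exact notation, is:

```latex
\mathbf{y}_k = \mathbf{D}\,\mathbf{B}\,\mathbf{W}_k\,\mathbf{x} + \mathbf{n}_k,
\qquad k = 1,\dots,K
```

where x is the vectorized HR image, W_k the warp encoding the k-th angular image's subpixel shift, B the blur, D the downsampling, and n_k noise; the regularized SR estimate inverts this forward model.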
User-assisted visual search and tracking across distributed multi-camera networks
NASA Astrophysics Data System (ADS)
Raja, Yogesh; Gong, Shaogang; Xiang, Tao
2011-11-01
Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cuddy-Walsh, SG; University of Ottawa Heart Institute; Wells, RG
2014-08-15
Myocardial perfusion imaging (MPI) with Single Photon Emission Computed Tomography (SPECT) is invaluable in the diagnosis and management of heart disease. It provides essential information on myocardial blood flow and ischemia. Multi-pinhole dedicated cardiac-SPECT cameras offer improved count sensitivity and improved spatial and energy resolutions over parallel-hole camera designs; however, variable sensitivity across the field-of-view (FOV) can lead to position-dependent noise variations. Since MPI evaluates differences in the signal-to-noise ratio, noise variations in the camera could significantly impact the sensitivity of the test for ischemia. We evaluated the noise characteristics of GE Healthcare's Discovery NM530c camera with a goal of optimizing the accuracy of our patient assessment and thereby improving outcomes. Theoretical sensitivity maps of the camera FOV, including attenuation effects, were estimated analytically based on the distance and angle between the spatial position of a given voxel and each pinhole. The standard deviation in counts, σ, was inferred for each voxel position from the square root of the sensitivity mapped at that position. Noise was measured experimentally from repeated (N=16) acquisitions of a uniform spherical Tc-99m-water phantom. The mean (μ) and standard deviation (σ) were calculated for each voxel position in the reconstructed FOV. Noise increased ∼2.1× across a 12 cm sphere. A correlation of 0.53 is seen when experimental noise is compared with theory, suggesting that ∼53% of the noise is attributed to the combined effects of attenuation and the multi-pinhole geometry. Further investigations are warranted to determine the clinical impact of the position-dependent noise variation.
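The inference of σ from the sensitivity map rests on Poisson counting statistics: the variance of detected counts equals their mean, and the mean scales with the local sensitivity S(r), so (a restatement of the reasoning above, not an additional result)

```latex
\sigma(\mathbf{r}) = \sqrt{\mu(\mathbf{r})} \propto \sqrt{S(\mathbf{r})}
```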
Software defined multi-spectral imaging for Arctic sensor networks
NASA Astrophysics Data System (ADS)
Siewert, Sam; Angoth, Vivek; Krishnamurthy, Ramnarayan; Mani, Karthikeyan; Mock, Kenrick; Singh, Surjith B.; Srivistava, Saurav; Wagner, Chris; Claus, Ryan; Vis, Matthew Demi
2016-05-01
Availability of off-the-shelf infrared sensors combined with high definition visible cameras has made possible the construction of a Software Defined Multi-Spectral Imager (SDMSI) combining long-wave, near-infrared and visible imaging. The SDMSI requires a real-time embedded processor to fuse images and to create real-time depth maps for opportunistic uplink in sensor networks. Researchers at Embry Riddle Aeronautical University working with University of Alaska Anchorage at the Arctic Domain Awareness Center and the University of Colorado Boulder have built several versions of a low-cost drop-in-place SDMSI to test alternatives for power efficient image fusion. The SDMSI is intended for use in field applications including marine security, search and rescue operations and environmental surveys in the Arctic region. Based on Arctic marine sensor network mission goals, the team has designed the SDMSI to include features to rank images based on saliency and to provide on camera fusion and depth mapping. A major challenge has been the design of the camera computing system to operate within a 10 to 20 Watt power budget. This paper presents a power analysis of three options: 1) multi-core, 2) field programmable gate array with multi-core, and 3) graphics processing units with multi-core. For each test, power consumed for common fusion workloads has been measured at a range of frame rates and resolutions. Detailed analyses from our power efficiency comparison for workloads specific to stereo depth mapping and sensor fusion are summarized. Preliminary mission feasibility results from testing with off-the-shelf long-wave infrared and visible cameras in Alaska and Arizona are also summarized to demonstrate the value of the SDMSI for applications such as ice tracking, ocean color, soil moisture, animal and marine vessel detection and tracking. The goal is to select the most power efficient solution for the SDMSI for use on UAVs (Unoccupied Aerial Vehicles) and other drop-in-place installations in the Arctic. The prototype selected will be field tested in Alaska in the summer of 2016.
Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System
Lu, Yu; Wang, Keyi; Fan, Gongshu
2016-01-01
A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the focusing effect on the uniform light from an integrating sphere. The linearity range of the radiometric response, the non-linearity response characteristics, the sensitivity, and the dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images, so that panoramas reflect the objective luminance more faithfully. This overcomes the limitation of stitching methods that achieve realistic results only through smoothing. The dynamic range limitation can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens. The dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857
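Conceptually, recovering the actual luminance from each module combines the three calibrations reported above: dark-current subtraction, inversion of the radiometric response, and flat-field compensation of the vignetting. The sketch below is an assumed pipeline; `response_inv` and `vignette_gain` stand in for calibration products measured as described, not for published data.

```python
import numpy as np

def correct_image(raw, dark, response_inv, vignette_gain):
    """Recover relative scene luminance from a raw frame of one camera
    module. 'dark' is the dark-current frame, 'response_inv' a callable
    inverting the calibrated radiometric response, and 'vignette_gain'
    a per-pixel flat-field gain map."""
    linear = response_inv(raw.astype(np.float64) - dark)  # counts -> luminance
    return linear * vignette_gain                         # flatten vignetting
```

Blending frames corrected this way lets the panorama reflect object luminance directly, rather than relying on seam smoothing alone.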
Fisheye Multi-Camera System Calibration for Surveying Narrow and Complex Architectures
NASA Astrophysics Data System (ADS)
Perfetti, L.; Polari, C.; Fassi, F.
2018-05-01
Narrow spaces and passages are not a rare encounter in cultural heritage; the shape and extension of these areas pose a serious challenge to any technique one may choose to survey their 3D geometry, especially techniques that rely on stationary instrumentation such as terrestrial laser scanning. The ratio between spatial extension and cross-section width of many corridors and staircases can easily lead to distortion/drift of the 3D reconstruction because of the propagation of uncertainty. This paper investigates the use of fisheye photogrammetry to produce the 3D reconstruction of such spaces and presents some tests to constrain the degrees of freedom of the photogrammetric network, thereby containing the drift of long data sets as well. The idea is to employ a multi-camera system composed of several fisheye cameras and to implement distance and relative orientation constraints, as well as pre-calibration of the internal parameters of each camera, within the bundle adjustment. For the beginning of this investigation, we used the NCTech iSTAR panoramic camera as a rigid multi-camera system. The case study of the Amedeo Spire of the Milan Cathedral, which encloses a spiral staircase, is the stage for all the tests. Comparisons have been made between the results obtained with the multi-camera configuration, the auto-stitched equirectangular images, and a data set obtained with a monocular fisheye configuration using a full-frame DSLR. Results show improved accuracy, down to millimetres, using a rigidly constrained multi-camera system.
Simultaneous digital super-resolution and nonuniformity correction for infrared imaging systems.
Meza, Pablo; Machuca, Guillermo; Torres, Sergio; Martin, Cesar San; Vera, Esteban
2015-07-20
In this article, we present a novel algorithm to achieve simultaneous digital super-resolution and nonuniformity correction from a sequence of infrared images. We propose to use spatial regularization terms that exploit nonlocal means and the absence of spatial correlation between the scene and the nonuniformity noise sources. We derive an iterative optimization algorithm based on a gradient descent minimization strategy. Results from infrared image sequences corrupted with simulated and real fixed-pattern noise show a competitive performance compared with state-of-the-art methods. A qualitative analysis on the experimental results obtained with images from a variety of infrared cameras indicates that the proposed method provides super-resolution images with significantly less fixed-pattern noise.
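In generic form, such an approach minimizes a data-fidelity term plus spatial regularizers by gradient descent; the specific nonlocal-means and scene/noise decorrelation terms of the article are abstracted into R(x) here:

```latex
\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \;
\|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2^2 + \lambda\, R(\mathbf{x}),
\qquad
\mathbf{x}^{(t+1)} = \mathbf{x}^{(t)} - \eta\, \nabla J\!\big(\mathbf{x}^{(t)}\big)
```

where A is the forward (degradation) operator, J the full objective, and η the step size of the descent iteration.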
Ghost detection and removal based on super-pixel grouping in exposure fusion
NASA Astrophysics Data System (ADS)
Jiang, Shenyu; Xu, Zhihai; Li, Qi; Chen, Yueting; Feng, Huajun
2014-09-01
A novel multi-exposure image fusion method for dynamic scenes is proposed. The commonly used techniques for high dynamic range (HDR) imaging are based on the combination of multiple differently exposed images of the same scene. The drawback of these methods is that ghosting artifacts will be introduced into the final HDR image if the scene is not static. In this paper, a super-pixel grouping based method is proposed to detect ghosts in the image sequences. We introduce the zero mean normalized cross correlation (ZNCC) as a measure of similarity between a given exposure image and the reference. The calculation of ZNCC is implemented at the super-pixel level, and super-pixels that have low correlation with the reference are excluded by adjusting the weight maps for fusion. Without any prior information on the camera response function or exposure settings, the proposed method generates low dynamic range (LDR) images which can be shown on conventional display devices directly, with details preserved and ghost effects reduced. Experimental results show that the proposed method generates high quality images which have fewer ghost artifacts and provide better visual quality than previous approaches.
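The similarity measure at the heart of the detection step is easy to state in code. The sketch below computes ZNCC between the pixel values of corresponding super-pixels in a given exposure and the reference; the super-pixel segmentation itself is assumed to have been done beforehand.

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross correlation between two super-pixels,
    given as flat arrays of the pixel values they contain. Returns a
    value in [-1, 1]; low values flag likely ghost regions."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0
```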
Resolution enhancement of tri-stereo remote sensing images by super resolution methods
NASA Astrophysics Data System (ADS)
Tuna, Caglayan; Akoguz, Alper; Unal, Gozde; Sertel, Elif
2016-10-01
Super resolution (SR) refers to the generation of a High Resolution (HR) image from a decimated, blurred, low-resolution (LR) image set, which can be either a single frame or a multi-frame set containing several images acquired from slightly different views of the same observation area. In this study, we propose a novel application of tri-stereo Remote Sensing (RS) satellite images to the super resolution problem. Since the tri-stereo RS images of the same observation area are acquired from three different viewing angles along the flight path of the satellite, these RS images are well suited to an SR application. We first estimate the registration between the chosen reference LR image and the other LR images to calculate the sub-pixel shifts among the LR images. Then, the warping, blurring and downsampling matrix operators are created as sparse matrices to avoid high memory and computational requirements, which would otherwise make the RS-SR solution impractical. Finally, the overall system matrix, constructed from the obtained operator matrices, is used to obtain the estimated HR image in one step in each iteration of the SR algorithm. Both Laplacian and total variation regularizers are incorporated separately into our algorithm, and the results are presented to demonstrate improved quantitative performance against the standard interpolation method as well as improved qualitative results according to expert evaluations.
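As an illustration of the sparse operator construction, the following builds a decimation matrix D under a simple point-sampling model (an assumption for the sketch); the warp W and blur B matrices are assembled analogously and multiplied into one overall system matrix.

```python
import numpy as np
from scipy import sparse

def downsample_operator(hr_shape, factor):
    """Sparse decimation matrix D mapping a vectorized HR image to a LR
    image by sampling every 'factor'-th pixel in each direction."""
    H, W = hr_shape
    h, w = H // factor, W // factor
    rows = np.arange(h * w)
    # index of the sampled HR pixel for each LR pixel
    yy, xx = np.divmod(rows, w)
    cols = (yy * factor) * W + (xx * factor)
    data = np.ones(h * w)
    return sparse.csr_matrix((data, (rows, cols)), shape=(h * w, H * W))
```

Because D, B and each W_k are sparse, the overall system matrix stays tractable even for full scene sizes, which is the memory argument made above.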
A small field of view camera for hybrid gamma and optical imaging
NASA Astrophysics Data System (ADS)
Lees, J. E.; Bugby, S. L.; Bhatia, B. S.; Jambi, L. K.; Alqahtani, M. S.; McKnight, W. R.; Ng, A. H.; Perkins, A. C.
2014-12-01
The development of compact low profile gamma-ray detectors has allowed the production of small field of view, hand held imaging devices for use at the patient bedside and in operating theatres. The combination of an optical and a gamma camera, in a co-aligned configuration, offers high spatial resolution multi-modal imaging giving a superimposed scintigraphic and optical image. This innovative introduction of hybrid imaging offers new possibilities for assisting surgeons in localising the site of uptake in procedures such as sentinel node detection. Recent improvements to the camera system along with results of phantom and clinical imaging are reported.
Barnacle Bill in Super Resolution from Super Panorama
1998-07-03
"Barnacle Bill" is a small rock immediately west-northwest of the Mars Pathfinder lander and was the first rock visited by the Sojourner Rover's alpha proton X-ray spectrometer (APXS) instrument. This image shows super resolution techniques applied to the first APXS target rock, which was never imaged with the rover's forward cameras. Super resolution was applied to help to address questions about the texture of this rock and what it might tell us about its mode of origin. This view of Barnacle Bill was produced by combining the "Super Panorama" frames from the IMP camera. Super resolution was applied to help to address questions about the texture of these rocks and what it might tell us about their mode of origin. The composite color frames that make up this anaglyph were produced for both the right and left eye of the IMP. The composites consist of 7 frames in the right eye and 8 frames in the left eye, taken with different color filters that were enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame that is sharper than an individual frame would be. These panchromatic frames were then colorized with the red, green, and blue filtered images from the same sequence. The color balance was adjusted to approximate the true color of Mars. The anaglyph view was produced by combining the left with the right eye color composite frames by assigning the left eye composite view to the red color plane and the right eye composite view to the green and blue color planes (cyan), to produce a stereo anaglyph mosaic. This mosaic can be viewed in 3-D on your computer monitor or in color print form by wearing red-blue 3-D glasses. http://photojournal.jpl.nasa.gov/catalog/PIA01409
The Beagle 2 Stereo Camera System: Scientific Objectives and Design Characteristics
NASA Astrophysics Data System (ADS)
Griffiths, A.; Coates, A.; Josset, J.; Paar, G.; Sims, M.
2003-04-01
The Stereo Camera System (SCS) will provide wide-angle (48 degree) multi-spectral stereo imaging of the Beagle 2 landing site in Isidis Planitia with an angular resolution of 0.75 milliradians. Based on the SpaceX Modular Micro-Imager, the SCS is composed of twin cameras (each with a 1024 by 1024 pixel frame-transfer CCD) and twin filter wheel units (with a combined total of 24 filters). The primary mission objective is to construct a digital elevation model of the area within reach of the lander's robot arm. The SCS specifications and the following baseline studies are described: panoramic RGB colour imaging of the landing site and panoramic multi-spectral imaging at 12 distinct wavelengths to study the mineralogy of the landing site; solar observations to measure water vapour absorption and the atmospheric dust optical density. Also envisaged are multi-spectral observations of Phobos & Deimos (observations of the moons relative to background stars will be used to determine the lander's location and orientation relative to the Martian surface), monitoring of the landing site to detect temporal changes, observation of the actions and effects of the other PAW experiments (including rock texture studies with a close-up lens) and collaborative observations with the Mars Express orbiter instrument teams. Due to be launched in May of this year, the total system mass is 360 g, the required volume envelope is 747 cm^3 and the average power consumption is 1.8 W. A 10 Mbit/s RS422 bus connects each camera to the lander common electronics.
A state space based approach to localizing single molecules from multi-emitter images.
Vahid, Milad R; Chao, Jerry; Ward, E Sally; Ober, Raimund J
2017-01-28
Single molecule super-resolution microscopy is a powerful tool that enables imaging at sub-diffraction-limit resolution. In this technique, subsets of stochastically photoactivated fluorophores are imaged over a sequence of frames and accurately localized, and the estimated locations are used to construct a high-resolution image of the cellular structures labeled by the fluorophores. Available localization methods typically first determine the regions of the image that contain emitting fluorophores through a process referred to as detection. Then, the locations of the fluorophores are estimated accurately in an estimation step. We propose a novel localization method which combines the detection and estimation steps. The method models the given image as the frequency response of a multi-order system obtained with a balanced state space realization algorithm based on the singular value decomposition of a Hankel matrix, and determines the locations of intensity peaks in the image as the pole locations of the resulting system. The locations of the most significant peaks correspond to the locations of single molecules in the original image. Although the accuracy of the location estimates is reasonably good, we demonstrate that, by using the estimates as the initial conditions for a maximum likelihood estimator, refined estimates can be obtained that have a standard deviation close to the Cramér-Rao lower bound-based limit of accuracy. We validate our method using both simulated and experimental multi-emitter images.
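An illustrative 1-D analogue of the state-space idea, an assumption rather than the paper's 2-D algorithm: a sampled profile modelled as a sum of exponential modes has its poles recovered from the SVD of a Hankel matrix via the shift-invariance of the signal subspace.

```python
import numpy as np

def hankel_poles(s, order):
    """Estimate the poles of a sum-of-exponentials signal from its samples."""
    m = len(s) // 2
    H = np.array([s[i:i + m] for i in range(m)])   # Hankel matrix, H[i, j] = s[i + j]
    U, _, _ = np.linalg.svd(H)
    Us = U[:, :order]                              # dominant (signal) subspace
    Phi = np.linalg.pinv(Us[:-1]) @ Us[1:]         # shift-invariance relation
    return np.linalg.eigvals(Phi)                  # eigenvalues = system poles

n = np.arange(64)
s = 2.0 * 0.90**n + 1.0 * 0.70**n                  # two modes, stand-in "emitters"
print(np.sort(hankel_poles(s, 2).real))            # approximately [0.70, 0.90]
```

The most significant poles play the role that peak locations play in the paper; in practice these rough estimates would seed a maximum likelihood refinement.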
NASA Astrophysics Data System (ADS)
Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans
2018-04-01
Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.
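A software analogue of the cell-based L1 normalization, offered as an assumption about the operation the circuit accelerates (the paper implements it in hardware): normalizing each image cell by its L1 norm suppresses global illumination and contrast variation in gradient-based descriptors such as HOG.

```python
import numpy as np

def l1_normalize_cells(grad_mag, cell=8, eps=1e-6):
    """Divide each cell of a gradient-magnitude map by its L1 norm."""
    h, w = grad_mag.shape
    out = grad_mag.copy()
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            block = out[y:y + cell, x:x + cell]
            block /= np.abs(block).sum() + eps   # per-cell L1 normalization
    return out

img_grad = np.abs(np.random.randn(64, 64))       # stand-in gradient magnitudes
feat = l1_normalize_cells(img_grad, cell=8)
```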
A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration
Rau, Jiann-Yeou; Yeh, Po-Chia
2012-01-01
The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantity of antiques stored in museums. PMID:23112656
Development of electronic cinema projectors
NASA Astrophysics Data System (ADS)
Glenn, William E.
2001-03-01
All of the components for the electronic cinema are now commercially available. Sony has a high-definition, progressively scanned, 24-frame-per-second electronic cinema camera. Its output can be recorded digitally on tape or on hard drives in RAID recorders. Much of the post-production processing is now done digitally by scanning film, processing it digitally, and recording it back onto film for release. Fiber links and satellites can transmit cinema program material to theaters in real time. RAID or tape recorders can play programs for viewing at a much lower cost than storage on film. Two companies now have electronic cinema projectors on the market. Of all of the components, the electronic cinema projector is the most challenging. Achieving the resolution, light output, contrast ratio, and color rendition all at the same time without visible artifacts is a difficult task. Film itself is, of course, a form of light-valve. However, electronically modulated light uses techniques other than changes in density to control the light. The optical techniques that have been the basis for many electronic light-valves have been under development for over 100 years. Many of these techniques are based on optical diffraction to modulate the light. This paper will trace the history of these techniques and show how they may be extended to produce electronic cinema projectors in the future.
Fast high resolution reconstruction in multi-slice and multi-view cMRI
NASA Astrophysics Data System (ADS)
Velasco Toledo, Nelson; Romero Castro, Eduardo
2015-01-01
Cardiac magnetic resonance imaging (cMRI) is a useful tool in diagnosis, prognosis and research since it functionally tracks the heart structure. Although useful, this imaging technique is limited in spatial resolution because the heart is constantly moving; other uncontrolled conditions, such as patient movements and volumetric changes during apnea periods while data are acquired, further limit the time available to capture high-quality information. This paper presents a very fast and simple strategy to reconstruct high-resolution 3D images from a set of low-resolution series of 2D images. The strategy is based on an information-reallocation algorithm which uses the DICOM header to relocate voxel intensities in a regular grid. An interpolation method is applied to fill empty places with estimated data; the interpolation resamples the low-resolution information to estimate the missing information. As a final step, a Gaussian filter denoises the result. The reconstructed image is evaluated against a super-resolution reconstructed reference image. The evaluation reveals that the method maintains the general heart structure with a small loss of detailed information (edge sharpening and blurring); some artifacts related to input information quality are detected. The proposed method requires little time and few computational resources.
Super resolution PLIF demonstrated in turbulent jet flows seeded with I2
NASA Astrophysics Data System (ADS)
Xu, Wenjiang; Liu, Ning; Ma, Lin
2018-05-01
Planar laser induced fluorescence (PLIF) represents an indispensable tool for flow and flame imaging. However, the PLIF technique suffers from limited spatial resolution or blurring in many situations, which restricts its applicability and capability. This work describes a new method, named SR-PLIF (super-resolution PLIF), to overcome these limitations and enhance the capability of PLIF. The method uses PLIF images captured simultaneously from two (or more) orientations to reconstruct a final PLIF image with resolution enhanced or blurring removed. This paper reports the development of the reconstruction algorithm, and the experimental demonstration of the SR-PLIF method both with controlled samples and with turbulent flows seeded with iodine vapor. Using controlled samples with two cameras, the spatial resolution in the best case was improved from 0.06 mm in the projections to 0.03 mm in the SR image, in terms of the spreading width of a sharp edge. With turbulent flows, an image sharpness measure was developed to quantify the spatial resolution, and SR reconstruction with two cameras can effectively improve the spatial resolution compared to the projections in terms of the sharpness measure.
NASA Astrophysics Data System (ADS)
Ilisie, V.; Giménez-Alventosa, V.; Moliner, L.; Sánchez, F.; González, A. J.; Rodríguez-Álvarez, M. J.; Benlloch, J. M.
2018-07-01
Current PET detectors have a very low sensitivity, of the order of a few percent. One of the reasons is the fact that Compton interactions are rejected. If an event involves multiple Compton scattering and the total deposited energy lies within the photoelectric peak, an energy-weighted centroid is given as the output coordinates of the reconstructed interaction point. This introduces distortion in the final reconstructed image. The aim of our work is to prove that Compton events are a very rich source of additional information, as one can improve the resolution of the detector and, implicitly, the final reconstructed image. This could be a real breakthrough for PET detector technology, as one should be able to obtain better results with less patient radiation. Using the PET as a double Compton camera, Compton cone matching (i.e., Compton cones coming from the same event should be compatible) is applied to discard randoms and patient-scattered events, and also to perform a correct matching among events with multiple coincidences. In order to fully benefit experimentally from Compton events using monolithic scintillators, a multi-layer configuration and a good time-of-flight resolution are needed.
A portable low-cost 3D point cloud acquiring method based on structure light
NASA Astrophysics Data System (ADS)
Gui, Li; Zheng, Shunyi; Huang, Xia; Zhao, Like; Ma, Hao; Ge, Chao; Tang, Qiuxia
2018-03-01
A fast and low-cost method of acquiring 3D point cloud data is proposed in this paper, which addresses the lack of texture information and the low efficiency of acquiring point cloud data with only one pair of cheap cameras and a projector. Firstly, we put forward a scene-adaptive design method for a random encoding pattern: a coding pattern is projected onto the target surface to create texture information that is favorable for image matching. Subsequently, we design an efficient dense matching algorithm that fits the projected texture. After global-algorithm optimization and multi-kernel parallel development with the fusion of hardware and software, a fast point-cloud acquisition system is accomplished. The evaluation of point cloud accuracy shows that the point cloud acquired by the proposed method has higher precision. What's more, the scanning speed meets the demands of dynamic scenes and has good practical application value.
2009-05-08
CAPE CANAVERAL, Fla. – On Launch Pad 39A at NASA's Kennedy Space Center in Florida, space shuttle Atlantis' payload bay is filled with hardware for the STS-125 mission to service NASA's Hubble Space Telescope. From the bottom are the Flight Support System with the Soft Capture mechanism and Multi-Use Lightweight Equipment Carrier with the Science Instrument Command and Data Handling Unit, or SIC&DH; the Orbital Replacement Unit Carrier with the Cosmic Origins Spectrograph, or COS, and an IMAX 3D camera; and the Super Lightweight Interchangeable Carrier with the Wide Field Camera 3. Atlantis' crew will service NASA's Hubble Space Telescope for the fifth and final time. The flight will include five spacewalks during which astronauts will refurbish and upgrade the telescope with state-of-the-art science instruments. As a result, Hubble's capabilities will be expanded and its operational lifespan extended through at least 2014. Photo credit: NASA/Kim Shiflett
NASA Astrophysics Data System (ADS)
Smee, Stephen A.; Prochaska, Travis; Shectman, Stephen A.; Hammond, Randolph P.; Barkhouser, Robert H.; DePoy, D. L.; Marshall, J. L.
2012-09-01
We describe the conceptual optomechanical design for GMACS, a wide-field, multi-object, moderate-resolution optical spectrograph for the Giant Magellan Telescope (GMT). GMACS is a candidate first-light instrument for the GMT and will be one of several instruments housed in the Gregorian Instrument Rotator (GIR) located at the Gregorian focus. The instrument samples a 9 arcminute x 18 arcminute field of view providing two resolution modes (i.e., low resolution, R ~ 2000, and moderate resolution, R ~ 4000) over a 3700 Å to 10200 Å wavelength range. To minimize the size of the optics, four fold mirrors at the GMT focal plane redirect the full field into four individual "arms", each of which comprises a double spectrograph with a red and a blue channel. Hence, each arm samples a 4.5 arcminute x 9 arcminute field of view. The optical layout naturally leads to three separate optomechanical assemblies: a focal plane assembly and two identical optics modules. The focal plane assembly contains the last element of the telescope's wide-field corrector, slit-mask, tent-mirror assembly, and slit-mask magazine. Each of the two optics modules supports two of the four instrument arms and houses the aft-optics (i.e., collimators, dichroics, gratings, and cameras). A grating exchange mechanism and articulated gratings and cameras facilitate multiple resolution modes. In this paper we describe the details of the GMACS optomechanical design, including the requirements and considerations leading to the design, mechanism details, optics mounts, and predicted flexure performance.
Development of Super-Ensemble techniques for ocean analyses: the Mediterranean Sea case
NASA Astrophysics Data System (ADS)
Pistoia, Jenny; Pinardi, Nadia; Oddo, Paolo; Collins, Matthew; Korres, Gerasimos; Drillet, Yann
2017-04-01
Short-term ocean analyses for Sea Surface Temperature (SST) in the Mediterranean Sea can be improved by a statistical post-processing technique called super-ensemble. This technique consists of a multi-linear regression algorithm applied to a Multi-Physics Multi-Model Super-Ensemble (MMSE) dataset, a collection of different operational forecasting analyses together with ad-hoc simulations produced by modifying selected numerical model parameterizations. A new linear regression algorithm based on Empirical Orthogonal Function filtering techniques is capable of preventing overfitting problems, although the best performance is achieved when correlation is added to the super-ensemble structure using a simple spatial filter applied after the linear regression. Our outcomes show that super-ensemble performance depends on the selection of an unbiased operator and the length of the learning period, but the quality of the generating MMSE dataset has the largest impact on the MMSE analysis Root Mean Square Error (RMSE) evaluated with respect to observed satellite SST. The lowest RMSE analysis estimates result from the following choices: a 15-day training period, an overconfident MMSE dataset (a subset with the higher-quality ensemble members), and a least-squares algorithm filtered a posteriori.
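A minimal sketch of the core regression step, with stand-in data and illustrative sizes (the EOF filtering and a posteriori spatial filter are omitted): per-grid-point least-squares weights are learned over the training window and then applied to combine the ensemble members.

```python
import numpy as np

members, days, npix = 5, 15, 1000                # MMSE size, training length, grid size
X = np.random.randn(days, members, npix)         # ensemble member analyses (stand-in)
obs = X.mean(axis=1) + 0.1 * np.random.randn(days, npix)   # "satellite SST" (stand-in)

weights = np.empty((members, npix))
for p in range(npix):                            # multi-linear regression per grid point
    weights[:, p] = np.linalg.lstsq(X[:, :, p], obs[:, p], rcond=None)[0]

x_next = np.random.randn(members, npix)          # members at the new analysis time
sse = np.sum(weights * x_next, axis=0)           # super-ensemble estimate
```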
High resolution optical DNA mapping
NASA Astrophysics Data System (ADS)
Baday, Murat
Many types of diseases, including cancer and autism, are associated with copy-number variations in the genome. Most of these variations cannot be identified with existing sequencing and optical DNA mapping methods. We have developed a multi-color super-resolution technique, with potential for high throughput and low cost, which allows us to recognize more of these variations. Our technique achieves a 10-fold improvement in the resolution of optical DNA mapping. Using a 180 kb BAC clone as a model system, we resolved dense patterns from 108 fluorescent labels of two different colors representing two different sequence motifs. Overall, a detailed DNA map with 100 bp resolution was achieved, which has the potential to reveal detailed information about genetic variance and to facilitate medical diagnosis of genetic disease.
Super-resolution reconstruction of MR image with a novel residual learning network algorithm
NASA Astrophysics Data System (ADS)
Shi, Jun; Liu, Qingping; Wang, Chaofeng; Zhang, Qi; Ying, Shihui; Xu, Haoyu
2018-04-01
Spatial resolution is one of the key parameters of magnetic resonance imaging (MRI). The image super-resolution (SR) technique offers an alternative approach to improve the spatial resolution of MRI due to its simplicity. Convolutional neural networks (CNN)-based SR algorithms have achieved state-of-the-art performance, in which the global residual learning (GRL) strategy is now commonly used due to its effectiveness for learning image details for SR. However, the partial loss of image details usually happens in a very deep network due to the degradation problem. In this work, we propose a novel residual learning-based SR algorithm for MRI, which combines both multi-scale GRL and shallow network block-based local residual learning (LRL). The proposed LRL module works effectively in capturing high-frequency details by learning local residuals. One simulated MRI dataset and two real MRI datasets have been used to evaluate our algorithm. The experimental results show that the proposed SR algorithm achieves superior performance to all of the other compared CNN-based SR algorithms in this work.
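A structural sketch of global residual learning only, with untrained random weights and illustrative kernel sizes, written in numpy/scipy rather than a deep-learning framework: the network predicts a residual that is added to an interpolated input, so only high-frequency detail has to be learned.

```python
import numpy as np
from scipy.signal import convolve2d
from scipy.ndimage import zoom

def conv_relu(x, k):
    return np.maximum(convolve2d(x, k, mode='same'), 0.0)

rng = np.random.default_rng(0)
lr = rng.random((32, 32))                        # stand-in low-resolution slice
upscaled = zoom(lr, 2, order=3)                  # cubic interpolation baseline

h = conv_relu(upscaled, rng.standard_normal((3, 3)) * 0.1)
h = h + conv_relu(h, rng.standard_normal((3, 3)) * 0.1)   # local residual block (LRL)
residual = convolve2d(h, rng.standard_normal((3, 3)) * 0.1, mode='same')
sr = upscaled + residual                         # global residual learning (GRL)
```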
The compatibility of consumer DLP projectors with time-sequential stereoscopic 3D visualisation
NASA Astrophysics Data System (ADS)
Woods, Andrew J.; Rourke, Tegan
2007-02-01
A range of advertised "Stereo-Ready" DLP projectors are now available on the market which allow high-quality flicker-free stereoscopic 3D visualization using the time-sequential stereoscopic display method. The ability to use a single projector for stereoscopic viewing offers a range of advantages, including extremely good stereoscopic alignment and, in some cases, portability. It has also recently become known that some consumer DLP projectors can be used for time-sequential stereoscopic visualization; however, it was not well understood which projectors are compatible and incompatible, which display modes (frequency and resolution) are compatible, and which stereoscopic display quality attributes are important. We conducted a study to test a wide range of projectors for stereoscopic compatibility. This paper reports on the testing of 45 consumer DLP projectors of widely different specifications (brand, resolution, brightness, etc.). The projectors were tested for stereoscopic compatibility with various video formats (PAL, NTSC, 480P, 576P, and various VGA resolutions) and video input connections (composite, S-Video, component, and VGA). Fifteen projectors were found to work well at up to 85 Hz stereo in VGA mode. Twenty-three projectors would work at 60 Hz stereo in VGA mode.
Automatic multi-camera calibration for deployable positioning systems
NASA Astrophysics Data System (ADS)
Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan
2012-06-01
Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multi-camera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method, and that the automated calibration method can replace the manual one.
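A hedged sketch of the per-pair step using OpenCV: the essential matrix is estimated with the 5-point method inside RANSAC and decomposed into relative pose. The intrinsics `K` and the matched points are assumptions standing in for a real calibration and feature matcher.

```python
import cv2
import numpy as np

K = np.array([[1200.0, 0, 960], [0, 1200.0, 540], [0, 0, 1]])   # illustrative intrinsics
pts1 = np.random.rand(50, 2).astype(np.float32) * (1920, 1080)  # stand-in matches
pts2 = pts1 + np.float32([5.0, 0.0]) * (0.5 + np.random.rand(50, 1))  # fake disparity

# 5-point essential-matrix estimation with RANSAC, then pose recovery.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("relative rotation:\n", R, "\nbaseline direction:", t.ravel())
```

Repeating this for every camera pair and chaining the relative poses yields the extrinsic calibration of the whole rig, up to a global scale.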
Quality status display for a vibration welding process
Spicer, John Patrick; Abell, Jeffrey A.; Wincek, Michael Anthony; Chakraborty, Debejyo; Bracey, Jennifer; Wang, Hui; Tavora, Peter W.; Davis, Jeffrey S.; Hutchinson, Daniel C.; Reardon, Ronald L.; Utz, Shawn
2017-03-28
A system includes a host machine and a status projector. The host machine is in electrical communication with a collection of sensors and with a welding controller that generates control signals for controlling the welding horn. The host machine is configured to execute a method to thereby process the sensory and control signals, as well as predict a quality status of a weld that is formed using the welding horn, including identifying any suspect welds. The host machine then activates the status projector to illuminate the suspect welds. This may occur directly on the welds using a laser projector, or on a surface of the work piece in proximity to the welds. The system and method may be used in the ultrasonic welding of battery tabs of a multi-cell battery pack in a particular embodiment. The welding horn and welding controller may also be part of the system.
A subjective evaluation of high-chroma color with wide color-gamut display
NASA Astrophysics Data System (ADS)
Kishimoto, Junko; Yamaguchi, Masahiro; Ohyama, Nagaaki
2009-01-01
Displays tend to expand their color gamut, for example with multi-primary color displays and Adobe RGB, so it has become possible to display high-chroma colors. However, an image whose chroma has simply been expanded can sometimes look unnatural, and few appropriate gamut-mapping methods for gamut expansion have been proposed. We are attempting preferred expanded color reproduction on a wide-color-gamut display by utilizing high-chroma colors effectively. As a first step, we conducted an experiment to investigate the psychological effect of color schemes including highly saturated colors. We used the six-primary-color projector that we have developed for the presentation of test colors. The six-primary-color projector's gamut volume in CIELAB space is about 1.8 times larger than that of a normal RGB projector. We conducted a subjective evaluation experiment using the SD (Semantic Differential) technique to quantify the psychological effect of high-chroma colors.
Lu, Liang; Qi, Lin; Luo, Yisong; Jiao, Hengchao; Dong, Junyu
2018-03-02
Multi-spectral photometric stereo can recover pixel-wise surface normals from a single RGB image. The difficulty lies in the fact that the intensity in each channel entangles illumination, albedo and camera response; thus, an initial estimate of the normal is required in optimization-based solutions. In this paper, we propose to make a rough depth estimation using a deep convolutional neural network (CNN) instead of using depth sensors or binocular stereo devices. Since high-resolution ground-truth data are expensive to obtain, we designed a network and trained it with rendered images of synthetic 3D objects. We use the model to predict initial normals of real-world objects and iteratively optimize the fine-scale geometry in the multi-spectral photometric stereo framework. The experimental results illustrate the improvement of the proposed method compared with existing methods.
Controllable 3D Display System Based on Frontal Projection Lenticular Screen
NASA Astrophysics Data System (ADS)
Feng, Q.; Sang, X.; Yu, X.; Gao, X.; Wang, P.; Li, C.; Zhao, T.
2014-08-01
A novel auto-stereoscopic three-dimensional (3D) projection display system based on a frontal projection lenticular screen is demonstrated. It provides a highly realistic 3D experience and freedom of interaction. In the demonstrated system, the content can be changed and the density of viewing points can be freely adjusted according to the viewers' demand. Densely spaced viewing points provide smooth motion parallax and greater image depth without blur. The basic principle of stereoscopic display is described first. Then, design architectures including hardware and software are demonstrated. The system consists of a frontal projection lenticular screen, an optimally designed projector array and a set of multi-channel image processors. The parameters of the frontal projection lenticular screen are based on the viewing requirements, such as the viewing distance and the width of the view zones. Each projector is mounted on an adjustable platform. The set of multi-channel image processors is made up of six PCs: one is used as the main controller, while the other five client PCs process 30 channel signals and transmit them to the projector array. A natural 3D scene is then perceived on the frontal projection lenticular screen with more than 1.5 m of image depth in real time. The control section is presented in detail, including parallax adjustment, system synchronization, distortion correction, etc. Experimental results demonstrate the effectiveness of this novel controllable 3D display system.
A Novel Multi-Camera Calibration Method based on Flat Refractive Geometry
NASA Astrophysics Data System (ADS)
Huang, S.; Feng, M. C.; Zheng, T. X.; Li, F.; Wang, J. Q.; Xiao, L. F.
2018-03-01
Multi-camera calibration plays an important role in many fields. In this paper, we present a novel multi-camera calibration method based on flat refractive geometry. All cameras acquire calibration images of a transparent glass calibration board (TGCB) at the same time. The use of a TGCB introduces refraction, which generates calibration error; the theory of flat refractive geometry is employed to eliminate this error. Moreover, the bundle adjustment method is used to minimize the reprojection error and obtain optimized calibration results. Finally, four-camera calibration results on real data show that the mean value and standard deviation of the reprojection error of our method are 4.3411e-05 and 0.4553 pixel, respectively. The experimental results show that the proposed method is accurate and reliable.
NASA Astrophysics Data System (ADS)
Liu, Yu-Che; Huang, Chung-Lin
2013-03-01
This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human subjects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The three main concerns of the algorithm are (1) imagery of the subject's face for biometric purposes, (2) optimal video quality of the human subjects, and (3) minimum hand-off time. Here, we define an objective function based on expected capture conditions such as the camera-subject distance, pan/tilt angles of capture, face visibility and others. This objective function serves to effectively balance the number of captures per subject and the quality of the captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.
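A toy version of such a capture-quality objective, patterned on the abstract; the specific terms, weights and camera/subject fields are assumptions, not the paper's formulation.

```python
import math

def capture_score(cam, subject, w_dist=0.4, w_angle=0.3, w_face=0.3):
    """Score one PTZ camera for one tracked subject; higher is better."""
    d = math.dist(cam['pos'], subject['pos'])
    dist_term = math.exp(-abs(d - cam['ideal_range']) / cam['ideal_range'])
    angle_term = max(0.0, math.cos(math.radians(subject['yaw'] - cam['pan'])))
    return w_dist * dist_term + w_angle * angle_term + w_face * subject['face_vis']

cams = [{'pos': (0, 0), 'pan': 45.0, 'ideal_range': 8.0},
        {'pos': (20, 0), 'pan': 135.0, 'ideal_range': 8.0}]
subject = {'pos': (6, 5), 'yaw': 40.0, 'face_vis': 0.8}
best = max(cams, key=lambda c: capture_score(c, subject))   # camera to hand the task to
```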
Super-Resolution of Multi-Pixel and Sub-Pixel Images for the SDI
1993-06-08
…where the phase of the transmitted signal is not needed. The Wigner-Ville distribution (WVD) of a real signal s(t), associated with the analytic signal z(t), is a time-frequency distribution defined as $W(t,f) = \int_{-\infty}^{\infty} z(t + \tau/2)\, z^{*}(t - \tau/2)\, \exp(-i 2\pi f \tau)\, d\tau$ (45). Note that the WVD is the double Fourier… [Cited: B. Boashash, O. P. Kenny and H. J. Whitehouse, "Radar imaging using the Wigner-Ville distribution", in Real-Time Signal Processing, J. P. Letellier…]
Three-dimensional nanometre localization of nanoparticles to enhance super-resolution microscopy
NASA Astrophysics Data System (ADS)
Bon, Pierre; Bourg, Nicolas; Lécart, Sandrine; Monneret, Serge; Fort, Emmanuel; Wenger, Jérôme; Lévêque-Fort, Sandrine
2015-07-01
Meeting the nanometre resolution promised by super-resolution microscopy techniques (pointillist: PALM, STORM; scanning: STED) requires stabilizing sample drift in real time during the whole acquisition process. Metal nanoparticles are excellent probes for tracking lateral drift as they provide crisp and photostable information. However, achieving nanometre axial super-localization is still a major challenge, as diffraction imposes large depths of field. Here we demonstrate fast, full three-dimensional, nanometre super-localization of gold nanoparticles through simultaneous intensity and phase imaging with a wavefront-sensing camera based on quadriwave lateral shearing interferometry. We show how to combine the intensity and phase information to provide the key to the third, axial dimension. Even in the presence of large three-dimensional fluctuations of several microns, we demonstrate unprecedented sub-nanometre localization accuracies down to 0.7 nm laterally and 2.7 nm axially at 50 frames per second. We demonstrate that nanoscale stabilization greatly enhances the image quality and resolution in direct stochastic optical reconstruction microscopy imaging.
2009-08-01
…directional characteristics depend strongly on frequency. Measurements of the projector's transmitting voltage response curves and of the … diagrams are reported. (Abbreviations: FFT, fast Fourier transform; HF, high frequency; MMPP, multi-mode pipe projector; kHz, kilohertz.)
Airborne net-centric multi-INT sensor control, display, fusion, and exploitation systems
NASA Astrophysics Data System (ADS)
Linne von Berg, Dale C.; Lee, John N.; Kruer, Melvin R.; Duncan, Michael D.; Olchowski, Fred M.; Allman, Eric; Howard, Grant
2004-08-01
The NRL Optical Sciences Division has initiated a multi-year effort to develop and demonstrate an airborne net-centric suite of multi-intelligence (multi-INT) sensors and exploitation systems for real-time target detection and targeting product dissemination. The goal of this Net-centric Multi-Intelligence Fusion Targeting Initiative (NCMIFTI) is to develop an airborne real-time intelligence gathering and targeting system that can be used to detect concealed, camouflaged, and mobile targets. The multi-INT sensor suite will include high-resolution visible/infrared (EO/IR) dual-band cameras, hyperspectral imaging (HSI) sensors in the visible-to-near infrared, short-wave and long-wave infrared (VNIR/SWIR/LWIR) bands, Synthetic Aperture Radar (SAR), electronics intelligence sensors (ELINT), and off-board networked sensors. Other sensors are also being considered for inclusion in the suite to address unique target detection needs. Integrating a suite of multi-INT sensors on a single platform should optimize real-time fusion of the on-board sensor streams, thereby improving the detection probability and reducing the false alarms that occur in reconnaissance systems that use single-sensor types on separate platforms, or that use independent target detection algorithms on multiple sensors. In addition to the integration and fusion of the multi-INT sensors, the effort is establishing an open-systems net-centric architecture that will provide a modular "plug and play" capability for additional sensors and system components and provide distributed connectivity to multiple sites for remote system control and exploitation.
Adaptive Wiener filter super-resolution of color filter array images.
Karch, Barry K; Hardie, Russell C
2013-08-12
Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.
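A sketch of the fast non-uniform-interpolation fusion step only, under illustrative shifts and sizes; the adaptive Wiener filtering and CFA channel statistics that make up the full method are omitted. Registered LR samples are scattered onto a common HR grid and resampled.

```python
import numpy as np
from scipy.interpolate import griddata

f = 2                                                        # upsampling factor
frames = [np.random.rand(64, 64) for _ in range(4)]          # stand-in LR frames
shifts = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]    # sub-pixel registration

pts, vals = [], []
for img, (dy, dx) in zip(frames, shifts):
    yy, xx = np.mgrid[0:64, 0:64]
    # Place each LR sample at its registered position on the HR grid.
    pts.append(np.column_stack([(yy + dy).ravel(), (xx + dx).ravel()]) * f)
    vals.append(img.ravel())
pts, vals = np.vstack(pts), np.concatenate(vals)

gy, gx = np.mgrid[0:128, 0:128]                              # HR grid coordinates
hr = griddata(pts, vals, (gy, gx), method='linear', fill_value=0.0)
```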
SFR test fixture for hemispherical and hyperhemispherical camera systems
NASA Astrophysics Data System (ADS)
Tamkin, John M.
2017-08-01
Optical testing of camera systems in volume production environments can often require expensive tooling and test fixturing. Wide-field (fish-eye, hemispheric and hyperhemispheric) optical systems create unique challenges because of their inherent distortion and the difficulty of controlling reflections from front-lit high-resolution test targets over the hemisphere. We present a unique design for a test fixture that uses low-cost manufacturing methods and equipment, such as 3D printing and an Arduino processor, to control back-lit multi-color (VIS/NIR) targets and sources. Special care with the LED drive electronics is required to accommodate both global and rolling shutter sensors.
Nanometric depth resolution from multi-focal images in microscopy.
Dalgarno, Heather I C; Dalgarno, Paul A; Dada, Adetunmise C; Towers, Catherine E; Gibson, Gavin J; Parton, Richard M; Davis, Ilan; Warburton, Richard J; Greenaway, Alan H
2011-07-06
We describe a method for tracking the position of small features in three dimensions from images recorded on a standard microscope with an inexpensive attachment between the microscope and the camera. The depth-measurement accuracy of this method is tested experimentally on a wide-field, inverted microscope and is shown to give approximately 8 nm depth resolution, over a specimen depth of approximately 6 µm, when using a 12-bit charge-coupled device (CCD) camera and very bright but unresolved particles. To assess low-flux limitations a theoretical model is used to derive an analytical expression for the minimum variance bound. The approximations used in the analytical treatment are tested using numerical simulations. It is concluded that approximately 14 nm depth resolution is achievable with flux levels available when tracking fluorescent sources in three dimensions in live-cell biology and that the method is suitable for three-dimensional photo-activated localization microscopy resolution. Sub-nanometre resolution could be achieved with photon-counting techniques at high flux levels.
NASA Astrophysics Data System (ADS)
Kirby, Richard; Whitaker, Ross
2016-09-01
In recent years, the use of multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor has become increasingly popular in surveillance and robotics applications. The advantages of multi-modal camera rigs include improved foreground/background segmentation, a wider range of lighting conditions under which the system works, and richer information (e.g. visible light and heat signature) for target identification. However, the traditional computer vision method of mapping pairs of images using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method to overcome the lack of common features in RGB/IR image pairs by using a variational-methods optimization algorithm to map the optical flow fields computed from images of different wavelengths. This results in the alignment of the flow fields, which in turn produces correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different-wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to other multi-modal image alignment methodologies as long as the scene contains sufficient depth variation, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig. We determine our method's accuracy by comparing against ground truth.
Investigation of Parallax Issues for Multi-Lens Multispectral Camera Band Co-Registration
NASA Astrophysics Data System (ADS)
Jhan, J. P.; Rau, J. Y.; Haala, N.; Cramer, M.
2017-08-01
Multi-lens multispectral cameras (MSCs), such as the Micasense RedEdge and Parrot Sequoia, record multispectral information through separate lenses. Their light weight and small size make them well suited for mounting on an Unmanned Aerial System (UAS) to collect high-spatial-resolution images for vegetation investigation. However, the multi-sensor geometry of the multi-lens structure induces significant band misregistration in the original images, so band co-registration must be performed in order to obtain accurate spectral information. A robust and adaptive band-to-band image transform (RABBIT) is proposed for the band co-registration of multi-lens MSCs. The first step is to obtain the camera rig information from camera system calibration and use the calibrated results for image transformation and lens distortion correction. Since the calibration uncertainty leads to different amounts of systematic error, the last step is to optimize the results in order to acquire better co-registration accuracy. Because parallax can cause significant band misregistration when images are closer to the targets, four datasets acquired from the RedEdge and Sequoia, including aerial and close-range imagery, were applied to evaluate the performance of RABBIT. The results from aerial images show that RABBIT achieves sub-pixel accuracy, suitable for the band co-registration of any multi-lens MSC. The close-range results show the same performance, whether the aim is band co-registration on a specific target for 3D modelling or on a target at uniform distance from the camera.
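A minimal sketch of warping one band onto a reference band with a single homography; RABBIT itself additionally corrects lens distortion and optimizes residual errors. The image arrays and the matched points here are stand-ins for real bands and a real feature matcher.

```python
import cv2
import numpy as np

ref = np.zeros((960, 1280), np.uint8)            # stand-in reference band (e.g. red)
band = np.zeros((960, 1280), np.uint8)           # stand-in misregistered band (e.g. NIR)
src_pts = np.float32([[10, 10], [1200, 15], [1190, 940], [15, 950]])   # assumed matches
dst_pts = np.float32([[12, 13], [1203, 17], [1192, 944], [18, 952]])

# Estimate the band-to-band transform and resample the band onto the reference grid.
H, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 3.0)
registered = cv2.warpPerspective(band, H, (ref.shape[1], ref.shape[0]))
```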
Chekanov, S. V.; Beydler, M.; Kotwal, A. V.; ...
2017-06-13
This paper describes simulations of detector response to multi-TeV physics at the Future Circular Collider (FCC-hh) or Super proton-proton Collider (SppC), which aim to collide proton beams with a centre-of-mass energy of 100 TeV. The unprecedented energy regime of these future experiments imposes new requirements on detector technologies which can be studied using the detailed geant4 simulations presented in this paper. The initial performance of a detector designed for physics studies at the FCC-hh or SppC experiments is described, with an emphasis on measurements of single particles up to 33 TeV in transverse momentum. Furthermore, the granularity requirements for calorimetry are investigated using the two-particle spatial resolution achieved for hadron showers.
Super-resolution processing for multi-functional LPI waveforms
NASA Astrophysics Data System (ADS)
Li, Zhengzheng; Zhang, Yan; Wang, Shang; Cai, Jingxiao
2014-05-01
Super-resolution (SR) is a radar processing technique closely related to pulse compression (or the correlation receiver). Many super-resolution algorithms have been developed for improved range resolution and reduced sidelobe contamination. Traditionally, the waveforms used for SR have been either phase-coded (such as the LKP3 or Barker codes) or frequency-modulated (chirp, or nonlinear frequency modulation). There is, however, an important class of waveforms which are either random in nature (such as random noise waveforms) or randomly modulated for multi-function operation (such as the ADS-B radar signals in [1]). These waveforms have the advantage of low probability of intercept (LPI). If the existing SR techniques can be applied to these waveforms, there will be much more flexibility in using them in actual sensing missions. SR also usually has the great advantage that the final output (as an estimate of ground truth) is largely independent of the waveform. Such benefits are attractive to many important primary radar applications. In this paper a general introduction to the SR algorithms is provided first, and some implementation considerations are discussed. The selected algorithms are applied to typical LPI waveforms, and the results are discussed. It is observed that SR algorithms can be reliably used for LPI waveforms; on the other hand, practical considerations should be kept in mind in order to obtain optimal estimation results.
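A sketch of the baseline correlation receiver that SR processing refines, applied to a random-noise (LPI) waveform; the code length, target placement and SNR are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
code = rng.standard_normal(512) + 1j * rng.standard_normal(512)   # noise waveform
echo = np.zeros(2048, complex)
echo[300:812] += 1.0 * code                      # target at range bin 300
echo[310:822] += 0.5 * code                      # closely spaced second target
echo += 0.1 * (rng.standard_normal(2048) + 1j * rng.standard_normal(2048))

# FFT-based matched filtering (cross-correlation with the transmitted code).
n = len(echo) + len(code) - 1
mf = np.fft.ifft(np.fft.fft(echo, n) * np.conj(np.fft.fft(code, n)))
profile = np.abs(mf[:len(echo)])                 # compressed range profile
print("strongest bins:", np.sort(np.argsort(profile)[-2:]))   # ~ [300, 310]
```

SR algorithms then operate on this compressed output (or the raw returns) to sharpen the two closely spaced responses and suppress the correlation sidelobes.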
Super Resolution Image of Yogi
NASA Technical Reports Server (NTRS)
1997-01-01
Yogi is a meter-size rock about 5 meters northwest of the Mars Pathfinder lander and was the second rock visited by the Sojourner Rover's alpha proton X-ray spectrometer (APXS) instrument. This mosaic shows super resolution techniques applied to the second APXS target rock, which was poorly illuminated in the rover's forward camera view taken before the instrument was deployed. Super resolution was applied to help to address questions about the texture of this rock and what it might tell us about its mode of origin.
This mosaic of Yogi was produced by combining four 'Super Pan' frames taken with the IMP camera. This composite color mosaic consists of 7 frames from the right eye, taken with different color filters that were enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame that is sharper than an individual frame would be. This panchromatic frame was then colorized with the red, green, and blue filtered images from the same sequence. The color balance was adjusted to approximate the true color of Mars. Shadows were processed separately from the rest of the rock and combined with the rest of the scene to bring out details in the shadow of Yogi that would be too dark to view at the same time as the sunlit surfaces. Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology (Caltech).
Restoration Of MEX SRC Images For Improved Topography: A New Image Product
NASA Astrophysics Data System (ADS)
Duxbury, T. C.
2012-12-01
Surface topography is an important constraint when investigating the evolution of solar system bodies. Topography is typically obtained from stereo photogrammetric or photometric (shape-from-shading) analyses of overlapping / stereo images and from laser / radar altimetry data. The ESA Mars Express mission [1] carries a Super Resolution Channel (SRC) as part of the High Resolution Stereo Camera (HRSC) [2]. The SRC can build up overlapping / stereo coverage of Mars, Phobos and Deimos by viewing the surfaces from different orbits. The derivation of high-precision topography data from the raw SRC images is degraded because the camera is out of focus: the point spread function (PSF) is multi-peaked, covering tens of pixels. After registering and co-adding hundreds of star images, an accurate SRC PSF was reconstructed and is being used to restore the SRC images to near blur-free quality. The restored images offer a factor of about 3 improvement in geometric accuracy as well as identifying the smallest of features, significantly improving the stereo photogrammetric accuracy of derived digital elevation models. The difference between blurred and restored images provides a new derived image product with improved feature recognition, increasing the spatial resolution and topographic accuracy of derived elevation models. Acknowledgements: This research was funded by the NASA Mars Express Participating Scientist Program. [1] Chicarro, et al., ESA SP 1291 (2009). [2] Neukum, et al., ESA SP 1291 (2009). A raw SRC image (h4235.003) of a Martian crater within Gale crater (the MSL landing site) is shown in the upper left and the restored image is shown in the lower left. A raw image (h0715.004) of Phobos is shown in the upper right and the difference between the raw and restored images, a new derived image data product, is shown in the lower right. The lower images, resulting from the image restoration process, significantly improve feature recognition for improved derived topographic accuracy.
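A generic sketch of PSF-based restoration (Richardson-Lucy deconvolution), assuming the reconstructed PSF is available as `psf`; the abstract does not name the restoration algorithm actually used, so this is only one plausible choice, shown with a stand-in Gaussian PSF rather than the multi-peaked SRC one.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iters=30):
    """Iterative deconvolution given a known, normalized PSF."""
    est = np.full_like(blurred, blurred.mean())
    psf_T = psf[::-1, ::-1]                      # flipped PSF acts as the adjoint
    for _ in range(iters):
        ratio = blurred / (fftconvolve(est, psf, mode='same') + 1e-12)
        est *= fftconvolve(ratio, psf_T, mode='same')
    return est

g = np.exp(-0.5 * (np.arange(-7, 8) / 2.5) ** 2)
psf = np.outer(g, g)
psf /= psf.sum()                                 # stand-in PSF (Gaussian, normalized)
scene = np.zeros((64, 64)); scene[20:24, 30:34] = 1.0
blurred = fftconvolve(scene, psf, mode='same')
restored = richardson_lucy(blurred, psf)
```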
Coincidence electron/ion imaging with a fast frame camera
NASA Astrophysics Data System (ADS)
Li, Wen; Lee, Suk Kyoung; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander; Fan, Lin
2015-05-01
A new time- and position-sensitive particle detection system based on a fast frame CMOS camera is developed for coincidence electron/ion imaging. The system is composed of three major components: a conventional microchannel plate (MCP)/phosphor screen electron/ion imager, a fast frame CMOS camera and a high-speed digitizer. The system collects the positional information of ions/electrons from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the MCPs processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of electron/ion spots on each camera frame with the peak heights on the corresponding time-of-flight (TOF) spectrum. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide. We further show that a time resolution of 30 ps can be achieved when measuring the electron TOF spectrum, which enables the new system to achieve a good energy resolution along the TOF axis.
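A sketch of the pairing step under illustrative thresholds and stand-in data: spots on a camera frame are centroided and ranked by integrated intensity so they can be matched, rank for rank, with the MCP time-of-flight peak heights. The exact real-time algorithms in the paper are not reproduced here.

```python
import numpy as np
from scipy import ndimage
from scipy.signal import find_peaks

frame = np.zeros((128, 128))
frame[40:43, 40:43] = 9.0                        # bright hit (stand-in)
frame[90:93, 20:23] = 4.0                        # dimmer hit (stand-in)
labels, n = ndimage.label(frame > 1.0)           # segment spots above threshold
centroids = ndimage.center_of_mass(frame, labels, range(1, n + 1))
intensities = ndimage.sum(frame, labels, range(1, n + 1))

tof = np.zeros(1000)
tof[200], tof[550] = 4.1, 8.8                    # digitizer trace peaks (stand-in)
peaks, props = find_peaks(tof, height=1.0)

# Match brightest spot to tallest TOF peak, second brightest to second tallest, etc.
order_img = np.argsort(intensities)[::-1]
order_tof = np.argsort(props['peak_heights'])[::-1]
pairs = [(centroids[i], peaks[j]) for i, j in zip(order_img, order_tof)]
```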
Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera.
Chiabrando, Filiberto; Chiabrando, Roberto; Piatti, Dario; Rinaudo, Fulvio
2009-01-01
3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and are reported in this paper. Two main aspects are treated. The first is the calibration of the distance measurements of the SR-4000 camera, covering evaluation of the camera warm-up time period, evaluation of the distance measurement error, and a study of the influence of camera orientation with respect to the observed object on the distance measurements. The second aspect concerns the photogrammetric calibration of the amplitude images delivered by the camera, using a purpose-built multi-resolution field made of high-contrast targets.
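A sketch of one way such a distance calibration can be applied; the polynomial error model and the synthetic "wobble" are assumptions, since the paper derives its own correction from the SR-4000 test campaign.

```python
import numpy as np

true_d = np.linspace(1.0, 4.0, 25)                    # reference distances [m]
meas_d = true_d + 0.01 * np.sin(4 * true_d) + 0.005   # stand-in systematic error
coeffs = np.polyfit(meas_d, true_d - meas_d, deg=5)   # error model vs. measured distance

def correct(d):
    """Apply the fitted correction to raw ToF distances."""
    return d + np.polyval(coeffs, d)

print(np.max(np.abs(correct(meas_d) - true_d)))       # residual after correction
```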
Optimal design and critical analysis of a high resolution video plenoptic demonstrator
NASA Astrophysics Data System (ADS)
Drazic, Valter; Sacré, Jean-Jacques; Bertrand, Jérôme; Schubert, Arno; Blondé, Etienne
2011-03-01
A plenoptic camera is a natural multi-view acquisition device, also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single-lens, single-sensor architecture has two downsides: limited resolution and limited depth sensitivity. In a first step, in order to circumvent these shortcomings, we investigated how the basic design parameters of a plenoptic camera can be chosen to optimize both the resolution of each view and its depth measuring capability. In a second step, we built a prototype based on a very high resolution Red One® movie camera with an external plenoptic adapter and a relay lens. The prototype delivered 5 video views of 820×410 pixels. The main limitation of our prototype is view cross-talk due to optical aberrations, which reduces the depth accuracy performance. We simulated some limiting optical aberrations and predicted their impact on the performance of the camera. In addition, we developed adjustment protocols based on a simple pattern, and analysis programs which investigate the view mapping and the amount of parallax crosstalk on the sensor on a per-pixel basis. These developments enabled us to adjust the lenslet array with sub-micrometer precision and to mark the pixels of the sensor where the views do not register properly.
NASA Astrophysics Data System (ADS)
Tsuji, Takao; Hara, Ryoichi; Oyama, Tsutomu; Yasuda, Keiichiro
A super distributed energy system is a future energy system in which a large part of the demand is fed by a huge number of distributed generators. At times a node in the super distributed energy system behaves as a load, while at other times it behaves as a generator; the characteristic of each node depends on the customers' decisions. In such a situation it is very difficult to regulate the voltage profile over the system because of the complexity of the power flows. This paper proposes a novel control method for distributed generators that achieves autonomous, decentralized voltage profile regulation using multi-agent technology. The proposed multi-agent system employs two types of agent: control agents, which generate or consume reactive power to regulate the voltage profile of neighboring nodes, and mobile agents, which transmit the information necessary for VQ control among the control agents. The proposed control method is tested through numerical simulations.
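A toy sketch of the decentralized VQ-control idea follows, under assumptions not stated in the abstract: each control agent nudges its reactive-power output in proportion to the mean voltage deviation of neighboring nodes, with the mobile agents' messaging reduced to a plain list of voltages. The gain and values are made up.

```python
# Illustrative only; the paper's actual control law and gains are not given.
def control_agent_step(q_output, neighbor_voltages, v_ref=1.0, gain=0.05):
    """One control cycle: return the updated reactive-power setpoint."""
    dev = sum(v - v_ref for v in neighbor_voltages) / len(neighbor_voltages)
    # Overvoltage at the neighbors -> absorb reactive power; undervoltage -> inject.
    return q_output - gain * dev

# Example: neighbors running high pushes the setpoint down.
print(control_agent_step(0.0, [1.03, 1.01, 1.02]))  # -0.001
```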
A Multi-Sensor Aerogeophysical Study of Afghanistan
2007-01-01
magnetometer coupled with an Applied Physics 539 3-axis fluxgate magnetometer for compensation of the aircraft field; • an Applanix DSS 301 digital...survey. DATA COLLECTION AND PROCESSING. Photogrammetry: More than 65,000 high-resolution photogrammetric images were collected using an Applanix Digital...HSI, L-Band Polarimetric Imaging Radar, KGPS, Dual Gravity Meters, Common Sensor Bomb-bay Pallet, Applanix DSS Camera Sensor Suite • Magnetometer • Gravity
Ultra-fast high-resolution hybrid and monolithic CMOS imagers in multi-frame radiography
NASA Astrophysics Data System (ADS)
Kwiatkowski, Kris; Douence, Vincent; Bai, Yibin; Nedrow, Paul; Mariam, Fesseha; Merrill, Frank; Morris, Christopher L.; Saunders, Andy
2014-09-01
A new burst-mode, 10-frame, hybrid Si-sensor/CMOS-ROIC FPA chip has recently been fabricated at Teledyne Imaging Sensors. The intended primary use of the sensor is in multi-frame 800 MeV proton radiography at LANL. The basic part of the hybrid is a large (48×49 mm²) stitched CMOS chip with a 1100×1100 pixel count and a minimum shutter speed of 50 ns. The performance parameters of this chip are compared to the first-generation 3-frame 0.5-Mpixel custom hybrid imager. The 3-frame cameras have been in continuous use for many years, in a variety of static and dynamic experiments at LANSCE. The cameras can operate with a per-frame adjustable integration time of ~120 ns to 1 s and an inter-frame time of 250 ns to 2 s. Given the 80 ms total readout time, the original and the new imagers can be externally synchronized to 0.1-to-5 Hz, 50-ns wide proton beam pulses, and record up to ~1000-frame radiographic movies, typically of 3-to-30 minute duration. The performance of the global electronic shutter is discussed and compared to that of a high-resolution commercial front-illuminated monolithic CMOS imager.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, Liulin; Webb, Ian K.; Garimella, Sandilya V. B.
Ion mobility (IM) separations have a broad range of analytical applications, but insufficient resolution limits many of them. Here we report on traveling wave (TW) ion mobility separations in a Serpentine Ultra-long Path with Extended Routing (SUPER) Structures for Lossless Ion Manipulations (SLIM) module in conjunction with mass spectrometry (MS). The extended routing utilizing multiple passes was facilitated by the introduction of a lossless ion switch at the end of the ion path that either directed ions to the MS detector or to another pass through the serpentine separation region, providing theoretically unlimited TWIM path lengths. Ions were confined in the SLIM by rf fields in conjunction with a DC guard bias, enabling essentially lossless TW transmission over greatly extended paths (e.g., ~1094 meters over 81 passes through the 13.5 m serpentine path). This multi-pass SUPER TWIM provided resolution approximately proportional to the square root of the number of passes (or path length). More than 30-fold higher IM resolution for Agilent tuning mix m/z 622 and 922 ions (~340 vs. ~10) was achieved for 40 passes compared to commercially available drift tube IM and other TWIM-based platforms. An initial evaluation of the isomeric sugars Lacto-N-hexaose and Lacto-N-neohexaose showed the isomeric structures to be baseline resolved, and a new conformational feature for Lacto-N-neohexaose was revealed after 9 passes. The new SLIM SUPER high-resolution TWIM platform has broad utility in conjunction with MS and is expected to enable a broad range of previously challenging or intractable separations.
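The quoted numbers can be checked directly: path length grows linearly with the number of passes and resolution roughly with its square root.

```python
# Worked numbers from the abstract above.
import math

SERPENTINE_PATH_M = 13.5

def total_path_m(passes):
    return passes * SERPENTINE_PATH_M

def resolution(passes, single_pass_resolution):
    # Resolution scales roughly with sqrt(number of passes).
    return single_pass_resolution * math.sqrt(passes)

print(total_path_m(81))     # 1093.5 m, matching the quoted ~1094 m
print(340 / math.sqrt(40))  # implied single-pass resolution, ~54
```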
Video auto stitching in multicamera surveillance system
NASA Astrophysics Data System (ADS)
He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang
2012-01-01
This paper concerns the problem of stitching video automatically in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaics in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem, in which not all cameras need to be calibrated, only a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resample algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.
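A minimal sketch of this registration-and-blending pipeline is below, with two substitutions labeled plainly: ORB stands in for SURF (SURF lives in OpenCV's non-free contrib module), and a crude per-pixel maximum replaces the paper's boundary resample blending.

```python
# Hedged sketch: feature matching -> homography -> warp -> crude blend.
import cv2
import numpy as np

def stitch_pair(img_a, img_b):
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)[:200]
    # Homography mapping img_b's pixels into img_a's frame, RANSAC-filtered.
    src = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, H, (2 * w, h))
    canvas[:, :w] = np.maximum(canvas[:, :w], img_a)  # crude blend
    return canvas
```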
Report Of The HST Strategy Panel: A Strategy For Recovery
1991-01-01
orbit change out: the Wide Field/Planetary Camera II (WFPC II), the Near-Infrared Camera and Multi-Object Spectrometer (NICMOS) and the Space ...are the Space Telescope Imaging Spectrograph (STIS), the Near-Infrared Camera and Multi-Object Spectrometer (NICMOS), and the second Wide Field and...expected to fail to lock due to duplicity was 20%; on-orbit data indicates that 10% may be a better estimate, but the guide stars were preselected
High-speed multi-exposure laser speckle contrast imaging with a single-photon counting camera
Dragojević, Tanja; Bronzi, Danilo; Varma, Hari M.; Valdes, Claudia P.; Castellvi, Clara; Villa, Federica; Tosi, Alberto; Justicia, Carles; Zappa, Franco; Durduran, Turgut
2015-01-01
Laser speckle contrast imaging (LSCI) has emerged as a valuable tool for cerebral blood flow (CBF) imaging. We present a multi-exposure laser speckle imaging (MESI) method which uses high-frame-rate acquisition with negligible inter-frame dead time to mimic multiple exposures in a single-shot acquisition series. Our approach takes advantage of the noise-free readout and high sensitivity of a complementary metal-oxide-semiconductor (CMOS) single-photon avalanche diode (SPAD) array to provide real-time speckle contrast measurement with high temporal resolution and accuracy. To demonstrate its feasibility, we provide comparisons between in vivo measurements with both the standard and the new approach performed on a mouse brain, under identical conditions. PMID:26309751
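A sketch of the single-shot multi-exposure idea follows: with negligible inter-frame dead time, summing n consecutive frames mimics an n-times longer exposure, and speckle contrast K = sigma/mean is computed per synthetic exposure over a small spatial window. Window size and exposure set are illustrative, not the paper's parameters.

```python
# Hedged sketch of synthesized multi-exposure speckle contrast.
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(img, win=7):
    mean = uniform_filter(img, win)
    sq_mean = uniform_filter(img * img, win)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    return std / np.maximum(mean, 1e-12)

def multi_exposure_contrast(frames, n_frames=(1, 2, 4, 8)):
    """frames: (N, H, W) stack; returns {n: contrast map for n summed frames}."""
    return {n: speckle_contrast(frames[:n].sum(axis=0).astype(float))
            for n in n_frames}
```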
A practical approach for active camera coordination based on a fusion-driven multi-agent system
NASA Astrophysics Data System (ADS)
Bustamante, Alvaro Luis; Molina, José M.; Patricio, Miguel A.
2014-04-01
In this paper, we propose a multi-agent system (MAS) architecture to manage spatially distributed active (pan-tilt-zoom) cameras. Traditional video surveillance algorithms are of no use for active cameras, so different approaches must be considered. Such multi-sensor surveillance systems have to be designed to solve two related problems: data fusion and coordinated sensor-task management. Generally, architectures proposed for the coordinated operation of multiple cameras are based on the centralisation of management decisions at the fusion centre. However, the existence of intelligent sensors capable of decision making brings with it the possibility of conceiving alternative decentralised architectures. This problem is approached by means of a MAS that integrates data fusion as an integral part of the architecture for distributed coordination purposes. This paper presents the MAS architecture and the system agents.
Babcock, Hazen P
2018-01-29
This work explores the use of industrial grade CMOS cameras for single molecule localization microscopy (SMLM). We show that industrial grade CMOS cameras approach the performance of scientific grade CMOS cameras at a fraction of the cost. This makes it more economically feasible to construct high-performance imaging systems with multiple cameras that are capable of a diversity of applications. In particular we demonstrate the use of industrial CMOS cameras for biplane, multiplane and spectrally resolved SMLM. We also provide open-source software for simultaneous control of multiple CMOS cameras and for the reduction of the movies that are acquired to super-resolution images.
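The reduction step (from raw camera movies to a super-resolution image) rests on localizing single emitters in each frame. A minimal sketch is below; it uses simple thresholding and centroiding for brevity, whereas the authors' open-source software fits a PSF model per emitter.

```python
# Hedged sketch of single-frame emitter localization for SMLM.
import numpy as np
from scipy.ndimage import maximum_filter, center_of_mass

def localize_frame(frame, threshold, box=7):
    peaks = (frame == maximum_filter(frame, box)) & (frame > threshold)
    r = box // 2
    h, w = frame.shape
    coords = []
    for y, x in zip(*np.nonzero(peaks)):
        if r <= y < h - r and r <= x < w - r:          # skip border peaks
            roi = frame[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            dy, dx = center_of_mass(roi)
            coords.append((y - r + dy, x - r + dx))    # sub-pixel position
    return coords
```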
Fabricating High-Resolution X-Ray Collimators
NASA Technical Reports Server (NTRS)
Appleby, Michael; Atkinson, James E.; Fraser, Iain; Klinger, Jill
2008-01-01
A process and method for fabricating multi-grid, high-resolution rotating modulation collimators for arcsecond and sub-arcsecond X-ray and gamma-ray imaging involves photochemical machining and precision stack lamination. The special fixturing and etching techniques that have been developed are used for the fabrication of multiple high-resolution grids on a single array substrate. This technology has applications in solar physics and astrophysics and in a number of medical imaging modalities, including mammography, computed tomography (CT), single photon emission computed tomography (SPECT), and the gamma cameras used in nuclear medicine. This collimator improvement can also be used in non-destructive testing, hydrodynamic weapons testing, and microbeam radiation therapy.
Winter precipitation particle size distribution measurement by Multi-Angle Snowflake Camera
NASA Astrophysics Data System (ADS)
Huang, Gwo-Jong; Kleinkort, Cameron; Bringi, V. N.; Notaroš, Branislav M.
2017-12-01
From the radar meteorology viewpoint, the most important properties for quantitative precipitation estimation of winter events are the 3D shape, size, and mass of precipitation particles, as well as the particle size distribution (PSD). In order to measure these properties precisely, optical instruments may be the best choice. The Multi-Angle Snowflake Camera (MASC) is a relatively new instrument equipped with three high-resolution cameras that capture winter precipitation particle images from three non-parallel angles, in addition to measuring the particle fall speed using two pairs of infrared motion sensors. However, MASC results have so far usually been presented as monthly or seasonal statistics, with particle sizes given as histograms; no previous study has used the MASC for a single-storm analysis, and none has used it to measure the PSD. We propose a methodology for obtaining the winter precipitation PSD measured by the MASC, and present and discuss the development, implementation, and application of the new technique for PSD computation based on MASC images. Overall, this is the first study of a MASC-based PSD. We present PSD MASC experiments and results for segments of two snow events to demonstrate the performance of our PSD algorithm. The results show that the self-consistency of the MASC-measured single-camera PSDs is good. To cross-validate the PSD measurements, we compare the MASC mean PSD (averaged over the three cameras) with that of a collocated 2D Video Disdrometer, and observe good agreement between the two sets of results.
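A sketch of a PSD computation of this general kind is below: each particle contributes 1/(A·v·dt·dD) to its size bin, where A·v·dt is the volume it sweeps through the sample area during the measurement interval. The sample area and interval are placeholders, not MASC calibration constants, and the authors' actual algorithm may differ.

```python
# Hedged sketch: N(D) in m^-3 mm^-1 from per-particle sizes and fall speeds.
import numpy as np

def psd(diameters_mm, fall_speeds_ms, sample_area_m2, dt_s, bins_mm):
    """Return N(D) for the given size bins (bins_mm ascending)."""
    widths = np.diff(bins_mm)
    n_d = np.zeros(len(widths))
    for d, v in zip(diameters_mm, fall_speeds_ms):
        i = np.searchsorted(bins_mm, d, side='right') - 1
        if 0 <= i < len(widths) and v > 0:
            # Each particle samples a volume A * v * dt while falling.
            n_d[i] += 1.0 / (sample_area_m2 * v * dt_s * widths[i])
    return n_d
```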
Structural analysis of herpes simplex virus by optical super-resolution imaging
NASA Astrophysics Data System (ADS)
Laine, Romain F.; Albecka, Anna; van de Linde, Sebastian; Rees, Eric J.; Crump, Colin M.; Kaminski, Clemens F.
2015-01-01
Herpes simplex virus type-1 (HSV-1) is one of the most widespread pathogens among humans. Although the structure of HSV-1 has been extensively investigated, the precise organization of tegument and envelope proteins remains elusive. Here we use super-resolution imaging by direct stochastic optical reconstruction microscopy (dSTORM), in combination with a model-based analysis of single-molecule localization data, to determine the position of protein layers within virus particles. We resolve different protein layers within individual HSV-1 particles using multi-colour dSTORM imaging and discriminate envelope-anchored glycoproteins from tegument proteins, both in purified virions and in virions present in infected cells. Precise characterization of HSV-1 structure was achieved by particle averaging of purified viruses and model-based analysis of the radial distribution of the tegument proteins VP16, VP1/2 and pUL37, and the envelope protein gD. From these data, we propose a model of the protein organization inside the tegument.
Optimized protocol for combined PALM-dSTORM imaging.
Glushonkov, O; Réal, E; Boutant, E; Mély, Y; Didier, P
2018-06-08
Multi-colour super-resolution localization microscopy is an efficient technique for studying a variety of intracellular processes, including protein-protein interactions. This technique requires specific labels that transition between fluorescent and non-fluorescent states under given conditions. For the most commonly used label types, photoactivatable fluorescent proteins and organic fluorophores, these conditions are different, making experiments that combine both labels difficult. Here, we demonstrate that changing the standard imaging buffer for organic fluorophores, a thiol/oxygen-scavenging system, to the commercial mounting medium Vectashield increased the number of photons emitted by the fluorescent protein mEos2 and enhanced the photoconversion rate between its green and red forms. In addition, the photophysical properties of organic fluorophores remained unaltered with respect to the standard imaging buffer. The use of Vectashield together with our optimized protocol for correction of sample drift and chromatic aberrations enabled us to perform two-colour 3D super-resolution imaging of the nucleolus and resolve its three compartments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rilling, M; Centre de Recherche sur le Cancer, Hôtel-Dieu de Québec, Quebec City, QC; Département de radio-oncologie, CHU de Québec, Quebec City, QC
2015-06-15
Purpose: The purpose of this work is to simulate a multi-focus plenoptic camera used as the measuring device in a real-time three-dimensional scintillation dosimeter. Simulating and optimizing this realistic optical system will bridge the technological gap between concept validation and a clinically viable tool that can provide highly efficient, accurate and precise measurements for dynamic radiotherapy techniques. Methods: The experimental prototype, previously developed for proof-of-concept purposes, uses an off-the-shelf multi-focus plenoptic camera. With an array of interleaved microlenses of different focal lengths, this camera records spatial and angular information of light emitted by a plastic scintillator volume. The three distinct microlens focal lengths were determined experimentally for use as baseline parameters by measuring image-to-object magnification for different distances in object space. A simulated plenoptic system was implemented using the non-sequential ray tracing software Zemax: this tool allows complete simulation of multiple optical paths by modeling interactions at interfaces such as scatter, diffraction, reflection and refraction. The active sensor was modeled based on the camera manufacturer specifications as a 2048×2048, 5 µm-pixel-pitch sensor. Planar light sources, simulating the plastic scintillator volume, were employed for the ray tracing simulations. Results: The microlens focal lengths were determined to be 384, 327 and 290 µm. A realistic multi-focus plenoptic system, with independently defined and optimizable specifications, was fully simulated. An f/2.9, 54 mm-focal-length Double Gauss objective was modeled as the system's main lens. A three-focal-length hexagonal microlens array of 250 µm thickness was designed, acting as an image-relay system between the main lens and the sensor. Conclusion: Simulation of a fully modeled multi-focus plenoptic camera enables the decoupled optimization of the main lens and microlens specifications. This work leads the way to improving the 3D dosimeter's achievable resolution, efficiency and build, providing a quality assurance tool fully meeting clinical needs. M.R. is financially supported by a Master's Canada Graduate Scholarship from the NSERC. This research is also supported by the NSERC Industrial Research Chair in Optical Design.
A new omni-directional multi-camera system for high resolution surveillance
NASA Astrophysics Data System (ADS)
Cogal, Omer; Akin, Abdulkadir; Seyid, Kerem; Popovic, Vladan; Schmid, Alexandre; Ott, Beat; Wellig, Peter; Leblebici, Yusuf
2014-05-01
Omni-directional high-resolution surveillance has a wide application range in the defense and security fields. Early systems used for this purpose were based on a parabolic mirror or fisheye lens, where distortion due to the nature of the optical elements cannot be avoided. Moreover, in such systems, the image resolution is limited to that of a single image sensor. Recently, the Panoptic camera approach, which mimics the eyes of flying insects using multiple imagers, has been presented. This approach features a novel solution for constructing a spherically arranged wide-FOV plenoptic imaging system where the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera designs is provided. New results for a very high resolution visible-spectrum imaging and recording system inspired by the Panoptic approach are presented. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over 17,700×4,650 pixels (82.3 MP). Real-time video capturing capability is also verified at 30 fps for a resolution over 9,000×2,400 pixels (21.6 MP). The next-generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently under development. The important capacity of GigaEye-1 opens the door to various post-processing techniques in the surveillance domain, such as large-perimeter object tracking, very high resolution depth map estimation and high dynamic-range imaging, which are beyond standard stitching and panorama generation methods.
A mobile laboratory for surface and subsurface imaging in geo-hazard monitoring activity
NASA Astrophysics Data System (ADS)
Cornacchia, Carmela; Bavusi, Massimo; Loperte, Antonio; Pergola, Nicola; Pignatti, Stefano; Ponzo, Felice; Lapenna, Vincenzo
2010-05-01
A new research infrastructure for supporting ground-based remote sensing observations in the different phases of the geo-risk management cycle is presented. This instrumental facility has been designed and realised by TeRN, a public-private consortium on Earth observation and natural risks, in the frame of the project "ImpresAmbiente", funded by the Italian Ministry of Research and University. The new infrastructure is equipped with ground-based sensors (hyperspectral cameras, thermal cameras, laser scanning and electromagnetic antennae) able to remotely map physical parameters and/or earth-surface properties (temperature, soil moisture, land cover, etc.) and to illuminate near-surface geological structures (faults, groundwater tables, landslide bodies, etc.). Furthermore, the system can be used for non-invasive investigations of architectural heritage and civil infrastructure (bridges, tunnels, road pavements, etc.) affected by natural and man-made hazards. The hyperspectral cameras can acquire high resolution images of the earth surface and cultural objects. They operate in the Visible Near InfraRed (0.4-1.0 μm) with 1600 spatial pixels and 3.7 nm spectral sampling, and in the Short Wave InfraRed (1.3-2.5 μm) spectral region with 320 spatial pixels and 5 nm spectral sampling. The IR cameras operate in the Medium Wavelength InfraRed (3-5 μm; 640×512; NETD < 20 mK) and in the Very Long Wavelength InfraRed (7.7-11.5 μm; 320×256; NETD < 25 mK) regions with a frame rate higher than 100 Hz, and both are equipped with a set of optical filters in order to operate in a multi-spectral configuration. The technological innovation of ground-based laser scanning equipment has led to increased survey resolution, with applications in several fields such as geology, architecture, environmental monitoring and cultural heritage. As a consequence, laser data can be usefully integrated with traditional monitoring techniques. The laser scanner is characterized by a very high data acquisition rate, up to 500,000 pxl/sec, with a range resolution of 0.1 mm and vertical and horizontal FoVs of 310° and 360° respectively, with a resolution of 0.0018°. The system is also equipped with a metric camera that allows the acquired high-resolution images to be georeferenced. The electromagnetic sensors make it possible to obtain, in near real time, high-resolution 2D and 3D subsurface tomographic images. The main components are a fully automatic resistivity meter for DC electrical (resistivity) and induced polarization surveys, a Ground Penetrating Radar with antennas covering the range from 400 MHz to 1.5 GHz, and a gradiometric magnetometric system. All the sensors can be installed on a mobile van and remotely controlled using wi-fi technologies. An all-time network connection capability is guaranteed by a self-configurable satellite link for data communication, which allows experimental data coming from field surveys to be transmitted in near-real time and other geospatial information to be shared. This ICT facility is well suited for emergency response activities during and after catastrophic events. Sensor synergy and multi-temporal and multi-scale resolutions of surface and sub-surface imaging are the key technical features of this instrumental facility. Finally, in this work we briefly present some preliminary results obtained during the emergency phase of the Abruzzo earthquake (Central Italy).
An overview of instrumentation for the Large Binocular Telescope
NASA Astrophysics Data System (ADS)
Wagner, R. Mark
2012-09-01
An overview of instrumentation for the Large Binocular Telescope (LBT) is presented. Optical instrumentation includes the Large Binocular Camera (LBC), a pair of wide-field (27' × 27') mosaic CCD imagers at the prime focus, and the Multi-Object Double Spectrograph (MODS), a pair of dual-beam blue-red optimized long-slit spectrographs mounted at the left and right direct F/15 Gregorian foci, incorporating multiple slit masks for multi-object spectroscopy over a 6' field at spectral resolutions of up to 2000. Infrared instrumentation includes the LBT Near-IR Spectroscopic Utility with Camera and Integral Field Unit for Extragalactic Research (LUCI), a modular near-infrared (0.9-2.5 μm) imager and spectrograph pair mounted at the left and right front bent F/15 Gregorian foci and designed for seeing-limited (FOV: 4' × 4') imaging, long-slit spectroscopy, and multi-object spectroscopy utilizing cooled slit masks, as well as diffraction-limited (FOV: 0'.5 × 0'.5) imaging and long-slit spectroscopy. Strategic instruments under development that can utilize the full 23-m baseline of the LBT include an interferometric cryogenic beam combiner with near-infrared and thermal-infrared instruments for Fizeau imaging and nulling interferometry (LBTI) and an optical bench near-infrared beam combiner utilizing multi-conjugate adaptive optics for high angular resolution and sensitivity (LINC-NIRVANA). LBTI is currently undergoing commissioning on the LBT, utilizing the installed adaptive secondary mirrors in both single-sided and two-sided beam combination modes. In addition, a fiber-fed bench spectrograph (PEPSI) capable of ultra-high-resolution spectroscopy and spectropolarimetry (R = 40,000-300,000) will be available as a principal investigator instrument. Over the past four years the LBC pair, LUCI1, and MODS1 have been commissioned and are now scheduled for routine partner science observations. The delivery of both LUCI2 and MODS2 is anticipated before the end of 2012. The availability of all these instruments mounted simultaneously on the LBT permits unique science, flexible scheduling, and improved operational support.
Performance of the Tachyon Time-of-Flight PET Camera
NASA Astrophysics Data System (ADS)
Peng, Q.; Choong, W.-S.; Vu, C.; Huber, J. S.; Janecek, M.; Wilson, D.; Huesman, R. H.; Qi, Jinyi; Zhou, Jian; Moses, W. W.
2015-02-01
We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the ~550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon's detector module is optimized for timing by coupling the 6.15 × 25 mm² side of 6.15 × 6.15 × 25 mm³ LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ± 20 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. The results show that at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3.
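The value of the improved timing can be made concrete: a coincidence timing resolution dt localizes each annihilation along its line of response to within dx = c·dt/2.

```python
# Worked numbers for the timing resolutions quoted above.
C = 299_792_458.0  # speed of light, m/s

def tof_localization_m(dt_s):
    return C * dt_s / 2.0

print(tof_localization_m(314e-12) * 100)  # ~4.7 cm for the Tachyon's 314 ps
print(tof_localization_m(550e-12) * 100)  # ~8.2 cm for a 550 ps commercial camera
```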
Details of Layers in Victoria Crater's Cape St. Vincent
NASA Technical Reports Server (NTRS)
2007-01-01
NASA's Mars Exploration Rover Opportunity spent about 300 sols (Martian days) during 2006 and 2007 traversing the rim of Victoria Crater. Besides looking for a good place to enter the crater, the rover obtained images of rock outcrops exposed at several cliffs along the way. The cliff in this image from Opportunity's panoramic camera (Pancam) is informally named Cape St. Vincent. It is a promontory approximately 12 meters (39 feet) tall on the northern rim of Victoria Crater, near the farthest point along the rover's traverse around the rim. Layers seen in Cape St. Vincent have proven to be among the best examples of meter-scale cross-bedding observed on Mars to date. Cross-bedding is a geologic term for rock layers which are inclined relative to the horizontal and which are indicative of ancient sand dune deposits. In order to get a better look at these outcrops, Pancam 'super-resolution' imaging techniques were utilized. Super-resolution is an imaging mode which acquires many pictures of the same target in order to reconstruct a digital image at a higher resolution than is native to the camera. These super-resolution images have allowed scientists to discern that the rocks at Victoria Crater once represented a large dune field, not unlike the Sahara desert on Earth, and that this dune field migrated with an ancient wind flowing from the north to the south across the region. Other rover chemical and mineral measurements have shown that many of the ancient sand dunes studied in Meridiani Planum were modified by surface and subsurface liquid water long ago. This is a Mars Exploration Rover Opportunity Panoramic Camera image acquired on sol 1167 (May 7, 2007), constructed from a mathematical combination of 16 different blue filter (480 nm) images.
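A sketch of this multi-frame super-resolution mode follows: register many frames of the same target to sub-pixel precision, then co-add them on an upsampled grid (shift-and-add). The Pancam team's actual reconstruction algorithm is not specified in the caption, so this is illustrative only.

```python
# Hedged sketch: shift-and-add super-resolution from repeated frames.
import numpy as np
from scipy.ndimage import shift, zoom
from skimage.registration import phase_cross_correlation

def shift_and_add(frames, factor=2):
    ref = frames[0]
    acc = np.zeros((ref.shape[0] * factor, ref.shape[1] * factor))
    for f in frames:
        # Sub-pixel offset of this frame relative to the reference.
        (dy, dx), _, _ = phase_cross_correlation(ref, f, upsample_factor=20)
        acc += shift(zoom(f.astype(float), factor, order=3),
                     (dy * factor, dx * factor), order=3)
    return acc / len(frames)
```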
Projection Mapping User Interface for Disabled People.
Gelšvartas, Julius; Simutis, Rimvydas; Maskeliūnas, Rytis
2018-01-01
Difficulty in communicating is one of the key challenges for people suffering from severe motor and speech disabilities. Often such a person can communicate and interact with the environment only using assistive technologies. This paper presents a multifunctional user interface designed to improve communication efficiency and personal independence. The main component of this interface is a projection mapping technique used to highlight objects in the environment. Projection mapping makes it possible to create a natural augmented-reality information presentation method. The user interface combines a depth sensor and a projector to create a camera-projector system, and we provide a detailed description of the camera-projector system calibration procedure. The described system performs tabletop object detection and automatic projection mapping. Multiple user input modalities have been integrated into the multifunctional user interface, and the system can be adapted to the needs of people with various disabilities.
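The core of such a camera-projector system can be sketched as follows: a 3D point measured by the depth sensor is re-projected into projector pixel coordinates using the projector intrinsics and its pose relative to the camera, which is what the calibration procedure provides. All numeric values below are placeholders, not the paper's calibration results.

```python
# Hedged sketch of camera-frame 3D point -> projector pixel mapping.
import numpy as np
import cv2

K_proj = np.array([[1400.0, 0.0, 640.0],    # hypothetical projector intrinsics
                   [0.0, 1400.0, 400.0],
                   [0.0, 0.0, 1.0]])
rvec = np.zeros(3)                           # projector rotation w.r.t. camera
tvec = np.array([0.15, 0.0, 0.0])            # e.g. a 15 cm baseline

def to_projector_pixel(point_cam_xyz):
    """Map a camera-frame 3D point (meters) to a projector pixel (u, v)."""
    pts, _ = cv2.projectPoints(
        np.asarray(point_cam_xyz, dtype=np.float64).reshape(1, 1, 3),
        rvec, tvec, K_proj, None)
    return pts.ravel()  # the pixel to light up to highlight the object
```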
Projection display market trends
NASA Astrophysics Data System (ADS)
Mentley, David E.
1997-05-01
The projection display industry is now a multi-billion dollar market comprising an expanding variety of technologies and applications. Growth is being driven by a combination of high volume consumer products and high value business demand. After many years of marginal, but steady performance improvements, essentially all types of projectors have crossed the threshold of acceptability and are now facing accelerated continuing growth. Overall worldwide unit sales of all types of projection displays for all applications will nearly double from 1.6 million units in 1996 to 2.8 million units in 2002. By value at the end user price, the global projector market will grow modestly from 6.3 billion dollars in 1996 to 7.7 billion dollars in 2002. Consumer television will represent the largest share of unit consumption over this time period; in 1996, this application represents 72 percent of the total unit volume. The second major application category for projection displays is the business or presentation projector, representing only 14 percent of the unit shipment total in 1996, but 50 percent of the value.
Tumor Lysing Genetically Engineered T Cells Loaded with Multi-Modal Imaging Agents
NASA Astrophysics Data System (ADS)
Bhatnagar, Parijat; Alauddin, Mian; Bankson, James A.; Kirui, Dickson; Seifi, Payam; Huls, Helen; Lee, Dean A.; Babakhani, Aydin; Ferrari, Mauro; Li, King C.; Cooper, Laurence J. N.
2014-03-01
Genetically-modified T cells expressing chimeric antigen receptors (CARs) exert an anti-tumor effect by identifying tumor-associated antigens (TAAs), independent of the major histocompatibility complex. For maximal efficacy and safety of adoptively transferred cells, imaging their biodistribution is critical: this will determine whether cells home to the tumor, and will assist in moderating cell dose. Here, T cells are modified to express CAR. An efficient, non-toxic process with potential for cGMP compliance is developed for loading high cell numbers with multi-modal (PET-MRI) contrast agents (Super Paramagnetic Iron Oxide Nanoparticles with Copper-64; SPION-64Cu). This can now potentially be used for 64Cu-based whole-body PET to detect regions of T cell accumulation with high sensitivity, followed by SPION-based MRI of these regions for high-resolution, anatomically correlated images of T cells. CD19-specific-CAR+SPIONpos T cells effectively target CD19+ lymphoma in vitro.
The California All-sky Meteor Surveillance (CAMS) System
NASA Astrophysics Data System (ADS)
Gural, P. S.
2011-01-01
A unique next generation multi-camera, multi-site video meteor system is being developed and deployed in California to provide high accuracy orbits of simultaneously captured meteors. Included herein is a description of the goals, concept of operations, hardware, and software development progress. An appendix contains a meteor camera performance trade study made for video systems circa 2010.
Dynamically re-configurable CMOS imagers for an active vision system
NASA Technical Reports Server (NTRS)
Yang, Guang (Inventor); Pain, Bedabrata (Inventor)
2005-01-01
A vision system is disclosed. The system includes a pixel array, at least one multi-resolution window operation circuit, and a pixel averaging circuit. The pixel array has an array of pixels configured to receive light signals from an image having at least one tracking target. The multi-resolution window operation circuits are configured to process the image. Each of the multi-resolution window operation circuits processes each tracking target within a particular multi-resolution window. The pixel averaging circuit is configured to sample and average pixels within the particular multi-resolution window.
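The pixel-averaging operation can be illustrated with a short sketch: within a tracking window, pixel blocks are averaged to trade resolution for readout bandwidth. The on-chip operation is emulated here in software; window coordinates and block size are illustrative.

```python
# Hedged sketch of multi-resolution window readout via block averaging.
import numpy as np
from skimage.measure import block_reduce

def multires_window(frame, y0, y1, x0, x1, block=2):
    """Return the window at reduced resolution (block x block averaging)."""
    win = frame[y0:y1, x0:x1].astype(float)
    return block_reduce(win, (block, block), np.mean)
```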
Design of the high resolution optical instrument for the Pleiades HR Earth observation satellites
NASA Astrophysics Data System (ADS)
Lamard, Jean-Luc; Gaudin-Delrieu, Catherine; Valentini, David; Renard, Christophe; Tournier, Thierry; Laherrere, Jean-Marc
2017-11-01
As part of its contribution to Earth observation from space, ALCATEL SPACE designed, built and tested the high-resolution cameras for the European intelligence satellites HELIOS I and II. Through these programmes, ALCATEL SPACE enjoys an international reputation: its capability and experience in high-resolution instrumentation are recognised by most customers. Following the SPOT programme, it was decided to go ahead with the PLEIADES HR programme. PLEIADES HR is the optical high-resolution component of a larger optical and radar multi-sensor system, ORFEO, which is developed in cooperation between France and Italy for dual civilian and defense use. ALCATEL SPACE has been entrusted by CNES with the development of the high-resolution camera of the Earth observation satellites PLEIADES HR. The first optical satellite of the PLEIADES HR constellation will be launched in mid-2008, and the second will follow in 2009. To minimize development costs, a mini-satellite approach has been selected, leading to a compact concept for the camera design. The paper describes the design and performance budgets of this novel high-resolution, large-field-of-view optical instrument, with emphasis on its technological features. This new generation of camera represents a breakthrough in comparison with the previous SPOT cameras, owing to a significant step in on-ground resolution, which approaches the capabilities of aerial photography. Recent advances in detector technology, optical fabrication and electronics make it possible for the PLEIADES HR camera to achieve its image quality performance goals while staying within weight and size restrictions normally considered suitable only for much lower performance systems. This camera design delivers superior performance using an innovative low-power, low-mass, scalable architecture, which provides a versatile approach for a variety of imaging requirements and allows for a wide range of accommodation possibilities with a mini-satellite class platform.
Design and development of an airborne multispectral imaging system
NASA Astrophysics Data System (ADS)
Kulkarni, Rahul R.; Bachnak, Rafic; Lyle, Stacey; Steidley, Carl W.
2002-08-01
Advances in imaging technology and sensors have made airborne remote sensing systems viable for many applications that require reasonably good resolution at low cost. Digital cameras are making their mark on the market by providing high resolution at very high rates. This paper describes an aircraft-mounted imaging system (AMIS) that is being designed and developed at Texas A&M University-Corpus Christi (A&M-CC) with the support of a grant from NASA. The approach is to first develop and test a one-camera system that will later be upgraded to a five-camera system offering multi-spectral capabilities. AMIS will be low cost, rugged and portable, and will have its own battery power source. Its immediate use will be to acquire images of the coastal area in the Gulf of Mexico for a variety of studies covering a broad spectral range from the near-ultraviolet to the near-infrared. This paper describes AMIS and its characteristics, discusses the process for selecting the major components, and presents the progress.
Enhanced LWIR NUC using an uncooled microbolometer camera
NASA Astrophysics Data System (ADS)
LaVeigne, Joe; Franks, Greg; Sparkman, Kevin; Prewarski, Marcus; Nehring, Brian
2011-06-01
Performing a good non-uniformity correction (NUC) is a key part of achieving optimal performance from an infrared scene projector, and the best NUC is performed in the band of interest for the sensor being tested. While cooled, large-format MWIR cameras are readily available and have been successfully used to perform NUC, similar cooled, large-format LWIR cameras are not as common and are prohibitively expensive. Large-format uncooled cameras are far more available and affordable, but present a range of challenges in practical use for performing NUC on an IRSP. Some of these challenges were discussed in a previous paper. Here, we report results from a continuing development program to use a microbolometer camera to perform LWIR NUC on an IRSP. Camera instability, temporal response, and thermal resolution were the main problems, and they have been solved by the implementation of several compensation strategies as well as hardware used to stabilize the camera. In addition, other processes have been developed to allow iterative improvement and to support changes of the post-NUC lookup table without requiring re-collection of the pre-NUC data with the new LUT in use.
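For orientation, the classic two-point correction that underlies most NUC procedures can be sketched as below: per-pixel gain and offset are derived from flat-field frames at two known source levels. This shows the principle only; the projector-specific iteration, compensation strategies, and LUT handling discussed above are omitted.

```python
# Hedged sketch of two-point gain/offset non-uniformity correction.
import numpy as np

def two_point_nuc(flat_lo, flat_hi):
    """Per-pixel gain/offset from flat fields at two source levels."""
    gain = (flat_hi.mean() - flat_lo.mean()) / (flat_hi - flat_lo)
    offset = flat_lo.mean() - gain * flat_lo
    return gain, offset

def correct(frame, gain, offset):
    # Corrected frames reproduce the mean response at both source levels.
    return gain * frame + offset
```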
Location and Geologic Setting for the Three U.S. Mars Landers
NASA Technical Reports Server (NTRS)
Parker, T. J.; Kirk, R. L.
1999-01-01
Super resolution of the horizon at both Viking landing sites has revealed "new" features we use for triangulation, similar to the approach used during the Mars Pathfinder mission. We propose alternative landing site locations for both landers, for which we believe the confidence is very high. Super resolution of VL-1 images also reveals some of the drift material at the site to consist of gravel-size deposits. Since our proposed location for VL-2 is NOT on the Mie ejecta blanket, the blocky surface around the lander may represent the meter-scale texture of "smooth plains" in the region. The Viking Lander panchromatic images typically offer more repeat coverage than does the IMP on Mars Pathfinder, due to the longer duration of these landed missions. Sub-pixel offsets, necessary for super resolution to work, appear to be attributable to thermal effects on the lander and settling of the lander over time. Due to the greater repeat coverage (particularly in the near and mid fields) and all-panchromatic images, the gain in resolution by super resolution processing is better for Viking than it is with most IMP image sequences. This enhances the study of textural details near the lander and enables the identification of rock and surface textures at greater distances from the lander. Discernment of stereo in super resolution images is possible to great distances from the lander, but is limited by the non-rotating baseline between the two cameras and the shorter height of the cameras above the ground compared to IMP. With super resolution, details of horizon features, such as blockiness and crater rim shapes, may be better correlated with Orbiter images. A number of horizon features - craters and ridges - were identified at VL-1 during the mission, and a few hills and subtle ridges were identified at VL-2. We have added a few "new" horizon features for triangulation at the VL-2 landing site in Utopia Planitia. These features were used for independent triangulation against features visible in Viking Orbiter and MGS MOC images, though the actual location of VL-1 lies in a data dropout in the MOC image of the area. Additional information is contained in the original extended abstract.
Near-infrared high-resolution real-time omnidirectional imaging platform for drone detection
NASA Astrophysics Data System (ADS)
Popovic, Vladan; Ott, Beat; Wellig, Peter; Leblebici, Yusuf
2016-10-01
Recent technological advancements in hardware systems have enabled higher-quality cameras. State-of-the-art panoramic systems use them to produce videos with a resolution of 9000×2400 pixels at a rate of 30 frames per second (fps) [1]. Many modern applications use object tracking to determine the speed and the path taken by each object moving through a scene. The detection requires detailed pixel analysis between two frames. In fields like surveillance systems or crowd analysis, this must be achieved in real time [2]. In this paper, we focus on the system-level design of a multi-camera sensor acquiring the near-infrared (NIR) spectrum and its ability to detect mini-UAVs in a representative rural Swiss environment. The presented results show UAV detection from a field trial that we conducted in August 2015.
Sakabe, N; Sakabe, K; Sasaki, K
2004-01-01
Galaxy is a Weissenberg-type high-speed, high-resolution and highly accurate fully automatic data-collection system using two cylindrical IP cassettes, each with a radius of 400 mm and a width of 450 mm. It was originally developed for static three-dimensional analysis using X-ray diffraction and was installed on bending-magnet beamline BL6C at the Photon Factory. It was found, however, that Galaxy was also very useful for time-resolved protein crystallography on a time scale of minutes. This has prompted us to design a new IP-conveyor-belt Weissenberg-mode data-collection system, called Super Galaxy, for time-resolved crystallography with improved time and crystallographic resolution over that achievable with Galaxy. Super Galaxy was designed with a half-cylinder-shaped cassette with a radius of 420 mm and a width of 690 mm. Using 1.0 Å incident X-rays, these dimensions correspond to maximum resolutions of 0.71 Å in the vertical direction and 1.58 Å in the horizontal. Upper and lower screens can be used to set the frame size of the recorded image. This function is useful not only for reducing the frame exchange time but also for saving disk space on the data server. The use of an IP conveyor belt and many IP readers makes Super Galaxy well suited for time-resolved, monochromatic X-ray crystallography at a very intense third-generation SR beamline. Here, Galaxy and a conceptual design for Super Galaxy are described, and their suitability as data-collection systems for macromolecular time-resolved monochromatic X-ray crystallography is compared.
Detecting Multi-scale Structures in Chandra Images of Centaurus A
NASA Astrophysics Data System (ADS)
Karovska, M.; Fabbiano, G.; Elvis, M. S.; Evans, I. N.; Kim, D. W.; Prestwich, A. H.; Schwartz, D. A.; Murray, S. S.; Forman, W.; Jones, C.; Kraft, R. P.; Isobe, T.; Cui, W.; Schreier, E. J.
1999-12-01
Centaurus A (NGC 5128) is a giant early-type galaxy with a merger history, containing the nearest radio-bright AGN. Recent Chandra High Resolution Camera (HRC) observations of Cen A reveal X-ray multi-scale structures in this object with unprecedented detail and clarity. We show the results of an analysis of the Chandra data with smoothing and edge-enhancement techniques that allow us to enhance and quantify the multi-scale structures present in the HRC images. These techniques include an adaptive smoothing algorithm (Ebeling et al. 1999) and a multi-directional gradient detection algorithm (Karovska et al. 1994). The Ebeling et al. adaptive smoothing algorithm, which is incorporated in the CXC analysis software package, is a powerful tool for smoothing images containing complex structures at various spatial scales. The adaptively smoothed images of Centaurus A simultaneously show the high-angular-resolution bright structures at scales as small as an arcsecond and the extended faint structures as large as several arcminutes. The large-scale structures suggest complex symmetry, including a component possibly associated with the inner radio lobes (as suggested by the ROSAT HRI data; Dobereiner et al. 1996), and a separate component with an orthogonal symmetry that may be associated with the galaxy as a whole. The dust lane and the X-ray ridges are very clearly visible. The adaptively smoothed images and the edge-enhanced images also suggest several filamentary features, including a large filament-like structure extending as far as about 5 arcminutes to the north-west.
Calibration Plans for the Multi-angle Imaging SpectroRadiometer (MISR)
NASA Astrophysics Data System (ADS)
Bruegge, C. J.; Duval, V. G.; Chrien, N. L.; Diner, D. J.
1993-01-01
The EOS Multi-angle Imaging SpectroRadiometer (MISR) will study the ecology and climate of the Earth through acquisition of global multi-angle imagery. The MISR employs nine discrete cameras, each a push-broom imager. Of these, four point forward, four point aft and one views the nadir. Absolute radiometric calibration will be obtained pre-flight using high quantum efficiency (HQE) detectors and an integrating sphere source. After launch, instrument calibration will be provided using HQE detectors in conjunction with deployable diffuse calibration panels. The panels will be deployed at time intervals of one month and used to direct sunlight into the cameras, filling their fields-of-view and providing through-the-optics calibration. Additional techniques will be utilized to reduce systematic errors, and provide continuity as the methodology changes with time. For example, radiation-resistant photodiodes will also be used to monitor panel radiant exitance. These data will be acquired throughout the five-year mission, to maintain calibration in the latter years when it is expected that the HQE diodes will have degraded. During the mission, it is planned that the MISR will conduct semi-annual ground calibration campaigns, utilizing field measurements and higher resolution sensors (aboard aircraft or in-orbit platforms) to provide a check of the on-board hardware. These ground calibration campaigns are limited in number, but are believed to be the key to the long-term maintenance of MISR radiometric calibration.
The Cosmic Evolution Through UV Spectroscopy (CETUS) Probe Mission Concept
NASA Astrophysics Data System (ADS)
Danchi, William; Heap, Sara; Woodruff, Robert; Hull, Anthony; Kendrick, Stephen E.; Purves, Lloyd; McCandliss, Stephan; Dodson, Kelly; Mehle, Greg; Burge, James; Valente, Martin; Rhee, Michael; Smith, Walter; Choi, Michael; Stoneking, Eric
2018-01-01
CETUS is a mission concept for an all-UV telescope with three scientific instruments: a wide-field camera, a wide-field multi-object spectrograph, and a point-source high- and medium-resolution spectrograph. It is primarily intended to work with other survey telescopes of the 2020s (e.g., E-ROSITA in the X-ray, LSST, Subaru, and WFIRST in the optical and near-IR, and SKA in the radio) to solve major outstanding problems in astrophysics. In this poster presentation, we give an overview of the CETUS key science goals and a progress report on the CETUS mission and instrument design.
Re-scan confocal microscopy: scanning twice for better resolution.
De Luca, Giulia M R; Breedijk, Ronald M P; Brandt, Rick A J; Zeelenberg, Christiaan H C; de Jong, Babette E; Timmermans, Wendy; Azar, Leila Nahidi; Hoebe, Ron A; Stallinga, Sjoerd; Manders, Erik M M
2013-01-01
We present a new super-resolution technique, Re-scan Confocal Microscopy (RCM), based on standard confocal microscopy extended with an optical (re-scanning) unit that projects the image directly onto a CCD camera. This new microscope has improved lateral resolution and strongly improved sensitivity, while maintaining the sectioning capability of a standard confocal microscope. This simple technology is typically useful for biological applications where the combination of high resolution and high sensitivity is required.
Multi-pulse shadowgraphic RGB illumination and detection for flow tracking
NASA Astrophysics Data System (ADS)
Menser, Jan; Schneider, Florian; Dreier, Thomas; Kaiser, Sebastian A.
2018-06-01
This work demonstrates the application of a multi-color LED and a consumer color camera for visualizing phase boundaries in two-phase flows, in particular for particle tracking velocimetry. The LED emits a sequence of short light pulses, red, green, then blue (RGB), and through its color-filter array, the camera captures all three pulses on a single RGB frame. In a backlit configuration, liquid droplets appear as shadows in each color channel. Color reversal and color cross-talk correction yield a series of three frozen-flow images that can be used for further analysis, e.g., determining the droplet velocity by particle tracking. Three example flows are presented, solid particles suspended in water, the penetrating front of a gasoline direct-injection spray, and the liquid break-up region of an "air-assisted" nozzle. Because of the shadowgraphic arrangement, long path lengths through scattering media lower image contrast, while visualization of phase boundaries with high resolution is a strength of this method. Apart from a pulse-and-delay generator, the overall system cost is very low.
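The color cross-talk correction lends itself to a short sketch: if each LED pulse leaks into the other Bayer channels with known fractions (a 3×3 matrix M, which would be measured, e.g., by pulsing one color at a time), the three frozen-flow images are recovered by inverting M per pixel. The matrix below is illustrative, not a measured value from the paper.

```python
# Hedged sketch of per-pixel RGB cross-talk correction by linear unmixing.
import numpy as np

M = np.array([[1.00, 0.08, 0.02],   # red channel response to R, G, B pulses
              [0.10, 1.00, 0.12],   # green channel response
              [0.02, 0.15, 1.00]])  # blue channel response
M_inv = np.linalg.inv(M)

def unmix(rgb_frame):
    """(H, W, 3) raw RGB frame -> (H, W, 3) cross-talk-corrected frame."""
    h, w, _ = rgb_frame.shape
    return (rgb_frame.reshape(-1, 3).astype(float) @ M_inv.T).reshape(h, w, 3)
```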
Recent advancements in system design for miniaturized MEMS-based laser projectors
NASA Astrophysics Data System (ADS)
Scholles, M.; Frommhagen, K.; Gerwig, Ch.; Knobbe, J.; Lakner, H.; Schlebusch, D.; Schwarzenberg, M.; Vogel, U.
2008-02-01
Laser projection systems that use the flying-spot principle and are based on a single MEMS micro scanning mirror are a very promising way to build ultra-compact projectors that may fit into mobile devices. First demonstrators showing the feasibility of this approach, and the applicability of the micro scanning mirror developed by Fraunhofer IPMS for these systems, have already been presented. However, a number of issues still have to be resolved before miniaturized laser projectors are ready for the market. This contribution describes progress on several of them, each of major importance for laser projection systems. First, the overall performance of the system has been increased from VGA resolution to SVGA (800×600 pixels), with easy connection to a PC via a DVI interface or use of the projector as an embedded system with a direct camera interface. Second, the degree of integration of the electronics has been enhanced by the design of an application-specific analog front-end IC for the micro scanning mirror. It has been fabricated in a special high-voltage technology and not only generates driving signals for the scanning mirror with amplitudes of up to 200 V but also integrates position detection of the mirror by several methods. Third, first results concerning speckle reduction have been achieved, which is necessary for the generation of high-quality images. Other aspects include laser modulation and solutions for projection onto tilted screens, which is possible because of the unlimited depth of focus.
Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert J.
2014-10-01
Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.
Multi-Task Learning with Low Rank Attribute Embedding for Multi-Camera Person Re-Identification.
Su, Chi; Yang, Fan; Zhang, Shiliang; Tian, Qi; Davis, Larry Steven; Gao, Wen
2018-05-01
We propose Multi-Task Learning with Low Rank Attribute Embedding (MTL-LORAE) to address the problem of person re-identification across multiple cameras. Re-identification on different cameras is treated as a set of related tasks, which allows the shared information among tasks to be exploited to improve re-identification accuracy. The MTL-LORAE framework integrates low-level features with mid-level attributes as the descriptions for persons. To improve the accuracy of this description, we introduce the low-rank attribute embedding, which maps the original binary attributes into a continuous space by exploiting the correlations between pairs of attributes. In this way, inaccurate attributes are rectified and missing attributes are recovered. The resulting objective function is constructed from an attribute embedding error and a quadratic loss on the class labels, and is solved by an alternating optimization strategy. The proposed MTL-LORAE is tested on four datasets and is shown to outperform existing methods by significant margins.
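The low-rank attribute embedding can be pictured as mapping a noisy binary attribute vector through a rank-constrained matrix; the sketch below is illustrative only, with random factors standing in for the embedding that MTL-LORAE learns jointly with the re-identification tasks.

```python
import numpy as np

# Illustrative low-rank attribute embedding: binary attribute vectors are
# mapped into a continuous space through a rank-k matrix, so correlated
# attributes reinforce each other. Dimensions, rank, and the random factors
# are placeholders; in MTL-LORAE the embedding is learned, not sampled.
n_attr, k = 40, 8
rng = np.random.default_rng(0)
U = rng.normal(size=(n_attr, k))
V = rng.normal(size=(k, n_attr))
E = U @ V                                    # rank-k embedding matrix

a_binary = rng.integers(0, 2, size=n_attr)   # noisy binary attributes of a person
a_continuous = E @ a_binary                  # continuous, "rectified" description
```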
NASA Astrophysics Data System (ADS)
Kelly, M. A.; Boldt, J.; Wilson, J. P.; Yee, J. H.; Stoffler, R.
2017-12-01
The multi-spectral STereo Atmospheric Remote Sensing (STARS) concept aims to provide high-spatial- and high-temporal-resolution observations of 3D cloud structures related to hurricane development and other severe weather events. The rapid evolution of severe weather demonstrates a critical need for mesoscale observations of severe weather dynamics, but such observations are rare, particularly over the ocean where extratropical and tropical cyclones can undergo explosive development. Coincident space-based measurements of wind velocity and cloud properties at the mesoscale remain a great challenge, but are critically needed to improve the understanding and prediction of severe weather and cyclogenesis. STARS employs a mature stereoscopic imaging technique on two satellites (e.g. two CubeSats or two hosted payloads) to simultaneously retrieve cloud motion vectors (CMVs), cloud-top temperatures (CTTs), and cloud geometric heights (CGHs) from multi-angle, multi-spectral observations of cloud features. STARS is a push-broom system based on separate wide-field-of-view, co-boresighted multi-spectral cameras in the visible, midwave infrared (MWIR), and longwave infrared (LWIR), with high spatial resolution (better than 1 km). The visible system is based on a panchromatic low-light imager that resolves cloud structures under nighttime illumination down to ¼ moon. The MWIR instrument, which is being developed as a NASA ESTO Instrument Incubator Program (IIP) project, is based on recent advances in MWIR detector technology that require only modest cooling. The STARS payload provides flexible options for spaceflight due to its low size, weight, and power (SWaP) and very modest cooling requirements. STARS also meets Air Force operational requirements for cloud characterization and theater weather imagery. In this paper, an overview of the STARS concept, including the high-level sensor design, the concept of operations, and the measurement capability, is presented.
Spectrally Shaped DP-16QAM Super-Channel Transmission with Multi-Channel Digital Back-Propagation
Maher, Robert; Xu, Tianhua; Galdino, Lidia; Sato, Masaki; Alvarado, Alex; Shi, Kai; Savory, Seb J.; Thomsen, Benn C.; Killey, Robert I.; Bayvel, Polina
2015-01-01
The achievable transmission capacity of conventional optical fibre communication systems is limited by nonlinear distortions due to the Kerr effect and the difficulty in modulating the optical field to effectively use the available fibre bandwidth. In order to achieve a high information spectral density (ISD), while simultaneously maintaining transmission reach, multi-channel fibre nonlinearity compensation and spectrally efficient data encoding must be utilised. In this work, we use a single coherent super-receiver to simultaneously receive a DP-16QAM super-channel, consisting of seven spectrally shaped 10 GBd sub-carriers spaced at the Nyquist frequency. Effective nonlinearity mitigation is achieved using multi-channel digital back-propagation (MC-DBP), and this technique is combined with an optimised forward error correction implementation to demonstrate a record gain in transmission reach of 85%, increasing the maximum transmission distance from 3190 km to 5890 km, with an ISD of 6.60 b/s/Hz. In addition, this report outlines, for the first time, the sensitivity of MC-DBP gain to linear transmission line impairments and defines a trade-off between performance and complexity. PMID:25645457
Feng, Lei; Fang, Hui; Zhou, Wei-Jun; Huang, Min; He, Yong
2006-09-01
Site-specific variable nitrogen application is one of the major precision crop production management operations. Obtaining sufficient crop nitrogen stress information is essential for achieving effective site-specific nitrogen applications. The present paper describes the development of a multi-spectral nitrogen deficiency sensor, which uses three channels (green, red, near-infrared) of crop images to determine the nitrogen level of canola. The sensor assesses nitrogen stress through the estimated SPAD value of the canola, derived from canopy reflectance sensed in those three channels of the multi-spectral camera. The core of this investigation is the calibration between the multi-spectral readings and the crop nitrogen levels measured with a SPAD 502 chlorophyll meter. Based on the results obtained in this study, it can be concluded that a multi-spectral CCD camera can provide sufficient information to perform reasonable SPAD value estimation during field operations.
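A minimal sketch of the calibration idea, assuming a simple linear regression from the three channel reflectances to SPAD readings; the paper does not commit to this particular model, and all numbers below are toy data.

```python
import numpy as np

# Toy data: rows are plots, columns are mean canopy reflectance in the
# green, red, and near-infrared channels of the multi-spectral camera.
R = np.array([
    [0.12, 0.08, 0.45],
    [0.10, 0.06, 0.52],
    [0.14, 0.10, 0.40],
    [0.09, 0.05, 0.55],
])
spad = np.array([32.1, 38.4, 28.7, 41.0])   # SPAD 502 readings for the same plots

# Fit SPAD ~ a*green + b*red + c*NIR + d by ordinary least squares.
A = np.column_stack([R, np.ones(len(R))])
coeffs, *_ = np.linalg.lstsq(A, spad, rcond=None)

def estimate_spad(green, red, nir):
    """Estimate SPAD from camera reflectance using the fitted calibration."""
    return np.dot(coeffs, [green, red, nir, 1.0])

print(estimate_spad(0.11, 0.07, 0.50))
```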
Topview stereo: combining vehicle-mounted wide-angle cameras to a distance sensor array
NASA Astrophysics Data System (ADS)
Houben, Sebastian
2015-03-01
The variety of vehicle-mounted sensors needed to fulfill a growing number of driver assistance tasks has become a substantial factor in automobile manufacturing cost. We present a stereo distance method exploiting the overlapping fields of view of a multi-camera fisheye surround-view system, as used for near-range vehicle surveillance tasks, e.g. in parking maneuvers. Hence, we aim at creating a new input signal from sensors that are already installed. Particular properties of wide-angle cameras (e.g. strongly varying resolution across the image) demand an adaptation of the image processing pipeline to several problems that do not arise in classical stereo vision performed with cameras carefully designed for this purpose. We introduce the algorithms for rectification, correspondence analysis, and regularization of the disparity image, discuss reasons for and avoidance of the observed caveats, and present first results on a prototype topview setup.
Photodetectors for the Advanced Gamma-ray Imaging System (AGIS)
NASA Astrophysics Data System (ADS)
Wagner, Robert G.; Advanced Gamma-ray Imaging System AGIS Collaboration
2010-03-01
The Advanced Gamma-Ray Imaging System (AGIS) is a concept for the next-generation very high energy gamma-ray observatory. Design goals include an order of magnitude better sensitivity, better angular resolution, and a lower energy threshold than existing Cherenkov telescopes. Each telescope is equipped with a camera that detects and records the Cherenkov-light flashes from air showers. The camera comprises a pixelated focal plane of blue-sensitive and fast (nanosecond) photon detectors that detect the photon signal and convert it into an electrical one. Given the scale of AGIS, the camera must be reliable and cost effective. The Schwarzschild-Couder optical design yields a smaller plate scale than present-day Cherenkov telescopes, enabling the use of more compact, multi-pixel devices, including multianode photomultipliers or Geiger avalanche photodiodes. We present the conceptual design of the focal plane for the camera and results from testing candidate focal plane sensors.
Super-Resolution Enhancement From Multiple Overlapping Images: A Fractional Area Technique
NASA Astrophysics Data System (ADS)
Michaels, Joshua A.
With the availability of large quantities of relatively low-resolution data from several decades of spaceborne imaging, methods of creating an accurate, higher-resolution image from multiple lower-resolution images (i.e. super-resolution) have been developed almost since such imagery has existed. The fractional-area super-resolution technique developed in this thesis has never before been documented. Satellite orbits, like Landsat's, have a quantifiable variation, which means each image is not centered on exactly the same spot more than once, and the overlapping information from these multiple images may be used for super-resolution enhancement. By splitting a single initial pixel into many smaller, desired pixels, a relationship can be created between them using the ratio of the area within the initial pixel. The ideal goal of this technique is to obtain smaller pixels with exact values and no error, yielding a better potential result than methods that produce interpolated pixel values with a consequent loss of spatial resolution. A Fortran 95 program was developed to perform all calculations associated with the fractional-area super-resolution technique. The fractional areas are calculated using traditional trigonometry and coordinate geometry, and the Linear Algebra Package (LAPACK; Anderson et al., 1999) is used to solve for the higher-resolution pixel values. In order to demonstrate proof of concept, a synthetic dataset was created using the intrinsic Fortran random number generator and Adobe Illustrator CS4 (for geometry). To test the real-life application, digital pictures of a large US geological map were taken under fluorescent lighting with a tripod-mounted Sony DSC-S600 point-and-shoot camera. While the fractional-area super-resolution technique works in perfect synthetic conditions, it did not produce a reasonable or consistent solution in the digital photograph enhancement test. The prohibitive amount of processing time (up to 60 days for a relatively small enhancement area) severely limits the practical usefulness of fractional-area super-resolution. The technique is also very sensitive to the relative co-registration of the input images, which must be accurate to a sub-pixel degree. However, if input conditions permit, it could be applied as a "pinpoint" super-resolution technique, restricted to very small areas with very good input image co-registration.
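A small sketch of the fractional-area construction on synthetic data: each coarse pixel contributes one equation whose coefficients are the fractional areas of the fine pixels it covers, and the stacked system is solved in the least-squares sense (the thesis uses LAPACK directly; numpy's lstsq wraps the same routines). The 2x factor, whole-pixel shifts, and sizes are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
hi = rng.random((8, 8))              # "true" high-resolution scene (synthetic)
factor = 2                           # each coarse pixel covers 2x2 fine pixels

def observe(hi_img, dy, dx):
    """Simulate one shifted low-resolution image: every coarse pixel averages
    the factor x factor fine pixels it covers."""
    h = hi_img[dy:dy + 6, dx:dx + 6]                     # 6x6 fine-pixel window
    return h.reshape(3, factor, 3, factor).mean(axis=(1, 3))

# Build the linear system: one row per observed coarse pixel, one column per
# unknown fine pixel; entries are the fractional areas (uniform 1/4 here).
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
rows, b = [], []
for dy, dx in shifts:
    lo = observe(hi, dy, dx)
    for i in range(3):
        for j in range(3):
            w = np.zeros((8, 8))
            w[dy + i*factor:dy + (i+1)*factor,
              dx + j*factor:dx + (j+1)*factor] = 0.25
            rows.append(w.ravel())
            b.append(lo[i, j])

A = np.array(rows)
# Least-squares / minimum-norm solution; this toy system is underdetermined,
# and additional shifted images add constraints on the fine pixels.
x, *_ = np.linalg.lstsq(A, np.array(b), rcond=None)
recovered = x.reshape(8, 8)
```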
Beats: Video Monitors and Cameras.
ERIC Educational Resources Information Center
Worth, Frazier
1996-01-01
Presents a method to teach the concept of beats as a generalized phenomenon rather than teaching it only in the context of sound. Involves using a video camera to film a computer terminal, 16-mm projector, or TV monitor. (JRH)
Two-photon speckle illumination for super-resolution microscopy.
Negash, Awoke; Labouesse, Simon; Chaumet, Patrick C; Belkebir, Kamal; Giovannini, Hugues; Allain, Marc; Idier, Jérôme; Sentenac, Anne
2018-06-01
We present a numerical study of a microscopy setup in which the sample is illuminated with uncontrolled speckle patterns and the two-photon excitation fluorescence is collected on a camera. We show that, using a simple deconvolution algorithm for processing the speckle low-resolution images, this wide-field imaging technique exhibits resolution significantly better than that of two-photon excitation scanning microscopy or one-photon excitation bright-field microscopy.
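The deconvolution step is described above only as "simple"; as a stand-in, a generic Richardson-Lucy iteration of the kind commonly used for such processing might look like the following (the paper's actual algorithm may differ, and in practice estimates from many speckle frames are combined).

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    """Generic Richardson-Lucy deconvolution; psf must be normalised to sum
    to 1. A stand-in for the paper's unspecified 'simple deconvolution'."""
    estimate = np.full_like(image, image.mean())
    psf_flipped = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_flipped, mode="same")
    return estimate

# Toy usage: blur a random scene with a Gaussian PSF and deconvolve it.
x = np.arange(-7, 8)
g = np.exp(-(x**2) / 8.0)
psf = np.outer(g, g)
psf /= psf.sum()
scene = np.random.rand(128, 128)
blurred = fftconvolve(scene, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```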
NASA Astrophysics Data System (ADS)
Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling
2018-06-01
Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. In many applications the cameras have no, or only narrow, overlapping FOVs, which poses a huge challenge to global calibration. This paper presents a global calibration method for multi-camera systems without overlapping FOVs based on photogrammetry and a reconfigurable target. Firstly, two planar targets are fixed together into a long target, sized according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in the global calibration. Then, the reprojection errors of the target feature points in the two cameras' coordinate systems are calculated simultaneously and minimized with the Levenberg–Marquardt algorithm to find the optimal transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost, and is especially suitable for on-site calibration.
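A minimal sketch of the pairwise-transform estimation, assuming aligned 3D target points rather than the paper's full image-plane reprojection formulation; Levenberg-Marquardt is used as in the paper, and all poses and points are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Synthetic stand-in data: 3D target points expressed in camera 1's frame and
# the same points observed in camera 2's frame (poses are illustrative).
rng = np.random.default_rng(1)
pts_cam1 = rng.random((20, 3)) * 100
true_R = Rotation.from_euler("xyz", [5, -3, 10], degrees=True)
true_t = np.array([500.0, 20.0, -15.0])
pts_cam2 = true_R.apply(pts_cam1) + true_t + rng.normal(0, 0.02, (20, 3))

def residuals(x):
    """Point-alignment residuals for the camera-1 -> camera-2 transform,
    parametrised as a rotation vector (3) plus a translation (3)."""
    R, t = Rotation.from_rotvec(x[:3]), x[3:]
    return (R.apply(pts_cam1) + t - pts_cam2).ravel()

# Levenberg-Marquardt refinement of the pairwise transform.
sol = least_squares(residuals, x0=np.zeros(6), method="lm")
R12 = Rotation.from_rotvec(sol.x[:3]).as_matrix()
t12 = sol.x[3:]
# Chaining such pairwise transforms maps every camera into the reference frame.
```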
MSE spectrograph optical design: a novel pupil slicing technique
NASA Astrophysics Data System (ADS)
Spanò, P.
2014-07-01
The Maunakea Spectroscopic Explorer will be mainly devoted to deep, wide-field spectroscopic surveys at spectral resolutions from ~2000 to ~20000, at visible and near-infrared wavelengths. Simultaneous spectral coverage at low resolution is required, while at high resolution only selected windows can be covered. Moreover, very high multiplexing (3200 objects) must be obtained at low resolution; at higher resolutions a reduced number of objects (~800) can be observed. To meet such demanding requirements, a fiber-fed multi-object spectrograph concept has been designed by pupil-slicing the collimated beam, followed by multiple dispersive and camera optics. Different resolution modes are obtained by introducing anamorphic lenslets in front of the fiber arrays. The spectrograph is able to switch between three resolution modes (2000, 6500, 20000) by removing the anamorphic lenses and exchanging gratings. Camera lenses are fixed in place to increase stability. To enhance throughput, first-order VPH gratings have been preferred over echelle gratings. Moreover, throughput is kept high over all wavelength ranges by splitting the light into several arms with dichroic beamsplitters and optimizing the efficiency of each channel through proper selection of glass materials, coatings, and grating parameters.
A Gradient Optimization Approach to Adaptive Multi-Robot Control
2009-09-01
implemented for deploying a group of three flying robots with downward facing cameras to monitor an environment on the ground. Thirdly, the multi-robot...theoretically proven, and implemented on multi-robot platforms. Thesis Supervisor: Daniela Rus Title: Professor of Electrical Engineering and Computer...often nonlinear, and they are coupled through a network which changes over time. Thirdly, implementing multi-robot controllers requires maintaining mul
3D Rainbow Particle Tracking Velocimetry
NASA Astrophysics Data System (ADS)
Aguirre-Pablo, Andres A.; Xiong, Jinhui; Idoughi, Ramzi; Aljedaani, Abdulrahman B.; Dun, Xiong; Fu, Qiang; Thoroddsen, Sigurdur T.; Heidrich, Wolfgang
2017-11-01
A single color camera is used to reconstruct a 3D-3C velocity flow field. The camera records the 2D (X, Y) position and the colored scattered light intensity (encoding Z) of white polyethylene tracer particles in a flow. The main advantage of using a color camera is the capability of combining different intensity levels in each color channel to obtain more depth levels. The illumination system consists of an LCD projector placed perpendicular to the camera. Colored intensity gradients are projected onto the particles to encode the depth position (Z) of each particle, benefiting from the possibility of varying the color profiles and projection rates up to 60 Hz. Chromatic aberrations and distortions are estimated and corrected using a 3D laser-engraved calibration target. The characterization of the camera-projector system is presented, considering the size and depth position of the particles. The use of these components dramatically reduces the cost and complexity of traditional 3D-PTV systems.
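A toy sketch of the depth decoding, assuming linear intensity ramps are projected in two channels so that their ratio varies monotonically with depth; the actual projected profiles are a design choice of the system and are not specified here.

```python
import numpy as np

def depth_from_color(red, green, z_near=0.0, z_far=1.0):
    """Decode depth from two projected intensity ramps: red rises and green
    falls with Z, so their normalised ratio maps monotonically to depth
    (linear ramps and the channel pairing are assumptions for illustration)."""
    ratio = red / np.maximum(red + green, 1e-12)   # 0 at near plane, 1 at far plane
    return z_near + ratio * (z_far - z_near)

# Three particles seen at increasing depth.
print(depth_from_color(np.array([0.2, 0.5, 0.8]),
                       np.array([0.8, 0.5, 0.2])))
```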
Phase-stepped fringe projection by rotation about the camera's perspective center.
Huddart, Y R; Valera, J D; Weston, N J; Featherstone, T C; Moore, A J
2011-09-12
A technique to produce phase steps in a fringe projection system for shape measurement is presented. Phase steps are produced by introducing relative rotation between the object and the fringe projection probe (comprising a projector and camera) about the camera's perspective center. Relative motion of the object in the camera image can be compensated, because it is independent of the distance of the object from the camera, whilst the phase of the projected fringes is stepped due to the motion of the projector with respect to the object. The technique was validated with a static fringe projection system by moving an object on a coordinate measuring machine (CMM). The alternative approach, of rotating a lightweight and robust CMM-mounted fringe projection probe, is discussed. An experimental accuracy of approximately 1.5% of the projected fringe pitch was achieved, limited by the standard phase-stepping algorithms used rather than by the accuracy of the phase steps produced by the new technique.
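For reference, the "standard phase-stepping algorithms" mentioned above recover the wrapped phase from N fringe images with known phase steps; a minimal synchronous-detection version on synthetic fringes is sketched below (exact for N uniform steps over a full period).

```python
import numpy as np

def phase_from_steps(frames, deltas):
    """Synchronous-detection phase stepping: recover the wrapped phase from
    N fringe images I = a + b*cos(phi + delta_n) with known steps delta_n."""
    deltas = np.asarray(deltas)[:, None, None]
    num = np.sum(frames * np.sin(deltas), axis=0)
    den = np.sum(frames * np.cos(deltas), axis=0)
    return np.arctan2(-num, den)          # wrapped phase in (-pi, pi]

# Example: four synthetic 90-degree steps of a fringe pattern.
x = np.linspace(0, 4 * np.pi, 256)
phi_true = np.tile(x, (256, 1))
steps = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
frames = np.stack([1 + 0.8 * np.cos(phi_true + d) for d in steps])
phi = phase_from_steps(frames, steps)
```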
North Twin Peak in super resolution
NASA Technical Reports Server (NTRS)
1997-01-01
This pair of images shows the result of taking a sequence of 25 identical exposures from the Imager for Mars Pathfinder (IMP) of the northern Twin Peak, with small camera motions, and processing them with the Super-Resolution algorithm developed at NASA's Ames Research Center.
The upper image is a representative input image, scaled up by a factor of five, with the pixel edges smoothed out for a fair comparison. The lower image allows significantly finer detail to be resolved. Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator. The super-resolution research was conducted by Peter Cheeseman, Bob Kanefsky, Robin Hanson, and John Stutz of NASA's Ames Research Center, Mountain View, CA. More information on this technology is available on the Ames Super Resolution home page at http://ic-www.arc.nasa.gov/ic/projects/bayes-group/group/super-res/
Dai, Meiling; Yang, Fujun; He, Xiaoyuan
2012-04-20
A simple but effective fringe projection profilometry method is proposed to measure 3D shape using a single snapshot of a color sinusoidal fringe pattern. A color fringe pattern encoding a sinusoidal fringe (as the red component) and a uniform intensity pattern (as the blue component) is projected by a digital video projector, and the deformed fringe pattern is recorded by a color CCD camera. The captured color fringe pattern is separated into its RGB components, and a division operation is applied to the red and blue channels to remove the spatially varying reflectivity. Shape information of the tested object is decoded by applying an arcsine algorithm to the normalized fringe pattern with subpixel resolution. In the case of fringe discontinuities caused by height steps or spatially isolated surfaces, the separated blue component is binarized and used to correct the phase demodulation. A simple and robust method is also introduced to compensate for the nonlinear intensity response of the digital video projector. The experimental results demonstrate the validity of the proposed method.
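A minimal sketch of the division-plus-arcsine demodulation on a stand-in frame; the rescaling of the ratio to [-1, 1] is an assumption for illustration, and the resulting wrapped phase still needs unwrapping plus the blue-channel correction described above.

```python
import numpy as np

def demodulate(color_frame):
    """Division + arcsine demodulation sketch: the red channel carries the
    sinusoidal fringe, the blue channel a uniform pattern; dividing cancels
    the object's reflectivity before the arcsine is applied."""
    red = color_frame[..., 0].astype(np.float64)
    blue = np.maximum(color_frame[..., 2].astype(np.float64), 1e-6)
    ratio = red / blue                    # reflectivity-normalised fringe
    # Rescale to [-1, 1], assuming the fringe spans the full ratio range.
    norm = 2 * (ratio - ratio.min()) / (ratio.max() - ratio.min() + 1e-12) - 1
    return np.arcsin(np.clip(norm, -1.0, 1.0))   # wrapped phase, needs unwrapping

frame = np.random.rand(480, 640, 3)       # stand-in for the CCD frame
phase = demodulate(frame)
```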
NASA Astrophysics Data System (ADS)
Zasso, A.; Argentini, T.; Bayati, I.; Belloli, M.; Rocchi, D.
2017-12-01
The super-long fjord crossings in the Norwegian E39 project pose new challenges to long-span bridge design and construction technology. Proposed solutions must consider bridge decks with super-long spans or floating foundations for at least one of the towers, due to the great fjord depth. At the same time, the exposed fjord environment, possibly facing the open ocean, calls for higher aerodynamic stability performance. In relation to this scenario, the present paper addresses two topics: 1) the aerodynamic advantages of multi-box deck sections in terms of aeroelastic stability, and 2) an experimental wind tunnel setup able to simulate the aeroelastic bridge response, including the wave forcing on the floating tower.
Adding polarimetric imaging to depth map using improved light field camera 2.0 structure
NASA Astrophysics Data System (ADS)
Zhang, Xuanzhe; Yang, Yi; Du, Shaojun; Cao, Yu
2017-06-01
Polarization imaging plays an important role in various fields, especially skylight navigation and target identification, whose imaging systems are required to offer high resolution, broad band, and a single-lens structure. This paper describes such an imaging system based on a light field camera 2.0 structure, which can calculate the polarization state and the depth from a reference plane for every object point within a single shot. The structure, comprising a modified main lens, a multi-quadrant Polaroid, a honeycomb-like micro-lens array, and a high-resolution CCD, is equivalent to an "eye array" with three or more polarization-imaging "glasses" in front of each "eye". Depth is therefore calculated by matching the relative offset of corresponding patches on neighboring "eyes", while the polarization state follows from their relative intensity differences, and the two resolutions are approximately equal to each other. An application to navigation under clear sky shows that this method has high accuracy and strong robustness.
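The polarization computation can be illustrated with the standard four-orientation Stokes estimate; the 0/45/90/135 degree quadrant layout below is an assumption, as the exact analyser angles of the system are not listed above.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from four analyser orientations, one per
    Polaroid quadrant behind the main lens (a standard 4-quadrant layout
    is assumed here)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)    # total intensity
    s1 = i0 - i90                         # 0/90 degree difference
    s2 = i45 - i135                       # 45/135 degree difference
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)  # degree of linear pol.
    aop = 0.5 * np.arctan2(s2, s1)                         # angle of polarisation
    return s0, s1, s2, dolp, aop

# Each micro-lens patch yields the four intensities for one object point.
print(linear_stokes(0.8, 0.5, 0.2, 0.5))
```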
Generation of mechanical interference fringes by multi-photon counting
NASA Astrophysics Data System (ADS)
Ringbauer, M.; Weinhold, T. J.; Howard, L. A.; White, A. G.; Vanner, M. R.
2018-05-01
Exploring the quantum behaviour of macroscopic objects provides an intriguing avenue to study the foundations of physics and to develop a suite of quantum-enhanced technologies. One prominent path of study is provided by quantum optomechanics which utilizes the tools of quantum optics to control the motion of macroscopic mechanical resonators. Despite excellent recent progress, the preparation of mechanical quantum superposition states remains outstanding due to weak coupling and thermal decoherence. Here we present a novel optomechanical scheme that significantly relaxes these requirements allowing the preparation of quantum superposition states of motion of a mechanical resonator by exploiting the nonlinearity of multi-photon quantum measurements. Our method is capable of generating non-classical mechanical states without the need for strong single-photon coupling, is resilient against optical loss, and offers more favourable scaling against initial mechanical thermal occupation than existing schemes. Moreover, our approach allows the generation of larger superposition states by projecting the optical field onto NOON states. We experimentally demonstrate this multi-photon-counting technique on a mechanical thermal state in the classical limit and observe interference fringes in the mechanical position distribution that show phase super-resolution. This opens a feasible route to explore and exploit quantum phenomena at a macroscopic scale.
A simple method to achieve full-field and real-scale reconstruction using a movable stereo rig
NASA Astrophysics Data System (ADS)
Gu, Feifei; Zhao, Hong; Song, Zhan; Tang, Suming
2018-06-01
This paper introduces a simple method to achieve full-field and real-scale reconstruction using a movable binocular vision system (MBVS). The MBVS is composed of two cameras: one, called the tracking camera, is used for tracking the positions of the MBVS; the other, called the working camera, is used for the 3D reconstruction task. The MBVS has several advantages over a single moving camera or multi-camera networks. Firstly, the MBVS can recover real-scale depth information from the captured image sequences without using auxiliary objects whose geometry or motion must be precisely known. Secondly, the mobility of the system guarantees appropriate baselines, supplying more robust point correspondences. Additionally, using one camera avoids a drawback of multi-camera networks, where the variability of the cameras' parameters and performance can significantly affect the accuracy and robustness of the feature extraction and stereo matching methods. The proposed framework consists of local reconstruction and initial pose estimation of the MBVS based on transferable features, followed by overall optimization and accurate integration of the multi-view 3D reconstruction data. The whole process requires no information other than the input images. The framework has been verified with real data, and very good results have been obtained.
Scalable software architecture for on-line multi-camera video processing
NASA Astrophysics Data System (ADS)
Camplani, Massimo; Salgado, Luis
2011-03-01
In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability and flexibility. The software system is modular, and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages an acquisition phase and a processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. As a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions, such as the number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and works easily with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
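A toy sketch of the PU/Central Unit split using Python multiprocessing; the interfaces and names are hypothetical, and a stand-in string replaces the 2D object detection modules.

```python
import multiprocessing as mp

def processing_unit(cam_id, frame_queue, result_queue):
    """One PU: handles the acquisition and processing phases for a single
    camera, mirroring the PU/Central Unit split described above."""
    while True:
        frame = frame_queue.get()
        if frame is None:                  # shutdown signal from the Central Unit
            break
        detection = f"cam{cam_id}: processed frame {frame}"  # stand-in analytics
        result_queue.put(detection)

if __name__ == "__main__":
    n_cameras, n_frames = 4, 8
    results = mp.Queue()
    queues = [mp.Queue() for _ in range(n_cameras)]
    # The Central Unit supervises one PU process per camera.
    workers = [mp.Process(target=processing_unit, args=(i, queues[i], results))
               for i in range(n_cameras)]
    for w in workers:
        w.start()
    for t in range(n_frames):              # fan simulated frames out to the PUs
        queues[t % n_cameras].put(t)
    for q in queues:
        q.put(None)
    for _ in range(n_frames):              # Central Unit gathers the detections
        print(results.get())
    for w in workers:
        w.join()
```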
New learning based super-resolution: use of DWT and IGMRF prior.
Gajjar, Prakash P; Joshi, Manjunath V
2010-05-01
In this paper, we propose a new learning-based approach for super-resolving an image captured at low spatial resolution. Given the low spatial resolution test image and a database consisting of low and high spatial resolution images, we obtain super-resolution for the test image. We first obtain an initial high-resolution (HR) estimate by learning the high-frequency details from the available database. A new discrete wavelet transform (DWT) based approach is proposed for learning that uses a set of low-resolution (LR) images and their corresponding HR versions. Since super-resolution is an ill-posed problem, we obtain the final solution using a regularization framework. The LR image is modeled as the aliased and noisy version of the corresponding HR image, and the aliasing matrix entries are estimated using the test image and the initial HR estimate. The prior model for the super-resolved image is chosen as an inhomogeneous Gaussian Markov random field (IGMRF), and the model parameters are estimated using the same initial HR estimate. A maximum a posteriori (MAP) estimation is used to arrive at the cost function, which is minimized using a simple gradient descent approach. We demonstrate the effectiveness of the proposed approach by conducting experiments on grayscale as well as color images. The method is compared with the standard interpolation technique and with existing learning-based approaches. The proposed approach can be used in applications such as wildlife sensor networks and remote surveillance, where memory, transmission bandwidth, and camera cost are the main constraints.
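A generic sketch of the MAP gradient-descent loop, with a plain smoothness prior standing in for the learned IGMRF (whose spatially varying parameters come from the initial HR estimate) and a simple 2x block-average decimation standing in for the estimated aliasing matrix; all parameters are illustrative.

```python
import numpy as np

def map_sr(y, decimate, adjoint, x0, lam=0.1, step=0.5, n_iter=100):
    """MAP super-resolution by gradient descent on ||y - Ax||^2 plus a
    quadratic smoothness prior (a stand-in for the IGMRF)."""
    x = x0.copy()
    for _ in range(n_iter):
        grad_data = adjoint(decimate(x) - y)
        gx = np.diff(x, axis=1, append=x[:, -1:])   # horizontal differences
        gy = np.diff(x, axis=0, append=x[-1:, :])   # vertical differences
        # Negative divergence of the gradient field (discrete Laplacian prior).
        grad_prior = -(np.diff(gx, axis=1, prepend=gx[:, :1])
                       + np.diff(gy, axis=0, prepend=gy[:1, :]))
        x -= step * (grad_data + lam * grad_prior)
    return x

# 2x decimation by block averaging and its adjoint (zero-order upsampling / 4).
decimate = lambda x: x.reshape(x.shape[0]//2, 2, x.shape[1]//2, 2).mean(axis=(1, 3))
adjoint = lambda r: np.kron(r, np.ones((2, 2))) / 4.0

y = np.random.rand(32, 32)                       # stand-in LR observation
x_hat = map_sr(y, decimate, adjoint, x0=np.kron(y, np.ones((2, 2))))
```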
A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.
Qian, Shuo; Sheng, Yang
2011-11-01
Photogrammetry has become an effective method for the determination of electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of a photogrammetry system for EEG localization. Methods in previous studies are all based on the use of either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study presents a novel photogrammetry system that can acquire multi-angle head images simultaneously from a single camera position. By aligning two planar mirrors at an angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. It is found that the elapsed time of the whole localization procedure is about 3 min, and the camera calibration computation takes about 1 min after the measurement of the calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG positioning.
MPGD for breast cancer prevention: a high resolution and low dose radiation medical imaging
NASA Astrophysics Data System (ADS)
Gutierrez, R. M.; Cerquera, E. A.; Mañana, G.
2012-07-01
Early detection of small calcifications in mammograms is considered the best preventive tool against breast cancer. However, existing digital mammography with relatively low radiation skin exposure has limited accessibility and insufficient spatial resolution for small calcification detection. Micro-Pattern Gaseous Detectors (MPGD) and associated technologies increasingly provide new information useful for imaging microscopic structures, and make cutting-edge technology more accessible for medical imaging and many other applications. In this work we develop an application for the new information provided by an MPGD camera, in the form of highly controlled images with high dynamic resolution. We present a new Super Detail Image (SDI) method that efficiently exploits this new information from the MPGD camera to obtain very high spatial resolution images. The method presented in this work shows that the MPGD camera with SDI can produce mammograms with the spatial resolution necessary to detect microcalcifications. It would substantially increase the efficiency and accessibility of screening mammography and thus greatly improve breast cancer prevention.
Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System
NASA Astrophysics Data System (ADS)
Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki
In this paper, we present an automatic vision-based traffic sign recognition system, which can detect and classify traffic signs at long distance under different lighting conditions. To this end, traffic sign recognition is implemented in an originally proposed dual-focal active camera system, in which a telephoto camera assists a wide-angle camera. The telephoto camera can capture a high-resolution image of an object of interest in the field of view of the wide-angle camera; this image provides enough information for recognition when the traffic sign appears at too low a resolution in the wide-angle camera. In the proposed system, traffic sign detection and classification are processed separately on the images from the wide-angle and telephoto cameras. In order to detect traffic signs against complex backgrounds under different lighting conditions, we propose a color transformation that is invariant to lighting changes; it highlights the pattern of traffic signs by reducing the complexity of the background. Based on this color transformation, a multi-resolution cascade detector is trained and used to locate traffic signs at low resolution in the wide-angle image. After detection, the system actively captures a high-resolution image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on information from the wide-angle camera. In classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high-resolution telephoto image. Finally, a set of traffic sign recognition experiments based on the proposed system is presented. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution under different lighting conditions.
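The system's own color transformation is not reproduced above; as a stand-in, a simple chromaticity normalization illustrates the kind of lighting invariance being exploited.

```python
import numpy as np

def light_invariant(rgb):
    """A simple illumination-normalising transform (a stand-in only; the
    system's actual transformation is not specified here). Normalising each
    pixel by its overall brightness keeps chromaticity stable as lighting
    changes, which highlights the saturated reds of traffic signs."""
    rgb = rgb.astype(np.float64)
    s = rgb.sum(axis=-1, keepdims=True) + 1e-12
    chroma = rgb / s                         # invariant to intensity scaling
    red_score = chroma[..., 0] - np.maximum(chroma[..., 1], chroma[..., 2])
    return np.clip(red_score, 0, None)       # high where sign-like red dominates

frame = np.random.rand(480, 640, 3)          # stand-in wide-angle frame
mask = light_invariant(frame)                # fed to the cascaded detector
```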
2D-3D registration using gradient-based MI for image guided surgery systems
NASA Astrophysics Data System (ADS)
Yim, Yeny; Chen, Xuanyi; Wakid, Mike; Bielamowicz, Steve; Hahn, James
2011-03-01
Registration of preoperative CT data to intra-operative video images is necessary not only to compare the postoperative outcome of the vocal fold with the preplanned shape but also to provide image guidance for fusion of all imaging modalities. We propose a 2D-3D registration method using gradient-based mutual information. The 3D CT scan is aligned to the 2D endoscopic images by finding the corresponding viewpoint between the real camera for the endoscopic images and the virtual camera for the CT scans. Even though mutual information has been used successfully to register different imaging modalities, it is difficult to robustly register the CT-rendered image to the endoscopic image due to the varying light patterns and shape of the vocal fold. The proposed method calculates the mutual information in the gradient images as well as the original images, assigning more weight to high-gradient regions. This emphasizes the vocal fold and allows robust matching regardless of the surface illumination. To find the viewpoint with maximum mutual information, a downhill simplex method is applied in a conditional multi-resolution scheme, which leads to a result less sensitive to local maxima. To validate the registration accuracy, we evaluated the sensitivity to the initial viewpoint of the preoperative CT. Experimental results showed that gradient-based mutual information provided robust matching not only for two identical images with different viewpoints but also for different images acquired before and after surgery. The results also showed that the conditional multi-resolution scheme led to a more accurate registration than a single-resolution approach.
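A minimal sketch of the gradient-weighting idea: mutual information computed from a joint histogram whose counts are weighted by local gradient magnitude, so high-gradient regions dominate the score (the exact weighting used by the method may differ).

```python
import numpy as np

def gradient_weighted_mi(a, b, bins=32):
    """Mutual information between two images, with joint-histogram counts
    weighted by the product of local gradient magnitudes."""
    ga = np.hypot(*np.gradient(a.astype(np.float64)))
    gb = np.hypot(*np.gradient(b.astype(np.float64)))
    w = (ga * gb).ravel()                                # joint gradient weight
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins, weights=w)
    p = joint / np.maximum(joint.sum(), 1e-12)
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

a = np.random.rand(128, 128)
print(gradient_weighted_mi(a, a + 0.05 * np.random.rand(128, 128)))
```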
HIGH-ENERGY X-RAY PINHOLE CAMERA FOR HIGH-RESOLUTION ELECTRON BEAM SIZE MEASUREMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, B.; Morgan, J.; Lee, S.H.
The Advanced Photon Source (APS) is developing a multi-bend achromat (MBA) lattice based storage ring as the next major upgrade, featuring a 20-fold reduction in emittance. Combined with the reduction of the beta functions, the electron beam sizes at bend magnet sources may be reduced to 5–10 µm for 10% vertical coupling. The x-ray pinhole camera currently used for beam size monitoring will not be adequate for the new task. By increasing the operating photon energy to 120–200 keV, the pinhole camera's resolution is expected to reach below 4 µm. The peak height of the pinhole image will be used to monitor relative changes of the beam sizes and enable feedback control of the emittance. We present the simulation and the design of a beam size monitor for the APS storage ring.
NASA Astrophysics Data System (ADS)
Aun, Carlos E.; de Campos Ferraz, Jussara; Silva Kfouri, Luciana
1998-04-01
Previous research has discussed the importance of sealing the internal surface of the root canal after preparing it for posts or dowels, to avoid tubule contamination from the oral environment. The purpose of this study was to investigate the effects of neodymium-yttrium-aluminum-garnet (Nd:YAG) laser irradiation, associated or not with other materials, on the inner root walls after post space preparation. Forty single-rooted, endodontically treated teeth had their fillings partially removed for prosthetic restoration and were divided into 8 groups, which received a coat of the following materials: group A, Copalite varnish; group B, Copalite varnish and laser; group C, Scotchbond Multi-Purpose; group D, Scotchbond Multi-Purpose and laser; group E, methyl cyanoacrylate; group F, methyl cyanoacrylate and laser; group G, laser only; group H, control. The roots were placed in methylene blue dye and cut transversely, then submitted to analysis in the profile projector. So far we have observed that the Nd:YAG laser was able to enhance the sealing properties of Scotchbond Multi-Purpose.
Mitra, Ayan; Politte, David G; Whiting, Bruce R; Williamson, Jeffrey F; O'Sullivan, Joseph A
2017-01-01
Model-based image reconstruction (MBIR) techniques have the potential to generate high-quality images from noisy measurements and a small number of projections, which can reduce the x-ray dose to patients. These MBIR techniques rely on projection and backprojection to refine an image estimate. One of the widely used projectors for modern MBIR-based techniques is the branchless distance-driven (DD) projector and backprojector. While this method produces superior-quality images, the computational cost of the iterative updates keeps it from being ubiquitous in clinical applications. In this paper, we provide several new parallelization ideas for concurrent execution of the DD projectors in multi-GPU systems using CUDA programming tools. We introduce novel schemes for dividing the projection data and image voxels over multiple GPUs to avoid runtime overhead and inter-device synchronization issues. We also reduce the complexity of the overlap calculation by eliminating the common projection plane and directly projecting the detector boundaries onto the image voxel boundaries. To reduce the time required for calculating the overlap between detector edges and image voxel boundaries, we propose a pre-accumulation technique that accumulates image intensities in perpendicular 2D image slabs (from the 3D image) before projection and after backprojection, so that our DD kernels run faster in parallel GPU threads. For the implementation of our iterative MBIR technique we use a parallel multi-GPU version of the alternating minimization (AM) algorithm with a penalized-likelihood update. Timing results with Siemens Sensation 16 patient scan data show an average speedup of 24 times using a single TITAN X GPU and 74 times using 3 TITAN X GPUs in parallel for combined projection and backprojection.
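A hypothetical sketch of the slab-division and pre-accumulation idea, with CPU threads standing in for GPUs and a plain axis sum standing in for the distance-driven overlap kernel; sizes and the worker count are illustrative.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# The 3D volume is split into chunks of 2D slabs perpendicular to the dominant
# projection axis; each chunk is farmed out to one worker (a GPU in the paper,
# a thread here for illustration).
volume = np.random.rand(64, 256, 256)          # (slices, rows, cols)
slabs = np.array_split(volume, 4, axis=0)      # one chunk per device/worker

def project_slab(slab):
    """Pre-accumulate intensities along the perpendicular axis, then 'project'
    the slab (a plain axis sum stands in for the DD overlap kernel)."""
    accumulated = slab.cumsum(axis=0)          # pre-accumulation step
    return accumulated[-1]                     # total contribution of this chunk

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(project_slab, slabs))
projection = np.sum(partials, axis=0)          # combine per-device partial sums
```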
A single pixel camera video ophthalmoscope
NASA Astrophysics Data System (ADS)
Lochocki, B.; Gambin, A.; Manzanera, S.; Irles, E.; Tajahuerce, E.; Lancis, J.; Artal, P.
2017-02-01
There are several ophthalmic devices to image the retina, from fundus cameras capable of imaging the whole fundus to scanning ophthalmoscopes with photoreceptor resolution. Unfortunately, these devices are prone to a variety of ocular conditions, like defocus and media opacities, which usually degrade the quality of the image. Here, we demonstrate a novel approach to image the retina in real time using a single-pixel camera, which has the potential to circumvent those optical restrictions. The imaging procedure is as follows: a set of spatially coded patterns is projected rapidly onto the retina using a digital micro-mirror device. At the same time, the intensity of the inner product is measured for each pattern with a photomultiplier module. Subsequently, an image of the retina is reconstructed computationally. The obtained image resolution is up to 128 x 128 px, with a real-time video frame rate of up to 11 fps. Experimental results obtained in an artificial eye confirm the tolerance against defocus compared to a conventional multi-pixel array based system. Furthermore, the use of multiplexed illumination offers an SNR improvement, leading to lower illumination of the eye and hence an increase in patient comfort. In addition, the proposed system could enable imaging in wavelength ranges where cameras are not available.
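A minimal sketch of single-pixel reconstruction with orthogonal patterns; Hadamard patterns are a common choice for such systems, though the pattern set of this particular device is not specified above, and sizes are illustrative.

```python
import numpy as np
from scipy.linalg import hadamard

n = 32                                   # reconstruct an n x n retinal image
H = hadamard(n * n)                      # one +/-1 pattern per row (DMD patterns)
scene = np.random.rand(n, n).ravel()     # stand-in for the retina

# Acquisition: the photomultiplier records one inner product per pattern.
measurements = H @ scene

# Reconstruction: Hadamard matrices are orthogonal, so H.T @ H = N * I.
recovered = (H.T @ measurements) / (n * n)
image = recovered.reshape(n, n)
```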
Multi-frame image processing with panning cameras and moving subjects
NASA Astrophysics Data System (ADS)
Paolini, Aaron; Humphrey, John; Curt, Petersen; Kelmelis, Eric
2014-06-01
Imaging scenarios commonly involve erratic, unpredictable camera behavior or subjects that are prone to movement, complicating multi-frame image processing techniques. To address these issues, we developed three techniques that can be applied to multi-frame image processing algorithms in order to mitigate the adverse effects observed when cameras are panning or subjects within the scene are moving. We provide a detailed overview of the techniques and discuss the applicability of each to various movement types. In addition, we evaluated algorithm efficacy using field-test video processed with our commercially available surveillance product. Our results show that algorithm efficacy is significantly improved in common scenarios, expanding our software's operational scope. Our methods introduce little computational burden, enabling their use in real-time and low-power solutions, and are appropriate for long observation periods. Our test cases focus on imaging through turbulence, a common use case for multi-frame techniques. We present results of a field study designed to test the efficacy of these techniques under expanded use cases.
The Large Ultraviolet/Optical/Infrared Surveyor (LUVOIR)
NASA Astrophysics Data System (ADS)
Peterson, Bradley M.; Fischer, Debra; LUVOIR Science and Technology Definition Team
2017-01-01
LUVOIR is one of four potential large mission concepts for which the NASA Astrophysics Division has commissioned studies by Science and Technology Definition Teams (STDTs) drawn from the astronomical community. LUVOIR will have an 8- to 16-m segmented primary mirror and operate at the Sun-Earth L2 point. It will be designed to support a broad range of astrophysics and exoplanet studies. The notional initial complement of instruments will include 1) a high-performance optical/NIR coronagraph with imaging and spectroscopic capability, 2) a UV imager and spectrograph with high spectral resolution and multi-object capability, 3) a high-definition wide-field optical/NIR camera, and 4) a multi-resolution optical/NIR spectrograph. LUVOIR will be designed for extreme stability to support unprecedented spatial resolution and coronagraphy. It is intended to be a long-lifetime facility that is both serviceable and upgradable. This is the first report by the LUVOIR STDT to the community on the top-level architectures being studied, including the preliminary capabilities of a mission with those parameters. The STDT seeks feedback from the astronomical community on key science investigations that can be undertaken with the notional instrument suite, and aims to identify desirable capabilities that will enable additional key science.
Online Mapping and Perception Algorithms for Multi-robot Teams Operating in Urban Environments
2015-01-01
each method on a 2.53 GHz Intel i5 laptop. All our algorithms are hand-optimized, implemented in Java and single threaded. To determine which algorithm...approach would be to label all the pixels in the image with an x, y, z point. However, the angular resolution of the camera is finer than that of the...edge criterion. That is, each edge is either present or absent. In [42], edge existence is further screened by a fixed threshold for angular
RESOURCESAT-2: a mission for Earth resources management
NASA Astrophysics Data System (ADS)
Venkata Rao, M.; Gupta, J. P.; Rattan, Ram; Thyagarajan, K.
2006-12-01
The Indian Space Research Organisation (ISRO) established an operational remote sensing satellite system by launching its first satellite, IRS-1A, in 1988, followed by a series of IRS spacecraft. The IRS-1C/1D satellites, with their unique combination of payloads, have taken a leading position in the global remote sensing scenario. Recognising the growing user demand for a "multi"-level approach in terms of spatial, spectral, temporal and radiometric resolutions, ISRO identified Resourcesat as a continuation and improvement of its remote sensing satellite series. Resourcesat-1 (IRS-P6) was launched in October 2003 on a PSLV launch vehicle and is in operational service. Resourcesat-2 is its follow-on mission, scheduled for launch in 2008. Each Resourcesat satellite carries three electro-optical cameras as its payload: LISS-3, LISS-4 and AWiFS. All three are multi-spectral push-broom scanners with linear CCD arrays as detectors. LISS-3 and AWiFS operate in four identical spectral bands in the VIS-NIR-SWIR range, while LISS-4 is a high-resolution camera with three spectral bands in the VIS-NIR range. In order to meet the stringent requirements of band-to-band registration and platform stability, several improvements have been incorporated in the mainframe bus configuration, such as wide-field star trackers, precision gyroscopes and an on-board GPS receiver. Resourcesat data find application in several areas, such as agricultural crop discrimination and monitoring, crop acreage/yield estimation, precision farming, water resources, forest mapping, rural infrastructure development and disaster management, to name a few. A brief description of the payload cameras, spacecraft bus elements, operational modes and a few applications is presented.
Evaluating planetary digital terrain models-The HRSC DTM test
Heipke, C.; Oberst, J.; Albertz, J.; Attwenger, M.; Dorninger, P.; Dorrer, E.; Ewe, M.; Gehrke, S.; Gwinner, K.; Hirschmuller, H.; Kim, J.R.; Kirk, R.L.; Mayer, H.; Muller, Jan-Peter; Rengarajan, R.; Rentsch, M.; Schmidt, R.; Scholten, F.; Shan, J.; Spiegel, M.; Wahlisch, M.; Neukum, G.
2007-01-01
The High Resolution Stereo Camera (HRSC) has been orbiting the planet Mars since January 2004 onboard the European Space Agency (ESA) Mars Express mission and delivers imagery which is being used for topographic mapping of the planet. The HRSC team has conducted a systematic inter-comparison of different alternatives for the production of high-resolution digital terrain models (DTMs) from the multi-look HRSC push-broom imagery. Based on carefully chosen test sites, the test participants produced DTMs which were subsequently analysed in a quantitative and a qualitative manner. This paper reports on the results obtained in this test. © 2007 Elsevier Ltd. All rights reserved.
Calibration of the VENμS super-spectral camera
NASA Astrophysics Data System (ADS)
Topaz, Jeremy; Sprecher, Tuvia; Tinto, Francesc; Echeto, Pierre; Hagolle, Olivier
2017-11-01
A high-resolution super-spectral camera is being developed by Elbit Systems in Israel for the joint CNES-Israel Space Agency satellite VENμS (Vegetation and Environment monitoring on a New Micro-Satellite). This camera will have 12 narrow spectral bands in the visible/NIR region and will give images with 5.3 m resolution from an altitude of 720 km, on an orbit which allows a two-day revisit interval for a number of selected sites distributed over some two-thirds of the earth's surface. The swath width will be 27 km at this altitude. To ensure the high radiometric and geometric accuracy needed to fully exploit such multiple data sampling, careful attention is given in the design to maximizing characteristics such as signal-to-noise ratio (SNR), spectral band accuracy, stray light rejection, and inter-band pixel-to-pixel registration. For the same reasons, accurate calibration of all the principal characteristics is essential, and this presents some major challenges. The methods planned to achieve the required level of calibration are presented, following a brief description of the system design. A fuller description of the system design is given in [2], [3] and [4].
3D Digital Surveying and Modelling of Cave Geometry: Application to Paleolithic Rock Art.
González-Aguilera, Diego; Muñoz-Nieto, Angel; Gómez-Lahoz, Javier; Herrero-Pascual, Jesus; Gutierrez-Alonso, Gabriel
2009-01-01
3D digital surveying and modelling of cave geometry represents a relevant approach for the research, management and preservation of our cultural and geological legacy. In this paper, a multi-sensor approach based on a terrestrial laser scanner, a high-resolution digital camera and a total station is presented. Two emblematic caves of Paleolithic human occupation situated in northern Spain, "Las Caldas" and "Peña de Candamo", have been chosen to put this approach into practice. As a result, an integral and multi-scalable 3D model is generated which may allow other scientists, pre-historians and geologists to work on two different levels, integrating different Paleolithic art datasets: (1) a basic level based on the accurate and metric support provided by the laser scanner; and (2) an advanced level using range- and image-based modelling.
Scheins, J J; Vahedipour, K; Pietrzyk, U; Shah, N J
2015-12-21
For high-resolution, iterative 3D PET image reconstruction the efficient implementation of forward-backward projectors is essential to minimise the calculation time. Mathematically, the projectors are summarised as a system response matrix (SRM) whose elements define the contribution of image voxels to lines-of-response (LORs). In fact, the SRM easily comprises billions of non-zero matrix elements to evaluate the tremendous number of LORs as provided by state-of-the-art PET scanners. Hence, the performance of iterative algorithms, e.g. maximum-likelihood-expectation-maximisation (MLEM), suffers from severe computational problems due to the intensive memory access and huge number of floating point operations. Here, symmetries occupy a key role in terms of efficient implementation. They reduce the amount of independent SRM elements, thus allowing for a significant matrix compression according to the number of exploitable symmetries. With our previous work, the PET REconstruction Software TOolkit (PRESTO), very high compression factors (>300) are demonstrated by using specific non-Cartesian voxel patterns involving discrete polar symmetries. In this way, a pre-calculated memory-resident SRM using complex volume-of-intersection calculations can be achieved. However, our original ray-driven implementation suffers from addressing voxels, projection data and SRM elements in disfavoured memory access patterns. As a consequence, a rather limited numerical throughput is observed due to the massive waste of memory bandwidth and inefficient usage of cache respectively. In this work, an advantageous symmetry-driven evaluation of the forward-backward projectors is proposed to overcome these inefficiencies. The polar symmetries applied in PRESTO suggest a novel organisation of image data and LOR projection data in memory to enable an efficient single instruction multiple data vectorisation, i.e. simultaneous use of any SRM element for symmetric LORs. In addition, the calculation time is further reduced by using simultaneous multi-threading (SMT). A global speedup factor of 11 without SMT and above 100 with SMT has been achieved for the improved CPU-based implementation while obtaining equivalent numerical results.
NASA Astrophysics Data System (ADS)
Scheins, J. J.; Vahedipour, K.; Pietrzyk, U.; Shah, N. J.
2015-12-01
For high-resolution, iterative 3D PET image reconstruction, the efficient implementation of forward-backward projectors is essential to minimise the calculation time. Mathematically, the projectors are summarised as a system response matrix (SRM) whose elements define the contribution of image voxels to lines-of-response (LORs). In fact, the SRM easily comprises billions of non-zero matrix elements needed to evaluate the tremendous number of LORs provided by state-of-the-art PET scanners. Hence, the performance of iterative algorithms, e.g. maximum-likelihood expectation-maximisation (MLEM), suffers from severe computational problems due to the intensive memory access and the huge number of floating point operations. Here, symmetries play a key role in efficient implementation. They reduce the number of independent SRM elements, thus allowing for a significant matrix compression according to the number of exploitable symmetries. In our previous work, the PET REconstruction Software TOolkit (PRESTO), very high compression factors (>300) were demonstrated by using specific non-Cartesian voxel patterns involving discrete polar symmetries. In this way, a pre-calculated memory-resident SRM using complex volume-of-intersection calculations can be achieved. However, our original ray-driven implementation suffers from addressing voxels, projection data and SRM elements in disfavoured memory access patterns. As a consequence, a rather limited numerical throughput is observed due to the massive waste of memory bandwidth and inefficient cache usage. In this work, an advantageous symmetry-driven evaluation of the forward-backward projectors is proposed to overcome these inefficiencies. The polar symmetries applied in PRESTO suggest a novel organisation of image data and LOR projection data in memory to enable efficient single instruction multiple data (SIMD) vectorisation, i.e. simultaneous use of any SRM element for symmetric LORs. In addition, the calculation time is further reduced by using simultaneous multi-threading (SMT). A global speedup factor of 11 without SMT and above 100 with SMT has been achieved for the improved CPU-based implementation while obtaining equivalent numerical results.
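As a point of reference for the algorithmic setting, the sketch below shows a bare-bones MLEM update in NumPy with the SRM held as a small dense matrix A. This is purely illustrative: the motivation of PRESTO is precisely that realistic SRMs are far too large for dense storage and require the symmetry-based compression described above. Array names are hypothetical.

```python
# Minimal MLEM iteration sketch, with the SRM as a dense matrix A
# (LORs x voxels) purely for clarity; real SRMs need compressed storage.
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood EM: x <- x / (A^T 1) * A^T (y / (A x))."""
    x = np.ones(A.shape[1])              # uniform initial image
    sens = A.T @ np.ones(A.shape[0])     # sensitivity: back projection of ones
    for _ in range(n_iter):
        proj = A @ x                     # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy check: recover a 1D "image" from an overdetermined random system.
rng = np.random.default_rng(1)
A = rng.random((200, 40))
x_true = rng.random(40)
x_hat = mlem(A, A @ x_true, n_iter=500)  # converges towards x_true
```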
Krempien, Robert; Hoppe, Harald; Kahrs, Lüder; Daeuber, Sascha; Schorr, Oliver; Eggers, Georg; Bischof, Marc; Munter, Marc W; Debus, Juergen; Harms, Wolfgang
2008-03-01
The aim of this study is to implement augmented reality in real-time image-guided interstitial brachytherapy to allow an intuitive real-time intraoperative orientation. The developed system consists of a common video projector, two high-resolution charge coupled device cameras, and an off-the-shelf notebook. The projector was used as a scanning device by projecting coded-light patterns to register the patient and superimpose the operating field with planning data and additional information in arbitrary colors. Subsequent movements of the nonfixed patient were detected by means of stereoscopically tracking passive markers attached to the patient. In a first clinical study, we evaluated the whole process chain from image acquisition to data projection and determined overall accuracy with 10 patients undergoing implantation. The described method enabled the surgeon to visualize planning data on top of any preoperatively segmented and triangulated surface (skin) with direct line of sight during the operation. Furthermore, the tracking system allowed dynamic adjustment of the data to the patient's current position and therefore eliminated the need for rigid fixation. Because of soft-part displacement, we obtained an average deviation of 1.1 mm by moving the patient, whereas changing the projector's position resulted in an average deviation of 0.9 mm. Mean deviation of all needles of an implant was 1.4 mm (range, 0.3-2.7 mm). The developed low-cost augmented-reality system proved to be accurate and feasible in interstitial brachytherapy. The system meets clinical demands and enables intuitive real-time intraoperative orientation and monitoring of needle implantation.
BRDF-dependent accuracy of array-projection-based 3D sensors.
Heist, Stefan; Kühmstedt, Peter; Tünnermann, Andreas; Notni, Gunther
2017-03-10
In order to perform high-speed three-dimensional (3D) shape measurements with structured light systems, high-speed projectors are required. One possibility is an array projector, which allows pattern projection at several tens of kilohertz by switching on and off the LEDs of various slide projectors. The different projection centers require a separate analysis, as the intensity received by the cameras depends on the projection direction and the object's bidirectional reflectance distribution function (BRDF). In this contribution, we investigate the BRDF-dependent errors of array-projection-based 3D sensors and propose an error compensation process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iverson, Adam; Carlson, Carl; Young, Jason
2013-07-08
The diagnostic needs of any dynamic loading platform present unique technical challenges that must be addressed in order to accurately measure in situ material properties in an extreme environment. The IMPULSE platform (IMPact system for Ultrafast Synchrotron Experiments) at the Advanced Photon Source (APS) is no exception and, in fact, may be more challenging, as the imaging diagnostics must be synchronized to both the experiment and the 60 ps wide x-ray bunches produced at APS. The technical challenges of time-resolved x-ray diffraction imaging and high-resolution multi-frame phase contrast imaging (PCI) are described in this paper. Example data from recent IMPULSE experiments are shown to illustrate the advances and evolution of these diagnostics with a focus on comparing the performance of two intensified CCD cameras and their suitability for multi-frame PCI. The continued development of these diagnostics is fundamentally important to IMPULSE and many other loading platforms and will benefit future facilities such as the Dynamic Compression Sector at APS and MaRIE at Los Alamos National Laboratory.
Robust and adaptive band-to-band image transform of UAS miniature multi-lens multispectral camera
NASA Astrophysics Data System (ADS)
Jhan, Jyun-Ping; Rau, Jiann-Yeou; Haala, Norbert
2018-03-01
Utilizing miniature multispectral (MS) or hyperspectral (HS) cameras by mounting them on an Unmanned Aerial System (UAS) has the benefits of convenience and flexibility to collect remote sensing imagery for precision agriculture, vegetation monitoring, and environment investigation applications. Most miniature MS cameras adopt a multi-lens structure to record discrete MS bands of visible and invisible information. The differences in lens distortion, mounting positions, and viewing angles among lenses mean that the acquired original MS images have significant band misregistration errors. We have developed a Robust and Adaptive Band-to-Band Image Transform (RABBIT) method for dealing with the band co-registration of various types of miniature multi-lens multispectral cameras (Mini-MSCs) to obtain band co-registered MS imagery for remote sensing applications. The RABBIT utilizes modified projective transformation (MPT) to transfer the multiple image geometry of a multi-lens imaging system to one sensor geometry, and combines this with a robust and adaptive correction (RAC) procedure to correct several systematic errors and to obtain sub-pixel accuracy. This study applies three state-of-the-art Mini-MSCs to evaluate the RABBIT method's performance, specifically the Tetracam Miniature Multiple Camera Array (MiniMCA), Micasense RedEdge, and Parrot Sequoia. Six MS datasets acquired at different target distances, dates, and locations are also used to demonstrate its reliability and applicability. Results prove that RABBIT is feasible for different types of Mini-MSCs, with accurate, robust, and rapid image processing.
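The abstract names the RABBIT/MPT pipeline but does not spell out an implementation. The sketch below shows a generic feature-based projective (homography) alignment of one band to a reference band with OpenCV, which illustrates the kind of band-to-band transform involved; it is not the RABBIT method itself. File names are hypothetical, and cross-band feature matching may need tuning in practice.

```python
# Sketch: projective band-to-band co-registration of two single-band images,
# as a generic illustration of homography-based alignment (not RABBIT/MPT).
import cv2
import numpy as np

ref = cv2.imread("band_ref.tif", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
mov = cv2.imread("band_nir.tif", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(4000)
k1, d1 = orb.detectAndCompute(ref, None)
k2, d2 = orb.detectAndCompute(mov, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:500]

src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)      # robust projective fit
aligned = cv2.warpPerspective(mov, H, (ref.shape[1], ref.shape[0]))
cv2.imwrite("band_nir_registered.tif", aligned)
```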
Running of scalar spectral index in multi-field inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gong, Jinn-Ouk, E-mail: jinn-ouk.gong@apctp.org
We compute the running of the scalar spectral index in general multi-field slow-roll inflation. By incorporating explicit momentum dependence at the moment of horizon crossing, we can find the running straightforwardly. At the same time, we can distinguish the contributions from the quasi de Sitter background and the super-horizon evolution of the field fluctuations.
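For reference, the standard definitions of the quantities involved (textbook conventions, not taken from the paper):

```latex
% Spectral index and its running, with P_zeta(k) the curvature power
% spectrum evaluated at horizon crossing k = aH.
n_s(k) - 1 \equiv \frac{d \ln \mathcal{P}_\zeta}{d \ln k}, \qquad
\alpha_s(k) \equiv \frac{d n_s}{d \ln k}
```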
Multi-acoustic lens design methodology for a low cost C-scan photoacoustic imaging camera
NASA Astrophysics Data System (ADS)
Chinni, Bhargava; Han, Zichao; Brown, Nicholas; Vallejo, Pedro; Jacobs, Tess; Knox, Wayne; Dogra, Vikram; Rao, Navalgund
2016-03-01
We have designed and implemented a novel acoustic-lens-based focusing technology in a prototype photoacoustic imaging camera. All photoacoustically generated waves from laser-exposed absorbers within a small volume are focused simultaneously by the lens onto an image plane. We use a multi-element ultrasound transducer array to capture the focused photoacoustic signals. The acoustic lens eliminates the need for expensive data acquisition hardware, is faster compared to electronic focusing, and enables real-time image reconstruction. Using this photoacoustic imaging camera, we have imaged more than 150 ex-vivo human prostate, kidney and thyroid specimens, several centimeters in size, at millimeter resolution for cancer detection. In this paper, we share our lens design strategy and how we evaluate the resulting quality metrics (on- and off-axis point spread function, depth of field and modulation transfer function) through simulation. An advanced toolbox in MATLAB was adapted and used for simulating a two-dimensional gridded model that incorporates realistic photoacoustic signal generation and acoustic wave propagation through the lens, with medium properties defined on each grid point. Two-dimensional point spread functions have been generated and compared with experiments to demonstrate the utility of our design strategy. Finally, we present results from work in progress on the use of a two-lens system aimed at further improving some of the quality metrics of our system.
Ultra-compact imaging system based on multi-aperture architecture
NASA Astrophysics Data System (ADS)
Meyer, Julia; Brückner, Andreas; Leitel, Robert; Dannberg, Peter; Bräuer, Andreas; Tünnermann, Andreas
2011-03-01
Cameras are now routinely integrated into information and communication technology, and there is a clear trend towards smaller and, at the same time, cheaper devices. Because single-aperture optics reach a miniaturization limit if the space-bandwidth product and a wide field of view are to be maintained, new concepts such as multi-aperture optical systems are needed. In the proposed camera system the image is formed by many different channels, each consisting of four microlenses which are arranged one after another in different microlens arrays. A partial image which fits together with the neighbouring ones is formed in every single channel, so that a real erect image is generated and a conventional image sensor can be used. The microoptical fabrication process and the assembly are well established and can be carried out on wafer level. Laser writing is used for the fabrication of the masks; UV lithography, a reflow process and UV molding are used for the fabrication of the apertures and the lenses. The developed system is very small in terms of both length and lateral dimensions, and has VGA resolution and a diagonal field of view of 65 degrees. This microoptical vision system is appropriate for integration in electronic devices such as webcams built into notebook displays.
Zheng, Jiabei; Fessler, Jeffrey A; Chan, Heang-Ping
2017-01-01
Purpose: Digital forward and back projectors play a significant role in iterative image reconstruction. The accuracy of the projector affects the quality of the reconstructed images. Digital breast tomosynthesis (DBT) often uses the ray-tracing (RT) projector, which ignores finite detector element size. This paper proposes a modified version of the separable footprint (SF) projector, called the segmented separable footprint (SG) projector, that efficiently calculates the mean value of the Radon transform over each detector element. The SG projector is specifically designed for DBT reconstruction because of the large height-to-width ratio of the voxels generally used in DBT. This study evaluates the effectiveness of the SG projector in reducing projection error and improving DBT reconstruction quality. Methods: We quantitatively compared the projection error of the RT and the SG projector at different locations and their performance in regular and subpixel DBT reconstruction. Subpixel reconstructions used finer voxels in the imaged volume than the detector pixel size. Subpixel reconstruction with the RT projector uses interpolated projection views as input to provide adequate coverage of the finer voxel grid with the traced rays. Subpixel reconstruction with the SG projector, however, uses the measured projection views without interpolation. We simulated DBT projections of a test phantom using CatSim (GE Global Research, Niskayuna, NY) under idealized imaging conditions without noise and blur, to analyze the effects of the projectors and subpixel reconstruction without other image degrading factors. The phantom contained an array of horizontal and vertical line pair patterns (1 to 9.5 line pairs/mm) and pairs of closely spaced spheres (diameters 0.053 to 0.5 mm) embedded at the mid-plane of a 5-cm-thick breast-tissue-equivalent uniform volume. The images were reconstructed with regular simultaneous algebraic reconstruction technique (SART) and subpixel SART using different projectors. The resolution and contrast of the test objects in the reconstructed images and the computation times were compared under different reconstruction conditions. Results: The SG projector reduced the projection error by 1 to 2 orders of magnitude at most locations. In the worst case, the SG projector still reduced the projection error by about 50%. In the DBT reconstructed slices parallel to the detector plane, the SG projector not only increased the contrast of the line pairs and spheres, but also produced smoother and more continuous reconstructed images, whereas the discrete and sparse nature of the RT projector caused artifacts appearing as patterned noise. For subpixel reconstruction, the SG projector significantly increased object contrast and computation speed, especially for high subpixel ratios, compared with the RT projector implemented with the accelerated Siddon's algorithm. The difference in the depth resolution among the projectors is negligible under the conditions studied. Our results also demonstrated that subpixel reconstruction can improve the spatial resolution of the reconstructed images, and can exceed the Nyquist limit of the detector under some conditions. Conclusions: The SG projector was more accurate and faster than the RT projector. The SG projector also substantially reduced computation time and improved the image quality of the tomosynthesized images with and without subpixel reconstruction. PMID:28058719
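To make the RT-versus-SG distinction concrete, here is a deliberately tiny 1D toy in NumPy contrasting point sampling at detector-element centers with averaging over each element's footprint. It uses axis-aligned parallel rays so a projection reduces to sampling a column profile; it is not the paper's DBT implementation.

```python
# Toy illustration of point-sampled vs element-averaged forward projection
# (the distinction behind RT vs SG projectors), with vertical parallel rays.
import numpy as np

img = np.zeros((64, 64))
img[20:44, 30:33] = 1.0                     # a thin vertical bar phantom

profile = img.sum(axis=0)                   # exact line integrals per column
x_img = np.arange(64) + 0.5                 # column-center coordinates

n_det, pitch = 40, 1.6                      # detector coarser than the grid
centers = (np.arange(n_det) + 0.5) * pitch

# "RT-like": one ray per detector element, sampled at the element center.
rt = np.interp(centers, x_img, profile)

# "SG-like": mean of the profile over each element's footprint,
# approximated here with 8 sub-rays per element.
sub = np.linspace(-0.5, 0.5, 8, endpoint=False) + 1.0 / 16
sg = np.interp(centers[:, None] + sub * pitch, x_img, profile).mean(axis=1)
```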
NASA Astrophysics Data System (ADS)
Anders, Niels; Keesstra, Saskia; Masselink, Rens
2014-05-01
Unmanned Aerial Systems (UAS) are becoming popular tools in the geosciences due to improving technology and processing/analysis techniques. They can potentially fill the gap between spaceborne or manned-aircraft remote sensing and terrestrial remote sensing, both in terms of spatial and temporal resolution. In this study we analyze a multi-temporal data set that was acquired with a fixed-wing UAS in an agricultural catchment (2 sq. km) in Navarra, Spain. The goal of this study is to record soil erosion activity after one year of agricultural activity. The aircraft was equipped with a Panasonic GX1 16MP pocket camera with a 20 mm lens to capture normal JPEG RGB images. The data set consisted of two sets of imagery acquired at the end of February in 2013 and 2014, after harvesting. The raw images were processed using Agisoft Photoscan Pro, which includes the structure-from-motion (SfM) and multi-view stereopsis (MVS) algorithms, producing digital surface models and orthophotos for both data sets. A discussion is presented that focuses on the suitability of multi-temporal UAS data and SfM/MVS processing for quantifying soil loss, mapping the distribution of eroded materials and analyzing re-occurrences of rill patterns after plowing.
Massive stereo-based DTM production for Mars on cloud computers
NASA Astrophysics Data System (ADS)
Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Xiong, Si-Ting; Putri, A. R. D.; Walter, S. H. G.; Veitch-Michaelis, J.; Yershov, V.
2018-05-01
Digital Terrain Model (DTM) creation is essential to improving our understanding of the formation processes of the Martian surface. Although there have been previous demonstrations of open-source or commercial planetary 3D reconstruction software, planetary scientists are still struggling with creating good quality DTMs that meet their science needs, especially when there is a requirement to produce a large number of high quality DTMs using "free" software. In this paper, we describe a new open source system to overcome many of these obstacles by demonstrating results in the context of issues found from experience with several planetary DTM pipelines. We introduce a new fully automated multi-resolution DTM processing chain for NASA Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) and High Resolution Imaging Science Experiment (HiRISE) stereo processing, called the Co-registration Ames Stereo Pipeline (ASP) Gotcha Optimised (CASP-GO), based on the open source NASA ASP. CASP-GO employs tie-point based multi-resolution image co-registration, and Gotcha sub-pixel refinement and densification. CASP-GO pipeline is used to produce planet-wide CTX and HiRISE DTMs that guarantee global geo-referencing compliance with respect to High Resolution Stereo Colour imaging (HRSC), and thence to the Mars Orbiter Laser Altimeter (MOLA); providing refined stereo matching completeness and accuracy. All software and good quality products introduced in this paper are being made open-source to the planetary science community through collaboration with NASA Ames, United States Geological Survey (USGS) and the Jet Propulsion Laboratory (JPL), Advanced Multi-Mission Operations System (AMMOS) Planetary Data System (PDS) Pipeline Service (APPS-PDS4), as well as browseable and visualisable through the iMars web based Geographic Information System (webGIS) system.
Demonstration of in-vivo Multi-Probe Tracker Based on a Si/CdTe Semiconductor Compton Camera
NASA Astrophysics Data System (ADS)
Takeda, Shin'ichiro; Odaka, Hirokazu; Ishikawa, Shin-nosuke; Watanabe, Shin; Aono, Hiroyuki; Takahashi, Tadayuki; Kanayama, Yousuke; Hiromura, Makoto; Enomoto, Shuichi
2012-02-01
By using a prototype Compton camera consisting of silicon (Si) and cadmium telluride (CdTe) semiconductor detectors, originally developed for the ASTRO-H satellite mission, an experiment involving imaging multiple radiopharmaceuticals injected into a living mouse was conducted to study its feasibility for medical imaging. The accumulation of both iodinated (131I) methylnorcholestenol and 85Sr into the mouse's organs was simultaneously imaged by the prototype. This result implies that the Compton camera is expected to become a multi-probe tracker available in nuclear medicine and small animal imaging.
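For context, the standard Compton kinematics that such a camera uses to constrain each event's source direction to a cone (textbook physics, not detail taken from the abstract):

```latex
% A scatter depositing E_1 (e.g. in Si) followed by absorption of E_2
% (e.g. in CdTe) constrains the source to a cone of half-angle theta
% about the scatter axis:
\cos\theta = 1 - m_e c^2 \left( \frac{1}{E_2} - \frac{1}{E_1 + E_2} \right)
```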
Multi-spectral imaging with infrared sensitive organic light emitting diode
NASA Astrophysics Data System (ADS)
Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky
2014-08-01
Commercially available near-infrared (IR) imagers are fabricated by integrating expensive epitaxially grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding, which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive to the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low-cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR-sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images which are then recorded by a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions.
Comparison of a Fixed-Wing and Multi-Rotor Uav for Environmental Mapping Applications: a Case Study
NASA Astrophysics Data System (ADS)
Boon, M. A.; Drijfhout, A. P.; Tesfamichael, S.
2017-08-01
The advent and evolution of Unmanned Aerial Vehicles (UAVs) and photogrammetric techniques has provided the possibility for on-demand high-resolution environmental mapping. Orthoimages and three-dimensional products such as Digital Surface Models (DSMs) are derived from the UAV imagery and are amongst the most important spatial information tools for environmental planning. The two main types of UAVs in the commercial market are fixed-wing and multi-rotor. Both have their advantages and disadvantages, including their suitability for certain applications. Fixed-wing UAVs normally have longer flight endurance, while multi-rotors provide stable image capturing and easy vertical take-off and landing. The objective of this study is therefore to assess the performance of a fixed-wing versus a multi-rotor UAV for environmental mapping applications in a specific case study. The aerial mapping of the Cors-Air model aircraft field, which includes a wetland ecosystem, was undertaken on the same day with a Skywalker fixed-wing UAV and a Raven X8 multi-rotor UAV equipped with similar sensors (digital RGB camera) under the same weather conditions. We compared the derived datasets by applying the DTMs to basic environmental mapping purposes such as slope and contour mapping, and by utilising the orthoimages for the identification of anthropogenic disturbances. The ground spatial resolution obtained was slightly higher for the multi-rotor, probably due to a slower flight speed and more images. The overall precision of the data was noticeably lower for the fixed-wing. In contrast, orthoimages derived from the two systems showed only small variations. The multi-rotor imagery provided a better representation of vegetation, although the fixed-wing data were sufficient for the identification of environmental factors such as anthropogenic disturbances. Differences were observed when utilising the respective DTMs for mapping the wetland slope and contours, including the representation of hydrological features within the wetland. Factors such as cost, maintenance and flight time are in favour of the Skywalker fixed-wing. The multi-rotor, on the other hand, is more favourable in terms of data accuracy, including for precision environmental planning purposes, although the quality of the fixed-wing data is satisfactory for most environmental mapping applications.
PN-CCD camera for XMM: performance of high time resolution/bright source operating modes
NASA Astrophysics Data System (ADS)
Kendziorra, Eckhard; Bihler, Edgar; Grubmiller, Willy; Kretschmar, Baerbel; Kuster, Markus; Pflueger, Bernhard; Staubert, Ruediger; Braeuninger, Heinrich W.; Briel, Ulrich G.; Meidinger, Norbert; Pfeffermann, Elmar; Reppin, Claus; Stoetter, Diana; Strueder, Lothar; Holl, Peter; Kemmer, Josef; Soltau, Heike; von Zanthier, Christoph
1997-10-01
The pn-CCD camera is developed as one of the focal plane instruments for the European photon imaging camera (EPIC) on board the x-ray multi-mirror (XMM) mission to be launched in 1999. The detector consists of four quadrants of three pn-CCDs each, which are integrated on one silicon wafer. Each CCD has 200 by 64 pixels (150 micrometers by 150 micrometers) with 280 micrometers depletion depth. One CCD of a quadrant is read out at a time, while the four quadrants can be processed independently of each other. In standard imaging mode the CCDs are read out sequentially every 70 ms. Observations of point sources brighter than 1 mCrab will be affected by photon pile-up. However, special operating modes can be used to observe bright sources up to 150 mCrab in timing mode with 30 microseconds time resolution, and very bright sources up to several Crab in burst mode with 7 microseconds time resolution. We have tested one quadrant of the EPIC pn-CCD camera at line energies from 0.52 keV to 17.4 keV at the long beam test facility Panter in the focus of the qualification mirror module for XMM. In order to test the time resolution of the system, a mechanical chopper was used to periodically modulate the beam intensity. Pulse periods down to 0.7 ms were generated. This paper describes the performance of the pn-CCD detector in timing and burst readout modes with special emphasis on energy and time resolution.
Development of biostereometric experiments. [stereometric camera system
NASA Technical Reports Server (NTRS)
Herron, R. E.
1978-01-01
The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chekanov, S. V.; Beydler, M.; Kotwal, A. V.
This paper describes simulations of detector response to multi-TeV physics at the Future Circular Collider (FCC-hh) or Super proton-proton Collider (SppC), which aim to collide proton beams with a centre-of-mass energy of 100 TeV. The unprecedented energy regime of these future experiments imposes new requirements on detector technologies which can be studied using the detailed GEANT4 simulations presented in this paper. The initial performance of a detector designed for physics studies at the FCC-hh or SppC experiments is described with an emphasis on measurements of single particles up to 33 TeV in transverse momentum. The reconstruction of hadronic jets has also been studied in the transverse momentum range from 50 GeV to 26 TeV. The granularity requirements for calorimetry are investigated using the two-particle spatial resolution achieved for hadron showers.
NASA Astrophysics Data System (ADS)
Messineo, Maria; Figer, Donald F.; Davies, Ben; Kudritzki, R. P.; Rich, R. Michael; MacKenty, John; Trombley, Christine
2010-01-01
We present Hubble Space Telescope/Near-Infrared Camera and Multi-Object Spectrometer photometry, and low-resolution K-band spectra of the GLIMPSE9 stellar cluster. The newly obtained color-magnitude diagram shows a cluster sequence with H - K_S ≈ 1 mag, indicating an interstellar extinction A_Ks = 1.6 ± 0.2 mag. The spectra of the three brightest stars show deep CO band heads, which indicate red supergiants with spectral type M1-M2. Two O9-B2 supergiants are also identified, which yield a spectrophotometric distance of 4.2 ± 0.4 kpc. Presuming that the population is coeval, we derive an age between 15 and 27 Myr, and a total cluster mass of 1600 ± 400 M_sun, integrated down to 1 M_sun. In the vicinity of GLIMPSE9 are several H II regions and supernova remnants, all of which (including GLIMPSE9) are probably associated with a giant molecular cloud (GMC) in the inner galaxy. GLIMPSE9 probably represents one episode of massive star formation in this GMC. We have identified several other candidate stellar clusters of the same complex.
Oblique Aerial Photography Tool for Building Inspection and Damage Assessment
NASA Astrophysics Data System (ADS)
Murtiyoso, A.; Remondino, F.; Rupnik, E.; Nex, F.; Grussenmeyer, P.
2014-11-01
Aerial photography has a long history of being employed for mapping purposes due to some of its main advantages, including large-area imaging from above and minimization of field work. In recent years, multi-camera aerial systems have become a practical sensor technology in a growing geospatial market, complementary to the traditional vertical views. Multi-camera aerial systems capture not only the conventional nadir views, but also tilted images at the same time. In this paper, a particular use of such imagery in the field of building inspection as well as disaster assessment is addressed. The main idea is to inspect a building from four cardinal directions by using monoplotting functionalities. The developed application allows the user to measure building heights and distances and to digitize man-made structures, creating 3D surfaces and building models. The realized GUI is capable of identifying a building from several oblique points of view, as well as calculating the approximate height of buildings and ground distances, and performing basic vectorization. The geometric accuracy of the results remains a function of several parameters, namely image resolution, quality of the available parameters (DEM, calibration and orientation values), user expertise and measuring capability.
Stability analysis for a multi-camera photogrammetric system.
Habib, Ayman; Detchev, Ivan; Kwak, Eunju
2014-08-18
Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time; it explains the common ways of coping with the issue and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of changes in the interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data-based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.
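For reference, the standard (unmodified) collinearity equations that the suggested system-calibration model extends (textbook form; the paper's modification is not reproduced here):

```latex
% Object point (X, Y, Z) maps to image coordinates (x, y), with perspective
% centre (X_c, Y_c, Z_c), rotation matrix R = (r_ij), principal point
% (x_0, y_0) and principal distance c:
x = x_0 - c\,\frac{r_{11}(X - X_c) + r_{12}(Y - Y_c) + r_{13}(Z - Z_c)}
                  {r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)}, \qquad
y = y_0 - c\,\frac{r_{21}(X - X_c) + r_{22}(Y - Y_c) + r_{23}(Z - Z_c)}
                  {r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)}
```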
Smeets, Julien; Roellinghoff, Frauke; Janssens, Guillaume; Perali, Irene; Celani, Andrea; Fiorini, Carlo; Freud, Nicolas; Testa, Etienne; Prieels, Damien
2016-01-01
More and more camera concepts are being investigated to try and seize the opportunity of instantaneous range verification of proton therapy treatments offered by prompt gammas emitted along the proton tracks. Focusing on one-dimensional imaging with a passive collimator, the present study experimentally compared, in combination with the first clinically compatible dedicated camera device, the performance of the two main options: a knife-edge slit (KES) and a multi-parallel slit (MPS) design. These two options were experimentally assessed in this specific context as they were previously demonstrated, through analytical and numerical studies, to allow similar performance in terms of Bragg peak retrieval precision and spatial resolution in a general context. Both collimators were prototyped according to the conclusions of Monte Carlo optimization studies under constraints of equal weight (40 mm tungsten alloy equivalent thickness) and of the specificities of the camera device under consideration (in particular 4 mm segmentation along the beam axis and no time-of-flight discrimination, both of which are less favorable to the MPS performance than to the KES one). Acquisitions of proton pencil beams of 100, 160, and 230 MeV in a PMMA target revealed that, in order to reach a given level of statistical precision on Bragg peak depth retrieval, the KES collimator requires only half the dose the present MPS collimator needs, making the KES collimator a preferred option for a compact camera device aimed at imaging only the Bragg peak position. On the other hand, the present MPS collimator proves more effective at retrieving the entrance of the beam in the target in the context of an extended camera device aimed at imaging the whole proton track within the patient.
Application of Sensor Fusion to Improve Uav Image Classification
NASA Astrophysics Data System (ADS)
Jabari, S.; Fathollahi, F.; Zhang, Y.
2017-08-01
Image classification is one of the most important tasks of remote sensing projects including the ones that are based on using UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in the UAV missions and performing image fusion can help achieving higher quality images and accordingly higher accuracy classification results.
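The abstract does not state which fusion algorithm was used; as one simple illustration of Pan/MS fusion, the sketch below applies a Brovey-style pan-sharpening with OpenCV. File names and band count are assumptions, and this is only one of many possible fusion schemes.

```python
# Sketch: Brovey-style pan-sharpening, fusing a high-resolution panchromatic
# band with coarser colour/MS bands. Illustrative only; not the paper's method.
import numpy as np
import cv2

pan = cv2.imread("pan.tif", cv2.IMREAD_GRAYSCALE).astype(np.float32)   # hypothetical inputs
ms = cv2.imread("ms.tif", cv2.IMREAD_COLOR).astype(np.float32)         # 3-band, coarser

ms_up = cv2.resize(ms, (pan.shape[1], pan.shape[0]), interpolation=cv2.INTER_CUBIC)
intensity = ms_up.mean(axis=2) + 1e-6            # avoid division by zero
fused = ms_up * (pan / intensity)[:, :, None]    # rescale each band by pan/intensity
cv2.imwrite("fused.tif", np.clip(fused, 0, 255).astype(np.uint8))
```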
Example-based super-resolution for single-image analysis from the Chang'e-1 Mission
NASA Astrophysics Data System (ADS)
Wu, Fan-Lu; Wang, Xiang-Jun
2016-11-01
Due to the low spatial resolution of images taken from the Chang'e-1 (CE-1) orbiter, the details of the lunar surface are blurred and lost. Considering the limited spatial resolution of image data obtained by the CCD camera on CE-1, an example-based super-resolution (SR) algorithm is employed to obtain high-resolution (HR) images. SR reconstruction is important because it increases the usable resolution of the image data. In this article, a novel example-based algorithm is proposed to implement SR reconstruction by single-image analysis, with a reduced computational cost compared to other example-based SR methods. The results show that this method can enhance the resolution of images via SR and recover detailed information about the lunar surface. Thus it can be used for surveying HR terrain and geological features. Moreover, the algorithm is significant for the HR processing of remotely sensed images obtained by other imaging systems.
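As an illustration of the example-based idea (self-similarity across scales), the toy sketch below builds LR-to-HR patch pairs from the input image itself and pastes back the best-matching detail layer. It is a simplified stand-in, not the paper's algorithm; the input file is hypothetical, and the non-overlapping paste will show blockiness that real methods avoid.

```python
# Toy single-image, example-based SR: learn LR->HR patch pairs from the
# image's own cross-scale self-similarity, then add the HR detail of the
# nearest LR example to a bicubic upscale.
import numpy as np
import cv2
from scipy.spatial import cKDTree

img = cv2.imread("ce1_tile.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # hypothetical
s = 2                                   # magnification factor
h, w = img.shape

# Training pairs: downscale the input, so (low, img) mimics (input, unknown HR).
low = cv2.resize(img, (w // s, h // s), interpolation=cv2.INTER_AREA)
low_up = cv2.resize(low, (w, h), interpolation=cv2.INTER_CUBIC)

p = 5                                   # patch size
keys, values = [], []
for y in range(0, h - p, 2):
    for x in range(0, w - p, 2):
        keys.append(low_up[y:y+p, x:x+p].ravel())
        values.append(img[y:y+p, x:x+p] - low_up[y:y+p, x:x+p])  # HR detail layer
tree = cKDTree(np.asarray(keys))

# Inference: bicubic upscale the input, then add the best-matching detail patch.
up = cv2.resize(img, (w * s, h * s), interpolation=cv2.INTER_CUBIC)
out = up.copy()
for y in range(0, up.shape[0] - p, p):
    for x in range(0, up.shape[1] - p, p):
        _, i = tree.query(up[y:y+p, x:x+p].ravel())
        out[y:y+p, x:x+p] = up[y:y+p, x:x+p] + values[i]
cv2.imwrite("ce1_sr.png", np.clip(out, 0, 255).astype(np.uint8))
```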
Image intensification; Proceedings of the Meeting, Los Angeles, CA, Jan. 17, 18, 1989
NASA Astrophysics Data System (ADS)
Csorba, Illes P.
Various papers on image intensification are presented. Individual topics discussed include: status of high-speed optical detector technologies, super second-generation image intensifier, gated image intensifiers and applications, resistive-anode position-sensing photomultiplier tube operational modeling, undersea imaging and target detection with gated image intensifier tubes, image intensifier modules for use with commercially available solid state cameras, specifying the components of an intensified solid state television camera, superconducting IR focal plane arrays, one-inch TV camera tube with very high resolution capacity, CCD-Digicon detector system performance parameters, high-resolution X-ray imaging device, high-output technology microchannel plate, preconditioning of microchannel plate stacks, recent advances in small-pore microchannel plate technology, performance of long-life curved channel microchannel plates, low-noise microchannel plates, and development of a quartz envelope heater.
NASA Astrophysics Data System (ADS)
Göhler, Benjamin; Lutzmann, Peter
2017-10-01
Primarily, a laser gated-viewing (GV) system provides range-gated 2D images without any range resolution within the range gate. By combining two GV images with slightly different gate positions, 3D information within a part of the range gate can be obtained. The depth resolution is higher (super-resolution) than the minimal gate shift step size in a tomographic sequence of the scene. For a state-of-the-art system with a typical frame rate of 20 Hz, the time difference between the two required GV images is 50 ms, which may be too long in a dynamic scenario with moving objects. Therefore, we have applied this approach to the reset and signal level images of a new short-wave infrared (SWIR) GV camera whose read-out integrated circuit supports correlated double sampling (CDS), originally intended for the reduction of kTC noise (reset noise). These images are extracted from only one single laser pulse with a marginal time difference in between. The SWIR GV camera consists of 640 x 512 avalanche photodiodes based on mercury cadmium telluride with a pixel pitch of 15 μm. A Q-switched, flash-lamp-pumped solid-state laser with 1.57 μm wavelength (OPO), 52 mJ pulse energy after beam shaping, 7 ns pulse length and 20 Hz pulse repetition frequency is used for flash illumination. In this paper, the experimental set-up is described and the operating principle of CDS is explained. The method of deriving super-resolution depth information from a GV system by using CDS is introduced and optimized. Furthermore, the range accuracy is estimated from measured image data.
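One common way 3D information is derived from two overlapping gates is to map the per-pixel intensity ratio to range; the sketch below implements that mapping under an idealized rectangular-gate assumption. This illustrates the principle only, not the paper's CDS-based processing chain, and all values are synthetic.

```python
# Sketch: per-pixel depth from two gated-viewing images with shifted gates.
# Within the gate overlap, the normalized ratio I2/(I1+I2) rises roughly
# monotonically with range, so a linear map recovers relative depth.
import numpy as np

def depth_from_gates(i1, i2, z_near, z_far):
    """Map the gate-intensity ratio to depth in [z_near, z_far] (toy model)."""
    i1 = i1.astype(np.float64)
    i2 = i2.astype(np.float64)
    total = i1 + i2
    ratio = np.divide(i2, total, out=np.zeros_like(total), where=total > 0)
    return z_near + ratio * (z_far - z_near)

# Synthetic check: a tilted plane produces the expected depth gradient.
z_true = np.tile(np.linspace(10.0, 12.0, 128), (64, 1))      # metres
frac = (z_true - 10.0) / 2.0                                  # fraction in gate 2
i1, i2 = 1000 * (1 - frac), 1000 * frac                       # idealized returns
z_est = depth_from_gates(i1, i2, 10.0, 12.0)
assert np.allclose(z_est, z_true)
```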
Performance Assessment and Geometric Calibration of RESOURCESAT-2
NASA Astrophysics Data System (ADS)
Radhadevi, P. V.; Solanki, S. S.; Akilan, A.; Jyothi, M. V.; Nagasubramanian, V.
2016-06-01
Resourcesat-2 (RS-2) has successfully completed five years of operations in its orbit. This satellite has multi-resolution and multi-spectral capabilities in a single platform. A continuous and autonomous co-registration, geo-location and radiometric calibration of image data from different sensors with widely varying view angles and resolution was one of the challenges of RS-2 data processing. On-orbit geometric performance of RS-2 sensors has been widely assessed and calibrated during the initial phase operations. Since then, as an ongoing activity, various geometric performance data are being generated periodically. This is performed with sites of dense ground control points (GCPs). These parameters are correlated to the direct geo-location accuracy of the RS-2 sensors and are monitored and validated to maintain the performance. This paper brings out the geometric accuracy assessment, calibration and validation done for about 500 datasets of RS-2. The objectives of this study are to ensure the best absolute and relative location accuracy of different cameras, location performance with payload steering and co-registration of multiple bands. This is done using a viewing geometry model, given ephemeris and attitude data, precise camera geometry and datum transformation. In the model, the forward and reverse transformations between the coordinate systems associated with the focal plane, payload, body, orbit and ground are rigorously and explicitly defined. System level tests using comparisons to ground check points have validated the operational geo-location accuracy performance and the stability of the calibration parameters.
Hydrophobic duck feathers and their simulation on textile substrates for water repellent treatment.
Liu, Yuyang; Chen, Xianqiong; Xin, J H
2008-12-01
Inspired by the non-wetting phenomena of duck feathers, the water repellent property of duck feathers was studied at the nanoscale. The microstructures of the duck feather were investigated by a scanning electron microscope (SEM) imaging method through a step-by-step magnifying procedure. The SEM results show that duck feathers have a multi-scale structure and that this multi-scale structure as well as the preening oil are responsible for their super hydrophobic behavior. The microstructures of the duck feather were simulated on textile substrates using the biopolymer chitosan as building blocks through a novel surface solution precipitation (SSP) method, and then the textile substrates were further modified with a silicone compound to achieve low surface energy. The resultant textiles exhibit super water repellent properties, thus providing a simple bionic way to create super hydrophobic surfaces on soft substrates using flexible material as building blocks.
Depth measurements through controlled aberrations of projected patterns.
Birch, Gabriel C; Tyo, J Scott; Schwiegerling, Jim
2012-03-12
Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images in space-confined environments and without major modifications to current cameras is uncommon. Our goal is to create a simple modification to a conventional camera that allows for three-dimensional reconstruction. We require that such an imaging system have coincident imaging and illumination paths. Furthermore, we require that any three-dimensional modification to a camera also permit full-resolution 2D image capture. Here we present a method of extracting depth information with a single camera and an aberrated projected pattern. A commercial digital camera is used in conjunction with a projector system with astigmatic focus to capture images of a scene. By using an astigmatic projected pattern we create two different focus depths for the horizontal and vertical features of the projected pattern, thereby encoding depth. This differential focus is then exploited in post-processing tailored to the projected pattern and the optical system: the distance of an object at a particular transverse position from the camera is correlated to ratios of particular wavelet coefficients. We present details of the construction and calibration of this system, along with images it produced.
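A plausible minimal version of the wavelet-ratio measurement, assuming PyWavelets and a captured image of the projected pattern; the calibration curve mapping the ratio to metric depth (the paper's correlation step) is assumed and not reproduced here.

```python
# Sketch: estimating the horizontal/vertical differential focus encoded by an
# astigmatic projector, via the ratio of horizontal- to vertical-detail
# wavelet energy in local windows.
import numpy as np
import pywt
import cv2

img = cv2.imread("scene_with_pattern.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # hypothetical

def hv_ratio(window):
    """Ratio of horizontal to vertical detail energy in one image window."""
    _, (cH, cV, _) = pywt.dwt2(window, "db2")
    return (np.sum(cH**2) + 1e-9) / (np.sum(cV**2) + 1e-9)

w = 64
ratios = np.array([[hv_ratio(img[y:y+w, x:x+w])
                    for x in range(0, img.shape[1] - w, w)]
                   for y in range(0, img.shape[0] - w, w)])
# 'ratios' varies with which pattern orientation is in focus; mapping each
# transverse position's ratio to metric depth requires a calibration curve.
```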
NASA Astrophysics Data System (ADS)
Chetty, S.; Field, L. A.
2013-12-01
The Arctic Ocean's continuing decrease of summer-time ice is related to rapidly diminishing multi-year ice due to the effects of climate change. Ice911 Research aims to develop environmentally respectful materials that, when deployed, will increase the albedo, enhancing the formation and/or preservation of multi-year ice. Small-scale deployments using various materials have been done in Canada, California's Sierra Nevada Mountains and a pond in Minnesota to test the albedo performance and environmental characteristics of these materials. SWIMS is a sophisticated autonomous sensor system being developed to measure the albedo, weather, water temperature and other environmental parameters. The system employs low-cost, high-accuracy/precision sensors, high-resolution cameras, and an extreme-environment command and data handling computer system using satellite and terrestrial wireless communication. The entire system is solar powered with redundant battery backup on a floating buoy platform engineered for low temperature (-40 C) and high wind conditions. The system also incorporates tilt sensors, sonar-based ice thickness sensors and a weather station. To keep the costs low, each SWIMS unit measures incoming and reflected radiation from the four quadrants around the buoy. This allows data from four sets of sensors, cameras, the weather station and the water temperature probe to be collected and transmitted by a single on-board solar-powered computer. This presentation covers the technical, logistical and cost challenges in designing, developing and deploying these stations in remote, extreme environments. (Figure captions: the setting sun captured by SWIMS camera #3; a sample image from SWIMS camera #4.)
NASA Astrophysics Data System (ADS)
Thoeni, K.; Giacomini, A.; Murtagh, R.; Kniest, E.
2014-06-01
This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements to obtain comparable results to the ones of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall of about 6 m height and 20 m length. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured by using a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from a distance of between approximately 5 to 10 m, depending on field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and to generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS. The latter is taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify the quality of the multi-view 3D reconstruction results obtained with various cameras as objectively as possible and to evaluate its applicability to geotechnical problems.
A novel multi-stimuli responsive gelator based on D-gluconic acetal and its potential applications.
Guan, Xidong; Fan, Kaiqi; Gao, Tongyang; Ma, Anping; Zhang, Bao; Song, Jian
2016-01-18
We constructed a simple-structured super gelator with multi-stimuli responsive properties, among which the anion responsiveness follows the Hofmeister series in a non-aqueous system. Versatile applications such as use as rheological and self-healing agents, waste water treatment, spilled oil recovery and flexible optical device manufacture are integrated into a single organogelator, which has rarely been reported.
A method for the real-time construction of a full parallax light field
NASA Astrophysics Data System (ADS)
Tanaka, Kenji; Aoki, Soko
2006-02-01
We designed and implemented a light field acquisition and reproduction system for dynamic objects called LiveDimension, which serves as a 3D live video system for multiple viewers. The acquisition unit consists of circularly arranged NTSC cameras surrounding an object. The display consists of circularly arranged projectors and a rotating screen. The projectors are constantly projecting images captured by the corresponding cameras onto the screen. The screen rotates around an in-plane vertical axis at a sufficient speed so that it faces each of the projectors in sequence. Since the Lambertian surfaces of the screens are covered by light-collimating plastic films with vertical louver patterns that are used for the selection of appropriate light rays, viewers can only observe images from a projector located in the same direction as the viewer. Thus, the dynamic view of an object is dependent on the viewer's head position. We evaluated the system by projecting both objects and human figures and confirmed that the entire system can reproduce light fields with a horizontal parallax to display video sequences of 430x770 pixels at a frame rate of 45 fps. Applications of this system include product design reviews, sales promotion, art exhibits, fashion shows, and sports training with form checking.
The CAOS camera platform: ushering in a paradigm change in extreme dynamic range imager design
NASA Astrophysics Data System (ADS)
Riza, Nabeel A.
2017-02-01
Multi-pixel imaging devices such as CCD, CMOS and Focal Plane Array (FPA) photo-sensors dominate the imaging world. These Photo-Detector Array (PDA) devices certainly have their merits, including increasingly high pixel counts and shrinking pixel sizes; nevertheless, they are also hampered by limitations in instantaneous dynamic range, inter-pixel crosstalk, quantum full well capacity, signal-to-noise ratio, sensitivity, spectral flexibility, and in some cases, imager response time. The recently invented Coded Access Optical Sensor (CAOS) camera platform works in unison with current PDA technology to counter fundamental limitations of PDA-based imagers while providing high enough imaging spatial resolution and pixel counts. Engineering the CAOS camera platform, for example with the Texas Instruments (TI) Digital Micromirror Device (DMD), ushers in a paradigm change in advanced imager design, particularly for extreme dynamic range applications.
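The time-coded multiple-access idea behind a CAOS-style sensor can be sketched in a few lines: each selected pixel is modulated with its own orthogonal time code, a point detector records the sum, and correlation decoding recovers per-pixel irradiances across a wide dynamic range. This is a toy model of the principle, not the actual DMD hardware chain.

```python
# Sketch of time-coded multiple access: orthogonal Walsh-Hadamard codes per
# "pixel", a single summed detector signal, and correlation decoding.
import numpy as np
from scipy.linalg import hadamard

n_pix, n_time = 8, 64
H = hadamard(n_time)[1:n_pix+1].astype(np.float64)   # rows = bipolar pixel codes
                                                     # (row 0, all ones, skipped)
truth = np.array([0.1, 5.0, 0.02, 300.0, 1.5, 0.0, 40.0, 7.0])  # wide dynamic range
detector = truth @ H                                  # summed, time-coded signal
detector += np.random.default_rng(0).normal(0, 0.5, n_time)     # detector noise

recovered = detector @ H.T / n_time                   # correlation decode
print(np.round(recovered, 2))                         # ~ truth, even tiny pixels
```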
NASA Astrophysics Data System (ADS)
Chen, Chun-Jen; Wu, Wen-Hong; Huang, Kuo-Cheng
2009-08-01
A multi-function lens test instrument is reported in this paper. This system can evaluate the image resolution, image quality, depth of field, image distortion and light intensity distribution of the tested lens by changing the test patterns. The system consists of the tested lens, a CCD camera, a linear motorized stage, a system fixture, an observer LCD monitor, and a notebook computer that provides the patterns. The LCD monitor displays a series of specified test patterns sent by the notebook. Each displayed pattern then passes through the tested lens and is imaged on the CCD camera sensor. Consequently, the system can evaluate the performance of the tested lens by analyzing the CCD camera image with specially designed software. The major advantage of this system is that it can complete the whole test quickly, without interruption due to part replacement, because the test patterns are statically displayed on the monitor and controlled by the notebook.
A projective surgical navigation system for cancer resection
NASA Astrophysics Data System (ADS)
Gan, Qi; Shao, Pengfei; Wang, Dong; Ye, Jian; Zhang, Zeshu; Wang, Xinrui; Xu, Ronald
2016-03-01
Near-infrared (NIR) fluorescence imaging can provide precise and real-time information about tumor location during a cancer resection surgery. However, many intraoperative fluorescence imaging systems are based on wearable devices or stand-alone displays, leading to distraction of the surgeons and suboptimal outcomes. To overcome these limitations, we designed a projective fluorescence imaging system for surgical navigation. The system consists of a LED excitation light source, a monochromatic CCD camera, a host computer, a mini projector and a CMOS camera. A software program written in C++ calls OpenCV functions to calibrate and correct the fluorescence images captured by the CCD camera upon excitation illumination by the LED source. The images are projected back onto the surgical field by the mini projector. Imaging performance of this projective navigation system is characterized in a tumor-simulating phantom. Image-guided surgical resection is demonstrated in an ex-vivo chicken tissue model. In all the experiments, the images projected by the projector match well with the locations of fluorescence emission. Our experimental results indicate that the proposed projective navigation system can be a powerful tool for pre-operative surgical planning, intraoperative surgical guidance, and postoperative assessment of surgical outcome. We have integrated the optoelectronic elements into a compact and miniaturized system in preparation for further clinical validation.
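A minimal stand-in for the camera-to-projector mapping step, assuming four measured fiducial correspondences on a flat calibration target; the real system's calibration is more elaborate, and the point values and file names below are hypothetical.

```python
# Sketch: warp a camera image into projector coordinates so the projected
# overlay lands on the anatomy seen by the camera (flat-target assumption).
import numpy as np
import cv2

cam_pts = np.float32([[102, 88], [518, 95], [530, 410], [96, 402]])    # hypothetical
proj_pts = np.float32([[0, 0], [1280, 0], [1280, 720], [0, 720]])      # projector px

M = cv2.getPerspectiveTransform(cam_pts, proj_pts)

fluo = cv2.imread("fluorescence_frame.png", cv2.IMREAD_GRAYSCALE)      # hypothetical
overlay = cv2.warpPerspective(fluo, M, (1280, 720))                    # projector frame
overlay = cv2.applyColorMap(overlay, cv2.COLORMAP_JET)                 # false colour
cv2.imwrite("projector_overlay.png", overlay)
```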
Super-pixel extraction based on multi-channel pulse coupled neural network
NASA Astrophysics Data System (ADS)
Xu, GuangZhu; Hu, Song; Zhang, Liu; Zhao, JingJing; Fu, YunXia; Lei, BangJun
2018-04-01
Super-pixel extraction techniques group pixels into over-segmented image blocks according to the similarity among pixels. Compared with traditional pixel-based methods, super-pixel-based image description requires less computation and is easier to perceive, and it has been widely used in image processing and computer vision applications. The pulse coupled neural network (PCNN) is a biologically inspired model which stems from the phenomenon of synchronous pulse release in the visual cortex of cats. Each PCNN neuron can correspond to a pixel of an input image, and the dynamic firing pattern of each neuron contains both the pixel feature information and its spatial context. In this paper, a new color super-pixel extraction algorithm based on a multi-channel pulse coupled neural network (MPCNN) is proposed. The algorithm adopts the block-dividing idea of the SLIC algorithm: the image is first divided into blocks of equal size. Then, for each image block, the adjacent pixels of each seed with similar color are grouped together as a super-pixel. Finally, post-processing is applied to those pixels or pixel blocks which have not been grouped. Experiments show that the proposed method can adjust the number of super-pixels and the segmentation precision by setting parameters, and has good potential for super-pixel extraction.
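For comparison, the SLIC-style superpixel baseline whose block-dividing idea the MPCNN method adopts is available off the shelf in scikit-image; a minimal usage sketch follows (the MPCNN grouping itself is the paper's contribution and is not part of this library). The input file is hypothetical.

```python
# Sketch: SLIC superpixels with scikit-image, shown as the baseline whose
# grid-seeded block division the MPCNN method builds on.
import cv2
from skimage.segmentation import slic, mark_boundaries

img = cv2.cvtColor(cv2.imread("input.jpg"), cv2.COLOR_BGR2RGB)   # hypothetical input
labels = slic(img, n_segments=400, compactness=10)               # grid-seeded clustering
vis = mark_boundaries(img / 255.0, labels)                       # overlay boundaries
cv2.imwrite("superpixels.png",
            cv2.cvtColor((vis * 255).astype("uint8"), cv2.COLOR_RGB2BGR))
```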
A Three-Dimensional Target Depth-Resolution Method with a Single-Vector Sensor
Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin
2018-01-01
This paper mainly studies and verifies the target-number and category-resolution method in multi-target cases and the target depth-resolution method for aerial targets. Firstly, target depth resolution is performed by using the sign distribution of the reactive component of the vertical complex acoustic intensity; the target category and number resolution in multi-target cases are realized with a combination of the bearing-time recording information; and the corresponding simulation verification is carried out. The algorithm proposed in this paper can distinguish between the single-target multi-line spectrum case and the multi-target multi-line spectrum case. This paper presents an improved azimuth-estimation method for multi-target cases, which makes the estimation results more accurate. Using Monte Carlo simulation, the feasibility of the proposed target-number and category-resolution algorithm in multi-target cases is verified. In addition, by studying the field characteristics of aerial and surface targets, the simulation results verify that there is only an amplitude difference between the aerial target field and the surface target field under the same environmental parameters, so an aerial target can be treated as a special case of a surface target; the aerial target category resolution can then be realized based on the sign distribution of the reactive component of the vertical acoustic intensity so as to realize three-dimensional target depth resolution. By processing data from a sea experiment, the feasibility of the proposed aerial-target three-dimensional depth-resolution algorithm is verified. PMID:29649173
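A minimal sketch of the core quantity (the signal-processing details, variable names and windowing choice are illustrative assumptions, not taken from the paper): the sign of the reactive part of the vertical complex acoustic intensity at a spectral line frequency.

```python
import numpy as np

def reactive_vertical_intensity_sign(p, vz, fs, f_line):
    """Sign of the reactive (imaginary) component of the vertical complex
    acoustic intensity at a target line frequency.

    p  : sound-pressure time series from the vector sensor
    vz : vertical particle-velocity time series
    fs : sampling rate in Hz; f_line : line frequency in Hz
    """
    n = len(p)
    win = np.hanning(n)
    P = np.fft.rfft(p * win)
    Vz = np.fft.rfft(vz * win)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f_line))   # bin nearest the line
    Iz = P[k] * np.conj(Vz[k])              # complex vertical intensity
    return np.sign(Iz.imag)                 # reactive-component sign used for depth resolution
```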
NASA Astrophysics Data System (ADS)
Yao, Wei; van Aardt, Jan; Messinger, David
2017-05-01
The Hyperspectral Infrared Imager (HyspIRI) mission aims to provide global imaging spectroscopy data, to the benefit of ecosystem studies especially. The onboard spectrometer will collect radiance spectra in the visible to short-wave infrared (VSWIR) region (400-2500 nm). The mission calls for fine spectral resolution (10 nm band width) and as such will enable scientists to perform material characterization, species classification, and even sub-pixel mapping. However, the global coverage requirement results in a relatively low spatial resolution (GSD 30 m), which restricts applications to objects of similar scales. We therefore have focused on the assessment of sub-pixel vegetation structure from spectroscopy data in past studies. In this study, we investigate the reconstruction of higher spatial resolution imaging spectroscopy data via fusion of multi-temporal data sets, to address the drawbacks implicit in low spatial resolution imagery. The projected temporal resolution of the HyspIRI VSWIR instrument is 15 days, which implies that we have access to as many as six data sets for an area over the course of a growing season. Previous studies have shown that select vegetation structural parameters, e.g., leaf area index (LAI) and gross ecosystem production (GEP), are relatively constant in summer and winter for temperate forests; we therefore consider the data sets collected in summer to represent a similar, stable forest structure. The first step, prior to fusion, involves registration of the multi-temporal data. A data fusion algorithm can then be applied to the pre-processed data sets. The approach hinges on an algorithm that has been widely applied to fuse RGB images. Ideally, if we have four images of a scene which all meet the following requirements - i) they are captured with the same camera configuration; ii) the pixel size of each image is x; and iii) at least r² images are aligned on a grid of x/r - then a high-resolution image, with a pixel size of x/r, can be reconstructed from the multi-temporal set. The algorithm was applied to data from NASA's classic Airborne Visible and Infrared Imaging Spectrometer (AVIRIS-C; GSD 18 m), collected between 2013-2015 (summer and fall) over our study area (NEON's Southwest Pacific Domain; Fresno, CA), to generate higher spatial resolution imagery (GSD 9 m). The reconstructed data set was validated via comparison to NEON's imaging spectrometer (NIS) data (GSD 1 m). The results showed that the algorithm worked well with the AVIRIS-C data and could be applied to the HyspIRI data.
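For the idealized case described above with r = 2 (four co-registered frames whose sampling grids are offset by half a pixel), a minimal Python sketch of the grid-interleaving reconstruction (an illustration under those assumptions, not the study's fusion code):

```python
import numpy as np

def interleave_sr(frames):
    """Reconstruct a 2x-resolution image from r**2 = 4 low-resolution
    frames aligned on a half-pixel grid.

    frames: dict keyed by (dy, dx) in {0, 1}, each an (H, W) array; the
    key gives that frame's sub-pixel phase on the fine x/2 grid.
    Assumes the frames are already co-registered, as in the study.
    """
    H, W = frames[(0, 0)].shape
    hi = np.zeros((2 * H, 2 * W), dtype=float)
    for (dy, dx), img in frames.items():
        hi[dy::2, dx::2] = img   # place each frame on its phase of the fine grid
    return hi
```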
Distributed Sensing and Processing for Multi-Camera Networks
NASA Astrophysics Data System (ADS)
Sankaranarayanan, Aswin C.; Chellappa, Rama; Baraniuk, Richard G.
Sensor networks with large numbers of cameras are becoming increasingly prevalent in a wide range of applications, including video conferencing, motion capture, surveillance, and clinical diagnostics. In this chapter, we identify some of the fundamental challenges in designing such systems: robust statistical inference, computational efficiency, and opportunistic and parsimonious sensing. We show that the geometric constraints induced by the imaging process are extremely useful for identifying and designing optimal estimators for object detection and tracking tasks. We also derive pipelined and parallelized implementations of popular tools used for statistical inference in non-linear systems, of which multi-camera systems are examples. Finally, we highlight the use of the emerging theory of compressive sensing in reducing the amount of data sensed and communicated by a camera network.
Touchscreen everywhere: on transferring a normal planar surface to a touch-sensitive display.
Dai, Jingwen; Chung, Chi-Kit Ronald
2014-08-01
We address how a human-computer interface with small device size, large display, and touch-input facility can be made possible by a mere projector and camera. The realization is through the use of a properly embedded structured light sensing scheme that enables a regular light-colored table surface to serve the dual roles of both a projection screen and a touch-sensitive display surface. A random binary pattern is employed to code structured light in pixel accuracy, which is embedded into the regular projection display in a way that the user perceives only regular display but not the structured pattern hidden in the display. With the projection display on the table surface being imaged by a camera, the observed image data, plus the known projection content, can work together to probe the 3-D workspace immediately above the table surface, like deciding if there is a finger present and if the finger touches the table surface, and if so, at what position on the table surface the contact is made. All the decisions hinge upon a careful calibration of the projector-camera-table surface system, intelligent segmentation of the hand in the image data, and exploitation of the homography mapping existing between the projector's display panel and the camera's image plane. Extensive experimentation including evaluation of the display quality, hand segmentation accuracy, touch detection accuracy, trajectory tracking accuracy, multitouch capability and system efficiency are shown to illustrate the feasibility of the proposed realization.
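A minimal sketch of the final mapping step (the calibration file and API usage are assumptions for illustration; the paper's calibration procedure is more elaborate): translating a detected fingertip contact point from camera coordinates to display-panel coordinates through the projector-camera homography.

```python
import cv2
import numpy as np

# Hypothetical calibration product: homography H mapping camera image
# coordinates to the projector's display-panel coordinates, obtained once
# from projected/imaged reference points on the table surface.
H = np.load("camera_to_projector_H.npy")

def touch_to_display(fingertip_cam_xy):
    """Map a fingertip contact detected in the camera image to display
    coordinates, exploiting the camera-projector homography."""
    pt = np.array([[fingertip_cam_xy]], dtype=np.float32)  # shape (1, 1, 2)
    mapped = cv2.perspectiveTransform(pt, H)
    return tuple(mapped[0, 0])
```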
NASA Astrophysics Data System (ADS)
El-Wakil, S. A.; Abulwafa, Essam M.; Elhanbaly, Atalla A.
2017-07-01
Based on the Sagdeev pseudo-potential and phase-portrait methods, the dynamics of a four-component dusty plasma with non-extensively distributed electrons and ions are investigated. Three distinct types of nonlinear waves, namely solitons, double layers, and super-solitons, have been found. The basic features of such waves are highly sensitive to the Mach number, the non-extensive parameter, and the dust temperature ratio. It is found that a multi-component plasma is a necessary condition for the existence of super-solitons, which have a wider amplitude and a larger width than regular solitons. Super-solitons may exist only when the Sagdeev pseudo-potential curve admits at least four extrema and two roots. In our multi-component plasma system, super-solitons can be found by increasing the Mach number and the non-extensive parameter beyond those of double layers; conversely, super-solitons can be produced by decreasing the dust temperature ratio. The conditions for the onset of such nonlinear waves and their merging into regular solitons have been studied. This work shows that the obtained nonlinear waves exist only in the super-sonic Mach number regime. The obtained results may be of wide relevance in the field of space plasma and may also be helpful for better understanding the nonlinear fluctuations in the auroral zone of the Earth's magnetosphere.
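For orientation, the energy-integral form on which such Sagdeev pseudo-potential analyses rest (a standard textbook sketch, not an equation quoted from this paper) is:

```latex
\frac{1}{2}\left(\frac{\mathrm{d}\phi}{\mathrm{d}\xi}\right)^{2} + V(\phi;\,M) = 0,
\qquad V(0) = \left.\frac{\mathrm{d}V}{\mathrm{d}\phi}\right|_{\phi=0} = 0,
```

so a soliton corresponds to a pseudo-particle leaving φ = 0 and reflecting at a nonzero root of V; the super-soliton case quoted above corresponds to V admitting at least four extrema and two such roots.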
3D Digital Surveying and Modelling of Cave Geometry: Application to Paleolithic Rock Art
González-Aguilera, Diego; Muñoz-Nieto, Angel; Gómez-Lahoz, Javier; Herrero-Pascual, Jesus; Gutierrez-Alonso, Gabriel
2009-01-01
3D digital surveying and modelling of cave geometry represents a relevant approach for research, management and preservation of our cultural and geological legacy. In this paper, a multi-sensor approach based on a terrestrial laser scanner, a high-resolution digital camera and a total station is presented. Two emblematic caves of Paleolithic human occupation situated in northern Spain, "Las Caldas" and "Peña de Candamo", have been chosen to put this approach into practice. As a result, an integral and multi-scalable 3D model is generated which may allow other scientists, prehistorians and geologists to work on two different levels, integrating different Paleolithic Art datasets: (1) a basic level based on the accurate and metric support provided by the laser scanner; and (2) an advanced level using range- and image-based modelling. PMID:22399958
Sensing Super-position: Visual Instrument Sensor Replacement
NASA Technical Reports Server (NTRS)
Maluf, David A.; Schipper, John F.
2006-01-01
The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of the user's dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution via an auditory representation that complements the visual one. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns, and this known capability provided the basic motivation for developing an image-to-sound mapping system.
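As a toy illustration of such an image-to-sound mapping (the parameter values and the row-to-frequency assignment are assumptions, not the VISOR implementation): sweep the image column by column over time, map row position to pitch and brightness to amplitude.

```python
import numpy as np

def image_to_audio(img, duration=1.0, fs=16000, f_lo=200.0, f_hi=4000.0):
    """Sweep an image column by column over time, mapping row position to
    audio frequency and pixel brightness to the amplitude of that tone.

    img: 2D array in [0, 1], row 0 = top of image (highest pitch).
    Returns a mono audio signal of `duration` seconds at rate `fs`.
    """
    rows, cols = img.shape
    freqs = np.geomspace(f_hi, f_lo, rows)          # top row -> high pitch
    n_total = int(duration * fs)
    t = np.arange(n_total) / fs
    col_idx = np.minimum((t / duration * cols).astype(int), cols - 1)
    audio = np.zeros(n_total)
    for r in range(rows):                           # sum one tone per row
        audio += img[r, col_idx] * np.sin(2 * np.pi * freqs[r] * t)
    return audio / rows                             # normalize amplitude
```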
Toward quantum plasmonic networks
Holtfrerich, M. W.; Dowran, M.; Davidson, R.; ...
2016-08-30
Here, we demonstrate the transduction of macroscopic quantum entanglement by independent, distant plasmonic structures embedded in separate thin silver films. In particular, we show that the plasmon-mediated transmission through each film conserves spatially dependent, entangled quantum images, opening the door for the implementation of parallel quantum protocols, super-resolution imaging, and quantum plasmonic sensing geometries at the nanoscale level. The conservation of quantum information by the transduction process shows that continuous-variable multi-mode entanglement is momentarily transferred from entangled beams of light to the space-like separated, completely independent plasmonic structures, thus providing a first important step toward establishing a multichannel quantum network across separate solid-state substrates.
Molecular breast tomosynthesis with scanning focus multi-pinhole cameras
NASA Astrophysics Data System (ADS)
van Roosmalen, Jarno; Goorden, Marlies C.; Beekman, Freek J.
2016-08-01
Planar molecular breast imaging (MBI) is rapidly gaining in popularity in diagnostic oncology. To add 3D capabilities, we introduce a novel molecular breast tomosynthesis (MBT) scanner concept based on multi-pinhole collimation. In our design, the patient lies prone with the pendant breast lightly compressed between transparent plates. Integrated webcams view the breast through these plates and allow the operator to designate the scan volume (e.g. a whole breast or a suspected region). The breast is then scanned by translating focusing multi-pinhole plates and NaI(Tl) gamma detectors together in a sequence that optimizes count yield from the volume-of-interest. With simulations, we compared MBT with existing planar MBI. In a breast phantom containing different lesions, MBT improved tumour-to-background contrast-to-noise ratio (CNR) over planar MBI by 12% and 111% for 4.0 and 6.0 mm lesions respectively in the case of whole-breast scanning. For the same lesions, much larger CNR improvements of 92% and 241% over planar MBI were found in a scan that focused on a breast region containing several lesions. MBT resolved 3.0 mm rods in a Derenzo resolution phantom in the transverse plane, compared to 2.5 mm rods distinguished by planar MBI. While planar MBI cannot provide depth information, MBT offered 4.0 mm depth resolution. Our simulations indicate that besides offering 3D localization of increased tracer uptake, multi-pinhole MBT can significantly increase tumour-to-background CNR compared to planar MBI. These properties could be promising for better estimating the position, extent and shape of lesions and for distinguishing between single and multiple lesions.
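A minimal sketch of the figure of merit used throughout (the standard CNR definition, assumed to match the paper's usage; the masks are hypothetical inputs):

```python
import numpy as np

def cnr(image, tumour_mask, background_mask):
    """Tumour-to-background contrast-to-noise ratio of an image:
    (mean_tumour - mean_background) / std_background."""
    t = image[tumour_mask].mean()
    b = image[background_mask].mean()
    return (t - b) / image[background_mask].std()
```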
NASA Astrophysics Data System (ADS)
Sun, Li; Wang, Deyu
2011-09-01
A new multi-level analysis method that introduces super-element modeling, derived from the multi-level analysis method first proposed by O. F. Hughes, is proposed in this paper to address the high time cost of adopting a rational-based optimal design method for ship structural design. The method was verified by its effective application in the optimization of the mid-ship section of a container ship. A full 3-D FEM model of a ship, subjected to static and quasi-static loads, was used as the analysis object for evaluating the structural performance of the mid-ship module, including static strength and buckling performance. Research results reveal that this new method can substantially reduce the computational cost of the rational-based optimization problem without decreasing its accuracy, which increases the feasibility and economic efficiency of using a rational-based optimal design method in ship structural design.
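For intuition, super-element modelling rests on condensing a substructure's stiffness onto its boundary (master) degrees of freedom. A minimal static (Guyan) condensation sketch in Python (illustrative of the general technique, not the paper's implementation):

```python
import numpy as np

def condense(K, master_idx):
    """Static (Guyan) condensation of a stiffness matrix onto the master
    DOFs, the reduction underlying super-element modelling:
        K_red = K_mm - K_ms @ inv(K_ss) @ K_sm
    where m = master (retained) and s = slave (condensed-out) DOFs.
    """
    idx = np.arange(K.shape[0])
    m = np.asarray(master_idx)
    s = np.setdiff1d(idx, m)
    Kmm = K[np.ix_(m, m)]
    Kms = K[np.ix_(m, s)]
    Ksm = K[np.ix_(s, m)]
    Kss = K[np.ix_(s, s)]
    # solve() avoids forming the explicit inverse of K_ss
    return Kmm - Kms @ np.linalg.solve(Kss, Ksm)
```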
Speckle reduction methods in laser-based picture projectors
NASA Astrophysics Data System (ADS)
Akram, M. Nadeem; Chen, Xuyuan
2016-02-01
Laser sources have for many years been promised as better light sources than traditional lamps or light-emitting diodes (LEDs) for projectors, enabling a wide colour gamut for vivid images, super brightness and high contrast for the best picture quality, long lifetime for maintenance-free operation, mercury-free construction, and low power consumption for a green environment. A major technological obstacle to using lasers for projection has been the speckle noise caused by the coherent nature of laser light. For speckle reduction, current state-of-the-art solutions rely on moving parts with large physical space demands. Solutions beyond the state of the art need to be developed, such as integrated optical components, hybrid MOEMS devices, and active phase modulators for compact speckle reduction. In this article, the major methods reported in the literature for speckle reduction in laser projectors are presented and explained. With the advancement of semiconductor lasers at greatly reduced cost for the red, green and blue primary colours, and with the methods developed for speckle reduction, it is hoped that lasers will be widely utilized in different projector applications in the near future.
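A toy numerical illustration of the diversity principle behind most of these methods (averaging N independent speckle realizations lowers the speckle contrast as 1/sqrt(N); the fully developed speckle model is a textbook assumption, not from this article):

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_pattern(shape=(256, 256)):
    """Fully developed speckle: intensity is exponentially distributed."""
    return rng.exponential(1.0, shape)

def contrast(I):
    """Speckle contrast C = std(I) / mean(I); C = 1 for full speckle."""
    return I.std() / I.mean()

# Averaging N independent patterns (as produced by angle, wavelength or
# polarization diversity) reduces the contrast roughly as 1/sqrt(N).
for N in (1, 4, 16, 64):
    I = np.mean([speckle_pattern() for _ in range(N)], axis=0)
    print(N, round(contrast(I), 3))   # ~1.0, 0.5, 0.25, 0.125
```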
Retinal projection type super multi-view head-mounted display
NASA Astrophysics Data System (ADS)
Takahashi, Hideya; Ito, Yutaka; Nakata, Seigo; Yamada, Kenji
2014-02-01
We propose a retinal projection type super multi-view head-mounted display (HMD). The smooth motion parallax provided by the super multi-view technique enables a precise superposition of virtual 3D images on the real scene. Moreover, if viewers focus their eyes on the displayed 3D image, the stimulus for the accommodation of the human eye is produced naturally. Therefore, although the proposed HMD is monocular, it provides observers with natural 3D images. The proposed HMD consists of an image projection optical system and a holographic optical element (HOE). The HOE is used as a combiner and also works as a condenser lens to implement the Maxwellian view. Parallax images are projected onto the HOE, converged on the pupil, and then projected onto the retina. In order to verify the effectiveness of the proposed HMD, we constructed a prototype. In the prototype HMD, the number of parallax images and the number of convergent points on the pupil are both three. The distance between adjacent convergent points is 2 mm. We displayed virtual images at distances from 20 cm to 200 cm in front of the pupil, and confirmed the accommodation response. This paper describes the principle of the proposed HMD and presents the experimental results.
Video flow active control by means of adaptive shifted foveal geometries
NASA Astrophysics Data System (ADS)
Urdiales, Cristina; Rodriguez, Juan A.; Bandera, Antonio J.; Sandoval, Francisco
2000-10-01
This paper presents a control mechanism for video transmission that relies on transmitting non-uniform-resolution images depending on the delay of the communication channel. These images are built in an active way to keep the areas of interest of the image at the highest resolution available. In order to shift the area of high resolution over the image and to achieve a data structure that is easy to process with conventional algorithms, a shifted-fovea multi-resolution geometry of adaptive size is used. Moreover, if delays are still too high, the different resolution areas of the image can be transmitted at different rates. A functional system has been developed for corridor surveillance with static cameras. Tests with real video images have proven that the method allows an almost constant rate of images per second as long as the channel is not collapsed.
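A minimal sketch of building such a non-uniform-resolution frame (the ROI size and downsampling factor are illustrative assumptions; the paper's adaptive shifted-fovea geometry is more sophisticated than this rectangular window):

```python
import cv2

def foveated_frame(img, center, roi=128, factor=4):
    """Build a non-uniform-resolution frame: full resolution inside a
    shifted foveal window, coarse resolution (downsampled, then restored
    to size) everywhere else -- cutting the data to send as delays grow."""
    small = cv2.resize(img, None, fx=1 / factor, fy=1 / factor)
    out = cv2.resize(small, (img.shape[1], img.shape[0]))   # blurry periphery
    cx, cy = center
    x0, y0 = max(cx - roi // 2, 0), max(cy - roi // 2, 0)
    out[y0:y0 + roi, x0:x0 + roi] = img[y0:y0 + roi, x0:x0 + roi]
    return out
```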
NASA Astrophysics Data System (ADS)
Williams, B. P.; Kjellstrand, B.; Jones, G.; Reimuller, J. D.; Fritts, D. C.; Miller, A.; Geach, C.; Limon, M.; Hanany, S.; Kaifler, B.; Wang, L.; Taylor, M. J.
2017-12-01
PMC-Turbo is a NASA long-duration, high-altitude balloon mission that will deploy 7 high-resolution cameras to image polar mesospheric clouds (PMCs) and measure gravity wave breakdown and turbulence. The mission has been enhanced by the addition of the DLR Balloon Lidar Experiment (BOLIDE) and an OH imager from Utah State University. This instrument suite will provide high horizontal and vertical resolution of the wave-modified PMC structure along a several-thousand-kilometer flight track. We have requested a flight from Kiruna, Sweden to Canada in June 2017 or from McMurdo Base, Antarctica in Dec 2017. Three of the PMC camera systems were deployed on an aircraft and two tomographic ground sites for the High Level campaign in Canada in June/July 2017. On several nights the cameras observed PMCs with strong gravity wave breaking signatures. One PMC camera will piggyback on the Super Tiger mission scheduled for launch in Dec 2017 from McMurdo, so we will obtain PMC images and wave/turbulence data from both the northern and southern hemispheres.
Multispectral Snapshot Imagers Onboard Small Satellite Formations for Multi-Angular Remote Sensing
NASA Technical Reports Server (NTRS)
Nag, Sreeja; Hewagama, Tilak; Georgiev, Georgi; Pasquale, Bert; Aslam, Shahid; Gatebe, Charles K.
2017-01-01
Multispectral snapshot imagers are capable of producing 2D spatial images with a single exposure at numerous selected wavelengths using the same camera, and therefore operate differently from push-broom or whisk-broom imagers. They are payloads of choice in multi-angular, multi-spectral imaging missions that use small satellites flying in controlled formation to retrieve Earth science measurements dependent on the target's Bidirectional Reflectance-Distribution Function (BRDF). Narrow fields of view are needed to capture images with moderate spatial resolution. This paper quantifies the dependencies of the imager's optical system, spectral elements and camera on the requirements of the formation mission, and their impact on performance metrics such as spectral range, swath and signal-to-noise ratio (SNR). All variables and metrics have been generated with a comprehensive payload design tool. The baseline optical parameters selected (diameter 7 cm, focal length 10.5 cm, pixel size 20 micron, field of view 1.15 deg) are achievable with available snapshot imaging technologies. The spectral components shortlisted were waveguide spectrometers, acousto-optic tunable filters (AOTF), electronically actuated Fabry-Perot interferometers, and integral field spectrographs. Qualitative evaluation favored AOTFs because of their low weight, small size, and flight heritage. Quantitative analysis showed that waveguide spectrometers perform better in terms of achievable swath (10-90 km) and SNR (greater than 20) for 86 wavebands, but the data volume generated will need very high bandwidth communication to downlink. AOTFs meet the external data volume caps as well as the minimum spectral (wavebands) and radiometric (SNR) requirements, and are therefore found to be currently feasible in spite of lower swath and SNR.
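For orientation, a small sketch of the geometry linking a narrow field of view to swath and ground sample distance (flat-Earth nadir approximation; the altitude and detector format below are assumed values for illustration, not from the paper):

```python
import numpy as np

def swath_and_gsd(altitude_km, fov_deg, n_pixels):
    """Swath width and ground sample distance for a nadir-pointing
    narrow-FOV snapshot imager (flat-Earth approximation)."""
    swath_km = 2 * altitude_km * np.tan(np.radians(fov_deg) / 2)
    gsd_m = swath_km * 1000 / n_pixels
    return swath_km, gsd_m

# e.g. a 500 km orbit with the baseline 1.15 deg field of view and an
# assumed 512-pixel detector row: ~10 km swath, ~20 m GSD
print(swath_and_gsd(500, 1.15, 512))
```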
High power fiber coupled diode lasers for display and lighting applications
NASA Astrophysics Data System (ADS)
Drovs, Simon; Unger, Andreas; Dürsch, Sascha; Köhler, Bernd; Biesenbach, Jens
2017-02-01
The performance of diode lasers in the visible spectral range has improved continuously within the last few years, driven mainly by the goal of replacing arc lamps in cinema or home projectors. In addition, the availability of such high power visible diode lasers enables new applications in the medical field as well as their use as pump sources for other solid state lasers. This paper summarizes the latest developments of fiber coupled sources with output power from 1.4 W to 120 W coupled into 100 μm to 400 μm fibers in the spectral ranges around 405 nm and 640 nm. New developments also include the use of fiber coupled multi-single-emitter arrays at 450 nm, as well as very compact modules with multi-W output power.
Spider-web inspired multi-resolution graphene tactile sensor.
Liu, Lu; Huang, Yu; Li, Fengyu; Ma, Ying; Li, Wenbo; Su, Meng; Qian, Xin; Ren, Wanjie; Tang, Kanglai; Song, Yanlin
2018-05-08
Multi-dimensional accurate response and smooth signal transmission are critical challenges in the advancement of multi-resolution recognition and complex environment analysis. Inspired by the structure-activity relationship between the discrepant microstructures of the spiral and radial threads in a spider web, we designed and printed graphene with porous and densely-packed microstructures and integrated them into a multi-resolution graphene tactile sensor. The three-dimensional (3D) porous graphene structure delivers multi-dimensional deformation responses. The laminar densely-packed graphene structure contributes excellent conductivity with flexible stability. The spider-web inspired printed pattern enables orientational and locational tracking of motion. This multi-structure construction from a single graphene material integrates distinct electronic properties with remarkable flexibility, which will attract enormous attention for electronic skin, wearable devices and human-machine interactions.
Super-resolution imaging of multiple cells by optimized flat-field epi-illumination
NASA Astrophysics Data System (ADS)
Douglass, Kyle M.; Sieben, Christian; Archetti, Anna; Lambert, Ambroise; Manley, Suliana
2016-11-01
Biological processes are inherently multi-scale, and supramolecular complexes at the nanoscale determine changes at the cellular scale and beyond. Single-molecule localization microscopy (SMLM) techniques have been established as important tools for studying cellular features with resolutions on the order of 10 nm. However, in their current form these modalities are limited by a highly constrained field of view (FOV) and field-dependent image resolution. Here, we develop a low-cost microlens array (MLA)-based epi-illumination system, flat illumination for field-independent imaging (FIFI), that can efficiently and homogeneously perform simultaneous imaging of multiple cells with nanoscale resolution. The optical principle of FIFI, which is an extension of the Köhler integrator, is further elucidated and modelled with a new, free simulation package. We demonstrate FIFI's capabilities by imaging multiple COS-7 and bacteria cells in 100 × 100 μm2 SMLM images, more than quadrupling the size of a typical FOV and producing near-gigapixel-sized images of uniformly high quality.
NASA Astrophysics Data System (ADS)
Benaud, P.; Anderson, K.; Quine, T. A.; James, M. R.; Quinton, J.; Brazier, R. E.
2016-12-01
While total sediment capture can accurately quantify soil loss via water erosion, it is not practical at the field scale and provides little information on the spatial nature of soil erosion processes. Consequently, high-resolution remote sensing point cloud data provide an alternative method for quantifying soil loss. The accessibility of Structure-from-Motion Multi-View Stereo (SfM) and the potential for multi-temporal applications offer an exciting opportunity to spatially quantify soil erosion. Accordingly, published research provides examples of the successful quantification of large erosion features and events to centimetre accuracy. Through rigorous control of the camera and image network geometry, the centimetre accuracy achievable at the field scale can translate to sub-millimetre accuracy within a laboratory environment. Accordingly, this study looks to understand how the ultra-high-resolution spatial information on soil surface topography derived from SfM can be integrated with a multi-element sediment tracer to develop a mechanistic understanding of rill and inter-rill erosion under experimental conditions. A rainfall simulator was used to create three soil surface conditions (compaction and rainsplash, inter-rill erosion, and rill erosion) at two experimental scales (0.15 m2 and 3 m2). Total sediment capture was the primary validation for the experiments, allowing comparison between structurally and volumetrically derived change and true soil loss. A terrestrial laser scanner (resolution of ca. 0.8 mm) was employed to assess spatial discrepancies within the SfM data sets and to provide an alternative measure of volumetric change. Preliminary results show that the SfM approach used can achieve a ground resolution of less than 0.2 mm per pixel and an RMSE of less than 0.3 mm. Consequently, it is expected that the ultra-high-resolution SfM point clouds can be utilised to provide a detailed assessment of soil loss via water erosion processes.
High-Definition Television (HDTV) Images for Earth Observations and Earth Science Applications
NASA Technical Reports Server (NTRS)
Robinson, Julie A.; Holland, S. Douglas; Runco, Susan K.; Pitts, David E.; Whitehead, Victor S.; Andrefouet, Serge M.
2000-01-01
As part of Detailed Test Objective 700-17A, astronauts acquired Earth observation images from orbit using a high-definition television (HDTV) camcorder. Here we provide a summary of qualitative findings following completion of tests during missions STS (Space Transport System)-93 and STS-99. We compared HDTV imagery stills to images taken using payload bay video cameras, a Hasselblad film camera, and an electronic still camera. We also evaluated the potential for motion video observations of changes in sunlight and the use of multi-aspect viewing to image aerosols. Spatial resolution and color quality are far superior in HDTV images compared to National Television Systems Committee (NTSC) video images. Thus, HDTV provides the first viable option for video-based remote sensing observations of Earth from orbit. Although under ideal conditions HDTV images have less spatial resolution than medium-format film cameras, such as the Hasselblad, under some on-orbit conditions the HDTV images acquired compared favorably with the Hasselblad. Of particular note was the quality of color reproduction in the HDTV images. HDTV and electronic still camera (ESC) images were not compared with matched fields of view, and so spatial resolution could not be compared for the two image types. However, the color reproduction of the HDTV stills was truer than the colors in the ESC images. As HDTV becomes the operational video standard for the Space Shuttle and Space Station, it has great potential as a source of Earth-observation data. Planning for the conversion from NTSC to HDTV video standards should include planning for Earth data archiving and distribution.
Scanning laser beam displays based on a 2D MEMS
NASA Astrophysics Data System (ADS)
Niesten, Maarten; Masood, Taha; Miller, Josh; Tauscher, Jason
2010-05-01
The combination of laser light sources and MEMS technology enables a range of display systems such as ultra-small projectors for mobile devices, head-up displays for vehicles, wearable near-eye displays and projection systems for 3D imaging. Images are created by scanning red, green and blue lasers horizontally and vertically with a single two-dimensional MEMS. Due to the excellent beam quality of the laser beams, the optical designs are efficient and compact. In addition, laser illumination enables saturated display colors that are desirable for augmented reality applications where a virtual image is used. With this technology, the smallest projector engine for high volume manufacturing to date has been developed. This projector module has a height of 7 mm and a volume of 5 cc. The resolution of this projector is WVGA. No additional projection optics are required, resulting in an infinite focus depth. Unlike micro-display projection systems, an increase in resolution will not lead to an increase in size or a decrease in efficiency. Therefore, future projectors can combine higher resolution in an even smaller and thinner form factor with increased efficiency, leading to lower power consumption.
NASA Astrophysics Data System (ADS)
Fink, Reinhold F.
2009-02-01
The retaining the excitation degree (RE) partitioning [R.F. Fink, Chem. Phys. Lett. 428 (2006) 461] is reformulated and applied to multi-reference cases with complete active space (CAS) reference wave functions. The generalised van Vleck perturbation theory is employed to set up the perturbation equations. It is demonstrated that this leads to a consistent and well defined theory which fulfils all important criteria of a generally applicable ab initio method: the theory is proven numerically and analytically to be size-consistent and invariant with respect to unitary orbital transformations within the inactive, active and virtual orbital spaces. In contrast to most previously proposed multi-reference perturbation theories, the necessary condition for a proper perturbation theory to fulfil the zeroth order perturbation equation is exactly satisfied with the RE partitioning itself, without additional projectors on configurational spaces. The theory is applied to several excited states of the benchmark systems CH2, SiH2, and NH2, as well as to the lowest states of the carbon, nitrogen and oxygen atoms. In all cases comparisons are made with full configuration interaction results. The multi-reference (MR)-RE method is shown to provide very rapidly converging perturbation series.
Mary, a Pipeline to Aid Discovery of Optical Transients
NASA Astrophysics Data System (ADS)
Andreoni, I.; Jacobs, C.; Hegarty, S.; Pritchard, T.; Cooke, J.; Ryder, S.
2017-09-01
The ability to quickly detect transient sources in optical images and trigger multi-wavelength follow-up is key for the discovery of fast transients. These include rare and difficult-to-detect events such as kilonovae, supernova shock breakout, and 'orphan' Gamma-ray Burst afterglows. We present the Mary pipeline, a (mostly) automated tool to discover transients during high-cadence observations with the Dark Energy Camera at Cerro Tololo Inter-American Observatory (CTIO). The observations are part of the 'Deeper Wider Faster' programme, a multi-facility, multi-wavelength programme designed to discover fast transients, including counterparts to Fast Radio Bursts and gravitational waves. Our tests of the Mary pipeline on Dark Energy Camera images return a false positive rate of 2.2% and a missed fraction of 3.4%, obtained in less than 2 min, which proves the pipeline to be suitable for rapid and high-quality transient searches. The pipeline can be adapted to search for transients in data obtained with imagers other than the Dark Energy Camera.
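A toy sketch of the align-subtract-threshold core of difference-imaging searches like this one (illustrative only; a production pipeline such as Mary adds far more robust registration, PSF matching, and candidate vetting):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def find_candidates(science, template, nsigma=5.0):
    """Toy transient search: align the template to the science image,
    subtract, and flag pixels deviating by more than nsigma."""
    offset, _, _ = phase_cross_correlation(science, template)
    aligned = nd_shift(template, offset)          # sub-pixel shift
    diff = science - aligned
    thresh = nsigma * np.std(diff)
    ys, xs = np.where(np.abs(diff) > thresh)      # candidate pixels
    return list(zip(ys.tolist(), xs.tolist()))
```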
NASA Astrophysics Data System (ADS)
Kim, J.; Schumann, G.; Neal, J. C.; Lin, S.
2013-12-01
Earth is the only planet possessing an active hydrological system based on H2O circulation. However, after Mariner 9 discovered fluvial channels on Mars with features similar to Earth's, it became clear that some solid planets and satellites once had water flows or pseudo-hydrological systems of other liquids. After liquid water was identified as the agent of ancient martian fluvial activity, the valleys and channels on the martian surface were investigated by a number of remote sensing and in situ measurements. Among all available data sets, the stereo DTMs and orthoimages from various successful orbital sensors, such as the High Resolution Stereo Camera (HRSC), the Context Camera (CTX), and the High Resolution Imaging Science Experiment (HiRISE), are the most widely used to trace the origin and consequences of martian hydrological channels. However, geomorphological analysis with stereo DTMs and orthoimages over fluvial areas has some limitations, so a quantitative modeling method utilizing DTMs of various spatial resolutions is required. Thus, in this study we tested the application of hydraulic analysis with multi-resolution martian DTMs, constructed in line with Kim and Muller's (2009) approach. An advanced LISFLOOD-FP model (Bates et al., 2010), which simulates in-channel dynamic wave behavior by solving 2D shallow water equations without advection, was introduced to conduct a high-accuracy simulation together with 150-1.2 m DTMs over test sites including Athabasca and Bahram Valles. For application to the martian surface, the acceleration of gravity in LISFLOOD-FP was reduced to the martian value of 3.71 m s-2 and Manning's n (friction), the only free parameter in the model, was adjusted for martian gravity by scaling it. The approach employing multi-resolution stereo DTMs and LISFLOOD-FP was superior to other research cases using a single DTM source for hydraulic analysis. HRSC DTMs, covering 50-150 m resolutions, were used to trace rough routes of water flows over extensive target areas. Refinements through hydraulic simulations with CTX DTMs (~12-18 m resolution) and HiRISE DTMs (~1-4 m resolution) were then conducted, employing the output of the HRSC simulations as initial conditions. Thus, even limited coverage by high and very high resolution stereo DTMs enabled a high-precision hydraulic analysis reconstructing a whole fluvial event. In this manner, useful information for identifying the characteristics of martian fluvial activity, such as water depth along the timeline, flow direction, and travel time, was successfully retrieved for each target tributary. Together with these outputs of the hydraulic analysis, the local roughness and photogrammetric control of the stereo DTMs appeared to be crucial elements for accurate fluvial simulation. The potential of this study should be further explored for application to other extraterrestrial bodies where fluvial activity once existed, as well as to the major martian channels and valleys.
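A minimal 1D sketch of the inertial update that LISFLOOD-FP solves (following the Bates et al., 2010 formulation summarized above, with gravity set to the martian value; grid handling, stability control and boundary conditions are omitted simplifications, not the study's code):

```python
import numpy as np

G_MARS = 3.71   # m s-2, replacing Earth's 9.81 as done in the study

def inertial_step(h, z, q, dx, dt, n_manning, g=G_MARS):
    """One explicit step of the Bates et al. (2010) inertial formulation
    for per-unit-width discharge q between adjacent cells of a 1D profile
    (h: water depth, z: bed elevation, both length-n arrays; q: length n-1)."""
    eta = h + z                                        # water surface elevation
    hflow = np.maximum(np.maximum(h[:-1], h[1:]), 1e-6)
    slope = (eta[1:] - eta[:-1]) / dx
    # semi-implicit friction keeps the update stable for small depths
    q_new = (q - g * hflow * dt * slope) / \
            (1.0 + g * dt * n_manning**2 * np.abs(q) / hflow**(7.0 / 3.0))
    # continuity: update interior depths from discharge divergence
    h[1:-1] += dt * (q_new[:-1] - q_new[1:]) / dx
    return h, q_new
```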
Multi-fidelity methods for uncertainty quantification in transport problems
NASA Astrophysics Data System (ADS)
Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.
2016-12-01
We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, the re-scaled Multilevel Monte Carlo (rMLMC) method, based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multilevel Monte Carlo (MLMC) and reduced basis methods, and discuss the advantages of each approach.
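A minimal sketch of the MLMC telescoping estimator that these methods build on (the toy sampler below is an assumption purely for illustration, not one of the paper's flow problems):

```python
import numpy as np

rng = np.random.default_rng(1)

def mlmc_estimate(sampler, n_per_level):
    """Multilevel Monte Carlo telescoping estimator:
        E[Q_L] ~= E[Q_0] + sum_l E[Q_l - Q_{l-1}]
    sampler(level) must return one coupled pair (Q_l, Q_{l-1}) driven by
    the same random input (with Q_{-1} := 0 at the coarsest level)."""
    total = 0.0
    for level, n in enumerate(n_per_level):
        diffs = [np.subtract(*sampler(level)) for _ in range(n)]
        total += np.mean(diffs)
    return total

# Toy sampler: "resolution" only changes the discretization bias of E[X**2]
def sampler(level):
    x = rng.normal()
    fine = x**2 * (1 + 2.0**-(level + 1))                 # finer-grid estimate
    coarse = 0.0 if level == 0 else x**2 * (1 + 2.0**-level)
    return fine, coarse

# Most samples go to the cheap coarse level; the estimate approaches
# E[X**2] = 1 as levels are added.
print(mlmc_estimate(sampler, [4000, 400, 40]))
```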
NASA Astrophysics Data System (ADS)
Maloney, P. R.; Czakon, N. G.; Day, P. K.; Duan, R.; Gao, J.; Glenn, J.; Golwala, S.; Hollister, M.; LeDuc, H. G.; Mazin, B.; Noroozian, O.; Nguyen, H. T.; Sayers, J.; Schlaerth, J.; Vaillancourt, J. E.; Vayonakis, A.; Wilson, P.; Zmuidzinas, J.
2009-12-01
The MKID Camera project is a collaborative effort of Caltech, JPL, the University of Colorado, and UC Santa Barbara to develop a large-format, multi-color millimeter and submillimeter-wavelength camera for astronomy using microwave kinetic inductance detectors (MKIDs). These are superconducting micro-resonators fabricated from thin aluminum and niobium films. We couple the MKIDs to multi-slot antennas and measure the change in surface impedance produced by photon-induced breaking of Cooper pairs. The readout is almost entirely at room temperature and can be highly multiplexed; in principle hundreds or even thousands of resonators could be read out on a single feedline. The camera will have 576 spatial pixels that image simultaneously in four bands at 750, 850, 1100 and 1300 microns. It is scheduled for deployment at the Caltech Submillimeter Observatory in the summer of 2010. We present an overview of the camera design and readout and describe the current status of testing and fabrication.
NIKA2, a dual-band millimetre camera on the IRAM 30 m telescope to map the cold universe
NASA Astrophysics Data System (ADS)
Désert, F.-X.; Adam, R.; Ade, P.; André, P.; Aussel, H.; Beelen, A.; Benoît, A.; Bideaud, A.; Billot, N.; Bourrion, O.; Calvo, M.; Catalano, A.; Coiffard, G.; Comis, B.; Doyle, S.; Goupy, J.; Kramer, C.; Lagache, G.; Leclercq, S.; Lestrade, J.-F.; Macías-Pérez, J. F.; Maury, A.; Mauskopf, P.; Mayet, F.; Monfardini, A.; Pajot, F.; Pascale, E.; Perotto, L.; Pisano, G.; Ponthieu, N.; Revéret, V.; Ritacco, A.; Rodriguez, L.; Romero, C.; Roussel, H.; Ruppin, F.; Soler, J.; Schuster, K.; Sievers, A.; Triqueneaux, S.; Tucker, C.; Zylka, R.
2016-12-01
A consortium led by Institut Néel (Grenoble) has just finished installing NIKA2, a powerful new millimetre camera, on the IRAM 30 m telescope. It has an instantaneous field of view of 6.5 arcminutes at both 1.2 and 2.0 mm, with polarimetric capabilities at 1.2 mm. NIKA2 provides near diffraction-limited angular resolution (12 and 18 arcseconds, respectively). The 3 detector arrays are made of more than 1000 KIDs each. KIDs are new superconducting devices emerging as an alternative to bolometers. Commissioning is ongoing in 2016, with a likely opening to the IRAM community in early 2017. NIKA2 is a very promising multi-purpose instrument which will enable many scientific discoveries in the coming decade.
Performance and applications of GaAs:Cr-based Medipix detector in X-ray CT
NASA Astrophysics Data System (ADS)
Kozhevnikov, D.; Chelkov, G.; Demichev, M.; Gridin, A.; Smolyanskiy, P.; Zhemchugov, A.
2017-01-01
In recent years, the method of single-photon-counting X-ray μ-CT has been actively developed and applied in various fields. Results of our studies carried out using the MARS μ-CT scanner equipped with a GaAs Medipix-based camera are presented. The procedure of mechanical alignment of the scanner is described, including direct and indirect measurements of the spatial resolution. The software chain developed for data processing and reconstruction is reported. We demonstrate the possibility of applying the scanner to research in geology and medicine and provide demo images of geological samples (chrome spinellids, titanium magnetite ore) and medical samples (atherosclerotic plaque, abdominal aortic aneurysm). The first results of multi-energy scans using the GaAs:Cr-based camera are shown.
The Raffaello, a Multi-Purpose Logistics Module, arrives at KSC aboard a Beluga super transporter
NASA Technical Reports Server (NTRS)
1999-01-01
An Airbus Industrie A300-600ST 'Beluga' Super Transporter touches down at the Shuttle Landing Facility to deliver its cargo, the second Multi-Purpose Logistics Module (MPLM) for the International Space Station (ISS). One of Italy's major contributions to the ISS program, the MPLM, named Raffaello, is a reusable logistics carrier and the primary delivery system used to resupply and return station cargo requiring a pressurized environment. Weighing nearly 4.5 tons, the module measures 21 feet long and 15 feet in diameter. Raffaello will join Leonardo, the first Italian-built MPLM, in the Space Station Processing Facility for testing. NASA, Boeing, the Italian Space Agency and Alenia Aerospazio will provide engineering support.
The Raffaello, a Multi-Purpose Logistics Module, arrives at KSC aboard a Beluga super transporter
NASA Technical Reports Server (NTRS)
1999-01-01
An Airbus Industrie A300-600ST 'Beluga' Super Transporter lands in the rain at the Shuttle Landing Facility to deliver its cargo, the second Multi-Purpose Logistics Module (MPLM) for the International Space Station (ISS). One of Italy's major contributions to the ISS program, the MPLM, named Raffaello, is a reusable logistics carrier and the primary delivery system used to resupply and return station cargo requiring a pressurized environment. Weighing nearly 4.5 tons, the module measures 21 feet long and 15 feet in diameter. Raffaello will join Leonardo, the first Italian-built MPLM, in the Space Station Processing Facility for testing. NASA, Boeing, the Italian Space Agency and Alenia Aerospazio will provide engineering support.
NASA Astrophysics Data System (ADS)
Hotta, Aira; Sasaki, Takashi; Okumura, Haruhiko
2007-02-01
In this paper, we propose a novel display method to realize a high-resolution image in a central visual field for a hyper-realistic head dome projector. The method uses image processing based on the characteristics of human vision, namely, high central visual acuity and low peripheral visual acuity, and pixel shift technology, which is one of the resolution-enhancing technologies for projectors. The projected image with our method is a fine wide-viewing-angle image with high definition in the central visual field. We evaluated the psychological effects of the projected images with our method in terms of sensation of reality. According to the result, we obtained 1.5 times higher resolution in the central visual field and a greater sensation of reality by using our method.
Habitable Exoplanet Imaging Mission (HabEx): Architecture of the 4m Mission Concept
NASA Astrophysics Data System (ADS)
Kuan, Gary M.; Warfield, Keith R.; Mennesson, Bertrand; Kiessling, Alina; Stahl, H. Philip; Martin, Stefan; Shaklan, Stuart B.; amini, rashied
2018-01-01
The Habitable Exoplanet Imaging Mission (HabEx) study is tasked by NASA to develop a scientifically compelling and technologically feasible exoplanet direct imaging mission concept, with extensive general astrophysics capabilities, for the 2020 Decadal Survey in Astrophysics. The baseline architecture of this space-based observatory concept encompasses an unobscured 4m diameter aperture telescope flying in formation with a 72-meter diameter starshade occulter. This large aperture, ultra-stable observatory concept extends and enhances upon the legacy of the Hubble Space Telescope by allowing us to probe even fainter objects and peer deeper into the Universe in the same ultraviolet, visible, and near infrared wavelengths, and gives us the capability, for the first time, to image and characterize potentially habitable, Earth-sized exoplanets orbiting nearby stars. Revolutionary direct imaging of exoplanets will be undertaken using a high-contrast coronagraph and a starshade imager. General astrophysics science will be undertaken with two world-class instruments: a wide-field workhorse camera for imaging and multi-object grism spectroscopy, and a multi-object, multi-resolution ultraviolet spectrograph. This poster outlines the baseline architecture of the HabEx flagship mission concept.
NASA Astrophysics Data System (ADS)
Muller, Jan-Peter; Tao, Yu; Sidiropoulos, Panagiotis; Gwinner, Klaus; Willner, Konrad; Fanara, Lida; Waehlisch, Marita; van Gasselt, Stephan; Walter, Sebastian; Steikert, Ralf; Schreiner, Bjoern; Ivanov, Anton; Cantini, Federico; Wardlaw, Jessica; Morley, Jeremy; Sprinks, James; Giordano, Michele; Marsh, Stuart; Kim, Jungrack; Houghton, Robert; Bamford, Steven
2016-06-01
Understanding planetary atmosphere-surface exchange and extra-terrestrial surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 15 years, especially in 3D imaging of surface shape. This has led to the ability to overlay image data and derived information from different epochs, back in time to the mid 1970s, to examine changes through time, such as the recent discovery of mass movement, the tracking of inter-year seasonal changes, and the search for fresh craters. Within the EU FP-7 iMars project, we have developed a fully automated multi-resolution DTM processing chain, called the Coregistration ASP-Gotcha Optimised (CASP-GO), based on the open source NASA Ames Stereo Pipeline (ASP) [Tao et al., this conference], which is being applied to the production of planetwide DTMs and ORIs (OrthoRectified Images) from CTX and HiRISE. Alongside the production of individual strip CTX and HiRISE DTMs and ORIs, DLR [Gwinner et al., 2015] have processed HRSC mosaics of ORIs and DTMs for complete areas in a consistent manner using photogrammetric bundle block adjustment techniques. A novel automated co-registration and orthorectification chain has been developed by [Sidiropoulos & Muller, this conference]. Using the HRSC map products (both mosaics and orbital strips) as a map-base, it is being applied to many of the 400,000 level-1 EDR images taken by four NASA orbital cameras, namely the Viking Orbiter camera (VO), the Mars Orbiter Camera (MOC), the Context Camera (CTX), and the High Resolution Imaging Science Experiment (HiRISE), going back to 1976. A webGIS has been developed [van Gasselt et al., this conference] for displaying this time sequence of imagery and will be demonstrated with an example from one of the HRSC quadrangle map-sheets. Automated quality control techniques [Sidiropoulos & Muller, 2015] are applied to screen for suitable images, and these are extended to detect temporal changes in surface features such as mass movements, streaks, spiders, impact craters, CO2 geysers and Swiss Cheese terrain. For result verification these data mining techniques are then employed within a citizen science project in the Zooniverse family. Examples of data mining and its verification will be presented.
Hippo in Super Resolution from Super Panorama
1998-07-03
This view of the "Hippo," 25 meters to the west of the lander, was produced by combining the "Super Panorama" frames from the IMP camera. Super resolution was applied to help to address questions about the texture of this rock and what it might tell us about its mode of origin. The composite color frames that make up this anaglyph were produced for both the right and left eye of the IMP. These composites consist of more than 15 frames per eye (because multiple sequences covered the same area), taken with different color filters that were enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame that is sharper than an individual frame would be. These panchromatic frames were then colorized with the red, green, and blue filtered images from the same sequence. The color balance was adjusted to approximate the true color of Mars. The anaglyph view was produced by combining the left with the right eye color composite frames by assigning the left eye composite view to the red color plane and the right eye composite view to the green and blue color planes (cyan), to produce a stereo anaglyph mosaic. This mosaic can be viewed in 3-D on your computer monitor or in color print form by wearing red-blue 3-D glasses. http://photojournal.jpl.nasa.gov/catalog/PIA01421
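A minimal sketch of the anaglyph-composition step described in the caption (the channel assignment is as stated there: left eye to red, right eye to green and blue; the inputs are assumed to be co-registered RGB composites):

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Red/cyan stereo anaglyph: the left-eye composite goes to the red
    plane, the right-eye composite to the green and blue (cyan) planes."""
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]      # red   <- left eye
    out[..., 1] = right_rgb[..., 1]     # green <- right eye
    out[..., 2] = right_rgb[..., 2]     # blue  <- right eye
    return out
```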
Multiple-frame IR photo-recorder KIT-3M
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roos, E; Wilkins, P; Nebeker, N
2006-05-15
This paper reports the experimental results of a high-speed multi-frame infrared camera which has been developed in Sarov at VNIIEF. Earlier [1] we discussed the possibility of creating a multi-frame infrared photo-recorder with a framing frequency of about 1 MHz. The basis of the photo-recorder is a semiconductor ionization camera [2, 3], which converts IR radiation in the 1-10 micrometer spectral range into a visible image. Several sequential thermal images are registered by using the IR converter in conjunction with a multi-frame electron-optical camera. In the present report we discuss the performance characteristics of a prototype commercial 9-frame high-speed IR photo-recorder. The image converter records infrared images of thermal fields corresponding to temperatures ranging from 300 C to 2000 C with an exposure time of 1-20 μs at a frame frequency up to 500 kHz. The IR photo-recorder camera is useful for recording the time evolution of thermal fields in fast processes such as gas dynamics, ballistics, pulsed welding, thermal processing, the automotive industry, aircraft construction, and pulsed-power electric experiments, and for the measurement of spatial mode characteristics of IR-laser radiation.
Aerial multi-camera systems: Accuracy and block triangulation issues
NASA Astrophysics Data System (ADS)
Rupnik, Ewelina; Nex, Francesco; Toschi, Isabella; Remondino, Fabio
2015-03-01
Oblique photography has reached its maturity and has now been adopted for several applications. The number and variety of multi-camera oblique platforms available on the market is continuously growing. So far, few attempts have been made to study the influence of the additional cameras on the behaviour of the image block and comprehensive revisions to existing flight patterns are yet to be formulated. This paper looks into the precision and accuracy of 3D points triangulated from diverse multi-camera oblique platforms. Its coverage is divided into simulated and real case studies. Within the simulations, different imaging platform parameters and flight patterns are varied, reflecting both current market offerings and common flight practices. Attention is paid to the aspect of completeness in terms of dense matching algorithms and 3D city modelling - the most promising application of such systems. The experimental part demonstrates the behaviour of two oblique imaging platforms in real-world conditions. A number of Ground Control Point (GCP) configurations are adopted in order to point out the sensitivity of tested imaging networks and arising block deformations. To stress the contribution of slanted views, all scenarios are compared against a scenario in which exclusively nadir images are used for evaluation.
NASA Technical Reports Server (NTRS)
2007-01-01
This Mars Exploration Rover Opportunity Pancam 'super resolution' mosaic of the approximately 6 m (20 foot) high cliff face of the Cape Verde promontory was taken by the rover from inside Victoria Crater, during the rover's descent into Duck Bay. Super-resolution is an imaging technique which utilizes information from multiple pictures of the same target in order to generate an image with a higher resolution than any of the individual images. Cape Verde is a geologically rich outcrop and is teaching scientists about how rocks at Victoria crater were modified since they were deposited long ago. This image complements super resolution mosaics obtained at Cape St. Mary and Cape St. Vincent and is consistent with the hypothesis that Victoria crater is located in the middle of what used to be an ancient sand dune field. Many rover team scientists are hoping to be able to eventually drive the rover closer to these layered rocks in the hopes of measuring their chemistry and mineralogy. This is a Mars Exploration Rover Opportunity Panoramic Camera image mosaic acquired on sols 1342 and 1356 (November 2 and 17, 2007), and was constructed from a mathematical combination of 64 different blue filter (480 nm) images.
The Big Bang Theory--Coping with Multi-Religious Beliefs in the Super-Diverse Science Classroom
ERIC Educational Resources Information Center
De Carvalho, Roussel
2013-01-01
Large urban schools have to cope with a "super-diverse" population with a multireligious background in their classrooms. The job of the science teacher within this environment requires an ultra-sensitive pedagogical approach, and a deeper understanding of students' backgrounds and of scientific epistemology. Teachers must create a safe…
NASA Astrophysics Data System (ADS)
Terzopoulos, Demetri; Qureshi, Faisal Z.
Computer vision and sensor networks researchers are increasingly motivated to investigate complex multi-camera sensing and control issues that arise in the automatic visual surveillance of extensive, highly populated public spaces such as airports and train stations. However, they often encounter serious impediments to deploying and experimenting with large-scale physical camera networks in such real-world environments. We propose an alternative approach called "Virtual Vision", which facilitates this type of research through the virtual reality simulation of populated urban spaces, camera sensor networks, and computer vision on commodity computers. We demonstrate the usefulness of our approach by developing two highly automated surveillance systems comprising passive and active pan/tilt/zoom cameras that are deployed in a virtual train station environment populated by autonomous, lifelike virtual pedestrians. The easily reconfigurable virtual cameras distributed in this environment generate synthetic video feeds that emulate those acquired by real surveillance cameras monitoring public spaces. The novel multi-camera control strategies that we describe enable the cameras to collaborate in persistently observing pedestrians of interest and in acquiring close-up videos of pedestrians in designated areas.
A Multi-Resolution Data Structure for Two-Dimensional Morse Functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bremer, P-T; Edelsbrunner, H; Hamann, B
2003-07-30
The efficient construction of simplified models is a central problem in the field of visualization. We combine topological and geometric methods to construct a multi-resolution data structure for functions over two-dimensional domains. Starting with the Morse-Smale complex we build a hierarchy by progressively canceling critical points in pairs. The data structure supports mesh traversal operations similar to traditional multi-resolution representations.
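To make the pairwise-cancellation step concrete, here is a minimal sketch (illustrative names, not the authors' code) that orders candidate (extremum, saddle) cancellations of a 2D Morse-Smale complex by persistence; a full implementation would reinsert neighboring pairs with updated persistence after each cancellation rather than using a static heap.

```python
import heapq

def build_hierarchy(pairs, f):
    """pairs: iterable of (extremum_id, saddle_id) candidate cancellations.
    f: dict mapping critical point id -> function value.
    Returns cancellations ordered by persistence (least important first)."""
    heap = [(abs(f[s] - f[e]), e, s) for e, s in pairs]
    heapq.heapify(heap)
    hierarchy = []
    while heap:
        persistence, e, s = heapq.heappop(heap)
        hierarchy.append((persistence, e, s))  # cancel pair, coarsening the model
    return hierarchy
```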
Existence of topological multi-string solutions in Abelian gauge field theories
NASA Astrophysics Data System (ADS)
Han, Jongmin; Sohn, Juhee
2017-11-01
In this paper, we consider a general form of self-dual equations arising from Abelian gauge field theories coupled with the Einstein equations. By applying the super/subsolution method, we prove that topological multi-string solutions exist for any coupling constant, which improves previously known results. We provide two examples for application: the self-dual Einstein-Maxwell-Higgs model and the gravitational Maxwell gauged O(3) sigma model.
Multi-energy SXR cameras for magnetically confined fusion plasmas (invited)
NASA Astrophysics Data System (ADS)
Delgado-Aparicio, L. F.; Maddox, J.; Pablant, N.; Hill, K.; Bitter, M.; Rice, J. E.; Granetz, R.; Hubbard, A.; Irby, J.; Greenwald, M.; Marmar, E.; Tritz, K.; Stutman, D.; Stratton, B.; Efthimion, P.
2016-11-01
A compact multi-energy soft x-ray camera has been developed for time-, energy- and space-resolved measurements of the soft x-ray emissivity in magnetically confined fusion plasmas. Multi-energy soft x-ray imaging provides a unique opportunity for measuring, simultaneously, a variety of important plasma properties (Te, nZ, ΔZeff, and ne,fast). The electron temperature can be obtained by modeling the slope of the continuum radiation from ratios of the available brightness and inverted radial emissivity profiles over multiple energy ranges. Impurity density measurements are also possible using the line emission from medium- to high-Z impurities to separate the background as well as transient levels of metal contributions. This technique should also be explored as a burning plasma diagnostic in view of its simplicity and robustness.
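As an illustration of the continuum-slope idea, the following minimal sketch (not the authors' code) inverts a two-band emissivity ratio for Te, assuming a purely exponential bremsstrahlung continuum ~exp(-E/Te); the band edges and Te search range are illustrative, and the measured ratio must lie within the range achievable by the model.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

def band_emissivity(Te, lo, hi):
    # integral of exp(-E/Te) over the band [lo, hi] keV (arbitrary normalization)
    return quad(lambda E: np.exp(-E / Te), lo, hi)[0]

def Te_from_ratio(measured_ratio, band1=(2.0, 4.0), band2=(4.0, 8.0)):
    """Invert ratio(Te) = emissivity(band1)/emissivity(band2) for Te [keV].
    The ratio decreases monotonically with Te, so a bracketing root find works."""
    f = lambda Te: band_emissivity(Te, *band1) / band_emissivity(Te, *band2) - measured_ratio
    return brentq(f, 0.05, 20.0)  # search over a plausible Te range
```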
Spectrally resolved laser interference microscopy
NASA Astrophysics Data System (ADS)
Butola, Ankit; Ahmad, Azeem; Dubey, Vishesh; Senthilkumaran, P.; Singh Mehta, Dalip
2018-07-01
We developed a new quantitative phase microscopy technique, namely, spectrally resolved laser interference microscopy (SR-LIM), with which it is possible to quantify multi-spectral phase information of biological specimens without color crosstalk using a color CCD camera. It is a single-shot technique in which sequential switching on/off of red, green, and blue (RGB) light sources is not required. The method is implemented using a three-wavelength interference microscope and a customized compact grating-based imaging spectrometer fitted at the output port. The results of the USAF resolution chart while employing three different light sources, namely, a halogen lamp, light-emitting diodes, and lasers, are discussed and compared. Broadband light sources like the halogen lamp and light-emitting diodes lead to stretching in the spectrally decomposed images, whereas this is not observed in the case of narrow-band light sources, i.e. lasers. The proposed technique is further successfully employed for single-shot quantitative phase imaging of human red blood cells at three wavelengths simultaneously without color crosstalk. Using the present technique, one can also use a monochrome camera, even though the experiments are performed using multi-color light sources. Finally, SR-LIM is not limited to RGB wavelengths; it can be extended to red, near-infrared, and infrared wavelengths, which are suitable for various biological applications.
NASA Astrophysics Data System (ADS)
Carrasco, E.; Sánchez-Blanco, E.; García-Vargas, M. L.; Gil de Paz, A.; Páez, G.; Gallego, J.; Sánchez, F. M.; Vílchez, J. M.
2012-09-01
MEGARA is the next optical Integral-Field Unit (IFU) and Multi-Object Spectrograph (MOS) for the Gran Telescopio Canarias. The instrument offers two IFUs plus a Multi-Object Spectroscopy (MOS) mode: a large compact bundle covering 12.5 arcsec x 11.3 arcsec on sky with 100 μm fiber core; a small compact bundle of 8.5 arcsec x 6.7 arcsec with 70 μm fiber core; and a fiber MOS positioner that allows placing up to 100 mini-bundles, of 7 fibers each with 100 μm fiber core, within a 3.5 arcmin x 3.5 arcmin field of view around the two IFUs. The fibers, organized in bundles, end in the pseudo-slit plate, which will be placed at the entrance focal plane of the MEGARA spectrograph. The large IFU and MOS modes will provide intermediate to high spectral resolutions, R=6800-17000. The small IFU mode will provide R=8000-20000. All these resolutions are possible thanks to a spectrograph design based on the use of volume phase holographic gratings in combination with prisms, which keeps the collimator and camera angle fixed. The MEGARA optics comprises a total of 53 large optical elements per spectrograph: the field lens, the collimator and camera lenses, plus the complete set of pupil elements including holograms, windows and prisms. INAOE, a partner of the GTC and of the MEGARA consortium, is responsible for the optics manufacturing and tests. INAOE will carry out this project in an alliance with CIO. This paper summarizes the status of the MEGARA spectrograph optics at the Preliminary Design Review, held in March 2012.
NASA Astrophysics Data System (ADS)
Iglesias, F. A.; Feller, A.; Nagaraju, K.; Solanki, S. K.
2016-05-01
Context. Remote sensing of weak and small-scale solar magnetic fields is of utmost relevance when attempting to answer a number of important open questions in solar physics. This requires the acquisition of spectropolarimetric data with high spatial resolution (~10^-1 arcsec) and low noise (10^-3 to 10^-5 of the continuum intensity). The main limitations to obtaining these measurements from the ground are the degradation of the image resolution produced by atmospheric seeing and the seeing-induced crosstalk (SIC). Aims: We introduce the prototype of the Fast Solar Polarimeter (FSP), a new ground-based, high-cadence polarimeter that tackles the above-mentioned limitations by producing data that are optimally suited for the application of post-facto image restoration, and by operating at a modulation frequency of 100 Hz to reduce SIC. Methods: We describe the instrument in depth, including the fast pnCCD camera employed, the achromatic modulator package, the main calibration steps, the effects of the modulation frequency on the levels of seeing-induced spurious signals, and the effect of the camera properties on the image restoration quality. Results: The pnCCD camera reaches 400 fps while keeping a high duty cycle (98.6%) and very low noise (4.94 e- rms). The modulator is optimized to have high (>80%) total polarimetric efficiency in the visible spectral range. This allows FSP to acquire 100 photon-noise-limited, full-Stokes measurements per second. We found that the seeing-induced signals that are present in narrow-band, non-modulated, quiet-sun measurements are (a) lower than the noise (7 × 10^-5) after integrating 7.66 min, (b) lower than the noise (2.3 × 10^-4) after integrating 1.16 min and (c) slightly above the noise (4 × 10^-3) after restoring case (b) by means of a multi-object multi-frame blind deconvolution. In addition, we demonstrate that by using only narrow-band images (with a low S/N of 13.9) of an active region, we can obtain one complete set of high-quality restored measurements about every 2 s.
Super-hydrophobic multi-walled carbon nanotube coatings for stainless steel.
De Nicola, Francesco; Castrucci, Paola; Scarselli, Manuela; Nanni, Francesca; Cacciotti, Ilaria; De Crescenzi, Maurizio
2015-04-10
We have taken advantage of the native surface roughness and the iron content of AISI 316 stainless steel to directly grow multi-walled carbon nanotube (MWCNT) random networks by chemical vapor deposition (CVD) at low temperature (1000°C) without the addition of any external catalysts or time-consuming pre-treatments. In this way, super-hydrophobic MWCNT films on stainless steel sheets were obtained, exhibiting high contact angle values (154°) and high adhesion force (high contact angle hysteresis). Furthermore, the investigation of MWCNT films with scanning electron microscopy (SEM) reveals a two-fold hierarchical morphology of the MWCNT random networks made of hydrophilic carbonaceous nanostructures on the tips of hydrophobic MWCNTs. Owing to the Salvinia effect, the hydrophobic and hydrophilic composite surface of the MWCNT films supplies a stationary super-hydrophobic coating for conductive stainless steel. This biomimetically inspired surface may not only prevent corrosion and fouling, but could also provide low friction and drag reduction.
Independent CMEs from a Single Solar Active Region - The Case of the Super-Eruptive NOAA AR11429
NASA Astrophysics Data System (ADS)
Chintzoglou, Georgios; Patsourakos, Spiros; Vourlidas, Angelos
2014-06-01
In this investigation we study AR 11429, the origin of the twin super-fast CME eruptions of 07-Mar-2012. This AR fulfills all the requirements for the 'perfect storm'; namely, Hale's law incompatibility and a delta-magnetic configuration. In fact, during its limb-to-limb transit, AR 11429 spawned several eruptions which caused geomagnetic storms, including the biggest in Cycle 24 so far. Magnetic Flux Ropes (MFRs) are twisted magnetic structures in the corona, best seen in ~10 MK hot plasma emission, and are often considered the culprit behind such super-eruptions. However, their 'dormant' existence in the solar atmosphere (i.e. prior to eruptions) is a matter of strong debate. Aided by multi-wavelength and multi-spacecraft observations (SDO/HMI & AIA, HINODE/SOT/SP, STEREO B/EUVI) and by using a Non-Linear Force-Free (NLFFF) model for the coronal magnetic field, our work shows two separate, weakly twisted magnetic flux systems which suggest the existence of possible pre-eruption MFRs.
Nanophotonic projection system.
Aflatouni, Firooz; Abiri, Behrooz; Rekhi, Angad; Hajimiri, Ali
2015-08-10
Low-power integrated projection technology can play a key role in the development of low-cost mobile devices with built-in high-resolution projectors. Low-cost 3D imaging and holography systems are also among the applications of such a technology. In this paper, an integrated projection system based on a two-dimensional optical phased array with fast beam-steering capability is reported. Forward-biased p-i-n phase modulators with 200 MHz bandwidth are used for each array element for rapid phase control. An optimization algorithm is implemented to compensate for the phase-dependent attenuation of the p-i-n modulators. Using a rapid vector-scanning technique, images were formed and recorded within a single snapshot of the IR camera.
Techniques for High-contrast Imaging in Multi-star Systems. II. Multi-star Wavefront Control
NASA Astrophysics Data System (ADS)
Sirbu, D.; Thomas, S.; Belikov, R.; Bendek, E.
2017-11-01
Direct imaging of exoplanets represents a challenge for astronomical instrumentation due to the high-contrast ratio and small angular separation between the host star and the faint planet. Multi-star systems pose additional challenges for coronagraphic instruments due to the diffraction and aberration leakage caused by companion stars. Consequently, many scientifically valuable multi-star systems are excluded from direct imaging target lists for exoplanet surveys and characterization missions. Multi-star Wavefront Control (MSWC) is a technique that uses a coronagraphic instrument’s deformable mirror (DM) to create high-contrast regions in the focal plane in the presence of multiple stars. MSWC uses “non-redundant” modes on the DM to independently control speckles from each star in the dark zone. Our previous paper also introduced the Super-Nyquist wavefront control technique, which uses a diffraction grating to generate high-contrast regions beyond the Nyquist limit (nominal region correctable by the DM). These two techniques can be combined as MSWC-s to generate high-contrast regions for multi-star systems at wide (Super-Nyquist) angular separations, while MSWC-0 refers to close (Sub-Nyquist) angular separations. As a case study, a high-contrast wavefront control simulation that applies these techniques shows that the habitable region of the Alpha Centauri system can be imaged with a small aperture at 8 × 10^-9 mean raw contrast in 10% broadband light in one-sided dark holes from 1.6-5.5 λ/D. Another case study using a larger 2.4 m aperture telescope such as the Wide-Field Infrared Survey Telescope uses these techniques to image the habitable zone of Alpha Centauri at 3.2 × 10^-9 mean raw contrast in monochromatic light.
Early Results from the Odyssey THEMIS Investigation
NASA Technical Reports Server (NTRS)
Christensen, Philip R.; Bandfield, Joshua L.; Bell, James F., III; Hamilton, Victoria E.; Ivanov, Anton; Jakosky, Bruce M.; Kieffer, Hugh H.; Lane, Melissa D.; Malin, Michael C.; McConnochie, Timothy
2003-01-01
The Thermal Emission Imaging System (THEMIS) began studying the surface and atmosphere of Mars in February 2002 using thermal infrared (IR) multi-spectral imaging between 6.5 and 15 µm, and visible/near-IR images from 450 to 850 nm. The infrared observations continue a long series of spacecraft observations of Mars, including the Mariner 6/7 Infrared Spectrometer, the Mariner 9 Infrared Interferometer Spectrometer (IRIS), the Viking Infrared Thermal Mapper (IRTM) investigations, the Phobos Termoscan, and the Mars Global Surveyor Thermal Emission Spectrometer (MGS TES). The THEMIS investigation's specific objectives are to: (1) determine the mineralogy of localized deposits associated with hydrothermal or sub-aqueous environments, and to identify future landing sites likely to represent these environments; (2) search for thermal anomalies associated with active sub-surface hydrothermal systems; (3) study small-scale geologic processes and landing site characteristics using morphologic and thermophysical properties; (4) investigate polar cap processes at all seasons; and (5) provide a high spatial resolution link to the global hyperspectral mineral mapping from the TES investigation. THEMIS provides substantially higher spatial resolution IR multi-spectral images to complement TES hyperspectral (143-band) global mapping, and regional visible imaging at scales intermediate between the Viking and MGS cameras.
NASA Technical Reports Server (NTRS)
Varosi, F.; Gezari, D.; Dwek, E.; Telesco, C.
2016-01-01
We have analyzed multi-wavelength mid-infrared images of the central parsec of the Galactic Center using a two-temperature line-of-sight (LOS) radiative transfer model at each pixel of the images, giving maps of temperatures, luminosities and opacities of the hot, warm, and cold (dark) dust components. The data consist of images at nine wavelengths in the mid-infrared (N-band and Q-band) from the Thermal Region Camera and Spectrograph (T-ReCS) instrument operating at the Gemini South Observatory. The results of the LOS modeling indicate that the extinction optical depth is quite large and varies substantially over the FOV. The high-resolution images of the central parsec of the Galactic center region were obtained with T-ReCS at Gemini South in January 2004. These images provide nearly diffraction-limited resolution (approx. 0.5 arcsec) of the central parsec. The T-ReCS images were taken with nine filters (3.8, 4.7, 7.7, 8.7, 9.7, 10.3, 12.3, 18.3 and 24.5 µm), over a field of view (FOV) of 20 x 20 arcsec.
Adaptive multi-resolution Modularity for detecting communities in networks
NASA Astrophysics Data System (ADS)
Chen, Shi; Wang, Zhi-Zhong; Bao, Mei-Hua; Tang, Liang; Zhou, Ji; Xiang, Ju; Li, Jian-Ming; Yi, Chen-He
2018-02-01
Community structure is a common topological property of complex networks, which has attracted much attention from various fields. Optimizing quality functions for community structures is a popular strategy for community detection, Modularity optimization being a prominent example. Here, we introduce a general definition of Modularity, from which several classical (multi-resolution) Modularity functions can be derived, and then propose a kind of adaptive (multi-resolution) Modularity that can combine the advantages of different Modularity functions. By applying the Modularity to various synthetic and real-world networks, we study the behaviors of the methods, showing the validity and advantages of the multi-resolution Modularity in community detection. The adaptive Modularity, as a multi-resolution method, can naturally overcome the first-type limit of Modularity and detect communities at different scales; it can quicken the disconnection of communities and delay their breakup in heterogeneous networks; and thus it is expected to generate stable community structures in networks more effectively and to have stronger tolerance against the second-type limit of Modularity.
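For concreteness, here is a hedged sketch of the standard resolution-parameterized Modularity that such multi-resolution definitions build on (the paper's general and adaptive forms are not reproduced here); the O(n^2) double loop is written for clarity, not efficiency.

```python
import networkx as nx

def modularity(G, partition, gamma=1.0):
    """Resolution-parameterized Modularity of a partition.
    G: undirected networkx graph; partition: dict node -> community label;
    gamma: resolution parameter (gamma=1 recovers Newman-Girvan Modularity,
    larger gamma favors smaller communities)."""
    m = G.number_of_edges()
    deg = dict(G.degree())
    q = 0.0
    for u in G:
        for v in G:
            if partition[u] == partition[v]:
                a = 1.0 if G.has_edge(u, v) else 0.0
                q += a - gamma * deg[u] * deg[v] / (2.0 * m)
    return q / (2.0 * m)
```

Scanning `gamma` and re-optimizing the partition at each value exposes the community structure at different scales, which is the basic multi-resolution workflow the adaptive variant builds on.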
Design and testing of a novel multi-stroke micropositioning system with variable resolutions.
Xu, Qingsong
2014-02-01
Multi-stroke stages are demanded in micro-/nanopositioning applications which require smaller and larger motion strokes with fine and coarse resolutions, respectively. This paper presents the conceptual design of a novel multi-stroke, multi-resolution micropositioning stage driven by a single actuator for each working axis. It eliminates the interference among different drives that arises in conventional multi-actuation stages. The stage is devised based on a fully compliant variable-stiffness mechanism, which exhibits unequal stiffnesses in different strokes. Resistive strain sensors are employed to offer variable position resolutions in the different strokes. To quantify the design of the motion strokes and the coarse/fine resolution ratio, analytical models are established. These models are verified through finite-element analysis simulations. A proof-of-concept prototype XY stage is designed, fabricated, and tested to demonstrate the feasibility of the presented ideas. Experimental results of static and dynamic testing validate the effectiveness of the proposed design.
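To illustrate the variable-stiffness idea, a minimal piecewise-linear sketch follows (illustrative numbers, not the paper's design values or analytical models): stiffness k1 within the first stroke and k2 beyond it, so one actuator produces displacement per unit force that differs by k2/k1 between the strokes, which is what sets the coarse/fine resolution ratio.

```python
def displacement(force, k1=1.0e3, k2=1.0e4, stroke1=1.0e-4):
    """Force [N] -> displacement [m] for a two-stiffness compliant stage."""
    f_switch = k1 * stroke1              # force at which the first stroke ends
    if abs(force) <= f_switch:
        return force / k1                # first stroke: compliant regime
    x = stroke1 + (abs(force) - f_switch) / k2   # second stroke: stiff regime
    return x if force > 0 else -x
```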
The technical consideration of multi-beam mask writer for production
NASA Astrophysics Data System (ADS)
Lee, Sang Hee; Ahn, Byung-Sup; Choi, Jin; Shin, In Kyun; Tamamushi, Shuichi; Jeon, Chan-Uk
2016-10-01
The multi-beam mask writer is under development to solve the throughput and patterning-resolution problems of the VSB mask writer. Theoretically, the writing time is appropriate for future design nodes, and the resolution is improved with the multi-beam mask writer. Many previous studies show feasible results for resolution, CD control and registration. Although such technical results from the development tool seem sufficient for mass production, many unexpected problems remain for real mass production. In this report, the technical challenges of the multi-beam mask writer are discussed in terms of production and application. The problems and issues are defined based on the performance of the current development tool compared with the requirements of mask quality. Using simulation and experiment, we analyze the specific characteristics of the electron beam in the multi-beam mask writer scheme. Consequently, we suggest necessary specifications for mass production with the multi-beam mask writer in the future.
NASA Astrophysics Data System (ADS)
Pattke, Marco; Martin, Manuel; Voit, Michael
2017-05-01
Tracking people with cameras in public areas is common today. However, with an increasing number of cameras it becomes ever harder to review the data manually. Especially in safety-critical areas, automatic image exploitation could help to solve this problem. Setting up such a system can, however, be difficult because of its complexity: sensor placement is critical to ensure that people are detected and tracked reliably. We address this problem with a simulation framework that can simulate different camera setups in the desired environment, including animated characters. We combine this framework with our self-developed distributed and scalable system for people tracking to test its effectiveness, and we can show the results of the tracking system in real time in the simulated environment.
Multi-camera sensor system for 3D segmentation and localization of multiple mobile robots.
Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta
2010-01-01
This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras is placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of the segmentation boundaries and depth, are repeated until convergence.
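The three-step loop admits a compact sketch. The version below is a hedged pseudocode skeleton (the per-step minimizers are placeholders passed in as callables, not the paper's actual update rules) showing the alternating structure: update one variable group while holding the other two fixed, then check for convergence of the objective.

```python
def segment_and_localize(objective, updates, boundaries, depth,
                         tol=1e-6, max_iter=100):
    """objective(boundaries, motion, depth) -> cost over all cameras.
    updates: (update_motion, update_depth, update_boundaries) callables,
    stand-ins for the paper's per-step minimizers."""
    update_motion, update_depth, update_boundaries = updates
    motion = None
    prev = float("inf")
    for _ in range(max_iter):
        motion = update_motion(boundaries, depth)        # step 1: motion params
        depth = update_depth(boundaries, motion)         # step 2: depth
        boundaries = update_boundaries(motion, depth)    # step 3: segmentation
        cost = objective(boundaries, motion, depth)
        if prev - cost < tol:    # objective no longer decreases: converged
            break
        prev = cost
    return boundaries, motion, depth
```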
Light field measurement based on the single-lens coherent diffraction imaging
NASA Astrophysics Data System (ADS)
Shen, Cheng; Tan, Jiubin; Liu, Zhengjun
2018-01-01
Plenoptic cameras and holography are popular light field measurement techniques. However, low resolution or complex apparatus hinders their widespread application. In this paper, we put forward a new light field measurement scheme. A lens is introduced into coherent diffraction imaging to perform an optical transform, the extended fractional Fourier transform. Combined with a multi-image phase retrieval algorithm, the scheme is shown to hold several advantages: it removes the support requirement and is much easier to implement, while keeping a high resolution by making full use of the detector plane. It is also verified that our scheme outperforms direct lens-focusing imaging in amplitude measurement accuracy and phase retrieval ability.
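A generic multi-image phase retrieval loop of the kind named above can be sketched as follows. This is a hedged illustration, not the paper's algorithm: the propagation operators are placeholders (the paper uses the extended fractional Fourier transform realized by the lens), and the loop simply cycles through the measured intensity planes, replacing the modulus with the measurement while keeping the estimated phase.

```python
import numpy as np

def multi_image_phase_retrieval(amplitudes, propagate, back_propagate, n_iter=200):
    """amplitudes: list of measured amplitude images (sqrt of intensity);
    propagate(field, k) / back_propagate(field, k): callables mapping the
    field between the object plane and measurement plane k (placeholders
    for the optical transform)."""
    field = amplitudes[0].astype(complex)              # initial guess, zero phase
    for _ in range(n_iter):
        for k, amp in enumerate(amplitudes):
            meas = propagate(field, k)
            meas = amp * np.exp(1j * np.angle(meas))   # enforce measured modulus
            field = back_propagate(meas, k)
    return field
```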
Focal Plane Detectors for the Advanced Gamma-Ray Imaging System (AGIS)
NASA Astrophysics Data System (ADS)
Wagner, R. G.; Byrum, K.; Drake, G.; Funk, S.; Otte, N.; Smith, A.; Tajima, H.; Williams, D.
2009-05-01
The Advanced Gamma-Ray Imaging System (AGIS) is a concept for the next-generation observatory in ground-based very high energy gamma-ray astronomy. It is being designed to achieve a significant improvement in sensitivity compared to current Imaging Air Cherenkov Telescope (IACT) arrays. One of the main requirements for AGIS to fulfill this goal will be higher angular resolution than current IACTs achieve. Simulations show that a substantial improvement in angular resolution may be achieved if the pixel size is reduced to 0.05 deg, i.e. two to three times smaller than for current IACT cameras. Here we present results from testing of alternatives being considered for AGIS, including both silicon photomultipliers (SiPMs) and multi-anode photomultipliers (MAPMTs).
Integrated Arrays on Silicon at Terahertz Frequencies
NASA Technical Reports Server (NTRS)
Chattopadhayay, Goutam; Lee, Choonsup; Jung, Cecil; Lin, Robert; Peralta, Alessandro; Mehdi, Imran; Llombert, Nuria; Thomas, Bertrand
2011-01-01
In this paper we explore various receiver front-end and antenna architectures for use in integrated arrays at terahertz frequencies. The development of a wafer-level integrated terahertz receiver front-end using advanced semiconductor fabrication technologies, and the use of novel integrated antennas realized with silicon micromachining, are reported. We report novel stacking of micromachined silicon wafers which allows for the 3-dimensional integration of various terahertz receiver components in extremely small packages, and which readily leads to the development of 2-dimensional multi-pixel receiver front-ends in the terahertz frequency range. We also report an integrated micro-lens antenna that goes with the silicon micromachined front-end. The micro-lens antenna is fed by a waveguide that excites a silicon lens antenna through a leaky-wave or electromagnetic band gap (EBG) resonant cavity. We utilized advanced semiconductor nanofabrication techniques to design, fabricate, and demonstrate a super-compact, low-mass submillimeter-wave heterodyne front-end. When the micro-lens antenna is integrated with the receiver front-end, we will be able to assemble integrated heterodyne array receivers for various applications such as multi-pixel high-resolution spectrometers and imaging radar at terahertz frequencies.
Tian, Junlong; Pan, Feng; Xue, Ruiyang; Zhang, Wang; Fang, Xiaotian; Liu, Qinglei; Wang, Yuhua; Zhang, Zhijian; Zhang, Di
2015-05-07
A tin oxide multi-tube array (SMTA) with a parallel effect was fabricated through a simple and promising method combining chemosynthesis and biomimetic techniques; the biomimetic template was derived from the bristles on the wings of the Alpine Black Swallowtail butterfly (Papilio maackii). The SnO2 tubes are hollow, porous structures with micro-pores regularly distributed on the walls. The morphology, delicate microstructure and crystal structure of the SMTA were characterized by super-resolution digital microscopy, scanning electron microscopy, transmission electron microscopy and X-ray diffraction. The SMTA exhibits a high sensitivity to H2S gas at room temperature. It also exhibits a short response/recovery time, with an average value of 14/30 s at 5 ppm. In particular, no heating is required for the SMTA during the gas sensitivity measurement. On the basis of these results, the SMTA is proposed as a suitable new material for the design and fabrication of room-temperature H2S gas sensors.
Atom Probe Tomography Analysis of Gallium-Nitride-Based Light-Emitting Diodes
NASA Astrophysics Data System (ADS)
Prosa, Ty J.; Olson, David; Giddings, A. Devin; Clifton, Peter H.; Larson, David J.; Lefebvre, Williams
2014-03-01
Thin-film light-emitting diodes (LEDs) composed of GaN/InxGa1-xN/GaN quantum well (QW) structures are integrated into modern optoelectronic devices because the tunable InGaN band gap enables emission across the full visible spectrum. Atom probe tomography (APT) offers unique capabilities for 3D device characterization, including compositional mapping of nano-volumes (>10^6 nm^3), high detection efficiency (>50%), and good sensitivity. In this study, APT is used to understand the distribution of dopants as well as Al and In alloying agents in a GaN device. Measurements using transmission electron microscopy (TEM) and secondary ion mass spectrometry (SIMS) have also been made to improve the accuracy of the APT analysis by correlating the information content of these complementary techniques. APT analysis reveals various QW and other optoelectronic structures, including a Mg p-GaN layer, an Al-rich electron blocking layer, an In-rich multi-QW region, and an In-based super-lattice structure. The multi-QW composition shows good quantitative agreement with layer thickness and spacing extracted from a high-resolution TEM image intensity analysis.
Low Noise Titanium Nitride KIDs for SuperSpec: A Millimeter-Wave On-Chip Spectrometer
NASA Astrophysics Data System (ADS)
Hailey-Dunsheath, S.; Shirokoff, E.; Barry, P. S.; Bradford, C. M.; Chapman, S.; Che, G.; Glenn, J.; Hollister, M.; Kovács, A.; LeDuc, H. G.; Mauskopf, P.; McKenney, C.; O'Brient, R.; Padin, S.; Reck, T.; Shiu, C.; Tucker, C. E.; Wheeler, J.; Williamson, R.; Zmuidzinas, J.
2016-07-01
SuperSpec is a novel on-chip spectrometer we are developing for multi-object, moderate resolution (R = 100-500), large bandwidth (~1.65:1), submillimeter and millimeter survey spectroscopy of high-redshift galaxies. The spectrometer employs a filter bank architecture, and consists of a series of half-wave resonators formed by lithographically-patterned superconducting transmission lines. The signal power admitted by each resonator is detected by a lumped element titanium nitride (TiN) kinetic inductance detector operating at 100-200 MHz. We have tested a new prototype device that achieves the targeted R=100 resolving power, and has better detector sensitivity and optical efficiency than previous devices. We employ a new method for measuring photon noise using both coherent and thermal sources of radiation to cleanly separate the contributions of shot and wave noise. We report an upper limit to the detector NEP of 1.4 × 10^-17 W Hz^-1/2, within 10% of the photon noise-limited NEP for a ground-based R=100 spectrometer.
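The shot/wave separation rests on the textbook photon-noise decomposition (our gloss, not the authors' derivation): a coherent source carries only the shot term, while a thermal source also carries the wave (bunching) term, so measurements with both source types isolate the two contributions.

```latex
% Textbook photon-noise decomposition for detected power P at frequency \nu
% within optical bandwidth \Delta\nu (an illustration, not the paper's
% exact expressions):
\mathrm{NEP}_{\mathrm{photon}}^{2}
  = \underbrace{2 h \nu P}_{\text{shot}}
  + \underbrace{\frac{2 P^{2}}{\Delta\nu}}_{\text{wave (bunching)}}
```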
Smart-phone based computational microscopy using multi-frame contact imaging on a fiber-optic array.
Navruz, Isa; Coskun, Ahmet F; Wong, Justin; Mohammad, Saqib; Tseng, Derek; Nagi, Richie; Phillips, Stephen; Ozcan, Aydogan
2013-10-21
We demonstrate a cellphone based contact microscopy platform, termed Contact Scope, which can image highly dense or connected samples in transmission mode. Weighing approximately 76 grams, this portable and compact microscope is installed on the existing camera unit of a cellphone using an opto-mechanical add-on, where planar samples of interest are placed in contact with the top facet of a tapered fiber-optic array. This glass-based tapered fiber array has ~9 fold higher density of fiber optic cables on its top facet compared to the bottom one and is illuminated by an incoherent light source, e.g., a simple light-emitting-diode (LED). The transmitted light pattern through the object is then sampled by this array of fiber optic cables, delivering a transmission image of the sample onto the other side of the taper, with ~3× magnification in each direction. This magnified image of the object, located at the bottom facet of the fiber array, is then projected onto the CMOS image sensor of the cellphone using two lenses. While keeping the sample and the cellphone camera at a fixed position, the fiber-optic array is then manually rotated with discrete angular increments of e.g., 1-2 degrees. At each angular position of the fiber-optic array, contact images are captured using the cellphone camera, creating a sequence of transmission images for the same sample. These multi-frame images are digitally fused together based on a shift-and-add algorithm through a custom-developed Android application running on the smart-phone, providing the final microscopic image of the sample, visualized through the screen of the phone. This final computation step improves the resolution and also removes spatial artefacts that arise due to non-uniform sampling of the transmission intensity at the fiber optic array surface. We validated the performance of this cellphone based Contact Scope by imaging resolution test charts and blood smears.
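The shift-and-add fusion step admits a compact generic illustration. The sketch below is not the authors' Android implementation: it assumes the per-frame shifts are already known (e.g. estimated by cross-correlation registration) and uses a naive nearest-neighbor upsampling before accumulation.

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def shift_and_add(frames, shifts, upsample=3):
    """frames: list of 2D arrays; shifts: per-frame (dy, dx) in input pixels.
    Returns the averaged frame on a grid `upsample` times finer."""
    up = [np.kron(f, np.ones((upsample, upsample))) for f in frames]  # naive upsample
    acc = np.zeros_like(up[0], dtype=float)
    for f, (dy, dx) in zip(up, shifts):
        # register each frame onto the common fine grid, then accumulate
        acc += subpixel_shift(f, (dy * upsample, dx * upsample), order=1)
    return acc / len(frames)
```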
Analyzing gene expression time-courses based on multi-resolution shape mixture model.
Li, Ying; He, Ye; Zhang, Yu
2016-11-01
Biological processes are dynamic molecular processes over time. Time-course gene expression experiments provide opportunities to explore patterns of gene expression change over time and to understand the dynamic behavior of gene expression, which is crucial for studying the development and progression of biological processes and disease. Analysis of gene expression time-course profiles has not been fully exploited so far and remains a challenging problem. We propose a novel shape-based mixture model clustering method for gene expression time-course profiles to explore significant gene groups. Based on multi-resolution fractal features and a mixture clustering model, we propose a multi-resolution shape mixture model algorithm. The multi-resolution fractal features are computed by wavelet decomposition, exploring patterns of change in gene expression over time at different resolutions. Our proposed multi-resolution shape mixture model algorithm is a probabilistic framework which offers a more natural and robust way of clustering time-course gene expression. We assessed the performance of our proposed algorithm using yeast time-course gene expression profiles, compared with several popular clustering methods for gene expression profiles. The gene groups identified by the different methods are evaluated by enrichment analysis of biological pathways and of known protein-protein interactions from experimental evidence. The gene groups identified by our proposed algorithm have stronger biological significance. In summary, a novel multi-resolution shape mixture model algorithm based on multi-resolution fractal features is proposed; it provides new horizons and an alternative tool for the visualization and analysis of time-course gene expression profiles. The R and Matlab programs are available upon request.
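As a rough illustration of the two named ingredients, the following sketch uses standard stand-ins: per-level wavelet energies (PyWavelets) as crude multi-resolution features and a Gaussian mixture (scikit-learn) for clustering. The paper's fractal features and shape-based mixture model are more elaborate than this.

```python
import numpy as np
import pywt
from sklearn.mixture import GaussianMixture

def wavelet_features(profile, wavelet="db2", level=3):
    """One energy per decomposition level as a multi-resolution shape feature."""
    coeffs = pywt.wavedec(profile, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

def cluster_time_courses(profiles, n_groups=5):
    """profiles: equal-length 1D expression time courses -> group labels."""
    X = np.vstack([wavelet_features(p) for p in profiles])
    return GaussianMixture(n_components=n_groups, random_state=0).fit_predict(X)
```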
NASA Astrophysics Data System (ADS)
Con, Celal; Cui, Bo
2017-12-01
This paper describes a simple and low-cost fabrication method for multi-functional nanostructures with outstanding anti-reflective and super-hydrophobic properties. Our method employed phase separation of a metal salt-polymer nanocomposite film that leads to nanoisland formation after etching away the polymer matrix, and the metal salt island can then be utilized as a hard mask for dry etching the substrate or sublayer. Compared to many other methods for patterning metallic hard mask structures, such as the popular lift-off method, our approach involves only spin coating and thermal annealing, thus is more cost-efficient. Metal salts including aluminum nitrate nonahydrate (ANN) and chromium nitrate nonahydrate (CNN) can both be used, and high aspect ratio (1:30) and high-resolution (sub-50 nm) pillars etched into silicon can be achieved readily. With further control of the etching profile by adjusting the dry etching parameters, cone-like silicon structure with reflectivity in the visible region down to a remarkably low value of 2% was achieved. Lastly, by coating a hydrophobic surfactant layer, the pillar array demonstrated a super-hydrophobic property with an exceptionally high water contact angle of up to 165.7°.
NASA Technical Reports Server (NTRS)
Drummond, Mark; Hine, Butler; Genet, Russell; Genet, David; Talent, David; Boyd, Louis; Trueblood, Mark; Filippenko, Alexei V. (Editor)
1991-01-01
The objective of multi-use telescopes is to reduce the initial and operational costs of space telescopes to the point where a fair number of telescopes, a dozen or so, would be affordable. The basic approach is to develop a common telescope, control system, and power and communications subsystem that can be used with a wide variety of instrument payloads, e.g., imaging CCD cameras, photometers, spectrographs, etc. By having such a multi-use and multi-user telescope, a common practice for earth-based telescopes, development cost can be shared across many telescopes, and the telescopes can be produced in economical batches.
Design and characterization of an ultraresolution seamlessly tiled display for data visualization
NASA Astrophysics Data System (ADS)
Bordes, Nicole; Bleha, William P.; Pailthorpe, Bernard
2003-09-01
The demand for more pixels in digital displays is beginning to be met as manufacturers increase the native resolution of projector chips. Tiling several projectors still offers one solution to augment the pixel capacity of a display. However, problems of color and illumination uniformity across projectors need to be addressed, as well as the computer software required to drive such devices. In this paper we present the results obtained on a desktop-size tiled projector array of three D-ILA projectors sharing a common illumination source. The composite image on a 3 x 1 array is 3840 by 1024 pixels with a resolution of about 80 dpi. The system preserves desktop resolution, is compact, and can fit in a normal room or laboratory. A fiber-optic beam-splitting system and a single set of red, green and blue dichroic filters are the key to color and illumination uniformity. The D-ILA chips inside each projector can be adjusted individually to set or change characteristics such as contrast, brightness or gamma curves. The projectors were matched carefully and photometric variations were corrected, leading to a seamless tiled image. Photometric measurements were performed to characterize the display and the losses through the optical paths, and are reported here. The system is driven by a small PC cluster fitted with graphics cards and running Linux. The Chromium API can be used for distributing graphics tiles across the display and interfacing to users' software applications. There is potential for scaling the design to accommodate larger arrays, up to 4 x 5 projectors, increasing display system capacity to 50 Megapixels. Further increases, beyond 100 Megapixels, can be anticipated with new-generation D-ILA chips capable of projecting QXGA (2k x 1.5k), with ongoing evolution as QUXGA (4k x 2k) becomes available.
Zhao, Fengjun; Liang, Jimin; Chen, Xueli; Liu, Junting; Chen, Dongmei; Yang, Xiang; Tian, Jie
2016-03-01
Previous studies showed that vascular parameters, both morphological and topological, are affected by changes in imaging resolution. However, neither the sensitivity of vascular parameters across multiple resolutions nor the distinguishability of vascular parameters between different data groups has been analyzed. In this paper, we propose a quantitative analysis method for vascular networks at multiple resolutions, analyzing the sensitivity of vascular parameters across resolutions and estimating the distinguishability of vascular parameters between data groups. Combining sensitivity and distinguishability, we designed a hybrid formulation to estimate the integrated performance of vascular parameters in a multi-resolution framework. Among the vascular parameters, degree of anisotropy and junction degree were two insensitive parameters that were nearly unaffected by resolution degradation; vascular area, connectivity density, vascular length, vascular junction and segment number were five parameters that could better distinguish vascular networks from different groups and agreed with the ground truth. Vascular area, connectivity density, vascular length and segment number were not only insensitive to resolution but could also better distinguish vascular networks from different groups, which provides guidance for the quantification of vascular networks in multi-resolution frameworks.
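A hedged sketch of the kind of hybrid score described above (the paper's exact formulation is not reproduced here): a parameter is preferred when it is insensitive across resolutions yet separates the data groups well.

```python
import numpy as np

def sensitivity(values_per_resolution):
    """Coefficient of variation of a parameter across resolutions (lower is better)."""
    v = np.asarray(values_per_resolution, dtype=float)
    return v.std() / abs(v.mean())

def distinguishability(group_a, group_b):
    """Cohen's-d-like effect size between two data groups (higher is better)."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
    return abs(a.mean() - b.mean()) / pooled

def hybrid_score(values_per_resolution, group_a, group_b):
    # reward group separation, penalize resolution sensitivity
    return distinguishability(group_a, group_b) / (1.0 + sensitivity(values_per_resolution))
```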
Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots.
Wang, Junpeng; Liu, Xiaotong; Shen, Han-Wei; Lin, Guang
2017-01-01
Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of simulations are multi-resolution spatial temporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs in different resolutions constitute a multi-resolution high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to the existing correlation visualization techniques. We present Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plots that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatial temporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, based on real-world use-cases from our collaborators in computational and predictive science.
Get-in-the-Zone (GITZ) Transition Display Format for Changing Camera Views in Multi-UAV Operations
2008-12-01
the multi-UAV operator will witch between dynamic and static missions, each potentially involving very different scenario environments and task...another. Inspired by cinematography techniques to help audiences maintain spatial understanding of a scene across discrete film cuts, use of a
NASA Astrophysics Data System (ADS)
Awumah, A.; Mahanti, P.; Robinson, M. S.
2017-12-01
Image fusion is often used in Earth-based remote sensing applications to merge spatial details from a high-resolution panchromatic (Pan) image with the color information from a lower-resolution multi-spectral (MS) image, resulting in a high-resolution multi-spectral image (HRMS). Previously, the performance of six well-known image fusion methods was compared using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images (1). Results showed the Intensity-Hue-Saturation (IHS) method provided the best spatial performance, but deteriorated the spectral content. In general, there was a trade-off between spatial enhancement and spectral fidelity in the fusion process; the more spatial detail from the Pan fused with the MS image, the more spectrally distorted the final HRMS. In this work, we control the amount of spatial detail fused (from the LROC NAC images to WAC images) using a controlled IHS method (2), to investigate the spatial variation in spectral distortion on fresh crater ejecta. In the controlled IHS method (2), the percentage of the Pan component merged with the MS is varied: the fraction of spatial detail from the Pan is determined by a control variable whose value may range from 1 (no Pan utilized) to infinity (entire Pan utilized). An HRMS color composite image (red=415nm, green=321/415nm, blue=321/360nm (3)) was used to assess performance (via visual inspection and metric-based evaluations) at each tested value of the control parameter (from 1 to 10, after which spectral distortion saturates, in 0.01 increments) within three regions: crater interiors, ejecta blankets, and the background material surrounding the craters. Increasing the control parameter introduced increased spatial sharpness and spectral distortion in all regions, but to varying degrees. Crater interiors suffered the most color distortion, while ejecta experienced less color distortion. The controlled IHS method is therefore desirable for resolution enhancement of fresh crater ejecta; larger values of the control parameter may be used to sharpen MS images of ejecta patterns with less impact on color distortion than in the uncontrolled IHS fusion process. References: (1) Prasun et al. (2016) ISPRS. (2) Choi, Myungjin (2006) IEEE. (3) Denevi et al. (2014) JGR.
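A hedged sketch of controlled IHS pansharpening following the idea above: only a fraction of the Pan detail, governed by a control parameter t >= 1 (t=1 injects no Pan, t -> infinity approaches full IHS substitution), is merged in. The exact weighting in Choi (2006) may differ; this illustrates the sharpness/color-distortion trade-off.

```python
import numpy as np

def controlled_ihs(ms, pan, t=2.0):
    """ms: (H, W, 3) multi-spectral image upsampled to Pan resolution;
    pan: (H, W) panchromatic image. Returns the HRMS estimate."""
    intensity = ms.mean(axis=2)                 # simple intensity (I) component
    new_i = pan - (pan - intensity) / t         # t=1 -> intensity; t->inf -> pan
    return ms + (new_i - intensity)[..., None]  # add detail equally to each band
```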
A multi-resolution approach to electromagnetic modelling
NASA Astrophysics Data System (ADS)
Cherevatova, M.; Egbert, G. D.; Smirnov, M. Yu
2018-07-01
We present a multi-resolution approach for 3-D magnetotelluric forward modelling. Our approach is motivated by the fact that fine-grid resolution is typically required at shallow levels to adequately represent near-surface inhomogeneities, topography and bathymetry, while a much coarser grid may be adequate at depth, where the diffusively propagating electromagnetic fields are much smoother. With a conventional structured finite-difference grid, the fine discretization required to adequately represent rapid variations near the surface is continued to all depths, resulting in higher computational costs. Increasing the computational efficiency of the forward modelling is especially important for solving regularized inversion problems. We implement a multi-resolution finite-difference scheme that allows us to decrease the horizontal grid resolution with depth, as is done with the vertical discretization. In our implementation, the multi-resolution grid is represented as a vertical stack of subgrids, with each subgrid being a standard Cartesian tensor-product staggered grid. Thus, our approach is similar to the octree discretization previously used for electromagnetic modelling, but simpler in that we allow refinement only with depth. The major difficulty arose in deriving the forward modelling operators on the interfaces between adjacent subgrids. We considered three ways of handling the interface layers and suggest a preferable one, which yields accuracy comparable to the staggered-grid solution while retaining the symmetry of the coefficient matrix. A comparison between the multi-resolution and staggered solvers for various models shows that the multi-resolution approach improves computational efficiency without compromising the accuracy of the solution.
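A minimal sketch of the bookkeeping such a stacked grid implies (field names are illustrative, not the authors' data structures): a vertical stack of tensor-product subgrids, each horizontally coarsened relative to the subgrid above it.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SubGrid:
    nx: int          # horizontal cells in x
    ny: int          # horizontal cells in y
    dz: List[float]  # vertical cell sizes within this subgrid

@dataclass
class MultiResGrid:
    subgrids: List[SubGrid] = field(default_factory=list)

    def add_coarsened(self, dz, factor=2):
        """Append a subgrid whose horizontal resolution is coarsened by
        `factor` relative to the deepest existing subgrid; the interface
        operators between adjacent subgrids (the hard part of the paper)
        are not modelled here."""
        if not self.subgrids:
            raise ValueError("seed the stack with a fine surface subgrid first")
        top = self.subgrids[-1]
        self.subgrids.append(SubGrid(top.nx // factor, top.ny // factor, dz))
```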
NASA Astrophysics Data System (ADS)
McClelland, Jamie R.; Modat, Marc; Arridge, Simon; Grimes, Helen; D'Souza, Derek; Thomas, David; O' Connell, Dylan; Low, Daniel A.; Kaza, Evangelia; Collins, David J.; Leach, Martin O.; Hawkes, David J.
2017-06-01
Surrogate-driven respiratory motion models relate the motion of the internal anatomy to easily acquired respiratory surrogate signals, such as the motion of the skin surface. They are usually built by first using image registration to determine the motion from a number of dynamic images, and then fitting a correspondence model relating the motion to the surrogate signals. In this paper we present a generalized framework that unifies the image registration and correspondence model fitting into a single optimization. This allows the use of ‘partial’ imaging data, such as individual slices, projections, or k-space data, where it would not be possible to determine the motion from an individual frame of data. Motion compensated image reconstruction can also be incorporated using an iterative approach, so that both the motion and a motion-free image can be estimated from the partial image data. The framework has been applied to real 4DCT, Cine CT, multi-slice CT, and multi-slice MR data, as well as simulated datasets from a computer phantom. This includes the use of a super-resolution reconstruction method for the multi-slice MR data. Good results were obtained for all datasets, including quantitative results for the 4DCT and phantom datasets where the ground truth motion was known or could be estimated.
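For context, the classical two-step baseline that this framework generalizes can be sketched in a few lines: fit a linear correspondence model relating registration-derived motion parameters to the surrogate signals by least squares. This is a hedged stand-in; the paper instead folds the model fit (and optionally motion-compensated reconstruction) into a single optimization with the registration.

```python
import numpy as np

def fit_correspondence_model(motion, surrogates):
    """motion: (T, M) motion parameters from registration at T time points;
    surrogates: (T, S) surrogate signals (e.g. skin-surface displacement).
    Returns B such that motion ~ surrogates @ B (least squares)."""
    B, *_ = np.linalg.lstsq(surrogates, motion, rcond=None)
    return B

def predict_motion(B, surrogate_sample):
    """Predict motion parameters for a new surrogate sample of shape (S,)."""
    return surrogate_sample @ B
```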
Development of two-framing camera with large format and ultrahigh speed
NASA Astrophysics Data System (ADS)
Jiang, Xiaoguo; Wang, Yuan; Wang, Yi
2012-10-01
A high-speed imaging facility is important and necessary for building a time-resolved measurement system with multi-framing capability. A framing camera that satisfies the demands of both high speed and large format needs to be specially developed for the ultrahigh-speed research field. A two-framing camera system with high sensitivity and time resolution has been developed and used for the diagnosis of electron beam parameters of the Dragon-I linear induction accelerator (LIA). The camera system, which adopts the principle of light beam splitting in the image space behind a long-focal-length lens, mainly consists of a lens-coupled gated image intensifier, CCD cameras and a high-speed shutter trigger device based on a programmable integrated circuit. The fastest gating time is about 3 ns, and the interval between the two frames can be adjusted discretely in steps of 0.5 ns. Both the gating time and the interval time can be tuned independently up to a maximum of about 1 µs. Two images, each of size 1024×1024, can be captured simultaneously with our camera. Besides, this camera system possesses good linearity, uniform spatial response and an equivalent background illumination as low as 5 electrons/pix/sec, which fully meets the measurement requirements of the Dragon-I LIA.
Implementation of a Multi-Robot Coverage Algorithm on a Two-Dimensional, Grid-Based Environment
2017-06-01
two planar laser range finders with a 180-degree field of view , color camera, vision beacons, and wireless communicator. In their system, the robots...Master’s thesis 4. TITLE AND SUBTITLE IMPLEMENTATION OF A MULTI -ROBOT COVERAGE ALGORITHM ON A TWO -DIMENSIONAL, GRID-BASED ENVIRONMENT 5. FUNDING NUMBERS...path planning coverage algorithm for a multi -robot system in a two -dimensional, grid-based environment. We assess the applicability of a topology
Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering.
Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus
2014-12-01
This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs.
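As a toy illustration of the run-time idea, applying a transfer function to a per-voxel pdf rather than to a single filtered value, the following sketch evaluates the expected transfer-function output for a voxel whose intensity pdf is a 1D Gaussian mixture. The mixture representation, the grid resolution, and all names are assumptions for illustration; the paper's 4D sparse encoding and convolution machinery are not reproduced here.

```python
import numpy as np

def apply_transfer_function(weights, means, sigmas, tf, v_grid):
    """Expected transfer-function output for one voxel whose intensity pdf
    is a 1D Gaussian mixture: E[tf(v)] = sum_i w_i * integral tf(v) N(v; mu_i, s_i) dv.
    The integral is approximated on a discrete intensity grid."""
    out = 0.0
    dv = v_grid[1] - v_grid[0]
    for w, mu, s in zip(weights, means, sigmas):
        pdf = np.exp(-0.5 * ((v_grid - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))
        out += w * np.sum(tf(v_grid) * pdf) * dv
    return out

# Hypothetical voxel: a two-component mixture and a ramp transfer function.
v = np.linspace(0.0, 1.0, 256)
opacity = apply_transfer_function([0.7, 0.3], [0.2, 0.8], [0.05, 0.1],
                                  lambda x: np.clip(2 * x - 0.5, 0, 1), v)
```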
NASA Astrophysics Data System (ADS)
Till, Christy B.; Pritchard, Matthew; Miller, Craig A.; Brugman, Karalee K.; Ryan-Davis, Juliet
2018-04-01
Multi-disciplinary analyses of Earth's most destructive volcanic systems show that continuous monitoring and an understanding of each volcano's quirks, rather than a single unified model, are key to generating accurate hazard assessments.
The Raffaello, a Multi-Purpose Logistics Module, arrives at KSC aboard a Beluga super transporter
NASA Technical Reports Server (NTRS)
1999-01-01
An Airbus Industrie A300-600ST 'Beluga' Super Transporter is reflected in the rain puddles as it comes to a stop at the Shuttle Landing Facility. The Beluga is carrying the Raffaello, the second Multi-Purpose Logistics Module (MPLM) for the International Space Station (ISS). One of Italy's major contributions to the ISS program, the MPLM is a reusable logistics carrier and the primary delivery system used to resupply and return station cargo requiring a pressurized environment. Weighing nearly 4.5 tons, the module measures 21 feet long and 15 feet in diameter. Raffaello will join Leonardo, the first Italian-built MPLM, in the Space Station Processing Facility for testing. NASA, Boeing, the Italian Space Agency and Alenia Aerospazio will provide engineering support.
The Raffaello, a Multi-Purpose Logistics Module, arrives at KSC aboard a Beluga super transporter
NASA Technical Reports Server (NTRS)
1999-01-01
An Airbus Industrie A300-600ST 'Beluga' Super Transporter is reflected in the rain puddles as it taxis toward the mate/demate tower at the Shuttle Landing Facility. The Beluga is carrying the Raffaello, the second Multi-Purpose Logistics Module (MPLM) for the International Space Station (ISS). One of Italy's major contributions to the ISS program, the MPLM is a reusable logistics carrier and the primary delivery system used to resupply and return station cargo requiring a pressurized environment. Weighing nearly 4.5 tons, the module measures 21 feet long and 15 feet in diameter. Raffaello will join Leonardo, the first Italian-built MPLM, in the Space Station Processing Facility for testing. NASA, Boeing, the Italian Space Agency and Alenia Aerospazio will provide engineering support.
Design optimisation of a TOF-based collimated camera prototype for online hadrontherapy monitoring
NASA Astrophysics Data System (ADS)
Pinto, M.; Dauvergne, D.; Freud, N.; Krimmer, J.; Letang, J. M.; Ray, C.; Roellinghoff, F.; Testa, E.
2014-12-01
Hadrontherapy is an innovative radiation-therapy modality whose key advantage is the target conformality allowed by the physical properties of ion species. To exploit this potential fully, however, online monitoring is required to verify treatment quality, namely with monitoring devices relying on the detection of secondary radiation. Herein a method based on Monte Carlo simulations is presented to optimise a multi-slit collimated camera employing time-of-flight selection of prompt-gamma rays for use in a clinical scenario. In addition, an analytical tool based on the Monte Carlo data is developed to predict the expected precision of a given geometrical configuration. Such a method follows clinical workflow requirements for a solution that is both relatively accurate and fast. Two camera designs are proposed, considering different endpoints based on the trade-off between camera detection efficiency and spatial resolution, for a proton therapy treatment with active dose delivery and a homogeneous target.
A new high-speed IR camera system
NASA Technical Reports Server (NTRS)
Travis, Jeffrey W.; Shu, Peter K.; Jhabvala, Murzy D.; Kasten, Michael S.; Moseley, Samuel H.; Casey, Sean C.; Mcgovern, Lawrence K.; Luers, Philip J.; Dabney, Philip W.; Kaipa, Ravi C.
1994-01-01
A multi-organizational team at the Goddard Space Flight Center is developing a new far infrared (FIR) camera system which furthers the state of the art for this type of instrument by incorporating recent advances in several technological disciplines. All aspects of the camera system are optimized for operation at the high data rates required for astronomical observations in the far infrared. The instrument is built around a Blocked Impurity Band (BIB) detector array, which exhibits responsivity over a broad wavelength band and is capable of operating at 1000 frames/sec; the system consists of a focal plane dewar, a compact camera head electronics package, and a Digital Signal Processor (DSP)-based data system residing in a standard 486 personal computer. In this paper we discuss the overall system architecture, the focal plane dewar, and advanced features and design considerations for the electronics. This system, or one derived from it, may prove useful for many commercial and/or industrial infrared imaging or spectroscopic applications, including thermal machine vision for robotic manufacturing, photographic observation of short-duration thermal events such as combustion or chemical reactions, and high-resolution surveillance imaging.
3D super resolution range-gated imaging for canopy reconstruction and measurement
NASA Astrophysics Data System (ADS)
Huang, Hantao; Wang, Xinwei; Sun, Liang; Lei, Pingshun; Fan, Songtao; Zhou, Yan
2018-01-01
In this paper, we propose a method for canopy reconstruction and measurement based on 3D super-resolution range-gated imaging. In this method, high-resolution 2D intensity images are captured by active gated imaging, and 3D images of the canopy are simultaneously reconstructed by a triangular range-intensity correlation algorithm. A range-gated laser imaging system (RGLIS) was built from an 808 nm diode laser and a gated intensified charge-coupled device (ICCD) camera with 1392×1040 pixels. Proof-of-concept experiments were performed on potted plants located 75 m away and trees located 165 m away. The experiments show that the system can acquire more than 1 million points per frame, with a spatial resolution of about 0.3 mm at a distance of 75 m and a range accuracy of about 10 cm. This research is beneficial for high-speed acquisition of canopy structure and non-destructive canopy measurement.
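A minimal sketch of the range-from-intensity-ratio idea behind triangular range-intensity correlation, assuming two gated images whose triangular gate profiles overlap so that range varies linearly with the normalized intensity ratio; the gate parameters and names are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def triangular_range(I1, I2, r_near, delta_r):
    """Range from two range-gated images whose triangular gate profiles
    overlap: within the shared gate, range varies linearly with the
    normalized intensity ratio. r_near is the start of the overlap region,
    delta_r its depth extent (both assumed known from the gate timing)."""
    ratio = I2.astype(float) / np.maximum(I1 + I2, 1e-9)
    return r_near + delta_r * ratio

# Hypothetical 2D intensity frames from the gated ICCD.
I1 = np.random.rand(1040, 1392)
I2 = np.random.rand(1040, 1392)
range_map = triangular_range(I1, I2, r_near=74.0, delta_r=2.0)  # metres
```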
DOT National Transportation Integrated Search
2004-10-01
The parking assistance system evaluated consisted of four outward facing cameras whose images could be presented on a monitor on the center console. The images presented varied in the location of the virtual eye point of the camera (the height above ...
MuSICa at GRIS: a prototype image slicer for EST at GREGOR
NASA Astrophysics Data System (ADS)
Calcines, A.; Collados, M.; López, R. L.
2013-05-01
This communication presents a prototype image slicer for the 4-m European Solar Telescope (EST), designed for the spectrograph of the 1.5-m GREGOR solar telescope (GRIS). The design of this integral field unit is called MuSICa (Multi-Slit Image slicer based on collimator-Camera). It is a telecentric system developed specifically for the integral-field, high-resolution spectrograph of EST and offers multi-slit capability, reorganizing a two-dimensional field of view of 80 arcsec² into 8 slits, each 200 arcsec long × 0.05 arcsec wide. It minimizes the number of optical components needed for this multi-slit capability to three arrays of mirrors: slicer, collimator, and camera mirror arrays (the first flat and the other two spherical). The symmetry of the layout makes it possible to overlap the pupil images associated with each part of the sliced entrance field of view, so a mask with a single circular aperture is placed at the pupil position. This symmetry offers several advantages: it facilitates the manufacturing process and the alignment, and reduces costs. In addition, the design is compatible with two modes of operation, spectroscopic and spectro-polarimetric, offering great versatility. The optical quality of the system is diffraction-limited. The prototype will improve the performance of GRIS at GREGOR and is part of the feasibility study of the integral field unit for the spectrographs of EST. Although MuSICa has been designed as a solar image slicer, its concept can also be applied to night-time astronomical instruments (Collados et al. 2010, Proc. SPIE, Vol. 7733, 77330H; Collados et al. 2012, AN, 333, 901; Calcines et al. 2010, Proc. SPIE, Vol. 7735, 77351X).
Multi-energy SXR cameras for magnetically confined fusion plasmas (invited).
Delgado-Aparicio, L F; Maddox, J; Pablant, N; Hill, K; Bitter, M; Rice, J E; Granetz, R; Hubbard, A; Irby, J; Greenwald, M; Marmar, E; Tritz, K; Stutman, D; Stratton, B; Efthimion, P
2016-11-01
A compact multi-energy soft x-ray camera has been developed for time-, energy- and space-resolved measurements of the soft x-ray emissivity in magnetically confined fusion plasmas. Multi-energy soft x-ray imaging provides a unique opportunity to measure, simultaneously, a variety of important plasma properties (T_e, n_Z, ΔZ_eff, and n_e,fast). The electron temperature can be obtained by modeling the slope of the continuum radiation from ratios of the available brightness and inverted radial emissivity profiles over multiple energy ranges. Impurity density measurements are also possible, using the line emission from medium- to high-Z impurities to separate the background as well as transient levels of metal contributions. This technique should also be explored as a burning-plasma diagnostic in view of its simplicity and robustness.
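As a simplified illustration of inferring the electron temperature from the continuum slope: the bremsstrahlung continuum falls off roughly as exp(-E/T_e), so the ratio of emissivities in two energy bands can be inverted for T_e. The sketch below neglects Gaunt factors, filter responses, and line emission, and all names and values are assumptions.

```python
import numpy as np

def electron_temperature(eps_lo, eps_hi, E_lo, E_hi):
    """Estimate T_e (keV) from inverted emissivities in two x-ray energy
    bands. For a bremsstrahlung continuum eps(E) ~ exp(-E/T_e), the band
    ratio gives T_e = (E_hi - E_lo) / ln(eps_lo / eps_hi)."""
    return (E_hi - E_lo) / np.log(eps_lo / eps_hi)

# Hypothetical radial emissivity profiles at band centres of 2 and 4 keV.
eps_2keV = np.array([1.00, 0.80, 0.50])
eps_4keV = np.array([0.37, 0.26, 0.12])
Te_profile = electron_temperature(eps_2keV, eps_4keV, 2.0, 4.0)
```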
NASA Astrophysics Data System (ADS)
Swain, Pradyumna; Mark, David
2004-09-01
The emergence of curved CCD detectors, as individual devices or as contoured mosaics assembled to match the curved focal planes of astronomical telescopes and terrestrial stereo panoramic cameras, represents a major optical design advancement that greatly enhances the scientific potential of such instruments. In altering the primary detection surface within the telescope's optical instrumentation system from flat to curved, and conforming the applied CCD's shape precisely to the contour of the telescope's curved focal plane, a major increase in the amount of transmittable light at various wavelengths through the system is achieved. This in turn enables multi-spectral, ultra-sensitive imaging with the much greater spatial resolution necessary for large and very large telescope applications, including those involving infrared image acquisition and spectroscopy conducted over very wide fields of view. For earth-based and space-borne optical telescopes, the advent of curved CCDs as the principal detectors simplifies the telescope's adjoining optics, reducing the number of optical elements and the occurrence of optical aberrations associated with large corrective optics used to conform to flat detectors. New astronomical experiments may be devised with curved CCD applications, in conjunction with large-format cameras and curved mosaics, including three-dimensional imaging spectroscopy conducted over multiple wavelengths simultaneously, wide-field real-time stereoscopic tracking of remote objects within the solar system at high resolution, and deep-field survey mapping of distant objects such as galaxies with much greater multi-band spatial precision over larger sky regions. Terrestrial stereo panoramic cameras equipped with arrays of curved CCDs joined with associated wide-field optics will require less optical glass and no mechanically moving parts to maintain continuous proper stereo convergence over wider perspective viewing fields than their flat-CCD counterparts, lightening the cameras and enabling faster scanning and 3D integration of objects moving within a planetary terrain environment. Preliminary experiments conducted at the Sarnoff Corporation indicate the feasibility of curved CCD imagers with acceptable electro-optic integrity. Currently, we are evaluating the electro-optic performance of a curved wafer-scale CCD imager. Detailed ray-trace modeling and experimental electro-optical performance data obtained from the curved imager will be presented at the conference.
Design and performance tests of the calorimetric tract of a Compton Camera for small-animals imaging
NASA Astrophysics Data System (ADS)
Rossi, P.; Baldazzi, G.; Battistella, A.; Bello, M.; Bollini, D.; Bonvicini, V.; Fontana, C. L.; Gennaro, G.; Moschini, G.; Navarria, F.; Rashevsky, A.; Uzunov, N.; Zampa, G.; Zampa, N.; Vacchi, A.
2011-02-01
The bio-distribution and targeting capability of pharmaceuticals may be assessed in small animals by imaging gamma-rays emitted from radio-isotope markers. Detectors that exploit the Compton concept allow higher gamma-ray efficiency compared to conventional Anger cameras employing collimators, and feature sub-millimeter spatial resolution and compact geometry. We are developing a Compton Camera that has to address several requirements: the high rates typical of the Compton concept; detection of gamma-rays of different energies that may range from 140 keV (99mTc) to 511 keV (β+ emitters); and the presence of gamma and beta radiation with energies up to 2 MeV in the case of 188Re. The camera consists of a thin position-sensitive Tracker that scatters the gamma ray, and a second position-sensitive detection system to totally absorb the energy of the scattered photons (Calorimeter). In this paper we present the design and discuss the realization of the calorimetric tract, including the choice of scintillator crystal, pixel size, and detector geometry. Simulations of the gamma-ray trajectories from source to detectors have helped to assess the accuracy of the system and decide on the camera design. Crystals of different materials, such as LaBr3, GSO and YAP, and of different sizes, in continuous or segmented geometry, have been optically coupled to a multi-anode Hamamatsu H8500 detector, allowing measurements of spatial resolution and efficiency.
QWIP technology for both military and civilian applications
NASA Astrophysics Data System (ADS)
Gunapala, Sarath D.; Kukkonen, Carl A.; Sirangelo, Mark N.; McQuiston, Barbara K.; Chehayeb, Riad; Kaufmann, M.
2001-10-01
Advanced thermal imaging infrared cameras have been a cost-effective and reliable method to obtain the temperature of objects. Quantum Well Infrared Photodetector (QWIP) based thermal imaging systems have advanced the state of the art and are the most sensitive commercially available thermal systems. QWIP Technologies LLC, under exclusive agreement with the California Institute of Technology (Caltech), is currently manufacturing the QWIP-ChipTM, a 320 × 256 element, bound-to-quasibound QWIP FPA. The camera performance falls within the long-wave IR band, spectrally peaked at 8.5 μm. The camera is equipped with a 32-bit floating-point digital signal processor combined with multi-tasking software, delivering a digital acquisition resolution of 12 bits at a nominal power consumption of less than 50 Watts. With a variety of video interface options, remote control capability via an RS-232 connection, and an integrated control driver circuit to support motorized zoom- and focus-compatible lenses, this camera design has excellent application in both the military and commercial sectors. In the area of remote sensing, high-performance QWIP systems can be used for high-resolution target recognition as part of a new system of airborne platforms (including UAVs). Such systems also have direct application in law enforcement, surveillance, industrial monitoring, and road hazard detection systems. This presentation will cover the current performance of the commercial QWIP cameras, conceptual platform systems, and advanced image processing for use in both military remote sensing and civilian applications currently being developed for road hazard monitoring.
NASA Astrophysics Data System (ADS)
Egal, A.; Gural, P. S.; Vaubaillon, J.; Colas, F.; Thuillot, W.
2017-09-01
The CABERNET project was designed to push the limits of obtaining accurate measurements of meteoroid orbits from photographic and video meteor camera recordings. The discrepancy between the measured and theoretical orbits of these objects depends heavily on the semi-major axis determination, and thus on the reliability of the pre-atmospheric velocity computation. With a spatial resolution of 0.01° per pixel and a temporal resolution of up to 10 ms, CABERNET should be able to provide accurate measurements of the velocities and trajectories of meteors. To achieve this, it is necessary to improve the precision of the data reduction processes, especially the determination of the meteor's velocity. In this work, most steps of the velocity computation are thoroughly investigated in order to reduce the uncertainties and error contributions at each stage of the reduction process. The accuracy of the measurement of meteor centroids is established, resulting in a precision of 0.09 pixels for CABERNET, which corresponds to 3.24″. Several methods to compute the velocity were investigated, based on the trajectory determination algorithms described in Ceplecha (1987) and Borovicka (1990), as well as the multi-parameter fitting (MPF) method proposed by Gural (2012). In the case of the MPF, many optimization methods were implemented in order to find the most efficient and robust technique to solve the minimization problem. The entire data reduction process is assessed using simulated meteors with different geometrical configurations and deceleration behaviors. It is shown that the multi-parameter fitting method proposed by Gural (2012) is the most accurate method to compute the pre-atmospheric velocity in all circumstances. Many techniques that assume constant velocity at the beginning of the path, as derived from the trajectory determination of Ceplecha (1987) or Borovicka (1990), can lead to large errors for decelerating meteors. The MPF technique also allows one to reliably compute the velocity for very low convergence angles (~1°). Despite the better accuracy of this method, the poor conditioning of the velocity propagation models used in the meteor community, and currently employed by the multi-parameter fitting method, prevents optimal computation of the pre-atmospheric velocity. Specifically, the deceleration parameters are particularly difficult to determine. The quality of the data provided by the CABERNET network limits the error induced by this effect, achieving an accuracy of about 1% in the velocity computation. Such precision would not be achievable with lower-resolution camera networks and today's commonly used trajectory reduction algorithms. To improve the performance of the multi-parameter fitting method, a linearly independent deceleration formulation needs to be developed.
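To make the multi-parameter fitting idea concrete, the sketch below fits positions along a meteor trail with an exponential-deceleration model of the kind discussed above and reads the pre-atmospheric velocity off the fitted parameters. The model form, noise levels, and all values are illustrative assumptions, not the CABERNET pipeline.

```python
import numpy as np
from scipy.optimize import least_squares

def model(p, t):
    # Position along the trail: d(t) = d0 + v0*t - |a1| * exp(a2 * t),
    # one of the velocity-propagation forms used in meteor trajectory fits.
    d0, v0, a1, a2 = p
    return d0 + v0 * t - np.abs(a1) * np.exp(a2 * t)

# Hypothetical measured positions (km) at 10 ms frame intervals.
t = np.arange(0, 0.5, 0.01)
d_obs = 60.0 * t - 0.005 * np.exp(8.0 * t) + np.random.normal(0, 0.01, t.size)

fit = least_squares(lambda p: model(p, t) - d_obs, x0=[0.0, 55.0, 0.01, 5.0])
v0_pre_atmospheric = fit.x[1]  # km/s, before significant deceleration
```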
Multiview face detection based on position estimation over multicamera surveillance system
NASA Astrophysics Data System (ADS)
Huang, Ching-chun; Chou, Jay; Shiu, Jia-Hou; Wang, Sheng-Jyh
2012-02-01
In this paper, we propose a multi-view face detection system that locates head positions and indicates the direction of each face in 3-D space over a multi-camera surveillance system. To locate 3-D head positions, conventional methods rely on face detection in 2-D images and project the face regions back into 3-D space for correspondence. However, the inevitable false face detections and rejections usually degrade system performance. Instead, our system searches for heads and face directions over the 3-D space using a sliding cube. Each searched 3-D cube is projected onto the 2-D camera views to determine the existence and direction of human faces. Moreover, a pre-processing step that estimates the locations of candidate targets is introduced to speed up the search over the 3-D space. In summary, our proposed method can efficiently fuse multi-camera information and suppress the ambiguity caused by detection errors. Our evaluation shows that the proposed approach can efficiently indicate the head position and face direction in real video sequences even under serious occlusion.
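The core geometric operation of the sliding-cube search is projecting a 3D candidate position into each calibrated camera view. A minimal sketch, assuming known 3×4 projection matrices; the matrices and the point below are hypothetical.

```python
import numpy as np

def project(P, X):
    """Project a 3D point X (world frame) into pixel coordinates using a
    3x4 camera projection matrix P = K [R | t]."""
    x = P @ np.append(X, 1.0)        # homogeneous projection
    return x[:2] / x[2]

# Hypothetical calibrated cameras and one candidate cube centre.
P_cams = [np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])]),
          np.hstack([np.eye(3), np.array([[-1.0], [0.0], [5.0]])])]
head_candidate = np.array([0.2, 1.6, 3.0])
pixels = [project(P, head_candidate) for P in P_cams]
# A face detector is then evaluated on the image patch around each projection.
```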
NASA Astrophysics Data System (ADS)
Pani, R.; Pellegrini, R.; Betti, M.; De Vincentis, G.; Cinti, M. N.; Bennati, P.; Vittorini, F.; Casali, V.; Mattioli, M.; Orsolini Cencelli, V.; Navarria, F.; Bollini, D.; Moschini, G.; Iurlaro, G.; Montani, L.; de Notaristefani, F.
2007-02-01
The principal factor limiting the clinical acceptance of scintimammography is its low sensitivity for cancers smaller than 1 cm, mainly due to the lack of equipment specifically designed for breast imaging. The National Institute of Nuclear Physics (INFN) has been developing a new scintillation camera based on a Lanthanum tri-Bromide Cerium-doped crystal (LaBr3:Ce), which has demonstrated superior imaging performance with respect to the dedicated scintillation γ-camera previously developed. The proposed detector consists of a continuous LaBr3:Ce scintillator crystal coupled to a Hamamatsu H8500 Flat Panel PMT. A one-centimeter-thick crystal was chosen to increase crystal detection efficiency. In this paper, we compare and evaluate the lanthanum γ-camera and a Multi-PSPMT camera based on discrete NaI(Tl) pixels, previously developed under the "IMI" Italian project for technological transfer of INFN. A phantom study was carried out to test both cameras before introducing them into clinical trials. High-resolution scans produced by the LaBr3:Ce camera showed higher tumor contrast and more detailed imaging of the uptake area than the pixellated NaI(Tl) dedicated camera. Furthermore, with the lanthanum camera the signal-to-noise ratio (SNR) was increased for a lesion as small as 5 mm, with a consequent strong improvement in detectability.
Innovative Camera and Image Processing System to Characterize Cryospheric Changes
NASA Astrophysics Data System (ADS)
Schenk, A.; Csatho, B. M.; Nagarajan, S.
2010-12-01
The polar regions play an important role in Earth's climatic and geodynamic systems. Digital photogrammetric mapping provides a means for monitoring the dramatic changes observed in the polar regions during the past decades. High-resolution, photogrammetrically processed digital aerial imagery provides complementary information to surface measurements obtained by laser altimetry systems. While laser points accurately sample the ice surface, stereo images allow for the mapping of features such as crevasses, flow bands, shear margins, moraines, leads, and different types of sea ice. Tracking features in repeat images produces a dense velocity vector field that can either serve as validation for interferometrically derived surface velocities or constitute a stand-alone product. A multi-modal photogrammetric platform consists of one or more high-resolution commercial color cameras, GPS and an inertial navigation system, as well as an optional laser scanner. Such a system, using a Canon EOS-1DS Mark II camera, was first flown on the IceBridge missions in Fall 2009 and Spring 2010, capturing hundreds of thousands of images at a frame interval of about one second. While digital images and videos have long been used for visual inspection, precise 3D measurements with low-cost commercial cameras require special photogrammetric treatment that only became available recently. Calibrating the multi-camera imaging system and geo-referencing the images are absolute prerequisites for all subsequent applications. Commercial cameras are inherently non-metric, that is, their sensor model is only approximately known. Since these cameras are not as rugged as photogrammetric cameras, the interior orientation also changes, due to temperature and pressure changes and aircraft vibration, resulting in large errors in 3D measurements. It is therefore necessary to calibrate the cameras frequently, at least whenever the system is newly installed. Geo-referencing the images is performed by the Applanix navigation system. Our new method enables a 3D reconstruction of the ice sheet surface with high accuracy and unprecedented detail, as demonstrated by examples from the Antarctic Peninsula acquired by the IceBridge mission. Repeat digital imaging also provides data for determining surface elevation changes and velocities, which are critical parameters for ice sheet models. Although these methods work well, there are known problems with satellite images and traditional area-based matching, especially over rapidly changing outlet glaciers. To take full advantage of the high-resolution, repeat stereo imaging, we have developed a new method. The processing starts with the generation of a DEM from geo-referenced stereo images of the first time epoch. The next step extracts and matches interest points in object space. Since an interest point moves its spatial position between two time epochs, such points are only radiometrically conjugate, not geometrically. In fact, the geometric displacement of two identical points, together with the time difference, yields velocities. We computed the evolution of the velocity field and surface topography on the floating tongue of the Jakobshavn glacier from historical stereo aerial photographs to illustrate the approach.
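The velocity computation described above reduces, per matched interest point, to displacement divided by the epoch separation. A minimal sketch with hypothetical georeferenced coordinates:

```python
import numpy as np

# Hypothetical matched interest points in georeferenced coordinates (metres),
# epoch 1 and epoch 2, with dt the time between acquisitions in days.
p1 = np.array([[512100.0, 7668400.0], [512230.0, 7668510.0]])
p2 = np.array([[512112.0, 7668395.0], [512243.0, 7668504.0]])
dt_days = 30.0

displacement = p2 - p1                                  # metres
speed = np.linalg.norm(displacement, axis=1) / dt_days  # metres per day
```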
Multi-sensor fusion over the World Trade Center disaster site
NASA Astrophysics Data System (ADS)
Rodarmel, Craig; Scott, Lawrence; Simerlink, Deborah A.; Walker, Jeffrey
2002-09-01
The immense size and scope of the rescue and clean-up of the World Trade Center site created a need for data that would provide a total overview of the disaster area. To fulfill this need, the New York State Office for Technology (NYSOFT) contracted with EarthData International to collect airborne remote sensing data over Ground Zero with an airborne light detection and ranging (LIDAR) sensor, a high-resolution digital camera, and a thermal camera. The LIDAR data provided a three-dimensional elevation model of the ground surface that was used for volumetric calculations and also in the orthorectification of the digital images. The digital camera provided high-resolution imagery over the site to aid the rescuers in placement of equipment and other assets. In addition, the digital imagery was used to georeference the thermal imagery and also provided the visual background for the thermal data. The thermal camera aided in the location and tracking of underground fires. The combination of data from these three sensors provided the emergency crews with a timely, accurate overview containing a wealth of information on the rapidly changing disaster site. Because of the dynamic nature of the site, the data was acquired on a daily basis, processed, and turned over to NYSOFT within twelve hours of collection. During processing, the three datasets were combined and georeferenced to allow them to be inserted into the client's geographic information systems.
Selkowitz, D.J.
2010-01-01
Shrub cover appears to be increasing across many areas of the Arctic tundra biome, and increasing shrub cover in the Arctic has the potential to significantly impact global carbon budgets and the global climate system. For most of the Arctic, however, there is no existing baseline inventory of shrub canopy cover, as existing maps of Arctic vegetation provide little information about the density of shrub cover at a moderate spatial resolution across the region. Remotely sensed fractional shrub canopy maps can provide this necessary baseline inventory of shrub cover. In this study, we compare the accuracy of fractional shrub canopy (>0.5 m tall) maps derived from multi-spectral, multi-angular, and multi-temporal datasets from Landsat imagery at 30 m spatial resolution, Moderate Resolution Imaging Spectroradiometer (MODIS) imagery at 250 m and 500 m spatial resolution, and Multi-angle Imaging SpectroRadiometer (MISR) imagery at 275 m spatial resolution for a 1067 km² study area in Arctic Alaska. The study area is centered at 69°N, ranges in elevation from 130 to 770 m, is composed primarily of rolling topography with gentle slopes less than 10°, and is free of glaciers and perennial snow cover. Shrubs >0.5 m in height cover 2.9% of the study area and are primarily confined to patches associated with specific landscape features. Reference fractional shrub canopy is determined from in situ shrub canopy measurements and a high spatial resolution IKONOS image swath. Regression tree models are constructed to estimate fractional canopy cover at 250 m using different combinations of input data from Landsat, MODIS, and MISR. Results indicate that multi-spectral data provide substantially more accurate estimates of fractional shrub canopy cover than multi-angular or multi-temporal data. Higher spatial resolution datasets also provide more accurate estimates of fractional shrub canopy cover (aggregated to moderate spatial resolutions) than lower spatial resolution datasets, an expected result for a study area where most shrub cover is concentrated in narrow patches associated with rivers, drainages, and slopes. Including the middle infrared bands available from Landsat and MODIS in the regression tree models (in addition to the four standard visible and near-infrared spectral bands) typically results in a slight boost in accuracy. Including the multi-angular red band data available from MISR in the regression tree models, however, typically boosts accuracy more substantially, resulting in moderate resolution fractional shrub canopy estimates approaching the accuracy of estimates derived from the much higher spatial resolution Landsat sensor. Given the poor availability of snow- and cloud-free Landsat scenes in many areas of the Arctic and the promising results demonstrated here by the MISR sensor, MISR may be the best choice for large-area fractional shrub canopy mapping in the Alaskan Arctic for the period 2000-2009.
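A minimal sketch of the regression-tree estimation step, using scikit-learn's DecisionTreeRegressor in place of the study's specific regression tree software; the band set, sample counts, and all data below are hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Hypothetical training set: per-pixel reflectances in four VNIR bands plus
# a middle-infrared band, with reference fractional shrub canopy (0..1)
# derived from in situ measurements and high-resolution IKONOS imagery.
X_train = np.random.rand(500, 5)          # blue, green, red, NIR, MIR
y_train = np.random.rand(500) * 0.3       # fractional canopy cover

tree = DecisionTreeRegressor(max_depth=8, min_samples_leaf=10)
tree.fit(X_train, y_train)

X_scene = np.random.rand(10000, 5)        # all pixels of a 250 m grid
canopy_estimates = tree.predict(X_scene)
```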
Bryson, Mitch; Johnson-Roberson, Matthew; Murphy, Richard J.; Bongiorno, Daniel
2013-01-01
Intertidal ecosystems have primarily been studied using field-based sampling; remote sensing offers the ability to collect data over large areas in a snapshot of time that could complement field-based sampling methods by extrapolating them into the wider spatial and temporal context. Conventional remote sensing tools (such as satellite and aircraft imaging) provide data at limited spatial and temporal resolutions and relatively high costs for small-scale environmental science and ecologically-focussed studies. In this paper, we describe a low-cost, kite-based imaging system and photogrammetric/mapping procedure that was developed for constructing high-resolution, three-dimensional, multi-spectral terrain models of intertidal rocky shores. The processing procedure uses automatic image feature detection and matching, structure-from-motion and photo-textured terrain surface reconstruction algorithms that require minimal human input and only a small number of ground control points and allow the use of cheap, consumer-grade digital cameras. The resulting maps combine imagery at visible and near-infrared wavelengths and topographic information at sub-centimeter resolutions over an intertidal shoreline 200 m long, thus enabling spatial properties of the intertidal environment to be determined across a hierarchy of spatial scales. Results of the system are presented for an intertidal rocky shore at Jervis Bay, New South Wales, Australia. Potential uses of this technique include mapping of plant (micro- and macro-algae) and animal (e.g. gastropods) assemblages at multiple spatial and temporal scales. PMID:24069206
The optical design of the G-CLEF Spectrograph: the first light instrument for the GMT
NASA Astrophysics Data System (ADS)
Ben-Ami, Sagi; Epps, Harland; Evans, Ian; Mueller, Mark; Podgorski, William; Szentgyorgyi, Andrew
2016-08-01
The GMT-Consortium Large Earth Finder (G-CLEF), the first-light instrument for the GMT, is a fiber-fed, high-resolution echelle spectrograph. In this paper, we present the optical design of G-CLEF. We emphasize the unique solutions derived for the spectrograph fiber feed: the Mangin mirror that corrects the cylindrical field curvature, the implementation of VPH grisms as cross-dispersers, and our novel solution for a multi-colored exposure meter. We describe the spectrograph's blue and red cameras, comprised of 7 and 8 elements respectively, with one aspheric surface in each camera, and present the expected echellogram imaged on the instrument focal planes. Finally, we present a ghost analysis and mitigation strategy that takes into account both single-reflection and double-reflection back-scattering from various elements in the optical train.
1973-09-01
This Earth Resource Experiment Package (EREP) photograph of the Uncompahgre area of Colorado was electronically acquired in September of 1973 by the Multi-spectral Scanner, Skylab Experiment S192. EREP images were used to analyze the vegetation conditions and landscape characteristics of this area. Skylab's Earth sensors played the dual roles of gathering information about the planet and perfecting instruments and techniques for future satellites and manned stations. An array of six fixed cameras, another for high resolution, and the astronauts' handheld cameras photographed surface features. Other instruments, recording on magnetic tape, measured the reflectivity of plants, soils, and water. Radar measured the altitude of land and water surfaces. The sensors' objectives were to survey croplands and forests, identify soils and rock types, map natural features and urban developments, detect sediments and the spread of pollutants, study clouds and the sea, and determine the extent of snow and ice cover.
A simple and low-cost structured illumination microscopy using a pico-projector
NASA Astrophysics Data System (ADS)
Özgürün, Baturay
2018-02-01
Here, the development of a low-cost structured illumination microscopy (SIM) system based on a pico-projector is presented. The pico-projector contains independent red, green and blue LEDs that remove the need for an external illumination source. Moreover, the display element of the pico-projector serves as a pattern-generating spatial light modulator. A simple lens group couples light from the projector into an epi-illumination port of a commercial microscope system. 2D sub-SIM images are acquired and synthesized to surpass the diffraction limit using a 40× (0.75 NA) objective. The resolution of the reconstructed SIM images is verified with a dyed test object and a fixed cell sample.
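Since the display element acts as the pattern-generating spatial light modulator, the patterns it must show are simple sinusoidal fringes at several orientations and phase shifts. A sketch of generating such a pattern set; the resolution, fringe period, and counts are assumptions, not the paper's values.

```python
import numpy as np

def sim_patterns(height, width, period_px, n_angles=3, n_phases=3):
    """Sinusoidal illumination patterns for structured illumination
    microscopy: n_angles orientations x n_phases phase shifts."""
    y, x = np.mgrid[0:height, 0:width]
    patterns = []
    for k in range(n_angles):
        theta = k * np.pi / n_angles
        for p in range(n_phases):
            phase = 2 * np.pi * p / n_phases
            fringe = x * np.cos(theta) + y * np.sin(theta)
            patterns.append(0.5 * (1 + np.cos(2 * np.pi * fringe / period_px + phase)))
    return patterns  # values in [0, 1], ready to send to the display element

nine_patterns = sim_patterns(480, 854, period_px=8)  # e.g. a WVGA pico-projector
```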
Strategic options towards an affordable high-performance infrared camera
NASA Astrophysics Data System (ADS)
Oduor, Patrick; Mizuno, Genki; Dutta, Achyut K.; Lewis, Jay; Dhar, Nibir K.
2016-05-01
The promise of infrared (IR) imaging attaining the low cost that made CMOS sensors successful has been hampered by the inability to achieve the cost advantages necessary for crossover from military and industrial applications into the consumer and mass-scale commercial realm, despite well-documented advantages. Banpil Photonics is developing affordable IR cameras by adopting new strategies to speed up the decline of the IR camera cost curve. We present a new short-wave IR (SWIR) camera: a 640×512 pixel InGaAs uncooled system with high sensitivity and low noise (<50 e-), high dynamic range (100 dB), high frame rates (>500 frames per second (FPS)) at full resolution, and low power consumption (<1 W) in a compact system. This camera paves the way towards mass-market adoption by not only demonstrating the high-performance IR imaging capability demanded by military and industrial applications, but also illuminating a path towards the price points essential for adoption in consumer-facing industries such as automotive, medical, and security imaging. The strategic options presented include new sensor manufacturing technologies that scale favorably towards automation, multi-focal-plane-array-compatible readout electronics, and dense or ultra-small pixel pitch devices.
NASA Astrophysics Data System (ADS)
Sakon, I.; Onaka, T.; Kataza, H.; Wada, T.; Sarugaku, Y.; Matsuhara, H.; Nakagawa, T.; Kobayashi, N.; Kemper, C.; Ohyama, Y.; Matsumoto, T.; Seok, J. Y.
Mid-Infrared Camera and Spectrometers (MCS) is one of the focal-plane instruments proposed for the SPICA mission in the pre-project phase. SPICA MCS is equipped with two spectrometers with different spectral resolving powers (R = λ/δλ): a medium-resolution spectrometer (MRS), which covers 12-38 µm with R ≃ 1100-3000, and a high-resolution spectrometer (HRS), which covers 12-18 µm with R ≃ 30,000. MCS is also equipped with a Wide Field Camera (WFC), which is capable of multi-object grism spectroscopy in addition to imaging observations. A small slit aperture for low-resolution slit spectroscopy is planned to be placed just next to the field-of-view (FOV) aperture for imaging and slit-less spectroscopic observations. MCS covers an important part of the core spectral range of SPICA and, complementary to SAFARI (SpicA FAR-infrared Instrument), can perform crucial observations for a number of key science cases to revolutionize our understanding of the lifecycle of dust in the universe. In this article, the latest design specification and the expected performance of SPICA/MCS are introduced. Key science cases that should be targeted by SPICA/MCS have been discussed by the MCS science working group; among them, some of those related to dust science are briefly introduced.
Diagnosing clouds and hazes in exoplanet atmospheres
NASA Astrophysics Data System (ADS)
Fraine, Jonathan David
Exoplanet atmospheres provide a probe into the conditions on alien worlds, from hot Jupiters to super-Earths. We can now glimpse the behaviour of extreme solar systems that defy our understanding of planet formation and capture our imaginations about the possibilities for understanding planets and life in our universe. I combined multi-epoch, multi-instrument observations from both space- and ground-based facilities. I developed observational techniques and tools to constrain exoplanetary atmospheric compositions, temperature profiles, and scale heights over a span of planetary masses and wavelengths, providing a probe into the properties of these diverse planetary atmospheres. I led a team that used the Spitzer Space Telescope, with the IR Array Camera (IRAC), to observe the well-known transiting super-Earth GJ 1214b (~2.7 R⊕). My precisely constrained infrared transit depth, with an uncertainty of ~40 ppm, significantly constrained the lack of any molecular detections out to a wavelength of 5 µm. The significance of this null detection challenges self-consistent models for the atmosphere of this super-Earth: models must invoke thick, grey-opacity clouds that uniformly cause the atmosphere to be opaque at all wavelengths. My team and I used the Hubble Space Telescope Wide Field Camera 3 (HST-WFC3) to spectroscopically probe the atmosphere of the transiting warm Neptune HAT-P-11b (~4.5 R⊕), and detected the first molecular signature from a small exoplanet (Rp < R_Saturn), inferring the presence of a hydrogen-rich atmosphere. The average densities of many transiting exoplanets are known, but the degree to which atmospheric composition, that is, the abundance of hydrogen relative to other atoms and molecules, correlates with the bulk composition has not yet been established. In an effort to characterize the atmospheric metallicity in greater detail, my team observed HAT-P-11 using warm Spitzer IRAC at 3.6 and 4.5 µm. The non-detections of the eclipses of HAT-P-11b provided upper limits on the temperature profile at 3.6 and 4.5 µm. I am one of the founding members of the ACCESS collaboration (Arizona-CfA-Catolica Exoplanet Spectroscopy Survey), a ground-based observational campaign to spectroscopically survey a catalogue of exoplanetary atmospheres using major optical telescopes. I observed several of our targets with the 6.5 m Magellan-Baade telescope. The results of my first observation provided low signal-to-noise constraints on the cloud properties of the hot Jupiter WASP-4b, as well as the UV radiation environment produced by its host star, WASP-4. The combination of these observational constraints provided greater insight into the end-products of the planet formation process and expanded our community's knowledge base for both cloudy and clear worlds.
A Multi-resolution, Multi-epoch Low Radio Frequency Survey of the Kepler K2 Mission Campaign 1 Field
NASA Astrophysics Data System (ADS)
Tingay, S. J.; Hancock, P. J.; Wayth, R. B.; Intema, H.; Jagannathan, P.; Mooley, K.
2016-10-01
We present the first dedicated radio continuum survey of a Kepler K2 mission field, Field 1, covering the North Galactic Cap. The survey is wide-field, contemporaneous, multi-epoch, and multi-resolution in nature and was conducted at low radio frequencies between 140 and 200 MHz. The multi-epoch and ultra-wide-field (but relatively low resolution) part of the survey was provided by 15 nights of observation using the Murchison Widefield Array (MWA) over a period of approximately a month, contemporaneous with K2 observations of the field. The multi-resolution aspect of the survey was provided by the low-resolution (4′) MWA imaging, complemented by non-contemporaneous but much higher resolution (20″) observations using the Giant Metrewave Radio Telescope (GMRT). The survey is, therefore, sensitive to the details of radio structures across a wide range of angular scales. Consistent with other recent low radio frequency surveys, no significant radio transients or variables were detected in the survey. The resulting source catalogs consist of 1085 and 1468 detections in the two MWA observation bands (centered at 154 and 185 MHz, respectively) and 7445 detections in the GMRT observation band (centered at 148 MHz), over 314 square degrees. The survey is presented as a significant resource for multi-wavelength investigations of the more than 21,000 target objects in the K2 field. We briefly examine our survey data against K2 target lists for dwarf star types (stellar types M and L) that have been known to produce radio flares.
Polishing techniques for MEGARA pupil elements optics
NASA Astrophysics Data System (ADS)
Izazaga, R.; Carrasco, E.; Aguirre, D.; Salas, A.; Gil de Paz, A.; Gallego, J.; Iglesias, J.; Arroyo, J. M.; Hernández, M.; López, N.; López, V.; Quechol, J. T.; Salazar, M. F.; Carballo, C.; Cruz, E.; Arriaga, J.; De la Luz, J. A.; Huepa, A.; Jaimes, G. L.; Reyes, J.
2016-07-01
MEGARA (Multi-Espectrógrafo en GTC de Alta Resolución para Astronomía) is the new integral-field and multi-object optical spectrograph for the 10.4 m Gran Telescopio Canarias. It will offer R_FWHM ≃ 6,000, 12,000 and 18,700 for the low-, mid- and high-resolution modes, respectively, in the wavelength range 3650-9700 Å. The dispersive elements are volume phase holographic (VPH) gratings, sandwiched between two flat fused-silica windows of high optical precision in large apertures. The design, based on VPHs in combination with Ohara PBM2Y prisms, allows the collimator and camera angle to be kept fixed. Seventy-three optical elements are being built in Mexico at INAOE and CIO. For the low-resolution modes, the specification on window irregularity is 1 fringe over 210 mm × 170 mm and 0.5 fringe over 190 mm × 160 mm, for a window thickness of 25 mm. For the medium- and high-resolution modes, the irregularity specification is 2 fringes over 220 mm × 180 mm and 1 fringe over 205 mm × 160 mm, for a window thickness of 20 mm. In this work we present a description of the polishing techniques developed at the INAOE optical workshop to fabricate the 36 fused-silica windows and 24 PBM2Y prisms, which allow us to achieve such demanding specifications. We include the processes of mounting, cutting, blocking, polishing and testing.
A Silicon SPECT System for Molecular Imaging of the Mouse Brain.
Shokouhi, Sepideh; Fritz, Mark A; McDonald, Benjamin S; Durko, Heather L; Furenlid, Lars R; Wilson, Donald W; Peterson, Todd E
2007-01-01
We previously demonstrated the feasibility of using silicon double-sided strip detectors (DSSDs) for SPECT imaging of the activity distribution of iodine-125 using a 300-micrometer thick detector. Based on this experience, we now have developed fully customized silicon DSSDs and associated readout electronics with the intent of developing a multi-pinhole SPECT system. Each DSSD has a 60.4 mm × 60.4 mm active area and is 1 mm thick. The strip pitch is 59 micrometers, and the readout of the 1024 strips on each side gives rise to a detector with over one million pixels. Combining four high-resolution DSSDs into a SPECT system offers an unprecedented space-bandwidth product for the imaging of single-photon emitters. The system consists of two camera heads with two silicon detectors stacked one behind the other in each head. The collimator has a focused pinhole system with cylindrical-shaped pinholes that are laser-drilled in a 250 μm tungsten plate. The unique ability to collect projection data at two magnifications simultaneously allows for multiplexed data at high resolution to be combined with lower magnification data with little or no multiplexing. With the current multi-pinhole collimator design, our SPECT system will be capable of offering high spatial resolution, sensitivity and angular sampling for small field-of-view applications, such as molecular imaging of the mouse brain.
VizieR Online Data Catalog: Antennae galaxies (NGC 4038/4039) revisited (Whitmore+, 2010)
NASA Astrophysics Data System (ADS)
Whitmore, B. C.; Chandar, R.; Schweizer, F.; Rothberg, B.; Leitherer, C.; Rieke, M.; Rieke, G.; Blair, W. P.; Mengel, S.; Alonso-Herrero, A.
2012-06-01
Observations of the main bodies of NGC 4038/39 were made with the Hubble Space Telescope (HST), using the ACS, as part of Program GO-10188. Multi-band photometry was obtained in the following optical broadband filters: F435W (~B), F550M (~V), and F814W (~I). Archival F336W photometry of the Antennae (Program GO-5962) was used to supplement our optical ACS/WFC observations. Infrared observations were made using the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) camera on HST as part of Program GO-10188. Observations were made using the NIC2 camera with the F160W, F187N, and F237M filters, and the NIC3 camera with the F110W, F160W, F164W, F187N, and F222M filters. (10 data files).
Design and Development of Multi-Purpose CCD Camera System with Thermoelectric Cooling: Hardware
NASA Astrophysics Data System (ADS)
Kang, Y.-W.; Byun, Y. I.; Rhee, J. H.; Oh, S. H.; Kim, D. K.
2007-12-01
We designed and developed a multi-purpose CCD camera system for three kinds of CCDs: KAF-0401E (768×512), KAF-1602E (1536×1024), and KAF-3200E (2184×1472), made by Kodak. The system supports a fast USB port as well as a parallel port for data I/O and control signals. The packaging is based on two-stage circuit boards for size reduction and contains a built-in filter wheel. Basic hardware components include the clock pattern circuit, A/D conversion circuit, CCD data flow control circuit, and CCD temperature control unit. The CCD temperature can be controlled with an accuracy of approximately 0.4°C over a maximum temperature range of Δ33°C. The camera system has a readout noise of 6 e⁻ and a system gain of 5 e⁻/ADU. A total of 10 CCD camera systems were produced, and our tests show that all of them deliver acceptable performance.
Laser scatter feature of surface defect on apples
NASA Astrophysics Data System (ADS)
Rao, Xiuqin; Ying, Yibin; Cen, YiKe; Huang, Haibo
2006-10-01
A machine vision system for real-time fruit quality inspection was developed. The system consists of a chamber, a laser projector, a TMS-7DSP CCD camera (PULNIX Inc.), and a computer. A Meteor-II/MC frame grabber (Matrox Graphics Inc.) was installed in the computer to grab fruit images. The laser projector and the camera were mounted on the ceiling of the chamber. An apple was placed in the chamber, the spot of the laser projector was projected onto the surface of the fruit, and an image was grabbed. Two breeds of apples were tested; each apple was imaged twice, once for the normal surface and once for the defect. The red component of the images was used to extract features of the defect and of the sound surface of the fruits. The mean, standard deviation, and entropy of the red component of the laser scatter image were analyzed. The standard deviation of the red component was the most suitable feature for separating the defective surface from the sound surface for Shuijin Fuji apples; for Bintang apples, more work is needed to separate the different surfaces with laser scatter images.
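A minimal sketch of extracting the three red-channel features analyzed in the study (mean, standard deviation, and entropy) from a frame; the array shapes and the entropy binning are assumptions.

```python
import numpy as np

def red_channel_features(image_bgr):
    """Mean, standard deviation, and entropy of the red component of a
    laser-scatter image (8-bit BGR array as delivered by a frame grabber)."""
    red = image_bgr[:, :, 2].astype(float)
    hist, _ = np.histogram(red, bins=256, range=(0, 256), density=True)
    hist = hist[hist > 0]
    entropy = -np.sum(hist * np.log2(hist))
    return red.mean(), red.std(), entropy

# Hypothetical frame from the CCD camera.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
mean_r, std_r, entropy_r = red_channel_features(frame)
```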
NASA Astrophysics Data System (ADS)
Lavallée, Yan; Johnson, Jeffrey; Andrews, Benjamin; Wolf, Rudiger; Rose, William; Chigna, Gustavo; Pineda, Armand
2016-04-01
In January 2016, we held the first scientific/educational Workshops on Volcanoes (WoV). The workshop took place at Santiaguito volcano, the most active volcano in Guatemala. 69 international scientists of all ages participated in this intensive, multi-parametric investigation of the volcanic activity, which included the deployment of seismometers, tiltmeters, infrasound microphones and mini-DOAS, as well as optical, thermographic, UV and FTIR cameras around the active vent. These instruments recorded volcanic activity in concert over periods of 3 to 9 days. Here we review the research activities and present some of the spectacular observations made through this interdisciplinary effort. Observations range from high-resolution drone and IR footage of explosions to monitoring of rock falls, quantification of the erupted mass of different gases and ash, and morphological changes in the dome caused by recurring explosions (among many other volcanic processes). We will discuss the success of such integrative ventures in furthering science frontiers and developing the next generation of geoscientists.
NASA Astrophysics Data System (ADS)
Chen, Enguo; Liu, Peng; Yu, Feihong
2012-10-01
A novel synchronized optimization method for multiple freeform surfaces is proposed and applied to the design of a double-lens illumination system for CF-LCoS pico-projectors. Based on Snell's law and the law of energy conservation, a series of first-order partial differential equations is derived for the multiple freeform surfaces of the initial system. By assigning a light-deflection angle to each freeform surface, multiple surfaces can be obtained simultaneously by solving the corresponding equations, while the restricted angle on the CF-LCoS panel is guaranteed. To improve spatial uniformity, the multiple surfaces are synchronously optimized with the simplex algorithm for an extended LED source. A design example shows that the double-lens illumination system, which employs a single 2 mm×2 mm LED chip and a CF-LCoS panel with a diagonal of 0.59 inches, satisfies the needs of a pico-projector. Moreover, the analysis indicates that the design method represents a substantial, practical improvement over traditional CF-LCoS projection systems, offering good performance with both portability and low cost. The synchronized optimization method not only realizes collimated, uniform illumination, but can also be applied to other specific lighting conditions.
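The surface equations rest on the vector form of Snell's law. A small sketch of that building block; the refractive indices and directions below are hypothetical, and the paper's PDE construction is not reproduced.

```python
import numpy as np

def refract(incident, normal, n1, n2):
    """Vector form of Snell's law: refracted direction of a unit ray
    'incident' at a surface with unit 'normal', going from index n1 to n2.
    Returns None on total internal reflection."""
    eta = n1 / n2
    cos_i = -np.dot(normal, incident)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None  # total internal reflection
    return eta * incident + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * normal

ray = np.array([0.0, -np.sin(0.3), -np.cos(0.3)])   # incoming unit ray
n_hat = np.array([0.0, 0.0, 1.0])                   # surface normal
bent = refract(ray, n_hat, 1.0, 1.49)               # air into a plastic lens
```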
New ultrasensitive pickup device for deep-sea robots: underwater super-HARP color TV camera
NASA Astrophysics Data System (ADS)
Maruyama, Hirotaka; Tanioka, Kenkichi; Uchida, Tetsuo
1994-11-01
An ultra-sensitive underwater super-HARP color TV camera has been developed. The characteristics of the super-HARP tube, such as spectral response and lag, had to be designed specifically for underwater use, because the propagation of light in water is very different from that in air and also depends on the light's wavelength. The tubes have new electrostatic focusing and magnetic deflection functions and are arranged in parallel to miniaturize the camera. A deep-sea robot (DOLPHIN 3K) was fitted with this camera and used for a first sea test in Sagami Bay, Japan. The underwater visual information was clear enough to promise significant improvements in both deep-sea surveying and safety. It was thus confirmed that the super-HARP camera is very effective for underwater use.
Proper Orthogonal Decomposition on Experimental Multi-phase Flow in a Pipe
NASA Astrophysics Data System (ADS)
Viggiano, Bianca; Tutkun, Murat; Cal, Raúl Bayoán
2016-11-01
Multi-phase flow in a 10 cm diameter pipe is analyzed using proper orthogonal decomposition. The data were obtained using X-ray computed tomography in the Well Flow Loop at the Institute for Energy Technology in Kjeller, Norway. The system consists of two sources and two detectors; one camera records the vertical beams and the other records the horizontal beams. The X-ray system allows measurement of phase holdup, cross-sectional phase distributions, and gas-liquid interface characteristics within the pipe. The mathematical framework for the decomposition in the context of multi-phase flows is developed. Phase fractions of a two-phase (gas-liquid) flow are analyzed and a reduced-order description of the flow is generated. Experimental data add complexity to the analysis, since only a limited set of quantities is known for the reconstruction. Comparison between the reconstructed fields and the full data set allows observation of the important features. The mathematical description obtained from the decomposition will deepen the understanding of multi-phase flow characteristics and is applicable to fluidized beds, hydroelectric power, and nuclear processes, to name a few.
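A minimal sketch of snapshot proper orthogonal decomposition via the SVD, the standard way to extract ranked modes from a matrix of phase-fraction snapshots; the data shapes and the 95% energy criterion are illustrative assumptions.

```python
import numpy as np

# Hypothetical data: T time steps of a cross-sectional gas-fraction field,
# each flattened to N values (e.g. reconstructed X-ray tomography frames).
T, N = 2000, 32 * 32
snapshots = np.random.rand(T, N)

# Subtract the temporal mean, then take the SVD: rows of Vt are the POD
# modes, and the singular values rank them by captured variance (energy).
fluctuations = snapshots - snapshots.mean(axis=0)
U, s, Vt = np.linalg.svd(fluctuations, full_matrices=False)

energy = s**2 / np.sum(s**2)
r = np.searchsorted(np.cumsum(energy), 0.95) + 1   # modes for 95% energy
reduced = (U[:, :r] * s[:r]) @ Vt[:r]              # rank-r reconstruction
```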
Causes of cine image quality deterioration in cardiac catheterization laboratories.
Levin, D C; Dunham, L R; Stueve, R
1983-10-01
Deterioration of cineangiographic image quality can result from malfunctions or technical errors at a number of points along the cine imaging chain: generator and automatic brightness control, x-ray tube, x-ray beam geometry, image intensifier, optics, cine camera, cine film, film processing, and cine projector. Such malfunctions or errors can result in loss of image contrast, loss of spatial resolution, improper control of film optical density (brightness), or some combination thereof. While the electronic and photographic technology involved is complex, physicians who perform cardiac catheterization should be conversant with the problems and what can be done to solve them. Catheterization laboratory personnel have control over a number of factors that directly affect image quality, including radiation dose rate per cine frame, kilovoltage or pulse width (depending on type of automatic brightness control), cine run time, selection of small or large focal spot, proper object-intensifier distance and beam collimation, aperture of the cine camera lens, selection of cine film, processing temperature, processing immersion time, and selection of developer.
NASA Astrophysics Data System (ADS)
Sun, Y. S.; Zhang, L.; Xu, B.; Zhang, Y.
2018-04-01
Accurate positioning of optical satellite imagery without ground control is a precondition for remote sensing applications and small/medium-scale mapping of large foreign areas or large volumes of imagery. In this paper, aiming at the geometric features of optical satellite imagery, and based on a widely used optimization method for constrained problems, the Alternating Direction Method of Multipliers (ADMM), together with RFM least-squares block adjustment, we propose a GCP-independent block adjustment method for large-scale domestic high-resolution optical satellite imagery - GISIBA (GCP-Independent Satellite Imagery Block Adjustment) - which is easy to parallelize and highly efficient. In this method, virtual "average" control points are built to solve the rank-defect problem and to support qualitative and quantitative analysis in block adjustment without ground control. The test results show that the horizontal and vertical accuracies of multi-covered and multi-temporal satellite images are better than 10 m and 6 m, respectively. Meanwhile, the mosaic problem between adjacent areas in large-area DOM production can be solved if public geographic information data are introduced as horizontal and vertical constraints in the block adjustment process. Finally, through experiments using GF-1 and ZY-3 satellite images over several typical test areas, the reliability, accuracy and performance of the developed procedure are presented and studied.
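The RFM adjustment details are not given in the abstract, but the parallel structure ADMM brings to a block adjustment can be illustrated with a generic consensus formulation: each image block solves a small local least-squares problem, and a consensus ("average") step ties the shared parameters together. All matrices and sizes below are hypothetical, a sketch of the optimization pattern rather than the authors' method:

    import numpy as np

    rng = np.random.default_rng(1)
    n_blocks, n_params = 4, 6
    A = [rng.standard_normal((50, n_params)) for _ in range(n_blocks)]
    x_true = rng.standard_normal(n_params)
    b = [Ai @ x_true + 0.01 * rng.standard_normal(50) for Ai in A]

    rho = 1.0
    x = [np.zeros(n_params) for _ in range(n_blocks)]
    u = [np.zeros(n_params) for _ in range(n_blocks)]
    z = np.zeros(n_params)

    for _ in range(100):
        # Local least-squares updates (parallelizable across image blocks).
        for i in range(n_blocks):
            lhs = A[i].T @ A[i] + rho * np.eye(n_params)
            rhs = A[i].T @ b[i] + rho * (z - u[i])
            x[i] = np.linalg.solve(lhs, rhs)
        # Consensus update (the "average" step) followed by dual updates.
        z = np.mean([x[i] + u[i] for i in range(n_blocks)], axis=0)
        for i in range(n_blocks):
            u[i] += x[i] - z

    print("consensus error:", np.linalg.norm(z - x_true))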
Microfilming for Drafting Students
ERIC Educational Resources Information Center
Bass, Ronald E.
1972-01-01
If you have a 35mm camera, an enlarger or filmstrip projector, and developing equipment you can introduce your drafting students to one of the processes used in the newly emerging field of technical communication. (Editor)
Structured light system calibration method with optimal fringe angle.
Li, Beiwen; Zhang, Song
2014-11-20
For structured light system calibration, one popular approach is to treat the projector as an inverse camera. This is usually performed by projecting horizontal and vertical sequences of patterns to establish a one-to-one mapping between camera points and projector points. However, for a well-designed system, either the horizontal or the vertical fringe images are insensitive to depth variation and thus yield an inaccurate mapping. As a result, the calibration accuracy is jeopardized if a conventional calibration method is used. To address this limitation, this paper proposes a novel calibration method based on optimal fringe angle determination. Experiments demonstrate that our calibration approach can increase the measurement accuracy by up to 38% compared to the conventional calibration method, with a calibration volume of 300(H) mm × 250(W) mm × 500(D) mm.
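The determination of the optimal angle is system-specific, but the basic ingredient, phase-shifted sinusoidal fringes whose stripes are oriented at an arbitrary angle rather than strictly horizontally or vertically, is straightforward to sketch. Resolution, pitch, and angle below are illustrative values, not those of the paper's system:

    import numpy as np

    def fringe_patterns(width=1024, height=768, pitch=18.0, angle_deg=40.0, steps=3):
        """Phase-shifted sinusoidal fringes oriented at an arbitrary angle."""
        y, x = np.mgrid[0:height, 0:width]
        # The fringe phase varies along the direction normal to the stripes.
        theta = np.deg2rad(angle_deg)
        phase = 2 * np.pi * (x * np.cos(theta) + y * np.sin(theta)) / pitch
        shifts = [2 * np.pi * k / steps for k in range(steps)]
        return [0.5 + 0.5 * np.cos(phase + s) for s in shifts]

    patterns = fringe_patterns()
    print(len(patterns), patterns[0].shape)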
Towards Robust Self-Calibration for Handheld 3d Line Laser Scanning
NASA Astrophysics Data System (ADS)
Bleier, M.; Nüchter, A.
2017-11-01
This paper studies self-calibration of a structured light system, which reconstructs 3D information using video from a static consumer camera and a handheld cross line laser projector. Intersections between the individual laser curves and geometric constraints on the relative position of the laser planes are exploited to achieve dense 3D reconstruction. This is possible without any prior knowledge of the movement of the projector. However, inaccurately extracted laser lines introduce noise in the detected intersection positions and therefore distort the reconstruction result. Furthermore, when scanning objects with specular reflections, such as glossy painted or metallic surfaces, the reflections are often extracted from the camera image as erroneous laser curves. In this paper we investigate how robust estimates of the parameters of the laser planes can be obtained despite noisy detections.
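The abstract does not name its robust estimator; a common baseline for fitting a laser plane in the presence of spurious detections is a RANSAC-style fit followed by a least-squares refinement on the inliers. A minimal sketch on hypothetical 3D point data:

    import numpy as np

    def ransac_plane(points, n_iters=500, threshold=0.005, rng=None):
        """Robust plane fit: returns (unit normal n, offset d) with n.p + d ~ 0."""
        rng = rng or np.random.default_rng(2)
        best_inliers = np.zeros(len(points), dtype=bool)
        for _ in range(n_iters):
            sample = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            if np.linalg.norm(n) < 1e-12:
                continue  # degenerate (near-collinear) minimal sample
            n /= np.linalg.norm(n)
            inliers = np.abs((points - sample[0]) @ n) < threshold
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        # Refine with a least-squares (SVD) fit on the inlier set only.
        centroid = points[best_inliers].mean(axis=0)
        _, _, vt = np.linalg.svd(points[best_inliers] - centroid)
        n = vt[-1]
        return n, -n @ centroid, best_inliers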
Barnacle Bill in Super Resolution from Insurance Panorama
NASA Technical Reports Server (NTRS)
1998-01-01
Barnacle Bill is a small rock immediately west-northwest of the Mars Pathfinder lander and was the first rock visited by the Sojourner Rover's alpha proton X-ray spectrometer (APXS) instrument. This image shows super resolution techniques applied to the first APXS target rock, which was never imaged with the rover's forward cameras. Super resolution was applied to help to address questions about the texture of this rock and what it might tell us about its mode of origin.
This view of Barnacle Bill was produced by combining the 'Insurance Pan' frames taken while the IMP camera was still in its stowed position on sol 2. The composite color frames that make up this anaglyph were produced for both the right and left eye of the IMP. The right eye composite consists of 5 frames, taken with different color filters; the left eye consists of only 1 frame. The resultant image from each eye was enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame that is sharper than an individual frame would be. These panchromatic frames were then colorized with the red, green, and blue filtered images from the same sequence. The color balance was adjusted to approximate the true color of Mars. The anaglyph view was produced by combining the left with the right eye color composite frames by assigning the left eye composite view to the red color plane and the right eye composite view to the green and blue color planes (cyan), to produce a stereo anaglyph mosaic. This mosaic can be viewed in 3-D on your computer monitor or in color print form by wearing red-blue 3-D glasses. Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals.
A higher-speed compressive sensing camera through multi-diode design
NASA Astrophysics Data System (ADS)
Herman, Matthew A.; Tidman, James; Hewitt, Donna; Weston, Tyler; McMackin, Lenore
2013-05-01
Obtaining high frame rates is a challenge with compressive sensing (CS) systems that gather measurements in a sequential manner, such as the single-pixel CS camera. One strategy for increasing the frame rate is to divide the FOV into smaller areas that are sampled and reconstructed in parallel. Following this strategy, InView has developed a multi-aperture CS camera using an 8×4 array of photodiodes that essentially act as 32 individual simultaneously operating single-pixel cameras. Images reconstructed from each of the photodiode measurements are stitched together to form the full FOV. To account for crosstalk between the sub-apertures, novel modulation patterns have been developed to allow neighboring sub-apertures to share energy. Regions of overlap not only account for crosstalk energy that would otherwise be reconstructed as noise, but they also allow for tolerance in the alignment of the DMD to the lenslet array. Currently, the multi-aperture camera is built into a computational imaging workstation configuration useful for research and development purposes. In this configuration, modulation patterns are generated in a CPU and sent to the DMD via PCI express, which allows the operator to develop and change the patterns used in the data acquisition step. The sensor data is collected and then streamed to the workstation via an Ethernet or USB connection for the reconstruction step. Depending on the amount of data taken and the amount of overlap between sub-apertures, frame rates of 2-5 frames per second can be achieved. In a stand-alone camera platform, currently in development, pattern generation and reconstruction will be implemented on-board.
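A toy version of the single-pixel measurement model each sub-aperture implements (random binary DMD patterns, one photodiode reading per pattern), reconstructed here with ISTA under a DCT sparsity prior. The sizes and the solver are illustrative assumptions, not InView's implementation:

    import numpy as np
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(3)
    n = 32                    # toy sub-aperture: 32x32 pixels
    m = 400                   # number of DMD patterns (measurements)
    scene = np.zeros((n, n)); scene[8:20, 10:22] = 1.0   # simple test scene

    patterns = rng.choice([0.0, 1.0], size=(m, n * n))   # DMD mirror states
    y = patterns @ scene.ravel()                          # photodiode readings

    # ISTA: gradient step on ||P x - y||^2, soft-threshold in the DCT domain.
    x = np.zeros(n * n)
    step = 1.0 / np.linalg.norm(patterns, 2) ** 2
    lam = 0.1
    for _ in range(200):
        x = x - step * patterns.T @ (patterns @ x - y)
        coeffs = dctn(x.reshape(n, n), norm="ortho")
        coeffs = np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam * step, 0.0)
        x = idctn(coeffs, norm="ortho").ravel()

    print("relative error:", np.linalg.norm(x - scene.ravel()) / np.linalg.norm(scene))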
A weighted optimization approach to time-of-flight sensor fusion.
Schwarz, Sebastian; Sjostrom, Marten; Olsson, Roger
2014-01-01
Acquiring scenery depth is a fundamental task in computer vision, with many applications in manufacturing, surveillance, or robotics relying on accurate scenery information. Time-of-flight cameras can provide depth information in real-time and overcome shortcomings of traditional stereo analysis. However, they provide limited spatial resolution, and sophisticated upscaling algorithms are sought after. In this paper, we present a sensor fusion approach to time-of-flight super resolution, based on the combination of depth and texture sources. Unlike other texture guided approaches, we interpret the depth upscaling process as a weighted energy optimization problem. Three different weights are introduced, employing different available sensor data. The individual weights address object boundaries in depth, depth sensor noise, and temporal consistency. Applied in consecutive order, they form three weighting strategies for time-of-flight super resolution. Objective evaluations show advantages in depth accuracy and for depth image based rendering compared with state-of-the-art depth upscaling. Subjective view synthesis evaluation shows a significant increase in viewer preference, by a factor of four, in stereoscopic viewing conditions. To the best of our knowledge, this is the first extensive subjective test performed on time-of-flight depth upscaling. Objective and subjective results prove the suitability of our approach for depth scenery capture.
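The paper's three weights are tied to its particular sensor data, but the general shape of such a weighted energy, a data term on the sparse ToF samples plus an edge-aware smoothness term steered by the texture image, can be sketched with a simple Jacobi-style solver. The weight form and λ here are placeholders, not the paper's formulation:

    import numpy as np

    def upscale_depth(depth_samples, sample_mask, texture, lam=0.2, n_iters=500):
        """Minimize sum_i m_i (d_i - z_i)^2 + lam * sum_{i~j} w_ij (d_i - d_j)^2,
        where m is the sample mask, z the ToF depths mapped onto the high-res
        grid, and w_ij edge-aware weights from the texture image."""
        d = depth_samples.copy()
        gy, gx = np.gradient(texture)
        w = np.exp(-(gx ** 2 + gy ** 2) / 0.01)   # small across texture edges
        for _ in range(n_iters):
            nbr_sum = np.zeros_like(d)
            w_sum = np.zeros_like(d)
            for shift in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                wn = np.roll(w, shift, axis=(0, 1)) * w   # symmetric pair weight
                nbr_sum += wn * np.roll(d, shift, axis=(0, 1))
                w_sum += wn
            # Jacobi step: each pixel moves to the weighted mean of its data
            # term and its 4 neighbours.
            d = (sample_mask * depth_samples + lam * nbr_sum) / (
                sample_mask + lam * w_sum + 1e-12)
        return d

    rng = np.random.default_rng(4)
    texture = rng.random((64, 64))
    mask = (rng.random((64, 64)) < 0.06).astype(float)   # sparse ToF samples
    depth = mask * 2.0
    print(upscale_depth(depth, mask, texture).mean())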
Retinal oxygen saturation evaluation by multi-spectral fundus imaging
NASA Astrophysics Data System (ADS)
Khoobehi, Bahram; Ning, Jinfeng; Puissegur, Elise; Bordeaux, Kimberly; Balasubramanian, Madhusudhanan; Beach, James
2007-03-01
Purpose: To develop a multi-spectral method to measure oxygen saturation of the retina in the human eye. Methods: Five Cynomolgus monkeys with normal eyes were anesthetized with intramuscular ketamine/xylazine and intravenous pentobarbital. Multi-spectral fundus imaging was performed in five monkeys with a commercial fundus camera equipped with a liquid crystal tuned filter in the illumination light path and a 16-bit digital camera. Recording parameters were controlled with software written specifically for the application. Seven images at successively longer oxygen-sensing wavelengths were recorded within 4 seconds. Individual images for each wavelength were captured in less than 100 msec of flash illumination. Slightly misaligned images of separate wavelengths due to slight eye motion were registered and corrected by translational and rotational image registration prior to analysis. Numerical values of relative oxygen saturation of retinal arteries and veins and the underlying tissue in between the artery/vein pairs were evaluated by an algorithm previously described, but which is now corrected for blood volume from averaged pixels (n > 1000). Color saturation maps were constructed by applying the algorithm at each image pixel using a Matlab script. Results: Both the numerical values of relative oxygen saturation and the saturation maps correspond to the physiological condition, that is, in a normal retina, the artery is more saturated than the tissue and the tissue is more saturated than the vein. With the multi-spectral fundus camera and proper registration of the multi-wavelength images, we were able to determine oxygen saturation in the primate retinal structures on a tolerable time scale which is applicable to human subjects. Conclusions: Seven wavelength multi-spectral imagery can be used to measure oxygen saturation in retinal artery, vein, and tissue (microcirculation). This technique is safe and can be used to monitor oxygen uptake in humans.
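The referenced algorithm, with its blood-volume correction, is beyond an abstract-level sketch, but the underlying principle of multi-wavelength oximetry is an optical-density ratio between an oxygen-sensitive wavelength and an isosbestic one (where oxy- and deoxyhemoglobin absorb equally). A schematic two-wavelength version with hypothetical image inputs; real use requires calibration constants that are omitted here:

    import numpy as np

    def od_ratio_map(i_sensitive, i_isosbestic, i_ref_sensitive, i_ref_isosbestic):
        """Relative O2 saturation proxy from optical densities at two wavelengths.

        OD = log10(I_reference / I_vessel); the ratio OD_sensitive / OD_isosbestic
        varies roughly linearly with saturation (calibration constants omitted).
        """
        od_s = np.log10(np.clip(i_ref_sensitive, 1e-6, None) /
                        np.clip(i_sensitive, 1e-6, None))
        od_i = np.log10(np.clip(i_ref_isosbestic, 1e-6, None) /
                        np.clip(i_isosbestic, 1e-6, None))
        return od_s / np.clip(od_i, 1e-6, None)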
NASA Astrophysics Data System (ADS)
Defanti, Thomas A.; Acevedo, Daniel; Ainsworth, Richard A.; Brown, Maxine D.; Cutchin, Steven; Dawe, Gregory; Doerr, Kai-Uwe; Johnson, Andrew; Knox, Chris; Kooima, Robert; Kuester, Falko; Leigh, Jason; Long, Lance; Otto, Peter; Petrovic, Vid; Ponto, Kevin; Prudhomme, Andrew; Rao, Ramesh; Renambot, Luc; Sandin, Daniel J.; Schulze, Jurgen P.; Smarr, Larry; Srinivasan, Madhu; Weber, Philip; Wickham, Gregory
2011-03-01
The CAVE, a walk-in virtual reality environment typically consisting of four to six 3 m-by-3 m sides of a room made of rear-projected screens, was first conceived and built in 1991. In the nearly two decades since its conception, the supporting technology has improved so that current CAVEs are much brighter, at much higher resolution, and have dramatically improved graphics performance. However, rear-projection-based CAVEs typically must be housed in a 10 m-by-10 m-by-10 m room (allowing space behind the screen walls for the projectors), which limits their deployment to large spaces. The CAVE of the future will be made of tessellated panel displays, eliminating the projection distance, but the implementation of such displays is challenging. Early multi-tile, panel-based, virtual-reality displays have been designed, prototyped, and built for the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia by researchers at the University of California, San Diego, and the University of Illinois at Chicago. New means of image generation and control are considered key contributions to the future viability of the CAVE as a virtual-reality device.
New developments in super-resolution for GaoFen-4
NASA Astrophysics Data System (ADS)
Li, Feng; Fu, Jie; Xin, Lei; Liu, Yuhong; Liu, Zhijia
2017-10-01
In this paper, the application of super-resolution (SR, restoring a high-spatial-resolution image from a series of low-resolution images of the same scene) techniques to remote sensing images from GaoFen (GF)-4, the most advanced geostationary-orbit Earth-observing satellite in China, is investigated and tested. SR has been a hot research area for decades, but one of the barriers to applying SR in the remote sensing community is the time interval between acquisitions of the low-resolution (LR) images. In general, the longer the interval, the less reliable the reconstruction. GF-4 has the unique advantage of capturing a sequence of LR images of the same region within minutes, i.e., it works as a staring camera from the point of view of SR. This is the first experiment applying super-resolution to a sequence of low-resolution images captured by GF-4 within a short time period. In this paper, we use Maximum a Posteriori (MAP) estimation to solve the ill-conditioned SR problem. Both the wavelet transform and the curvelet transform are used to set up a sparse prior for remote sensing images. By combining several images of the BeiJing and DunHuang regions captured by GF-4, our method improves spatial resolution both visually and numerically. Experimental tests show that much detail that cannot be observed in the captured LR images can be seen in the super-resolved high-resolution (HR) images. To aid the evaluation, Google Earth imagery can also be referenced. Moreover, our experimental tests show that the higher the temporal resolution, the better the HR images can be resolved. The study illustrates that applying SR to geostationary-orbit Earth observation data is feasible and worthwhile, and it holds potential for all other geostationary-orbit Earth-observing systems.
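A heavily simplified sketch of a MAP reconstruction of this kind: the forward model below assumes known integer shifts and box-blur decimation, and a quadratic smoothness prior stands in for the paper's wavelet/curvelet sparse prior. All of it is illustrative rather than the authors' implementation:

    import numpy as np

    def degrade(hr, shift, factor):
        """Forward model for one frame: integer shift, then box-blur + decimate."""
        shifted = np.roll(hr, shift, axis=(0, 1))
        h, w = shifted.shape
        return shifted.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    def map_sr(lr_frames, shifts, factor, lam=0.05, step=0.5, n_iters=300):
        """Gradient descent on sum_k ||degrade(x, s_k) - y_k||^2 / 2
        + lam/2 * ||grad x||^2 (quadratic prior as a stand-in)."""
        x = np.kron(lr_frames[0], np.ones((factor, factor)))  # crude initial guess
        for _ in range(n_iters):
            grad = np.zeros_like(x)
            for y, s in zip(lr_frames, shifts):
                r = degrade(x, s, factor) - y
                r_up = np.kron(r, np.ones((factor, factor))) / factor ** 2
                grad += np.roll(r_up, (-s[0], -s[1]), axis=(0, 1))  # adjoint ops
            lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                   np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
            grad -= lam * lap          # gradient of the smoothness prior
            x -= step * grad
        return x

    # Synthetic check: four shifted low-resolution views of a bright block.
    hr = np.zeros((64, 64)); hr[20:40, 28:48] = 1.0
    shifts = [(0, 0), (1, 0), (0, 1), (1, 1)]
    frames = [degrade(hr, s, 2) for s in shifts]
    sr = map_sr(frames, shifts, factor=2)
    print(float(np.abs(sr - hr).mean()))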
Current status of the facility instrumentation suite at the Large Binocular Telescope Observatory
NASA Astrophysics Data System (ADS)
Rothberg, Barry; Kuhn, Olga; Edwards, Michelle L.; Hill, John M.; Thompson, David; Veillet, Christian; Wagner, R. Mark
2016-07-01
The current status of the facility instrumentation for the Large Binocular Telescope (LBT) is reviewed. The LBT encompasses two 8.4-meter primary mirrors on a single mount, yielding the collecting area of a single 11.8-meter mirror, or the resolution of a 23-meter telescope when the beams are interferometrically combined. The three facility instruments at LBT are: 1) the Large Binocular Cameras (LBCs), each with a 23'×25' field of view (FOV). The blue-optimized and red-optimized optical-wavelength LBCs are mounted at the prime focus of the SX (left) and DX (right) primary mirrors, respectively. Combined, the filter suite of the two LBCs covers 0.3-1.1 μm, including the addition of new medium-band filters centered on TiO (0.78 μm) and CN (0.82 μm); 2) the Multi-Object Double Spectrograph (MODS), two identical optical spectrographs each mounted at the straight-through f/15 Gregorian focus of a primary mirror. The capabilities of MODS-1 and -2 include imaging with Sloan filters (u, g, r, i, and z) and medium-resolution (R ~ 2000) spectroscopy, each with 24 interchangeable masks (multi-object or longslit) over a 6'×6' FOV. Each MODS is capable of blue-only (0.32-0.6 μm) or red-only (0.5-1.05 μm) spectroscopic coverage, or both can employ a dichroic for 0.32-1.05 μm coverage (with reduced coverage from 0.56-0.57 μm); and 3) the two LBT Utility Camera in the Infrared instruments (LUCIs), each mounted at a bent-front Gregorian f/15 focus of a primary mirror. LUCI-1 and -2 are designed for seeing-limited (4'×4' FOV) and adaptive-optics (0.5'×0.5' FOV, using the thin-shell adaptive secondary mirrors) imaging and spectroscopy over the wavelength range 0.95-2.5 μm, with spectroscopic resolutions of 400 <= R <= 11000 (depending on the combination of grating, slits, and cameras used). The spectroscopic capabilities also include 32 interchangeable multi-object or longslit masks which are cryogenically cooled. Currently all facility instruments are in place at the LBT and, for the first time, have been on-sky for science observations. In Summer 2015 LUCI-1 was refurbished to replace the infrared detector, to install a high-resolution camera that takes advantage of the adaptive SX secondary, and to install a grating designed primarily for use with high-resolution adaptive optics. Thus, like MODS-1 and -2, both LUCIs now have specifications nearly identical to each other. The software interface for both LUCIs has also been replaced, allowing both instruments to be run together from a single interface. With the installation of all facility instruments finally complete, we also report on the first science use of "mixed-mode" operations, defined as the combination of different paired instruments with each mirror (i.e. LBC+MODS, LBC+LUCI, LUCI+MODS). Although both primary mirrors reside on a single mount, they are capable of operating as independent entities within a defined "co-pointing" limit. This provides users with the additional capability to independently dither each mirror or to center observations on two different sets of spatial coordinates within this limit.
Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu
2016-01-01
Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single-pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method is also time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
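A minimal block-matching search of the kind the abstract mentions, using the sum of absolute differences over a small window; the block size and search range below are illustrative choices, not the paper's parameters:

    import numpy as np

    def block_match(ref, cur, block=8, search=4):
        """Return per-block (dy, dx) motion vectors from cur back to ref (SAD)."""
        h, w = cur.shape
        vectors = np.zeros((h // block, w // block, 2), dtype=int)
        for by in range(h // block):
            for bx in range(w // block):
                y0, x0 = by * block, bx * block
                target = cur[y0:y0 + block, x0:x0 + block]
                best, best_v = np.inf, (0, 0)
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        y1, x1 = y0 + dy, x0 + dx
                        if 0 <= y1 and y1 + block <= h and 0 <= x1 and x1 + block <= w:
                            sad = np.abs(ref[y1:y1 + block, x1:x1 + block] - target).sum()
                            if sad < best:
                                best, best_v = sad, (dy, dx)
                vectors[by, bx] = best_v
        return vectors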
Imaging Techniques for Dense 3D reconstruction of Swimming Aquatic Life using Multi-view Stereo
NASA Astrophysics Data System (ADS)
Daily, David; Kiser, Jillian; McQueen, Sarah
2016-11-01
Understanding how various species of fish swim is an important step toward uncovering how they propel themselves through the water. Previous methods have focused on profile capture methods or sparse 3D manual feature point tracking. This research uses an array of 30 cameras to automatically track hundreds of points on a fish as it swims in 3D using multi-view stereo. Blacktip sharks, sting rays, puffer fish, turtles and more were imaged in collaboration with the National Aquarium in Baltimore, Maryland using the multi-view stereo technique. The processes for data collection, camera synchronization, feature point extraction, 3D reconstruction, 3D alignment, biological considerations, and lessons learned will be presented. Preliminary results of the 3D reconstructions will be shown and future research into mathematically characterizing various bio-locomotive maneuvers will be discussed.
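The core geometric step of multi-view stereo, recovering a 3D point from its projections in calibrated cameras, is linear (DLT) triangulation. Two views are shown below; each additional camera in a rig like the 30-camera array simply contributes two more rows to the stacked system. The camera matrices here are synthetic:

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one point from two 3x4 camera matrices.
        x1, x2 are pixel coordinates (u, v) in the two views."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]

    # Synthetic check: two translated cameras observing one 3D point.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X = np.array([0.2, -0.1, 4.0, 1.0])
    x1 = (P1 @ X)[:2] / (P1 @ X)[2]
    x2 = (P2 @ X)[:2] / (P2 @ X)[2]
    print(triangulate(P1, P2, x1, x2))   # -> [0.2, -0.1, 4.0]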
Multi-energy SXR cameras for magnetically confined fusion plasmas (invited)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delgado-Aparicio, L. F.; Maddox, J.; Pablant, N.
2016-11-14
A compact multi-energy soft x-ray camera has been developed for time-, energy-, and space-resolved measurements of the soft x-ray emissivity in magnetically confined fusion plasmas. Multi-energy soft x-ray imaging provides a unique opportunity for measuring, simultaneously, a variety of important plasma properties (T_e, n_Z, ΔZ_eff, and n_e,fast). The electron temperature can be obtained by modeling the slope of the continuum radiation from ratios of the available brightness and inverted radial emissivity profiles over multiple energy ranges. Impurity density measurements are also possible using the line emission from medium- to high-Z impurities to separate the background as well as transient levels of metal contributions. As a result, this technique should also be explored as a burning plasma diagnostic in view of its simplicity and robustness.
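The temperature inference described rests on the near-exponential energy dependence of the continuum: the brightness ratio of two energy bands fixes T_e. A schematic two-band version, assuming narrow bands; a real camera integrates the continuum over each filter's response:

    import numpy as np

    def te_from_band_ratio(b1, b2, e1_kev, e2_kev):
        """Electron temperature (keV) from the continuum brightness ratio of two
        narrow energy bands: b(E) ~ exp(-E / Te)  =>  Te = (E2 - E1) / ln(b1 / b2)."""
        return (e2_kev - e1_kev) / np.log(b1 / b2)

    # Self-consistency check with synthetic brightness values at Te = 2 keV.
    te_true = 2.0
    b1, b2 = np.exp(-4.0 / te_true), np.exp(-6.0 / te_true)
    print(te_from_band_ratio(b1, b2, 4.0, 6.0))   # -> 2.0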
Distributed rendering for multiview parallax displays
NASA Astrophysics Data System (ADS)
Annen, T.; Matusik, W.; Pfister, H.; Seidel, H.-P.; Zwicker, M.
2006-02-01
3D display technology holds great promise for the future of television, virtual reality, entertainment, and visualization. Multiview parallax displays deliver stereoscopic views without glasses to arbitrary positions within the viewing zone. These systems must include a high-performance and scalable 3D rendering subsystem in order to generate multiple views at real-time frame rates. This paper describes a distributed rendering system for large-scale multiview parallax displays built with a network of PCs, commodity graphics accelerators, multiple projectors, and multiview screens. The main challenge is to render various perspective views of the scene and assign rendering tasks effectively. In this paper we investigate two different approaches: Optical multiplexing for lenticular screens and software multiplexing for parallax-barrier displays. We describe the construction of large-scale multi-projector 3D display systems using lenticular and parallax-barrier technology. We have developed different distributed rendering algorithms using the Chromium stream-processing framework and evaluate the trade-offs and performance bottlenecks. Our results show that Chromium is well suited for interactive rendering on multiview parallax displays.
Focal Plane Detectors for the Advanced Gamma-Ray Imaging System (AGIS)
NASA Astrophysics Data System (ADS)
Otte, A. N.; Byrum, K.; Drake, G.; Falcone, A.; Funk, S.; Horan, D.; Mukherjee, R.; Smith, A.; Tajima, H.; Wagner, R. G.; Williams, D. A.
2008-12-01
The Advanced Gamma-Ray Imaging System (AGIS) is a concept for the next generation observatory in ground-based very high energy gamma-ray astronomy. Design goals are ten times better sensitivity, higher angular resolution, and a lower energy threshold than existing Cherenkov telescopes. Simulations show that a substantial improvement in angular resolution may be achieved if the pixel diameter is reduced to the order of 0.05 deg, i.e. two to three times smaller than the pixel diameter of current Cherenkov telescope cameras. At these dimensions, photon detectors with smaller physical dimensions can be attractive alternatives to the classical photomultiplier tube (PMT). Furthermore, the operation of an experiment of AGIS's size requires photon detectors that are, among other things, more reliable, more durable, and possibly more efficient. Alternative photon detectors we are considering for AGIS include both silicon photomultipliers (SiPMs) and multi-anode photomultipliers (MAPMTs). Here we present results from laboratory testing of MAPMTs and SiPMs along with results from the first incorporation of these devices into cameras on test-bed Cherenkov telescopes.