Sample records for pixel optical depth

  1. DSCOVR EPIC AERUV Parameters

    Atmospheric Science Data Center

    2018-06-27

    ... AerosolType      The aerosol type associated with the ground pixel.        1 - Smoke ... algorithm flag associated with the ground pixel:     Aerosol extinction Optical Depth (AOD), Single Scattering Albedo (SSA), and     Aerosol Absorption Optical Depth (AAOD) Retrievals:        0 - Most ...

  2. Pixel-based parametric source depth map for Cerenkov luminescence imaging

    NASA Astrophysics Data System (ADS)

    Altabella, L.; Boschi, F.; Spinelli, A. E.

    2016-01-01

    Optical tomography represents a challenging problem in optical imaging because of the intrinsically ill-posed inverse problem caused by photon diffusion. Cerenkov luminescence tomography (CLT) of optical photons produced in tissue by several radionuclides (e.g., 32P, 18F, 90Y) has been investigated using both 3D multispectral approaches and multi-view methods. Difficulty in achieving convergence with 3D algorithms can discourage the use of this technique to recover source depth and intensity. For these reasons, we developed a faster, corrected 2D approach based on multispectral acquisitions that obtains source depth and intensity through a pixel-based fit of the source intensity. Monte Carlo simulations and experimental data were used to develop and validate the method for producing a parametric map of source depth. With this approach we obtain parametric source depth maps with a precision between 3% and 7% for the MC simulations and 5-6% for the experimental data. Using this method we are able to obtain reliable information about the depth of Cerenkov luminescence sources with a simple and flexible procedure.
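
    A minimal sketch of the kind of pixel-based multispectral fit described above, assuming a simple exponential attenuation model I(λ) = I0·exp(-μeff(λ)·d) with known effective attenuation coefficients; the coefficients and function names below are illustrative, not the authors' actual tissue model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative effective attenuation coefficients (1/mm) at the acquisition
# wavelengths; real values depend on the tissue optical model used.
wavelengths_nm = np.array([600.0, 640.0, 680.0, 720.0])
mu_eff = np.array([0.90, 0.70, 0.55, 0.45])

def surface_intensity(mu, depth, amplitude):
    """Exponential attenuation of light from a source at `depth` (mm)."""
    return amplitude * np.exp(-mu * depth)

def fit_depth_map(stack):
    """Fit source depth and intensity pixel by pixel.

    `stack` has shape (n_wavelengths, ny, nx) and holds the multispectral
    Cerenkov images.  Returns (depth_map, intensity_map).
    """
    n_wl, ny, nx = stack.shape
    depth_map = np.zeros((ny, nx))
    intensity_map = np.zeros((ny, nx))
    for iy in range(ny):
        for ix in range(nx):
            y = stack[:, iy, ix].astype(float)
            if y.max() <= 0:
                continue  # skip background pixels
            popt, _ = curve_fit(surface_intensity, mu_eff, y,
                                p0=(2.0, y.max()), maxfev=2000)
            depth_map[iy, ix], intensity_map[iy, ix] = popt
    return depth_map, intensity_map
```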

  3. Uncertainty in cloud optical depth estimates made from satellite radiance measurements

    NASA Technical Reports Server (NTRS)

    Pincus, Robert; Szczodrak, Malgorzata; Gu, Jiujing; Austin, Philip

    1995-01-01

    The uncertainty in optical depths retrieved from satellite measurements of visible wavelength radiance at the top of the atmosphere is quantified. Techniques are briefly reviewed for the estimation of optical depth from measurements of radiance, and it is noted that these estimates are always more uncertain at greater optical depths and larger solar zenith angles. The lack of radiometric calibration for visible wavelength imagers on operational satellites dominates the uncertainty in retrievals of optical depth. This is true for both single-pixel retrievals and for statistics calculated from a population of individual retrievals. For individual estimates or small samples, sensor discretization can also be significant, but the sensitivity of the retrieval to the specification of the model atmosphere is less important. The relative uncertainty in calibration affects the accuracy with which optical depth distributions measured by different sensors may be quantitatively compared, while the absolute calibration uncertainty, acting through the nonlinear mapping of radiance to optical depth, limits the degree to which distributions measured by the same sensor may be distinguished.

  4. Medical diagnosis system and method with multispectral imaging. [depth of burns and optical density of the skin

    NASA Technical Reports Server (NTRS)

    Anselmo, V. J.; Reilly, T. H. (Inventor)

    1979-01-01

    A skin diagnosis system includes a scanning and optical arrangement whereby light reflected from each incremental area (pixel) of the skin is directed simultaneously to three separate light filters, e.g., IR, red, and green. As a result, the three devices simultaneously produce three signals which are directly related to the reflectance of light of different wavelengths from the corresponding pixel. After processing, the three signals for each pixel are used as inputs to one or more output devices to produce a visual color display and/or a hard-copy color print, either of which is usable as a diagnostic aid by a physician.

  5. Improving Pixel Level Cloud Optical Property Retrieval using Monte Carlo Simulations

    NASA Technical Reports Server (NTRS)

    Oreopoulos, Lazaros; Marshak, Alexander; Cahalan, Robert F.

    1999-01-01

    The accurate pixel-by-pixel retrieval of cloud optical properties from space is influenced by radiative smoothing due to high-order photon scattering and radiative roughening due to low-order scattering events. Both are caused by cloud heterogeneity and the three-dimensional nature of radiative transfer, and can be studied with the aid of computer simulations. We use Monte Carlo simulations on variable 1-D and 2-D model cloud fields to seek dependencies of the smoothing and roughening phenomena on single scattering albedo, solar zenith angle, and cloud characteristics. The results are discussed in the context of high-resolution satellite (such as Landsat) retrieval applications. The current work extends the investigation of the inverse NIPA (Non-local Independent Pixel Approximation) as a tool for removing smoothing and improving retrievals of cloud optical depth. This is accomplished by: (1) delineating the limits of NIPA applicability; (2) exploring NIPA parameter dependences on cloud macrostructural features, such as mean cloud optical depth, geometrical thickness, and the degree of extinction and cloud top height variability (we also compare parameter values obtained from empirical and theoretical considerations); (3) examining the differences between applying NIPA to radiation quantities vs. applying it directly to optical properties; (4) studying the radiation-budget importance of the NIPA corrections as a function of scale. Finally, we discuss fundamental adjustments that need to be considered for successful radiance inversion at non-conservative wavelengths and oblique Sun angles. These adjustments are necessary to remove roughening signatures, which become more prominent with increasing absorption and solar zenith angle.

  6. An energy- and depth-dependent model for x-ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallas, Brandon D.; Boswell, Jonathan S.; Badano, Aldo

    In this paper, we model an x-ray imaging system, paying special attention to the energy- and depth-dependent characteristics of the inputs and interactions: x rays are polychromatic, the interaction depth and conversion to optical photons are energy-dependent, and the optical scattering and collection efficiency depend on the depth of interaction. The model we construct is a random function of the point process that begins with the distribution of x rays incident on the phosphor and ends with optical photons being detected by the active area of detector pixels to form an image. We show how the point-process representation can be used to calculate the characteristic statistics of the model. We then simulate a Gd2O2S:Tb phosphor, estimate its characteristic statistics, and proceed with a signal-detection experiment to investigate the impact of the pixel fill factor on detecting spherical calcifications (the signal). The two extremes possible from this experiment are that SNR² does not change with fill factor or changes in proportion to fill factor. In our results, the impact of fill factor is between these extremes, and depends on the diameter of the signal.

  7. Remote sensing of submerged aquatic vegetation in lower Chesapeake Bay - A comparison of Landsat MSS to TM imagery

    NASA Technical Reports Server (NTRS)

    Ackleson, S. G.; Klemas, V.

    1987-01-01

    Landsat MSS and TM imagery, obtained simultaneously over Guinea Marsh, VA, is analyzed and compared for its ability to detect submerged aquatic vegetation (SAV). An unsupervised clustering algorithm was applied to each image, where the input classification parameters are defined as functions of apparent sensor noise. Class confidence and accuracy were computed for all water areas by comparing the classified images, pixel-by-pixel, to rasterized SAV distributions derived from color aerial photography. To illustrate the effect of water depth on classification error, areas of depth greater than 1.9 m were masked, and class confidence and accuracy were recalculated. A single-scattering radiative-transfer model is used to illustrate how percent canopy cover and water depth affect the volume reflectance from a water column containing SAV. For a submerged canopy that is morphologically and optically similar to Zostera marina inhabiting Lower Chesapeake Bay, dense canopies may be isolated by masking optically deep water. For less dense canopies, the effect of increasing water depth is to increase the apparent percent crown cover, which may result in classification error.

  8. Axial resolution improvement in spectral domain optical coherence tomography using a depth-adaptive maximum-a-posterior framework

    NASA Astrophysics Data System (ADS)

    Boroomand, Ameneh; Tan, Bingyao; Wong, Alexander; Bizheva, Kostadinka

    2015-03-01

    The axial resolution of Spectral Domain Optical Coherence Tomography (SD-OCT) images degrades with scanning depth due to the limited number of pixels and the pixel size of the camera, any aberrations in the spectrometer optics, and wavelength-dependent scattering and absorption in the imaged object [1]. Here we propose a novel algorithm which compensates for the blurring effect of these factors on the depth-dependent axial Point Spread Function (PSF) in SD-OCT images. The proposed method is based on a Maximum A Posteriori (MAP) reconstruction framework which takes advantage of a Stochastic Fully Connected Conditional Random Field (SFCRF) model. The aim is to compensate for the depth-dependent axial blur in SD-OCT images and simultaneously suppress the speckle noise which is inherent to all OCT images. Applying the proposed depth-dependent axial resolution enhancement technique to an OCT image of a cucumber considerably improved the axial resolution of the image, especially at higher imaging depths, and allowed for better visualization of cellular membranes and nuclei. Comparing the result of our proposed method with the conventional Lucy-Richardson deconvolution algorithm clearly demonstrates the efficiency of our proposed technique in better visualization and preservation of fine details and structures in the imaged sample, as well as better speckle noise suppression. This illustrates the potential usefulness of our proposed technique as a suitable replacement for hardware approaches, which are often very costly and complicated.

  9. SNR improvement for hyperspectral application using frame and pixel binning

    NASA Astrophysics Data System (ADS)

    Rehman, Sami Ur; Kumar, Ankush; Banerjee, Arup

    2016-05-01

    Hyperspectral imaging spectrometer systems are increasingly being used in the field of remote sensing for a variety of civilian and military applications. The ability of such instruments to discriminate finer spectral features, along with improved spatial and radiometric performance, has made them a powerful tool in the field of remote sensing. Design and development of spaceborne hyperspectral imaging spectrometers poses many technological challenges in terms of optics, dispersion element, detectors, electronics, and mechanical systems. The main factors that define the type of detector are the spectral region, SNR, dynamic range, pixel size, number of pixels, frame rate, operating temperature, etc. Detectors with higher quantum efficiency and higher well depth are the preferred choice for such applications. CCD-based Si detectors serve the requirement of high well depth for VNIR band spectrometers but suffer from smear. Smear can be controlled by using CMOS detectors, and Si CMOS detectors with large format arrays are available. These detectors generally have a smaller pitch and lower well depth. A binning technique can be used with available CMOS detectors to meet the large swath, higher resolution, and high SNR requirements. The availability of a larger satellite dwell time can be exploited to bin multiple frames and increase the signal collection even with lower well depth detectors, ultimately increasing the SNR. Lab measurements reveal that the SNR improvement from frame binning is greater than that from pixel binning. The effect of pixel binning as compared to frame binning will be discussed, and the degradation of SNR relative to the theoretical value for pixel binning will be analyzed.
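
    A back-of-the-envelope sketch of the binning arithmetic discussed above, assuming a shot-noise plus read-noise model in which read noise is added once per read; the electron counts are illustrative and do not reproduce the authors' lab measurements.

```python
import numpy as np

def snr_summed(signal_e, read_noise_e, n_samples, n_reads):
    """SNR after summing n_samples exposures of `signal_e` electrons each.

    Shot noise grows with the total collected charge; read noise is added
    once per read, so `n_reads` depends on where the binning happens
    (digital summation: one read per sample; charge-domain binning: one).
    """
    total_signal = n_samples * signal_e
    noise = np.sqrt(total_signal + n_reads * read_noise_e**2)
    return total_signal / noise

signal = 2000.0      # electrons per pixel per frame (illustrative)
read_noise = 30.0    # electrons rms per read (illustrative)

print("single frame       :", round(snr_summed(signal, read_noise, 1, 1), 1))
print("4-frame binning    :", round(snr_summed(signal, read_noise, 4, 4), 1))
print("4-pixel charge bin :", round(snr_summed(signal, read_noise, 4, 1), 1))
```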

  10. Inferring river bathymetry via Image-to-Depth Quantile Transformation (IDQT)

    USGS Publications Warehouse

    Legleiter, Carl

    2016-01-01

    Conventional, regression-based methods of inferring depth from passive optical image data undermine the advantages of remote sensing for characterizing river systems. This study introduces and evaluates a more flexible framework, Image-to-Depth Quantile Transformation (IDQT), that involves linking the frequency distribution of pixel values to that of depth. In addition, a new image processing workflow involving deep water correction and Minimum Noise Fraction (MNF) transformation can reduce a hyperspectral data set to a single variable related to depth and thus suitable for input to IDQT. Applied to a gravel bed river, IDQT avoided negative depth estimates along channel margins and underpredictions of pool depth. Depth retrieval accuracy (R² = 0.79) and precision (0.27 m) were comparable to an established band ratio-based method, although a small shallow bias (0.04 m) was observed. Several ways of specifying distributions of pixel values and depths were evaluated but had negligible impact on the resulting depth estimates, implying that IDQT was robust to these implementation details. In essence, IDQT uses frequency distributions of pixel values and depths to achieve an aspatial calibration; the image itself provides information on the spatial distribution of depths. The approach thus reduces sensitivity to misalignment between field and image data sets and allows greater flexibility in the timing of field data collection relative to image acquisition, a significant advantage in dynamic channels. IDQT also creates new possibilities for depth retrieval in the absence of field data if a model could be used to predict the distribution of depths within a reach.
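
    A minimal sketch of the quantile-transformation idea described above, assuming a calibration sample of surveyed depths and a single depth-related image band; the function below is illustrative, not the published IDQT implementation.

```python
import numpy as np

def idqt_depth_map(image_band, field_depths, valid_mask=None):
    """Map each pixel value to a depth by matching empirical quantiles.

    image_band   : 2-D array of the depth-related image variable (e.g. an
                   MNF component), NaN outside the wetted channel.
    field_depths : 1-D array of surveyed depths defining the target CDF.
    """
    if valid_mask is None:
        valid_mask = np.isfinite(image_band)
    pixels = image_band[valid_mask]

    # Empirical CDF position (quantile) of every in-channel pixel value.
    order = np.argsort(pixels)
    quantiles = np.empty_like(pixels, dtype=float)
    quantiles[order] = (np.arange(pixels.size) + 0.5) / pixels.size

    # If the image variable decreases with depth (e.g. darker = deeper),
    # flip the quantile: quantiles = 1.0 - quantiles

    # Look up the same quantile in the depth distribution.
    depth_map = np.full(image_band.shape, np.nan)
    depth_map[valid_mask] = np.quantile(field_depths, quantiles)
    return depth_map
```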

  11. Independent Pixel and Two Dimensional Estimates of LANDSAT-Derived Cloud Field Albedo

    NASA Technical Reports Server (NTRS)

    Chambers, L. H.; Wielicki, Bruce A.; Evans, K. F.

    1996-01-01

    A theoretical study has been conducted on the effects of cloud horizontal inhomogeneity on cloud albedo bias. A two-dimensional (2D) version of the Spherical Harmonic Discrete Ordinate Method (SHDOM) is used to estimate the albedo bias of the plane parallel (PP-IPA) and independent pixel (IPA-2D) approximations for a wide range of 2D cloud fields obtained from LANDSAT. They include single layer trade cumulus, open and closed cell broken stratocumulus, and solid stratocumulus boundary layer cloud fields over ocean. Findings are presented on a variety of averaging scales and are summarized as a function of cloud fraction, mean cloud optical depth, cloud aspect ratio, standard deviation of optical depth, and the gamma distribution parameter ν (a measure of the width of the optical depth distribution). Biases are found to be small for small cloud fraction or mean optical depth, where the cloud fields under study behave linearly. They are large (up to 0.20 for PP-IPA bias, -0.12 for IPA-2D bias) for large ν. On a scene average basis PP-IPA bias can reach 0.30, while IPA-2D bias reaches its largest magnitude at -0.07. Biases due to horizontal transport (IPA-2D) are much smaller than PP-IPA biases but account for 20% RMS of the bias overall. Limitations of this work include the particular cloud field set used, assumptions of conservative scattering, constant cloud droplet size, no gas absorption or surface reflectance, and restriction to 2D radiative transport. The LANDSAT data used may also be affected by radiative smoothing.
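
    The plane-parallel bias discussed above follows from the concavity of albedo in optical depth; a tiny sketch using a conservative two-stream albedo approximation (an assumption, not the SHDOM calculations used in the study):

```python
import numpy as np

def two_stream_albedo(tau, g=0.85):
    """Conservative-scattering two-stream albedo (illustrative approximation)."""
    t = (1.0 - g) * tau
    return t / (2.0 + t)

# A broken cloud scene: per-pixel optical depths with a broad distribution.
rng = np.random.default_rng(0)
tau_pixels = rng.gamma(shape=1.5, scale=8.0, size=100_000)

albedo_ipa = two_stream_albedo(tau_pixels).mean()   # mean of per-pixel albedos
albedo_pp = two_stream_albedo(tau_pixels.mean())     # albedo of the mean tau

print(f"mean optical depth : {tau_pixels.mean():.1f}")
print(f"IPA scene albedo   : {albedo_ipa:.3f}")
print(f"PP scene albedo    : {albedo_pp:.3f}")
print(f"PP-IPA albedo bias : {albedo_pp - albedo_ipa:+.3f}")
```

    Because albedo is a concave function of optical depth, the albedo of the mean optical depth (plane parallel) always exceeds the mean of the per-pixel albedos (independent pixel), giving a positive PP-IPA bias.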

  12. Application of velocity filtering to optical-flow passive ranging

    NASA Technical Reports Server (NTRS)

    Barniv, Yair

    1992-01-01

    The performance of the velocity filtering method as applied to optical-flow passive ranging under real-world conditions is evaluated. The theory of the 3-D Fourier transform as applied to constant-speed moving points is reviewed, and the space-domain shift-and-add algorithm is derived from the general 3-D matched filtering formulation. The constant-speed algorithm is then modified to fit the actual speed encountered in the optical flow application, and the passband of that filter is found in terms of depth (sensor/object distance) so as to cover any given range of depths. Two algorithmic solutions for the problems associated with pixel interpolation and object expansion are developed, and experimental results are presented.
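
    A simplified sketch of the shift-and-add matched filter mentioned above for a one-dimensional image-line sequence, assuming a constant pixel velocity per hypothesis; sub-pixel interpolation handling and the velocity-to-depth mapping for a specific camera motion are only hinted at.

```python
import numpy as np

def shift_and_add_response(frames, velocities):
    """Score each hypothesized image velocity by shift-and-add stacking.

    frames     : array (n_frames, n_pixels), one image line per time step.
    velocities : candidate velocities in pixels per frame.
    Returns an array (n_velocities, n_pixels) of stacked responses; the
    velocity maximizing the response at a pixel is the matched one and, in
    optical-flow ranging, maps to depth (slower flow = more distant object).
    """
    n_frames, n_pixels = frames.shape
    x = np.arange(n_pixels)
    responses = np.zeros((len(velocities), n_pixels))
    for iv, v in enumerate(velocities):
        stack = np.zeros(n_pixels)
        for t in range(n_frames):
            # Shift frame t back by v*t (linear interpolation) and accumulate.
            stack += np.interp(x + v * t, x, frames[t], left=0.0, right=0.0)
        responses[iv] = stack / n_frames
    return responses
```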

  13. An analysis of haze effects on LANDSAT multispectral scanner data

    NASA Technical Reports Server (NTRS)

    Johnson, W. R.; Sestak, M. L. (Principal Investigator)

    1981-01-01

    Early season changes in optical depth change brightness, primarily along the soil line; and during crop development, changes in optical depth change both greenness and brightness. Thus, the existence of haze in the imagery could cause an unsuspecting analyst to interpret the spectral appearance as indicating an episodal event when, in fact, haze was present. The techniques for converting LANDSAT-3 data to simulate LANDSAT-2 data are in error; the yellowness and nonesuch computations are primarily affected. Yellowness appears well correlated with optical depth, although experimental evidence with variable background and variable optical depth is still needed. The variance of picture elements within a spring wheat field is related to its equivalent in optical depth changes caused by haze. This establishes the sensitivity of channel 1 (greenness) pixels to changes in haze levels. The between-field picture element means and variances were determined for the spring wheat fields. This shows the variability of channel data on two specific dates, emphasizing that crop development can be influenced by many factors. The atmospheric correction program ATCOR reduces segment data from LANDSAT acquisitions to a common haze level and improves the results of analysis.

  14. High-resolution photography of clouds from the surface: Retrieval of optical depth of thin clouds down to centimeter scales: High-Resolution Photography of Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwartz, Stephen E.; Huang, Dong; Vladutescu, Daniela Viviana

    This article describes the approach and presents initial results, for a period of several minutes in north central Oklahoma, of an examination of clouds by high resolution digital photography from the surface looking vertically upward. A commercially available camera having 35-mm equivalent focal length up to 1200 mm (nominal resolution as fine as 6 µrad, which corresponds to 9 mm for cloud height 1.5 km) is used to obtain a measure of zenith radiance of a 30 m × 30 m domain as a two-dimensional image consisting of 3456 × 3456 pixels (12 million pixels). Downwelling zenith radiance varies substantially within single images and between successive images obtained at 4-s intervals. Variation in zenith radiance found on scales down to about 10 cm is attributed to variation in cloud optical depth (COD). Attention here is directed primarily to optically thin clouds, COD less than about 2. A radiation transfer model, used to relate downwelling zenith radiance to COD and to relate the counts in the camera image to zenith radiance, permits determination of COD on a pixel-by-pixel basis. COD for thin clouds determined in this way exhibits considerable variation, for example, an order of magnitude within 15 m, a factor of 2 within 4 m, and 25% (0.12 to 0.15) over 14 cm. In conclusion, this approach, which examines cloud structure on scales 3 to 5 orders of magnitude finer than satellite products, opens new avenues for examination of cloud structure and evolution.

  15. High-resolution photography of clouds from the surface: Retrieval of optical depth of thin clouds down to centimeter scales: High-Resolution Photography of Clouds

    DOE PAGES

    Schwartz, Stephen E.; Huang, Dong; Vladutescu, Daniela Viviana

    2017-03-08

    This article describes the approach and presents initial results, for a period of several minutes in north central Oklahoma, of an examination of clouds by high resolution digital photography from the surface looking vertically upward. A commercially available camera having 35-mm equivalent focal length up to 1200 mm (nominal resolution as fine as 6 µrad, which corresponds to 9 mm for cloud height 1.5 km) is used to obtain a measure of zenith radiance of a 30 m × 30 m domain as a two-dimensional image consisting of 3456 × 3456 pixels (12 million pixels). Downwelling zenith radiance varies substantially within single images and between successive images obtained at 4-s intervals. Variation in zenith radiance found on scales down to about 10 cm is attributed to variation in cloud optical depth (COD). Attention here is directed primarily to optically thin clouds, COD less than about 2. A radiation transfer model, used to relate downwelling zenith radiance to COD and to relate the counts in the camera image to zenith radiance, permits determination of COD on a pixel-by-pixel basis. COD for thin clouds determined in this way exhibits considerable variation, for example, an order of magnitude within 15 m, a factor of 2 within 4 m, and 25% (0.12 to 0.15) over 14 cm. In conclusion, this approach, which examines cloud structure on scales 3 to 5 orders of magnitude finer than satellite products, opens new avenues for examination of cloud structure and evolution.
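
    A sketch of the pixel-by-pixel retrieval step described above: camera counts are converted to zenith radiance and then inverted to COD through a precomputed, monotonic radiance-versus-COD table; the table values and calibration constant are placeholders, not the authors' radiative transfer results.

```python
import numpy as np

# Placeholder lookup table from a radiative transfer model: downwelling
# zenith radiance (arbitrary units) versus cloud optical depth.  For thin
# clouds (COD < ~2) the relation is monotonic, so it can be inverted.
cod_grid = np.array([0.0, 0.25, 0.5, 1.0, 1.5, 2.0])
radiance_grid = np.array([0.02, 0.06, 0.10, 0.17, 0.23, 0.28])

counts_to_radiance = 1.0e-4   # placeholder camera calibration constant

def retrieve_cod(image_counts):
    """Convert raw camera counts to COD, pixel by pixel."""
    radiance = image_counts * counts_to_radiance
    # Invert the monotonic radiance(COD) relation by interpolation;
    # pixels brighter than the table's range are left at the table maximum.
    return np.interp(radiance, radiance_grid, cod_grid)
```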

  16. A Proposed Extension to the Soil Moisture and Ocean Salinity Level 2 Algorithm for Mixed Forest and Moderate Vegetation Pixels

    NASA Technical Reports Server (NTRS)

    Panciera, Rocco; Walker, Jeffrey P.; Kalma, Jetse; Kim, Edward

    2011-01-01

    The Soil Moisture and Ocean Salinity (SMOS) mission, launched in November 2009, provides global maps of soil moisture and ocean salinity by measuring the L-band (1.4 GHz) emission of the Earth's surface with a spatial resolution of 40-50 km. Uncertainty in the retrieval of soil moisture over large heterogeneous areas such as SMOS pixels is expected, due to the non-linearity of the relationship between soil moisture and the microwave emission. The current baseline soil moisture retrieval algorithm adopted by SMOS and implemented in the SMOS Level 2 (SMOS L2) processor partially accounts for the sub-pixel heterogeneity of the land surface by modelling the individual contributions of different pixel fractions to the overall pixel emission. This retrieval approach is tested in this study using airborne L-band data over an area the size of a SMOS pixel characterised by a mix of Eucalypt forest and moderate vegetation types (grassland and crops), with the objective of assessing its ability to correct for the soil moisture retrieval error induced by the land surface heterogeneity. A preliminary analysis using a traditional uniform pixel retrieval approach shows that the sub-pixel heterogeneity of land cover type causes significant errors in soil moisture retrieval (7.7% v/v RMSE, 2% v/v bias) in pixels characterised by a significant amount of forest (40-60%). Although the retrieval approach adopted by SMOS partially reduces this error, it is affected by errors beyond the SMOS target accuracy, presenting in particular a strong dry bias when a fraction of the pixel is occupied by forest (4.1% v/v RMSE, -3.1% v/v bias). An extension to the SMOS approach is proposed that accounts for the heterogeneity of vegetation optical depth within the SMOS pixel. The proposed approach is shown to significantly reduce the error in retrieved soil moisture (2.8% v/v RMSE, -0.3% v/v bias) in pixels characterised by a critical amount of forest (40-60%), at the limited cost of only a crude estimate of the optical depth of the forested area (better than 35% uncertainty). This study makes use of an unprecedented data set of airborne L-band observations and ground supporting data from the National Airborne Field Experiment 2005 (NAFE'05), which allowed accurate characterisation of the land surface heterogeneity over an area equivalent in size to a SMOS pixel.

  17. Pixel-based absorption correction for dual-tracer fluorescence imaging of receptor binding potential

    PubMed Central

    Kanick, Stephen C.; Tichauer, Kenneth M.; Gunn, Jason; Samkoe, Kimberley S.; Pogue, Brian W.

    2014-01-01

    Ratiometric approaches to quantifying molecular concentrations have been used for decades in microscopy, but have rarely been exploited in vivo until recently. One dual-tracer approach can utilize an untargeted reference tracer to account for non-specific uptake of a receptor-targeted tracer, and ultimately estimate receptor binding potential quantitatively. However, interpretation of the relative dynamic distribution kinetics is confounded by differences in local tissue absorption at the wavelengths used for each tracer. This study simulated the influence of absorption on fluorescence emission intensity and depth sensitivity at typical near-infrared fluorophore wavelength bands near 700 and 800 nm in mouse skin in order to correct for these tissue optical differences in signal detection. Changes in blood volume [1-3%] and hemoglobin oxygen saturation [0-100%] were demonstrated to introduce substantial distortions to receptor binding estimates (error > 30%), whereas sampled depth was relatively insensitive to wavelength (error < 6%). In response, a pixel-by-pixel normalization of tracer inputs immediately post-injection was found to account for spatial heterogeneities in local absorption properties. Application of the pixel-based normalization method to an in vivo imaging study demonstrated significant improvement, as compared with a reference tissue normalization approach. PMID:25360349

  18. Design of a k-space spectrometer for ultra-broad waveband spectral domain optical coherence tomography.

    PubMed

    Lan, Gongpu; Li, Guoqiang

    2017-03-07

    Nonlinear sampling of the interferograms in wavenumber (k) space degrades the depth-dependent signal sensitivity in conventional spectral domain optical coherence tomography (SD-OCT). Here we report a linear-in-wavenumber (k-space) spectrometer for an ultra-broad bandwidth (760 nm-920 nm) SD-OCT, whereby a combination of a grating and a prism serves as the dispersion group. Quantitative ray tracing is applied to optimize the linearity and minimize the optical path differences for the dispersed wavenumbers. Zemax simulation is used to fit the point spread functions to the rectangular shape of the pixels of the line-scan camera and to improve the pixel sampling rates. An experimental SD-OCT is built to test and compare the performance of the k-space spectrometer with that of a conventional one. Design results demonstrate that this k-space spectrometer can reduce the nonlinearity error in k-space from 14.86% to 0.47% (by approximately 30 times) compared to the conventional spectrometer. The 95% confidence interval for RMS diameters is 5.48 ± 1.76 μm, significantly smaller than both the pixel size (14 μm × 28 μm) and the Airy disc (25.82 μm in diameter, calculated at the wavenumber of 7.548 μm⁻¹). Test results demonstrate that the fall-off curve from the k-space spectrometer exhibits much less decay (maximum as -5.20 dB) than the conventional spectrometer (maximum as -16.84 dB) over the whole imaging depth (2.2 mm).

  19. Design of a k-space spectrometer for ultra-broad waveband spectral domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Lan, Gongpu; Li, Guoqiang

    2017-03-01

    Nonlinear sampling of the interferograms in wavenumber (k) space degrades the depth-dependent signal sensitivity in conventional spectral domain optical coherence tomography (SD-OCT). Here we report a linear-in-wavenumber (k-space) spectrometer for an ultra-broad bandwidth (760 nm-920 nm) SD-OCT, whereby a combination of a grating and a prism serves as the dispersion group. Quantitative ray tracing is applied to optimize the linearity and minimize the optical path differences for the dispersed wavenumbers. Zemax simulation is used to fit the point spread functions to the rectangular shape of the pixels of the line-scan camera and to improve the pixel sampling rates. An experimental SD-OCT is built to test and compare the performance of the k-space spectrometer with that of a conventional one. Design results demonstrate that this k-space spectrometer can reduce the nonlinearity error in k-space from 14.86% to 0.47% (by approximately 30 times) compared to the conventional spectrometer. The 95% confidence interval for RMS diameters is 5.48 ± 1.76 μm—significantly smaller than both the pixel size (14 μm × 28 μm) and the Airy disc (25.82 μm in diameter, calculated at the wavenumber of 7.548 μm-1). Test results demonstrate that the fall-off curve from the k-space spectrometer exhibits much less decay (maximum as -5.20 dB) than the conventional spectrometer (maximum as -16.84 dB) over the whole imaging depth (2.2 mm).

  20. Automatic segmentation of the choroid in enhanced depth imaging optical coherence tomography images.

    PubMed

    Tian, Jing; Marziliano, Pina; Baskaran, Mani; Tun, Tin Aung; Aung, Tin

    2013-03-01

    Enhanced Depth Imaging (EDI) optical coherence tomography (OCT) provides high-definition cross-sectional images of the choroid in vivo, and hence is used in many clinical studies. However, quantification of the choroid depends on the manual labeling of two boundaries, Bruch's membrane and the choroidal-scleral interface. This labeling process is tedious and subject to inter-observer differences; hence, automatic segmentation of the choroid layer is highly desirable. In this paper, we present a fast and accurate algorithm that can segment the choroid automatically. Bruch's membrane is detected by searching for the pixel with the largest gradient value above the retinal pigment epithelium (RPE), and the choroidal-scleral interface is delineated by finding the shortest path of the graph formed by valley pixels using Dijkstra's algorithm. Experiments comparing the automatic segmentation results with the manual labelings were conducted on 45 EDI-OCT images; the average Dice's Coefficient is 90.5%, which shows good consistency of the algorithm with the manual labelings. The processing time for each image is about 1.25 seconds.
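
    A compact sketch of the two steps described above, with a simple column-to-column dynamic program standing in for the full Dijkstra search over valley pixels; the window size, cost definition, and boundary handling are illustrative only.

```python
import numpy as np

def detect_bruchs_membrane(bscan, window=20):
    """Per A-scan, approximate Bruch's membrane as the largest vertical
    gradient within a small window around the brightest (RPE) pixel."""
    grad = np.abs(np.gradient(bscan.astype(float), axis=0))
    rpe_rows = np.argmax(bscan, axis=0)
    bm_rows = np.empty(bscan.shape[1], dtype=int)
    for col in range(bscan.shape[1]):
        lo = max(rpe_rows[col] - window, 0)
        hi = min(rpe_rows[col] + window, bscan.shape[0])
        bm_rows[col] = lo + int(np.argmax(grad[lo:hi, col]))
    return bm_rows

def shortest_valley_path(cost):
    """Minimum-cost left-to-right path with steps of -1/0/+1 rows per column,
    a simplified stand-in for Dijkstra on the valley-pixel graph."""
    n_rows, n_cols = cost.shape
    acc = cost.astype(float).copy()
    for col in range(1, n_cols):
        prev = acc[:, col - 1]
        # np.roll wraps at the image edges; ignored in this sketch.
        best_prev = np.minimum(prev,
                    np.minimum(np.roll(prev, 1), np.roll(prev, -1)))
        acc[:, col] += best_prev
    # Backtrack the row indices of the optimal path.
    path = np.empty(n_cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for col in range(n_cols - 2, -1, -1):
        r = path[col + 1]
        candidates = [max(r - 1, 0), r, min(r + 1, n_rows - 1)]
        path[col] = min(candidates, key=lambda rr: acc[rr, col])
    return path
```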

  1. Impact of intensive dust outbreaks on marine primary production as seen by satellites

    NASA Astrophysics Data System (ADS)

    Papadimas, Christos; Hatzianastassiou, Nikos; Mihalopoulos, Nikos; Kanakidou, Maria

    2016-04-01

    The impact of intensive dust outbreaks from the African continent on the marine primary production of the Mediterranean Sea is investigated here using MODIS satellite observations of atmospheric aerosol optical depth and chlorophyll-a in the seawater. Dust outbreak episodes in the area are detected based on aerosol-relevant satellite observations over a 12-year period from 2003 to 2014. For a total of 167 identified episodes, correlations between aerosol optical depth and chlorophyll-a are investigated both on a regional and on a pixel-by-pixel basis, as well as for simultaneous or time-lagged satellite observations. The identified co-variations are thoroughly discussed in view of the impact of nutrient atmospheric deposition on the marine biology in the Mediterranean Sea ecosystem. This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: ARISTEIA - PANOPLY (Pollution Alters Natural Aerosol Composition: implications for Ocean Productivity, cLimate and air qualitY) grant.
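
    A small sketch of the pixel-by-pixel, optionally time-lagged correlation analysis described above, assuming co-registered AOD and chlorophyll-a time series on a common grid; the episode detection itself is not reproduced.

```python
import numpy as np

def lagged_pixel_correlation(aod, chl, lag=0):
    """Pearson correlation between AOD and chlorophyll-a per pixel.

    aod, chl : arrays (n_time, ny, nx) on a common grid.
    lag      : number of time steps by which chlorophyll-a is shifted after
               the AOD signal (lag=0 gives the simultaneous correlation).
    """
    if lag > 0:
        aod, chl = aod[:-lag], chl[lag:]
    a = aod - np.nanmean(aod, axis=0)
    c = chl - np.nanmean(chl, axis=0)
    cov = np.nanmean(a * c, axis=0)
    return cov / (np.nanstd(aod, axis=0) * np.nanstd(chl, axis=0))
```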

  2. Dense depth maps from correspondences derived from perceived motion

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2017-01-01

    Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.

  3. Optimal design and critical analysis of a high resolution video plenoptic demonstrator

    NASA Astrophysics Data System (ADS)

    Drazic, Valter; Sacré, Jean-Jacques; Bertrand, Jérôme; Schubert, Arno; Blondé, Etienne

    2011-03-01

    A plenoptic camera is a natural multi-view acquisition device also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single lens and single sensor architecture has two downsides: limited resolution and limited depth sensitivity. As a very first step, and in order to circumvent those shortcomings, we investigated how the basic design parameters of a plenoptic camera optimize both the resolution of each view and also its depth measuring capability. In a second step, we built a prototype based on a very high resolution Red One® movie camera with an external plenoptic adapter and a relay lens. The prototype delivered 5 video views of 820x410. The main limitation in our prototype is view cross talk due to optical aberrations, which reduces the depth accuracy performance. We have simulated some limiting optical aberrations and predicted their impact on the performance of the camera. In addition, we developed adjustment protocols based on a simple pattern, together with analysis programs which investigate the view mapping and the amount of parallax crosstalk on the sensor on a pixel basis. The results of these developments enabled us to adjust the lenslet array with sub-micrometer precision and to mark the pixels of the sensor where the views do not register properly.

  4. Optimal design and critical analysis of a high-resolution video plenoptic demonstrator

    NASA Astrophysics Data System (ADS)

    Drazic, Valter; Sacré, Jean-Jacques; Schubert, Arno; Bertrand, Jérôme; Blondé, Etienne

    2012-01-01

    A plenoptic camera is a natural multiview acquisition device also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single lens and single sensor architecture have two downsides: limited resolution and limited depth sensitivity. As a first step and in order to circumvent those shortcomings, we investigated how the basic design parameters of a plenoptic camera optimize both the resolution of each view and its depth-measuring capability. In a second step, we built a prototype based on a very high resolution Red One® movie camera with an external plenoptic adapter and a relay lens. The prototype delivered five video views of 820 × 410. The main limitation in our prototype is view crosstalk due to optical aberrations that reduce the depth accuracy performance. We simulated some limiting optical aberrations and predicted their impact on the performance of the camera. In addition, we developed adjustment protocols based on a simple pattern and analysis of programs that investigated the view mapping and amount of parallax crosstalk on the sensor on a pixel basis. The results of these developments enabled us to adjust the lenslet array with a submicrometer precision and to mark the pixels of the sensor where the views do not register properly.

  5. The Size of Dust and Smoke

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Desert dust particles tend to be larger in size than aerosols that originate from combustion processes. How, precisely, does the size of the aerosol particles comprising the dust that obscured the Red Sea on July 26, 2005, contrast with the size of the haze particles that obscured the United States eastern seaboard on the same day? NASA's Multi-angle Imaging SpectroRadiometer (MISR), which views Earth at nine different angles in four wavelengths, provides information about the amount, size, and shape of airborne particles. Here, MISR aerosol amount and size are presented for these two events. These MISR results distinguish desert dust, the most common non-spherical aerosol type, from pollution and forest fire particles. Determining aerosol characteristics is a key to understanding how aerosol particles influence the size, abundance, and rate of production of cloud droplets, and to a better understanding of how aerosols influence clouds and climate.

    The left panel of each of these two image sets (Red Sea, left; U.S. coastline, right) is a natural-color view from MISR's 70-degree forward viewing camera. The color-coded maps in the central panels show aerosol optical depth; the right panels provide a measure of aerosol size, expressed as the 'Angstrom exponent.' For the optical depth maps, yellow pixels indicate the most optically-thick aerosols, whereas the red, green and blue pixels represent progressively decreasing aerosol amounts. For this dramatic dust storm over the Red Sea, the aerosol is quite thick, and in some places, the dust over water is too optically thick for MISR to retrieve the aerosol amount. For the eastern seaboard haze, the thickest aerosols have accumulated over the Atlantic Ocean off the coasts of South Carolina and Georgia. Cases where no successful retrieval occurred, either due to extremely high aerosol optical thickness or to clouds, appear as dark gray pixels.

    For the Angstrom exponent maps, the blue and green pixels (smaller values) correspond to a greater contribution from large particles, while the yellow and red pixels, representing higher Angstrom exponents, correspond to a greater contribution from small particles. The Angstrom exponent is related to the way the aerosol optical depth (AOD) changes with wavelength -- a more steeply decreasing AOD with wavelength indicates smaller particles. The greater the magnitude of the Angstrom exponent, the greater the contribution of smaller particles to the overall particle distribution. For optically thick desert dust storms, as in this case, the Angstrom exponent is expected to be relatively low -- likely below 1. For the eastern seaboard haze, the Angstrom exponent is significantly higher, indicating the relative abundance of small pollution particles, especially over the Atlantic where the aerosol optical depth is also very high.
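
    The relation referred to above can be written as τ(λ) ∝ λ^(-α), so the Angstrom exponent α follows from AOD at two wavelengths; a per-pixel sketch with illustrative wavelengths and values:

```python
import numpy as np

def angstrom_exponent(aod_1, aod_2, wl_1_nm=446.0, wl_2_nm=672.0):
    """Angstrom exponent from AOD at two wavelengths:
    alpha = -ln(aod_1 / aod_2) / ln(wl_1 / wl_2).
    Larger alpha means AOD falls off faster with wavelength, i.e. a larger
    contribution from small (fine-mode) particles."""
    return -np.log(aod_1 / aod_2) / np.log(wl_1_nm / wl_2_nm)

# Illustrative numbers: coarse desert dust vs fine pollution haze.
print(angstrom_exponent(0.80, 0.75))   # ~0.16, dust-like (spectrally flat AOD)
print(angstrom_exponent(0.60, 0.30))   # ~1.69, fine-particle haze
```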

    With a nearly simultaneous data acquisition time, the MODIS instrument also collected data for these events, and image features for both the dust storm and the haze are available.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously, viewing the entire globe between 82° north and 82° south latitude every nine days. This image covers an area of about 1,265 kilometers by 400 kilometers. These data products were generated from a portion of the imagery acquired during Terra orbits 29809 and 29814 and utilize data from blocks 60 to 67 and 71 to 78 within World Reference System-2 paths 17 and 170, respectively.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Science Mission Directorate, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is managed for NASA by the California Institute of Technology.

  6. All-digital full waveform recording photon counting flash lidar

    NASA Astrophysics Data System (ADS)

    Grund, Christian J.; Harwit, Alex

    2010-08-01

    Current generation analog and photon counting flash lidar approaches suffer from limitations in waveform depth, dynamic range, sensitivity, false alarm rates, optical acceptance angle (f/#), optical and electronic cross talk, and pixel density. To address these issues Ball Aerospace is developing a new approach to flash lidar that employs direct coupling of a photocathode and microchannel plate front end to a high-speed, pipelined, all-digital Read Out Integrated Circuit (ROIC) to achieve photon-counting temporal waveform capture in each pixel on each laser return pulse. A unique characteristic is the absence of performance-limiting analog or mixed signal components. When implemented in 65 nm CMOS technology, the Ball Intensified Imaging Photon Counting (I2PC) flash lidar FPA technology can record up to 300 photon arrivals in each pixel with 100 ps resolution on each photon return, with up to 6000 range bins in each pixel. The architecture supports near 100% fill factor and fast optical system designs (f/# < 1), and array sizes to 3000×3000 pixels. Compared to existing technologies, >60 dB ultimate dynamic range improvement and >10⁴ reductions in false alarm rates are anticipated, while achieving single photon range precision better than 1 cm. I2PC significantly extends long-range and low-power hard target imaging capabilities useful for autonomous hazard avoidance (ALHAT), navigation, imaging vibrometry, and inspection applications, and enables scannerless 3D imaging for distributed target applications such as range-resolved atmospheric remote sensing, vegetation canopies, and camouflage penetration from terrestrial, airborne, GEO, and LEO platforms. We discuss the I2PC architecture, development status, anticipated performance advantages, and limitations.

  7. Fast, Deep-Record-Length, Fiber-Coupled Photodiode Imaging Array for Plasma Diagnostics

    NASA Astrophysics Data System (ADS)

    Brockington, Samuel; Case, Andrew; Witherspoon, F. Douglas

    2014-10-01

    HyperV Technologies has been developing an imaging diagnostic comprised of an array of fast, low-cost, long-record-length, fiber-optically-coupled photodiode channels to investigate plasma dynamics and other fast, bright events. By coupling an imaging fiber bundle to a bank of amplified photodiode channels, imagers and streak imagers of 100 to 1000 pixels can be constructed. By interfacing analog photodiode systems directly to commercial analog-to-digital converters and modern memory chips, a prototype 100 pixel array with an extremely deep record length (128 k points at 20 Msamples/s) and 10 bit pixel resolution has already been achieved. HyperV now seeks to extend these techniques to construct a prototype 1000 Pixel framing camera with up to 100 Msamples/sec rate and 10 to 12 bit depth. Preliminary experimental results as well as Phase 2 plans will be discussed. Work supported by USDOE Phase 2 SBIR Grant DE-SC0009492.

  8. Three-dimensional optical topography of brain activity in infants watching videos of human movement

    NASA Astrophysics Data System (ADS)

    Correia, Teresa; Lloyd-Fox, Sarah; Everdell, Nick; Blasi, Anna; Elwell, Clare; Hebden, Jeremy C.; Gibson, Adam

    2012-03-01

    We present 3D optical topography images reconstructed from data obtained previously while infants observed videos of adults making natural movements of their eyes and hands. The optical topography probe was placed over the temporal cortex, which in adults is responsible for cognitive processing of similar stimuli. Increases in oxyhaemoglobin were measured and reconstructed using a multispectral imaging algorithm with spatially variant regularization to optimize depth discrimination. The 3D optical topography images suggest that similar brain regions are activated in infants and adults. Images were presented showing the distribution of activation in a plane parallel to the surface, as well as changes in activation with depth. The time-course of activation was followed in the pixel which demonstrated the largest change, showing that changes could be measured with high temporal resolution. These results suggest that infants a few months old have regions which are specialized for reacting to human activity, and that these subtle changes can be effectively analysed using 3D optical topography.

  9. Depth-of-interaction estimates in pixelated scintillator sensors using Monte Carlo techniques

    NASA Astrophysics Data System (ADS)

    Sharma, Diksha; Sze, Christina; Bhandari, Harish; Nagarkar, Vivek; Badano, Aldo

    2017-01-01

    Image quality in thick scintillator detectors can be improved by minimizing parallax errors through depth-of-interaction (DOI) estimation. A novel sensor for low-energy single photon imaging having a thick, transparent, crystalline pixelated micro-columnar CsI:Tl scintillator structure has been described, with possible future application in small-animal single photon emission computed tomography (SPECT) imaging when using thicker structures under development. In order to understand the fundamental limits of this new structure, we introduce cartesianDETECT2, an open-source optical transport package that uses Monte Carlo methods to obtain estimates of DOI for improving spatial resolution of nuclear imaging applications. Optical photon paths are calculated as a function of varying simulation parameters such as columnar surface roughness, bulk, and top-surface absorption. We use scanning electron microscope images to estimate appropriate surface roughness coefficients. Simulation results are analyzed to model and establish patterns between DOI and photon scattering. The effect of varying starting locations of optical photons on the spatial response is studied. Bulk and top-surface absorption fractions were varied to investigate their effect on spatial response as a function of DOI. We investigated the accuracy of our DOI estimation model for a particular screen with various training and testing sets, and for all cases the percent error between the estimated and actual DOI over the majority of the detector thickness was within ±5%, with a maximum error of up to ±10% at deeper DOIs. In addition, we found that cartesianDETECT2 is computationally five times more efficient than MANTIS. Findings indicate that DOI estimates can be extracted from a double-Gaussian model of the detector response. We observed that our model predicts DOI in pixelated scintillator detectors reasonably well.

  10. Design of a k-space spectrometer for ultra-broad waveband spectral domain optical coherence tomography

    PubMed Central

    Lan, Gongpu; Li, Guoqiang

    2017-01-01

    Nonlinear sampling of the interferograms in wavenumber (k) space degrades the depth-dependent signal sensitivity in conventional spectral domain optical coherence tomography (SD-OCT). Here we report a linear-in-wavenumber (k-space) spectrometer for an ultra-broad bandwidth (760 nm–920 nm) SD-OCT, whereby a combination of a grating and a prism serves as the dispersion group. Quantitative ray tracing is applied to optimize the linearity and minimize the optical path differences for the dispersed wavenumbers. Zemax simulation is used to fit the point spread functions to the rectangular shape of the pixels of the line-scan camera and to improve the pixel sampling rates. An experimental SD-OCT is built to test and compare the performance of the k-space spectrometer with that of a conventional one. Design results demonstrate that this k-space spectrometer can reduce the nonlinearity error in k-space from 14.86% to 0.47% (by approximately 30 times) compared to the conventional spectrometer. The 95% confidence interval for RMS diameters is 5.48 ± 1.76 μm—significantly smaller than both the pixel size (14 μm × 28 μm) and the Airy disc (25.82 μm in diameter, calculated at the wavenumber of 7.548 μm−1). Test results demonstrate that the fall-off curve from the k-space spectrometer exhibits much less decay (maximum as −5.20 dB) than the conventional spectrometer (maximum as –16.84 dB) over the whole imaging depth (2.2 mm). PMID:28266502

  11. An alternative approach to depth of field which avoids the blur circle and uses the pixel pitch

    NASA Astrophysics Data System (ADS)

    Schuster, Norbert

    2015-09-01

    Modern thermal imaging systems increasingly use uncooled detectors. High-volume applications work with detectors that have a reduced pixel count (typically between 200x150 and 640x480). This limits the applicability of modern image treatment procedures such as wavefront coding. On the other hand, uncooled detectors demand lenses with fast F-numbers near 1.0. What are the limits on resolution if the target to be analyzed changes its distance to the camera system? The aim of implementing lens arrangements without any focusing mechanism demands a deeper quantification of the Depth of Field problem. The proposed Depth of Field approach avoids the classic "accepted image blur circle". It is based on a camera-specific depth of focus which is transformed into object space by paraxial relations. The traditional Rayleigh criterion is based on the unaberrated Point Spread Function and delivers a first-order relation for the depth of focus; hence, neither the actual lens resolution nor the detector impact is considered. The camera-specific depth of focus accounts for several camera properties: lens aberrations at the actual F-number, detector size, and pixel pitch. The through-focus MTF is the basis of the camera-specific depth of focus. It has a nearly symmetric course around the maximum of sharp imaging and is considered at the detector's Nyquist frequency. The camera-specific depth of focus is thus the axial distance in front of and behind the sharp image plane where the through-focus MTF is <0.25. This camera-specific depth of focus is transferred into object space by paraxial relations. A generally applicable Depth of Field diagram follows, which can be applied to lenses covering a lateral magnification range of -0.05…0. Easy-to-handle formulas relate the hyperfocal distance to the borders of the Depth of Field as a function of the sharp distance. These relations are in line with the classical Depth of Field theory. Thermal pictures, taken by different IR camera cores, illustrate the new approach. The frequently requested graph "MTF versus distance" uses half the Nyquist frequency as reference. The paraxial transfer of the through-focus MTF into object space distorts the MTF curve: a hard drop at distances closer than the sharp distance, and a smooth drop at further distances. The formula of a general Diffraction-Limited Through-Focus MTF (DLTF) is deduced, so that arbitrary detector-lens combinations can be discussed. Free variables in this analysis are the waveband, the aperture-based F-number (lens), and the pixel pitch (detector). The DLTF discussion provides physical limits and technical requirements. Detector development with pixel pitches smaller than the captured wavelength in the LWIR region poses a special challenge for optical design.
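
    For reference, the classical relations that the abstract says its results remain in line with, here with the acceptable blur tied to the detector pixel pitch (a simplifying assumption, not the paper's through-focus-MTF criterion):

```python
def depth_of_field(f_mm, f_number, pixel_pitch_um, subject_dist_mm,
                   blur_in_pixels=2.0):
    """Classical hyperfocal/DOF formulas with the circle of confusion taken
    as `blur_in_pixels` times the detector pixel pitch."""
    c_mm = blur_in_pixels * pixel_pitch_um * 1e-3            # circle of confusion
    hyperfocal = f_mm**2 / (f_number * c_mm) + f_mm          # H
    s = subject_dist_mm
    near = hyperfocal * s / (hyperfocal + (s - f_mm))
    far = float("inf") if s >= hyperfocal else hyperfocal * s / (hyperfocal - (s - f_mm))
    return hyperfocal, near, far

# Example: a 14 mm F/1.0 LWIR lens on a 12 um pitch detector, focused at 5 m.
H, near, far = depth_of_field(14.0, 1.0, 12.0, 5000.0)
far_str = "infinity" if far == float("inf") else f"{far / 1000:.1f} m"
print(f"hyperfocal {H / 1000:.1f} m, sharp from {near / 1000:.1f} m to {far_str}")
```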

  12. Reconciling Simulated and Observed Views of Clouds: MODIS, ISCCP, and the Limits of Instrument Simulators in Climate Models

    NASA Technical Reports Server (NTRS)

    Pincus, Robert; Platnick, Steven E.; Ackerman, Steve; Hemler, Richard; Hofmann, Patrick

    2011-01-01

    The properties of clouds that may be observed by satellite instruments, such as optical depth and cloud top pressure, are only loosely related to the way clouds are represented in models of the atmosphere. One way to bridge this gap is through "instrument simulators," diagnostic tools that map the model representation to synthetic observations so that differences between simulator output and observations can be interpreted unambiguously as model error. But simulators may themselves be restricted by limited information available from the host model or by internal assumptions. This work examines the extent to which instrument simulators are able to capture essential differences between MODIS and ISCCP, two similar but independent estimates of cloud properties. We focus on the stark differences between MODIS and ISCCP observations of total cloudiness and of the distribution of cloud optical thickness, which can be traced to differing treatments of marginal pixels: MODIS excludes them, while ISCCP treats them as homogeneous. These pixels, which likely contain broken clouds, cover about 15% of the planet and contain almost all of the optically thinnest clouds observed by either instrument. Instrument simulators cannot reproduce these differences because the host model does not consider unresolved spatial scales and so cannot produce broken pixels. Nonetheless, MODIS and ISCCP observations are consistent for all but the optically thinnest clouds, and models can be robustly evaluated using instrument simulators by excluding ambiguous observations.

  13. Interferometric SAR for characterization of ravines as a function of their density, depth, and surface cover

    NASA Astrophysics Data System (ADS)

    Chatterjee, R. S.; Saha, S. K.; Suresh Kumar; Sharika Mathew; Lakhera, R. C.; Dadhwal, V. K.

    In recent years, the problem of ravine erosion with consequent loss of usable land has received much attention worldwide. The Chambal ravine zone in India is well known for being an extremely intricate, deeply incised network of ravines in a 10 km wide zone on the flanks of the Chambal River. It occupies an area of ˜0.5 million hectares at the expense of fertile agricultural land of the Chambal Valley. The broad grouping of the ravines considering their reclamation potential, as carried out by previous workers based on visual interpretation of optical remote sensing data, is mostly descriptive in nature. In the present study, characterization of the ravines as a function of their erosion potential expressed through ravine density, ravine depth, and ravine surface cover was made in quantitative terms exploiting the preferential characteristics of side-looking, long-wavelength, coherent SAR signal and precision measurements associated with the InSAR technique. The outlines of ravines appear remarkably prominent in SAR backscattered amplitude images due to the high sensitivity of the SAR signal to terrain ruggedness. Using local statistics-based meso and macro textural information of SAR backscattered amplitude images in 7×7 pixel windows (the pixel size being 20 m×20 m), the ravine-affected area has been classified into three density classes, namely low, moderate, and high density ravine classes. C-band InSAR digital elevation models (DEMs) of sparsely vegetated ravine areas essentially give the terrain height. From the pixel-by-pixel terrain height, the ravine depth was calculated by differencing the maximum and minimum terrain heights of the pixels in a 100 m distance range. Considering the vertical precision of the ERS InSAR DEMs of ˜5 m and ravine depth classification by previous workers [Sharma, H.S., 1968. Genesis and pattern of ravines of the Lower Chambal Valley, India. Special Issue. 21st International Geographical Union Congress 30(4), 14-24; Seth, S.P., Bhatnagar, R.K., Chauhan, S.S., 1969. Reclamability classification and nature of ravines of Chambal Command Areas. Journal of Soil and Water Conservation in India 17 (3-4), 39-44.], three depth classes, namely shallow (<5 m), moderately deep (5-20 m), and deep (>20 m) ravines, were made. Using the temporal decorrelation property of the close time interval InSAR data pair, namely the ERS SAR tandem pair, four ravine surface cover classes, namely barren land, grass/scrub/crop land, sparse vegetation, and wet land/dense vegetation, could be delineated, which was corroborated by the spectral signatures in the optical range and selective ground truths.
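
    A sketch of the depth classification step described above: local relief is taken as the maximum minus minimum InSAR DEM height within a roughly 100 m neighbourhood (5 pixels at 20 m spacing) and split into the three stated depth classes; the exact windowing used in the study is simplified here.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def classify_ravine_depth(dem, pixel_size_m=20.0, window_m=100.0):
    """Classify ravine depth from an InSAR DEM.

    Relief = (max - min) terrain height within a square window, mapped to
    the three classes used in the study: shallow (<5 m),
    moderately deep (5-20 m), deep (>20 m).
    """
    size = max(int(round(window_m / pixel_size_m)), 1)
    relief = maximum_filter(dem, size=size) - minimum_filter(dem, size=size)
    classes = np.full(dem.shape, 1, dtype=np.uint8)   # 1 = shallow
    classes[relief >= 5.0] = 2                        # 2 = moderately deep
    classes[relief > 20.0] = 3                        # 3 = deep
    return relief, classes
```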

  14. Three-dimensional cross point readout detector design for including depth information

    NASA Astrophysics Data System (ADS)

    Lee, Seung-Jae; Baek, Cheol-Ha

    2018-04-01

    We designed a depth-encoding positron emission tomography (PET) detector using a cross point readout method with wavelength-shifting (WLS) fibers. To evaluate the characteristics of the novel detector module and the PET system, we used DETECT2000 to simulate optical photon transport in the crystal array; GATE was also used. The detector module is made up of four layers of scintillator arrays, five layers of WLS fiber arrays, and two sensor arrays. The WLS fiber arrays in each layer cross each other to transport light to each sensor array. The two sensor arrays are coupled to the forward and left sides of the WLS fiber array, respectively. The identification of three-dimensional pixels was determined using a digital positioning algorithm. All pixels were well decoded, with the system resolution ranging from 2.11 mm to 2.29 mm at full width at half maximum (FWHM).

  15. Minimizing systematic errors from atmospheric multiple scattering and satellite viewing geometry in coastal zone color scanner level IIA imagery

    NASA Technical Reports Server (NTRS)

    Martin, D. L.; Perry, M. J.

    1994-01-01

    Water-leaving radiances and phytoplankton pigment concentrations are calculated from coastal zone color scanner (CZCS) radiance measurements by removing atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. The single greatest source of error in CZCS atmospheric correction algorithms is the assumption that these Rayleigh and aerosol radiances are separable. Multiple-scattering interactions between Rayleigh and aerosol components cause systematic errors in calculated aerosol radiances, and the magnitude of these errors is dependent on aerosol type and optical depth and on satellite viewing geometry. A technique was developed which extends the results of previous radiative transfer modeling by Gordon and Castano to predict the magnitude of these systematic errors for simulated CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere. The simulated image mathematically duplicates the exact satellite, Sun, and pixel locations of an actual CZCS image. Errors in the aerosol radiance at 443 nm are calculated for a range of aerosol optical depths. When pixels in the simulated image exceed an error threshold, the corresponding pixels in the actual CZCS image are flagged and excluded from further analysis or from use in image compositing or compilation of pigment concentration databases. Studies based on time series analyses or compositing of CZCS imagery which do not address Rayleigh-aerosol multiple scattering should be interpreted cautiously, since the fundamental assumption used in their atmospheric correction algorithm is flawed.

  16. Differential standard deviation of log-scale intensity based optical coherence tomography angiography.

    PubMed

    Shi, Weisong; Gao, Wanrong; Chen, Chaoliang; Yang, Victor X D

    2017-12-01

    In this paper, a differential standard deviation of log-scale intensity (DSDLI) based optical coherence tomography angiography (OCTA) is presented for calculating microvascular images of human skin. The DSDLI algorithm calculates the variance in difference images of two consecutive log-scale intensity based structural images from the same position along the depth direction to contrast blood flow. The en face microvascular images were then generated by calculating the standard deviation of the differential log-scale intensities within the specific depth range, resulting in an improvement in spatial resolution and SNR in microvascular images compared to speckle variance OCT and the power intensity differential method. The performance of DSDLI was validated by both phantom and in vivo experiments. In the in vivo experiments, a self-adaptive sub-pixel image registration algorithm was performed to remove the bulk motion noise, where a 2D Fourier transform was utilized to generate new images with a spatial interval equal to half of the distance between two pixels in both the fast-scanning and depth directions. The SNRs of signals of flowing particles are improved by 7.3 dB and 6.8 dB on average in phantom and in vivo experiments, respectively, while the average spatial resolution of images of in vivo blood vessels is increased by 21%. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
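
    As a rough illustration of the DSDLI idea described above, the snippet below differences consecutive log-intensity B-scans acquired at the same slow-scan position and takes the standard deviation within a chosen depth range. It is a simplified sketch under assumed array shapes, not the authors' code, and it omits the sub-pixel registration step.

```python
import numpy as np

def dsdli_enface_line(bscans_log, z_range):
    """One en face line of a DSDLI-style angiogram (illustrative assumptions).

    bscans_log : array (n_repeats, n_z, n_x) of log-scale intensity B-scans
                 taken repeatedly at the same slow-scan position.
    z_range    : (z0, z1) depth-pixel range used to contrast flow.
    """
    diffs = np.diff(bscans_log, axis=0)         # differences of consecutive frames
    z0, z1 = z_range
    # Standard deviation of the differential log intensities within the depth range
    return diffs[:, z0:z1, :].std(axis=(0, 1))  # shape (n_x,)
```

    Repeating this for every slow-scan position and stacking the resulting lines would give the full en face microvascular image.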

  17. What is the Uncertainty in MODIS Aerosol Optical Depth in the Vicinity of Clouds?

    NASA Technical Reports Server (NTRS)

    Patadia, Falguni; Levy, Rob; Mattoo, Shana

    2017-01-01

    The MODIS dark-target (DT) algorithm retrieves aerosol optical depth (AOD) using a Look Up Table (LUT) approach. Global comparison of AOD (Collection 6) with ground-based sun photometers gives an Estimated Error (EE) of +/-(0.04 + 10%) over ocean. However, EE does not represent per-retrieval uncertainty. For retrievals that are biased high compared to AERONET, here we aim to closely examine the contribution of biases due to the presence of clouds and the per-pixel retrieval uncertainty. We have characterized AOD uncertainty at 550 nm due to the standard deviation of reflectance in the 10 km retrieval region, uncertainty related to gas (H2O, O3) absorption, surface albedo, and aerosol models. The uncertainty in retrieved AOD seems to lie within the estimated over-ocean error envelope of +/-(0.03+10%). Regions between broken clouds tend to have higher uncertainty. Compared to C6 AOD, a retrieval omitting observations in the vicinity of clouds (< or = 1 km) is biased by about +/- 0.05. For homogeneous aerosol distributions, clear sky retrievals show near zero bias. A close look at per-pixel reflectance histograms suggests the possibility of retrievals using median reflectance values.

  18. Estimation of photosynthetically available radiation (PAR) from OCEANSAT-I OCM using a simple atmospheric radiative transfer model

    NASA Astrophysics Data System (ADS)

    Tripathy, Madhumita; Raman, Mini; Chauhan, Prakash

    2015-10-01

    Photosynthetically available radiation (PAR) is an important variable for radiation budget, marine, and terrestrial ecosystem models. OCEANSAT-1 Ocean Color Monitor (OCM) PAR was estimated using two different methods under both clear and cloudy sky conditions. In the first approach, aerosol optical depth (AOD) and cloud optical depth (COD) were estimated from OCEANSAT-1 OCM TOA (top-of-atmosphere) radiance data on a pixel by pixel basis, and PAR was estimated from the extraterrestrial solar flux for fifteen spectral bands using a radiative transfer model. The second approach used TOA radiances measured by OCM in the PAR spectral range to compute PAR; this approach also included surface albedo and cloud albedo as inputs. Comparison of OCEANSAT-1 OCM PAR at noon with in situ measured PAR shows that the root mean square difference was 5.82% for method I and 7.24% for method II on daily time scales. Results indicate that the methodology adopted to estimate PAR from OCEANSAT-1 OCM can produce reasonably accurate PAR estimates over the tropical Indian Ocean region. This approach can be extended to OCEANSAT-2 OCM and future OCEANSAT-3 OCM data for operational estimation of PAR for regional marine ecosystem applications.
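
    For the first approach, the essential computation is a band-wise attenuation of the extraterrestrial flux followed by integration over the PAR range; the sketch below shows that bookkeeping only, with per-band transmittances assumed to come from the retrieved AOD/COD and a radiative transfer model. The units, band handling, and names are illustrative assumptions, not the operational algorithm.

```python
import numpy as np

def par_from_band_transmittance(e0, trans, wavelengths_nm, sza_deg):
    """Integrate transmitted extraterrestrial flux over 400-700 nm.

    e0             : extraterrestrial band irradiances (W m-2 nm-1)
    trans          : per-band atmospheric transmittances (from AOD/COD)
    wavelengths_nm : band-center wavelengths (nm)
    sza_deg        : solar zenith angle (degrees)
    """
    mu0 = np.cos(np.radians(sza_deg))
    in_par = (wavelengths_nm >= 400) & (wavelengths_nm <= 700)
    band_width = np.gradient(wavelengths_nm)
    return mu0 * np.sum(e0[in_par] * trans[in_par] * band_width[in_par])
```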

  19. A Novel Method for Estimating Shortwave Direct Radiative Effect of Above-cloud Aerosols over Ocean Using CALIOP and MODIS Data

    NASA Technical Reports Server (NTRS)

    Zhang, Z.; Meyer, K.; Platnick, S.; Oreopoulos, L.; Lee, D.; Yu, H.

    2013-01-01

    This paper describes an efficient and unique method for computing the shortwave direct radiative effect (DRE) of aerosol residing above low-level liquid-phase clouds using CALIOP and MODIS data. It accounts for the overlapping of aerosol and cloud rigorously by utilizing the joint histogram of cloud optical depth and cloud top pressure. Effects of sub-grid scale cloud and aerosol variations on DRE are accounted for. It is computationally efficient through using grid-level cloud and aerosol statistics, instead of pixel-level products, and a pre-computed look-up table in radiative transfer calculations. We verified that for smoke over the southeast Atlantic Ocean the method yields a seasonal mean instantaneous shortwave DRE that generally agrees with more rigorous pixel-level computation within 4%. We have also computed the annual mean instantaneous shortwave DRE of light-absorbing aerosols (i.e., smoke and polluted dust) over global ocean based on 4 yr of CALIOP and MODIS data. We found that the variability of the annual mean shortwave DRE of above-cloud light-absorbing aerosol is mainly driven by the optical depth of the underlying clouds.

  20. A Novel Method for Estimating Shortwave Direct Radiative Effect of Above-Cloud Aerosols Using CALIOP and MODIS Data

    NASA Technical Reports Server (NTRS)

    Zhang, Z.; Meyer, K.; Platnick, S.; Oreopoulos, L.; Lee, D.; Yu, H.

    2014-01-01

    This paper describes an efficient and unique method for computing the shortwave direct radiative effect (DRE) of aerosol residing above low-level liquid-phase clouds using CALIOP and MODIS data. It accounts for the overlapping of aerosol and cloud rigorously by utilizing the joint histogram of cloud optical depth and cloud top pressure. Effects of sub-grid scale cloud and aerosol variations on DRE are accounted for. It is computationally efficient through using grid-level cloud and aerosol statistics, instead of pixel-level products, and a pre-computed look-up table in radiative transfer calculations. We verified that for smoke over the southeast Atlantic Ocean the method yields a seasonal mean instantaneous shortwave DRE that generally agrees with more rigorous pixel-level computation within 4%. We have also computed the annual mean instantaneous shortwave DRE of light-absorbing aerosols (i.e., smoke and polluted dust) over global ocean based on 4 yr of CALIOP and MODIS data. We found that the variability of the annual mean shortwave DRE of above-cloud light-absorbing aerosol is mainly driven by the optical depth of the underlying clouds.

  1. Imaging natural materials with a quasi-microscope. [spectrophotometry of granular materials

    NASA Technical Reports Server (NTRS)

    Bragg, S.; Arvidson, R.

    1977-01-01

    A Viking lander camera with auxiliary optics mounted inside the dust post was evaluated to determine its capability for imaging the inorganic properties of granular materials. During mission operations, prepared samples would be delivered to a plate positioned within the camera's field of view and depth of focus. The auxiliary optics would then allow soil samples to be imaged with an 11 µm pixel size in the broad band (high resolution, black and white) mode, and a 33 µm pixel size in the multispectral mode. The equipment will be used to characterize: (1) the size distribution of grains produced by igneous (intrusive and extrusive) processes or by shock metamorphism; (2) the size distribution resulting from crushing, chemical alteration, or hydraulic or aerodynamic sorting; (3) the shape and degree of grain roundness and surface texture induced by mechanical and chemical alteration; and (4) the mineralogy and chemistry of grains.

  2. Adaptive Optics Images of the Galactic Center: Using Empirical Noise-maps to Optimize Image Analysis

    NASA Astrophysics Data System (ADS)

    Albers, Saundra; Witzel, Gunther; Meyer, Leo; Sitarski, Breann; Boehle, Anna; Ghez, Andrea M.

    2015-01-01

    Adaptive Optics images are one of the most important tools in studying our Galactic Center. In-depth knowledge of the noise characteristics is crucial to optimally analyze this data. Empirical noise estimates - often represented by a constant value for the entire image - can be greatly improved by computing the local detector properties and photon noise contributions pixel by pixel. To comprehensively determine the noise, we create a noise model for each image using the three main contributors—photon noise of stellar sources, sky noise, and dark noise. We propagate the uncertainties through all reduction steps and analyze the resulting map using Starfinder. The estimation of local noise properties helps to eliminate fake detections while improving the detection limit of fainter sources. We predict that a rigorous understanding of noise allows a more robust investigation of the stellar dynamics in the center of our Galaxy.
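
    A per-pixel noise model of the kind described above can be assembled from the three named contributors; the sketch below adds Poisson photon and sky noise to Gaussian dark (and optionally read) noise in quadrature. It is a minimal illustration with hypothetical names, not the group's pipeline, which additionally propagates these terms through every reduction step.

```python
import numpy as np

def noise_map(image_adu, sky_adu, gain_e_per_adu, dark_rms_e, read_rms_e=0.0):
    """Per-pixel noise estimate in ADU from stellar photon, sky, and dark noise."""
    signal_e = np.clip(image_adu, 0, None) * gain_e_per_adu   # stellar photons (e-)
    sky_e = np.clip(sky_adu, 0, None) * gain_e_per_adu        # sky background (e-)
    variance_e = signal_e + sky_e + dark_rms_e**2 + read_rms_e**2
    return np.sqrt(variance_e) / gain_e_per_adu
```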

  3. Depth extraction method with high accuracy in integral imaging based on moving array lenslet technique

    NASA Astrophysics Data System (ADS)

    Wang, Yao-yao; Zhang, Juan; Zhao, Xue-wei; Song, Li-pei; Zhang, Bo; Zhao, Xing

    2018-03-01

    In order to improve depth extraction accuracy, a method using the moving array lenslet technique (MALT) in the pickup stage is proposed, which can decrease the depth interval caused by pixelation. In this method, the lenslet array is moved along the horizontal and vertical directions simultaneously N times within a pitch to get N sets of elemental images. A computational integral imaging reconstruction method for MALT is used to obtain the slice images of the 3D scene, and the sum modulus (SMD) blur metric is applied to these slice images to obtain the depth information of the 3D scene. Simulation and optical experiments are carried out to verify the feasibility of this method.

  4. Retrieval of Cloud Properties for Partially Cloud-Filled Pixels During CRYSTAL-FACE

    NASA Astrophysics Data System (ADS)

    Nguyen, L.; Minnis, P.; Smith, W. L.; Khaiyer, M. M.; Heck, P. W.; Sun-Mack, S.; Uttal, T.; Comstock, J.

    2003-12-01

    Partially cloud-filled pixels can be a significant problem for remote sensing of cloud properties. Generally, the optical depth and effective particle sizes are often too small or too large, respectively, when derived from radiances that are assumed to be overcast but contain radiation from both clear and cloudy areas within the satellite imager field of view. This study presents a method for reducing the impact of such partially cloud-filled pixels by estimating the cloud fraction within each pixel using higher resolution visible (VIS, 0.65 µm) imager data. Although the nominal resolutions for most channels on the Geostationary Operational Environmental Satellite (GOES) imager and the Moderate Resolution Imaging Spectroradiometer (MODIS) on Terra are 4 and 1 km, respectively, both instruments also take VIS channel data at 1 km and 0.25 km, respectively. Thus, it may be possible to obtain an improved estimate of cloud fraction within the lower resolution pixels by using the information contained in the higher resolution VIS data. GOES and MODIS multi-spectral data, taken during the Cirrus Regional Study of Tropical Anvils and Cirrus Layers - Florida Area Cirrus Experiment (CRYSTAL-FACE), are analyzed with the algorithm used for the Atmospheric Radiation Measurement Program (ARM) and the Clouds and Earth's Radiant Energy System (CERES) to derive cloud amount, temperature, height, phase, effective particle size, optical depth, and water path. Normally, the algorithm assumes that each pixel is either entirely clear or cloudy. In this study, a threshold method is applied to the higher resolution VIS data to estimate the partial cloud fraction within each low-resolution pixel. The cloud properties are then derived from the observed low-resolution radiances using the cloud cover estimate to properly extract the radiances due only to the cloudy part of the scene. This approach is applied to both GOES and MODIS data to estimate the improvement in the retrievals for each resolution. Results are compared with the radar reflectivity techniques employed by the NOAA ETL MMCR and the PARSL 94 GHz radars located at the CRYSTAL-FACE Eastern and Western Ground Sites, respectively. This technique is most likely to yield improvements for low and midlevel layer clouds that have little thermal variability in cloud height.
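
    The partial-cloud correction described above can be summarized in two steps: estimate the sub-pixel cloud fraction from the high-resolution VIS samples, then remove the clear-sky contribution from the observed low-resolution radiance. The sketch below is a schematic of that idea with hypothetical names and a hypothetical reflectance threshold, not the ARM/CERES algorithm itself.

```python
import numpy as np

def cloudy_radiance(r_obs, vis_highres, vis_threshold, r_clear):
    """Return the radiance of the cloudy part of a pixel and its cloud fraction.

    r_obs        : observed low-resolution radiance of the pixel
    vis_highres  : high-resolution VIS reflectances falling inside that pixel
    vis_threshold: reflectance above which a high-res sample counts as cloudy
    r_clear      : clear-sky radiance estimate for the same scene
    """
    f = float(np.mean(vis_highres > vis_threshold))   # sub-pixel cloud fraction
    if f == 0.0:
        return None, 0.0                              # pixel treated as clear
    r_cloud = (r_obs - (1.0 - f) * r_clear) / f       # radiance of the cloudy part only
    return r_cloud, f
```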

  5. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, estimated from the light-field image, and the metric object distance. These two methods are compared to a well-known curve fitting approach, and both model-based methods show significant advantages compared to the curve fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally in focus, which facilitates finding stereo correspondences. In contrast to monocular visual odometry approaches, the scale of the scene can be observed thanks to the calibration of the individual depth maps. Furthermore, the light-field information can be expected to give better tracking capabilities than the monocular case. As a result, the depth information gained by the plenoptic camera based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
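
    The Kalman-like depth update amounts to an inverse-variance fusion of the running estimate with each new micro-image observation. The sketch below shows that update in its simplest scalar form; it is an illustration of the principle under stated assumptions, not the paper's exact filter.

```python
def fuse_depth(d, var, d_obs, var_obs):
    """Fuse a running virtual-depth estimate with one new observation."""
    k = var / (var + var_obs)      # gain: weight the less uncertain estimate more
    d_new = d + k * (d_obs - d)    # updated depth
    var_new = (1.0 - k) * var      # updated (reduced) variance
    return d_new, var_new

# Each additional micro-image observation refines the estimate:
d, v = 2.40, 0.30
for d_obs, v_obs in [(2.60, 0.25), (2.50, 0.40), (2.45, 0.20)]:
    d, v = fuse_depth(d, v, d_obs, v_obs)
```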

  6. Fast Fiber-Coupled Imaging Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brockington, Samuel; Case, Andrew; Witherspoon, Franklin Douglas

    HyperV Technologies Corp. has successfully designed, built and experimentally demonstrated a full scale 1024 pixel 100 MegaFrames/s fiber coupled camera with 12 or 14 bits, and record lengths of 32K frames, exceeding our original performance objectives. This high-pixel-count, fiber optically-coupled, imaging diagnostic can be used for investigating fast, bright plasma events. In Phase 1 of this effort, a 100 pixel fiber-coupled fast streak camera for imaging plasma jet profiles was constructed and successfully demonstrated. The resulting response from outside plasma physics researchers emphasized development of increased pixel performance as a higher priority over increasing pixel count. In this Phase 2 effort, HyperV therefore focused on increasing the sample rate and bit-depth of the photodiode pixel designed in Phase 1, while still maintaining a long record length and holding the cost per channel to levels which allowed up to 1024 pixels to be constructed. Cost per channel was 53.31 dollars, very close to our original target of $50 per channel. The system consists of an imaging "camera head" coupled to a photodiode bank with an array of optical fibers. The output of these fast photodiodes is then digitized at 100 Megaframes per second and stored in record lengths of 32,768 samples with bit depths of 12 to 14 bits per pixel. Longer record lengths are possible with additional memory. A prototype imaging system with up to 1024 pixels was designed and constructed and used to successfully take movies of very fast moving plasma jets as a demonstration of the camera performance capabilities. Some faulty electrical components on the 64 circuit boards resulted in only 1008 functional channels out of 1024 on this first generation prototype system. We experimentally observed backlit high speed fan blades in initial camera testing and then followed that with full movies and streak images of free flowing high speed plasma jets (at 30-50 km/s). Jet structure and jet collisions onto metal pillars in the path of the plasma jets were recorded in a single shot. This new fast imaging system is an attractive alternative to conventional fast framing cameras for applications and experiments where imaging events using existing techniques are inefficient or impossible. The development of HyperV's new diagnostic was split into two tracks: a next generation camera track, in which HyperV built, tested, and demonstrated a prototype 1024 channel camera at its own facility, and a second plasma community beta test track, where selected plasma physics programs received small systems of a few test pixels to evaluate the expected performance of a full scale camera on their experiments. These evaluations were performed as part of an unfunded collaboration with researchers at Los Alamos National Laboratory and the University of California at Davis. Results from the prototype 1024-pixel camera are discussed, as well as results from the collaborations with test pixel system deployment sites.

  7. 3D reconstruction of the optic nerve head using stereo fundus images for computer-aided diagnosis of glaucoma

    NASA Astrophysics Data System (ADS)

    Tang, Li; Kwon, Young H.; Alward, Wallace L. M.; Greenlee, Emily C.; Lee, Kyungmoo; Garvin, Mona K.; Abràmoff, Michael D.

    2010-03-01

    The shape of the optic nerve head (ONH) is reconstructed automatically from stereo fundus color images by a robust stereo matching algorithm, which is needed for a quantitative estimate of the amount of nerve fiber loss in patients with glaucoma. Compared to natural scene stereo, fundus images are noisy because of the limits on illumination conditions and imperfections of the optics of the eye, posing challenges to conventional stereo matching approaches. In this paper, multi-scale pixel feature vectors that are robust to noise are formulated using a combination of both pixel intensity and gradient features in scale space. Feature vectors associated with potential correspondences are compared with a disparity-based matching score. The deep structures of the optic disc are reconstructed with a stack of disparity estimates in scale space. Optical coherence tomography (OCT) data was collected at the same time, and depth information from 3D segmentation was registered with the stereo fundus images to provide the ground truth for performance evaluation. In experiments, the proposed algorithm produces estimates for the shape of the ONH that are close to the OCT-based shape, and it shows great potential to help computer-aided diagnosis of glaucoma and other related retinal diseases.
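
    As a loose illustration of the matching strategy sketched above, the code below builds per-pixel feature vectors from intensity and gradients at several scales and picks the disparity that minimizes the feature-vector distance along a scanline. It is a simplified stand-in with assumed scales and a plain Euclidean score, not the authors' formulation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def scale_space_features(img, sigmas=(1.0, 2.0, 4.0)):
    """Stack smoothed intensity and gradient images across scales, per pixel."""
    feats = []
    for s in sigmas:
        smooth = gaussian_filter(img.astype(float), s)
        feats += [smooth, sobel(smooth, axis=0), sobel(smooth, axis=1)]
    return np.stack(feats, axis=-1)              # shape (H, W, 3 * len(sigmas))

def best_disparity(feat_left, feat_right, row, col, max_disp):
    """Disparity minimizing the feature distance along the same scanline."""
    costs = [np.linalg.norm(feat_left[row, col] - feat_right[row, col - d])
             for d in range(min(max_disp, col) + 1)]
    return int(np.argmin(costs))
```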

  8. Ultrafast Phase Mapping of Thin-Sections from An Apollo 16 Drive Tube - a New Visualisation of Lunar Regolith

    NASA Technical Reports Server (NTRS)

    Botha, Pieter; Butcher, Alan R.; Horsch, Hana; Rickman, Doug; Wentworth, Susan J.; Schrader, Christian M.; Stoeser, Doug; Benedictus, Aukje; Gottlieb, Paul; McKay, David

    2008-01-01

    Polished thin-sections of samples extracted from Apollo drive tubes provide unique insights into the structure of the Moon's regolith at various landing sites. In particular, they allow the mineralogy and texture of the regolith to be studied as a function of depth. Much has been written about such thin-sections based on optical, SEM and EPMA studies, in terms of their essential petrographic features, but there has been little attempt to quantify these aspects from a spatial perspective. In this study, we report the findings of experimental analysis of two thin-sections (64002, 6019, depth range 5.0 - 8.0 cm & 64001, 6031, depth range 50.0 - 53.1 cm), from a single Apollo 16 drive tube using QEMSCAN. A key feature of the method is phase identification by ultrafast energy dispersive x-ray mapping on a pixel-by-pixel basis. By selecting pixel resolutions ranging from 1 - 5 microns, typically 8,500,000 individual measurement points can be collected on a thin-section. The results we present include false colour digital images of both thin-sections. From these images, information such as phase proportions (major, minor and trace phases), particle textures, packing densities, and particle geometries, has been quantified. Parameters such as porosity and average phase density, which are of geomechanical interest, can also be calculated automatically. This study is part of an on-going investigation into spatial variation of lunar regolith and NASA's ISRU Lunar Simulant Development Project.

  9. Large Area Cd0.9Zn0.1Te Pixelated Detector: Fabrication and Characterization

    NASA Astrophysics Data System (ADS)

    Chaudhuri, Sandeep K.; Nguyen, Khai; Pak, Rahmi O.; Matei, Liviu; Buliga, Vladimir; Groza, Michael; Burger, Arnold; Mandal, Krishna C.

    2014-04-01

    Cd0.9Zn0.1Te (CZT) based pixelated radiation detectors have been fabricated and characterized for gamma ray detection. Large-area CZT single crystals have been grown using a tellurium solvent method. A 10 × 10 guarded pixelated detector has been fabricated on a 19.5 × 19.5 × 5 mm³ crystal cut out from the grown ingot. The pixel dimensions were 1.3 × 1.3 mm², pitched at 1.8 mm. A guard grid was used to reduce interpixel/inter-electrode leakage. The crystal was characterized in planar configuration using electrical, optical, and optoelectronic methods prior to the fabrication of the pixelated geometry. Current-voltage (I-V) measurements revealed a leakage current of 27 nA at an operating bias voltage of 1000 V and a resistivity of 3.1 × 10¹⁰ Ω·cm. Infrared transmission imaging revealed an average tellurium inclusion/precipitate size of less than 8 μm. Pockels measurements revealed a near-uniform depth-wise distribution of the internal electric field. The mobility-lifetime product in this crystal was calculated to be 6.2 × 10⁻³ cm²/V using the alpha-ray spectroscopic method. Gamma spectroscopy using a 137Cs source on the pixelated structure showed fully resolved 662 keV gamma peaks for all the pixels, with percentage resolution (FWHM) as high as 1.8%.

  10. Depth compensating calculation method of computer-generated holograms using symmetry and similarity of zone plates

    NASA Astrophysics Data System (ADS)

    Wei, Hui; Gong, Guanghong; Li, Ni

    2017-10-01

    The computer-generated hologram (CGH) is a promising 3D display technology, but it is challenged by a heavy computation load and vast memory requirements. To solve these problems, a depth compensating CGH calculation method based on the symmetry and similarity of zone plates is proposed and implemented on a graphics processing unit (GPU). An improved LUT method is put forward to compute the distances between object points and hologram pixels in the XY direction. The concept of a depth compensating factor is defined and used to calculate the holograms of points at different depth positions, instead of using layer-based methods. The proposed method is suitable for arbitrarily sampled objects with lower memory usage and higher computational efficiency compared to other CGH methods. The effectiveness of the proposed method is validated by numerical and optical experiments.
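
    For orientation, the brute-force point-source accumulation that LUT, symmetry, and depth-compensation schemes such as this one accelerate looks roughly as follows. The parameter names and the phase-only output are illustrative assumptions, not the paper's GPU implementation.

```python
import numpy as np

def point_source_cgh(points, amplitudes, holo_shape, pitch, wavelength):
    """Accumulate spherical-wave contributions of object points on the hologram."""
    k = 2 * np.pi / wavelength
    ny, nx = holo_shape
    ys = (np.arange(ny) - ny / 2) * pitch
    xs = (np.arange(nx) - nx / 2) * pitch
    X, Y = np.meshgrid(xs, ys)
    field = np.zeros(holo_shape, dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)  # distance to each pixel
        field += a * np.exp(1j * k * r) / r
    return np.angle(field)                                     # phase-only hologram
```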

  11. Flagging optically shallow pixels for improved analysis of ocean color data

    NASA Astrophysics Data System (ADS)

    McKinna, L. I. W.; Werdell, J.; Knowles, D., Jr.

    2016-02-01

    Ocean color remote sensing is routinely used to derive marine geophysical parameters from sensor-observed water-leaving radiances. However, in clear, geometrically shallow regions, traditional ocean color algorithms can be confounded by light reflected from the seafloor. Such regions are typically referred to as "optically shallow". When performing spatiotemporal analyses of ocean color datasets, optically shallow features such as coral reefs can lead to unexpected regional biases. Benthic contamination of the water-leaving radiance depends on bathymetry, water clarity, and seafloor albedo. Thus, a prototype ocean color processing flag called OPTSHAL has been developed that takes all three variables into account. In the method described here, the optical depth of the water column at 547 nm, ζ(547), is predicted from known bathymetry and estimated inherent optical properties. If ζ(547) is less than the pre-defined threshold, a pixel is flagged as optically shallow. Radiative transfer modeling was used to identify the appropriate threshold value of ζ(547) for a generic benthic sand albedo. OPTSHAL has been evaluated within the NASA Ocean Biology Processing Group's L2GEN code. Using MODIS Aqua imagery, OPTSHAL was tested in two regions: (i) the Pedro Bank south-west of Jamaica, and (ii) the Great Barrier Reef, Australia. It is anticipated that OPTSHAL will benefit end-users when quality controlling derived ocean color products. Further, OPTSHAL may prove useful as a mechanism for switching between optically deep and shallow algorithms during ocean color processing.
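
    The flagging rule itself is a one-liner once the water-column optical depth has been predicted. The sketch below approximates ζ(547) as (a + b_b) times the geometric depth from the estimated inherent optical properties; the operational L2GEN implementation and its threshold value may differ, so treat the names and the formula as assumptions.

```python
def optically_shallow(a_547, bb_547, depth_m, zeta_threshold):
    """Flag a pixel as optically shallow when the water-column optical depth
    at 547 nm falls below the threshold."""
    zeta_547 = (a_547 + bb_547) * depth_m   # crude column optical depth from IOPs
    return zeta_547 < zeta_threshold
```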

  12. In vitro imaging of ophthalmic tissue by digital interference holography

    NASA Astrophysics Data System (ADS)

    Potcoava, Mariana C.; Kay, Christine N.; Kim, Myung K.; Richards, David W.

    2010-01-01

    We used digital interference holography (DIH) for in vitro imaging of the human optic nerve head and retina. Samples of peripheral retina, macula, and optic nerve head from two formaldehyde-preserved human eyes were dissected and mounted onto slides. Holograms were captured by a monochrome CCD camera (Sony XC-ST50, with 780 × 640 pixels and a pixel size of ∼9 µm). The light source was a solid-state pumped dye laser with a tunable wavelength range of 560-605 nm. Using about 50 wavelengths in this band, holograms were obtained and numerically reconstructed using custom software based on NI LabView. Tomographic images were produced by superposition of holograms. Holograms of all tissue samples were obtained with a signal-to-noise ratio of approximately 50 dB. Optic nerve head characteristics (shape, diameter, cup depth, and cup width) were quantified with a resolution of a few microns (4.06-4.8 µm). Multiple layers were distinguishable in cross-sectional images of the macula. To our knowledge, this is the first report of DIH being used to image human macular and optic nerve tissue. DIH has the potential to become a useful tool for researchers and clinicians in the diagnosis and treatment of many ocular diseases, including glaucoma and a variety of macular diseases.

  13. Backside illuminated CMOS-TDI line scanner for space applications

    NASA Astrophysics Data System (ADS)

    Cohen, O.; Ben-Ari, N.; Nevo, I.; Shiloah, N.; Zohar, G.; Kahanov, E.; Brumer, M.; Gershon, G.; Ofer, O.

    2017-09-01

    A new multi-spectral line scanner CMOS image sensor is reported. The backside illuminated (BSI) image sensor was designed for continuous-scanning Low Earth Orbit (LEO) space applications and includes custom high-quality CMOS active pixels, a Time Delayed Integration (TDI) mechanism that increases the SNR, a 2-phase exposure mechanism that increases the dynamic Modulation Transfer Function (MTF), very low power internal Analog to Digital Converters (ADCs) with a resolution of 12 bits per pixel, and an on-chip controller. The sensor has 4 independent arrays of pixels, where each array is arranged in 2600 TDI columns with a controllable TDI depth from 8 up to 64 TDI levels. A multispectral optical filter with a specific spectral response per array is assembled at the package level. In this paper we briefly describe the sensor design and present recent electrical and electro-optical measurements of the first prototypes, including high Quantum Efficiency (QE), high MTF, wide-range selectable Full Well Capacity (FWC), excellent linearity of approximately 1.3% in a signal range of 5-85% and approximately 1.75% in a signal range of 2-95% of the signal span, readout noise of approximately 95 electrons with 64 TDI levels, negligible dark current, and power consumption of less than 1.5 W in total for the 4-band sensor at all operating conditions.

  14. A 20 Mfps high frame-depth CMOS burst-mode imager with low power in-pixel NMOS-only passive amplifier

    NASA Astrophysics Data System (ADS)

    Wu, L.; San Segundo Bello, D.; Coppejans, P.; Craninckx, J.; Wambacq, P.; Borremans, J.

    2017-02-01

    This paper presents a 20 Mfps 32 × 84 pixel CMOS burst-mode imager featuring a high frame depth with a passive in-pixel amplifier. Compared to CCD alternatives, CMOS burst-mode imagers are attractive for their low power consumption and integration of circuitry such as ADCs. Due to storage capacitor size and its noise limitations, CMOS burst-mode imagers usually suffer from a lower frame depth than CCD implementations. In order to capture fast transitions over a longer time span, an in-pixel CDS technique has been adopted to reduce the required memory cells for each frame by half. Moreover, integrated with in-pixel CDS, an in-pixel NMOS-only passive amplifier alleviates the kTC noise requirements of the memory bank, allowing the usage of smaller capacitors. Specifically, a dense 108-cell MOS memory bank (10 fF/cell) has been implemented inside a 30 μm pitch pixel, with an area of 25 × 30 μm² occupied by the memory bank. There is an improvement of about 4x in terms of frame depth per pixel area by applying in-pixel CDS and amplification. With the amplifier's gain of 3.3, an FD input-referred RMS noise of 1 mV is achieved at 20 Mfps operation. While the amplification is done without burning DC current, including the pixel source follower biasing, the full pixel consumes 10 μA at a 3.3 V supply voltage at full speed. The chip has been fabricated in imec's 130 nm CMOS CIS technology.

  15. Reconciling Simulated and Observed Views of Clouds: MODIS, ISCCP, and the Limits or Instrument Simulators

    NASA Technical Reports Server (NTRS)

    Ackerman, Steven A.; Hemler, Richard S.; Hofman, Robert J. Patrick; Pincus, Robert; Platnick, Steven

    2011-01-01

    The properties of clouds that may be observed by satellite instruments, such as optical depth and cloud top pressure, are only loosely related to the way clouds are represented in models of the atmosphere. One way to bridge this gap is through "instrument simulators," diagnostic tools that map the model representation to synthetic observations so that differences between simulator output and observations can be interpreted unambiguously as model error. But simulators may themselves be restricted by limited information available from the host model or by internal assumptions. This paper considers the extent to which instrument simulators are able to capture essential differences between MODIS and ISCCP, two similar but independent estimates of cloud properties. The authors review the measurements and algorithms underlying these two cloud climatologies, introduce a MODIS simulator, and detail data sets developed for comparison with global models using ISCCP and MODIS simulators. In nature, MODIS observes less mid-level cloudiness than ISCCP, consistent with the different methods used to determine cloud top pressure; aspects of this difference are reproduced by the simulators running in a climate model. But stark differences between MODIS and ISCCP observations of total cloudiness and the distribution of cloud optical thickness can be traced to different approaches to marginal pixels, which MODIS excludes and ISCCP treats as homogeneous. These pixels, which likely contain broken clouds, cover about 15% of the planet and contain almost all of the optically thinnest clouds observed by either instrument. Instrument simulators cannot reproduce these differences because the host model does not consider unresolved spatial scales and so cannot produce broken pixels. Nonetheless, MODIS and ISCCP observations are consistent for all but the optically thinnest clouds, and models can be robustly evaluated using instrument simulators by excluding ambiguous observations.

  16. Photoacoustic and ultrasound imaging of cancellous bone tissue.

    PubMed

    Yang, Lifeng; Lashkari, Bahman; Tan, Joel W Y; Mandelis, Andreas

    2015-07-01

    We used ultrasound (US) and photoacoustic (PA) imaging modalities to characterize cattle trabecular bones. The PA signals were generated with an 805-nm continuous-wave laser, chosen for its optimal optical penetration depth. The detector for both modalities was a 2.25-MHz US transducer with a lateral resolution of ~1 mm at its focal point. Using a lateral pixel size much larger than the size of the trabeculae, raster scanning generated PA images related to the averaged values of the optical and thermoelastic properties, as well as density measurements in the focal volume. US backscatter yielded images related to mechanical properties and density in the focal volume. The depth of interest was selected by time-gating the signals for both modalities. The raster-scanned PA and US images were compared with microcomputed tomography (μCT) images averaged over the same volume to generate a spatial resolution similar to that of US and PA. The comparison revealed correlations of both PA and US modalities with the mineral volume fraction of the bone tissue. Various features and properties of these modalities such as detectable depth, resolution, and sensitivity are discussed.

  17. Simulation of Small-Pitch HgCdTe Photodetectors

    NASA Astrophysics Data System (ADS)

    Vallone, Marco; Goano, Michele; Bertazzi, Francesco; Ghione, Giovanni; Schirmacher, Wilhelm; Hanna, Stefan; Figgemeier, Heinrich

    2017-09-01

    Recent studies identify the development of infrared HgCdTe-based focal plane arrays (FPAs) with sub-wavelength pixel pitch as an important technological step, offering smaller volume, lower weight, and potentially lower cost. In order to assess the limits of pixel pitch scaling, we present combined three-dimensional optical and electrical simulations of long-wavelength infrared HgCdTe FPAs with 3 μm, 5 μm, and 10 μm pitch. Numerical simulations predict significant cavity effects brought about by the array periodicity. The optical and electrical contributions to spectral inter-pixel crosstalk are investigated as functions of pixel pitch by illuminating the FPAs with Gaussian beams focused on the central pixel. Despite the FPAs being planar with a 100% pixel duty cycle, our calculations suggest that the total crosstalk with nearest-neighbor pixels could be kept acceptably small even with pixels only 3 μm wide and a diffraction-limited optical system.

  18. Design and Performance of a Pinned Photodiode CMOS Image Sensor Using Reverse Substrate Bias.

    PubMed

    Stefanov, Konstantin D; Clarke, Andrew S; Ivory, James; Holland, Andrew D

    2018-01-03

    A new pinned photodiode (PPD) CMOS image sensor with a reverse biased p-type substrate has been developed and characterized. The sensor uses traditional PPDs with one additional deep implantation step to suppress the parasitic reverse currents, and can be fully depleted. The first prototypes have been manufactured on 18 µm thick, 1000 Ω·cm epitaxial silicon wafers using a 180 nm PPD image sensor process. Both front-side illuminated (FSI) and back-side illuminated (BSI) devices were manufactured in collaboration with Teledyne e2v. The characterization results from a number of arrays of 10 µm and 5.4 µm PPD pixels, with different shapes, sizes, and depths of the new implant, are in good agreement with device simulations. The new pixels could be reverse-biased without parasitic leakage currents well beyond full depletion, and demonstrate nearly identical optical response to the reference non-modified pixels. The observed excessive charge sharing in some pixel variants is shown not to be a limiting factor in operation. This development promises to realize monolithic PPD CIS with a large depleted thickness and correspondingly high quantum efficiency at near-infrared and soft X-ray wavelengths.

  19. Design and Performance of a Pinned Photodiode CMOS Image Sensor Using Reverse Substrate Bias †

    PubMed Central

    Clarke, Andrew S.; Ivory, James; Holland, Andrew D.

    2018-01-01

    A new pinned photodiode (PPD) CMOS image sensor with a reverse biased p-type substrate has been developed and characterized. The sensor uses traditional PPDs with one additional deep implantation step to suppress the parasitic reverse currents, and can be fully depleted. The first prototypes have been manufactured on 18 µm thick, 1000 Ω·cm epitaxial silicon wafers using a 180 nm PPD image sensor process. Both front-side illuminated (FSI) and back-side illuminated (BSI) devices were manufactured in collaboration with Teledyne e2v. The characterization results from a number of arrays of 10 µm and 5.4 µm PPD pixels, with different shapes, sizes, and depths of the new implant, are in good agreement with device simulations. The new pixels could be reverse-biased without parasitic leakage currents well beyond full depletion, and demonstrate nearly identical optical response to the reference non-modified pixels. The observed excessive charge sharing in some pixel variants is shown not to be a limiting factor in operation. This development promises to realize monolithic PPD CIS with a large depleted thickness and correspondingly high quantum efficiency at near-infrared and soft X-ray wavelengths. PMID:29301379

  20. Modeling radiative transfer with the doubling and adding approach in a climate GCM setting

    NASA Astrophysics Data System (ADS)

    Lacis, A. A.

    2017-12-01

    The nonlinear dependence of multiply scattered radiation on particle size, optical depth, and solar zenith angle makes accurate treatment of multiple scattering in the climate GCM setting problematic, due primarily to computational cost issues. The accurate multiple-scattering methods that are available are far too computationally expensive for climate GCM applications, while two-stream-type radiative transfer approximations may be computationally fast enough but at the cost of reduced accuracy. We describe here a parameterization of the doubling/adding method that is being used in the GISS climate GCM, which is an adaptation of the doubling/adding formalism configured to operate with a look-up table utilizing a single Gauss quadrature point with an extra-angle formulation. It is designed to closely reproduce the accuracy of full-angle doubling and adding for the multiple scattering effects of clouds and aerosols in a realistic atmosphere as a function of particle size, optical depth, and solar zenith angle. With an additional inverse look-up table, this single-Gauss-point doubling/adding approach can be adapted to model fractional cloud cover for any GCM grid box in the independent pixel approximation as a function of the fractional cloud particle sizes, optical depths, and solar zenith angle dependence.
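
    The adding principle underlying the parameterization can be illustrated with the scalar form of the layer-combination formulas, in which the multiple reflections between two layers sum as a geometric series. This is a minimal sketch for homogeneous, symmetric layers; the GCM parameterization instead works with angular quadrature and look-up tables.

```python
def add_layers(r1, t1, r2, t2):
    """Combine two layers: reflectance and transmittance of the stack."""
    denom = 1.0 - r1 * r2            # geometric series of inter-layer bounces
    r = r1 + t1 * r2 * t1 / denom    # combined reflectance (illumination from above)
    t = t1 * t2 / denom              # combined transmittance
    return r, t

def double(r, t, n_doublings):
    """Doubling: build an optically thick layer by adding a layer to itself."""
    for _ in range(n_doublings):
        r, t = add_layers(r, t, r, t)
    return r, t
```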

  1. Layer by layer: complex analysis with OCT technology

    NASA Astrophysics Data System (ADS)

    Florin, Christian

    2017-03-01

    Standard visualisation systems capture two-dimensional images and require more or less fast image processing systems. The ASP array (active sensor pixel array) opens a new world in imaging: each pixel is provided with its own lens and its own signal pre-processing. The OCT technology works in real time with the highest accuracy. In ASP array systems, the data acquisition and signal processing functionalities are integrated down to the pixel level. For the extraction of interferometric features, the time-of-flight (TOF) principle is used. The ASP architecture offers demodulation of the optical signal within a pixel at up to 100 kHz and reconstruction of the amplitude and its phase. The dynamic range of image capture with the ASP array is higher by two orders of magnitude than that of conventional image sensors. The OCT technology allows topographic imaging in real time with an extremely high geometric spatial resolution. The optical path length is generated by an axial movement of the reference mirror. The amplitude-modulated optical signal has a carrier frequency proportional to the scan rate and contains the depth information; each maximum of the signal envelope corresponds to a reflection (or scattering) within the sample. The ASP array produces 300 × 300 axial interferograms simultaneously, adjacent to each other on all sides. In contrast to standard OCT systems, the signal demodulation for detecting the envelope is not limited by the frame rate of the ASP array. When an optical signal arrives at a pixel of the ASP array, an electrical signal is generated; the background is faded out to avoid saturation of the pixels by high light intensity. The sampled signal is continuously multiplied by a signal of the same frequency in two paths whose phases are shifted by 90 degrees from each other, and the products are integrated and averaged. The outputs of the two paths are routed to the PC, where the envelope amplitude and the phase are used to compute a three-dimensional tomographic image. For 3D measurement, specially designed ASP arrays with a very high image rate are available. When ASP arrays are coupled with the OCT method, layer thicknesses can be determined without contact, sealing seams can be inspected, or geometrical shapes can be measured. From a stack of hundreds of single OCT images, interesting images can be selected and fed to the computer for analysis.
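
    The two 90-degree-shifted demodulation paths described above correspond to a standard I/Q (lock-in) scheme; the sketch below reproduces it digitally for a single A-scan, using a simple moving average as the low-pass stage. It is an illustrative software analogue with assumed parameters, not the analog in-pixel circuitry of the ASP array.

```python
import numpy as np

def iq_envelope(signal, fs, f_carrier, avg_taps=101):
    """Recover envelope and phase of an interferometric signal by I/Q demodulation."""
    t = np.arange(len(signal)) / fs
    i_path = signal * np.cos(2 * np.pi * f_carrier * t)   # in-phase mixing
    q_path = signal * np.sin(2 * np.pi * f_carrier * t)   # quadrature mixing
    kernel = np.ones(avg_taps) / avg_taps                 # crude low-pass (averaging)
    i_lp = np.convolve(i_path, kernel, mode="same")
    q_lp = np.convolve(q_path, kernel, mode="same")
    envelope = 2.0 * np.hypot(i_lp, q_lp)                 # depth-reflectivity profile
    phase = np.arctan2(q_lp, i_lp)
    return envelope, phase
```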

  2. Dual-mode optical microscope based on single-pixel imaging

    NASA Astrophysics Data System (ADS)

    Rodríguez, A. D.; Clemente, P.; Tajahuerce, E.; Lancis, J.

    2016-07-01

    We demonstrate an inverted microscope that can image specimens in both reflection and transmission modes simultaneously with a single light source. The microscope utilizes a digital micromirror device (DMD) for patterned illumination, together with two single-pixel photosensors for efficient light detection. The system, a scan-less device with no moving parts, works by sequentially projecting onto the sample a set of binary intensity patterns codified on a modified commercial DMD. Data to be displayed are geometrically transformed before being written into a memory cell, to cancel optical artifacts coming from the diamond-shaped structure of the micromirror array. The 24-bit color depth of the display is fully exploited to increase the frame rate by a factor of 24, which makes the technique practicable for real samples. Our commercial DMD-based LED illumination is cost-effective and can be easily coupled as an add-on module to existing inverted microscopes. The reflection and transmission information provided by our dual microscope complement each other and can be useful for imaging non-uniform samples and for preventing self-shadowing effects.
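
    The single-pixel principle behind the microscope can be demonstrated with a toy reconstruction: each detector reading is the inner product of the scene with one projected pattern, and an orthogonal pattern set makes the inversion a transpose. The sketch below uses ±1 Hadamard patterns for simplicity; the actual instrument projects binary DMD patterns onto real specimens, so treat this purely as an illustration.

```python
import numpy as np
from scipy.linalg import hadamard

def single_pixel_reconstruct(measurements, n_side):
    """Invert Hadamard-patterned single-pixel measurements into an image."""
    n = n_side * n_side
    H = hadamard(n)                        # one +/-1 pattern per row
    image = H.T @ measurements / n         # inverse Hadamard transform
    return image.reshape(n_side, n_side)

# Simulated acquisition of an 8x8 scene: one detector value per projected pattern
n_side = 8
scene = np.random.default_rng(1).random((n_side, n_side))
measurements = hadamard(n_side * n_side) @ scene.ravel()
recovered = single_pixel_reconstruct(measurements, n_side)
assert np.allclose(recovered, scene)
```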

  3. The Use of Aerosol Optical Depth in Estimating Trace Gas Emissions from Biomass Burning Plumes

    NASA Astrophysics Data System (ADS)

    Jones, N.; Paton-Walsh, C.; Wilson, S.; Meier, A.; Deutscher, N.; Griffith, D.; Murcray, F.

    2003-12-01

    We have observed significant correlations between aerosol optical depth (AOD) at 500 nm and column amounts of a number of biomass burning indicators (carbon monoxide, hydrogen cyanide, formaldehyde, and ammonia) in bushfire smoke plumes over SE Australia during the 2001/2002 and 2002/2003 fire seasons from remote sensing measurements. The Department of Chemistry, University of Wollongong, operates a high resolution Fourier Transform Spectrometer (FTS) in the city of Wollongong, approximately 80 km south of Sydney. During the recent bushfires we collected over 1500 solar FTIR spectra directly through the smoke over Wollongong. The total column amounts of the biomass burning indicators were calculated using the profile retrieval software package SFIT2. Using the same solar beam, a small grating spectrometer equipped with a 2048-pixel CCD detector array was used to calculate simultaneous aerosol optical depths. This dataset is therefore unique in its temporal sampling, proximity to active fires, and range of simultaneously measured constituents. There are several important applications of the AOD to gas column correlation. The estimation of global emissions from biomass burning currently has very large associated uncertainties. The use of visible radiances measured by satellites, and hence AOD, could significantly reduce these uncertainties by giving a direct estimate of global emissions of gases from biomass burning through application of the AOD to gas correlation. On a more local level, satellite-derived aerosol optical depth maps could be inverted to infer approximate concentration levels of smoke-related pollutants at the ground and in the lower troposphere, and thus can be used to determine the nature of any significant health impacts.

  4. A multi-approach to the optical depth of a contrail cirrus cluster

    NASA Astrophysics Data System (ADS)

    Vazquez-Navarro, Margarita; Bugliaro, Luca; Schumann, Ulrich; Strandgren, Johan; Wirth, Martin; Voigt, Christiane

    2017-04-01

    Amongst the individual aviation emissions, contrail cirrus contribute the largest fraction to the aviation effects on climate. To investigate the optical depth from contrail cirrus, we selected a cirrus and contrail cloud outbreak on the 10th April 2014 between the North Sea and Switzerland detected during the ML-CIRRUS experiment (Voigt et al., 2017). The outbreak was not forecast by weather prediction models. We describe its origin and evolution using a combination of in-situ measurements, remote sensing approaches and contrail prediction model prognosis. The in-situ and lidar measurements were carried out with the HALO aircraft, where the cirrus was first identified. Model predictions from the contrail prediction model CoCiP (Schumann et al., 2012) point to an anthropogenic origin. The satellite pictures from the SEVIRI imager on MSG combined with the use of a contrail cluster tracking algorithm enable the automatic assessment of the origin, displacement and growth of the cloud and the correct labeling of cluster pixels. The evolution of the optical depth and particle size of the selected cluster pixels were derived using the CiPS algorithm, a neural network primarily based on SEVIRI images. The CoCiP forecast of the cluster compared to the actual cluster tracking show that the model correctly predicts the occurrence of the cluster and its advection direction although the cluster spreads faster than simulated. The optical depth derived from CiPS and from the airborne high spectral resolution lidar WALES are compared and show a remarkably good agreement. This confirms that the new CiPS algorithm is a very powerful tool for the assessment of the optical depth of even optically thinner cirrus clouds. References: Schumann, U.: A contrail cirrus prediction model, Geosci. Model Dev., 5, 543-580, doi: 10.5194/gmd-5-543-2012, 2012. Voigt, C., Schumann, U., Minikin, A., Abdelmonem, A., Afchine, A., Borrmann, S., Boettcher, M., Buchholz, B., Bugliaro, L., Costa, A., Curtius, J., Dollner, M., Dörnbrack, A., Dreiling, V., Ebert, V., Ehrlich, A., Fix, A., Forster, L., Frank, F., Fütterer, D., Giez, A., Graf, K., Grooß, J.-U., Groß, S., Heimerl, K., Heinold, B., Hüneke, T., Järvinen, E., Jurkat, T., Kaufmann, S., Kenntner, M., Klingebiel, M., Klimach, T., Kohl, R., Krämer, M., Krisna, T. C., Luebke, A., Mayer, B., Mertes, S., Molleker, S., Petzold, A., Pfeilsticker, K., Port, M., Rapp, M., Reutter, P., Rolf, C., Rose, D., Sauer, D., Schäfler, A., Schlage, R., Schnaiter, M., Schneider, J., Spelten, N., Spichtinger, P., Stock, P., Walser, A., Weigel, R., Weinzierl, B., Wendisch, M., Werner, F., Wernli, H., Wirth, M., Zahn, A., Ziereis, H., and Zöger, M.: ML-CIRRUS - The airborne experiment on natural cirrus and contrail cirrus with the high-altitude long-range research aircraft HALO, Bull. Amer. Meteorol. Soc., in press, doi: 10.1175/BAMS-D-15-00213.1, 2017.

  5. Distance-based over-segmentation for single-frame RGB-D images

    NASA Astrophysics Data System (ADS)

    Fang, Zhuoqun; Wu, Chengdong; Chen, Dongyue; Jia, Tong; Yu, Xiaosheng; Zhang, Shihong; Qi, Erzhao

    2017-11-01

    Over-segmentation, known as super-pixels, is a widely used preprocessing step in segmentation algorithms. An over-segmentation algorithm segments an image into regions of perceptually similar pixels, but performs poorly when based only on color images in indoor environments. Fortunately, RGB-D images can improve the performance on images of indoor scenes. In order to segment RGB-D images into super-pixels effectively, we propose a novel algorithm, DBOS (Distance-Based Over-Segmentation), which realizes full coverage of super-pixels on the image. DBOS fills the holes in depth images to fully utilize the depth information, and applies SLIC-like frameworks for fast running. Additionally, depth features such as the plane projection distance are extracted to compute the distance, which is the core of SLIC-like frameworks. Experiments on RGB-D images of the NYU Depth V2 dataset demonstrate that DBOS outperforms state-of-the-art methods in quality while maintaining speeds comparable to them.
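
    The core of a SLIC-like assignment step is the combined distance between a pixel and a cluster center; for RGB-D data, a depth term such as the plane projection distance can simply be added to the color and spatial terms. The sketch below shows one plausible weighting; the feature choice and weights are assumptions for illustration, not the DBOS definition.

```python
import numpy as np

def rgbd_slic_distance(pixel, center, s_grid, m_color=10.0, m_depth=20.0):
    """Combined color/space/depth distance for an RGB-D SLIC-like assignment.

    pixel and center are dicts with 'lab' (3-vector), 'xy' (2-vector) and
    'depth' (scalar depth feature, e.g. plane projection distance).
    """
    d_color = np.linalg.norm(pixel["lab"] - center["lab"])
    d_space = np.linalg.norm(pixel["xy"] - center["xy"])
    d_depth = abs(pixel["depth"] - center["depth"])
    # Normalize each term so color, position, and depth contribute comparably
    return np.sqrt((d_color / m_color) ** 2
                   + (d_space / s_grid) ** 2
                   + (d_depth / m_depth) ** 2)
```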

  6. Full-range k-domain linearization in spectral-domain optical coherence tomography.

    PubMed

    Jeon, Mansik; Kim, Jeehyun; Jung, Unsang; Lee, Changho; Jung, Woonggyu; Boppart, Stephen A

    2011-03-10

    A full-bandwidth k-domain linearization method for spectral-domain optical coherence tomography (SD-OCT) is demonstrated. The method uses information of the wavenumber-pixel-position provided by a translating-slit-based wavelength filter. For calibration purposes, the filter is placed either after a broadband source or at the end of the sample path, and the filtered spectrum with a narrowed line width (∼0.5 nm) is incident on a line-scan camera in the detection path. The wavelength-swept spectra are co-registered with the pixel positions according to their central wavelengths, which can be automatically measured with an optical spectrum analyzer. For imaging, the method does not require a filter or a software recalibration algorithm; it simply resamples the OCT signal from the detector array without employing rescaling or interpolation methods. The accuracy of k-linearization is maximized by increasing the k-linearization order, which is known to be a crucial parameter for maintaining a narrow point-spread function (PSF) width at increasing depths. The broadening effect is studied by changing the k-linearization order by undersampling to search for the optimal value. The system provides more position information, surpassing the optimum without compromising the imaging speed. The proposed full-range k-domain linearization method can be applied to SD-OCT systems to simplify their hardware/software, increase their speed, and improve the axial image resolution. The experimentally measured width of PSF in air has an FWHM of 8 μm at the edge of the axial measurement range. At an imaging depth of 2.5 mm, the sensitivity of the full-range calibration case drops less than 10 dB compared with the uncompensated case.

  7. 32 x 16 CMOS smart pixel array for optical interconnects

    NASA Astrophysics Data System (ADS)

    Kim, Jongwoo; Guilfoyle, Peter S.; Stone, Richard V.; Hessenbruch, John M.; Choquette, Kent D.; Kiamilev, Fouad E.

    2000-05-01

    Free space optical interconnects can increase throughput capacities and eliminate much of the energy consumption required for `all electronic' systems. High speed optical interconnects can be achieved by integrating optoelectronic devices with conventional electronics. Smart pixel arrays have been developed which use optical interconnects. An individual smart pixel cell is composed of a vertical cavity surface emitting laser (VCSEL), a photodetector, an optical receiver, a laser driver, and digital logic circuitry. Oxide-confined VCSELs are being developed to operate at 850 nm with a threshold current of approximately 1 mA. Multiple quantum well photodetectors are being fabricated from AlGaAs for use with the 850 nm VCSELs. The VCSELs and photodetectors are being integrated with complementary metal oxide semiconductor (CMOS) circuitry using flip-chip bonding. CMOS circuitry is being integrated with a 32 X 16 smart pixel array. The 512 smart pixels are serially linked; thus, an entire data stream may be clocked through the chip and output electrically by the last pixel. Electrical testing is being performed on the CMOS smart pixel array. Using an on-chip pseudo-random number generator, a digital data sequence was cycled through the chip, verifying operation of the digital circuitry. Although the prototype chip was fabricated in 1.2 µm technology, simulations have demonstrated that the array can operate at 1 Gb/s per pixel using 0.5 µm technology.

  8. The phase 1 upgrade of the CMS Pixel Front-End Driver

    NASA Astrophysics Data System (ADS)

    Friedl, M.; Pernicka, M.; Steininger, H.

    2010-12-01

    The pixel detector of the CMS experiment at the LHC is read out by analog optical links, sending the data to 9U VME Front-End Driver (FED) boards located in the electronics cavern. There are plans for the phase 1 upgrade of the pixel detector (2016) to add one more layer, while significantly cutting down the overall material budget. At the same time, the optical data transmission will be replaced by a serialized digital scheme. A plug-in board solution with a high-speed digital optical receiver has been developed for the Pixel-FED readout boards and will be presented along with first tests of the future optical link.

  9. A design of optical modulation system with pixel-level modulation accuracy

    NASA Astrophysics Data System (ADS)

    Zheng, Shiwei; Qu, Xinghua; Feng, Wei; Liang, Baoqiu

    2018-01-01

    Vision measurement has been widely used in the fields of dimensional measurement and surface metrology. However, traditional vision measurement methods have limitations such as low dynamic range and poor reconfigurability. Optical modulation before image formation offers high dynamic range, high accuracy, and greater flexibility, and the modulation accuracy is the key parameter that determines the accuracy and effectiveness of an optical modulation system. In this paper, an optical modulation system with pixel-level accuracy is designed and built based on multi-point reflective imaging theory and a digital micromirror device (DMD). The system consists of the DMD, a CCD camera, and a lens. First, accurate pixel-to-pixel correspondence between the DMD mirrors and the CCD pixels is achieved using moiré fringes and an image-processing procedure of sampling and interpolation. Then, three coordinate systems are built and the mathematical relationship between the coordinates of the digital micromirrors and the CCD pixels is calculated using a checkerboard pattern. A verification experiment shows that the correspondence error is less than 0.5 pixel, indicating that the modulation accuracy of the system meets the requirements of modulation. Furthermore, the highly reflective edge of a metal circular piece can be detected using the system, which demonstrates the effectiveness of the optical modulation system.
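
    The abstract does not state the mapping model; as a rough illustration of how a DMD-mirror-to-CCD-pixel correspondence could be established from checkerboard correspondences, the Python sketch below (an assumption, not the paper's procedure) fits a planar homography with the direct linear transform and reports the residual. All coordinates are hypothetical.

    ```python
    import numpy as np

    def fit_homography(dmd_pts, ccd_pts):
        """Estimate a 3x3 homography H mapping DMD mirror coords to CCD pixel coords
        (direct linear transform). Points are (N, 2) arrays, N >= 4."""
        A = []
        for (x, y), (u, v) in zip(dmd_pts, ccd_pts):
            A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        H = Vt[-1].reshape(3, 3)
        return H / H[2, 2]

    def map_dmd_to_ccd(H, pts):
        """Apply the homography to DMD mirror coordinates."""
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])
        proj = pts_h @ H.T
        return proj[:, :2] / proj[:, 2:3]

    # Hypothetical checkerboard correspondences (DMD mirror index -> CCD pixel)
    dmd = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 50]], float)
    ccd = np.array([[12.3, 8.1], [410.7, 10.2], [14.9, 405.6], [413.0, 408.8], [213.1, 207.4]], float)
    H = fit_homography(dmd, ccd)
    residual = np.abs(map_dmd_to_ccd(H, dmd) - ccd).max()
    print("max correspondence residual (pixels):", residual)
    ```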

  10. 1024-Pixel CMOS Multimodality Joint Cellular Sensor/Stimulator Array for Real-Time Holistic Cellular Characterization and Cell-Based Drug Screening.

    PubMed

    Park, Jong Seok; Aziz, Moez Karim; Li, Sensen; Chi, Taiyun; Grijalva, Sandra Ivonne; Sung, Jung Hoon; Cho, Hee Cheol; Wang, Hua

    2018-02-01

    This paper presents a fully integrated CMOS multimodality joint sensor/stimulator array with 1024 pixels for real-time holistic cellular characterization and drug screening. The proposed system consists of four pixel groups and four parallel signal-conditioning blocks. Every pixel group contains 16 × 16 pixels, and each pixel includes one gold-plated electrode, four photodiodes, and in-pixel circuits within a compact pixel footprint. Each pixel supports real-time extracellular potential recording, optical detection, charge-balanced biphasic current stimulation, and cellular impedance measurement for the same cellular sample. The proposed system is fabricated in a standard 130-nm CMOS process. Rat cardiomyocytes are successfully cultured on-chip. Measured high-resolution optical opacity images, extracellular potential recordings, biphasic current stimulations, and cellular impedance images demonstrate the unique advantages of the system for holistic cell characterization and drug screening. Furthermore, this paper demonstrates the use of optical detection on the on-chip cultured cardiomyocytes to track their cyclic beating pattern and beating rate in real time.

  11. Laser pixelation of thick scintillators for medical imaging applications: x-ray studies

    NASA Astrophysics Data System (ADS)

    Sabet, Hamid; Kudrolli, Haris; Marton, Zsolt; Singh, Bipin; Nagarkar, Vivek V.

    2013-09-01

    To achieve the high spatial resolution required in nuclear imaging, scintillation light spread has to be controlled. This has traditionally been achieved by introducing structures in the bulk of scintillation materials, typically by mechanical pixelation of scintillators and filling the resultant inter-pixel gaps with reflective materials. Mechanical pixelation, however, is accompanied by various cost and complexity issues, especially for hard, brittle, and hygroscopic materials. For example, LSO and LYSO, hard and brittle scintillators of interest to the medical imaging community, are known to crack under thermal and mechanical stress; the material yield drops quickly for large arrays with high-aspect-ratio pixels, and the cost of the pixelation process therefore increases. We are utilizing a novel technique named Laser Induced Optical Barriers (LIOB) for pixelation of scintillators that overcomes the issues associated with mechanical pixelation. In this technique, we introduce optical barriers within the bulk of scintillator crystals to form pixelated arrays with small pixel size and large thickness. We applied LIOB to LYSO using a high-frequency solid-state laser. Arrays with different crystal thicknesses (5 to 20 mm) and pixel sizes (0.8×0.8 to 1.5×1.5 mm2) were fabricated and tested. The width of the optical barriers was controlled by fine-tuning key parameters such as the lens focal spot size and the laser energy density. Here we report on the LIOB process, its optimization, and optical crosstalk measurements using X-rays. Many applications can potentially benefit from LIOB, including but not limited to clinical/pre-clinical PET and SPECT systems and photon-counting CT detectors.

  12. Coupling sky images with radiative transfer models: a new method to estimate cloud optical depth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mejia, Felipe A.; Kurtz, Ben; Murray, Keenan

    A method for retrieving cloud optical depth (τc) using a UCSD-developed ground-based sky imager (USI) is presented. The radiance red–blue ratio (RRBR) method is motivated by the analysis of simulated images of various τc produced by a radiative transfer model (RTM). From these images the basic parameters affecting the radiance and red–blue ratio (RBR) of a pixel are identified as the solar zenith angle (θ0), τc, the solar pixel angle/scattering angle (θs), and the pixel zenith angle/view angle (θz). The effects of these parameters are described, and the functions for radiance, Iλ(τc, θ0, θs, θz), and RBR(τc, θ0, θs, θz) are retrieved from the RTM results. RBR, which is commonly used for cloud detection in sky images, provides non-unique solutions for τc: RBR increases with τc up to about τc = 1 (depending on other parameters) and then decreases. Therefore, the RRBR algorithm uses the measured radiance Iλ,meas(θs, θz), in addition to RBRmeas(θs, θz), to obtain a unique solution for τc. The RRBR method is applied to images of liquid water clouds taken by a USI at the Oklahoma Atmospheric Radiation Measurement (ARM) program site over the course of 220 days and compared against measurements from a microwave radiometer (MWR) and output from the Min et al. (2003) method for overcast skies. τc values ranged from 0 to 80, with values over 80 capped and registered as 80. A τc RMSE of 2.5 between the Min et al. (2003) method and the USI is observed; the MWR and USI have an RMSE of 2.2, which is well within the uncertainty of the MWR. In conclusion, the procedure developed here provides a foundation to test and develop other cloud detection algorithms.

  13. Coupling sky images with radiative transfer models: a new method to estimate cloud optical depth

    DOE PAGES

    Mejia, Felipe A.; Kurtz, Ben; Murray, Keenan; ...

    2016-08-30

    A method for retrieving cloud optical depth (τc) using a UCSD-developed ground-based sky imager (USI) is presented. The radiance red–blue ratio (RRBR) method is motivated by the analysis of simulated images of various τc produced by a radiative transfer model (RTM). From these images the basic parameters affecting the radiance and red–blue ratio (RBR) of a pixel are identified as the solar zenith angle (θ0), τc, the solar pixel angle/scattering angle (θs), and the pixel zenith angle/view angle (θz). The effects of these parameters are described, and the functions for radiance, Iλ(τc, θ0, θs, θz), and RBR(τc, θ0, θs, θz) are retrieved from the RTM results. RBR, which is commonly used for cloud detection in sky images, provides non-unique solutions for τc: RBR increases with τc up to about τc = 1 (depending on other parameters) and then decreases. Therefore, the RRBR algorithm uses the measured radiance Iλ,meas(θs, θz), in addition to RBRmeas(θs, θz), to obtain a unique solution for τc. The RRBR method is applied to images of liquid water clouds taken by a USI at the Oklahoma Atmospheric Radiation Measurement (ARM) program site over the course of 220 days and compared against measurements from a microwave radiometer (MWR) and output from the Min et al. (2003) method for overcast skies. τc values ranged from 0 to 80, with values over 80 capped and registered as 80. A τc RMSE of 2.5 between the Min et al. (2003) method and the USI is observed; the MWR and USI have an RMSE of 2.2, which is well within the uncertainty of the MWR. In conclusion, the procedure developed here provides a foundation to test and develop other cloud detection algorithms.
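
    The two records above describe the RRBR retrieval only qualitatively; the following Python sketch shows, under assumed look-up tables standing in for the RTM output at one fixed geometry, how combining the measured radiance with the (non-unique) RBR yields a single τc. The LUT shapes and all numbers are illustrative, not the paper's tables.

    ```python
    import numpy as np

    # Hypothetical RTM look-up tables for one geometry (theta_0, theta_s, theta_z):
    # RBR rises with tau_c and then falls, so RBR alone is ambiguous;
    # the pixel radiance breaks the tie. Numbers are illustrative only.
    tau_grid = np.linspace(0.05, 80.0, 2000)
    rbr_lut = 0.6 + 0.45 * (tau_grid / (1.0 + tau_grid)) - 0.15 * np.log1p(tau_grid) / np.log(81)
    rad_lut = 1.0 / (1.0 + 0.35 * tau_grid)          # normalized pixel radiance

    def retrieve_tau(rbr_meas, rad_meas, w_rbr=1.0, w_rad=1.0):
        """RRBR-style retrieval: pick the tau_c whose modeled RBR *and* radiance
        jointly best match the measured pixel values."""
        cost = w_rbr * (rbr_lut - rbr_meas) ** 2 + w_rad * (rad_lut - rad_meas) ** 2
        return tau_grid[np.argmin(cost)]

    # A simulated pixel with tau_c = 30: its RBR alone is also roughly consistent
    # with a much thinner cloud (near tau_c ~ 6), but the low radiance selects
    # the thick branch.
    tau_true = 30.0
    rbr_obs = np.interp(tau_true, tau_grid, rbr_lut)
    rad_obs = np.interp(tau_true, tau_grid, rad_lut)
    print("retrieved tau_c:", round(retrieve_tau(rbr_obs, rad_obs), 2))
    ```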

  14. DMD-based LED-illumination super-resolution and optical sectioning microscopy.

    PubMed

    Dan, Dan; Lei, Ming; Yao, Baoli; Wang, Wen; Winterhalder, Martin; Zumbusch, Andreas; Qi, Yujiao; Xia, Liang; Yan, Shaohui; Yang, Yanlong; Gao, Peng; Ye, Tong; Zhao, Wei

    2013-01-01

    Super-resolution three-dimensional (3D) optical microscopy has incomparable advantages over other high-resolution microscopic technologies, such as electron microscopy and atomic force microscopy, in the study of biological molecules, pathways and events in live cells and tissues. We present a novel approach to structured illumination microscopy (SIM) that uses a digital micromirror device (DMD) for fringe projection and a low-coherence LED light for illumination. A lateral resolution of 90 nm and an optical sectioning depth of 120 μm were achieved. The maximum acquisition speed for 3D imaging in the optical sectioning mode was 1.6×10^7 pixels/second, which was mainly limited by the sensitivity and speed of the CCD camera. In contrast to other SIM techniques, the DMD-based LED-illumination SIM is cost-effective, easily switchable between multiple wavelengths, and speckle-noise-free. The 2D super-resolution and 3D optical sectioning modalities can be easily switched and applied to either fluorescent or non-fluorescent specimens.

  15. DMD-based LED-illumination Super-resolution and optical sectioning microscopy

    PubMed Central

    Dan, Dan; Lei, Ming; Yao, Baoli; Wang, Wen; Winterhalder, Martin; Zumbusch, Andreas; Qi, Yujiao; Xia, Liang; Yan, Shaohui; Yang, Yanlong; Gao, Peng; Ye, Tong; Zhao, Wei

    2013-01-01

    Super-resolution three-dimensional (3D) optical microscopy has incomparable advantages over other high-resolution microscopic technologies, such as electron microscopy and atomic force microscopy, in the study of biological molecules, pathways and events in live cells and tissues. We present a novel approach to structured illumination microscopy (SIM) that uses a digital micromirror device (DMD) for fringe projection and a low-coherence LED light for illumination. A lateral resolution of 90 nm and an optical sectioning depth of 120 μm were achieved. The maximum acquisition speed for 3D imaging in the optical sectioning mode was 1.6×10^7 pixels/second, which was mainly limited by the sensitivity and speed of the CCD camera. In contrast to other SIM techniques, the DMD-based LED-illumination SIM is cost-effective, easily switchable between multiple wavelengths, and speckle-noise-free. The 2D super-resolution and 3D optical sectioning modalities can be easily switched and applied to either fluorescent or non-fluorescent specimens. PMID:23346373

  16. RGB-D depth-map restoration using smooth depth neighborhood supports

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Xue, Haoyang; Yu, Zhongjie; Wu, Qiang; Yang, Jie

    2015-05-01

    A method to restore the depth map of an RGB-D image using smooth depth neighborhood (SDN) supports is presented. The SDN supports are computed from the color image corresponding to the depth map. Compared with the most widely used square supports, the proposed SDN supports capture the local structure of the object well: only pixels with similar depth values are allowed to be included in the support. We combine the SDN supports with the joint bilateral filter (JBF) to form the SDN-JBF and use it to restore depth maps. Experimental results show that the SDN-JBF can not only rectify misaligned depth pixels but also preserve sharp depth discontinuities.
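
    The abstract does not give the filter equations; as a rough sketch of the underlying joint bilateral filtering idea, the Python code below restores a depth map by weighting neighbors with a spatial Gaussian and a color-similarity Gaussian from the registered RGB image, with a simple depth gate standing in for the SDN "similar depth" support. Parameters and the synthetic data are assumptions.

    ```python
    import numpy as np

    def joint_bilateral_depth(depth, rgb, radius=3, sigma_s=2.0, sigma_c=10.0, depth_gate=0.1):
        """Restore a depth map guided by the registered color image.
        depth: (H, W) float array (0 marks missing pixels); rgb: (H, W, 3) float array."""
        H, W = depth.shape
        out = depth.copy()
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
        pad_d = np.pad(depth, radius, mode='edge')
        pad_c = np.pad(rgb, ((radius, radius), (radius, radius), (0, 0)), mode='edge')
        for i in range(H):
            for j in range(W):
                d_win = pad_d[i:i + 2*radius + 1, j:j + 2*radius + 1]
                c_win = pad_c[i:i + 2*radius + 1, j:j + 2*radius + 1]
                c_diff = np.linalg.norm(c_win - rgb[i, j], axis=2)
                w = spatial * np.exp(-c_diff**2 / (2 * sigma_c**2))
                center = depth[i, j]
                valid = d_win > 0
                if center > 0:  # crude stand-in for the SDN "similar depth" support
                    valid &= np.abs(d_win - center) < depth_gate * max(center, 1e-6)
                w = w * valid
                if w.sum() > 0:
                    out[i, j] = (w * d_win).sum() / w.sum()
        return out

    # Tiny synthetic example: a two-level depth map with random holes
    rng = np.random.default_rng(0)
    depth = np.full((32, 32), 2.0); depth[:, 16:] = 3.0
    depth[rng.random(depth.shape) < 0.1] = 0.0          # simulate missing pixels
    rgb = np.dstack([np.where(depth >= 2.5, 200.0, 50.0)] * 3)
    restored = joint_bilateral_depth(depth, rgb)
    print("remaining holes:", int((restored == 0).sum()))
    ```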

  17. The multifocus plenoptic camera

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Lumsdaine, Andrew

    2012-01-01

    The focused plenoptic camera is based on the Lippmann sensor: an array of microlenses focused on the pixels of a conventional image sensor. This device samples the radiance, or plenoptic function, as an array of cameras with large depth of field, focused at a certain plane in front of the microlenses. For digital refocusing (one of the important applications) the depth of field needs to be large, but there are fundamental optical limitations to this. The solution to this problem is to use an array of interleaved microlenses with different focal lengths, focused at two or more different planes. In this way a focused image can be constructed at any depth of focus, and a much wider range of digital refocusing can be achieved. This paper presents our theory and the results of implementing such a camera. Real-world images demonstrate the extended capabilities, and limitations are discussed.

  18. Comparison of Bathymetry and Bottom Characteristics From Hyperspectral Remote Sensing Data and Shipborne Acoustic Measurements

    NASA Astrophysics Data System (ADS)

    McIntyre, M. L.; Naar, D. F.; Carder, K. L.; Howd, P. A.; Lewis, J. M.; Donahue, B. T.; Chen, F. R.

    2002-12-01

    There is growing interest in applying optical remote sensing techniques to shallow-water geological applications such as bathymetry and bottom characterization. Model inversions of hyperspectral remote-sensing reflectance imagery can provide estimates of bottom albedo and depth. This research was conducted in support of the HyCODE (Hyperspectral Coupled Ocean Dynamics Experiment) project in order to test optical sensor performance and the use of a hyperspectral remote-sensing reflectance algorithm for shallow waters in estimating bottom depths and reflectance. The objective of this project was to compare optically derived products of bottom depth and reflectance to shipborne acoustic measurements of bathymetry and backscatter. A set of three high-resolution multibeam surveys, within an 18 km by 1.5 km shore-perpendicular transect 5 km offshore of Sarasota, Florida, was collected at water depths ranging from 8 m to 16 m. These products are compared to bottom depths derived from aircraft remote-sensing data collected with the AVIRIS (Airborne Visible-Infrared Imaging Spectrometer) instrument by means of a semi-analytical remote-sensing reflectance model. The pixel sizes of the multibeam bathymetry and AVIRIS data are 0.25 m and 10 m, respectively. When viewed at full resolution, the multibeam bathymetry data show small-scale sedimentary bedforms (wavelength ~10 m, amplitude ~1 m) that are not observed in the lower-resolution hyperspectral bathymetry. However, model-derived bottom depths agree well with a smoothed version of the multibeam bathymetry. Depths derived from shipborne hyperspectral measurements were accurate within 13%. In areas where diver observations confirmed biological growth and bioturbation, derived bottom depths were less accurate. Acoustic backscatter corresponds well with the aircraft hyperspectral imagery and in situ measurements of bottom reflectance, and was used to define the distribution of different bottom types. Acoustic backscatter imagery corresponds well with the AVIRIS data in the middle to outer study area, implying a close correspondence between seafloor character and optical reflectance. AVIRIS data in the inner study area show poorer correspondence with the acoustic facies, indicating greater water-column effects (turbidity). Acoustic backscatter as a proxy for bottom albedo, in conjunction with multibeam bathymetry data, will allow for more precise modeling of the optical signal in coastal environments.

  19. SU-E-T-296: Dosimetric Analysis of Small Animal Image-Guided Irradiator Using High Resolution Optical CT Imaging of 3D Dosimeters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Na, Y; Qian, X; Wuu, C

    Purpose: To verify the dosimetric characteristics of a small animal image-guided irradiator using high-resolution optical CT imaging of 3D dosimeters. Methods: PRESAGE 3D dosimeters were used to determine the dosimetric characteristics of a small animal image-guided irradiator and compared with EBT2 films. Cylindrical PRESAGE dosimeters with 7 cm height and 6 cm diameter were placed along the central axis of the beam. The films were positioned between 6×6 cm2 cubed plastic water phantoms perpendicular to the beam direction at multiple depths. The PRESAGE dosimeters and EBT2 films were then irradiated with the irradiator beams at 220 kVp and 13 mA. Each of the irradiated PRESAGE dosimeters, named PA1, PA2, PB1, and PB2, was independently scanned using a high-resolution single-laser-beam optical CT scanner. The transverse images were reconstructed with a high-resolution 0.1 mm pixel. A commercial Epson Expression 10000XL flatbed scanner was used for readout of the irradiated EBT2 films at a 0.4 mm pixel resolution. PDD curves and beam profiles were measured for the irradiated PRESAGE dosimeters and EBT2 films. Results: The PDD agreements between the irradiated PRESAGE dosimeters PA1, PA2, PB1, PB2 and the EBT2 films were 1.7, 2.3, 1.9, and 1.9% at depths of 1, 5, 10, 15, 20, 30, 40 and 50 mm, respectively. The FWHM measurements for each PRESAGE dosimeter and the film agreed within 0.5, 1.1, 0.4, and 1.7%, respectively, at 30 mm depth. Both PDD and FWHM measurements for the PRESAGE dosimeters and the films agreed overall within 2%. The 20%–80% penumbral widths of each PRESAGE dosimeter and the film at a given depth were found to be 0.97, 0.91, 0.79, 0.88, and 0.37 mm, respectively. Conclusion: The dosimetric characteristics of a small animal image-guided irradiator have been demonstrated with measurements using PRESAGE dosimeters and EBT2 film. With the high resolution and accuracy obtained from this 3D dosimetry system, precise targeting in small animal irradiation can be achieved.

  20. Wavelength-Filter-Based Spectrally Calibrated Wavenumber Linearization in 1.3 μm Spectral-Domain Optical Coherence Tomography.

    PubMed

    Wijeisnghe, Ruchire Eranga Henry; Cho, Nam Hyun; Park, Kibeom; Shin, Yongseung; Kim, Jeehyun

    2013-12-01

    In this study, we demonstrate an enhanced spectral calibration method for 1.3 μm spectral-domain optical coherence tomography (SD-OCT). The calibration method using a wavelength filter simplifies the SD-OCT system, and the axial resolution and the overall speed of the OCT system can be dramatically improved as well. An externally connected wavelength filter is utilized to obtain the wavenumber-versus-pixel-position information. During the calibration process the wavelength filter is placed after a broadband source, connected through an optical circulator. The filtered spectrum, with a narrow line width of 0.5 nm, is detected using a line-scan camera. The method does not require a filter or a software recalibration algorithm for imaging, as it simply resamples the OCT signal from the detector array without employing rescaling or interpolation methods. One of the main drawbacks of SD-OCT, the broadening of the point spread function (PSF) with increasing imaging depth, can be compensated by increasing the wavenumber-linearization order. The sensitivity of our system was measured at 99.8 dB at an imaging depth of 2.1 mm, compared with the uncompensated case.
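
    Neither this record nor record 6 above spells out the resampling arithmetic; the Python sketch below is one assumed reading of "resampling without interpolation": a calibration table of central wavelength versus detector pixel is converted to wavenumber, fitted with a low-order polynomial (the linearization order), and used to build a fixed nearest-pixel selection table onto a k-linear grid. All numbers are illustrative.

    ```python
    import numpy as np

    def k_resample_indices(calib_pixels, calib_wavelengths_nm, n_pixels, poly_order=3):
        """Build a fixed pixel-selection table that resamples a detector-array
        spectrum onto a wavenumber-linear grid by nearest-pixel selection
        (no rescaling or interpolation of the measured values)."""
        k_calib = 2 * np.pi / (np.asarray(calib_wavelengths_nm, float) * 1e-9)
        coeffs = np.polyfit(calib_pixels, k_calib, poly_order)   # pixel -> k mapping
        k_of_pixel = np.polyval(coeffs, np.arange(n_pixels))
        k_lin = np.linspace(k_of_pixel.min(), k_of_pixel.max(), n_pixels)
        # For each k-linear sample, pick the detector pixel whose k is closest
        return np.abs(k_of_pixel[None, :] - k_lin[:, None]).argmin(axis=1)

    # Hypothetical calibration points around 1.3 um and a synthetic spectrum
    pixels = np.array([0, 256, 512, 768, 1023])
    wavelengths = np.array([1360.0, 1335.0, 1310.0, 1285.0, 1260.0])  # nm, illustrative
    idx = k_resample_indices(pixels, wavelengths, 1024)
    spectrum = np.random.default_rng(1).random(1024)
    a_scan = np.abs(np.fft.ifft(spectrum[idx]))  # depth profile from the k-linear spectrum
    print(idx[:5], a_scan.shape)
    ```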

  1. Modulation transfer function of a triangular pixel array detector.

    PubMed

    Karimzadeh, Ayatollah

    2014-07-01

    The modulation transfer function (MTF) is the main parameter that is used to evaluate image quality in electro-optical systems. Detector sampling MTF in most electro-optical systems determines the cutoff frequency of the system. The MTF of the detector depends on its pixel shape. In this work, we calculated the MTF of a detector with an equilateral triangular pixel shape. Some new results were found in deriving the MTF for the equilateral triangular pixel shape.
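
    The paper's closed-form MTF is not reproduced in the abstract; purely as a numerical illustration of a detector-sampling MTF, the Python sketch below rasterizes an equilateral-triangle pixel aperture and an equal-area square aperture and compares the magnitude of their normalized 2D Fourier transforms along one frequency axis. The grid size and the equal-area comparison are assumptions.

    ```python
    import numpy as np

    def aperture_mtf(mask, axis=1):
        """Magnitude of the normalized 2D Fourier transform of a pixel aperture,
        taken along one spatial-frequency axis through the origin."""
        F = np.fft.fftshift(np.abs(np.fft.fft2(mask)))
        F /= F.max()
        center = mask.shape[0] // 2
        return F[center, :] if axis == 1 else F[:, center]

    N = 512                      # samples across the simulation window
    y, x = np.mgrid[0:N, 0:N]
    cx = cy = N / 2

    # Square pixel aperture of side a (in samples)
    a = 64
    square = ((np.abs(x - cx) <= a / 2) & (np.abs(y - cy) <= a / 2)).astype(float)

    # Equilateral triangle with (approximately) the same area: s^2*sqrt(3)/4 = a^2
    s = a * 2 / 3**0.25
    h = s * np.sqrt(3) / 2
    u, v = x - cx, (y - cy) + h / 3          # shift so the centroid sits at the grid center
    triangle = ((v >= 0) & (v <= h) &
                (np.abs(u) <= (1 - v / h) * s / 2)).astype(float)

    mtf_sq = aperture_mtf(square)
    mtf_tri = aperture_mtf(triangle)
    print("MTF at a mid-frequency sample (square, triangle):",
          round(mtf_sq[N // 2 + 8], 3), round(mtf_tri[N // 2 + 8], 3))
    ```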

  2. Aerosol Optical Depth Value-Added Product for the SAS-He Instrument

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ermold, B; Flynn, CJ; Barnard, J

    2013-11-27

    The Shortwave Array Spectroradiometer – Hemispheric (SAS-He) is a ground-based, shadowband instrument that measures the direct and diffuse solar irradiance. In this regard, the instrument is similar to the Multi-Filter Rotating Shadowband Radiometer (MFRSR) – an instrument that has been in the ARM suite of instruments for more than 15 years. However, the two instruments differ significantly in wavelength resolution and range. In particular, the MFRSR only observes the spectrum in six discrete wavelength channels of about 10 nm width from 415 to 940 nm. The SAS-He, in contrast, incorporates two fiber-coupled grating spectrometers: a Si CCD spectrometer with over 2000 pixels covering the range from 325 to 1040 nm with ~2.5 nm resolution, and an InGaAs array spectrometer with 256 pixels covering the wavelength range from 960 to 1700 nm with ~6 nm resolution.

  3. Fast, Deep-Record-Length, Fiber-Coupled Photodiode Imaging Array for Plasma Diagnostics

    NASA Astrophysics Data System (ADS)

    Brockington, Samuel; Case, Andrew; Witherspoon, F. Douglas

    2015-11-01

    HyperV Technologies has been developing an imaging diagnostic composed of an array of fast, low-cost, long-record-length, fiber-optically coupled photodiode channels to investigate plasma dynamics and other fast, bright events. By coupling an imaging fiber bundle to a bank of amplified photodiode channels, imagers and streak imagers can be constructed. By interfacing analog photodiode systems directly to commercial analog-to-digital converters and modern memory chips, a scalable solution for 100- to 1000-pixel systems with 14-bit resolution and record lengths of 128k frames has been developed. HyperV is applying these techniques to construct a prototype 1000-pixel framing camera with up to 100 Msamples/sec rate and 10- to 14-bit depth. Preliminary experimental results as well as future plans will be discussed. Work supported by USDOE Phase 2 SBIR Grant DE-SC0009492.

  4. Super-resolution optics for virtual reality

    NASA Astrophysics Data System (ADS)

    Grabovičkić, Dejan; Benitez, Pablo; Miñano, Juan C.; Zamora, Pablo; Buljan, Marina; Narasimhan, Bharathwaj; Nikolic, Milena I.; Lopez, Jesus; Gorospe, Jorge; Sanchez, Eduardo; Lastres, Carmen; Mohedano, Ruben

    2017-06-01

    In present commercial Virtual Reality (VR) headsets the perceived resolution is still limited, since the VR pixel density (typically 10-15 pixels/deg) is well below what the human eye can resolve (60 pixels/deg). We present novel advanced optical design approaches that dramatically increase the perceived resolution of VR headsets while keeping the large FoV required in VR applications. These approaches can be applied to a vast number of optical architectures, including advanced configurations such as multichannel designs. All of this is done at the optical design stage, and no eye tracker is needed in the headset.

  5. Planck early results. XXV. Thermal dust in nearby molecular clouds

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Abergel, A.; Ade, P. A. R.; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Balbi, A.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Battaner, E.; Benabed, K.; Benoît, A.; Bernard, J.-P.; Bersanelli, M.; Bhatia, R.; Bock, J. J.; Bonaldi, A.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Cabella, P.; Cardoso, J.-F.; Catalano, A.; Cayón, L.; Challinor, A.; Chamballu, A.; Chiang, L.-Y.; Chiang, C.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Couchot, F.; Coulais, A.; Crill, B. P.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Gasperis, G.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Delouis, J.-M.; Désert, F.-X.; Dickinson, C.; Dobashi, K.; Donzelli, S.; Doré, O.; Dörl, U.; Douspis, M.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Galeotta, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Guillet, V.; Hansen, F. K.; Harrison, D.; Henrot-Versillé, S.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hovest, W.; Hoyland, R. J.; Huffenberger, K. M.; Jaffe, A. H.; Jones, A.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knox, L.; Kurki-Suonio, H.; Lagache, G.; Lamarre, J.-M.; Lasenby, A.; Laureijs, R. J.; Lawrence, C. R.; Leach, S.; Leonardi, R.; Leroy, C.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; MacTavish, C. J.; Maffei, B.; Mandolesi, N.; Mann, R.; Maris, M.; Marshall, D. J.; Martin, P.; Martínez-González, E.; Masi, S.; Matarrese, S.; Matthai, F.; Mazzotta, P.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, A.; Naselsky, P.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Osborne, S.; Pajot, F.; Paladini, R.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Perrotta, F.; Piacentini, F.; Piat, M.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Poutanen, T.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Reach, W. T.; Rebolo, R.; Reinecke, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, P.; Smoot, G. F.; Starck, J.-L.; Stivoli, F.; Stolyarov, V.; Sudiwala, R.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Torre, J.-P.; Tristram, M.; Tuovinen, J.; Umana, G.; Valenziano, L.; Verstraete, L.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Yvon, D.; Zacchei, A.; Zonca, A.

    2011-12-01

    Planck allows unbiased mapping of Galactic sub-millimetre and millimetre emission from the most diffuse regions to the densest parts of molecular clouds. We present an early analysis of the Taurus molecular complex, on line-of-sight-averaged data and without component separation. The emission spectrum measured by Planck and IRAS can be fitted pixel by pixel using a single modified blackbody. Some systematic residuals are detected at 353 GHz and 143 GHz, with amplitudes around -7% and +13%, respectively, indicating that the measured spectra are likely more complex than a simple modified blackbody. Significant positive residuals are also detected in the molecular regions and in the 217 GHz and 100 GHz bands, mainly caused by the contribution of the J = 2 → 1 and J = 1 → 0 12CO and 13CO emission lines. We derive maps of the dust temperature T, the dust spectral emissivity index β, and the dust optical depth at 250 μm τ250. The temperature map illustrates the cooling of the dust particles in thermal equilibrium with the incident radiation field, from 16-17 K in the diffuse regions to 13-14 K in the dense parts. The distribution of spectral indices is centred at 1.78, with a standard deviation of 0.08 and a systematic error of 0.07. We detect a significant T-β anti-correlation. The dust optical depth map reveals the spatial distribution of the column density of the molecular complex from the densest molecular regions to the faint diffuse regions. We use near-infrared extinction and H I 21-cm data to perform a quantitative analysis of the spatial variations of the measured dust optical depth at 250 μm per hydrogen atom, τ250/NH. We report an increase of τ250/NH by a factor of about 2 between the atomic phase and the molecular phase, which has a strong impact on the equilibrium temperature of the dust particles. Corresponding author: A. Abergel, e-mail: alain.abergel@ias.u-psud.fr
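
    As a rough per-pixel illustration of the single-modified-blackbody fit mentioned above, I_ν = τ250 (ν/ν0)^β B_ν(T), the Python sketch below fits synthetic band intensities with scipy.optimize.curve_fit. The band set, the MJy/sr unit convention, and the starting values are assumptions, not the Planck pipeline.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    H, K, C = 6.626e-34, 1.381e-23, 2.998e8    # SI constants
    NU0 = C / 250e-6                            # reference frequency at 250 um
    TO_MJY_SR = 1e20                            # W m^-2 Hz^-1 sr^-1  ->  MJy/sr

    def modified_blackbody(nu, tau250, beta, T):
        """Single modified blackbody in MJy/sr: I_nu = tau250 * (nu/nu0)^beta * B_nu(T)."""
        b_nu = 2 * H * nu**3 / C**2 / np.expm1(H * nu / (K * T))
        return TO_MJY_SR * tau250 * (nu / NU0) ** beta * b_nu

    # Hypothetical band frequencies (Hz), roughly IRAS 100 um plus submm/mm bands
    nu = np.array([3.0e12, 857e9, 545e9, 353e9, 143e9])

    # One synthetic "pixel" with known parameters plus 2% noise, then the per-pixel fit
    true_params = (1e-4, 1.8, 15.0)             # tau250, beta, T [K]
    rng = np.random.default_rng(0)
    I_obs = modified_blackbody(nu, *true_params) * (1 + 0.02 * rng.standard_normal(nu.size))
    popt, _ = curve_fit(modified_blackbody, nu, I_obs, p0=(5e-5, 1.7, 17.0))
    print("fitted tau250, beta, T:", popt)
    ```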

  6. A Novel Method for Estimating Shortwave Direct Radiative Effect of Above-Cloud Aerosols Using CALIOP and MODIS Data

    NASA Technical Reports Server (NTRS)

    Zhang, Zhibo; Meyer, Kerry G.; Platnick, Steven; Oreopoulos, Lazaros; Lee, Dongmin; Yu, Hongbin

    2014-01-01

    This paper describes an efficient and unique method for computing the shortwave direct radiative effect (DRE) of aerosol residing above low-level liquid-phase clouds using CALIOP and MODIS data. It addresses the overlap of aerosol and cloud rigorously by utilizing the joint histogram of cloud optical depth and cloud top pressure while also accounting for subgrid-scale variations of aerosols. The method is computationally efficient because of its use of grid-level cloud and aerosol statistics, instead of pixel-level products, and a pre-computed look-up table based on radiative transfer calculations. We verify that for smoke over the southeast Atlantic Ocean the method yields a seasonal mean instantaneous (approximately 1:30 PM local time) shortwave DRE of above-cloud aerosol (ACA) that generally agrees with the more rigorous pixel-level computation within 4 percent. We also estimate the impact of potential CALIOP aerosol optical depth (AOD) retrieval bias of ACA on the DRE. We find that the regional and seasonal mean instantaneous DRE of ACA over the southeast Atlantic Ocean would increase from the original value of 6.4 W m^-2, based on operational CALIOP AOD, to 9.6 W m^-2 if CALIOP AOD retrievals are biased low by a factor of 1.5 (Meyer et al., 2013), and further to 30.9 W m^-2 if CALIOP AOD retrievals are biased low by a factor of 5 as suggested by Jethva et al. (2014). In contrast, the instantaneous ACA radiative forcing efficiency (RFE) remains relatively invariant in all cases at about 53 W m^-2 AOD^-1, suggesting a near-linear relation between the instantaneous RFE and AOD. We also compute the annual mean instantaneous shortwave DRE of light-absorbing aerosols (i.e., smoke and polluted dust) over global oceans based on 4 years of CALIOP and MODIS data. We find that the variability of the annual mean shortwave DRE of above-cloud light-absorbing aerosol is mainly driven by the optical depth of the underlying clouds. While we demonstrate our method using CALIOP and MODIS data, it can also be extended to other satellite data sets, as well as climate model outputs.
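
    The abstract describes the method only at a high level; the Python sketch below illustrates the general idea of weighting a pre-computed DRE look-up table by a grid-cell joint histogram of cloud properties (here collapsed to cloud optical depth only). The LUT values, bin definitions, and interpolation are illustrative assumptions, not the paper's actual table.

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    # Hypothetical pre-computed LUT: instantaneous shortwave DRE (W m^-2) of
    # above-cloud aerosol as a function of aerosol optical depth (AOD) and
    # underlying cloud optical depth (COD), for one solar geometry.
    aod_axis = np.array([0.0, 0.1, 0.2, 0.4, 0.8])
    cod_axis = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 40.0])
    lut_values = np.outer(aod_axis, np.log1p(cod_axis)) * 25.0      # illustrative numbers only
    dre_lut = RegularGridInterpolator((aod_axis, cod_axis), lut_values,
                                      bounds_error=False, fill_value=None)

    def grid_cell_dre(aod_mean, cod_hist_fraction, cod_bin_centers):
        """Grid-cell mean DRE: weight the LUT by the joint-histogram cloud fraction
        in each COD bin (the cloud-top pressure dimension is omitted for brevity)."""
        pts = np.column_stack([np.full_like(cod_bin_centers, aod_mean), cod_bin_centers])
        return float(np.sum(cod_hist_fraction * dre_lut(pts)))

    # Example: one grid cell with above-cloud AOD = 0.3 over a low-cloud COD histogram
    cod_centers = np.array([1.5, 3.5, 7.5, 15.0, 30.0])
    cod_fraction = np.array([0.05, 0.15, 0.40, 0.30, 0.10])   # sums to the cloudy fraction
    print("cell-mean instantaneous DRE (W m^-2):", grid_cell_dre(0.3, cod_fraction, cod_centers))
    ```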

  7. Comparative analysis of respiratory motion tracking using Microsoft Kinect v2 sensor.

    PubMed

    Silverstein, Evan; Snyder, Michael

    2018-05-01

    To present and evaluate a straightforward implementation of a marker-less respiratory motion-tracking process utilizing the Kinect v2 camera as a gating tool during 4DCT or during radiotherapy treatments. Utilizing the depth sensor on the Kinect as well as author-written C# code, the respiratory motion of a subject was tracked by recording depth values obtained at user-selected points on the subject, with each point representing one pixel in the depth image. As a patient breathes, specific anatomical points on the chest/abdomen move slightly across pixels within the depth image. By tracking how depth values change for a specific pixel, instead of how the anatomical point moves throughout the image, a respiratory trace can be obtained based on the changing depth values of the selected pixel. Tracking these values was implemented with a marker-less setup. Varian's RPM system and the Anzai belt system were used in tandem with the Kinect to compare the respiratory traces obtained by each, using two different subjects. Analysis of the depth information from the Kinect for purposes of phase- and amplitude-based binning correlated well with the RPM and Anzai systems. Interquartile range (IQR) values were obtained by comparing the times associated with specific amplitude and phase percentages against each product. The IQR time spans indicated the Kinect would measure specific percentage values within 0.077 s for Subject 1 and 0.164 s for Subject 2 when compared to values obtained with RPM or Anzai. For 4DCT scans, these times correspond to less than 1 mm of couch movement and would create an offset of half an acquired slice. By tracking depth values of user-selected pixels within the depth image, rather than tracking specific anatomical locations, respiratory motion can be tracked and visualized utilizing the Kinect with results comparable to those of the Varian RPM and Anzai belt. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
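
    The paper's implementation is in C#; purely as an illustrative sketch of the underlying idea (recording how the depth value of a fixed pixel changes frame by frame to form a respiratory trace), the Python code below processes a stream of depth frames and extracts a trace, its amplitude, and a crude phase. The frame source, smoothing, and all numbers are assumptions.

    ```python
    import numpy as np

    def respiratory_trace(depth_frames, pixel, smooth=5):
        """Track the depth value (mm) of one user-selected pixel across frames.
        depth_frames: iterable of (H, W) arrays; pixel: (row, col)."""
        r, c = pixel
        trace = np.array([frame[r, c] for frame in depth_frames], dtype=float)
        if smooth > 1:                                   # simple moving-average smoothing
            kernel = np.ones(smooth) / smooth
            trace = np.convolve(trace, kernel, mode='same')
        amplitude = trace.max() - trace.min()
        # Crude phase: normalized position within the inhale/exhale excursion
        phase = (trace - trace.min()) / max(amplitude, 1e-9)
        return trace, amplitude, phase

    # Synthetic 30 fps "chest" depth frames: a breathing sinusoid plus sensor noise
    rng = np.random.default_rng(0)
    frames = (800 + 8 * np.sin(2 * np.pi * 0.25 * t / 30) + rng.normal(0, 1, (64, 64))
              for t in range(300))
    trace, amp, phase = respiratory_trace(frames, pixel=(32, 32))
    print("peak-to-peak excursion (mm):", round(amp, 1))
    ```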

  8. A novel automated method for doing registration and 3D reconstruction from multi-modal RGB/IR image sequences

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2016-09-01

    In recent years, multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor have become increasingly popular in surveillance and robotics applications. The advantages of using multi-modal camera rigs include improved foreground/background segmentation, a wider range of lighting conditions under which the system works, and richer information (e.g., visible light and heat signature) for target identification. However, the traditional computer vision approach of matching pairs of images using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method to overcome the lack of common features in RGB/IR image pairs by using a variational optimization algorithm to map the optical flow fields computed from the different-wavelength images. This results in the alignment of the flow fields, which in turn produces correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different-wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to other multi-modal image alignment methodologies as long as the scene contains sufficient depth variation, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig, and we determine our method's accuracy by comparing against ground truth.

  9. Estimation bias from using nonlinear Fourier plane correlators for sub-pixel image shift measurement and implications for the binary joint transform correlator

    NASA Astrophysics Data System (ADS)

    Grycewicz, Thomas J.; Florio, Christopher J.; Franz, Geoffrey A.; Robinson, Ross E.

    2007-09-01

    When using Fourier plane digital algorithms or an optical correlator to measure the correlation between digital images, interpolation by center-of-mass or quadratic estimation techniques can be used to estimate image displacement to the sub-pixel level. However, this can lead to a bias in the correlation measurement. This bias shifts the sub-pixel output measurement to be closer to the nearest pixel center than the actual location. The paper investigates the bias in the outputs of both digital and optical correlators, and proposes methods to minimize this effect. We use digital studies and optical implementations of the joint transform correlator to demonstrate optical registration with accuracies better than 0.1 pixels. We use both simulations of image shift and movies of a moving target as inputs. We demonstrate bias error for both center-of-mass and quadratic interpolation, and discuss the reasons that this bias is present. Finally, we suggest measures to reduce or eliminate the bias effects. We show that when sub-pixel bias is present, it can be eliminated by modifying the interpolation method. By removing the bias error, we improve registration accuracy by thirty percent.
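
    The abstract does not reproduce the estimators; as a rough illustration of three-point quadratic peak interpolation and the pull-toward-pixel-center bias it can exhibit, the Python sketch below correlates a reference signal against sub-pixel-shifted copies and compares the estimated and true shifts. The Gaussian signal model and shift range are assumptions.

    ```python
    import numpy as np

    def quadratic_subpixel_peak(corr):
        """Integer peak location plus three-point quadratic (parabolic) interpolation."""
        k = int(np.argmax(corr))
        ym, y0, yp = corr[k - 1], corr[k], corr[k + 1]
        delta = 0.5 * (ym - yp) / (ym - 2 * y0 + yp)   # sub-pixel offset in (-0.5, 0.5)
        return k + delta

    def gaussian(x, mu, sigma=2.0):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    x = np.arange(128, dtype=float)
    ref = gaussian(x, 64.0)

    true_shifts = np.linspace(0.0, 1.0, 11)            # fractions of a pixel
    errors = []
    for s in true_shifts:
        shifted = gaussian(x, 64.0 + s)
        corr = np.correlate(shifted, ref, mode='same') # cross-correlation peak near the shift
        est = quadratic_subpixel_peak(corr) - 64.0
        errors.append(est - s)

    # Bias pattern: the error is ~0 at shifts of 0, 0.5 and 1 pixel, and in between
    # the estimate is pulled slightly toward the nearest pixel center.
    print(np.round(errors, 4))
    ```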

  10. Anti-aliasing techniques in photon-counting depth imaging using GHz clock rates

    NASA Astrophysics Data System (ADS)

    Krichel, Nils J.; McCarthy, Aongus; Collins, Robert J.; Buller, Gerald S.

    2010-04-01

    Single-photon detection technologies in conjunction with low laser illumination powers allow the eye-safe acquisition of time-of-flight range information from non-cooperative target surfaces. We previously presented a photon-counting depth imaging system designed for the rapid acquisition of three-dimensional target models by steering a single scanning pixel across the field angle of interest. To minimise the per-pixel dwell times required to obtain sufficient photon statistics for accurate distance resolution, periodic illumination at multi-MHz repetition rates was applied. Modern time-correlated single-photon counting (TCSPC) hardware allowed depth measurements with sub-mm precision. Resolving the absolute target range with a fast periodic signal is only possible at sufficiently short distances: if the round-trip time to an object extends beyond the timespan between two trigger pulses, the return signal cannot be assigned to an unambiguous range value. Whereas constructing a precise depth image based on relative results may still be possible, problems emerge for large or unknown pixel-by-pixel separations or in applications with a wide range of possible scene distances. We introduce a technique to avoid range ambiguity effects in time-of-flight depth imaging systems at high average pulse rates. A long pseudo-random bitstream is used to trigger the illuminating laser, and a cyclic, FFT-supported analysis algorithm is used to search for the pattern within the return photon events. We demonstrate this approach at base clock rates of up to 2 GHz with varying pattern lengths, allowing unambiguous distances of several kilometres. Scans at long stand-off distances and of scenes with large pixel-to-pixel range differences are presented, and numerical simulations are performed to investigate the relative merits of the technique.
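
    No implementation detail is given in the abstract; as a rough illustration of recovering an unambiguous delay from pseudo-random illumination, the Python sketch below circularly cross-correlates a binned photon-return histogram with the transmitted bit pattern using FFTs. The pattern length, count rates, and Poisson photon model are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Pseudo-random transmit pattern: one bit per clock period (e.g. 2 GHz -> 0.5 ns bins)
    n_bits = 4096
    pattern = rng.integers(0, 2, n_bits).astype(float)

    # Simulated return: the pattern delayed (circularly) by the round-trip time,
    # converted to sparse photon counts plus background
    true_delay_bins = 1537
    signal_rate, background_rate = 0.05, 0.01            # photons per bin, illustrative
    expected = background_rate + signal_rate * np.roll(pattern, true_delay_bins)
    histogram = rng.poisson(expected * 200)               # 200 pattern repetitions accumulated

    # Circular cross-correlation via FFT; the peak gives the unambiguous delay
    corr = np.fft.ifft(np.fft.fft(histogram) *
                       np.conj(np.fft.fft(pattern - pattern.mean()))).real
    estimated_delay = int(np.argmax(corr))
    print("true delay bins:", true_delay_bins, "estimated:", estimated_delay)
    ```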

  11. The Level 0 Pixel Trigger system for the ALICE experiment

    NASA Astrophysics Data System (ADS)

    Aglieri Rinella, G.; Kluge, A.; Krivda, M.; ALICE Silicon Pixel Detector project

    2007-01-01

    The ALICE Silicon Pixel Detector contains 1200 readout chips. Fast-OR signals indicate the presence of at least one hit in the 8192 pixel matrix of each chip. The 1200 bits are transmitted every 100 ns on 120 data readout optical links using the G-Link protocol. The Pixel Trigger System extracts and processes them to deliver an input signal to the Level 0 trigger processor targeting a latency of 800 ns. The system is compact, modular and based on FPGA devices. The architecture allows the user to define and implement various trigger algorithms. The system uses advanced 12-channel parallel optical fiber modules operating at 1310 nm as optical receivers and 12 deserializer chips closely packed in small area receiver boards. Alternative solutions with multi-channel G-Link deserializers implemented directly in programmable hardware devices were investigated. The design of the system and the progress of the ALICE Pixel Trigger project are described in this paper.

  12. The 27-28 October 1986 FIRE IFO Cirrus Case Study: Cirrus Parameter Relationships Derived from Satellite and Lidar Data

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Young, David F.; Sassen, Kenneth; Alvarez, Joseph M.; Grund, Christian J.

    1990-01-01

    Cirrus cloud radiative and physical characteristics are determined using a combination of ground-based, aircraft, and satellite measurements taken as part of the FIRE Cirrus Intensive Field Observations (IFO) during October and November 1986. Lidar backscatter data are used with rawinsonde data to define cloud base, center, and top heights and the corresponding temperatures. Coincident GOES 4-km visible (0.65 micro-m) and 8-km infrared window (11.5 micro-m) radiances are analyzed to determine cloud emittances and reflectances. Infrared optical depth is computed from the emittance results. Visible optical depth is derived from reflectance using a theoretical ice crystal scattering model and an empirical bidirectional reflectance model. No clouds with visible optical depths greater than 5 or infrared optical depths less than 0.1 were used in the analysis. Average cloud thickness ranged from 0.5 km to 8.0 km for the 71 scenes. Mean vertical beam emittances derived from cloud-center temperatures were 0.62 for all scenes compared to 0.33 for the case study (27-28 October) reflecting the thinner clouds observed for the latter scenes. Relationships between cloud emittance, extinction coefficients, and temperature for the case study are very similar to those derived from earlier surface- based studies. The thicker clouds seen during the other IFO days yield different results. Emittances derived using cloud-top temperature were ratioed to those determined from cloud-center temperature. A nearly linear relationship between these ratios and cloud-center temperature holds promise for determining actual cloud-top temperatures and cloud thicknesses from visible and infrared radiance pairs. The mean ratio of the visible scattering optical depth to the infrared absorption optical depth was 2.13 for these data. This scattering efficiency ratio shows a significant dependence on cloud temperature. Values of mean scattering efficiency as high as 2.6 suggest the presence of small ice particles at temperatures below 230 K. The parameterization of visible reflectance in terms of cloud optical depth and clear-sky reflectance shows promise as a simplified method for interpreting visible satellite data reflected from cirrus clouds. Large uncertainties in the optical parameters due to cloud reflectance anisotropy and shading were found by analyzing data for various solar zenith angles and for simultaneous AVHRR data. Inhomogeneities in the cloud fields result in uneven cloud shading that apparently causes the occurrence of anomalously dark, cloudy pixels in the GOES data. These shading effects complicate the interpretation of the satellite data. The results highlight the need for additional study of cirrus cloud scattering processes and remote sensing techniques.

  13. The 27-28 October 1986 FIRE IFO Cirrus Case Study: Cirrus Parameter Relationships Derived from Satellite and Lidar Data

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Young, David F.; Sassen, Kenneth; Alvarez, Joseph M.; Grund, Christian J.

    1996-01-01

    Cirrus cloud radiative and physical characteristics are determined using a combination of ground-based, aircraft, and satellite measurements taken as part of the First ISCCP Regional Experiment (FIRE) cirrus intensive field observations (IFO) during October and November 1986. Lidar backscatter data are used with rawinsonde data to define cloud base, center, and top heights and the corresponding temperatures. Coincident GOES-4 4-km visible (0.65 micrometer) and 8-km infrared window (11.5 micrometer) radiances are analyzed to determine cloud emittances and reflectances. Infrared optical depth is computed from the emittance results. Visible optical depth is derived from reflectance using a theoretical ice crystal scattering model and an empirical bidirectional reflectance model. No clouds with visible optical depths greater than 5 or infrared optical depths less than 0.1 were used in the analysis. Average cloud thickness ranged from 0.5 km to 8.0 km for the 71 scenes. Mean vertical beam emittances derived from cloud-center temperatures were 0.62 for all scenes compared to 0.33 for the case study (27-28 October), reflecting the thinner clouds observed for the latter scenes. Relationships between cloud emittance, extinction coefficients, and temperature for the case study are very similar to those derived from earlier surface-based studies. The thicker clouds seen during the other IFO days yield different results. Emittances derived using cloud-top temperature were ratioed to those determined from cloud-center temperature. A nearly linear relationship between these ratios and cloud-center temperature holds promise for determining actual cloud-top temperature and cloud thickness from visible and infrared radiance pairs. The mean ratio of the visible scattering optical depth to the infrared absorption optical depth was 2.13 for these data. This scattering efficiency ratio shows a significant dependence on cloud temperature. Values of mean scattering efficiency as high as 2.6 suggest the presence of small ice particles at temperatures below 230 K. The parameterization of visible reflectance in terms of cloud optical depth and clear-sky reflectance shows promise as a simplified method for interpreting visible satellite data reflected from cirrus clouds. Large uncertainties in the optical parameters due to cloud reflectance anisotropy and shading were found by analyzing data for various solar zenith angles and for simultaneous Advanced Very High Resolution Radiometer (AVHRR) data. Inhomogeneities in the cloud fields result in uneven cloud shading that apparently causes the occurrence of anomalously dark, cloudy pixels in the GOES data. These shading effects complicate the interpretation of the satellite data. The results highlight the need for additional study of cirrus cloud scattering processes and remote sensing techniques.

  14. A calibration method immune to the projector errors in fringe projection profilometry

    NASA Astrophysics Data System (ADS)

    Zhang, Ruihua; Guo, Hongwei

    2017-08-01

    In the fringe projection technique, system calibration is a tedious task required to establish the mapping relationship between object depths and fringe phases. In particular, it is not easy to accurately determine the parameters of the projector in this system, which may induce errors in the measurement results. To solve this problem, this paper proposes a new calibration method that uses the cross-ratio invariance in the system geometry to determine the phase-to-depth relations. In it, we analyze the epipolar geometry of the fringe projection system. On each epipolar plane, a depth variation along an incident ray induces a pixel movement along the epipolar line on the image plane of the camera. These depth variations and pixel movements are connected by projective transformations, under which the cross-ratio of each of them remains invariant. Based on this fact, we suggest measuring the depth map by using this cross-ratio invariance. First, we shift the reference board in its perpendicular direction to three positions with known depths and measure their phase maps as the reference phase maps; second, when measuring an object, we calculate the object depth at each pixel by equating the cross-ratio of the depths to that of the corresponding pixels having the same phase on the image plane of the camera. This method is immune to errors originating from the projector, including distortions both in the geometric shapes and in the intensity profiles of the projected fringe patterns. The experimental results demonstrate that the proposed method is feasible and valid.
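
    As a worked illustration of the cross-ratio step described above, the Python sketch below equates the cross-ratio of three known reference depths and the unknown depth with that of the corresponding pixel positions and solves for the depth; a hypothetical projective pixel-versus-depth mapping is used only to check self-consistency.

    ```python
    import numpy as np

    def cross_ratio(p, p1, p2, p3):
        """Cross-ratio of four collinear values (p, p1; p2, p3)."""
        return ((p - p2) * (p1 - p3)) / ((p - p3) * (p1 - p2))

    def depth_from_cross_ratio(u, u1, u2, u3, z1, z2, z3):
        """Solve CR(z, z1; z2, z3) = CR(u, u1; u2, u3) for the unknown depth z,
        given three reference depths z1..z3 and the matched pixel positions u1..u3
        (and u, the pixel with the same phase on the object)."""
        rho = cross_ratio(u, u1, u2, u3)
        return (z2 * (z1 - z3) - rho * z3 * (z1 - z2)) / ((z1 - z3) - rho * (z1 - z2))

    # Self-consistency check with a hypothetical projective pixel-vs-depth model
    def pixel_of_depth(z, a=500.0, b=40.0, c=1.0, d=0.002):
        return (a + b * z) / (c + d * z)       # a projective (Moebius) mapping preserves cross-ratios

    z_refs = (0.0, 20.0, 40.0)                  # reference-board depths (mm), illustrative
    u_refs = tuple(pixel_of_depth(z) for z in z_refs)
    z_true = 27.3
    u_obs = pixel_of_depth(z_true)
    z_est = depth_from_cross_ratio(u_obs, *u_refs, *z_refs)
    print("true depth:", z_true, "recovered:", round(z_est, 6))
    ```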

  15. Full-color large-scaled computer-generated holograms using RGB color filters.

    PubMed

    Tsuchiyama, Yasuhiro; Matsushima, Kyoji

    2017-02-06

    A technique using RGB color filters is proposed for creating high-quality full-color computer-generated holograms (CGHs). The fringe of these CGHs is composed of more than a billion pixels. The CGHs reconstruct full-parallax three-dimensional color images with a deep sensation of depth caused by natural motion parallax. The simulation technique as well as the principle and challenges of high-quality full-color reconstruction are presented to address the design of filter properties suitable for large-scaled CGHs. Optical reconstructions of actual fabricated full-color CGHs are demonstrated in order to verify the proposed techniques.

  16. Retrieval of aerosol optical depth over bare soil surfaces using time series of MODIS imagery

    NASA Astrophysics Data System (ADS)

    Yuan, Zhengwu; Yuan, Ranyin; Zhong, Bo

    2014-11-01

    Aerosol Optical Depth (AOD) is one of the key parameters that not only reflects the characterization of atmospheric turbidity but also identifies the climate effects of aerosol. The current MODIS aerosol estimation algorithm over land is based on the "dark-target" approach, which works only over densely vegetated surfaces; for non-densely vegetated surfaces (such as snow/ice, desert, and bare soil), this method fails. In this study, we develop an algorithm to derive AOD over bare soil surfaces. Firstly, the method uses a time series of MODIS imagery to detect the "clearest" observations during the non-growing season in multiple years for each pixel. Secondly, the "clearest" observations, after suitable atmospheric correction, are used to fit the bare soil's bidirectional reflectance distribution function (BRDF) using a kernel model. Once the bare soil's BRDF is established, the surface reflectance of "hazy" observations can be simulated, and the AOD over the bare soil surfaces is derived. Preliminary validation by comparison with ground measurements from AERONET at Xianghe sites shows good agreement.
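
    The abstract names a kernel-driven BRDF model without formulas; under the assumption of a standard linear kernel form R = f_iso + f_vol·K_vol + f_geo·K_geo, the Python sketch below fits the three weights to the "clearest" atmospherically corrected observations by least squares and predicts the surface reflectance for a new geometry. The kernel values are placeholders, not the Ross-Li kernel expressions.

    ```python
    import numpy as np

    def fit_kernel_brdf(reflectance, k_vol, k_geo):
        """Fit f_iso, f_vol, f_geo of a linear kernel-driven BRDF model
        R = f_iso + f_vol * K_vol + f_geo * K_geo by least squares."""
        A = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
        coeffs, *_ = np.linalg.lstsq(A, reflectance, rcond=None)
        return coeffs                                   # (f_iso, f_vol, f_geo)

    def predict_reflectance(coeffs, k_vol, k_geo):
        f_iso, f_vol, f_geo = coeffs
        return f_iso + f_vol * k_vol + f_geo * k_geo

    # "Clearest" observations for one bare-soil pixel: reflectance plus the kernel
    # values evaluated at each observation's sun/view geometry (placeholder numbers).
    k_vol_obs = np.array([-0.02, 0.05, 0.11, 0.03, -0.01, 0.08])
    k_geo_obs = np.array([-1.30, -1.10, -0.90, -1.20, -1.25, -1.00])
    refl_obs  = np.array([0.181, 0.196, 0.210, 0.190, 0.184, 0.203])

    coeffs = fit_kernel_brdf(refl_obs, k_vol_obs, k_geo_obs)
    # Simulated clear-sky surface reflectance for a "hazy" observation's geometry;
    # the mismatch with the measured reflectance is what drives the AOD retrieval.
    print("predicted surface reflectance:", round(predict_reflectance(coeffs, 0.06, -1.05), 4))
    ```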

  17. NEUTRAL HYDROGEN OPTICAL DEPTH NEAR STAR-FORMING GALAXIES AT z Almost-Equal-To 2.4 IN THE KECK BARYONIC STRUCTURE SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rakic, Olivera; Schaye, Joop; Steidel, Charles C.

    We study the interface between galaxies and the intergalactic medium by measuring the absorption by neutral hydrogen in the vicinity of star-forming galaxies at z ≈ 2.4. Our sample consists of 679 rest-frame UV-selected galaxies with spectroscopic redshifts that have impact parameters <2 (proper) Mpc to the line of sight of one of the 15 bright background QSOs and that fall within the redshift range of its Lyα forest. We present the first two-dimensional maps of the absorption around galaxies, plotting the median Lyα pixel optical depth as a function of transverse and line-of-sight separation from galaxies. The Lyα optical depths are measured using an automatic algorithm that takes advantage of all available Lyman series lines. The median optical depth, and hence the median density of atomic hydrogen, drops by more than an order of magnitude around 100 kpc, which is similar to the virial radius of the halos thought to host the galaxies. The median remains enhanced, at the >3σ level, out to at least 2.8 Mpc (i.e., >9 comoving Mpc), but the scatter at a given distance is large compared with the median excess optical depth, suggesting that the gas is clumpy. Within 100 (200) kpc, and over ±165 km s^-1, the covering fraction of gas with Lyα optical depth greater than unity is 100 (+0/−32)% (66% ± 16%). Absorbers with τ_Lyα > 0.1 are typically closer to galaxies than random. The mean galaxy overdensity around absorbers increases with the optical depth and also as the length scale over which the galaxy overdensity is evaluated is decreased. Absorbers with τ_Lyα ≈ 1 reside in regions where the galaxy number density is close to the cosmic mean on scales ≥0.25 Mpc. We clearly detect two types of redshift-space anisotropies. On scales <200 km s^-1, or <1 Mpc, the absorption is stronger along the line of sight than in the transverse direction. This "finger of God" effect may be due to redshift errors, but is probably dominated by gas motions within or very close to the halos. On the other hand, on scales of 1.4-2.0 Mpc the absorption is compressed along the line of sight (with >3σ significance), an effect that we attribute to large-scale infall (i.e., the Kaiser effect).

  18. Multiocular image sensor with on-chip beam-splitter and inner meta-micro-lens for single-main-lens stereo camera.

    PubMed

    Koyama, Shinzo; Onozawa, Kazutoshi; Tanaka, Keisuke; Saito, Shigeru; Kourkouss, Sahim Mohamed; Kato, Yoshihisa

    2016-08-08

    We developed multiocular 1/3-inch 2.75-μm-pixel-size 2.1M-pixel image sensors by co-design of both an on-chip beam-splitter and a 100-nm-width 800-nm-depth patterned inner meta-micro-lens for single-main-lens stereo camera systems. A camera with the multiocular image sensor can capture a horizontally one-dimensional light field: the on-chip beam-splitter divides rays horizontally according to their incident angle, and the inner meta-micro-lens collects the divided rays into pixels with small optical loss. Cross-talk between adjacent light field images of a fabricated binocular image sensor and of a quad-ocular image sensor is as low as 6% and 7%, respectively. By selecting two images from the one-dimensional light field images, a selectable baseline for stereo vision is realized to view close objects with a single main lens. In addition, by adding multiple light field images with different ratios, the baseline distance can be tuned within the aperture of the main lens. We suggest this electrically selectable or tunable baseline stereo vision to reduce the 3D fatigue of viewers.

  19. Matrix light and pixel light: optical system architecture and requirements to the light source

    NASA Astrophysics Data System (ADS)

    Spinger, Benno; Timinger, Andreas L.

    2015-09-01

    Modern automotive headlamps enable improved functionality for more driving comfort and safety. Matrix or pixel light headlamps are not restricted to either pure low-beam or pure high-beam functionality: light in the direction of oncoming traffic is selectively switched off, potential hazards can be marked via an isolated beam, and the illumination on the road can even follow a bend. The optical architectures that enable these advanced functionalities are diverse. Electromechanical shutters and lens units moved by electric motors were the first ways to realize these systems; switching multiple LED light sources is a more elegant and mechanically robust solution. While many basic functionalities can already be realized with a limited number of LEDs, an increasing number of pixels will lead to more driving comfort and better visibility. The required optical system not only needs to generate the desired beam distribution with a high angular dynamic, but also needs to guarantee minimal stray light and crosstalk between the different pixels. The direct projection of the LED array via a lens is a simple but not very efficient optical system. We discuss different optical elements for pre-collimating the light with minimal crosstalk and improved contrast between neighboring pixels. Depending on the selected optical system, we derive the basic light source requirements: luminance, surface area, contrast, flux, and color homogeneity.

  20. Focusing light through scattering media by polarization modulation based generalized digital optical phase conjugation

    NASA Astrophysics Data System (ADS)

    Yang, Jiamiao; Shen, Yuecheng; Liu, Yan; Hemphill, Ashton S.; Wang, Lihong V.

    2017-11-01

    Optical scattering prevents light from being focused through thick biological tissue at depths greater than ~1 mm. To break this optical diffusion limit, digital optical phase conjugation (DOPC) based wavefront shaping techniques are being actively developed. Previous DOPC systems employed spatial light modulators that modulated either the phase or the amplitude of the conjugate light field. Here, we achieve optical focusing through scattering media by using polarization modulation based generalized DOPC. First, we describe an algorithm to extract the polarization map from the measured scattered field. Then, we validate the algorithm through numerical simulations and find that the focusing contrast achieved by polarization modulation is similar to that achieved by phase modulation. Finally, we build a system using an inexpensive twisted nematic liquid crystal based spatial light modulator (SLM) and experimentally demonstrate light focusing through 3-mm-thick chicken breast tissue. Since polarization modulation based SLMs are widely used in displays and offer ever-increasing pixel counts with the prevalence of 4K displays, they are inexpensive and valuable devices for wavefront shaping.

  1. Terahertz imaging with compressive sensing

    NASA Astrophysics Data System (ADS)

    Chan, Wai Lam

    Most existing terahertz imaging systems are generally limited by slow image acquisition due to mechanical raster scanning. Other systems using focal plane detector arrays can acquire images in real time, but are either too costly or limited by low sensitivity in the terahertz frequency range. To design faster and more cost-effective terahertz imaging systems, the first part of this thesis proposes two new terahertz imaging schemes based on compressive sensing (CS). Both schemes can acquire amplitude and phase-contrast images efficiently with a single-pixel detector, thanks to powerful CS algorithms that enable the reconstruction of N-by-N pixel images from far fewer than N² measurements. The first approach, CS Fourier imaging, successfully reconstructs a 64x64 image of an object with 1.4 mm pixel size using a randomly chosen subset of the 4096 pixels that define the image in the Fourier plane. Only about 12% of the pixels are required to reconstruct the image of a selected object, equivalent to a 2/3 reduction in acquisition time. The second approach is single-pixel CS imaging, which uses a series of random masks for acquisition. Besides speeding up acquisition with a reduced number of measurements, the single-pixel system can further cut down acquisition time by electrical or optical spatial modulation of random patterns. In order to switch between random patterns at high speed in the single-pixel imaging system, the second part of this thesis implements a multi-pixel electrical spatial modulator for terahertz beams using active terahertz metamaterials. The first generation of this device consists of a 4x4 pixel array, where each pixel is an array of sub-wavelength-sized split-ring resonator elements fabricated on a semiconductor substrate and is independently controlled by applying an external voltage. The spatial modulator has a uniform modulation depth of around 40 percent across all pixels, and negligible crosstalk, at the resonant frequency. The second-generation spatial terahertz modulator, also based on metamaterials but with a higher resolution (32x32), is under development. An FPGA-based circuit is designed to control the large number of modulator pixels. Once fully implemented, this second-generation device will enable fast terahertz imaging with both pulsed and continuous-wave terahertz sources.
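
    As an illustration of the single-pixel CS scheme summarized above, the sketch below simulates bucket-detector measurements through random binary masks and reconstructs the image with iterative soft thresholding (ISTA) under an assumed DCT sparsity basis. The scene, mask count, and regularization weight are invented for the example; this is not the thesis' reconstruction code.

    ```python
    # Minimal single-pixel compressive-sensing sketch (illustrative only):
    # random binary masks, far fewer than N^2 bucket measurements, and ISTA
    # reconstruction assuming sparsity in the 2-D DCT domain.
    import numpy as np
    from scipy.fft import dctn, idctn

    rng = np.random.default_rng(0)
    n = 32                      # image is n x n pixels
    m = int(0.3 * n * n)        # ~30% of the Nyquist number of measurements

    # Ground-truth scene (a bright square) and random 0/1 masks
    x_true = np.zeros((n, n)); x_true[10:20, 12:22] = 1.0
    masks = rng.integers(0, 2, size=(m, n, n)).astype(float)

    # Bucket-detector measurements: one scalar per displayed mask
    A = masks.reshape(m, -1)                     # sensing matrix (m x n^2)
    y = A @ x_true.ravel()

    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of A^T A
    lam = 0.05 * np.max(np.abs(A.T @ y))         # sparsity weight (heuristic)

    c = np.zeros(n * n)                          # DCT coefficients of the estimate
    for _ in range(200):                         # ISTA iterations
        x = idctn(c.reshape(n, n), norm='ortho').ravel()
        grad = A.T @ (A @ x - y)
        c = c - dctn(grad.reshape(n, n), norm='ortho').ravel() / L
        c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)   # soft threshold

    x_rec = idctn(c.reshape(n, n), norm='ortho')
    print('relative error:', np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
    ```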

  2. Collection of holes in thick TlBr detectors at low temperature

    NASA Astrophysics Data System (ADS)

    Dönmez, Burçin; He, Zhong; Kim, Hadong; Cirignano, Leonard J.; Shah, Kanai S.

    2012-10-01

    A 3.5×3.5×4.6 mm³ thick TlBr detector with pixellated Au/Cr anodes made by Radiation Monitoring Devices Inc. was studied. The detector has a planar cathode and nine anode pixels surrounded by a guard ring. The pixel pitch is 1.0 mm. Digital pulse waveforms of preamplifier outputs were recorded using a multi-channel GaGe PCI digitizer board. Several experiments were carried out at -20 °C, with the detector under bias for over a month. An energy resolution of 1.7% FWHM at 662 keV was measured without any correction at -2400 V bias. Holes generated at all depths can be collected by the cathode at -2400 V bias, which made depth correction using the cathode-to-anode ratio technique difficult since both charge carriers contribute to the signal. An energy resolution of 5.1% FWHM at 662 keV was obtained from the best pixel electrode without depth correction at +1000 V bias. In this positive bias case, the pixel electrode was actually collecting holes. A hole mobility-lifetime product of 0.95×10⁻⁴ cm²/V has been estimated from the measurement data.
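
    The cathode-to-anode ratio technique mentioned above is, in single-carrier (electron-collecting) operation, commonly reduced to a near-linear mapping from the amplitude ratio to interaction depth. The sketch below illustrates that mapping under that simple assumption; the linearity breaks down exactly in the hole-collecting regime the abstract describes, and the function and numbers are illustrative only.

    ```python
    # Illustrative depth estimate from the cathode-to-anode amplitude ratio.
    # Assumes pure electron collection: the cathode signal scales roughly
    # linearly with depth while the pixel anode signal is nearly
    # depth-independent (small-pixel effect).
    import numpy as np

    def interaction_depth(cathode_amp, anode_amp, thickness_mm=4.6):
        """Estimate interaction depth (distance from the anode, in mm)."""
        ratio = np.clip(cathode_amp / anode_amp, 0.0, 1.0)
        return ratio * thickness_mm

    # Example: events near the cathode give ratio ~1, near the anode ~0
    print(interaction_depth(np.array([0.95, 0.10]), np.array([1.0, 1.0])))
    ```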

  3. Optical sectioning in wide-field microscopy obtained by dynamic structured light illumination and detection based on a smart pixel detector array.

    PubMed

    Mitić, Jelena; Anhut, Tiemo; Meier, Matthias; Ducros, Mathieu; Serov, Alexander; Lasser, Theo

    2003-05-01

    Optical sectioning in wide-field microscopy is achieved by illumination of the object with a continuously moving single-spatial-frequency pattern and detecting the image with a smart pixel detector array. This detector performs an on-chip electronic signal processing that extracts the optically sectioned image. The optically sectioned image is directly observed in real time without any additional postprocessing.
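
    The demodulation in this work is performed on-chip by the smart pixel array; as a rough offline analogue, the classic three-phase square-law formula below shows how an optically sectioned image can be extracted from wide-field images taken at three positions of the illumination pattern. This is a generic illustration under that three-phase assumption, not the paper's electronic processing.

    ```python
    # Offline analogue of optical-sectioning demodulation: the three-phase
    # square-law formula, assuming three wide-field images acquired with the
    # illumination grid shifted by 0, 1/3 and 2/3 of its period.
    import numpy as np

    def sectioned_image(i1, i2, i3):
        """Return the optically sectioned image from three phase-shifted frames."""
        return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
    ```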

  4. Neutral Hydrogen Optical Depth near Star-forming Galaxies at z ≈ 2.4 in the Keck Baryonic Structure Survey

    NASA Astrophysics Data System (ADS)

    Rakic, Olivera; Schaye, Joop; Steidel, Charles C.; Rudie, Gwen C.

    2012-06-01

    We study the interface between galaxies and the intergalactic medium by measuring the absorption by neutral hydrogen in the vicinity of star-forming galaxies at z ≈ 2.4. Our sample consists of 679 rest-frame UV-selected galaxies with spectroscopic redshifts that have impact parameters <2 (proper) Mpc to the line of sight of one of the 15 bright, background QSOs and that fall within the redshift range of its Lyα forest. We present the first two-dimensional maps of the absorption around galaxies, plotting the median Lyα pixel optical depth as a function of transverse and line-of-sight separation from galaxies. The Lyα optical depths are measured using an automatic algorithm that takes advantage of all available Lyman series lines. The median optical depth, and hence the median density of atomic hydrogen, drops by more than an order of magnitude around 100 kpc, which is similar to the virial radius of the halos thought to host the galaxies. The median remains enhanced, at the >3σ level, out to at least 2.8 Mpc (i.e., >9 comoving Mpc), but the scatter at a given distance is large compared with the median excess optical depth, suggesting that the gas is clumpy. Within 100 (200) kpc, and over ±165 km s⁻¹, the covering fraction of gas with Lyα optical depth greater than unity is 100 (+0/−32)% (66% ± 16%). Absorbers with τLyα > 0.1 are typically closer to galaxies than random. The mean galaxy overdensity around absorbers increases with the optical depth and also as the length scale over which the galaxy overdensity is evaluated is decreased. Absorbers with τLyα ~ 1 reside in regions where the galaxy number density is close to the cosmic mean on scales ≥0.25 Mpc. We clearly detect two types of redshift space anisotropies. On scales <200 km s⁻¹, or <1 Mpc, the absorption is stronger along the line of sight than in the transverse direction. This "finger of God" effect may be due to redshift errors, but is probably dominated by gas motions within or very close to the halos. On the other hand, on scales of 1.4-2.0 Mpc the absorption is compressed along the line of sight (with >3σ significance), an effect that we attribute to large-scale infall (i.e., the Kaiser effect). Based on data obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and NASA, and was made possible by the generous financial support of the W. M. Keck Foundation.
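
    A minimal sketch of the pixel-optical-depth bookkeeping described above: optical depths are formed from continuum-normalized flux as τ = −ln(F) and their median is taken in bins of distance from galaxies. The arrays below are random placeholders, not survey data, and the handling of saturated pixels via higher-order Lyman lines is omitted.

    ```python
    # Minimal sketch: pixel optical depths from continuum-normalised QSO flux,
    # then the median optical depth in bins of transverse distance to galaxies.
    # `flux_norm` and `pixel_dist_mpc` are placeholder arrays, not survey data.
    import numpy as np

    flux_norm = np.random.default_rng(1).uniform(0.05, 1.0, 5000)     # F / F_cont
    pixel_dist_mpc = np.random.default_rng(2).uniform(0.05, 3.0, 5000)

    # tau = -ln(F); saturated pixels (F <= 0) would need higher-order Lyman lines
    tau = -np.log(np.clip(flux_norm, 1e-3, None))

    bins = np.array([0.0, 0.1, 0.2, 0.5, 1.0, 2.0, 3.0])              # proper Mpc
    idx = np.digitize(pixel_dist_mpc, bins)
    median_tau = [np.median(tau[idx == k]) for k in range(1, len(bins))]
    for lo, hi, t in zip(bins[:-1], bins[1:], median_tau):
        print(f'{lo:.1f}-{hi:.1f} Mpc: median tau = {t:.2f}')
    ```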

  5. Depth Estimation of Submerged Aquatic Vegetation in Clear Water Streams Using Low-Altitude Optical Remote Sensing

    PubMed Central

    Visser, Fleur; Buis, Kerst; Verschoren, Veerle; Meire, Patrick

    2015-01-01

    UAVs and other low-altitude remote sensing platforms are proving very useful tools for remote sensing of river systems. Currently consumer grade cameras are still the most commonly used sensors for this purpose. In particular, progress is being made to obtain river bathymetry from the optical image data collected with such cameras, using the strong attenuation of light in water. No studies have yet applied this method to map submergence depth of aquatic vegetation, which has rather different reflectance characteristics from river bed substrate. This study therefore looked at the possibilities to use the optical image data to map submerged aquatic vegetation (SAV) depth in shallow clear water streams. We first applied the Optimal Band Ratio Analysis method (OBRA) of Legleiter et al. (2009) to a dataset of spectral signatures from three macrophyte species in a clear water stream. The results showed that for each species the ratio of certain wavelengths were strongly associated with depth. A combined assessment of all species resulted in equally strong associations, indicating that the effect of spectral variation in vegetation is subsidiary to spectral variation due to depth changes. Strongest associations (R²-values ranging from 0.67 to 0.90 for different species) were found for combinations including one band in the near infrared (NIR) region between 825 and 925 nm and one band in the visible light region. Currently data of both high spatial and spectral resolution is not commonly available to apply the OBRA results directly to image data for SAV depth mapping. Instead a novel, low-cost data acquisition method was used to obtain six-band high spatial resolution image composites using a NIR sensitive DSLR camera. A field dataset of SAV submergence depths was used to develop regression models for the mapping of submergence depth from image pixel values. Band (combinations) providing the best performing models (R²-values up to 0.77) corresponded with the OBRA findings. A 10% error was achieved under sub-optimal data collection conditions, which indicates that the method could be suitable for many SAV mapping applications. PMID:26437410
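
    The OBRA search described above can be illustrated in a few lines of code: regress a log band ratio against depth for every band pair and keep the pair with the highest R². The reflectances below are synthetic placeholders, and the simple linear regression form is an assumption made for the illustration.

    ```python
    # Minimal OBRA-style search: for every band pair (i, j), regress
    # X = ln(R_i / R_j) against depth and record R^2; the best-scoring pair
    # would be the one used for depth mapping. Reflectances are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    n_pix, n_bands = 500, 6
    depth = rng.uniform(0.1, 1.0, n_pix)                        # metres
    refl = rng.uniform(0.05, 0.5, (n_pix, n_bands))
    refl[:, 4] *= np.exp(-2.0 * depth)                          # a NIR-like band
    refl[:, 1] *= np.exp(-0.3 * depth)                          # a visible band

    best = (None, -np.inf)
    for i in range(n_bands):
        for j in range(n_bands):
            if i == j:
                continue
            x = np.log(refl[:, i] / refl[:, j])
            slope, intercept = np.polyfit(x, depth, 1)
            pred = slope * x + intercept
            r2 = 1 - np.sum((depth - pred) ** 2) / np.sum((depth - depth.mean()) ** 2)
            if r2 > best[1]:
                best = ((i, j), r2)
    print('best band ratio', best[0], 'R^2 =', round(best[1], 3))
    ```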

  6. Depth Estimation of Submerged Aquatic Vegetation in Clear Water Streams Using Low-Altitude Optical Remote Sensing.

    PubMed

    Visser, Fleur; Buis, Kerst; Verschoren, Veerle; Meire, Patrick

    2015-09-30

    UAVs and other low-altitude remote sensing platforms are proving very useful tools for remote sensing of river systems. Currently consumer grade cameras are still the most commonly used sensors for this purpose. In particular, progress is being made to obtain river bathymetry from the optical image data collected with such cameras, using the strong attenuation of light in water. No studies have yet applied this method to map submergence depth of aquatic vegetation, which has rather different reflectance characteristics from river bed substrate. This study therefore looked at the possibilities to use the optical image data to map submerged aquatic vegetation (SAV) depth in shallow clear water streams. We first applied the Optimal Band Ratio Analysis method (OBRA) of Legleiter et al. (2009) to a dataset of spectral signatures from three macrophyte species in a clear water stream. The results showed that for each species the ratio of certain wavelengths were strongly associated with depth. A combined assessment of all species resulted in equally strong associations, indicating that the effect of spectral variation in vegetation is subsidiary to spectral variation due to depth changes. Strongest associations (R²-values ranging from 0.67 to 0.90 for different species) were found for combinations including one band in the near infrared (NIR) region between 825 and 925 nm and one band in the visible light region. Currently data of both high spatial and spectral resolution is not commonly available to apply the OBRA results directly to image data for SAV depth mapping. Instead a novel, low-cost data acquisition method was used to obtain six-band high spatial resolution image composites using a NIR sensitive DSLR camera. A field dataset of SAV submergence depths was used to develop regression models for the mapping of submergence depth from image pixel values. Band (combinations) providing the best performing models (R²-values up to 0.77) corresponded with the OBRA findings. A 10% error was achieved under sub-optimal data collection conditions, which indicates that the method could be suitable for many SAV mapping applications.

  7. Structure-aware depth super-resolution using Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Kim, Sunok; Oh, Changjae; Kim, Youngjung; Sohn, Kwanghoon

    2015-03-01

    This paper presents a probabilistic optimization approach to enhance the resolution of a depth map. Conventionally, a high-resolution color image is used as a cue for depth super-resolution under the assumption that pixels with similar colors are likely to have similar depths. This assumption can induce texture transfer from the color image into the depth map and edge-blurring artifacts at the depth boundaries. In order to alleviate these problems, we propose an efficient depth prior exploiting a Gaussian mixture model in which an estimated depth map is treated as a feature for computing the affinity between two pixels. Furthermore, a fixed-point iteration scheme is adopted to address the non-linearity of the constraint derived from the proposed prior. The experimental results show that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.
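
    The affinity idea can be sketched roughly as follows: filter weights combine a color term with a term computed from the current depth estimate, and the depth map is re-filtered with those weights in a fixed-point loop. This is a loose illustration of the concept, not the authors' Gaussian-mixture optimization; parameter names and values are assumptions.

    ```python
    # Rough sketch of depth-aware joint filtering iterated as a fixed point:
    # weights use both a colour difference and the current depth estimate.
    import numpy as np

    def refine_depth(depth_init, color, iters=3, sigma_c=0.1, sigma_d=0.05, r=2):
        """depth_init: HxW initial (e.g. bicubically upsampled) depth in [0,1];
        color: HxW guidance intensity in [0,1]."""
        d = depth_init.copy()
        for _ in range(iters):
            acc = np.zeros_like(d); wsum = np.zeros_like(d)
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    sd = np.roll(np.roll(d, dy, 0), dx, 1)       # shifted depth
                    sc = np.roll(np.roll(color, dy, 0), dx, 1)   # shifted colour
                    w = np.exp(-((color - sc) ** 2) / (2 * sigma_c ** 2)
                               - ((d - sd) ** 2) / (2 * sigma_d ** 2))
                    acc += w * sd
                    wsum += w
            d = acc / wsum                                       # fixed-point update
        return d
    ```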

  8. Novel laser-processed CsI:Tl detector for SPECT

    PubMed Central

    Sabet, H.; Bläckberg, L.; Uzun-Ozsahin, D.; El-Fakhri, G.

    2016-01-01

    Purpose: The aim of this work is to demonstrate the feasibility of a novel technique for fabrication of high spatial resolution CsI:Tl scintillation detectors for single photon emission computed tomography systems. Methods: The scintillators are fabricated using laser-induced optical barriers technique to create optical microstructures (or optical barriers) inside the CsI:Tl crystal bulk. The laser-processed CsI:Tl crystals are 3, 5, and 10 mm in thickness. In this work, the authors focus on the simplest pattern of optical barriers in that the barriers are created in the crystal bulk to form pixel-like patterns resembling mechanically pixelated scintillators. The monolithic CsI:Tl scintillator samples are fabricated with optical barrier patterns with 1.0 × 1.0 mm² and 0.625 × 0.625 mm² pixels. Experiments were conducted to characterize the fabricated arrays in terms of pixel separation and energy resolution. A 4 × 4 array of multipixel photon counter was used to collect the scintillation light in all the experiments. Results: The process yield for fabricating the CsI:Tl arrays is 100% with processing time under 50 min. From the flood maps of the fabricated detectors exposed to 122 keV gammas, peak-to-valley (P/V) ratios of greater than 2.3 are calculated. The P/V values suggest that regardless of the crystal thickness, the pixels can be resolved. Conclusions: The results suggest that optical barriers can be considered as a robust alternative to mechanically pixelated arrays and can provide high spatial resolution while maintaining the sensitivity in a high-throughput and cost-effective manner. PMID:27147372

  9. Double-pass imaging through scattering (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Tajahuerce, Enrique; Andrés Bou, Pedro; Artal, Pablo; Lancis, Jesús

    2017-02-01

    In recent years, single-pixel imaging (SPI) has been established as a suitable tool for non-invasive imaging of an absorbing object completely embedded in an inhomogeneous medium. One of the main characteristics of the technique is that it uses very simple sensors (bucket detectors such as photodiodes or photomultiplier tubes) combined with structured illumination and mathematical algorithms to recover the image. This reduction in the complexity of the sensing device gives these systems the opportunity to obtain images at shallow depth, overcoming the scattering problem. Nonetheless, some challenges, such as the need for improved signal-to-noise ratio and frame rate, remain to be tackled before extensive use in practical systems. Also, for intact or live optically thick tissues, epi-detection is commonly used, while present implementations of SPI are limited to transillumination geometries. In this work we present new features and some recent advances in SPI that involve either the use of computationally efficient algorithms for adaptive sensing or a balanced detection mechanism. Additionally, SPI has been adapted to handle reflected light to create a double-pass optical system. Such developments represent a significant step towards the use of SPI in more realistic scenarios, especially in biophotonics applications. In particular, we show the design of a single-pixel ophthalmoscope as a novel way of imaging the retina in real time.

  10. Optical stent inspection of surface texture and coating thickness

    NASA Astrophysics Data System (ADS)

    Bermudez, Carlos; Laguarta, Ferran; Cadevall, Cristina; Matilla, Aitor; Ibañez, Sergi; Artigas, Roger

    2017-02-01

    Stent quality control is a critical process. Coronary stents have to be inspected 100% so that no defective stent is implanted into a human body. We have developed a high numerical aperture optical stent inspection system able to acquire both 2D and 3D images. Combining a rotational stage, an area camera with line-scan capability, and a triple illumination arrangement, unrolled sections of the outer, inner, and sidewall surfaces are obtained with high resolution. During stent inspection, surface roughness and coating thickness uniformity are of high interest. Due to the non-planar shape of the stent surface, the coating thickness values need to be corrected with the local slopes of the 3D surface. A theoretical model and a simulation are proposed, and a measurement with white light interferometry is shown. Confocal and spectroscopic reflectometry proved to be limited in this application due to stent surface roughness. Because of the high numerical aperture of the optical system, only certain parts of the stent are in focus, which is a problem for defect detection, specifically on the sidewalls. In order to obtain fully focused 2D images, an extended depth of field algorithm has been implemented. A comparison between pixel variance and Laplacian filtering is shown. To recover the stack image, two different methods are proposed: maximum projection and weighted intensity. Finally, we also discuss the implementation of the processing algorithms on both the CPU and GPU, targeting real-time acquisition of 2-megapixel images at 50 frames per second.
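
    The extended depth of field step can be sketched as follows, assuming a z-stack of registered 2D images: a Laplacian-based focus measure is computed per slice, and the stack is then fused either by maximum projection (sharpest slice per pixel) or by focus-weighted intensity. Parameter choices are illustrative, not those of the inspection system.

    ```python
    # Sketch of an extended-depth-of-field step: Laplacian focus measure per
    # slice, then either a max-projection (pick the sharpest slice per pixel)
    # or a focus-weighted intensity blend. Stack shape: (n_z, H, W).
    import numpy as np
    from scipy.ndimage import laplace, uniform_filter

    def edof(stack, smooth=5, mode='max'):
        focus = np.stack([uniform_filter(np.abs(laplace(s.astype(float))), smooth)
                          for s in stack])
        if mode == 'max':                        # maximum-projection fusion
            best = np.argmax(focus, axis=0)
            return np.take_along_axis(stack, best[None], axis=0)[0]
        w = focus / (focus.sum(axis=0) + 1e-12)  # weighted-intensity fusion
        return (w * stack).sum(axis=0)
    ```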

  11. A head-mounted compressive three-dimensional display system with polarization-dependent focus switching

    NASA Astrophysics Data System (ADS)

    Lee, Chang-Kun; Moon, Seokil; Lee, Byounghyo; Jeong, Youngmo; Lee, Byoungho

    2016-10-01

    A head-mounted compressive three-dimensional (3D) display system is proposed by combining a polarization beam splitter (PBS), a fast-switching polarization rotator, and a microdisplay with high pixel density. According to the polarization state of the image, controlled by the polarization rotator, the optical path of the image in the PBS is divided into transmitted and reflected components. Since the optical paths of the two images are spatially separated, it is possible to focus each image independently at a different depth position. The transmitted p-polarized and reflected s-polarized images are focused by a convex lens and a mirror, respectively. When the focal lengths of the convex lens and the mirror are properly chosen, the two image planes can be located at the intended positions, and the geometrical relationship is easily modified by replacing the components. The fast switching of the polarization enables real-time operation of multi-focal image planes with a single display panel. Since the device characteristics of a single panel are preserved, high image quality, reliability, and uniformity are retained. For generating 3D images, layer images for a compressive light field display between the two image planes are calculated. Since a display panel with high pixel density is adopted, high-quality 3D images are reconstructed. In addition, image degradation by diffraction between physically stacked display panels is mitigated. A simple optical configuration of the proposed system is implemented, and the feasibility of the proposed method is verified through experiments.

  12. Collagen Content Limits Optical Coherence Tomography Image Depth in Porcine Vocal Fold Tissue.

    PubMed

    Garcia, Jordan A; Benboujja, Fouzi; Beaudette, Kathy; Rogers, Derek; Maurer, Rie; Boudoux, Caroline; Hartnick, Christopher J

    2016-11-01

    Vocal fold scarring, a condition defined by increased collagen content, is challenging to treat without a method of noninvasively assessing vocal fold structure in vivo. The goal of this study was to observe the effects of vocal fold collagen content on optical coherence tomography imaging to develop a quantifiable marker of disease. Excised specimen study. Massachusetts Eye and Ear Infirmary. Porcine vocal folds were injected with collagenase to remove collagen from the lamina propria. Optical coherence tomography imaging was performed preinjection and at 0, 45, 90, and 180 minutes postinjection. Mean pixel intensity (or image brightness) was extracted from images of collagenase- and control-treated hemilarynges. Texture analysis of the lamina propria at each injection site was performed to extract image contrast. Two-factor repeated-measures analysis of variance and t tests were used to determine statistical significance. Picrosirius red staining was performed to confirm collagenase activity. Mean pixel intensity was higher at injection sites of collagenase-treated vocal folds than at those of control vocal folds (P < .0001). Fold change in image contrast was significantly greater in collagenase-treated vocal folds than in control vocal folds (P = .002). Picrosirius red staining in control specimens revealed collagen fibrils most prominent in the subepithelium and above the thyroarytenoid muscle. Specimens treated with collagenase exhibited a loss of these structures. Collagen removal from vocal fold tissue increases image brightness of underlying structures. This inverse relationship may be useful in treating vocal fold scarring in patients. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2016.

  13. Spatial optical crosstalk in CMOS image sensors integrated with plasmonic color filters.

    PubMed

    Yu, Yan; Chen, Qin; Wen, Long; Hu, Xin; Zhang, Hui-Fang

    2015-08-24

    Imaging resolution of complementary metal oxide semiconductor (CMOS) image sensors (CIS) keeps increasing, approaching 7k × 4k. As a result, the pixel size shrinks down to sub-2 μm, which greatly increases the spatial optical crosstalk. Recently, plasmonic color filters were proposed as an alternative to conventional colorant-pigmented ones. However, there is little work on their size effect and the spatial optical crosstalk in a CIS model. By numerical simulation, we investigate the size effect of nanocross-array plasmonic color filters and analyze the spatial optical crosstalk of each pixel in a Bayer array of a CIS with a pixel size of 1 μm. It is found that the small pixel size deteriorates the filtering performance of the nanocross color filters and induces substantial spatial color crosstalk. By integrating the plasmonic filters in the lower metal layer of a standard CMOS process, the crosstalk is reduced significantly, to a level comparable to that of pigmented filters in a state-of-the-art backside-illumination CIS.

  14. Sensitivity of Marine Warm Cloud Retrieval Statistics to Algorithm Choices: Examples from MODIS Collection 6

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Wind, Galina; Zhang, Zhibo; Ackerman, Steven A.; Maddux, Brent

    2012-01-01

    The optical and microphysical structure of warm boundary layer marine clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS (Moderate Resolution Imaging Spectroradiometer) on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1 km retrievals of cloud optical thickness and effective particle size are provided, as well as the derived water path. In addition, the cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate effective radii results using the 1.6, 2.1, and 3.7 μm spectral channels. Cloud retrieval statistics are highly sensitive to how a pixel identified as being "not-clear" by a cloud mask (e.g., the MOD35/MYD35 product) is determined to be useful for an optical retrieval based on a 1-D cloud model. The Collection 5 MODIS retrieval algorithm removed pixels associated with cloud edges as well as ocean pixels with partly cloudy elements in the 250 m MODIS cloud mask - part of the so-called Clear Sky Restoral (CSR) algorithm. Collection 6 attempts retrievals for those two pixel populations, but allows a user to isolate or filter out the populations via CSR pixel-level Quality Assessment (QA) assignments. In this paper, using the preliminary Collection 6 MOD06 product, we present global and regional statistical results of marine warm cloud retrieval sensitivities to the cloud edge and 250 m partly cloudy pixel populations. As expected, retrievals for these pixels are generally consistent with a breakdown of the 1-D cloud model. While optical thickness for these suspect pixel populations may have some utility for radiative studies, the retrievals should be used with extreme caution for process and microphysical studies.

  15. Modelling of microcracks image treated with fluorescent dye

    NASA Astrophysics Data System (ADS)

    Glebov, Victor; Lashmanov, Oleg U.

    2015-06-01

    The main reasons for catastrophes and accidents are a high level of equipment wear and violations of production technology. Nondestructive testing methods are designed to find defects in time and to prevent the breakdown of aggregates; they allow determining the compliance of object parameters with technical requirements without destroying the object. This work discusses dye penetrant inspection (DPI), also called liquid penetrant inspection (LPI), and a computer model of images of microcracks treated with fluorescent dye. Cracks in such images usually look like broken, extended lines of small width (about 1 to 10 pixels) with ragged edges. The inspection method used allows detection of microcracks with a depth of about 10 micrometers or more. In this work a mathematical model of the image of randomly located microcracks treated with fluorescent dye was created in the MATLAB environment. Background noise and distortions introduced by the optical system are considered in the model. The factors that influence the image are: (1) background noise, caused by bright light from external sources, which reduces contrast at object edges; (2) noise on the image sensor, i.e., digital noise that manifests itself as randomly located points differing in brightness and color; and (3) distortions caused by aberrations of the optical system, where after passing through a real optical system the homocentricity of the ray bundle is violated, or homocentricity remains but the rays intersect at a point that does not coincide with the point of the ideal image. The stronger the influence of these factors, the worse the image quality, and therefore the analysis of the image for inspection of the item becomes more difficult. The mathematical model is created using the following algorithm: first, the number of cracks to be modeled is entered from the keyboard. Then a point with a random position is chosen on a 1024x1024-pixel matrix (the size of the resulting image). This random pixel and two adjacent points are painted with random brightness; the points located at the edges have lower brightness than the central pixel, so the width of the paintbrush is 3 pixels. Next, one of the eight possible directions is chosen and the painting continues in this direction; the number of 'steps' is also entered at the beginning of the program. This way of simulating cracks is based on the theory of A.N. Galybin and A.V. Dyskin, which describes crack propagation as a random walk process. These operations are repeated as many times as there are cracks to simulate. After that, background noise and Gaussian blur (simulating poor focusing of the optical system) are applied.
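
    A compact re-implementation of the described simulation is sketched below in Python rather than MATLAB: cracks are drawn as random walks with a 3-pixel brush whose central pixel is brighter than its edges, and Gaussian blur plus additive noise stand in for defocus and background noise. All parameter values are illustrative assumptions.

    ```python
    # Crack-image simulation sketch: random-walk cracks with a 3-pixel brush
    # (brighter centre, dimmer edges), then Gaussian blur and background noise.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    SIZE, N_CRACKS, N_STEPS = 1024, 5, 400
    DIRS = [(-1,-1),(-1,0),(-1,1),(0,-1),(0,1),(1,-1),(1,0),(1,1)]  # 8 directions

    img = np.zeros((SIZE, SIZE))
    for _ in range(N_CRACKS):
        y, x = rng.integers(50, SIZE - 50, size=2)
        dy, dx = DIRS[rng.integers(8)]
        for _ in range(N_STEPS):
            centre = rng.uniform(0.7, 1.0)                 # central pixel
            edge = centre * rng.uniform(0.4, 0.7)          # dimmer brush edges
            img[y, x] = max(img[y, x], centre)
            for oy, ox in ((-dx, dy), (dx, -dy)):          # pixels beside the walk
                yy = np.clip(y + oy, 0, SIZE - 1)
                xx = np.clip(x + ox, 0, SIZE - 1)
                img[yy, xx] = max(img[yy, xx], edge)
            if rng.random() < 0.05:                        # occasional change of direction
                dy, dx = DIRS[rng.integers(8)]
            y = int(np.clip(y + dy, 1, SIZE - 2))
            x = int(np.clip(x + dx, 1, SIZE - 2))

    img = gaussian_filter(img, 1.2)                        # optical (defocus) blur
    img += rng.normal(0.05, 0.02, img.shape)               # background + sensor noise
    img = np.clip(img, 0, 1)
    ```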

  16. Feasibility study of multi-pixel retrieval of optical thickness and droplet effective radius of inhomogeneous clouds using deep learning

    NASA Astrophysics Data System (ADS)

    Okamura, Rintaro; Iwabuchi, Hironobu; Schmidt, K. Sebastian

    2017-12-01

    Three-dimensional (3-D) radiative-transfer effects are a major source of retrieval errors in satellite-based optical remote sensing of clouds. The challenge is that 3-D effects manifest themselves across multiple satellite pixels, which traditional single-pixel approaches cannot capture. In this study, we present two multi-pixel retrieval approaches based on deep learning, a technique that is becoming increasingly successful for complex problems in engineering and other areas. Specifically, we use deep neural networks (DNNs) to obtain multi-pixel estimates of cloud optical thickness and column-mean cloud droplet effective radius from multispectral, multi-pixel radiances. The first DNN method corrects traditional bispectral retrievals based on the plane-parallel homogeneous cloud assumption using the reflectances at the same two wavelengths. The other DNN method uses so-called convolutional layers and retrieves cloud properties directly from the reflectances at four wavelengths. The DNN methods are trained and tested on cloud fields from large-eddy simulations used as input to a 3-D radiative-transfer model to simulate upward radiances. The second DNN-based retrieval, sidestepping the bispectral retrieval step through convolutional layers, is shown to be more accurate. It reduces 3-D radiative-transfer effects that would otherwise affect the radiance values and estimates cloud properties robustly even for optically thick clouds.
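
    A toy sketch of the second, convolutional approach is given below: a small PyTorch network maps four-wavelength reflectance patches to per-pixel maps of optical thickness and effective radius. The architecture, layer sizes, and training step are invented for illustration and do not reproduce the authors' DNN or training setup.

    ```python
    # Toy PyTorch sketch of the convolutional multi-pixel retrieval idea:
    # input  = 4-channel reflectance patch (4 wavelengths, H x W pixels),
    # output = 2-channel map (cloud optical thickness, effective radius).
    import torch
    import torch.nn as nn

    class MultiPixelRetrievalNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 2, kernel_size=1),   # per-pixel (tau, r_eff)
            )

        def forward(self, x):                      # x: (batch, 4, H, W)
            return self.net(x)

    model = MultiPixelRetrievalNet()
    radiances = torch.rand(8, 4, 64, 64)           # synthetic reflectance patches
    targets = torch.rand(8, 2, 64, 64)             # synthetic tau / r_eff maps
    loss = nn.functional.mse_loss(model(radiances), targets)
    loss.backward()                                # one illustrative training step
    print(loss.item())
    ```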

  17. The Panoramic Camera (PanCam) Instrument for the ESA ExoMars Rover

    NASA Astrophysics Data System (ADS)

    Griffiths, A.; Coates, A.; Jaumann, R.; Michaelis, H.; Paar, G.; Barnes, D.; Josset, J.

    The recently approved ExoMars rover is the first element of the ESA Aurora programme and is slated to deliver the Pasteur exobiology payload to Mars by 2013. The 0.7 kg Panoramic Camera will provide multispectral stereo images with 65° field-of-view (1.1 mrad/pixel) and high resolution (85 µrad/pixel) monoscopic "zoom" images with 5° field-of-view. The stereo Wide Angle Cameras (WAC) are based on Beagle 2 Stereo Camera System heritage. The Panoramic Camera instrument is designed to fulfil the digital terrain mapping requirements of the mission as well as providing multispectral geological imaging, colour and stereo panoramic images, solar images for water vapour abundance and dust optical depth measurements and to observe retrieved subsurface samples before ingestion into the rest of the Pasteur payload. Additionally, the High Resolution Camera (HRC) can be used for high resolution imaging of interesting targets detected in the WAC panoramas and of inaccessible locations on crater or valley walls.

  18. Eliminating Bias In Acousto-Optical Spectrum Analysis

    NASA Technical Reports Server (NTRS)

    Ansari, Homayoon; Lesh, James R.

    1992-01-01

    A scheme for digital processing of video signals in an acousto-optical spectrum analyzer provides real-time correction for signal-dependent spectral bias. The spectrum analyzer is described in "Two-Dimensional Acousto-Optical Spectrum Analyzer" (NPO-18092); related apparatus is described in "Three-Dimensional Acousto-Optical Spectrum Analyzer" (NPO-18122). The essence of the correction is to average the digitized outputs of the pixels in each CCD row and to subtract this average from the digitized output of each pixel in the row. The signal is processed electro-optically with reference-function signals to form a two-dimensional spectral image in the CCD camera.
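
    The described correction amounts to a one-line operation per video frame, sketched here for a frame stored as a 2-D array of digitized pixel values (rows × columns); the array name is an assumption.

    ```python
    # Row-bias removal as described above: subtract each CCD row's mean
    # from every pixel in that row. `frame` is a (rows x cols) array.
    import numpy as np

    def remove_row_bias(frame):
        return frame - frame.mean(axis=1, keepdims=True)
    ```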

  19. First Retrieval of Surface Lambert Albedos From Mars Reconnaissance Orbiter CRISM Data

    NASA Astrophysics Data System (ADS)

    McGuire, P. C.; Arvidson, R. E.; Murchie, S. L.; Wolff, M. J.; Smith, M. D.; Martin, T. Z.; Milliken, R. E.; Mustard, J. F.; Pelkey, S. M.; Lichtenberg, K. A.; Cavender, P. J.; Humm, D. C.; Titus, T. N.; Malaret, E. R.

    2006-12-01

    We have developed a pipeline-processing software system to convert radiance-on-sensor for each of 72 out of 544 CRISM spectral bands used in global mapping to the corresponding surface Lambert albedo, accounting for atmospheric, thermal, and photoclinometric effects. We will present and interpret first results from this software system for the retrieval of Lambert albedos from CRISM data. For the multispectral mapping modes, these pipeline-processed 72 spectral bands constitute all of the available bands, for wavelengths from 0.362-3.920 μm, at 100-200 m/pixel spatial resolution, and ~0.006 μm spectral resolution. For the hyperspectral targeted modes, these pipeline-processed 72 spectral bands are only a selection of all of the 544 spectral bands, but at a resolution of 15-38 m/pixel. The pipeline processing for both types of observing modes (multispectral and hyperspectral) will use climatology, based on data from MGS/TES, in order to estimate ice- and dust-aerosol optical depths, prior to the atmospheric correction with lookup tables based upon radiative-transport calculations via DISORT. There is one DISORT atmospheric-correction lookup table for converting radiance-on-sensor to Lambert albedo for each of the 72 spectral bands. The measurements of the Emission Phase Function (EPF) during targeting will not be employed in this pipeline processing system. We are developing a separate system for extracting more accurate aerosol optical depths and surface scattering properties. This separate system will use direct calls (instead of lookup tables) to the DISORT code for all 544 bands, and it will use the EPF data directly, bootstrapping from the climatology data for the aerosol optical depths. The pipeline processing will thermally correct the albedos for the spectral bands above ~2.6 μm, by a choice between 4 different techniques for determining surface temperature: 1) climatology, 2) empirical estimation of the albedo at 3.9 μm from the measured albedo at 2.5 μm, 3) a physical thermal model (PTM) based upon maps of thermal inertia from TES and coarse-resolution surface slopes (SS) from MOLA, and 4) a photoclinometric extension to the PTM that uses CRISM albedos at 0.41 μm to compute the SS at CRISM spatial resolution. For the thermal correction, we expect that each of these 4 different techniques will be valuable for some fraction of the observations.

  20. Dependence of optical phase modulation on anchoring strength of dielectric shield wall surfaces in small liquid crystal pixels

    NASA Astrophysics Data System (ADS)

    Isomae, Yoshitomo; Shibata, Yosei; Ishinabe, Takahiro; Fujikake, Hideo

    2018-03-01

    We demonstrated that uniform phase modulation in a pixel can be realized by optimizing the anchoring strength on the walls and the wall width in the dielectric shield wall structure, which is the pixel structure needed for realizing a 1-µm-pitch optical phase modulator. The anchoring force degrades the uniformity of the phase modulation in ON-state pixels, but it also keeps the liquid crystals from rotating in response to the leakage of the electric field. We clarified that the optimal wall width and anchoring strength are 250 nm and less than 10⁻⁴ J/m², respectively.

  1. Sensitivity of Marine Warm Cloud Retrieval Statistics to Algorithm Choices: Examples from MODIS Collection 6

    NASA Astrophysics Data System (ADS)

    Platnick, S.; Wind, G.; Zhang, Z.; Ackerman, S. A.; Maddux, B. C.

    2012-12-01

    The optical and microphysical structure of warm boundary layer marine clouds is of fundamental importance for understanding a variety of cloud radiation and precipitation processes. With the advent of MODIS (Moderate Resolution Imaging Spectroradiometer) on the NASA EOS Terra and Aqua platforms, simultaneous global/daily 1km retrievals of cloud optical thickness and effective particle size are provided, as well as the derived water path. In addition, the cloud product (MOD06/MYD06 for MODIS Terra and Aqua, respectively) provides separate effective radii results using the 1.6, 2.1, and 3.7 μm spectral channels. Cloud retrieval statistics are highly sensitive to how a pixel identified as being "not-clear" by a cloud mask (e.g., the MOD35/MYD35 product) is determined to be useful for an optical retrieval based on a 1-D cloud model. The Collection 5 MODIS retrieval algorithm removed pixels associated with cloud edges (defined by immediate adjacency to "clear" MOD/MYD35 pixels) as well as ocean pixels with partly cloudy elements in the 250m MODIS cloud mask - part of the so-called Clear Sky Restoral (CSR) algorithm. Collection 6 attempts retrievals for those two pixel populations, but allows a user to isolate or filter out the populations via CSR pixel-level Quality Assessment (QA) assignments. In this paper, using the preliminary Collection 6 MOD06 product, we present global and regional statistical results of marine warm cloud retrieval sensitivities to the cloud edge and 250m partly cloudy pixel populations. As expected, retrievals for these pixels are generally consistent with a breakdown of the 1D cloud model. While optical thickness for these suspect pixel populations may have some utility for radiative studies, the retrievals should be used with extreme caution for process and microphysical studies.

  2. Depth-of-Interaction Compensation Using a Focused-Cut Scintillator for a Pinhole Gamma Camera.

    PubMed

    Alhassen, Fares; Kudrolli, Haris; Singh, Bipin; Kim, Sangtaek; Seo, Youngho; Gould, Robert G; Nagarkar, Vivek V

    2011-06-01

    Preclinical SPECT offers a powerful means to understand the molecular pathways of drug interactions in animal models by discovering and testing new pharmaceuticals and therapies for potential clinical applications. A combination of high spatial resolution and sensitivity are required in order to map radiotracer uptake within small animals. Pinhole collimators have been investigated, as they offer high resolution by means of image magnification. One of the limitations of pinhole geometries is that increased magnification causes some rays to travel through the detection scintillator at steep angles, introducing parallax errors due to variable depth-of-interaction in scintillator material, especially towards the edges of the detector field of view. These parallax errors ultimately limit the resolution of pinhole preclinical SPECT systems, especially for higher energy isotopes that can easily penetrate through millimeters of scintillator material. A pixellated, focused-cut (FC) scintillator, with its pixels laser-cut so that they are collinear with incoming rays, can potentially compensate for these parallax errors and thus improve the system resolution. We performed the first experimental evaluation of a newly developed focused-cut scintillator. We scanned a Tc-99m source across the field of view of pinhole gamma camera with a continuous scintillator, a conventional "straight-cut" (SC) pixellated scintillator, and a focused-cut scintillator, each coupled to an electron-multiplying charge coupled device (EMCCD) detector by a fiber-optic taper, and compared the measured full-width half-maximum (FWHM) values. We show that the FWHMs of the focused-cut scintillator projections are comparable to the FWHMs of the thinner SC scintillator, indicating the effectiveness of the focused-cut scintillator in compensating parallax errors.

  3. Depth-of-Interaction Compensation Using a Focused-Cut Scintillator for a Pinhole Gamma Camera

    PubMed Central

    Alhassen, Fares; Kudrolli, Haris; Singh, Bipin; Kim, Sangtaek; Seo, Youngho; Gould, Robert G.; Nagarkar, Vivek V.

    2011-01-01

    Preclinical SPECT offers a powerful means to understand the molecular pathways of drug interactions in animal models by discovering and testing new pharmaceuticals and therapies for potential clinical applications. A combination of high spatial resolution and sensitivity are required in order to map radiotracer uptake within small animals. Pinhole collimators have been investigated, as they offer high resolution by means of image magnification. One of the limitations of pinhole geometries is that increased magnification causes some rays to travel through the detection scintillator at steep angles, introducing parallax errors due to variable depth-of-interaction in scintillator material, especially towards the edges of the detector field of view. These parallax errors ultimately limit the resolution of pinhole preclinical SPECT systems, especially for higher energy isotopes that can easily penetrate through millimeters of scintillator material. A pixellated, focused-cut (FC) scintillator, with its pixels laser-cut so that they are collinear with incoming rays, can potentially compensate for these parallax errors and thus improve the system resolution. We performed the first experimental evaluation of a newly developed focused-cut scintillator. We scanned a Tc-99m source across the field of view of pinhole gamma camera with a continuous scintillator, a conventional “straight-cut” (SC) pixellated scintillator, and a focused-cut scintillator, each coupled to an electron-multiplying charge coupled device (EMCCD) detector by a fiber-optic taper, and compared the measured full-width half-maximum (FWHM) values. We show that the FWHMs of the focused-cut scintillator projections are comparable to the FWHMs of the thinner SC scintillator, indicating the effectiveness of the focused-cut scintillator in compensating parallax errors. PMID:21731108

  4. Depth-of-Interaction Compensation Using a Focused-Cut Scintillator for a Pinhole Gamma Camera

    NASA Astrophysics Data System (ADS)

    Alhassen, Fares; Kudrolli, Haris; Singh, Bipin; Kim, Sangtaek; Seo, Youngho; Gould, Robert G.; Nagarkar, Vivek V.

    2011-06-01

    Preclinical SPECT offers a powerful means to understand the molecular pathways of drug interactions in animal models by discovering and testing new pharmaceuticals and therapies for potential clinical applications. A combination of high spatial resolution and sensitivity are required in order to map radiotracer uptake within small animals. Pinhole collimators have been investigated, as they offer high resolution by means of image magnification. One of the limitations of pinhole geometries is that increased magnification causes some rays to travel through the detection scintillator at steep angles, introducing parallax errors due to variable depth-of-interaction in scintillator material, especially towards the edges of the detector field of view. These parallax errors ultimately limit the resolution of pinhole preclinical SPECT systems, especially for higher energy isotopes that can easily penetrate through millimeters of scintillator material. A pixellated, focused-cut (FC) scintillator, with its pixels laser-cut so that they are collinear with incoming rays, can potentially compensate for these parallax errors and thus improve the system resolution. We performed the first experimental evaluation of a newly developed focused-cut scintillator. We scanned a Tc-99m source across the field of view of pinhole gamma camera with a continuous scintillator, a conventional “straight-cut” (SC) pixellated scintillator, and a focused-cut scintillator, each coupled to an electron-multiplying charge coupled device (EMCCD) detector by a fiber-optic taper, and compared the measured full-width half-maximum (FWHM) values. We show that the FWHMs of the focused-cut scintillator projections are comparable to the FWHMs of the thinner SC scintillator, indicating the effectiveness of the focused-cut scintillator in compensating parallax errors.

  5. The progress of sub-pixel imaging methods

    NASA Astrophysics Data System (ADS)

    Wang, Hu; Wen, Desheng

    2014-02-01

    This paper reviews the principles and characteristics of sub-pixel imaging technology, its current state of development both domestically and abroad, and the latest research developments. Sub-pixel imaging offers the high resolution of an optical remote sensor together with flexible operating modes and a miniaturized design with no moving parts, so the imaging system is well suited to space remote sensors and its application prospects are broad. It is quite likely to be a research and development direction for future space optical remote sensing technology.

  6. Robust design study on the wide angle lens with free distortion for mobile lens

    NASA Astrophysics Data System (ADS)

    Kim, Taeyoung; Yong, Liu; Xu, Qing

    2017-10-01

    Recently, a new trend of applying wide-angle optics in mobile imaging lenses has been attracting attention. In particular, customer demand for capturing wider scenes requires the field of view of the lens to exceed 100 degrees, so the introduction of a retro-focus type lens into mobile imaging lenses is required. However, imaging lenses in mobile phones always face many constraints, such as a short total track length, a low F/#, and high performance, and the fabrication sensitivity may become more severe because of the wide-angle field of view. In this paper, we investigate an optical lens design that satisfies all requirements for a mobile imaging lens. In order to achieve low cost and a small depth of the optical system, we used plastic materials for all elements and considered productivity in the realization. The lateral color is kept below 2 pixels and the optical distortion below 5%. We also divided the optical system into two groups for a robust design; the compensation between the two groups helps increase yield in practice. Such two-group alignment for high yield may be a promising solution for wide-angle lenses.

  7. Reflective coherent spatial light modulator

    DOEpatents

    Simpson, John T.; Richards, Roger K.; Hutchinson, Donald P.; Simpson, Marcus L.

    2003-04-22

    A reflective coherent spatial light modulator (RCSLM) includes a subwavelength resonant grating structure (SWS), the SWS including at least one subwavelength resonant grating layer (SWL) having a plurality of areas defining a plurality of pixels. Each pixel represents an area capable of individual control of its reflective response. A structure for modulating the resonant reflective response of at least one pixel is provided; the structure for modulating can include at least one electro-optic layer in optical contact with the SWS. The RCSLM is scalable in both pixel size and wavelength. A method for forming an RCSLM includes the steps of selecting a waveguide material and forming a SWS in the waveguide material, the SWS formed from at least one SWL, the SWL having a plurality of areas defining a plurality of pixels.

  8. The central pixel of the MAGIC telescope for optical observations

    NASA Astrophysics Data System (ADS)

    Lucarelli, F.; Barrio, J. A.; Antoranz, P.; Asensio, M.; Camara, M.; Contreras, J. L.; Fonseca, M. V.; Lopez, M.; Miranda, J. M.; Oya, I.; Reyes, R. De Los; Firpo, R.; Sidro, N.; Goebel, F.; Lorenz, E.; Otte, N.

    2008-05-01

    The MAGIC telescope has been designed for the observation of Cherenkov light generated in extensive air showers initiated by cosmic particles. However, its 17 m diameter mirror and optical design also make the telescope suitable for direct optical observations. In this paper, we report on the development of a system based on a dedicated photomultiplier tube (PMT) for optical observations. This PMT is installed in the centre of the MAGIC camera (the so-called central pixel). An electrical-to-optical converter has been developed to transmit the PMT output signal over an optical fibre to the counting room, where it is digitized and stored for off-line analysis. The performance of the system, using the optical pulsation of the Crab nebula as a calibration source, is presented. The time required for a 5σ detection of the Crab pulsar in the optical band is less than 20 s. The central pixel will mainly be used to perform simultaneous observations of the Crab pulsar in both the optical and γ-ray regimes. It will also allow periodic testing of the precision of the MAGIC timing system using the Crab rotational optical pulses as a very precise timing reference.

  9. The MODIS cloud optical and microphysical products: Collection 6 updates and examples from Terra and Aqua.

    PubMed

    Platnick, Steven; Meyer, Kerry G; King, Michael D; Wind, Galina; Amarasinghe, Nandana; Marchant, Benjamin; Arnold, G Thomas; Zhang, Zhibo; Hubanks, Paul A; Holz, Robert E; Yang, Ping; Ridgway, William L; Riedi, Jérôme

    2017-01-01

    The MODIS Level-2 cloud product (Earth Science Data Set names MOD06 and MYD06 for Terra and Aqua MODIS, respectively) provides pixel-level retrievals of cloud-top properties (day and night pressure, temperature, and height) and cloud optical properties (optical thickness, effective particle radius, and water path for both liquid water and ice cloud thermodynamic phases; daytime only). Collection 6 (C6) reprocessing of the product was completed in May 2014 and March 2015 for MODIS Aqua and Terra, respectively. Here we provide an overview of major C6 optical property algorithm changes relative to the previous Collection 5 (C5) product. Notable C6 optical and microphysical algorithm changes include: (i) new ice cloud optical property models and a more extensive cloud radiative transfer code lookup table (LUT) approach, (ii) improvement in the skill of the shortwave-derived cloud thermodynamic phase, (iii) separate cloud effective radius retrieval datasets for each spectral combination used in previous collections, (iv) separate retrievals for partly cloudy pixels and those associated with cloud edges, (v) failure metrics that provide diagnostic information for pixels having observations that fall outside the LUT solution space, and (vi) enhanced pixel-level retrieval uncertainty calculations. The C6 algorithm changes collectively can result in significant changes relative to C5, though the magnitude depends on the dataset and the pixel's retrieval location in the cloud parameter space. Example Level-2 granule and Level-3 gridded dataset differences between the two collections are shown. While the emphasis is on the suite of cloud optical property datasets, other MODIS cloud datasets are discussed when relevant.

  10. Ensembles of satellite aerosol retrievals based on three AATSR algorithms within aerosol_cci

    NASA Astrophysics Data System (ADS)

    Kosmale, Miriam; Popp, Thomas

    2016-04-01

    Ensemble techniques are widely used in the modelling community to combine different model results and thereby reduce uncertainties, and this approach can also be adapted to satellite measurements. Aerosol_cci is an ESA-funded project in which most of the European aerosol retrieval groups work together. The different algorithms are homogenized as far as is sensible, but remain essentially different; the datasets are compared with ground-based measurements and with each other. Within this project, three AATSR algorithms (the Swansea University aerosol retrieval, the ADV aerosol retrieval by FMI, and the Oxford aerosol retrieval ORAC) provide 17-year global aerosol records, and each algorithm also provides uncertainty information at the pixel level. In the presented work, an ensemble of the three AATSR algorithms is formed. The advantage over each single algorithm is the higher spatial coverage due to more measurement pixels per grid box. A validation against ground-based AERONET measurements still shows a good correlation for the ensemble compared with the single algorithms. Annual mean maps show the global aerosol distribution based on the combination of the three aerosol algorithms. In addition, the pixel-level uncertainties of each algorithm are used to weight the contributions in order to reduce the uncertainty of the ensemble. Results of different versions of the ensemble for aerosol optical depth will be presented and discussed, and are validated against ground-based AERONET measurements. A higher spatial coverage on a daily basis allows better results in annual mean maps, and the benefit of using pixel-level uncertainties is analysed.
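
    One simple way to use pixel-level uncertainties for weighting, sketched below for a single grid box, is an inverse-variance combination of the available algorithm retrievals. This is a generic illustration of the weighting idea, not necessarily the exact scheme used in aerosol_cci; the input values are placeholders.

    ```python
    # Sketch of an uncertainty-weighted ensemble for one grid box: AOD values
    # from the three algorithms combined with inverse-variance weights,
    # skipping algorithms with no retrieval. Input values are placeholders.
    import numpy as np

    def ensemble_aod(aod, sigma):
        """aod, sigma: per-algorithm AOD and pixel-level 1-sigma uncertainty
        (NaN where an algorithm has no retrieval)."""
        ok = np.isfinite(aod) & np.isfinite(sigma) & (sigma > 0)
        if not ok.any():
            return np.nan, np.nan
        w = 1.0 / sigma[ok] ** 2
        mean = np.sum(w * aod[ok]) / np.sum(w)
        return mean, np.sqrt(1.0 / np.sum(w))   # ensemble AOD and its uncertainty

    print(ensemble_aod(np.array([0.21, 0.25, np.nan]), np.array([0.03, 0.05, np.nan])))
    ```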

  11. Optical modeling of waveguide coupled TES detectors towards the SAFARI instrument for SPICA

    NASA Astrophysics Data System (ADS)

    Trappe, N.; Bracken, C.; Doherty, S.; Gao, J. R.; Glowacka, D.; Goldie, D.; Griffin, D.; Hijmering, R.; Jackson, B.; Khosropanah, P.; Mauskopf, P.; Morozov, D.; Murphy, A.; O'Sullivan, C.; Ridder, M.; Withington, S.

    2012-09-01

    The next generation of space missions targeting far-infrared wavelengths will require large-format arrays of extremely sensitive detectors. Transition Edge Sensor (TES) array technology is being developed for future far-infrared (FIR) space applications such as the SAFARI instrument for SPICA, where low noise and high sensitivity are required to achieve ambitious science goals. In this paper we describe a modal analysis of multi-moded horn antennas feeding integrating cavities that house TES detectors with superconducting film absorbers. In high-sensitivity TES detector technology, the ability to control the electromagnetic and thermo-mechanical environment of the detector is critical. Simulating and understanding the optical behaviour of such detectors at far-IR wavelengths is difficult and requires development of existing analysis tools. The proposed modal approach offers a computationally efficient technique to describe the partially coherent response of the full pixel in terms of optical efficiency and power leakage between pixels. Initial work carried out as part of an ESA technical research project on optical analysis is described, and a prototype SAFARI pixel design is analyzed in which the optical coupling between the incoming field and the pixel, containing the horn, a cavity with an air gap, and a thin absorber layer, is included in the model to allow a comprehensive optical characterization. The modal approach described is based on the mode-matching technique, where the horn and cavity are described in the traditional way while a technique to include the absorber was developed. Radiation leakage between pixels is also included, making this a powerful analysis tool.

  12. Electronic displays using optically pumped luminescent semiconductor nanocrystals

    DOEpatents

    Weiss, Shimon; Schlamp, Michael C.; Alivisatos, A. Paul

    2010-04-13

    A multicolor electronic display is based on an array of luminescent semiconductor nanocrystals. Nanocrystals which emit light of different colors are grouped into pixels. The nanocrystals are optically pumped to produce a multicolor display. Different sized nanocrystals are used to produce the different colors. A variety of pixel addressing systems can be used.

  13. Electronic displays using optically pumped luminescent semiconductor nanocrystals

    DOEpatents

    Weiss, Shimon; Schlamp, Michael C.; Alivisatos, A. Paul

    2005-03-08

    A multicolor electronic display is based on an array of luminescent semiconductor nanocrystals. Nanocrystals which emit light of different colors are grouped into pixels. The nanocrystals are optically pumped to produce a multicolor display. Different sized nanocrystals are used to produce the different colors. A variety of pixel addressing systems can be used.

  14. Electronic displays using optically pumped luminescent semiconductor nanocrystals

    DOEpatents

    Weiss, Shimon; Schlamp, Michael C.; Alivisatos, A. Paul

    2015-06-23

    A multicolor electronic display is based on an array of luminescent semiconductor nanocrystals. Nanocrystals which emit light of different colors are grouped into pixels. The nanocrystals are optically pumped to produce a multicolor display. Different sized nanocrystals are used to produce the different colors. A variety of pixel addressing systems can be used.

  15. Electronic displays using optically pumped luminescent semiconductor nanocrystals

    DOEpatents

    Weiss, Shimon; Schlamp, Michael C; Alivisatos, A. Paul

    2014-02-11

    A multicolor electronic display is based on an array of luminescent semiconductor nanocrystals. Nanocrystals which emit light of different colors are grouped into pixels. The nanocrystals are optically pumped to produce a multicolor display. Different sized nanocrystals are used to produce the different colors. A variety of pixel addressing systems can be used.

  16. Electronic displays using optically pumped luminescent semiconductor nanocrystals

    DOEpatents

    Weiss, Shimon; Schlamp, Michael C.; Alivisatos, Paul A.

    2015-11-10

    A multicolor electronic display is based on an array of luminescent semiconductor nanocrystals. Nanocrystals which emit light of different colors are grouped into pixels. The nanocrystals are optically pumped to produce a multicolor display. Different sized nanocrystals are used to produce the different colors. A variety of pixel addressing systems can be used.

  17. Electronic displays using optically pumped luminescent semiconductor nanocrystals

    DOEpatents

    Weiss, Shimon [Pinole, CA]; Schlamp, Michael C. [Plainsboro, NJ]; Alivisatos, A. Paul [Oakland, CA]

    2011-09-27

    A multicolor electronic display is based on an array of luminescent semiconductor nanocrystals. Nanocrystals which emit light of different colors are grouped into pixels. The nanocrystals are optically pumped to produce a multicolor display. Different sized nanocrystals are used to produce the different colors. A variety of pixel addressing systems can be used.

  18. Electronic displays using optically pumped luminescent semiconductor nanocrystals

    DOEpatents

    Weiss, Shimon; Schlamp, Michael C.; Alivisatos, A. Paul

    2014-03-25

    A multicolor electronic display is based on an array of luminescent semiconductor nanocrystals. Nanocrystals which emit light of different colors are grouped into pixels. The nanocrystals are optically pumped to produce a multicolor display. Different sized nanocrystals are used to produce the different colors. A variety of pixel addressing systems can be used.

  19. Electronic displays using optically pumped luminescent semiconductor nanocrystals

    DOEpatents

    Weiss, Shimon; Schlamp, Michael C.; Alivisatos, A. Paul

    2017-06-06

    A multicolor electronic display is based on an array of luminescent semiconductor nanocrystals. Nanocrystals which emit light of different colors are grouped into pixels. The nanocrystals are optically pumped to produce a multicolor display. Different sized nanocrystals are used to produce the different colors. A variety of pixel addressing systems can be used.

  20. Sub-pixel spatial resolution wavefront phase imaging

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip (Inventor); Mooney, James T. (Inventor)

    2012-01-01

    A phase imaging method for an optical wavefront acquires a plurality of phase images of the optical wavefront using a phase imager. Each phase image is unique and is shifted with respect to another of the phase images by a known/controlled amount that is less than the size of the phase imager's pixels. The phase images are then combined to generate a single high-spatial resolution phase image of the optical wavefront.
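    As a rough illustration of the combination step described above, the sketch below interleaves several low-resolution phase maps, each shifted by a known sub-pixel amount, onto a finer grid. The 2x upsampling factor, the quarter-pixel shift set, and the function name combine_subpixel are illustrative assumptions only, not the patented reconstruction.

      # Toy sketch (not the patented method): interleave sub-pixel-shifted phase
      # images onto a finer grid to build a higher-resolution phase map.
      import numpy as np

      def combine_subpixel(images, shifts, factor=2):
          """images: list of (H, W) phase maps; shifts: (dy, dx) in coarse-pixel units,
          assumed to be multiples of 1/factor. Returns a (factor*H, factor*W) map."""
          H, W = images[0].shape
          hi = np.zeros((factor * H, factor * W))
          for img, (dy, dx) in zip(images, shifts):
              oy = int(round(dy * factor)) % factor   # sub-pixel shift -> fine-grid offset
              ox = int(round(dx * factor)) % factor
              hi[oy::factor, ox::factor] = img        # place samples on the fine grid
          return hi

      # Four quarter-pixel-shifted acquisitions give a 2x denser phase map.
      shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]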

  1. Pixel level optical-transfer-function design based on the surface-wave-interferometry aperture

    PubMed Central

    Zheng, Guoan; Wang, Yingmin; Yang, Changhuei

    2010-01-01

    The design of optical transfer function (OTF) is of significant importance for optical information processing in various imaging and vision systems. Typically, OTF design relies on sophisticated bulk optical arrangement in the light path of the optical systems. In this letter, we demonstrate a surface-wave-interferometry aperture (SWIA) that can be directly incorporated onto optical sensors to accomplish OTF design on the pixel level. The whole aperture design is based on the bull’s eye structure. It is composed of a central hole (diameter of 300 nm) and a periodic groove (period of 560 nm) on a 340 nm thick gold layer. We show, with both simulation and experiment, that different types of optical transfer functions (notch, highpass and lowpass filter) can be achieved by manipulating the interference between the direct transmission of the central hole and the surface wave (SW) component induced by the periodic groove. Pixel-level OTF design provides a low-cost, ultra-robust, highly compact method for numerous applications such as optofluidic microscopy, wavefront detection, darkfield imaging, and computational photography. PMID:20721038

  2. Systems and methods that generate height map models for efficient three dimensional reconstruction from depth information

    DOEpatents

    Frahm, Jan-Michael; Pollefeys, Marc Andre Leon; Gallup, David Robert

    2015-12-08

    Methods of generating a three dimensional representation of an object in a reference plane from a depth map including distances from a reference point to pixels in an image of the object taken from a reference point. Weights are assigned to respective voxels in a three dimensional grid along rays extending from the reference point through the pixels in the image based on the distances in the depth map from the reference point to the respective pixels, and a height map including an array of height values in the reference plane is formed based on the assigned weights. An n-layer height map may be constructed by generating a probabilistic occupancy grid for the voxels and forming an n-dimensional height map comprising an array of layer height values in the reference plane based on the probabilistic occupancy grid.
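    The sketch below is a minimal, hypothetical illustration of the idea in this record: voxels along each pixel ray receive occupancy weights from the depth map, and each voxel column is then collapsed to a single surface value. The nadir-view geometry, grid size and voting scheme are assumptions for illustration, not the patented algorithm.

      # Toy sketch (assumed nadir geometry, not the patented method): vote per voxel,
      # then collapse each vertical voxel column to a single surface depth.
      import numpy as np

      def surface_map_from_depth(depth, grid_shape=(64, 64, 64), max_depth=10.0):
          """depth: (H, W) distances from the reference point to the observed surface.
          Returns a (gx, gy) map of the recovered surface depth (assumes depth <= max_depth)."""
          gx, gy, gz = grid_shape
          # Resample the depth map onto the horizontal voxel grid (nearest neighbour).
          ys = np.linspace(0, depth.shape[0] - 1, gx).astype(int)
          xs = np.linspace(0, depth.shape[1] - 1, gy).astype(int)
          d = depth[np.ix_(ys, xs)]
          weights = np.zeros(grid_shape)
          z_centres = (np.arange(gz) + 0.5) * max_depth / gz
          for k, z in enumerate(z_centres):
              # Voxels in front of the observed surface vote "empty" (-1),
              # voxels at or behind it vote "occupied" (+1).
              weights[:, :, k] = np.where(z < d, -1.0, 1.0)
          # Collapse each column to the depth of the first positively weighted voxel.
          surface_idx = np.argmax(weights > 0.0, axis=2)
          return (surface_idx + 0.5) * max_depth / gz

      # Example: a flat surface 5 m away with a closer 3 m "bump" in the middle.
      depth = np.full((128, 128), 5.0)
      depth[48:80, 48:80] = 3.0
      surface = surface_map_from_depth(depth)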

  3. A practical approach for deriving all-weather soil moisture content using combined satellite and meteorological data

    NASA Astrophysics Data System (ADS)

    Leng, Pei; Li, Zhao-Liang; Duan, Si-Bo; Gao, Mao-Fang; Huo, Hong-Yuan

    2017-09-01

    Soil moisture has long been recognized as one of the essential variables in the water cycle and energy budget between Earth's surface and atmosphere. The present study develops a practical approach for deriving all-weather soil moisture using combined satellite images and gridded meteorological products. In this approach, soil moisture over Moderate Resolution Imaging Spectroradiometer (MODIS) clear-sky pixels is estimated from the Vegetation Index/Temperature (VIT) trapezoid scheme, in which the theoretical dry and wet edges are determined pixel by pixel from China Meteorological Administration Land Data Assimilation System (CLDAS) meteorological products, including air temperature, solar radiation, wind speed and specific humidity. For cloudy pixels, soil moisture values are derived by calculating the surface and aerodynamic resistances from wind speed. The approach is thus capable of filling the soil moisture gaps that traditional optical/thermal infrared methods leave over cloudy pixels, allowing for a spatially complete soil moisture map over large areas. Evaluation over agricultural fields indicates that the proposed approach can produce a generally reasonable distribution of all-weather soil moisture. An acceptable accuracy between the estimated all-weather soil moisture and in-situ measurements at different depths was found, with a Root Mean Square Error (RMSE) varying from 0.067 m3/m3 to 0.079 m3/m3 and a slight bias ranging from 0.004 m3/m3 to -0.011 m3/m3. The proposed approach reveals significant potential for deriving all-weather soil moisture from currently available satellite images and meteorological products at regional or global scales in future developments.

  4. Imaging properties of pixellated scintillators with deep pixels

    PubMed Central

    Barber, H. Bradford; Fastje, David; Lemieux, Daniel; Grim, Gary P.; Furenlid, Lars R.; Miller, Brian W.; Parkhurst, Philip; Nagarkar, Vivek V.

    2015-01-01

    We have investigated the light-transport properties of scintillator arrays with long, thin pixels (deep pixels) for use in high-energy gamma-ray imaging. We compared 10×10 pixel arrays of YSO:Ce, LYSO:Ce and BGO (1mm × 1mm × 20 mm pixels) made by Proteus, Inc. with similar 10×10 arrays of LSO:Ce and BGO (1mm × 1mm × 15mm pixels) loaned to us by Saint-Gobain. The imaging and spectroscopic behaviors of these scintillator arrays are strongly affected by the choice of a reflector used as an inter-pixel spacer (3M ESR in the case of the Proteus arrays and white, diffuse-reflector for the Saint-Gobain arrays). We have constructed a 3700-pixel LYSO:Ce Prototype NIF Gamma-Ray Imager for use in diagnosing target compression in inertial confinement fusion. This system was tested at the OMEGA Laser and exhibited significant optical, inter-pixel cross-talk that was traced to the use of a single-layer of ESR film as an inter-pixel spacer. We show how the optical cross-talk can be mapped, and discuss correction procedures. We demonstrate a 10×10 YSO:Ce array as part of an iQID (formerly BazookaSPECT) imager and discuss issues related to the internal activity of 176Lu in LSO:Ce and LYSO:Ce detectors. PMID:26236070

  5. Imaging properties of pixellated scintillators with deep pixels

    NASA Astrophysics Data System (ADS)

    Barber, H. Bradford; Fastje, David; Lemieux, Daniel; Grim, Gary P.; Furenlid, Lars R.; Miller, Brian W.; Parkhurst, Philip; Nagarkar, Vivek V.

    2014-09-01

    We have investigated the light-transport properties of scintillator arrays with long, thin pixels (deep pixels) for use in high-energy gamma-ray imaging. We compared 10x10 pixel arrays of YSO:Ce, LYSO:Ce and BGO (1mm x 1mm x 20 mm pixels) made by Proteus, Inc. with similar 10x10 arrays of LSO:Ce and BGO (1mm x 1mm x 15mm pixels) loaned to us by Saint-Gobain. The imaging and spectroscopic behaviors of these scintillator arrays are strongly affected by the choice of a reflector used as an inter-pixel spacer (3M ESR in the case of the Proteus arrays and white, diffuse-reflector for the Saint-Gobain arrays). We have constructed a 3700-pixel LYSO:Ce Prototype NIF Gamma-Ray Imager for use in diagnosing target compression in inertial confinement fusion. This system was tested at the OMEGA Laser and exhibited significant optical, inter-pixel cross-talk that was traced to the use of a single-layer of ESR film as an inter-pixel spacer. We show how the optical cross-talk can be mapped, and discuss correction procedures. We demonstrate a 10x10 YSO:Ce array as part of an iQID (formerly BazookaSPECT) imager and discuss issues related to the internal activity of 176Lu in LSO:Ce and LYSO:Ce detectors.

  6. The interpretation of remotely sensed cloud properties from a model parameterization perspective

    NASA Technical Reports Server (NTRS)

    HARSHVARDHAN; Wielicki, Bruce A.; Ginger, Kathryn M.

    1994-01-01

    A study has been made of the relationship between mean cloud radiative properties and cloud fraction in stratocumulus cloud systems. The analysis is of several Land Resources Satellite System (LANDSAT) images and three-hourly International Satellite Cloud Climatology Project (ISCCP) C-1 data during daylight hours for two grid boxes covering an area typical of a general circulation model (GCM) grid increment. Cloud properties were inferred from the LANDSAT images using two thresholds and several pixel resolutions ranging from roughly 0.0625 km to 8 km. At the finest resolution, the analysis shows that mean cloud optical depth (or liquid water path) increases somewhat with increasing cloud fraction up to 20% cloud coverage. More striking, however, is the lack of correlation between the two quantities for cloud fractions between roughly 0.2 and 0.8. When the scene is essentially overcast, the mean cloud optical depth tends to be higher. Coarse-resolution LANDSAT analysis and the ISCCP 8-km data show a lack of correlation between mean cloud optical depth and cloud fraction for coverage less than about 90%. This study shows that there is perhaps a local mean liquid water path (LWP) associated with partly cloudy areas of stratocumulus clouds. A method has been suggested to use this property to construct the cloud fraction parameterization in a GCM when the model computes a grid-box-mean LWP.

  7. Detection of X-ray spectra and images by Timepix

    NASA Astrophysics Data System (ADS)

    Urban, M.; Nentvich, O.; Stehlikova, V.; Sieger, L.

    2017-07-01

    X-ray monitoring for astrophysical applications mainly consists of two parts - optics and a detector. The article describes an approach based on a combination of Lobster Eye (LE) optics with the Timepix detector. Timepix is a semiconductor detector with 256 × 256 pixels on one electrode and a common second electrode. Using the back-side pulse from the common electrode of the pixelated detector provides an additional spectroscopic or trigger signal. The article also describes the effects of thermal stabilisation and the cooling effect on the detector working as a single pixel.

  8. Time-Resolved and Spectroscopic Three-Dimensional Optical Breast Tomography

    DTIC Science & Technology

    2008-04-01

    Keywords: independent component analysis, near infrared (NIR) imaging, optical mammography, optical imaging using independent component analysis (OPTICA). Each raw image was cropped to select the information-rich region and binned (merging 5 × 5 pixels into one) to enhance the signal-to-noise ratio, resulting in a total of 352 images of 54 × 55 pixels each.

  9. PANIC: current status

    NASA Astrophysics Data System (ADS)

    Cárdenas, M. C.; Rodríguez Gómez, J.

    2011-11-01

    PANIC, the PAnoramic Near Infrared Camera, is a new instrument for the Calar Alto Observatory (CAHA): a wide-field infrared imager for the CAHA 2.2 m and 3.5 m telescopes. The optics is a folded single optical train of pure lens optics, with a pixel scale of 0.45 arcsec/pixel (18 micron pixels) at the 2.2 m telescope and 0.23 arcsec/pixel at the 3.5 m. A mosaic of four Hawaii-2RG detectors provides a field of view (FOV) of 0.5x0.5 degrees and 0.25x0.25 degrees, respectively. It will cover the photometric bands from Z to K_s (0.8 to 2.5 microns) with a low thermal background due to cold stops. Here we present the current status of the project.

  10. Single-pixel imaging based on compressive sensing with spectral-domain optical mixing

    NASA Astrophysics Data System (ADS)

    Zhu, Zhijing; Chi, Hao; Jin, Tao; Zheng, Shilie; Jin, Xiaofeng; Zhang, Xianmin

    2017-11-01

    In this letter a single-pixel imaging structure is proposed based on compressive sensing using a spatial light modulator (SLM)-based spectrum shaper. In the approach, an SLM-based spectrum shaper, the pattern of which is a predetermined pseudorandom bit sequence (PRBS), spectrally codes the optical pulse carrying image information. The energy of the spectrally mixed pulse is detected by a single-pixel photodiode and the measurement results are used to reconstruct the image via a sparse recovery algorithm. As the mixing of the image signal and the PRBS is performed in the spectral domain, optical pulse stretching, modulation, compression and synchronization in the time domain are avoided. Experiments are implemented to verify the feasibility of the approach.
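    A minimal numerical sketch of the measurement model in this record: each single-pixel reading is the image modulated by a pseudorandom bit pattern, and the image is recovered with a generic sparse-recovery loop (ISTA). The sizes, sparsity level, regularization weight and identity sparsity basis are assumptions for illustration; the paper's spectral-domain hardware and its specific recovery algorithm are not reproduced here.

      # Sketch of the single-pixel compressive-sensing model: y = Phi @ x from PRBS
      # patterns, recovered with iterative shrinkage-thresholding (ISTA).
      import numpy as np

      rng = np.random.default_rng(0)
      n, m, k = 256, 96, 8                                   # pixels, measurements, nonzeros

      x_true = np.zeros(n)                                   # sparse "image" (flattened)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

      Phi = rng.integers(0, 2, size=(m, n)).astype(float)    # PRBS patterns (0/1)
      Phi = (Phi - 0.5) / np.sqrt(m)                         # zero-mean, scaled
      y = Phi @ x_true                                       # single-pixel detector readings

      def ista(Phi, y, lam=0.01, n_iter=500):
          """ISTA for min 0.5*||y - Phi x||**2 + lam*||x||_1."""
          L = np.linalg.norm(Phi, 2) ** 2                    # Lipschitz constant of the gradient
          x = np.zeros(Phi.shape[1])
          for _ in range(n_iter):
              g = Phi.T @ (Phi @ x - y)                      # gradient of the data term
              x = x - g / L
              x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)   # soft threshold
          return x

      x_hat = ista(Phi, y)
      print("relative reconstruction error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))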

  11. A Multi-Modality CMOS Sensor Array for Cell-Based Assay and Drug Screening.

    PubMed

    Chi, Taiyun; Park, Jong Seok; Butts, Jessica C; Hookway, Tracy A; Su, Amy; Zhu, Chengjie; Styczynski, Mark P; McDevitt, Todd C; Wang, Hua

    2015-12-01

    In this paper, we present a fully integrated multi-modality CMOS cellular sensor array with four sensing modalities to characterize different cell physiological responses, including extracellular voltage recording, cellular impedance mapping, optical detection with shadow imaging and bioluminescence sensing, and thermal monitoring. The sensor array consists of nine parallel pixel groups and nine corresponding signal conditioning blocks. Each pixel group comprises one temperature sensor and 16 tri-modality sensor pixels, while each tri-modality sensor pixel can be independently configured for extracellular voltage recording, cellular impedance measurement (voltage excitation/current sensing), and optical detection. This sensor array supports multi-modality cellular sensing at the pixel level, which enables holistic cell characterization and joint-modality physiological monitoring on the same cellular sample with a pixel resolution of 80 μm × 100 μm. Comprehensive biological experiments with different living cell samples demonstrate the functionality and benefit of the proposed multi-modality sensing in cell-based assay and drug screening.

  12. The MODIS Cloud Optical and Microphysical Products: Collection 6 Updates and Examples From Terra and Aqua

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Meyer, Kerry G.; King, Michael D.; Wind, Galina; Amarasinghe, Nandana; Marchant, Benjamin G.; Arnold, G. Thomas; Zhang, Zhibo; Hubanks, Paul A.; Holz, Robert E.

    2016-01-01

    The MODIS Level-2 cloud product (Earth Science Data Set names MOD06 and MYD06 for Terra and Aqua MODIS, respectively) provides pixel-level retrievals of cloud-top properties (day and night pressure, temperature, and height) and cloud optical properties (optical thickness, effective particle radius, and water path for both liquid water and ice cloud thermodynamic phases; daytime only). Collection 6 (C6) reprocessing of the product was completed in May 2014 and March 2015 for MODIS Aqua and Terra, respectively. Here we provide an overview of major C6 optical property algorithm changes relative to the previous Collection 5 (C5) product. Notable C6 optical and microphysical algorithm changes include: (i) new ice cloud optical property models and a more extensive cloud radiative transfer code lookup table (LUT) approach, (ii) improvement in the skill of the shortwave-derived cloud thermodynamic phase, (iii) separate cloud effective radius retrieval datasets for each spectral combination used in previous collections, (iv) separate retrievals for partly cloudy pixels and those associated with cloud edges, (v) failure metrics that provide diagnostic information for pixels having observations that fall outside the LUT solution space, and (vi) enhanced pixel-level retrieval uncertainty calculations. The C6 algorithm changes collectively can result in significant changes relative to C5, though the magnitude depends on the dataset and the pixel's retrieval location in the cloud parameter space. Example Level-2 granule and Level-3 gridded dataset differences between the two collections are shown. While the emphasis is on the suite of cloud optical property datasets, other MODIS cloud datasets are discussed when relevant.

  13. X-Ray Spectroscopy of Optically Bright Planets using the Chandra Observatory

    NASA Technical Reports Server (NTRS)

    Ford, P. G.; Elsner, R. F.

    2005-01-01

    Since its launch in July 1999, Chandra's Advanced CCD Imaging Spectrometer (ACIS) has observed several planets (Venus, Mars, Jupiter and Saturn) and 6 comets. At 0.5 arc-second spatial resolution, ACIS detects individual x-ray photons with good quantum efficiency (25% at 0.6 keV) and energy resolution (20% FWHM at 0.6 keV). However, the ACIS CCDs are also sensitive to optical and near-infrared light, which is absorbed by optical blocking filters (OBFs) that eliminate optical contamination from all but the brightest extended sources, e.g., planets. Jupiter at opposition subtends approx. 45 arc-seconds (90 CCD pixels). Since Chandra is incapable of tracking a moving target, the planet takes 10-20 kiloseconds to move across the most sensitive ACIS CCD, after which the observatory must be re-pointed. Meanwhile, the OBF covering that CCD adds an optical signal equivalent to approx. 110 eV to each pixel that lies within the outline of the Jovian disk. This has three consequences: (1) the observatory must be pointed away from Jupiter while CCD bias maps are constructed; (2) most x-rays from within the optical image will be misidentified as charged-particle background and ignored; and (3) those x-rays that are reported will be assigned anomalously high energies. The same also applies to the other planets, but is less serious since they are either dimmer at optical wavelengths, or they show less apparent motion across the sky, permitting reduced CCD exposure times: the optical contamination from Saturn adds approx. 15 eV per pixel, and from Mars and Venus approx. 31 eV. After analyzing a series of short Jupiter observations in December 2000, ACIS parameters were optimized for the February 2003 opposition. CCD bias maps were constructed while Chandra pointed away from Jupiter, and the subsequent observations employed on-board software to ignore any pixel that contained less charge than that expected from optical leakage. In addition, ACIS was commanded to report 5 x 5 arrays of pixel values surrounding each x-ray event, and the outlying values were employed during ground processing to correct for the optical contamination.

  14. A New Digital Holographic Instrument for Measuring Microphysical Properties of Contrails in the SASS (Subsonic Assessment) Program

    NASA Technical Reports Server (NTRS)

    Lawson, R. Paul

    2000-01-01

    SPEC Incorporated designed, built and operated a new instrument, called a pi-Nephelometer, on the NASA DC-8 for the SUCCESS field project. The pi-Nephelometer casts an image of a particle on a 400,000 pixel solid-state camera, freezing the motion of the particle using a 25 ns pulsed, high-power (60 W) laser diode. Unique optical imaging and particle detection systems precisely detect particles and define the depth of field so that at least one particle in the image is almost always in focus. A powerful image processing engine processes frames from the solid-state camera and identifies and records regions of interest (i.e. particle images) in real time. Images of ice crystals are displayed and recorded with 5 micron pixel resolution. In addition, a scattered-light system simultaneously measures the scattering phase function of the imaged particle. The system consists of twenty-eight 1-mm optical fibers connected to microlenses bonded on the surface of avalanche photodiodes (APDs). Data collected with the pi-Nephelometer during the SUCCESS field project were reported in a special issue of Geophysical Research Letters. The pi-Nephelometer provided the basis for development of a commercial imaging probe, called the cloud particle imager (CPI), which has been installed on several research aircraft and used in more than a dozen field programs.

  15. Downsampling Photodetector Array with Windowing

    NASA Technical Reports Server (NTRS)

    Patawaran, Ferze D.; Farr, William H.; Nguyen, Danh H.; Quirk, Kevin J.; Sahasrabudhe, Adit

    2012-01-01

    In a photon counting detector array, each pixel in the array produces an electrical pulse when an incident photon on that pixel is detected. Detection and demodulation of an optical communication signal that modulates the intensity of the optical signal requires counting the number of photon arrivals over a given interval. As the size of photon counting photodetector arrays increases, parallel processing of all the pixels exceeds the resources available in current application-specific integrated circuit (ASIC) and gate array (GA) technology; the desire for a high fill factor in avalanche photodiode (APD) detector arrays also precludes this. Through the use of downsampling and windowing of portions of the detector array, the processing is distributed between the ASIC and the GA. This allows demodulation of the optical communication signal incident on a large photon counting detector array, and provides an architecture amenable to algorithmic changes. The detector array readout ASIC functions as a parallel-to-serial converter, serializing the photodetector array output for subsequent processing. Additional downsampling functionality for each pixel is added to this ASIC. Because of the large number of pixels in the array, the readout time of the entire photodetector is greater than the time between photon arrivals; therefore, a downsampling pre-processing step is performed in order to increase the time allowed for the readout to occur. Each pixel drives a small counter that is incremented at every detected photon arrival or, equivalently, the charge in a storage capacitor is incremented. At the end of a user-configurable counting period (calculated independently from the ASIC), the counters are sampled and cleared. This downsampled photon count information is then sent one counter word at a time to the GA. For a large array, processing even the downsampled pixel counts exceeds the capabilities of the GA. Windowing of the array, whereby several subsets of pixels are designated for processing, is used to further reduce the computational requirements. Because the photon count information is sent one word at a time to the GA, the aggregation of the pixels in a window can be achieved by selecting only the designated pixel counts from the serial stream of photon counts, thereby obviating the need to store the entire frame of pixel counts in the gate array. The pixel count sequence from each window can then be processed, forming lower-rate pixel statistics for each window. By having this processing occur in the GA rather than in the ASIC, future changes to the processing algorithm can be readily implemented. The high-bandwidth requirements of a photon counting array combined with the properties of the optical modulation being detected by the array present a unique problem that has not been addressed by current CCD or CMOS sensor array solutions.
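    The toy sketch below mirrors the two-stage flow described above under simplified assumptions: Poisson photon arrivals accumulated in per-pixel counters over a counting period (the downsampling step), followed by selection of designated windows from the serialized frame (the windowing step). Array size, window coordinates and arrival rates are illustrative only.

      # Toy model of ASIC-side downsampling plus gate-array-side windowing.
      import numpy as np

      rng = np.random.default_rng(1)
      H, W = 32, 32

      def downsample_counts(photon_arrival_rate, period_slots=64):
          """Simulate per-pixel counters: Poisson arrivals accumulated per counting period."""
          return rng.poisson(photon_arrival_rate * period_slots)

      def window_select(frame, windows):
          """Keep only pixels inside the designated windows from the serialized frame."""
          serial = frame.ravel()                       # parallel-to-serial readout
          out = {}
          for name, (r0, r1, c0, c1) in windows.items():
              rows, cols = np.mgrid[r0:r1, c0:c1]
              idx = (rows * W + cols).ravel()          # positions in the serial stream
              out[name] = serial[idx].reshape(r1 - r0, c1 - c0)
          return out

      rate = np.full((H, W), 0.05)                     # photons per slot per pixel
      rate[10:14, 10:14] = 0.5                         # bright communication spot
      frame = downsample_counts(rate)
      windows = {"signal": (8, 16, 8, 16), "background": (24, 32, 24, 32)}
      stats = {k: v.sum() for k, v in window_select(frame, windows).items()}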

  16. Centroid measurement error of CMOS detector in the presence of detector noise for inter-satellite optical communications

    NASA Astrophysics Data System (ADS)

    Li, Xin; Zhou, Shihong; Ma, Jing; Tan, Liying; Shen, Tao

    2013-08-01

    CMOS is a good candidate tracking detector for satellite optical communication systems, with the outstanding feature of sub-window readout enabled by the development of APS (Active Pixel Sensor) technology. For inter-satellite optical communications it is critical to estimate the direction of the incident laser beam precisely by measuring the centroid position of the incident beam spot. The presence of detector noise results in measurement error, which degrades the tracking performance of the system. In this research, the measurement error of CMOS is derived taking detector noise into consideration. It is shown that the measurement error depends on the pixel noise, the size of the tracking sub-window (number of pixels), the intensity of the incident laser beam, and the relative size of the beam spot. The influences of these factors are analyzed by numerical simulation. We hope the results obtained in this research will be helpful in the design of CMOS detectors for satellite optical communication systems.
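    Since the record does not reproduce the derivation, the snippet below is only a hedged numerical illustration of the underlying effect: the intensity-weighted centroid of a spot inside a tracking sub-window is perturbed by additive pixel noise, and the mean error can be estimated by Monte Carlo. Spot size, window size, signal level and noise level are assumed values.

      # Monte Carlo estimate of centroid error under additive pixel noise (assumed values).
      import numpy as np

      rng = np.random.default_rng(2)

      def centroid(window):
          """Intensity-weighted centroid of a 2-D pixel window."""
          total = window.sum()
          ys, xs = np.indices(window.shape)
          return (xs * window).sum() / total, (ys * window).sum() / total

      def simulate(n_trials=2000, win=16, spot_sigma=2.0, signal=1000.0, noise_sigma=5.0):
          true = (win - 1) / 2.0
          ys, xs = np.indices((win, win))
          spot = signal * np.exp(-((xs - true) ** 2 + (ys - true) ** 2) / (2 * spot_sigma ** 2))
          errs = []
          for _ in range(n_trials):
              frame = spot + rng.normal(0.0, noise_sigma, size=spot.shape)
              cx, cy = centroid(np.clip(frame, 0, None))   # clip negative noise excursions
              errs.append(np.hypot(cx - true, cy - true))
          return np.mean(errs)

      print("mean centroid error [pixels]:", simulate())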

  17. Changes of Dust Opacity with Density in the Orion A Molecular Cloud

    NASA Astrophysics Data System (ADS)

    Roy, Arabindo; Martin, Peter G.; Polychroni, Danae; Bontemps, Sylvain; Abergel, Alain; André, Philippe; Arzoumanian, Doris; Di Francesco, James; Hill, Tracey; Konyves, Vera; Nguyen-Luong, Quang; Pezzuto, Stefano; Schneider, Nicola; Testi, Leonardo; White, Glenn

    2013-01-01

    We have studied the opacity of dust grains at submillimeter wavelengths by estimating the optical depth from imaging at 160, 250, 350, and 500 μm from the Herschel Gould Belt Survey and comparing this to a column density obtained from the Two Micron All Sky Survey derived color excess E(J - K_s). Our main goal was to investigate the spatial variations of the opacity due to "big" grains over a variety of environmental conditions and thereby quantify how emission properties of the dust change with column (and volume) density. The central and southern areas of the Orion A molecular cloud examined here, with N_H ranging from 1.5 × 10^21 cm^-2 to 50 × 10^21 cm^-2, are well suited to this approach. We fit the multi-frequency Herschel spectral energy distributions (SEDs) of each pixel with a modified blackbody to obtain the temperature, T, and optical depth, τ_1200, at a fiducial frequency of 1200 GHz (250 μm). Using a calibration of N_H/E(J - K_s) for the interstellar medium (ISM) we obtained the opacity (dust emission cross-section per H nucleon), σ_e(1200), for every pixel. From a value ~1 × 10^-25 cm^2 H^-1 at the lowest column densities that is typical of the high-latitude diffuse ISM, σ_e(1200) increases as N_H^0.28 over the range studied. This is suggestive of grain evolution. Integrating the SEDs over frequency, we also calculated the specific power P (emission power per H) for the big grains. In low column density regions where dust clouds are optically thin to the interstellar radiation field (ISRF), P is typically 3.7 × 10^-31 W H^-1, again close to that in the high-latitude diffuse ISM. However, we find evidence for a decrease of P in high column density regions, which would be a natural outcome of attenuation of the ISRF that heats the grains, and for localized increases for dust illuminated by nearby stars or embedded protostars.
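    As an illustration of the per-pixel SED fit mentioned above, the sketch below fits a modified blackbody, I_nu ∝ τ_1200 (ν/1200 GHz)^β B_ν(T), to four mock Herschel band fluxes with scipy. The fixed β = 2, the mock flux values and the noise level are assumptions; the paper's actual fitting configuration may differ.

      # Per-pixel modified-blackbody fit to mock 160/250/350/500 um fluxes.
      import numpy as np
      from scipy.optimize import curve_fit

      h, k_B, c = 6.626e-34, 1.381e-23, 2.998e8
      nu = c / np.array([160e-6, 250e-6, 350e-6, 500e-6])   # band frequencies [Hz]

      def planck(nu, T):
          return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

      def mod_bb(nu, T, tau1200, beta=2.0):
          """Modified blackbody with optical depth referenced to 1200 GHz."""
          return tau1200 * (nu / 1.2e12) ** beta * planck(nu, T)

      # Mock pixel SED: T = 18 K, tau_1200 = 2e-4, with 5% noise (illustrative values).
      rng = np.random.default_rng(3)
      flux = mod_bb(nu, 18.0, 2e-4) * (1 + 0.05 * rng.standard_normal(nu.size))

      popt, _ = curve_fit(mod_bb, nu, flux, p0=(15.0, 1e-4))   # fits T and tau_1200 only
      T_fit, tau_fit = popt
      print(f"T = {T_fit:.1f} K, tau_1200 = {tau_fit:.2e}")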

  18. 1024x1024 Pixel MWIR and LWIR QWIP Focal Plane Arrays and 320x256 MWIR:LWIR Pixel Colocated Simultaneous Dualband QWIP Focal Plane Arrays

    NASA Technical Reports Server (NTRS)

    Gunapala, Sarath D.; Bandara, Sumith V.; Liu, John K.; Hill, Cory J.; Rafol, S. B.; Mumolo, Jason M.; Trinh, Joseph T.; Tidrow, M. Z.; Le Van, P. D.

    2005-01-01

    Mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) 1024x1024 pixel quantum well infrared photodetector (QWIP) focal planes have been demonstrated with excellent imaging performance. The MWIR QWIP detector array has demonstrated a noise equivalent differential temperature (NE(Delta)T) of 17 mK at a 95 K operating temperature with f/2.5 optics at 300 K background, and the LWIR detector array has demonstrated a NE(Delta)T of 13 mK at a 70 K operating temperature with the same optical and background conditions as the MWIR detector array after the subtraction of system noise. Both MWIR and LWIR focal planes have shown background limited performance (BLIP) at 90 K and 70 K operating temperatures, respectively, under similar optical and background conditions. In addition, we are in the process of developing MWIR and LWIR pixel-collocated simultaneously readable dualband QWIP focal plane arrays.

  19. Optical design of an athermalised dual field of view step zoom optical system in MWIR

    NASA Astrophysics Data System (ADS)

    Kucukcelebi, Doruk

    2017-08-01

    In this paper, the optical design of an athermalised dual field of view step zoom optical system in MWIR (3.7 μm - 4.8 μm) is described. The dual field of view infrared optical system is designed based on the principle of passive athermalization, not only to achieve an athermal optical system but also to maintain high image quality over the working temperature range of -40°C to +60°C. The infrared optical system used in this study has a cooled MWIR focal plane array detector with 320 pixel x 256 pixel resolution and a 20 μm pixel pitch. In this study, a step zoom mechanism with purely axial motion of a single lens group is adopted to simplify the mechanical structure. The optical design is based on moving a single lens along the optical axis to change the system's field of view, not only reducing the number of moving parts but also athermalizing the optical system. The optical design began with an optimization process using paraxial optics once the first-order optical parameters were determined. During the optimization process, aspherical surfaces were used in order to reduce aberrations such as coma, astigmatism, and spherical and chromatic aberration. As a result, an athermalised dual field of view step zoom optical design is proposed, and the performance of the design was verified through focus shift, spot diagram and MTF analyses.

  20. High linearity SPAD and TDC array for TCSPC and 3D ranging applications

    NASA Astrophysics Data System (ADS)

    Villa, Federica; Lussana, Rudi; Bronzi, Danilo; Dalla Mora, Alberto; Contini, Davide; Tisa, Simone; Tosi, Alberto; Zappa, Franco

    2015-01-01

    An array of 32x32 Single-Photon Avalanche Diodes (SPADs) and Time-to-Digital Converters (TDCs) has been fabricated in a 0.35 μm automotive-certified CMOS technology. The overall dimension of the chip is 9x9 mm2. Each pixel is able to detect photons in the 300 nm - 900 nm wavelength range with a fill factor of 3.14% and either to count them or to time-stamp their arrival. In photon-counting mode an in-pixel 6-bit counter provides photon-number-resolved intensity movies at 100 kfps, whereas in photon-timing mode the 10-bit in-pixel TDC provides time-resolved maps (Time-Correlated Single-Photon Counting measurements) or 3D depth-resolved images and movies (through the direct time-of-flight technique), with 312 ps resolution. The photodetector is a 30 μm diameter SPAD with low Dark Count Rate (120 cps at room temperature, 3% hot pixels) and 55% peak Photon Detection Efficiency (PDE) at 450 nm. The TDC has a 6-bit counter and a 4-bit fine interpolator, based on a Delay Locked Loop (DLL) line, which makes the TDC insensitive to process, voltage, and temperature drifts. The implemented sliding-scale technique improves linearity, giving 2% LSB DNL and 10% LSB INL. The single-shot precision is 260 ps rms, comprising SPAD, TDC and driving board jitter. Both optical and electrical crosstalk among SPADs and TDCs are negligible. 2D fast movies and 3D reconstructions with centimeter resolution are reported.

  1. Microlens performance limits in sub-2 μm pixel CMOS image sensors.

    PubMed

    Huo, Yijie; Fesenmaier, Christian C; Catrysse, Peter B

    2010-03-15

    CMOS image sensors with smaller pixels are expected to enable digital imaging systems with better resolution. When pixel size scales below 2 μm, however, diffraction affects the optical performance of the pixel and its microlens, in particular. We present a first-principles electromagnetic analysis of microlens behavior during the lateral scaling of CMOS image sensor pixels. We establish for a three-metal-layer pixel that diffraction prevents the microlens from acting as a focusing element when pixels become smaller than 1.4 μm. This severely degrades performance for on- and off-axis pixels in red, green and blue color channels. We predict that one-metal-layer or backside-illuminated pixels are required to extend the functionality of microlenses beyond the 1.4 μm pixel node.

  2. Stereo pair design for cameras with a fovea

    NASA Technical Reports Server (NTRS)

    Chettri, Samir R.; Keefe, Michael; Zimmerman, John R.

    1992-01-01

    We describe the methodology for the design and selection of a stereo pair when the cameras have a greater concentration of sensing elements in the center of the image plane (fovea). Binocular vision is important for the purpose of depth estimation, which in turn is important in a variety of applications such as gaging and autonomous vehicle guidance. We assume that one camera has square pixels of size dv and the other has pixels of size rdv, where r is between 0 and 1. We then derive results for the average error, the maximum error, and the error distribution in the depth determination of a point. These results can be shown to be a general form of the results for the case when the cameras have equal sized pixels. We discuss the behavior of the depth estimation error as we vary r and the tradeoffs between the extra processing time and increased accuracy. Knowing these results makes it possible to study the case when we have a pair of cameras with a fovea.
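    A back-of-envelope sketch of the depth-error behaviour discussed above, assuming a parallel stereo geometry where depth follows Z = fB/d and the disparity quantization error scales with the pixel sizes dv and r·dv. The focal length, baseline and pixel size are placeholder values, and the half-sum disparity-error model is a simplification rather than the paper's derivation.

      # Quantization-driven depth error for a parallel stereo pair (illustrative values).
      f = 0.012        # focal length [m]
      B = 0.20         # baseline [m]
      dv = 10e-6       # pixel size of camera 1 [m]

      def depth_error(Z, r):
          """Approximate depth uncertainty when the two cameras quantize the image
          with pixel sizes dv and r*dv (disparity error taken as half their sum)."""
          disparity_err = 0.5 * (dv + r * dv)
          return Z ** 2 * disparity_err / (f * B)

      for r in (1.0, 0.5, 0.25):
          print(f"r = {r:4.2f}: depth error at 5 m = {depth_error(5.0, r) * 100:.1f} cm")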

  3. Towards simultaneous Talbot bands based optical coherence tomography and scanning laser ophthalmoscopy imaging.

    PubMed

    Marques, Manuel J; Bradu, Adrian; Podoleanu, Adrian Gh

    2014-05-01

    We report a Talbot bands-based optical coherence tomography (OCT) system capable of producing longitudinal B-scan OCT images and en-face scanning laser ophthalmoscopy (SLO) images of the human retina in-vivo. The OCT channel employs a broadband optical source and a spectrometer. A gap is created between the sample and reference beams while on their way towards the spectrometer's dispersive element to create Talbot bands. The spatial separation of the two beams facilitates collection by an SLO channel of optical power originating exclusively from the retina, deprived from any contribution from the reference beam. Three different modes of operation are presented, constrained by the minimum integration time of the camera used in the spectrometer and by the galvo-scanners' scanning rate: (i) a simultaneous acquisition mode over the two channels, useful for small size imaging, that conserves the pixel-to-pixel correspondence between them; (ii) a hybrid sequential mode, where the system switches itself between the two regimes and (iii) a sequential "on-demand" mode, where the system can be used in either OCT or SLO regimes for as long as required. The two sequential modes present varying degrees of trade-off between pixel-to-pixel correspondence and independent full control of parameters within each channel. Images of the optic nerve and fovea regions obtained in the simultaneous (i) and in the hybrid sequential mode (ii) are presented.

  4. Towards simultaneous Talbot bands based optical coherence tomography and scanning laser ophthalmoscopy imaging

    PubMed Central

    Marques, Manuel J.; Bradu, Adrian; Podoleanu, Adrian Gh.

    2014-01-01

    We report a Talbot bands-based optical coherence tomography (OCT) system capable of producing longitudinal B-scan OCT images and en-face scanning laser ophthalmoscopy (SLO) images of the human retina in-vivo. The OCT channel employs a broadband optical source and a spectrometer. A gap is created between the sample and reference beams while on their way towards the spectrometer’s dispersive element to create Talbot bands. The spatial separation of the two beams facilitates collection by an SLO channel of optical power originating exclusively from the retina, deprived from any contribution from the reference beam. Three different modes of operation are presented, constrained by the minimum integration time of the camera used in the spectrometer and by the galvo-scanners’ scanning rate: (i) a simultaneous acquisition mode over the two channels, useful for small size imaging, that conserves the pixel-to-pixel correspondence between them; (ii) a hybrid sequential mode, where the system switches itself between the two regimes and (iii) a sequential “on-demand” mode, where the system can be used in either OCT or SLO regimes for as long as required. The two sequential modes present varying degrees of trade-off between pixel-to-pixel correspondence and independent full control of parameters within each channel. Images of the optic nerve and fovea regions obtained in the simultaneous (i) and in the hybrid sequential mode (ii) are presented. PMID:24877006

  5. Removing sun glint from optical remote sensing images of shallow rivers

    USGS Publications Warehouse

    Overstreet, Brandon T.; Legleiter, Carl

    2017-01-01

    Sun glint is the specular reflection of light from the water surface, which often causes unusually bright pixel values that can dominate fluvial remote sensing imagery and obscure the water-leaving radiance signal of interest for mapping bathymetry, bottom type, or water column optical characteristics. Although sun glint is ubiquitous in fluvial remote sensing imagery, river-specific methods for removing sun glint are not yet available. We show that existing sun glint-removal methods developed for multispectral images of marine shallow water environments over-correct shallow portions of fluvial remote sensing imagery resulting in regions of unreliable data along channel margins. We build on existing marine glint-removal methods to develop a river-specific technique that removes sun glint from shallow areas of the channel without overcorrection by accounting for non-negligible water-leaving near-infrared radiance. This new sun glint-removal method can improve the accuracy of spectrally-based depth retrieval in cases where sun glint dominates the at-sensor radiance. For an example image of the gravel-bed Snake River, Wyoming, USA, observed-vs.-predicted R2 values for depth retrieval improved from 0.66 to 0.76 following sun glint removal. The methodology presented here is straightforward to implement and could be incorporated into image processing workflows for multispectral images that include a near-infrared band.
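    For context, the sketch below implements the standard marine-style NIR-regression deglinting that this record builds on: a per-band slope is regressed from glint-affected deep-water samples and used to subtract the glint contribution. It deliberately does not include the river-specific correction for non-negligible water-leaving NIR radiance that the paper introduces; array names and the sample mask are assumptions.

      # Standard NIR-regression deglinting (marine-style baseline, not the river method).
      import numpy as np

      def deglint(vis_band, nir_band, glint_sample_mask, nir_floor=None):
          """Remove sun glint from one visible band via linear regression against NIR.

          vis_band, nir_band : 2-D reflectance arrays
          glint_sample_mask  : boolean mask over optically deep pixels spanning a
                               range of glint intensities
          nir_floor          : NIR reflectance of a glint-free pixel (sample minimum
                               if not supplied)
          """
          x = nir_band[glint_sample_mask].ravel()
          y = vis_band[glint_sample_mask].ravel()
          slope = np.polyfit(x, y, 1)[0]             # regression of VIS on NIR
          if nir_floor is None:
              nir_floor = x.min()
          return vis_band - slope * (nir_band - nir_floor)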

  6. Time multiplexing for increased FOV and resolution in virtual reality

    NASA Astrophysics Data System (ADS)

    Miñano, Juan C.; Benitez, Pablo; Grabovičkić, Dejan; Zamora, Pablo; Buljan, Marina; Narasimhan, Bharathwaj

    2017-06-01

    We introduce a time multiplexing strategy to increase the total pixel count of the virtual image seen in a VR headset. This translates into an improvement of the pixel density or the field of view (FOV), or both. A given virtual image is displayed by generating a succession of partial real images, each representing part of the virtual image and together representing the virtual image. Each partial real image uses the full set of physical pixels available in the display. The partial real images are successively formed and combine spatially and temporally to form a virtual image viewable from the eye position. Partial real images are imaged through different optical channels depending on their time slot. Shutters or other schemes are used to prevent a partial real image from being imaged through the wrong optical channel or at the wrong time slot. This time multiplexing strategy requires real images to be shown at high frame rates (>120 fps). Available display and shutter technologies are discussed. Several optical designs for achieving this time multiplexing scheme in a compact format are shown. This time multiplexing scheme allows increasing the resolution/FOV of the virtual image not only by increasing the physical pixel density but also by decreasing the pixel switching time, a feature that may be simpler to achieve in certain circumstances.

  7. Indium-bump-free antimonide superlattice membrane detectors on silicon substrates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zamiri, M., E-mail: mzamiri@chtm.unm.edu, E-mail: skrishna@chtm.unm.edu; Klein, B.; Schuler-Sandy, T.

    2016-02-29

    We present an approach to realize antimonide superlattices on silicon substrates without using conventional indium-bump hybridization. In this approach, PIN superlattices are grown on top of a 60 nm Al0.6Ga0.4Sb sacrificial layer on a GaSb host substrate. Following the growth, the individual pixels are transferred using our epitaxial lift-off technique, which consists of a wet etch to undercut the pixels followed by a dry-stamp process to transfer the pixels to a silicon substrate prepared with a gold layer. Structural and optical characterization of the transferred pixels was done using an optical microscope, scanning electron microscopy, and photoluminescence. The interface between the transferred pixels and the new substrate was abrupt, and no significant degradation in the optical quality was observed. An indium-bump-free membrane detector was then fabricated using this approach. Spectral response measurements provided a 100% cut-off wavelength of 4.3 μm at 77 K. The performance of the membrane detector was compared to a control detector on the as-grown substrate. The membrane detector was limited by surface leakage current. The proposed approach could pave the way for wafer-level integration of photonic detectors on silicon substrates, which could dramatically reduce the cost of these detectors.

  8. Validation of aerosol optical depth uncertainties within the ESA Climate Change Initiative

    NASA Astrophysics Data System (ADS)

    Stebel, Kerstin; Povey, Adam; Popp, Thomas; Capelle, Virginie; Clarisse, Lieven; Heckel, Andreas; Kinne, Stefan; Klueser, Lars; Kolmonen, Pekka; de Leeuw, Gerrit; North, Peter R. J.; Pinnock, Simon; Sogacheva, Larisa; Thomas, Gareth; Vandenbussche, Sophie

    2017-04-01

    Uncertainty is a vital component of any climate data record as it provides the context with which to understand the quality of the data and compare it to other measurements. Therefore, pixel-level uncertainties are provided for all aerosol products that have been developed in the framework of the Aerosol_cci project within ESA's Climate Change Initiative (CCI). Validation of these estimated uncertainties is necessary to demonstrate that they provide a useful representation of the distribution of error. We propose a technique for the statistical validation of AOD (aerosol optical depth) uncertainty by comparison to high-quality ground-based observations and present results for ATSR (Along Track Scanning Radiometer) and IASI (Infrared Atmospheric Sounding Interferometer) data records. AOD at 0.55 µm and its uncertainty was calculated with three AOD retrieval algorithms using data from the ATSR instruments (ATSR-2 (1995-2002) and AATSR (2002-2012)). Pixel-level uncertainties were calculated through error propagation (ADV/ASV, ORAC algorithms) or parameterization of the error's dependence on the geophysical retrieval conditions (SU algorithm). Level 2 data are given as super-pixels of 10 km x 10 km. As validation data, we use direct-sun observations of AOD from the AERONET (AErosol RObotic NETwork) and MAN (Maritime Aerosol Network) sun-photometer networks, which are substantially more accurate than satellite retrievals. Neglecting the uncertainty in AERONET observations and possible issues with their ability to represent a satellite pixel area, the error in the retrieval can be approximated by the difference between the satellite and AERONET retrievals (herein referred to as "error"). To evaluate how well the pixel-level uncertainty represents the observed distribution of error, we look at the distribution of the ratio D between the "error" and the ATSR uncertainty. If uncertainties are well represented, D should be normally distributed and 68.3% of values should fall within the range [-1, +1]. A non-zero mean of D indicates the presence of residual systematic errors. If the fraction is smaller than 68%, uncertainties are underestimated; if it is larger, uncertainties are overestimated. For the three ATSR algorithms, we provide statistics and an evaluation at a global scale (separately for land and ocean/coastal regions), for high/low AOD regimes, and seasonal and regional statistics (e.g. Europe, N-Africa, East-Asia, N-America). We assess the long-term stability of the uncertainty estimates over the 17-year time series, and the consistency between ATSR-2 and AATSR results (during their period of overlap). Furthermore, we exploit the possibility to adapt the uncertainty validation concept to the IASI datasets. Ten-year data records (2007-2016) of dust AOD have been generated with four algorithms using IASI observations over the greater Sahara region [80°W - 120°E, 0°N - 40°N]. For validation, the coarse mode AOD at 0.55 μm from the AERONET direct-sun spectral deconvolution algorithm (SDA) product may be used as a proxy for desert dust. The uncertainty validation results for IASI are still tentative, as larger IASI pixel sizes and the conversion of the IASI AOD values from infrared to visible wavelengths for comparison to ground-based observations introduces large uncertainties.
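    The validation statistic described above reduces to a short calculation; the sketch below computes the ratio D of the satellite-minus-AERONET difference to the reported pixel-level uncertainty and checks the fraction of |D| <= 1 against the 68.3% target. The synthetic inputs are placeholders used only to exercise the function.

      # Statistical validation of pixel-level AOD uncertainty via the D ratio.
      import numpy as np

      def uncertainty_validation(aod_satellite, aod_aeronet, aod_uncertainty):
          error = aod_satellite - aod_aeronet          # AERONET error neglected
          D = error / aod_uncertainty
          return {
              "mean_D": float(np.mean(D)),                      # residual systematic error
              "frac_within_1": float(np.mean(np.abs(D) <= 1)),  # target ~0.683
          }

      # Example with synthetic, well-calibrated retrievals.
      rng = np.random.default_rng(4)
      truth = rng.uniform(0.05, 0.6, 5000)
      sigma = 0.03 + 0.1 * truth
      sat = truth + rng.normal(0, sigma)
      print(uncertainty_validation(sat, truth, sigma))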

  9. Characterization of pixelated TlBr detectors with Tl electrodes

    NASA Astrophysics Data System (ADS)

    Hitomi, Keitaro; Onodera, Toshiyuki; Kim, Seong-Yun; Shoji, Tadayoshi; Ishii, Keizo

    2014-05-01

    A 4.36-mm-thick pixelated thallium bromide (TlBr) detector with Tl electrodes was fabricated from a crystal grown by the traveling molten zone method using zone-purified material. The detector had four 1×1 mm^2 pixelated anodes. The detector performance was characterized at room temperature. The mobility-lifetime products of electrons for each pixel of the TlBr detector were measured to be >2.8×10^-3 cm^2/V. The four pixelated anodes of the detector exhibited energy resolutions of 1.5-1.8% full width at half maximum (FWHM) for 662-keV gamma rays for single-pixel events with the depth correction method. An energy resolution of 4.5% FWHM for 662-keV gamma rays was obtained from a reconstructed energy spectrum using two-pixel events from the two pixelated anodes on the detector.

  10. Experimental investigation on aero-optical aberration of shock wave/boundary layer interactions

    NASA Astrophysics Data System (ADS)

    Ding, Haolin; Yi, Shihe; Fu, Jia; He, Lin

    2016-10-01

    After propagating through a flow field that includes expansion waves, shock waves, boundary layers, etc., an optical wave is distorted by fluctuations in the density field. Interactions between a laminar/turbulent boundary layer and a shock wave contain a large number of complex flow structures, which provides an opportunity to study the influence that the different structures of a complex flow field have on aero-optical aberrations. Interactions between laminar/turbulent boundary layers and shock waves are investigated in a Mach 3.0 supersonic wind tunnel based on a nanoparticle-tracer planar laser scattering (NPLS) system. Boundary layer separation/attachment, induced compression waves, the induced shock wave, the expansion fan and the boundary layer are shown in the NPLS images, whose spatial resolution is 44.15 μm/pixel and whose time resolution is 6 ns. Based on the NPLS images, density fields with high spatial-temporal resolution are obtained by flow image calibration, and the optical path difference (OPD) fluctuations of an originally planar 532 nm wavefront are then calculated using ray-tracing theory. According to the different flow structures in the flow field, four regions are selected: (1) Y = 692-600 pixel; (2) Y = 600-400 pixel; (3) Y = 400-268 pixel; (4) Y = 268-0 pixel. The aero-optical effects of the different flow structures are quantitatively analyzed. The results indicate that compressive waves such as the incident shock wave and the induced shock wave raise the density and thereby lift the OPD curve, but these shocks are fixed in spatial position and intensity, so the aero-optics they induce can be regarded as constant. The induced shock waves are generated by the large-scale coherent vortex structures in the turbulent boundary layer interaction, and their unsteady character follows that of the vortices; the spatial position and intensity of the induced shock wave are fixed in the turbulent boundary layer interaction. The boundary layer aero-optics are likewise induced by the large-scale coherent vortex structures, which result in the fluctuation of the OPD.
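    A minimal sketch of the wavefront calculation summarized above, under simplifying assumptions: the Gladstone-Dale relation n = 1 + K_GD·ρ, straight vertical ray paths through the calibrated density field, and OPD taken as the optical-path-length fluctuation about its mean. The constant, field values and grid size are illustrative and not taken from the experiment.

      # OPD of a planar wavefront crossing a density field (Gladstone-Dale, straight rays).
      import numpy as np

      K_GD = 2.27e-4        # Gladstone-Dale constant for air [m^3/kg]

      def opd_from_density(rho, dy):
          """rho: (ny, nx) density field [kg/m^3]; dy: pixel size along the ray [m].
          Returns the per-column OPD fluctuation [m] of an initially planar wavefront."""
          n = 1.0 + K_GD * rho
          opl = np.sum(n, axis=0) * dy          # optical path length per column
          return opl - opl.mean()               # remove piston -> OPD fluctuation

      # Example with the NPLS-like pixel size quoted above (44.15 um per pixel).
      rng = np.random.default_rng(5)
      rho = 0.4 + 0.05 * rng.standard_normal((692, 512))
      print("OPD rms [nm]:", opd_from_density(rho, 44.15e-6).std() * 1e9)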

  11. How many pixels does it take to make a good 4"×6" print? Pixel count wars revisited

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2011-01-01

    In the early 1980's the future of conventional silver-halide photographic systems was of great concern due to the potential introduction of electronic imaging systems, then typified by the Sony Mavica analog electronic camera. The focus was on the quality of film-based systems as expressed in the equivalent number of pixels and bits per pixel, and how many pixels would be required to create an equivalent quality image from a digital camera. It was found that 35-mm frames, for ISO 100 color negative film, contained equivalent pixels of 12 microns for a total of 18 million pixels per frame (6 million pixels per layer) with about 6 bits of information per pixel; the introduction of new emulsion technology, tabular AgX grains, increased the value to 8 bits per pixel. Higher ISO speed films had larger equivalent pixels, fewer pixels per frame, but retained the 8 bits per pixel. Further work found that a high quality 3.5" x 5.25" print could be obtained from a three-layer system containing 1300 x 1950 pixels per layer, or about 7.6 million pixels in all. In short, it became clear that when a digital camera contained about 6 million pixels (in a single layer using a color filter array and appropriate image processing), digital systems would challenge and replace conventional film-based systems for the consumer market. By 2005 this became the reality. Since 2005 there has been a "pixel war" raging amongst digital camera makers. The question arises about just how many pixels are required and whether all pixels are equal. This paper will provide a practical look at how many pixels are needed for a good print based on the form factor of the sensor (sensor size) and the effective optical modulation transfer function (optical spread function) of the camera lens. Is it better to have 16 million 5.7-micron pixels or 6 million 7.8-micron pixels? How do intrinsic (no electronic boost) ISO speed and exposure latitude vary with pixel size? A systematic review of these issues will be provided within the context of image quality and ISO speed models developed over the last 15 years.

  12. The MODIS cloud optical and microphysical products: Collection 6 updates and examples from Terra and Aqua

    PubMed Central

    Platnick, Steven; Meyer, Kerry G.; King, Michael D.; Wind, Galina; Amarasinghe, Nandana; Marchant, Benjamin; Arnold, G. Thomas; Zhang, Zhibo; Hubanks, Paul A.; Holz, Robert E.; Yang, Ping; Ridgway, William L.; Riedi, Jérôme

    2018-01-01

    The MODIS Level-2 cloud product (Earth Science Data Set names MOD06 and MYD06 for Terra and Aqua MODIS, respectively) provides pixel-level retrievals of cloud-top properties (day and night pressure, temperature, and height) and cloud optical properties (optical thickness, effective particle radius, and water path for both liquid water and ice cloud thermodynamic phases–daytime only). Collection 6 (C6) reprocessing of the product was completed in May 2014 and March 2015 for MODIS Aqua and Terra, respectively. Here we provide an overview of major C6 optical property algorithm changes relative to the previous Collection 5 (C5) product. Notable C6 optical and microphysical algorithm changes include: (i) new ice cloud optical property models and a more extensive cloud radiative transfer code lookup table (LUT) approach, (ii) improvement in the skill of the shortwave-derived cloud thermodynamic phase, (iii) separate cloud effective radius retrieval datasets for each spectral combination used in previous collections, (iv) separate retrievals for partly cloudy pixels and those associated with cloud edges, (v) failure metrics that provide diagnostic information for pixels having observations that fall outside the LUT solution space, and (vi) enhanced pixel-level retrieval uncertainty calculations. The C6 algorithm changes collectively can result in significant changes relative to C5, though the magnitude depends on the dataset and the pixel’s retrieval location in the cloud parameter space. Example Level-2 granule and Level-3 gridded dataset differences between the two collections are shown. While the emphasis is on the suite of cloud optical property datasets, other MODIS cloud datasets are discussed when relevant. PMID:29657349

  13. Real-time calibration-free C-scan images of the eye fundus using Master Slave swept source optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Bradu, Adrian; Kapinchev, Konstantin; Barnes, Fred; Garway-Heath, David F.; Rajendram, Ranjan; Keane, Pearce; Podoleanu, Adrian G.

    2015-03-01

    Recently, we introduced a novel Optical Coherence Tomography (OCT) method, termed Master Slave OCT (MS-OCT), specialized for delivering en-face images. This method uses principles of spectral domain interferometry in two stages. MS-OCT operates like a time domain OCT, selecting only signals from a chosen depth while scanning the laser beam across the eye. Time domain OCT allows real-time production of an en-face image, although relatively slowly. As a major advance, the Master Slave method allows collection of signals from any number of depths, as required by the user. The tremendous advantage of providing data from numerous depths in parallel cannot be fully exploited with commodity multi-core processors alone, because the data processing required to generate images at multiple depths simultaneously exceeds their capability. We compare here the major improvement in processing and display brought about by using graphics cards. We demonstrate images obtained with a swept source at 100 kHz (which determines an acquisition time for a frame of 200×200 pixels of Ta = 1.6 s). By the end of the acquisition of the scanned frame, using our computing capacity, 4 simultaneous en-face images could be created in T = 0.8 s. We demonstrate that by using graphics cards, 32 en-face images can be displayed in Td = 0.3 s. Other, faster swept source engines can be used with no difference in terms of Td. With 32 images (or more), volumes can be created for 3D display using en-face images, as opposed to the current technology where volumes are created using cross-sectional OCT images.

  14. Automatic SAR/optical cross-matching for GCP monograph generation

    NASA Astrophysics Data System (ADS)

    Nutricato, Raffaele; Morea, Alberto; Nitti, Davide Oscar; La Mantia, Claudio; Agrimano, Luigi; Samarelli, Sergio; Chiaradia, Maria Teresa

    2016-10-01

    Ground Control Points (GCP), automatically extracted from Synthetic Aperture Radar (SAR) images through 3D stereo analysis, can be effectively exploited for automatic orthorectification of optical imagery if they can be robustly located in the basic optical images. The present study outlines a SAR/optical cross-matching procedure that allows robust alignment of radar and optical images and, consequently, automatic derivation of the corresponding sub-pixel position of the GCPs in the input optical image, expressed as fractional pixel/line image coordinates. The cross-matching is performed in two subsequent steps in order to progressively achieve better precision. The first step is based on Mutual Information (MI) maximization between optical and SAR chips, while the last one uses the Normalized Cross-Correlation as similarity metric. This work outlines the designed algorithmic solution and discusses the results derived over the urban area of Pisa (Italy), where more than ten COSMO-SkyMed Enhanced Spotlight stereo images with different beams and passes are available. The experimental analysis involves different satellite images in order to evaluate the performance of the algorithm w.r.t. the optical spatial resolution. An assessment of the performance of the algorithm has been carried out, and errors are computed by measuring the distance between the GCP pixel/line position in the optical image, automatically estimated by the tool, and the "true" position of the GCP, visually identified by an expert user in the optical images.
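
    As an illustration of the second matching step, the sketch below shows one way to compute a normalized cross-correlation surface between an optical chip and a template and to refine the peak to sub-pixel precision with a parabolic fit. It is a minimal example, not the authors' tool; the array names, window handling and the parabolic refinement are assumptions.

```python
import numpy as np

def normalized_cross_correlation(image, template):
    """Compute the NCC surface of `template` slid over `image` (valid positions only)."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    ncc = np.zeros((ih - th + 1, iw - tw + 1))
    for r in range(ncc.shape[0]):
        for c in range(ncc.shape[1]):
            win = image[r:r + th, c:c + tw]
            w = win - win.mean()
            denom = np.sqrt((w ** 2).sum()) * t_norm
            ncc[r, c] = (w * t).sum() / denom if denom > 0 else 0.0
    return ncc

def subpixel_peak(ncc):
    """Refine the integer NCC peak with a one-dimensional parabolic fit along each axis."""
    r0, c0 = np.unravel_index(np.argmax(ncc), ncc.shape)

    def refine(f_m, f_0, f_p):
        d = f_m - 2.0 * f_0 + f_p
        return 0.0 if d == 0 else 0.5 * (f_m - f_p) / d

    dr = refine(ncc[r0 - 1, c0], ncc[r0, c0], ncc[r0 + 1, c0]) if 0 < r0 < ncc.shape[0] - 1 else 0.0
    dc = refine(ncc[r0, c0 - 1], ncc[r0, c0], ncc[r0, c0 + 1]) if 0 < c0 < ncc.shape[1] - 1 else 0.0
    return r0 + dr, c0 + dc
```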

  15. Coupled retrieval of water cloud and above-cloud aerosol properties using the Airborne Multiangle SpectroPolarimetric Imager (AirMSPI)

    NASA Astrophysics Data System (ADS)

    Xu, F.; van Harten, G.; Diner, D. J.; Rheingans, B. E.; Tosca, M.; Seidel, F. C.; Bull, M. A.; Tkatcheva, I. N.; McDuffie, J. L.; Garay, M. J.; Davis, A. B.; Jovanovic, V. M.; Brian, C.; Alexandrov, M. D.; Hostetler, C. A.; Ferrare, R. A.; Burton, S. P.

    2017-12-01

    The Airborne Multiangle SpectroPolarimetric Imager (AirMSPI) has been flying aboard the NASA ER-2 high altitude aircraft since October 2010. AirMSPI acquires radiance and polarization data in bands centered at 355, 380, 445, 470*, 555, 660*, 865*, and 935 nm (*denotes polarimetric bands). In sweep mode, georectified images cover an area of 80-100 km (along track) by 10-25 km (across track) between ±66° off nadir, with a map-projected spatial resolution of 25 meters. An efficient and flexible retrieval algorithm has been developed using AirMSPI polarimetric bands for simultaneous retrieval of cloud and above-cloud aerosol microphysical properties. We design a three-step retrieval approach, namely 1) estimating effective droplet size distribution using polarimetric cloudbow observations and using it as initial guess for Step 2; 2) combining water cloud and aerosol above cloud retrieval by fitting polarimetric signals at all scattering angles (e.g. from 80° to 180°); and 3) constructing a lookup table of radiance for a set of cloud optical depth grids using aerosol and cloud information retrieved from Step 2 and then estimating pixel-scale cloud optical depth based on 1D radiative transfer (RT) theory by fitting the AirMSPI radiance. Retrieval uncertainty is formulated by accounting for instrumental errors and constraints imposed on spectral variations of aerosol and cloud droplet optical properties. As the forward RT model, a hybrid approach is developed to combine the computational strengths of Markov-chain and adding-doubling methods to model polarized RT in a coupled aerosol, Rayleigh and cloud system. Our retrieval approach is tested using 134 AirMSPI datasets acquired during NASA ORACLES field campaign in 09/2016, with low to high aerosol loadings. For validation, the retrieved aerosol optical depths and cloud-top heights are compared to coincident High Spectral Resolution Lidar-2 (HSRL-2) data, and the droplet size parameters including effective radius and effective variance and cloud optical thickness are compared to coincident Research Scanning Polarimeter (RSP) data.
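
    The third retrieval step amounts to inverting a one-dimensional lookup table of modeled radiance versus cloud optical depth. A minimal sketch of such an inversion is given below; the grid values and the measured radiance are invented for illustration and are not taken from the AirMSPI processing.

```python
import numpy as np

# Hypothetical LUT: 1D radiative-transfer radiances modeled on a grid of cloud
# optical depths, for fixed (previously retrieved) aerosol and droplet properties.
cod_grid = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])          # cloud optical depth grid
radiance_lut = np.array([0.12, 0.18, 0.28, 0.38, 0.46, 0.52])   # modeled radiance (assumed values)

def retrieve_cod(measured_radiance):
    """Invert the LUT by linear interpolation; radiance must increase monotonically with COD."""
    return float(np.interp(measured_radiance, radiance_lut, cod_grid))

print(retrieve_cod(0.33))  # a pixel radiance of 0.33 maps to a COD between 5 and 10
```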

  16. LSA SAF Meteosat FRP Products: Part 2 - Evaluation and demonstration of use in the Copernicus Atmosphere Monitoring Service (CAMS)

    NASA Astrophysics Data System (ADS)

    Roberts, G.; Wooster, M. J.; Xu, W.; Freeborn, P. H.; Morcrette, J.-J.; Jones, L.; Benedetti, A.; Kaiser, J.

    2015-06-01

    Characterising the dynamics of landscape scale wildfires at very high temporal resolutions is best achieved using observations from Earth Observation (EO) sensors mounted onboard geostationary satellites. As a result, a number of operational active fire products have been developed from the data of such sensors. Examples are the Fire Radiative Power (FRP) products, FRP-PIXEL and FRP-GRID, generated by the Land Surface Analysis Satellite Applications Facility (LSA SAF) from imagery collected by the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on-board the Meteosat Second Generation (MSG) series of geostationary EO satellites. The processing chain developed to deliver these FRP products detects SEVIRI pixels containing actively burning fires and characterises their FRP output across four geographic regions covering Europe, part of South America and northern and southern Africa. The FRP-PIXEL product contains the highest spatial and temporal resolution FRP dataset, whilst the FRP-GRID product contains a spatio-temporal summary that includes bias adjustments for cloud cover and the non-detection of low FRP fire pixels. Here we evaluate these two products against active fire data collected by the Moderate Resolution Imaging Spectroradiometer (MODIS), and compare the results to those for three alternative active fire products derived from SEVIRI imagery. The FRP-PIXEL product is shown to detect a substantially greater number of active fire pixels than do alternative SEVIRI-based products, and comparison to MODIS on a per-fire basis indicates a strong agreement and low bias in terms of FRP values. However, low FRP fire pixels remain undetected by SEVIRI, with commission and omission errors of active fire pixel detection relative to MODIS ranging between 9-13% and 65-77%, respectively, in Africa. Higher errors of omission result in greater underestimation of regional FRP totals relative to those derived from simultaneously collected MODIS data, ranging from 35% over the Northern Africa region to 89% over the European region. High errors of active fire omission and FRP underestimation are found over Europe and South America, and result from SEVIRI's larger pixel area over these regions. An advantage of using FRP for characterising wildfire emissions is the ability to do so very frequently and in near real time (NRT). To illustrate the potential of this approach, wildfire fuel consumption rates derived from the SEVIRI FRP-PIXEL product are used to characterise smoke emissions of the 2007 Peloponnese wildfires within the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecasting System (IFS), as a demonstration of what can be achieved when using geostationary active fire data within the Copernicus Atmosphere Monitoring Service (CAMS). Qualitative comparison of the modelled smoke plumes with MODIS optical imagery illustrates that the model captures the temporal and spatial dynamics of the plume very well, and that high temporal resolution emissions estimates such as those available from geostationary orbit are important for capturing the sub-daily variability in smoke plume parameters such as aerosol optical depth (AOD), which are increasingly less well resolved using daily or coarser temporal resolution emissions datasets. Quantitative comparison of modelled AOD with coincident MODIS and AERONET AOD indicates that the former is overestimated by ∼ 20-30%, but captures the observed AOD dynamics with a high degree of fidelity.
The case study highlights the potential of using geostationary FRP data to drive fire emissions estimates for use within atmospheric transport models such as those currently implemented as part of the Monitoring Atmospheric Composition and Climate (MACC) programme within the CAMS.
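
    The link between FRP and smoke emissions is commonly expressed as a linear scaling of FRP to a dry-fuel consumption rate, multiplied by a species emission factor. The sketch below illustrates that bookkeeping with assumed coefficient values; it is not the conversion actually used in the CAMS/IFS experiments described above.

```python
def smoke_emission_rate(frp_mw, fuel_per_mj=0.368, emission_factor_g_per_kg=10.0):
    """
    Convert fire radiative power to a smoke-constituent emission rate using the
    commonly cited linear relationship between FRP and dry-fuel consumption rate
    (about 0.368 kg of fuel per MJ of radiated energy) and an assumed emission
    factor. Both coefficients here are illustrative, not the CAMS values.
    """
    fuel_rate_kg_s = frp_mw * fuel_per_mj              # MW * kg/MJ -> kg of fuel per second
    return fuel_rate_kg_s * emission_factor_g_per_kg   # grams of the species per second

print(smoke_emission_rate(500.0))  # a 500 MW fire -> roughly 1840 g/s of the species
```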

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Ling-Jian

    A gamma ray detector apparatus comprises a solid state detector that includes a plurality of anode pixels and at least one cathode. The solid state detector is configured for receiving gamma rays during an interaction and inducing a signal in an anode pixel and in a cathode. An anode pixel readout circuit is coupled to the plurality of anode pixels and is configured to read out and process the induced signal in the anode pixel and provide triggering and addressing information. A waveform sampling circuit is coupled to the at least one cathode and configured to read out and process the induced signal in the cathode and determine energy of the interaction, timing of the interaction, and depth of interaction.
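
    For orientation only, the snippet below shows one common way pixelated semiconductor detectors estimate depth of interaction, namely from the cathode-to-anode signal ratio; it is an assumed illustration and is not taken from the patent text.

```python
def depth_of_interaction(cathode_signal, anode_signal, detector_thickness_mm):
    """
    Estimate interaction depth from the cathode-to-anode signal ratio, a common
    approach for pixelated semiconductor detectors (illustration only, not the
    patented method). The ratio is roughly proportional to the distance of the
    interaction from the anode plane.
    """
    ratio = cathode_signal / anode_signal
    ratio = max(0.0, min(1.0, ratio))        # clamp to the physical range
    return ratio * detector_thickness_mm      # depth measured from the anode side

print(depth_of_interaction(0.6, 1.0, 10.0))   # -> 6.0 mm from the anode
```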

  18. The neuron net method for processing the clear pixels and method of the analytical formulas for processing the cloudy pixels of POLDER instrument images

    NASA Astrophysics Data System (ADS)

    Melnikova, I.; Mukai, S.; Vasilyev, A.

    Data from remote measurements of reflected radiance with the POLDER instrument on board the ADEOS satellite are used to retrieve the optical thickness, single scattering albedo and phase function parameter of the cloudy and clear atmosphere. For clear-sky cases, a perceptron neural network is used that, from input values of multiangle radiance and solar incidence angle, yields the surface albedo, optical thickness, single scattering albedo and phase function parameter. The last two parameters are determined as optical averages for the atmospheric column. Solar radiance calculations with the MODTRAN-3 code, taking multiple scattering into account, are performed for neural network training. All of the mentioned parameters were varied randomly on the basis of statistical models of the possible variation of the measured parameters. Results of processing one frame of remote observations consisting of 150,000 pixels are presented. The methodology elaborated allows operational determination of the optical characteristics of both cloudy and clear atmospheres. Further interpretation of these results makes it possible to extract information about the total content of atmospheric aerosols and absorbing gases and to create models of the real cloudiness. An analytical method of interpretation based on asymptotic formulas of multiple scattering theory is applied to remote observations of reflected radiance for cloudy pixels. Details of the methodology and error analysis were published and discussed earlier. Here we present results of data processing for a pixel size of 6x6 km. In many earlier studies the optical thickness was evaluated under the assumption of conservative scattering, but in the case of true absorption in clouds large errors in the obtained parameter are possible. The simultaneous retrieval of two parameters at every wavelength independently is an advantage compared with earlier studies. The analytical methodology is based on the inversion of asymptotic formulas of transfer theory for optically thick stratus clouds. A model of a horizontally infinite layer is considered, and slight horizontal heterogeneity is approximately taken into account. Formulas containing only the measured values of bidirectional radiance and functions of the solar and viewing angles were derived earlier. Six azimuth harmonics of the reflection function are taken into account. A simple approximation of cloud-top boundary heterogeneity is used: clouds projecting above the cloud-top plane cause an increase of diffuse radiation in the incident flux, which is essential for the calculation of radiative characteristics that depend on the illumination conditions. The escape and reflection functions describe this dependence for the reflected radiance, and the local albedo of a semi-infinite medium describes it for the irradiance. Thus the functions depending on the solar incidence angle are replaced by their modified forms. First, the optical thickness of every pixel is obtained with a simple formula assuming conservative scattering for all available viewing directions. Deviations between the obtained values may be taken as a measure of the cloud-top deviation from a plane, and a special parameter is obtained that takes the shadowing effect into account. Then the single scattering albedo and optical thickness (allowing for true absorption) are obtained for pairs of viewing directions with equal optical thickness. Finally, the values obtained are averaged and the relative error is evaluated over all viewing directions of every pixel. 
The procedure is repeated for all wavelengths and pixels independently.

  19. Assessing Temporal Stability for Coarse Scale Satellite Moisture Validation in the Maqu Area, Tibet

    PubMed Central

    Bhatti, Haris Akram; Rientjes, Tom; Verhoef, Wouter; Yaseen, Muhammad

    2013-01-01

    This study evaluates whether the temporal stability concept is applicable to a time series of satellite soil moisture images, so as to extend the common procedure of satellite image validation. The area of study is the Maqu area, located in the northeastern part of the Tibetan plateau. The network serves validation purposes for coarse scale (25–50 km) satellite soil moisture products and comprises 20 stations with probes installed at depths of 5, 10, 20, 40 and 80 cm. The study period is 2009. The temporal stability concept is applied to all five depths of the soil moisture measuring network and to a time series of satellite-based moisture products from the Advanced Microwave Scanning Radiometer (AMSR-E). The in-situ network is also assessed by Pearson's correlation analysis. Assessments by the temporal stability concept proved to be useful, and results suggest that probe measurements at 10 cm depth best match the satellite observations. The Mean Relative Difference plot for satellite pixels shows that a Representative Mean Soil Moisture (RMSM) pixel can be identified, but in our case this pixel does not overlay any in-situ station. Also, the RMSM pixel does not overlay any of the RMSM stations of the five probe depths. Pearson's correlation analysis on in-situ measurements suggests that moisture patterns are more persistent over time than over space. Since this study presents first results on the application of the temporal stability concept to a series of satellite images, we recommend further tests to become more conclusive about its effectiveness for broadening the procedure of satellite validation. PMID:23959237
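
    A minimal sketch of the mean relative difference computation that underlies the temporal stability concept is given below, with synthetic data standing in for the Maqu measurements; the array layout (time by station) is an assumption.

```python
import numpy as np

def mean_relative_difference(theta):
    """
    Temporal-stability analysis: `theta` is a (time, station) array of soil moisture.
    Returns the mean relative difference (MRD) and its standard deviation per station.
    """
    spatial_mean = theta.mean(axis=1, keepdims=True)      # network average at each time step
    rel_diff = (theta - spatial_mean) / spatial_mean       # relative difference per time step
    return rel_diff.mean(axis=0), rel_diff.std(axis=0)

# Example with synthetic data: 100 time steps, 20 stations
rng = np.random.default_rng(0)
theta = 0.25 + 0.05 * rng.standard_normal((100, 20))
mrd, mrd_std = mean_relative_difference(theta)
rmsm_station = int(np.argmin(np.abs(mrd)))  # station closest to the network mean
```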

  20. Large holographic 3D display for real-time computer-generated holography

    NASA Astrophysics Data System (ADS)

    Häussler, R.; Leister, N.; Stolle, H.

    2017-06-01

    SeeReal's concept of real-time holography is based on Sub-Hologram encoding and tracked Viewing Windows. This solution leads to a significant reduction of pixel count and computation effort compared to conventional holography concepts. Since the first presentation of the concept, improved full-color holographic displays have been built with dedicated components. The hologram is encoded on a spatial light modulator that is a sandwich of a phase-modulating and an amplitude-modulating liquid-crystal display and that modulates both amplitude and phase of light. Further components are based on holographic optical elements for light collimation and focusing, which are exposed in photopolymer films. Camera photographs show that only the depth region on which the focus of the camera lens is set is in focus, while the other depth regions are out of focus. These photographs demonstrate that the 3D scene is reconstructed in depth and that accommodation of the eye lenses is supported. Hence, the display is a solution to overcome the accommodation-convergence conflict that is inherent to stereoscopic 3D displays. The main components, progress and results of the holographic display with 300 mm × 200 mm active area are described. Furthermore, photographs of holographically reconstructed 3D scenes are shown.

  1. Smartphone Based Platform for Colorimetric Sensing of Dyes

    NASA Astrophysics Data System (ADS)

    Dutta, Sibasish; Nath, Pabitra

    We demonstrate the working of a smartphone-based optical sensor for measuring the absorption band of coloured dyes. By integrating simple laboratory optical components with the camera unit of the smartphone, we have converted it into a visible spectrometer with a pixel resolution of 0.345 nm/pixel. Light from a broadband optical source is allowed to transmit through a specific dye solution. The transmitted light signal is captured by the camera of the smartphone. The present sensor is inexpensive, portable and lightweight, making it an ideal handheld sensor for on-field sensing applications.
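
    A pixel column of the camera image can be mapped to wavelength with the stated 0.345 nm/pixel dispersion and converted to an absorbance spectrum. The sketch below assumes a hypothetical reference wavelength at pixel 0 and is only an illustration of the conversion, not the authors' processing.

```python
import numpy as np

# Assumed calibration: pixel index to wavelength, using the stated 0.345 nm/pixel
# dispersion and a hypothetical reference wavelength at pixel 0.
LAMBDA0_NM = 400.0              # wavelength at pixel 0 (assumption)
DISPERSION_NM_PER_PIXEL = 0.345

def absorbance_spectrum(reference_counts, sample_counts):
    """
    Convert two camera line profiles (counts versus pixel index) into an
    absorbance spectrum A(lambda) = -log10(I_sample / I_reference).
    """
    reference_counts = np.asarray(reference_counts, dtype=float)
    sample_counts = np.asarray(sample_counts, dtype=float)
    pixels = np.arange(len(reference_counts))
    wavelengths = LAMBDA0_NM + DISPERSION_NM_PER_PIXEL * pixels
    transmittance = np.clip(sample_counts / reference_counts, 1e-6, None)
    return wavelengths, -np.log10(transmittance)
```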

  2. Touch And Go Camera System (TAGCAMS) for the OSIRIS-REx Asteroid Sample Return Mission

    NASA Astrophysics Data System (ADS)

    Bos, B. J.; Ravine, M. A.; Caplinger, M.; Schaffner, J. A.; Ladewig, J. V.; Olds, R. D.; Norman, C. D.; Huish, D.; Hughes, M.; Anderson, S. K.; Lorenz, D. A.; May, A.; Jackman, C. D.; Nelson, D.; Moreau, M.; Kubitschek, D.; Getzandanner, K.; Gordon, K. E.; Eberhardt, A.; Lauretta, D. S.

    2018-02-01

    NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch And Go Camera System (TAGCAMS) three camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample, and document asteroid sample stowage. The cameras were designed and constructed by Malin Space Science Systems (MSSS) based on requirements developed by Lockheed Martin and NASA. All three of the cameras are mounted to the spacecraft nadir deck and provide images in the visible part of the spectrum, 400-700 nm. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. Their boresights are aligned in the nadir direction with small angular offsets for operational convenience. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Its boresight is pointed at the OSIRIS-REx sample return capsule located on the spacecraft deck. All three cameras have at their heart a 2592 × 1944 pixel complementary metal oxide semiconductor (CMOS) detector array that provides up to 12-bit pixel depth. All cameras also share the same lens design and a camera field of view of roughly 44° × 32° with a pixel scale of 0.28 mrad/pixel. The StowCam lens is focused to image features on the spacecraft deck, while both NavCam lens focus positions are optimized for imaging at infinity. A brief description of the TAGCAMS instrument and how it is used to support critical OSIRIS-REx operations is provided.

  3. Designs of Optoelectronic Trinary Signed-Digit Multiplication by use of Joint Spatial Encodings and Optical Correlation

    NASA Astrophysics Data System (ADS)

    Cherri, Abdallah K.

    1999-02-01

    Trinary signed-digit (TSD) symbolic-substitution-based (SS-based) optical adders, which were recently proposed, are used as the basic modules for designing highly parallel optical multiplications by use of cascaded optical correlators. The proposed multiplications perform carry-free generation of the multiplication partial products of two words in constant time. Also, three different multiplication designs are presented, and new joint spatial encodings for the TSD numbers are introduced. The proposed joint spatial encodings allow one to reduce the SS computation rules involved in optical multiplication. In addition, the proposed joint spatial encodings increase the space bandwidth product of the spatial light modulators of the optical system. This increase is achieved by reduction of the numbers of pixels in the joint spatial encodings for the input TSD operands as well as reduction of the number of pixels used in the proposed matched spatial filters for the optical multipliers.

  4. Designs of optoelectronic trinary signed-digit multiplication by use of joint spatial encodings and optical correlation.

    PubMed

    Cherri, A K

    1999-02-10

    Trinary signed-digit (TSD) symbolic-substitution-based (SS-based) optical adders, which were recently proposed, are used as the basic modules for designing highly parallel optical multiplications by use of cascaded optical correlators. The proposed multiplications perform carry-free generation of the multiplication partial products of two words in constant time. Also, three different multiplication designs are presented, and new joint spatial encodings for the TSD numbers are introduced. The proposed joint spatial encodings allow one to reduce the SS computation rules involved in optical multiplication. In addition, the proposed joint spatial encodings increase the space-bandwidth product of the spatial light modulators of the optical system. This increase is achieved by reduction of the numbers of pixels in the joint spatial encodings for the input TSD operands as well as reduction of the number of pixels used in the proposed matched spatial filters for the optical multipliers.

  5. Assessing the utility of passive microwave data for Snow Water Equivalent (SWE) estimation in the Sutlej River Basin of the northwestern Himalaya

    NASA Astrophysics Data System (ADS)

    Brandt, T.; Bookhagen, B.; Dozier, J.

    2014-12-01

    Since 1978, space-based passive microwave (PM) radiometers have been used to comprehensively measure Snow Water Equivalent (SWE) on a global basis. The ability of PM radiometers to directly measure SWE at high temporal frequencies offers some distinct advantages over optical remote sensors. Nevertheless, in mountainous terrain PM radiometers often struggle to accurately measure SWE because of wet snow, saturation in deep snow, forests, depth hoar and stratigraphy, variable relief, and subpixel heterogeneity inherent in large pixel sizes. The Himalaya, because of their high elevation and high relief, much of it above the tree line, offer an opportunity to examine PM products in the mountains without the added complication of trees. The upper Sutlej River basin, the third largest Himalayan catchment, lies in the western Himalaya. The river is a tributary of the Indus River and seasonal snow constitutes a substantial part of the basin's hydrologic budget. The basin has a few surface stations and river gauges, which is unique for the region. As such, the Sutlej River basin is a good location to analyze the accuracy and effectiveness of the current National Snow and Ice Data Center's (NSIDC) standard AMSR-E/Aqua Daily SWE product in mountainous terrain. So far, we have observed that individual pixels can "flicker", i.e. fluctuate from day to day, over large parts of the basin. We consider whether this is an artifact of the algorithm or whether this is embedded in the raw brightness temperatures themselves. In addition, we examine how well the standard product registers winter storms, and how it varies over heavily glaciated pixels. Finally, we use a few common measures of algorithm performance (precision, recall and accuracy) to test how well the standard product detects the presence of snow, using optical imagery for validation. An improved understanding of the effectiveness of PM imagery in the mountains will help to clarify the technology's limits.
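
    The algorithm-performance measures mentioned above can be computed from a pair of binary snow maps (PM-derived versus an optical reference). The sketch below is a generic illustration with assumed inputs, not the study's validation code.

```python
import numpy as np

def detection_scores(pm_snow, optical_snow):
    """
    Compare a binary PM-derived snow map with a binary optical reference map.
    Both inputs are boolean arrays of the same shape (True = snow present).
    Returns precision, recall and overall accuracy.
    """
    tp = np.sum(pm_snow & optical_snow)       # snow detected and present
    fp = np.sum(pm_snow & ~optical_snow)      # false alarms
    fn = np.sum(~pm_snow & optical_snow)      # missed snow
    tn = np.sum(~pm_snow & ~optical_snow)     # correctly snow-free
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy
```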

  6. Nuclear resonant scattering measurements on ⁵⁷Fe by multichannel scaling with a 64-pixel silicon avalanche photodiode linear-array detector.

    PubMed

    Kishimoto, S; Mitsui, T; Haruki, R; Yoda, Y; Taniguchi, T; Shimazaki, S; Ikeno, M; Saito, M; Tanaka, M

    2014-11-01

    We developed a silicon avalanche photodiode (Si-APD) linear-array detector for use in nuclear resonant scattering experiments using synchrotron X-rays. The Si-APD linear array consists of 64 pixels (pixel size: 100 × 200 μm²) with a pixel pitch of 150 μm and depletion depth of 10 μm. An ultrafast front-end circuit allows the X-ray detector to obtain a high output rate of >10⁷ cps per pixel. High-performance integrated circuits achieve multichannel scaling over 1024 continuous time bins with a 1 ns resolution for each pixel without dead time. The multichannel scaling method enabled us to record a time spectrum of the 14.4 keV nuclear radiation at each pixel with a time resolution of 1.4 ns (FWHM). This method was successfully applied to nuclear forward scattering and nuclear small-angle scattering on ⁵⁷Fe.

  7. Time-of-flight camera via a single-pixel correlation image sensor

    NASA Astrophysics Data System (ADS)

    Mao, Tianyi; Chen, Qian; He, Weiji; Dai, Huidong; Ye, Ling; Gu, Guohua

    2018-04-01

    A time-of-flight imager based on single-pixel correlation image sensors is proposed for noise-free depth map acquisition in the presence of ambient light. A digital micromirror device and a time-modulated IR laser provide spatial and temporal illumination of the unknown object. Compressed sensing and the 'four bucket principle' method are combined to reconstruct the depth map from a sequence of measurements at a low sampling rate. A second-order correlation transform is also introduced to reduce the noise from the detector itself and from direct ambient light. Computer simulations are presented to validate the computational models and the improvement of the reconstructions.
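
    For reference, the standard four-bucket phase estimate used in modulated time-of-flight imaging is sketched below; the paper's exact formulation, which combines it with compressed sensing and a second-order correlation transform, is not reproduced here.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def depth_from_four_buckets(a0, a1, a2, a3, mod_freq_hz):
    """
    Standard four-bucket (four-phase) time-of-flight estimate: a0..a3 are
    correlation measurements taken at 0, 90, 180 and 270 degrees of the
    modulation period. Scalars or arrays are accepted.
    """
    phase = np.arctan2(a3 - a1, a0 - a2)     # wrapped phase, -pi..pi
    phase = np.mod(phase, 2.0 * np.pi)        # map to 0..2*pi
    return C * phase / (4.0 * np.pi * mod_freq_hz)
```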

  8. Information-efficient spectral imaging sensor

    DOEpatents

    Sweatt, William C.; Gentry, Stephen M.; Boye, Clinton A.; Grotbeck, Carter L.; Stallard, Brian R.; Descour, Michael R.

    2003-01-01

    A programmable optical filter for use in multispectral and hyperspectral imaging. The filter splits the light collected by an optical telescope into two channels for each of the pixels in a row in a scanned image, one channel to handle the positive elements of a spectral basis filter and one for the negative elements of the spectral basis filter. Each channel for each pixel disperses its light into n spectral bins, with the light in each bin being attenuated in accordance with the value of the associated positive or negative element of the spectral basis vector. The spectral basis vector is constructed so that its positive elements emphasize the presence of a target and its negative elements emphasize the presence of the constituents of the background of the imaged scene. The attenuated light in the channels is re-imaged onto separate detectors for each pixel and then the signals from the detectors are combined to give an indication of the presence or not of the target in each pixel of the scanned scene. This system provides for a very efficient optical determination of the presence of the target, as opposed to the very data intensive data manipulations that are required in conventional hyperspectral imaging systems.
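
    Numerically, the two-channel optical computation described above corresponds to projecting each pixel's spectrum onto the positive and negative parts of the spectral basis vector and differencing the two detector signals. The sketch below is an assumed numerical emulation, not the patented optical hardware.

```python
import numpy as np

def matched_filter_output(pixel_spectrum, basis_vector):
    """
    Emulate the two-channel optical computation: the positive and negative parts
    of the spectral basis vector attenuate the light in the two channels, the
    attenuated light is summed on two detectors, and the detector signals are
    differenced to score the presence of the target in the pixel.
    """
    pixel_spectrum = np.asarray(pixel_spectrum, dtype=float)
    basis_vector = np.asarray(basis_vector, dtype=float)
    positive = np.clip(basis_vector, 0.0, None)   # channel handling positive elements
    negative = np.clip(-basis_vector, 0.0, None)  # channel handling negative elements
    return np.dot(pixel_spectrum, positive) - np.dot(pixel_spectrum, negative)
```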

  9. In-situ device integration of large-area patterned organic nanowire arrays for high-performance optical sensors

    PubMed Central

    Wu, Yiming; Zhang, Xiujuan; Pan, Huanhuan; Deng, Wei; Zhang, Xiaohong; Zhang, Xiwei; Jie, Jiansheng

    2013-01-01

    Single-crystalline organic nanowires (NWs) are important building blocks for future low-cost and efficient nano-optoelectronic devices due to their extraordinary properties. However, it remains a critical challenge to achieve large-scale organic NW array assembly and device integration. Herein, we demonstrate a feasible one-step method for large-area patterned growth of cross-aligned single-crystalline organic NW arrays and their in-situ device integration for optical image sensors. The integrated image sensor circuitry contained a 10 × 10 pixel array in an area of 1.3 × 1.3 mm², showing high spatial resolution, excellent stability and reproducibility. More importantly, 100% of the pixels successfully operated at a high response speed and relatively small pixel-to-pixel variation. The high yield and high spatial resolution of the operational pixels, along with the high integration level of the device, clearly demonstrate the great potential of the one-step organic NW array growth and device construction approach for large-scale optoelectronic device integration. PMID:24287887

  10. k⁺-buffer: An Efficient, Memory-Friendly and Dynamic k-buffer Framework.

    PubMed

    Vasilakis, Andreas-Alexandros; Papaioannou, Georgios; Fudos, Ioannis

    2015-06-01

    Depth-sorted fragment determination is fundamental for a host of image-based techniques that simulate complex rendering effects. It is also a challenging task in terms of the time and space required when rasterizing scenes with high depth complexity. When low graphics memory requirements are of utmost importance, the k-buffer can objectively be considered the most preferred framework, which advantageously ensures the correct depth order on a subset of all generated fragments. Although various alternatives have been introduced to partially or completely alleviate the noticeable quality artifacts produced by the initial k-buffer algorithm, at the expense of memory increase or performance downgrade, appropriate tools to automatically and dynamically compute the most suitable value of k are still missing. To this end, we introduce k⁺-buffer, a fast framework that accurately simulates the behavior of k-buffer in a single rendering pass. Two memory-bounded data structures, (i) the max-array and (ii) the max-heap, are developed on the GPU to concurrently maintain the k-foremost fragments per pixel by exploiting pixel synchronization and fragment culling. Memory-friendly strategies are further introduced to dynamically (a) lessen the wasteful memory allocation of individual pixels with low depth complexity frequencies, (b) minimize the allocated size of the k-buffer according to different application goals and hardware limitations via a straightforward depth histogram analysis and (c) manage the local GPU cache with a fixed-memory depth-sorting mechanism. Finally, an extensive experimental evaluation is provided demonstrating the advantages of our work over all prior k-buffer variants in terms of memory usage, performance cost and image quality.
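
    The per-pixel bookkeeping of a k-buffer, keeping only the k foremost (nearest) fragments, can be illustrated with a bounded max-heap. The CPU-side sketch below is for intuition only and does not reflect the GPU data structures or synchronization described in the paper.

```python
import heapq

def k_foremost(depths, k):
    """
    Keep the k nearest (smallest-depth) fragments of a pixel, analogous to what a
    k-buffer maintains per pixel. A bounded max-heap (implemented with negated
    depths on Python's min-heap) holds the current k foremost fragments.
    """
    heap = []  # stores -depth so the farthest kept fragment sits at heap[0]
    for d in depths:
        if len(heap) < k:
            heapq.heappush(heap, -d)
        elif d < -heap[0]:                   # closer than the farthest kept fragment
            heapq.heapreplace(heap, -d)      # evict the farthest, insert the new one
    return sorted(-x for x in heap)          # front-to-back order

print(k_foremost([0.9, 0.2, 0.7, 0.4, 0.1, 0.6], k=3))  # -> [0.1, 0.2, 0.4]
```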

  11. Pixel-by-pixel absolute phase retrieval using three phase-shifted fringe patterns without markers

    NASA Astrophysics Data System (ADS)

    Jiang, Chufan; Li, Beiwen; Zhang, Song

    2017-04-01

    This paper presents a method that can recover absolute phase pixel by pixel without embedding markers in the three phase-shifted fringe patterns, acquiring additional images, or introducing additional hardware component(s). The proposed three-dimensional (3D) absolute shape measurement technique includes the following major steps: (1) segment the measured object into different regions using rough a priori knowledge of the surface geometry; (2) artificially create phase maps at different z planes using the geometric constraints of the structured light system; (3) unwrap the phase pixel by pixel for each region by properly referring to the artificially created phase map; and (4) merge the unwrapped phases from all regions into a complete absolute phase map for 3D reconstruction. We demonstrate that conventional three-step phase-shifted fringe patterns can be used to create an absolute phase map pixel by pixel, even for objects with a large depth range. We have successfully implemented our proposed computational framework to achieve absolute 3D shape measurement at 40 Hz.
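
    Step (3) is commonly expressed as choosing, for each pixel, the fringe order that brings the wrapped phase closest to an artificially created reference phase map. The sketch below shows that standard operation; the paper's exact per-region formulation may differ.

```python
import numpy as np

def unwrap_with_reference(wrapped_phase, reference_phase):
    """
    Pixel-by-pixel unwrapping against a reference phase map (e.g., the phase of a
    plane at a known z derived from the geometric constraints of the structured
    light system): the fringe order k is chosen so the unwrapped phase lands
    closest to the reference.
    """
    wrapped_phase = np.asarray(wrapped_phase, dtype=float)
    reference_phase = np.asarray(reference_phase, dtype=float)
    k = np.round((reference_phase - wrapped_phase) / (2.0 * np.pi))
    return wrapped_phase + 2.0 * np.pi * k
```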

  12. Measuring and Estimating Normalized Contrast in Infrared Flash Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2013-01-01

    Infrared flash thermography (IRFT) is used to detect void-like flaws in a test object. The IRFT technique involves heating the part surface with a flash from flash lamps. The post-flash evolution of the part surface temperature is sensed by an IR camera in terms of the intensity of image pixels. The IR technique involves recording the IR video image data and analyzing the data using the normalized pixel intensity and temperature contrast method to characterize void-like flaws in terms of depth and width. This work introduces a new definition of the normalized IR pixel intensity contrast and the normalized surface temperature contrast. A procedure is provided to compute the pixel intensity contrast from the camera pixel intensity evolution data. The pixel intensity contrast and the corresponding surface temperature contrast differ but are related. This work provides a method to estimate the temperature evolution and the normalized temperature contrast from the measured pixel intensity evolution data and some additional measurements made during data acquisition.
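
    For orientation, one commonly used form of normalized contrast in flash thermography is sketched below: pre-flash levels are subtracted and the defect-pixel rise is compared with a reference (sound-area) rise. This is an assumed illustration; the new definitions introduced by the paper are not reproduced here.

```python
import numpy as np

def normalized_contrast(defect_intensity, reference_intensity, pre_flash_frames=5):
    """
    A commonly used normalized contrast (illustration only, not the paper's new
    definition): subtract the pre-flash levels from both evolutions and compare
    the defect-pixel rise with the reference (sound-area) rise.
    """
    defect_intensity = np.asarray(defect_intensity, dtype=float)
    reference_intensity = np.asarray(reference_intensity, dtype=float)
    d0 = defect_intensity[:pre_flash_frames].mean()       # pre-flash level, defect pixel
    r0 = reference_intensity[:pre_flash_frames].mean()    # pre-flash level, reference area
    rise_ref = np.clip(reference_intensity - r0, 1e-9, None)
    return (defect_intensity - d0) / rise_ref - 1.0
```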

  13. Small Pixel Hybrid CMOS X-ray Detectors

    NASA Astrophysics Data System (ADS)

    Hull, Samuel; Bray, Evan; Burrows, David N.; Chattopadhyay, Tanmoy; Falcone, Abraham; Kern, Matthew; McQuaide, Maria; Wages, Mitchell

    2018-01-01

    Concepts for future space-based X-ray observatories call for a large effective area and high angular resolution instrument to enable precision X-ray astronomy at high redshift and low luminosity. Hybrid CMOS detectors are well suited for such high throughput instruments, and the Penn State X-ray detector lab, in collaboration with Teledyne Imaging Sensors, has recently developed new small pixel hybrid CMOS X-ray detectors. These prototype 128x128 pixel devices have 12.5 micron pixel pitch, 200 micron fully depleted depth, and include crosstalk eliminating CTIA amplifiers and in-pixel correlated double sampling (CDS) capability. We report on characteristics of these new detectors, including the best read noise ever measured for an X-ray hybrid CMOS detector, 5.67 e- (RMS).

  14. Lessons learned and way forward from 6 years of Aerosol_cci

    NASA Astrophysics Data System (ADS)

    Popp, Thomas; de Leeuw, Gerrit; Pinnock, Simon

    2017-04-01

    Within the ESA Climate Change Initiative (CCI), Aerosol_cci (2010-2017) has conducted intensive work to improve and qualify algorithms for the retrieval of aerosol information from European sensors. Meanwhile, several validated (multi-)decadal time series of different aerosol parameters from complementary sensors are available: Aerosol Optical Depth (AOD), stratospheric extinction profiles, a qualitative Absorbing Aerosol Index (AAI), fine mode AOD and mineral dust AOD; absorption information and aerosol layer height are in an evaluation phase, and the multi-pixel GRASP algorithm for the POLDER instrument is used for selected regions. Validation (vs. AERONET, MAN) and inter-comparison with other satellite datasets (MODIS, MISR, SeaWIFS) proved the high quality of the available datasets, comparable to other satellite retrievals, and revealed needs for algorithm improvement (for example for higher AOD values), which were taken into account in an iterative evolution cycle. The datasets contain pixel level uncertainty estimates, which were also validated and improved in the reprocessing. The use of an ensemble method was tested, where several algorithms are applied to the same sensor. The presentation will summarize and discuss the lessons learned from the 6 years of intensive collaboration and highlight major achievements (significantly improved AOD quality, fine mode AOD, dust AOD, pixel level uncertainties, ensemble approach); limitations and remaining deficits will also be discussed. An outlook will discuss the way forward for continuous algorithm improvement and re-processing, together with opportunities for time series extension with successor instruments of the Sentinel family and the complementarity of the different satellite aerosol products.

  15. Current Status of Aerosol Retrievals from TOMS

    NASA Technical Reports Server (NTRS)

    Torres, O.; Herman, J. R.; Bhartia, P. K.; Ginoux, P.

    1999-01-01

    Properties of atmospheric aerosols over all land and water surfaces are retrieved from TOMS measurements of backscattered radiances. The TOMS technique uses observations at two wavelengths in the near-ultraviolet (330-380 nm) range, where the effects of gaseous absorption are negligible. The retrieved properties are optical depth and a measure of aerosol absorptivity, generally expressed as single scattering albedo. The main sources of error in the TOMS aerosol products are sub-pixel cloud contamination and uncertainty in the height above the surface of UV-absorbing aerosol layers. The first error source is related to the large footprint (50 x 50 km at nadir) of the sensor and the lack of detection capability for sub-pixel size clouds. The uncertainty associated with the height of the absorbing aerosol layers, on the other hand, is related to the pressure dependence of the molecular scattering process, which is the basis of the near-UV method of absorbing aerosol detection. The detection of non-absorbing aerosols is not sensitive to aerosol layer height. We will report on the ongoing work to overcome both of these difficulties. Coincident measurements of high spatial resolution thermal infrared radiances are used to address the cloud contamination issue. Mostly clear scenes for aerosol retrieval are selected by examining the spatial homogeneity of the IR radiance measurements within a TOMS pixel. The approach to reducing the uncertainty associated with the height of the aerosol layer by making use of a chemical transport model will also be discussed.

  16. Near-Real Time Cloud Retrievals from Operational and Research Meteorological Satellites

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Nguyen, Louis; Palilonda, Rabindra; Heck, Patrick W.; Spangenberg, Douglas A.; Doelling, David R.; Ayers, J. Kirk; Smith, William L., Jr.; Khaiyer, Mandana M.; Trepte, Qing Z.

    2008-01-01

    A set of cloud retrieval algorithms developed for CERES and applied to MODIS data have been adapted to analyze other satellite imager data in near-real time. The cloud products, including single-layer cloud amount, top and base height, optical depth, phase, effective particle size, and liquid and ice water paths, are being retrieved from GOES- 10/11/12, MTSAT-1R, FY-2C, and Meteosat imager data as well as from MODIS. A comprehensive system to normalize the calibrations to MODIS has been implemented to maximize consistency in the products across platforms. Estimates of surface and top-of-atmosphere broadband radiative fluxes are also provided. Multilayered cloud properties are retrieved from GOES-12, Meteosat, and MODIS data. Native pixel resolution analyses are performed over selected domains, while reduced sampling is used for full-disk retrievals. Tools have been developed for matching the pixel-level results with instrumented surface sites and active sensor satellites. The calibrations, methods, examples of the products, and comparisons with the ICESat GLAS lidar are discussed. These products are currently being used for aircraft icing diagnoses, numerical weather modeling assimilation, and atmospheric radiation research and have potential for use in many other applications.

  17. Multifocal multiphoton microscopy with adaptive optical correction

    NASA Astrophysics Data System (ADS)

    Coelho, Simao; Poland, Simon; Krstajic, Nikola; Li, David; Monypenny, James; Walker, Richard; Tyndall, David; Ng, Tony; Henderson, Robert; Ameer-Beg, Simon

    2013-02-01

    Fluorescence lifetime imaging microscopy (FLIM) is a well-established approach for measuring dynamic signalling events inside living cells, including detection of protein-protein interactions. The improvement in optical penetration of infrared light compared with linear excitation, owing to reduced Rayleigh scattering and low absorption, has provided imaging depths of up to 1 mm in brain tissue, but significant image degradation occurs as samples distort (aberrate) the infrared excitation beam. Multiphoton time-correlated single photon counting (TCSPC) FLIM is a method for obtaining functional, high resolution images of biological structures. In order to achieve good statistical accuracy, TCSPC typically requires long acquisition times. We report the development of a multifocal multiphoton microscope (MMM), titled MegaFLI. Beam parallelization, performed via a 3D Gerchberg-Saxton (GS) algorithm using a Spatial Light Modulator (SLM), increases the TCSPC count rate in proportion to the number of beamlets produced. A weighted 3D GS algorithm is employed to improve homogeneity. An added benefit is the implementation of flexible and adaptive optical correction. Adaptive optics performed by means of Zernike polynomials is used to correct for system-induced aberrations. Here we present results with a significant improvement in throughput obtained using a novel complementary metal-oxide-semiconductor (CMOS) 1024 pixel single-photon avalanche diode (SPAD) array, opening the way to truly high-throughput FLIM.
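
    The beamlet generation rests on the Gerchberg-Saxton algorithm, which alternates between the SLM plane and the focal plane while enforcing the known amplitude in each. The sketch below is a minimal 2D version for intuition; the weighted 3D variant used for MegaFLI is not reproduced.

```python
import numpy as np

def gerchberg_saxton(target_amplitude, iterations=50, seed=0):
    """
    Minimal 2D Gerchberg-Saxton sketch: find an SLM phase whose far field
    (Fourier transform) approximates `target_amplitude`, e.g. a grid of spots
    for multifocal excitation. Illustrative only.
    """
    rng = np.random.default_rng(seed)
    slm_phase = rng.uniform(0.0, 2.0 * np.pi, target_amplitude.shape)
    for _ in range(iterations):
        far_field = np.fft.fft2(np.exp(1j * slm_phase))                   # propagate to focal plane
        far_field = target_amplitude * np.exp(1j * np.angle(far_field))   # impose target amplitude
        near_field = np.fft.ifft2(far_field)                              # propagate back to SLM
        slm_phase = np.angle(near_field)                                  # keep phase only (unit amplitude at SLM)
    return slm_phase

# Example target: a 4 x 4 grid of focal spots on a 128 x 128 grid
target = np.zeros((128, 128))
target[16::32, 16::32] = 1.0
phase = gerchberg_saxton(target)
```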

  18. Preliminary investigations of active pixel sensors in Nuclear Medicine imaging

    NASA Astrophysics Data System (ADS)

    Ott, Robert; Evans, Noel; Evans, Phil; Osmond, J.; Clark, A.; Turchetta, R.

    2009-06-01

    Three CMOS active pixel sensors have been investigated for their application to Nuclear Medicine imaging. Startracker with 525×525 25 μm square pixels has been coupled via a fibre optic stud to a 2 mm thick segmented CsI(Tl) crystal. Imaging tests were performed using 99mTc sources, which emit 140 keV gamma rays. The system was interfaced to a PC via FPGA-based DAQ and optical link enabling imaging rates of 10 f/s. System noise was measured to be >100e and it was shown that the majority of this noise was fixed pattern in nature. The intrinsic spatial resolution was measured to be ˜80 μm and the system spatial resolution measured with a slit was ˜450 μm. The second sensor, On Pixel Intelligent CMOS (OPIC), had 64×72 40 μm pixels and was used to evaluate noise characteristics and to develop a method of differentiation between fixed pattern and statistical noise. The third sensor, Vanilla, had 520×520 25 μm pixels and a measured system noise of ˜25e. This sensor was coupled directly to the segmented phosphor. Imaging results show that even at this lower level of noise the signal from 140 keV gamma rays is small as the light from the phosphor is spread over a large number of pixels. Suggestions for the 'ideal' sensor are made.

  19. Artificial Structural Color Pixels: A Review

    PubMed Central

    Zhao, Yuqian; Zhao, Yong; Hu, Sheng; Lv, Jiangtao; Ying, Yu; Gervinskas, Gediminas; Si, Guangyuan

    2017-01-01

    Inspired by natural photonic structures (Morpho butterfly, for instance), researchers have demonstrated varying artificial color display devices using different designs. Photonic-crystal/plasmonic color filters have drawn increasing attention most recently. In this review article, we show the developing trend of artificial structural color pixels from photonic crystals to plasmonic nanostructures. Such devices normally utilize the distinctive optical features of photonic/plasmon resonance, resulting in high compatibility with current display and imaging technologies. Moreover, dynamical color filtering devices are highly desirable because tunable optical components are critical for developing new optical platforms which can be integrated or combined with other existing imaging and display techniques. Thus, extensive promising potential applications have been triggered and enabled including more abundant functionalities in integrated optics and nanophotonics. PMID:28805736

  20. Per-Pixel, Dual-Counter Scheme for Optical Communications

    NASA Technical Reports Server (NTRS)

    Farr, William H.; Bimbaum, Kevin M.; Quirk, Kevin J.; Sburlan, Suzana; Sahasrabudhe, Adit

    2013-01-01

    Free space optical communications links from deep space are projected to fulfill future NASA communication requirements for 2020 and beyond. Accurate laser-beam pointing is required to achieve high data rates at low power levels.This innovation is a per-pixel processing scheme using a pair of three-state digital counters to implement acquisition and tracking of a dim laser beacon transmitted from Earth for pointing control of an interplanetary optical communications system using a focal plane array of single sensitive detectors. It shows how to implement dim beacon acquisition and tracking for an interplanetary optical transceiver with a method that is suitable for both achieving theoretical performance, as well as supporting additional functions of high data rate forward links and precision spacecraft ranging.

  1. Contact CMOS imaging of gaseous oxygen sensor array

    PubMed Central

    Daivasagaya, Daisy S.; Yao, Lei; Yi Yung, Ka; Hajj-Hassan, Mohamad; Cheung, Maurice C.; Chodavarapu, Vamsy P.; Bright, Frank V.

    2014-01-01

    We describe a compact luminescent gaseous oxygen (O2) sensor microsystem based on the direct integration of sensor elements with a polymeric optical filter and placed on a low power complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC). The sensor operates on the measurement of excited-state emission intensity of O2-sensitive luminophore molecules tris(4,7-diphenyl-1,10-phenanthroline) ruthenium(II) ([Ru(dpp)3]2+) encapsulated within sol–gel derived xerogel thin films. The polymeric optical filter is made with polydimethylsiloxane (PDMS) that is mixed with a dye (Sudan-II). The PDMS membrane surface is molded to incorporate arrays of trapezoidal microstructures that serve to focus the optical sensor signals on to the imager pixels. The molded PDMS membrane is then attached with the PDMS color filter. The xerogel sensor arrays are contact printed on top of the PDMS trapezoidal lens-like microstructures. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. Correlated double sampling circuit, pixel address, digital control and signal integration circuits are also implemented on-chip. The CMOS imager data is read out as a serial coded signal. The CMOS imager consumes a static power of 320 µW and an average dynamic power of 625 µW when operating at 100 Hz sampling frequency and 1.8 V DC. This CMOS sensor system provides a useful platform for the development of miniaturized optical chemical gas sensors. PMID:24493909

  2. Contact CMOS imaging of gaseous oxygen sensor array.

    PubMed

    Daivasagaya, Daisy S; Yao, Lei; Yi Yung, Ka; Hajj-Hassan, Mohamad; Cheung, Maurice C; Chodavarapu, Vamsy P; Bright, Frank V

    2011-10-01

    We describe a compact luminescent gaseous oxygen (O2) sensor microsystem based on the direct integration of sensor elements with a polymeric optical filter and placed on a low power complementary metal-oxide semiconductor (CMOS) imager integrated circuit (IC). The sensor operates on the measurement of excited-state emission intensity of O2-sensitive luminophore molecules tris(4,7-diphenyl-1,10-phenanthroline) ruthenium(II) ([Ru(dpp)3]2+) encapsulated within sol-gel derived xerogel thin films. The polymeric optical filter is made with polydimethylsiloxane (PDMS) that is mixed with a dye (Sudan-II). The PDMS membrane surface is molded to incorporate arrays of trapezoidal microstructures that serve to focus the optical sensor signals on to the imager pixels. The molded PDMS membrane is then attached with the PDMS color filter. The xerogel sensor arrays are contact printed on top of the PDMS trapezoidal lens-like microstructures. The CMOS imager uses a 32 × 32 (1024 elements) array of active pixel sensors and each pixel includes a high-gain phototransistor to convert the detected optical signals into electrical currents. Correlated double sampling circuit, pixel address, digital control and signal integration circuits are also implemented on-chip. The CMOS imager data is read out as a serial coded signal. The CMOS imager consumes a static power of 320 µW and an average dynamic power of 625 µW when operating at 100 Hz sampling frequency and 1.8 V DC. This CMOS sensor system provides a useful platform for the development of miniaturized optical chemical gas sensors.

  3. An active-optics image-motion compensation technology application for high-speed searching and infrared detection system

    NASA Astrophysics Data System (ADS)

    Wu, Jianping; Lu, Fei; Zou, Kai; Yan, Hong; Wan, Min; Kuang, Yan; Zhou, Yanqing

    2018-03-01

    An ultra-high angular velocity, small-aperture, high-precision stabilized control technique for active-optics image-motion compensation is put forward in this paper. The image blur caused by relative motion of several hundred degrees per second between the imaging system and the target is analyzed theoretically. A velocity matching model of the detection system and the active optics compensation system is built, and the experimental parameters of the active optics image-motion compensation platform are designed. A high-velocity (several hundred degrees per second), high-precision optics compensation control technique is studied and implemented. With a relative motion velocity of up to 250°/s and an image motion amplitude of more than 20 pixels, the motion blur after active optics compensation is less than one pixel. The bottleneck of ultra-high angular velocity and long exposure time in searching and infrared detection systems is thereby successfully overcome.

  4. Optical frequency comb profilometry using a single-pixel camera composed of digital micromirror devices.

    PubMed

    Pham, Quang Duc; Hayasaki, Yoshio

    2015-01-01

    We demonstrate an optical frequency comb profilometer with a single-pixel camera to measure the position and profile of an object's surface over a range that extends far beyond the light wavelength, without 2π phase ambiguity. The present configuration of the single-pixel camera can perform the profilometry with an axial resolution of 3.4 μm at 1 GHz operation, corresponding to a wavelength of 30 cm. Therefore, the axial dynamic range was increased to 0.87 × 10^5. It was found from the experiments and computer simulations that the improvement was derived from the higher modulation contrast of the digital micromirror devices. The frame rate was also increased to 20 Hz.

  5. Toward acquiring comprehensive radiosurgery field commissioning data using the PRESAGE®/ optical-CT 3D dosimetry system

    NASA Astrophysics Data System (ADS)

    Clift, Corey; Thomas, Andrew; Adamovics, John; Chang, Zheng; Das, Indra; Oldham, Mark

    2010-03-01

    Achieving accurate small field dosimetry is challenging. This study investigates the utility of a radiochromic plastic PRESAGE® read with optical-CT for the acquisition of radiosurgery field commissioning data from a Novalis Tx system with a high-definition multileaf collimator (HDMLC). Total scatter factors (Sc, p), beam profiles, and penumbrae were measured for five different radiosurgery fields (5, 10, 20, 30 and 40 mm) using a commercially available optical-CT scanner (OCTOPUS, MGS Research). The percent depth dose (PDD), beam profile and penumbra of the 10 mm field were also measured using a higher resolution in-house prototype CCD-based scanner. Gafchromic EBT® film was used for independent verification. Measurements of Sc, p made with PRESAGE® and film agreed with mini-ion chamber commissioning data to within 4% for every field (range 0.2-3.6% for PRESAGE®, and 1.6-3.6% for EBT). PDD, beam profile and penumbra measurements made with the two PRESAGE®/optical-CT systems and film showed good agreement with the high-resolution diode commissioning measurements with a competitive resolution (0.5 mm pixels). The in-house prototype optical-CT scanner allowed much finer resolution compared with previous applications of PRESAGE®. The advantages of the PRESAGE® system for small field dosimetry include 3D measurements, negligible volume averaging, directional insensitivity, an absence of beam perturbations, energy and dose rate independence.

  6. Toward acquiring comprehensive radiosurgery field commissioning data using the PRESAGE®/optical-CT 3D dosimetry system

    PubMed Central

    Clift, Corey; Thomas, Andrew; Adamovics, John; Chang, Zheng; Das, Indra; Oldham, Mark

    2010-01-01

    Achieving accurate small field dosimetry is challenging. This study investigates the utility of a radiochromic plastic PRESAGE® read with optical-CT for the acquisition of radiosurgery field commissioning data from a Novalis Tx system with a high-definition multileaf collimator (HDMLC). Total scatter factors (Sc, p), beam profiles, and penumbrae were measured for five different radiosurgery fields (5, 10, 20, 30 and 40 mm) using a commercially available optical-CT scanner (OCTOPUS, MGS Research). The percent depth dose (PDD), beam profile and penumbra of the 10 mm field were also measured using a higher resolution in-house prototype CCD-based scanner. Gafchromic EBT® film was used for independent verification. Measurements of Sc, p made with PRESAGE® and film agreed with mini-ion chamber commissioning data to within 4% for every field (range 0.2–3.6% for PRESAGE®, and 1.6–3.6% for EBT). PDD, beam profile and penumbra measurements made with the two PRESAGE®/optical-CT systems and film showed good agreement with the high-resolution diode commissioning measurements with a competitive resolution (0.5 mm pixels). The in-house prototype optical-CT scanner allowed much finer resolution compared with previous applications of PRESAGE®. The advantages of the PRESAGE® system for small field dosimetry include 3D measurements, negligible volume averaging, directional insensitivity, an absence of beam perturbations, energy and dose rate independence. PMID:20134082

  7. Accommodation-based liquid crystal adaptive optics system for large ocular aberration correction.

    PubMed

    Mu, Quanquan; Cao, Zhaoliang; Li, Chao; Jiang, Baoguang; Hu, Lifa; Xuan, Li

    2008-12-15

    According to the properties of ocular aberrations and the characteristics of liquid crystal (LC) correctors, we calculated the minimum pixel count of the LC corrector required to compensate large ocular aberrations. Then, an accommodation-based optical configuration was introduced to reduce this demand. Based on this, an adaptive optics (AO) retinal imaging system was built. Subjects with different amounts of defocus and astigmatism were tested to verify it. For myopia lower than 5 D the system performs well. When the myopia is as large as 8 D, the accommodation error increases to nearly 3 D, which requires the LC corrector to have 667 × 667 pixels to obtain a well-corrected image.

  8. CMOS foveal image sensor chip

    NASA Technical Reports Server (NTRS)

    Scott, Peter (Inventor); Sridhar, Ramalingam (Inventor); Bandera, Cesar (Inventor); Xia, Shu (Inventor)

    2002-01-01

    A foveal image sensor integrated circuit comprising a plurality of CMOS active pixel sensors arranged both within and about a central fovea region of the chip. The pixels in the central fovea region have a smaller size than the pixels arranged in peripheral rings about the central region. A new photocharge normalization scheme and associated circuitry normalizes the output signals from the different size pixels in the array. The pixels are assembled into a multi-resolution rectilinear foveal image sensor chip using a novel access scheme to reduce the number of analog RAM cells needed. Localized spatial resolution declines monotonically with offset from the imager's optical axis, analogous to biological foveal vision.

  9. A High Resolution TDI CCD Camera forMicrosatellite (HRCM)

    NASA Astrophysics Data System (ADS)

    Hao, Yuncai; Zheng, You; Dong, Ying; Li, Tao; Yu, Shijie

    In recent years, obtaining (1-5) m ground resolution from space using microsatellites has become an important development direction in the commercial remote sensing field. Thanks to progress in new technologies, new materials and new detectors, it is possible to develop a 1 m ground resolution space imaging system with a mass of less than 20 kg. Based on many years of work on optical system design, the authors of this paper propose a very high resolution TDI CCD camera for use in space. The performance parameters and optical layout of the HRCM are presented, together with a compact optical design and an analysis of results for the system; a small fold mirror provides a line field of view usable for the TDI CCD and a short outer size. The length along the largest dimension is about 1/4 of the focal length. Two 4096 × 96 (stages) line TDI CCDs will be used as the focal plane detector. Special optical parts are placed just before the final image to obtain a ground pixel resolution higher than the Nyquist resolution of the detector, using the sub-pixel technique explained in the paper. Optical SiC will be used as the mirror material, and C-C composite material will be used for the mechanical structure framework. The circular frames of the primary and secondary mirrors will be turned in a single machining operation in order to ensure the concentricity required for alignment of the system. Overall, the HRCM has a 2.5 m focal length, a 2° field of view, a 1/11 relative aperture, a (0.4-0.8) micrometer spectral range, a 10 micron TDI CCD pixel size, a mass of less than 20 kg, and 1 m ground pixel resolution from a 500 km orbit. The design and analysis of the HRCM presented in the paper indicate that it has many advantages for use in space. Keywords: high resolution TDI CCD, sub-pixel imaging, lightweight optical system, SiC mirror.

  10. Determination of cup-to-disc ratio of optical nerve head for diagnosis of glaucoma on stereo retinal fundus image pairs

    NASA Astrophysics Data System (ADS)

    Muramatsu, Chisako; Nakagawa, Toshiaki; Sawada, Akira; Hatanaka, Yuji; Hara, Takeshi; Yamamoto, Tetsuya; Fujita, Hiroshi

    2009-02-01

    A large cup-to-disc (C/D) ratio, which is the ratio of the diameter of the depression (cup) to that of the optical nerve head (ONH, disc), can be one of the important signs for diagnosis of glaucoma. Eighty eyes, including 25 eyes with the signs of glaucoma, were imaged by a stereo retinal fundus camera. An ophthalmologist provided the outlines of cup and disc on a regular monitor and on the stereo display. The depth image of the ONH was created by determining the corresponding pixels in a pair of images based on the correlation coefficient in localized regions. The areas of the disc and cup were determined by use of the red component in one of the color images and by use of the depth image, respectively. The C/D ratio was determined based on the largest vertical lengths in the cup and disc areas, which was then compared with that by the ophthalmologist. The disc areas determined by the computerized method agreed relatively well with those determined by the ophthalmologist, whereas the agreement for the cup areas was somewhat lower. When C/D ratios were employed for distinction between the glaucomatous and non-glaucomatous eyes, the area under the receiver operating characteristic curve (AUC) was 0.83. The computerized analysis of ONH can be useful for diagnosis of glaucoma.
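
    As a rough illustration of the correlation-based stereo matching described above, the sketch below scans a search window along one image row and keeps, for each pixel, the horizontal shift that maximizes the normalized correlation coefficient between local patches; the window size and disparity range are illustrative assumptions, not the paper's settings.

        import numpy as np

        def ncc(a, b):
            # normalized correlation coefficient of two equal-size patches
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return (a * b).sum() / denom if denom > 0 else 0.0

        def disparity_map(left, right, window=7, max_disp=32):
            # for each pixel, keep the horizontal shift with the highest
            # correlation between localized regions of the image pair
            h, w = left.shape
            half = window // 2
            disp = np.zeros((h, w), dtype=np.float32)
            for y in range(half, h - half):
                for x in range(half, w - half):
                    ref = left[y - half:y + half + 1, x - half:x + half + 1]
                    best, best_d = -1.0, 0
                    for d in range(0, min(max_disp, x - half) + 1):
                        cand = right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1]
                        score = ncc(ref, cand)
                        if score > best:
                            best, best_d = score, d
                    disp[y, x] = best_d
            return disp  # disparity is converted to depth using the stereo geometry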

  11. Application of Spectral Analysis Techniques in the Intercomparison of Aerosol Data: 1. an EOF Approach to the Spatial-Temporal Variability of Aerosol Optical Depth Using Multiple Remote Sensing Data Sets

    NASA Technical Reports Server (NTRS)

    Li, Jing; Carlson, Barbara E.; Lacis, Andrew A.

    2013-01-01

    Many remote sensing techniques and passive sensors have been developed to measure global aerosol properties. While instantaneous comparisons between pixel-level data often reveal quantitative differences, here we use Empirical Orthogonal Function (EOF) analysis, also known as Principal Component Analysis, to demonstrate that satellite-derived aerosol optical depth (AOD) data sets exhibit essentially the same spatial and temporal variability and are thus suitable for large-scale studies. Analysis results show that the first four EOF modes of AOD account for the bulk of the variance and agree well across the four data sets used in this study (i.e., Aqua MODIS, Terra MODIS, MISR, and SeaWiFS). Only SeaWiFS data over land have slightly different EOF patterns. Globally, the first two EOF modes show annual cycles and are mainly related to Sahara dust in the northern hemisphere and biomass burning in the southern hemisphere, respectively. After removing the mean seasonal cycle from the data, major aerosol sources, including biomass burning in South America and dust in West Africa, are revealed in the dominant modes due to the different interannual variability of aerosol emissions. The enhancement of biomass burning associated with El Niño over Indonesia and central South America is also captured with the EOF technique.
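
    For readers unfamiliar with the technique, EOF analysis of a gridded AOD record reduces to a singular value decomposition of the space-time anomaly matrix; a minimal sketch (array shapes and the choice of four modes mirror the abstract and are not the authors' code) is given below.

        import numpy as np

        def eof_analysis(aod, n_modes=4):
            # aod: (n_time, n_space) matrix of AOD maps flattened in space
            anomaly = aod - aod.mean(axis=0)          # remove the temporal mean per pixel
            u, s, vt = np.linalg.svd(anomaly, full_matrices=False)
            variance_fraction = (s ** 2) / (s ** 2).sum()
            eofs = vt[:n_modes]                       # spatial patterns (EOF modes)
            pcs = u[:, :n_modes] * s[:n_modes]        # principal-component time series
            return eofs, pcs, variance_fraction[:n_modes]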

  12. Derivation of Aerosol Columnar Mass from MODIS Optical Depth

    NASA Technical Reports Server (NTRS)

    Gasso, Santiago; Hegg, Dean A.

    2003-01-01

    In order to verify performance, aerosol transport models (ATMs) compare aerosol columnar mass (ACM) with values derived from satellite measurements. The comparison is inherently indirect, since satellites derive optical depths and use a proportionality constant to derive the ACM. Analogously, ATMs output a four-dimensional ACM distribution from which the optical depth is linearly derived. In both cases, the proportionality constant requires direct intervention by the user, who must prescribe the aerosol composition and size distribution. This study introduces a method that minimizes direct user intervention by making use of the new aerosol products of MODIS. A parameterization is introduced for the derivation of aerosol columnar mass (ACM) and CCN concentration (CCNC), and comparisons between sunphotometer, MODIS Airborne Simulator (MAS), and in-situ measurements are shown. The method still relies on the scaling between ACM and optical depth, but the proportionality constant depends on the MODIS-derived effective radius r_eff, on η (the contribution of the accumulation-mode radiance to the total radiance), on the ambient RH, and on an assumed constant aerosol composition. The CCNC is derived from a recent parameterization of CCNC as a function of the retrieved aerosol volume. By comparing with in-situ data (ACE-2 and TARFOX campaigns), it is shown that retrievals in dry ambient conditions (dust) are improved when using a proportionality constant dependent on r_eff and η derived in the same pixel. In high-humidity environments, the improvement in the new method is inconclusive because of the difficulty in accounting for the uneven vertical distribution of relative humidity. Additionally, two detailed comparisons of ACM and CCNC retrieved by the MAS algorithm and by the new method are shown. The new method and the MAS retrievals of ACM are within the same order of magnitude as the in-situ measurements of aerosol mass; however, the proposed method is closer to the in-situ measurements than the MODIS retrievals. The retrievals of CCNC are also within the same order of magnitude for both methods. The new method is applied to an actual MODIS retrieval and, although no in-situ data are available for comparison, the proposed method yields more credible values than the MODIS retrievals. In addition, recent data from the PRIDE campaign (Puerto Rico Dust Experiment, July 2000) are shown by comparing sunphotometer, MODIS, and in-situ data.
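
    The scaling between optical depth and columnar mass referred to above can be sketched, under the usual assumption of spherical particles, as follows; the density and extinction efficiency values are illustrative placeholders rather than the parameterization actually used in the study.

        def columnar_mass_from_aod(aod, r_eff, rho=1.77e3, q_ext=2.0):
            # mass extinction efficiency for spheres: k = 3 * q_ext / (4 * rho * r_eff)
            # so columnar mass (kg/m^2) = AOD / k; r_eff in m, rho in kg/m^3
            k_mass = 3.0 * q_ext / (4.0 * rho * r_eff)
            return aod / k_mass

        # e.g. AOD = 0.3 with r_eff = 0.5 um and the assumed constants
        # gives roughly 1.8e-4 kg/m^2 of columnar aerosol mass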

  13. Floating aerial 3D display based on the freeform-mirror and the improved integral imaging system

    NASA Astrophysics Data System (ADS)

    Yu, Xunbo; Sang, Xinzhu; Gao, Xin; Yang, Shenwu; Liu, Boyang; Chen, Duo; Yan, Binbin; Yu, Chongxiu

    2018-09-01

    A floating aerial three-dimensional (3D) display based on a freeform mirror and an improved integral imaging system is demonstrated. In traditional integral imaging (II), the distortion originating from lens aberration warps the elemental images and severely degrades the visual effect. To correct the distortion of the observed pixels and to improve the image quality, a directional diffuser screen (DDS) is introduced. However, the improved integral imaging system can hardly present realistic images with a large off-screen depth, which limits the floating aerial visual experience. To display the 3D image in free space, an off-axis reflection system with a freeform mirror is designed. By combining the improved II and the designed freeform optical element, a floating aerial 3D image is presented.

  14. Cloud and aerosol optical depths

    NASA Technical Reports Server (NTRS)

    Pueschel, R. F.; Russell, P. B.; Ackerman, Thomas P.; Colburn, D. C.; Wrigley, R. C.; Spanner, M. A.; Livingston, J. M.

    1988-01-01

    An airborne Sun photometer was used to measure optical depths in clear atmospheres between the appearances of broken stratus clouds, and the optical depths in the vicinity of smokes. Results show that (human) activities can alter the chemical and optical properties of background atmospheres to affect their spectral optical depths. Effects of water vapor adsorption on aerosol optical depths are apparent, based on data of the water vapor absorption band centered around 940 nm. Smoke optical depths show increases above the background atmosphere by up to two orders of magnitude. When the total optical depths measured through clouds were corrected for molecular scattering and gaseous absorption by subtracting the total optical depths measured through the background atmosphere, the resultant values are lower than those of the background aerosol at short wavelengths. The spectral dependence of these cloud optical depths is neutral, however, in contrast to that of the background aerosol or the molecular atmosphere.

  15. Optimization method of superpixel analysis for multi-contrast Jones matrix tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa K.; Miura, Masahiro; Yasuno, Yoshiaki

    2017-02-01

    Local statistics are widely utilized for quantification and image processing of OCT. For example, the local mean is used to reduce speckle, and the local variation of the polarization state (degree of polarization uniformity, DOPU) is used to visualize melanin. Conventionally, these statistics are calculated in a rectangular kernel whose size is uniform over the image. However, the fixed size and shape of the kernel result in a tradeoff between image sharpness and statistical accuracy. A superpixel is a cluster of pixels generated by grouping image pixels based on spatial proximity and similarity of signal values. Superpixels have varying sizes and flexible shapes that preserve the tissue structure. Here we demonstrate a new superpixel method tailored for multifunctional Jones matrix OCT (JM-OCT). The method forms superpixels by clustering image pixels in a 6-dimensional (6-D) feature space (two spatial dimensions and four dimensions of optical features). All image pixels are clustered based on their spatial proximity and optical feature similarity; the optical features are scattering, OCT-A, birefringence, and DOPU. The method is applied to retinal OCT. The generated superpixels preserve tissue structures such as retinal layers, sclera, vessels, and the retinal pigment epithelium. Hence, a superpixel can be used as a local statistics kernel that is more suitable than a uniform rectangular kernel. The superpixelized image can also be used for further image processing and analysis; since it reduces the number of pixels to be analyzed, it reduces the computational cost of such processing.
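
    A minimal sketch of the clustering step is shown below, using k-means in the 6-D feature space as a stand-in for the authors' superpixel algorithm; the feature names, spatial weighting, and number of segments are assumptions for illustration only.

        import numpy as np
        from sklearn.cluster import KMeans

        def superpixels_6d(scatter, octa, biref, dopu, n_segments=500, spatial_weight=1.0):
            # cluster pixels on spatial proximity (y, x) plus four optical features
            h, w = scatter.shape
            yy, xx = np.mgrid[0:h, 0:w]
            features = np.stack([spatial_weight * yy.ravel(),
                                 spatial_weight * xx.ravel(),
                                 scatter.ravel(), octa.ravel(),
                                 biref.ravel(), dopu.ravel()], axis=1)
            labels = KMeans(n_clusters=n_segments, n_init=4).fit_predict(features)
            return labels.reshape(h, w)   # superpixel label per pixel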

  16. Preliminary optical design of the stereo channel of the imaging system simbiosys for the BepiColombo ESA mission

    NASA Astrophysics Data System (ADS)

    Da Deppo, Vania; Naletto, Giampiero; Cremonese, Gabriele; Debei, Stefano; Flamini, Enrico

    2017-11-01

    The paper describes the optical design and performance budget of a novel catadioptric instrument chosen as baseline for the Stereo Channel (STC) of the imaging system SIMBIOSYS for the BepiColombo ESA mission to Mercury. The main scientific objective is the 3D global mapping of the entire surface of Mercury with a scale factor of 50 m per pixel at periherm in four different spectral bands. The system consists of two twin cameras looking at +/-20° from nadir and sharing some components, such as the relay element in front of the detector and the detector itself. The field of view of each channel is 4° x 4° with a scale factor of 23''/pixel. The system guarantees good optical performance with Ensquared Energy of the order of 80% in one pixel. For the straylight suppression, an intermediate field stop is foreseen, which gives the possibility to design an efficient baffling system.

  17. To zoom or not to zoom: do we have enough pixels?

    NASA Astrophysics Data System (ADS)

    Youngworth, Richard N.; Herman, Eric

    2015-09-01

    Common lexicon in imaging systems includes the frequently used term digital zoom. Of course this term is somewhat of a misnomer as there is no actual zooming in such systems. Instead, digital zoom describes the zoom effect that comes with an image rewriting or reprinting that perhaps can be more accurately described as cropping and enlarging an image (a pixel remapping) for viewing. If done properly, users of the overall hybrid digital-optical system do not know the methodology employed. Hence the essential question, pondered and manipulated since the advent of mature digital image science, really becomes "do we have enough pixels to avoid optical zoom." This paper discusses known imaging factors for hybrid digital-optical systems, most notably resolution considerations. The paper is fundamentally about communication, and thereby includes information useful to the greater consumer, technical, and business community who all have an interest in understanding the key technical details that have driven the amazing technology and development of zoom systems.

  18. Optical performances of the FM JEM-X masks

    NASA Astrophysics Data System (ADS)

    Reglero, V.; Rodrigo, J.; Velasco, T.; Gasent, J. L.; Chato, R.; Alamo, J.; Suso, J.; Blay, P.; Martínez, S.; Doñate, M.; Reina, M.; Sabau, D.; Ruiz-Urien, I.; Santos, I.; Zarauz, J.; Vázquez, J.

    2001-09-01

    The JEM-X Signal Multiplexing Systems are large HURA codes "written" in a pure tungsten plate 0.5 mm thick. 24,247 hexagonal pixels (25% open) are spread over a total area of 535 mm diameter. The tungsten plate is embedded in a mechanical structure formed by a Ti ring, a pretensioning system (Cu-Be), and an exoskeleton structure that provides the required stiffness. The JEM-X masks differ from the SPI and IBIS masks in the absence of a code support structure covering the mask assembly: open pixels are fully transparent to X-rays. The scope of this paper is to report the optical performance of the FM JEM-X masks, defined by uncertainties in the pixel location (centroid) and size arising from the manufacturing and assembly processes. Stability of the code elements under thermoelastic deformations is also discussed. As a general statement, the JEM-X mask optical properties are nearly one order of magnitude better than specified in 1994 during the ESA instrument selection.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kishimoto, S., E-mail: syunji.kishimoto@kek.jp; Haruki, R.; Mitsui, T.

    We developed a silicon avalanche photodiode (Si-APD) linear-array detector for use in nuclear resonant scattering experiments using synchrotron X-rays. The Si-APD linear array consists of 64 pixels (pixel size: 100 × 200 μm²) with a pixel pitch of 150 μm and a depletion depth of 10 μm. An ultrafast front-end circuit allows the X-ray detector to obtain a high output rate of >10⁷ cps per pixel. High-performance integrated circuits achieve multichannel scaling over 1024 continuous time bins with a 1 ns resolution for each pixel without dead time. The multichannel scaling method enabled us to record a time spectrum of the 14.4 keV nuclear radiation at each pixel with a time resolution of 1.4 ns (FWHM). This method was successfully applied to nuclear forward scattering and nuclear small-angle scattering on ⁵⁷Fe.

  20. Hemispherical Field-of-View Above-Water Surface Imager for Submarines

    NASA Technical Reports Server (NTRS)

    Hemmati, Hamid; Kovalik, Joseph M.; Farr, William H.; Dannecker, John D.

    2012-01-01

    A document discusses solutions to the problem of submarines having to rise above water to detect airplanes in the general vicinity. Two solutions are provided, in which a sensor is located either just under the water surface or at a depth of a few to tens of meters below it. The first option is a Fish Eye Lens (FEL) digital-camera combination, situated just under the water surface, that has a near-full-hemisphere (360° azimuth and 90° elevation) field of view for detecting objects on the water surface. This sensor can provide a three-dimensional picture of the airspace both in the marine and in the land environment. The FEL is coupled to a camera and can continuously look at the entire sky above it. The camera can have an Active Pixel Sensor (APS) focal plane array that allows logic circuitry to be built directly into the sensor. The logic circuitry allows data processing to occur on the sensor head without the need for any other external electronics. In the second option, a single-photon-sensitive (photon counting) detector array is used at depth, without any optics in front of it, since at this location optical signals are scattered and arrive over a wide (tens of degrees) range of angles. Beam scattering through clouds and seawater effectively negates optical imaging at depths below a few meters under cloudy or turbulent conditions. Under those conditions, maximum collection efficiency can be achieved by using a non-imaging photon-counting detector behind narrowband filters. In either case, signals from these sensors may be fused and correlated or decorrelated with other sensor data to get an accurate picture of the object(s) above the submarine. These devices can complement traditional submarine periscopes, which have a limited field of view in the elevation direction. Also, these techniques circumvent the need for exposing the entire submarine or its periscopes to the outside environment.

  1. Assessment of minimum permissible geometrical parameters of a near-to-eye display.

    PubMed

    Valyukh, Sergiy; Slobodyanyuk, Oleksandr

    2015-07-20

    Light weight and small dimensions are among the most important characteristics of near-to-eye displays (NEDs). These displays consist of two basic parts: a microdisplay for generating an image and supplementary optics for viewing that image. Nowadays the pixel size of microdisplays may be less than 4 μm, which makes the supplementary optics the major factor in defining restrictions on NED dimensions, or at least on the distance between the microdisplay and the eye. The goal of the present work is to answer two questions: how small this distance can be in principle, and what maximum microdisplay resolution remains effective when viewed through supplementary optics placed in the immediate vicinity of the eye. To explore the first question, we consider an aberration-free magnifier, which is the initial stage in elaborating a real optical system. In this case, the paraxial approximation and the transfer matrix method are ideal tools for simulating light propagation from the microdisplay through the magnifier and the human eye's optical system to the retina. The human eye is modeled according to the Gullstrand model. Parameters of the magnifier, its location with respect to the eye and the microdisplay, and the depth of field, which can be interpreted as the tolerance of the microdisplay position, are determined and discussed. The second question, related to the maximum microdisplay resolution, is investigated using the principles of wave optics.
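
    The paraxial transfer-matrix propagation mentioned above can be sketched as below; the magnifier focal length, eye power, and distances are illustrative numbers, and the single thin-lens eye is a crude stand-in for the Gullstrand model used in the paper.

        import numpy as np

        def free_space(d_m):
            # propagation over distance d (meters)
            return np.array([[1.0, d_m], [0.0, 1.0]])

        def thin_lens(f_m):
            # refraction by a thin lens of focal length f (meters)
            return np.array([[1.0, 0.0], [-1.0 / f_m, 1.0]])

        # assumed layout: microdisplay -> gap -> magnifier (f = 20 mm)
        # -> 15 mm eye relief -> 60 D "eye" lens -> 22.3 mm to the retina
        eye = free_space(0.0223) @ thin_lens(1.0 / 60.0)
        magnifier = free_space(0.015) @ thin_lens(0.020)

        def retinal_ray(height_m, angle_rad, display_distance_m):
            # trace a paraxial ray [height; angle] from a microdisplay pixel to the retina
            ray = np.array([height_m, angle_rad])
            return eye @ magnifier @ free_space(display_distance_m) @ ray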

  2. Programmable diffractive lens for ophthalmic application

    NASA Astrophysics Data System (ADS)

    Millán, María S.; Pérez-Cabré, Elisabet; Romero, Lenny A.; Ramírez, Natalia

    2014-06-01

    Pixelated liquid crystal displays have been widely used as spatial light modulators to implement programmable diffractive optical elements, particularly diffractive lenses. Many different applications of such components have been developed in information optics and optical processors that take advantage of their properties of great flexibility, easy and fast refreshment, and multiplexing capability in comparison with equivalent conventional refractive lenses. We explore the application of programmable diffractive lenses displayed on the pixelated screen of a liquid crystal on silicon spatial light modulator to ophthalmic optics. In particular, we consider the use of programmable diffractive lenses for the visual compensation of refractive errors (myopia, hypermetropia, astigmatism) and presbyopia. The principles of compensation are described and sketched using geometrical optics and paraxial ray tracing. For the proof of concept, a series of experiments with artificial eye in optical bench are conducted. We analyze the compensation precision in terms of optical power and compare the results with those obtained by means of conventional ophthalmic lenses. Practical considerations oriented to feasible applications are provided.
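
    As an illustration of how a diffractive lens can be programmed onto such a pixelated modulator, the sketch below computes the wrapped quadratic phase of a Fresnel lens and quantizes it to grey levels; the pixel pitch, wavelength, and 8-bit phase mapping are assumptions, not the parameters of the experiments described here.

        import numpy as np

        def fresnel_lens_pattern(n_pixels, pixel_pitch_m, focal_length_m, wavelength_m):
            # quadratic lens phase, wrapped to [0, 2*pi) and mapped to 8-bit grey levels
            half = n_pixels * pixel_pitch_m / 2.0
            coords = np.linspace(-half, half, n_pixels)
            x, y = np.meshgrid(coords, coords)
            phase = -np.pi * (x ** 2 + y ** 2) / (wavelength_m * focal_length_m)
            wrapped = np.mod(phase, 2.0 * np.pi)
            return np.round(wrapped / (2.0 * np.pi) * 255).astype(np.uint8)

        # e.g. a +2 D compensating lens (f = 0.5 m) on a 1080 x 1080 region
        # of an LCoS-SLM with 8 um pixels, addressed at 633 nm
        pattern = fresnel_lens_pattern(1080, 8e-6, 0.5, 633e-9)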

  3. Ophthalmic compensation of visual ametropia based on a programmable diffractive lens

    NASA Astrophysics Data System (ADS)

    Millán, Maria S.; Pérez-Cabré, Elisabet; Romero, Lenny A.; Ramírez, Natalia

    2013-11-01

    Pixelated liquid crystal displays have been widely used as spatial light modulators to implement programmable diffractive optical elements (DOEs), particularly diffractive lenses. Many different applications of such components have been developed in information optics and optical processors that take advantage of their properties of great flexibility, easy and fast refreshment, and multiplexing capability in comparison with equivalent conventional refractive lenses. In this paper, we explore the application of programmable diffractive lenses displayed on the pixelated screen of a liquid crystal on silicon spatial light modulator (LCoS-SLM) to ophthalmic optics. In particular, we consider the use of programmable diffractive lenses for the visual compensation of some refractive errors (myopia, hyperopia). The theoretical principles of compensation are described and sketched using geometrical optics and paraxial ray tracing. A series of experiments with artificial eye in optical bench are conducted to analyze the compensation accuracy in terms of optical power and to compare the results with those obtained by means of conventional ophthalmic lenses. Practical considerations oriented to feasible applications are provided.

  4. Characterisation of crystal matrices and single pixels for nuclear medicine applications

    NASA Astrophysics Data System (ADS)

    Herbert, D. J.; Belcari, N.; Camarda, M.; Guerra, A. Del; Vaiano, A.

    2005-01-01

    Commercially constructed crystal matrices are characterised for use with PSPMT detectors for PET system developments and other nuclear medicine applications. The matrices of different scintillation materials were specified with pixel dimensions of 1.5×1.5 mm2 in cross-section and a length corresponding to one gamma ray interaction length at 511 keV. The materials used in this study were BGO, LSO, LYSO, YSO and CsI(Na). Each matrix was constructed using a white TiO loaded epoxy that forms a 0.2 mm septa between each pixel. The white epoxy is not the optimum choice in terms of the reflective properties, but represents a good compromise between cost and the need for optical isolation between pixels. We also tested a YAP matrix that consisted of pixels of the same size specification but was manufactured by a different company, who instead of white epoxy, used a thin aluminium reflective layer for optical isolation that resulted in a septal thickness of just 0.01 mm, resulting in a much higher packing fraction. The characteristics of the scintillation materials, such as the light output and energy resolution, were first studied in the form of individual crystal elements by using a single pixel HPD. A comparison of individual pixels with and without the epoxy/dielectric coatings was also performed. Then the matrices themselves were coupled to a PSPMT in order to study the imaging performance. In particular, the system pixel resolution and the peak to valley ratio were measured at 511 and 122 keV.

  5. Optical and x-ray characterization of two novel CMOS image sensors

    NASA Astrophysics Data System (ADS)

    Bohndiek, Sarah E.; Arvanitis, Costas D.; Venanzi, Cristian; Royle, Gary J.; Clark, Andy T.; Crooks, Jamie P.; Prydderch, Mark L.; Turchetta, Renato; Blue, Andrew; Speller, Robert D.

    2007-02-01

    A UK consortium (MI3) has been founded to develop advanced CMOS pixel designs for scientific applications. Vanilla, a 520 x 520 array of 25 μm pixels, benefits from flushed reset circuitry for low noise and random pixel access for region-of-interest (ROI) readout. OPIC, a 64 x 72 test-structure array of 30 μm digital pixels, has thresholding capabilities for sparse readout at 3,700 fps. Characterization is performed with both optical illumination and x-ray exposure via a scintillator. Vanilla exhibits 34 ± 3 e- read noise and an interactive quantum efficiency of 54% at 500 nm, and can read a 6 x 6 ROI at 24,395 fps. OPIC has 46 ± 3 e- read noise and a wide dynamic range of 65 dB due to its high full-well capacity. Based on these characterization studies, Vanilla could be utilized in applications that demand high spectral response and high-speed region-of-interest readout, while OPIC could be used for high-speed, high-dynamic-range imaging.

  6. Ultralow-dose, feedback imaging with laser-Compton X-ray and laser-Compton gamma ray sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barty, Christopher P. J.

    Ultralow-dose x-ray or gamma-ray imaging is based on fast, electronic control of the output of a laser-Compton x-ray or gamma-ray source (LCXS or LCGS). X-ray or gamma-ray shadowgraphs are constructed one (or a few) pixel(s) at a time by monitoring the LCXS or LCGS beam energy required at each pixel of the object to achieve a threshold level of detectability at the detector. In one example, once the threshold for detection is reached, an electronic or optical signal is sent to the LCXS/LCGS that enables a fast optical switch, which diverts, either in space or in time, the laser pulses used to create Compton photons. In this way, the object is prevented from being exposed to any further Compton x-rays or gamma-rays until either the laser-Compton beam or the object is moved so that a new pixel location may be illuminated.

  7. Image Quality Analysis and Optical Performance Requirement for Micromirror-Based Lissajous Scanning Displays

    PubMed Central

    Du, Weiqi; Zhang, Gaofei; Ye, Liangchen

    2016-01-01

    Micromirror-based scanning displays have been the focus of a variety of applications. Lissajous scanning displays have advantages in terms of power consumption; however, the image quality is not good enough. The main reason for this is the varying size and the contrast ratio of pixels at different positions of the image. In this paper, the Lissajous scanning trajectory is analyzed and a new method based on the diamond pixel is introduced to Lissajous displays. The optical performance of micromirrors is discussed. A display system demonstrator is built, and tests of resolution and contrast ratio are conducted. The test results show that the new Lissajous scanning method can be used in displays by using diamond pixels and image quality remains stable at different positions. PMID:27187390

  8. Image Quality Analysis and Optical Performance Requirement for Micromirror-Based Lissajous Scanning Displays.

    PubMed

    Du, Weiqi; Zhang, Gaofei; Ye, Liangchen

    2016-05-11

    Micromirror-based scanning displays have been the focus of a variety of applications. Lissajous scanning displays have advantages in terms of power consumption; however, the image quality is not good enough. The main reason for this is the varying size and the contrast ratio of pixels at different positions of the image. In this paper, the Lissajous scanning trajectory is analyzed and a new method based on the diamond pixel is introduced to Lissajous displays. The optical performance of micromirrors is discussed. A display system demonstrator is built, and tests of resolution and contrast ratio are conducted. The test results show that the new Lissajous scanning method can be used in displays by using diamond pixels and image quality remains stable at different positions.

  9. A Detailed Look at the Performance Characteristics of the Lightning Imaging Sensor

    NASA Technical Reports Server (NTRS)

    Zhang, Daile; Cummins, Kenneth L.; Bitzer, Phillip; Koshak, William J.

    2018-01-01

    The Lightning Imaging Sensor (LIS) on board the Tropical Rainfall Measuring Mission (TRMM) effectively reached its end of life on April 15, 2015 after 17+ years of observation. Given the wealth of information in the archived LIS lightning data, and growing use of optical observations of lightning from space throughout the world, it is still of importance to better understand LIS calibration and performance characteristics. In this work, we continue our efforts to quantify the optical characteristics of the LIS pixel array, and to further characterize the detection efficiency and location accuracy of LIS. The LIS pixel array was partitioned into four quadrants, each having its own signal amplifier and digital conversion hardware. In addition, the sensor optics resulted in a decreasing sensitivity with increasing displacement from the center of the array. These engineering limitations resulted in differences in the optical emissions detected across the pixel array. Our work to date has shown a 20% increase in the count of the lightning events detected in one of the LIS quadrants, because of a lower detection threshold. In this study, we will discuss our work in progress on these limitations, and their potential impact on the group- and flash-level parameters.

  10. High-Speed Scanning Interferometer Using CMOS Image Sensor and FPGA Based on Multifrequency Phase-Tracking Detection

    NASA Technical Reports Server (NTRS)

    Ohara, Tetsuo

    2012-01-01

    A sub-aperture stitching optical interferometer can provide a cost-effective solution for an in situ metrology tool for large optics; however, the currently available technologies are not suitable for high-speed, real-time continuous scanning. NanoWave's SPPE (Scanning Probe Position Encoder) has been proven to exhibit excellent stability and sub-nanometer precision over a large dynamic range. This same technology can transform many optical interferometers into real-time sub-nanometer precision tools with only minor modification. The proposed field-programmable gate array (FPGA) signal processing concept, coupled with a new-generation, high-speed, mega-pixel CMOS (complementary metal-oxide semiconductor) image sensor, enables high-speed (>1 m/s), real-time continuous surface profiling that is insensitive to variations in pixel sensitivity and/or optical transmission/reflection. This is especially useful for surface profiling of large optics.

  11. Polymer-stabilized liquid crystalline topological defect network for micro-pixelated optical devices

    NASA Astrophysics Data System (ADS)

    Araoka, Fumito; Le, Khoa V.; Fujii, Shuji; Orihara, Hiroshi; Sasaki, Yuji

    2018-02-01

    Spatially and temporally controlled topological defects in nematic liquid crystals (NLCs) are promising for optical applications. Utilization of self-organization is key to fabricating complex micro- and nano-structures that are often difficult to obtain with conventional lithographic tools. Using a photo-polymerization technique, here we show a polymer-stabilized NLC having a micro-pixelated structure of regularly ordered umbilical defects induced by an electric field. Due to the formation of the polymer network, the self-organized pattern remains stable without deterioration. Moreover, the polymer network allows templating of other LCs whose optical properties can be tuned with external stimuli such as temperature and electric fields.

  12. Color moiré simulations in contact-type 3-D displays.

    PubMed

    Lee, B-R; Son, J-Y; Chernyshov, O O; Lee, H; Jeong, I-K

    2015-06-01

    A new method of color moiré fringe simulation for contact-type 3-D displays is introduced. The method allows simulating color moirés appearing in the displays that cannot be approximated by the conventional cosine approximation of a line grating. The color moirés are mainly introduced by the width of the boundary lines between the elemental optics in the viewing zone forming optics and by the plate thickness of that optics. This is because the lines hide parts of the pixels under the viewing zone forming optics, and the plate thickness induces a virtual contraction of the pixels. The simulated color moiré fringes closely match those appearing on the displays.

  13. Long range surface plasmon resonance with ultra-high penetration depth for self-referenced sensing and ultra-low detection limit using diverging beam approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Isaacs, Sivan, E-mail: sivan.isaacs@gmail.com; Abdulhalim, Ibrahim; NEW CREATE Programme, School of Materials Science and Engineering, 1 CREATE Way, Research Wing, #02-06/08, Singapore 138602

    2015-05-11

    Using an insulator-metal-insulator structure with a dielectric having a refractive index (RI) larger than that of the analyte, a long range surface plasmon (SP) resonance exhibiting ultra-high penetration depth is demonstrated for sensing applications of large bioentities at visible wavelengths. Based on the diverging beam approach in the Kretschmann-Raether configuration, one of the SP resonances is shown to shift in response to changes in the analyte RI while the other is fixed; thus, it can be used as a built-in reference. By combining the high sensitivity, high penetration depth, and self-referencing with the diverging beam approach, in which a dark line is detected over a large number of camera pixels with a smart algorithm for sub-pixel resolution, a sensor with an ultra-low detection limit suitable for large bioentities is demonstrated.

  14. Evaluation of color encodings for high dynamic range pixels

    NASA Astrophysics Data System (ADS)

    Boitard, Ronan; Mantiuk, Rafal K.; Pouli, Tania

    2015-03-01

    Traditional Low Dynamic Range (LDR) color spaces encode a small fraction of the visible color gamut, which does not encompass the range of colors produced on upcoming High Dynamic Range (HDR) displays. Future imaging systems will require encoding a much wider color gamut and luminance range. Such a wide color gamut can be represented using floating point HDR pixel values, but those are inefficient to encode. They also lack the perceptual uniformity of luminance and color distribution that is provided (approximately) by most LDR color spaces. Therefore, there is a need to devise an efficient, perceptually uniform, integer-valued representation for high dynamic range pixel values. In this paper we evaluate several methods for encoding colour HDR pixel values, in particular for use in image and video compression. Unlike other studies, we test both luminance and color difference encoding in rigorous 4AFC threshold experiments to determine the minimum bit-depth required. Results show that the Perceptual Quantizer (PQ) encoding provides the best perceptual uniformity in the considered luminance range; however, the gain in bit-depth is rather modest. A more significant difference can be observed between the color difference encoding schemes, of which the YDuDv encoding seems to be the most efficient.
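
    The Perceptual Quantizer named in the results is the SMPTE ST 2084 transfer function; a minimal encoder is sketched below (the full-range mapping to integer code values is a simplification of how the encoding would be used in practice).

        def pq_encode(luminance_cd_m2, bit_depth=10):
            # SMPTE ST 2084 inverse EOTF: absolute luminance (0..10000 cd/m^2)
            # mapped to a perceptually uniform integer code value
            m1, m2 = 2610.0 / 16384.0, 2523.0 / 4096.0 * 128.0
            c1, c2, c3 = 3424.0 / 4096.0, 2413.0 / 4096.0 * 32.0, 2392.0 / 4096.0 * 32.0
            y = max(0.0, min(1.0, luminance_cd_m2 / 10000.0))
            v = ((c1 + c2 * y ** m1) / (1.0 + c3 * y ** m1)) ** m2
            return round(v * (2 ** bit_depth - 1))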

  15. Frequency-multiplexed bias and readout of a 16-pixel superconducting nanowire single-photon detector array

    NASA Astrophysics Data System (ADS)

    Doerner, S.; Kuzmin, A.; Wuensch, S.; Charaev, I.; Boes, F.; Zwick, T.; Siegel, M.

    2017-07-01

    We demonstrate a 16-pixel array of microwave-current driven superconducting nanowire single-photon detectors with an integrated and scalable frequency-division multiplexing architecture, which reduces the required number of bias and readout lines to a single microwave feed line. The electrical behavior of the photon-sensitive nanowires, embedded in a resonant circuit, as well as the optical performance and timing jitter of the single detectors is discussed. Besides the single pixel measurements, we also demonstrate the operation of a 16-pixel array with a temporal, spatial, and photon-number resolution.

  16. An investigation of signal performance enhancements achieved through innovative pixel design across several generations of indirect detection, active matrix, flat-panel arrays

    PubMed Central

    Antonuk, Larry E.; Zhao, Qihua; El-Mohri, Youcef; Du, Hong; Wang, Yi; Street, Robert A.; Ho, Jackson; Weisfield, Richard; Yao, William

    2009-01-01

    Active matrix flat-panel imager (AMFPI) technology is being employed for an increasing variety of imaging applications. An important element in the adoption of this technology has been significant ongoing improvements in optical signal collection achieved through innovations in indirect detection array pixel design. Such improvements have a particularly beneficial effect on performance in applications involving low exposures and∕or high spatial frequencies, where detective quantum efficiency is strongly reduced due to the relatively high level of additive electronic noise compared to signal levels of AMFPI devices. In this article, an examination of various signal properties, as determined through measurements and calculations related to novel array designs, is reported in the context of the evolution of AMFPI pixel design. For these studies, dark, optical, and radiation signal measurements were performed on prototype imagers incorporating a variety of increasingly sophisticated array designs, with pixel pitches ranging from 75 to 127 μm. For each design, detailed measurements of fundamental pixel-level properties conducted under radiographic and fluoroscopic operating conditions are reported and the results are compared. A series of 127 μm pitch arrays employing discrete photodiodes culminated in a novel design providing an optical fill factor of ∼80% (thereby assuring improved x-ray sensitivity), and demonstrating low dark current, very low charge trapping and charge release, and a large range of linear signal response. In two of the designs having 75 and 90 μm pitches, a novel continuous photodiode structure was found to provide fill factors that approach the theoretical maximum of 100%. Both sets of novel designs achieved large fill factors by employing architectures in which some, or all of the photodiode structure was elevated above the plane of the pixel addressing transistor. Generally, enhancement of the fill factor in either discrete or continuous photodiode arrays was observed to result in no degradation in MTF due to charge sharing between pixels. While the continuous designs exhibited relatively high levels of charge trapping and release, as well as shorter ranges of linearity, it is possible that these behaviors can be addressed through further refinements to pixel design. Both the continuous and the most recent discrete photodiode designs accommodate more sophisticated pixel circuitry than is present on conventional AMFPIs – such as a pixel clamp circuit, which is demonstrated to limit signal saturation under conditions corresponding to high exposures. It is anticipated that photodiode structures such as the ones reported in this study will enable the development of even more complex pixel circuitry, such as pixel-level amplifiers, that will lead to further significant improvements in imager performance. PMID:19673228

  17. Highly Reflective Multi-stable Electrofluidic Display Pixels

    NASA Astrophysics Data System (ADS)

    Yang, Shu

    Electronic papers (E-papers) refer to displays that mimic the appearance of printed paper while still offering the features of conventional electronic displays, such as the ability to browse websites and play videos. The motivation for creating paper-like displays comes from the facts that reading on paper causes the least eye fatigue, due to paper's reflective and light-diffusive nature, and that, unlike existing commercial displays, no energy of any form is required to sustain the displayed image. To achieve the visual effect of a paper print, an ideal E-paper has to be highly reflective, with a good contrast ratio and full-color capability. To sustain the image with zero power consumption, the display pixels need to be bistable, meaning the "on" and "off" states are both lowest-energy states; a pixel can change its state only when sufficient external energy is supplied. Many emerging technologies are competing to demonstrate the first ideal E-paper device, but none has been able to achieve satisfactory visual effect, bistability, and video speed at the same time. The challenges come from either inherent physical/chemical properties or the fabrication process. Electrofluidic display is one of the most promising E-paper technologies. It has successfully demonstrated high reflectivity, brilliant color, and video-speed operation by moving a colored pigment dispersion between visible and invisible positions with the electrowetting force. However, the pixel design did not allow image bistability. Presented in this dissertation are multi-stable electrofluidic display pixels that are able to sustain grayscale levels without any power consumption, while keeping the favorable features of the previous generation of electrofluidic displays. The pixel design, the fabrication method using multilayer dry-film photoresist lamination, and the physical/optical characterization are discussed in detail. Based on this pixel structure, preliminary results of a simplified design and fabrication method are demonstrated. As advanced research topics regarding the device's optical performance, an optical model for evaluating the light out-coupling efficiency of reflective displays is first established to guide the pixel design; furthermore, aluminum surface diffusers are analytically modeled and then fabricated onto multi-stable electrofluidic display pixels to demonstrate truly "white" multi-stable electrofluidic display modules. The achieved results promote the multi-stable electrofluidic display as an excellent candidate for the ultimate E-paper device, especially for larger-scale signage applications.

  18. Optical and Electric Multifunctional CMOS Image Sensors for On-Chip Biosensing Applications.

    PubMed

    Tokuda, Takashi; Noda, Toshihiko; Sasagawa, Kiyotaka; Ohta, Jun

    2010-12-29

    In this review, the concept, design, performance, and a functional demonstration of multifunctional complementary metal-oxide-semiconductor (CMOS) image sensors dedicated to on-chip biosensing applications are described. We developed a sensor architecture that allows flexible configuration of a sensing pixel array consisting of optical and electric sensing pixels, and designed multifunctional CMOS image sensors that can sense light intensity and electric potential or apply a voltage to an on-chip measurement target. We describe the sensors' architecture on the basis of the type of electric measurement or imaging functionalities.

  19. Imaging visible light with Medipix2.

    PubMed

    Mac Raighne, Aaron; Brownlee, Colin; Gebert, Ulrike; Maneuski, Dzmitry; Milnes, James; O'Shea, Val; Rügheimer, Tilman K

    2010-11-01

    A need exists for high-speed single-photon counting optical imaging detectors. Single-photon counting high-speed detection of x rays is possible by using Medipix2 with pixelated silicon photodiodes. In this article, we report on a device that exploits the Medipix2 chip for optical imaging. The fabricated device is capable of imaging at >3000 frames/s over a 256×256 pixel matrix. The imaging performance of the detector device via the modulation transfer function is measured, and the presence of ion feedback and its degradation of the imaging properties are discussed.

  20. Adaptive optics system for the IRSOL solar observatory

    NASA Astrophysics Data System (ADS)

    Ramelli, Renzo; Bucher, Roberto; Rossini, Leopoldo; Bianda, Michele; Balemi, Silvano

    2010-07-01

    We present a low cost adaptive optics system developed for the solar observatory at Istituto Ricerche Solari Locarno (IRSOL), Switzerland. The Shack-Hartmann Wavefront Sensor is based on a Dalsa CCD camera with 256 pixels × 256 pixels working at 1kHz. The wavefront compensation is obtained by a deformable mirror with 37 actuators and a Tip-Tilt mirror. A real time control software has been developed on a RTAI-Linux PC. Scicos/Scilab based software has been realized for an online analysis of the system behavior. The software is completely open source.

  1. A time-resolved image sensor for tubeless streak cameras

    NASA Astrophysics Data System (ADS)

    Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji

    2014-03-01

    This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tubeless streak cameras. Although the conventional streak camera has high time resolution, it requires a high voltage and a bulky system because of its vacuum-tube structure. The proposed time-resolved imager, with simple optics, realizes a streak camera without any vacuum tubes. The proposed image sensor has DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator, in combination with in-pixel logic, allows us to create and deliver a short gating clock to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel was designed and implemented in a 0.11 μm CMOS image sensor technology. The image array has 30 (vertical) x 128 (memory length) pixels with a pixel pitch of 22.4 μm.

  2. Real-time rendering for multiview autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Berretty, R.-P. M.; Peters, F. J.; Volleberg, G. T. G.

    2006-02-01

    In video systems, the introduction of 3D video might be the next revolution after the introduction of color. Multiview autostereoscopic displays are now in development. Such displays offer various views at the same time, and the image content observed by the viewer depends upon his position with respect to the screen. His left eye receives a signal that is different from what his right eye gets; this gives, provided the signals have been properly processed, the impression of depth. The various views produced on the display differ with respect to their associated camera positions. A possible video format suited for rendering from different camera positions is the usual 2D format enriched with a depth-related channel: for each pixel in the video, not only its color is given but also, for example, its distance to the camera. In this paper we provide a theoretical framework for the parallactic transformations, which relates captured and observed depths to screen and image disparities. Moreover, we present an efficient real-time rendering algorithm that uses forward mapping to reduce aliasing artefacts and that deals properly with occlusions. For improved perceived resolution, we take the relative positions of the color subpixels and the optics of the lenticular screen into account. Sophisticated filtering techniques result in high quality images.
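
    A highly simplified forward-mapping renderer in the spirit of the paper is sketched below: each pixel of a 2D-plus-depth frame is shifted by a disparity that grows with its deviation from the convergence depth, and pixels are written far-to-near so nearer content occludes farther content. The disparity model, the painter's-order splat, and the lack of hole filling or subpixel filtering are simplifying assumptions, not the authors' algorithm.

        import numpy as np

        def render_view(color, depth, baseline_px, z_conv):
            # color: (h, w, 3) image; depth: (h, w) distances to the camera
            h, w, _ = color.shape
            out = np.zeros_like(color)
            order = np.argsort(-depth, axis=None)      # far-to-near painter's order
            ys, xs = np.unravel_index(order, depth.shape)
            for y, x in zip(ys, xs):
                disparity = baseline_px * (1.0 - z_conv / depth[y, x])
                xt = int(round(x + disparity))
                if 0 <= xt < w:
                    out[y, xt] = color[y, x]           # nearer pixels overwrite farther ones
            return out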

  3. The Speedster-EXD- A New Event-Driven Hybrid CMOS X-ray Detector

    NASA Astrophysics Data System (ADS)

    Griffith, Christopher V.; Falcone, Abraham D.; Prieskorn, Zachary R.; Burrows, David N.

    2016-01-01

    The Speedster-EXD is a new 64×64 pixel, 40-μm pixel pitch, 100-μm depletion depth hybrid CMOS x-ray detector with the capability of reading out only those pixels containing event charge, thus enabling fast effective frame rates. A global charge threshold can be specified, and pixels containing charge above this threshold are flagged and read out. The Speedster detector has also been designed with other advanced in-pixel features to improve performance, including a low-noise, high-gain capacitive transimpedance amplifier that eliminates interpixel capacitance crosstalk (IPC), and in-pixel correlated double sampling subtraction to reduce reset noise. We measure the best energy resolution on the Speedster-EXD detector to be 206 eV (3.5%) at 5.89 keV and 172 eV (10.0%) at 1.49 keV. The average IPC to the four adjacent pixels is measured to be 0.25%±0.2% (i.e., consistent with zero). The pixel-to-pixel gain variation is measured to be 0.80%±0.03%, and a Monte Carlo simulation is applied to better characterize the contributions to the energy resolution.

  4. High density pixel array

    NASA Technical Reports Server (NTRS)

    McFall, James Earl (Inventor); Wiener-Avnear, Eliezer (Inventor)

    2004-01-01

    A pixel array device is fabricated by a laser micro-milling method under strict process control conditions. The device has an array of pixels bonded together with an adhesive filling the grooves between adjacent pixels. The array is fabricated by moving a substrate relative to a laser beam of predetermined intensity at a controlled, constant velocity along a predetermined path defining a set of grooves between adjacent pixels so that a predetermined laser flux per unit area is applied to the material, and repeating the movement for a plurality of passes of the laser beam until the grooves are ablated to a desired depth. The substrate is of an ultrasonic transducer material in one example for fabrication of a 2D ultrasonic phase array transducer. A substrate of phosphor material is used to fabricate an X-ray focal plane array detector.

  5. Development of high resolution phoswich depth-of-interaction block detectors utilizing Mg co-doped new scintillators

    NASA Astrophysics Data System (ADS)

    Kobayashi, Takahiro; Yamamoto, Seiichi; Yeom, Jung-Yeol; Kamada, Kei; Yoshikawa, Akira

    2017-12-01

    To correct for parallax error in positron emission tomography (PET), a phoswich depth-of-interaction (DOI) detector using multiple scintillators with different decay times is a practical approach. However, not many scintillator combinations suitable for phoswich DOI detectors have been reported. Ce-doped Gd3Ga3Al2O12 (GFAG) is a newly developed, promising scintillator for PET detectors; it has high density, high light output, a light emission wavelength appropriate for silicon photomultipliers (Si-PMs), and a faster decay time than that of Ce-doped Gd3Al2Ga3O12 (GAGG). In this study, we developed a Si-PM based phoswich DOI block detector of GFAG with GAGG crystal arrays and evaluated its performance. A GFAG block and a GAGG block were assembled and optically coupled in the depth direction to form a phoswich detector block. The phoswich block was optically coupled to a Si-PM array with a 1 mm thick light guide. The GFAG and GAGG pixels measured 0.9 mm x 0.9 mm x 7.5 mm and were arranged into a 24 x 24 matrix with 0.1 mm thick BaSO4 as the reflector. We conducted the performance evaluation for two configurations: the GFAG block arranged in the upper layer (GFAG/GAGG) and the GAGG block arranged in the upper layer (GAGG/GFAG). The measured two-dimensional position histograms of these block detectors showed good separation, and the pulse shape spectra produced two distinct peaks for both configurations, although some differences in the energy spectra were observed. These results indicate that phoswich block detectors composed of GFAG and GAGG are promising for high resolution DOI PET systems.

  6. A Pipeline for 3D Digital Optical Phenotyping Plant Root System Architecture

    NASA Astrophysics Data System (ADS)

    Davis, T. W.; Shaw, N. M.; Schneider, D. J.; Shaff, J. E.; Larson, B. G.; Craft, E. J.; Liu, Z.; Kochian, L. V.; Piñeros, M. A.

    2017-12-01

    This work presents a new pipeline for digital optical phenotyping of the root system architecture of agricultural crops. The pipeline begins with a 3D root-system imaging apparatus for hydroponically grown crop lines of interest. The apparatus acts as a self-contained darkroom that includes an imaging tank, a motorized rotating bearing, and a digital camera. The pipeline continues with the Plant Root Imaging and Data Acquisition (PRIDA) software, which is responsible for image capture and storage. Once root images have been captured, image post-processing is performed using the Plant Root Imaging Analysis (PRIA) command-line tool, which extracts root pixels from the color images. Following this pre-processing binarization of the digital root images, 3D trait characterization is performed using the next-generation RootReader3D software. RootReader3D measures global root system architecture traits such as total root system volume and length, total number of roots, and maximum rooting depth and width. While designed to work together, the four stages of the phenotyping pipeline are modular and stand-alone, which provides flexibility and adaptability for various research endeavors.
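
    The pixel-extraction step performed by the PRIA tool could, in its simplest form, look like the sketch below; the brightness threshold and the assumption of bright roots against a dark imaging-tank background are illustrative and do not describe PRIA's actual segmentation.

        import numpy as np

        def extract_root_pixels(rgb, threshold=40):
            # binarize: 1 where the pixel is brighter than the dark background
            grey = rgb.astype(np.float32).mean(axis=2)
            return (grey > threshold).astype(np.uint8)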

  7. SPAD array based TOF SoC design for unmanned vehicle

    NASA Astrophysics Data System (ADS)

    Pan, An; Xu, Yuan; Xie, Gang; Huang, Zhiyu; Zheng, Yanghao; Shi, Weiwei

    2018-03-01

    To meet the requirements of unmanned-vehicle mobile lidar systems, this paper presents an SoC design based on a pulsed TOF depth image sensor. The SoC has a detection range of 300 m and a range resolution of 1.5 cm. The pixels are built from SPADs. The SoC adopts a multi-pixel shared-TDC structure, which significantly reduces chip area and improves the fill factor of the light-sensing surface. The SoC integrates a TCSPC module so that receiving each photon, measuring the photon flight time, and processing the depth information are all accomplished on one chip. The SoC is designed in the SMIC 0.13 μm CIS CMOS technology.
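
    The quoted range and resolution follow directly from the two-way time of flight; the sketch below converts a TDC time-bin index to distance, assuming a 100 ps bin width (inferred from the 1.5 cm figure, not stated in the abstract).

        C = 299_792_458.0                     # speed of light, m/s

        def tof_distance_m(bin_index, bin_width_s=100e-12):
            # round-trip time, so distance = c * t / 2; a 100 ps bin -> 1.5 cm of range
            t = bin_index * bin_width_s
            return C * t / 2.0

        # a 300 m detection range then corresponds to roughly 20,000 time bins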

  8. Machine Learning on Images: Combining Passive Microwave and Optical Data to Estimate Snow Water Equivalent

    NASA Astrophysics Data System (ADS)

    Dozier, J.; Tolle, K.; Bair, N.

    2014-12-01

    We have a problem that may be a specific example of a generic one. The task is to estimate spatiotemporally distributed estimates of snow water equivalent (SWE) in snow-dominated mountain environments, including those that lack on-the-ground measurements. Several independent methods exist, but all are problematic. The remotely sensed date of disappearance of snow from each pixel can be combined with a calculation of melt to reconstruct the accumulated SWE for each day back to the last significant snowfall. Comparison with streamflow measurements in mountain ranges where such data are available shows this method to be accurate, but the big disadvantage is that SWE can only be calculated retroactively after snow disappears, and even then only for areas with little accumulation during the melt season. Passive microwave sensors offer real-time global SWE estimates but suffer from several issues, notably signal loss in wet snow or in forests, saturation in deep snow, subpixel variability in the mountains owing to the large (~25 km) pixel size, and SWE overestimation in the presence of large grains such as depth and surface hoar. Throughout the winter and spring, snow-covered area can be measured at sub-km spatial resolution with optical sensors, with accuracy and timeliness improved by interpolating and smoothing across multiple days. So the question is, how can we establish the relationship between Reconstruction—available only after the snow goes away—and passive microwave and optical data to accurately estimate SWE during the snow season, when the information can help forecast spring runoff? Linear regression provides one answer, but can modern machine learning techniques (used to persuade people to click on web advertisements) adapt to improve forecasts of floods and droughts in areas where more than one billion people depend on snowmelt for their water resources?
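
    One concrete realization of the linear-regression answer mentioned at the end is sketched below: reconstructed SWE (available only retrospectively) serves as the training target for a model driven by passive-microwave SWE and optical fractional snow-covered area. The feature set and the plain linear model are illustrative assumptions; the abstract explicitly asks whether more flexible machine-learning models would do better.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        def fit_swe_model(pm_swe, fsca, reconstructed_swe):
            # features: microwave SWE, optical fSCA, and their interaction
            X = np.column_stack([pm_swe.ravel(), fsca.ravel(),
                                 pm_swe.ravel() * fsca.ravel()])
            y = reconstructed_swe.ravel()
            ok = np.isfinite(X).all(axis=1) & np.isfinite(y)
            return LinearRegression().fit(X[ok], y[ok])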

  9. Micro-endoscopy of the human vas deferens: a feasibility study of a novel device in several ex vivo models.

    PubMed

    Trottmann, M; Sroka, R; Braun, C; Liedl, B; Schaaf, H; Graw, M; Becker, A J; Stief, C G; Khoder, W Y

    2017-01-01

    The aim of this study was to show the limitations as well as the potential of micro-endoscopy techniques as an innovative diagnostic and therapeutic approach in andrology. Two kinds of custom-made micro-endoscopes (ME) were tested in ex vivo vas deferens specimens and in a post-mortem whole body. The semi-rigid ME included a micro-optic (0.9 mm outer diameter [OD], 10,000 pixels, 120° vision angle [VE], 3-20 mm field depth [FD]) and an integrated fibre-optic light source. The flexible ME was composed of a micro-optic (OD = 0.6 mm, 6,000 pixels, 120° VE, 3-20 mm FD). The ex vivo study included retrograde investigation of the vas deferens (surgical specimens n = 9, radical prostatectomy n = 3). The post-mortem investigation (n = 4) included inspection of the vas deferens via both approaches. The results showed that antegrade and retrograde rigid endoscopy of the vas deferens was feasible as a diagnostic tool. The working channel enabled therapeutic use, including biopsies or baskets. Using the flexible ME, the orifices of the ejaculatory ducts were identified. In vivo cadaveric retrograde cannulation of the orifices was successful. Post-mortem changes of the verumontanum hindered examination beyond this point. Orifices were identified shaded behind a thin transparent membrane. Antegrade vasoscopy using the flexible ME was possible up to the internal inguinal ring. Further advancement was impossible because of the anatomical angle and the lack of adequate vision guidance. The interior of the vas deferens was clearly visible and was documented with pictures and movies. Altogether, the described ME techniques were feasible and effective, offering the potential of innovative diagnostic and therapeutic approaches for use in the genital tract. Several innovative indications can be expected. © 2016 American Society of Andrology and European Academy of Andrology.

  10. Dynamic plasmonic colour display

    NASA Astrophysics Data System (ADS)

    Duan, Xiaoyang; Kamin, Simon; Liu, Na

    2017-02-01

    Plasmonic colour printing based on engineered metasurfaces has revolutionized colour display science due to its unprecedented subwavelength resolution and high-density optical data storage. However, advanced plasmonic displays with novel functionalities including dynamic multicolour printing, animations, and highly secure encryption have remained in their infancy. Here we demonstrate a dynamic plasmonic colour display technique that enables all the aforementioned functionalities using catalytic magnesium metasurfaces. Controlled hydrogenation and dehydrogenation of the constituent magnesium nanoparticles, which serve as dynamic pixels, allow for plasmonic colour printing, tuning, erasing and restoration of colour. Different dynamic pixels feature distinct colour transformation kinetics, enabling plasmonic animations. Through smart material processing, information encoded on selected pixels, which are indiscernible to both optical and scanning electron microscopies, can only be read out using hydrogen as a decoding key, suggesting a new generation of information encryption and anti-counterfeiting applications.

  11. Neural Network for Image-to-Image Control of Optical Tweezers

    NASA Technical Reports Server (NTRS)

    Decker, Arthur J.; Anderson, Robert C.; Weiland, Kenneth E.; Wrbanek, Susan Y.

    2004-01-01

    A method is discussed for using neural networks to control optical tweezers. Neural-net outputs are combined with scaling and tiling to generate 480 by 480-pixel control patterns for a spatial light modulator (SLM). The SLM can be combined in various ways with a microscope to create movable tweezers traps with controllable profiles. The neural nets are intended to respond to scattered light from carbon and silicon carbide nanotube sensors. The nanotube sensors are to be held by the traps for manipulation and calibration. Scaling and tiling allow the 100 by 100-pixel maximum resolution of the neural-net software to be applied in stages to exploit the full 480 by 480-pixel resolution of the SLM. One of these stages is intended to create sensitive null detectors for detecting variations in the scattered light from the nanotube sensors.
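
    The scaling-and-tiling step mentioned above can be illustrated with a simple array operation. The integer upscale factor and the symmetric crop below are assumptions chosen only to map a 100 by 100 net output onto a 480 by 480 SLM grid; they are not the authors' actual procedure.

```python
import numpy as np

net_output = np.random.rand(100, 100)          # stand-in for a neural-net control pattern

# Upsample each net cell to a 5 x 5 block of SLM pixels (500 x 500),
# then crop symmetrically to the 480 x 480 modulator resolution.
upsampled = np.kron(net_output, np.ones((5, 5)))
slm_pattern = upsampled[10:490, 10:490]
assert slm_pattern.shape == (480, 480)
```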

  12. Dynamic plasmonic colour display.

    PubMed

    Duan, Xiaoyang; Kamin, Simon; Liu, Na

    2017-02-24

    Plasmonic colour printing based on engineered metasurfaces has revolutionized colour display science due to its unprecedented subwavelength resolution and high-density optical data storage. However, advanced plasmonic displays with novel functionalities including dynamic multicolour printing, animations, and highly secure encryption have remained in their infancy. Here we demonstrate a dynamic plasmonic colour display technique that enables all the aforementioned functionalities using catalytic magnesium metasurfaces. Controlled hydrogenation and dehydrogenation of the constituent magnesium nanoparticles, which serve as dynamic pixels, allow for plasmonic colour printing, tuning, erasing and restoration of colour. Different dynamic pixels feature distinct colour transformation kinetics, enabling plasmonic animations. Through smart material processing, information encoded on selected pixels, which are indiscernible to both optical and scanning electron microscopies, can only be read out using hydrogen as a decoding key, suggesting a new generation of information encryption and anti-counterfeiting applications.

  13. Dynamic plasmonic colour display

    PubMed Central

    Duan, Xiaoyang; Kamin, Simon; Liu, Na

    2017-01-01

    Plasmonic colour printing based on engineered metasurfaces has revolutionized colour display science due to its unprecedented subwavelength resolution and high-density optical data storage. However, advanced plasmonic displays with novel functionalities including dynamic multicolour printing, animations, and highly secure encryption have remained in their infancy. Here we demonstrate a dynamic plasmonic colour display technique that enables all the aforementioned functionalities using catalytic magnesium metasurfaces. Controlled hydrogenation and dehydrogenation of the constituent magnesium nanoparticles, which serve as dynamic pixels, allow for plasmonic colour printing, tuning, erasing and restoration of colour. Different dynamic pixels feature distinct colour transformation kinetics, enabling plasmonic animations. Through smart material processing, information encoded on selected pixels, which are indiscernible to both optical and scanning electron microscopies, can only be read out using hydrogen as a decoding key, suggesting a new generation of information encryption and anti-counterfeiting applications. PMID:28232722

  14. High stroke pixel for a deformable mirror

    DOEpatents

    Miles, Robin R.; Papavasiliou, Alexandros P.

    2005-09-20

    A mirror pixel that can be fabricated using standard MEMS methods for a deformable mirror. The pixel is electrostatically actuated and is capable of the high deflections needed for spaced-based mirror applications. In one embodiment, the mirror comprises three layers, a top or mirror layer, a middle layer which consists of flexures, and a comb drive layer, with the flexures of the middle layer attached to the mirror layer and to the comb drive layer. The comb drives are attached to a frame via spring flexures. A number of these mirror pixels can be used to construct a large mirror assembly. The actuator for the mirror pixel may be configured as a crenellated beam with one end fixedly secured, or configured as a scissor jack. The mirror pixels may be used in various applications requiring high stroke adaptive optics.

  15. Ground calibration of the spatial response and quantum efficiency of the CdZnTe hard x-ray detectors for NuSTAR

    NASA Astrophysics Data System (ADS)

    Grefenstette, Brian W.; Bhalerao, Varun; Cook, W. Rick; Harrison, Fiona A.; Kitaguchi, Takao; Madsen, Kristin K.; Mao, Peter H.; Miyasaka, Hiromasa; Rana, Vikram

    2017-08-01

    Pixelated Cadmium Zinc Telluride (CdZnTe) detectors are currently flying on the Nuclear Spectroscopic Telescope ARray (NuSTAR) NASA Astrophysics Small Explorer. While the pixel pitch of the detectors is ≈ 605 μm, we can leverage the detector readout architecture to determine the interaction location of an individual photon to much higher spatial accuracy. The sub-pixel spatial location allows us to finely oversample the point spread function of the optics and reduces imaging artifacts due to pixelation. In this paper we demonstrate how the sub-pixel information is obtained, how the detectors were calibrated, and provide ground verification of the quantum efficiency of our Monte Carlo model of the detector response.

  16. CdTe focal plane detector for hard x-ray focusing optics

    NASA Astrophysics Data System (ADS)

    Seller, Paul; Wilson, Matthew D.; Veale, Matthew C.; Schneider, Andreas; Gaskin, Jessica; Wilson-Hodge, Colleen; Christe, Steven; Shih, Albert Y.; Gregory, Kyle; Inglis, Andrew; Panessa, Marco

    2015-08-01

    The demand for higher resolution x-ray optics (a few arcseconds or better) in the areas of astrophysics and solar science has, in turn, driven the development of complementary detectors. These detectors should have fine pixels, necessary to appropriately oversample the optics at a given focal length, and an energy response also matched to that of the optics. Rutherford Appleton Laboratory have developed a 3-side buttable, 20 mm x 20 mm CdTe-based detector with 250 μm square pixels (80x80 pixels) which achieves 1 keV FWHM @ 60 keV and gives full spectroscopy between 5 keV and 200 keV. An added advantage of these detectors is that they have a full-frame readout rate of 10 kHz. Working with NASA Goddard Space Flight Center and Marshall Space Flight Center, 4 of these 1 mm-thick CdTe detectors are tiled into a 2x2 array for use at the focal plane of a balloon-borne hard-x-ray telescope, and a similar configuration could be suitable for astrophysics and solar space-based missions. This effort encompasses the fabrication and testing of flight-suitable front-end electronics and calibration of the assembled detector arrays. We explain the operation of the pixelated ASIC readout and measurements, front-end electronics development, preliminary X-ray imaging and spectral performance, and plans for full calibration of the detector assemblies. Work done in conjunction with the NASA Centers is funded through the NASA Science Mission Directorate Astrophysics Research and Analysis Program.

  17. CdTe Focal Plane Detector for Hard X-Ray Focusing Optics

    NASA Technical Reports Server (NTRS)

    Seller, Paul; Wilson, Matthew D.; Veale, Matthew C.; Schneider, Andreas; Gaskin, Jessica; Wilson-Hodge, Colleen; Christe, Steven; Shih, Albert Y.; Inglis, Andrew; Panessa, Marco

    2015-01-01

    The demand for higher resolution x-ray optics (a few arcseconds or better) in the areas of astrophysics and solar science has, in turn, driven the development of complementary detectors. These detectors should have fine pixels, necessary to appropriately oversample the optics at a given focal length, and an energy response also matched to that of the optics. Rutherford Appleton Laboratory have developed a 3-side buttable, 20 millimeter x 20 millimeter CdTe-based detector with 250 micrometer square pixels (80 x 80 pixels) which achieves 1 kiloelectronvolt FWHM (Full-Width Half-Maximum) @ 60 kiloelectronvolts and gives full spectroscopy between 5 kiloelectronvolts and 200 kiloelectronvolts. An added advantage of these detectors is that they have a full-frame readout rate of 10 kilohertz. Working with NASA Goddard Space Flight Center and Marshall Space Flight Center, 4 of these 1 millimeter-thick CdTe detectors are tiled into a 2 x 2 array for use at the focal plane of a balloon-borne hard-x-ray telescope, and a similar configuration could be suitable for astrophysics and solar space-based missions. This effort encompasses the fabrication and testing of flight-suitable front-end electronics and calibration of the assembled detector arrays. We explain the operation of the pixelated ASIC readout and measurements, front-end electronics development, preliminary X-ray imaging and spectral performance, and plans for full calibration of the detector assemblies. Work done in conjunction with the NASA Centers is funded through the NASA Science Mission Directorate Astrophysics Research and Analysis Program.

  18. Edge detection for optical synthetic aperture based on deep neural network

    NASA Astrophysics Data System (ADS)

    Tan, Wenjie; Hui, Mei; Liu, Ming; Kong, Lingqin; Dong, Liquan; Zhao, Yuejin

    2017-09-01

    Synthetic aperture optics systems can meet the demand for next-generation space telescopes that are lighter, larger, and foldable. However, the boundaries of segmented aperture systems are much more complex than those of a whole aperture. More edge regions mean more imaging edge pixels, which are often mixed and discretized. In order to achieve high-resolution imaging, it is necessary to identify the gaps between the sub-apertures and the edges of the projected fringes. In this work, we introduce a deep neural network into edge detection for optical synthetic aperture imaging. According to the detection needs, we constructed image sets from experiments and simulations. Based on MatConvNet, a MATLAB toolbox, we ran the neural network, trained it on the training image set, and tested its performance on the validation set. The training was stopped when the test error on the validation set stopped declining. Given an input image, the neighborhood area around each pixel is fed to the network, and the image is scanned pixel by pixel through the trained hidden layers; the network output is a judgment on whether the center of the input block lies on an edge of the fringes. We experimented with various pre-processing and post-processing techniques to reveal their influence on edge detection performance. Compared with traditional algorithms or their improvements, our method makes its decision over a much larger neighborhood and is therefore more global and comprehensive. Experiments on more than 2,000 images are also given to show that our method outperforms classical algorithms in edge detection on optical images.
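
    A minimal sketch of the pixel-by-pixel neighborhood scan described above is given below. The patch size and the stand-in classifier are assumptions; the paper's trained MatConvNet network is not reproduced here, so a simple gradient-energy test takes its place purely to show the scanning structure.

```python
import numpy as np

def classify_patch(patch):
    # Stand-in for the trained multi-hidden-layer network: a gradient-energy
    # threshold plays the role of the edge/non-edge decision here.
    gy, gx = np.gradient(patch.astype(float))
    return float(np.hypot(gx, gy).mean() > 10.0)

def scan_edges(image, half=7):
    """Slide a (2*half+1)-pixel neighborhood over the image and label its center pixel."""
    h, w = image.shape
    edge_map = np.zeros((h, w))
    for i in range(half, h - half):
        for j in range(half, w - half):
            patch = image[i - half:i + half + 1, j - half:j + half + 1]
            edge_map[i, j] = classify_patch(patch)
    return edge_map
```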

  19. The Design of Optical Sensor for the Pinhole/Occulter Facility

    NASA Technical Reports Server (NTRS)

    Greene, Michael E.

    1990-01-01

    Three optical sight sensor systems were designed, built and tested. Two optical line-of-sight sensor systems are capable of measuring the absolute pointing angle to the sun. The system is for use with the Pinhole/Occulter Facility (P/OF), a solar hard X-ray experiment to be flown from Space Shuttle or Space Station. The sensor consists of a pinhole camera with two pairs of perpendicularly mounted linear photodiode arrays to detect the intensity distribution of the solar image produced by the pinhole, track and hold circuitry for data reduction, an analog to digital converter, and a microcomputer. The deflection of the image center is calculated from these data using an approximation for the solar image. A second system consists of a pinhole camera with a pair of perpendicularly mounted linear photodiode arrays, amplification circuitry, threshold detection circuitry, and a microcomputer board. The deflection of the image is calculated by knowing the position of each pixel of the photodiode array and merely counting the pixel numbers until the threshold is surpassed. A third optical sensor system is capable of measuring the internal vibration of the P/OF between the mask and base. The system consists of a white light source, a mirror and a pair of perpendicularly mounted linear photodiode arrays to detect the intensity distribution of the solar image produced by the mirror, amplification circuitry, threshold detection circuitry, and a microcomputer board. The deflection of the image, and hence the vibration of the structure, is calculated by knowing the position of each pixel of the photodiode array and merely counting the pixel numbers until the threshold is surpassed.
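
    The threshold-counting readout used by the second and third systems can be written compactly; the array pitch, threshold level, and reference pixel below are placeholders rather than values from the instrument.

```python
import numpy as np

def image_deflection(signal, pitch_um=25.0, threshold=0.5, reference_pixel=512):
    """Deflection (micrometres) of the illuminated edge on a linear photodiode array,
    found by counting pixels until the threshold is first surpassed."""
    above = np.nonzero(signal > threshold)[0]
    if above.size == 0:
        raise ValueError("threshold never surpassed")
    return (above[0] - reference_pixel) * pitch_um
```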

  20. VICS82: The VISTA–CFHT Stripe 82 Near-infrared Survey

    NASA Astrophysics Data System (ADS)

    Geach, J. E.; Lin, Y.-T.; Makler, M.; Kneib, J.-P.; Ross, N. P.; Wang, W.-H.; Hsieh, B.-C.; Leauthaud, A.; Bundy, K.; McCracken, H. J.; Comparat, J.; Caminha, G. B.; Hudelot, P.; Lin, L.; Van Waerbeke, L.; Pereira, M. E. S.; Mast, D.

    2017-07-01

    We present the VISTA–CFHT Stripe 82 (VICS82) survey: a near-infrared (J+Ks) survey covering 150 square degrees of the Sloan Digital Sky Survey (SDSS) equatorial Stripe 82 to an average depth of J = 21.9 AB mag and Ks = 21.4 AB mag (80% completeness limits; 5σ point-source depths are approximately 0.5 mag brighter). VICS82 contributes to the growing legacy of multiwavelength data in the Stripe 82 footprint. The addition of near-infrared photometry to the existing SDSS Stripe 82 coadd ugriz photometry reduces the scatter in stellar mass estimates to δlog(M⋆) ≈ 0.3 dex for galaxies with M⋆ > 10⁹ M⊙ at z ≈ 0.5, and offers improvement compared to optical-only estimates out to z ≈ 1, with stellar masses constrained within a factor of approximately 2.5. When combined with other multiwavelength imaging of the Stripe, including moderate-to-deep ultraviolet (GALEX), optical and mid-infrared (Spitzer-IRAC) coverage, as well as tens of thousands of spectroscopic redshifts, VICS82 gives access to approximately 0.5 Gpc³ of comoving volume. Some of the main science drivers of VICS82 include (a) measuring the stellar mass function of L⋆ galaxies out to z ∼ 1; (b) detecting intermediate-redshift quasars at 2 ≲ z ≲ 3.5; (c) measuring the stellar mass function and baryon census of clusters of galaxies, and (d) performing cross-correlation experiments of cosmic microwave background lensing in the optical/near-infrared that link stellar mass to large-scale dark matter structure. Here we define and describe the survey, highlight some early science results, and present the first public data release, which includes an SDSS-matched catalog as well as the calibrated pixel data themselves.

  1. An x-ray fluorescence imaging system for gold nanoparticle detection.

    PubMed

    Ricketts, K; Guazzoni, C; Castoldi, A; Gibson, A P; Royle, G J

    2013-11-07

    Gold nanoparticles (GNPs) may be used as a contrast agent to identify tumour location and can be modified to target and image specific tumour biological parameters. There are currently no imaging systems in the literature that have sufficient sensitivity to GNP concentration and distribution measurement at sufficient tissue depth for use in in vivo and in vitro studies. We have demonstrated that high detecting sensitivity of GNPs can be achieved using x-ray fluorescence; furthermore this technique enables greater depth imaging in comparison to optical modalities. Two x-ray fluorescence systems were developed and used to image a range of GNP imaging phantoms. The first system consisted of a 10 mm² silicon drift detector coupled to a slightly focusing polycapillary optic which allowed 2D energy resolved imaging in step and scan mode. The system has sensitivity to GNP concentrations as low as 1 ppm. GNP concentrations different by a factor of 5 could be resolved, offering potential to distinguish tumour from non-tumour. The second system was designed to avoid slow step and scan image acquisition; the feasibility of excitation of the whole specimen with a wide beam and detection of the fluorescent x-rays with a pixellated controlled drift energy resolving detector without scanning was investigated. A parallel polycapillary optic coupled to the detector was successfully used to ascertain the position where fluorescence was emitted. The tissue penetration of the technique was demonstrated to be sufficient for near-surface small-animal studies, and for imaging 3D in vitro cellular constructs. Previous work demonstrates strong potential for both imaging systems to form quantitative images of GNP concentration.

  2. LSA SAF Meteosat FRP products - Part 2: Evaluation and demonstration for use in the Copernicus Atmosphere Monitoring Service (CAMS)

    NASA Astrophysics Data System (ADS)

    Roberts, G.; Wooster, M. J.; Xu, W.; Freeborn, P. H.; Morcrette, J.-J.; Jones, L.; Benedetti, A.; Jiangping, H.; Fisher, D.; Kaiser, J. W.

    2015-11-01

    Characterising the dynamics of landscape-scale wildfires at very high temporal resolutions is best achieved using observations from Earth Observation (EO) sensors mounted onboard geostationary satellites. As a result, a number of operational active fire products have been developed from the data of such sensors, an example being the Fire Radiative Power (FRP) products, the FRP-PIXEL and FRP-GRID products, generated by the Land Surface Analysis Satellite Applications Facility (LSA SAF) from imagery collected by the Spinning Enhanced Visible and Infrared Imager (SEVIRI) onboard the Meteosat Second Generation (MSG) series of geostationary EO satellites. The processing chain developed to deliver these FRP products detects SEVIRI pixels containing actively burning fires and characterises their FRP output across four geographic regions covering Europe, part of South America and Northern and Southern Africa. The FRP-PIXEL product contains the highest spatial and temporal resolution FRP data set, whilst the FRP-GRID product contains a spatio-temporal summary that includes bias adjustments for cloud cover and the non-detection of low FRP fire pixels. Here we evaluate these two products against active fire data collected by the Moderate Resolution Imaging Spectroradiometer (MODIS) and compare the results to those for three alternative active fire products derived from SEVIRI imagery. The FRP-PIXEL product is shown to detect a substantially greater number of active fire pixels than do alternative SEVIRI-based products, and comparison to MODIS on a per-fire basis indicates a strong agreement and low bias in terms of FRP values. However, low FRP fire pixels remain undetected by SEVIRI, with errors of active fire pixel detection commission and omission compared to MODIS ranging between 9-13 % and 65-77 % respectively in Africa. Higher errors of omission result in greater underestimation of regional FRP totals relative to those derived from simultaneously collected MODIS data, ranging from 35 % over the Northern Africa region to 89 % over the European region. High errors of active fire omission and FRP underestimation are found over Europe and South America and result from SEVIRI's larger pixel area over these regions. An advantage of using FRP for characterising wildfire emissions is the ability to do so very frequently and in near-real time (NRT). To illustrate the potential of this approach, wildfire fuel consumption rates derived from the SEVIRI FRP-PIXEL product are used to characterise smoke emissions of the 2007 "mega-fire" event centred on the Peloponnese (Greece) and used within the European Centre for Medium-Range Weather Forecasting (ECMWF) Integrated Forecasting System (IFS) as a demonstration of what can be achieved when using geostationary active fire data within the Copernicus Atmosphere Monitoring Service (CAMS). Qualitative comparison of the modelled smoke plumes with MODIS optical imagery illustrates that the model captures the temporal and spatial dynamics of the plume very well, and that high temporal resolution emissions estimates such as those available from a geostationary orbit are important for capturing the sub-daily variability in smoke plume parameters such as aerosol optical depth (AOD), which are increasingly less well resolved using daily or coarser temporal resolution emissions data sets. Quantitative comparison of modelled AOD with coincident MODIS and AERONET (Aerosol Robotic Network) AOD indicates that the former is overestimated by ~ 20-30 %, but captures the observed AOD dynamics with a high degree of fidelity. The case study highlights the potential of using geostationary FRP data to drive fire emissions estimates for use within atmospheric transport models such as those implemented in the Monitoring Atmospheric Composition and Climate (MACC) series of projects for the CAMS.

  3. Towards dualband megapixel QWIP focal plane arrays

    NASA Astrophysics Data System (ADS)

    Gunapala, S. D.; Bandara, S. V.; Liu, J. K.; Mumolo, J. M.; Hill, C. J.; Rafol, S. B.; Salazar, D.; Woolaway, J.; LeVan, P. D.; Tidrow, M. Z.

    2007-04-01

    Mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) 1024 × 1024 pixel quantum well infrared photodetector (QWIP) focal planes have been demonstrated with excellent imaging performance. The MWIR QWIP detector array has demonstrated a noise equivalent differential temperature (NEΔT) of 17 mK at a 95 K operating temperature with f/2.5 optics at 300 K background and the LWIR detector array has demonstrated a NEΔT of 13 mK at a 70 K operating temperature with the same optical and background conditions as the MWIR detector array after the subtraction of system noise. Both MWIR and LWIR focal planes have shown background limited performance (BLIP) at 90 K and 70 K operating temperatures respectively, with similar optical and background conditions. In addition, we have demonstrated MWIR and LWIR pixel co-registered simultaneously readable dualband QWIP focal plane arrays. In this paper, we will discuss the performance in terms of quantum efficiency, NEΔT, uniformity, operability, and modulation transfer functions of the 1024 × 1024 pixel arrays and the progress of dualband QWIP focal plane array development work.

  4. Multicolor megapixel QWIP focal plane arrays for remote sensing instruments

    NASA Astrophysics Data System (ADS)

    Gunapala, S. D.; Bandara, S. V.; Liu, J. K.; Hill, C. J.; Rafol, S. B.; Mumolo, J. M.; Trinh, J. T.; Tidrow, M. Z.; LeVan, P. D.

    2006-08-01

    Mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) 1024x1024 pixel quantum well infrared photodetector (QWIP) focal planes have been demonstrated with excellent imaging performance. The MWIR QWIP detector array has demonstrated a noise equivalent differential temperature (NEΔT) of 17 mK at a 95K operating temperature with f/2.5 optics at 300K background and the LWIR detector array has demonstrated a NEΔT of 13 mK at a 70K operating temperature with the same optical and background conditions as the MWIR detector array after the subtraction of system noise. Both MWIR and LWIR focal planes have shown background limited performance (BLIP) at 90K and 70K operating temperatures respectively, with similar optical and background conditions. In addition, we have demonstrated MWIR and LWIR pixel co-registered simultaneously readable dualband QWIP focal plane arrays. In this paper, we will discuss the performance in terms of quantum efficiency, NEΔT, uniformity, operability, and modulation transfer functions of the 1024x1024 pixel arrays and the progress of dualband QWIP focal plane array development work.

  5. Large-format platinum silicide microwave kinetic inductance detectors for optical to near-IR astronomy.

    PubMed

    Szypryt, P; Meeker, S R; Coiffard, G; Fruitwala, N; Bumble, B; Ulbricht, G; Walter, A B; Daal, M; Bockstiegel, C; Collura, G; Zobrist, N; Lipartito, I; Mazin, B A

    2017-10-16

    We have fabricated and characterized 10,000 and 20,440 pixel Microwave Kinetic Inductance Detector (MKID) arrays for the Dark-speckle Near-IR Energy-resolved Superconducting Spectrophotometer (DARKNESS) and the MKID Exoplanet Camera (MEC). These instruments are designed to sit behind adaptive optics systems with the goal of directly imaging exoplanets in an 800-1400 nm band. Previous large optical and near-IR MKID arrays were fabricated using substoichiometric titanium nitride (TiN) on a silicon substrate. These arrays, however, suffered from severe non-uniformities in the TiN critical temperature, causing resonances to shift away from their designed values and lowering usable detector yield. We have begun fabricating DARKNESS and MEC arrays using platinum silicide (PtSi) on sapphire instead of TiN. Not only do these arrays have much higher uniformity than the TiN arrays, resulting in higher pixel yields, they have demonstrated better spectral resolution than TiN MKIDs of similar design. PtSi MKIDs also do not display the hot pixel effects seen when illuminating TiN on silicon MKIDs with photons with wavelengths shorter than 1 µm.

  6. Coded aperture detector: an image sensor with sub 20-nm pixel resolution.

    PubMed

    Miyakawa, Ryan; Mayer, Rafael; Wojdyla, Antoine; Vannier, Nicolas; Lesser, Ian; Aron-Dine, Shifrah; Naulleau, Patrick

    2014-08-11

    We describe the coded aperture detector, a novel image sensor based on uniformly redundant arrays (URAs) with customizable pixel size, resolution, and operating photon energy regime. In this sensor, a coded aperture is scanned laterally at the image plane of an optical system, and the transmitted intensity is measured by a photodiode. The image intensity is then digitally reconstructed using a simple convolution. We present results from a proof-of-principle optical prototype, demonstrating high-fidelity image sensing comparable to a CCD. A 20-nm half-pitch URA fabricated by the Center for X-ray Optics (CXRO) nano-fabrication laboratory is presented that is suitable for high-resolution image sensing at EUV and soft X-ray wavelengths.
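
    A rough sketch of the scan-and-reconstruct idea is shown below, assuming a binary mask, circular shifts of the aperture during the scan, and a simple +1/-1 decoding array; the actual URA construction and the CXRO hardware details are not reproduced.

```python
import numpy as np

def reconstruct(measurements, mask):
    """Recover the image-plane intensity from photodiode readings taken as the coded
    aperture is stepped laterally. Reconstruction is modelled as a circular correlation
    of the measurement array with a decoding array derived from the mask (+1 open, -1 closed)."""
    decoder = 2.0 * mask - 1.0                          # illustrative decoding choice
    M = np.fft.fft2(measurements)
    D = np.fft.fft2(decoder)
    return np.real(np.fft.ifft2(M * np.conj(D))) / mask.size
```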

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shanks, Katherine S.; Philipp, Hugh T.; Weiss, Joel T.

    Experiments at storage ring light sources as well as at next-generation light sources increasingly require detectors capable of high dynamic range operation, combining low-noise detection of single photons with large pixel well depth. XFEL sources in particular provide pulse intensities sufficiently high that a purely photon-counting approach is impractical. The High Dynamic Range Pixel Array Detector (HDR-PAD) project aims to provide a dynamic range extending from single-photon sensitivity to 10⁶ photons/pixel in a single XFEL pulse while maintaining the ability to tolerate a sustained flux of 10¹¹ ph/s/pixel at a storage ring source. Achieving these goals involves the development of fast pixel front-end electronics as well as, in the XFEL case, leveraging the delayed charge collection due to plasma effects in the sensor. A first prototype of essential electronic components of the HDR-PAD readout ASIC, exploring different options for the pixel front-end, has been fabricated. Here, the HDR-PAD concept and preliminary design will be described.

  8. Impact of absorbing aerosol deposition on snow albedo reduction over the southern Tibetan plateau based on satellite observations

    NASA Astrophysics Data System (ADS)

    Lee, Wei-Liang; Liou, K. N.; He, Cenlin; Liang, Hsin-Chien; Wang, Tai-Chi; Li, Qinbin; Liu, Zhenxin; Yue, Qing

    2017-08-01

    We investigate the snow albedo variation in spring over the southern Tibetan Plateau induced by the deposition of light-absorbing aerosols using remote sensing data from the moderate resolution imaging spectroradiometer (MODIS) aboard the Terra satellite during 2001-2012. We have selected pixels with 100 % snow cover for the entire period in March and April to avoid albedo contamination by other types of land surfaces. A model simulation using GEOS-Chem shows that aerosol optical depth (AOD) is a good indicator for black carbon and dust deposition on snow over the southern Tibetan Plateau. The monthly means of satellite-retrieved land surface temperature (LST) and AOD over 100 % snow-covered pixels during the 12 years are used in a multiple linear regression analysis to derive the empirical relationship between snow albedo and these variables. Along with the LST effect, AOD is shown to be an important factor contributing to snow albedo reduction. We illustrate through statistical analysis that a 1-K increase in LST and a 0.1 increase in AOD indicate decreases in snow albedo of 0.75 % and 2.1 % in the southern Tibetan Plateau, corresponding to local shortwave radiative forcings of 1.5 and 4.2 W m⁻², respectively.

  9. High-performance dual-speed CCD camera system for scientific imaging

    NASA Astrophysics Data System (ADS)

    Simpson, Raymond W.

    1996-03-01

    Traditionally, scientific camera systems were partitioned with a 'camera head' containing the CCD and its support circuitry and a camera controller, which provided analog to digital conversion, timing, control, computer interfacing, and power. A new, unitized high performance scientific CCD camera with dual speed readout at 1 × 10⁶ or 5 × 10⁶ pixels per second, 12 bit digital gray scale, high performance thermoelectric cooling, and built in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remote controlled submersible vehicle. The oceanographic version achieves 16 bit dynamic range at 1.5 × 10⁵ pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real time fiber optic link.

  10. High signal-to-noise-ratio electro-optical terahertz imaging system based on an optical demodulating detector array.

    PubMed

    Spickermann, Gunnar; Friederich, Fabian; Roskos, Hartmut G; Bolívar, Peter Haring

    2009-11-01

    We present a 64x48 pixel 2D electro-optical terahertz (THz) imaging system using a photonic mixing device time-of-flight camera as an optical demodulating detector array. The combination of electro-optic detection with a time-of-flight camera increases sensitivity drastically, enabling the use of a nonamplified laser source for high-resolution real-time THz electro-optic imaging.

  11. Detector Sampling of Optical/IR Spectra: How Many Pixels per FWHM?

    NASA Astrophysics Data System (ADS)

    Robertson, J. Gordon

    2017-08-01

    Most optical and IR spectra are now acquired using detectors with finite-width pixels in a square array. Each pixel records the received intensity integrated over its own area, and pixels are separated by the array pitch. This paper examines the effects of such pixellation, using computed simulations to illustrate the effects which most concern the astronomer end-user. It is shown that coarse sampling increases the random noise errors in wavelength by typically 10-20 % at 2 pixels per Full Width at Half Maximum, but with wide variation depending on the functional form of the instrumental Line Spread Function (i.e. the instrumental response to a monochromatic input) and on the pixel phase. If line widths are determined, they are even more strongly affected at low sampling frequencies. However, the noise in fitted peak amplitudes is minimally affected by pixellation, with increases less than about 5%. Pixellation has a substantial but complex effect on the ability to see a relative minimum between two closely spaced peaks (or relative maximum between two absorption lines). The consistent scale of resolving power presented by Robertson to overcome the inadequacy of the Full Width at Half Maximum as a resolution measure is here extended to cover pixellated spectra. The systematic bias errors in wavelength introduced by pixellation, independent of signal/noise ratio, are examined. While they may be negligible for smooth well-sampled symmetric Line Spread Functions, they are very sensitive to asymmetry and high spatial frequency sub-structure. The Modulation Transfer Function for sampled data is shown to give a useful indication of the extent of improperly sampled signal in a Line Spread Function. The common maxim that 2 pixels per Full Width at Half Maximum is the Nyquist limit is incorrect and most Line Spread Functions will exhibit some aliasing at this sample frequency. While 2 pixels per Full Width at Half Maximum is nevertheless often an acceptable minimum for moderate signal/noise work, it is preferable to carry out simulations for any actual or proposed Line Spread Function to find the effects of various sampling frequencies. Where spectrograph end-users have a choice of sampling frequencies, through on-chip binning and/or spectrograph configurations, it is desirable that the instrument user manual should include an examination of the effects of the various choices.
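
    Readers wishing to experiment with the sampling effects discussed here can generate pixel-integrated samples of a line profile as in the sketch below. The Gaussian Line Spread Function, the pixel-phase parameter, and the grid length are illustrative choices, not the paper's simulation code.

```python
import numpy as np
from math import erf, sqrt, log

def pixellated_line(pixels_per_fwhm, n_pixels=15, phase=0.0, amplitude=1.0):
    """Integrate a unit-FWHM Gaussian line over finite-width pixels.

    pixels_per_fwhm sets the sampling frequency; phase shifts the line centre
    by a fraction of a pixel relative to the pixel grid.
    """
    sigma = 1.0 / (2.0 * sqrt(2.0 * log(2.0)))       # FWHM = 1 in these units
    pitch = 1.0 / pixels_per_fwhm                     # pixel width in FWHM units
    centre = (n_pixels / 2.0 + phase) * pitch
    edges = np.arange(n_pixels + 1) * pitch
    cdf = np.array([0.5 * (1.0 + erf((e - centre) / (sigma * sqrt(2.0)))) for e in edges])
    return amplitude * np.diff(cdf)                   # flux collected in each pixel

samples = pixellated_line(pixels_per_fwhm=2.0, phase=0.3)
```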

  12. Modulation transfer function measurement of microbolometer focal plane array by Lloyd's mirror method

    NASA Astrophysics Data System (ADS)

    Druart, Guillaume; Rommeluere, Sylvain; Viale, Thibault; Guerineau, Nicolas; Ribet-Mohamed, Isabelle; Crastes, Arnaud; Durand, Alain; Taboury, Jean

    2014-05-01

    Today, both military and civilian applications require miniaturized and cheap optical systems. One way to achieve this trend consists in decreasing the pixel pitch of focal plane arrays (FPA). In order to evaluate the performance of the overall optical systems, it is necessary to measure the modulation transfer function (MTF) of these pixels. However, small pixels lead to higher cut-off frequencies and therefore original MTF measurements, able to extract frequencies up to these high cut-off frequencies, are needed. In this paper, we will present a way to extract the 1D MTF at high frequencies by projecting fringes on the FPA. The device uses a Lloyd mirror placed near and perpendicular to the focal plane array. Consequently, an interference pattern of fringes can be projected on the detector. By varying the angle of incidence of the light beam, we can tune the period of the interference fringes and, thus, explore a wide range of spatial frequencies, mainly around the cut-off frequency of the pixel, which is one of the most interesting areas. An illustration of this method will be applied to a 640×480 microbolometer focal plane array with a pixel pitch of 17 µm in the LWIR spectral region.

  13. High density pixel array and laser micro-milling method for fabricating array

    NASA Technical Reports Server (NTRS)

    McFall, James Earl (Inventor); Wiener-Avnear, Eliezer (Inventor)

    2003-01-01

    A pixel array device is fabricated by a laser micro-milling method under strict process control conditions. The device has an array of pixels bonded together with an adhesive filling the grooves between adjacent pixels. The array is fabricated by moving a substrate relative to a laser beam of predetermined intensity at a controlled, constant velocity along a predetermined path defining a set of grooves between adjacent pixels so that a predetermined laser flux per unit area is applied to the material, and repeating the movement for a plurality of passes of the laser beam until the grooves are ablated to a desired depth. The substrate is of an ultrasonic transducer material in one example for fabrication of a 2D ultrasonic phase array transducer. A substrate of phosphor material is used to fabricate an X-ray focal plane array detector.

  14. FDTD-based optical simulations methodology for CMOS image sensors pixels architecture and process optimization

    NASA Astrophysics Data System (ADS)

    Hirigoyen, Flavien; Crocherie, Axel; Vaillant, Jérôme M.; Cazaux, Yvon

    2008-02-01

    This paper presents a new FDTD-based optical simulation model dedicated to describing the optical performance of CMOS image sensors while taking into account diffraction effects. Following the market trend and industrialization constraints, CMOS image sensors must be easily embedded into ever smaller packages, which are now equipped with auto-focus and will soon include zoom systems. Due to miniaturization, the ray-tracing models used to evaluate pixel optical performance are no longer accurate enough to describe the light propagation inside the sensor, because of diffraction effects. Thus we adopt a more fundamental description to take these diffraction effects into account: we chose to use Maxwell-Boltzmann based modeling to compute the propagation of light, and to use software with an FDTD-based (Finite Difference Time Domain) engine to solve this propagation. We present in this article the complete methodology of this modeling: on one hand, incoherent plane waves are propagated to approximate a product-use diffuse-like source; on the other hand, we use periodic conditions to limit the size of the simulated model and both memory and computation time. After having presented the correlation of the model with measurements, we illustrate its use in the case of the optimization of a 1.75 μm pixel.
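
    The FDTD engine used in such work is typically commercial software, but the update scheme these engines rely on can be illustrated with a minimal one-dimensional leap-frog loop; the grid size, time-step count, Courant number, and source below are arbitrary choices for the sketch.

```python
import numpy as np

# Minimal 1D free-space FDTD (Yee) update in normalised units, illustrating the
# leap-frog E/H scheme that full 3D pixel simulations build on.
nz, nt = 400, 1000
ez = np.zeros(nz)
hy = np.zeros(nz)

for n in range(nt):
    # Update the magnetic field from the curl of E (staggered half a step behind E).
    hy[:-1] += 0.5 * (ez[1:] - ez[:-1])
    # Update the electric field from the curl of H.
    ez[1:] += 0.5 * (hy[1:] - hy[:-1])
    # Soft Gaussian-pulse source injected at one grid point.
    ez[50] += np.exp(-((n - 60) / 20.0) ** 2)
```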

  15. Scalable wide-field optical coherence tomography-based angiography for in vivo imaging applications

    PubMed Central

    Xu, Jingjiang; Wei, Wei; Song, Shaozhen; Qi, Xiaoli; Wang, Ruikang K.

    2016-01-01

    Recent advances in optical coherence tomography (OCT)-based angiography have demonstrated a variety of biomedical applications in the diagnosis and therapeutic monitoring of diseases with vascular involvement. While promising, its imaging field of view (FOV) is still limited (typically less than 9 mm²), which somewhat slows its clinical acceptance. In this paper, we report a high-speed spectral-domain OCT operating at 1310 nm to enable a wide FOV up to 750 mm². Using the optical microangiography (OMAG) algorithm, we are able to map vascular networks within living biological tissues. Thanks to a 2,048-pixel line-scan InGaAs camera operating at a 147 kHz scan rate, the system delivers a ranging depth of ~7.5 mm and provides wide-field OCT-based angiography in a single data acquisition. We implement two imaging modes (i.e., wide-field mode and high-resolution mode) in the OCT system, which gives a highly scalable FOV with flexible lateral resolution. We demonstrate scalable wide-field vascular imaging for multiple finger nail beds in humans and whole brain in mice with the skull left intact in a single 3D scan, promising new opportunities for wide-field OCT-based angiography in many clinical applications. PMID:27231630

  16. Thallium Bromide as an Alternative Material for Room-Temperature Gamma-Ray Spectroscopy and Imaging

    NASA Astrophysics Data System (ADS)

    Koehler, William

    Thallium bromide is an attractive material for room-temperature gamma-ray spectroscopy and imaging because of its high atomic number (Tl: 81, Br: 35), high density (7.56 g/cm3), and a wide bandgap (2.68 eV). In this work, 5 mm thick TlBr detectors achieved 0.94% FWHM at 662 keV for all single-pixel events and 0.72% FWHM at 662 keV from the best pixel and depth using three-dimensional position sensing technology. However, these results were limited to stable operation at -20°C. After days to months of room-temperature operation, ionic conduction caused these devices to fail. Depth-dependent signal analysis was used to isolate room-temperature degradation effects to within 0.5 mm of the anode surface. This was verified by refabricating the detectors after complete failure at room temperature; after refabrication, similar performance and functionality was recovered. As part of this work, the improvement in electron drift velocity and energy resolution during conditioning at -20°C was quantified. A new method was developed to measure the impurity concentration without changing the gamma ray measurement setup. The new method was used to show that detector conditioning was likely the result of charged impurities drifting out of the active volume. This space charge reduction then caused a more stable and uniform electric field. Additionally, new algorithms were developed to remove hole contributions in high-hole-mobility detectors to improve depth reconstruction. These algorithms improved the depth reconstruction (accuracy) without degrading the depth uncertainty (precision). Finally, spectroscopic and imaging performance of new 11 x 11 pixelated-anode TlBr detectors was characterized. The larger detectors were used to show that energy resolution can be improved by identifying photopeak events from their Tl characteristic x-rays.

  17. Optical and Electric Multifunctional CMOS Image Sensors for On-Chip Biosensing Applications

    PubMed Central

    Tokuda, Takashi; Noda, Toshihiko; Sasagawa, Kiyotaka; Ohta, Jun

    2010-01-01

    In this review, the concept, design, performance, and a functional demonstration of multifunctional complementary metal-oxide-semiconductor (CMOS) image sensors dedicated to on-chip biosensing applications are described. We developed a sensor architecture that allows flexible configuration of a sensing pixel array consisting of optical and electric sensing pixels, and designed multifunctional CMOS image sensors that can sense light intensity and electric potential or apply a voltage to an on-chip measurement target. We describe the sensors’ architecture on the basis of the type of electric measurement or imaging functionalities. PMID:28879978

  18. Longitudinal analysis on human cervical tissue using optical coherence tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Gan, Yu; Yao, Wang; Myers, Kristin M.; Vink, Joy-Sarah Y.; Wapner, Ronald J.; Hendon, Christine P.

    2017-02-01

    The uterine cervical collagen fiber network is vital to normal cervical function in pregnancy. Previously, we presented an orientation estimation method to enable dispersion analysis on a single axial slice of human cervical tissue obtained from the upper half of the cervix using optical coherence tomography (OCT). How the collagen fiber network structure changes from the internal os (the top of the cervix, which meets the uterus) to the external os (the bottom of the cervix, which extends into the vagina) remains unknown due to depth penetration limitations of OCT. To establish a collagen fiber directionality "map" of the entire cervix, we imaged serial axial slices of human NP (n=11) and PG (n=2) cervical tissue obtained from the internal to external os using Institutional Review Board approved protocols at Columbia University Medical Center. Each slice was divided into four quadrants. In each quadrant, we stitched multiple overlapping OCT volumes and analyzed the en face images that were parallel to the surface. A pixel-wise directionality map was generated. We analyzed fiber trend by measuring the mean angles and quantified dispersion by calculating the standard deviation of the fiber direction over a region of 400 μm × 400 μm. For the initial four samples, our analysis confirms a circumferential fiber pattern in the outer region of slices at all depths. We found that the standard deviation close to the internal os showed no significant difference from the standard deviation close to the external os (p > 0.05), indicating comparable dispersion.

  19. Improving depth estimation from a plenoptic camera by patterned illumination

    NASA Astrophysics Data System (ADS)

    Marshall, Richard J.; Meah, Chris J.; Turola, Massimo; Claridge, Ela; Robinson, Alex; Bongs, Kai; Gruppetta, Steve; Styles, Iain B.

    2015-05-01

    Plenoptic (light-field) imaging is a technique that allows a simple CCD-based imaging device to acquire both spatially and angularly resolved information about the "light-field" from a scene. It requires a microlens array to be placed between the objective lens and the sensor of the imaging device [1], and the images under each microlens (which typically span many pixels) can be computationally post-processed to shift perspective, digitally refocus, extend the depth of field, manipulate the aperture synthetically and generate a depth map from a single image. Some of these capabilities are rigid functions that do not depend upon the scene and work by manipulating and combining a well-defined set of pixels in the raw image. However, depth mapping requires specific features in the scene to be identified and registered between consecutive microimages. This process requires that the image has sufficient features for the registration, and in the absence of such features the algorithms become less reliable and incorrect depths are generated. The aim of this study is to investigate the generation of depth maps from light-field images of scenes with insufficient features for accurate registration, using projected patterns to impose a texture on the scene that provides sufficient landmarks for the registration methods.

  20. Attenuating Stereo Pixel-Locking via Affine Window Adaptation

    NASA Technical Reports Server (NTRS)

    Stein, Andrew N.; Huertas, Andres; Matthies, Larry H.

    2006-01-01

    For real-time stereo vision systems, the standard method for estimating sub-pixel stereo disparity given an initial integer disparity map involves fitting parabolas to a matching cost function aggregated over rectangular windows. This results in a phenomenon known as 'pixel-locking,' which produces artificially-peaked histograms of sub-pixel disparity. These peaks correspond to the introduction of erroneous ripples or waves in the 3D reconstruction of truly flat surfaces. Since stereo vision is a common input modality for autonomous vehicles, these inaccuracies can pose a problem for safe, reliable navigation. This paper proposes a new method for sub-pixel stereo disparity estimation, based on ideas from Lucas-Kanade tracking and optical flow, which substantially reduces the pixel-locking effect. In addition, it has the ability to correct much larger initial disparity errors than previous approaches and is more general, as it applies not only to the ground plane.
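
    The conventional parabola-fit refinement that the paper identifies as the source of pixel-locking is small enough to show directly; the cost-curve values below are hypothetical sum-of-absolute-differences costs, not data from the paper.

```python
import numpy as np

def subpixel_disparity(cost, d):
    """Refine an integer disparity d by fitting a parabola through the matching
    cost at d-1, d, d+1 (the conventional approach the paper seeks to improve)."""
    c_m, c_0, c_p = cost[d - 1], cost[d], cost[d + 1]
    denom = c_m - 2.0 * c_0 + c_p
    if denom == 0.0:
        return float(d)
    return d + 0.5 * (c_m - c_p) / denom

cost_curve = np.array([9.0, 5.5, 3.2, 4.1, 8.0])   # hypothetical costs versus disparity
print(subpixel_disparity(cost_curve, d=2))          # refined disparity near 2.22
```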

  1. Full complex spatial filtering with a phase mostly DMD. [Deformable Mirror Device

    NASA Technical Reports Server (NTRS)

    Florence, James M.; Juday, Richard D.

    1991-01-01

    A new technique for implementing fully complex spatial filters with a phase mostly deformable mirror device (DMD) light modulator is described. The technique combines two or more phase-modulating flexure-beam mirror elements into a single macro-pixel. By manipulating the relative phases of the individual sub-pixels within the macro-pixel, the amplitude and the phase can be independently set for this filtering element. The combination of DMD sub-pixels into a macro-pixel is accomplished by adjusting the optical system resolution, thereby trading off system space bandwidth product for increased filtering flexibility. Volume in the larger dimensioned space, space bandwidth-complex axes count, is conserved. Experimental results are presented mapping out the coupled amplitude and phase characteristics of the individual flexure-beam DMD elements and demonstrating the independent control of amplitude and phase in a combined macro-pixel. This technique is generally applicable for implementation with any type of phase modulating light modulator.
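
    The two-sub-pixel case admits a simple closed form: averaging two unit-amplitude phasors e^{i(psi+theta)} and e^{i(psi-theta)} gives A e^{i psi} with A = cos(theta), so any amplitude between 0 and 1 can be reached at any phase. The short numerical check below illustrates this relation only; it is not the authors' DMD drive scheme.

```python
import numpy as np

def subpixel_phases(target_amplitude, target_phase):
    """Phases for two unit-amplitude sub-pixels whose average equals the
    requested complex value (amplitude restricted to [0, 1])."""
    theta = np.arccos(np.clip(target_amplitude, 0.0, 1.0))
    return target_phase + theta, target_phase - theta

phi1, phi2 = subpixel_phases(0.6, np.pi / 3)
combined = 0.5 * (np.exp(1j * phi1) + np.exp(1j * phi2))
print(abs(combined), np.angle(combined))    # ~0.6 and ~pi/3
```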

  2. A framework for quantifying the impacts of sub-pixel reflectance variance and covariance on cloud optical thickness and effective radius retrievals based on the bi-spectral method

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Werner, F.; Cho, H.-M.; Wind, G.; Platnick, S.; Ackerman, A. S.; Di Girolamo, L.; Marshak, A.; Meyer, Kerry

    2017-02-01

    The so-called bi-spectral method retrieves cloud optical thickness (τ) and cloud droplet effective radius (re) simultaneously from a pair of cloud reflectance observations, one in a visible or near infrared (VIS/NIR) band and the other in a shortwave-infrared (SWIR) band. A cloudy pixel is usually assumed to be horizontally homogeneous in the retrieval. Ignoring sub-pixel variations of cloud reflectances can lead to a significant bias in the retrieved τ and re. In this study, we use the Taylor expansion of a two-variable function to understand and quantify the impacts of sub-pixel variances of VIS/NIR and SWIR cloud reflectances and their covariance on the τ and re retrievals. This framework takes into account the fact that the retrievals are determined by both VIS/NIR and SWIR band observations in a mutually dependent way. In comparison with previous studies, it provides a more comprehensive understanding of how sub-pixel cloud reflectance variations impact the τ and re retrievals based on the bi-spectral method. In particular, our framework provides a mathematical explanation of how the sub-pixel variation in VIS/NIR band influences the re retrieval and why it can sometimes outweigh the influence of variations in the SWIR band and dominate the error in re retrievals, leading to a potential contribution of positive bias to the re retrieval.
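
    In schematic form, the second-order expansion underlying this framework writes the expected retrieval over sub-pixel fluctuations in terms of the two reflectance variances and their covariance; the notation below is generic (R_v and R_s for the VIS/NIR and SWIR reflectances, r for either retrieved quantity) rather than the paper's exact symbols.

```latex
% Second-order Taylor expansion of a retrieval r(R_v, R_s) about the
% pixel-mean reflectances, averaged over sub-pixel variability:
\overline{r} \;\approx\; r(\bar{R}_v,\bar{R}_s)
  \;+\; \tfrac{1}{2}\,\frac{\partial^2 r}{\partial R_v^2}\,\mathrm{Var}(R_v)
  \;+\; \tfrac{1}{2}\,\frac{\partial^2 r}{\partial R_s^2}\,\mathrm{Var}(R_s)
  \;+\; \frac{\partial^2 r}{\partial R_v\,\partial R_s}\,\mathrm{Cov}(R_v,R_s)
```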

  3. A Framework for Quantifying the Impacts of Sub-Pixel Reflectance Variance and Covariance on Cloud Optical Thickness and Effective Radius Retrievals Based on the Bi-Spectral Method.

    NASA Technical Reports Server (NTRS)

    Zhang, Z; Werner, F.; Cho, H. -M.; Wind, Galina; Platnick, S.; Ackerman, A. S.; Di Girolamo, L.; Marshak, A.; Meyer, Kerry

    2017-01-01

    The so-called bi-spectral method retrieves cloud optical thickness (t) and cloud droplet effective radius (re) simultaneously from a pair of cloud reflectance observations, one in a visible or near infrared (VIS/NIR) band and the other in a shortwave-infrared (SWIR) band. A cloudy pixel is usually assumed to be horizontally homogeneous in the retrieval. Ignoring sub-pixel variations of cloud reflectances can lead to a significant bias in the retrieved t and re. In this study, we use the Taylor expansion of a two-variable function to understand and quantify the impacts of sub-pixel variances of VIS/NIR and SWIR cloud reflectances and their covariance on the t and re retrievals. This framework takes into account the fact that the retrievals are determined by both VIS/NIR and SWIR band observations in a mutually dependent way. In comparison with previous studies, it provides a more comprehensive understanding of how sub-pixel cloud reflectance variations impact the t and re retrievals based on the bi-spectral method. In particular, our framework provides a mathematical explanation of how the sub-pixel variation in VIS/NIR band influences the re retrieval and why it can sometimes outweigh the influence of variations in the SWIR band and dominate the error in re retrievals, leading to a potential contribution of positive bias to the re retrieval.

  4. Teaching Fraunhofer diffraction via experimental and simulated images in the laboratory

    NASA Astrophysics Data System (ADS)

    Peinado, Alba; Vidal, Josep; Escalera, Juan Carlos; Lizana, Angel; Campos, Juan; Yzuel, Maria

    2012-10-01

    Diffraction is an important phenomenon introduced to Physics university students in a subject of Fundamentals of Optics. In addition, in the Physics Degree syllabus of the Universitat Autònoma de Barcelona, there is an elective subject in Applied Optics. In this subject, diverse diffraction concepts are discussed in depth from different points of view: theory, experiments in the laboratory and computing exercises. In this work, we have focused on the process of teaching Fraunhofer diffraction through laboratory training. Our approach involves students working in small groups. They visualize and acquire some important diffraction patterns with a CCD camera, such as those produced by a slit, a circular aperture or a grating. First, each group calibrates the CCD camera, that is to say, they obtain the relation between distances in the diffraction plane in millimeters and on the computer screen in pixels. Afterwards, they measure the significant distances in the diffraction patterns and, using the appropriate diffraction formalism, they calculate the size of the analyzed apertures. Concomitantly, students grasp the convolution theorem in the Fourier domain by analyzing the diffraction of 2-D gratings of elemental apertures. Finally, the learners use specific software to simulate diffraction patterns of different apertures. They can control several parameters: shape, size and number of apertures, 1-D or 2-D gratings, wavelength, lens focal length or pixel size. Therefore, the program allows them to reproduce the images obtained experimentally, and to generate others by changing certain parameters. This software has been created in our research group, and it is freely distributed to the students in order to help their learning of diffraction. We have observed that these hands-on experiments help students to consolidate their theoretical knowledge of diffraction in a pedagogical and stimulating learning process.
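
    The simulation component of the exercise can be reproduced in a few lines, since the Fraunhofer pattern of an aperture is proportional to the squared modulus of its Fourier transform; the grid size and slit dimensions below are illustrative, and the wavelength and focal-length scaling of the pattern are omitted.

```python
import numpy as np

# Far-field (Fraunhofer) diffraction pattern of a rectangular slit, computed as
# the squared modulus of the aperture's 2D Fourier transform.
n = 512
aperture = np.zeros((n, n))
aperture[n // 2 - 5:n // 2 + 5, n // 2 - 40:n // 2 + 40] = 1.0   # slit, in grid units

field = np.fft.fftshift(np.fft.fft2(aperture))
intensity = np.abs(field) ** 2
intensity /= intensity.max()        # normalised pattern, ready to compare with a CCD image
```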

  5. Imaging quality evaluation method of pixel coupled electro-optical imaging system

    NASA Astrophysics Data System (ADS)

    He, Xu; Yuan, Li; Jin, Chunqi; Zhang, Xiaohui

    2017-09-01

    With advancements in high-resolution imaging optical fiber bundle fabrication technology, traditional photoelectric imaging systems have become "flexible", with greatly reduced volume and weight. However, traditional image quality evaluation models are limited by the coupled discrete sampling effect of fiber-optic image bundles and charge-coupled device (CCD) pixels. This limitation substantially complicates the design, optimization, assembly, and image quality evaluation of the coupled discrete-sampling imaging system. Based on the transfer process of a grayscale cosine-distribution optical signal through the fiber-optic image bundle and CCD, a mathematical model of the coupled modulation transfer function (coupled-MTF) is established. This model can be used as a basis for subsequent studies on the convergence and periodically oscillating characteristics of the function. We also propose the concept of the average coupled-MTF, which is consistent with the definition of the traditional MTF. Based on this concept, the relationships among core distance, core layer radius, and average coupled-MTF are investigated.

  6. A 64-pixel NbTiN superconducting nanowire single-photon detector array for spatially resolved photon detection.

    PubMed

    Miki, Shigehito; Yamashita, Taro; Wang, Zhen; Terai, Hirotaka

    2014-04-07

    We present the characterization of two-dimensionally arranged 64-pixel NbTiN superconducting nanowire single-photon detector (SSPD) array for spatially resolved photon detection. NbTiN films deposited on thermally oxidized Si substrates enabled the high-yield production of high-quality SSPD pixels, and all 64 SSPD pixels showed uniform superconducting characteristics within the small range of 7.19-7.23 K of superconducting transition temperature and 15.8-17.8 μA of superconducting switching current. Furthermore, all of the pixels showed single-photon sensitivity, and 60 of the 64 pixels showed a pulse generation probability higher than 90% after photon absorption. As a result of light irradiation from the single-mode optical fiber at different distances between the fiber tip and the active area, the variations of system detection efficiency (SDE) in each pixel showed reasonable Gaussian distribution to represent the spatial distributions of photon flux intensity.

  7. Visible-regime polarimetric imager: a fully polarimetric, real-time imaging system.

    PubMed

    Barter, James D; Thompson, Harold R; Richardson, Christine L

    2003-03-20

    A fully polarimetric optical camera system has been constructed to obtain polarimetric information simultaneously from four synchronized charge-coupled device imagers at video frame rates of 60 Hz and a resolution of 640 x 480 pixels. The imagers view the same scene along the same optical axis by means of a four-way beam-splitting prism similar to ones used for multiple-imager, common-aperture color TV cameras. Appropriate polarizing filters in front of each imager provide the polarimetric information. Mueller matrix analysis of the polarimetric response of the prism, analyzing filters, and imagers is applied to the detected intensities in each imager as a function of the applied state of polarization over a wide range of linear and circular polarization combinations to obtain an average polarimetric calibration consistent to approximately 2%. Higher accuracies can be obtained by improvement of the polarimetric modeling of the splitting prism and by implementation of a pixel-by-pixel calibration.
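    A hedged sketch of the calibration idea described above: if each of the four imagers responds linearly to the input Stokes vector, the 4 × 4 instrument (analysis) matrix can be estimated by least squares from intensities recorded at known input polarization states, and its pseudo-inverse then demodulates measured intensities back into Stokes parameters. The assumed "true" matrix, the input-state set, and the noise level below are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Known input Stokes vectors used during calibration (an illustrative set):
# unpolarized, horizontal, vertical, +45 deg, -45 deg, right- and left-circular.
S_in = np.array([
    [1,  0,  0,  0],
    [1,  1,  0,  0],
    [1, -1,  0,  0],
    [1,  0,  1,  0],
    [1,  0, -1,  0],
    [1,  0,  0,  1],
    [1,  0,  0, -1],
], dtype=float)

# Assumed "true" instrument matrix (one row per imager) used only to simulate
# the calibration intensities; in a real calibration these come from the camera.
A_true = 0.25 * np.array([
    [1,  1,  0,  0],
    [1, -1,  0,  0],
    [1,  0,  1,  0],
    [1,  0,  0,  1],
], dtype=float)
rng = np.random.default_rng(0)
I_meas = S_in @ A_true.T + 0.002 * rng.standard_normal((len(S_in), 4))

# Least-squares estimate of the instrument matrix: I_k = A[k, :] @ S.
X, *_ = np.linalg.lstsq(S_in, I_meas, rcond=None)
A_est = X.T

# Demodulation: recover the Stokes vector of a scene pixel from the four
# intensities using the pseudo-inverse of the estimated instrument matrix.
demod = np.linalg.pinv(A_est)
I_pixel = A_true @ np.array([1.0, 0.3, -0.2, 0.1])
print("Recovered Stokes vector:", np.round(demod @ I_pixel, 3))
```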

  8. Model of the lines of sight for an off-axis optical instrument Pleiades

    NASA Astrophysics Data System (ADS)

    Sauvage, Dominique; Gaudin-Delrieu, Catherine; Tournier, Thierry

    2017-11-01

    Future Earth observation missions aim at delivering images with high resolution and a large field of view. These images have to be processed to achieve very accurate localisation. To that end, the individual line of sight of each photosensitive element must be evaluated according to the location of the pixels in the focal plane. But with an off-axis Korsch telescope (like PLEIADES), the classical model has to be adapted. This is possible by using optical ground measurements made after the integration of the instrument. The processing of these results leads to several parameters, which are functions of the offsets of the focal plane and of the real focal length. This study, carried out for the PLEIADES mission, leads to a more elaborate model which provides the relation between the lines of sight and the location of the pixels with very good accuracy, close to the pixel size.

  9. In Situ Time Constant and Optical Efficiency Measurements of TRUCE Pixels in the Atacama B-Mode Search

    NASA Astrophysics Data System (ADS)

    Simon, S. M.; Appel, J. W.; Cho, H. M.; Essinger-Hileman, T.; Irwin, K. D.; Kusaka, A.; Niemack, M. D.; Nolta, M. R.; Page, L. A.; Parker, L. P.; Raghunathan, S.; Sievers, J. L.; Staggs, S. T.; Visnjic, K.

    2014-09-01

    The Atacama B-mode Search (ABS) instrument, which began observation in February of 2012, is a crossed-Dragone telescope located at an elevation of 5,100 m in the Atacama Desert in Chile. The primary scientific goal of ABS is to measure the B-mode polarization spectrum of the Cosmic Microwave Background from multipole moments of about 50 to 500 (angular scales from to ), a range that includes the primordial B-mode peak from inflationary gravitational waves. The ABS focal plane array consists of 240 pixels designed for observation at 145 GHz by the TRUCE collaboration. Each pixel has its own individual, single-moded feedhorn and contains two transition-edge sensor bolometers coupled to orthogonal polarizations that are read out using time domain multiplexing. We will report on the current status of ABS and discuss the time constants and optical efficiencies of the TRUCE detectors in the field.

  10. Automated Inspection of Defects in Optical Fiber Connector End Face Using Novel Morphology Approaches.

    PubMed

    Mei, Shuang; Wang, Yudan; Wen, Guojun; Hu, Yang

    2018-05-03

    Increasing deployment of optical fiber networks and the need for reliable high bandwidth make the task of inspecting optical fiber connector end faces a crucial process that must not be neglected. Traditional end face inspections are usually performed by manual visual methods, which are low in efficiency and poor in precision for long-term industrial applications. More seriously, the inspection results cannot be quantified for subsequent analysis. Aiming at the characteristics of typical defects encountered when inspecting optical fiber end faces, we propose a novel method, “difference of min-max ranking filtering” (DO2MR), for the detection of region-based defects, e.g., dirt, oil, contamination, pits, and chips, and a special model, a “linear enhancement inspector” (LEI), for the detection of scratches. DO2MR is a morphological method that determines whether a pixel belongs to a defective region by comparing the gray values of the pixels in its neighborhood. LEI is also a morphological method, designed to search for scratches at different orientations with a special linear detector. These two approaches can be easily integrated into optical inspection equipment for automatic quality verification. As far as we know, this is the first time that complete defect detection methods for optical fiber end faces are available in the literature. Experimental results demonstrate that the proposed DO2MR and LEI models yield good comprehensive performance with high precision and acceptable recall rates, and the image-level detection accuracies reach 96.0 and 89.3%, respectively.
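    The sketch below is a simplified re-implementation of the region-defect idea as described in the abstract, not the authors' exact DO2MR algorithm: after light denoising, the difference between a local maximum (ranking) filter and a local minimum filter measures the gray-level spread around each pixel, and a sigma-rule threshold flags pixels with an unusually large spread as defect candidates. The window size and threshold factor are arbitrary choices for the example.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter, median_filter

def do2mr_like_mask(image, window=5, gamma=3.0):
    """Flag region-based defect candidates from the local max-min gray-level spread."""
    img = median_filter(image.astype(float), size=3)          # light denoising
    residual = maximum_filter(img, window) - minimum_filter(img, window)
    threshold = residual.mean() + gamma * residual.std()      # sigma rule
    return residual > threshold

# Tiny synthetic end-face image: flat background with a dark "dirt" blob.
rng = np.random.default_rng(1)
img = 180.0 + 2.0 * rng.standard_normal((64, 64))
img[30:36, 40:46] -= 60.0
mask = do2mr_like_mask(img)
print("Defect pixels flagged:", int(mask.sum()))
```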

  11. Design, optimization and evaluation of a "smart" pixel sensor array for low-dose digital radiography

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Liu, Xinghui; Ou, Hai; Chen, Jun

    2016-04-01

    Amorphous silicon (a-Si:H) thin-film transistors (TFTs) have been widely used to build flat-panel X-ray detectors for digital radiography (DR). As the demand for low-dose X-ray imaging grows, detectors with a high signal-to-noise-ratio (SNR) pixel architecture are emerging. The "smart" pixel uses a dual-gate photosensitive TFT for sensing, storage, and switching. It differs from the conventional passive pixel sensor (PPS) and active pixel sensor (APS) in that all three functions are combined into one device instead of three separate units per pixel. Thus, it is expected to have a high fill factor and high spatial resolution. In addition, it utilizes the amplification effect of the dual-gate photosensitive TFT to form a one-transistor APS that leads to a potentially high SNR. This paper addresses the design, optimization and evaluation of the smart pixel sensor and array for low-dose DR. We design and optimize the smart pixel from the scintillator to the TFT level and validate it through optical and electrical simulations and experiments on a 4 × 4 sensor array.

  12. Smart image sensors: an emerging key technology for advanced optical measurement and microsystems

    NASA Astrophysics Data System (ADS)

    Seitz, Peter

    1996-08-01

    Optical microsystems typically include photosensitive devices, analog preprocessing circuitry and digital signal processing electronics. The advances in semiconductor technology have made it possible today to integrate all photosensitive and electronic devices on one 'smart image sensor' or photo-ASIC (application-specific integrated circuit containing photosensitive elements). It is even possible to provide each 'smart pixel' with additional photoelectronic functionality, without compromising the fill factor substantially. This technological capability is the basis for advanced cameras and optical microsystems showing novel on-chip functionality: single-chip cameras with on-chip analog-to-digital converters for less than $10 are advertised; image sensors have been developed including novel functionality such as real-time selectable pixel size and shape, the capability of performing arbitrary convolutions simultaneously with the exposure, as well as variable, programmable offset and sensitivity of the pixels, leading to image sensors with a dynamic range exceeding 150 dB. Smart image sensors have been demonstrated offering synchronous detection and demodulation capabilities in each pixel (lock-in CCD), and conventional image sensors are combined with an on-chip digital processor for complete, single-chip image acquisition and processing systems. Technological problems of the monolithic integration of smart image sensors include offset non-uniformities, temperature variations of electronic properties, imperfect matching of circuit parameters, etc. These problems can often be overcome either by designing additional compensation circuitry or by providing digital correction routines. Where necessary for technological or economic reasons, smart image sensors can also be combined with or realized as hybrids, making use of commercially available electronic components. It is concluded that the possibilities offered by custom smart image sensors will influence the design and the performance of future electronic imaging systems in many disciplines, ranging from optical metrology to machine vision on the factory floor and in robotics applications.

  13. Angle- and polarization-insensitive, small area, subtractive color filters via a-Si nanopillar arrays (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Fountaine, Katherine T.; Ito, Mikinori; Pala, Ragip; Atwater, Harry A.

    2016-09-01

    Spectrally-selective nanophotonic and plasmonic structures enjoy widespread interest for application as color filters in imaging devices, due to their potential advantages over traditional organic dyes and pigments. Organic dyes are straightforward to implement with predictable optical performance at large pixel size, but suffer from inherent optical cross-talk and stability (UV, thermal, humidity) issues and also exhibit increasingly unpredictable performance as the pixel size approaches the dye molecule size. Nanophotonic and plasmonic color filters are more robust, but often have polarization- and angle-dependent optical responses and/or require long-range periodicity. Herein, we report on the design and fabrication of polarization- and angle-insensitive CYM color filters based on a-Si nanopillar arrays as small as 1 μm², supported by experiment, simulation, and analytic theory. Analytic waveguide and Mie theories explain the color filtering mechanism (efficient coupling into, and interband-transition-mediated attenuation of, waveguide-like modes) and also guided the FDTD simulation-based optimization of the nanopillar array dimensions. The designed a-Si nanopillar arrays were fabricated using e-beam lithography and reactive ion etching, and were subsequently optically characterized, revealing the predicted polarization- and angle-insensitive (±40°) subtractive filter responses. Cyan, yellow, and magenta color filters have each been demonstrated. The effects of nanopillar array size and inter-array spacing were investigated both experimentally and theoretically to probe the issues of ever-shrinking pixel sizes and cross-talk, respectively. Results demonstrate that these nanopillar arrays maintain their performance down to 1 μm² pixel sizes with no inter-array spacing. These concepts and results, along with color-processed images taken with a fabricated color filter array, will be presented and discussed.

  14. Hardware-in-the-loop projector system for light detection and ranging sensor testing

    NASA Astrophysics Data System (ADS)

    Kim, Hajin J.; Naumann, Charles B.; Cornell, Michael C.

    2012-08-01

    Efforts in developing a synthetic environment for testing light detection and ranging (LADAR) sensors in a hardware-in-the-loop simulation are continuing at the Aviation and Missile Research, Engineering, and Development Center of the U.S. Army Research, Engineering and Development Command (RDECOM). Current activities have concentrated on evaluating the optical projection techniques for the LADAR synthetic environment. Schemes for generating the optical signals representing the individual pixels of the projection are of particular interest. Several approaches have been investigated and tested with emphasis on operating wavelength, intensity dynamic range and uniformity, and flexibility in pixel waveform generation. This paper will discuss some of the results from these current efforts at RDECOM's System Simulation and Development Directorate's Electro Optical Technology Development Laboratory.

  15. High frame-rate computational ghost imaging system using an optical fiber phased array and a low-pixel APD array.

    PubMed

    Liu, Chunbo; Chen, Jingqiu; Liu, Jiaxin; Han, Xiang'e

    2018-04-16

    To obtain a high imaging frame rate, a computational ghost imaging scheme is proposed based on an optical fiber phased array (OFPA). Through high-speed electro-optic modulators, the randomly modulated OFPA can provide much faster speckle projection, and the projected speckles can be precomputed from the geometry of the fiber array and the known modulation phases. Receiving the signal light with a low-pixel APD array effectively reduces the required number of samples and the computational complexity, owing to the reduced data dimensionality, while avoiding image aliasing due to the spatial periodicity of the speckles. The results of analysis and simulation show that the frame rate of the proposed imaging system can be significantly improved compared with traditional systems.
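    As a minimal sketch of the reconstruction step common to computational ghost imaging (not the specific OFPA/APD-array processing of the paper), the example below correlates a set of known illumination patterns with the corresponding single-detector ("bucket") measurements; the object emerges from the correlation of pattern fluctuations with signal fluctuations. The scene, pattern statistics, and sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
H, W, M = 32, 32, 6000                   # image size and number of speckle patterns

# Ground-truth object (a bright rectangle), used only to simulate measurements.
obj = np.zeros((H, W))
obj[10:22, 8:20] = 1.0

# Known illumination patterns (precomputed in the OFPA scheme; random here).
patterns = rng.random((M, H, W))

# Bucket detector signal: total light collected for each projected pattern.
bucket = patterns.reshape(M, -1) @ obj.ravel()

# Conventional ghost-imaging estimate: correlate pattern fluctuations with
# bucket-signal fluctuations.
recon = np.tensordot(bucket - bucket.mean(),
                     patterns - patterns.mean(axis=0), axes=1) / M
recon = (recon - recon.min()) / (np.ptp(recon) + 1e-12)
print("Mean reconstruction inside object region exceeds overall mean:",
      bool(recon[10:22, 8:20].mean() > recon.mean()))
```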

  16. Pixel switching of epitaxial Pd/YHx/CaF2 switchable mirrors

    PubMed

    Kerssemakers; van der Molen SJ; Koeman; Gunther; Griessen

    2000-08-03

    Exposure of rare-earth films to hydrogen can induce a metal-insulator transition, accompanied by pronounced optical changes. This 'switchable mirror' effect has received considerable attention from theoretical, experimental and technological points of view. Most systems use polycrystalline films, but the synthesis of yttrium-based epitaxial switchable mirrors has also been reported. The latter form an extended self-organized ridge network during initial hydrogen loading, which results in the creation of micrometre-sized triangular domains. Here we observe homogeneous and essentially independent optical switching of individual domains in epitaxial switchable mirrors during hydrogen absorption. The optical switching is accompanied by topographical changes as the domains sequentially expand and contract; the ridges block lateral hydrogen diffusion and serve as a microscopic lubricant for the domain oscillations. We observe the correlated changes in topology and optical properties using in situ atomic force and optical microscopy. Single-domain phase switching is not observed in polycrystalline films, which are optically homogeneous. The ability to generate a tunable, dense pattern of switchable pixels is of technological relevance for solid-state displays based on switchable mirrors.

  17. Study on polarized optical flow algorithm for imaging bionic polarization navigation micro sensor

    NASA Astrophysics Data System (ADS)

    Guan, Le; Liu, Sheng; Li, Shi-qi; Lin, Wei; Zhai, Li-yuan; Chu, Jin-kui

    2018-05-01

    At present, both point-source and imaging polarization navigation devices can output only angle information, which means that the velocity of the carrier cannot be extracted directly from the polarization field pattern. Optical flow is an image-based method for calculating the velocity of pixel movement in an image. However, for ordinary optical flow, the difference in pixel values, and hence the calculation accuracy, is reduced in weak light. Polarization imaging technology can improve both the detection accuracy and the recognition probability of a target because it acquires additional multi-dimensional polarization information about the target's radiation or reflection. In this paper, combining the polarization imaging technique with the traditional optical flow algorithm, a polarized optical flow algorithm is proposed, and it is verified that this algorithm adapts well to weak light and can extend the application range of polarization navigation sensors. This research lays the foundation for all-weather, day-and-night polarization navigation applications in the future.
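    The sketch below shows only the polarization-imaging half of such a scheme, under the common assumption of four frames acquired behind linear polarizers at 0°, 45°, 90° and 135°: it forms the linear Stokes parameters and the derived degree and angle of linear polarization (DoLP, AoLP) maps, on which an optical flow algorithm could then be run. The layout and test values are illustrative, not taken from the paper.

```python
import numpy as np

def stokes_from_polarizers(i0, i45, i90, i135):
    """Linear Stokes parameters and DoLP/AoLP maps from four polarizer frames."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-9)
    aolp = 0.5 * np.arctan2(s2, s1)
    return s0, dolp, aolp

# Tiny synthetic example: uniform scene, 40% polarized at 20 degrees.
theta, p, s0_true = np.deg2rad(20.0), 0.4, 100.0
angles = np.deg2rad([0.0, 45.0, 90.0, 135.0])
frames = [0.5 * s0_true * (1 + p * np.cos(2 * (theta - a))) * np.ones((4, 4))
          for a in angles]
s0, dolp, aolp = stokes_from_polarizers(*frames)
print(f"DoLP ~ {dolp.mean():.2f}, AoLP ~ {np.rad2deg(aolp.mean()):.1f} deg")
```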

  18. Smoke optical depths - Magnitude, variability, and wavelength dependence

    NASA Technical Reports Server (NTRS)

    Pueschel, R. F.; Russell, P. B.; Colburn, D. A.; Ackerman, T. P.; Allen, D. A.

    1988-01-01

    An airborne autotracking sun-photometer has been used to measure the magnitudes, temporal/spatial variabilities, and wavelength dependence of optical depths in the near-ultraviolet to near-infrared spectrum of smoke from two forest fires and one jet fuel fire and of background air. Jet fuel smoke optical depths were found to be generally less wavelength dependent than background aerosol optical depths. Forest fire smoke optical depths, however, showed a wide range of wavelength dependences, including incidents of wavelength-independent extinction.

  19. Nonuniformity correction algorithm with efficient pixel offset estimation for infrared focal plane arrays.

    PubMed

    Orżanowski, Tomasz

    2016-01-01

    This paper presents an infrared focal plane array (IRFPA) response nonuniformity correction (NUC) algorithm which is easy to implement in hardware. The proposed NUC algorithm is based on a linear correction scheme with a practical method for updating the pixel offset correction coefficients. The new approach to IRFPA response nonuniformity correction uses the change in pixel response, determined with a shutter at the actual operating conditions relative to the reference conditions, to compensate for temporal drift of the pixel offsets. Moreover, it also removes any optics shading effect from the output image. To demonstrate the efficiency of the proposed NUC algorithm, test results for a microbolometer IRFPA are presented.
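    A minimal sketch of this kind of linear NUC, assuming per-pixel gains from a two-point reference calibration and per-pixel offsets refreshed from a closed-shutter frame taken at the current operating conditions (the shutter-based offset update is the part the abstract emphasizes; the exact update rule of the paper is not reproduced). The synthetic gains, offsets and signal levels are placeholders.

```python
import numpy as np

def build_gain(ref_low, ref_high):
    """Per-pixel gain from a two-point reference calibration (lab conditions)."""
    target_span = ref_high.mean() - ref_low.mean()
    return target_span / (ref_high - ref_low)

def update_offset(gain, shutter_frame):
    """Refresh per-pixel offsets from a closed-shutter (uniform) frame acquired
    at the actual operating conditions, compensating temporal offset drift."""
    corrected = gain * shutter_frame
    return corrected.mean() - corrected

def apply_nuc(raw, gain, offset):
    return gain * raw + offset

# Synthetic 4x4 IRFPA with fixed-pattern gain and offset nonuniformity.
rng = np.random.default_rng(3)
true_gain = 1.0 + 0.05 * rng.standard_normal((4, 4))
true_offset = 20.0 * rng.standard_normal((4, 4))
scene = lambda level: true_gain * level + true_offset    # detector response model

gain = build_gain(scene(1000.0), scene(3000.0))          # reference calibration
offset = update_offset(gain, scene(500.0))               # shutter at operating point
print(np.round(apply_nuc(scene(2000.0), gain, offset), 2))  # ~uniform output
```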

  20. Crosstalk quantification, analysis, and trends in CMOS image sensors.

    PubMed

    Blockstein, Lior; Yadid-Pecht, Orly

    2010-08-20

    Pixel crosstalk (CTK) consists of three components, optical CTK (OCTK), electrical CTK (ECTK), and spectral CTK (SCTK). The CTK has been classified into two groups: pixel-architecture dependent and pixel-architecture independent. The pixel-architecture-dependent CTK (PADC) consists of the sum of two CTK components, i.e., the OCTK and the ECTK. This work presents a short summary of a large variety of methods for PADC reduction. Following that, this work suggests a clear quantifiable definition of PADC. Three complementary metal-oxide-semiconductor (CMOS) image sensors based on different technologies were empirically measured, using a unique scanning technology, the S-cube. The PADC is analyzed, and technology trends are shown.

  1. Preflight calibration of the Imaging Magnetograph eXperiment polarization modulation package based on liquid-crystal variable retarders.

    PubMed

    Uribe-Patarroyo, Néstor; Alvarez-Herrero, Alberto; Martínez Pillet, Valentín

    2012-07-20

    We present the study, characterization, and calibration of the polarization modulation package (PMP) of the Imaging Magnetograph eXperiment (IMaX) instrument, a successful Stokes spectropolarimeter on board the SUNRISE balloon project within the NASA Long Duration Balloon program. IMaX was designed to measure the Stokes parameters of incoming light with a signal-to-noise ratio of at least 10³, using as polarization modulators two nematic liquid-crystal variable retarders (LCVRs). An ad hoc calibration system that reproduced the optical and environmental characteristics of IMaX was designed, assembled, and aligned. The system recreates the optical beam that IMaX receives from SUNRISE with known polarization across the image plane, as well as an optical system with the same characteristics of IMaX. The system was used to calibrate the IMaX PMP in vacuum and at different temperatures, with a thermal control resembling the in-flight one. The efficiencies obtained were very high, near theoretical maximum values: the total efficiency in vacuum calibration at nominal temperature was 0.972 (1 being the theoretical maximum). The condition number of the demodulation matrix of the same calibration was 0.522 (0.577 theoretical maximum). Some inhomogeneities of the LCVRs were clear during the pixel-by-pixel calibration of the PMP, but it can be concluded that the mere information of a pixel-by-pixel calibration is sufficient to maintain high efficiencies in spite of inhomogeneities of the LCVRs.

  2. A smart-pixel holographic competitive learning network

    NASA Astrophysics Data System (ADS)

    Slagle, Timothy Michael

    Neural networks are adaptive classifiers which modify their decision boundaries based on feedback from externally- or internally-generated error signals. Optics is an attractive technology for neural network implementation because it offers the possibility of parallel, nearly instantaneous computation of the weighted neuron inputs by the propagation of light through the optical system. Using current optical device technology, system performance levels of 3 × 10¹¹ connection updates per second can be achieved. This thesis presents an architecture for an optical competitive learning network which offers advantages over previous optical implementations, including smart-pixel-based optical neurons, phase-conjugate self-alignment of a single neuron plane, and high-density, parallel-access weight storage, interconnection, and learning in a volume hologram. The competitive learning algorithm with modifications for optical implementation is described, and algorithm simulations are performed for an example problem. The optical competitive learning architecture is then introduced. The optical system is simulated using the "beamprop" algorithm at the level of light propagating through the system components, and results showing competitive learning operation in agreement with the algorithm simulations are presented. The optical competitive learning requires a non-linear, non-local "winner-take-all" (WTA) neuron function. Custom-designed smart-pixel WTA neuron arrays were fabricated using CMOS VLSI/liquid crystal technology. Results of laboratory tests of the WTA arrays' switching characteristics, time response, and uniformity are then presented. The system uses a phase-conjugate mirror to write the self-aligning interconnection weight holograms, and energy gain is required from the reflection to minimize erasure of the existing weights. An experimental system for characterizing the PCM response is described. Useful gains of 20 were obtained with a polarization-multiplexed PCM readout, and gains of up to 60 were observed when a time-sequential read-out technique was used. Finally, the optical competitive learning laboratory system is described, including some necessary modifications to the previous architectures, and the data acquisition and control system developed for the system. Experimental results showing phase conjugation of the WTA outputs, holographic interconnect storage, associative storage between input images and WTA neuron outputs, and WTA array switching are presented, demonstrating the functions necessary for the operation of the optical learning system.
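    For readers unfamiliar with the learning rule at the heart of such a network, the sketch below is a plain software version of winner-take-all competitive learning (not the optical or holographic implementation described in the thesis): the neuron whose weight vector is closest to the current input wins, and only that winner's weights are moved toward the input.

```python
import numpy as np

def competitive_learning(samples, n_neurons=3, lr=0.1, epochs=20, seed=0):
    """Plain winner-take-all competitive learning on a set of feature vectors."""
    rng = np.random.default_rng(seed)
    # Initialize the weight vectors from randomly chosen training samples.
    w = samples[rng.choice(len(samples), n_neurons, replace=False)].copy()
    for _ in range(epochs):
        for x in samples[rng.permutation(len(samples))]:
            winner = np.argmin(np.linalg.norm(w - x, axis=1))   # WTA stage
            w[winner] += lr * (x - w[winner])                   # update winner only
    return w

# Three well-separated 2-D clusters; the weights converge near the cluster centres.
rng = np.random.default_rng(1)
centres = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 3.0]])
samples = np.vstack([c + 0.3 * rng.standard_normal((50, 2)) for c in centres])
print(np.round(competitive_learning(samples), 2))
```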

  3. Depth perception camera for autonomous vehicle applications

    NASA Astrophysics Data System (ADS)

    Kornreich, Philipp

    2013-05-01

    An imager that can measure the distance from each pixel to the point on the object that is in focus at that pixel is described. Since it provides numeric information on the distance from the camera to all points in its field of view, it is ideally suited for autonomous vehicle navigation and robotic vision, eliminating the LIDAR conventionally used for range measurements. The light arriving at a pixel through a convex lens adds constructively only if it comes from the object point in focus at this pixel; the light from all other object points cancels. Thus, the lens selects the point on the object whose range is to be determined. The range measurement is accomplished by short light guides at each pixel. The light guides contain a p-n junction with a pair of contacts, as well as light-sensing elements, along their length. The device uses ambient light that is coherent only within spherical-shell-shaped light packets one coherence length thick. Each of the frequency components of the broadband light arriving at a pixel has a phase proportional to the distance from the object point to its image pixel.

  4. Design and fabrication of AlGaInP-based micro-light-emitting-diode array devices

    NASA Astrophysics Data System (ADS)

    Bao, Xingzhen; Liang, Jingqiu; Liang, Zhongzhu; Wang, Weibiao; Tian, Chao; Qin, Yuxin; Lü, Jinguang

    2016-04-01

    An integrated high-resolution (individual pixel size 80 μm × 80 μm), solid-state, self-emissive, active-matrix 320 × 240 micro-light-emitting-diode array was designed and fabricated on an AlGaInP semiconductor chip using micro-electro-mechanical systems, microstructure and semiconductor fabrication techniques. Pixels in a row share a p-electrode and pixels in a line share an n-electrode. We experimentally investigated how the GaAs substrate thickness affects the electrical and optical characteristics of the pixels. For a 150-μm-thick GaAs substrate, the single-pixel output power was 167.4 μW at 5 mA and increased to 326.4 μW when the current was increased to 10 mA. The investigated device can potentially play an important role in many fields.

  5. Phase holograms in PMMA with proximity effect correction

    NASA Technical Reports Server (NTRS)

    Maker, Paul D.; Muller, R. E.

    1993-01-01

    Complex computer generated phase holograms (CGPH's) have been fabricated in PMMA by partial e-beam exposure and subsequent partial development. The CGPH was encoded as a sequence of phase-delay pixels and written by the JEOL JBX-5D2 e-beam lithography system, a different dose being assigned to each value of phase delay. Following carefully controlled partial development, the pattern appeared rendered in relief in the PMMA, which then acts as the phase-delay medium. The exposure dose was in the range 20-200 μC/cm², and very aggressive development in pure acetone led to low contrast. This enabled etch depth control to better than ±λ_vis/60. That result was obtained by exposing isolated 50 micron square patches and measuring resist removal over the central area, where the proximity effect dose was uniform and related only to the local exposure. For complex CGPH's with pixel size of the order of the e-beam proximity effect radius, the patterns must be corrected for the extra exposure caused by electrons scattered back up out of the substrate. This has been accomplished by deconvolving the two-dimensional dose deposition function from the desired dose pattern. The deposition function, which plays much the same role as an instrument response function, was carefully measured under the exact conditions used to expose the samples. The devices fabricated were designed with 16 equal phase steps per retardation cycle, were up to 1 cm square, and consisted of up to 100 million 0.3-2.0 micron square pixels. Data files were up to 500 MB long and exposure times ranged to tens of hours. A Fresnel phase lens was fabricated that had diffraction-limited optical performance with better than 85 percent efficiency.
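    A hedged sketch of the correction step described above: given a desired dose map and a measured (here, invented two-Gaussian) dose deposition function, a regularized Fourier-domain deconvolution yields the exposure map whose convolution with the deposition function approximates the desired dose. The PSF parameters, pattern and regularization constant are placeholders, not the measured quantities from the paper.

```python
import numpy as np

def proximity_correct(desired_dose, psf, eps=1e-3):
    """Exposure map whose convolution with the deposition PSF approximates the
    desired dose, via a simple regularized (Wiener-style) Fourier deconvolution."""
    D = np.fft.fft2(desired_dose)
    P = np.fft.fft2(np.fft.ifftshift(psf), s=desired_dose.shape)
    X = D * np.conj(P) / (np.abs(P) ** 2 + eps)
    return np.real(np.fft.ifft2(X))

# Illustrative deposition function: narrow forward-scattering peak plus a broad
# backscatter halo (widths and weight are placeholders, not measured values).
n = 128
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
r2 = x ** 2 + y ** 2
psf = np.exp(-r2 / (2 * 1.5 ** 2)) + 0.3 * np.exp(-r2 / (2 * 12.0 ** 2))
psf /= psf.sum()

# Desired dose pattern: a coarse 16-level staircase of square pixels.
desired = np.kron((np.indices((8, 8)).sum(axis=0) % 16) / 15.0, np.ones((16, 16)))
exposure = proximity_correct(desired, psf)
print("Exposure range:", round(float(exposure.min()), 3), "to", round(float(exposure.max()), 3))
```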

  6. A compact high-speed pnCCD camera for optical and x-ray applications

    NASA Astrophysics Data System (ADS)

    Ihle, Sebastian; Ordavo, Ivan; Bechteler, Alois; Hartmann, Robert; Holl, Peter; Liebel, Andreas; Meidinger, Norbert; Soltau, Heike; Strüder, Lothar; Weber, Udo

    2012-07-01

    We developed a camera with a 264 × 264 pixel pnCCD with 48 μm pixel size (450 μm thickness) for X-ray and optical applications. It has a high quantum efficiency and can be operated at frame rates up to 400 / 1000 Hz (noise ≈ 2.5 e⁻ ENC / ≈ 4.0 e⁻ ENC). High-speed astronomical observations can be performed at low light levels. Results of test measurements will be presented. The camera is well suited for ground-based preparatory measurements for future X-ray missions. For single X-ray photons, the spatial position can be determined with significant sub-pixel resolution.

  7. Spectroscopic optical coherence tomography based on wavelength de-multiplexing and smart pixel array detection

    NASA Astrophysics Data System (ADS)

    Laubscher, Markus; Bourquin, Stéphane; Froehly, Luc; Karamata, Boris; Lasser, Theo

    2004-07-01

    Current spectroscopic optical coherence tomography (OCT) methods rely on a posteriori numerical calculation. We present an experimental alternative for accessing spectroscopic information in OCT without post-processing based on wavelength de-multiplexing and parallel detection using a diffraction grating and a smart pixel detector array. Both a conventional A-scan with high axial resolution and the spectrally resolved measurement are acquired simultaneously. A proof-of-principle demonstration is given on a dynamically changing absorbing sample. The method's potential for fast spectroscopic OCT imaging is discussed. The spectral measurements obtained with this approach are insensitive to scan non-linearities or sample movements.

  8. Towards an active real-time THz camera: first realization of a hybrid system

    NASA Astrophysics Data System (ADS)

    May, T.; am Weg, C.; Alcin, A.; Hils, B.; Löffler, T.; Roskos, H. G.

    2007-04-01

    We report the realization of a hybrid system for stand-off THz reflectrometry measurements. The design combines the best of two worlds: the high radiation power of sub-THz micro-electronic emitters and the high sensitivity of coherent opto-electronic detection. Our system is based on a commercially available multiplied Gunn source with a cw output power of 0.6 mW at 0.65 THz. We combine it with electro-optic mixing with femtosecond light pulses in a ZnTe crystal. This scheme can be described as heterodyne detection with a Ti:sapphire fs-laser acting as local oscillator and therefore allows for phase-sensitive measurements. Example images of test objects are obtained with mechanical scanning optics and with measurement times per pixel as short as 10 ms. The test objects are placed at a distance of 1 m from the detector and also from the source. The results indicate diffraction-limited resolution. Different contrast mechanisms, based on absorption, scattering, and difference in optical thickness are employed. Our evaluation shows that it should be possible to realize a real-time multi-pixel detector with several hundreds of pixels and a dynamic range of at least two orders of magnitude in power.

  9. Ultrathin phase-change coatings on metals for electrothermally tunable colors

    NASA Astrophysics Data System (ADS)

    Bakan, Gokhan; Ayas, Sencer; Saidzoda, Tohir; Celebi, Kemal; Dana, Aykutlu

    2016-08-01

    Metal surfaces coated with ultrathin lossy dielectrics enable color generation through strong interferences in the visible spectrum. Using a phase-change thin film as the coating layer allows the generated color to be tuned by crystallization or re-amorphization. Here, we study the optical response of surfaces consisting of thin (5-40 nm) phase-changing Ge2Sb2Te5 (GST) films on metal, primarily Al, layers. A color scale ranging from yellow to red to blue, obtained using different thicknesses of as-deposited amorphous GST layers, turns dim gray upon annealing-induced crystallization of the GST. Moreover, when a relatively thick (>100 nm) and lossless dielectric film is introduced between the GST and Al layers, optical cavity modes are observed, offering a rich color gamut at the expense of the angle-independent optical response. Finally, a color pixel structure is proposed for ultrahigh-resolution (pixel size: 5 × 5 μm²), non-volatile displays, where the metal layer acting as a mirror is used as a heater element. Electrothermal simulations of such a pixel structure suggest that crystallization and re-amorphization of the GST layer using electrical pulses are possible for electrothermal color tuning.

  10. 1 kHz 2D Visual Motion Sensor Using 20 × 20 Silicon Retina Optical Sensor and DSP Microcontroller.

    PubMed

    Liu, Shih-Chii; Yang, MinHao; Steiner, Andreas; Moeckel, Rico; Delbruck, Tobi

    2015-04-01

    Optical flow sensors have been a long-running theme in neuromorphic vision sensors, which include circuits that implement the local background intensity adaptation mechanism seen in biological retinas. This paper reports a bio-inspired optical motion sensor aimed at miniature robotic and aerial platforms. It combines a 20 × 20 continuous-time CMOS silicon retina vision sensor with a DSP microcontroller. The retina sensor has pixels with local gain control that adapt to background lighting. The system allows the user to validate various motion algorithms without building dedicated custom solutions. Measurements are presented to show that the system can compute global 2D translational motion from complex natural scenes using one particular algorithm: the image interpolation algorithm (I2A). With this algorithm, the system can compute global translational motion vectors at a sample rate of 1 kHz, for speeds up to ±1000 pixels/s, using less than 5 k instruction cycles (12 instructions per pixel) per frame. At the 1 kHz sample rate the DSP is 12% occupied with motion computation. The sensor is implemented as a 6 g PCB consuming 170 mW of power.
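    The sketch below is a simplified, single-iteration global-shift estimator in the spirit of I2A (it is not the sensor's DSP implementation): the current frame is modeled as the reference frame displaced by (dx, dy), linearized with the reference image gradients, and the two unknowns are solved by least squares. The synthetic scene and the scipy-based shifting are only there to exercise the estimator.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def global_shift_i2a(ref, cur):
    """Estimate the global translation (dx, dy) of the scene between two frames.
    Models cur(r, c) ~ ref(r - dy, c - dx), linearized with image gradients;
    a simplified, single-iteration sketch in the spirit of the I2A approach."""
    gy, gx = np.gradient(ref.astype(float))
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    b = (ref - cur).ravel()                 # sign follows from the linearization
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy

# Synthetic test: shift a smooth scene by a known sub-pixel amount.
rng = np.random.default_rng(4)
scene = gaussian_filter(rng.random((60, 60)), 3.0)
moved = shift(scene, (0.4, -0.7), mode="nearest")   # content moves (dy, dx) = (0.4, -0.7)
print("Estimated (dx, dy):", np.round(global_shift_i2a(scene, moved), 2))
# expected: approximately (-0.7, 0.4)
```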

  11. Low Complexity Compression and Speed Enhancement for Optical Scanning Holography

    PubMed Central

    Tsang, P. W. M.; Poon, T.-C.; Liu, J.-P.; Kim, T.; Kim, Y. S.

    2016-01-01

    In this paper we report a low-complexity compression method that is suitable for compact optical scanning holography (OSH) systems with different optical settings. Our proposed method can be divided into 2 major parts. First, an automatic decision maker is applied to select the rows of holographic pixels to be scanned. This process enhances the speed of acquiring a hologram, and also lowers the data rate. Second, each row of down-sampled pixels is converted into a one-bit representation with delta modulation (DM). Existing DM-based hologram compression techniques suffer from the disadvantage that a core parameter, commonly known as the step size, has to be determined in advance. However, the correct step size for compressing each row of the hologram depends on the dynamic range of the pixels, which can vary significantly with the object scene, as well as with OSH systems with different optical settings. We have overcome this problem by incorporating a dynamic step-size adjustment scheme. The proposed method is applied to the compression of holograms that are acquired with 2 different OSH systems, demonstrating a compression ratio of over two orders of magnitude, while preserving favorable fidelity of the reconstructed images. PMID:27708410
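    The sketch below illustrates the one-bit delta-modulation stage on a single synthetic hologram row, with the step size tied to the row's dynamic range as a stand-in for the paper's dynamic adjustment scheme (the row-selection stage and the actual adjustment rule are not reproduced). All signal parameters are made up.

```python
import numpy as np

def dm_encode(row, step):
    """One-bit delta modulation: emit 1 when the running estimate must go up, else 0."""
    bits, est = [], 0.0
    for sample in row:
        bit = sample >= est
        bits.append(bit)
        est += step if bit else -step
    return np.array(bits, dtype=bool)

def dm_decode(bits, step, start=0.0):
    """Rebuild the staircase approximation of the row from the bit stream."""
    est, out = start, []
    for bit in bits:
        est += step if bit else -step
        out.append(est)
    return np.array(out)

# A smooth synthetic hologram row; the step is tied to its dynamic range.
x = np.linspace(0.0, 8.0 * np.pi, 512)
row = np.cos(x) * np.exp(-x / 20.0)
step = np.ptp(row) / 64.0
bits = dm_encode(row, step)
rmse = np.sqrt(np.mean((dm_decode(bits, step) - row) ** 2))
print(f"1 bit/sample, step = {step:.4f}, reconstruction RMSE = {rmse:.4f}")
```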

  12. Emirates eXploration Imager (EXI) Overview from the Emirates Mars Mission

    NASA Astrophysics Data System (ADS)

    Al Shamsi, M. R.; Wolff, M. J.; Jones, A. R.; Khoory, M. A.; Osterloo, M. M.; AlMheiri, S.; Reed, H.; Drake, G.

    2017-12-01

    The Emirates eXploration Imager (EXI) instrument is one of three scientific instruments aboard the Emirates Mars Mission (EMM) spacecraft, "Hope". The planned launch window opens in the summer of 2020, and the goal of this United Arab Emirates (UAE) mission is to explore the dynamics of the Martian atmosphere through global spatial sampling on both diurnal and seasonal timescales. A particular focus of the mission is improving our understanding of the global circulation in the lower atmosphere and its connection to the upward transport of energy and to the atmospheric particles escaping from the upper atmosphere. This will be accomplished using three unique and complementary scientific instruments. The subject of this presentation, EXI, is a multi-band camera capable of taking 12-megapixel images, which translates to a spatial resolution of better than 8 km, with well-calibrated radiometric performance. EXI uses a selector wheel mechanism consisting of 6 discrete bandpass filters to sample the optical spectral region: 3 UV bands and 3 visible (RGB) bands. Atmospheric characterization will involve the retrieval of the ice optical depth using the 300-340 nm band, the dust optical depth in the 205-235 nm range, and the column abundance of ozone with a band covering 245-275 nm. Radiometric fidelity is optimized while simplifying the optical design by separating the UV and VIS optical paths. The instrument is being developed jointly by the Laboratory for Atmospheric and Space Physics (LASP), University of Colorado Boulder, USA, and the Mohammed Bin Rashid Space Centre (MBRSC), Dubai, UAE. The development of analysis software (reduction and retrieval) is being enabled through an EXI Observation Simulator. This package will produce EXI-like images using a combination of realistic viewing geometry (NAIF and a "reference trajectory") and simulated radiance values that include relevant atmospheric conditions and properties (Global Climate Model, DISORT). Instrument effects (e.g., read noise, dark current, pixel sensitivity) can then be added to these noiseless images to allow direct testing of data compression schemes, calibration pipeline processing, and atmospheric retrievals.

  13. Using the auxiliary camera for system calibration of 3D measurement by digital speckle

    NASA Astrophysics Data System (ADS)

    Xue, Junpeng; Su, Xianyu; Zhang, Qican

    2014-06-01

    3D shape measurement by digital speckle temporal sequence correlation has drawn considerable attention because of its advantages; however, the measurement mainly yields the depth z-coordinate, while the horizontal physical coordinates (x, y) are usually expressed only as image pixel coordinates. In this paper, a new approach for system calibration is proposed. With an auxiliary camera, we set up a temporary binocular vision system, which is used to calibrate the horizontal coordinates (in mm) while the temporal sequence reference speckle sets are calibrated. First, the binocular vision system is calibrated using the traditional method. Then, digital speckles are projected onto the reference plane, which is moved in equal steps along the depth direction, and temporal sequence speckle images are acquired with the camera as reference sets. When the reference plane is at the first and final positions, crossed fringe patterns are projected onto the plane. The pixel coordinates of the control points are extracted from the images by Fourier analysis, and their physical coordinates are calculated by the binocular vision system. The physical coordinates corresponding to each pixel of the images are then calculated by an interpolation algorithm. Finally, the x and y corresponding to an arbitrary depth value z are obtained from the geometric formula. Experiments show that our method can quickly and flexibly measure the 3D shape of an object as a point cloud.

  14. Analysis of Thematic Mapper data for studying the suspended matter distribution in the coastal area of the German Bight (North Sea)

    NASA Technical Reports Server (NTRS)

    Doerffer, R.; Fischer, J.; Stoessel, M.; Brockmann, C.; Grassl, H.

    1989-01-01

    Thematic Mapper data were analyzed with respect to their capability for mapping the complex structure and dynamics of the suspended matter distribution in the coastal area of the German Bight (North Sea). Three independent pieces of information were found by factor analysis of all seven TM channels: suspended matter concentration, atmospheric scattering, and sea surface temperature. For the required atmospheric correction, the signal-to-noise ratios of Channels 5 and 7 have to be improved by averaging over 25 × 25 pixels, which also makes it possible to monitor the aerosol optical depth and aerosol type over cloud-free water surfaces. Near-surface suspended matter concentrations may be detected to within a factor of 2 by using an algorithm derived from radiative transfer model calculations. The patchiness of suspended matter and its relation to underwater topography was analyzed with autocorrelation and cross-correlation.

  15. Improved evaluation of optical depth components from Langley plot data

    NASA Technical Reports Server (NTRS)

    Biggar, S. F.; Gellman, D. I.; Slater, P. N.

    1990-01-01

    A simple, iterative procedure to determine the optical depth components of the extinction optical depth measured by a solar radiometer is presented. Simulated data show that the iterative procedure improves the determination of the exponent of a Junge law particle size distribution. The determination of the optical depth due to aerosol scattering is improved as compared to a method which uses only two points from the extinction data. The iterative method was used to determine spectral optical depth components for June 11-13, 1988 during the MAC III experiment.
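    A minimal sketch of the Langley-plot step that underlies such analyses: regressing ln V against airmass m gives the total optical depth as the negative slope and the extraterrestrial signal V0 from the intercept, after which a molecular (Rayleigh) component can be subtracted to leave the aerosol part. The iterative Junge-exponent refinement described in the abstract is not reproduced; all numbers below are invented.

```python
import numpy as np

def langley_tau(voltages, airmass):
    """Total optical depth and extraterrestrial constant from a Langley plot:
    ln V = ln V0 - m * tau, fitted by linear regression over airmass m."""
    slope, intercept = np.polyfit(airmass, np.log(voltages), 1)
    return -slope, np.exp(intercept)

# Illustrative morning data at one wavelength (values are made up).
airmass = np.array([1.2, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0])
tau_true, v0_true = 0.18, 1.35
rng = np.random.default_rng(5)
volts = v0_true * np.exp(-tau_true * airmass) * (1 + 0.005 * rng.standard_normal(airmass.size))

tau_total, v0 = langley_tau(volts, airmass)
tau_rayleigh = 0.098          # placeholder molecular optical depth at this wavelength
print(f"tau_total = {tau_total:.3f}, V0 = {v0:.3f}, tau_aerosol ~ {tau_total - tau_rayleigh:.3f}")
```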

  16. Atmospheric imaging results from the Mars exploration rovers: Spirit and Opportunity.

    PubMed

    Lemmon, M T; Wolff, M J; Smith, M D; Clancy, R T; Banfield, D; Landis, G A; Ghosh, A; Smith, P H; Spanovich, N; Whitney, B; Whelley, P; Greeley, R; Thompson, S; Bell, J F; Squyres, S W

    2004-12-03

    A visible atmospheric optical depth of 0.9 was measured by the Spirit rover at Gusev crater and by the Opportunity rover at Meridiani Planum. Optical depth decreased by about 0.6 to 0.7% per sol through both 90-sol primary missions. The vertical distribution of atmospheric dust at Gusev crater was consistent with uniform mixing, with a measured scale height of 11.56 ± 0.62 kilometers. The dust's cross-section-weighted mean radius was 1.47 ± 0.21 micrometers (μm) at Gusev and 1.52 ± 0.18 μm at Meridiani. Comparison of visible optical depths with 9-μm optical depths shows a visible-to-infrared optical depth ratio of 2.0 ± 0.2 for comparison with previous monitoring of infrared optical depths.

  17. Simulation and Spectrum Extraction in the Spectroscopic Channel of the SNAP Experiment

    NASA Astrophysics Data System (ADS)

    Tilquin, Andre; Bonissent, A.; Gerdes, D.; Ealet, A.; Prieto, E.; Macaire, C.; Aumenier, M. H.

    2007-05-01

    A pixel-level simulation software is described. It is composed of two modules. The first module applies Fourier optics at each active element of the system to construct the PSF at a large variety of wavelengths and spatial locations of the point source. The input is provided by the engineer's design program (Zemax). It describes the optical path and the distortions. The PSF properties are compressed and interpolated using shapelets decomposition and neural network techniques. A second module is used for production jobs. It uses the output of the first module to reconstruct the relevant PSF and integrate it on the detector pixels. Extended and polychromatic sources are approximated by a combination of monochromatic point sources. For the spectrum extraction, we use a fast simulator based on a multidimensional linear interpolation of the pixel response tabulated on a grid of values of wavelength, position on sky and slice number. The prediction of the fast simulator is compared to the observed pixel content, and a chi-square minimization where the parameters are the bin contents is used to build the extracted spectrum. The visible and infrared arms are combined in the same chi-square, providing a single spectrum.

  18. A 3D image sensor with adaptable charge subtraction scheme for background light suppression

    NASA Astrophysics Data System (ADS)

    Shin, Jungsoon; Kang, Byongmin; Lee, Keechang; Kim, James D. K.

    2013-02-01

    We present a 3D ToF (Time-of-Flight) image sensor with an adaptive charge subtraction scheme for background light suppression. The proposed sensor can alternately capture a high-resolution color image and a high-quality depth map in each frame. In depth mode, the sensor requires a long enough integration time for accurate depth acquisition, but saturation will occur under strong background illumination. We propose to divide the integration time adaptively into N sub-integration times. In each sub-integration time, our sensor captures an image without saturation and subtracts the charge to prevent the pixel from saturating. The subtraction results are accumulated over the N sub-integrations, yielding a final image free of background illumination at the full integration time. Experimental results with our own ToF sensor show high background suppression performance. We also propose an in-pixel storage and column-level subtraction circuit for chip-level implementation of the proposed method. We believe the proposed scheme will enable 3D sensors to be used in outdoor environments.

  19. Depth-Based Selective Blurring in Stereo Images Using Accelerated Framework

    NASA Astrophysics Data System (ADS)

    Mukherjee, Subhayan; Guddeti, Ram Mohana Reddy

    2014-09-01

    We propose a hybrid method for stereo disparity estimation that combines block- and region-based stereo matching approaches. It generates dense depth maps from disparity measurements of only 18% of the image pixels (left or right). The methodology involves segmenting pixel lightness values using a fast K-means implementation, refining segment boundaries using morphological filtering and connected-components analysis, and then determining the boundary disparities using a sum of absolute differences (SAD) cost function. Complete disparity maps are reconstructed from the boundary disparities. We consider an application of our method to depth-based selective blurring of non-interest regions of stereo images, using a Gaussian blur to defocus the regions that are not of interest to the user. Experiments on the Middlebury dataset demonstrate that our method outperforms traditional disparity estimation approaches using SAD and normalized cross-correlation by up to 33.6% and some recent methods by up to 6.1%. Further, our method is highly parallelizable using a CPU-GPU framework based on Java Thread Pool and APARAPI, with a speed-up of 5.8 for 250 stereo video frames (4,096 × 2,304).
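    A minimal sketch of the final selective-blurring step only, assuming a dense disparity map is already available: pixels whose disparity lies near a chosen disparity of interest are kept sharp, and everything else is replaced by a Gaussian-blurred copy. The disparity values, tolerance and blur strength below are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def selective_blur(image, disparity, d_focus, d_tol=8.0, sigma=4.0):
    """Blur pixels whose disparity is far from the disparity of interest."""
    blurred = gaussian_filter(image, sigma)
    keep = np.abs(disparity - d_focus) <= d_tol
    return np.where(keep, image, blurred)

# Synthetic example: near object (high disparity) kept sharp, background blurred.
rng = np.random.default_rng(6)
img = rng.random((120, 160))
disp = np.full((120, 160), 12.0)       # background disparity
disp[40:90, 60:120] = 48.0             # foreground object
out = selective_blur(img, disp, d_focus=48.0)
print("Foreground left unchanged:", bool(np.allclose(out[40:90, 60:120], img[40:90, 60:120])))
```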

  20. Dispersion analysis of collagen fiber networks in cervical tissue using optical coherence tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Gan, Yu; Yao, Wang; Myers, Kristin M.; Vink, Joy Y.; Wapner, Ronald J.; Hendon, Christine P.

    2016-02-01

    Understanding the human cervical collagen fiber network is critical to delineating the physiology of cervical remodeling during pregnancy. Previously, we presented our methodology to study the ultrastructure of collagen fibers over an entire field of transverse slices of human cervix tissue using optical coherence tomography. Here, we present a pixel-wise fiber orientation method to enable dispersion analysis on entire slices of human cervical tissue. We obtained en face images that were parallel to the surface. In each en face image, we masked the collagen fiber region based on the signal-to-noise ratio. Then, we extracted the fiber orientation at each pixel using a weighted summation scheme and generated a pixel-wise directionality map over the entire region. The weight was determined by the intensity variations between a pixel of interest and its neighboring pixels and their corresponding distances. We divided the directionality map into regions of 400 μm × 400 μm along the radial direction in all four quadrants. In each region, we fit a von Mises distribution, with mode θ and dispersion b, to the pixel fiber orientations. We compared dispersions among regions and samples. Using IRB-approved protocols, we obtained whole transverse slices of cervical tissue from pregnant (n = 2) and non-pregnant (n = 13) women. We observed higher dispersion in pregnant samples compared to non-pregnant samples and higher dispersion in the patients' right/left zones than in the posterior/anterior zones within an axial slice. Future studies will analyze how collagen fiber dispersion patterns change from the internal to the external os.
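    A hedged sketch of the per-region fitting step: a moment-based estimate of von Mises parameters (mean direction and concentration κ, the latter standing in for a dispersion parameter) applied to orientation data. Because fiber orientations are axial (θ and θ + 180° are equivalent), the usual trick of doubling the angles before fitting is used; the synthetic data and parameters are illustrative only.

```python
import numpy as np

def fit_von_mises(angles):
    """Moment-based estimate of von Mises parameters (mu, kappa) for circular data,
    using the standard approximation for kappa (Fisher, Statistical Analysis of
    Circular Data)."""
    C, S = np.mean(np.cos(angles)), np.mean(np.sin(angles))
    mu = np.arctan2(S, C)
    R = np.hypot(C, S)                      # mean resultant length
    if R < 0.53:
        kappa = 2 * R + R ** 3 + 5 * R ** 5 / 6
    elif R < 0.85:
        kappa = -0.4 + 1.39 * R + 0.43 / (1 - R)
    else:
        kappa = 1.0 / (R ** 3 - 4 * R ** 2 + 3 * R)
    return mu, kappa

# Synthetic axial orientation data concentrated around 30 degrees.
rng = np.random.default_rng(7)
orientations = rng.vonmises(mu=2 * np.deg2rad(30.0), kappa=4.0, size=2000) / 2.0
mu2, kappa = fit_von_mises(2.0 * orientations)       # double-angle trick
print(f"mode ~ {np.rad2deg(mu2 / 2):.1f} deg, concentration kappa ~ {kappa:.2f}")
```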

  1. Ultra-high resolution and high-brightness AMOLED

    NASA Astrophysics Data System (ADS)

    Wacyk, Ihor; Ghosh, Amal; Prache, Olivier; Draper, Russ; Fellowes, Dave

    2012-06-01

    As part of its continuing effort to improve both the resolution and the optical performance of AMOLED microdisplays, eMagin has recently developed an SXGA (1280 × 3 × 1024) microdisplay under a US Army RDECOM CERDEC NVESD contract that combines the world's smallest OLED pixel pitch with an ultra-high-brightness green OLED emitter. This development is aimed at next-generation HMD systems with "see-through" and daylight imaging requirements. The OLED pixel array is built on a 0.18-micron CMOS backplane and contains over 4 million individually addressable pixels with a pixel pitch of 2.7 × 8.1 microns, resulting in an active area of 0.52 inches diagonal. Using both spatial and temporal enhancement, the display can provide over 10 bits of gray-level control for high-dynamic-range applications. The new pixel design also enables the future implementation of a full-color QSXGA (2560 × RGB × 2048) microdisplay in an active area of only 1.05 inches diagonal. A low-power serialized low-voltage-differential-signaling (LVDS) interface is integrated into the display for use as a remote video link for tethered systems. The new SXGA backplane has been combined with the high-brightness green OLED device developed by eMagin under an NVESD contract. This OLED device has produced an output brightness of more than 8000 fL with all pixels on; lifetime measurements are currently underway and will be presented at the meeting. This paper will describe the operational features and the first optical and electrical test results of the new SXGA demonstrator microdisplay.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kishimoto, S., E-mail: syunji.kishimoto@kek.jp; Haruki, R.; Mitsui, T.

    We developed a silicon avalanche photodiode (Si-APD) linear-array detector to be used for time-resolved X-ray scattering experiments using synchrotron X-rays. The Si-APD linear array consists of 64 pixels (pixel size: 100 × 200 μm²) with a pixel pitch of 150 μm and a depletion depth of 10 μm. The multichannel scaler counted X-ray pulses over 2046 continuous time bins of 0.5 ns each and recorded a time spectrum at each pixel with a time resolution of 0.5 ns (FWHM) for 8.0 keV X-rays. Using the detector system, we were able to observe X-ray peaks clearly separated at 2 ns intervals in the multibunch-mode operation of the Photon Factory ring. The small-angle X-ray scattering from polyvinylidene fluoride film was also observed with the detector.

  3. Lightweight uncooled TWS equipped with catadioptric optics and microscan mechanism

    NASA Astrophysics Data System (ADS)

    Bergeron, A.; Jerominek, H.; Doucet, M.; Lagacé, F.; Desnoyers, N.; Bernier, S.; Mercier, L.; Boucher, M.-A.; Jacob, M.; Alain, C.; Pope, T. D.; Laou, P.

    2006-05-01

    A rugged lightweight thermal weapon sight (TWS) prototype was developed at INO in collaboration with DRDC-Valcartier. This TWS model is based on uncooled bolometer technology, ultralight catadioptric optics, ruggedized mechanics and electronics, and extensive onboard processing capabilities. The TWS prototype operates in a single 8-12 μm infrared (IR) band. It is equipped with a unique lightweight athermalized catadioptric objective and a bolometric IR imager with an INO focal plane array (FPA). Microscan technology allows the use of a 160 × 120 pixel FPA with a pitch of 50 μm to achieve a 320 × 240 pixel resolution image, thereby avoiding the size (larger optics) and cost (expensive IR optical components) penalties associated with the use of larger format arrays. The TWS is equipped with a miniature shutter for automatic offset calibration. Based on the operation of the FPA at 100 frames per second (fps), real-time imaging with 320 × 240 pixel resolution at 25 fps is available. This TWS is also equipped with a high-resolution (857 × 600 pixels) OLED color microdisplay and an integrated wireless digital RF link. The sight has an adjustable and selectable electronic reticle or crosshair (five possible reticles) and a manual focus from 5 m to infinity standoff distance. Processing capabilities are added to introduce specific functionalities such as image inversion (black hot and white hot), image enhancement, and pixel smoothing. This TWS prototype is very lightweight (~1100 grams) and compact (volume of 93 cubic inches). It offers human-size target detection at 800 m and recognition at 200 m (Johnson criteria). With 6 Li AA batteries, it operates continuously for 5 hours and 20 minutes at room temperature. It can operate over the temperature range of -30 °C to +40 °C and its housing is completely sealed. The TWS is adapted to Weaver or Picatinny rail mounting. The overall design of the TWS prototype is based on user feedback, aiming for improved user-friendliness (e.g., no pull-down menus and no electronic focusing) and ergonomics (e.g., button locations).

  4. Development of depth encoding small animal PET detectors using dual-ended readout of pixelated scintillator arrays with SiPMs.

    PubMed

    Kuang, Zhonghua; Sang, Ziru; Wang, Xiaohui; Fu, Xin; Ren, Ning; Zhang, Xianming; Zheng, Yunfei; Yang, Qian; Hu, Zhanli; Du, Junwei; Liang, Dong; Liu, Xin; Zheng, Hairong; Yang, Yongfeng

    2018-02-01

    The performance of current small animal PET scanners is mainly limited by the detector performance, and depth-encoding detectors are required to develop PET scanners that simultaneously achieve high spatial resolution and high sensitivity. Among all depth-encoding PET detector approaches, the dual-ended readout detector has the advantage of achieving the highest depth of interaction (DOI) resolution and spatial resolution. The silicon photomultiplier (SiPM) is believed to be the photodetector of the future for PET detectors owing to its excellent properties compared with traditional photodetectors such as the photomultiplier tube (PMT) and the avalanche photodiode (APD). The purpose of this work is to develop high-resolution depth-encoding small animal PET detectors using dual-ended readout of finely pixelated scintillator arrays with SiPMs. Four lutetium-yttrium oxyorthosilicate (LYSO) arrays with 11 × 11 crystals and 11.6 × 11.6 × 20 mm³ outside dimensions were made using ESR, Toray and BaSO4 reflectors. The LYSO arrays were read out with Hamamatsu 4 × 4 SiPM arrays from both ends. The SiPM array has a pixel size of 3 × 3 mm², a 0.2 mm gap between the pixels and a total active area of 12.6 × 12.6 mm². The flood histograms, DOI resolution, energy resolution and timing resolution of the four detector modules were measured and compared. All crystals can be clearly resolved in the measured flood histograms of all four arrays. The BaSO4 arrays provide the best and the ESR array provides the worst flood histograms. The DOI resolution obtained from the DOI profiles of the individual crystals of the four arrays ranges from 2.1 to 2.35 mm for events with E > 350 keV. The variation of the DOI ratio among crystals is larger for the BaSO4 arrays than for the ESR and Toray arrays, so the BaSO4 arrays provide worse detector-based DOI resolution. The photopeak amplitude of the Toray array changed the most with depth, and it provides the worst energy resolution, 21.3%. The photopeak amplitude of the BaSO4 array with the 80 μm reflector changes very little with depth, and it provides the best energy resolution, 12.9%. A maximum timing shift of 1.37 ns to 1.61 ns between the corner and center crystals of the four arrays was obtained due to the use of a resistor network readout. A crystal-based timing resolution of 0.68 ns to 0.83 ns and a detector-based timing resolution of 1.26 ns to 1.45 ns were obtained for the four detector modules. Four high-resolution depth-encoding small animal PET detectors were developed using dual-ended readout of pixelated scintillator arrays with SiPMs. The performance results show that these detectors can be used to build a small animal PET scanner that simultaneously achieves uniform high spatial resolution and high sensitivity. © 2017 American Association of Physicists in Medicine.
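    For orientation, dual-ended readout detectors typically estimate the depth of interaction from the ratio of the signals collected at the two crystal ends; the sketch below shows that ratio and a simple per-crystal linear mapping from ratio to depth. The signal amplitudes, calibration end-point ratios and crystal length used here are invented, not the paper's measured values.

```python
import numpy as np

def doi_ratio(a_top, a_bottom):
    """Depth-of-interaction ratio for a dual-ended readout crystal."""
    return a_top / (a_top + a_bottom)

def ratio_to_depth(ratio, r_at_0, r_at_length, length_mm=20.0):
    """Map the DOI ratio to a depth using a per-crystal linear calibration,
    where r_at_0 and r_at_length are the ratios measured at the crystal ends."""
    return (ratio - r_at_0) / (r_at_length - r_at_0) * length_mm

# Illustrative event: signal amplitudes (arbitrary units) seen by the two SiPMs.
r = doi_ratio(a_top=480.0, a_bottom=720.0)
depth = ratio_to_depth(r, r_at_0=0.25, r_at_length=0.75)
print(f"DOI ratio = {r:.2f}, estimated depth = {depth:.1f} mm")
```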

  5. High-speed real-time image compression based on all-optical discrete cosine transformation

    NASA Astrophysics Data System (ADS)

    Guo, Qiang; Chen, Hongwei; Wang, Yuxi; Chen, Minghua; Yang, Sigang; Xie, Shizhong

    2017-02-01

    In this paper, we present a high-speed single-pixel imaging (SPI) system based on an all-optical discrete cosine transform (DCT) and demonstrate its capability to enable noninvasive imaging of flowing cells in a microfluidic channel. Through spectral shaping based on photonic time stretch (PTS) and wavelength-to-space conversion, structured illumination patterns are generated at a rate (tens of MHz) that is three orders of magnitude higher than the switching rate of the digital micromirror device (DMD) used in a conventional single-pixel camera. Using this pattern projector, high-speed image compression based on the DCT can be achieved in the optical domain. In our proposed system, a high compression ratio (approximately 10:1) and a fast image reconstruction procedure are both achieved, which suggests broad applications in industrial quality control and biomedical imaging.
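    A hedged illustration of the compression principle (not the optical PTS implementation): in DCT-based single-pixel imaging, each projected structured pattern yields one detector value corresponding to one DCT coefficient, so keeping only a low-frequency block of coefficients and applying the inverse DCT gives a compressed reconstruction. The scene, image size and kept-coefficient block below are arbitrary.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Ground-truth scene (a smooth blob), used only to simulate the measurements.
N = 32
y, x = np.mgrid[0:N, 0:N]
scene = np.exp(-((x - 18) ** 2 + (y - 14) ** 2) / 40.0)

# "Measure" only the K x K lowest-frequency DCT coefficients; in the optical
# system each kept coefficient corresponds to projecting one pattern and
# recording a single detector value.
coeffs = dctn(scene, norm="ortho")
K = 10                                   # ~10:1 compression for a 32 x 32 image
kept = np.zeros_like(coeffs)
kept[:K, :K] = coeffs[:K, :K]

recon = idctn(kept, norm="ortho")
rel_err = np.linalg.norm(recon - scene) / np.linalg.norm(scene)
print(f"Kept {K * K}/{N * N} coefficients, relative reconstruction error = {rel_err:.3f}")
```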

  6. Characterisation of a novel reverse-biased PPD CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Stefanov, K. D.; Clarke, A. S.; Ivory, J.; Holland, A. D.

    2017-11-01

    A new pinned photodiode (PPD) CMOS image sensor (CIS) has been developed and characterised. The sensor can be fully depleted by means of a reverse bias applied to the substrate, and the principle of operation is applicable to very thick sensitive volumes. Additional n-type implants under the pixel p-wells, called Deep Depletion Extension (DDE), have been added in order to eliminate the large parasitic substrate current that would otherwise be present in a normal device. The first prototype has been manufactured on 18 μm thick, 1000 Ω·cm epitaxial silicon wafers using a 180 nm PPD image sensor process at TowerJazz Semiconductor. The chip contains arrays of 10 μm and 5.4 μm pixels, with variations of the shape, size and depth of the DDE implant. Back-side illuminated (BSI) devices were manufactured in collaboration with Teledyne e2v, and characterised together with the front-side illuminated (FSI) variants. The presented results show that the devices could be reverse-biased without parasitic leakage currents, in good agreement with simulations. The new 10 μm pixels in both BSI and FSI variants exhibit nearly identical photo response to the reference non-modified pixels, as characterised with the photon transfer curve. Different techniques were used to measure the depletion depth in FSI and BSI chips, and the results are consistent with the expected full depletion.
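
    The photon transfer curve mentioned above relates flat-field signal variance to mean signal; in the shot-noise-limited regime the conversion gain is the inverse of the slope of variance versus mean. The sketch below illustrates that standard analysis on synthetic data and is not tied to the sensor described in the paper.

```python
import numpy as np

def conversion_gain_from_ptc(means_dn, variances_dn2):
    """Estimate conversion gain (e-/DN) from a photon transfer curve.

    In the shot-noise-limited region, variance [DN^2] = mean [DN] / gain [e-/DN],
    so the gain is the inverse of the fitted slope of variance vs. mean.
    """
    slope, _ = np.polyfit(means_dn, variances_dn2, 1)
    return 1.0 / slope

# Synthetic example: true gain 2.0 e-/DN, flat-field levels from 100 to 4000 DN.
rng = np.random.default_rng(0)
true_gain = 2.0
means = np.linspace(100, 4000, 20)
variances = means / true_gain + rng.normal(0, 5, means.size)  # shot noise + scatter
print("estimated gain:", conversion_gain_from_ptc(means, variances))
```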

  7. Combining Imaging and Non-Imaging Observations for Improved Space-Object Identification

    DTIC Science & Technology

    2011-09-27

    Optical and Digital Superresolution Early in the project, we exploited Fisher information (FI) to characterize the extent of spatial-frequency...extrapolation beyond the diffraction-limited optical bandwidth when the support of the object is known a priori. This support-assisted optical superresolution ...both digital (DSR) and optical superresolution (OSR). Indeed, by analyzing a sequence of sub-pixel-shifted undersampled images one can show the

  8. The effect of spatial resolution upon cloud optical property retrievals. I - Optical thickness

    NASA Technical Reports Server (NTRS)

    Feind, Rand E.; Christopher, Sundar A.; Welch, Ronald M.

    1992-01-01

    High spectral and spatial resolution Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) imagery is used to study the effects of spatial resolution upon fair weather cumulus cloud optical thickness retrievals. As a preprocessing step, a variation of the Gao and Goetz three-band ratio technique is used to discriminate clouds from the background. Combining the elimination of cloud shadow pixels with the use of the first derivative of the histogram allows accurate cloud edge discrimination. The data are progressively degraded from 20 m to 960 m spatial resolution. The results show that retrieved cloud area increases with decreasing spatial resolution. The results also show that there is a monotonic decrease in retrieved cloud optical thickness with decreasing spatial resolution. It is also demonstrated that the use of a single, monospectral reflectance threshold is inadequate for identifying cloud pixels in fair weather cumulus scenes and presumably in any inhomogeneous cloud field. Cloud edges have a distribution of reflectance thresholds. The incorrect identification of cloud edges significantly impacts the accurate retrieval of cloud optical thickness values.

  9. Pixel pitch and particle energy influence on the dark current distribution of neutron irradiated CMOS image sensors.

    PubMed

    Belloir, Jean-Marc; Goiffon, Vincent; Virmontois, Cédric; Raine, Mélanie; Paillet, Philippe; Duhamel, Olivier; Gaillardin, Marc; Molina, Romain; Magnan, Pierre; Gilard, Olivier

    2016-02-22

    The dark current produced by neutron irradiation in CMOS Image Sensors (CIS) is investigated. Several CIS with different photodiode types and pixel pitches are irradiated with various neutron energies and fluences to study the influence of each of these optical detector and irradiation parameters on the dark current distribution. An empirical model is tested on the experimental data and validated on all the irradiated optical imagers. This model is able to describe all the presented dark current distributions with no parameter variation for neutron energies of 14 MeV or higher, regardless of the optical detector and irradiation characteristics. For energies below 1 MeV, it is shown that a single parameter has to be adjusted because of the lower mean damage energy per nuclear interaction. This model and these conclusions can be transposed to any silicon based solid-state optical imagers such as CIS or Charged Coupled Devices (CCD). This work can also be used when designing an optical imager instrument, to anticipate the dark current increase or to choose a mitigation technique.

  10. Printing colour at the optical diffraction limit.

    PubMed

    Kumar, Karthik; Duan, Huigao; Hegde, Ravi S; Koh, Samuel C W; Wei, Jennifer N; Yang, Joel K W

    2012-09-01

    The highest possible resolution for printed colour images is determined by the diffraction limit of visible light. To achieve this limit, individual colour elements (or pixels) with a pitch of 250 nm are required, translating into printed images at a resolution of ∼100,000 dots per inch (d.p.i.). However, methods for dispensing multiple colourants or fabricating structural colour through plasmonic structures have insufficient resolution and limited scalability. Here, we present a non-colourant method that achieves bright-field colour prints with resolutions up to the optical diffraction limit. Colour information is encoded in the dimensional parameters of metal nanostructures, so that tuning their plasmon resonance determines the colours of the individual pixels. Our colour-mapping strategy produces images with both sharp colour changes and fine tonal variations, is amenable to large-volume colour printing via nanoimprint lithography, and could be useful in making microimages for security, steganography, nanoscale optical filters and high-density spectrally encoded optical data storage.

  11. A combined reconstruction-classification method for diffuse optical tomography.

    PubMed

    Hiltunen, P; Prince, S J D; Arridge, S

    2009-11-07

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
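
    The classification step described above fits a mixture of Gaussians to the current estimate of the optical parameters and assigns each pixel class responsibilities that shape the prior for the next reconstruction. A minimal, self-contained EM sketch of only that classification step is given below; the reconstruction and Tikhonov update are not reproduced, and the numbers are purely illustrative.

```python
import numpy as np

def gaussian_mixture_em(values, n_classes=2, n_iter=50):
    """Fit a 1-D Gaussian mixture to reconstructed pixel values with EM.

    Returns per-pixel class responsibilities plus class means and variances,
    which could then define a spatially varying prior for the next reconstruction.
    """
    values = np.asarray(values, dtype=float)
    means = np.linspace(values.min(), values.max(), n_classes)
    variances = np.full(n_classes, values.var() / n_classes)
    weights = np.full(n_classes, 1.0 / n_classes)
    for _ in range(n_iter):
        # E-step: responsibility of each class for each pixel.
        diff2 = (values[:, None] - means[None, :]) ** 2
        lik = weights * np.exp(-0.5 * diff2 / variances) / np.sqrt(2 * np.pi * variances)
        resp = lik / lik.sum(axis=1, keepdims=True)
        # M-step: update mixture parameters.
        n_k = resp.sum(axis=0)
        weights = n_k / values.size
        means = (resp * values[:, None]).sum(axis=0) / n_k
        variances = (resp * diff2).sum(axis=0) / n_k
    return resp, means, variances

# Toy example: background absorption ~0.01 and an inclusion ~0.03 (mm^-1).
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(0.01, 0.002, 900), rng.normal(0.03, 0.002, 100)])
resp, means, variances = gaussian_mixture_em(pixels)
print("class means:", means)
```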

  12. Geometrical superresolved imaging using nonperiodic spatial masking.

    PubMed

    Borkowski, Amikam; Zalevsky, Zeev; Javidi, Bahram

    2009-03-01

    The resolution of every imaging system is limited either by the F-number of its optics or by the geometry of its detection array. The geometrical limitation is caused by lack of spatial sampling points as well as by the shape of every sampling pixel that generates spectral low-pass filtering. We present a novel approach to overcome the low-pass filtering that is due to the shape of the sampling pixels. The approach combines special algorithms together with spatial masking placed in the intermediate image plane and eventually allows geometrical superresolved imaging without relation to the actual shape of the pixels.

  13. Nature's crucible: Manufacturing optical nonlinearities for high resolution, high sensitivity encoding in the compound eye of the fly, Musca domestica

    NASA Technical Reports Server (NTRS)

    Wilcox, Mike

    1993-01-01

    The number of pixels per unit area sampling an image determines Nyquist resolution. Therefore, the highest pixel density is the goal. Unfortunately, as reduction in pixel size approaches the wavelength of light, sensitivity is lost and noise increases. Animals face the same problems and have achieved novel solutions. Emulating these solutions offers potentially unlimited sensitivity with detector size approaching the diffraction limit. Once an image is 'captured', cellular preprocessing of information allows extraction of high resolution information from the scene. Computer simulation of this system promises hyperacuity for machine vision.

  14. Polarized-pixel performance model for DoFP polarimeter

    NASA Astrophysics Data System (ADS)

    Feng, Bin; Shi, Zelin; Liu, Haizheng; Liu, Li; Zhao, Yaohong; Zhang, Junchao

    2018-06-01

    A division-of-focal-plane (DoFP) polarimeter is manufactured by placing a micropolarizer array directly onto the focal plane array (FPA) of a detector. Each element of the DoFP polarimeter is a polarized pixel. This paper proposes a performance model for a polarized pixel. The proposed model characterizes the optical and electronic performance of a polarized pixel by three parameters: the major polarization responsivity, the minor polarization responsivity and the polarization orientation. Each parameter corresponds to an intuitive physical feature of a polarized pixel. This paper further extends the model to calibrate polarization images from a DoFP polarimeter. The calibration is evaluated quantitatively with a developed DoFP polarimeter under varying illumination intensity and angle of linear polarization. The experiment shows that our model reduces the nonuniformity of DoLP (degree of linear polarization) images to 6.79% of the uncalibrated value, and significantly improves the visual quality of the DoLP images.
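
    One way to read the three-parameter pixel model is as a Malus-law response between a major and a minor responsivity at the pixel's polarization orientation. The sketch below illustrates that interpretation together with the standard four-orientation DoLP estimate; it is an illustration, not the authors' calibration code.

```python
import numpy as np

def polarized_pixel_response(intensity, aolp_rad, r_major, r_minor, orientation_rad):
    """Response of one polarized pixel to linearly polarized light.

    r_major / r_minor are the responsivities along / across the pixel's
    polarization orientation (the three model parameters in the abstract).
    """
    c = np.cos(aolp_rad - orientation_rad) ** 2
    return intensity * (r_major * c + r_minor * (1.0 - c))

def dolp_from_superpixel(i0, i45, i90, i135):
    """Standard Stokes estimate of degree of linear polarization from a 2x2 superpixel."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return np.sqrt(s1 ** 2 + s2 ** 2) / s0

# Example: fully polarized light at 30 degrees seen by an ideal 0/45/90/135 superpixel.
aolp = np.deg2rad(30.0)
angles = np.deg2rad([0.0, 45.0, 90.0, 135.0])
i = [polarized_pixel_response(1.0, aolp, 1.0, 0.0, a) for a in angles]
print("DoLP:", dolp_from_superpixel(*i))  # ~1.0 for fully polarized light
```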

  15. Apparatus And Method For Osl-Based, Remote Radiation Monitoring And Spectrometry

    DOEpatents

    Miller, Steven D.; Smith, Leon Eric; Skorpik, James R.

    2006-03-07

    Compact, OSL-based devices for long-term, unattended radiation detection and spectroscopy are provided. In addition, a method for extracting spectroscopic information from these devices is taught. The devices can comprise OSL pixels and at least one radiation filter surrounding at least a portion of the OSL pixels. The filter can modulate an incident radiation flux. The devices can further comprise a light source and a detector, both proximally located to the OSL pixels, as well as a power source and a wireless communication device, each operably connected to the light source and the detector. Power consumption of the device ranges from ultra-low to zero. The OSL pixels can retain data regarding incident radiation events as trapped charges. The data can be extracted wirelessly or manually. The method for extracting spectroscopic data comprises optically stimulating the exposed OSL pixels, detecting a readout luminescence, and reconstructing an incident-energy spectrum from the luminescence.

  16. Apparatus and method for OSL-based, remote radiation monitoring and spectrometry

    DOEpatents

    Smith, Leon Eric [Richland, WA; Miller, Steven D [Richland, WA; Bowyer, Theodore W [Oakton, VA

    2008-05-20

    Compact, OSL-based devices for long-term, unattended radiation detection and spectroscopy are provided. In addition, a method for extracting spectroscopic information from these devices is taught. The devices can comprise OSL pixels and at least one radiation filter surrounding at least a portion of the OSL pixels. The filter can modulate an incident radiation flux. The devices can further comprise a light source and a detector, both proximally located to the OSL pixels, as well as a power source and a wireless communication device, each operably connected to the light source and the detector. Power consumption of the device ranges from ultra-low to zero. The OSL pixels can retain data regarding incident radiation events as trapped charges. The data can be extracted wirelessly or manually. The method for extracting spectroscopic data comprises optically stimulating the exposed OSL pixels, detecting a readout luminescence, and reconstructing an incident-energy spectrum from the luminescence.

  17. Spectroradiometric calibration of the Thematic Mapper and Multispectral Scanner system

    NASA Technical Reports Server (NTRS)

    Slater, P. N.; Palmer, J. M. (Principal Investigator)

    1984-01-01

    Radiometric measurements were taken on the morning of the LANDSAT 5 Thematic Mapper overpass. The sky was cloud free and the sites were dry. Barnes multiband radiometer data were collected for a 4 x 4 pixel area and two fractional pixel areas of slightly higher and lower reflectances than the larger area. Helicopter color photography was obtained of all the ground areas. This photography will allow a detailed reflectance map of the 4 x 4 pixel area to be made and registered to the TM imagery to an accuracy of better than half a pixel. Spectropolarimeter data were also collected of the 4 x 4 pixel area from the helicopter. In addition, ground based solar radiometer data were collected to provide spectral extinction optical thickness values. The radiative transfer theory used in the development of the Herman code, which was used in predicting the TM entrance pupil spectral radiances from the ground based measurements, is described.

  18. The CAOS camera platform: ushering in a paradigm change in extreme dynamic range imager design

    NASA Astrophysics Data System (ADS)

    Riza, Nabeel A.

    2017-02-01

    Multi-pixel imaging devices such as CCD, CMOS and Focal Plane Array (FPA) photo-sensors dominate the imaging world. These Photo-Detector Array (PDA) devices certainly have their merits including increasingly high pixel counts and shrinking pixel sizes, nevertheless, they are also being hampered by limitations in instantaneous dynamic range, inter-pixel crosstalk, quantum full well capacity, signal-to-noise ratio, sensitivity, spectral flexibility, and in some cases, imager response time. Recently invented is the Coded Access Optical Sensor (CAOS) Camera platform that works in unison with current Photo-Detector Array (PDA) technology to counter fundamental limitations of PDA-based imagers while providing high enough imaging spatial resolution and pixel counts. Using for example the Texas Instruments (TI) Digital Micromirror Device (DMD) to engineer the CAOS camera platform, ushered in is a paradigm change in advanced imager design, particularly for extreme dynamic range applications.

  19. SPADAS: a high-speed 3D single-photon camera for advanced driver assistance systems

    NASA Astrophysics Data System (ADS)

    Bronzi, D.; Zou, Y.; Bellisai, S.; Villa, F.; Tisa, S.; Tosi, A.; Zappa, F.

    2015-02-01

    Advanced Driver Assistance Systems (ADAS) are the most advanced technologies to fight road accidents. Within ADAS, an important role is played by radar- and lidar-based sensors, which are mostly employed for collision avoidance and adaptive cruise control. Nonetheless, they have a narrow field-of-view and a limited ability to detect and differentiate objects. Standard camera-based technologies (e.g. stereovision) could balance these weaknesses, but they are currently not able to fulfill all automotive requirements (distance range, accuracy, acquisition speed, and frame-rate). To this purpose, we developed an automotive-oriented CMOS single-photon camera for optical 3D ranging based on indirect time-of-flight (iTOF) measurements. Imagers based on single-photon avalanche diode (SPAD) arrays offer higher sensitivity with respect to CCD/CMOS rangefinders, and have inherently better time resolution, higher accuracy and better linearity. Moreover, iTOF requires neither high-bandwidth electronics nor short-pulsed lasers, hence allowing the development of cost-effective systems. The CMOS SPAD sensor is based on 64 × 32 pixels, each able to process both 2D intensity data and 3D depth-ranging information, with background suppression. Pixel-level memories allow fully parallel imaging and prevent motion artefacts (skew, wobble, motion blur) and partial exposure effects, which would otherwise hinder the detection of fast moving objects. The camera is housed in an aluminum case supporting a 12 mm F/1.4 C-mount imaging lens with a 40°×20° field-of-view. The whole system is very rugged and compact and a perfect solution for a vehicle's cockpit, with dimensions of 80 mm × 45 mm × 70 mm and less than 1 W power consumption. To provide the required optical power (1.5 W, eye safe) and to allow fast (up to 25 MHz) modulation of the active illumination, we developed a modular laser source based on five laser driver cards, with three 808 nm lasers each. We present the full characterization of the 3D automotive system, operated both at night and during daytime, indoors and outdoors, in a real traffic scenario. The achieved long range (up to 45 m), high dynamic range (118 dB), high speed (over 200 fps) 3D depth measurement, and high precision (better than 90 cm at 45 m) highlight the excellent performance of this CMOS SPAD camera for automotive applications.
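
    Indirect time-of-flight ranging recovers distance from the phase delay of the modulated illumination; with four phase-stepped correlation samples per pixel, a standard four-bucket formula gives the phase and hence the range. A minimal sketch of that generic formulation follows: the 25 MHz modulation frequency matches the abstract, the sample values are invented, and bucket-ordering conventions vary between sensors.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def itof_distance(c0, c1, c2, c3, f_mod_hz=25e6):
    """Distance from four phase-stepped correlation samples (0, 90, 180, 270 deg).

    phi = atan2(c3 - c1, c0 - c2); the unambiguous range is c / (2 * f_mod).
    """
    phi = np.arctan2(c3 - c1, c0 - c2)
    phi = np.mod(phi, 2 * np.pi)           # keep the phase in [0, 2*pi)
    return C * phi / (4 * np.pi * f_mod_hz)

# Example: samples consistent with a phase of pi/2, i.e. a target at ~1.5 m.
print(itof_distance(60.0, 20.0, 60.0, 100.0))
```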

  20. Optical encryption of multiple three-dimensional objects based on multiple interferences and single-pixel digital holography

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Liu, Qi; Wang, Jun; Wang, Qiong-Hua

    2018-03-01

    We present an optical encryption method for multiple three-dimensional objects based on multiple interferences and single-pixel digital holography. By modifying the Mach–Zehnder interferometer, the interference of the multiple object beams and a single reference beam is used to simultaneously encrypt multiple objects into one ciphertext. During decryption, each three-dimensional object can be decrypted independently without having to decrypt the other objects. Since single-pixel digital holography based on compressive sensing theory is introduced, the amount of encrypted data in this method is effectively reduced. In addition, recording less encrypted data can greatly reduce the bandwidth required for network transmission. Moreover, the compressive sensing essentially serves as a secret key that makes an intruder attack invalid, which means that the system is more secure than conventional encryption methods. Simulation results demonstrate the feasibility of the proposed method and show that the system has good security performance. Project supported by the National Natural Science Foundation of China (Grant Nos. 61405130 and 61320106015).

  1. An algorithm for estimating aerosol optical depth from HIMAWARI-8 data over Ocean

    NASA Astrophysics Data System (ADS)

    Lee, Kwon Ho

    2016-04-01

    This paper presents an algorithm under development for aerosol detection and retrieval over ocean for the next-generation geostationary satellite HIMAWARI-8. Enhanced geostationary remote sensing observations now enable retrieval of dust, smoke, and ash aerosols, opening a new era of geostationary aerosol observations. Sixteen channels of the Advanced HIMAWARI Imager (AHI) onboard HIMAWARI-8 offer capabilities for aerosol remote sensing similar to those currently provided by the Moderate Resolution Imaging Spectroradiometer (MODIS). Aerosols are detected from visible and infrared channel radiances, and retrieved by inversion-optimization of satellite-observed radiances against those calculated with a radiative transfer model. The retrievals are performed operationally every ten minutes for pixel sizes of ~8 km. The algorithm uses a multichannel approach to estimate the effective radius and aerosol optical depth (AOD) simultaneously. The instantaneous retrieved AOD is evaluated against the MODIS level 2 operational aerosol products (C006), and the daily retrieved AOD is compared with ground-based measurements from the AERONET databases. The results show that the aerosol detection and the estimated AOD are in good agreement with the MODIS data and ground measurements, with a correlation coefficient of ˜0.90 and a bias of 4%. These results suggest that the proposed method applied to HIMAWARI-8 satellite data can accurately estimate continuous AOD. Acknowledgments: This work was supported by the "Development of Geostationary Meteorological Satellite Ground Segment (NMSC-2014-01)" program funded by the National Meteorological Satellite Centre (NMSC) of the Korea Meteorological Administration (KMA).
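
    The retrieval step sketched above matches observed multichannel reflectances against radiative-transfer calculations; a common way to implement this is a look-up-table search that minimizes the misfit over candidate AOD values. The code below shows only that generic LUT-matching idea with made-up reflectance values; it is not the HIMAWARI-8 operational algorithm.

```python
import numpy as np

def retrieve_aod_from_lut(observed_refl, lut_aod, lut_refl):
    """Pick the AOD whose precomputed multichannel reflectance best matches the observation.

    lut_aod  : (n_aod,) candidate aerosol optical depths.
    lut_refl : (n_aod, n_channels) reflectances simulated by a radiative transfer model.
    """
    cost = np.sum((lut_refl - observed_refl) ** 2, axis=1)
    return lut_aod[np.argmin(cost)]

# Hypothetical 2-channel LUT: reflectance grows roughly linearly with AOD over ocean.
aod_grid = np.linspace(0.0, 2.0, 41)
lut = np.stack([0.02 + 0.10 * aod_grid, 0.01 + 0.06 * aod_grid], axis=1)
obs = np.array([0.065, 0.037])  # simulated observation corresponding to AOD ~ 0.45
print("retrieved AOD:", retrieve_aod_from_lut(obs, aod_grid, lut))
```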

  2. Characterization of Pixelated Cadmium-Zinc-Telluride Detectors for Astrophysical Applications

    NASA Technical Reports Server (NTRS)

    Gaskin, Jessica; Sharma, Dharma; Ramsey, Brian; Seller, Paul

    2003-01-01

    Comparisons of charge sharing and charge loss measurements between two pixelated Cadmium-Zinc-Telluride (CdZnTe) detectors are discussed. These properties along with the detector geometry help to define the limiting energy resolution and spatial resolution of the detector in question. The first detector consists of a 1-mm-thick piece of CdZnTe sputtered with a 4x4 array of pixels with pixel pitch of 750 microns (inter-pixel gap is 100 microns). Signal readout is via discrete ultra-low-noise preamplifiers, one for each of the 16 pixels. The second detector consists of a 2-mm-thick piece of CdZnTe sputtered with a 16x16 array of pixels with a pixel pitch of 300 microns (inter-pixel gap is 50 microns). This crystal is bonded to a custom-built readout chip (ASIC) providing all front-end electronics to each of the 256 independent pixels. These detectors act as precursors to that which will be used at the focal plane of the High Energy Replicated Optics (HERO) telescope currently being developed at Marshall Space Flight Center. With a telescope focal length of 6 meters, the detector needs to have a spatial resolution of around 200 microns in order to take full advantage of the HERO angular resolution. We discuss to what degree charge sharing will degrade energy resolution but will improve our spatial resolution through position interpolation.

  3. Micromirror array nanostructures for anticounterfeiting applications

    NASA Astrophysics Data System (ADS)

    Lee, Robert A.

    2004-06-01

    The optical characteristics of pixellated passive micro mirror arrays are derived and applied in the context of their use as reflective optically variable device (OVD) nanostructures for the protection of documents from counterfeiting. The traditional design variables of foil-based diffractive OVDs are shown to map to a corresponding set of design parameters for reflective optical micro mirror array (OMMA) devices. The greatly increased depth characteristics of micro mirror array OVDs provide an opportunity for directly printing the OVD microstructure onto the security document in line with the normal printing process. The micro mirror array OVD architecture therefore eliminates the need for hot stamping foil as the carrier of the OVD information, thereby reducing costs. The origination of micro mirror array devices via a palette-based data format and a combination of electron beam lithography and photolithography techniques is discussed via an artwork example and experimental tests. Finally, the application of the technology to the design of a generic class of devices, which have the interesting property of allowing both application- and customer-specific OVD image encoding and data encoding at the end-user stage of production, is described. Because of the end-user nature of the image and data encoding process, these devices are particularly well suited to ID document applications, and for this reason we refer to this new OVD concept as biometric OVD technology.

  4. Portable and cost-effective pixel super-resolution on-chip microscope for telemedicine applications.

    PubMed

    Bishara, Waheb; Sikora, Uzair; Mudanyali, Onur; Su, Ting-Wei; Yaglidere, Oguzhan; Luckhart, Shirley; Ozcan, Aydogan

    2011-01-01

    We report a field-portable lensless on-chip microscope with a lateral resolution of <1 μm and a large field-of-view of ~24 mm(2). This microscope is based on digital in-line holography and a pixel super-resolution algorithm to process multiple lensfree holograms and obtain a single high-resolution hologram. In its compact and cost-effective design, we utilize 23 light emitting diodes butt-coupled to 23 multi-mode optical fibers, and a simple optical filter, with no moving parts. Weighing only ~95 grams, we demonstrate the performance of this field-portable microscope by imaging various objects including human malaria parasites in thin blood smears.

  5. A noiseless, kHz frame rate imaging detector for AO wavefront sensors based on MCPs read out with the Medipix2 CMOS pixel chip

    NASA Astrophysics Data System (ADS)

    Vallerga, J. V.; McPhate, J. B.; Tremsin, A. S.; Siegmund, O. H. W.; Mikulec, B.; Clark, A. G.

    2004-12-01

    Future wavefront sensors in adaptive optics (AO) systems for the next generation of large telescopes (> 30 m diameter) will require large formats (512x512), kHz frame rates, low readout noise (<3 electrons) and high optical QE. The current generation of CCDs cannot achieve the first three of these specifications simultaneously. We present a detector scheme that can meet the first three requirements with an optical QE > 40%. This detector consists of a vacuum tube with a proximity-focused GaAs photocathode whose photoelectrons are amplified by microchannel plates and the resulting output charge cloud counted by a pixelated CMOS application-specific integrated circuit (ASIC) called the Medipix2 (http://medipix.web.cern.ch/MEDIPIX/). Each 55 micron square pixel of the Medipix2 chip has an amplifier, discriminator and 14 bit counter, and the 256x256 array can be read out in 287 microseconds. The chip is three-side abuttable, so a 512x512 array is feasible in one vacuum tube. We will present the first results with an open-faced, demountable version of the detector where we have mounted a pair of MCPs 500 microns above a Medipix2 readout inside a vacuum chamber and illuminated it with UV light. The results include: flat field response, spatial resolution, spatial linearity on the sub-pixel level and global event counting rate. We will also discuss the vacuum tube design and the fabrication issues associated with the Medipix2 surviving the tube making process.

  6. 3-D Spatial Resolution of 350 μm Pitch Pixelated CdZnTe Detectors for Imaging Applications.

    PubMed

    Yin, Yongzhi; Chen, Ximeng; Wu, Heyu; Komarov, Sergey; Garson, Alfred; Li, Qiang; Guo, Qingzhen; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan

    2013-02-01

    We are currently investigating the feasibility of using highly pixelated Cadmium Zinc Telluride (CdZnTe) detectors for sub-500 μm resolution PET imaging applications. A 20 mm × 20 mm × 5 mm CdZnTe substrate was fabricated with 350 μm pitch pixels (250 μm anode pixels with 100 μm gap) and a coplanar cathode. Charge sharing among the pixels of a 350 μm pitch detector was studied using collimated 122 keV and 511 keV gamma ray sources. For a 350 μm pitch CdZnTe detector, scatter plots of the charge signal of two neighboring pixels clearly show more charge sharing when the collimated beam hits the gap between adjacent pixels. Using collimated Co-57 and Ge-68 sources, we measured the count profiles and estimated the intrinsic spatial resolution of the 350 μm pitch detector biased at -1000 V. Depth of interaction was analyzed based on two methods, i.e., cathode/anode ratio and electron drift time, in both 122 keV and 511 keV measurements. For single-pixel photopeak events, a linear correlation between cathode/anode ratio and electron drift time was shown, which would be useful for estimating the DOI information and preserving image resolution in CdZnTe PET imaging applications.
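
    The linear correlation between the cathode/anode ratio and drift time reported above is what makes ratio-based DOI estimation possible in pixelated CdZnTe: the small-pixel anode signal is nearly depth-independent while the cathode signal scales with drift distance. The sketch below shows that linear mapping; the calibration endpoints are hypothetical, not values from the paper.

```python
import numpy as np

def doi_from_cathode_anode_ratio(cathode, anode, thickness_mm=5.0,
                                 ratio_near_anode=0.1, ratio_near_cathode=1.0):
    """Map the cathode/anode signal ratio linearly onto interaction depth.

    In a pixelated CdZnTe detector the anode signal is nearly depth-independent
    (small-pixel effect) while the cathode signal scales with the electron drift
    distance, so the ratio rises roughly linearly from anode to cathode.
    The calibration endpoints here are illustrative, not measured values.
    """
    r = np.asarray(cathode, dtype=float) / np.asarray(anode, dtype=float)
    depth = (r - ratio_near_anode) / (ratio_near_cathode - ratio_near_anode) * thickness_mm
    return np.clip(depth, 0.0, thickness_mm)

# Example: an event with cathode/anode ratio 0.55 in a 5 mm thick crystal (~2.5 mm depth).
print(doi_from_cathode_anode_ratio(55.0, 100.0))
```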

  7. 3-D Spatial Resolution of 350 μm Pitch Pixelated CdZnTe Detectors for Imaging Applications

    PubMed Central

    Yin, Yongzhi; Chen, Ximeng; Wu, Heyu; Komarov, Sergey; Garson, Alfred; Li, Qiang; Guo, Qingzhen; Krawczynski, Henric; Meng, Ling-Jian; Tai, Yuan-Chuan

    2016-01-01

    We are currently investigating the feasibility of using highly pixelated Cadmium Zinc Telluride (CdZnTe) detectors for sub-500 μm resolution PET imaging applications. A 20 mm × 20 mm × 5 mm CdZnTe substrate was fabricated with 350 μm pitch pixels (250 μm anode pixels with 100 μm gap) and coplanar cathode. Charge sharing among the pixels of a 350 μm pitch detector was studied using collimated 122 keV and 511 keV gamma ray sources. For a 350 μm pitch CdZnTe detector, scatter plots of the charge signal of two neighboring pixels clearly show more charge sharing when the collimated beam hits the gap between adjacent pixels. Using collimated Co-57 and Ge-68 sources, we measured the count profiles and estimated the intrinsic spatial resolution of 350 μm pitch detector biased at −1000 V. Depth of interaction was analyzed based on two methods, i.e., cathode/anode ratio and electron drift time, in both 122 keV and 511 keV measurements. For single-pixel photopeak events, a linear correlation between cathode/anode ratio and electron drift time was shown, which would be useful for estimating the DOI information and preserving image resolution in CdZnTe PET imaging applications. PMID:28250476

  8. A Framework Based on 2-D Taylor Expansion for Quantifying the Impacts of Sub-Pixel Reflectance Variance and Covariance on Cloud Optical Thickness and Effective Radius Retrievals Based on the Bi-Spectral Method

    NASA Technical Reports Server (NTRS)

    Zhang, Z.; Werner, F.; Cho, H. -M.; Wind, G.; Platnick, S.; Ackerman, A. S.; Di Girolamo, L.; Marshak, A.; Meyer, Kerry

    2016-01-01

    The bi-spectral method retrieves cloud optical thickness (τ) and cloud droplet effective radius (re) simultaneously from a pair of cloud reflectance observations, one in a visible or near-infrared (VISNIR) band and the other in a shortwave infrared (SWIR) band. A cloudy pixel is usually assumed to be horizontally homogeneous in the retrieval. Ignoring sub-pixel variations of cloud reflectances can lead to a significant bias in the retrieved τ and re. In the literature, the retrievals of τ and re are often assumed to be independent and considered separately when investigating the impact of sub-pixel cloud reflectance variations on the bi-spectral method. As a result, the impact on τ is contributed only by the sub-pixel variation of VISNIR band reflectance and the impact on re only by the sub-pixel variation of SWIR band reflectance. In our new framework, we use the Taylor expansion of a two-variable function to understand and quantify the impacts of sub-pixel variances of VISNIR and SWIR cloud reflectances and their covariance on the τ and re retrievals. This framework takes into account the fact that the retrievals are determined by both VISNIR and SWIR band observations in a mutually dependent way. In comparison with previous studies, it provides a more comprehensive understanding of how sub-pixel cloud reflectance variations impact the τ and re retrievals based on the bi-spectral method. In particular, our framework provides a mathematical explanation of how the sub-pixel variation in the VISNIR band influences the re retrieval and why it can sometimes outweigh the influence of variations in the SWIR band and dominate the error in re retrievals, leading to a potential contribution of positive bias to the re retrieval. We test our framework using synthetic cloud fields from a large-eddy simulation and real observations from the Moderate Resolution Imaging Spectroradiometer. The predicted results based on our framework agree very well with the numerical simulations. Our framework can be used to estimate the retrieval uncertainty from sub-pixel reflectance variations in operational satellite cloud products and to help understand the differences in τ and re retrievals between two instruments.
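
    The heart of the framework is a second-order Taylor expansion of the retrieval as a function of the two band reflectances, so the heterogeneity-induced bias is predicted from the sub-pixel variances and covariance together with second derivatives of the retrieval. The sketch below evaluates that bias formula for a generic retrieval function by finite differences; the toy retrieval is invented solely to make the example runnable and is not the MODIS look-up table.

```python
import numpy as np

def subpixel_bias(retrieval, r_vis, r_swir, var_vis, var_swir, cov, h=1e-4):
    """Second-order Taylor estimate of the bias in a bispectral retrieval.

    bias ~= 0.5 * d2f/dR_vis^2 * Var(R_vis) + 0.5 * d2f/dR_swir^2 * Var(R_swir)
            + d2f/(dR_vis dR_swir) * Cov(R_vis, R_swir),
    where f(R_vis, R_swir) is the retrieval (e.g. tau or r_e) evaluated at the
    pixel-mean reflectances; derivatives are central finite differences.
    """
    f = retrieval
    d2_vis = (f(r_vis + h, r_swir) - 2 * f(r_vis, r_swir) + f(r_vis - h, r_swir)) / h**2
    d2_swir = (f(r_vis, r_swir + h) - 2 * f(r_vis, r_swir) + f(r_vis, r_swir - h)) / h**2
    d2_cross = (f(r_vis + h, r_swir + h) - f(r_vis + h, r_swir - h)
                - f(r_vis - h, r_swir + h) + f(r_vis - h, r_swir - h)) / (4 * h**2)
    return 0.5 * d2_vis * var_vis + 0.5 * d2_swir * var_swir + d2_cross * cov

# Toy nonlinear "retrieval" standing in for tau(R_vis, R_swir):
toy_tau = lambda rv, rs: 30.0 * rv / (1.0 - 0.8 * rv) + 2.0 * rs
print(subpixel_bias(toy_tau, 0.5, 0.3, var_vis=0.01, var_swir=0.004, cov=0.003))
```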

  9. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

    A 20 MHz-switching, high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the time-of-flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, a so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure having diffractive mirrors and an optical resonance cavity which maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated realizing low resistance-capacitance cell structures with a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The suggested novel optical shutter device enables capture of a full HD depth image with mm-scale depth accuracy, which is the largest depth-image resolution among the state of the art, previously limited to VGA. The 3D camera prototype realizes a color/depth concurrent sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth images and their capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype and image test results.

  10. A comparison of Aqua MODIS ice and liquid water cloud physical and optical properties between collection 6 and collection 5.1: Pixel-to-pixel comparisons

    NASA Astrophysics Data System (ADS)

    Yi, Bingqi; Rapp, Anita D.; Yang, Ping; Baum, Bryan A.; King, Michael D.

    2017-04-01

    We compare differences in ice and liquid water cloud physical and optical properties between Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) collection 6 (C6) and collection 5.1 (C51). The C6 cloud products changed significantly due to improved calibration, improvements based on comparisons with the Cloud-Aerosol Lidar with Orthogonal Polarization, treatment of subpixel liquid water clouds, introduction of a roughened ice habit for C6 rather than the use of smooth ice particles in C51, and more. The MODIS cloud products form a long-term data set for analysis, modeling, and various purposes. Thus, it is important to understand the impact of the changes. Two cases are considered for C6 to C51 comparisons. Case 1 considers pixels with valid cloud retrievals in both C6 and C51, while case 2 compares all valid cloud retrievals in each collection. One year (2012) of level-2 MODIS cloud products are examined, including cloud effective radius (CER), optical thickness (COT), water path, cloud top pressure (CTP), cloud top temperature, and cloud fraction. Large C6-C51 differences are found in the ice CER (regionally, as large as 15 μm) and COT (decrease in annual average by approximately 25%). Liquid water clouds have higher CTP in marine stratocumulus regions in C6 but lower CTP globally (-5 hPa), and there are 66% more valid pixels in C6 (case 2) due to the treatment of pixels with subpixel clouds. Simulated total cloud radiative signatures from C51 and C6 are compared to Clouds and the Earth's Radiant Energy System Energy Balanced And Filled (EBAF) product. The C6 CREs compare more closely with the EBAF than the C51 counterparts.

  11. Single-pixel computational ghost imaging with helicity-dependent metasurface hologram.

    PubMed

    Liu, Hong-Chao; Yang, Biao; Guo, Qinghua; Shi, Jinhui; Guan, Chunying; Zheng, Guoxing; Mühlenbernd, Holger; Li, Guixin; Zentgraf, Thomas; Zhang, Shuang

    2017-09-01

    Different optical imaging techniques are based on different characteristics of light. By controlling the abrupt phase discontinuities with different polarized incident light, a metasurface can host a phase-only and helicity-dependent hologram. In contrast, ghost imaging (GI) is an indirect imaging modality to retrieve the object information from the correlation of the light intensity fluctuations. We report single-pixel computational GI with a high-efficiency reflective metasurface in both simulations and experiments. Playing a fascinating role in switching the GI target with different polarized light, the metasurface hologram generates helicity-dependent reconstructed ghost images and successfully introduces an additional security lock in a proposed optical encryption scheme based on the GI. The robustness of our encryption scheme is further verified with the vulnerability test. Building the first bridge between the metasurface hologram and the GI, our work paves the way to integrate their applications in the fields of optical communications, imaging technology, and security.

  12. CAOS-CMOS camera.

    PubMed

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photodetector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.

  13. High performance optical payloads for microsatellites

    NASA Astrophysics Data System (ADS)

    Geyl, Roland; Rodolfo, Jacques; Girault, Jean-Philippe

    2017-09-01

    Safran presents two concepts of optical payloads for microsatellites combining high performance and an extremely compact volume. The first offers a 10 m Ground Sampling Distance (GSD) over a 60x40 km2 area from a 600 km orbit, optimized for twilight conditions. The second offers a much higher resolution of 1.8 m over an 11x7.5 km2 area from the same 600 km orbit. The two concepts are based on advanced, innovative, diffraction-limited optical systems packaged in a very compact volume smaller than 8U (200x200x200 mm), making them an ideal solution for 15-100 kg microsatellites. The maximum number of pixels is served to the end-user space imagery community thanks to 35 mm full-frame sensors offering, as of today, 6000x4000 pixels. Up to 10 spectral bands from 475 to 900 nm can be offered thanks to 2D structured filters.

  14. Single-pixel computational ghost imaging with helicity-dependent metasurface hologram

    PubMed Central

    Liu, Hong-Chao; Yang, Biao; Guo, Qinghua; Shi, Jinhui; Guan, Chunying; Zheng, Guoxing; Mühlenbernd, Holger; Li, Guixin; Zentgraf, Thomas; Zhang, Shuang

    2017-01-01

    Different optical imaging techniques are based on different characteristics of light. By controlling the abrupt phase discontinuities with different polarized incident light, a metasurface can host a phase-only and helicity-dependent hologram. In contrast, ghost imaging (GI) is an indirect imaging modality to retrieve the object information from the correlation of the light intensity fluctuations. We report single-pixel computational GI with a high-efficiency reflective metasurface in both simulations and experiments. Playing a fascinating role in switching the GI target with different polarized light, the metasurface hologram generates helicity-dependent reconstructed ghost images and successfully introduces an additional security lock in a proposed optical encryption scheme based on the GI. The robustness of our encryption scheme is further verified with the vulnerability test. Building the first bridge between the metasurface hologram and the GI, our work paves the way to integrate their applications in the fields of optical communications, imaging technology, and security. PMID:28913433

  15. Compressed single pixel imaging in the spatial frequency domain

    PubMed Central

    Torabzadeh, Mohammad; Park, Il-Yong; Bartels, Randy A.; Durkin, Anthony J.; Tromberg, Bruce J.

    2017-01-01

    We have developed compressed sensing single pixel spatial frequency domain imaging (cs-SFDI) to characterize tissue optical properties over a wide field of view (35 mm × 35 mm) using multiple near-infrared (NIR) wavelengths simultaneously. Our approach takes advantage of the relatively sparse spatial content required for mapping tissue optical properties at length scales comparable to the transport scattering length in tissue (ltr ∼ 1 mm) and the high bandwidth available for spectral encoding using a single-element detector. cs-SFDI recovered absorption (μa) and reduced scattering (μs′) coefficients of a tissue phantom at three NIR wavelengths (660, 850, and 940 nm) within 7.6% and 4.3% of the absolute values determined using camera-based SFDI, respectively. These results suggest that cs-SFDI can be developed as a multi- and hyperspectral imaging modality for quantitative, dynamic imaging of tissue optical and physiological properties. PMID:28300272

  16. Fabrication of fully transparent nanowire transistors for transparent and flexible electronics

    NASA Astrophysics Data System (ADS)

    Ju, Sanghyun; Facchetti, Antonio; Xuan, Yi; Liu, Jun; Ishikawa, Fumiaki; Ye, Peide; Zhou, Chongwu; Marks, Tobin J.; Janes, David B.

    2007-06-01

    The development of optically transparent and mechanically flexible electronic circuitry is an essential step in the effort to develop next-generation display technologies, including `see-through' and conformable products. Nanowire transistors (NWTs) are of particular interest for future display devices because of their high carrier mobilities compared with bulk or thin-film transistors made from the same materials, the prospect of processing at low temperatures compatible with plastic substrates, as well as their optical transparency and inherent mechanical flexibility. Here we report fully transparent In2O3 and ZnO NWTs fabricated on both glass and flexible plastic substrates, exhibiting high-performance n-type transistor characteristics with ~82% optical transparency. These NWTs should be attractive as pixel-switching and driving transistors in active-matrix organic light-emitting diode (AMOLED) displays. The transparency of the entire pixel area should significantly enhance aperture ratio efficiency in active-matrix arrays and thus substantially decrease power consumption.

  17. Fabrication of fully transparent nanowire transistors for transparent and flexible electronics.

    PubMed

    Ju, Sanghyun; Facchetti, Antonio; Xuan, Yi; Liu, Jun; Ishikawa, Fumiaki; Ye, Peide; Zhou, Chongwu; Marks, Tobin J; Janes, David B

    2007-06-01

    The development of optically transparent and mechanically flexible electronic circuitry is an essential step in the effort to develop next-generation display technologies, including 'see-through' and conformable products. Nanowire transistors (NWTs) are of particular interest for future display devices because of their high carrier mobilities compared with bulk or thin-film transistors made from the same materials, the prospect of processing at low temperatures compatible with plastic substrates, as well as their optical transparency and inherent mechanical flexibility. Here we report fully transparent In(2)O(3) and ZnO NWTs fabricated on both glass and flexible plastic substrates, exhibiting high-performance n-type transistor characteristics with approximately 82% optical transparency. These NWTs should be attractive as pixel-switching and driving transistors in active-matrix organic light-emitting diode (AMOLED) displays. The transparency of the entire pixel area should significantly enhance aperture ratio efficiency in active-matrix arrays and thus substantially decrease power consumption.

  18. LCoS-SLM technology based on Digital Electro-optics Platform and using in dynamic optics for application development

    NASA Astrophysics Data System (ADS)

    Tsai, Chun-Wei; Wang, Chen; Lyu, Bo-Han; Chu, Chen-Hsien

    2017-08-01

    The Digital Electro-optics Platform is the main concept of Jasper Display Corp. (JDC) for developing various applications. These applications are based on our X-on-Silicon technologies; for example, X-on-Silicon technologies can be applied to Liquid Crystal on Silicon (LCoS), Micro Light-Emitting Diode on Silicon (μLEDoS), Organic Light-Emitting Diode on Silicon (OLEDoS), and Cell on Silicon (CELLoS), etc. LCoS technology is applied to Spatial Light Modulators (SLM), Dynamic Optics, Wavelength Selective Switches (WSS), Holographic Display, Microscopy, Bio-tech, 3D Printing and Adaptive Optics, etc. In addition, μLEDoS technology is applied to Augmented Reality (AR), Head-Up Displays (HUD), Head-Mounted Displays (HMD), and Wearable Devices. Liquid Crystal on Silicon Spatial Light Modulators (LCoS-SLM), based on JDC's On-Silicon technology for both amplitude and phase modulation, have an expanding role in several optical areas where light control on a pixel-by-pixel basis is critical for optimum system performance. By combining the advantages of hardware and software, we can establish "dynamic optics" for the above applications and more. Moreover, through the software operation, we can control the light more flexibly and easily as a programmable light processor.

  19. The DEPFET Sensor-Amplifier Structure: A Method to Beat 1/f Noise and Reach Sub-Electron Noise in Pixel Detectors

    PubMed Central

    Lutz, Gerhard; Porro, Matteo; Aschauer, Stefan; Wölfel, Stefan; Strüder, Lothar

    2016-01-01

    Depleted field effect transistors (DEPFET) are used to achieve very low noise signal charge readout with sub-electron measurement precision. This is accomplished by repeatedly reading an identical charge, thereby suppressing not only the white serial noise but also the usually constant 1/f noise. The repetitive non-destructive readout (RNDR) DEPFET is an ideal central element for an active pixel sensor (APS) pixel. The theory has been derived thoroughly and results have been verified on RNDR-DEPFET prototypes. A charge measurement precision of 0.18 electrons has been achieved. The device is well-suited for spectroscopic X-ray imaging and for optical photon counting in pixel sensors, even at high photon numbers in the same cell. PMID:27136549
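
    The sub-electron precision quoted above follows from averaging many non-destructive reads of the same charge, so the effective read noise falls as one over the square root of the number of reads. The short sketch below checks that scaling numerically with synthetic numbers chosen to land near the quoted 0.18-electron figure; the per-read noise and read count are assumptions, not values from the paper.

```python
import numpy as np

def rndr_precision(single_read_noise_e, n_reads):
    """Effective charge-measurement noise after averaging n non-destructive reads."""
    return single_read_noise_e / np.sqrt(n_reads)

# Monte Carlo check: a 10-electron signal read 300 times with 3 e- rms noise per read.
rng = np.random.default_rng(2)
reads = 10.0 + rng.normal(0.0, 3.0, size=(20_000, 300))
estimates = reads.mean(axis=1)
print("predicted rms:", rndr_precision(3.0, 300))   # ~0.17 e-
print("simulated rms:", estimates.std())
```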

  20. Performance of a 512 x 512 Gated CMOS Imager with a 250 ps Exposure Time

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teruya, A T; Moody, J D; Hsing, W W

    2012-10-01

    We describe the performance of a 512x512 gated CMOS read out integrated circuit (ROIC) with a 250 ps exposure time. A low-skew, H-tree trigger distribution system is used to locally generate individual pixel gates in each 8x8 neighborhood of the ROIC. The temporal width of the gate is voltage controlled and user selectable via a precision potentiometer. The gating implementation was first validated in optical tests of a 64x64 pixel prototype ROIC developed as a proof-of-concept during the early phases of the development program. The layout of the H-Tree addresses each quadrant of the ROIC independently and admits operation of the ROIC in two modes. If "common mode" triggering is used, the camera provides a single 512x512 image. If independent triggers are used, the camera can provide up to four 256x256 images with a frame separation set by the trigger intervals. The ROIC design includes small (sub-pixel) optical photodiode structures to allow test and characterization of the ROIC using optical sources prior to bump bonding. Reported test results were obtained using short pulse, second harmonic Ti:Sapphire laser systems operating at λ ~ 400 nm at sub-ps pulse widths.

  1. Optical-domain subsampling for data efficient depth ranging in Fourier-domain optical coherence tomography

    PubMed Central

    Siddiqui, Meena; Vakoc, Benjamin J.

    2012-01-01

    Recent advances in optical coherence tomography (OCT) have led to higher-speed sources that support imaging over longer depth ranges. Limitations in the bandwidth of state-of-the-art acquisition electronics, however, prevent adoption of these advances into the clinical applications. Here, we introduce optical-domain subsampling as a method for imaging at high-speeds and over extended depth ranges but with a lower acquisition bandwidth than that required using conventional approaches. Optically subsampled laser sources utilize a discrete set of wavelengths to alias fringe signals along an extended depth range into a bandwidth limited frequency window. By detecting the complex fringe signals and under the assumption of a depth-constrained signal, optical-domain subsampling enables recovery of the depth-resolved scattering signal without overlapping artifacts from this bandwidth-limited window. We highlight key principles behind optical-domain subsampled imaging, and demonstrate this principle experimentally using a polygon-filter based swept-source laser that includes an intra-cavity Fabry-Perot (FP) etalon. PMID:23038343

  2. Smoke Over Haze: Comparative Analysis of Satellite, Surface Radiometer and Airborne In-Situ Measurements of Aerosol Optical Properties and Radiative Forcing Over the Eastern US

    NASA Astrophysics Data System (ADS)

    vant-Hull, B.; Li, Z.; Taubman, B.; Marufu, L.; Levy, R.; Chang, F.; Doddridge, B.; Dickerson, R.

    2004-12-01

    In July 2002 Canadian forest fires produced a major smoke episode that blanketed the U.S. East Coast. Properties of the smoke aerosol were measured in-situ from aircraft, complementing operational AERONET and MODIS remote sensed aerosol retrievals. This study compares single scattering albedo and phase function derived from the in-situ measurements and AERONET retrievals in order to evaluate their consistency for application to satellite retrievals of optical depth and radiative forcing. These optical properties were combined with MODIS reflectance observations to calculate optical depth. The use of AERONET optical properties yielded optical depths 2% to 16% lower than those directly measured by AERONET. The use of in-situ derived optical properties resulted in optical depths 22% to 43% higher than AERONET measurements. These higher optical depths are attributed primarily to the higher absorption measured in-situ, which is roughly twice that retrieved by AERONET. The resulting satellite retrieved optical depths were in turn used to calculate integrated radiative forcing at both the surface and TOA. Comparisons to surface (SurfRad and ISIS) and to satellite (CERES) broadband radiometer measurements demonstrate that the use of optical properties derived from the aircraft measurements provided a better broadband forcing estimate (21% error) than those derived from AERONET (33% error). Thus AERONET derived optical properties produced better fits to optical depth measurements, while in-situ properties resulted in better fits to forcing measurements. These apparent inconsistencies underline the significant challenges facing the aerosol community in achieving column closure between narrow and broadband measurements and calculations.

  3. Resonant cavity enhanced multi-analyte sensing

    NASA Astrophysics Data System (ADS)

    Bergstein, David Alan

    Biological research and medicine increasingly depend on interrogating binding interactions among small segments of DNA, RNA, protein, and bio-specific small molecules. Microarray technology, which senses the affinity for target molecules in solution for a multiplicity of capturing agents fixed to a surface, has been used in biological research for gene expression profiling and in medicine for molecular biomarker detection. Label-free affinity sensing is preferable as it avoids fluorescent labeling of the target molecules, reducing test cost and variability. The Resonant Cavity Imaging Biosensor (RCIB) introduced here is a label-free, optical-interference-based technique that scales readily to high throughput and employs an optical resonant cavity to enhance sensitivity by a factor of 100 or more. Near-infrared light centered at 1512.5 nm couples resonantly through a cavity constructed from Si/SiO2 Bragg reflectors, one of which serves as the binding surface. As the wavelength is swept 5 nm, an Indium-Gallium-Arsenide digital camera monitors cavity transmittance at each pixel with a resolution of 128 x 128. A wavelength shift in the local resonant response of the optical cavity indicates binding. By positioning the sensing surface with respect to the standing wave pattern of the electric field within the cavity, one can control the sensitivity of the measurement to the presence of bound molecules, thereby enhancing or suppressing sensitivity where appropriate. Transmitted intensities at thousands of pixel locations are recorded simultaneously in a 10 s, 5 nm scan. An initial proof-of-principle setup was constructed. A sample was fabricated with 25, 100 μm wide square regions, each with a different density of 1 μm square depressions etched 12 nm into the SiO2 surface. The average depth of each etched region was found with 0.05 nm RMS precision when the sample remains loaded in the setup and 0.3 nm RMS precision when the sample is removed and replaced. Selective binding of the protein avidin to biotin-conjugated bovine serum albumin was demonstrated with 50 pg/mm2 sensitivity. Analysis and discussion of these results provide a path toward improved performance.

  4. New developments for determination of uncertainty in phase evaluation

    NASA Astrophysics Data System (ADS)

    Liu, Sheng

    Phase evaluation exists mostly in, but is not limited to, interferometric applications that utilize coherent multidimensional signals to modulate the physical quantity of interest into a nonlinear form, represented by the phase repeating modulo 2π radians. In order to estimate the underlying physical quantity, the wrapped phase has to be unwrapped by an evaluation procedure which is usually called phase unwrapping. The procedure of phase unwrapping inevitably faces the challenge of inconsistent phase, which can introduce errors into the phase evaluation. The main objectives of this research include addressing the problem of inconsistent phase in phase unwrapping and applications in modern optical techniques. In this research, a new phase unwrapping algorithm is developed. The idea of performing phase unwrapping between regions has an advantage over conventional pixel-to-pixel unwrapping methods because the unwrapping result is made more consistent by a voting mechanism based on all 2π-discontinuity hints. Furthermore, a systematic sequence of regional unwrapping is constructed in order to achieve a globally consistent result. An implementation of the idea is illustrated in detail with step-by-step pseudocode. The performance of the algorithm is demonstrated on real-world applications. In order to solve a phase unwrapping problem caused by depth discontinuities in 3D shape measurement, a new absolute phase coding strategy is developed. The algorithm presented has two merits: it effectively extends the coding range and preserves the measurement sensitivity. The performance of the proposed absolute coding strategy is proved by results of 3D shape measurement for objects with surface discontinuities. As a powerful tool for real-world applications, a universal software package, Optical Measurement and Evaluation Software (OMES), is designed for the purposes of automatic measurement and quantitative evaluation in 3D shape measurement and laser interferometry. Combined with different sensors or setups, OMES has been successfully applied in industry, for example at GM Powertrain, Corning, and Ford Optical Lab., and used for various applications such as shape measurement, deformation/displacement measurement, strain/stress analysis, non-destructive testing, vibration/modal analysis, and biomechanics analysis.
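
    As background, elementary one-dimensional phase unwrapping adds or subtracts multiples of 2π whenever the difference between neighbouring samples leaves the (-π, π] interval; the dissertation's region-based algorithm generalizes this to 2D with a voting scheme over discontinuity hints. The sketch below shows only the elementary 1-D step, not the regional algorithm.

```python
import numpy as np

def unwrap_1d(wrapped):
    """Unwrap a 1-D phase signal by removing 2*pi jumps between neighbours."""
    wrapped = np.asarray(wrapped, dtype=float)
    diffs = np.diff(wrapped)
    # Wrap each neighbour difference back into (-pi, pi], then re-integrate.
    corrected = (diffs + np.pi) % (2 * np.pi) - np.pi
    return np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(corrected)))

# Example: a linear phase ramp wrapped into (-pi, pi] and recovered exactly.
true_phase = np.linspace(0, 6 * np.pi, 50)
wrapped = np.angle(np.exp(1j * true_phase))
print(np.allclose(unwrap_1d(wrapped), true_phase))  # True (matches np.unwrap here)
```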

  5. Integrated Photonic Neural Probes for Patterned Brain Stimulation

    DTIC Science & Technology

    2017-08-14

    two-photon imaging. Task 3.2: In vivo demonstration of remote optical stimulation using photonic probes and multi-site electrical recording... have patterned nine e-pixels. We can individually address each e-pixel by tuning the color of the input light to the AWG. Figure (8) shows two... Report: Integrated Photonic Neural Probes for Patterned Brain Stimulation. The views, opinions and/or findings contained in this report are those of the

  6. Single-pixel imaging by Hadamard transform and its application for hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Mizutani, Yasuhiro; Shibuya, Kyuki; Taguchi, Hiroki; Iwata, Tetsuo; Takaya, Yasuhiro; Yasui, Takeshi

    2016-10-01

    In this paper, we report a comparison of single-pixel imaging using the Hadamard transform (HT) and ghost imaging (GI) from the viewpoint of visibility under weak-light conditions. To compare the two methods, we discuss image quality on the basis of experimental results and numerical analysis. In the HT method, images are detected by illuminating the sample with Hadamard-pattern masks and reconstructed by an orthogonal transform. The GI method, on the other hand, detects images by illuminating random patterns and performing a correlation measurement. To compare the two methods at weak light intensity, we controlled the illumination intensity of a DMD projector to a signal-to-noise ratio of about 0.1. Although the HT image is processed faster than an image obtained via GI, the GI method has an advantage for detection under weak-light conditions. An essential difference between the HT and GI methods in the reconstruction process is discussed. Finally, we also show a typical application of single-pixel imaging: hyperspectral imaging using dual optical frequency combs. The optical setup consists of two fiber lasers, a spatial light modulator for generating pattern illumination, and a single-pixel detector. We successfully detected hyperspectral images in the range from 1545 to 1555 nm at 0.01 nm resolution.
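
    A minimal sketch of the Hadamard-transform reconstruction idea, with synthetic bipolar patterns standing in for the DMD masks and a simulated bucket detector (sizes and names are assumptions, not the authors' code):

      import numpy as np
      from scipy.linalg import hadamard

      side = 16                 # image is side x side; side**2 must be a power of two
      N = side * side
      H = hadamard(N)           # +/-1 Sylvester-Hadamard patterns (each realisable as
                                # two complementary DMD masks)

      def measure(image, patterns):
          # One bucket-detector value per illumination pattern.
          return patterns @ image.ravel()

      def reconstruct(y, patterns):
          # Inverse Hadamard transform: rows are orthogonal, so H @ H.T = N * I.
          return (patterns.T @ y / len(y)).reshape(side, side)

      # Synthetic check; the experiment replaces measure() with the real detector.
      truth = np.zeros((side, side))
      truth[4:12, 6:10] = 1.0
      recovered = reconstruct(measure(truth, H), H)
      assert np.allclose(recovered, truth)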

  7. Realization of arbitrarily long focus-depth optical vortices with spiral area-varying zone plates

    NASA Astrophysics Data System (ADS)

    Zheng, Chenglong; Zang, Huaping; Du, Yanli; Tian, Yongzhi; Ji, Ziwen; Zhang, Jing; Fan, Quanping; Wang, Chuanke; Cao, Leifeng; Liang, Erjun

    2018-05-01

    We provide a methodology to realize an optical vortex with arbitrarily long focus depth. With a technique of varying each zone area of a phase spiral zone plate, one can obtain optics capable of generating an ultra-long focus-depth optical vortex from a plane wave. The focal properties of such optics were analysed using Fresnel diffraction theory, and an experimental demonstration was performed to verify its effectiveness. Such optics may bring new opportunities and benefits for optical vortex applications such as optical manipulation and lithography.

  8. Optical Design of WFIRST-AFTA Wide-Field Instrument

    NASA Technical Reports Server (NTRS)

    Pasquale, Bert; Content, Dave; Kruk, Jeffrey; Vaughn, David; Gong, Qian; Howard, Joseph; Jurling, Alden; Mentzell, Eric; Armani, Nerses; Kuan, Gary

    2014-01-01

    The WFIRST-AFTA (Wide-Field Infrared Survey Telescope) TMA optical design provides a 0.28 sq deg field of view at 0.11 arcsec pixel scale, operating between 0.6 and 2.4 μm, including a spectrograph mode (1.3-1.95 μm). An IFU provides a discrete 3 x 3.15 arcsec field at 0.15 arcsec sampling.

  9. Constraining the low-cloud optical depth feedback at middle and high latitudes using satellite observations

    DOE PAGES

    Terai, C. R.; Klein, S. A.; Zelinka, M. D.

    2016-08-26

    The increase in cloud optical depth with warming at middle and high latitudes is a robust cloud feedback response found across all climate models. This study builds on results that suggest the optical depth response to temperature is timescale invariant for low-level clouds. The timescale invariance allows one to use satellite observations to constrain the models' optical depth feedbacks. Three passive-sensor satellite retrievals are compared against simulations from eight models from the Atmosphere Model Intercomparison Project (AMIP) of the 5th Coupled Model Intercomparison Project (CMIP5). This study confirms that the low-cloud optical depth response is timescale invariant in the AMIP simulations, generally at latitudes higher than 40°. Compared to satellite estimates, most models overestimate the increase in optical depth with warming at the monthly and interannual timescales. Many models also do not capture the increase in optical depth with estimated inversion strength that is found in all three satellite observations and in previous studies. The discrepancy between models and satellites exists in both hemispheres and in most months of the year. A simple replacement of the models' optical depth sensitivities with the satellites' sensitivities reduces the negative shortwave cloud feedback by at least 50% in the 40°–70°S latitude band and by at least 65% in the 40°–70°N latitude band. Furthermore, based on this analysis of satellite observations, we conclude that the low-cloud optical depth feedback at middle and high latitudes is likely too negative in climate models.

  10. Constraining the low-cloud optical depth feedback at middle and high latitudes using satellite observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terai, C. R.; Klein, S. A.; Zelinka, M. D.

    The increase in cloud optical depth with warming at middle and high latitudes is a robust cloud feedback response found across all climate models. This study builds on results that suggest the optical depth response to temperature is timescale invariant for low-level clouds. The timescale invariance allows one to use satellite observations to constrain the models' optical depth feedbacks. Three passive-sensor satellite retrievals are compared against simulations from eight models from the Atmosphere Model Intercomparison Project (AMIP) of the 5th Coupled Model Intercomparison Project (CMIP5). This study confirms that the low-cloud optical depth response is timescale invariant in the AMIP simulations, generally at latitudes higher than 40°. Compared to satellite estimates, most models overestimate the increase in optical depth with warming at the monthly and interannual timescales. Many models also do not capture the increase in optical depth with estimated inversion strength that is found in all three satellite observations and in previous studies. The discrepancy between models and satellites exists in both hemispheres and in most months of the year. A simple replacement of the models' optical depth sensitivities with the satellites' sensitivities reduces the negative shortwave cloud feedback by at least 50% in the 40°–70°S latitude band and by at least 65% in the 40°–70°N latitude band. Furthermore, based on this analysis of satellite observations, we conclude that the low-cloud optical depth feedback at middle and high latitudes is likely too negative in climate models.

  11. Aerosol spectral optical depths - Jet fuel and forest fire smokes

    NASA Technical Reports Server (NTRS)

    Pueschel, R. F.; Livingston, J. M.

    1990-01-01

    The Ames autotracking airborne sun photometer was used to investigate the spectral optical depth between 380 and 1020 nm of smokes from a jet fuel pool fire and a forest fire in May and August 1988, respectively. Results show that the forest fire smoke exhibited a stronger wavelength dependence of optical depth than did the jet fuel fire smoke at optical depths less than unity. At optical depths greater than or equal to 1, both smokes showed neutral wavelength dependence, similar to that of an optically thin stratus deck. These results verify findings of earlier investigations and have implications both for the climatic impact of large-scale smokes and for the wavelength-dependent transmission of electromagnetic signals.

  12. Comparative study of various pixel photodiodes for digital radiography: Junction structure, corner shape and noble window opening

    NASA Astrophysics Data System (ADS)

    Kang, Dong-Uk; Cho, Minsik; Lee, Dae Hee; Yoo, Hyunjun; Kim, Myung Soo; Bae, Jun Hyung; Kim, Hyoungtaek; Kim, Jongyul; Kim, Hyunduk; Cho, Gyuseong

    2012-05-01

    Recently, large-size 3-transistor (3-Tr) active pixel complementary metal-oxide-semiconductor (CMOS) image sensors have been used for medium-size digital X-ray radiography, such as dental computed tomography (CT), mammography, and nondestructive testing (NDT) of consumer products. We designed and fabricated 50 µm × 50 µm 3-Tr test pixels having photodiodes with various structures and shapes, using the TSMC 0.25-µm standard CMOS process, to compare their optical characteristics. The pixel photodiode output was continuously sampled while a test pixel was continuously illuminated with 550-nm light at constant intensity. The measurement was repeated 300 times for each test pixel to obtain reliable results for the mean and variance of the pixel output at each sampling time. The sampling rate was 50 kHz, and the reset period was 200 ms. To estimate the conversion gain, we used the mean-variance method. From the measured results, the n-well/p-substrate photodiode, among the three photodiode structures available in a standard CMOS process, showed the best performance at low illumination equivalent to the typical X-ray signal range. The quantum efficiencies of the n+/p-well, n-well/p-substrate, and n+/p-substrate photodiodes were 18.5%, 62.1%, and 51.5%, respectively. From a comparison of pixels with rounded and rectangular corners, we found that a rounded-corner structure can reduce the dark current in large-size pixels: a pixel with four rounded corners showed a dark current about 200 fA lower than a pixel with four rectangular corners at our pixel size. Photodiodes with round p-implant openings showed about 5% higher dark current, but about 34% higher sensitivity, than the conventional photodiodes.
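
    The conversion-gain estimate follows the standard mean-variance (photon transfer) argument: for shot-noise-limited light the output variance grows linearly with the output mean, and the slope of that line is the inverse of the gain. A minimal sketch, assuming a hypothetical array of repeated pixel readouts:

      import numpy as np

      def conversion_gain(samples):
          # samples has shape (n_repeats, n_times): the pixel output sampled at the
          # same instants over many identical illuminations.  Shot noise gives
          # var(ADU) = mean(ADU) / gain, so the slope of variance vs. mean is
          # 1/gain and the conversion gain (e-/ADU) is its reciprocal.
          means = samples.mean(axis=0)
          variances = samples.var(axis=0, ddof=1)
          slope, _ = np.polyfit(means, variances, 1)
          return 1.0 / slope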

  13. Validation of MODIS aerosol optical depth over the Mediterranean Coast

    NASA Astrophysics Data System (ADS)

    Díaz-Martínez, J. Vicente; Segura, Sara; Estellés, Víctor; Utrillas, M. Pilar; Martínez-Lozano, J. Antonio

    2013-04-01

    Atmospheric aerosols, due to their high spatial and temporal variability, are considered one of the largest sources of uncertainty in different processes affecting visibility, air quality, human health, and climate. Among their effects on climate, they play an important role in the energy balance of the Earth. On one hand they have a direct effect by scattering and absorbing solar radiation; on the other, they also have an impact on precipitation, modify clouds, and affect air quality. The application of remote sensing techniques to investigate aerosol effects on climate has advanced significantly over recent years. In this work, the products employed have been obtained from the Moderate Resolution Imaging Spectroradiometer (MODIS). MODIS is a sensor located onboard both Earth Observing System (EOS) Terra and Aqua satellites, which provide almost complete global coverage every day. These satellites have been acquiring data since early 2000 (Terra) and mid 2002 (Aqua) and offer different products for land, ocean and atmosphere. Atmospheric aerosol products are presented as level 2 products with a pixel size of 10 x 10 km2 at nadir. MODIS aerosol optical depth (AOD) is retrieved by different algorithms depending on the pixel surface, distinguishing between land and ocean. For its validation, ground-based sunphotometer data from AERONET (Aerosol Robotic Network) have been employed. AERONET is an international operational network of Cimel CE318 sky-sunphotometers that provides the most extensive globally available database of ground-based aerosol measurements. The ground sunphotometric technique is considered the most accurate for the retrieval of radiative properties of aerosols in the atmospheric column. In this study we present a validation of MODIS C051 AOD employing AERONET measurements over different Mediterranean coastal sites, centered over an area of 50 x 50 km2 which includes pixels over both land and ocean. The validation is done by comparing spatial statistics from MODIS with corresponding temporal statistics from AERONET, as proposed by Ichoku et al. (2002). Eight Mediterranean coastal sites (in Spain, France, Italy, Crete, Turkey and Israel) with available AERONET and MODIS data have been used. These stations were selected following QA criteria (a minimum of 1000 days of level 2.0 data) and a maximum distance of 8 km from the coastline. Results of the validation over each site show analogous behaviour, giving similar results regarding the accuracy of the algorithms. The greatest differences are found for the AOD obtained over land, especially for drier regions, where the surface tends to be brighter. In general, the MODIS AOD has better agreement with AERONET retrievals for the ocean algorithm than for the land algorithm when validated over coastal sites, and the agreement is within the expected uncertainty estimated for MODIS data. References: C. Ichoku et al., "A spatio-temporal approach for global validation and analysis of MODIS aerosol products", Geophysical Research Letters, 29, 12, 10.1029/2001GL013206, 2002.
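
    The Ichoku et al. (2002) comparison pairs spatial statistics of the MODIS pixels surrounding the site with temporal statistics of the AERONET observations bracketing the overpass. A simplified sketch of that collocation, with assumed window sizes and variable names:

      import numpy as np

      def collocate(modis_aod, modis_dist_km, aeronet_time_h, aeronet_aod,
                    overpass_time_h, radius_km=25.0, half_window_h=0.5):
          # Spatial statistics from MODIS pixels within radius_km of the site,
          # temporal statistics from AERONET data within +/- half_window_h hours
          # of the satellite overpass.  Returns (mean, std) pairs, or None if
          # either subset is empty.
          near = modis_aod[modis_dist_km <= radius_km]
          in_window = aeronet_aod[np.abs(aeronet_time_h - overpass_time_h) <= half_window_h]
          if near.size == 0 or in_window.size == 0:
              return None
          return (near.mean(), near.std()), (in_window.mean(), in_window.std())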

  14. Numerical simulation of crosstalk in reduced pitch HgCdTe photon-trapping structure pixel arrays.

    PubMed

    Schuster, Jonathan; Bellotti, Enrico

    2013-06-17

    We have investigated crosstalk in HgCdTe photovoltaic pixel arrays employing a photon-trapping (PT) structure realized with a periodic array of pillars intended to provide broadband operation. We have found that, compared to non-PT pixel arrays with similar geometry, the array employing the PT structure has a slightly higher optical crosstalk. However, when the total crosstalk is evaluated, the presence of the PT region drastically reduces it, making the PT structure not only useful for obtaining broadband operation but also desirable for reducing crosstalk in small-pitch detector arrays.

  15. Design and fabrication of reflective spatial light modulator for high-dynamic-range wavefront control

    NASA Astrophysics Data System (ADS)

    Zhu, Hao; Bierden, Paul; Cornelissen, Steven; Bifano, Thomas; Kim, Jin-Hong

    2004-10-01

    This paper describes design and fabrication of a microelectromechanical metal spatial light modulator (SLM) integrated with complementary metal-oxide semiconductor (CMOS) electronics, for high-dynamic-range wavefront control. The metal SLM consists of a large array of piston-motion MEMS mirror segments (pixels) which can deflect up to 0.78 µm each. Both 32x32 and 150x150 arrays of the actuators (1024 and 22500 elements respectively) were fabricated onto the CMOS driver electronics and individual pixels were addressed. A new process has been developed to reduce the topography during the metal MEMS processing to fabricate mirror pixels with improved optical quality.

  16. Monocular depth perception using image processing and machine learning

    NASA Astrophysics Data System (ADS)

    Hombali, Apoorv; Gorde, Vaibhav; Deshpande, Abhishek

    2011-10-01

    This paper exploits some of the more obscure but inherent properties of the camera and image to propose a simpler and more efficient way of perceiving depth. The proposed method uses a single stationary camera at an unknown perspective and an unknown height to determine the depth of an object on unknown terrain. To achieve this, a direct correlation between a pixel in an image and the corresponding location in real space has to be formulated. First, a calibration step is undertaken whereby the equation of the plane visible in the field of view is calculated, along with the relative distance between camera and plane, using a set of derived spatial geometric relations coupled with a few intrinsic properties of the system. The depth of an unknown object is then perceived by first extracting the object under observation through a series of image processing steps and then exploiting the aforementioned mapping between pixel and real-space coordinates. The performance of the algorithm is greatly enhanced by the introduction of reinforcement learning, making the system independent of hardware and environment. Furthermore, the depth calculation function is modified with a supervised learning algorithm, giving consistent improvement in results. Thus, the system uses past experience to optimize successive runs. Using the above procedure, a series of experiments and trials is carried out to prove the concept and its efficacy.
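
    The pixel-to-real-space mapping described above amounts to back-projecting a pixel through the camera and intersecting its viewing ray with the calibrated ground plane. A minimal sketch under a pinhole-camera assumption, with hypothetical calibration values:

      import numpy as np

      def pixel_to_ground(u, v, K, plane_normal, plane_d):
          # Map pixel (u, v) to the 3D point where its viewing ray meets the
          # calibrated ground plane n.X + d = 0 (camera at the origin).
          ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing-ray direction
          t = -plane_d / (plane_normal @ ray)              # ray parameter at the plane
          return t * ray

      # Example with assumed intrinsics and plane parameters:
      K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
      contact_point = pixel_to_ground(400, 300, K, np.array([0.0, -1.0, 0.3]), 1.5)
      depth = np.linalg.norm(contact_point)                # distance from the camera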

  17. Characterization and Performance of the Cananea Near-infrared Camera (CANICA)

    NASA Astrophysics Data System (ADS)

    Devaraj, R.; Mayya, Y. D.; Carrasco, L.; Luna, A.

    2018-05-01

    We present details of the characterization and imaging performance of the Cananea Near-infrared Camera (CANICA) at the 2.1 m telescope of the Guillermo Haro Astrophysical Observatory (OAGH) located in Cananea, Sonora, México. CANICA has a HAWAII array with a HgCdTe detector of 1024 × 1024 pixels covering a field of view of 5.5 × 5.5 arcmin2 with a plate scale of 0.32 arcsec/pixel. The camera characterization involved measuring key detector parameters: conversion gain, dark current, readout noise, and linearity. The pixels in the detector have a full-well depth of 100,000 e‑, with the conversion gain measured to be 5.8 e‑/ADU. The time-dependent dark current was estimated to be 1.2 e‑/sec. Readout noise for the correlated double sampling (CDS) technique was measured to be 30 e‑/pixel. The detector shows 10% non-linearity close to the full-well depth. The non-linearity was corrected to within 1% for the CDS images. Full-field imaging performance was evaluated by measuring the point spread function, zeropoints, throughput, and limiting magnitude. The average zeropoint values in each filter are J = 20.52, H = 20.63, and K = 20.23. The saturation limit of the detector is about sixth magnitude in all the primary broadbands. CANICA on the 2.1 m OAGH telescope reaches background-limited magnitudes of J = 18.5, H = 17.6, and K = 16.0 for a signal-to-noise ratio of 10 with an integration time of 900 s.

  18. Simultaneous fluorescence and quantitative phase microscopy with single-pixel detectors

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Suo, Jinli; Zhang, Yuanlong; Dai, Qionghai

    2018-02-01

    Multimodal microscopy offers high flexibility for biomedical observation and diagnosis. Conventional multimodal approaches either use multiple cameras or a single camera spatially multiplexing different modes. The former needs expertise-demanding alignment and the latter suffers from limited spatial resolution. Here, we report an alignment-free, full-resolution, simultaneous fluorescence and quantitative phase imaging approach using single-pixel detectors. By combining reference-free interferometry with single-pixel detection, we encode the phase and fluorescence of the sample in two detection arms at the same time. Then we employ structured illumination and the correlated measurements between the sample and the illuminations for reconstruction. The recovered fluorescence and phase images are inherently aligned thanks to single-pixel detection. To validate the proposed method, we built a proof-of-concept setup, first imaging the phase of etched glass with a depth of a few hundred nanometers and then imaging the fluorescence and phase of a quantum dot drop. This method holds great potential for multispectral fluorescence microscopy with additional single-pixel detectors or a spectrometer. Besides, this cost-efficient multimodal system might find broad applications in biomedical science and neuroscience.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamamoto, Seiichi, E-mail: s-yama@met.nagoya-u.ac.jp; Okumura, Satoshi; Komori, Masataka

    We developed a prototype positron emission tomography (PET) system based on a new concept called Open-close PET, which has two modes: an open-mode and a close-mode. In the open-mode, the detector ring is separated into two halved rings, the subject is imaged through the open space, and a projection image is formed. In the close-mode, the detector ring is closed into a regular circular ring, the subject is imaged without an open space, and reconstructed images can therefore be made without artifacts. The block detector of the Open-close PET system consists of two scintillator blocks that use two types of gadolinium orthosilicate (GSO) scintillators with different decay times, angled optical-fiber-based image guides, and a flat panel photomultiplier tube. The GSO pixel size was 1.6 × 2.4 mm, with thicknesses of 7 mm and 8 mm for the fast (35 ns) and slow (60 ns) GSOs, respectively. These GSOs were arranged into an 11 × 15 matrix and optically coupled in the depth direction to form a depth-of-interaction detector. The angled optical-fiber-based image guides were used to arrange the two scintillator blocks at 22.5° so that they can be arranged in a hexadecagonal shape with eight block detectors to simplify the reconstruction algorithm. The detector ring was divided into two halves to realize the open-mode and set on a mechanical stand with which the distance between the two parts can be manually changed. The spatial resolution in the close-mode was 2.4-mm FWHM, and the sensitivity was 1.7% at the center of the field of view. In both the close- and open-modes, we made sagittal (y-z plane) projection images between the two halved detector rings. We obtained reconstructed and projection images of 18F-NaF rat studies and proton-irradiated phantom images. These results indicate that our developed Open-close PET is useful for applications such as proton therapy as well as molecular imaging.

  20. Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary

    NASA Astrophysics Data System (ADS)

    Anugu, N.; Garcia, P.

    2016-04-01

    Wavefront sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak (Poyneer 2003; Löfdahl 2010). However, the peak-finding results are usually biased towards integer pixels; these errors are called systematic bias errors (Sjödahl 1994). They are caused by the low pixel sampling of the images, and their amplitude depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed from a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola (Poyneer 2003); quadratic polynomial (Löfdahl 2010); threshold center of gravity (Bailey 2003); Gaussian (Nobach & Honkanen 2005); and pyramid (Bailey 2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis reveals that the threshold center of gravity behaves better at low SNR, although the systematic errors in the measurement are large. It is found that no algorithm is best for both systematic and RMS error reduction. To overcome this problem, a new solution is proposed in which the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its computational efficiency. In the first step, the cross-correlation is implemented at the original image spatial resolution grid (1 pixel). In the second step, the cross-correlation is performed on a sub-pixel grid, limiting the field of search to 4 × 4 pixels centered at the initial position delivered by the first step. The sub-pixel-grid region-of-interest images are generated with bi-cubic interpolation. Correlation matching on a sub-pixel grid was previously reported in electronic speckle photography (Sjödahl 1994); this technique is applied here to solar wavefront sensing. A large dynamic range and better measurement accuracy are achieved by combining correlation matching on the original pixel grid over a large field of view with correlation matching on a sub-pixel interpolated image grid within a small field of view. The results reveal that the proposed method outperforms all the peak-finding algorithms studied in the first approach. It reduces both the systematic error and the RMS error by a factor of 5 (i.e., 75% systematic error reduction) when 5-times-improved image sampling is used, at the expense of twice the computational cost. With the 5-times-improved image sampling, the wavefront accuracy is increased by a factor of 5. The proposed solution is strongly recommended for wavefront sensing in solar telescopes, particularly for measuring the large dynamic image shifts involved in open-loop adaptive optics.
Also, by choosing an appropriate increment of image sampling as a trade-off between the computational speed limitation and the targeted sub-pixel image-shift accuracy, it can be employed in closed-loop adaptive optics. The study is extended to three other classes of sub-aperture images (a point source, a laser guide star, and a Galactic Center extended scene). The results are planned to be submitted to the Optics Express journal.
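
    A compact sketch of the underlying measurement: cross-correlation of the two sub-aperture images followed by a parabolic (three-point) peak refinement, one of the peak-finding schemes compared above. This is a generic illustration, not the authors' implementation:

      import numpy as np
      from scipy.signal import fftconvolve

      def image_shift(reference, target):
          # Cross-correlate via FFT; the integer peak gives the coarse shift.
          corr = fftconvolve(target, reference[::-1, ::-1], mode='same')
          iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

          def parabola(cm, c0, cp):
              # Vertex offset of the parabola through three equally spaced samples.
              denom = cm - 2.0 * c0 + cp
              return 0.0 if denom == 0 else 0.5 * (cm - cp) / denom

          # Sub-pixel refinement along each axis (assumes the peak is not on the border).
          dy = iy - reference.shape[0] // 2 + parabola(corr[iy - 1, ix], corr[iy, ix], corr[iy + 1, ix])
          dx = ix - reference.shape[1] // 2 + parabola(corr[iy, ix - 1], corr[iy, ix], corr[iy, ix + 1])
          return dy, dx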

  1. Smoke over haze: Comparative analysis of satellite, surface radiometer, and airborne in situ measurements of aerosol optical properties and radiative forcing over the eastern United States

    NASA Astrophysics Data System (ADS)

    Vant-Hull, Brian; Li, Zhanqing; Taubman, Brett F.; Levy, Robert; Marufu, Lackson; Chang, Fu-Lung; Doddridge, Bruce G.; Dickerson, Russell R.

    2005-05-01

    In July 2002 Canadian forest fires produced a major smoke episode that blanketed the east coast of the United States. Properties of the smoke aerosol were measured in situ from aircraft, complementing operational Aerosol Robotic Network (AERONET) and Moderate Resolution Imaging Spectroradiometer (MODIS) remotely sensed aerosol retrievals. This study compares single scattering albedo and phase function derived from the in situ measurements and AERONET retrievals in order to evaluate their consistency for application to satellite retrievals of optical depth and radiative forcing. These optical properties were combined with MODIS reflectance observations to calculate optical depth. The use of AERONET optical properties yielded optical depths 2-16% lower than those directly measured by AERONET. The use of in situ-derived optical properties resulted in optical depths 22-43% higher than AERONET measurements. These higher optical depths are attributed primarily to the higher absorption measured in situ, which is roughly twice that retrieved by AERONET. The resulting satellite-retrieved optical depths were in turn used to calculate integrated radiative forcing at both the surface and the top of the atmosphere. Comparisons to surface (Surface Radiation Budget Network (SURFRAD) and ISIS) and satellite (Clouds and the Earth's Radiant Energy System, CERES) broadband radiometer measurements demonstrate that the use of optical properties derived from the aircraft measurements provided a better broadband forcing estimate (21% error) than those derived from AERONET (33% error). Thus AERONET-derived optical properties produced better fits to optical depth measurements, while in situ properties resulted in better fits to forcing measurements. These apparent inconsistencies underline the significant challenges facing the aerosol community in achieving column closure between narrowband and broadband measurements and calculations.

  2. Spatio-thermal depth correction of RGB-D sensors based on Gaussian processes in real-time

    NASA Astrophysics Data System (ADS)

    Heindl, Christoph; Pönitz, Thomas; Stübl, Gernot; Pichler, Andreas; Scharinger, Josef

    2018-04-01

    Commodity RGB-D sensors capture color images along with dense pixel-wise depth information in real-time. Typical RGB-D sensors are provided with a factory calibration and exhibit erratic depth readings due to coarse calibration values, ageing and thermal influence effects. This limits their applicability in computer vision and robotics. We propose a novel method to accurately calibrate depth considering spatial and thermal influences jointly. Our work is based on Gaussian Process Regression in a four dimensional Cartesian and thermal domain. We propose to leverage modern GPUs for dense depth map correction in real-time. For reproducibility we make our dataset and source code publicly available.
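
    A minimal sketch of the general idea, regressing the depth error on a joint spatial and thermal input with Gaussian Process Regression, here using scikit-learn and synthetic calibration data (the paper's actual kernel, inputs, and GPU implementation differ):

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(0)

      # Hypothetical calibration set: (pixel x, pixel y, raw depth [m], temperature [C])
      # versus the depth error measured against a reference plane.
      X = rng.uniform([0, 0, 0.5, 20], [640, 480, 4.0, 45], size=(500, 4))
      error = 0.01 * (X[:, 3] - 30.0) + 0.005 * X[:, 2] + rng.normal(0.0, 0.002, 500)

      gp = GaussianProcessRegressor(
          kernel=RBF(length_scale=[200.0, 200.0, 1.0, 5.0]) + WhiteKernel(),
          normalize_y=True,
      )
      gp.fit(X, error)

      # Correcting a new reading: subtract the predicted spatio-thermal error.
      sample = np.array([[320.0, 240.0, 2.0, 38.0]])
      corrected_depth = 2.0 - gp.predict(sample)[0]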

  3. 360 degree vision system: opportunities in transportation

    NASA Astrophysics Data System (ADS)

    Thibault, Simon

    2007-09-01

    Panoramic technologies are experiencing new and exciting opportunities in the transportation industries. The advantages of panoramic imagers are numerous: increased area coverage with fewer cameras, imaging of multiple targets simultaneously, instantaneous full-horizon detection, easier integration of various applications on the same imager, and others. This paper reports our work on panomorph optics and potential usage in transportation applications. The novel panomorph lens is a new type of high-resolution panoramic imager perfectly suited to the transportation industries. The panomorph lens uses optimization techniques to improve the performance of a customized optical system for specific applications. By adding a custom angle-to-pixel relation at the optical design stage, the optical system provides ideal image coverage, designed to reduce and optimize the processing. The optics can be customized for the visible, near-infrared (NIR) or infrared (IR) wavebands. The panomorph lens is designed to optimize the cost per pixel, which is particularly important in the IR. We discuss the use of the 360-degree vision system, which can enhance on-board collision avoidance systems, intelligent cruise control and parking assistance. 360-degree panoramic vision systems might enable safer highways and a significant reduction in casualties.

  4. Experimental demonstration of 1.5Hz passive isolation system for precision optical payloads

    NASA Astrophysics Data System (ADS)

    Guan, Xin; Wang, Guang-yuan; Cao, Dong-jing; Tang, Shao-fan; Chen, Xiang; Liang, Lu; Zheng, Gang-tie

    2017-11-01

    The ground resolution of remote sensing satellites has been raised from hundreds of meters to less than one meter in recent decades. As a result, precision optical payloads have become more and more sensitive to structural vibrations of satellite buses. Although these vibrations generally have extremely low magnitude, they can result in significant image quality degradation of an optical payload. The suggestion of using vibration isolators to isolate the payload from the satellite bus was put forward in the 1980s [1]. More recently, WorldView-2 achieved its excellent image quality by using a set of low-frequency isolators [2], and some optical payload manufacturers have begun to provide vibration isolators as standard parts together with their main products. During the prototype testing of an earth resource satellite, the image of the optical payload was found to jitter by 5-10 pixels due to disturbances transmitted from the satellite bus structure. Test results indicated that the acceleration level of the vibration was of milli-g magnitude. To solve the problem, a highly sensitive vibration isolation system was developed to reduce the transmission of disturbances. Integrated isolation performance tests showed that the image jitter can be decreased to below 0.3 pixels.

  5. Low cost solution-based materials processing methods for large area OLEDs and OFETs

    NASA Astrophysics Data System (ADS)

    Jeong, Jonghwa

    In Part 1, we demonstrate the fabrication of organic light-emitting devices (OLEDs) with precisely patterned pixels by the spin-casting of Alq3 and rubrene thin films with dimensions as small as 10 μm. The solution-based patterning technique produces pixels via the segregation of organic molecules into microfabricated channels or wells. Segregation is controlled by a combination of the weak adsorbing characteristics of aliphatic-terminated self-assembled monolayers (SAMs) and centrifugal force, which directs the organic solution into the channel or well. This novel patterning technique may resolve the limitations of pixel resolution in the method of thermal evaporation using shadow masks, and is applicable to the fabrication of large area displays. Furthermore, the patterning technique has the potential to produce pixel sizes down to the limitation of photolithography and micromachining techniques, thereby enabling the fabrication of high-resolution microdisplays. The patterned OLEDs, based upon a confined structure with low refractive index of SiO2, exhibited higher current density than an unpatterned OLED, which results in higher electroluminescence intensity and eventually more efficient device operation at low applied voltages. We discuss the patterning method and device fabrication, and characterize the morphological, optical, and electrical properties of the organic pixels. In part 2, we demonstrate a new growth technique for organic single crystals based on solvent vapor assisted recrystallization. We show that, by controlling the polarity of the solvent vapor and the exposure time in a closed system, we obtain rubrene in orthorhombic to monoclinic crystal structures. This novel technique for growing single crystals can induce phase shifting and alteration of crystal structure and lattice parameters. The organic molecules showed structural change from orthorhombic to monoclinic, which also provided additional optical transition of hypsochromic shift from that of the orthorhombic form. An intermediate form of the crystal exhibits an optical transition to the lowest vibrational energy level that is otherwise disallowed in the single-crystal orthorhombic form. The monoclinic form exhibits entirely new optical transitions and showed a possible structural rearrangement for increasing charge carrier mobility, making it promising for organic devices. These phenomena can be explained and proved by the chemical structure and molecular packing of the monoclinic form, transformed from orthorhombic crystalline structure.

  6. Optical characterisation and analysis of multi-mode pixels for use in future far infrared telescopes

    NASA Astrophysics Data System (ADS)

    McCarthy, Darragh; Trappe, Neil; Murphy, J. Anthony; Doherty, Stephen; Gradziel, Marcin; O'Sullivan, Créidhe; Audley, Michael D.; de Lange, Gert; van der Vorst, Maarten

    2016-07-01

    In this paper we present the development and verification of feed horn simulation code based on the mode-matching technique to simulate the electromagnetic performance of waveguide-based structures of rectangular cross-section. This code is required to model multi-mode pyramidal horns which may be required for future far infrared (far IR) space missions where wavelengths in the range of 30 to 200 µm will be analysed. Multi-mode pyramidal horns can be used effectively to couple radiation to sensitive superconducting devices like Kinetic Inductance Detectors (KIDs) or Transition Edge Sensor (TES) detectors. These detectors could be placed in integrating cavities (to further increase the efficiency) with an absorbing layer used to couple to the radiation. The developed code is capable of modelling each of these elements, and so will allow full optical characterisation of such pixels and allow an optical efficiency to be calculated effectively. As the signals being measured at these short wavelengths are at an extremely low level, the throughput of the system must be maximised and so multi-mode systems are proposed. To this end, the focal planes of future far IR missions may consist of an array of multi-mode rectangular feed horns feeding an array of, for example, TES devices contained in individual integrating cavities. Such TES arrays have been fabricated by SRON Groningen and are currently undergoing comprehensive optical, electrical and thermal verification. In order to fully understand and validate the optical performance of the receiver system, it is necessary to develop comprehensive and robust optical models in parallel. We outline the development and verification of this optical modelling software by means of applying it to a representative multi-mode system operating at 150 GHz in order to obtain sufficiently short execution times so as to comprehensively test the code. SAFARI (SPICA FAR infrared Instrument) is a far infrared imaging grating spectrometer, to be proposed as an ESA M5 mission. It is planned for this mission to be launched on board the proposed SPICA (SPace Infrared telescope for Cosmology and Astrophysics) mission, in collaboration with JAXA. SAFARI is planned to operate in the 1.5-10 THz band, focussing on the formation and evolution of galaxies, stars and planetary systems. The pixel that drove the development of the techniques presented in this paper is typical of one option that could be implemented in the SAFARI focal plane, and so the ability to accurately understand and characterise such pixels is critical in the design phase of the next generation of far IR telescopes.

  7. Tele-transmission of stereoscopic images of the optic nerve head in glaucoma via Internet.

    PubMed

    Bergua, Antonio; Mardin, Christian Y; Horn, Folkert K

    2009-06-01

    The objective was to describe an inexpensive system to visualize stereoscopic photographs of the optic nerve head on computer displays and to transmit such images via the Internet for collaborative research or remote clinical diagnosis in glaucoma. Stereoscopic images of glaucoma patients were digitized and stored in a file format (joint photographic stereo image [jps]) containing all three-dimensional information for both eyes on an Internet Web site (www.trizax.com). The size of the jps files was between 0.4 and 1.4 MB (corresponding to a diagonal stereo image size between 900 and 1400 pixels), suitable for Internet protocols. A conventional personal computer equipped with wireless stereoscopic LCD shutter glasses and a CRT monitor with a high refresh rate (120 Hz) can be used to obtain flicker-free stereo visualization of true-color images with high resolution. Modern thin-film-transistor LCD displays in combination with inexpensive red-cyan goggles achieve stereoscopic visualization with the same resolution but reduced color quality and contrast. The primary aim of our study, to transmit stereoscopic images via the Internet, was met. Additionally, we found that with both stereoscopic visualization techniques, cup depth, neuroretinal rim shape, and the slope of the inner wall of the optic nerve head can be qualitatively better perceived and interpreted than with monoscopic images. This study demonstrates high-quality and low-cost Internet transmission of stereoscopic images of the optic nerve head from glaucoma patients. The technique allows exchange of stereoscopic images and can be applied to tele-diagnosis and glaucoma research.

  8. [The research on bidirectional reflectance computer simulation of forest canopy at pixel scale].

    PubMed

    Song, Jin-Ling; Wang, Jin-Di; Shuai, Yan-Min; Xiao, Zhi-Qiang

    2009-08-01

    Computer simulation based on computer graphics is used to generate a realistic 3D structural scene of vegetation and to simulate the canopy reflectance regime using the radiosity method. In the present paper, the authors extend the computer simulation model to simulate forest canopy bidirectional reflectance at the pixel scale. Trees, however, are complex structures: they are tall and have many branches, so hundreds of thousands or even millions of facets are needed to build a realistic structural scene for a forest, and it is difficult for the radiosity method to compute so many facets. In order to apply the radiosity method to a forest scene at the pixel scale, the authors propose to simplify the structure of the forest crowns by abstracting the crowns as ellipsoids. Based on the optical characteristics of the tree components and the characteristics of internal photon transport in a real crown, the authors assigned the optical characteristics of the ellipsoid surface facets. In the computer simulation of the forest, following the idea of geometric optics models, a gap model is incorporated to obtain the forest canopy bidirectional reflectance at the pixel scale. Comparing the computer simulation results with the GOMS model and with Multi-angle Imaging SpectroRadiometer (MISR) multi-angle remote sensing data, the simulation results agree with the GOMS simulation results and the MISR BRF, although some problems remain to be solved. The authors therefore conclude that the study has important value for the application of multi-angle remote sensing and the inversion of vegetation canopy structure parameters.

  9. Stratospheric aerosol optical depths, 1850-1990

    NASA Technical Reports Server (NTRS)

    Sato, Makiko; Hansen, James E.; Mccormick, M. Patrick; Pollack, James B.

    1993-01-01

    A global stratospheric aerosol database employed for climate simulations is described. For the period 1883-1990, aerosol optical depths are estimated from optical extinction data, whose quality increases with time over that period. For the period 1850-1882, aerosol optical depths are more crudely estimated from volcanological evidence for the volume of ejecta from major known volcanoes. The data set is available over the Internet.

  10. Determination of effective droplet radius and optical depth of liquid water clouds over a tropical site in northern Thailand using passive microwave soundings, aircraft measurements and spectral irradiance data

    NASA Astrophysics Data System (ADS)

    Nimnuan, P.; Janjai, S.; Nunez, M.; Pratummasoot, N.; Buntoung, S.; Charuchittipan, D.; Chanyatham, T.; Chantraket, P.; Tantiplubthong, N.

    2017-08-01

    This paper presents an algorithm for deriving the effective droplet radius and optical depth of liquid water clouds using ground-based measurements, aircraft observations and an adiabatic model of cloud liquid water. The algorithm derives cloud effective radius and cloud optical depth over a tropical site at Omkoi (17.80°N, 98.43°E), Thailand. Monthly averages of cloud optical depth are highest in April (54.5), which is the month with the lowest average cloud effective radius (4.2 μm), both occurring before the start of the rainy season and at the end of the high contamination period. By contrast, the monsoon period extending from May to October brings higher cloud effective radius and lower cloud optical depth to the region on average. At the diurnal scale there is a gradual increase in average cloud optical depth and decrease in cloud effective radius as the day progresses.
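
    For orientation, the widely used relation tau = 3*LWP / (2*rho_w*r_eff) ties the retrieved optical depth to the liquid water path and effective droplet radius; a minimal sketch under that simplified, vertically constant r_eff assumption (not the paper's full adiabatic algorithm):

      RHO_W = 1000.0  # density of liquid water, kg/m^3

      def cloud_optical_depth(lwp_g_m2, r_eff_um):
          # lwp_g_m2 : liquid water path in g/m^2 (e.g. from a microwave radiometer)
          # r_eff_um : effective droplet radius in micrometres
          lwp = lwp_g_m2 * 1e-3       # -> kg/m^2
          r_eff = r_eff_um * 1e-6     # -> m
          return 3.0 * lwp / (2.0 * RHO_W * r_eff)

      # e.g. an assumed LWP of 100 g/m^2 with r_eff = 4.2 um gives tau of roughly 36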

  11. Harmonics rejection in pixelated interferograms using spatio-temporal demodulation.

    PubMed

    Padilla, J M; Servin, M; Estrada, J C

    2011-09-26

    Pixelated phase-mask interferograms have become an industry standard in spatial phase-shifting interferometry. These pixelated interferograms allow full wavefront encoding using a single interferogram. This allows the study of fast dynamic events in hostile mechanical environments. Recently an error-free demodulation method for ideal pixelated interferograms was proposed. However, non-ideal conditions in interferometry may arise due to non-linear response of the CCD camera, multiple light paths in the interferometer, etc. These conditions generate non-sinusoidal fringes containing harmonics which degrade the phase estimation. Here we show that two-dimensional Fourier demodulation of pixelated interferograms rejects most harmonics except the complex ones at {−3rd, +5th, −7th, +9th, −11th, …}. We propose temporal phase-shifting to remove these remaining harmonics. In particular, a 2-step phase-shifting algorithm is used to eliminate the −3rd and +5th complex harmonics, while a 3-step one is used to remove the −3rd, +5th, −7th and +9th complex harmonics. © 2011 Optical Society of America

  12. Photovoltaic Pixels for Neural Stimulation: Circuit Models and Performance.

    PubMed

    Boinagrov, David; Lei, Xin; Goetz, Georges; Kamins, Theodore I; Mathieson, Keith; Galambos, Ludwig; Harris, James S; Palanker, Daniel

    2016-02-01

    Photovoltaic conversion of pulsed light into pulsed electric current enables optically-activated neural stimulation with miniature wireless implants. In photovoltaic retinal prostheses, patterns of near-infrared light projected from video goggles onto subretinal arrays of photovoltaic pixels are converted into patterns of current to stimulate the inner retinal neurons. We describe a model of these devices and evaluate the performance of photovoltaic circuits, including the electrode-electrolyte interface. Characteristics of the electrodes measured in saline with various voltages, pulse durations, and polarities were modeled as voltage-dependent capacitances and Faradaic resistances. The resulting mathematical model of the circuit yielded dynamics of the electric current generated by the photovoltaic pixels illuminated by pulsed light. Voltages measured in saline with a pipette electrode above the pixel closely matched results of the model. Using the circuit model, our pixel design was optimized for maximum charge injection under various lighting conditions and for different stimulation thresholds. To speed discharge of the electrodes between the pulses of light, a shunt resistor was introduced and optimized for high frequency stimulation.

  13. Ultrasensitive Kilo-Pixel Imaging Array of Photon Noise-Limited Kinetic Inductance Detectors Over an Octave of Bandwidth for THz Astronomy

    NASA Astrophysics Data System (ADS)

    Bueno, J.; Murugesan, V.; Karatsu, K.; Thoen, D. J.; Baselmans, J. J. A.

    2018-05-01

    We present the development of a background-limited kilo-pixel imaging array of ultrawide bandwidth kinetic inductance detectors (KIDs) suitable for space-based THz astronomy applications. The array consists of 989 KIDs, in which the radiation is coupled to each KID via a leaky lens antenna, covering the frequency range between 1.4 and 2.8 THz. The single pixel performance is fully characterised using a representative small array in terms of sensitivity, optical efficiency, beam pattern and frequency response, matching very well its expected performance. The kilo-pixel array is characterised electrically, finding a yield larger than 90% and an averaged noise-equivalent power lower than 3 × 10^{-19} W/Hz^{1/2} . The interaction between the kilo-pixel array and cosmic rays is studied, with an expected dead time lower than 0.6% when operated in an L2 or a similar far-Earth orbit.

  14. Micromirror Arrays for Adaptive Optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carr, E.J.

    The long-range goal of this project is to develop the optical and mechanical design of a micromirror array for adaptive optics that will meet the following criteria: flat mirror surface (λ/20), high fill factor (>95%), large stroke (5-10 μm), and pixel size ~200 μm. This will be accomplished by optimizing the mirror surface and actuators independently and then combining them using bonding technologies that are currently being developed.

  15. Optical architecture design for detection of absorbers embedded in visceral fat.

    PubMed

    Francis, Robert; Florence, James; MacFarlane, Duncan

    2014-05-01

    Optically absorbing ducts embedded in scattering adipose tissue can be injured during laparoscopic surgery. Non-sequential simulations and theoretical analysis compare optical system configurations for detecting these absorbers. For absorbers in deep scattering volumes, trans-illumination is preferred instead of diffuse reflectance. For improved contrast, a scanning source with a large area detector is preferred instead of a large area source with a pixelated detector.

  16. Optical architecture design for detection of absorbers embedded in visceral fat

    PubMed Central

    Francis, Robert; Florence, James; MacFarlane, Duncan

    2014-01-01

    Optically absorbing ducts embedded in scattering adipose tissue can be injured during laparoscopic surgery. Non-sequential simulations and theoretical analysis compare optical system configurations for detecting these absorbers. For absorbers in deep scattering volumes, trans-illumination is preferred instead of diffuse reflectance. For improved contrast, a scanning source with a large area detector is preferred instead of a large area source with a pixelated detector. PMID:24877008

  17. Registration of Aerial Optical Images with LiDAR Data Using the Closest Point Principle and Collinearity Equations.

    PubMed

    Huang, Rongyong; Zheng, Shunyi; Hu, Kun

    2018-06-01

    Registration of large-scale optical images with airborne LiDAR data is the basis of the integration of photogrammetry and LiDAR. However, geometric misalignments still exist between some aerial optical images and airborne LiDAR point clouds. To eliminate such misalignments, we extended a method for registering close-range optical images with terrestrial LiDAR data to a variety of large-scale aerial optical images and airborne LiDAR data. The fundamental principle is to minimize the distances from the photogrammetric matching points to the LiDAR data surface. In addition to a satisfactory efficiency of about 79 s per 6732 × 8984 image, the experimental results show that the unit-weight root mean square (RMS) error of the image points reaches a sub-pixel level (0.45 to 0.62 pixel), and that the actual horizontal and vertical accuracies can be greatly improved, to a high level of 1/4–1/2 (0.17–0.27 m) and 1/8–1/4 (0.10–0.15 m) of the average LiDAR point distance, respectively. Finally, the method is proved to be accurate, feasible, efficient, and practical for a variety of large-scale aerial optical images and LiDAR data.

  18. Compositional diversity of near-, far-side transitory zone around Naonobu, Webb and Sinus Successus craters: Inferences from Chandrayaan-1 Moon Mineralogy Mapper (M3) data

    NASA Astrophysics Data System (ADS)

    Bharti, Rishikesh; Ramakrishnan, D.; Singh, K. D.

    2014-02-01

    This study investigated the potential of Moon Mineralogy Mapper (M3) data for studying compositional variation in the near-, far-side transition zone of the lunar surface. For this purpose, the radiance values of the M3 data were corrected for illumination and emission related effects and converted to apparent reflectance. Dimensionality of the calibrated reflectance image cube was reduced using Independent Component Analysis (ICA) and endmembers were extracted by using Pixel Purity Index (PPI) algorithm. The selected endmembers were linearly unmixed and resolved for mineralogy using United States Geological Survey (USGS) library spectra of minerals. These mineralogically resolved endmembers were used to map the compositional variability within, and outside craters using Spectral Angle Mapper (SAM) algorithm. Cross validation for certain litho types was attempted using band ratios like Optical Maturity (OMAT), Color Ratio Composite and Integrated Band Depth ratio (IBD). The identified lithologies for highland and basin areas match well with published works and strongly support depth related magmatic differentiation. Prevalence of pigeonite-basalt, pigeonite-norite and pyroxenite in crater peaks and floors are unique to the investigated area and are attributed to local, lateral compositional variability in magma composition due to pressure, temperature, and rate of cooling.
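
    The Spectral Angle Mapper step measures the angle between each pixel spectrum and a library endmember, which makes the match largely insensitive to illumination scaling. A minimal sketch with hypothetical cube and spectrum names (not the M3 processing chain):

      import numpy as np

      def spectral_angle(cube, endmember):
          # Angle (radians) between every pixel spectrum of a (bands, rows, cols)
          # cube and a reference endmember spectrum; smaller angle = closer match.
          flat = cube.reshape(cube.shape[0], -1)
          cos = (endmember @ flat) / (np.linalg.norm(endmember) * np.linalg.norm(flat, axis=0))
          return np.arccos(np.clip(cos, -1.0, 1.0)).reshape(cube.shape[1:])

      # A pixel is assigned to the endmember class when its angle is below a threshold:
      # mask = spectral_angle(m3_cube, library_spectrum) < 0.1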

  19. Formulation of coarse integral imaging and its applications

    NASA Astrophysics Data System (ADS)

    Kakeya, Hideki

    2008-02-01

    This paper formulates the notion of coarse integral imaging and applies it to practical designs of 3D displays for the purposes of robot teleoperation and automobile HUDs. 3D display technologies are demanded in the applications where real-time and precise depth perception is required, such as teleoperation of robot manipulators and HUDs for automobiles. 3D displays for these applications, however, have not been realized so far. In the conventional 3D display technologies, the eyes are usually induced to focus on the screen, which is not suitable for the above purposes. To overcome this problem the author adopts the coarse integral imaging system, where each component lens is large enough to cover pixels dozens of times more than the number of views. The merit of this system is that it can induce the viewer's focus on the planes of various depths by generating a real image or a virtual image off the screen. This system, however, has major disadvantages in the quality of image, which is caused by aberration of lenses and discontinuity at the joints of component lenses. In this paper the author proposes practical optical designs for 3D monitors for robot teleoperation and 3D HUDs for automobiles by overcoming the problems of aberration and discontinuity of images.

  20. Snow depth and snow cover retrieval from FengYun3B microwave radiation imagery based on a snow passive microwave unmixing method in Northeast China

    NASA Astrophysics Data System (ADS)

    Gu, Lingjia; Ren, Ruizhi; Zhao, Kai; Li, Xiaofeng

    2014-01-01

    The precision of snow parameter retrieval is unsatisfactory for current practical demands, primarily because of the mixed-pixel problem caused by the low spatial resolution of satellite passive microwave data. A snow passive microwave unmixing method is proposed in this paper, based on land cover type data and the antenna gain function of the passive microwave sensor. The land cover of Northeast China is partitioned into grass, farmland, bare soil, forest, and water body types. The component brightness temperatures (CBT), i.e., the unmixed data at 1 km resolution, are obtained using the proposed unmixing method. The snow depths determined from the CBT with three snow depth retrieval algorithms are validated against field measurements taken in forest and farmland areas of Northeast China in January 2012 and 2013. The results show that the overall retrieval precision of the snow depth is improved by 17% in farmland areas and 10% in forest areas when using the CBT rather than the mixed pixels. The snow cover results based on the CBT are compared with existing MODIS snow cover products. The results demonstrate that more snow cover information can be obtained, with up to 86% accuracy.

  1. Strategic and Tactical Technology and Methodologies for Electro-Optical Sensor Testing

    DTIC Science & Technology

    1997-02-01

    Performance of Ground Quartz and Flashed Opal 21... structural rigidity, inertial stability, optical fidelity, etc., should consider the later requirements for modeling and simulations which will... individual pixels. Figure 6 is the magnified section, and Fig. 7 illustrates the results of using fractal interpolation to artificially increase the

  2. Ship Detection in Optical Satellite Image Based on RX Method and PCAnet

    NASA Astrophysics Data System (ADS)

    Shao, Xiu; Li, Huali; Lin, Hui; Kang, Xudong; Lu, Ting

    2017-12-01

    In this paper, we present a novel method for ship detection in optical satellite images based on the Reed-Xiaoli (RX) method and the principal component analysis network (PCAnet). The proposed method consists of the following three steps. First, the spatially adjacent pixels in the optical image are arranged into a vector, transforming the optical image into a 3D cube image. Through this process, the contextual information of spatially adjacent pixels can be integrated to magnify the discrimination between ships and background. Second, the RX anomaly detection method is adopted to preliminarily extract ship candidates from the produced 3D cube image. Finally, real ships are confirmed among the ship candidates by applying the PCAnet and a support vector machine (SVM). Specifically, the PCAnet is a simple deep learning network exploited to perform feature extraction, and the SVM is applied to achieve feature pooling and decision making. Experimental results demonstrate that our approach is effective in discriminating between ships and false alarms and has good ship detection performance.
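
    The RX step scores each pixel vector by its Mahalanobis distance from the global background statistics. A minimal sketch of a global RX detector, assuming a (rows, cols, features) cube built from the spatially stacked neighbours:

      import numpy as np

      def rx_scores(cube):
          # Global Reed-Xiaoli anomaly scores: large values flag pixels whose
          # feature vectors deviate strongly from the background mean/covariance.
          rows, cols, nfeat = cube.shape
          X = cube.reshape(-1, nfeat)
          mu = X.mean(axis=0)
          cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
          diff = X - mu
          scores = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
          return scores.reshape(rows, cols)

      # Ship candidates are pixels whose score exceeds a chosen threshold:
      # candidates = rx_scores(cube_image) > threshold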

  3. Digital polarization holography advancing geometrical phase optics.

    PubMed

    De Sio, Luciano; Roberts, David E; Liao, Zhi; Nersisyan, Sarik; Uskova, Olena; Wickboldt, Lloyd; Tabiryan, Nelson; Steeves, Diane M; Kimball, Brian R

    2016-08-08

    Geometrical phase or fourth generation (4G) optics enables the realization of optical components (lenses, prisms, gratings, spiral phase plates, etc.) by patterning the optical axis orientation in the plane of thin anisotropic films. Such components exhibit near 100% diffraction efficiency over a broad band of wavelengths. The films are obtained by coating liquid crystalline (LC) materials over substrates with patterned alignment conditions. Photo-anisotropic materials are used for producing the desired alignment conditions at the substrate surface. We present and discuss here the possibility of producing a wide variety of "free-form" 4G optical components with arbitrary spatial patterns of the optical anisotropy axis orientation with the aid of a digital spatial light polarization converter (DSLPC). The DSLPC is based on a reflective, high-resolution spatial light modulator (SLM) combined with an "ad hoc" optical setup. The most attractive feature of using a DSLPC for photoalignment of nanometer-thin photo-anisotropic coatings is that the orientation of the alignment layer, and therefore of the fabricated LC or LC polymer (LCP) components, can be specified on a pixel-by-pixel basis with high spatial resolution. By varying the optical magnification or de-magnification, the spatial resolution of the photoaligned layer can be adjusted to an optimum for each application. With a simple "click" it is possible to record different optical components as well as arbitrary patterns, ranging from lenses to invisible labels and other transparent labels that reveal different images depending on the side from which they are viewed.

  4. Velocity filtering applied to optical flow calculations

    NASA Technical Reports Server (NTRS)

    Barniv, Yair

    1990-01-01

    Optical flow is a method by which a stream of two-dimensional images obtained from a forward-looking passive sensor is used to map the three-dimensional volume in front of a moving vehicle. Passive ranging via optical flow is applied here to the helicopter obstacle-avoidance problem. Velocity filtering is used as a field-based method to determine range to all pixels in the initial image. The theoretical understanding and performance analysis of velocity filtering as applied to optical flow are expanded, and experimental results are presented.

  5. Oriented modulation for watermarking in direct binary search halftone images.

    PubMed

    Guo, Jing-Ming; Su, Chang-Cheng; Liu, Yun-Fu; Lee, Hua; Lee, Jiann-Der

    2012-09-01

    In this paper, a halftoning-based watermarking method is presented. This method enables high pixel-depth watermark embedding, while maintaining high image quality. This technique is capable of embedding watermarks with pixel depths up to 3 bits without causing prominent degradation to the image quality. To achieve high image quality, the parallel oriented high-efficient direct binary search (DBS) halftoning is selected to be integrated with the proposed orientation modulation (OM) method. The OM method utilizes different halftone texture orientations to carry different watermark data. In the decoder, the least-mean-square-trained filters are applied for feature extraction from watermarked images in the frequency domain, and the naïve Bayes classifier is used to analyze the extracted features and ultimately to decode the watermark data. Experimental results show that the DBS-based OM encoding method maintains a high degree of image quality and achieves the processing efficiency and robustness needed for printing applications.

  6. Ground-truthing AVIRIS mineral mapping at Cuprite, Nevada

    NASA Technical Reports Server (NTRS)

    Swayze, Gregg; Clark, Roger N.; Kruse, Fred; Sutley, Steve; Gallagher, Andrea

    1992-01-01

    Mineral abundance maps of 18 minerals were made of the Cuprite Mining District using 1990 AVIRIS data and the Multiple Spectral Feature Mapping Algorithm (MSFMA) as discussed in Clark et al. This technique uses least-squares fitting between a scaled laboratory reference spectrum and ground-calibrated AVIRIS data for each pixel. Multiple spectral features can be fitted for each mineral, and an unlimited number of minerals can be mapped simultaneously. Quality-of-fit and depth-from-continuum values for each mineral are calculated for each pixel, and the results are displayed as a multicolor image.

  7. A Method for Qualitative Mapping of Thick Oil Spills Using Imaging Spectroscopy

    USGS Publications Warehouse

    Clark, Roger N.; Swayze, Gregg A.; Leifer, Ira; Livo, K. Eric; Lundeen, Sarah; Eastwood, Michael; Green, Robert O.; Kokaly, Raymond F.; Hoefen, Todd; Sarture, Charles; McCubbin, Ian; Roberts, Dar; Steele, Denis; Ryan, Thomas; Dominguez, Roseanne; Pearson, Neil; ,

    2010-01-01

    A method is described to create qualitative images of thick oil in oil spills on water using near-infrared imaging spectroscopy data. The method uses simple 'three-point-band depths' computed for each pixel in an imaging spectrometer image cube using the organic absorption features due to chemical bonds in aliphatic hydrocarbons at 1.2, 1.7, and 2.3 microns. The method is not quantitative because sub-pixel mixing and layering effects are not considered, which are necessary to make a quantitative volume estimate of oil.
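
    A minimal sketch of a three-point band depth computed per pixel (not the USGS implementation): the continuum is the straight line between two shoulder wavelengths bracketing the feature, and the depth is one minus the ratio of the band-centre reflectance to the continuum value. The 1.7 micron C-H feature and the shoulder wavelengths below are illustrative choices.

    ```python
    import numpy as np

    def band_depth(r_left, r_center, r_right, wl_left, wl_center, wl_right):
        """Reflectance inputs may be full image planes (H x W); returns per-pixel depth."""
        w = (wl_center - wl_left) / (wl_right - wl_left)
        continuum = (1.0 - w) * r_left + w * r_right      # linear continuum at band centre
        return 1.0 - r_center / continuum

    # Example: the ~1.7 micron aliphatic C-H feature with shoulders assumed at 1.6/1.8 um.
    rL = np.full((4, 4), 0.42)
    rC = np.full((4, 4), 0.30)
    rR = np.full((4, 4), 0.40)
    depth = band_depth(rL, rC, rR, 1.6, 1.7, 1.8)          # ~0.27 for every pixel
    ```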

  8. Design of integrated eye tracker-display device for head mounted systems

    NASA Astrophysics Data System (ADS)

    David, Y.; Apter, B.; Thirer, N.; Baal-Zedaka, I.; Efron, U.

    2009-08-01

    We propose an Eye Tracker/Display system based on a novel, dual-function device, termed ETD, which allows the eye tracker and the display to share optical paths and provides on-chip processing. The proposed ETD design is based on a CMOS chip combining Liquid-Crystal-on-Silicon (LCoS) micro-display technology with a near-infrared (NIR) Active Pixel Sensor imager. The eye-tracking operation captures the NIR light back-reflected from the eye's retina. The retinal image is then used to detect the current direction of the eye's gaze. The design of the eye-tracking imager is based on "deep p-well" pixel technology, providing low crosstalk while shielding the active pixel circuitry, which serves the imaging and display drivers, from the photo charges generated in the substrate. The use of the ETD in the head-mounted display enables a very compact design suitable for smart goggle applications. A preliminary optical, electronic and digital design of the goggle and its associated ETD chip and digital control are presented.

  9. Dynamic Janus Metasurfaces in the Visible Spectral Region.

    PubMed

    Yu, Ping; Li, Jianxiong; Zhang, Shuang; Jin, Zhongwei; Schütz, Gisela; Qiu, Cheng-Wei; Hirscher, Michael; Liu, Na

    2018-06-27

    Janus monolayers have long been a popular notion for breaking in-plane and out-of-plane structural symmetry. Originating from chemistry and materials science, the concept of Janus functions has recently been extended to ultrathin metasurfaces by arranging meta-atoms asymmetrically with respect to the propagation or polarization direction of the incident light. However, such metasurfaces are intrinsically static, and the information they carry can be straightforwardly decrypted by scanning the incident light directions and polarization states once the devices are fabricated. In this Letter, we present a dynamic Janus metasurface scheme in the visible spectral region. In each super unit cell, three plasmonic pixels are categorized into two sets. One set contains a magnesium nanorod and a gold nanorod that are orthogonally oriented with respect to each other, working as counter pixels. The other set contains only a magnesium nanorod. The effective pixels on the Janus metasurface can be reversibly regulated by hydrogenation/dehydrogenation of the magnesium nanorods. Such dynamic controllability at visible frequencies allows for flat optical elements with novel functionalities, including beam steering, bifocal lensing, holographic encryption, and dual optical function switching.

  10. Improved GO/PO method and its application to wideband SAR image of conducting objects over rough surface

    NASA Astrophysics Data System (ADS)

    Jiang, Wang-Qiang; Zhang, Min; Nie, Ding; Jiao, Yong-Chang

    2018-04-01

    To simulate the multiple scattering effect of a target in a synthetic aperture radar (SAR) image, the hybrid GO/PO method, which combines geometrical optics (GO) and physical optics (PO), is employed to simulate the scattering field of the target. Because ray tracing is time-consuming, the Open Graphics Library (OpenGL) is usually employed to accelerate the ray-tracing process. Furthermore, the GO/PO method is improved for simulation in low-pixel situations. In the improved GO/PO method, the pixels are arranged in one-to-one correspondence with the rectangular wave beams, and the GO/PO result is the sum of the contributions of all the rectangular wave beams. To obtain a high-resolution SAR image, a wideband echo signal is simulated that includes information from many electromagnetic (EM) waves at different frequencies. Finally, the improved GO/PO method is used to simulate the SAR image of targets above a rough surface, and the effects of reflected rays and of the pixel matrix size on the SAR image are also discussed.

  11. Modelling of influence of spherical aberration coefficients on depth of focus of optical systems

    NASA Astrophysics Data System (ADS)

    Pokorný, Petr; Šmejkal, Filip; Kulmon, Pavel; Mikš, Antonín.; Novák, Jiří; Novák, Pavel

    2017-06-01

    This contribution describes how to model the influence of spherical aberration coefficients on the depth of focus of optical systems. Analytical formulas for the calculation of the beam's caustics are presented. Conditions on the aberration coefficients are derived for two cases, requiring that either the Strehl definition or the gyration radius be identical in two planes placed symmetrically with respect to the paraxial image plane. One can then calculate the maximum depth of focus and the minimum diameter of the circle of confusion of the optical system corresponding to the chosen conditions. This contribution helps in understanding how spherical aberration may affect the depth of focus and how to design an optical system with the required depth of focus. One can perform computer modelling and design of the optical system and its spherical aberration in order to achieve the required depth of focus.

  12. An optical fiber expendable seawater temperature/depth profile sensor

    NASA Astrophysics Data System (ADS)

    Zhao, Qiang; Chen, Shizhe; Zhang, Keke; Yan, Xingkui; Yang, Xianglong; Bai, Xuejiao; Liu, Shixuan

    2017-10-01

    Marine expendable temperature/depth profilers (XBTs) are disposable measuring instruments which can obtain temperature/depth profile data quickly over large areas of water and are mainly used for marine surveys, scientific research, and military applications. In the conventional XBT probe (CXBT) the temperature measuring device is a thermistor, and the depth is only a value calculated from a speed-and-time depth formula rather than an accurate measurement. Firstly, an optical fiber expendable temperature/depth sensor based on an FBG-LPG cascaded structure is proposed to solve the problems of the CXBT; namely, an LPG and an FBG are used to detect the water temperature and depth, respectively. Secondly, a fiber-end reflective mirror is used to simplify the optical cascade structure and optimize the system performance. Finally, the optical path is designed and optimized using the reflective optical fiber end mirror. The experimental results show that the sensitivities of temperature and depth sensing based on the FBG-LPG cascade structure are about 0.003 °C and 0.1% F.S., respectively, which can meet the requirements of seawater temperature/depth observation. The reflectivity of the reflection mirror is in the range from 48.8% to 72.5%, the resonant peaks of the FBG and LPG are reasonable, and the whole spectrum is suitable for demodulation. Through research on the optical fiber XBT (FXBT), direct measurements of deep-sea temperature/depth profile data can be obtained simultaneously, quickly and accurately. The FXBT is a new all-optical seawater temperature/depth sensor, which has important academic value and broad application prospects and is expected to replace the CXBT in the future.

  13. WE-D-17A-02: Evaluation of a Two-Dimensional Optical Dosimeter On Measuring Lateral Profiles of Proton Pencil Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsi, W; Lee, T; Schultz, T

    Purpose: To evaluate the accuracy of a two-dimensional optical dosimeter for measuring lateral profiles of spots and scanned fields of proton pencil beams. Methods: A digital camera with a color image sensor was utilized to image proton-induced scintillations on a Gadolinium-oxysulfide phosphor reflected by a stainless-steel mirror. Intensities of three colors were summed for each pixel with proper spatial-resolution calibration. To benchmark this dosimeter, the field size and penumbra for 100 mm square fields of single-energy pencil-scan protons were measured and compared between this optical dosimeter and an ionization-chamber profiler. Sigma widths of proton spots in air were measured and compared between this dosimeter and a commercial optical dosimeter. Clinical proton beams with ranges between 80 mm and 300 mm at the CDH proton center were used for this benchmark. Results: Pixel resolutions vary by 1.5% between the two perpendicular axes. For a pencil-scan field with 302 mm range, measured field sizes and penumbras between the two detection systems agreed to 0.5 mm and 0.3 mm, respectively. Sigma widths agree to 0.3 mm between the two optical dosimeters for a proton spot with 158 mm range, having widths of 5.76 mm and 5.92 mm for the X and Y axes, respectively. Similar agreement was obtained for other beam ranges. This dosimeter was successfully utilized for mapping the shapes and sizes of proton spots at the technical acceptance of the McLaren proton therapy system. Snow-flake spots seen on images indicated that the image sensor had pixels damaged by radiation. Minor variations in intensity between different colors were observed. Conclusions: The accuracy of our dosimeter was in good agreement with other established devices in measuring lateral profiles of pencil-scan fields and proton spots. A precise docking mechanism for the camera was designed to keep the optical path aligned while replacing a damaged image sensor. Causes of the minor variations between emitted color lights will be investigated.

  14. Ground Optical Signal Processing Architecture for Contributing SSA Space Based Sensor Data

    NASA Astrophysics Data System (ADS)

    Koblick, D.; Klug, M.; Goldsmith, A.; Flewelling, B.; Jah, M.; Shanks, J.; Piña, R.

    2014-09-01

    The main objective of the DARPA Orbit Outlook (O^2) program is to improve the metric tracking and detection performance of the Space Situational Network (SSN) by adding a diverse, low-cost network of contributing sensors to the Space Situational Awareness (SSA) mission. To accomplish this objective, not only must a sensor be in constant communication with a planning and scheduling system to process tasking requests, there must also be an underlying framework to provide useful data products, such as angles-only measurements. Existing optical signal processing implementations such as the Optical Processing Architecture at Lincoln (OPAL) are capable of converting mission data collections to angles-only observations, but may be difficult for many users to obtain, support, and customize for low-cost missions and demonstration programs. The Ground Optical Signal Processing Architecture (GOSPA) will ingest raw imagery and telemetry data from a space-based electro-optical sensor and perform a background removal process to remove anomalous pixels, interpolate over bad pixels, and suppress dominant temporal noise. After background removal, the streak end points and target centroids are located using a corner detection algorithm developed by the Air Force Research Laboratory. These identified streak locations are then fused with the corresponding spacecraft telemetry data to determine the Right Ascension and Declination measurements with respect to time. To demonstrate the performance of GOSPA, non-rate-tracking collections against a satellite in Geosynchronous Orbit are simulated from a visible optical imaging sensor in a polar Low Earth Orbit. Stars, noise, and bad pixels are added to the simulated images based on look angles and sensor parameters. These collections are run through the GOSPA framework to provide angles-only measurements to the Air Force Research Laboratory Constrained Admissible Region Multiple Hypothesis Filter (CAR-MHF), in which an Initial Orbit Determination is performed and compared to truth data.

  15. Application of simple all-sky imagers for the estimation of aerosol optical depth

    NASA Astrophysics Data System (ADS)

    Kazantzidis, Andreas; Tzoumanikas, Panagiotis; Nikitidou, Efterpi; Salamalikis, Vasileios; Wilbert, Stefan; Prahl, Christoph

    2017-06-01

    Aerosol optical depth is a key atmospheric quantity for direct normal irradiance calculations at concentrating solar power plants. However, aerosol optical depth is typically not measured at the solar plants for financial reasons. With the recent introduction of all-sky imagers for nowcasting of direct normal irradiance at the plants, a new instrument is available that can be used for the determination of aerosol optical depth at different wavelengths. In this study, we use the red, green and blue intensities/radiances and the saturated area around the Sun, both derived from all-sky images taken with a low-cost surveillance camera at the Plataforma Solar de Almeria, Spain. The aerosol optical depth at 440, 500 and 675 nm is calculated. The results are compared with collocated aerosol optical depth measurements, and the mean/median difference and standard deviation are less than 0.01 and 0.03, respectively, at all wavelengths.

  16. Effect of Thin Cirrus Clouds on Dust Optical Depth Retrievals From MODIS Observations

    NASA Technical Reports Server (NTRS)

    Feng, Qian; Hsu, N. Christina; Yang, Ping; Tsay, Si-Chee

    2011-01-01

    The effect of thin cirrus clouds on retrieving the dust optical depth from MODIS observations is investigated by using a simplified aerosol retrieval algorithm based on the principles of the Deep Blue aerosol property retrieval method. Specifically, the errors in the retrieved dust optical depth due to thin cirrus contamination are quantified through the comparison of two retrievals, one assuming a dust-only atmosphere and the counterpart with overlapping mineral dust and thin cirrus clouds. To account for the effect of the polarization state of the radiation field on the simulated radiance, a vector radiative transfer model is used to generate the lookup tables. In the forward radiative transfer simulations involved in generating the lookup tables, the Rayleigh scattering by atmospheric gaseous molecules and the reflection of the surface, assumed to be Lambertian, are fully taken into account. Additionally, the spheroid model is utilized to account for the nonsphericity of dust particles in computing their optical properties. For simplicity, the single-scattering albedo, scattering phase matrix, and optical depth are specified a priori for the thin cirrus clouds, which are assumed to consist of droxtal ice crystals. The present results indicate that the errors in the retrieved dust optical depths due to the contamination of thin cirrus clouds depend on the scattering angle, underlying surface reflectance, and dust optical depth. Under heavy dust conditions, the absolute errors are comparable to the prescribed optical depths of the thin cirrus clouds.

  17. Analog signal processing for optical coherence imaging systems

    NASA Astrophysics Data System (ADS)

    Xu, Wei

    Optical coherence tomography (OCT) and optical coherence microscopy (OCM) are non-invasive optical coherence imaging techniques, which enable micron-scale resolution and depth-resolved imaging capability. Both OCT and OCM are based on Michelson interferometer theory. They are widely used in ophthalmology, gastroenterology and dermatology because of their high resolution, safety and low cost. OCT creates cross-sectional images whereas OCM obtains en face images. In this dissertation, the design and development of three increasingly complicated analog signal processing (ASP) solutions for optical coherence imaging are presented. The first ASP solution was implemented for a time domain OCT system with a Rapid Scanning Optical Delay line (RSOD)-based optical signal modulation and logarithmic amplifier (Log amp) based demodulation. This OCT system can acquire up to 1600 A-scans per second. The measured dynamic range is 106 dB at 200 A-scans per second. This OCT signal processing electronics includes an off-the-shelf filter box with a Log amp circuit implemented on a PCB board. The second ASP solution was developed for an OCM system with synchronized modulation and demodulation and compensation for interferometer phase drift. This OCM acquired micron-scale resolution, high dynamic range images at acquisition speeds up to 45,000 pixels/second. This OCM ASP solution is fully custom designed on a perforated circuit board. The third ASP solution was implemented on a single 2.2 mm x 2.2 mm complementary metal oxide semiconductor (CMOS) chip. This design is expandable to a multiple-channel OCT system. A single on-chip CMOS photodetector and ASP channel was used for coherent demodulation in a time domain OCT system. Cross-sectional images were acquired with a dynamic range of 76 dB (limited by photodetector responsivity). When incorporated with a bump-bonded InGaAs photodiode with higher responsivity, the expected dynamic range is close to 100 dB.

  18. All-optical framing photography based on hyperspectral imaging method

    NASA Astrophysics Data System (ADS)

    Liu, Shouxian; Li, Yu; Li, Zeren; Chen, Guanghua; Peng, Qixian; Lei, Jiangbo; Liu, Jun; Yuan, Shuyun

    2017-02-01

    We propose and experimentally demonstrate a new all-optical framing photography that uses hyperspectral imaging methods to record the temporal-spatial information of a chirped pulse. The proposed method consists of three parts: (1) a chirped laser pulse encodes temporal phenomena onto wavelengths; (2) a lenslet array generates a series of integral pupil images; (3) a dispersive device disperses the integral images onto the void space of the image sensor. Compared with Ultrafast All-Optical Framing Technology (Daniel Frayer, 2013, 2014) and Sequentially Timed All-Optical Mapping Photography (Nakagawa, 2014, 2015), our method makes it convenient to adjust the temporal resolution and to flexibly increase the number of frames. Theoretically, the temporal resolution of our scheme is limited by the amount of dispersion added to a Fourier-transform-limited femtosecond laser pulse. Correspondingly, the optimal number of frames is determined by the ratio of the observational time window to the temporal resolution, and the effective pixels of each frame are mostly limited by the dimensions M×N of the lenslet array. For example, if a 40 fs Fourier-transform-limited femtosecond pulse is stretched to 10 ps, a CCD camera with 2048×3072 pixels can record 15 framing images with a temporal resolution of 650 fs and an image size of 100×100 pixels. Because the recording part has a spectrometer structure, it has the further advantage that not only amplitude images but also frequency-domain interferograms can be imaged. Therefore, it is comparatively easy to capture fast dynamics in the refractive-index change of materials. A further dynamic experiment is being conducted.
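
    A quick arithmetic check of the numbers quoted in the abstract, assuming (as stated there) that the number of frames is simply the observation window divided by the temporal resolution and that each frame has as many effective pixels as the lenslet array has elements:

    ```python
    # Chirped pulse stretched to 10 ps, temporal resolution 650 fs (values from the abstract).
    window_fs = 10e3
    resolution_fs = 650
    n_frames = int(window_fs // resolution_fs)
    print(n_frames)                 # 15, matching the quoted frame count

    # Effective pixels per frame, assuming a 100 x 100 lenslet array (illustrative).
    lenslet_m, lenslet_n = 100, 100
    print(lenslet_m * lenslet_n)    # 10,000 effective pixels per 100x100 frame
    ```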

  19. Large format imaging arrays for the Atacama Cosmology Telescope

    NASA Technical Reports Server (NTRS)

    Chervenak, J. A.; Wollack, E. J.; Marraige, T.; Staggs, S.; Niemack, M.; Doriese, B.

    2006-01-01

    We describe progress in the fabrication, characterization, and production of detector arrays for the Atacama Cosmology Telescope (ACT). The completed ACT instrument is specified to image simultaneously at 145, 225, and 265 GHz using three 32x32 filled arrays of superconducting transition edge sensors (TES) read out with time-division-multiplexed SQUID amplifiers. We present details of the pixel design and testing, including the optimization of the electrical parameters for multiplexed readout. Using geometric noise suppression and careful tuning of the operating temperature and device bias resistance, the excess noise in the TES devices is balanced against detector speed for interfacing with the ACT optics. The design also accounts for practical tolerances such as transition temperature gradients and scatter that occur in the production of the multiple wafers needed to fully populate the kilopixel cameras. We have developed an implanted absorber layer compatible with our silicon-on-insulator process that allows for tunable optical resistance with the requisite on-wafer uniformity and wafer-to-wafer reproducibility. Arrays of 32 elements have been tested in the laboratory environment, including electrical, optical, and multiplexed performance. Given this pixel design, optical tests and modeling are used to predict the performance of the filled array under anticipated viewing conditions. Integration of the filled array of pixels with a tuned backshort and a dielectric plate in front of the array maximizes absorption at the focal plane and suppresses reflections. A mechanical design for the build of the full structure is complete, and we report on progress toward the construction of a prototype array for first light on the ACT.

  20. A decade of infrared versus visible AOD analysis within the dust belt

    NASA Astrophysics Data System (ADS)

    Capelle, Virginie; Chédin, Alain; Pondrom, Marc; Crevoisier, Cyril; Armante, Raymond; Crépeau, Laurent; Scott, Noëlle

    2017-04-01

    Aerosols represent one of the dominant uncertainties in radiative forcing, partly because of their very high spatiotemporal variability and a still insufficient knowledge of their microphysical and optical properties and of their vertical distribution. A better understanding and forecasting of their impact on climate therefore requires precise observations of dust emission and transport. Observations from space offer a good opportunity to follow, day by day and at high spatial resolution, dust evolution at global scale and over long time series. In this context, infrared observations, which allow the simultaneous retrieval of the dust optical depth (AOD) and the mean dust layer altitude, daytime and nighttime, over oceans and over continents (in particular over deserts), appear highly complementary to observations in the visible. In this study, a decade of infrared observations (Metop-A/IASI and AIRS/AQUA) has been processed pixel by pixel, using a "Look-Up-Table" (LUT) physical approach. The retrieved infrared 10 µm coarse-mode AOD is compared with the Spectral Deconvolution Algorithm (SDA) 500 nm coarse-mode AOD observed at 50 ground-based Aerosol RObotic NETwork (AERONET) sites located within the dust belt. Analyzing the ratio of the two brings into evidence an important geographical variability. The lowest values are found close to dust sources (about 0.45 for the Sahel or the Arabian Peninsula, 0.6-0.7 for the northern part of Africa or India), whereas the ratio increases for transported dust, with values of 0.9-1 for the Caribbean and for the Mediterranean basin. This variability is interpreted as a marker of clay abundance and might be linked to the dust particle illite-to-kaolinite ratio, a recognized tracer of dust sources and transport. More generally, it suggests that the difference between the radiative impact of dust aerosols in the visible and in the infrared depends on the type of particles observed. This highlights the importance of taking into account the specificity of the infrared when considering the role of mineral dust in the Earth's energy budget.

  1. Classification of river water pollution using Hyperion data

    NASA Astrophysics Data System (ADS)

    Kar, Soumyashree; Rathore, V. S.; Champati ray, P. K.; Sharma, Richa; Swain, S. K.

    2016-06-01

    A novel attempt is made to use hyperspectral remote sensing to identify the spatial variability of metal pollutants present in river water. It was also attempted to classify the hyperspectral image, Earth Observation-1 (EO-1) Hyperion data of an 8 km stretch of the river Yamuna near Allahabad city in India, according to its chemical composition. To validate the image analysis results, a total of 10 water samples were collected and chemically analyzed using Inductively Coupled Plasma-Optical Emission Spectroscopy (ICP-OES). Two different spectral libraries, from field and image data, were generated for the 10 sample locations. Advanced per-pixel supervised classifications such as the Spectral Angle Mapper (SAM), SAM target finder using BandMax, and the Support Vector Machine (SVM) were carried out along with the unsupervised clustering procedure, the Iterative Self-Organizing Data Analysis Technique (ISODATA). The results were compared and assessed with respect to ground data. An Analytical Spectral Devices (ASD), Inc. spectroradiometer, FieldSpec 4, was used to generate the spectra of the water samples, which were compiled into a spectral library and used for Spectral Absorption Depth (SAD) analysis. The spectral depth patterns of the image and field spectral libraries were found to be highly correlated (correlation coefficient, R2 = 0.99), which validated the image analysis results with respect to the ground data. Further, we carried out a multivariate regression analysis to assess the varying concentrations of metal ions present in the water based on the spectral depth of the corresponding absorption feature. The SAD analysis, together with the metal analysis of the field data, revealed the order in which the metals affected the river pollution, which was in conformity with the findings of the Central Pollution Control Board (CPCB). Therefore, it is concluded that hyperspectral imaging provides an opportunity for satellite-based remote monitoring of water quality from space.
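
    A minimal sketch of the per-pixel Spectral Angle Mapper (SAM) classification mentioned above, in its generic formulation rather than the specific software used in the study: each pixel spectrum is assigned to the reference spectrum (from the spectral library) with which it makes the smallest spectral angle. Function names are illustrative.

    ```python
    import numpy as np

    def spectral_angle(pixels, reference):
        """pixels: (N, B) spectra; reference: (B,) spectrum; returns angles in radians."""
        num = pixels @ reference
        den = np.linalg.norm(pixels, axis=1) * np.linalg.norm(reference)
        return np.arccos(np.clip(num / den, -1.0, 1.0))

    def sam_classify(pixels, library):
        """library: (K, B) reference spectra; returns the per-pixel class index (0..K-1)."""
        angles = np.stack([spectral_angle(pixels, ref) for ref in library], axis=1)
        return np.argmin(angles, axis=1)
    ```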

  2. Long-term analysis of aerosol optical depth over Northeast Asia using a satellite-based measurement: MI Yonsei Aerosol Retrieval Algorithm (YAER)

    NASA Astrophysics Data System (ADS)

    Kim, Mijin; Kim, Jhoon; Yoon, Jongmin; Chung, Chu-Yong; Chung, Sung-Rae

    2017-04-01

    In 2010, the Korean geostationary earth orbit (GEO) satellite, the Communication, Ocean, and Meteorological Satellite (COMS), was launched carrying the Meteorological Imager (MI). The MI measures atmospheric conditions over Northeast Asia (NEA) using a single visible channel centered at 0.675 μm and four IR channels at 3.75, 6.75, 10.8, and 12.0 μm. The visible measurement can also be utilized for the retrieval of aerosol optical properties (AOPs). Since a GEO satellite measurement has the advantage of continuous monitoring of AOPs, we can analyze the spatiotemporal variation of aerosols using the MI observations over NEA. We therefore developed an algorithm to retrieve aerosol optical depth (AOD) using the visible observation of MI, named the MI Yonsei Aerosol Retrieval Algorithm (YAER). In this study, we investigated the accuracy of the MI YAER AOD by comparing the values with the long-term products of AERONET sun photometers. The result showed that the MI AODs were significantly overestimated relative to the AERONET values over bright surfaces in low-AOD cases. Because the MI visible channel is centered in the red spectral range, the contribution of the aerosol signal to the measured reflectance is relatively low compared with the surface contribution. Therefore, the AOD error in low-AOD cases over bright surfaces can be a fundamental limitation of the algorithm. Meanwhile, the assumption of a background aerosol optical depth (BAOD) could also contribute to the retrieval uncertainty. To estimate the surface reflectance while considering the polluted air conditions over NEA, we estimated the BAOD from the MODIS dark target (DT) aerosol products pixel by pixel. Satellite-based AOD retrieval, however, largely depends on the accuracy of the surface reflectance estimation, especially in low-AOD cases, and thus the BAOD could include the uncertainty of the surface reflectance estimation of the satellite-based retrieval. Therefore, we re-estimated the BAOD using ground-based sun-photometer measurements and investigated the effects of the BAOD assumption. The satellite-based BAOD was significantly higher than the ground-based value over urban areas, and thus resulted in an underestimation of the surface reflectance and an overestimation of the AOD. The error analysis of the MI AOD also clearly showed sensitivity to cloud contamination. Therefore, improvements of the cloud masking process in the developed single-channel MI algorithm, as well as modification of the surface reflectance estimation, will be required in future work.

  3. Position, rotation, and intensity invariant recognizing method

    DOEpatents

    Ochoa, E.; Schils, G.F.; Sweeney, D.W.

    1987-09-15

    A method for recognizing the presence of a particular target in a field of view which is target position, rotation, and intensity invariant includes the preparing of a target-specific invariant filter from a combination of all eigen-modes of a pattern of the particular target. Coherent radiation from the field of view is then imaged into an optical correlator in which the invariant filter is located. The invariant filter is rotated in the frequency plane of the optical correlator in order to produce a constant-amplitude rotational response in a correlation output plane when the particular target is present in the field of view. Any constant response is thus detected in the output plane to determine whether the particular target is present in the field of view. Preferably, a temporal pattern is imaged in the output plane with an optical detector having a plurality of pixels, and a correlation coefficient for each pixel is determined by accumulating the intensity and intensity-square of each pixel. The orbiting of the constant response caused by the filter rotation is also preferably eliminated either by the use of two orthogonal mirrors pivoted correspondingly to the rotation of the filter or by attaching a refracting wedge to the filter to remove the offset angle. Detection of the temporal pattern in the output plane is preferably performed at a plurality of different angles with angular separation sufficient to decorrelate successive frames. 1 fig.
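
    A minimal sketch of per-pixel running statistics computed from accumulated intensity and intensity-square, which is one plausible reading of the per-pixel statistic described above (the exact correlation coefficient used in the patent is not reproduced here). A pixel whose response stays constant while the filter rotates has a small normalized temporal variance.

    ```python
    import numpy as np

    def constancy_map(frames):
        """frames: (T, H, W) temporal stack of correlation-plane intensities."""
        T = frames.shape[0]
        s1 = frames.sum(axis=0)             # accumulated intensity per pixel
        s2 = (frames ** 2).sum(axis=0)      # accumulated intensity-square per pixel
        mean = s1 / T
        var = s2 / T - mean ** 2            # per-pixel temporal variance
        # Low values indicate a constant-amplitude response, i.e., a likely target.
        return var / np.maximum(mean ** 2, 1e-12)
    ```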

  4. InP-based Geiger-mode avalanche photodiode arrays for three-dimensional imaging at 1.06 μm

    NASA Astrophysics Data System (ADS)

    Itzler, Mark A.; Entwistle, Mark; Owens, Mark; Jiang, Xudong; Patel, Ketan; Slomkowski, Krystyna; Koch, Tim; Rangwala, Sabbir; Zalud, Peter F.; Yu, Young; Tower, John; Ferraro, Joseph

    2009-05-01

    We report on the development of 32 x 32 focal plane arrays (FPAs) based on InGaAsP/InP Geiger-mode avalanche photodiodes (GmAPDs) designed for use in three-dimensional (3-D) laser radar imaging systems at 1064 nm. To our knowledge, this is the first realization of FPAs for 3-D imaging that employ a planar-passivated buried-junction InP-based GmAPD device platform. This development also included the design and fabrication of custom readout integrated circuits (ROICs) to perform avalanche detection and time-of-flight measurements on a per-pixel basis. We demonstrate photodiode arrays (PDAs) with a very narrow breakdown voltage distribution width of 0.34 V, corresponding to a breakdown voltage total variation of less than +/- 0.2%. At an excess bias voltage of 3.3 V, which provides 40% pixel-level single photon detection efficiency, we achieve average dark count rates of 2 kHz at an operating temperature of 248 K. We present the characterization of optical crosstalk induced by hot carrier luminescence during avalanche events, where we show that the worst-case crosstalk probability per pixel, which occurs for nearest neighbors, has a value of less than 1.6% and exhibits anisotropy due to the isolation trench etch geometry. To demonstrate the FPA response to optical density variations, we show a simple image of a broadened optical beam.

  5. Constraints on the detection of cryovolcanic plumes on Europa

    NASA Astrophysics Data System (ADS)

    Quick, Lynnae C.; Barnouin, Olivier S.; Prockter, Louise M.; Patterson, G. Wesley

    2013-09-01

    Surface venting is a common occurrence on several outer solar system satellites. Spacecraft have observed plumes erupting from the geologically young surfaces of Io, Triton and Enceladus. Europa also has a relatively young surface, and previous studies have suggested that cryovolcanic eruptions may be responsible for the production of low-albedo deposits surrounding lenticulae and along triple band margins and lineae. Here, we have used the projected thicknesses of these deposits as constraints to determine the lifetimes of detectable cryovolcanic plumes that may have emplaced them. In an effort to explore the feasibility of detecting the particle component of plumes with spacecraft cameras operating at visible wavelengths, we present a conservative model to estimate plume characteristics such as height, eruption velocity, and optical depth under a variety of conditions. We find that cryovolcanic plumes on Europa are likely to be fairly small in stature, with heights between 2.5 and 26 km and eruption velocities between 81 and 261 m/s. Under these conditions, and assuming that plumes are products of steady eruptions with particle radii of 0.5 μm, our model suggests that easily detectable plumes will have optical depths, τ, greater than or equal to 0.04, and that their lifetimes may be no more than 300,000 years. Plume detection may be possible if high-phase-angle limb observations and/or stereo imaging of the surface are undertaken in areas where eruptive activity is likely to occur. Cameras with imaging resolutions greater than 50 m/pixel should be used to make all observations. Future missions could employ the results of our model in searches for plume activity at Europa.
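
    A quick consistency check of the quoted heights and eruption velocities, assuming a simple ballistic plume (h = v^2 / (2g)) with Europa's surface gravity g of roughly 1.315 m/s^2; this is the standard relation for such estimates, not code from the paper.

    ```python
    g_europa = 1.315                       # m/s^2, Europa surface gravity (assumed)
    for v in (81.0, 261.0):                # eruption velocities quoted in the abstract
        h_km = v**2 / (2 * g_europa) / 1e3
        print(f"v = {v:5.0f} m/s  ->  ballistic height ~ {h_km:4.1f} km")
    # Prints ~2.5 km and ~25.9 km, consistent with the quoted 2.5-26 km range.
    ```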

  6. SeaWiFS Postlaunch Calibration and Validation Analyses

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B. (Editor); Firestone, Elaine (Editor); McClain, Charles R.; Barnes, Robert A.; Eplee, Robert E., Jr.; Franz, Bryan A.; Hsu, N. Christina; Patt, Frederick S.; Pietras, Christophe M.; Robinson, Wayne D.

    2000-01-01

    The effort to resolve data quality issues and improve on the initial data evaluation methodologies of the SeaWiFS Project was an extensive one. These evaluations have resulted, to date, in three major reprocessings of the entire data set, where each reprocessing addressed the data quality issues that could be identified up to the time of the reprocessing. Three volumes of the SeaWiFS Postlaunch Technical Report Series (Volumes 9, 10, and 11) are needed to document the improvements implemented since launch. Volume 10 continues the sequential presentation of postlaunch data analysis and algorithm descriptions begun in Volume 9. Chapter 1 of Volume 10 describes an absorbing aerosol index, similar to that produced by the Total Ozone Mapping Spectrometer (TOMS) Project, which is used to flag pixels contaminated by absorbing aerosols, such as dust and smoke. Chapter 2 discusses the algorithm being used to remove SeaWiFS out-of-band radiance from the water-leaving radiances. Chapter 3 provides an itemization of all significant changes in the processing algorithms for each of the first three reprocessings. Chapter 4 shows the time series of global clear-water and deep-water (depths greater than 1,000 m) bio-optical and atmospheric properties (normalized water-leaving radiances, chlorophyll, atmospheric optical depth, etc.) based on the eight-day composites as a check on the sensor calibration stability. Chapter 5 examines the variation in the derived products with scan angle using high resolution data around Hawaii to test for residual scan modulation effects and atmospheric correction biases. Chapter 6 provides a methodology for evaluating the atmospheric correction algorithm and atmospheric derived products using ground-based observations. Similarly, Chapter 7 presents match-up comparisons of coincident satellite and in situ data to determine the accuracy of the water-leaving radiances, chlorophyll a, and K(490) products.

  7. Validation of MODIS aerosol optical depth product over China using CARSNET measurements

    NASA Astrophysics Data System (ADS)

    Xie, Yong; Zhang, Yan; Xiong, Xiaoxiong; Qu, John J.; Che, Huizheng

    2011-10-01

    This study evaluates Moderate Resolution Imaging Spectroradiometer (MODIS) Aerosol Optical Depth (AOD) retrievals against ground measurements collected by the China Aerosol Remote Sensing NETwork (CARSNET). At the current stage, the MODIS Collection 5 (C5) AODs are retrieved by two distinct algorithms: the Dark Target (DT) and the Deep Blue (DB). The CARSNET AODs are derived from measurements of the Cimel Electronique CE-318, the same instrument deployed by the AErosol RObotic NETwork (AERONET). The collocation is performed by matching each MODIS AOD pixel (10 × 10 km2) to the CARSNET AOD mean within 7.5 min of satellite overpass. Four-year comparisons (2005-2008) of MODIS/CARSNET at ten sites show that the performance of the MODIS AOD retrieval is highly dependent on the underlying land surface. The MODIS DT AODs are on average lower than the CARSNET AODs by 6-30% over forest and grassland areas, but can be higher by up to 54% over urban areas and 95% over desert-like areas. More than 50% of the MODIS DT AODs fall within the expected error envelope over forest and grassland areas. The MODIS DT tends to overestimate small AOD over urban areas. Over highly vegetated areas it underestimates small AOD and overestimates large AOD. Generally, its quality degrades with decreasing AOD. The MODIS DB is capable of retrieving AOD over desert, but with a significant underestimation at the CARSNET sites. The best retrieval of the MODIS DB is over grassland areas, with about 70% of retrievals within the expected error. The uncertainties of MODIS AOD retrieval arising from spatial-temporal collocation and instrument calibration are discussed briefly.

  8. Estimating PM2.5 in Xi'an , China Using Aerosol Optical Depth of Npp Viirs Data and Meteorological Measurements

    NASA Astrophysics Data System (ADS)

    Zhang, K.; Yang, Z.; Zheng, J.; Jiao, J.; Gao, W.

    2018-04-01

    In recent years, air pollution has become more and more serious, which not only decreases visibility but also affects human health. Because particulate matter is the most important pollutant, satellite remote sensing measurements have been widely used to estimate PM2.5 concentration on the ground. The Visible Infrared Imaging Radiometer Suite (VIIRS) is one of the instruments carried on the National Polar-orbiting Partnership (NPP) satellite. In this study, VIIRS data were used to retrieve aerosol optical depth (AOD) with the dark-pixel method, and several other major meteorological variables (wind speed, relative humidity, NO2 concentration, ground surface relative humidity and planetary boundary layer height) were combined with AOD to construct a nonlinear multiple regression model establishing the relationship between AOD and PM2.5 concentration. The North Basin of Shaanxi province of China, which includes Xi'an, is located north of the Qinling Mountains, south of the Loess Plateau, and in the central Weihe basin; its special terrain and adverse weather conditions (calm winds, little rain) cause frequent haze weather in Xi'an. Xi'an city was selected as the experimental area because of these characteristics. This research obtained AOD results from August 1, 2013 to October 30, 2013. The inversion results were compared with ground-based PM2.5 concentration data from air quality monitoring stations in Xi'an. The result showed a significant correlation between the two, with a correlation coefficient of 0.783. This verifies that the AOD retrieved from VIIRS data with this model could be used to estimate the surface PM2.5 concentration and monitor regional air quality.
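
    A minimal sketch of one possible nonlinear multiple regression between AOD, meteorological variables, and PM2.5. The abstract does not give the exact model form; a multiplicative power-law model linearized by taking logarithms is a common choice and is assumed here, and all names are illustrative.

    ```python
    import numpy as np

    def fit_pm25(aod, met, pm25):
        """aod: (N,), met: (N, M) positive meteorological predictors, pm25: (N,) ground data.
        Fits log(PM2.5) = b0 + b1*log(AOD) + sum_j c_j*log(met_j) by least squares."""
        X = np.column_stack([np.ones_like(aod), np.log(aod), np.log(met)])
        coeffs, *_ = np.linalg.lstsq(X, np.log(pm25), rcond=None)
        return coeffs

    def predict_pm25(coeffs, aod, met):
        X = np.column_stack([np.ones_like(aod), np.log(aod), np.log(met)])
        return np.exp(X @ coeffs)
    ```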

  9. Planck intermediate results. XIV. Dust emission at millimetre wavelengths in the Galactic plane

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Alves, M. I. R.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Battaner, E.; Benabed, K.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bonaldi, A.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Cardoso, J.-F.; Catalano, A.; Chamballu, A.; Chiang, H. C.; Chiang, L.-Y.; Christensen, P. R.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Couchot, F.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dupac, X.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Galeotta, S.; Ganga, K.; Ghosh, T.; Giard, M.; Giardino, G.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Harrison, D. L.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Jaffe, A. H.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Laureijs, R. J.; Lawrence, C. R.; Leonardi, R.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Macías-Pérez, J. F.; Maffei, B.; Maino, D.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; Mazzotta, P.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Pasian, F.; Patanchon, G.; Peel, M.; Perdereau, O.; Perrotta, F.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Pratt, G. W.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reach, W. T.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Ricciardi, S.; Riller, T.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Savini, G.; Scott, D.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Verstraete, L.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-04-01

    We use Planck HFI data combined with ancillary radio data to study the emissivity index of the interstellar dust emission in the frequency range 100-353 GHz, or 3-0.8 mm, in the Galactic plane. We analyse the region l = 20°-44° and |b| ≤ 4° where the free-free emission can be estimated from radio recombination line data. We fit the spectra at each sky pixel with a modified blackbody model and two opacity spectral indices, βmm and βFIR, below and above 353 GHz, respectively. We find that βmm is smaller than βFIR, and we detect a correlation between this low frequency power-law index and the dust optical depth at 353 GHz, τ353. The opacity spectral index βmm increases from about 1.54 in the more diffuse regions of the Galactic disk, |b| = 3°-4° and τ353 ~ 5 × 10-5, to about 1.66 in the densest regions with an optical depth of more than one order of magnitude higher. We associate this correlation with an evolution of the dust emissivity related to the fraction of molecular gas along the line of sight. This translates into βmm ~ 1.54 for a medium that is mostly atomic and βmm ~ 1.66 when the medium is dominated by molecular gas. We find that both the two-level system model and magnetic dipole emission by ferromagnetic particles can explain the results. These results improve our understanding of the physics of interstellar dust and lead towards a complete model of the dust spectrum of the Milky Way from far-infrared to millimetre wavelengths.
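
    A minimal per-pixel sketch of the two-index modified blackbody described above, assuming the standard form I_nu = tau_353 (nu/353 GHz)^beta B_nu(T) with beta switching from beta_mm to beta_FIR at 353 GHz; this illustrates the model, not the Planck Collaboration's actual fitting pipeline, and the initial-guess values are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    H = 6.62607015e-34   # Planck constant [J s]
    K = 1.380649e-23     # Boltzmann constant [J/K]
    C = 2.99792458e8     # speed of light [m/s]
    NU0 = 353e9          # reference frequency, 353 GHz

    def planck_bnu(nu, T):
        """Planck function B_nu(T) in SI units."""
        return 2 * H * nu**3 / C**2 / np.expm1(H * nu / (K * T))

    def dust_sed(nu, tau353, T, beta_mm, beta_fir):
        """Modified blackbody with different opacity indices below/above 353 GHz."""
        beta = np.where(nu < NU0, beta_mm, beta_fir)
        return tau353 * (nu / NU0) ** beta * planck_bnu(nu, T)

    # Per-pixel fit: nu holds the band frequencies, i_nu the measured intensities.
    # popt, pcov = curve_fit(dust_sed, nu, i_nu, p0=(5e-5, 19.0, 1.6, 1.8))
    ```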

  10. Comparison of computation time and image quality between full-parallax 4G-pixels CGHs calculated by the point cloud and polygon-based method

    NASA Astrophysics Data System (ADS)

    Nakatsuji, Noriaki; Matsushima, Kyoji

    2017-03-01

    Full-parallax high-definition CGHs composed of more than a billion pixels have so far been created only by the polygon-based method because of its high performance. Recently, however, GPUs have allowed us to generate CGHs much faster with the point-cloud method. In this paper, we measure the computation time of object fields for full-parallax high-definition CGHs, composed of 4 billion pixels and reconstructing the same scene, using the point-cloud method on a GPU and the polygon-based method on a CPU. In addition, we compare the optical and simulated reconstructions of the CGHs created by these techniques to verify the image quality.
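
    A minimal sketch of the point-cloud object-field computation, using the standard spherical-wave summation over object points; the grid and parameters below are illustrative, and a real multi-gigapixel CGH would run this on a GPU rather than in plain NumPy.

    ```python
    import numpy as np

    def object_field(points, amplitudes, x, y, wavelength):
        """points: (P, 3) object points (xp, yp, zp); x, y: hologram-plane coordinate grids."""
        k = 2 * np.pi / wavelength
        field = np.zeros(x.shape, dtype=np.complex128)
        for (xp, yp, zp), a in zip(points, amplitudes):
            r = np.sqrt((x - xp) ** 2 + (y - yp) ** 2 + zp ** 2)
            field += a / r * np.exp(1j * k * r)   # spherical wave from one object point
        return field

    # Tiny example grid (a full-parallax hologram would be billions of pixels).
    pitch = 1e-6
    xs = (np.arange(512) - 256) * pitch
    x, y = np.meshgrid(xs, xs)
    pts = np.array([[0.0, 0.0, 0.05], [1e-4, 0.0, 0.06]])   # two points, 5-6 cm away
    U = object_field(pts, np.ones(len(pts)), x, y, 532e-9)
    ```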

  11. A Compact Polarization Imager

    NASA Technical Reports Server (NTRS)

    Thompson, Karl E.; Rust, David M.; Chen, Hua

    1995-01-01

    A new type of image detector has been designed to analyze the polarization of light simultaneously at all picture elements (pixels) in a scene. The Integrated Dual Imaging Detector (IDID) consists of a polarizing beamsplitter bonded to a custom-designed charge-coupled device with signal-analysis circuitry, all integrated on a silicon chip. The IDID should simplify the design and operation of imaging polarimeters and spectroscopic imagers used, for example, in atmospheric and solar research. Other applications include environmental monitoring and robot vision. Innovations in the IDID include two interleaved 512 x 1024 pixel imaging arrays (one for each polarization plane), large dynamic range (well depth of 10^6 electrons per pixel), simultaneous readout and display of both images at 10^6 pixels per second, and on-chip analog signal processing to produce polarization maps in real time. When used with a lithium niobate Fabry-Perot etalon or other color filter that can encode spectral information as polarization, the IDID can reveal tiny differences between simultaneous images at two wavelengths.

  12. Optically Based Rapid Screening Method for Proven Optimal Treatment Strategies Before Treatment Begins

    DTIC Science & Technology

    2015-08-01

    lifetime (t2) corresponds to protein-bound NADH (23). Conversely, protein-bound FAD corresponds to the short lifetime, whereas free FAD corresponds... single photon counting (TCSPC) electronics (SPC-150, Becker and Hickl). TCSPC uses a fast detector PMT to measure the time between a laser pulse and... Becker and Hickl). A binning of nine surrounding pixels was used. Then, the fluorescence lifetime components were computed for each pixel by deconvolving

  13. On the performance of large monolithic LaCl3(Ce) crystals coupled to pixelated silicon photosensors

    NASA Astrophysics Data System (ADS)

    Olleros, P.; Caballero, L.; Domingo-Pardo, C.; Babiano, V.; Ladarescu, I.; Calvo, D.; Gramage, P.; Nacher, E.; Tain, J. L.; Tolosa, A.

    2018-03-01

    We investigate the performance of large area radiation detectors, with high energy- and spatial-resolution, intended for the development of a Total Energy Detector with gamma-ray imaging capability, so-called i-TED. This new development aims for an enhancement in detection sensitivity in time-of-flight neutron capture measurements, versus the commonly used C6D6 liquid scintillation total-energy detectors. In this work, we study in detail the impact of the readout photosensor on the energy response of large area (50×50 mm2) monolithic LaCl3(Ce) crystals, in particular when replacing a conventional mono-cathode photomultiplier tube by an 8×8 pixelated silicon photomultiplier. Using the largest commercially available monolithic SiPM array (25 cm2), with a pixel size of 6×6 mm2, we have measured an average energy resolution of 3.92% FWHM at 662 keV for crystal thicknesses of 10, 20 and 30 mm. The results are confronted with detailed Monte Carlo (MC) calculations, where optical processes and properties have been included for the reliable tracking of the scintillation photons. After the experimental validation of the MC model, we use our MC code to explore the impact of a smaller photosensor segmentation on the energy resolution. Our optical MC simulations predict only a marginal deterioration of the spectroscopic performance for pixels of 3×3 mm2.

  14. A liquid-crystal-on-silicon color sequential display using frame buffer pixel circuits

    NASA Astrophysics Data System (ADS)

    Lee, Sangrok

    Next-generation liquid-crystal-on-silicon (LCOS) high-definition (HD) televisions and image projection displays will need to be low-cost and high quality to compete with existing systems based on digital micromirror devices (DMDs), plasma displays, and direct-view liquid crystal displays. In this thesis, a novel frame-buffer pixel architecture, which buffers data for the next image frame while displaying the current frame, is presented as such a competitive solution. The primary goal of the thesis is to demonstrate an LCOS microdisplay architecture for high-quality image projection displays at potentially low cost. The thesis covers four main research areas: new frame-buffer pixel circuits to improve LCOS performance, backplane architecture design and testing, liquid crystal modes for the LCOS microdisplay, and system integration and demonstration. The design requirements for an LCOS backplane with a 64 x 32 pixel array are addressed, and the measured electrical characteristics match computer simulation results. Various liquid crystal (LC) modes applicable to LCOS microdisplays and their physical properties are discussed. One- and two-dimensional director simulations are performed for the selected LC modes. Test liquid crystal cells with the selected LC modes are made and their electro-optic effects are characterized. The 64 x 32 LCOS microdisplays fabricated with the best LC mode are optically tested with interface circuitry. The characteristics of the LCOS microdisplays are summarized together with the successful demonstration.

  15. Charge Loss and Charge Sharing Measurements for Two Different Pixelated Cadmium-Zinc-Telluride Detectors

    NASA Technical Reports Server (NTRS)

    Gaskin, Jessica; Sharma, Dharma; Ramsey, Brian; Seller, Paul

    2003-01-01

    As part of ongoing research at Marshall Space Flight Center, Cadmium-Zinc-Telluride (CdZnTe) pixelated detectors are being developed for use at the focal plane of the High Energy Replicated Optics (HERO) telescope. HERO requires a 64x64 pixel array with a spatial resolution of around 200 microns (with a 6 m focal length) and high energy resolution (< 2% at 60 keV). We are currently testing smaller arrays as a necessary first step towards this goal. In this presentation, we compare charge sharing and charge loss measurements between two devices that differ both electronically and geometrically. The first device consists of a 1-mm-thick piece of CdZnTe that is sputtered with a 4x4 array of pixels with a pixel pitch of 750 microns (inter-pixel gap of 100 microns). The signal is read out using discrete ultra-low-noise preamplifiers, one for each of the 16 pixels. The second detector consists of a 2-mm-thick piece of CdZnTe that is sputtered with a 16x16 array of pixels with a pixel pitch of 300 microns (inter-pixel gap of 50 microns). Instead of using discrete preamplifiers, the crystal is bonded to an ASIC that provides all of the front-end electronics for each of the 256 pixels. Further, we compare the measured results with simulated results and discuss to what degree the bias voltage (i.e., the electric field), and hence the drift and diffusion coefficients, affect our measurements.

  16. Aerosols

    Atmospheric Science Data Center

    2013-04-17

    ... depth. A color scale is used to represent this quantity, and high aerosol amount is indicated by yellow or green pixels, and clearer skies ... out most clearly, whereas MISR's oblique cameras enhance sensitivity to even thin layers of aerosols. In the March image, the only ...

  17. All-passive pixel super-resolution of time-stretch imaging

    PubMed Central

    Chan, Antony C. S.; Ng, Ho-Cheung; Bogaraju, Sharat C. V.; So, Hayden K. H.; Lam, Edmund Y.; Tsia, Kevin K.

    2017-01-01

    Based on image encoding in a serial-temporal format, optical time-stretch imaging entails a stringent requirement of a state-of-the-art fast data acquisition unit in order to preserve high image resolution at an ultrahigh frame rate — hampering the widespread utility of this technology. Here, we propose a pixel super-resolution (pixel-SR) technique tailored for time-stretch imaging that preserves pixel resolution at a relaxed sampling rate. It harnesses the subpixel shifts between image frames inherently introduced by asynchronous digital sampling of the continuous time-stretch imaging process. Precise pixel registration is thus accomplished without any active opto-mechanical subpixel-shift control or other additional hardware. Here, we present an experimental pixel-SR image reconstruction pipeline that restores high-resolution time-stretch images of microparticles and biological cells (phytoplankton) at a relaxed sampling rate (≈2–5 GSa/s) — more than four times lower than the originally required readout rate (20 GSa/s) — and is thus effective for high-throughput, label-free, morphology-based cellular classification down to single-cell precision. Upon integration with high-throughput image processing technology, this pixel-SR time-stretch imaging technique represents a cost-effective and practical solution for large scale cell-based phenotypic screening in biomedical diagnosis and machine vision for quality control in manufacturing. PMID:28303936
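
    The registration principle can be illustrated with a rough numpy sketch under simplifying assumptions (known clock rates, simple gridding of samples onto a finer common axis); it is not the authors' reconstruction pipeline, and the function name and parameters are hypothetical:

```python
import numpy as np

def pixel_sr_restack(lines, rep_rate_hz, sample_rate_hz, upsample=4):
    """Hedged sketch of pixel super-resolution from asynchronous sampling.

    `lines` is a (n_frames, n_samples) array of 1D time-stretch line scans.
    Because the sampling clock is asynchronous with the pulse repetition rate,
    each successive line is sampled at a slightly different sub-sample phase;
    here that phase is predicted from the clock ratio and used to place every
    sample onto a common fine grid (no active shift control is assumed).
    """
    n_frames, n_samples = lines.shape
    samples_per_line = sample_rate_hz / rep_rate_hz              # generally non-integer
    frac_shift = samples_per_line - np.floor(samples_per_line)   # sub-sample drift per line

    fine_len = n_samples * upsample
    acc = np.zeros(fine_len)
    hits = np.zeros(fine_len)
    coarse = np.arange(n_samples, dtype=float)
    for k in range(n_frames):
        # positions of this line's samples on the fine grid, shifted by the accumulated phase
        pos = np.round((coarse + (k * frac_shift) % 1.0) * upsample).astype(int)
        pos = np.clip(pos, 0, fine_len - 1)
        np.add.at(acc, pos, lines[k])
        np.add.at(hits, pos, 1)
    hits[hits == 0] = 1
    return acc / hits
```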

  18. Global Long-Term SeaWiFS Deep Blue Aerosol Products available at NASA GES DISC

    NASA Technical Reports Server (NTRS)

    Shen, Suhung; Sayer, A. M.; Bettenhausen, Corey; Wei, Jennifer C.; Ostrenga, Dana M.; Vollmer, Bruce E.; Hsu, Nai-Yung; Kempler, Steven J.

    2012-01-01

    Long-term climate data records about aerosols are needed in order to improve understanding of air quality, radiative forcing, and for many other applications. The Sea-viewing Wide Field-of-view Sensor (SeaWiFS) provides a global, well-calibrated 13-year (1997-2010) record of top-of-atmosphere radiance, suitable for use in retrieval of atmospheric aerosol optical depth (AOD). Recently, global aerosol products derived from SeaWiFS with the Deep Blue algorithm (SWDB) have become available for the entire mission, as part of the NASA Making Earth Science data records for Use in Research for Earth Science (MEaSUREs) program. The latest Deep Blue algorithm retrieves aerosol properties not only over bright desert surfaces, but also vegetated surfaces, oceans, and inland water bodies. Comparisons with AERONET observations have shown that the data are suitable for quantitative scientific use [1],[2]. The resolution of Level 2 pixels is 13.5x13.5 km2 at the center of the swath. Level 3 daily and monthly data are composited using best-quality Level 2 pixels at resolutions of both 0.5°x0.5° and 1.0°x1.0°. Focusing on the southwest Asia region, this presentation shows seasonal variations of AOD, and the results of comparisons of 5 years (2003-2007) of AOD from SWDB (Version 3) and MODIS Aqua (Version 5.1) for the Dark Target (MYD-DT) and Deep Blue (MYD-DB) algorithms.

  19. Bayesian aerosol retrieval algorithm for MODIS AOD retrieval over land

    NASA Astrophysics Data System (ADS)

    Lipponen, Antti; Mielonen, Tero; Pitkänen, Mikko R. A.; Levy, Robert C.; Sawyer, Virginia R.; Romakkaniemi, Sami; Kolehmainen, Ville; Arola, Antti

    2018-03-01

    We have developed a Bayesian aerosol retrieval (BAR) algorithm for the retrieval of aerosol optical depth (AOD) over land from the Moderate Resolution Imaging Spectroradiometer (MODIS). In the BAR algorithm, we simultaneously retrieve all dark land pixels in a granule, utilize spatial correlation models for the unknown aerosol parameters, use a statistical prior model for the surface reflectance, and take into account the uncertainties due to fixed aerosol models. The retrieved parameters are total AOD at 0.55 µm, fine-mode fraction (FMF), and surface reflectances at four different wavelengths (0.47, 0.55, 0.64, and 2.1 µm). The accuracy of the new algorithm is evaluated by comparing the AOD retrievals to Aerosol Robotic Network (AERONET) AOD. The results show that the BAR significantly improves the accuracy of AOD retrievals over the operational Dark Target (DT) algorithm. A reduction of about 29 % in the AOD root mean square error and decrease of about 80 % in the median bias of AOD were found globally when the BAR was used instead of the DT algorithm. Furthermore, the fraction of AOD retrievals inside the ±(0.05+15 %) expected error envelope increased from 55 to 76 %. In addition to retrieving the values of AOD, FMF, and surface reflectance, the BAR also gives pixel-level posterior uncertainty estimates for the retrieved parameters. The BAR algorithm always results in physical, non-negative AOD values, and the average computation time for a single granule was less than a minute on a modern personal computer.
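
    A heavily simplified sketch of the MAP-estimation idea behind such a Bayesian retrieval is given below; `forward_model`, the prior terms, and the state layout are placeholders for illustration, not the operational BAR implementation:

```python
import numpy as np
from scipy.optimize import minimize

def map_retrieval(y_obs, forward_model, x0, noise_var, prior_mean, prior_cov_inv):
    """Hedged sketch of a Bayesian (MAP) retrieval in the spirit of the BAR idea.

    y_obs           observed TOA reflectances stacked over all dark pixels in a granule
    forward_model   callable x -> modeled reflectances (placeholder for the real RT model)
    x0              initial state (e.g., per-pixel AOD, FMF, surface reflectances)
    noise_var       observation noise variance
    prior_mean      prior state (e.g., climatology / smooth surface prior)
    prior_cov_inv   inverse prior covariance; spatial correlation between pixels lives here
    """
    def neg_log_posterior(x):
        r = y_obs - forward_model(x)
        data_term = 0.5 * np.sum(r * r) / noise_var
        dx = x - prior_mean
        prior_term = 0.5 * dx @ prior_cov_inv @ dx
        return data_term + prior_term

    res = minimize(neg_log_posterior, x0, method="L-BFGS-B",
                   bounds=[(0.0, None)] * x0.size)   # keeps the state (e.g., AOD) non-negative
    return res.x
```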

  20. Full-field transmission x-ray imaging with confocal polycapillary x-ray optics

    PubMed Central

    Sun, Tianxi; MacDonald, C. A.

    2013-01-01

    A transmission x-ray imaging setup based on a confocal combination of a polycapillary focusing x-ray optic followed by a polycapillary collimating x-ray optic was designed and demonstrated to have good resolution, better than the unmagnified pixel size and unlimited by the x-ray tube spot size. This imaging setup has potential application in x-ray imaging for small samples, for example, for histology specimens. PMID:23460760

  1. Spatial and Temporal Resolutions Pixel Level Performance Analysis of the Onboard Remote Sensing Electro-Optical Systems

    NASA Astrophysics Data System (ADS)

    El-Sheikh, H. M.; Yakushenkov, Y. G.

    2014-08-01

    Formulas are offered for determining the interconnection between the spatial resolution (including perspective distortions) and the temporal resolution of an onboard electro-optical remote sensing system for a variety of scene viewing modes. These dependences can be compared with the user's requirements, and the permissible values of the design parameters of the main units of a modern electro-optical system are discussed.

  2. Three-dimensional ghost imaging lidar via sparsity constraint

    NASA Astrophysics Data System (ADS)

    Gong, Wenlin; Zhao, Chengqiang; Yu, Hong; Chen, Mingliang; Xu, Wendong; Han, Shensheng

    2016-05-01

    Three-dimensional (3D) remote imaging attracts increasing attention for capturing a target’s characteristics. Although great progress for 3D remote imaging has been made with methods such as scanning imaging lidar and pulsed floodlight-illumination imaging lidar, either the detection range or the application mode is limited in present methods. Ghost imaging via sparsity constraint (GISC) enables the reconstruction of a two-dimensional N-pixel image from much fewer than N measurements. Using the GISC technique and the depth information of targets captured with time-resolved measurements, we report a 3D GISC lidar system and experimentally show that a 3D scene at about 1.0 km range can be stably reconstructed with global measurements even below the Nyquist limit. Compared with existing 3D optical imaging methods, 3D GISC has the capability of both high efficiency in information extraction and high sensitivity in detection. This approach can be generalized to nonvisible wavebands and applied to other 3D imaging areas.
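
    The sparsity-constrained reconstruction step can be illustrated with a generic ISTA solver for y = Ax with an L1 penalty; this is a textbook compressive-sensing sketch with an assumed-known measurement matrix A, not the authors' solver:

```python
import numpy as np

def gisc_reconstruct(A, y, lam=0.1, n_iter=200):
    """Minimal ISTA sketch for x = argmin ||Ax - y||^2 + lam*||x||_1.

    A : (M, N) matrix of illumination/speckle patterns, one measurement per row
    y : (M,) bucket-detector measurements, with M much smaller than N
    Returns the reconstructed N-pixel image as a flat vector.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x
```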

  3. Boundary segmentation for fluorescence microscopy using steerable filters

    NASA Astrophysics Data System (ADS)

    Ho, David Joon; Salama, Paul; Dunn, Kenneth W.; Delp, Edward J.

    2017-02-01

    Fluorescence microscopy is used to image multiple subcellular structures in living cells which are not readily observed using conventional optical microscopy. Moreover, two-photon microscopy is widely used to image structures deeper in tissue. Recent advances in fluorescence microscopy have enabled the generation of large data sets of images at different depths, times, and spectral channels. Thus, automatic object segmentation is necessary since manual segmentation would be inefficient and biased. However, automatic segmentation is still a challenging problem, as regions of interest may lack well-defined boundaries and may have non-uniform pixel intensities. This paper describes a method for segmenting tubular structures in fluorescence microscopy images of rat kidney and liver samples using adaptive histogram equalization, foreground/background segmentation, steerable filters to capture directional tendencies, and connected-component analysis. The results from several data sets demonstrate that our method can segment tubular boundaries successfully. Moreover, our method has better performance when compared to other popular image segmentation methods when using ground truth data obtained via manual segmentation.
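
    A condensed sketch of this kind of pipeline (adaptive histogram equalization, a first-order steerable derivative-of-Gaussian filter bank, thresholding, and connected components) is shown below; it is an illustration under simplified assumptions, not the authors' implementation:

```python
import numpy as np
from scipy import ndimage
from skimage import exposure, filters

def segment_tubule_boundaries(image, sigma=2.0, n_orientations=8):
    """Hedged sketch: CLAHE normalization, steerable (derivative-of-Gaussian) boundary
    responses, foreground/background thresholding, and connected-component labeling."""
    img = image.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)       # scale to [0, 1] for CLAHE
    img = exposure.equalize_adapthist(img)                # adaptive histogram equalization

    # basis responses: Gaussian derivatives along y and x
    gy = ndimage.gaussian_filter(img, sigma, order=(1, 0))
    gx = ndimage.gaussian_filter(img, sigma, order=(0, 1))

    # steer the filter: response at angle theta is cos(theta)*gx + sin(theta)*gy
    thetas = np.linspace(0, np.pi, n_orientations, endpoint=False)
    boundary = np.max([np.abs(np.cos(t) * gx + np.sin(t) * gy) for t in thetas], axis=0)

    fg = img > filters.threshold_otsu(img)                # foreground/background split
    edges = boundary > filters.threshold_otsu(boundary)   # strong oriented edges
    labels, n = ndimage.label(fg & ~edges)                # objects separated by boundaries
    return labels, n
```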

  4. Intraocular camera for retinal prostheses: Refractive and diffractive lens systems

    NASA Astrophysics Data System (ADS)

    Hauer, Michelle Christine

    The focus of this thesis is on the design and analysis of refractive, diffractive, and hybrid refractive/diffractive lens systems for a miniaturized camera that can be surgically implanted in the crystalline lens sac and is designed to work in conjunction with current and future generation retinal prostheses. The development of such an intraocular camera (IOC) would eliminate the need for an external head-mounted or eyeglass-mounted camera. Placing the camera inside the eye would allow subjects to use their natural eye movements for foveation (attention) instead of more cumbersome head tracking, would notably aid in personal navigation and mobility, and would also be significantly more psychologically appealing from the standpoint of personal appearances. The capability for accommodation with no moving parts or feedback control is incorporated by employing camera designs that exhibit nearly infinite depth of field. Such an ultracompact optical imaging system requires a unique combination of refractive and diffractive optical elements and relaxed system constraints derived from human psychophysics. This configuration necessitates an extremely compact, short focal-length lens system with an f-number close to unity. Initially, these constraints appear highly aggressive from an optical design perspective. However, after careful analysis of the unique imaging requirements of a camera intended to work in conjunction with the relatively low pixellation levels of a retinal microstimulator array, it becomes clear that such a design is not only feasible, but could possibly be implemented with a single lens system.

  5. Imaging the eye fundus with real-time en-face spectral domain optical coherence tomography

    PubMed Central

    Bradu, Adrian; Podoleanu, Adrian Gh.

    2014-01-01

    Real-time display of processed en-face spectral domain optical coherence tomography (SD-OCT) images is important for diagnosis. However, due to the many data processing steps required, such as fast Fourier transformation (FFT), data re-sampling, spectral shaping, apodization, and zero padding, followed by a software cut of the acquired 3D volume to produce an en-face slice, conventional high-speed SD-OCT cannot render an en-face OCT image in real time. Recently we demonstrated a Master/Slave (MS)-OCT method that is highly parallelizable, as it provides reflectivity values of points at depth within an A-scan in parallel. This allows direct production of en-face images. In addition, the MS-OCT method does not require data linearization, which further simplifies the processing. The computation in our previous paper was, however, time consuming. In this paper we present an optimized algorithm that can be used to provide en-face MS-OCT images much more quickly. Using such an algorithm we demonstrate around 10 times faster production of sets of en-face OCT images than previously obtained, as well as simultaneous real-time display of up to 4 en-face OCT images of 200 × 200 pixels from the fovea and the optic nerve of a volunteer. We also demonstrate 3D and B-scan OCT images obtained from sets of MS-OCT C-scans, i.e. with no FFT and no intermediate step of generation of A-scans. PMID:24761303

  6. New solutions and technologies for uncooled infrared imaging

    NASA Astrophysics Data System (ADS)

    Rollin, Joël.; Diaz, Frédéric; Fontaine, Christophe; Loiseaux, Brigitte; Lee, Mane-Si Laure; Clienti, Christophe; Lemonnier, Fabrice; Zhang, Xianghua; Calvez, Laurent

    2013-06-01

    The military uncooled infrared market is driven by the continued cost reduction of focal plane arrays while maintaining high standards of sensitivity and steering towards smaller pixel sizes. As a consequence, new optical solutions are called for. Two approaches can come into play: the bottom-up option consists in allocating improvements to each contributor, while the top-down process relies on an overall optimization of the complete imaging channel. The University of Rennes I, together with Thales Angénieux, has been working over the past decade, through French MOD funding, on low-cost alternatives for infrared materials based upon chalcogenide glasses. Special care has been taken to enhance their mechanical properties and their ability to be moulded into complex shapes. New manufacturing developments capable of better yields for the raw materials will be addressed as well. Beyond mere lens cost reductions, a wavefront coding process can support a global optimization. This technique offers a way of relaxing optical constraints or upgrading thermal device performance through an increase in depth of focus and desensitization against temperature drifts: it combines image processing and the use of smart optical components. Thales achievements in these topics will be highlighted, and the trade-off between image quality correction levels and low-consumption, real-time processing, as might be required in hands-free night vision devices, will be emphasized. It is worth mentioning that the two approaches depend deeply on each other.

  7. Electro-optic characteristics of 4-domain vertical alignment nematic liquid crystal display with interdigital electrode

    NASA Astrophysics Data System (ADS)

    Hong, S. H.; Jeong, Y. H.; Kim, H. Y.; Cho, H. M.; Lee, W. G.; Lee, S. H.

    2000-06-01

    We have fabricated a vertically aligned 4-domain nematic liquid crystal display cell with thin film transistors. Unlike the conventional method of constructing 4 domains, i.e., protrusions and surrounding electrodes, which needs additional processes, in this study a pixel design forming 4 domains with interdigital electrodes is suggested. In the device, one pixel is divided into two parts. One part has a horizontal electric field in the vertical direction and the other part has a horizontal one in the horizontal direction. Such fields in the horizontal and vertical directions drive the liquid crystal director to tilt down in four directions. In this article, the electro-optic characteristics of cells with 2 and 4 domains have been studied. The device with 4 domains shows a faster response time than normal twisted-nematic and in-plane switching cells, a wide viewing angle with an optical compensation film, and more stable color characteristics than a 2-domain vertical alignment cell with a similar structure.

  8. Advanced imaging research and development at DARPA

    NASA Astrophysics Data System (ADS)

    Dhar, Nibir K.; Dat, Ravi

    2012-06-01

    Advances in imaging technology have a huge impact on our daily lives. Innovations in optics, focal plane arrays (FPA), microelectronics and computation have revolutionized camera design. As a result, new approaches to camera design and low cost manufacturing are now possible. These advances are clearly evident in the visible wavelength band due to pixel scaling and improvements in silicon material and CMOS technology. CMOS cameras are available in cell phones and many other consumer products. Advances in infrared imaging technology have been slow due to market volume and many technological barriers in detector materials, optics and fundamental limits imposed by the scaling laws of optics. There is of course much room for improvement in both visible and infrared imaging technology. This paper highlights various technology development projects at DARPA to advance imaging technology for both visible and infrared. Challenges and potential solutions are highlighted in areas related to wide field-of-view camera design, small-pitch pixels, broadband and multiband detectors, and focal plane arrays.

  9. Improvements in Night-Time Low Cloud Detection and MODIS-Style Cloud Optical Properties from MSG SEVIRI

    NASA Technical Reports Server (NTRS)

    Wind, Galina (Gala); Platnick, Steven; Riedi, Jerome

    2011-01-01

    The MODIS cloud optical properties algorithm (MOD06/MYD06 for Terra and Aqua MODIS, respectively) slated for production in Data Collection 6 has been adapted to execute using available channels on MSG SEVIRI. Available MODIS-style retrievals include IR Window-derived cloud top properties, using the new Collection 6 cloud top properties algorithm, cloud optical thickness from VIS/NIR bands, cloud effective radius from 1.6 and 3.7 µm, and cloud ice/water path. We also provide pixel-level uncertainty estimates for successful retrievals. It was found that at nighttime the SEVIRI cloud mask tends to report unnaturally low cloud fraction for marine stratocumulus clouds. A correction algorithm that improves detection of such clouds has been developed. We will discuss the improvements to nighttime low cloud detection for SEVIRI and show examples and comparisons with MODIS and CALIPSO. We will also show examples of MODIS-style pixel-level (Level-2) cloud retrievals for SEVIRI with comparisons to MODIS.

  10. Full-frame, programmable hyperspectral imager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Love, Steven P.; Graff, David L.

    A programmable, many-band spectral imager based on addressable spatial light modulators (ASLMs), such as micro-mirror-, micro-shutter- or liquid-crystal arrays, is described. Capable of collecting at once, without scanning, a complete two-dimensional spatial image with ASLM spectral processing applied simultaneously to the entire image, the invention employs optical assemblies wherein light from all image points is forced to impinge at the same angle onto the dispersing element, eliminating interplay between spatial position and wavelength. This is achieved, as examples, using telecentric optics to image light at the required constant angle, or with micro-optical array structures, such as micro-lens- or capillary arrays, that aim the light on a pixel-by-pixel basis. Light of a given wavelength then emerges from the disperser at the same angle for all image points, is collected at a unique location for simultaneous manipulation by the ASLM, then recombined with other wavelengths to form a final spectrally-processed image.

  11. Micromachined mirrors for raster-scanning displays and optical fiber switches

    NASA Astrophysics Data System (ADS)

    Hagelin, Paul Merritt

    Micromachines and micro-optics have the potential to shrink the size and cost of free-space optical systems, enabling a new generation of high-performance, compact projection displays and telecommunications equipment. In raster-scanning displays and optical fiber switches, a free-space optical beam can interact with multiple tilt-up micromirrors fabricated on a single substrate. The size, rotation angle, and flatness of the mirror surfaces determine the number of pixels in a raster display or ports in an optical switch. Single-chip and two-chip optical raster display systems demonstrate static mirror curvature correction, an integrated electronic driver board, and dynamic micromirror performance. Correction for curvature caused by a stress gradient in the micromirror leads to a resolution of 102 by 119 pixels in the single-chip display. The optical design of the two-chip display features in-situ mirror curvature measurement and adjustable image magnification with a single output lens. An electronic driver board synchronizes modulation of the optical source with micromirror actuation for the display of images. Dynamic off-axis mirror motion is shown to have minimal influence on resolution. The confocal switch, a free-space optical fiber cross-connect, incorporates micromirrors having a design similar to the image-refresh scanner. Two micromirror arrays redirect optical beams from an input fiber array to the output fibers. The switch architecture supports simultaneous switching of multiple wavelength channels. A 2x2 switch configuration, using single-mode optical fiber at 1550 nm, is demonstrated with insertion loss of -4.2 dB and cross-talk of -50.5 dB. The micromirrors have sufficient size and angular range for scaling to a 32x32 cross-connect switch that has low insertion loss and low cross-talk.

  12. PRISM project optical instrument

    NASA Technical Reports Server (NTRS)

    Taylor, Charles R.

    1994-01-01

    The scientific goal of the Passively-cooled Reconnaissance of the InterStellar Medium (PRISM) project is to map the emission of molecular hydrogen at 17.035 micrometers and 28.221 micrometers. Since the atmosphere is opaque at these infrared wavelengths, an orbiting telescope is being studied. The availability of infrared focal plane arrays enables infrared imaging spectroscopy at the molecular hydrogen wavelengths. The array proposed for PRISM is 128 pixels square, with a pixel size of 75 micrometers. In order to map the sky in a period of six months, and to resolve the nearer molecular clouds, each pixel must cover 0.5 arcminutes. This sets the focal length at 51.6 cm. In order for the pixel size to be half the diameter of the central diffraction peak at 28 micrometers would require a telescope aperture of 24 cm; an aperture of 60 cm has been selected for the PRISM study for greater light gathering power.
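
    The quoted numbers can be reproduced with a short back-of-the-envelope calculation (a sketch for checking the geometry, not project design code):

```python
import numpy as np

# Back-of-the-envelope check of the PRISM figures quoted above.
pixel = 75e-6                          # m, detector pixel size
ifov = 0.5 / 60 * np.pi / 180          # rad, 0.5 arcmin per pixel

focal_length = pixel / ifov            # ~0.516 m, i.e. the 51.6 cm quoted above
wavelength = 28.221e-6                 # m, the longer H2 line
# aperture for which half the Airy-disk diameter (1.22 * lambda * f / D) equals one pixel
aperture = 1.22 * wavelength * focal_length / pixel   # ~0.237 m, close to the 24 cm quoted

print(f"focal length = {focal_length*100:.1f} cm, diffraction-matched aperture = {aperture*100:.1f} cm")
```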

  13. Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.

    PubMed

    Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua

    2017-05-01

    In this paper, we overcome the limited dynamic range of the conventional digital camera, and propose a method of realizing high dynamic range imaging (HDRI) from a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed new method is that the spatial and temporal information of incident light in our DMD camera can be flexibly modulated, and it enables the camera pixels always to have reasonable exposure intensity by DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate the different light intensity to recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and implement the HDRI on different objects.
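
    A minimal sketch of an adaptive per-pixel coded-exposure loop in this spirit is shown below; the target level, duty-cycle limits, and update rule are illustrative assumptions, not the authors' control algorithm:

```python
import numpy as np

def update_dmd_pattern(frame, duty, full_scale=255, target=0.5, d_min=1/256, d_max=1.0):
    """Hedged sketch of adaptive per-pixel coded exposure for HDR imaging.

    frame : last captured 8-bit image, already attenuated by the DMD pattern `duty`
    duty  : per-pixel exposure duty cycle applied by the DMD (fraction of mirror 'on' time)
    Returns (radiance_estimate, next_duty): the HDR radiance recovered by dividing out the
    duty cycle, and a new pattern that steers each pixel toward `target` of full scale.
    """
    radiance = frame.astype(float) / np.maximum(duty, d_min)        # undo the coded exposure
    # choose the next duty cycle so that radiance * duty is roughly target * full_scale
    next_duty = np.clip(target * full_scale / np.maximum(radiance, 1.0), d_min, d_max)
    return radiance, next_duty
```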

  14. Multi-pixel high-resolution three-dimensional imaging radar

    NASA Technical Reports Server (NTRS)

    Cooper, Ken B. (Inventor); Dengler, Robert J. (Inventor); Siegel, Peter H. (Inventor); Chattopadhyay, Goutam (Inventor); Ward, John S. (Inventor); Juan, Nuria Llombart (Inventor); Bryllert, Tomas E. (Inventor); Mehdi, Imran (Inventor); Tarsala, Jan A. (Inventor)

    2012-01-01

    A three-dimensional imaging radar operating at high frequency (e.g., 670 GHz), using low phase-noise synthesizers and a fast chirper to generate a frequency-modulated continuous-wave (FMCW) waveform, is disclosed that operates with a multiplexed beam to obtain range information simultaneously on multiple pixels of a target. A source transmit beam may be divided by a hybrid coupler into multiple transmit beams multiplexed together and directed to be reflected off a target and return as a single receive beam which is demultiplexed and processed to reveal range information of separate pixels of the target associated with each transmit beam simultaneously. The multiple transmit beams may be developed with appropriate optics to be temporally and spatially differentiated before being directed to the target. Temporal differentiation corresponds to different intermediate frequencies that separate the range information of the multiple pixels. Collinear transmit beams having differentiated polarizations may also be implemented.

  15. High-MTF hybrid ferroelectric IRFPA

    NASA Astrophysics Data System (ADS)

    Evans, Scott B.; Hayden, Terrence

    1998-07-01

    Low cost, uncooled hybrid infrared focal plane arrays (IRFPA's) are in full-scale production at Raytheon Systems Company (RSC), formerly Texas Instruments Defense Systems and Electronics Group. Detectors consist of reticulated ceramic barium strontium titanate (BST) arrays of 320 X 240 pixels on 48.5 micrometer pitch. The principal performance shortcoming of the hybrid arrays has been low MTF due to thermal crosstalk between pixels. In the past two years, significant improvements have been made to increase MTF making hybrids more competitive in performance with monolithic arrays. The improvements are (1) the reduction of the thickness of the IR absorbing layer electrode that maintains electrical continuity and increases thermal isolation between pixels, (2) reduction of the electrical crosstalk from the ROIC, and (3) development of a process to increase the thermal path-length between pixels called 'elevated optical coat.' This paper describes all three activities and their efficacy. Also discussed is the uncooled IRFPA production capability at RSC.

  16. Characterization of a 2-mm thick, 16x16 Cadmium-Zinc-Telluride Pixel Array

    NASA Technical Reports Server (NTRS)

    Gaskin, Jessica; Richardson, Georgia; Mitchell, Shannon; Ramsey, Brian; Seller, Paul; Sharma, Dharma

    2003-01-01

    The detector under study is a 2-mm-thick, 16x16 Cadmium-Zinc-Telluride pixel array with a pixel pitch of 300 microns and inter-pixel gap of 50 microns. This detector is a precursor to that which will be used at the focal plane of the High Energy Replicated Optics (HERO) telescope currently being developed at Marshall Space Flight Center. With a telescope focal length of 6 meters, the detector needs to have a spatial resolution of around 200 microns in order to take full advantage of the HERO angular resolution. We discuss to what degree charge sharing will degrade energy resolution but will improve our spatial resolution through position interpolation. In addition, we discuss electric field modeling for this specific detector geometry and the role this mapping will play in terms of charge sharing and charge loss in the detector.

  17. Oil Motion Control by an Extra Pinning Structure in Electro-Fluidic Display.

    PubMed

    Dou, Yingying; Tang, Biao; Groenewold, Jan; Li, Fahong; Yue, Qiao; Zhou, Rui; Li, Hui; Shui, Lingling; Henzen, Alex; Zhou, Guofu

    2018-04-06

    Oil motion control is key to the optical performance of electro-fluidic displays (EFD). In this paper, we introduced an extra pinning structure (EPS) into the EFD pixel to control the oil motion inside for the first time. The pinning structure can be fabricated together with the pixel wall by a one-step lithography process. The effect of the relative location of the EPS in pixels on the oil motion was studied by a series of optoelectronic measurements. The EPS showed good control of the oil rupture position. A properly located EPS effectively guided the oil contraction direction, significantly accelerated the switching-on process, and suppressed oil overflow, without a decline in aperture ratio. An asymmetrically designed EPS off the diagonal is recommended. This study provides a novel and facile way to control oil motion within an EFD pixel in both direction and timescale.

  18. A 10MHz Fiber-Coupled Photodiode Imaging Array for Plasma Diagnostics

    NASA Astrophysics Data System (ADS)

    Brockington, Samuel; Case, Andrew; Witherspoon, F. Douglas

    2013-10-01

    HyperV Technologies has been developing an imaging diagnostic comprised of arrays of fast, low-cost, long-record-length, fiber-optically-coupled photodiode channels to investigate plasma dynamics and other fast, bright events. By coupling an imaging fiber bundle to a bank of amplified photodiode channels, imagers and streak imagers of 100 to 10,000 pixels can be constructed. By interfacing analog photodiode systems directly to commercial analog to digital convertors and modern memory chips, a prototype pixel with an extremely deep record length (128 k points at 40 Msamples/s) has been achieved for a 10 bit resolution system with signal bandwidths of at least 10 MHz. Progress on a prototype 100 Pixel streak camera employing this technique is discussed along with preliminary experimental results and plans for a 10,000 pixel imager. Work supported by USDOE Phase 1 SBIR Grant DE-SC0009492.

  19. Casting ability of selected impression materials tested in different conditions in an in vitro sulcus model.

    PubMed

    Kolbeck, Carola; Rosentritt, Martin; Lang, Reinhold; Schiller, Manuela; Handel, Gerhard

    2009-10-01

    To test the casting capacities of impression materials under dry and wet sulcular conditions in vitro. An incisor with a circular shoulder preparation (1 mm) was inserted in a primary mold. A shiftable secondary mold allowed adaptation of sulcular depth (1 to 4 mm). An outer circular chamfer assured reproducible positioning of an impression material carrier. Tested materials were PVS of differing viscosities (extra low, Panasil Contact Plus [ELV]; low, Affinis Light Body [LV]; and medium, Virtual Monophase [MV]) and one polyether material of low viscosity (Permadyne Garant [PE]). Impressions were made with sulcular depths of 1 to 4 mm in wet and 1 and 4 mm in dry conditions, cut in half, and digitized with a light microscope (Stemi SV8). The surface area of the region of interest (ROI, at the inner angle of the preparation) was determined with Optimas 6.2. Medians were calculated, and statistical analysis was performed using the Mann-Whitney U test (P ≤ .05). Median values of the measurements under wet conditions demonstrated the smallest ROI areas for the ELV (297-330 pixels) and MV (253-421 pixels) materials, followed by the LV (582-745 pixels) and the PE (544-823 pixels). All materials showed significantly higher values for the wet compared to the dry sulcular conditions. Repeated measurements showed no significant differences from the corresponding first series. The sulcus model is applicable to assess the casting abilities of impression materials in clinically approximated sulcular conditions. The PVS materials with extra low and medium viscosities showed the best properties in dry and wet conditions.

  20. First experiences with ARNICA, the ARCETRI observatory imaging camera

    NASA Astrophysics Data System (ADS)

    Lisi, F.; Baffa, C.; Hunt, L.; Maiolino, R.; Moriondo, G.; Stanga, R.

    1994-03-01

    ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near infrared bands between 1.0 and 2.5 micrometers that Arcetri Observatory has designed and built as a common use instrument for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1 arcsec per pixel, with sky coverage of more than 4 arcmin x 4 arcmin on the NICMOS 3 (256 x 256 pixels, 40 micrometer side) detector array. The optical path is compact enough to be enclosed in a 25.4 cm diameter dewar; the working temperature of detector and optics is 76 K. We give an estimate of performance, in terms of sensitivity with an assigned observing time, along with some preliminary considerations on photometric accuracy.

  1. The point-spread function of fiber-coupled area detectors

    PubMed Central

    Holton, James M.; Nielsen, Chris; Frankel, Kenneth A.

    2012-01-01

    The point-spread function (PSF) of a fiber-optic taper-coupled CCD area detector was measured over five decades of intensity using a 20 µm X-ray beam and ∼2000-fold averaging. The ‘tails’ of the PSF clearly revealed that it is neither Gaussian nor Lorentzian, but instead resembles the solid angle subtended by a pixel at a point source of light held a small distance (∼27 µm) above the pixel plane. This converges to an inverse cube law far from the beam impact point. Further analysis revealed that the tails are dominated by the fiber-optic taper, with negligible contribution from the phosphor, suggesting that the PSF of all fiber-coupled CCD-type detectors is best described as a Moffat function. PMID:23093762
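
    The solid-angle PSF model described above can be sketched directly; only the ~27 µm source height comes from the text, while the pixel area used below is an assumed value for illustration:

```python
import numpy as np

def psf_solid_angle(r, h=27e-6, pixel_area=(50e-6)**2):
    """Tail of the PSF modeled as the solid angle subtended by a pixel of area `pixel_area`
    at a point source a height `h` above the pixel plane. For a pixel at lateral distance r
    this is approximately A*h/(r^2 + h^2)^(3/2), which falls off as 1/r^3 far from the
    beam impact point. The pixel area is an assumed value, not taken from the paper."""
    return pixel_area * h / (r**2 + h**2) ** 1.5

r = np.logspace(-6, -3, 50)              # 1 um to 1 mm lateral distance
tail = psf_solid_angle(r)
print(tail[-1] / tail[-2])               # ratio approaches (r[-2]/r[-1])**3 far from the peak
```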

  2. Current efforts on developing an HWIL synthetic environment for LADAR sensor testing at AMRDEC

    NASA Astrophysics Data System (ADS)

    Kim, Hajin J.; Cornell, Michael C.; Naumann, Charles B.

    2005-05-01

    Efforts in developing a synthetic environment for testing LADAR sensors in a hardware-in-the-loop simulation are continuing at the Aviation and Missile Research, Engineering, and Development Center (AMRDEC) of the U.S. Army Research, Engineering and Development Command (RDECOM). Current activities have concentrated on developing the optical projection hardware portion of the synthetic environment. These activities range from system level design down to component level testing. Of particular interest have been schemes for generating the optical signals representing the individual pixels of the projection. Several approaches have been investigated and tested with emphasis on operating wavelength, intensity dynamic range and uniformity, and flexibility in pixel waveform generation. This paper will discuss some of the results from these current efforts at RDECOM's Advanced Simulation Center (ASC).

  3. Graphics processing unit accelerated intensity-based optical coherence tomography angiography using differential frames with real-time motion correction.

    PubMed

    Watanabe, Yuuki; Takahashi, Yuhei; Numazawa, Hiroshi

    2014-02-01

    We demonstrate intensity-based optical coherence tomography (OCT) angiography using the squared difference of two sequential frames with bulk-tissue-motion (BTM) correction. This motion correction was performed by minimization of the sum of the pixel values using axial- and lateral-pixel-shifted structural OCT images. We extract the BTM-corrected image from a total of 25 calculated OCT angiographic images. Image processing was accelerated by a graphics processing unit (GPU) with many stream processors to optimize the parallel processing procedure. The GPU processing rate was faster than that of a line scan camera (46.9 kHz). Our OCT system provides the means of displaying structural OCT images and BTM-corrected OCT angiographic images in real time.
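
    A CPU-side sketch of the squared-difference angiography step with a brute-force bulk-tissue-motion search is given below (the GPU implementation in the paper differs; the shift range and scoring here are illustrative assumptions):

```python
import numpy as np

def angiogram_with_btm(frame_a, frame_b, max_shift=2):
    """Hedged sketch: try small axial/lateral pixel shifts of one structural OCT frame
    (assumed float arrays), keep the shift that minimizes the summed squared difference
    (bulk-tissue-motion correction), and return the squared difference as flow contrast."""
    best, best_shift = None, (0, 0)
    for dz in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(frame_b, dz, axis=0), dx, axis=1)
            score = np.sum((frame_a - shifted) ** 2)
            if best is None or score < best:
                best, best_shift = score, (dz, dx)
    corrected = np.roll(np.roll(frame_b, best_shift[0], axis=0), best_shift[1], axis=1)
    return (frame_a - corrected) ** 2          # decorrelation (flow) map
```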

  4. OSIRIS Detectors

    NASA Astrophysics Data System (ADS)

    Joven, E.; Gigante, J.; Beigbeder, F.

    OSIRIS (Optical System for Imaging and Low-Resolution Integrated Spectroscopy) is an instrument designed to obtain images and low-resolution spectra of astronomical objects in the optical domain (from 365-1000 nm). It will be installed on Day One (2004) in the Nasmyth focus of the 10-m Spanish GTC Telescope. The mosaic is composed of two abuttable 2Kx4K CCDs to yield a total of 4Kx4K pixels, 15 μm/pixel, 0.125″ plate scale. The arrangement allows the linking of a classical ARC-GenII controller to a PMC frame-grabber, plugged into a VME-CPU card, where a RTOS (VxWorks from Wind River) is running. Some tests and results, carried out with a couple of MAT44-82 engineering grade devices at room temperature, are given.

  5. Pixelated gamma detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolinsky, Sergei Ivanovich; Yanoff, Brian David; Guida, Renato

    2016-12-27

    A pixelated gamma detector includes a scintillator column assembly having scintillator crystals and optical transparent elements alternating along a longitudinal axis, a collimator assembly having longitudinal walls separated by collimator septum, the collimator septum spaced apart to form collimator channels, the scintillator column assembly positioned adjacent to the collimator assembly so that the respective ones of the scintillator crystals are positioned adjacent to respective ones of the collimator channels, the respective ones of the optical transparent elements are positioned adjacent to respective ones of the collimator septum, and a first photosensor and a second photosensor, the first and the second photosensor each connected to an opposing end of the scintillator column assembly. A system and a method for inspecting and/or detecting defects in an interior of an object are also disclosed.

  6. Charge-sensitive front-end electronics with operational amplifiers for CdZnTe detectors

    NASA Astrophysics Data System (ADS)

    Födisch, P.; Berthel, M.; Lange, B.; Kirschke, T.; Enghardt, W.; Kaever, P.

    2016-09-01

    Cadmium zinc telluride (CdZnTe, CZT) radiation detectors are suitable for a variety of applications, due to their high spatial resolution and spectroscopic energy performance at room temperature. However, state-of-the-art detector systems require high-performance readout electronics. Though an application-specific integrated circuit (ASIC) is an adequate solution for the readout, the combined requirements of high dynamic range and high throughput are not met by any commercial circuit. Consequently, the present study develops analog front-end electronics with operational amplifiers for an 8×8 pixelated CZT detector. For this purpose, we modeled an electrical equivalent circuit of the CZT detector with the associated charge-sensitive amplifier (CSA). Based on a detailed network analysis, the circuit design is complemented by numerical values for various features such as ballistic deficit, charge-to-voltage gain, rise time, and noise level. Performance is verified using synthetic detector signals and a pixel detector. The experimental results with the pixel detector assembly and a 22Na radioactive source emphasize the depth dependence of the measured energy. After pulse processing with depth correction based on the fit of the weighting potential, the energy resolution is 2.2% (FWHM) for the 511 keV photopeak.

  7. Depth-resolved birefringence and differential optical axis orientation measurements with fiber-based polarization-sensitive optical coherence tomography.

    PubMed

    Guo, Shuguang; Zhang, Jun; Wang, Lei; Nelson, J Stuart; Chen, Zhongping

    2004-09-01

    Conventional polarization-sensitive optical coherence tomography (PS-OCT) can provide depth-resolved Stokes parameter measurements of light reflected from turbid media. A new algorithm that takes into account changes in the optical axis is introduced to provide depth-resolved birefringence and differential optical axis orientation images by use of fiber-based PS-OCT. Quaternion, a convenient mathematical tool, is used to represent an optical element and simplify the algorithm. Experimental results with beef tendon and rabbit tendon and muscle show that this technique has promising potential for imaging the birefringent structure of multiple-layer samples with varying optical axes.

  8. Inverse analysis of non-uniform temperature distributions using multispectral pyrometry

    NASA Astrophysics Data System (ADS)

    Fu, Tairan; Duan, Minghao; Tian, Jibin; Shi, Congling

    2016-05-01

    Optical diagnostics can be used to obtain sub-pixel temperature information in remote sensing. A multispectral pyrometry method was developed using multiple spectral radiation intensities to deduce the temperature area distribution in the measurement region. The method transforms a spot multispectral pyrometer with a fixed field of view into a pyrometer with enhanced spatial resolution that can give sub-pixel temperature information from a "one pixel" measurement region. A temperature area fraction function was defined to represent the spatial temperature distribution in the measurement region. The method is illustrated by simulations of a multispectral pyrometer with a spectral range of 8.0-13.0 μm measuring a non-isothermal region with a temperature range of 500-800 K in the spot pyrometer field of view. The inverse algorithm for the sub-pixel temperature distribution (temperature area fractions) in the "one pixel" verifies this multispectral pyrometry method. The results show that an improved Levenberg-Marquardt algorithm is effective for this ill-posed inverse problem with relative errors in the temperature area fractions of (-3%, 3%) for most of the temperatures. The analysis provides a valuable reference for the use of spot multispectral pyrometers for sub-pixel temperature distributions in remote sensing measurements.
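
    A simplified sketch of this inverse problem is shown below, posed as a non-negative least-squares fit of temperature area fractions to multispectral radiances (the paper uses an improved Levenberg-Marquardt algorithm; the gray-body assumption, wavelength grid, and temperature grid here are illustrative):

```python
import numpy as np
from scipy.optimize import nnls

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    """Blackbody spectral radiance at wavelength lam (m) and temperature T (K)."""
    return 2 * H * C**2 / lam**5 / (np.exp(H * C / (lam * KB * T)) - 1.0)

def temperature_area_fractions(lams, measured, T_grid):
    """Model the 'one pixel' radiance at each wavelength as a sum of blackbody radiances
    weighted by the unknown area fraction of each candidate temperature (emissivity = 1
    assumed), and solve for non-negative fractions."""
    A = np.array([[planck(lam, T) for T in T_grid] for lam in lams])
    fractions, _ = nnls(A, measured)
    return fractions / fractions.sum()        # normalize to a temperature-area distribution

# Hypothetical usage: a region half at 550 K and half at 750 K, observed at 8-13 um
lams = np.linspace(8e-6, 13e-6, 12)
truth = 0.5 * planck(lams, 550.0) + 0.5 * planck(lams, 750.0)
print(temperature_area_fractions(lams, truth, T_grid=np.arange(500, 801, 50)))
```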

  9. Precipitable water vapor and 212 GHz atmospheric optical depth correlation at El Leoncito site

    NASA Astrophysics Data System (ADS)

    Cassiano, Marta M.; Cornejo Espinoza, Deysi; Raulin, Jean-Pierre; Giménez de Castro, Carlos G.

    2018-03-01

    Time series of precipitable water vapor (PWV) and 212 GHz atmospheric optical depth were obtained in CASLEO (Complejo Astronómico El Leoncito), at El Leoncito site, Argentinean Andes, for the period of 2011-2013. The 212 GHz atmospheric optical depth data were derived from measurements by the Solar Submillimeter Telescope (SST) and the PWV data were obtained by the AERONET CASLEO station. The correlation between PWV and 212 GHz optical depth was analyzed for the whole period, when both parameters were simultaneously available. A very significant correlation was observed. Similar correlation was found when data were analyzed year by year. The results indicate that the correlation of PWV versus 212 GHz optical depth could be used as an indirect estimation method for PWV, when direct measurements are not available.
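
    The proposed indirect estimation can be illustrated with a trivial regression sketch (array contents are placeholders, not CASLEO data):

```python
import numpy as np

def pwv_from_tau(tau_212, pwv, tau_new):
    """Fit a simple linear relation PWV = a*tau + b to the coincident time series and apply
    it when only the 212 GHz optical depth is available; also return the correlation
    coefficient of the two series. Inputs are placeholder arrays for illustration."""
    a, b = np.polyfit(tau_212, pwv, 1)            # least-squares linear fit
    r = np.corrcoef(tau_212, pwv)[0, 1]           # correlation of the two series
    return a * tau_new + b, r
```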

  10. SU-C-206-01: Impact of Charge Sharing Effect On Sub-Pitch Resolution for CZT-Based Photon Counting CT Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, X; Cheng, Z; Deen, J

    Purpose: Photon counting CT is a new imaging technology that can provide tissue composition information such as calcium/iodine content quantification. Cadmium zinc telluride (CZT) is considered a good candidate for photon counting CT due to its relatively high atomic number and band gap. One potential challenge is the degradation of both spatial and energy resolution as fine electrode pitches (<50 µm) are deployed. We investigated the extent of the charge sharing effect as functions of gap width, bias voltage and depth-of-interaction (DOI). Methods: The initial electron cloud size and diffusion process were modeled analytically. The valid range of the charge sharing effect refers to the range over which both signals of adjacent electrodes are above the triggering threshold (10% of the amplitude of 60keV X-ray photons). The intensity ratios of output in three regions (I1/I2/I3: left pixel, gap area and right pixel) were calculated. With Gaussian white noise modeled (an SNR of 5 based upon preliminary experiments), the sub-pitch resolution as a function of the spatial position between two pixels was studied. Results: The valid range of charge sharing increases linearly with depth-of-interaction (DOI) but decreases with gap width and bias voltage. For a 1.5mm thick CZT detector (pitch: 50µm, bias: 400 V), the range increases from ∼90µm up to ∼110µm. Such an increase can be attributed to a longer travel distance and the associated electron cloud broadening. The achievable sub-pitch resolution is in the range of ∼10–30µm. Conclusion: The preliminary results demonstrate that sub-pixel spatial resolution can be achieved using the ratio of amplitudes of two neighboring pixels. Such a ratio may also be used to correct charge loss and help improve the energy resolution of a CZT detector. The impact of characteristic X-rays hitting adjacent pixels (i.e., multiple interactions) on charge sharing is currently being investigated.

  11. Enhanced optical clearing of skin in vivo and optical coherence tomography in-depth imaging

    NASA Astrophysics Data System (ADS)

    Wen, Xiang; Jacques, Steven L.; Tuchin, Valery V.; Zhu, Dan

    2012-06-01

    The strong optical scattering of skin tissue makes it very difficult for optical coherence tomography (OCT) to achieve deep imaging in skin. Significant optical clearing of in vivo rat skin sites was achieved within 15 min by topical application of the optical clearing agent PEG-400, a chemical enhancer (thiazone or propanediol), and physical massage. Only when all three components were applied together could a 15 min treatment achieve a threefold increase in the OCT reflectance from a 300 μm depth and a 31% enhancement in the imaging depth Z_threshold.

  12. APOLLO_NG - a probabilistic interpretation of the APOLLO legacy for AVHRR heritage channels

    NASA Astrophysics Data System (ADS)

    Klüser, L.; Killius, N.; Gesell, G.

    2015-10-01

    The cloud processing scheme APOLLO (AVHRR Processing scheme Over cLouds, Land and Ocean) has been in use for cloud detection and cloud property retrieval since the late 1980s. The physics of the APOLLO scheme still build the backbone of a range of cloud detection algorithms for AVHRR (Advanced Very High Resolution Radiometer) heritage instruments. The APOLLO_NG (APOLLO_NextGeneration) cloud processing scheme is a probabilistic interpretation of the original APOLLO method. It builds upon the physical principles that have served well in the original APOLLO scheme. Nevertheless, a couple of additional variables have been introduced in APOLLO_NG. Cloud detection is no longer performed as a binary yes/no decision based on these physical principles. It is rather expressed as cloud probability for each satellite pixel. Consequently, the outcome of the algorithm can be tuned from being sure to reliably identify clear pixels to conditions of reliably identifying definitely cloudy pixels, depending on the purpose. The probabilistic approach allows retrieving not only the cloud properties (optical depth, effective radius, cloud top temperature and cloud water path) but also their uncertainties. APOLLO_NG is designed as a standalone cloud retrieval method robust enough for operational near-realtime use and for application to large amounts of historical satellite data. The radiative transfer solution is approximated by the same two-stream approach which also had been used for the original APOLLO. This allows the algorithm to be applied to a wide range of sensors without the necessity of sensor-specific tuning. Moreover it allows for online calculation of the radiative transfer (i.e., within the retrieval algorithm) giving rise to a detailed probabilistic treatment of cloud variables. This study presents the algorithm for cloud detection and cloud property retrieval together with the physical principles from the APOLLO legacy it is based on. Furthermore a couple of example results from NOAA-18 are presented.

  13. Aerosol Climate Time Series Evaluation In ESA Aerosol_cci

    NASA Astrophysics Data System (ADS)

    Popp, T.; de Leeuw, G.; Pinnock, S.

    2015-12-01

    Within the ESA Climate Change Initiative (CCI), Aerosol_cci (2010 - 2017) conducts intensive work to improve algorithms for the retrieval of aerosol information from European sensors. By the end of 2015, full mission time series of 2 GCOS-required aerosol parameters are completely validated and released: Aerosol Optical Depth (AOD) from dual view ATSR-2 / AATSR radiometers (3 algorithms, 1995 - 2012), and stratospheric extinction profiles from the star occultation GOMOS spectrometer (2002 - 2012). Additionally, a 35-year multi-sensor time series of the qualitative Absorbing Aerosol Index (AAI) together with sensitivity information and an AAI model simulator is available. Complementary aerosol properties requested by GCOS are in a "round robin" phase, where various algorithms are inter-compared: fine mode AOD, mineral dust AOD (from the thermal IASI spectrometer), absorption information and aerosol layer height. As a quasi-reference for validation in a few selected regions with sparse ground-based observations, the multi-pixel GRASP algorithm for the POLDER instrument is used. Validation of first dataset versions (vs. AERONET, MAN) and inter-comparison to other satellite datasets (MODIS, MISR, SeaWIFS) proved the high quality of the available datasets, comparable to other satellite retrievals, and revealed needs for algorithm improvement (for example for higher AOD values) which were taken into account for a reprocessing. The datasets contain pixel level uncertainty estimates which are also validated. The paper will summarize and discuss the results of major reprocessing and validation conducted in 2015. The focus will be on the ATSR, GOMOS and IASI datasets. Validation of the pixel level uncertainties will be summarized and discussed, including unknown components and their potential usefulness and limitations. Opportunities for time series extension with successor instruments of the Sentinel family will be described and the complementarity of the different satellite aerosol products (e.g. dust vs. total AOD, ensembles from different algorithms for the same sensor) will be discussed.

  14. Aerosol Climate Time Series in ESA Aerosol_cci

    NASA Astrophysics Data System (ADS)

    Popp, Thomas; de Leeuw, Gerrit; Pinnock, Simon

    2016-04-01

    Within the ESA Climate Change Initiative (CCI), Aerosol_cci (2010 - 2017) conducts intensive work to improve algorithms for the retrieval of aerosol information from European sensors. Meanwhile, full mission time series of 2 GCOS-required aerosol parameters are completely validated and released: Aerosol Optical Depth (AOD) from dual view ATSR-2 / AATSR radiometers (3 algorithms, 1995 - 2012), and stratospheric extinction profiles from the star occultation GOMOS spectrometer (2002 - 2012). Additionally, a 35-year multi-sensor time series of the qualitative Absorbing Aerosol Index (AAI) together with sensitivity information and an AAI model simulator is available. Complementary aerosol properties requested by GCOS are in a "round robin" phase, where various algorithms are inter-compared: fine mode AOD, mineral dust AOD (from the thermal IASI spectrometer, but also from ATSR instruments and the POLDER sensor), absorption information and aerosol layer height. As a quasi-reference for validation in a few selected regions with sparse ground-based observations, the multi-pixel GRASP algorithm for the POLDER instrument is used. Validation of first dataset versions (vs. AERONET, MAN) and inter-comparison to other satellite datasets (MODIS, MISR, SeaWIFS) proved the high quality of the available datasets, comparable to other satellite retrievals, and revealed needs for algorithm improvement (for example for higher AOD values) which were taken into account for a reprocessing. The datasets contain pixel level uncertainty estimates which were also validated and improved in the reprocessing. For the three ATSR algorithms the use of an ensemble method was tested. The paper will summarize and discuss the status of dataset reprocessing and validation. The focus will be on the ATSR, GOMOS and IASI datasets. Validation of the pixel level uncertainties will be summarized and discussed, including unknown components and their potential usefulness and limitations. Opportunities for time series extension with successor instruments of the Sentinel family will be described and the complementarity of the different satellite aerosol products (e.g. dust vs. total AOD, ensembles from different algorithms for the same sensor) will be discussed.

  15. SU-D-BRC-07: System Design for a 3D Volumetric Scintillation Detector Using SCMOS Cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darne, C; Robertson, D; Alsanea, F

    2016-06-15

    Purpose: The purpose of this project is to build a volumetric scintillation detector for quantitative imaging of 3D dose distributions of proton beams accurately in near real-time. Methods: The liquid scintillator (LS) detector consists of a transparent acrylic tank (20×20×20 cm³) filled with a liquid scintillator that, when irradiated with protons, generates scintillation light. To track rapid spatial and dose variations in spot scanning proton beams we used three scientific-complementary metal-oxide semiconductor (sCMOS) imagers (2560×2160 pixels). The cameras collect optical signal from three orthogonal projections. To reduce the system footprint, two mirrors oriented at 45° to the tank surfaces redirect scintillation light to the cameras for capturing the top and right views. Selection of fixed focal length objective lenses for these cameras was based on their ability to provide a large depth of field (DoF) and the required field of view (FoV). Multiple cross-hairs imprinted on the tank surfaces allow for image corrections arising from camera perspective and refraction. Results: We determined that by setting the sCMOS to 16-bit dynamic range, truncating its FoV (1100×1100 pixels) to image the entire volume of the LS detector, and using a 5.6 msec integration time, the imaging rate can be ramped up to 88 frames per second (fps). A 20 mm focal length lens provides a 20 cm imaging DoF and 0.24 mm/pixel resolution. The master-slave camera configuration enables the slaves to initiate image acquisition instantly (within 2 µsec) after receiving a trigger signal. A computer with 128 GB RAM was used for spooling images from the cameras and can sustain a maximum recording time of 2 min per camera at 75 fps. Conclusion: The three sCMOS cameras are capable of high speed imaging. They can therefore be used for quick, high-resolution, and precise mapping of dose distributions from scanned spot proton beams in three dimensions.

  16. Miniature infrared hyperspectral imaging sensor for airborne applications

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Hinnrichs, Bradford; McCutchen, Earl

    2017-05-01

    Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera, both MWIR and LWIR, small enough to serve as a payload on miniature unmanned aerial vehicles. The optical system has been integrated into the cold-shield of the sensor, enabling the small size and weight of the sensor. This new and innovative approach to an infrared hyperspectral imaging spectrometer uses micro-optics and will be explained in this paper. The micro-optics are made up of an area array of diffractive optical elements where each element is tuned to image a different spectral region on a common focal plane array. The lenslet array is embedded in the cold-shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging with multiple spectral images collected and processed simultaneously each frame of the camera. This paper will present our opto-mechanical design approach, which results in an infrared hyperspectral imaging system that is small enough for a payload on a mini-UAV or commercial quadcopter. The diffractive optical elements used in the lenslet array are blazed gratings where each lenslet is tuned for a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold-shield to reduce the background signal normally associated with the optics. We have developed various systems using a different number of lenslets in the area array. The size of the focal plane and the diameter of the lenslet array determine the spatial resolution. A 2 x 2 lenslet array will image four different spectral images of the scene each frame and, when coupled with a 512 x 512 focal plane array, will give a spatial resolution of 256 x 256 pixels for each spectral image. Another system that we developed uses a 4 x 4 lenslet array on a 1024 x 1024 pixel focal plane array, which gives 16 spectral images of 256 x 256 pixel resolution each frame.

  17. Infrared hyperspectral imaging miniaturized for UAV applications

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Hinnrichs, Bradford; McCutchen, Earl

    2017-02-01

    Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera, both MWIR and LWIR, small enough to serve as a payload on a miniature unmanned aerial vehicle. The optical system has been integrated into the cold-shield of the sensor, enabling the small size and weight of the sensor. This new and innovative approach to an infrared hyperspectral imaging spectrometer uses micro-optics and is explained in this paper. The micro-optics consist of an area array of diffractive optical elements, where each element is tuned to image a different spectral region onto a common focal plane array. The lenslet array is embedded in the cold-shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging, with multiple spectral images collected and processed simultaneously in each frame of the camera. This paper presents our optical-mechanical design approach, which results in an infrared hyperspectral imaging system small enough to serve as a payload on a mini-UAV or commercial quadcopter. We also show an example of how this technology can be used to quantify the volume and mass flow rate of a hydrocarbon gas leak. The diffractive optical elements used in the lenslet array are blazed gratings, where each lenslet is tuned to a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold-shield to reduce the background signal normally associated with the optics. We have developed systems using different numbers of lenslets in the area array; the size of the focal plane and the dimensions of the lenslet array determine the spatial resolution. A 2 x 2 lenslet array images four spectral images of the scene each frame and, when coupled with a 512 x 512 focal plane array, gives a spatial resolution of 256 x 256 pixels for each spectral image. Another system that we developed uses a 4 x 4 lenslet array on a 1024 x 1024 pixel focal plane array, which gives 16 spectral images of 256 x 256 pixel resolution each frame.

  18. Remote sensing depth invariant index parameters in shallow benthic habitats for bottom type classification.

    NASA Astrophysics Data System (ADS)

    Gapper, J.; El-Askary, H. M.; Linstead, E.

    2017-12-01

    Ground cover prediction of benthic habitats using remote sensing imagery requires substantial feature engineering. Artifacts that confound the ground cover characteristics must be severely reduced or eliminated while the distinguishing features are exposed. In particular, the impact of wavelength attenuation in the water column means that a machine learning algorithm will primarily detect depth. However, the per-pixel depths are difficult to know at a grand scale. Previous research has taken an in situ approach, applying a depth-invariant index to a small area of interest within a Landsat 8 scene. We aim to generalize this process for application to an entire Landsat scene, as well as to other locations, in order to study change detection in shallow benthic zones on a global scale. We have developed a methodology and applied it to more than 25 different Landsat 8 scenes. The images were first preprocessed to mask land, clouds, and other distortions; atmospheric correction via dark-pixel subtraction was then applied. Finally, depth-invariant indices were calculated for each location and the associated parameters recorded. Findings showed how robust the resulting parameters (deep-water radiance, depth-invariant constant, band radiance variance/covariance, and ratio of attenuation) were across all scenes. We then created false color composite images of the depth-invariant indices for each location. We noted several artifacts within some sites in the form of patterns or striations that did not appear to be aligned with variations in subsurface ground cover types. Further research into depth surveys for these sites revealed depths consistent with one or more wavelengths fully attenuating. This result showed that our model framework generalizes well but is limited to the penetration depths allowed by wavelength attenuation. Finally, we compared the parameters associated with the depth-invariant calculation, which were consistent across most scenes, and explained any outliers observed. We concluded that the depth-invariant index framework can be deployed on a large scale for ground cover detection in shallow waters (less than 16.8 m, or 5.2 m for three DII measurements).
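
    A minimal sketch (not the authors' code) of the standard Lyzenga-style depth-invariant index computation referenced above, in which the attenuation-coefficient ratio is estimated from the variance/covariance of a uniform bottom type sampled over varying depth; the array names, the clipping floor, and the sampling of "sand" pixels are assumptions.

      import numpy as np

      def depth_invariant_index(band_i, band_j, deepwater_i, deepwater_j, sand_i, sand_j):
          """Lyzenga-style depth-invariant index for a band pair.

          band_i, band_j : atmospherically corrected radiance images
          deepwater_i/_j : mean deep-water signal per band (dark-pixel estimate)
          sand_i, sand_j : 1-D samples of a uniform bottom type at varying depth,
                           used to estimate the attenuation-coefficient ratio k_i/k_j
          """
          # Linearize: water-column attenuation is approximately exponential in depth
          xi = np.log(np.clip(sand_i - deepwater_i, 1e-6, None))
          xj = np.log(np.clip(sand_j - deepwater_j, 1e-6, None))

          # Attenuation ratio from the variance/covariance of the linearized sample
          a = (np.var(xi) - np.var(xj)) / (2.0 * np.cov(xi, xj)[0, 1])
          k_ratio = a + np.sqrt(a * a + 1.0)

          # Depth-invariant index image for the band pair
          li = np.log(np.clip(band_i - deepwater_i, 1e-6, None))
          lj = np.log(np.clip(band_j - deepwater_j, 1e-6, None))
          return li - k_ratio * lj, k_ratio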

  19. Mueller matrix signature in advanced fluorescence microscopy imaging

    NASA Astrophysics Data System (ADS)

    Mazumder, Nirmal; Qiu, Jianjun; Kao, Fu-Jen; Diaspro, Alberto

    2017-02-01

    We have demonstrated the measurement and characterization of the polarization properties of a fluorescence signal using four-channel, photon-counting-based Stokes-Mueller polarization microscopy. Lu-Chipman decomposition was then applied to extract the critical polarization properties, such as depolarization, linear retardance, and optical rotation, of type I collagen fibers. We observed the spatial distribution of anisotropic and helical collagen molecules from 2D Mueller images reconstructed pixel by pixel from the fluorescence signal.
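
    As a hedged, generic illustration (not the authors' four-channel pipeline), the sketch below shows how a Mueller matrix can be recovered from Stokes measurements of four independent probe states and summarized by the Gil-Bernabeu depolarization index; the probe-state choice is an assumption.

      import numpy as np

      def mueller_from_stokes(s_in, s_out):
          """Recover the 4x4 Mueller matrix M from S_out = M @ S_in.

          s_in  : 4x4 matrix whose columns are four independent input Stokes vectors
          s_out : 4x4 matrix whose columns are the corresponding measured output vectors
          """
          return s_out @ np.linalg.inv(s_in)

      def depolarization_index(m):
          """Gil-Bernabeu depolarization index: 1 for non-depolarizing, 0 for an ideal depolarizer."""
          return np.sqrt((np.sum(m ** 2) - m[0, 0] ** 2) / (3.0 * m[0, 0] ** 2))

      # Probe states as columns: horizontal, vertical, +45 deg linear, right circular
      s_in = np.array([[1.0,  1.0, 1.0, 1.0],
                       [1.0, -1.0, 0.0, 0.0],
                       [0.0,  0.0, 1.0, 0.0],
                       [0.0,  0.0, 0.0, 1.0]])
      print(depolarization_index(mueller_from_stokes(s_in, s_in)))  # identity sample -> 1.0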

  20. Active pixel sensor array with electronic shuttering

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor)

    2002-01-01

    An active pixel cell includes an electronic shuttering capability. The cell can be shuttered to prevent additional charge accumulation. One mode transfers the current charge to a storage node that is blocked against accumulation of optical radiation. The charge is sampled from a floating node. Since the charge is stored, the node can be sampled at the beginning and the end of every cycle. Another aspect allows charge to spill out of the well whenever the charge amount exceeds some level, thereby providing anti-blooming.

  1. VizieR Online Data Catalog: BVR light curves of UZ Leo (Lee+, 2018)

    NASA Astrophysics Data System (ADS)

    Lee, J. W.; Park, J.-H.

    2018-04-01

    We performed new CCD photometry of UZ Leo during two observing seasons between 2012 February and 2013 April, using a PIXIS: 2048B CCD and a BVR filter set attached to the 61 cm reflector at Sobaeksan Optical Astronomy Observatory (SOAO) in Korea. The CCD chip has 2048 x 2048 pixels and a pixel size of 13.5 µm, so the field of view of a CCD frame is 17.6' x 17.6'. (1 data file).

  2. Restoring the spatial resolution of refocus images on 4D light field

    NASA Astrophysics Data System (ADS)

    Lim, JaeGuyn; Park, ByungKwan; Kang, JooYoung; Lee, SeongDeok

    2010-01-01

    This paper presents a method for generating a refocus image with restored spatial resolution on a plenoptic camera, which, unlike a traditional camera, allows the depth of field to be controlled after a single image is captured. It is generally known that such a camera captures the 4D light field (angular and spatial information of light) within a limited 2D sensor, which reduces the 2D spatial resolution because angular data must also be recorded. This is why a refocus image has a lower spatial resolution than the 2D sensor. However, it has recently been shown that the angular data contain sub-pixel spatial information, so the spatial resolution of the 4D light field can be increased. We exploit this fact to improve the spatial resolution of a refocus image. We have experimentally verified that this spatial information differs according to the depth of objects from the camera. Thus, for the selected refocused regions (and their corresponding depths), we use the corresponding pre-estimated sub-pixel spatial information to reconstruct the spatial resolution of those regions, while the other regions remain out of focus. Our experimental results show the effect of the proposed method compared to the existing method.
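
    For context, the baseline shift-and-add synthetic refocus that the abstract builds on can be sketched as follows (this is the conventional operation, not the paper's sub-pixel resolution-restoration step); the light-field indexing convention and the use of scipy interpolation are assumptions.

      import numpy as np
      from scipy.ndimage import shift as nd_shift

      def refocus(light_field, alpha):
          """Shift-and-add refocus of a 4-D light field L[u, v, y, x].

          light_field : array of sub-aperture images indexed by angular coordinates (u, v)
          alpha       : refocus parameter (ratio of the virtual to the original focal plane)
          Each view is translated in proportion to its angular offset, then all views are averaged.
          """
          U, V, H, W = light_field.shape
          uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
          out = np.zeros((H, W))
          for u in range(U):
              for v in range(V):
                  dy = (u - uc) * (1.0 - 1.0 / alpha)
                  dx = (v - vc) * (1.0 - 1.0 / alpha)
                  out += nd_shift(light_field[u, v], (dy, dx), order=1, mode="nearest")
          return out / (U * V)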

  3. Direct measurement and calibration of the Kepler CCD Pixel Response Function for improved photometry and astrometry

    NASA Astrophysics Data System (ADS)

    Ninkov, Zoran

    Stellar images taken with telescopes and detectors in space are usually undersampled, and to correct for this, an accurate pixel response function is required. The standard approach for HST and KEPLER has been to measure the telescope PSF combined ("convolved") with the actual pixel response function, super-sampled by taking into account dithered or offset observed images of many stars (Lauer [1999]). This combined response function has been called the "PRF" (Bryson et al. [2011]). However, using such results has not allowed astrometry from KEPLER to reach its full potential (Monet et al. [2010], [2014]). Given the precision of KEPLER photometry, it should be feasible to use a pre-determined detector pixel response function (PRF) and an optical point spread function (PSF) as separable quantities to more accurately correct photometry and astrometry for undersampling. Wavelength (i.e. stellar color) and instrumental temperature should affect each of these differently. Discussion of the PRF in the "KEPLER Instrument Handbook" is limited to an ad-hoc extension of earlier measurements on a quite different CCD. It is known that the KEPLER PSF typically has a sharp spike in the middle, and the main bulk of the PSF is still small enough to be undersampled, so that any substructure within a pixel may interact significantly with the optical PSF. Both the PSF and PRF are probably asymmetric. We propose to measure the PRF for an example of the CCD sensors used on KEPLER at sufficient sampling resolution to allow significant improvement of KEPLER photometry and astrometry, in particular allowing PSF-fitting techniques to be used on the data archive.

  4. Automatic determination of the artery vein ratio in retinal images

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; van Ginneken, Bram; Abràmoff, Michael D.

    2010-03-01

    A lower ratio between the width of the arteries and veins (Arteriolar-to-Venular diameter Ratio, AVR) on the retina is well established as predictive of stroke and other cardiovascular events in adults, as well as of an increased risk of retinopathy of prematurity in premature infants. This work presents an automatic method that detects the location of the optic disc, determines the appropriate region of interest (ROI), classifies the vessels in the ROI into arteries and veins, measures their widths, and calculates the AVR. After vessel segmentation and vessel width determination, the optic disc is located and the system eliminates all vessels outside the AVR measurement ROI. The remaining vessels are thinned, and vessel crossing and bifurcation points are removed, leaving a set of vessel segments containing centerline pixels. Features extracted from each centerline pixel are used to assign it a soft label indicating the likelihood that the pixel is part of a vein. As all centerline pixels in a connected segment should be of the same type, the median soft label is assigned to each centerline pixel in the segment. Next, artery-vein pairs are matched using an iterative algorithm, and the widths of the vessels are used to calculate the AVR. We train and test the algorithm using a set of 25 high-resolution digital color fundus photographs and a reference standard that indicates, for the major vessels in the images, whether they are an artery or a vein. We compared the AVR values produced by our system with those determined using a computer-assisted method in 15 high-resolution digital color fundus photographs and obtained a correlation coefficient of 0.881.
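
    As a simplified, hedged stand-in for the final AVR step (published systems typically use summary formulas such as Parr-Hubbard/Knudtson rather than a plain ratio), the sketch below computes a ratio of summary artery and vein calibres from matched-pair width measurements; the function name and sample values are illustrative only.

      import numpy as np

      def arteriolar_to_venular_ratio(artery_widths, vein_widths):
          """Simplified AVR: ratio of the summary artery calibre to the summary vein calibre
          for matched artery/vein pairs measured inside the ROI (widths in pixels or microns)."""
          return np.median(np.asarray(artery_widths, float)) / np.median(np.asarray(vein_widths, float))

      print(arteriolar_to_venular_ratio([92, 88, 95, 90], [130, 128, 135, 132]))  # ~0.69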

  5. Development of Extended-Depth Swept Source Optical Coherence Tomography for Applications in Ophthalmic Imaging of the Anterior and Posterior Eye

    NASA Astrophysics Data System (ADS)

    Dhalla, Al-Hafeez Zahir

    Optical coherence tomography (OCT) is a non-invasive optical imaging modality that provides micron-scale resolution of tissue micro-structure over depth ranges of several millimeters. This imaging technique has had a profound effect on the field of ophthalmology, wherein it has become the standard of care for the diagnosis of many retinal pathologies. Applications of OCT in the anterior eye, as well as for imaging of coronary arteries and the gastro-intestinal tract, have also shown promise, but have not yet achieved widespread clinical use. The usable imaging depth of OCT systems is most often limited by one of three factors: optical attenuation, inherent imaging range, or depth-of-focus. The first of these, optical attenuation, stems from the limitation that OCT only detects singly-scattered light. Thus, beyond a certain penetration depth into turbid media, essentially all of the incident light will have been multiply scattered, and can no longer be used for OCT imaging. For many applications (especially retinal imaging), optical attenuation is the most restrictive of the three imaging depth limitations. However, for some applications, especially anterior segment, cardiovascular (catheter-based) and GI (endoscopic) imaging, the usable imaging depth is often not limited by optical attenuation, but rather by the inherent imaging depth of the OCT systems. This inherent imaging depth, which is specific to Fourier Domain OCT, arises from two factors: sensitivity fall-off and the complex conjugate ambiguity. Finally, due to the trade-off between lateral resolution and axial depth-of-focus inherent in diffractive optical systems, additional depth limitations sometimes arise in either high lateral resolution or extended-depth OCT imaging systems. The depth-of-focus limitation is most apparent in applications such as adaptive optics (AO-) OCT imaging of the retina and extended-depth imaging of the ocular anterior segment. In this dissertation, techniques for extending the imaging range of OCT systems are developed. These techniques include the use of a high spectral purity swept source laser in a full-field OCT system, as well as the use of a peculiar phenomenon known as coherence revival to resolve the complex conjugate ambiguity in swept source OCT. In addition, a technique for extending the depth of focus of OCT systems by using a polarization-encoded, dual-focus sample arm is demonstrated. Along the way, other related advances are also presented, including the development of techniques to reduce crosstalk and speckle artifacts in full-field OCT, and the use of fast optical switches to increase the imaging speed of certain low-duty-cycle swept source OCT systems. Finally, the clinical utility of these techniques is demonstrated by combining them to achieve high-speed, high-resolution, extended-depth imaging of both the anterior and posterior eye simultaneously and in vivo.

  6. Overview of 3D-TRACE, a NASA Initiative in Three-Dimensional Tomography of the Aerosol-Cloud Environment

    NASA Astrophysics Data System (ADS)

    Davis, Anthony; Diner, David; Yanovsky, Igor; Garay, Michael; Xu, Feng; Bal, Guillaume; Schechner, Yoav; Aides, Amit; Qu, Zheng; Emde, Claudia

    2013-04-01

    Remote sensing is a key tool for sorting cloud ensembles by dynamical state, sorting aerosol environments by source region, and establishing causal relationships between aerosol amounts, type, and cloud microphysics, the so-called indirect aerosol climate impacts, which are one of the main sources of uncertainty in current climate models. Current satellite imagers use data processing approaches that invariably start with cloud detection/masking to isolate aerosol air-masses from clouds, and then rely on one-dimensional (1D) radiative transfer (RT) to interpret the aerosol and cloud measurements in isolation. Not only does this lead to well-documented biases in the estimates of aerosol radiative forcing and cloud optical depths in current missions, but it is fundamentally inadequate for future missions such as EarthCARE, where capturing the complex, three-dimensional (3D) interactions between clouds and aerosols is a primary objective. In order to advance the state of the art, the next generation of satellite information processing systems must incorporate technologies that will enable the treatment of the atmosphere as a fully 3D environment, represented more realistically as a continuum. At one end, there is an optically thin background dominated by aerosols and molecular scattering that is strongly stratified and relatively homogeneous in the horizontal. At the other end, there are optically thick embedded elements, clouds and aerosol plumes, which can be more or less uniform and quasi-planar or else highly 3D with boundaries in all directions; in both cases, strong internal variability may be present. To make this paradigm shift possible, we propose to combine the standard models for satellite signal prediction physically grounded in 1D and 3D RT, both scalar and vector, with technologies adapted from biomedical imaging, digital image processing, and computer vision. This will enable us to demonstrate how the 3D distribution of atmospheric constituents, and their associated microphysical properties, can be reconstructed from multi-angle/multi-spectral imaging radiometry and, increasingly, polarimetry. Specific technologies of interest are computed tomography (reconstruction from projections), optical tomography (using cross-pixel radiation transport in the diffusion limit), stereoscopy (depth/height retrievals), blind source and scale separation (signal unmixing), and disocclusion (information recovery in the presence of obstructions). Later on, these potentially powerful inverse problem solutions will be fully integrated in a versatile satellite data analysis toolbox. At present, we can report substantial progress at the component level. Specifically, we will focus on the most elementary problems in atmospheric tomography, with an emphasis on the vastly under-exploited class of multi-pixel techniques. One basic problem is to infer the outer shape and mean opacity of 3D clouds, along with a bulk measure of cloud particle size. Another is to separate high and low cloud layers based on their characteristically different spatial textures. Yet another is to reconstruct the 3D spatial distribution of aerosol density based on passive imaging. This suite of independent feasibility studies amounts to a compelling proof-of-concept for the ambitious 3D-Tomographic Reconstruction of the Aerosol-Cloud Environment (3D-TRACE) project as a whole.

  7. Time-of-Travel Methods for Measuring Optical Flow on Board a Micro Flying Robot

    PubMed Central

    Vanhoutte, Erik; Mafrica, Stefano; Ruffier, Franck; Bootsma, Reinoud J.; Serres, Julien

    2017-01-01

    For use in autonomous micro air vehicles, visual sensors must not only be small, lightweight and insensitive to light variations; on-board autopilots also require fast and accurate optical flow measurements over a wide range of speeds. Using an auto-adaptive bio-inspired Michaelis–Menten Auto-adaptive Pixel (M2APix) analog silicon retina, in this article, we present comparative tests of two optical flow calculation algorithms operating under lighting conditions from 6×10⁻⁷ to 1.6×10⁻² W·cm⁻² (i.e., from 0.2 to 12,000 lux for human vision). Contrast “time of travel” between two adjacent light-sensitive pixels was determined by thresholding and by cross-correlating the two pixels’ signals, with measurement frequency up to 5 kHz for the 10 local motion sensors of the M2APix sensor. While both algorithms adequately measured optical flow between 25 °/s and 1000 °/s, thresholding gave rise to a lower precision, especially due to a larger number of outliers at higher speeds. Compared to thresholding, cross-correlation also allowed for a higher rate of optical flow output (99 Hz and 1195 Hz, respectively) but required substantially more computational resources. PMID:28287484
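
    A minimal sketch (not the authors' embedded implementation) of the cross-correlation variant of the "time of travel" scheme: the lag that maximizes the correlation between two adjacent pixel signals gives the travel time, and the known inter-pixel viewing angle divided by that time gives the local angular optical flow. The sampling rate and inter-pixel angle are assumed inputs.

      import numpy as np

      def time_of_travel_optical_flow(sig_a, sig_b, fs_hz, inter_pixel_angle_deg):
          """Angular optical flow (deg/s) from two neighbouring photoreceptor signals.

          sig_a, sig_b          : equally sampled signals from two adjacent pixels
          fs_hz                 : sampling frequency (the abstract quotes up to 5 kHz)
          inter_pixel_angle_deg : angular separation between the two pixels' lines of sight
          A positive result means the contrast moved from pixel a towards pixel b.
          """
          a = np.asarray(sig_a, float)
          b = np.asarray(sig_b, float)
          a = a - a.mean()
          b = b - b.mean()
          corr = np.correlate(b, a, mode="full")
          lag = int(np.argmax(corr)) - (len(a) - 1)   # samples by which b lags a
          if lag == 0:
              return float("nan")                     # delay below the temporal resolution
          return inter_pixel_angle_deg * fs_hz / lag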

  8. Optical design of microlens array for CMOS image sensors

    NASA Astrophysics Data System (ADS)

    Zhang, Rongzhu; Lai, Liping

    2016-10-01

    The optical crosstalk between pixel units can degrade the image quality of a CMOS image sensor. At the same time, the fill factor of CMOS is low because of its pixel structure. These two factors cause the low detection sensitivity of CMOS sensors. In order to reduce the optical crosstalk and improve the fill factor of a CMOS image sensor, a microlens array has been designed and integrated with the CMOS. The initial parameters of the microlens array were calculated according to the structure of a CMOS sensor. The parameters were then optimized using ZEMAX, and microlens arrays with different substrate thicknesses were compared. The results show that, to obtain the best imaging quality with the minimum optical crosstalk, the best distance between the microlens array and the CMOS is about 19.3 μm. When incident light passes through the microlens array and this gap, the minimum spot obtained in the active area is around 0.347 μm. In addition, for incidence angles between 0° and 22°, the microlens array has an obvious inhibitory effect on the optical crosstalk, and the anti-crosstalk distance between the microlens array and the CMOS ranges from 0 μm to 162 μm.

  9. Development and Optical Testing of the Camera, Hand Lens, and Microscope Probe with Scannable Laser Spectroscopy (CHAMP-SLS)

    NASA Technical Reports Server (NTRS)

    Mungas, Greg S.; Gursel, Yekta; Sepulveda, Cesar A.; Anderson, Mark; La Baw, Clayton; Johnson, Kenneth R.; Deans, Matthew; Beegle, Luther; Boynton, John

    2008-01-01

    Conducting high-resolution field microscopy with coupled laser spectroscopy that can be used to selectively analyze the surface chemistry of individual pixels in a scene is an enabling capability for next-generation robotic and manned spaceflight missions as well as civil and military applications. In the laboratory, we use a range of imaging and surface preparation tools that provide us with in-focus images, context imaging for identifying features that we want to investigate at high magnification, and surface-optical coupling that allows us to apply optical spectroscopic analysis techniques for analyzing surface chemistry, particularly at high magnifications. The camera, hand lens, and microscope probe with scannable laser spectroscopy (CHAMP-SLS) is an imaging/spectroscopy instrument capable of imaging continuously from infinity down to high-resolution microscopy (resolution of approx. 1 micron/pixel in the final camera format); the closer CHAMP-SLS is placed to a feature, the higher the resultant magnification. At hand-lens to microscopic magnifications, the imaged scene can be selectively interrogated with point spectroscopic techniques such as Raman spectroscopy, microscopic laser-induced breakdown spectroscopy (micro-LIBS), laser ablation mass spectrometry, fluorescence spectroscopy, and/or reflectance spectroscopy. This paper summarizes the optical design, development, and testing of the CHAMP-SLS optics.

  10. Development and application of variable-magnification x-ray Bragg optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirano, Keiichi, E-mail: keiichi.hirano@kek.jp; Takahashi, Yumiko; Sugiyama, Hiroshi

    2016-07-27

    A novel x-ray Bragg optical system was developed for variable magnification of an x-ray beam and was combined with a module of the PILATUS pixel detector. A feasibility test of this optical system was carried out at the vertical-wiggler beamline BL-14B of the Photon Factory. By tuning the magnification factor, we could successfully control the spatial resolution of the optical system between 28 μm and 280 μm. X-ray absorption-contrast images of a leaf were observed at various magnification factors.

  11. Broadband optical equalizer using fault tolerant digital micromirrors.

    PubMed

    Riza, Nabeel; Mughal, M Junaid

    2003-06-30

    For the first time, the design and demonstration of a near-continuous spectral processing mode broadband equalizer is described, using the earlier proposed macro-pixel spatial approach for multiwavelength fiber-optic attenuation in combination with a high spectral resolution broadband transmissive volume Bragg grating. The demonstrated design features low loss and low polarization-dependent loss with broadband operation. Such an analog-mode spectral processor can impact optical applications ranging from test and instrumentation to dynamic all-optical networks.

  12. Temporal variations in atmospheric water vapor and aerosol optical depth determined by remote sensing

    NASA Technical Reports Server (NTRS)

    Pitts, D. E.; Mcallum, W. E.; Heidt, M.; Jeske, K.; Lee, J. T.; Demonbrun, D.; Morgan, A.; Potter, J.

    1977-01-01

    By automatically tracking the sun, a four-channel solar radiometer was used to continuously measure optical depth and atmospheric water vapor. The design of this simple autotracking solar radiometer is presented. A technique for calculating the precipitable water from the ratio of a water band to a nearby nonabsorbing band is discussed. Studies of the temporal variability of precipitable water and atmospheric optical depth at 0.610, 0.8730 and 1.04 microns are presented. There was good correlation between the optical depth measured using the autotracker and visibility determined from National Weather Service Station data. However, much more temporal structure was evident in the autotracker data than in the visibility data. Cirrus clouds caused large changes in optical depth over short time periods. They appear to be the largest deleterious atmospheric effect over agricultural areas that are remote from urban pollution sources.
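
    A hedged sketch of the band-ratio retrieval mentioned above: the water-vapor band transmittance is taken as the ratio of the water channel to the nearby non-absorbing channel (each normalized by its top-of-atmosphere calibration constant), and a power-law band-absorption model is inverted for the water amount. The coefficients a and b are placeholders that would need to be calibrated for the actual filter set; they are not values from the paper.

      import numpy as np

      def precipitable_water_cm(v_water, v_window, v0_water, v0_window, airmass, a=0.6, b=0.6):
          """Precipitable water from a water-band / window-band signal ratio.

          Assumes a band-transmittance model T_wv = exp(-a * (m * w)**b), with m the
          airmass and w the precipitable water; a and b are placeholder calibration constants.
          """
          t_wv = (v_water / v0_water) / (v_window / v0_window)   # water-vapor transmittance
          slant_water = (-np.log(t_wv) / a) ** (1.0 / b)          # slant water path m * w
          return slant_water / airmass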

  13. Benthic Habitat Mapping by Combining Lyzenga’s Optical Model and Relative Water Depth Model in Lintea Island, Southeast Sulawesi

    NASA Astrophysics Data System (ADS)

    Hafizt, M.; Manessa, M. D. M.; Adi, N. S.; Prayudha, B.

    2017-12-01

    Benthic habitat mapping using satellite data is a challenging task for practitioners and academics, as benthic objects are covered by a light-attenuating water column that obscures object discrimination. One common method to reduce this water-column effect is to use a depth-invariant index (DII) image. However, applying the correction in shallow coastal areas is challenging, as a dark object such as seagrass can have a very low pixel value, preventing its reliable identification and classification. This limitation can be addressed by applying the classification process separately to areas with different water depth levels. The water depth level can be extracted from satellite imagery using a Relative Water Depth Index (RWDI). This study proposes a new approach to improve mapping accuracy, particularly for dark benthic objects, by combining the DII of Lyzenga’s water-column correction method with the RWDI of Stumpf’s method. The research was conducted at Lintea Island, which has a high variation of benthic cover, using Sentinel-2A imagery. To assess the effectiveness of the proposed approach for benthic habitat mapping, two different classification procedures were implemented. The first procedure is the method commonly applied in benthic habitat mapping, in which the DII image is used as input for classifying the entire coastal area regardless of depth variation. The second procedure is the proposed new approach, which begins by separating the study area into shallow and deep waters using the RWDI image. The shallow area was then classified using the sunglint-corrected image as input, and the deep area was classified using the DII image as input. The final classification maps of the two areas were merged into a single benthic habitat map. A confusion matrix was then applied to evaluate the mapping accuracy of the final map. The results show that the proposed mapping approach can be used to map all benthic objects across all depth ranges and yields better accuracy than the classification map produced with the DII alone.
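
    A minimal sketch of a Stumpf-style log-ratio relative water depth index and the shallow/deep split described above (this is not the authors' processing chain); the band choice, the scaling constant n, and the split threshold are assumptions.

      import numpy as np

      def relative_water_depth_index(blue, green, n=1000.0):
          """Log-ratio relative depth index from two sunglint-corrected reflectance bands.

          Larger values correspond to deeper water; calibration against soundings
          (depth = m1 * index - m0) would be needed for absolute depths.
          """
          return (np.log(n * np.clip(blue, 1e-6, None)) /
                  np.log(n * np.clip(green, 1e-6, None)))

      def split_shallow_deep(rwdi, threshold):
          """Boolean masks implementing the shallow/deep separation step."""
          shallow = rwdi < threshold
          return shallow, ~shallow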

  14. Infrared Cloud Imager Development for Atmospheric Optical Communication Characterization, and Measurements at the JPL Table Mountain Facility

    NASA Astrophysics Data System (ADS)

    Nugent, P. W.; Shaw, J. A.; Piazzolla, S.

    2013-02-01

    The continuous demand for high data return in deep space and near-Earth satellite missions has led NASA and international institutions to consider alternative technologies for high-data-rate communications. One solution is the establishment of wide-bandwidth Earth-space optical communication links, which require (among other things) a nearly obstruction-free atmospheric path. Considering the atmospheric channel, the most common and most apparent impairments on Earth-space optical communication paths arise from clouds. Therefore, the characterization of the statistical behavior of cloud coverage for optical communication ground station candidate sites is of vital importance. In this article, we describe the development and deployment of a ground-based, long-wavelength infrared cloud imaging system able to monitor and characterize the cloud coverage. This system is based on a commercially available camera with a 62-deg diagonal field of view. A novel internal-shutter-based calibration technique allows radiometric calibration of the camera, which operates without a thermoelectric cooler. This cloud imaging system provides continuous day-night cloud detection with constant sensitivity. The cloud imaging system also includes data-processing algorithms that calculate and remove atmospheric emission to isolate cloud signatures, and enable classification of clouds according to their optical attenuation. Measurements of long-wavelength infrared cloud radiance are used to retrieve the optical attenuation (cloud optical depth due to absorption and scattering) in the wavelength range of interest from visible to near-infrared, where the cloud attenuation is quite constant. This article addresses the specifics of the operation, calibration, and data processing of the imaging system that was deployed at the NASA/JPL Table Mountain Facility (TMF) in California. Data are reported from July 2008 to July 2010. These data describe seasonal variability in cloud cover at the TMF site, with cloud amount (percentage of cloudy pixels) peaking at just over 51 percent during February, of which more than 60 percent had optical attenuation exceeding 12 dB at wavelengths in the range from the visible to the near-infrared. The lowest cloud amount was found during August, averaging 19.6 percent, and these clouds were mostly optically thin, with low attenuation.

  15. Cloud Optical Depth Measured with Ground-Based, Uncooled Infrared Imagers

    NASA Technical Reports Server (NTRS)

    Shaw, Joseph A.; Nugent, Paul W.; Pust, Nathan J.; Redman, Brian J.; Piazzolla, Sabino

    2012-01-01

    Recent advances in uncooled, low-cost, long-wave infrared imagers provide excellent opportunities for remotely deployed ground-based remote sensing systems. However, the use of these imagers in demanding atmospheric sensing applications requires that careful attention be paid to characterizing and calibrating the system. We have developed and are using several versions of the ground-based "Infrared Cloud Imager (ICI)" instrument to measure spatial and temporal statistics of clouds and cloud optical depth or attenuation for both climate research and Earth-space optical communications path characterization. In this paper we summarize the ICI instruments and calibration methodology, then show ICI-derived cloud optical depths that are validated using a dual-polarization cloud lidar system for thin clouds (optical depth of approximately 4 or less).
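
    For orientation (this conversion is not taken from the paper), the one-way attenuation corresponding to a cloud optical depth follows directly from Beer-Lambert, as sketched below.

      import math

      def optical_depth_to_db(tau):
          """One-way attenuation (dB) for optical depth tau: T = exp(-tau),
          so A = -10 * log10(T) = 10 * tau / ln(10) ~= 4.34 * tau."""
          return 10.0 * tau / math.log(10.0)

      print(round(optical_depth_to_db(4.0), 1))   # ~17.4 dB for the "thin cloud" limit quoted above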

  16. Mapping snow depth in open alpine terrain from stereo satellite imagery

    NASA Astrophysics Data System (ADS)

    Marti, R.; Gascoin, S.; Berthier, E.; de Pinel, M.; Houet, T.; Laffly, D.

    2016-07-01

    To date, there is no definitive approach to map snow depth in mountainous areas from spaceborne sensors. Here, we examine the potential of very-high-resolution (VHR) optical stereo satellites for this purpose. Two triplets of 0.70 m resolution images were acquired by the Pléiades satellite over an open alpine catchment (14.5 km²) under snow-free and snow-covered conditions. The open-source software Ames Stereo Pipeline (ASP) was used to match the stereo pairs without ground control points, generate raw photogrammetric point clouds, and convert them into high-resolution digital elevation models (DEMs) at 1, 2, and 4 m resolutions. The DEM differences (dDEMs) were computed after 3-D coregistration, including a correction of a -0.48 m vertical bias. The bias-corrected dDEM maps were compared to 451 snow-probe measurements. The results show decimetric accuracy and precision in the Pléiades-derived snow depths: the median of the residuals is -0.16 m, with a standard deviation (SD) of 0.58 m at a pixel size of 2 m. We compared the 2 m Pléiades dDEM to a 2 m dDEM based on a winged unmanned aerial vehicle (UAV) photogrammetric survey performed on the same winter date over a portion of the catchment (3.1 km²). The UAV-derived snow depth map exhibits the same patterns as the Pléiades-derived snow map, with a median of -0.11 m and an SD of 0.62 m when compared to the snow-probe measurements. The Pléiades images benefit from a very broad radiometric range (12 bits), allowing a high correlation success rate over the snow-covered areas. This study demonstrates the value of VHR stereo satellite imagery for mapping snow depth in remote mountainous areas, even when no field data are available.
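
    A minimal sketch of the DEM-differencing and validation arithmetic described above; the function and variable names are assumptions, while the -0.48 m vertical bias figure comes from the abstract.

      import numpy as np

      def snow_depth_from_dems(dem_snow_on, dem_snow_free, vertical_bias=-0.48):
          """Snow depth map as the difference of co-registered snow-on and snow-free DEMs,
          after removing a constant vertical bias (in metres)."""
          return (dem_snow_on - dem_snow_free) - vertical_bias

      def residual_stats(map_depths_at_probes, probe_depths):
          """Median and standard deviation of (map - probe) residuals at the probe locations."""
          res = np.asarray(map_depths_at_probes, float) - np.asarray(probe_depths, float)
          return float(np.median(res)), float(np.std(res))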

  17. Nearby Exo-Earth Astrometric Telescope (NEAT)

    NASA Technical Reports Server (NTRS)

    Shao, M.; Nemati, B.; Zhai, C.; Goullioud, R.

    2011-01-01

    NEAT (Nearby Exo-Earth Astrometric Telescope) is a modest-sized (1 m diameter) telescope. It will be capable of searching approximately 100 nearby stars down to 1 Mearth planets in the habitable zone, and 200 stars at 5 Mearth, 1 AU. The concept addresses the major issues for ultra-precise astrometry: (1) photon noise, with a 0.5 deg diameter field of view; (2) optical errors (beam walk), with a long focal length telescope; (3) focal plane errors, with laser metrology of the focal plane; and (4) PSF centroiding errors, with measurement of the "true" PSF instead of a "guess" of the true PSF, and correction for intra-pixel QE non-uniformities. The technology is "close" to complete: focal plane geometry has been measured to 2e-5 pixels and centroiding to approximately 4e-5 pixels.

  18. The ATLAS Diamond Beam Monitor: Luminosity detector at the LHC

    NASA Astrophysics Data System (ADS)

    Schaefer, D. M.; ATLAS Collaboration

    2016-07-01

    After the first three years of the LHC running, the ATLAS experiment extracted its pixel detector system to refurbish and re-position the optical readout drivers and install a new barrel layer of pixels. The experiment has also taken advantage of this access to install a set of beam monitoring telescopes with pixel sensors, four each in the forward and backward regions. These telescopes are based on chemical vapor deposited (CVD) diamond sensors to survive in this high radiation environment without needing extensive cooling. This paper describes the lessons learned in construction and commissioning of the ATLAS Diamond Beam Monitor (DBM). We show results from the construction quality assurance tests and commissioning performance, including results from cosmic ray running in early 2015.

  19. Optical Design of the WFIRST Phase-A Wide Field Instrument

    NASA Technical Reports Server (NTRS)

    Pasquale, Bert A.; Marx, Catherine T.; Gao, Guangjun; Armani, Nerses; Casey, Thomas

    2017-01-01

    The WFIRST (Wide-Field Infrared Survey Telescope) TMA optical design provides a 0.28 sq deg FOV at 0.11” pixel scale to the Wide Field Instrument, operating between 0.48 and 2.0 micrometers and including a spectrograph mode (1.0-2.0 micrometers). An Integral Field Channel provides 2-D discrete spectroscopy at 0.15” and 0.3” sampling.

  20. Liquid crystal optics for communications, signal processing and 3-D microscopic imaging

    NASA Astrophysics Data System (ADS)

    Khan, Sajjad Ali

    This dissertation proposes, studies and experimentally demonstrates novel liquid crystal (LC) optics to solve challenging problems in RF and photonic signal processing, freespace and fiber optic communications and microscopic imaging. These include free-space optical scanners for military and optical wireless applications, variable fiber-optic attenuators for optical communications, photonic control techniques for phased array antennas and radar, and 3-D microscopic imaging. At the heart of the applications demonstrated in this thesis are LC devices that are non-pixelated and can be controlled either electrically or optically. Instead of the typical pixel-by-pixel control as is custom in LC devices, the phase profile across the aperture of these novel LC devices is varied through the use of high impedance layers. Due to the presence of the high impedance layer, there forms a voltage gradient across the aperture of such a device which results in a phase gradient across the LC layer which in turn is accumulated by the optical beam traversing through this LC device. The geometry of the electrical contacts that are used to apply the external voltage will define the nature of the phase gradient present across the optical beam. In order to steer a laser beam in one angular dimension, straight line electrical contacts are used to form a one dimensional phase gradient while an annular electrical contact results in a circularly symmetric phase profile across the optical beam making it suitable for focusing the optical beam. The geometry of the electrical contacts alone is not sufficient to form the linear and the quadratic phase profiles that are required to either deflect or focus an optical beam. Clever use of the phase response of a typical nematic liquid crystal (NLC) is made such that the linear response region is used for the angular beam deflection while the high voltage quadratic response region is used for focusing the beam. Employing an NLC deflector, a device that uses the linear angular deflection, laser beam steering is demonstrated in two orthogonal dimensions whereas an NLC lens is used to address the third dimension to complete a three dimensional (3-D) scanner. Such an NLC deflector was then used in a variable optical attenuator (VOA), whereby a laser beam coupled between two identical single mode fibers (SMF) was mis-aligned away from the output fiber causing the intensity of the output coupled light to decrease as a function of the angular deflection. Since the angular deflection is electrically controlled, hence the VOA operation is fairly simple and repeatable. An extension of this VOA for wavelength tunable operation is also shown in this dissertation. (Abstract shortened by UMI.)
