Sample records for side pixel registration

  1. An Accurate Co-registration Method for Airborne Repeat-pass InSAR

    NASA Astrophysics Data System (ADS)

    Dong, X. T.; Zhao, Y. H.; Yue, X. J.; Han, C. M.

    2017-10-01

    Interferometric Synthetic Aperture Radar (InSAR) technology plays a significant role in topographic mapping and surface deformation detection. Compared with spaceborne repeat-pass InSAR, airborne repeat-pass InSAR avoids the problems of long revisit times and low image resolution. Because it can acquire abundant information flexibly, accurately, and quickly, airborne repeat-pass InSAR is well suited to monitoring shallow ground deformation. To obtain precise ground elevation information and interferometric coherence for deformation monitoring from the master and slave images, accurate co-registration must be ensured. Because of the side-looking geometry, repeated observation paths, and long baselines, the initial slant ranges and flight heights differ considerably between repeat flight paths. These differences cause pixels located at identical coordinates in the master and slave images to correspond to ground resolution cells of different sizes. The mismatch is most obvious in the long-slant-range parts of the master and slave images. To resolve the differing pixel sizes and obtain accurate co-registration results, a new method is proposed based on the Range-Doppler (RD) imaging model. VV-polarization C-band airborne repeat-pass InSAR images were used in the experiment. The experimental results show that the proposed method achieves superior co-registration accuracy.

  2. Avoiding Stair-Step Artifacts in Image Registration for GOES-R Navigation and Registration Assessment

    NASA Technical Reports Server (NTRS)

    Grycewicz, Thomas J.; Tan, Bin; Isaacson, Peter J.; De Luccia, Frank J.; Dellomo, John

    2016-01-01

    In developing software for independent verification and validation (IVV) of the Image Navigation and Registration (INR) capability for the Geostationary Operational Environmental Satellite R Series (GOES-R) Advanced Baseline Imager (ABI), we have encountered an image registration artifact that limits the accuracy of subpixel image offset estimation using image correlation. Where the two images to be registered have the same pixel size, subpixel image registration preferentially selects registration values where the image pixel boundaries are nearly aligned. Because of the shape of the curve relating input displacement to estimated offset, we call this a stair-step artifact. When one image is at a higher resolution than the other, the stair-step artifact is minimized by correlating at the higher resolution. For validating ABI image navigation, GOES-R images are correlated with Landsat-based ground truth maps. To create the ground truth map, the Landsat image is first transformed to the perspective seen from the GOES-R satellite and then scaled to an appropriate pixel size. Minimizing processing time motivates choosing the map pixels to be the same size as the GOES-R pixels. At this pixel size, shift estimation is computationally efficient, but the stair-step artifact is present. If the map pixels are very small, the stair-step artifact is not a problem, but image correlation is computation-intensive. This paper describes simulation-based selection of the scale for truth maps used in registering GOES-R ABI images.

  3. Correlation and registration of ERTS multispectral imagery by a digital processing technique

    NASA Technical Reports Server (NTRS)

    Bonrud, L. O.; Henrikson, P. J.

    1974-01-01

    Examples of automatic digital processing demonstrate the feasibility of registering one ERTS multispectral scanner (MSS) image with another obtained on a subsequent orbit, and automatic matching, correlation, and registration of MSS imagery with aerial photography (multisensor correlation) is demonstrated. Excellent correlation was obtained with patch sizes exceeding 16 pixels square. Qualities which lead to effective control point selection are distinctive features, good contrast, and constant feature characteristics. Results of the study indicate that more than 300 degrees of freedom are required to register two standard ERTS-1 MSS frames covering 100 by 100 nautical miles to an accuracy of 0.6 pixel mean radial displacement error. An automatic strip processing technique demonstrates 600 to 1200 degrees of freedom over a quarter frame of ERTS imagery. Registration accuracies in the range of 0.3 pixel to 0.5 pixel mean radial error were confirmed by independent error analysis. Accuracies in the range of 0.5 pixel to 1.4 pixel mean radial error were demonstrated by semi-automatic registration over small geographic areas.

  4. Experimental study of digital image processing techniques for LANDSAT data

    NASA Technical Reports Server (NTRS)

    Rifman, S. S. (Principal Investigator); Allendoerfer, W. B.; Caron, R. H.; Pemberton, L. J.; Mckinnon, D. M.; Polanski, G.; Simon, K. W.

    1976-01-01

    The author has identified the following significant results. Results are reported for: (1) subscene registration, (2) full scene rectification and registration, (3) resampling techniques, and (4) ground control point (GCP) extraction. Subscenes (354 pixels x 234 lines) were registered to approximately 1/4 pixel accuracy and evaluated by change detection imagery for three cases: (1) bulk data registration, (2) precision correction of a reference subscene using GCP data, and (3) independently precision processed subscenes. Full scene rectification and registration results were evaluated by using a correlation technique to measure registration errors of 0.3 pixel rms throughout the full scene. Resampling evaluations of nearest neighbor and TRW cubic convolution processed data included change detection imagery and feature classification. Resampled data were also evaluated for an MSS scene containing specular solar reflections.

  5. Making a Back-Illuminated Imager with Back-Side Contact and Alignment Markers

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata

    2008-01-01

    A design modification and a fabrication process that implements the modification have been conceived to solve two problems encountered in the development of back-illuminated, back-side-thinned complementary metal oxide semiconductor (CMOS) image-detector integrated circuits. The two problems are (1) how to form metal electrical-contact pads on the back side that are electrically connected through the thickness in proper alignment with electrical contact points on the front side and (2) how to provide alignment keys on the back side to ensure proper registration of back-side optical components (e.g., microlenses and/or color filters) with the front-side pixel pattern. The essence of the design modification is to add metal plugs that extend from the desired front-side locations through the thickness and protrude from the back side of the substrate. The plugs afford the required front-to-back electrical conduction, and the protrusions of the plugs serve as both the alignment keys and the bases upon which the back-side electrical-contact pads can be formed.

  6. Supervised local error estimation for nonlinear image registration using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Eppenhof, Koen A. J.; Pluim, Josien P. W.

    2017-02-01

    Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation of a registration error map for nonlinear image registration. The method is based on a convolutional neural network that estimates the norm of the residual deformation from patches around each pixel in two registered images. This norm is interpreted as the registration error, and is defined for every pixel in the image domain. The network is trained using a set of artificially deformed images. Each training example is a pair of images: the original image, and a random deformation of that image. No manually labeled ground truth error is required. At test time, only the two registered images are required as input. We train and validate the network on registrations in a set of 2D digital subtraction angiography sequences, such that errors up to eight pixels can be estimated. We show that for this range of errors the convolutional network is able to learn the registration error in pairs of 2D registered images at subpixel precision. Finally, we present a proof of principle for the extension to 3D registration problems in chest CTs, showing that the method has the potential to estimate errors in 3D registration problems.
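
    The patch-based error-regression idea described above can be sketched with a small convolutional network. The layer sizes, patch size, and training loop below are illustrative assumptions, not the network from this record; the sketch only shows how a two-channel patch pair can be regressed to an error norm without manually labeled ground truth.

```python
# Illustrative sketch (not the authors' network): a small CNN regressing a
# registration-error magnitude from a pair of registered image patches.
import torch
import torch.nn as nn

class PatchErrorRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 1))

    def forward(self, x):
        # x: (batch, 2, H, W) -- fixed patch and registered moving patch stacked
        return self.head(self.features(x)).squeeze(1)  # error norm in pixels

# Training pairs come from artificially deformed images, so the residual
# deformation norm of each patch is known without manual labeling.
model = PatchErrorRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

patches = torch.randn(8, 2, 32, 32)   # stand-in batch of patch pairs
true_error = torch.rand(8) * 8.0      # stand-in error norms, up to 8 pixels
optimizer.zero_grad()
loss = loss_fn(model(patches), true_error)
loss.backward()
optimizer.step()
```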

  7. Object-constrained meshless deformable algorithm for high speed 3D nonrigid registration between CT and CBCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Ting; Kim, Sung; Goyal, Sharad

    2010-01-15

    Purpose: High-speed nonrigid registration between the planning CT and the treatment CBCT data is critical for real time image guided radiotherapy (IGRT) to improve the dose distribution and to reduce the toxicity to adjacent organs. The authors propose a new fully automatic 3D registration framework that integrates object-based global and seed constraints with the grayscale-based "demons" algorithm. Methods: Clinical objects were segmented on the planning CT images and were utilized as meshless deformable models during the nonrigid registration process. The meshless models reinforced a global constraint in addition to the grayscale difference between CT and CBCT in order to maintain the shape and the volume of geometrically complex 3D objects during the registration. To expedite the registration process, the framework was stratified into hierarchies, and the authors used a frequency domain formulation to diffuse the displacement between the reference and the target in each hierarchy. Also, during the registration of pelvis images, they replaced the air region inside the rectum with estimated pixel values from the surrounding rectal wall and introduced an additional seed constraint to robustly track and match the seeds implanted into the prostate. The proposed registration framework and algorithm were evaluated on 15 real prostate cancer patients. For each patient, the prostate gland, seminal vesicles, bladder, and rectum were first segmented by a radiation oncologist on planning CT images for radiotherapy planning purposes. The same radiation oncologist also manually delineated the tumor volumes and critical anatomical structures in the corresponding CBCT images acquired at treatment. These delineated structures on the CBCT were only used as the ground truth for the quantitative validation, while structures on the planning CT were used both as the input to the registration method and the ground truth in validation. By registering the planning CT to the CBCT, a displacement map was generated. Segmented volumes in the CT images deformed using the displacement field were compared against the manual segmentations in the CBCT images to quantitatively measure the convergence of the shape and the volume. Other image features were also used to evaluate the overall performance of the registration. Results: The algorithm was able to complete the segmentation and registration process within 1 min, and the superimposed clinical objects achieved a volumetric similarity measure of over 90% between the reference and the registered data. Validation results also showed that the proposed registration could accurately trace the deformation inside the target volume with average errors of less than 1 mm. The method had a solid performance in registering the simulated images with up to 20 Hounsfield unit white noise added. Also, the side-by-side comparison with the original demons algorithm demonstrated its improved registration performance over local pixel-based registration approaches. Conclusions: Given the strength and efficiency of the algorithm, the proposed method has significant clinical potential to accelerate and to improve CBCT delineation and target tracking in online IGRT applications.

  8. Registration of Panoramic/Fish-Eye Image Sequence and LiDAR Points Using Skyline Features

    PubMed Central

    Zhu, Ningning; Jia, Yonghong; Ji, Shunping

    2018-01-01

    We propose utilizing a rigorous registration model and a skyline-based method for automatic registration of LiDAR points and a sequence of panoramic/fish-eye images in a mobile mapping system (MMS). This method can automatically optimize original registration parameters and avoid the use of manual interventions in control point-based registration methods. First, the rigorous registration model between the LiDAR points and the panoramic/fish-eye image was built. Second, skyline pixels from panoramic/fish-eye images and skyline points from the MMS’s LiDAR points were extracted, relying on the difference in the pixel values and the registration model, respectively. Third, a brute force optimization method was used to search for optimal matching parameters between skyline pixels and skyline points. In the experiments, the original registration method and the control point registration method were used to compare the accuracy of our method with a sequence of panoramic/fish-eye images. The result showed: (1) the panoramic/fish-eye image registration model is effective and can achieve high-precision registration of the image and the MMS’s LiDAR points; (2) the skyline-based registration method can automatically optimize the initial attitude parameters, realizing a high-precision registration of a panoramic/fish-eye image and the MMS’s LiDAR points; and (3) the attitude correction values of the sequences of panoramic/fish-eye images are different, and the values must be solved one by one. PMID:29883431

  9. Landsat image registration for agricultural applications

    NASA Technical Reports Server (NTRS)

    Wolfe, R. H., Jr.; Juday, R. D.; Wacker, A. G.; Kaneko, T.

    1982-01-01

    An image registration system has been developed at the NASA Johnson Space Center (JSC) to spatially align multi-temporal Landsat acquisitions for use in agriculture and forestry research. Working in conjunction with the Master Data Processor (MDP) at the Goddard Space Flight Center, it functionally replaces the long-standing LACIE Registration Processor as JSC's data supplier. The system represents an expansion of the techniques developed for the MDP and LACIE Registration Processor, and it utilizes the experience gained in an IBM/JSC effort evaluating the performance of the latter. These techniques are discussed in detail. Several tests were developed to evaluate the registration performance of the system. The results indicate that 1/15-pixel accuracy (about 4m for Landsat MSS) is achievable in ideal circumstances, sub-pixel accuracy (often to 0.2 pixel or better) was attained on a representative set of U.S. acquisitions, and a success rate commensurate with the LACIE Registration Processor was realized. The system has been employed in a production mode on U.S. and foreign data, and a performance similar to the earlier tests has been noted.

  10. The fusion of large scale classified side-scan sonar image mosaics.

    PubMed

    Reed, Scott; Tena, Ruiz Ioseba; Capus, Chris; Petillot, Yvan

    2006-07-01

    This paper presents a unified framework for the creation of classified maps of the seafloor from sonar imagery. Significant challenges in photometric correction, classification, navigation and registration, and image fusion are addressed. The techniques described are directly applicable to a range of remote sensing problems. Recent advances in side-scan data correction are incorporated to compensate for the sonar beam pattern and motion of the acquisition platform. The corrected images are segmented using pixel-based textural features and standard classifiers. In parallel, the navigation of the sonar device is processed using Kalman filtering techniques. A simultaneous localization and mapping framework is adopted to improve the navigation accuracy and produce georeferenced mosaics of the segmented side-scan data. These are fused within a Markovian framework and two fusion models are presented. The first uses a voting scheme regularized by an isotropic Markov random field and is applicable when the reliability of each information source is unknown. The Markov model is also used to inpaint regions where no final classification decision can be reached using pixel level fusion. The second model formally introduces the reliability of each information source into a probabilistic model. Evaluation of the two models using both synthetic images and real data from a large scale survey shows significant quantitative and qualitative improvement using the fusion approach.

  11. Assessment of Thematic Mapper band-to-band registration by the block correlation method

    NASA Technical Reports Server (NTRS)

    Card, D. H.; Wrigley, R. C.; Mertz, F. C.; Hall, J. R.

    1983-01-01

    Rectangular blocks of pixels from one band image were statistically correlated against blocks centered on identical pixels from a second band image. The block pairs were shifted in pixel increments both vertically and horizontally with respect to each other, and the displacement corresponding to the maximum correlation was taken as the best estimate of registration error for each block pair. For the band combinations of the Arkansas scene studied, the misregistration of TM spectral bands within the noncooled focal plane lies well within the 0.2 pixel target specification. Misregistration between the middle IR bands is also well within this specification. The thermal IR band has an apparent misregistration with TM band 7 of approximately 3 pixels in each direction. TM band 3 has a misregistration of approximately 0.2 pixel in the across-scan direction and 0.5 pixel in the along-scan direction with both TM bands 5 and 7.

  12. Assessment of Thematic Mapper Band-to-band Registration by the Block Correlation Method

    NASA Technical Reports Server (NTRS)

    Card, D. H.; Wrigley, R. C.; Mertz, F. C.; Hall, J. R.

    1985-01-01

    Rectangular blocks of pixels from one band image were statistically correlated against blocks centered on identical pixels from a second band image. The block pairs were shifted in pixel increments both vertically and horizontally with respect to each other, and the displacement corresponding to the maximum correlation was taken as the best estimate of registration error for each block pair. For the band combinations of the Arkansas scene studied, the misregistration of TM spectral bands within the noncooled focal plane lies well within the 0.2 pixel target specification. Misregistration between the middle IR bands is also well within this specification. The thermal IR band has an apparent misregistration with TM band 7 of approximately 3 pixels in each direction. TM band 3 has a misregistration of approximately 0.2 pixel in the across-scan direction and 0.5 pixel in the along-scan direction with both TM bands 5 and 7.

  13. Performance evaluations of demons and free form deformation algorithms for the liver region.

    PubMed

    Wang, Hui; Gong, Guanzhong; Wang, Hongjun; Li, Dengwang; Yin, Yong; Lu, Jie

    2014-04-01

    We investigated the influence of breathing motion on radiation therapy using four-dimensional computed tomography (4D-CT) and showed that registration of 4D-CT images is important. The demons algorithm in two interpolation modes was compared to the free form deformation (FFD) model algorithm for registering the different phase images of 4D-CT in tumor tracking, using iodipin as verification. Linear interpolation was used in both mode 1 and mode 2. Mode 1 set outside pixels to the nearest pixel, while mode 2 set outside pixels to zero. We used normalized mutual information (NMI), sum of squared differences, modified Hausdorff distance, and registration speed to evaluate the performance of each algorithm. The average NMI after demons registration in mode 1 improved by 1.76% and 4.75% compared to mode 2 and the FFD model algorithm, respectively. Further, the modified Hausdorff distance did not differ between demons modes 1 and 2, but mode 1 was 15.2% lower than FFD. Finally, the demons algorithm had a clear advantage in registration speed. The demons algorithm in mode 1 was therefore found to be much more suitable for the registration of 4D-CT images. The subtractions of floating images and the reference image before and after registration by demons further verified that the influence of breathing motion cannot be ignored and that the demons registration method is feasible.
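
    A minimal sketch of a Thirion-style demons iteration is given below for reference. It is not the implementation evaluated in this record: the multi-resolution scheme, stopping criteria, and the exact handling of the two interpolation modes are omitted, and the boundary mode shown (nearest-pixel fill) only loosely corresponds to mode 1.

```python
# Minimal 2D sketch of a Thirion-style demons update (illustrative only; not the
# implementation evaluated in this record).
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def demons_register(fixed, moving, n_iter=50, sigma=1.0):
    """Estimate a displacement field (uy, ux) such that moving(x + u) ~= fixed(x)."""
    fixed = fixed.astype(float)
    moving = moving.astype(float)
    uy = np.zeros_like(fixed)
    ux = np.zeros_like(fixed)
    gy, gx = np.gradient(fixed)                  # gradient of the fixed image
    grad_sq = gy ** 2 + gx ** 2
    yy, xx = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
    for _ in range(n_iter):
        # Warp the moving image with the current field; fill outside values with
        # the nearest pixel (loosely analogous to "mode 1" in the record).
        warped = map_coordinates(moving, [yy + uy, xx + ux], order=1, mode='nearest')
        diff = warped - fixed
        denom = grad_sq + diff ** 2
        denom[denom == 0] = 1e-8                 # avoid division by zero
        uy += -diff * gy / denom                 # demons force along the gradient
        ux += -diff * gx / denom
        uy = gaussian_filter(uy, sigma)          # Gaussian regularization of the field
        ux = gaussian_filter(ux, sigma)
    return uy, ux
```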

  14. An image warping technique for rodent brain MRI-histology registration based on thin-plate splines with landmark optimization

    NASA Astrophysics Data System (ADS)

    Liu, Yutong; Uberti, Mariano; Dou, Huanyu; Mosley, R. Lee; Gendelman, Howard E.; Boska, Michael D.

    2009-02-01

    Coregistration of in vivo magnetic resonance imaging (MRI) with histology provides validation of disease biomarker and pathobiology studies. Although thin-plate splines are widely used in such image registration, point landmark selection is error prone and often time-consuming. We present a technique to optimize landmark selection for thin-plate splines and demonstrate its usefulness in warping rodent brain MRI to histological sections. In this technique, contours are drawn on the corresponding MRI slices and images of histological sections. The landmarks are extracted from the contours by equal spacing and then optimized by minimizing a cost function consisting of the landmark displacement and contour curvature. The technique was validated using simulation data and brain MRI-histology coregistration in a murine model of HIV-1 encephalitis. Registration error was quantified by calculating target registration error (TRE). Without optimization, the TRE was approximately 8 pixels and remained stable across 20-80 landmarks. The optimized results were more accurate at low landmark numbers (TRE of approximately 2 pixels for 50 landmarks), while the accuracy decreased (TRE of approximately 8 pixels) for larger numbers of landmarks (70-80). The results demonstrated that registration accuracy decreases as the number of landmarks increases, and that landmark optimization offers more confidence in MRI-histology registration using thin-plate splines.
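
    The warping step can be sketched with a standard thin-plate spline fit to landmark pairs, as below. This reproduces only the TPS backward warp; the contour-based landmark extraction and the displacement/curvature cost optimization proposed in this record are not shown, and the use of scipy's RBFInterpolator is an implementation choice, not the authors' code.

```python
# Hedged sketch of the thin-plate-spline warping step (scipy-based illustration,
# not the authors' implementation).
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def tps_warp(moving, landmarks_ref, landmarks_mov):
    """Warp `moving` into the reference frame.

    landmarks_ref, landmarks_mov: (N, 2) arrays of corresponding (row, col)
    points (N >= 3, not all collinear).
    """
    # Fit a TPS mapping from reference coordinates to moving-image coordinates,
    # so every output pixel can be pulled from the moving image (backward warp).
    tps = RBFInterpolator(landmarks_ref, landmarks_mov,
                          kernel='thin_plate_spline', smoothing=0.0)
    rows, cols = moving.shape
    grid = np.stack(np.mgrid[0:rows, 0:cols], axis=-1).reshape(-1, 2).astype(float)
    src = tps(grid)                              # moving-image coords per output pixel
    warped = map_coordinates(moving.astype(float), [src[:, 0], src[:, 1]], order=1)
    return warped.reshape(rows, cols)
```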

  15. Investigation of TM Band-to-band Registration Using the JSC Registration Processor

    NASA Technical Reports Server (NTRS)

    Yao, S. S.; Amis, M. L.

    1984-01-01

    The JSC registration processor performs scene-to-scene (or band-to-band) correlation based on edge images. The edge images are derived from a percentage of the edge pixels calculated from the raw scene data, excluding clouds and other extraneous data in the scene. Correlations are performed on patches (blocks) of the edge images, and the correlation peak location in each patch is estimated iteratively to fractional pixel location accuracy. Peak offset locations from all patches over the scene are then considered together, and a variety of tests are made to weed out outliers and other inconsistencies before a distortion model is assumed. Thus, the correlation peak offset locations in each patch indicate quantitatively how well the two TM bands register to each other over that patch of scene data. The average of these offsets indicates the overall accuracy of the band-to-band registration. The registration processor was also used to register one acquisition to another acquisition of multitemporal TM data acquired over the same ground track. Band 4 images from both acquisitions were correlated, and an rms error of a fraction of a pixel was routinely obtained.

  16. Evaluation of registration accuracy between Sentinel-2 and Landsat 8

    NASA Astrophysics Data System (ADS)

    Barazzetti, Luigi; Cuca, Branka; Previtali, Mattia

    2016-08-01

    Starting in June 2015, Sentinel-2A has been delivering high resolution optical images (ground resolution up to 10 meters) to provide a global coverage of the Earth's land surface every 10 days. The planned launch of Sentinel-2B, along with the integration of Landsat images, will provide time series with an unprecedented revisit time, indispensable for numerous monitoring applications in which high resolution multi-temporal information is required. These include agriculture, water bodies, and natural hazards, to name a few. However, the combined use of multi-temporal images requires an accurate geometric registration, i.e. pixel-to-pixel correspondence for terrain-corrected products. This paper presents an analysis of spatial co-registration accuracy for several datasets of Sentinel-2 and Landsat 8 images distributed all around the world. Images were compared with digital correlation techniques for image matching, obtaining an evaluation of registration accuracy with an affine transformation as the geometrical model. Results demonstrate that sub-pixel accuracy was achieved between 10 m resolution Sentinel-2 bands (band 3) and 15 m resolution panchromatic Landsat images (band 8).

  17. Generalized procrustean image deformation for subtraction of mammograms

    NASA Astrophysics Data System (ADS)

    Good, Walter F.; Zheng, Bin; Chang, Yuan-Hsiang; Wang, Xiao Hui; Maitz, Glenn S.

    1999-05-01

    This project is a preliminary evaluation of two simple fully automatic nonlinear transformations which can map any mammographic image onto a reference image while guaranteeing registration of specific features. The first method automatically identifies skin lines, after which each pixel is given coordinates in the range [0,1] X [0,1], where the actual value of a coordinate is the fractional distance of the pixel between tissue boundaries in either the horizontal or vertical direction. This ensures that skin lines are put in registration. The second method, which is the method of primary interest, automatically detects pectoral muscles, skin lines and nipple locations. For each image, a polar coordinate system is established with its origin at the intersection of the nipple axes line (NAL) and a line indicating the pectoral muscle. Points within a mammogram are identified by the angle of their position vector, relative to the NAL, and by their fractional distance between the origin and the skin line. This deforms mammograms in such a way that their pectoral lines, NALs and skin lines are all in registration. After images are deformed, their grayscales are adjusted by applying linear regression to pixel value pairs for corresponding tissue pixels. In a comparison of these methods to a previously reported 'translation/rotation' technique, evaluation of difference images clearly indicates that the polar coordinates method results in the most accurate registration of the transformations considered.

  18. Multimodality Non-Rigid Image Registration for Planning, Targeting and Monitoring during CT-guided Percutaneous Liver Tumor Cryoablation

    PubMed Central

    Elhawary, Haytham; Oguro, Sota; Tuncali, Kemal; Morrison, Paul R.; Tatli, Servet; Shyn, Paul B.; Silverman, Stuart G.; Hata, Nobuhiko

    2010-01-01

    Rationale and Objectives To develop non-rigid image registration between pre-procedure contrast enhanced MR images and intra-procedure unenhanced CT images, to enhance tumor visualization and localization during CT-guided liver tumor cryoablation procedures. Materials and Methods After IRB approval, a non-rigid registration (NRR) technique was evaluated with different pre-processing steps and algorithm parameters and compared to a standard rigid registration (RR) approach. The Dice Similarity Coefficient (DSC), Target Registration Error (TRE), 95% Hausdorff distance (HD) and total registration time (minutes) were compared using a two-sided Student’s t-test. The entire registration method was then applied during five CT-guided liver cryoablation cases with the intra-procedural CT data transmitted directly from the CT scanner, with both accuracy and registration time evaluated. Results Selected optimal parameters for registration were section thickness of 5mm, cropping the field of view to 66% of its original size, manual segmentation of the liver, B-spline control grid of 5×5×5 and spatial sampling of 50,000 pixels. Mean 95% HD of 3.3mm (2.5x improvement compared to RR, p<0.05); mean DSC metric of 0.97 (13% increase); and mean TRE of 4.1mm (2.7x reduction) were measured. During the cryoablation procedure registration between the pre-procedure MR and the planning intra-procedure CT took a mean time of 10.6 minutes, the MR to targeting CT image took 4 minutes and MR to monitoring CT took 4.3 minutes. Mean registration accuracy was under 3.4mm. Conclusion Non-rigid registration allowed improved visualization of the tumor during interventional planning, targeting and evaluation of tumor coverage by the ice ball. Future work is focused on reducing segmentation time to make the method more clinically acceptable. PMID:20817574

  19. Junction-side illuminated silicon detector arrays

    DOEpatents

    Iwanczyk, Jan S.; Patt, Bradley E.; Tull, Carolyn

    2004-03-30

    A junction-side illuminated detector array of pixelated detectors is constructed on a silicon wafer. A junction contact on the front-side may cover the whole detector array, and may be used as an entrance window for light, x-ray, gamma ray and/or other particles. The back-side has an array of individual ohmic contact pixels. Each of the ohmic contact pixels on the back-side may be surrounded by a grid or a ring of junction separation implants. Effective pixel size may be changed by separately biasing different sections of the grid. A scintillator may be coupled directly to the entrance window while readout electronics may be coupled directly to the ohmic contact pixels. The detector array may be used as a radiation hardened detector for high-energy physics research or as avalanche imaging arrays.

  20. Registration of Heat Capacity Mapping Mission day and night images

    NASA Technical Reports Server (NTRS)

    Watson, K.; Hummer-Miller, S.; Sawatzky, D. L. (Principal Investigator)

    1982-01-01

    Neither iterative registration, using drainage intersection maps for control, nor cross correlation techniques were satisfactory in registering day and night HCMM imagery. A procedure was developed which registers the image pairs by selecting control points and mapping the night thermal image to the daytime thermal and reflectance images using an affine transformation on a 1300 by 1100 pixel image. The resulting image registration is accurate to better than two pixels (RMS) and does not exhibit the significant misregistration that was noted in the temperature-difference and thermal-inertia products supplied by NASA. The affine transformation was determined using simple matrix arithmetic, a step that can be performed rapidly on a minicomputer.
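
    The affine determination by simple matrix arithmetic can be sketched as a least-squares fit to control-point pairs, as below. The control-point coordinates are placeholders for illustration, not values from the HCMM study.

```python
# Minimal least-squares affine fit to control points ("simple matrix
# arithmetic"); the control-point coordinates below are placeholders.
import numpy as np

def fit_affine(src, dst):
    """Solve dst ~= [x, y, 1] @ A.T for a 2x3 affine matrix A (least squares)."""
    X = np.hstack([src, np.ones((src.shape[0], 1))])   # (N, 3) design matrix
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T                                          # (2, 3)

def apply_affine(A, pts):
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ A.T

# Placeholder control points (night-image -> day-image pixel coordinates).
src = np.array([[10.0, 12.0], [400.0, 30.0], [50.0, 900.0], [700.0, 650.0]])
dst = np.array([[12.5, 11.0], [403.1, 27.8], [49.0, 902.3], [701.7, 646.9]])
A = fit_affine(src, dst)
rms = np.sqrt(np.mean(np.sum((apply_affine(A, src) - dst) ** 2, axis=1)))
print("affine matrix:\n", A, "\nRMS residual (pixels):", rms)
```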

  1. An approach to defect inspection for packing presswork with virtual orientation points and threshold template image

    NASA Astrophysics Data System (ADS)

    Hao, Xiangyang; Liu, Songlin; Zhao, Fulai; Jiang, Lixing

    2015-05-01

    Packing presswork is an important aspect of industrial products, especially luxury commodities such as cigarettes. To ensure that the packing presswork is qualified, products should be inspected piece by piece and unqualified ones rejected, using a vision-based inspection method, which offers advantages such as non-contact inspection, high efficiency, and automation. Vision-based inspection of packing presswork mainly consists of image acquisition, image registration, and defect inspection. The registration between the inspected image and the reference image is the foundation and premise of visual inspection. In order to realize rapid, reliable, and accurate image registration, a registration method based on virtual orientation points is put forward. The precision of registration between the inspected image and the reference image can reach the sub-pixel level. Since defects have no fixed position, shape, size, or color, three measures are taken to improve the inspection effect. Firstly, the concept of a threshold template image is put forward to resolve the problem of a variable threshold on the intensity difference. Secondly, the color difference is calculated by comparing each pixel with the adjacent pixels of its correspondence in the reference image, to avoid false defects resulting from color registration error. Thirdly, an image pyramid strategy is applied in the inspection algorithm to improve inspection efficiency. Experiments show that the algorithm is effective for defect inspection and takes 27.4 ms on average to inspect a piece of cigarette packing presswork.
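
    The threshold-template idea can be illustrated with the sketch below: a per-pixel tolerance is learned from registered defect-free samples, and a registered inspected image is flagged wherever its difference from the reference exceeds that tolerance. The sample statistics, thresholding constants, and toy data are assumptions; the virtual-orientation-point registration, adjacent-pixel color comparison, and image-pyramid speedup from this record are not reproduced.

```python
# Illustrative sketch of the "threshold template image" idea (not the full
# algorithm from this record).
import numpy as np

def build_threshold_template(good_samples, reference, k=3.0, floor=5.0):
    """good_samples: (N, H, W) registered defect-free images; reference: (H, W)."""
    diffs = np.abs(good_samples.astype(float) - reference.astype(float))
    # Per-pixel tolerance: allow the normal variation observed at that pixel.
    return np.maximum(diffs.mean(axis=0) + k * diffs.std(axis=0), floor)

def inspect(image, reference, threshold_template):
    diff = np.abs(image.astype(float) - reference.astype(float))
    return diff > threshold_template             # boolean defect mask

# Toy usage with synthetic data (placeholders, not real presswork images).
rng = np.random.default_rng(0)
reference = rng.integers(0, 255, size=(64, 64)).astype(float)
good = reference + rng.normal(0.0, 2.0, size=(20, 64, 64))
template = build_threshold_template(good, reference)
test = reference.copy()
test[30:34, 30:34] += 60.0                       # simulated print defect
print(inspect(test, reference, template).sum(), "defect pixels flagged")
```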

  2. Image registration with uncertainty analysis

    DOEpatents

    Simonson, Katherine M [Cedar Crest, NM

    2011-03-22

    In an image registration method, edges are detected in a first image and a second image. A percentage of edge pixels in a subset of the second image that are also edges in the first image shifted by a translation is calculated. A best registration point is calculated based on a maximum percentage of edges matched. In a predefined search region, all registration points other than the best registration point are identified that are not significantly worse than the best registration point according to a predetermined statistical criterion.
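
    A minimal sketch of the edge-match scoring described in this patent abstract follows. The Sobel-based edge detector, edge-pixel fraction, and integer search radius are assumptions, and the statistical criterion for identifying registration points that are not significantly worse than the best is omitted.

```python
# Minimal sketch of edge-percentage matching (edge detector and search radius
# are assumptions; the statistical "not significantly worse" test is omitted).
import numpy as np
from scipy.ndimage import sobel, shift as nd_shift

def edge_map(img, frac=0.1):
    """Keep the strongest `frac` of gradient-magnitude pixels as edges."""
    img = img.astype(float)
    mag = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    return mag >= np.quantile(mag, 1.0 - frac)

def best_translation(img1, img2, search=5):
    """Search integer shifts of img1 edges; score by the fraction of img2 edge
    pixels that coincide with a (shifted) img1 edge pixel."""
    e1, e2 = edge_map(img1), edge_map(img2)
    best, best_score = (0, 0), -1.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = nd_shift(e1.astype(float), (dy, dx), order=0) > 0.5
            score = (e2 & shifted).sum() / max(e2.sum(), 1)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best, best_score
```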

  3. Assessment of Thematic Mapper Band-to-band Registration by the Block Correlation Method

    NASA Technical Reports Server (NTRS)

    Card, D. H.; Wrigley, R. C.; Mertz, F. C.; Hall, J. R.

    1984-01-01

    The design of the Thematic Mapper (TM) multispectral radiometer makes it susceptible to band-to-band misregistration. To estimate band-to-band misregistration, a block correlation method is employed. This method is chosen over other possible techniques (band differencing and flickering) because quantitative results are produced. The method correlates rectangular blocks of pixels from one band against blocks centered on identical pixels from a second band. The block pairs are shifted in pixel increments both vertically and horizontally with respect to each other and the correlation coefficient for each shift position is computed. The displacement corresponding to the maximum correlation is taken as the best estimate of registration error for each block pair. Subpixel shifts are estimated by a bi-quadratic interpolation of the correlation values surrounding the maximum correlation. To obtain statistical summaries for each band combination, post-processing of the block correlation results is performed. The method results in estimates of registration error that are consistent with expectations.
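
    The block-correlation measurement for a single block pair can be sketched as below: integer-shift normalized correlation followed by a quadratic (parabolic) refinement of the peak in each axis. The block size and search range are assumptions, and the scene-level statistical summaries described in the record are not reproduced.

```python
# Sketch of one block-pair correlation with bi-quadratic (parabolic per-axis)
# sub-pixel refinement of the peak. Block size and search range are assumptions;
# the center must lie at least (half + search) pixels from the image border.
import numpy as np

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))

def block_misregistration(band1, band2, center, half=16, search=3):
    cy, cx = center
    ref = band1[cy - half:cy + half, cx - half:cx + half]
    corr = np.zeros((2 * search + 1, 2 * search + 1))
    for i, dy in enumerate(range(-search, search + 1)):
        for j, dx in enumerate(range(-search, search + 1)):
            blk = band2[cy + dy - half:cy + dy + half, cx + dx - half:cx + dx + half]
            corr[i, j] = ncc(ref, blk)
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

    def parabolic(cm, c0, cp):                   # vertex of a parabola through 3 points
        d = cm - 2 * c0 + cp
        return 0.0 if d == 0 else 0.5 * (cm - cp) / d

    dy = iy - search + (parabolic(corr[iy - 1, ix], corr[iy, ix], corr[iy + 1, ix])
                        if 0 < iy < 2 * search else 0.0)
    dx = ix - search + (parabolic(corr[iy, ix - 1], corr[iy, ix], corr[iy, ix + 1])
                        if 0 < ix < 2 * search else 0.0)
    return dy, dx                                # band-to-band offset estimate (pixels)
```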

  4. Non-rigid image registration using graph-cuts.

    PubMed

    Tang, Tommy W H; Chung, Albert C S

    2007-01-01

    Non-rigid image registration is an ill-posed yet challenging problem due to its very high number of degrees of freedom and its inherent requirement of smoothness. The graph-cuts method is a powerful combinatorial optimization tool which has been successfully applied to image segmentation and stereo matching. Under some specific constraints, the graph-cuts method yields either a global minimum or a local minimum in a strong sense. Thus, it is interesting to see the effects of using graph-cuts in non-rigid image registration. In this paper, we formulate non-rigid image registration as a discrete labeling problem. Each pixel in the source image is assigned a displacement label (which is a vector) indicating the position in the floating image to which it spatially corresponds. A smoothness constraint based on the first derivative is used to penalize sharp changes in displacement labels across pixels. The whole system can be optimized by using the graph-cuts method via alpha-expansions. We compare 2D and 3D registration results of our method with two state-of-the-art approaches. It is found that our method is more robust to different challenging non-rigid registration cases with higher registration accuracy.

  5. Multi-Sensor Registration of Earth Remotely Sensed Imagery

    NASA Technical Reports Server (NTRS)

    LeMoigne, Jacqueline; Cole-Rhodes, Arlene; Eastman, Roger; Johnson, Kisha; Morisette, Jeffrey; Netanyahu, Nathan S.; Stone, Harold S.; Zavorin, Ilya; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    Assuming that approximate registration is given within a few pixels by a systematic correction system, we develop automatic image registration methods for multi-sensor data with the goal of achieving sub-pixel accuracy. Automatic image registration is usually defined by three steps: feature extraction, feature matching, and data resampling or fusion. Our previous work focused on image correlation methods based on the use of different features. In this paper, we study different feature matching techniques and present five algorithms where the features are either original gray levels or wavelet-like features, and the feature matching is based on gradient descent optimization, statistical robust matching, and mutual information. These algorithms are tested and compared on several multi-sensor datasets covering one of the EOS Core Sites, the Konza Prairie in Kansas, from four different sensors: IKONOS (4m), Landsat-7/ETM+ (30m), MODIS (500m), and SeaWiFS (1000m).

  6. Comparison of time-series registration methods in breast dynamic infrared imaging

    NASA Astrophysics Data System (ADS)

    Riyahi-Alam, S.; Agostini, V.; Molinari, F.; Knaflitz, M.

    2015-03-01

    Automated motion reduction in dynamic infrared imaging is in demand in clinical applications, since movement disarranges the time-temperature series of each pixel, thus creating thermal artifacts that might bias the clinical decision. All previously proposed registration methods are feature-based algorithms requiring manual intervention. The aim of this work is to optimize the registration strategy specifically for breast dynamic infrared imaging and to make it user-independent. We implemented and evaluated three different 3D time-series registration methods: 1. linear affine, 2. non-linear B-spline, and 3. demons, applied to 12 datasets of healthy breast thermal images. The results are evaluated through normalized mutual information, with average values of 0.70 ±0.03, 0.74 ±0.03 and 0.81 ±0.09 (out of 1) for affine, B-spline and demons registration, respectively, as well as breast boundary overlap and the Jacobian determinant of the deformation field. The statistical analysis of the results showed that the symmetric diffeomorphic demons registration method also performs best, with the best breast alignment and non-negative Jacobian values that guarantee image similarity and anatomical consistency of the transformation, owing to homologous forces that shorten the pixel geometric disparities across all frames. We propose demons registration as an effective technique for time-series dynamic infrared registration, to stabilize the local temperature oscillation.

  7. Estimation bias from using nonlinear Fourier plane correlators for sub-pixel image shift measurement and implications for the binary joint transform correlator

    NASA Astrophysics Data System (ADS)

    Grycewicz, Thomas J.; Florio, Christopher J.; Franz, Geoffrey A.; Robinson, Ross E.

    2007-09-01

    When using Fourier plane digital algorithms or an optical correlator to measure the correlation between digital images, interpolation by center-of-mass or quadratic estimation techniques can be used to estimate image displacement to the sub-pixel level. However, this can lead to a bias in the correlation measurement. This bias shifts the sub-pixel output measurement to be closer to the nearest pixel center than the actual location. The paper investigates the bias in the outputs of both digital and optical correlators, and proposes methods to minimize this effect. We use digital studies and optical implementations of the joint transform correlator to demonstrate optical registration with accuracies better than 0.1 pixels. We use both simulations of image shift and movies of a moving target as inputs. We demonstrate bias error for both center-of-mass and quadratic interpolation, and discuss the reasons that this bias is present. Finally, we suggest measures to reduce or eliminate the bias effects. We show that when sub-pixel bias is present, it can be eliminated by modifying the interpolation method. By removing the bias error, we improve registration accuracy by thirty percent.
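
    A simple harness for observing this bias is sketched below: a smooth 1-D signal is shifted by known fractional amounts, the correlation peak is refined by quadratic or center-of-mass interpolation, and the estimate is compared with the truth. The test signal and shift range are assumptions; this is not the joint transform correlator configuration used in the paper.

```python
# Harness sketch for measuring sub-pixel estimation bias (test signal and shift
# range are assumptions, not the correlator setup from the paper).
import numpy as np
from scipy.ndimage import shift as nd_shift

def estimate_shift(ref, sig, method="quadratic"):
    corr = np.correlate(sig - sig.mean(), ref - ref.mean(), mode="full")
    lags = np.arange(-len(ref) + 1, len(sig))
    k = int(np.argmax(corr))
    cm, c0, cp = corr[k - 1], corr[k], corr[k + 1]
    if method == "quadratic":                    # parabolic vertex through 3 points
        frac = 0.5 * (cm - cp) / (cm - 2 * c0 + cp)
    else:                                        # center of mass of the 3 points
        frac = (cp - cm) / (cm + c0 + cp)
    return lags[k] + frac

rng = np.random.default_rng(1)
ref = np.convolve(rng.normal(size=512), np.hanning(15), mode="same")  # smooth signal
for true in np.linspace(-0.5, 0.5, 11):
    sig = nd_shift(ref, true, order=3)           # apply a known fractional shift
    est = estimate_shift(ref, sig, "quadratic")
    print(f"true {true:+.2f}  estimated {est:+.3f}  bias {est - true:+.3f}")
```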

  8. Fast and Robust Registration of Multimodal Remote Sensing Images via Dense Orientated Gradient Feature

    NASA Astrophysics Data System (ADS)

    Ye, Y.

    2017-09-01

    This paper presents a fast and robust method for the registration of multimodal remote sensing data (e.g., optical, LiDAR, SAR and map). The proposed method is based on the hypothesis that structural similarity between images is preserved across different modalities. In the definition of the proposed method, we first develop a pixel-wise feature descriptor named the Dense Orientated Gradient Histogram (DOGH), which can be computed efficiently at every pixel and is robust to non-linear intensity differences between images. Then a fast similarity metric based on DOGH is built in the frequency domain using the Fast Fourier Transform (FFT) technique. Finally, a template matching scheme is applied to detect tie points between images. Experimental results on different types of multimodal remote sensing images show that the proposed similarity metric has superior matching performance and computational efficiency compared with the state-of-the-art methods. Moreover, based on the proposed similarity metric, we also design a fast and robust automatic registration system for multimodal images. This system has been evaluated using a pair of very large SAR and optical images (more than 20000 × 20000 pixels). Experimental results show that our system outperforms two popular commercial software systems (i.e. ENVI and ERDAS) in both registration accuracy and computational efficiency.

  9. SU-E-J-113: Effects of Deformable Registration On First-Order Texture Maps Calculated From Thoracic Lung CT Scans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, C; Cunliffe, A; Al-Hallaq, H

    Purpose: To determine the stability of eight first-order texture features following the deformable registration of serial computed tomography (CT) scans. Methods: CT scans at two different time points from 10 patients deemed to have no lung abnormalities by a radiologist were collected. Following lung segmentation using an in-house program, texture maps were calculated from 32×32-pixel regions of interest (ROIs) centered at every pixel in the lungs. The texture feature value of the ROI was assigned to the center pixel of the ROI in the corresponding location of the texture map. Pixels in the square ROI not contained within the segmented lung were not included in the calculation. To quantify the agreement between ROI texture features in corresponding pixels of the baseline and follow-up texture maps, the Fraunhofer MEVIS EMPIRE10 deformable registration algorithm was used to register the baseline and follow-up scans. Bland-Altman analysis was used to compare registered scan pairs by computing normalized bias (nBias), defined as the feature value change normalized to the mean feature value, and normalized range of agreement (nRoA), defined as the range spanned by the 95% limits of agreement normalized to the mean feature value. Results: Each patient's scans contained between 6.8 and 15.4 million ROIs. All of the first-order features investigated were found to have an nBias value less than 0.04% and an nRoA less than 19%, indicating that the variability introduced by deformable registration was low. Conclusion: The eight first-order features investigated were found to be registration stable. Changes in CT texture maps could allow for temporal-spatial evaluation of the evolution of lung abnormalities relating to a variety of diseases on a patient-by-patient basis. SGA and HA receive royalties and licensing fees through the University of Chicago for computer-aided diagnosis technology. Research reported in this publication was supported by the National Institute of General Medical Sciences of the National Institutes of Health under Award Number R25GM109439.

  10. Reducing Interpolation Artifacts for Mutual Information Based Image Registration

    PubMed Central

    Soleimani, H.; Khosravifard, M.A.

    2011-01-01

    Medical image registration methods which use mutual information as a similarity measure have been improved in recent decades. Mutual information is a basic concept of information theory which indicates the dependency of two random variables (or two images). In order to evaluate the mutual information of two images, their joint probability distribution is required. Several interpolation methods, such as Partial Volume (PV) and bilinear, are used to estimate the joint probability distribution. Both of these methods introduce artifacts into the mutual information function. The Partial Volume-Hanning window (PVH) and Generalized Partial Volume (GPV) methods were introduced to remove such artifacts. In this paper we show that the acceptable performance of these methods is not due to their kernel function; rather, it is due to the number of pixels incorporated in the interpolation. Since using more pixels requires a more complex and time-consuming interpolation process, we propose a new interpolation method which uses only four pixels (the same as PV and bilinear interpolation) and removes most of the artifacts. Experimental results on the registration of Computed Tomography (CT) images show the superiority of the proposed scheme. PMID:22606673
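
    The partial-volume accumulation of the joint histogram can be sketched as below: each sample spreads its count over the four neighbouring pixels with bilinear weights, and mutual information is computed from the resulting histogram. The bin count and translation-only transform are assumptions, and the modified four-pixel kernel proposed in this record is not reproduced; the double loop is written for clarity, not speed.

```python
# Sketch of mutual information from a partial-volume (bilinear-weight) joint
# histogram; written with explicit loops for clarity, not speed. Bin count and
# the translation-only transform are assumptions.
import numpy as np

def mutual_information_pv(ref, flt, shift, bins=32):
    """MI between `ref` and `flt` sampled at a fractional (dy, dx) translation.
    Assumes non-negative intensities."""
    H, W = ref.shape
    dy, dx = shift
    hist = np.zeros((bins, bins))
    rbin = np.clip((ref / ref.max() * (bins - 1)).astype(int), 0, bins - 1)
    fnorm = np.clip(flt / flt.max() * (bins - 1), 0, bins - 1)
    for y in range(H):
        for x in range(W):
            sy, sx = y + dy, x + dx              # fractional position in `flt`
            y0, x0 = int(np.floor(sy)), int(np.floor(sx))
            wy, wx = sy - y0, sx - x0
            for oy, wy_ in ((0, 1 - wy), (1, wy)):
                for ox, wx_ in ((0, 1 - wx), (1, wx)):
                    yy, xx = y0 + oy, x0 + ox
                    if 0 <= yy < H and 0 <= xx < W:
                        fb = int(round(fnorm[yy, xx]))
                        hist[rbin[y, x], fb] += wy_ * wx_   # PV: spread weights only
    p = hist / hist.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

# Toy check: MI should be higher at the true translation of a rolled copy.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
flt = np.roll(ref, 1, axis=1) + 0.01 * rng.random((32, 32))
print(mutual_information_pv(ref, flt, (0.0, 1.0)), mutual_information_pv(ref, flt, (0.0, 0.0)))
```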

  11. SU-E-J-114: A Practical Hybrid Method for Improving the Quality of CT-CBCT Deformable Image Registration for Head and Neck Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, C; Kumarasiri, A; Chetvertkov, M

    2015-06-15

    Purpose: Accurate deformable image registration (DIR) between CT and CBCT in H&N is challenging. In this study, we propose a practical hybrid method that uses not only the pixel intensities but also organ physical properties, structure volumes of interest (VOIs), and interactive local registrations. Methods: Five oropharyngeal cancer patients were selected retrospectively. For each patient, the planning CT was registered to the last fraction CBCT, where the anatomy difference was largest. A three-step registration strategy was tested: Step 1) DIR using pixel intensity only, Step 2) DIR with additional use of a structure VOI and a rigidity penalty, and Step 3) interactive local correction. For Step 1, a public-domain open-source DIR algorithm was used (cubic B-spline, mutual information, steepest gradient optimization, and 4-level multi-resolution). For Step 2, the rigidity penalty was applied to bony anatomy and the brain, and a structure VOI was used to handle body truncation such as the shoulder cut-off on CBCT. Finally, in Step 3, the registrations were reviewed on our in-house developed software and the erroneous areas were corrected via a local registration using a level-set motion algorithm. Results: After Step 1, there were considerable registration errors in soft tissues and unrealistic stretching posterior to the neck and near the shoulder due to body truncation. The brain was also found to be deformed to a measurable extent near the superior border of the CBCT. Such errors could be effectively removed by using a structure VOI and rigidity penalty. The remaining local soft tissue error could be corrected using the interactive software tool. The estimated interactive correction time was approximately 5 minutes. Conclusion: DIR using only the image pixel intensity was vulnerable to noise and body truncation. Corrective action was necessary to achieve good-quality registrations. We found the proposed three-step hybrid method efficient and practical for CT/CBCT registrations in H&N. My department receives grant support from industrial partners: (a) Varian Medical Systems, Palo Alto, CA, and (b) Philips HealthCare, Best, Netherlands.

  12. An analysis of Landsat-4 Thematic Mapper geometric properties

    NASA Technical Reports Server (NTRS)

    Walker, R. E.; Zobrist, A. L.; Bryant, N. A.; Gohkman, B.; Friedman, S. Z.; Logan, T. L.

    1984-01-01

    Landsat-4 Thematic Mapper data of Washington, DC, Harrisburg, PA, and Salton Sea, CA were analyzed to determine geometric integrity and conformity of the data to known earth surface geometry. Several tests were performed. Intraband correlation and interband registration were investigated. No problems were observed in the intraband analysis, and aside from indications of slight misregistration between bands of the primary versus bands of the secondary focal planes, interband registration was well within the specified tolerances. A substantial number of ground control points were found and used to check the images' conformity to the Space Oblique Mercator (SOM) projection of their respective areas. The means of the residual offsets, which included nonprocessing-related measurement errors, were close to the one pixel level in the two scenes examined. The Harrisburg scene residual mean was 28.38 m (0.95 pixels) with a standard deviation of 19.82 m (0.66 pixels), while the mean and standard deviation for the Salton Sea scene were 40.46 m (1.35 pixels) and 30.57 m (1.02 pixels), respectively. Overall, the data were judged to be of high geometric quality, with errors close to those targeted by the TM sensor design specifications.

  13. SU-E-J-91: FFT Based Medical Image Registration Using a Graphics Processing Unit (GPU).

    PubMed

    Luce, J; Hoggarth, M; Lin, J; Block, A; Roeske, J

    2012-06-01

    To evaluate the efficiency gains obtained from using a Graphics Processing Unit (GPU) to perform a Fourier Transform (FT) based image registration. Fourier-based image registration involves obtaining the FT of the component images, and analyzing them in Fourier space to determine the translations and rotations of one image set relative to another. An important property of FT registration is that by enlarging the images (adding additional pixels), one can obtain translations and rotations with sub-pixel resolution. The expense, however, is an increased computational time. GPUs may decrease the computational time associated with FT image registration by taking advantage of their parallel architecture to perform matrix computations much more efficiently than a Central Processor Unit (CPU). In order to evaluate the computational gains produced by a GPU, images with known translational shifts were utilized. A program was written in the Interactive Data Language (IDL; Exelis, Boulder, CO) to perform CPU-based calculations. Subsequently, the program was modified using GPU bindings (Tech-X, Boulder, CO) to perform GPU-based computation on the same system. Multiple image sizes were used, ranging from 256×256 to 2304×2304. The times required to complete the full algorithm on the CPU and GPU were benchmarked, and the speed increase was defined as the ratio of the CPU-to-GPU computational time. The ratio of the CPU-to-GPU time was greater than 1.0 for all images, which indicates the GPU is performing the algorithm faster than the CPU. The smallest improvement, a 1.21 ratio, was found with the smallest image size of 256×256, and the largest speedup, a 4.25 ratio, was observed with the largest image size of 2304×2304. GPU programming resulted in a significant decrease in computational time associated with an FT image registration algorithm. The inclusion of the GPU may provide near real-time, sub-pixel registration capability. © 2012 American Association of Physicists in Medicine.

  14. Model Development and Testing for THEMIS Controlled Mars Mosaics

    NASA Technical Reports Server (NTRS)

    Archinal, B. A.; Sides, S.; Weller, L.; Cushing, G.; Titus, T.; Kirk, R. L.; Soderblom, L. A.; Duxbury, T. C.

    2005-01-01

    As part of our work [1] to develop techniques and procedures to create regional and eventually global THEMIS mosaics of Mars, we are developing algorithms and software to photogrammetrically control THEMIS IR line scanner camera images. We have found from comparison of a limited number of images to MOLA digital image models (DIMs) [2] that the a priori geometry information (i.e. SPICE [3]) for THEMIS images generally allows their relative positions to be specified at the several pixel level (e.g., approximately 5 to 13 pixels). However, a need for controlled solutions to improve this geometry to the sub-pixel level still exists. Only with such solutions can seamless mosaics be obtained and likely distortion from spacecraft motion during image collection removed at such levels. Past experience has shown clearly that such mosaics are in heavy demand by users for operational and scientific use, and that they are needed over large areas or globally (as opposed to being available only on a limited basis via labor intensive custom mapping projects). Uses include spacecraft navigation, landing site planning and mapping, registration of multiple data types and image sets, registration of multispectral images, registration of images with topographic information, recovery of thermal properties, change detection searches, etc.

  15. Sensitivity of geographic information system outputs to errors in remotely sensed data

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.; Boyd, R. K.; Gunther, F. J.; Lu, Y. C.

    1981-01-01

    The sensitivity of the outputs of a geographic information system (GIS) to errors in inputs derived from remotely sensed data (RSD) is investigated using a suitability model with per-cell decisions and a gridded geographic data base whose cells are larger than the RSD pixels. The process of preparing RSD as input to a GIS is analyzed, and the errors associated with classification and registration are examined. In the case of the model considered, it is found that the errors caused during classification and registration are partially compensated by the aggregation of pixels. The compensation is quantified by means of an analytical model, a Monte Carlo simulation, and experiments with Landsat data. The results show that error reductions of the order of 50% occur because of aggregation when 25 pixels of RSD are used per cell in the geographic data base.
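
    The aggregation effect can be illustrated with a toy Monte Carlo simulation along the lines sketched below. Only the 25-pixels-per-cell figure comes from the record; the per-pixel error rate, class fraction, and symmetric-error model are placeholders, and the comparison of per-pixel error rate against cell-level RMSE is purely illustrative.

```python
# Toy Monte Carlo sketch of error compensation by aggregation. Only the
# 25-pixels-per-cell figure comes from the record; the error rate, class
# fraction, and symmetric-error model are placeholders.
import numpy as np

rng = np.random.default_rng(42)
n_cells, pixels_per_cell, p_error = 100_000, 25, 0.15
true_fraction = 0.6                       # proportion of the target class

# Per-pixel class labels with symmetric classification error.
truth = rng.random((n_cells, pixels_per_cell)) < true_fraction
flips = rng.random((n_cells, pixels_per_cell)) < p_error
observed = truth ^ flips

per_pixel_error = np.mean(observed != truth)
# Cell-level quantity: fraction of pixels assigned to the target class.
cell_estimate = observed.mean(axis=1)
cell_truth = truth.mean(axis=1)
cell_rmse = np.sqrt(np.mean((cell_estimate - cell_truth) ** 2))
print(f"per-pixel error rate:              {per_pixel_error:.3f}")
print(f"cell-level RMSE of class fraction: {cell_rmse:.3f}")
```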

  16. LANDSAT-4 image data quality analysis

    NASA Technical Reports Server (NTRS)

    Anuta, P. E. (Principal Investigator)

    1982-01-01

    Work done on evaluating the geometric and radiometric quality of early LANDSAT-4 sensor data is described. Band-to-band and channel-to-channel registration evaluations were carried out using a line correlator. Visual blink comparisons were run on an image display to observe band-to-band registration over 512 x 512 pixel blocks. The results indicate a 0.5 pixel line misregistration between the 1.55 to 1.75 and 2.08 to 2.35 micrometer bands and the first four bands. A misregistration of four 30-m lines and columns was also observed for the thermal IR band. Radiometric evaluation included mean and variance analysis of individual detectors and principal components analysis. Results indicate that detector bias for all bands is very close to or within tolerance. Bright spots were observed in the thermal IR band on an 18 line by 128 pixel grid. No explanation for this was pursued. The general overall quality of the TM was judged to be very high.

  17. Automatic Sub-Pixel Co-Registration of LandSat-8 OLI and Sentinel-2A MSI Images Using Phase Correlation and Machine Learning Based Mapping

    NASA Technical Reports Server (NTRS)

    Skakun, Sergii; Roger, Jean-Claude; Vermote, Eric F.; Masek, Jeffrey G.; Justice, Christopher O.

    2017-01-01

    This study investigates misregistration issues between Landsat-8/OLI and Sentinel-2A/MSI at 30 m resolution, and between multi-temporal Sentinel-2A images at 10 m resolution using a phase correlation approach and multiple transformation functions. Co-registration of 45 Landsat-8 to Sentinel-2A pairs and 37 Sentinel-2A to Sentinel-2A pairs were analyzed. Phase correlation proved to be a robust approach that allowed us to identify hundreds and thousands of control points on images acquired more than 100 days apart. Overall, misregistration of up to 1.6 pixels at 30 m resolution between Landsat-8 and Sentinel-2A images, and 1.2 pixels and 2.8 pixels at 10 m resolution between multi-temporal Sentinel-2A images from the same and different orbits, respectively, were observed. The non-linear Random Forest regression used for constructing the mapping function showed best results in terms of root mean square error (RMSE), yielding an average RMSE error of 0.07+/-0.02 pixels at 30 m resolution, and 0.09+/-0.05 and 0.15+/-0.06 pixels at 10 m resolution for the same and adjacent Sentinel-2A orbits, respectively, for multiple tiles and multiple conditions. A simpler 1st order polynomial function (affine transformation) yielded RMSE of 0.08+/-0.02 pixels at 30 m resolution and 0.12+/-0.06 (same Sentinel-2A orbits) and 0.20+/-0.09 (adjacent orbits) pixels at 10 m resolution.
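
    The tie-point measurement step can be sketched with a minimal phase correlation between two tiles, with parabolic sub-pixel refinement of the peak, as below. This is an illustrative implementation of the general approach named in the record, not the authors' processing chain; control-point filtering and the Random Forest or polynomial mapping functions are not shown.

```python
# Minimal phase-correlation sketch with parabolic sub-pixel refinement (an
# illustrative implementation, not the study's processing chain).
import numpy as np

def phase_correlation(a, b):
    """Return (dy, dx) such that b is approximately a translated by (dy, dx)."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = np.conj(A) * B
    R /= np.abs(R) + 1e-12                       # normalized cross-power spectrum
    corr = np.real(np.fft.ifft2(R))
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

    def refine(c, i, n):                         # parabolic peak refinement (wrapped)
        cm, c0, cp = c[(i - 1) % n], c[i], c[(i + 1) % n]
        d = cm - 2 * c0 + cp
        return 0.0 if d == 0 else 0.5 * (cm - cp) / d

    dy = iy + refine(corr[:, ix], iy, corr.shape[0])
    dx = ix + refine(corr[iy, :], ix, corr.shape[1])
    if dy > corr.shape[0] / 2:                   # map wrapped peaks to negative shifts
        dy -= corr.shape[0]
    if dx > corr.shape[1] / 2:
        dx -= corr.shape[1]
    return dy, dx

# Toy check with a circularly shifted copy; expect roughly (3, -7).
rng = np.random.default_rng(0)
img = rng.random((128, 128))
print(phase_correlation(img, np.roll(img, (3, -7), axis=(0, 1))))
```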

  18. Optimization of an on-board imaging system for extremely rapid radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cherry Kemmerling, Erica M.; Wu, Meng, E-mail: mengwu@stanford.edu; Yang, He

    2015-11-15

    Purpose: Next-generation extremely rapid radiation therapy systems could mitigate the need for motion management, improve patient comfort during the treatment, and increase patient throughput for cost effectiveness. Such systems require an on-board imaging system that is competitively priced, fast, and of sufficiently high quality to allow good registration between the image taken on the day of treatment and the image taken on the day of treatment planning. In this study, three different detectors for a custom on-board CT system were investigated to select the best design for integration with an extremely rapid radiation therapy system. Methods: Three different CT detectors are proposed: low-resolution (all 4 × 4 mm pixels), medium-resolution (a combination of 4 × 4 mm pixels and 2 × 2 mm pixels), and high-resolution (all 1 × 1 mm pixels). An in-house program was used to generate projection images of a numerical anthropomorphic phantom and to reconstruct the projections into CT datasets, henceforth called “realistic” images. Scatter was calculated using a separate Monte Carlo simulation, and the model included an antiscatter grid and bowtie filter. Diagnostic-quality images of the phantom were generated to represent the patient scan at the time of treatment planning. Commercial deformable registration software was used to register the diagnostic-quality scan to images produced by the various on-board detector configurations. The deformation fields were compared against a “gold standard” deformation field generated by registering initial and deformed images of the numerical phantoms that were used to make the diagnostic and treatment-day images. Registrations of on-board imaging system data were judged by the amount by which their deformation fields differed from the corresponding gold standard deformation fields: the smaller the difference, the better the system. To evaluate the registrations, the pointwise distance between gold standard and realistic registration deformation fields was computed. Results: By most global metrics (e.g., mean, median, and maximum pointwise distance), the high-resolution detector had the best performance but the medium-resolution detector was comparable. For all medium- and high-resolution detector registrations, mean error between the realistic and gold standard deformation fields was less than 4 mm. By pointwise metrics (e.g., tracking a small lesion), the high- and medium-resolution detectors performed similarly. For these detectors, the smallest error between the realistic and gold standard registrations was 0.6 mm and the largest error was 3.6 mm. Conclusions: The medium-resolution CT detector was selected as the best for an extremely rapid radiation therapy system. In essentially all test cases, data from this detector produced a significantly better registration than data from the low-resolution detector and a comparable registration to data from the high-resolution detector. The medium-resolution detector provides an appropriate compromise between registration accuracy and system cost.
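
    The evaluation metric described, the pointwise distance between a test registration's deformation field and the gold-standard field, can be sketched as follows; the array layout (per-voxel displacement vectors in millimetres) is an assumption.

    ```python
    import numpy as np

    def deformation_field_error(realistic, gold):
        """Pointwise Euclidean distance between two deformation fields.

        Both fields are assumed to be arrays of shape (nz, ny, nx, 3) holding
        per-voxel displacement vectors in millimetres.
        """
        dist = np.linalg.norm(realistic - gold, axis=-1)   # per-voxel error in mm
        return {"mean": float(dist.mean()),
                "median": float(np.median(dist)),
                "max": float(dist.max())}

    # Toy usage with random fields standing in for the registration outputs.
    rng = np.random.default_rng(0)
    gold = rng.normal(size=(32, 64, 64, 3))
    realistic = gold + rng.normal(scale=0.5, size=gold.shape)
    print(deformation_field_error(realistic, gold))
    ```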

  19. Scene-based nonuniformity correction with video sequences and registration.

    PubMed

    Hardie, R C; Hayat, M M; Armstrong, E; Yasuda, B

    2000-03-10

    We describe a new, to our knowledge, scene-based nonuniformity correction algorithm for array detectors. The algorithm relies on the ability to register a sequence of observed frames in the presence of the fixed-pattern noise caused by pixel-to-pixel nonuniformity. In low-to-moderate levels of nonuniformity, sufficiently accurate registration may be possible with standard scene-based registration techniques. If the registration is accurate, and motion exists between the frames, then groups of independent detectors can be identified that observe the same irradiance (or true scene value). These detector outputs are averaged to generate estimates of the true scene values. With these scene estimates, and the corresponding observed values through a given detector, a curve-fitting procedure is used to estimate the individual detector response parameters. These can then be used to correct for detector nonuniformity. The strength of the algorithm lies in its simplicity and low computational complexity. Experimental results, to illustrate the performance of the algorithm, include the use of visible-range imagery with simulated nonuniformity and infrared imagery with real nonuniformity.
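
    A minimal NumPy sketch of the algorithm's core idea follows: assuming the frame-to-frame shifts are already known (i.e., registration has succeeded), the true scene value at each location is estimated by averaging the detectors that observed it, and each detector's gain and offset are then recovered by a least-squares line fit. The array sizes, shift pattern, and linear detector model are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic example: a 64x64 scene panned under a 32x32 detector array with
    # fixed-pattern gain/offset nonuniformity. Shifts are assumed known (i.e.,
    # registration has already succeeded, as the algorithm requires).
    scene = rng.random((64, 64))
    gain = 1.0 + 0.1 * rng.standard_normal((32, 32))
    offset = 0.05 * rng.standard_normal((32, 32))
    shifts = [(r, c) for r in range(0, 16, 2) for c in range(0, 16, 2)]

    frames = [gain * scene[r:r + 32, c:c + 32] + offset for r, c in shifts]

    # Step 1: estimate the true irradiance at each scene location by averaging all
    # detector outputs that observed it (their gains/offsets roughly average out).
    acc = np.zeros_like(scene)
    cnt = np.zeros_like(scene)
    for (r, c), frame in zip(shifts, frames):
        acc[r:r + 32, c:c + 32] += frame
        cnt[r:r + 32, c:c + 32] += 1
    scene_est = np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)

    # Step 2: for each detector, a least-squares fit of observed output versus
    # estimated true irradiance gives per-detector gain and offset estimates.
    gain_est = np.zeros_like(gain)
    offset_est = np.zeros_like(offset)
    for i in range(32):
        for j in range(32):
            x = np.array([scene_est[r + i, c + j] for r, c in shifts])
            y = np.array([frame[i, j] for frame in frames])
            gain_est[i, j], offset_est[i, j] = np.polyfit(x, y, 1)

    corrected = (frames[0] - offset_est) / gain_est
    print("residual nonuniformity:", np.std(corrected - scene[0:32, 0:32]))
    ```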

  20. Mutual information registration of multi-spectral and multi-resolution images of DigitalGlobe's WorldView-3 imaging satellite

    NASA Astrophysics Data System (ADS)

    Miecznik, Grzegorz; Shafer, Jeff; Baugh, William M.; Bader, Brett; Karspeck, Milan; Pacifici, Fabio

    2017-05-01

    WorldView-3 (WV-3) is a DigitalGlobe commercial, high resolution, push-broom imaging satellite with three instruments: visible and near-infrared VNIR consisting of panchromatic (0.3m nadir GSD) plus multi-spectral (1.2m), short-wave infrared SWIR (3.7m), and multi-spectral CAVIS (30m). Nine VNIR bands, which are on one instrument, are nearly perfectly registered to each other, whereas eight SWIR bands, belonging to the second instrument, are misaligned with respect to VNIR and to each other. Geometric calibration and ortho-rectification results in a VNIR/SWIR alignment which is accurate to approximately 0.75 SWIR pixel at 3.7m GSD, whereas inter-SWIR, band to band registration is 0.3 SWIR pixel. Numerous high resolution, spectral applications, such as object classification and material identification, require more accurate registration, which can be achieved by utilizing image processing algorithms, for example Mutual Information (MI). Although MI-based co-registration algorithms are highly accurate, implementation details for automated processing can be challenging. One particular challenge is how to compute bin widths of intensity histograms, which are fundamental building blocks of MI. We solve this problem by making the bin widths proportional to instrument shot noise. Next, we show how to take advantage of multiple VNIR bands, and improve registration sensitivity to image alignment. To meet this goal, we employ Canonical Correlation Analysis, which maximizes VNIR/SWIR correlation through an optimal linear combination of VNIR bands. Finally we explore how to register images corresponding to different spatial resolutions. We show that MI computed at a low-resolution grid is more sensitive to alignment parameters than MI computed at a high-resolution grid. The proposed modifications allow us to improve VNIR/SWIR registration to better than ¼ of a SWIR pixel, as long as terrain elevation is properly accounted for, and clouds and water are masked out.
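
    The following sketch illustrates a mutual-information similarity measure with histogram bin widths tied to a supplied noise estimate, echoing the shot-noise-proportional binning described above; it is a simplified stand-in, not the authors' implementation, and the noise values and the factor k are assumptions.

    ```python
    import numpy as np

    def mutual_information(a, b, noise_a, noise_b, k=2.0):
        """Mutual information between two co-sited image bands.

        Bin widths are set proportional to the supplied noise standard deviations
        (k * sigma), echoing the idea of tying histogram resolution to shot noise.
        """
        bins_a = np.arange(a.min(), a.max() + k * noise_a, k * noise_a)
        bins_b = np.arange(b.min(), b.max() + k * noise_b, k * noise_b)
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=[bins_a, bins_b])
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    # Toy usage: MI of an image with a noisy, intensity-remapped copy is high at the
    # correct alignment and drops when the copy is shifted.
    rng = np.random.default_rng(0)
    vnir = rng.random((200, 200))
    swir = np.sqrt(vnir) + 0.02 * rng.standard_normal(vnir.shape)  # nonlinear mapping
    aligned = mutual_information(vnir, swir, 0.01, 0.02)
    shifted = mutual_information(vnir, np.roll(swir, 3, axis=1), 0.01, 0.02)
    print(aligned, ">", shifted)
    ```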

  1. Strengthening of local vital events registration: lessons learnt from a voluntary sector initiative in a district in southern India.

    PubMed

    Mony, Prem; Sankar, Kiruba; Thomas, Tinku; Vaz, Mario

    2011-05-01

    Birth and death registration rates are low in most parts of India. Poor registration rates are due to constraints in both the government system (supply-side) and the general population (demand-side). We strengthened vital event registration at the local level within the existing legal framework by: (i) involving a non-profit organization as an interface between the government and the community; (ii) conducting supply-side interventions such as sensitization workshops for government officials, training for hospital staff and building data-sharing partnerships between stakeholders; (iii) monitoring for vital events by active surveillance through lay-informants; and (iv) conducting demand-side interventions such as publicity campaigns, education of families and assistance with registration. In the government sector, registration is given low priority and there is an attitude of blaming the victim, ascribing low levels of vital event registration to "cultural reasons/ignorance". In the community, low registration was due to lack of awareness about the importance of and procedures for registration. This initiative helped improve registration of births and deaths at the subdistrict level. Vital event registration was significantly associated with local equity stratifiers such as gender, socioeconomic status and geography. The voluntary sector can interface effectively between the government and the community to strengthen vital registration. With political support from the government, outreach activities can dramatically improve vital event registration rates, especially in disadvantaged populations. The potential relevance of the data and the data collection process to stakeholders at the local level is a critical factor for success.

  2. A preliminary evaluation of LANDSAT-4 thematic mapper data for their geometric and radiometric accuracies

    NASA Technical Reports Server (NTRS)

    Podwysocki, M. H.; Bender, L. U.; Falcone, N.; Jones, O. D.

    1983-01-01

    Some LANDSAT thematic mapper data collected over the eastern United States were analyzed for their whole scene geometric accuracy, band to band registration and radiometric accuracy. Band ratio images were created for a part of one scene in order to assess the capability of mapping geologic units with contrasting spectral properties. Systematic errors were found in the geometric accuracy of whole scenes, part of which was attributable to the film writing device used to record the images to film. Band to band registration showed that bands 1 through 4 were registered to within one pixel. Likewise, bands 5 and 7 also were registered to within one pixel. However, bands 5 and 7 were misregistered with bands 1 through 4 by 1 to 2 pixels. Band 6 was misregistered by 4 pixels relative to bands 1 through 4. Radiometric analysis indicated two kinds of banding, a modulo-16 striping and an alternating light-dark pattern in groups of 16 scanlines. A color ratio composite image consisting of TM band ratios 3/4, 5/2, and 5/7 showed limonitic clay-rich soils, limonitic clay-poor soils, and nonlimonitic materials as distinctly different colors on the image.
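
    A band ratio composite of the kind described can be sketched in a few lines; the contrast stretch, the channel assignment of the three ratios, and the synthetic band arrays are illustrative assumptions.

    ```python
    import numpy as np

    def band_ratio(num, den, eps=1e-6):
        """Pixelwise band ratio, stretched to 0-255 for display."""
        ratio = num.astype(float) / (den.astype(float) + eps)
        lo, hi = np.percentile(ratio, (2, 98))        # simple contrast stretch
        return np.clip(255 * (ratio - lo) / (hi - lo), 0, 255).astype(np.uint8)

    # `tm` is assumed to be a dict of TM band arrays keyed by band number;
    # random data stands in for real reflectance here.
    rng = np.random.default_rng(0)
    tm = {b: rng.integers(1, 255, size=(512, 512), dtype=np.uint16) for b in (2, 3, 4, 5, 7)}

    composite = np.dstack([
        band_ratio(tm[3], tm[4]),   # assumed red   channel: TM 3/4
        band_ratio(tm[5], tm[2]),   # assumed green channel: TM 5/2
        band_ratio(tm[5], tm[7]),   # assumed blue  channel: TM 5/7
    ])
    print(composite.shape, composite.dtype)   # (512, 512, 3) uint8
    ```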

  3. Medical image registration based on normalized multidimensional mutual information

    NASA Astrophysics Data System (ADS)

    Li, Qi; Ji, Hongbing; Tong, Ming

    2009-10-01

    Registration of medical images is an essential research topic in medical image processing and applications, and especially a preliminary and key step for multimodality image fusion. This paper offers a solution to medical image registration based on normalized multi-dimensional mutual information. Firstly, an affine transformation with translational and rotational parameters is applied to the floating image. Then ordinal features are extracted by ordinal filters with different orientations to represent spatial information in medical images. Integrating ordinal features with pixel intensities, the normalized multi-dimensional mutual information is defined as the similarity criterion to register multimodality images. Finally, an immune algorithm is used to search for the registration parameters. The experimental results demonstrate the effectiveness of the proposed registration scheme.

  4. Image Registration: A Necessary Evil

    NASA Technical Reports Server (NTRS)

    Bell, James; McLachlan, Blair; Hermstad, Dexter; Trosin, Jeff; George, Michael W. (Technical Monitor)

    1995-01-01

    Registration of test and reference images is a key component of nearly all PSP data reduction techniques. This is done to ensure that a test image pixel viewing a particular point on the model is ratioed by the reference image pixel which views the same point. Typically, registration is needed to account for model motion due to differing airloads when the wind-off and wind-on images are taken. Registration is also necessary when two cameras are used for simultaneous acquisition of data from a dual-frequency paint. This presentation will discuss the advantages and disadvantages of several different image registration techniques. In order to do so, it is necessary to propose both an accuracy requirement for image registration and a means for measuring the accuracy of a particular technique. High contrast regions in the unregistered images are most sensitive to registration errors, and it is proposed that these regions be used to establish the error limits for registration. Once this is done, the actual registration error can be determined by locating corresponding points on the test and reference images, and determining how well a particular registration technique matches them. An example of this procedure is shown for three transforms used to register images of a semispan model. Thirty control points were located on the model. A subset of the points was used to determine the coefficients of each registration transform, and the error with which each transform aligned the remaining points was determined. The results indicate the general superiority of a third-order polynomial over other candidate transforms, as well as showing how registration accuracy varies with the number of control points. Finally, it is proposed that image registration may eventually be done away with completely. As more accurate image resection techniques and more detailed model surface grids become available, it will be possible to map raw image data onto the model surface accurately. Intensity ratio data can then be obtained by a "model surface ratio," rather than an image ratio. The problems and advantages of this technique will be discussed.
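
    The proposed accuracy check, fitting a registration transform on a subset of control points and measuring how well it aligns the remaining points, can be sketched as follows for a third-order polynomial transform; the control-point coordinates and the simulated model motion are assumptions.

    ```python
    import numpy as np

    def poly_terms(xy, order):
        """Design matrix of 2-D polynomial terms x**i * y**j with i + j <= order."""
        x, y = xy[:, 0], xy[:, 1]
        return np.column_stack([x**i * y**j
                                for i in range(order + 1)
                                for j in range(order + 1 - i)])

    def fit_poly_transform(src, dst, order=3):
        """Least-squares polynomial mapping from src (test) to dst (reference) points."""
        A = poly_terms(src, order)
        coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
        return lambda pts: poly_terms(pts, order) @ coeffs

    # Toy usage: 30 control points, a subset used for fitting, the rest for error.
    rng = np.random.default_rng(0)
    ref = rng.uniform(0, 100, size=(30, 2))
    test = ref * 1.01 + 3.0 + 0.5 * rng.standard_normal(ref.shape)  # assumed model motion

    fit_idx, eval_idx = np.arange(20), np.arange(20, 30)
    transform = fit_poly_transform(test[fit_idx], ref[fit_idx], order=3)
    residual = np.linalg.norm(transform(test[eval_idx]) - ref[eval_idx], axis=1)
    print("mean registration error on held-out points:", residual.mean(), "pixels")
    ```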

  5. Geometric registration of remotely sensed data with SAMIR

    NASA Astrophysics Data System (ADS)

    Gianinetto, Marco; Barazzetti, Luigi; Dini, Luigi; Fusiello, Andrea; Toldo, Roberto

    2015-06-01

    The commercial market offers several software packages for the registration of remotely sensed data through standard one-to-one image matching. Although very rapid and simple, this strategy does not take into consideration all the interconnections among the images of a multi-temporal data set. This paper presents a new scientific software package, called Satellite Automatic Multi-Image Registration (SAMIR), able to extend the traditional registration approach towards multi-image global processing. Tests carried out with high-resolution optical (IKONOS) and high-resolution radar (COSMO-SkyMed) data showed that SAMIR can improve the registration phase with a more rigorous and robust workflow without initial approximations, user interaction, or limitations on spatial/spectral data size. The validation highlighted sub-pixel accuracy in image co-registration for the considered imaging technologies, including optical and radar imagery.

  6. Nonrigid motion compensation in B-mode and contrast enhanced ultrasound image sequences of the carotid artery

    NASA Astrophysics Data System (ADS)

    Carvalho, Diego D. B.; Akkus, Zeynettin; Bosch, Johan G.; van den Oord, Stijn C. H.; Niessen, Wiro J.; Klein, Stefan

    2014-03-01

    In this work, we investigate nonrigid motion compensation in simultaneously acquired (side-by-side) B-mode ultrasound (BMUS) and contrast enhanced ultrasound (CEUS) image sequences of the carotid artery. These images are acquired to study the presence of intraplaque neovascularization (IPN), which is a marker of plaque vulnerability. IPN quantification is visualized by performing the maximum intensity projection (MIP) on the CEUS image sequence over time. As carotid images contain considerable motion, accurate global nonrigid motion compensation (GNMC) is required prior to the MIP. Moreover, we demonstrate that an improved lumen and plaque differentiation can be obtained by averaging the motion compensated BMUS images over time. We propose to use a previously published 2D+t nonrigid registration method, which is based on minimization of pixel intensity variance over time, using a spatially and temporally smooth B-spline deformation model. The validation compares displacements of plaque points with manual trackings by 3 experts in 11 carotids. The average (+/- standard deviation) root mean square error (RMSE) was 99+/-74μm for longitudinal and 47+/-18μm for radial displacements. These results were comparable with the interobserver variability, and with results of a local rigid registration technique based on speckle tracking, which estimates motion at a single point, whereas our approach applies motion compensation to the entire image. In conclusion, our evaluation showed that the GNMC technique produces reliable results. Since this technique tracks global deformations, it can aid in the quantification of IPN and the delineation of lumen and plaque contours.

  7. Co-registration of Laser Altimeter Tracks with Digital Terrain Models and Applications in Planetary Science

    NASA Technical Reports Server (NTRS)

    Glaeser, P.; Haase, I.; Oberst, J.; Neumann, G. A.

    2013-01-01

    We have derived algorithms and techniques to precisely co-register laser altimeter profiles with gridded Digital Terrain Models (DTMs), typically derived from stereo images. The algorithm consists of an initial grid search followed by a least-squares matching and yields the translation parameters at sub-pixel level needed to align the DTM and the laser profiles in 3D space. This software tool was primarily developed and tested for co-registration of laser profiles from the Lunar Orbiter Laser Altimeter (LOLA) with DTMs derived from the Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) stereo images. Data sets can be co-registered with positional accuracy between 0.13 m and several meters depending on the pixel resolution and number of laser shots, where rough surfaces typically result in more accurate co-registrations. Residual heights of the data sets are as small as 0.18 m. The software can be used to identify instrument misalignment, orbit errors, pointing jitter, or problems associated with reference frames being used. Also, assessments of DTM effective resolutions can be obtained. From the correct position between the two data sets, comparisons of surface morphology and roughness can be made at laser footprint- or DTM pixel-level. The precise co-registration allows us to carry out joint analysis of the data sets and ultimately to derive merged high-quality data products. Examples of matching other planetary data sets, like LOLA with LRO Wide Angle Camera (WAC) DTMs or Mars Orbiter Laser Altimeter (MOLA) with stereo models from the High Resolution Stereo Camera (HRSC) as well as Mercury Laser Altimeter (MLA) with Mercury Dual Imaging System (MDIS) are shown to demonstrate the broad science applications of the software tool.
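
    The matching idea, shifting the laser footprints over the DTM and choosing the offset that minimizes the height residuals, can be sketched with a coarse sub-pixel grid search; the authors' least-squares refinement stage is not reproduced, and the synthetic DTM and profile below are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def coregister_profile(dtm, px, py, pz, search=5, step=0.25):
        """Find the (dx, dy, dz) offset that best aligns a laser profile to a DTM.

        dtm    : 2-D height grid (pixel coordinates)
        px, py : footprint positions in DTM pixel coordinates
        pz     : footprint heights
        search : half-width of the horizontal search window, in pixels
        step   : grid-search step, in pixels (sub-pixel)
        """
        offsets = np.arange(-search, search + step, step)
        best = (np.inf, 0.0, 0.0, 0.0)
        for dx in offsets:
            for dy in offsets:
                h = map_coordinates(dtm, [py + dy, px + dx], order=1, mode="nearest")
                dz = np.mean(pz - h)                  # best vertical offset for this (dx, dy)
                rms = np.sqrt(np.mean((pz - h - dz) ** 2))
                if rms < best[0]:
                    best = (rms, dx, dy, dz)
        return best   # (residual RMS height, dx, dy, dz)

    # Toy usage: a synthetic rough DTM and a profile sampled from it with a known offset.
    rng = np.random.default_rng(0)
    dtm = np.cumsum(np.cumsum(rng.standard_normal((200, 200)), axis=0), axis=1) * 0.01
    px = np.linspace(20, 180, 300)
    py = np.linspace(30, 170, 300)
    true_dx, true_dy, true_dz = 1.75, -2.25, 0.4
    pz = map_coordinates(dtm, [py + true_dy, px + true_dx], order=1) + true_dz
    print(coregister_profile(dtm, px, py, pz))   # residual ~0, offsets ~ (1.75, -2.25, 0.4)
    ```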

  8. Method and System for Temporal Filtering in Video Compression Systems

    NASA Technical Reports Server (NTRS)

    Lu, Ligang; He, Drake; Jagmohan, Ashish; Sheinin, Vadim

    2011-01-01

    Three related innovations combine improved non-linear motion estimation, video coding, and video compression. The first system comprises a method in which side information is generated using an adaptive, non-linear motion model. This method enables extrapolating and interpolating a visual signal, including determining a first motion vector from a first pixel position in a first image to a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector from the first pixel position in the first image, the second pixel position in the second image, and the third pixel position in the third image using a non-linear model; and determining a position of the fourth pixel in a fourth image based upon the third motion vector. For the video compression element, the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a decoder. The encoder converts the source frame into a space-frequency representation, estimates the conditional statistics of at least one vector of space-frequency coefficients with similar frequencies, and is conditioned on previously encoded data. It estimates an encoding rate based on the conditional statistics and applies a Slepian-Wolf code with the computed encoding rate. The method for decoding includes generating a side-information vector of frequency coefficients based on previously decoded source data and encoder statistics and previous reconstructions of the source frequency vector. It also performs Slepian-Wolf decoding of a source frequency vector based on the generated side-information and the Slepian-Wolf code bits. The video coding element includes receiving a first reference frame having a first pixel value at a first pixel position, a second reference frame having a second pixel value at a second pixel position, and a third reference frame having a third pixel value at a third pixel position. It determines a first motion vector between the first pixel position and the second pixel position, a second motion vector between the second pixel position and the third pixel position, and a fourth pixel value for a fourth frame based upon a linear or nonlinear combination of the first pixel value, the second pixel value, and the third pixel value. A stationary filtering process determines the estimated pixel values. The parameters of the filter may be predetermined constants.

  9. Robust image registration for multiple exposure high dynamic range image synthesis

    NASA Astrophysics Data System (ADS)

    Yao, Susu

    2011-03-01

    Image registration is an important preprocessing technique in high dynamic range (HDR) image synthesis. This paper proposes a robust image registration method for aligning a group of low dynamic range (LDR) images that are captured with different exposure times. Illumination change and photometric distortion between two images would result in inaccurate registration. We propose to transform intensity image data into phase congruency to eliminate the effect of the changes in image brightness and use phase cross correlation in the Fourier transform domain to perform image registration. Considering the presence of non-overlapped regions due to photometric distortion, evolutionary programming is applied to search for the accurate translation parameters so that registration accuracy at the level of a hundredth of a pixel can be achieved. The proposed algorithm works well for under- and over-exposed image registration. It has been applied to align LDR images for synthesizing high quality HDR images.

  10. Image registration method for medical image sequences

    DOEpatents

    Gee, Timothy F.; Goddard, James S.

    2013-03-26

    Image registration of low contrast image sequences is provided. In one aspect, a desired region of an image is automatically segmented and only the desired region is registered. Active contours and adaptive thresholding of intensity or edge information may be used to segment the desired regions. A transform function is defined to register the segmented region, and sub-pixel information may be determined using one or more interpolation methods.

  11. DeepSkeleton: Learning Multi-Task Scale-Associated Deep Side Outputs for Object Skeleton Extraction in Natural Images

    NASA Astrophysics Data System (ADS)

    Shen, Wei; Zhao, Kai; Jiang, Yuan; Wang, Yan; Bai, Xiang; Yuille, Alan

    2017-11-01

    Object skeletons are useful for object representation and object detection. They are complementary to the object contour, and provide extra information, such as how object scale (thickness) varies among object parts. But object skeleton extraction from natural images is very challenging, because it requires the extractor to be able to capture both local and non-local image context in order to determine the scale of each skeleton pixel. In this paper, we present a novel fully convolutional network with multiple scale-associated side outputs to address this problem. By observing the relationship between the receptive field sizes of the different layers in the network and the skeleton scales they can capture, we introduce two scale-associated side outputs to each stage of the network. The network is trained by multi-task learning, where one task is skeleton localization to classify whether a pixel is a skeleton pixel or not, and the other is skeleton scale prediction to regress the scale of each skeleton pixel. Supervision is imposed at different stages by guiding the scale-associated side outputs toward the groundtruth skeletons at the appropriate scales. The responses of the multiple scale-associated side outputs are then fused in a scale-specific way to detect skeleton pixels using multiple scales effectively. Our method achieves promising results on two skeleton extraction datasets, and significantly outperforms other competitors. Additionally, the usefulness of the obtained skeletons and scales (thickness) is verified on two object detection applications: foreground object segmentation and object proposal detection.

  12. Multiple-Event, Single-Photon Counting Imaging Sensor

    NASA Technical Reports Server (NTRS)

    Zheng, Xinyu; Cunningham, Thomas J.; Sun, Chao; Wang, Kang L.

    2011-01-01

    The single-photon counting imaging sensor is typically an array of silicon Geiger-mode avalanche photodiodes that are monolithically integrated with CMOS (complementary metal oxide semiconductor) readout, signal processing, and addressing circuits located in each pixel and the peripheral area of the chip. The major problem is its single-event method for photon count number registration. A single-event single-photon counting imaging array only allows registration of up to one photon count in each of its pixels during a frame time, i.e., the interval between two successive pixel reset operations. Since the frame time cannot be too short, this will lead to very low dynamic range and make the sensor merely useful for very low flux environments. The second problem of the prior technique is a limited fill factor resulting from consumption of chip area by the monolithically integrated CMOS readout in pixels. The resulting low photon collection efficiency will substantially ruin any benefit gained from the very sensitive single-photon counting detection. The single-photon counting imaging sensor developed in this work has a novel multiple-event architecture, which allows each of its pixels to register as many as one million (or more) photon-counting events during a frame time. Because of a consequently boosted dynamic range, the imaging array of the invention is capable of performing single-photon counting under ultra-low light through high-flux environments. On the other hand, since the multiple-event architecture is implemented in a hybrid structure, back-illumination and close-to-unity fill factor can be realized, and maximized quantum efficiency can also be achieved in the detector array.

  13. Automated Geo/Co-Registration of Multi-Temporal Very-High-Resolution Imagery.

    PubMed

    Han, Youkyung; Oh, Jaehong

    2018-05-17

    For time-series analysis using very-high-resolution (VHR) multi-temporal satellite images, both accurate georegistration to the map coordinates and subpixel-level co-registration among the images should be conducted. However, applying well-known matching methods, such as scale-invariant feature transform and speeded up robust features for VHR multi-temporal images, has limitations. First, they cannot be used for matching an optical image to heterogeneous non-optical data for georegistration. Second, they produce a local misalignment induced by differences in acquisition conditions, such as acquisition platform stability, the sensor's off-nadir angle, and relief displacement of the considered scene. Therefore, this study addresses the problem by proposing an automated geo/co-registration framework for full-scene multi-temporal images acquired from a VHR optical satellite sensor. The proposed method comprises two primary steps: (1) a global georegistration process, followed by (2) a fine co-registration process. During the first step, two-dimensional multi-temporal satellite images are matched to three-dimensional topographic maps to assign the map coordinates. During the second step, a local analysis of registration noise pixels extracted between the multi-temporal images that have been mapped to the map coordinates is conducted to extract a large number of well-distributed corresponding points (CPs). The CPs are finally used to construct a non-rigid transformation function that enables minimization of the local misalignment existing among the images. Experiments conducted on five Kompsat-3 full scenes confirmed the effectiveness of the proposed framework, showing that the georegistration performance resulted in an approximately pixel-level accuracy for most of the scenes, and the co-registration performance further improved the results among all combinations of the georegistered Kompsat-3 image pairs by increasing the calculated cross-correlation values.

  14. Registration of retinal sequences from new video-ophthalmoscopic camera.

    PubMed

    Kolar, Radim; Tornow, Ralf P; Odstrcilik, Jan; Liberdova, Ivana

    2016-05-20

    Analysis of fast temporal changes on retinas has become an important part of diagnostic video-ophthalmology. It enables investigation of the hemodynamic processes in retinal tissue, e.g. blood-vessel diameter changes as a result of blood-pressure variation, spontaneous venous pulsation influenced by intracranial-intraocular pressure difference, blood-volume changes as a result of changes in light reflection from retinal tissue, and blood flow using laser speckle contrast imaging. For such applications, image registration of the recorded sequence must be performed. Here we use a new non-mydriatic video-ophthalmoscope for simple and fast acquisition of low SNR retinal sequences. We introduce a novel, two-step approach for fast image registration. The phase correlation in the first stage removes large eye movements. Lucas-Kanade tracking in the second stage removes small eye movements. We propose robust adaptive selection of the tracking points, which is the most important part of tracking-based approaches. We also describe a method for quantitative evaluation of the registration results, based on vascular tree intensity profiles. The achieved registration error evaluated on 23 sequences (5840 frames) is 0.78 ± 0.67 pixels inside the optic disc and 1.39 ± 0.63 pixels outside the optic disc. We compared the results with the commonly used approaches based on Lucas-Kanade tracking and scale-invariant feature transform, which achieved worse results. The proposed method can efficiently correct particular frames of retinal sequences for shift and rotation. The registration results for each frame (shift in X and Y direction and eye rotation) can also be used for eye-movement evaluation during single-spot fixation tasks.
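
    A two-stage scheme in this spirit, coarse alignment by phase correlation followed by Lucas-Kanade refinement, can be sketched with OpenCV as below; the corner-based point selection stands in for the paper's adaptive point selection, and the frame sizes, parameters, and use of cv2.phaseCorrelate are assumptions, not the authors' implementation.

    ```python
    import cv2
    import numpy as np

    def register_frame(ref, frame):
        """Two-stage shift estimate: phase correlation, then Lucas-Kanade refinement.

        ref, frame : uint8 grayscale frames of the same size.
        Returns the estimated (dx, dy) translation of `frame` relative to `ref`.
        Sign conventions follow OpenCV; verify against your acquisition geometry.
        """
        # Stage 1: coarse sub-pixel shift from phase correlation.
        (dx, dy), _ = cv2.phaseCorrelate(np.float32(ref), np.float32(frame))

        # Pre-align the frame with the coarse shift before fine tracking.
        M = np.float32([[1, 0, -dx], [0, 1, -dy]])
        coarse = cv2.warpAffine(frame, M, (frame.shape[1], frame.shape[0]))

        # Stage 2: track corner-like points (stand-in for adaptive point selection)
        # with pyramidal Lucas-Kanade and take the median residual motion.
        pts = cv2.goodFeaturesToTrack(ref, maxCorners=200, qualityLevel=0.01, minDistance=10)
        if pts is None:
            return np.array([dx, dy])
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(ref, coarse, pts, None)
        good = status.ravel() == 1
        residual = np.median((nxt[good] - pts[good]).reshape(-1, 2), axis=0)
        return np.array([dx, dy]) + residual

    # Toy usage: a frame shifted by a known amount is recovered.
    rng = np.random.default_rng(0)
    ref = cv2.GaussianBlur((rng.random((240, 320)) * 255).astype(np.uint8), (0, 0), 3)
    M_true = np.float32([[1, 0, 4.0], [0, 1, -2.0]])
    frame = cv2.warpAffine(ref, M_true, (320, 240))
    print(register_frame(ref, frame))   # approximately [4, -2]
    ```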

  15. ViCAR: An Adaptive and Landmark-Free Registration of Time Lapse Image Data from Microfluidics Experiments

    PubMed Central

    Hattab, Georges; Schlüter, Jan-Philip; Becker, Anke; Nattkemper, Tim W.

    2017-01-01

    In order to understand gene function in bacterial life cycles, time lapse bioimaging is applied in combination with different marker protocols in so called microfluidics chambers (i.e., a multi-well plate). In one experiment, a series of T images is recorded for one visual field, with a pixel resolution of 60 nm/px. Any (semi-)automatic analysis of the data is hampered by strong image noise, low contrast and, last but not least, considerable irregular shifts during the acquisition. Image registration corrects such shifts enabling next steps of the analysis (e.g., feature extraction or tracking). Image alignment faces two obstacles in this microscopic context: (a) highly dynamic structural changes in the sample (i.e., colony growth) and (b) an individual data set-specific sample environment which makes the application of landmarks-based alignments almost impossible. We present a computational image registration solution, which we refer to as ViCAR: (Vi)sual (C)ues based (A)daptive (R)egistration, for such microfluidics experiments, consisting of (1) the detection of particular polygons (outlined and segmented ones, referred to as visual cues), (2) the adaptive retrieval of three coordinates throughout different sets of frames, and finally (3) an image registration based on the relation of these points correcting both rotation and translation. We tested ViCAR with different data sets and have found that it provides an effective spatial alignment thereby paving the way to extract temporal features pertinent to each resulting bacterial colony. By using ViCAR, we achieved an image registration with 99.9% image closeness, based on an average RMSD of 4 × 10^-2 pixels, and superior results compared to a state of the art algorithm. PMID:28620411

  16. Modeling misregistration and related effects on multispectral classification

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1981-01-01

    The effects of misregistration on the multispectral classification accuracy when the scene registration accuracy is relaxed from 0.3 to 0.5 pixel are investigated. Noise, class separability, spatial transient response, and field size are considered simultaneously with misregistration in their effects on accuracy. Any noise due to the scene, sensor, or to the analog/digital conversion, causes a finite fraction of the measurements to fall outside of the classification limits, even within nominally uniform fields. Misregistration causes field borders in a given band or set of bands to be closer than expected to a given pixel, causing additional pixels to be misclassified due to the mixture of materials in the pixel. Simplified first order models of the various effects are presented, and are used to estimate the performance to be expected.

  17. Longitudinal Analysis of Mouse SDOCT Volumes

    PubMed Central

    Antony, Bhavna J.; Carass, Aaron; Lang, Andrew; Kim, Byung-Jin; Zack, Donald J.; Prince, Jerry L.

    2017-01-01

    Spectral-domain optical coherence tomography (SDOCT), in addition to its routine clinical use in the diagnosis of ocular diseases, has begun to find increasing use in animal studies. Animal models are frequently used to study disease mechanisms as well as to test drug efficacy. In particular, SDOCT provides the ability to study animals longitudinally and non-invasively over long periods of time. However, the lack of anatomical landmarks makes the longitudinal scan acquisition prone to inconsistencies in orientation. Here, we propose a method for the automated registration of mouse SDOCT volumes. The method begins by accurately segmenting the blood vessels and the optic nerve head region in the scans using a pixel classification approach. The segmented vessel maps from follow-up scans were registered using an iterative closest point (ICP) algorithm to the baseline scan to allow for the accurate longitudinal tracking of thickness changes. Eighteen SDOCT volumes from a light damage model study were used to train a random forest utilized in the pixel classification step. The area under the curve (AUC) in a leave-one-out study for the retinal blood vessels and the optic nerve head (ONH) was found to be 0.93 and 0.98, respectively. The complete proposed framework, the retinal vasculature segmentation and the ICP registration, was applied to a secondary set of scans obtained from a light damage model. A qualitative assessment of the registration showed no registration failures. PMID:29138527

  18. Longitudinal analysis of mouse SDOCT volumes

    NASA Astrophysics Data System (ADS)

    Antony, Bhavna J.; Carass, Aaron; Lang, Andrew; Kim, Byung-Jin; Zack, Donald J.; Prince, Jerry L.

    2017-03-01

    Spectral-domain optical coherence tomography (SDOCT), in addition to its routine clinical use in the diagnosis of ocular diseases, has begun to find increasing use in animal studies. Animal models are frequently used to study disease mechanisms as well as to test drug efficacy. In particular, SDOCT provides the ability to study animals longitudinally and non-invasively over long periods of time. However, the lack of anatomical landmarks makes the longitudinal scan acquisition prone to inconsistencies in orientation. Here, we propose a method for the automated registration of mouse SDOCT volumes. The method begins by accurately segmenting the blood vessels and the optic nerve head region in the scans using a pixel classification approach. The segmented vessel maps from follow-up scans were registered using an iterative closest point (ICP) algorithm to the baseline scan to allow for the accurate longitudinal tracking of thickness changes. Eighteen SDOCT volumes from a light damage model study were used to train a random forest utilized in the pixel classification step. The area under the curve (AUC) in a leave-one-out study for the retinal blood vessels and the optic nerve head (ONH) was found to be 0.93 and 0.98, respectively. The complete proposed framework, the retinal vasculature segmentation and the ICP registration, was applied to a secondary set of scans obtained from a light damage model. A qualitative assessment of the registration showed no registration failures.

  19. An efficient direct method for image registration of flat objects

    NASA Astrophysics Data System (ADS)

    Nikolaev, Dmitry; Tihonkih, Dmitrii; Makovetskii, Artyom; Voronin, Sergei

    2017-09-01

    Image alignment of rigid surfaces is a rapidly developing area of research and has many practical applications. Alignment methods can be roughly divided into two types: feature-based methods and direct methods. The well-known SURF and SIFT algorithms are examples of feature-based methods. Direct methods are those that exploit the pixel intensities without resorting to image features; image-based deformation is a general direct method for aligning images of deformable objects in 3D space. Nevertheless, it is not well suited to the registration of images of 3D rigid objects, since the underlying structure cannot be directly evaluated. In this article, we propose a model that is suitable for image alignment of rigid flat objects under various illumination models. The brightness consistency assumption is used for reconstruction of the optimal geometrical transformation. Computer simulation results are provided to illustrate the performance of the proposed algorithm for computing a correspondence between the pixels of two images.
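
    A minimal example of a direct method follows: an exhaustive search over integer translations that minimizes the sum of squared brightness differences (brightness constancy), in contrast to feature-based matching. It is a stand-in for illustration only and does not include the illumination models or the geometric model proposed in the paper.

    ```python
    import numpy as np

    def direct_translation(ref, tgt, search=8):
        """Direct alignment: find the integer (dy, dx) minimizing the SSD of pixel
        intensities between `ref` and the shifted `tgt` (brightness constancy)."""
        best, best_shift = np.inf, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                shifted = np.roll(tgt, (dy, dx), axis=(0, 1))
                # Score only the central region to ignore wrapped borders.
                core = (slice(search, -search), slice(search, -search))
                ssd = np.mean((ref[core] - shifted[core]) ** 2)
                if ssd < best:
                    best, best_shift = ssd, (dy, dx)
        return best_shift

    rng = np.random.default_rng(0)
    ref = rng.random((128, 128))
    tgt = np.roll(ref, (-3, 5), axis=(0, 1)) + 0.01 * rng.standard_normal(ref.shape)
    print(direct_translation(ref, tgt))   # (3, -5): the shift that maps tgt back onto ref
    ```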

  20. High throughput dual-wavelength temperature distribution imaging via compressive imaging

    NASA Astrophysics Data System (ADS)

    Yao, Xu-Ri; Lan, Ruo-Ming; Liu, Xue-Feng; Zhu, Ge; Zheng, Fu; Yu, Wen-Kai; Zhai, Guang-Jie

    2018-03-01

    Thermal imaging is an essential tool in a wide variety of research areas. In this work we demonstrate high-throughput dual-wavelength temperature distribution imaging using a modified single-pixel camera without the requirement of a beam splitter (BS). A digital micro-mirror device (DMD) is utilized to display binary masks and split the incident radiation, which eliminates the necessity of a BS. Because the spatial resolution is dictated by the DMD, this thermal imaging system has the advantage of perfect spatial registration between the two images, which limits the need for pixel registration and fine adjustments. Two bucket detectors, which measure the total light intensity reflected from the DMD, are employed in this system and yield an improvement in the detection efficiency of the narrow-band radiation. A compressive imaging algorithm is utilized to achieve under-sampling recovery. A proof-of-principle experiment is presented to demonstrate the feasibility of this structure.

  1. Super resolution for astronomical observations

    NASA Astrophysics Data System (ADS)

    Li, Zhan; Peng, Qingyu; Bhanu, Bir; Zhang, Qingfeng; He, Haifeng

    2018-05-01

    In order to obtain detailed information from multiple telescope observations, a general blind super-resolution (SR) reconstruction approach for astronomical images is proposed in this paper. A pixel-reliability-based SR reconstruction algorithm is described and implemented, where the developed process incorporates flat field correction, automatic star searching and centering, iterative star matching, and sub-pixel image registration. Images captured by the 1-m telescope at Yunnan Observatory are used to test the proposed technique. The results of these experiments indicate that, following SR reconstruction, faint stars are more distinct, bright stars have sharper profiles, and the backgrounds have higher details; these results benefit from the high-precision star centering and image registration provided by the developed method. Application of the proposed approach not only provides more opportunities for new discoveries from astronomical image sequences, but will also contribute to enhancing the capabilities of most space- or ground-based telescopes.

  2. An accelerated image matching technique for UAV orthoimage registration

    NASA Astrophysics Data System (ADS)

    Tsai, Chung-Hsien; Lin, Yu-Ching

    2017-06-01

    Using an Unmanned Aerial Vehicle (UAV) drone with an attached non-metric camera has become a popular low-cost approach for collecting geospatial data. A well-georeferenced orthoimage is a fundamental product for geomatics professionals. To achieve high positioning accuracy of orthoimages, precise sensor position and orientation data, or a number of ground control points (GCPs), are often required. Alternatively, image registration is a solution for improving the accuracy of a UAV orthoimage, as long as a historical reference image is available. This study proposes a registration scheme, including an Accelerated Binary Robust Invariant Scalable Keypoints (ABRISK) algorithm and spatial analysis of corresponding control points for image registration. To determine a match between two input images, feature descriptors from one image are compared with those from another image. A "Sorting Ring" is used to filter out incorrect feature pairs as early as possible in the stage of matching feature points, to speed up the matching process. The results demonstrate that the proposed ABRISK approach outperforms the vector-based Scale Invariant Feature Transform (SIFT) approach where radiometric variations exist. ABRISK is 19.2 times and 312 times faster than SIFT for image sizes of 1000 × 1000 pixels and 4000 × 4000 pixels, respectively. ABRISK is 4.7 times faster than Binary Robust Invariant Scalable Keypoints (BRISK). Furthermore, the positional accuracy of the UAV orthoimage after applying the proposed image registration scheme is improved by an average root mean square error (RMSE) of 2.58 m for six test orthoimages whose spatial resolutions vary from 6.7 cm to 10.7 cm.

  3. Side by Side Views of a Dark Hill

    NASA Image and Video Library

    2011-09-02

    NASA's Dawn spacecraft obtained these side-by-side views of a dark hill on the surface of asteroid Vesta with its framing camera on August 19, 2011. The images have a resolution of about 260 meters per pixel.

  4. Robust video super-resolution with registration efficiency adaptation

    NASA Astrophysics Data System (ADS)

    Zhang, Xinfeng; Xiong, Ruiqin; Ma, Siwei; Zhang, Li; Gao, Wen

    2010-07-01

    Super-Resolution (SR) is a technique to construct a high-resolution (HR) frame by fusing a group of low-resolution (LR) frames describing the same scene. The effectiveness of conventional super-resolution techniques, when applied to video sequences, strongly relies on the efficiency of the motion alignment achieved by image registration. Unfortunately, such efficiency is limited by the motion complexity in the video and the capability of the adopted motion model. In image regions with severe registration errors, annoying artifacts usually appear in the produced super-resolution video. This paper proposes a robust video super-resolution technique that adapts itself to the spatially-varying registration efficiency. The reliability of each reference pixel is measured by the corresponding registration error and incorporated into the optimization objective function of SR reconstruction. This makes the SR reconstruction highly immune to registration errors, as outliers with higher registration errors are assigned lower weights in the objective function. In particular, we carefully design a mechanism to assign weights according to registration errors. The proposed super-resolution scheme has been tested with various video sequences, and experimental results clearly demonstrate the effectiveness of the proposed method.
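
    The weighting idea can be sketched as a registration-error-weighted fusion of already-aligned frames, where pixels with large registration error contribute little; this simple weighted average stands in for the full SR objective, and the Gaussian weight function and error maps are assumptions.

    ```python
    import numpy as np

    def weighted_fusion(aligned_frames, registration_errors, sigma=1.0):
        """Fuse registered frames with weights that decay with registration error.

        aligned_frames      : list of HxW arrays already warped onto the target grid
        registration_errors : list of HxW arrays of per-pixel registration error
        sigma               : error scale controlling how fast the weights fall off
        """
        num = np.zeros_like(aligned_frames[0], dtype=float)
        den = np.zeros_like(aligned_frames[0], dtype=float)
        for frame, err in zip(aligned_frames, registration_errors):
            w = np.exp(-(err / sigma) ** 2)   # low weight where registration is poor
            num += w * frame
            den += w
        return num / np.maximum(den, 1e-12)

    # Toy usage: one of three "registered" frames is badly misaligned in a region;
    # its inflated error map down-weights it there, keeping the fusion clean.
    rng = np.random.default_rng(0)
    truth = rng.random((64, 64))
    frames = [truth + 0.05 * rng.standard_normal(truth.shape) for _ in range(3)]
    errors = [np.zeros_like(truth) for _ in range(3)]
    frames[2][:, 32:] = np.roll(frames[2][:, 32:], 4, axis=0)   # simulated misalignment
    errors[2][:, 32:] = 5.0                                     # large registration error there
    fused = weighted_fusion(frames, errors)
    print(np.abs(fused - truth).mean())
    ```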

  5. 14 CFR 47.41 - Duration and return of Certificate.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... AIRCRAFT REGISTRATION Certificates of Aircraft Registration § 47.41 Duration and return of Certificate. (a) Each Certificate of Aircraft Registration issued by the FAA under this subpart is effective, unless... Certificate of Aircraft Registration, with the reverse side completed, must be returned to the FAA Aircraft...

  6. An effective non-rigid registration approach for ultrasound image based on "demons" algorithm.

    PubMed

    Liu, Yan; Cheng, H D; Huang, Jianhua; Zhang, Yingtao; Tang, Xianglong; Tian, Jiawei

    2013-06-01

    Medical image registration is an important component of computer-aided diagnosis systems in diagnostics, therapy planning, and guidance of surgery. Because of its low signal/noise ratio (SNR), ultrasound (US) image registration is a difficult task. In this paper, a fully automatic non-rigid image registration algorithm based on the demons algorithm is proposed for registration of ultrasound images. In the proposed method, an "inertia force" derived from the local motion trend of pixels in a Moore neighborhood system is produced and integrated into the optical flow equation to estimate the demons force, which is helpful for handling the speckle noise and preserving the geometric continuity of US images. In the experiment, a series of US images and several similarity measure metrics are utilized for evaluating the performance. The experimental results demonstrate that the proposed method can register ultrasound images efficiently, robustly against noise, quickly, and automatically.
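
    For reference, one iteration of the classical demons update (Thirion's force), which the proposed method extends with the "inertia force", can be sketched as follows; the inertia-force term itself is not reproduced, and the smoothing strength and iteration count are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def demons_step(fixed, moving, disp, sigma=2.0):
        """One iteration of the classical demons update (Thirion's force).

        disp : current displacement field, shape (2, H, W) in (row, col) order.
        The paper's "inertia force" extension is not included here.
        """
        rows, cols = np.meshgrid(np.arange(fixed.shape[0]),
                                 np.arange(fixed.shape[1]), indexing="ij")
        warped = map_coordinates(moving, [rows + disp[0], cols + disp[1]],
                                 order=1, mode="nearest")

        diff = warped - fixed
        gy, gx = np.gradient(fixed)
        denom = gy**2 + gx**2 + diff**2
        denom[denom == 0] = 1.0
        update = np.stack([-diff * gy / denom, -diff * gx / denom])

        disp = disp + update
        return gaussian_filter(disp, sigma=(0, sigma, sigma))   # regularize the field

    # Toy usage: recover a small synthetic shift between two smooth images.
    rng = np.random.default_rng(0)
    fixed = gaussian_filter(rng.random((128, 128)), 4)
    moving = np.roll(fixed, (2, -3), axis=(0, 1))
    disp = np.zeros((2, 128, 128))
    for _ in range(50):
        disp = demons_step(fixed, moving, disp)
    print(disp[0].mean(), disp[1].mean())   # roughly the shift mapping moving onto fixed
    ```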

  7. MREG V1.1 : a multi-scale image registration algorithm for SAR applications.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eichel, Paul H.

    2013-08-01

    MREG V1.1 is the sixth generation SAR image registration algorithm developed by the Signal Processing & Technology Department for Synthetic Aperture Radar applications. Like its predecessor algorithm REGI, it employs a powerful iterative multi-scale paradigm to achieve the competing goals of sub-pixel registration accuracy and the ability to handle large initial offsets. Since it is not model based, it allows for high fidelity tracking of spatially varying terrain-induced misregistration. Since it does not rely on image domain phase, it is equally adept at coherent and noncoherent image registration. This document provides a brief history of the registration processors developed by Dept. 5962 leading up to MREG V1.1, a full description of the signal processing steps involved in the algorithm, and a user's manual with application specific recommendations for CCD, TwoColor MultiView, and SAR stereoscopy.

  8. 14 CFR 47.41 - Duration and return of Certificate.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... AIRCRAFT REGISTRATION Certificates of Aircraft Registration § 47.41 Duration and return of Certificate. (a) Each Certificate of Aircraft Registration, AC Form 8050-3, issued by the FAA under this subpart is... Certificate of Aircraft Registration, with the reverse side completed, must be returned to the Registry— (1...

  9. 14 CFR 47.41 - Duration and return of Certificate.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... AIRCRAFT REGISTRATION Certificates of Aircraft Registration § 47.41 Duration and return of Certificate. (a) Each Certificate of Aircraft Registration, AC Form 8050-3, issued by the FAA under this subpart is... Certificate of Aircraft Registration, with the reverse side completed, must be returned to the Registry— (1...

  10. 14 CFR 47.41 - Duration and return of Certificate.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AIRCRAFT REGISTRATION Certificates of Aircraft Registration § 47.41 Duration and return of Certificate. (a) Each Certificate of Aircraft Registration, AC Form 8050-3, issued by the FAA under this subpart is... Certificate of Aircraft Registration, with the reverse side completed, must be returned to the Registry— (1...

  11. 14 CFR 47.41 - Duration and return of Certificate.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... AIRCRAFT REGISTRATION Certificates of Aircraft Registration § 47.41 Duration and return of Certificate. (a) Each Certificate of Aircraft Registration, AC Form 8050-3, issued by the FAA under this subpart is... Certificate of Aircraft Registration, with the reverse side completed, must be returned to the Registry— (1...

  12. A translational registration system for LANDSAT image segments

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Erthal, G. J.; Velasco, F. R. D.; Mascarenhas, N. D. D.

    1983-01-01

    The use of satellite images obtained on various dates is essential for crop forecast systems. In order to make a multitemporal analysis possible, it is necessary that images belonging to each acquisition have pixel-wise correspondence. A system developed to obtain, register and record image segments from LANDSAT images on computer compatible tapes is described. The translational registration of the segments is performed by correlating image edges in different acquisitions. The system was constructed for the Burroughs B6800 computer in the ALGOL language.

  13. Automatic Marker-free Longitudinal Infrared Image Registration by Shape Context Based Matching and Competitive Winner-guided Optimal Corresponding

    PubMed Central

    Lee, Chia-Yen; Wang, Hao-Jen; Lai, Jhih-Hao; Chang, Yeun-Chung; Huang, Chiun-Sheng

    2017-01-01

    Long-term comparisons of infrared images can facilitate the assessment of breast cancer tissue growth and early tumor detection, in which longitudinal infrared image registration is a necessary step. However, it is hard to keep markers attached to a body surface for weeks, and rather difficult to detect anatomic fiducial markers and match them in the infrared image during the registration process. The proposed automatic longitudinal infrared registration algorithm develops an automatic vascular intersection detection method and establishes feature descriptors by shape context to achieve robust matching, as well as to obtain control points for the deformation model. In addition, a competitive winner-guided mechanism is developed for optimal correspondence. The proposed algorithm is evaluated in two ways. Results show that the algorithm can quickly lead to accurate image registration and that its effectiveness is superior to manual registration, with a mean error of 0.91 pixels. These findings demonstrate that the proposed registration algorithm is reasonably accurate and provide a novel method of extracting a greater amount of useful data from infrared images. PMID:28145474

  14. Fast DRR generation for 2D to 3D registration on GPUs.

    PubMed

    Tornai, Gábor János; Cserey, György; Pappas, Ion

    2012-08-01

    The generation of digitally reconstructed radiographs (DRRs) is the most time consuming step on the CPU in intensity based two-dimensional x-ray to three-dimensional (CT or 3D rotational x-ray) medical image registration, which has application in several image guided interventions. This work presents optimized DRR rendering on graphical processor units (GPUs) and compares the performance achievable on four commercially available devices. A ray-cast based DRR rendering was implemented for a 512 × 512 × 72 CT volume. The block size parameter was optimized for four different GPUs for a region of interest (ROI) of 400 × 225 pixels with different sampling ratios (1.1%-9.1% and 100%). Performance was statistically evaluated and compared for the four GPUs. The method and the block size dependence were validated on the latest GPU for several parameter settings with a public gold standard dataset (512 × 512 × 825 CT) for registration purposes. Depending on the GPU, the full ROI is rendered in 2.7-5.2 ms. If a sampling ratio of 1.1%-9.1% is applied, execution time is in the range of 0.3-7.3 ms. On all GPUs, the mean execution time increased linearly with the number of pixels if sampling was used. The presented results outperform other results from the literature. This indicates that automatic 2D to 3D registration, which typically requires a couple of hundred DRR renderings to converge, can be performed quasi on-line, in less than a second or, depending on the application and hardware, in less than a couple of seconds. Accordingly, a whole new field of applications is opened for image guided interventions, where the registration is continuously performed to match the real-time x-ray.
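
    The quantity each GPU kernel computes, a line integral of attenuation through the CT volume per detector pixel, can be sketched on the CPU for a simplified parallel-beam geometry; the actual work uses perspective ray casting on GPUs, so the geometry, phantom, and attenuation values below are only illustrative assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import rotate

    def parallel_beam_drr(volume, angle_deg, spacing_mm=1.0):
        """Simplified DRR: rotate the CT attenuation volume about its axial (z) axis,
        integrate along the x axis (parallel rays), and apply Beer-Lambert to get
        the transmitted fraction per detector pixel."""
        rotated = rotate(volume, angle_deg, axes=(1, 2), reshape=False, order=1)
        line_integral = rotated.sum(axis=2) * spacing_mm   # sum along the ray direction
        return np.exp(-line_integral)                      # transmitted fraction

    # Toy usage: an off-centre water-like sphere (mu ~ 0.02 /mm) viewed from two angles.
    z, y, x = np.mgrid[0:72, 0:128, 0:128]
    phantom = (((z - 36)**2 + (y - 64)**2 + (x - 90)**2) < 20**2).astype(float) * 0.02
    drr_0 = parallel_beam_drr(phantom, 0.0)
    drr_45 = parallel_beam_drr(phantom, 45.0)
    print(drr_0.shape, drr_0.min(), drr_45.min())   # (72, 128) detector images
    ```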

  15. The use of multisensor images for Earth Science applications

    NASA Technical Reports Server (NTRS)

    Evans, D.; Stromberg, B.

    1983-01-01

    The use of more than one remote sensing technique is particularly important for Earth Science applications because of the compositional and textural information derivable from the images. The ability to simultaneously analyze images acquired by different sensors requires coregistration of the multisensor image data sets. In order to ensure pixel to pixel registration in areas of high relief, images must be rectified to eliminate topographic distortions. Coregistered images can be analyzed using a variety of multidimensional techniques and the acquired knowledge of topographic effects in the images can be used in photogeologic interpretations.

  16. Plantar fascia softening in plantar fasciitis with normal B-mode sonography.

    PubMed

    Wu, Chueh-Hung; Chen, Wen-Shiang; Wang, Tyng-Guey

    2015-11-01

    To investigate plantar fascia elasticity in patients with typical clinical manifestations of plantar fasciitis but normal plantar fascia morphology on B-mode sonography. Twenty patients with plantar fasciitis (10 unilateral and 10 bilateral) and 30 healthy volunteers, all with normal plantar fascia morphology on B-mode sonography, were included in the study. Plantar fascia elasticity was evaluated by sonoelastographic examination. All sonoelastograms were quantitatively analyzed, and less red pixel intensity was representative of softer tissue. Pixel intensity was compared among unilateral plantar fasciitis patients, bilateral plantar fasciitis patients, and healthy volunteers by one-way ANOVA. A post hoc Scheffé's test was used to identify where the differences occurred. Compared to healthy participants (red pixel intensity: 146.9 ± 9.1), there was significantly less red pixel intensity in the asymptomatic sides of unilateral plantar fasciitis (140.4 ± 7.3, p = 0.01), symptomatic sides of unilateral plantar fasciitis (127.1 ± 7.4, p < 0.001), and both sides of bilateral plantar fasciitis (129.4 ± 7.5, p < 0.001). There were no significant differences in plantar fascia thickness or green or blue pixel intensity among these groups. Sonoelastography revealed that the plantar fascia is softer in patients with typical clinical manifestations of plantar fasciitis, even if they exhibit no abnormalities on B-mode sonography.

  17. Video image stabilization and registration--plus

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor)

    2009-01-01

    A method of stabilizing a video image displayed in multiple video fields of a video sequence includes the steps of: subdividing a selected area of a first video field into nested pixel blocks; determining horizontal and vertical translation of each of the pixel blocks in each of the pixel block subdivision levels from the first video field to a second video field; and determining translation of the image from the first video field to the second video field by determining a change in magnification of the image from the first video field to the second video field in each of horizontal and vertical directions, and determining shear of the image from the first video field to the second video field in each of the horizontal and vertical directions.

  18. Deep Adaptive Log-Demons: Diffeomorphic Image Registration with Very Large Deformations

    PubMed Central

    Jia, Kebin

    2015-01-01

    This paper proposes a new framework for capturing large and complex deformations in image registration. Traditionally, this challenging problem relies first on a preregistration, usually an affine transformation comprising rotation, scale, and translation, and afterwards on a nonrigid transformation. In the preregistration step, the directly calculated affine matrix, which is obtained from limited pixel information, may misregister when large biases exist, and thus severely mislead the subsequent registration. To address this problem, for two-dimensional (2D) images, the two-layer deep adaptive registration framework proposed in this paper first accurately classifies the rotation parameter through multilayer convolutional neural networks (CNNs) and then identifies the scale and translation parameters separately. For three-dimensional (3D) images, the affine matrix is estimated through feature correspondences by triplanar 2D CNNs. Deformation removal is then done iteratively through preregistration and demons registration. Compared with state-of-the-art registration frameworks, our method achieves more accurate registration results on both synthetic and real datasets. In addition, principal component analysis (PCA) is combined with correlation measures such as Pearson and Spearman to form new similarity metrics for 2D and 3D registration. Experimental results also show faster convergence. PMID:26120356

  19. Deep Adaptive Log-Demons: Diffeomorphic Image Registration with Very Large Deformations.

    PubMed

    Zhao, Liya; Jia, Kebin

    2015-01-01

    This paper proposes a new framework for capturing large and complex deformations in image registration. Traditionally, this challenging problem relies first on a preregistration, usually an affine transformation comprising rotation, scale, and translation, and afterwards on a nonrigid transformation. In the preregistration step, the directly calculated affine matrix, which is obtained from limited pixel information, may misregister when large biases exist, and thus severely mislead the subsequent registration. To address this problem, for two-dimensional (2D) images, the two-layer deep adaptive registration framework proposed in this paper first accurately classifies the rotation parameter through multilayer convolutional neural networks (CNNs) and then identifies the scale and translation parameters separately. For three-dimensional (3D) images, the affine matrix is estimated through feature correspondences by triplanar 2D CNNs. Deformation removal is then done iteratively through preregistration and demons registration. Compared with state-of-the-art registration frameworks, our method achieves more accurate registration results on both synthetic and real datasets. In addition, principal component analysis (PCA) is combined with correlation measures such as Pearson and Spearman to form new similarity metrics for 2D and 3D registration. Experimental results also show faster convergence.

  20. CdZnTe Image Detectors for Hard-X-Ray Telescopes

    NASA Technical Reports Server (NTRS)

    Chen, C. M. Hubert; Cook, Walter R.; Harrison, Fiona A.; Lin, Jiao Y. Y.; Mao, Peter H.; Schindler, Stephen M.

    2005-01-01

    Arrays of CdZnTe photodetectors and associated electronic circuitry have been built and tested in a continuing effort to develop focal-plane image sensor systems for hard-x-ray telescopes. Each array contains 24 by 44 pixels at a pitch of 498 µm. The detector designs are optimized to obtain low power demand with high spectral resolution in the photon-energy range of 5 to 100 keV. More precisely, each detector array is a hybrid of a CdZnTe photodetector array and an application-specific integrated circuit (ASIC) containing an array of amplifiers in the same pixel pattern as that of the detectors. The array is fabricated on a single crystal of CdZnTe having dimensions of 23.6 by 12.9 by 2 mm. The detector-array cathode is a monolithic platinum contact. On the anode plane, the contact metal is patterned into the aforementioned pixel array, surrounded by a guard ring that is 1 mm wide on three sides and is 0.1 mm wide on the fourth side so that two such detector arrays can be placed side-by-side to form a roughly square sensor area with minimal dead area between them. Figure 1 shows two anode patterns. One pattern features larger pixel anode contacts, with a 30-µm gap between them. The other pattern features smaller pixel anode contacts plus a contact for a shaping electrode in the form of a grid that separates all the pixels. In operation, the grid is held at a potential intermediate between the cathode and anode potentials to steer electric charges toward the anode in order to reduce the loss of charges in the inter-anode gaps. The CdZnTe photodetector array is mechanically and electrically connected to the ASIC (see Figure 2), either by use of indium bump bonds or by use of conductive epoxy bumps on the CdZnTe array joined to gold bumps on the ASIC. Hence, the output of each pixel detector is fed to its own amplifier chain.

  1. Improving depth estimation from a plenoptic camera by patterned illumination

    NASA Astrophysics Data System (ADS)

    Marshall, Richard J.; Meah, Chris J.; Turola, Massimo; Claridge, Ela; Robinson, Alex; Bongs, Kai; Gruppetta, Steve; Styles, Iain B.

    2015-05-01

    Plenoptic (light-field) imaging is a technique that allows a simple CCD-based imaging device to acquire both spatially and angularly resolved information about the "light-field" from a scene. It requires a microlens array to be placed between the objective lens and the sensor of the imaging device, and the images under each microlens (which typically span many pixels) can be computationally post-processed to shift perspective, digitally refocus, extend the depth of field, synthetically manipulate the aperture, and generate a depth map from a single image. Some of these capabilities are rigid functions that do not depend upon the scene and work by manipulating and combining a well-defined set of pixels in the raw image. However, depth mapping requires specific features in the scene to be identified and registered between consecutive microimages. This process requires that the image has sufficient features for the registration, and in the absence of such features the algorithms become less reliable and incorrect depths are generated. The aim of this study is to investigate the generation of depth maps from light-field images of scenes with insufficient features for accurate registration, using projected patterns to impose a texture on the scene that provides sufficient landmarks for the registration methods.

  2. Automatic registration of fused lidar/digital imagery (texel images) for three-dimensional image creation

    NASA Astrophysics Data System (ADS)

    Budge, Scott E.; Badamikar, Neeraj S.; Xie, Xuan

    2015-03-01

    Several photogrammetry-based methods have been proposed that derive three-dimensional (3-D) information from digital images taken from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions with low contrast, whereas point cloud merging alone has difficulty with outliers and a lack of proper convergence in the merging process. This paper presents a method to create 3-D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3-D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3-D points are fused at the sensor level, more accurate 3-D images are generated because registration of the image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods. The proposed method also includes modifications for the situation where an estimate of the position and attitude of the sensor is known, for example from low-cost global positioning system and inertial measurement unit sensors.

  3. An Analysis of LANDSAT-4 Thematic Mapper Geometric Properties

    NASA Technical Reports Server (NTRS)

    Walker, R. E.; Zobrist, A. L.; Bryant, N. A.; Gokhman, B.; Friedman, S. Z.; Logan, T. L.

    1984-01-01

    LANDSAT Thematic Mapper P-data of Washington, D.C., Harrisburg, PA, and Salton Sea, CA are analyzed to determine magnitudes and causes of error in the geometric conformity of the data to known Earth surface geometry. Several tests of data geometry are performed. Intraband and interband correlation and registration are investigated, exclusive of map-based ground truth. The magnitudes and statistical trends of pixel offsets between a single band's mirror scans (due to processing procedures) are computed, and the inter-band integrity of registration is analyzed. A line-to-line correlation analysis is included.

  4. Evaluation of LANDSAT-4 TM and MSS ground geometry performance without ground control

    NASA Technical Reports Server (NTRS)

    Bryant, N. A.; Zobrist, A.

    1983-01-01

    LANDSAT thematic mapper P-data of Washington, D.C., Harrisburg, PA, and Salton Sea, CA were analyzed to determine magnitudes and causes of error in the geometric conformity of the data to known earth-surface geometry. Several tests of data geometry were performed. Intra-band and inter-band correlation and registration were investigated, exclusive of map-based ground truth. Specifically, the magnitudes and statistical trends of pixel offsets between a single band's mirror scans (due to processing procedures) were computed, and the inter-band integrity of registration was analyzed.

  5. Performance of the STIS CCD Dark Rate Temperature Correction

    NASA Astrophysics Data System (ADS)

    Branton, Doug; STScI STIS Team

    2018-06-01

    Since July 2001, the Space Telescope Imaging Spectrograph (STIS) onboard Hubble has operated on its Side-2 electronics due to a failure in the primary Side-1 electronics. While nearly identical, Side-2 lacks a functioning temperature sensor for the CCD, introducing a variability in the CCD operating temperature. Previous analysis utilized the CCD housing temperature telemetry to characterize the relationship between the housing temperature and the dark rate. It was found that a first-order 7%/°C uniform dark correction demonstrated a considerable improvement in the quality of dark subtraction on Side-2 era CCD data, and that value has been used on all Side-2 CCD darks since. In this report, we show how this temperature correction has performed historically. We compare the current 7%/°C value against the ideal first-order correction at a given time (which can vary between ~6%/°C and ~10%/°C) as well as against a more complex second-order correction that applies a unique slope to each pixel as a function of dark rate and time. At worst, the current correction has performed ~1% worse than the second-order correction. Additionally, we present initial evidence suggesting that the variability in pixel temperature-sensitivity is significant enough to warrant a temperature correction that considers pixels individually rather than correcting them uniformly.
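
    A minimal sketch of a first-order, uniform dark-rate temperature correction of the kind described above, scaling a dark frame by a fixed fractional change per degree Celsius. The reference temperature, the exact functional form, and the numbers used in the STIS pipeline are assumptions here, chosen only to illustrate the idea.

      import numpy as np

      def correct_dark(dark_frame, housing_temp, ref_temp, slope=0.07):
          # Uniform first-order correction: `slope` fractional change in dark rate
          # per degree C of CCD-housing temperature difference from the reference.
          scale = 1.0 + slope * (ref_temp - housing_temp)
          return dark_frame * scale

      # Example: a dark taken 1.5 deg C warmer than the reference is scaled down
      # by about 10.5% before subtraction.
      dark = np.full((1024, 1024), 2.0e-3)            # counts/pixel/s, illustrative
      corrected = correct_dark(dark, housing_temp=19.5, ref_temp=18.0)
      print(corrected[0, 0] / dark[0, 0])             # -> about 0.895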

  6. Image Processing Of Images From Peripheral-Artery Digital Subtraction Angiography (DSA) Studies

    NASA Astrophysics Data System (ADS)

    Wilson, David L.; Tarbox, Lawrence R.; Cist, David B.; Faul, David D.

    1988-06-01

    A system is being developed to test the possibility of doing peripheral, digital subtraction angiography (DSA) with a single contrast injection using a moving gantry system. Given the repositioning errors that occur between the mask and contrast-containing images, factors affecting the success of subtractions following image registration have been investigated theoretically and experimentally. For a 1 mm gantry displacement, parallax and geometric image distortion (pin-cushion) both give subtraction errors following registration that are approximately 25% of the error resulting from no registration. Image processing techniques improve the subtractions. The geometric distortion effect is reduced using a piecewise, 8-parameter unwarping method. Plots of image similarity measures versus pixel shift are well behaved and well fit by a parabola, leading to the development of an iterative, automatic registration algorithm that uses parabolic prediction of the new minimum. The registration algorithm converges quickly (less than 1 second on a MicroVAX) and is relatively immune to the region of interest (ROI) selected.
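
    A minimal sketch of the parabolic-prediction step: given a similarity measure evaluated at three shifts, fit a parabola and predict the shift at its minimum; the iteration then re-evaluates around that prediction. Function names and the toy similarity curve are illustrative, not from the paper.

      import numpy as np

      def parabolic_minimum(shifts, similarity):
          # Fit y = a*x^2 + b*x + c to three (shift, similarity) samples and
          # return the shift at the parabola's vertex, -b / (2a).
          a, b, _ = np.polyfit(shifts, similarity, deg=2)
          return -b / (2.0 * a)

      # Example: samples of a well-behaved similarity curve with a minimum near 1.3 px.
      shifts = np.array([0.0, 1.0, 2.0])
      values = (shifts - 1.3) ** 2 + 0.05
      print(parabolic_minimum(shifts, values))   # -> about 1.3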

  7. Fully 3D-Integrated Pixel Detectors for X-Rays

    DOE PAGES

    Deptuch, Grzegorz W.; Gabriella, Carini; Enquist, Paul; ...

    2016-01-01

    The vertically integrated photon imaging chip (VIPIC1) pixel detector is a stack consisting of a 500-μm-thick silicon sensor, a two-tier 34-μm-thick integrated circuit, and a host printed circuit board (PCB). The integrated circuit tiers were bonded using the direct bonding technology with copper, and each tier features 1-μm-diameter through-silicon vias that were used for connections to the sensor on one side, and to the host PCB on the other side. The 80-μm-pixel-pitch sensor was bonded to the integrated circuit using the direct bonding technology with nickel. The stack was mounted on the board using Sn–Pb balls placed on a 320-μm pitch, yielding an entirely wire-bond-less structure. The analog front-end features a pulse response peaking at below 250 ns, and the power consumption per pixel is 25 μW. We successfully completed the 3-D integration and report on it here. Additionally, all pixels in the 64 × 64 matrix were responsive on well-bonded devices. Correct operation of the sparsified readout, allowing a single 153-ns bunch timing resolution, was confirmed in tests on a synchrotron beam of 10-keV X-rays. An equivalent noise charge of 36.2 e⁻ rms and a conversion gain of 69.5 μV/e⁻, with 2.6 e⁻ rms and 2.7 μV/e⁻ rms pixel-to-pixel variations, respectively, were measured.

  8. All-passive pixel super-resolution of time-stretch imaging

    PubMed Central

    Chan, Antony C. S.; Ng, Ho-Cheung; Bogaraju, Sharat C. V.; So, Hayden K. H.; Lam, Edmund Y.; Tsia, Kevin K.

    2017-01-01

    Based on image encoding in a serial-temporal format, optical time-stretch imaging entails a stringent requirement for a state-of-the-art fast data acquisition unit in order to preserve high image resolution at an ultrahigh frame rate, hampering the widespread utility of such technology. Here, we propose a pixel super-resolution (pixel-SR) technique tailored for time-stretch imaging that preserves pixel resolution at a relaxed sampling rate. It harnesses the subpixel shifts between image frames inherently introduced by asynchronous digital sampling of the continuous time-stretch imaging process. Precise pixel registration is thus accomplished without any active opto-mechanical subpixel-shift control or other additional hardware. We present an experimental pixel-SR image reconstruction pipeline that restores high-resolution time-stretch images of microparticles and biological cells (phytoplankton) at a relaxed sampling rate (≈2-5 GSa/s), more than four times lower than the originally required readout rate (20 GSa/s). The approach is thus effective for high-throughput, label-free, morphology-based cellular classification down to single-cell precision. Upon integration with high-throughput image processing technology, this pixel-SR time-stretch imaging technique represents a cost-effective and practical solution for large-scale cell-based phenotypic screening in biomedical diagnosis and machine vision for quality control in manufacturing. PMID:28303936
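
    A minimal sketch of the shift-and-add idea behind pixel super-resolution, assuming the subpixel shift of each low-resolution frame is already known (in the paper those shifts arise from asynchronous sampling and are recovered by pixel registration). The reconstruction below is a generic illustration, not the paper's pipeline.

      import numpy as np

      def shift_and_add(frames, shifts, factor):
          # Place each low-resolution frame onto a `factor`-times finer grid at its
          # known subpixel offset and average the overlapping contributions.
          h, w = frames[0].shape
          acc = np.zeros((h * factor, w * factor))
          cnt = np.zeros_like(acc)
          ys, xs = np.mgrid[0:h, 0:w]
          for frame, (dy, dx) in zip(frames, shifts):
              hy = np.clip(np.round((ys + dy) * factor).astype(int), 0, h * factor - 1)
              hx = np.clip(np.round((xs + dx) * factor).astype(int), 0, w * factor - 1)
              np.add.at(acc, (hy, hx), frame)
              np.add.at(cnt, (hy, hx), 1.0)
          return acc / np.maximum(cnt, 1.0)

      # Example: four frames offset by quarter-pixel steps combined on a 2x finer grid.
      rng = np.random.default_rng(2)
      lo = [rng.random((64, 64)) for _ in range(4)]
      hi = shift_and_add(lo, shifts=[(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)], factor=2)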

  9. Automated Transmission-Mode Scanning Electron Microscopy (tSEM) for Large Volume Analysis at Nanoscale Resolution

    PubMed Central

    Kuwajima, Masaaki; Mendenhall, John M.; Lindsey, Laurence F.; Harris, Kristen M.

    2013-01-01

    Transmission-mode scanning electron microscopy (tSEM) on a field emission SEM platform was developed for efficient and cost-effective imaging of circuit-scale volumes from brain at nanoscale resolution. Image area was maximized while optimizing the resolution and dynamic range necessary for discriminating key subcellular structures, such as small axonal, dendritic and glial processes, synapses, smooth endoplasmic reticulum, vesicles, microtubules, polyribosomes, and endosomes which are critical for neuronal function. Individual image fields from the tSEM system were up to 4,295 µm² (65.54 µm per side) at 2 nm pixel size, contrasting with image fields from a modern transmission electron microscope (TEM) system, which were only 66.59 µm² (8.160 µm per side) at the same pixel size. The tSEM produced outstanding images and had reduced distortion and drift relative to TEM. Automated stage and scan control in tSEM easily provided unattended serial section imaging and montaging. Lens and scan properties on both TEM and SEM platforms revealed no significant nonlinear distortions within a central field of ∼100 µm² and produced near-perfect image registration across serial sections using the computational elastic alignment tool in Fiji/TrakEM2 software, and reliable geometric measurements from RECONSTRUCT™ or Fiji/TrakEM2 software. Axial resolution limits the analysis of small structures contained within a section (∼45 nm). Since this new tSEM is non-destructive, objects within a section can be explored at finer axial resolution in TEM tomography with current methods. Future development of tSEM tomography promises thinner axial resolution producing nearly isotropic voxels and should provide within-section analyses of structures without changing platforms. Brain was the test system given our interest in synaptic connectivity and plasticity; however, the new tSEM system is readily applicable to other biological systems. PMID:23555711

  10. FITPix COMBO—Timepix detector with integrated analog signal spectrometric readout

    NASA Astrophysics Data System (ADS)

    Holik, M.; Kraus, V.; Georgiev, V.; Granja, C.

    2016-02-01

    The hybrid semiconductor pixel detector Timepix has proven a powerful tool in radiation detection and imaging. Energy loss and directional sensitivity, as well as particle-type resolving power, are made possible by high-resolution particle tracking and per-pixel energy and quantum-counting capability. The spectrometric resolving power of the detector can be further enhanced by analyzing the analog signal of the detector's common sensor electrode (also called the back-side pulse). In this work we present a new compact readout interface, based on the FITPix readout architecture, extended with integrated analog electronics for the detector's common sensor signal. Integrating simultaneous operation of the digital per-pixel information with the common sensor (also called back-side electrode) analog pulse processing circuitry in one device enhances the detector capabilities and opens new applications. Thanks to noise suppression and built-in electromagnetic interference shielding, the common hardware platform enables parallel analog signal spectroscopy on the back-side pulse signal with full operation and read-out of the pixelated digital part; the noise level is 600 keV and the spectrometric resolution is around 100 keV for 5.5 MeV alpha particles. Self-triggering is implemented with a delay of a few tens of ns, making use of an adjustable low-energy threshold on the particle analog signal amplitude. The digital pixelated full frame can thus be triggered and recorded together with the common sensor analog signal. The waveform, which is sampled at a frequency of 100 MHz, can be recorded in an adjustable time window, including time prior to the trigger. An integrated software tool provides control, on-line display, and read-out of both analog and digital channels. Both the pixelated digital record and the analog waveform are synchronized and written out with a common time stamp.

  11. Reproducibility of Centric Relation Techniques by means of Condyle Position Analysis

    PubMed Central

    Galeković, Nikolina Holen; Fugošić, Vesna; Braut, Vedrana

    2017-01-01

    Purpose The aim of this study was to determine the reproducibility of clinical centric relation (CR) registration techniques (bimanual manipulation, chin point guidance and Roth's method) by means of condyle position analysis. Material and methods Thirty-two fully dentate asymptomatic subjects (16 female and 16 male) with normal occlusal relations (Angle class I) participated in the study (mean age, 22.6 ± 4.7 years). The mandibular position indicator (MPI) was used to analyze the three-dimensional (anteroposterior (ΔX), superoinferior (ΔZ), mediolateral (ΔY)) condylar shift generated by the difference between the centric relation position (CR) and the maximal intercuspation position (MI) observed in dental arches. Results The mean value and standard deviation of the three-dimensional condylar shift of the tested clinical CR techniques was 0.19 ± 0.34 mm. Significant differences within the tested clinical CR registration techniques were found for anteroposterior condylar shift on the right side posterior (ΔXrp; P ≤ 0.012) and superoinferior condylar shift on the left side inferior (ΔZli; P ≤ 0.011), whereas differences between the tested CR registration techniques were found for anteroposterior shift on the right side posterior (ΔXrp, P ≤ 0.037) and superoinferior shift on the right side inferior (ΔZri, P ≤ 0.004), on the left side inferior (ΔZli, P ≤ 0.005) and on the left side superior (ΔZls, P ≤ 0.007). Conclusion Bimanual manipulation, chin point guidance and Roth's method are clinical CR registration techniques of equal accuracy and reproducibility in asymptomatic subjects with a normal occlusal relationship. PMID:28740266

  12. Reproducibility of Centric Relation Techniques by means of Condyle Position Analysis.

    PubMed

    Galeković, Nikolina Holen; Fugošić, Vesna; Braut, Vedrana; Ćelić, Robert

    2017-03-01

    The aim of this study was to determine the reproducibility of clinical centric relation (CR) registration techniques (bimanual manipulation, chin point guidance and Roth's method) by means of condyle position analysis. Thirty-two fully dentate asymptomatic subjects (16 female and 16 male) with normal occlusal relations (Angle class I) participated in the study (mean age, 22.6 ± 4.7 years). The mandibular position indicator (MPI) was used to analyze the three-dimensional (anteroposterior (ΔX), superoinferior (ΔZ), mediolateral (ΔY)) condylar shift generated by the difference between the centric relation position (CR) and the maximal intercuspation position (MI) observed in dental arches. The mean value and standard deviation of the three-dimensional condylar shift of the tested clinical CR techniques was 0.19 ± 0.34 mm. Significant differences within the tested clinical CR registration techniques were found for anteroposterior condylar shift on the right side posterior (ΔXrp; P ≤ 0.012) and superoinferior condylar shift on the left side inferior (ΔZli; P ≤ 0.011), whereas differences between the tested CR registration techniques were found for anteroposterior shift on the right side posterior (ΔXrp, P ≤ 0.037) and superoinferior shift on the right side inferior (ΔZri, P ≤ 0.004), on the left side inferior (ΔZli, P ≤ 0.005) and on the left side superior (ΔZls, P ≤ 0.007). Bimanual manipulation, chin point guidance and Roth's method are clinical CR registration techniques of equal accuracy and reproducibility in asymptomatic subjects with a normal occlusal relationship.

  13. Alignment, orientation, and Coulomb explosion of difluoroiodobenzene studied with the pixel imaging mass spectrometry (PImMS) camera.

    PubMed

    Amini, Kasra; Boll, Rebecca; Lauer, Alexandra; Burt, Michael; Lee, Jason W L; Christensen, Lauge; Brauβe, Felix; Mullins, Terence; Savelyev, Evgeny; Ablikim, Utuq; Berrah, Nora; Bomme, Cédric; Düsterer, Stefan; Erk, Benjamin; Höppner, Hauke; Johnsson, Per; Kierspel, Thomas; Krecinic, Faruk; Küpper, Jochen; Müller, Maria; Müller, Erland; Redlin, Harald; Rouzée, Arnaud; Schirmel, Nora; Thøgersen, Jan; Techert, Simone; Toleikis, Sven; Treusch, Rolf; Trippel, Sebastian; Ulmer, Anatoli; Wiese, Joss; Vallance, Claire; Rudenko, Artem; Stapelfeldt, Henrik; Brouard, Mark; Rolles, Daniel

    2017-07-07

    Laser-induced adiabatic alignment and mixed-field orientation of 2,6-difluoroiodobenzene (C6H3F2I) molecules are probed by Coulomb explosion imaging following either near-infrared strong-field ionization or extreme-ultraviolet multi-photon inner-shell ionization using free-electron laser pulses. The resulting photoelectrons and fragment ions are captured by a double-sided velocity map imaging spectrometer and projected onto two position-sensitive detectors. The ion side of the spectrometer is equipped with a pixel imaging mass spectrometry camera, a time-stamping pixelated detector that can record the hit positions and arrival times of up to four ions per pixel per acquisition cycle. Thus, the time-of-flight trace and ion momentum distributions for all fragments can be recorded simultaneously. We show that we can obtain a high degree of one- and three-dimensional alignment and mixed-field orientation, and compare the Coulomb explosion process induced at both wavelengths.

  14. Feature-Based Retinal Image Registration Using D-Saddle Feature

    PubMed Central

    Hasikin, Khairunnisa; A. Karim, Noor Khairiah; Ahmedy, Fatimah

    2017-01-01

    Retinal image registration is important to assist diagnosis and monitor retinal diseases, such as diabetic retinopathy and glaucoma. However, registering retinal images for various registration applications requires the detection and distribution of feature points on the low-quality region that consists of vessels of varying contrast and sizes. A recent feature detector known as Saddle detects feature points on vessels that are poorly distributed and densely positioned on strong contrast vessels. Therefore, we propose a multiresolution difference of Gaussian pyramid with Saddle detector (D-Saddle) to detect feature points on the low-quality region that consists of vessels with varying contrast and sizes. D-Saddle is tested on the Fundus Image Registration (FIRE) Dataset that consists of 134 retinal image pairs. Experimental results show that D-Saddle successfully registered 43% of retinal image pairs with an average registration accuracy of 2.329 pixels, while a lower success rate is observed for the other four state-of-the-art retinal image registration methods GDB-ICP (28%), Harris-PIIFD (4%), H-M (16%), and Saddle (16%). Furthermore, the registration accuracy of D-Saddle has the weakest correlation (Spearman) with the intensity uniformity metric among all methods. Finally, the paired t-test shows that D-Saddle significantly improved the overall registration accuracy of the original Saddle. PMID:29204257

  15. SU-F-J-177: A Novel Image Analysis Technique (center Pixel Method) to Quantify End-To-End Tests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wen, N; Chetty, I; Snyder, K

    Purpose: To implement a novel image analysis technique, "center pixel method", to quantify the end-to-end test accuracy of a frameless, image guided stereotactic radiosurgery system. Methods: The localization accuracy was determined by delivering radiation to an end-to-end prototype phantom. The phantom was scanned with 0.8 mm slice thickness. The treatment isocenter was placed at the center of the phantom. In the treatment room, CBCT images of the phantom (kVp=77, mAs=1022, slice thickness 1 mm) were acquired and registered to the reference CT images. 6D couch corrections were applied based on the registration results. Electronic Portal Imaging Device (EPID)-based Winston-Lutz (WL) tests were performed to quantify the targeting accuracy of the system at 15 combinations of gantry, collimator and couch positions. The images were analyzed using two different methods. a) The classic method: the deviation was calculated by measuring the radial distance between the center of the central BB and the full width at half maximum of the radiation field. b) The center pixel method: since the imager projection offset from the treatment isocenter was known from the IsoCal calibration, the deviation was determined between the center of the BB and the central pixel of the imager panel. Results: Using the automatic registration method to localize the phantom and the classic method of measuring the deviation of the BB center, the mean and standard deviation of the radial distance was 0.44 ± 0.25, 0.47 ± 0.26, and 0.43 ± 0.13 mm for the jaw, MLC and cone defined field sizes respectively. When the center pixel method was used, the mean and standard deviation was 0.32 ± 0.18, 0.32 ± 0.17, and 0.32 ± 0.19 mm respectively. Conclusion: Our results demonstrate that the center pixel method accurately analyzes the WL images to evaluate the targeting accuracy of the radiosurgery system. The work was supported by a Research Scholar Grant, RSG-15-137-01-CCE, from the American Cancer Society.

  16. Super-Resolution Enhancement From Multiple Overlapping Images: A Fractional Area Technique

    NASA Astrophysics Data System (ADS)

    Michaels, Joshua A.

    With the availability of large quantities of relatively low-resolution data from several decades of spaceborne imaging, methods of creating an accurate, higher-resolution image from multiple lower-resolution images (i.e., super-resolution) have been developed almost since such imagery has been around. The fractional-area super-resolution technique developed in this thesis has never before been documented. Satellite orbits, like Landsat's, have a quantifiable variation, which means each image is not centered on the exact same spot more than once, and the overlapping information from these multiple images may be used for super-resolution enhancement. By splitting a single initial pixel into many smaller, desired pixels, a relationship can be created between them using the ratio of the area within the initial pixel. The ideal goal for this technique is to obtain smaller pixels with exact values and no error, yielding a better potential result than those methods that yield interpolated pixel values with consequential loss of spatial resolution. A Fortran 95 program was developed to perform all calculations associated with the fractional-area super-resolution technique. The fractional areas are calculated using traditional trigonometry and coordinate geometry, and the Linear Algebra Package (LAPACK; Anderson et al., 1999) is used to solve for the higher-resolution pixel values. In order to demonstrate proof of concept, a synthetic dataset was created using the intrinsic Fortran random number generator and Adobe Illustrator CS4 (for geometry). To test the real-life application, digital pictures from a Sony DSC-S600 digital point-and-shoot camera with a tripod were taken of a large US geological map under fluorescent lighting. While the fractional-area super-resolution technique works in perfect synthetic conditions, it did not successfully produce a reasonable or consistent solution in the digital photograph enhancement test. The prohibitive amount of processing time (up to 60 days for a relatively small enhancement area) severely limits the practical usefulness of fractional-area super-resolution. Fractional-area super-resolution is very sensitive to relative input image co-registration, which must be accurate to a sub-pixel degree. However, this technique, if input conditions permit, could be applied as a "pinpoint" super-resolution technique. Such an application could be possible by applying it only to very small areas with very good input image co-registration.
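
    A minimal sketch of the linear-algebra core described above: each observed low-resolution pixel value is modeled as an area-weighted combination of unknown high-resolution pixel values, and the resulting system is solved by least squares. The thesis solves this with LAPACK from Fortran 95; numpy's lstsq calls the same family of LAPACK routines. The tiny area matrix below is illustrative and not taken from the thesis.

      import numpy as np

      # Each row is one observed coarse pixel; each column is one unknown fine pixel;
      # entries are fractional areas of overlap (illustrative numbers, rows from
      # overlapping images at different subpixel offsets).
      A = np.array([
          [0.25, 0.25, 0.25, 0.25],
          [0.50, 0.50, 0.00, 0.00],
          [0.50, 0.00, 0.50, 0.00],
          [0.56, 0.24, 0.14, 0.06],
          [0.06, 0.14, 0.24, 0.56],
      ])
      x_true = np.array([10.0, 20.0, 30.0, 40.0])   # unknown fine-pixel values
      b = A @ x_true                                # observed coarse-pixel values

      x_est, *_ = np.linalg.lstsq(A, b, rcond=None)
      print(np.round(x_est, 3))                     # -> [10. 20. 30. 40.]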

  17. MatchGUI: A Graphical MATLAB-Based Tool for Automatic Image Co-Registration

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.

    2011-01-01

    MatchGUI software, based on MATLAB, automatically matches two images and displays the match result by superimposing one image on the other. A slider bar allows focus to shift between the two images. There are tools for zoom, auto-crop to the overlap region, and basic image markup. Given a pair of ortho-rectified images (focused primarily on Mars orbital imagery for now), this software automatically co-registers the imagery so that corresponding image pixels are aligned. MatchGUI requires minimal user input, and performs a registration over scale and in-plane rotation fully automatically.

  18. Fully 3D-Integrated Pixel Detectors for X-Rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deptuch, Grzegorz W.; Gabriella, Carini; Enquist, Paul

    The vertically integrated photon imaging chip (VIPIC1) pixel detector is a stack consisting of a 500-μm-thick silicon sensor, a two-tier 34-μm-thick integrated circuit, and a host printed circuit board (PCB). The integrated circuit tiers were bonded using the direct bonding technology with copper, and each tier features 1-μm-diameter through-silicon vias that were used for connections to the sensor on one side, and to the host PCB on the other side. The 80-μm-pixel-pitch sensor was bonded to the integrated circuit using the direct bonding technology with nickel. The stack was mounted on the board using Sn–Pb balls placed on a 320-μm pitch, yielding an entirely wire-bond-less structure. The analog front-end features a pulse response peaking at below 250 ns, and the power consumption per pixel is 25 μW. We successfully completed the 3-D integration and report on it here. Additionally, all pixels in the 64 × 64 matrix were responsive on well-bonded devices. Correct operation of the sparsified readout, allowing a single 153-ns bunch timing resolution, was confirmed in tests on a synchrotron beam of 10-keV X-rays. An equivalent noise charge of 36.2 e⁻ rms and a conversion gain of 69.5 μV/e⁻, with 2.6 e⁻ rms and 2.7 μV/e⁻ rms pixel-to-pixel variations, respectively, were measured.

  19. A novel point cloud registration using 2D image features

    NASA Astrophysics Data System (ADS)

    Lin, Chien-Chou; Tai, Yen-Chou; Lee, Jhong-Jin; Chen, Yong-Sheng

    2017-01-01

    Since a 3D scanner captures only one scene of a 3D object at a time, 3D registration across multiple scenes is the key issue of 3D modeling. This paper presents a novel and efficient 3D registration method based on 2D local feature matching. The proposed method transforms the point clouds into 2D bearing-angle images and then uses the 2D feature-based matching method, SURF, to find matching pixel pairs between two images. The corresponding points of the 3D point clouds can be obtained from those pixel pairs. Since the corresponding pairs are sorted by the distance between their matching features, only the top half of the corresponding pairs are used to find the optimal rotation matrix by least squares approximation. In this paper, the optimal rotation matrix is derived by the orthogonal Procrustes method (an SVD-based approach). Therefore, the 3D model of an object can be reconstructed by aligning those point clouds with the optimal transformation matrix. Experimental results show that the accuracy of the proposed method is close to that of the ICP, but the computation cost is reduced significantly; the performance is six times faster than the generalized-ICP algorithm. Furthermore, while the ICP requires high alignment similarity of two scenes, the proposed method is robust to a larger difference in viewing angle.
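
    A minimal sketch of the SVD-based orthogonal Procrustes step used to recover the rotation (and translation) from matched 3-D point pairs; the bearing-angle image generation and SURF matching that produce the correspondences are not shown, and all names are illustrative.

      import numpy as np

      def procrustes_rigid(src, dst):
          # Least-squares rigid transform (R, t) aligning src to dst, both (N, 3),
          # via the SVD-based orthogonal Procrustes solution.
          src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
          H = (src - src_c).T @ (dst - dst_c)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:           # guard against a reflection solution
              Vt[-1] *= -1
              R = Vt.T @ U.T
          t = dst_c - R @ src_c
          return R, t

      # Example: recover a known rotation about z and a translation from 50 matched points.
      rng = np.random.default_rng(3)
      angle = np.deg2rad(20)
      R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                         [np.sin(angle),  np.cos(angle), 0],
                         [0, 0, 1]])
      src = rng.random((50, 3))
      dst = src @ R_true.T + np.array([0.1, -0.2, 0.3])
      R_est, t_est = procrustes_rigid(src, dst)
      print(np.allclose(R_est, R_true), np.round(t_est, 3))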

  20. A posteriori registration and subtraction of periapical radiographs for the evaluation of external apical root resorption after orthodontic treatment.

    PubMed

    Kreich, Eliane Maria; Chibinski, Ana Cláudia; Coelho, Ulisses; Wambier, Letícia Stadler; Zedebski, Rosário de Arruda Moura; de Moraes, Mari Eli Leonelli; de Moraes, Luiz Cesar

    2016-03-01

    This study employed a posteriori registration and subtraction of radiographic images to quantify the apical root resorption in maxillary permanent central incisors after orthodontic treatment, and assessed whether the external apical root resorption (EARR) was related to a range of parameters involved in the treatment. A sample of 79 patients (mean age, 13.5±2.2 years) with no history of trauma or endodontic treatment of the maxillary permanent central incisors was selected. Periapical radiographs taken before and after orthodontic treatment were digitized and imported to the Regeemy software. Based on an analysis of the posttreatment radiographs, the length of the incisors was measured using Image J software. The mean EARR was described in pixels and relative root resorption (%). The patient's age and gender, tooth extraction, use of elastics, and treatment duration were evaluated to identify possible correlations with EARR. The mean EARR observed was 15.44±12.1 pixels (5.1% resorption). No differences in the mean EARR were observed according to patient characteristics (gender, age) or treatment parameters (use of elastics, treatment duration). The only parameter that influenced the mean EARR of a patient was the need for tooth extraction. A posteriori registration and subtraction of periapical radiographs was a suitable method to quantify EARR after orthodontic treatment, and the need for tooth extraction increased the extent of root resorption after orthodontic treatment.

  1. Tritium autoradiography with thinned and back-side illuminated monolithic active pixel sensor device

    NASA Astrophysics Data System (ADS)

    Deptuch, G.

    2005-05-01

    The first autoradiographic results of a tritium (³H)-marked source obtained with monolithic active pixel sensors are presented. The detector is a high-resolution, back-side illuminated imager, developed within the SUCIMA collaboration for low-energy (<30 keV) electron detection. The sensitivity to these energies is obtained by thinning the detector, originally fabricated in the form of a standard VLSI chip, down to the thickness of the epitaxial layer. The detector used is the 1×10⁶-pixel, thinned MIMOSA V chip. The low noise performance and thin (˜160 nm) entrance window provide sensitivity of the device to energies as low as ˜4 keV. A polymer tritium source was placed directly atop the detector in open-air conditions. A real-time image of the source was obtained.

  2. Photovoltaic Retinal Prosthesis for Restoring Sight to Patients Blinded by Retinal Injury or Degeneration

    DTIC Science & Technology

    2016-02-01

    are rapidly expanding [1][2]. Cochlear implants [3] have seen the most remarkable success in sensory neuroprosthetics, while retinal implants [4][5...intensities: Imax (bright pixels) and Imin (dark pixels). In the common return configurations (connected and monopolar), the current flowing through the...array forces the injected current to flow to the back side, which results in a potential build-up in front of the device. Near each pixel (within

  3. Practical image registration concerns overcome by the weighted and filtered mutual information metric

    NASA Astrophysics Data System (ADS)

    Keane, Tommy P.; Saber, Eli; Rhody, Harvey; Savakis, Andreas; Raj, Jeffrey

    2012-04-01

    Contemporary research in automated panorama creation utilizes camera calibration or extensive knowledge of camera locations and relations to each other to achieve successful results. Research in image registration attempts to restrict these same camera parameters or apply complex point-matching schemes to overcome the complications found in real-world scenarios. This paper presents a novel automated panorama creation algorithm by developing an affine transformation search based on maximized mutual information (MMI) for region-based registration. Standard MMI techniques have been limited to applications with airborne/satellite imagery or medical images. We show that a novel MMI algorithm can approximate an accurate registration between views of realistic scenes of varying depth distortion. The proposed algorithm has been developed using stationary, color, surveillance video data for a scenario with no a priori camera-to-camera parameters. This algorithm is robust for strict- and nearly-affine-related scenes, while providing a useful approximation for the overlap regions in scenes related by a projective homography or a more complex transformation, allowing for a set of efficient and accurate initial conditions for pixel-based registration.
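
    A minimal sketch of estimating mutual information between two equally sized gray-scale images from their joint intensity histogram, the quantity maximized in MMI-style registration; the affine search and the weighting and filtering described in the paper are not reproduced, and all names are illustrative.

      import numpy as np

      def mutual_information(img_a, img_b, bins=64):
          # Mutual information estimated from the joint histogram of intensities.
          joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
          pxy = joint / joint.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0                        # avoid log(0)
          return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

      # Example: MI of an image with a noisy copy of itself exceeds MI with unrelated noise.
      rng = np.random.default_rng(4)
      a = rng.random((128, 128))
      b = a + 0.05 * rng.standard_normal(a.shape)
      print(mutual_information(a, b) > mutual_information(a, rng.random(a.shape)))  # True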

  4. Registration of in vivo MR to histology of rodent brains using blockface imaging

    NASA Astrophysics Data System (ADS)

    Uberti, Mariano; Liu, Yutong; Dou, Huanyu; Mosley, R. Lee; Gendelman, Howard E.; Boska, Michael

    2009-02-01

    Registration of MRI to histopathological sections can enhance bioimaging validation for use in pathobiologic, diagnostic, and therapeutic evaluations. However, commonly used registration methods fall short of this goal due to tissue shrinkage and tearing after brain extraction and preparation. In an attempt to overcome these limitations, we developed a software toolbox using 3D blockface imaging as the common space of reference. This toolbox includes a semi-automatic brain extraction technique using constraint level sets (CLS), 3D reconstruction methods for the blockface and MR volume, and a 2D warping technique using thin-plate splines with landmark optimization. Using this toolbox, the rodent brain volume is first extracted from the whole-head MRI using CLS. The blockface volume is reconstructed, followed by 3D brain MRI registration to the blockface volume to correct the global deformations due to brain extraction and fixation. Finally, registered MRI and histological slices are warped to corresponding blockface images to correct slice-specific deformations. The CLS brain extraction technique was validated by comparison with manual extraction, showing 94% overlap. The image warping technique was validated by calculating the target registration error (TRE). Results showed a registration accuracy of TRE < 1 pixel. Lastly, the registration method and the software tools developed were used to validate cell migration in murine human immunodeficiency virus type one encephalitis.

  5. Design methodology: edgeless 3D ASICs with complex in-pixel processing for pixel detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fahim, Farah; Deptuch, Grzegorz W.; Hoff, James R.

    The design methodology for the development of 3D integrated edgeless pixel detectors with in-pixel processing using Electronic Design Automation (EDA) tools is presented. A large area 3 tier 3D detector with one sensor layer and two ASIC layers, containing one analog and one digital tier, is built for x-ray photon time-of-arrival measurement and imaging. A full custom analog pixel is 65μm x 65μm. It is connected to a sensor pixel of the same size on one side, and on the other side it has approximately 40 connections to the digital pixel. A 32 x 32 edgeless array without any peripheral functional blocks constitutes a sub-chip. The sub-chip is an indivisible unit, which is further arranged in a 6 x 6 array to create the entire 1.248cm x 1.248cm ASIC. Each chip has 720 bump-bond I/O connections, on the back of the digital tier to the ceramic PCB. All the analog tier power and biasing is conveyed through the digital tier from the PCB. The assembly has no peripheral functional blocks, and hence the active area extends to the edge of the detector. This was achieved by using a few flavors of almost identical analog pixels (minimal variation in layout) to allow for peripheral biasing blocks to be placed within pixels. The 1024 pixels within a digital sub-chip array have a variety of full custom, semi-custom and automated timing driven functional blocks placed together. The methodology uses a modified mixed-mode on-top digital implementation flow to not only harness the tool efficiency for timing and floor-planning but also to maintain designer control over compact parasitically aware layout. The methodology uses the Cadence design platform; however, it is not limited to this tool.

  6. Design methodology: edgeless 3D ASICs with complex in-pixel processing for pixel detectors

    NASA Astrophysics Data System (ADS)

    Fahim, Farah; Deptuch, Grzegorz W.; Hoff, James R.; Mohseni, Hooman

    2015-08-01

    The design methodology for the development of 3D integrated edgeless pixel detectors with in-pixel processing using Electronic Design Automation (EDA) tools is presented. A large area 3 tier 3D detector with one sensor layer and two ASIC layers, containing one analog and one digital tier, is built for x-ray photon time-of-arrival measurement and imaging. A full custom analog pixel is 65μm x 65μm. It is connected to a sensor pixel of the same size on one side, and on the other side it has approximately 40 connections to the digital pixel. A 32 x 32 edgeless array without any peripheral functional blocks constitutes a sub-chip. The sub-chip is an indivisible unit, which is further arranged in a 6 x 6 array to create the entire 1.248cm x 1.248cm ASIC. Each chip has 720 bump-bond I/O connections, on the back of the digital tier to the ceramic PCB. All the analog tier power and biasing is conveyed through the digital tier from the PCB. The assembly has no peripheral functional blocks, and hence the active area extends to the edge of the detector. This was achieved by using a few flavors of almost identical analog pixels (minimal variation in layout) to allow for peripheral biasing blocks to be placed within pixels. The 1024 pixels within a digital sub-chip array have a variety of full custom, semi-custom and automated timing driven functional blocks placed together. The methodology uses a modified mixed-mode on-top digital implementation flow to not only harness the tool efficiency for timing and floor-planning but also to maintain designer control over compact parasitically aware layout. The methodology uses the Cadence design platform; however, it is not limited to this tool.

  7. Impact of nonrigid motion correction technique on pixel-wise pharmacokinetic analysis of free-breathing pulmonary dynamic contrast-enhanced MR imaging.

    PubMed

    Tokuda, Junichi; Mamata, Hatsuho; Gill, Ritu R; Hata, Nobuhiko; Kikinis, Ron; Padera, Robert F; Lenkinski, Robert E; Sugarbaker, David J; Hatabu, Hiroto

    2011-04-01

    To investigate the impact of nonrigid motion correction on pixel-wise pharmacokinetic analysis of free-breathing DCE-MRI in patients with solitary pulmonary nodules (SPNs). Misalignment of focal lesions due to respiratory motion in free-breathing dynamic contrast-enhanced MRI (DCE-MRI) precludes obtaining reliable time-intensity curves, which are crucial for pharmacokinetic analysis for tissue characterization. Single-slice 2D DCE-MRI was obtained in 15 patients. Misalignments of SPNs were corrected using nonrigid B-spline image registration. Pixel-wise pharmacokinetic parameters K(trans), v(e), and k(ep) were estimated from both the original and motion-corrected DCE-MRI by fitting the two-compartment pharmacokinetic model to the time-intensity curve obtained in each pixel. The goodness of fit was tested with a χ²-test on a pixel-by-pixel basis to evaluate the reliability of the parameters. The percentages of reliable pixels within the SPNs were compared between the original and motion-corrected DCE-MRI. In addition, the parameters obtained from benign and malignant SPNs were compared. The percentage of reliable pixels in the motion-corrected DCE-MRI was significantly larger than in the original DCE-MRI (P = 4 × 10⁻⁷). Both K(trans) and k(ep) derived from the motion-corrected DCE-MRI showed significant differences between benign and malignant SPNs (P = 0.024, 0.015). The study demonstrated the impact of the nonrigid motion correction technique on pixel-wise pharmacokinetic analysis of free-breathing DCE-MRI in SPNs. Copyright © 2011 Wiley-Liss, Inc.
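
    A minimal sketch of fitting a standard two-compartment (Tofts-type) model to one pixel's concentration-time curve, which is the kind of pixel-wise fit described above. The arterial input function, the time sampling, and the conversion from signal intensity to concentration are simplified assumptions here, and k(ep) is derived as K(trans)/v(e); none of the numbers come from the study.

      import numpy as np
      from scipy.optimize import curve_fit

      t = np.linspace(0, 5, 60)                              # minutes
      aif = 5.0 * (np.exp(-0.3 * t) - np.exp(-3.0 * t))      # toy arterial input function

      def tofts(t, ktrans, ve):
          # Tissue concentration as the AIF convolved with ktrans * exp(-kep * t),
          # where kep = ktrans / ve (discrete approximation of the convolution integral).
          kep = ktrans / ve
          dt = t[1] - t[0]
          kernel = np.exp(-kep * t)
          return ktrans * np.convolve(aif, kernel)[:t.size] * dt

      # Example: recover ktrans and ve for one pixel from a noisy synthetic curve.
      rng = np.random.default_rng(5)
      curve = tofts(t, 0.25, 0.40) + 0.01 * rng.standard_normal(t.size)
      (ktrans_fit, ve_fit), _ = curve_fit(tofts, t, curve, p0=(0.1, 0.2), bounds=(1e-3, 2.0))
      print(round(ktrans_fit, 3), round(ve_fit, 3), round(ktrans_fit / ve_fit, 3))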

  8. Automated Registration of Images from Multiple Bands of Resourcesat-2 Liss-4 camera

    NASA Astrophysics Data System (ADS)

    Radhadevi, P. V.; Solanki, S. S.; Jyothi, M. V.; Varadan, G.

    2014-11-01

    Continuous and automated co-registration and geo-tagging of images from multiple bands of the Liss-4 camera is one of the interesting challenges of Resourcesat-2 data processing. The three arrays of the Liss-4 camera are physically separated in the focal plane in the along-track direction. Thus, the same line on the ground will be imaged by the extreme bands with a time interval of as much as 2.1 seconds. During this time, the satellite would have covered a distance of about 14 km on the ground and the earth would have rotated through an angle of 30". Yaw steering is done to compensate for the earth rotation effects, thus ensuring a first-level registration between the bands. But this does not achieve perfect co-registration because of attitude fluctuations, satellite movement, terrain topography, PSM steering and small variations in the angular placement of the CCD lines (from the pre-launch values) in the focal plane. This paper describes an algorithm based on the viewing geometry of the satellite to perform automatic band-to-band registration of the Liss-4 MX image of Resourcesat-2 at Level 1A. The algorithm uses the principles of photogrammetric collinearity equations. The model employs orbit trajectory and attitude fitting with polynomials, followed by direct geo-referencing with a global DEM, whereby every pixel in the middle band is mapped to a particular position on the surface of the earth using the given attitude. Attitude is estimated by interpolating measurement data obtained from star sensors and gyros, which are sampled at low frequency. When the sampling rate of attitude information is low compared to the frequency of jitter or micro-vibration, images processed by geometric correction suffer from distortion. Therefore, a set of conjugate points is identified between the bands to perform a relative attitude error estimation and correction, which ensures the internal accuracy and co-registration of the bands. Accurate calculation of the exterior orientation parameters with GCPs is not required. Instead, the relative line-of-sight vector of each detector in the different bands in relation to the payload is addressed. With this method a band-to-band registration accuracy of better than 0.3 pixels could be achieved even in high-relief areas.

  9. Geometric registration of images by similarity transformation using two reference points

    NASA Technical Reports Server (NTRS)

    Kang, Yong Q. (Inventor); Jo, Young-Heon (Inventor); Yan, Xiao-Hai (Inventor)

    2011-01-01

    A method for registering a first image to a second image using a similarity transformation. Each image includes a plurality of pixels. The first image pixels are mapped to a set of first image coordinates and the second image pixels are mapped to a set of second image coordinates. The first image coordinates of two reference points in the first image are determined. The second image coordinates of these reference points in the second image are determined. A Cartesian translation of the set of second image coordinates is performed such that the second image coordinates of the first reference point match its first image coordinates. A similarity transformation of the translated set of second image coordinates is then performed. This transformation scales and rotates the second image coordinates about the first reference point such that the second image coordinates of the second reference point match its first image coordinates.
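
    A minimal sketch of the two-point similarity registration described above: translate so the first reference point coincides, then scale and rotate about that point so the second reference point also coincides. All names and the example coordinates are illustrative, not from the patent.

      import numpy as np

      def two_point_similarity(ref1_a, ref2_a, ref1_b, ref2_b):
          # Return a function mapping coordinates of the second image (b) into the
          # first image's (a) frame, from two reference points known in both images.
          va = np.asarray(ref2_a, float) - np.asarray(ref1_a, float)
          vb = np.asarray(ref2_b, float) - np.asarray(ref1_b, float)
          scale = np.hypot(*va) / np.hypot(*vb)
          angle = np.arctan2(va[1], va[0]) - np.arctan2(vb[1], vb[0])
          c, s = np.cos(angle), np.sin(angle)
          R = np.array([[c, -s], [s, c]])

          def transform(points_b):
              # Translate so ref1 matches, then rotate and scale about ref1.
              p = np.asarray(points_b, float) - ref1_b
              return scale * (p @ R.T) + ref1_a

          return transform

      # Example: the second reference point maps exactly onto its first-image coordinates.
      f = two_point_similarity(ref1_a=(10, 10), ref2_a=(20, 30), ref1_b=(5, 5), ref2_b=(5, 15))
      print(np.round(f([(5, 15)]), 3))   # -> [[20. 30.]]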

  10. Acquisition and preprocessing of LANDSAT data

    NASA Technical Reports Server (NTRS)

    Horn, T. N.; Brown, L. E.; Anonsen, W. H. (Principal Investigator)

    1979-01-01

    The original configuration of the GSFC data acquisition, preprocessing, and transmission subsystem, designed to provide LANDSAT data inputs to the LACIE system at JSC, is described. Enhancements made to support LANDSAT-2, and modifications for LANDSAT-3, are discussed. Registration performance throughout the 3-year period of LACIE operations satisfied the 1-pixel root-mean-square requirements established in 1974, with more than two of every three attempts at data registration proving successful, notwithstanding cosmetic faults or content inadequacies to which the process is inherently susceptible. The cloud/snow rejection rate experienced throughout the last 3 years has approached 50%, as expected in most LANDSAT data use situations.

  11. Functional magnetic resonance imaging of visual object construction and shape discrimination : relations among task, hemispheric lateralization, and gender.

    PubMed

    Georgopoulos, A P; Whang, K; Georgopoulos, M A; Tagaris, G A; Amirikian, B; Richter, W; Kim, S G; Uğurbil, K

    2001-01-01

    We studied the brain activation patterns in two visual image processing tasks requiring judgements on object construction (FIT task) or object sameness (SAME task). Eight right-handed healthy human subjects (four women and four men) performed the two tasks in a randomized block design while 5-mm, multislice functional images of the whole brain were acquired on a 4-tesla system using blood oxygenation dependent (BOLD) activation. Pairs of objects were picked randomly from a set of 25 oriented fragments of a square and presented to the subjects approximately every 5 sec. In the FIT task, subjects had to indicate, by pushing one of two buttons, whether the two fragments could match to form a perfect square, whereas in the SAME task they had to decide whether they were the same or not. In a control task, preceding and following each of the two tasks above, a single square was presented at the same rate and subjects pushed either of the two keys at random. Functional activation maps were constructed based on a combination of conservative criteria. The areas with activated pixels were identified using Talairach coordinates and anatomical landmarks, and the number of activated pixels was determined for each area. Altogether, 379 pixels were activated. The counts of activated pixels did not differ significantly between the two tasks or between the two genders. However, there were significantly more activated pixels in the left (n = 218) than the right side of the brain (n = 161). Of the 379 activated pixels, 371 were located in the cerebral cortex. The Talairach coordinates of these pixels were analyzed with respect to their overall distribution in the two tasks. These distributions differed significantly between the two tasks. With respect to individual dimensions, the two tasks differed significantly in the anterior-posterior and superior-inferior distributions but not in the left-right (including mediolateral, within the left or right side) distribution. Specifically, the FIT distribution was, overall, more anterior and inferior than that of the SAME task. A detailed analysis of the counts and spatial distributions of activated pixels was carried out for 15 brain areas (all in the cerebral cortex) in which a consistent activation (in ≥ 3 subjects) was observed (n = 323 activated pixels). We found the following. Except for the inferior temporal gyrus, which was activated exclusively in the FIT task, all other areas showed activation in both tasks but to different extents. Based on the extent of activation, areas fell within two distinct groups (FIT or SAME) depending on which pixel count (i.e., FIT or SAME) was greater. The FIT group consisted of the following areas, in decreasing FIT/SAME order (brackets indicate ties): GTi, GTs, GC, GFi, GFd, [GTm, GF], GO. The SAME group consisted of the following areas, in decreasing SAME/FIT order: GOi, LPs, Sca, GPrC, GPoC, [GFs, GFm]. These results indicate that there are distributed, graded, and partially overlapping patterns of activation during performance of the two tasks. We attribute these overlapping patterns of activation to the engagement of partially shared processes. Activated pixels grouped into three types of clusters: FIT-only (111 pixels), SAME-only (97 pixels), and FIT + SAME (115 pixels). Pixels contained in FIT-only and SAME-only clusters were distributed approximately equally between the left and right hemispheres, whereas pixels in the SAME + FIT clusters were located mostly in the left hemisphere.
With respect to gender, the left-right distribution of activated pixels was very similar in women and men for the SAME-only and FIT + SAME clusters but differed for the FIT-only case in which there was a prominent left side preponderance for women, in contrast to a right side preponderance for men. We conclude that (a) cortical mechanisms common for processing visual object construction and discrimination involve mostly the left hemisphere, (b) cortical mechanisms specific for these tasks engage both hemispheres, and (c) in object construction only, men engage predominantly the right hemisphere whereas women show a left-hemisphere preponderance.

  12. Deformable and rigid registration of MRI and microPET images for photodynamic therapy of cancer in mice

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fei Baowei; Wang Hesheng; Muzic, Raymond F. Jr.

    2006-03-15

    We are investigating imaging techniques to study the tumor response to photodynamic therapy (PDT). Positron emission tomography (PET) can provide physiological and functional information. High-resolution magnetic resonance imaging (MRI) can provide anatomical and morphological changes. Image registration can combine MRI and PET images for improved tumor monitoring. In this study, we acquired high-resolution MRI and microPET ¹⁸F-fluorodeoxyglucose (FDG) images from C3H mice with RIF-1 tumors that were treated with Pc 4-based PDT. We developed two registration methods for this application. For registration of the whole mouse body, we used an automatic three-dimensional, normalized mutual information algorithm. For tumor registration, we developed a finite element model (FEM)-based deformable registration scheme. To assess the quality of whole body registration, we performed slice-by-slice review of both image volumes; manually segmented feature organs, such as the left and right kidneys and the bladder, in each slice; and computed the distance between corresponding centroids. Over 40 volume registration experiments were performed with MRI and microPET images. The distance between corresponding centroids of organs was 1.5 ± 0.4 mm which is about 2 pixels of microPET images. The mean volume overlap ratios for tumors were 94.7% and 86.3% for the deformable and rigid registration methods, respectively. Registration of high-resolution MRI and microPET images combines anatomical and functional information of the tumors and provides a useful tool for evaluating photodynamic therapy.
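
    The record above does not spell out how the similarity measure is implemented; as an illustration of the normalized mutual information criterion used for the whole-body rigid step, here is a minimal sketch (the function name, bin count and use of NumPy are assumptions of this sketch, not taken from the paper):

    ```python
    import numpy as np

    def normalized_mutual_information(img_a, img_b, bins=64):
        """NMI = (H(A) + H(B)) / H(A, B) computed from a joint histogram.

        img_a, img_b: intensity arrays of identical shape (e.g. resampled
        MRI and microPET volumes); bins: number of histogram bins.
        """
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()          # joint probability
        px = pxy.sum(axis=1)               # marginal of A
        py = pxy.sum(axis=0)               # marginal of B

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        return (entropy(px) + entropy(py)) / entropy(pxy)
    ```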

  13. A posteriori registration and subtraction of periapical radiographs for the evaluation of external apical root resorption after orthodontic treatment

    PubMed Central

    Chibinski, Ana Cláudia; Coelho, Ulisses; Wambier, Letícia Stadler; Zedebski, Rosário de Arruda Moura; de Moraes, Mari Eli Leonelli; de Moraes, Luiz Cesar

    2016-01-01

    Purpose This study employed a posteriori registration and subtraction of radiographic images to quantify the apical root resorption in maxillary permanent central incisors after orthodontic treatment, and assessed whether the external apical root resorption (EARR) was related to a range of parameters involved in the treatment. Materials and Methods A sample of 79 patients (mean age, 13.5±2.2 years) with no history of trauma or endodontic treatment of the maxillary permanent central incisors was selected. Periapical radiographs taken before and after orthodontic treatment were digitized and imported to the Regeemy software. Based on an analysis of the posttreatment radiographs, the length of the incisors was measured using Image J software. The mean EARR was described in pixels and relative root resorption (%). The patient's age and gender, tooth extraction, use of elastics, and treatment duration were evaluated to identify possible correlations with EARR. Results The mean EARR observed was 15.44±12.1 pixels (5.1% resorption). No differences in the mean EARR were observed according to patient characteristics (gender, age) or treatment parameters (use of elastics, treatment duration). The only parameter that influenced the mean EARR of a patient was the need for tooth extraction. Conclusion A posteriori registration and subtraction of periapical radiographs was a suitable method to quantify EARR after orthodontic treatment, and the need for tooth extraction increased the extent of root resorption after orthodontic treatment. PMID:27051635

  14. ACE: Automatic Centroid Extractor for real time target tracking

    NASA Technical Reports Server (NTRS)

    Cameron, K.; Whitaker, S.; Canaris, J.

    1990-01-01

    A high performance video image processor has been implemented which is capable of grouping contiguous pixels from a raster scan image into objects and then calculating centroid information for each object in a frame. The algorithm employed to group pixels is very efficient and is guaranteed to work properly for all convex shapes as well as most concave shapes. Processing speeds are adequate for real time processing of video images having a pixel rate of up to 20 million pixels per second. Pixels may be up to 8 bits wide. The processor is designed to interface directly to a transputer serial link communications channel with no additional hardware. The full custom VLSI processor was implemented in a 1.6 µm CMOS process and measures 7200 µm on a side.
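
    A software analogue of what the ACE chip does in hardware can be sketched with SciPy's connected-component labelling; the threshold-then-centroid flow below is an illustrative reconstruction, not the chip's actual pipeline:

    ```python
    import numpy as np
    from scipy import ndimage

    def frame_centroids(frame, threshold):
        """Group contiguous above-threshold pixels into objects and return
        one (row, col) intensity-weighted centroid per object."""
        mask = frame > threshold
        labels, n_objects = ndimage.label(mask)   # group 4-connected pixels
        if n_objects == 0:
            return []
        return ndimage.center_of_mass(frame, labels,
                                      index=range(1, n_objects + 1))
    ```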

  15. Realistic full wave modeling of focal plane array pixels

    DOE PAGES

    Campione, Salvatore; Warne, Larry K.; Jorgenson, Roy E.; ...

    2017-11-01

    Here, we investigate full-wave simulations of realistic implementations of multifunctional nanoantenna enabled detectors (NEDs). We focus on a 2x2 pixelated array structure that supports two wavelengths of operation. We design each resonating structure independently using full-wave simulations with periodic boundary conditions mimicking the whole infinite array. We then construct a supercell made of a 2x2 pixelated array with periodic boundary conditions mimicking the full NED; in this case, however, each pixel comprises 10-20 antennas per side. In this way, the cross-talk between contiguous pixels is accounted for in our simulations. We observe that, even though there are finite extent effects, the pixels work as designed, each responding at the respective wavelength of operation. This allows us to stress that realistic simulations of multifunctional NEDs need to be performed to verify the design functionality by taking into account finite extent and cross-talk effects.

  16. Topology-guided deformable registration with local importance preservation for biomedical images

    NASA Astrophysics Data System (ADS)

    Zheng, Chaojie; Wang, Xiuying; Zeng, Shan; Zhou, Jianlong; Yin, Yong; Feng, Dagan; Fulham, Michael

    2018-01-01

    The demons registration (DR) model is well recognized for its deformation capability. However, it might lead to misregistration due to erroneous diffusion direction when there are no overlaps between corresponding regions. We propose a novel registration energy function, introducing topology energy, and incorporating a local energy function into the DR in a progressive registration scheme, to address these shortcomings. The topology energy that is derived from the topological information of the images serves as a direction inference to guide diffusion transformation to retain the merits of DR. The local energy constrains the deformation disparity of neighbouring pixels to maintain important local texture and density features. The energy function is minimized in a progressive scheme steered by a topology tree graph and we refer to it as topology-guided deformable registration (TDR). We validated our TDR on 20 pairs of synthetic images with Gaussian noise, 20 phantom PET images with artificial deformations and 12 pairs of clinical PET-CT studies. We compared it to three methods: (1) free-form deformation registration method, (2) energy-based DR and (3) multi-resolution DR. The experimental results show that our TDR outperformed the other three methods in regard to structural correspondence and preservation of the local important information including texture and density, while retaining global correspondence.
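
    For context, the classic demons force update that TDR builds on can be written in a few lines; the sketch below shows only a baseline Thirion-style step (the topology and local-energy terms described in the abstract are not included):

    ```python
    import numpy as np

    def demons_update(fixed, moving_warped, eps=1e-8):
        """One classic demons force update.

        Returns a per-pixel displacement increment (du_y, du_x) pushing the
        warped moving image toward the fixed image; the TDR method of the
        abstract adds topology and local-energy terms on top of such a step.
        """
        diff = moving_warped - fixed
        gy, gx = np.gradient(fixed)
        denom = gx**2 + gy**2 + diff**2 + eps
        du_x = -diff * gx / denom
        du_y = -diff * gy / denom
        return du_y, du_x
    ```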

  17. Supervoxels for graph cuts-based deformable image registration using guided image filtering

    NASA Astrophysics Data System (ADS)

    Szmul, Adam; Papież, Bartłomiej W.; Hallack, Andre; Grau, Vicente; Schnabel, Julia A.

    2017-11-01

    We propose combining a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for three-dimensional (3-D) deformable image registration. Due to the pixels/voxels-wise graph construction, the use of graph cuts in this context has been mainly limited to two-dimensional (2-D) applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation combined with graph cuts-based optimization can be applied to 3-D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model "sliding motion." Applying this method to lung image registration results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available computed tomography lung image dataset leads to the observation that our approach compares very favorably with state of the art methods in continuous and discrete image registration, achieving target registration error of 1.16 mm on average per landmark.

  18. Supervoxels for Graph Cuts-Based Deformable Image Registration Using Guided Image Filtering.

    PubMed

    Szmul, Adam; Papież, Bartłomiej W; Hallack, Andre; Grau, Vicente; Schnabel, Julia A

    2017-10-04

    In this work we propose to combine a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for 3D deformable image registration. Due to the pixels/voxels-wise graph construction, the use of graph cuts in this context has been mainly limited to 2D applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation, combined with graph cuts-based optimization, can be applied to 3D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model 'sliding motion'. Applying this method to lung image registration results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available computed tomography lung image dataset (www.dir-lab.com) leads to the observation that our new approach compares very favorably with state-of-the-art continuous and discrete image registration methods, achieving a target registration error of 1.16 mm on average per landmark.

  19. Joint image registration and fusion method with a gradient strength regularization

    NASA Astrophysics Data System (ADS)

    Lidong, Huang; Wei, Zhao; Jun, Wang

    2015-05-01

    Image registration is an essential process for image fusion, and fusion performance can be used to evaluate registration accuracy. We propose a maximum likelihood (ML) approach to joint image registration and fusion instead of treating them as two independent processes in the conventional way. To improve the visual quality of a fused image, a gradient strength (GS) regularization is introduced in the cost function of ML. The GS of the fused image is controllable by setting the target GS value in the regularization term. This is useful because a larger target GS brings a clearer fused image and a smaller target GS makes the fused image smoother and thus restrains noise. Hence, the subjective quality of the fused image can be improved whether the source images are polluted by noise or not. We can obtain the fused image and registration parameters successively by minimizing the cost function using an iterative optimization method. Experimental results show that our method is effective with transformation, rotation, and scale parameters in the range of [-2.0, 2.0] pixel, [-1.1 deg, 1.1 deg], and [0.95, 1.05], respectively, and variances of noise smaller than 300. It also demonstrated that our method yields a more visual pleasing fused image and higher registration accuracy compared with a state-of-the-art algorithm.
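
    The abstract gives the cost function only qualitatively; a toy sketch of a data term plus a gradient-strength (GS) regularizer pulling the fused image toward a target GS value might look as follows (the quadratic data term, the weight lam and all names are assumptions of this sketch, not the paper's ML formulation):

    ```python
    import numpy as np

    def fusion_cost(fused, source_a, source_b_warped, target_gs, lam=0.1):
        """Toy cost: squared error of the fused image against both source
        images, plus a regularizer pulling the mean gradient magnitude
        (gradient strength) of the fused image toward a user-chosen target."""
        data = (np.mean((fused - source_a) ** 2)
                + np.mean((fused - source_b_warped) ** 2))
        gy, gx = np.gradient(fused)
        gs = np.mean(np.hypot(gx, gy))     # gradient strength of the fused image
        return data + lam * (gs - target_gs) ** 2
    ```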

  20. Band co-registration modeling of LAPAN-A3/IPB multispectral imager based on satellite attitude

    NASA Astrophysics Data System (ADS)

    Hakim, P. R.; Syafrudin, A. H.; Utama, S.; Jayani, A. P. S.

    2018-05-01

    One of the significant geometric distortions in images from the LAPAN-A3/IPB multispectral imager is co-registration error between the color channel detectors. Band co-registration distortion can usually be corrected using one of several approaches: a manual method, an image matching algorithm, or sensor modeling and calibration. This paper develops another approach to minimize band co-registration distortion in LAPAN-A3/IPB multispectral images by using supervised modeling of image matching with respect to satellite attitude. Modeling results show that band co-registration error in the across-track axis is strongly influenced by the yaw angle, while error in the along-track axis is fairly influenced by both the pitch and roll angles. The accuracy of the obtained models is good, with errors of 1-3 pixels for each axis of each band pair. This means that the model can be used to correct distorted images without the need for a slower image matching algorithm, the laborious effort of the manual approach, or sensor calibration. Since the calculation can be executed in a matter of seconds, this approach can be used for real-time quick-look image processing in the ground station or even in on-board satellite image processing.
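
    A minimal sketch of the kind of supervised linear model described above, fitting across-track offsets against yaw and along-track offsets against pitch and roll with ordinary least squares (the model form and variable names are illustrative assumptions):

    ```python
    import numpy as np

    def fit_coregistration_model(yaw, pitch, roll, dx, dy):
        """Least-squares fit of band co-registration offsets against attitude
        angles: across-track offset dx modelled from yaw, along-track offset
        dy from pitch and roll (all inputs are 1-D arrays of per-scene
        measurements)."""
        A_x = np.column_stack([yaw, np.ones_like(yaw)])
        coef_x, *_ = np.linalg.lstsq(A_x, dx, rcond=None)

        A_y = np.column_stack([pitch, roll, np.ones_like(pitch)])
        coef_y, *_ = np.linalg.lstsq(A_y, dy, rcond=None)
        return coef_x, coef_y
    ```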

  1. Supervoxels for Graph Cuts-Based Deformable Image Registration Using Guided Image Filtering

    PubMed Central

    Szmul, Adam; Papież, Bartłomiej W.; Hallack, Andre; Grau, Vicente; Schnabel, Julia A.

    2017-01-01

    In this work we propose to combine a supervoxel-based image representation with the concept of graph cuts as an efficient optimization technique for 3D deformable image registration. Due to the pixels/voxels-wise graph construction, the use of graph cuts in this context has been mainly limited to 2D applications. However, our work overcomes some of the previous limitations by posing the problem on a graph created by adjacent supervoxels, where the number of nodes in the graph is reduced from the number of voxels to the number of supervoxels. We demonstrate how a supervoxel image representation, combined with graph cuts-based optimization, can be applied to 3D data. We further show that the application of a relaxed graph representation of the image, followed by guided image filtering over the estimated deformation field, allows us to model ‘sliding motion’. Applying this method to lung image registration results in highly accurate image registration and anatomically plausible estimations of the deformations. Evaluation of our method on a publicly available computed tomography lung image dataset (www.dir-lab.com) leads to the observation that our new approach compares very favorably with state-of-the-art continuous and discrete image registration methods, achieving a target registration error of 1.16 mm on average per landmark. PMID:29225433

  2. Clementine High Resolution Camera Mosaicking Project

    NASA Technical Reports Server (NTRS)

    1998-01-01

    This report constitutes the final report for NASA Contract NASW-5054. This project processed Clementine I high resolution images of the Moon, mosaicked these images together, and created a 22-disk set of compact disk read-only memory (CD-ROM) volumes. The mosaics were produced through semi-automated registration and calibration of the high resolution (HiRes) camera's data against the geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic produced by the US Geological Survey (USGS). The HiRes mosaics were compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution nadir-looking observations. The images were spatially warped using the sinusoidal equal-area projection at a scale of 20 m/pixel for sub-polar mosaics (below 80 deg. latitude) and using the stereographic projection at a scale of 30 m/pixel for polar mosaics. Only images with emission angles less than approximately 50 were used. Images from non-mapping cross-track slews, which tended to have large SPICE errors, were generally omitted. The locations of the resulting image population were found to be offset from the UV/Vis basemap by up to 13 km (0.4 deg.). Geometric control was taken from the 100 m/pixel global and 150 m/pixel polar USGS Clementine Basemap Mosaics compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Radiometric calibration was achieved by removing the image nonuniformity dominated by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap, that approximately transform the 8-bit HiRes data to photometric units. The sub-polar mosaics are divided into tiles that cover approximately 1.75 deg. of latitude and span the longitude range of the mosaicked frames. Images from a given orbit are map projected using the orbit's nominal central latitude. Polar mosaics are tiled into squares 2250 pixels on a side, which spans approximately 2.2 deg. Two mosaics are provided for each pole: one corresponding to data acquired while periapsis was in the south, the other while periapsis was in the north. The CD-ROMs also contain ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files.

  3. Detection of X-ray spectra and images by Timepix

    NASA Astrophysics Data System (ADS)

    Urban, M.; Nentvich, O.; Stehlikova, V.; Sieger, L.

    2017-07-01

    X-ray monitoring for astrophysical applications mainly consists of two parts: optics and a detector. This article describes an approach based on the combination of Lobster Eye (LE) optics with a Timepix detector. Timepix is a semiconductor detector with 256 × 256 pixels on one electrode, while the second electrode is common. Using the back-side pulse from the common electrode of the pixelated detector provides an additional spectroscopic or trigger signal. The article also describes the effects of thermal stabilisation and of cooling on the detector when operated as a single pixel.

  4. Validation of elastic registration algorithms based on adaptive irregular grids for medical applications

    NASA Astrophysics Data System (ADS)

    Franz, Astrid; Carlsen, Ingwer C.; Renisch, Steffen; Wischmann, Hans-Aloys

    2006-03-01

    Elastic registration of medical images is an active field of current research. Registration algorithms have to be validated in order to show that they fulfill the requirements of a particular clinical application. Furthermore, validation strategies compare the performance of different registration algorithms and can hence judge which algorithm is best suited for a target application. In the literature, validation strategies for rigid registration algorithms have been analyzed. For a known ground truth they assess the displacement error at a few landmarks, which is not sufficient for elastic transformations described by a huge number of parameters. Hence we consider the displacement error averaged over all pixels in the whole image or in a region-of-interest of clinical relevance. Using artificially, but realistically deformed images of the application domain, we use this quality measure to analyze an elastic registration based on transformations defined on adaptive irregular grids for the following clinical applications: Magnetic Resonance (MR) images of freely moving joints for orthopedic investigations, thoracic Computed Tomography (CT) images for the detection of pulmonary embolisms, and transmission images as used for the attenuation correction and registration of independently acquired Positron Emission Tomography (PET) and CT images. The definition of a region-of-interest allows to restrict the analysis of the registration accuracy to clinically relevant image areas. The behaviour of the displacement error as a function of the number of transformation control points and their placement can be used for identifying the best strategy for the initial placement of the control points.
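
    The quality measure proposed above, the displacement error averaged over all pixels or over a region of interest, can be sketched as follows (the array layout and names are assumptions of this sketch):

    ```python
    import numpy as np

    def mean_displacement_error(u_true, u_est, roi_mask=None):
        """Average Euclidean displacement error between a known ground-truth
        deformation field and an estimated one, optionally restricted to a
        clinically relevant region of interest.

        u_true, u_est: displacement fields shaped (2, H, W) as (dy, dx);
        roi_mask: optional boolean mask of shape (H, W)."""
        err = np.sqrt(((u_true - u_est) ** 2).sum(axis=0))  # per-pixel error
        if roi_mask is not None:
            err = err[roi_mask]
        return err.mean()
    ```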

  5. Automatic registration of panoramic image sequence and mobile laser scanning data using semantic features

    NASA Astrophysics Data System (ADS)

    Li, Jianping; Yang, Bisheng; Chen, Chi; Huang, Ronggang; Dong, Zhen; Xiao, Wen

    2018-02-01

    Inaccurate exterior orientation parameters (EoPs) between sensors obtained by pre-calibration leads to failure of registration between panoramic image sequence and mobile laser scanning data. To address this challenge, this paper proposes an automatic registration method based on semantic features extracted from panoramic images and point clouds. Firstly, accurate rotation parameters between the panoramic camera and the laser scanner are estimated using GPS and IMU aided structure from motion (SfM). The initial EoPs of panoramic images are obtained at the same time. Secondly, vehicles in panoramic images are extracted by the Faster-RCNN as candidate primitives to be matched with potential corresponding primitives in point clouds according to the initial EoPs. Finally, translation between the panoramic camera and the laser scanner is refined by maximizing the overlapping area of corresponding primitive pairs based on the Particle Swarm Optimization (PSO), resulting in a finer registration between panoramic image sequences and point clouds. Two challenging urban scenes were experimented to assess the proposed method, and the final registration errors of these two scenes were both less than three pixels, which demonstrates a high level of automation, robustness and accuracy.

  6. An Optimised System for Generating Multi-Resolution Dtms Using NASA Mro Datasets

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Veitch-Michaelis, J.; Yershov, V.

    2016-06-01

    Within the EU FP-7 iMars project, a fully automated multi-resolution DTM processing chain, called Co-registration ASP-Gotcha Optimised (CASP-GO) has been developed, based on the open source NASA Ames Stereo Pipeline (ASP). CASP-GO includes tiepoint based multi-resolution image co-registration and an adaptive least squares correlation-based sub-pixel refinement method called Gotcha. The implemented system guarantees global geo-referencing compliance with respect to HRSC (and thence to MOLA), provides refined stereo matching completeness and accuracy based on the ASP normalised cross-correlation. We summarise issues discovered from experimenting with the use of the open-source ASP DTM processing chain and introduce our new working solutions. These issues include global co-registration accuracy, de-noising, dealing with failure in matching, matching confidence estimation, outlier definition and rejection scheme, various DTM artefacts, uncertainty estimation, and quality-efficiency trade-offs.

  7. [Affine transformation-based automatic registration for peripheral digital subtraction angiography (DSA)].

    PubMed

    Kong, Gang; Dai, Dao-Qing; Zou, Lu-Min

    2008-07-01

    In order to remove the artifacts of peripheral digital subtraction angiography (DSA), an affine transformation-based automatic image registration algorithm is introduced here. The whole process is described as follows: First, rectangular feature templates are constructed, centered at the Harris corners extracted from the mask image, and the motion vectors of the central feature points are estimated using template matching with maximum histogram energy as the similarity measure. Then the optimal parameters of the affine transformation are calculated with the matrix singular value decomposition (SVD) method. Finally, bilinear intensity interpolation is applied to the mask according to the estimated affine transformation. More than 30 peripheral DSA registrations were performed with the presented algorithm; as a result, motion artifacts were removed with sub-pixel precision, and the processing time is low enough to satisfy clinical requirements. Experimental results show the efficiency and robustness of the algorithm.
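
    As an illustration of the affine estimation step, a least-squares fit of a 2-D affine transform from matched corner points, solved through the SVD-based solver behind NumPy's lstsq, might look like this (a sketch under those assumptions, not the paper's exact implementation):

    ```python
    import numpy as np

    def estimate_affine(src_pts, dst_pts):
        """Least-squares 2-D affine transform from matched feature points
        (e.g. Harris-corner template matches).

        src_pts, dst_pts: (N, 2) arrays of (x, y) coordinates.
        Returns a 2x3 matrix [[a, b, tx], [c, d, ty]]."""
        n = src_pts.shape[0]
        A = np.hstack([src_pts, np.ones((n, 1))])              # rows [x, y, 1]
        params, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)   # SVD-based solve
        return params.T
    ```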

  8. The Zernike expansion--an example of a merit function for 2D/3D registration based on orthogonal functions.

    PubMed

    Dong, Shuo; Kettenbach, Joachim; Hinterleitner, Isabella; Bergmann, Helmar; Birkfellner, Wolfgang

    2008-01-01

    Current merit functions for 2D/3D registration usually rely on comparing pixels or small regions of images using some sort of statistical measure. Problems connected to this paradigm include the sometimes problematic behaviour of the method if noise or artefacts (for instance a guide wire) are present in the projective image. We present a merit function for 2D/3D registration which utilizes the decomposition of the X-ray and the DRR under comparison into orthogonal Zernike moments; the quality of the match is assessed by an iterative comparison of expansion coefficients. Results from an imaging study on a physical phantom show that, compared to standard cross-correlation, the Zernike moment-based merit function is more robust when the histogram content of the images under comparison differs, and that the computational cost is comparable if the merit function is constructed from only a few significant moments.

  9. Aircraft MSS data registration and vegetation classification of wetland change detection

    USGS Publications Warehouse

    Christensen, E.J.; Jensen, J.R.; Ramsey, Elijah W.; Mackey, H.E.

    1988-01-01

    Portions of the Savannah River floodplain swamp were evaluated for vegetation change using high resolution (5-6 m) aircraft multispectral scanner (MSS) data. Image distortion from aircraft movement prevented precise image-to-image registration in some areas. However, when small scenes were used (200-250 ha), a first-order linear transformation provided registration accuracies of less than or equal to one pixel. A larger area was registered using a piecewise linear method. Five major wetland classes were identified and evaluated for change. Phenological differences and the variable distribution of vegetation limited wetland type discrimination. Using unsupervised methods and ground-collected vegetation data, overall classification accuracies ranged from 84 per cent to 87 per cent for each scene. Results suggest that high-resolution aircraft MSS data can be precisely registered, if small areas are used, and that wetland vegetation change can be accurately detected and monitored.

  10. Design and Performance of a Pinned Photodiode CMOS Image Sensor Using Reverse Substrate Bias.

    PubMed

    Stefanov, Konstantin D; Clarke, Andrew S; Ivory, James; Holland, Andrew D

    2018-01-03

    A new pinned photodiode (PPD) CMOS image sensor with a reverse biased p-type substrate has been developed and characterized. The sensor uses traditional PPDs with one additional deep implantation step to suppress the parasitic reverse currents, and can be fully depleted. The first prototypes have been manufactured on 18 µm thick, 1000 Ω·cm epitaxial silicon wafers using a 180 nm PPD image sensor process. Both front-side illuminated (FSI) and back-side illuminated (BSI) devices were manufactured in collaboration with Teledyne e2v. The characterization results from a number of arrays of 10 µm and 5.4 µm PPD pixels, with different shapes, sizes, and depths of the new implant, are in good agreement with device simulations. The new pixels could be reverse-biased without parasitic leakage currents well beyond full depletion, and demonstrate nearly identical optical response to the reference non-modified pixels. The observed excessive charge sharing in some pixel variants is shown not to be a limiting factor in operation. This development promises to realize monolithic PPD CIS with large depleted thickness and correspondingly high quantum efficiency at near-infrared and soft X-ray wavelengths.

  11. Design and Performance of a Pinned Photodiode CMOS Image Sensor Using Reverse Substrate Bias †

    PubMed Central

    Clarke, Andrew S.; Ivory, James; Holland, Andrew D.

    2018-01-01

    A new pinned photodiode (PPD) CMOS image sensor with a reverse biased p-type substrate has been developed and characterized. The sensor uses traditional PPDs with one additional deep implantation step to suppress the parasitic reverse currents, and can be fully depleted. The first prototypes have been manufactured on 18 µm thick, 1000 Ω·cm epitaxial silicon wafers using a 180 nm PPD image sensor process. Both front-side illuminated (FSI) and back-side illuminated (BSI) devices were manufactured in collaboration with Teledyne e2v. The characterization results from a number of arrays of 10 µm and 5.4 µm PPD pixels, with different shapes, sizes, and depths of the new implant, are in good agreement with device simulations. The new pixels could be reverse-biased without parasitic leakage currents well beyond full depletion, and demonstrate nearly identical optical response to the reference non-modified pixels. The observed excessive charge sharing in some pixel variants is shown not to be a limiting factor in operation. This development promises to realize monolithic PPD CIS with large depleted thickness and correspondingly high quantum efficiency at near-infrared and soft X-ray wavelengths. PMID:29301379

  12. Stopping Criteria for Log-Domain Diffeomorphic Demons Registration: An Experimental Survey for Radiotherapy Application.

    PubMed

    Peroni, M; Golland, P; Sharp, G C; Baroni, G

    2016-02-01

    A crucial issue in deformable image registration is achieving a robust registration algorithm at a reasonable computational cost. Given the iterative nature of the optimization procedure, an algorithm must automatically detect convergence and stop the iterative process when most appropriate. This paper ranks the performance of three stopping criteria and six stopping value computation strategies for a log-domain demons deformable registration method, simulating both a coarse and a fine registration. The analyzed stopping criteria are: (a) velocity field update magnitude, (b) mean squared error, and (c) harmonic energy. Each stopping condition is formulated so that the user defines a threshold ε, which quantifies the residual error that is acceptable for the particular problem and calculation strategy. In this work, we did not aim at assigning a value to ε, but at giving insights into how to evaluate and set the threshold for a given exit strategy in a very popular registration scheme. Experiments on phantom and patient data demonstrate that comparing the optimization metric minimum over the most recent three iterations with the minimum over the fourth to sixth most recent iterations can be an appropriate algorithm stopping strategy. The harmonic energy was found to provide the best trade-off between robustness and speed of convergence for the analyzed registration method at coarse registration, but was outperformed by the mean squared error when all the original pixel information is used. This suggests the need to develop mathematically sound new convergence criteria in which both image and vector field information could be used to detect the actual convergence, which could be especially useful when considering multi-resolution registrations. Further work should also be dedicated to studying the performance of the same strategies in other deformable registration methods and body districts. © The Author(s) 2014.
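
    The stopping strategy recommended above, comparing the metric minimum over the three most recent iterations with the minimum over the fourth to sixth most recent, can be sketched as below; expressing the threshold as a relative improvement is an assumption of this sketch:

    ```python
    def should_stop(metric_history, eps):
        """Return True when the optimization metric has stopped improving.

        metric_history: list of metric values, one per iteration (lower is
        better); eps: user-chosen relative-improvement threshold."""
        if len(metric_history) < 6:
            return False
        recent = min(metric_history[-3:])     # best of last three iterations
        earlier = min(metric_history[-6:-3])  # best of the three before those
        return (earlier - recent) <= eps * abs(earlier)
    ```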

  13. Pre-processing, registration and selection of adaptive optics corrected retinal images.

    PubMed

    Ramaswamy, Gomathy; Devaney, Nicholas

    2013-07-01

    In this paper, the aim is to demonstrate enhanced processing of sequences of fundus images obtained using a commercial AO flood illumination system. The purpose of the work is to (1) correct for uneven illumination at the retina (2) automatically select the best quality images and (3) precisely register the best images. Adaptive optics corrected retinal images are pre-processed to correct uneven illumination using different methods; subtracting or dividing by the average filtered image, homomorphic filtering and a wavelet based approach. These images are evaluated to measure the image quality using various parameters, including sharpness, variance, power spectrum kurtosis and contrast. We have carried out the registration in two stages; a coarse stage using cross-correlation followed by fine registration using two approaches; parabolic interpolation on the peak of the cross-correlation and maximum-likelihood estimation. The angle of rotation of the images is measured using a combination of peak tracking and Procrustes transformation. We have found that a wavelet approach (Daubechies 4 wavelet at 6th level decomposition) provides good illumination correction with clear improvement in image sharpness and contrast. The assessment of image quality using a 'Designer metric' works well when compared to visual evaluation, although it is highly correlated with other metrics. In image registration, sub-pixel translation measured using parabolic interpolation on the peak of the cross-correlation function and maximum-likelihood estimation are found to give very similar results (RMS difference 0.047 pixels). We have confirmed that correcting rotation of the images provides a significant improvement, especially at the edges of the image. We observed that selecting the better quality frames (e.g. best 75% images) for image registration gives improved resolution, at the expense of poorer signal-to-noise. The sharpness map of the registered and de-rotated images shows increased sharpness over most of the field of view. Adaptive optics assisted images of the cone photoreceptors can be better pre-processed using a wavelet approach. These images can be assessed for image quality using a 'Designer Metric'. Two-stage image registration including correcting for rotation significantly improves the final image contrast and sharpness. © 2013 The Authors Ophthalmic & Physiological Optics © 2013 The College of Optometrists.
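
    A compact sketch of the two-stage shift estimate described above, an integer cross-correlation peak refined by a 1-D parabolic fit in each axis (boundary handling and the maximum-likelihood alternative are omitted; function names are my own):

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def subpixel_shift(ref, img):
        """Coarse-to-fine shift estimate between two equal-size frames:
        integer peak of the cross-correlation, then a parabolic fit through
        the peak and its two neighbours along each axis."""
        cc = fftconvolve(ref, img[::-1, ::-1], mode='same')  # cross-correlation
        py, px = np.unravel_index(np.argmax(cc), cc.shape)

        def parabolic(f_m, f_0, f_p):
            denom = f_m - 2 * f_0 + f_p
            return 0.0 if denom == 0 else 0.5 * (f_m - f_p) / denom

        dy = parabolic(cc[py - 1, px], cc[py, px], cc[py + 1, px])
        dx = parabolic(cc[py, px - 1], cc[py, px], cc[py, px + 1])
        return (py + dy - ref.shape[0] // 2, px + dx - ref.shape[1] // 2)
    ```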

  14. Writing next-generation display photomasks

    NASA Astrophysics Data System (ADS)

    Sandstrom, Tor; Wahlsten, Mikael; Park, Youngjin

    2016-10-01

    Recent years have seen fast technical development in the display area. Displays get ever higher pixel densities and the pixels get smaller. Current displays have over 800 PPI, and market forces will eventually drive densities to 2000 PPI or higher. The transistor backplanes also get more complex. OLED displays require 4-7 transistors per pixel instead of the typical 1-2 transistors used for LCDs, and they are significantly more sensitive to errors. New large-area maskwriters have been developed for masks used in high volume production of screens for state-of-the-art smartphones. Redesigned laser optics with higher NA and lower aberrations improve resolution and CD uniformity and reduce mura effects. The number of beams has been increased to maintain throughput despite the higher writing resolution. OLED displays are highly sensitive to placement errors, and registration in the writers has been improved. To verify the registration of produced masks, a separate metrology system has been developed. The metrology system is self-calibrated to high accuracy. The calibration is repeatable across machines and sites using Z-correction. The repeatability of the coordinate system makes it possible to standardize the coordinate system across an entire supply chain or indeed across the entire industry. In-house metrology is a commercial necessity for a high-end mask shop, but the users of the masks, the panel makers, would also benefit from having in-house metrology. It would act as the reference for their mask suppliers, give better predictive and post mortem diagnostic power for the panel process, and the metrology could be used to characterize and improve the entire production loop from data to panel.

  15. Influence of Articulating Paper Thickness on Occlusal Contacts Registration: A Preliminary Report.

    PubMed

    Brizuela-Velasco, Aritza; Álvarez-Arenal, Ángel; Ellakuria-Echevarria, Joseba; del Río-Highsmith, Jaime; Santamaría-Arrieta, Gorka; Martín-Blanco, Nerea

    2015-01-01

    The objective of this preliminary study was to determine if the occlusal contact surface registered with an articulating paper during fixed prosthodontic treatment was contained within the area marked on a thicker articulating paper. This information would optimize any necessary occlusal adjustment of a prosthesis' veneering material. A convenience sample of 15 patients who were being treated with an implant-supported fixed singleunit dental prosthesis was selected. Occlusal registrations were obtained from each patient using 12-μm, 40-μm, 80-μm, and 200-μm articulating paper. Photographs of the occlusal registrations were obtained, and pixel measurements of the surfaces were taken and overlapped for comparison. The results showed that the thicker the articulating paper, the larger the occlusal contact area obtained. The differences were statistically significant. In all cases, the occlusal registrations obtained with the thinnest articulating paper were contained within the area marked on the thickest articulating paper. The results suggested that the use of thin articulating papers (12-μm or 40-μm) can avoid unnecessary grinding of veneering material or teeth during occlusal adjustment.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fahim, Farah; Deptuch, Grzegorz; Shenai, Alpana

    The Vertically Integrated Photon Imaging Chip - Large (VIPIC-L) is a large-area, small-pixel (65 μm), 3D-integrated, photon counting ASIC with zero-suppressed or full-frame dead-time-less data readout. It features a data throughput of 14.4 Gbps per chip with a full-frame readout speed of 56 kframes/s in imaging mode. VIPIC-L contains a 192 x 192 pixel array; the total size of the chip is 1.248 cm x 1.248 cm with only a 5 μm periphery, and it contains about 120M transistors. A 1.3M pixel camera module will be developed by arranging a 6 x 6 array of 3D VIPIC-L's bonded to a large-area silicon sensor on the analog side and to a readout board on the digital side. The readout board hosts a bank of FPGAs, one per VIPIC-L, to allow processing of up to 0.7 Tbps of raw data produced by the camera.

  17. 41 CFR 102-33.420 - How must we declassify an aircraft?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... registration form from the aircraft, complete the reverse side of the registration form, and send both... an aircraft? 102-33.420 Section 102-33.420 Public Contracts and Property Management Federal Property... GOVERNMENT AIRCRAFT Reporting Information on Government Aircraft Federal Inventory Data § 102-33.420 How must...

  18. 41 CFR 102-33.420 - How must we declassify an aircraft?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... registration form from the aircraft, complete the reverse side of the registration form, and send both... an aircraft? 102-33.420 Section 102-33.420 Public Contracts and Property Management Federal Property... GOVERNMENT AIRCRAFT Reporting Information on Government Aircraft Federal Inventory Data § 102-33.420 How must...

  19. 41 CFR 102-33.420 - How must we declassify an aircraft?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... registration form from the aircraft, complete the reverse side of the registration form, and send both... an aircraft? 102-33.420 Section 102-33.420 Public Contracts and Property Management Federal Property... GOVERNMENT AIRCRAFT Reporting Information on Government Aircraft Federal Inventory Data § 102-33.420 How must...

  20. 41 CFR 102-33.420 - How must we declassify an aircraft?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... registration form from the aircraft, complete the reverse side of the registration form, and send both... an aircraft? 102-33.420 Section 102-33.420 Public Contracts and Property Management Federal Property... GOVERNMENT AIRCRAFT Reporting Information on Government Aircraft Federal Inventory Data § 102-33.420 How must...

  1. Synthetic Generation of Myocardial Blood-Oxygen-Level-Dependent MRI Time Series via Structural Sparse Decomposition Modeling

    PubMed Central

    Rusu, Cristian; Morisi, Rita; Boschetto, Davide; Dharmakumar, Rohan; Tsaftaris, Sotirios A.

    2014-01-01

    This paper aims to identify approaches that generate appropriate synthetic data (computer generated) for Cardiac Phase-resolved Blood-Oxygen-Level-Dependent (CP–BOLD) MRI. CP–BOLD MRI is a new contrast agent- and stress-free approach for examining changes in myocardial oxygenation in response to coronary artery disease. However, since signal intensity changes are subtle, rapid visualization is not possible with the naked eye. Quantifying and visualizing the extent of disease relies on myocardial segmentation and registration to isolate the myocardium and establish temporal correspondences and ischemia detection algorithms to identify temporal differences in BOLD signal intensity patterns. If transmurality of the defect is of interest pixel-level analysis is necessary and thus a higher precision in registration is required. Such precision is currently not available affecting the design and performance of the ischemia detection algorithms. In this work, to enable algorithmic developments of ischemia detection irrespective to registration accuracy, we propose an approach that generates synthetic pixel-level myocardial time series. We do this by (a) modeling the temporal changes in BOLD signal intensity based on sparse multi-component dictionary learning, whereby segmentally derived myocardial time series are extracted from canine experimental data to learn the model; and (b) demonstrating the resemblance between real and synthetic time series for validation purposes. We envision that the proposed approach has the capacity to accelerate development of tools for ischemia detection while markedly reducing experimental costs so that cardiac BOLD MRI can be rapidly translated into the clinical arena for the noninvasive assessment of ischemic heart disease. PMID:24691119

  2. Synthetic generation of myocardial blood-oxygen-level-dependent MRI time series via structural sparse decomposition modeling.

    PubMed

    Rusu, Cristian; Morisi, Rita; Boschetto, Davide; Dharmakumar, Rohan; Tsaftaris, Sotirios A

    2014-07-01

    This paper aims to identify approaches that generate appropriate synthetic data (computer generated) for cardiac phase-resolved blood-oxygen-level-dependent (CP-BOLD) MRI. CP-BOLD MRI is a new contrast agent- and stress-free approach for examining changes in myocardial oxygenation in response to coronary artery disease. However, since signal intensity changes are subtle, rapid visualization is not possible with the naked eye. Quantifying and visualizing the extent of disease relies on myocardial segmentation and registration to isolate the myocardium and establish temporal correspondences and ischemia detection algorithms to identify temporal differences in BOLD signal intensity patterns. If transmurality of the defect is of interest pixel-level analysis is necessary and thus a higher precision in registration is required. Such precision is currently not available affecting the design and performance of the ischemia detection algorithms. In this work, to enable algorithmic developments of ischemia detection irrespective to registration accuracy, we propose an approach that generates synthetic pixel-level myocardial time series. We do this by 1) modeling the temporal changes in BOLD signal intensity based on sparse multi-component dictionary learning, whereby segmentally derived myocardial time series are extracted from canine experimental data to learn the model; and 2) demonstrating the resemblance between real and synthetic time series for validation purposes. We envision that the proposed approach has the capacity to accelerate development of tools for ischemia detection while markedly reducing experimental costs so that cardiac BOLD MRI can be rapidly translated into the clinical arena for the noninvasive assessment of ischemic heart disease.

  3. Demonstration of near real-time Sentinel-2A Landsat-8 registration

    NASA Astrophysics Data System (ADS)

    Yan, L.; Roy, D. P.; Huang, H.; Li, Z.; Zhang, H.

    2017-12-01

    The potential for near daily global medium-spatial-resolution optical wavelength remote sensing has been advanced by the availability of European Space Agency (ESA) Sentinel-2 data. Sentinel-2A (S2A) and Landsat-8 (L8) are known to have systematic misregistration errors due to factors including a Landsat geolocation reference discrepancy and an S2A satellite yaw orientation knowledge error (rectified in the recent S2A processing baseline V02.04). In order to undertake low temporal latency applications, such as change detection, near real-time sensor data registration is required. This study considered 2,459 S2A L1C tile images and 355 L8 Collection-1 images defined in UTM zone 35 acquired June to November 2016 over 700 × 1,200 km of Southern Africa. Misregistration characterizations among the S2A L1C tile time series and then among the Landsat-8 Collection-1 time series are first reported. Image matching was undertaken between near-infrared S2A 10 m image pairs and L8 image pairs (resampled to 10 m) using a recently published hierarchical image pyramid approach. The S2A V02.04 products had a 0.45 pixel (10 m) mean intra-misregistration, while the L8 Collection-1 images had a 0.12 pixel (10 m) mean intra-misregistration. Given these findings, we chose to register the S2A to the L8 data. Rather than registering individual images, which is not always robust to missing data, clouds or land surface changes, whole orbits falling over the same UTM zone were registered. A least-squares adjustment was applied using match points between S2A orbit images and L8 images as observations. Each orbit of S2A images was matched to multiple spatially overlapping and contemporaneous L8 images to generate affine transformation coefficients for its registration. This provided a registered S2A and L8 time series with 0.3 pixel (10 m) misregistration (2σ) and demonstrates a near real-time methodology that can be applied as new sensor data are collected.

  4. Band registration of tuneable frame format hyperspectral UAV imagers in complex scenes

    NASA Astrophysics Data System (ADS)

    Honkavaara, Eija; Rosnell, Tomi; Oliveira, Raquel; Tommaselli, Antonio

    2017-12-01

    A recent revolution in miniaturised sensor technology has provided markets with novel hyperspectral imagers operating in the frame format principle. In the case of unmanned aerial vehicle (UAV) based remote sensing, the frame format technology is highly attractive in comparison to the commonly utilised pushbroom scanning technology, because it offers better stability and the possibility to capture stereoscopic data sets, bringing an opportunity for 3D hyperspectral object reconstruction. Tuneable filters are one of the approaches for capturing multi- or hyperspectral frame images. The individual bands are not aligned when operating a sensor based on tuneable filters from a mobile platform, such as UAV, because the full spectrum recording is carried out in the time-sequential principle. The objective of this investigation was to study the aspects of band registration of an imager based on tuneable filters and to develop a rigorous and efficient approach for band registration in complex 3D scenes, such as forests. The method first determines the orientations of selected reference bands and reconstructs the 3D scene using structure-from-motion and dense image matching technologies. The bands, without orientation, are then matched to the oriented bands accounting the 3D scene to provide exterior orientations, and afterwards, hyperspectral orthomosaics, or hyperspectral point clouds, are calculated. The uncertainty aspects of the novel approach were studied. An empirical assessment was carried out in a forested environment using hyperspectral images captured with a hyperspectral 2D frame format camera, based on a tuneable Fabry-Pérot interferometer (FPI) on board a multicopter and supported by a high spatial resolution consumer colour camera. A theoretical assessment showed that the method was capable of providing band registration accuracy better than 0.5-pixel size. The empirical assessment proved the performance and showed that, with the novel method, most parts of the band misalignments were less than the pixel size. Furthermore, it was shown that the performance of the band alignment was dependent on the spatial distance from the reference band.

  5. Method to improve cancerous lesion detection sensitivity in a dedicated dual-head scintimammography system

    DOEpatents

    Kieper, Douglas Arthur [Seattle, WA; Majewski, Stanislaw [Morgantown, WV; Welch, Benjamin L [Hampton, VA

    2012-07-03

    An improved method for enhancing the contrast between background and lesion areas of a breast undergoing dual-head scintimammographic examination comprising: 1) acquiring a pair of digital images from a pair of small FOV or mini gamma cameras compressing the breast under examination from opposing sides; 2) inverting one of the pair of images to align or co-register with the other of the images to obtain co-registered pixel values; 3) normalizing the pair of images pixel-by-pixel by dividing pixel values from each of the two acquired images and the co-registered image by the average count per pixel in the entire breast area of the corresponding detector; and 4) multiplying the number of counts in each pixel by the value obtained in step 3 to produce a normalization enhanced two dimensional contrast map. This enhanced (increased contrast) contrast map enhances the visibility of minor local increases (uptakes) of activity over the background and therefore improves lesion detection sensitivity, especially of small lesions.
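
    A sketch of the normalization described in steps 1-4, where each image is divided pixel-by-pixel by its average count over the breast area and the normalized images are combined into a contrast map; combining the two detector views by multiplication is an assumption of this sketch, as are the names:

    ```python
    import numpy as np

    def contrast_map(det1, det2_coreg, breast_mask):
        """Normalization-enhanced contrast map for a dual-head acquisition.

        det1: image from one detector; det2_coreg: image from the opposing
        detector, inverted/co-registered to det1; breast_mask: boolean mask
        of the breast area used to compute the average count per pixel."""
        norm1 = det1 / det1[breast_mask].mean()
        norm2 = det2_coreg / det2_coreg[breast_mask].mean()
        return norm1 * norm2      # enhanced two-dimensional contrast map
    ```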

  6. Method to improve cancerous lesion detection sensitivity in a dedicated dual-head scintimammography system

    DOEpatents

    Kieper, Douglas Arthur [Newport News, VA; Majewski, Stanislaw [Yorktown, VA; Welch, Benjamin L [Hampton, VA

    2008-10-28

    An improved method for enhancing the contrast between background and lesion areas of a breast undergoing dual-head scintimammographic examination comprising: 1) acquiring a pair of digital images from a pair of small FOV or mini gamma cameras compressing the breast under examination from opposing sides; 2) inverting one of the pair of images to align or co-register with the other of the images to obtain co-registered pixel values; 3) normalizing the pair of images pixel-by-pixel by dividing pixel values from each of the two acquired images and the co-registered image by the average count per pixel in the entire breast area of the corresponding detector; and 4) multiplying the number of counts in each pixel by the value obtained in step 3 to produce a normalization enhanced two dimensional contrast map. This enhanced (increased contrast) contrast map enhances the visibility of minor local increases (uptakes) of activity over the background and therefore improves lesion detection sensitivity, especially of small lesions.

  7. Towards adaptive radiotherapy for head and neck patients: validation of an in-house deformable registration algorithm

    NASA Astrophysics Data System (ADS)

    Veiga, C.; McClelland, J.; Moinuddin, S.; Ricketts, K.; Modat, M.; Ourselin, S.; D'Souza, D.; Royle, G.

    2014-03-01

    The purpose of this work is to validate an in-house deformable image registration (DIR) algorithm for adaptive radiotherapy of head and neck patients. We aim to use the registrations to estimate the "dose of the day" and assess the need to replan. NiftyReg is an open-source implementation of the B-splines deformable registration algorithm, developed in our institution. We registered a planning CT to a CBCT acquired midway through treatment for 5 HN patients that required replanning. We investigated 16 different parameter settings that previously showed promising results. To assess the registrations, structures delineated in the CT were warped and compared with contours manually drawn by the same clinical expert on the CBCT. This structure set contained vertebral bodies and soft tissue. The Dice similarity coefficient (DSC), overlap index (OI), centroid position and distance between structure surfaces were calculated for every registration, and a set of parameters that produces good results for all datasets was found. We achieve a median value of 0.845 in DSC, 0.889 in OI, an error smaller than 2 mm in centroid position, and over 90% of the warped surface pixels lie within 2 mm of the manually drawn ones. By using appropriate DIR parameters, we are able to register the planning geometry (pCT) to the daily geometry (CBCT).

  8. An incompressible fluid flow model with mutual information for MR image registration

    NASA Astrophysics Data System (ADS)

    Tsai, Leo; Chang, Herng-Hua

    2013-03-01

    Image registration is one of the fundamental and essential tasks within image processing. It is a process of determining the correspondence between structures in two images, which are called the template image and the reference image, respectively. The challenge of registration is to find an optimal geometric transformation between corresponding image data. This paper develops a new MR image registration algorithm that uses a closed incompressible viscous fluid model associated with mutual information. In our approach, we treat the image pixels as the fluid elements of a viscous fluid flow governed by the nonlinear Navier-Stokes partial differential equation (PDE). We replace the pressure term with a body force, used mainly to guide the transformation, with a weighting coefficient expressed by the mutual information between the template and reference images. To solve this modified Navier-Stokes PDE, we adopted the fast numerical techniques proposed by Seibold. The registration process of updating the body force, the velocity and deformation fields is repeated until the mutual information weight reaches a prescribed threshold. We applied our approach to the BrainWeb and real MR images. Consistent with the theory of the proposed fluid model, we found that our method accurately transformed the template images into the reference images based on the intensity flow. Experimental results indicate that our method has potential in a wide variety of medical image registration applications.

  9. Use of Multi-Resolution Wavelet Feature Pyramids for Automatic Registration of Multi-Sensor Imagery

    NASA Technical Reports Server (NTRS)

    Zavorin, Ilya; LeMoigne, Jacqueline

    2003-01-01

    The problem of image registration, or alignment of two or more images representing the same scene or object, has to be addressed in various disciplines that employ digital imaging. In the area of remote sensing, just like in medical imaging or computer vision, it is necessary to design robust, fast and widely applicable algorithms that would allow automatic registration of images generated by various imaging platforms at the same or different times, and that would provide sub-pixel accuracy. One of the main issues that needs to be addressed when developing a registration algorithm is what type of information should be extracted from the images being registered, to be used in the search for the geometric transformation that best aligns them. The main objective of this paper is to evaluate several wavelet pyramids that may be used both for invariant feature extraction and for representing images at multiple spatial resolutions to accelerate registration. We find that the band-pass wavelets obtained from the Steerable Pyramid due to Simoncelli perform better than two types of low-pass pyramids when the images being registered have relatively small amount of nonlinear radiometric variations between them. Based on these findings, we propose a modification of a gradient-based registration algorithm that has recently been developed for medical data. We test the modified algorithm on several sets of real and synthetic satellite imagery.

  10. Portable lensless wide-field microscopy imaging platform based on digital inline holography and multi-frame pixel super-resolution

    PubMed Central

    Sobieranski, Antonio C; Inci, Fatih; Tekin, H Cumhur; Yuksekkaya, Mehmet; Comunello, Eros; Cobra, Daniel; von Wangenheim, Aldo; Demirci, Utkan

    2017-01-01

    In this paper, an irregular displacement-based lensless wide-field microscopy imaging platform is presented by combining digital in-line holography and computational pixel super-resolution using multi-frame processing. The samples are illuminated by a nearly coherent illumination system, and the hologram shadows are projected onto a complementary metal-oxide semiconductor-based imaging sensor. To increase the resolution, a multi-frame pixel super-resolution approach is employed to produce a single holographic image from multiple frame observations of the scene, with small planar displacements. Displacements are resolved by a hybrid approach: (i) alignment of the LR images by a fast feature-based registration method, and (ii) fine adjustment of the sub-pixel information using a continuous optimization approach designed to find the global optimum solution. A numerical phase-retrieval method is applied to decode the signal and reconstruct the morphological details of the analyzed sample. The presented approach was evaluated with various biological samples including sperm and platelets, whose dimensions are on the order of a few microns. The obtained results demonstrate a spatial resolution of 1.55 µm over a field-of-view of ≈30 mm2. PMID:29657866
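
    The hybrid displacement estimation above combines feature-based alignment with continuous sub-pixel refinement. As a simple stand-in for the sub-pixel step, upsampled phase correlation is often used; a sketch with scikit-image (not the authors' method; parameters are illustrative):

        import numpy as np
        from skimage.registration import phase_cross_correlation

        def estimate_subpixel_shift(reference, moving, upsample_factor=20):
            # Returns the (row, col) displacement of `moving` relative to `reference`
            # to a precision of 1/upsample_factor of a pixel.
            shift, error, _ = phase_cross_correlation(
                reference, moving, upsample_factor=upsample_factor)
            return shift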

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campione, Salvatore; Warne, Larry K.; Jorgenson, Roy E.

    Here, we investigate full-wave simulations of realistic implementations of multifunctional nanoantenna enabled detectors (NEDs). We focus on a 2 × 2 pixelated array structure that supports two wavelengths of operation. We design each resonating structure independently using full-wave simulations with periodic boundary conditions mimicking the whole infinite array. We then construct a supercell made of a 2 × 2 pixelated array with periodic boundary conditions mimicking the full NED; in this case, however, each pixel comprises 10-20 antennas per side. In this way, the cross-talk between contiguous pixels is accounted for in our simulations. We observe that, even though there are finite extent effects, the pixels work as designed, each responding at the respective wavelength of operation. This allows us to stress that realistic simulations of multifunctional NEDs need to be performed to verify the design functionality by taking into account finite extent and cross-talk effects.

  12. It's not the pixel count, you fool

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2012-01-01

    The first thing a "marketing guy" asks the digital camera engineer is "how many pixels does it have?", for we need as many megapixels as possible since the other guys are killing us with their "umpteen" megapixel pocket-sized digital cameras. And so it goes until the pixels get smaller and smaller in order to inflate the pixel count in the never-ending pixel wars. These small pixels just are not very good. The truth of the matter is that the most important feature of digital cameras in the last five years is the automatic motion control to stabilize the image on the sensor, along with some very sophisticated image processing. All the rest has been hype and some "cool" design. What is the future for digital imaging, and what will drive growth of camera sales (not counting the cell phone cameras which totally dominate the market in terms of camera sales) and, more importantly, after-sales profits? Well, sit in on the Dark Side of Color and find out what is being done to increase the after-sales profits, and don't be surprised if it has been done long ago in some basement lab of a photographic company and, of course, before its time.

  13. Back-illuminated imager and method for making electrical and optical connections to same

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata (Inventor)

    2010-01-01

    Methods for bringing or exposing metal pads or traces to the backside of a backside-illuminated imager allow the pads or traces to reside on the illumination side for electrical connection. These methods provide a solution to a key packaging problem for backside thinned imagers. The methods also provide alignment marks for integrating color filters and microlenses to the imager pixels residing on the frontside of the wafer, enabling high performance multispectral and high sensitivity imagers, including those with extremely small pixel pitch. In addition, the methods incorporate a passivation layer for protection of devices against external contamination, and allow interface trap density reduction via thermal annealing. Backside-illuminated imagers with illumination side electrical connections are also disclosed.

  14. Cassini Targets a Propeller in Saturn A Ring

    NASA Image and Video Library

    2017-03-02

    NASA's Cassini spacecraft captured these remarkable views of a propeller feature in Saturn's A ring on Feb. 21, 2017. These are the sharpest images taken of a propeller so far, and show an unprecedented level of detail. The propeller is nicknamed "Santos-Dumont," after the pioneering Brazilian-French aviator. This observation was Cassini's first targeted flyby of a propeller. The views show the object from vantage points on opposite sides of the rings. The top image looks toward the rings' sunlit side, while the bottom image shows the unilluminated side, where sunlight filters through the backlit ring. The two images presented as figure 1 are reprojected at the same scale (0.13 mile or 207 meters per pixel) in order to facilitate comparison. The original images, which have slightly different scales, are also provided here, without reprojection, as figure 2; the sunlit-side image is at left, while the unlit-side image is at right. Cassini scientists have been tracking the orbit of this object for the past decade, tracing the effect that the ring has upon it. Now, as Cassini has moved in close to the ring as part of its ring-grazing orbits, it was able to obtain this extreme close-up view of the propeller, enabling researchers to examine its effects on the ring. These views, and others like them, will inform models and studies in new ways going forward. Like a frosted window, Saturn's rings look different depending on whether they are seen fully sunlit or backlit. On the lit side, the rings look darker where there is less material to reflect sunlight. On the unlit side, some regions look darker because there is less material, but other regions look dark because there is so much material that the ring becomes opaque. Observing the same propeller on both the lit and unlit sides allows scientists to gather richer information about how the moonlet affects the ring. For example, in the unlit-side view, the broad, dark band through the middle of the propeller seems to be a combination of both empty and opaque regions. The propeller's central moonlet would only be a couple of pixels across in these images, and may not actually be resolved here. The lit-side image shows that a bright, narrow band of material connects the moonlet directly to the larger ring, in agreement with dynamical models. That same thin band of material may also be obscuring the moonlet from view. Lengthwise along the propeller is a gap in the ring that the moonlet has pried open. The gap appears dark on both the lit and unlit sides. Flanking the gap near the moonlet are regions of enhanced density, which appear bright on the lit side and more mottled on the unlit side. One benefit of the high resolution of these images is that, for the first time, wavy edges are clearly visible in the gap. These waves are also expected from dynamical models, and they emphasize that the gap must be sharp-edged. Furthermore, the distance between the wave crests tells scientists the width of the gap (1.2 miles or 2 kilometers), which in turn reveals the mass of the central moonlet. From these measurements, Cassini imaging scientists deduce that the moonlet's mass is comparable to that of a snowball about 0.6 mile (1 kilometer) wide. For the original images, the lit-side image has a scale of 0.33 mile (530 meters) per pixel in the radial (or outward from Saturn) direction and 0.44 mile (710 meters) per pixel in the azimuthal (or around Saturn) direction. 
The different scales are the result of Cassini's vantage point being off to the side of the propeller, rather than directly above it. The unlit-side image has a scale of 0.25 mile (410 meters) per pixel in both directions. In order to preserve its original level of detail, the image has not been cleaned of bright blemishes due to cosmic rays and to charged particle radiation from Saturn. http://photojournal.jpl.nasa.gov/catalog/PIA21433

  15. Poster – 41: External marker block placement on the breast or chest wall for left-sided deep inspiration breath-hold radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conroy, Leigh; Guebert, Alexandra; Smith, Wendy

    Purpose: We investigate DIBH breast radiotherapy using the Real-time Position Management (RPM) system with the marker-block placed on the target breast or chest wall. Methods: We measured surface dose for three different RPM marker-blocks using EBT3 Gafchromic film at 0° and 30° incidence. A registration study was performed to determine the breast surface position that best correlates with overall internal chest wall position. Surface and chest wall contours from MV images of the medial tangent field were extracted for 15 patients. Surface contours were divided into three potential marker-block positions on the breast: Superior, Middle, and Inferior. Translational registration was used to align the partial contours to the first-fraction contour. Each resultant transformation matrix was applied to the chest wall contour, and the minimum distance between the reference chest wall contour and the transformed chest wall contour was evaluated for each pixel. Results: The measured surface doses for the 2-dot, 6-dot, and 4-dot marker-blocks at 0° incidence were 74%, 71%, and 77% of the dose at dmax, respectively. At 30° beam incidence this increased to 76%, 72%, and 81%. The best external surface position was patient and fraction dependent, with no consistent best choice. Conclusions: The increase in surface dose directly under the RPM block is approximately equivalent to 3 mm of bolus. No marker-block position on the breast surface was found to be more representative of overall chest wall motion; therefore block positional stability and reproducibility can be used to determine optimal placement on the breast or chest wall.

  16. Registration of Aerial Optical Images with LiDAR Data Using the Closest Point Principle and Collinearity Equations.

    PubMed

    Huang, Rongyong; Zheng, Shunyi; Hu, Kun

    2018-06-01

    Registration of large-scale optical images with airborne LiDAR data is the basis of the integration of photogrammetry and LiDAR. However, geometric misalignments still exist between some aerial optical images and airborne LiDAR point clouds. To eliminate such misalignments, we extended a method for registering close-range optical images with terrestrial LiDAR data to a variety of large-scale aerial optical images and airborne LiDAR data. The fundamental principle is to minimize the distances from the photogrammetric matching points to the terrestrial LiDAR data surface. In addition to a satisfactory efficiency of about 79 s per 6732 × 8984 image, the experimental results show that the unit weighted root mean square (RMS) of the image points is able to reach a sub-pixel level (0.45 to 0.62 pixel), and that the actual horizontal and vertical accuracy can be greatly improved to a high level of 1/4–1/2 (0.17–0.27 m) and 1/8–1/4 (0.10–0.15 m) of the average LiDAR point distance, respectively. Finally, the method proves to be accurate, feasible, efficient, and practical for a variety of large-scale aerial optical images and LiDAR data.

  17. A New Lunar Digital Elevation Model from the Lunar Orbiter Laser Altimeter and SELENE Terrain Camera

    NASA Technical Reports Server (NTRS)

    Barker, M. K.; Mazarico, E.; Neumann, G. A.; Zuber, M. T.; Haruyama, J.; Smith, D. E.

    2015-01-01

    We present an improved lunar digital elevation model (DEM) covering latitudes within ±60°, at a horizontal resolution of 512 pixels per degree (approximately 60 m at the equator) and a typical vertical accuracy of approximately 3 to 4 m. This DEM is constructed from approximately 4.5 × 10⁹ geodetically-accurate topographic heights from the Lunar Orbiter Laser Altimeter (LOLA) onboard the Lunar Reconnaissance Orbiter, to which we co-registered 43,200 stereo-derived DEMs (each 1° × 1°) from the SELENE Terrain Camera (TC) (approximately 10¹⁰ pixels total). After co-registration, approximately 90% of the TC DEMs show root-mean-square vertical residuals with the LOLA data of < 5 m, compared to approximately 50% prior to co-registration. We use the co-registered TC data to estimate and correct orbital and pointing geolocation errors from the LOLA altimetric profiles (typically amounting to < 10 m horizontally and < 1 m vertically). By combining both co-registered datasets, we obtain a near-global DEM with high geodetic accuracy, and without the need for surface interpolation. We evaluate the resulting LOLA + TC merged DEM (designated as "SLDEM2015") with particular attention to quantifying seams and crossover errors.
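
    The co-registration step above amounts to finding, for each TC tile, the shift that minimizes its vertical residuals against the LOLA heights. A toy brute-force version of that search (Python/SciPy; the shift range, step and bilinear sampling are illustrative simplifications of the actual procedure):

        import numpy as np
        from scipy.ndimage import map_coordinates

        def best_horizontal_shift(tc_dem, lola_rows, lola_cols, lola_heights,
                                  max_shift=2.0, step=0.25):
            # Sample the TC DEM at the (shifted) LOLA footprints with bilinear
            # interpolation and keep the shift giving the lowest RMS vertical residual.
            shifts = np.arange(-max_shift, max_shift + step, step)
            best = (np.inf, 0.0, 0.0)
            for dr in shifts:
                for dc in shifts:
                    sampled = map_coordinates(tc_dem, [lola_rows + dr, lola_cols + dc],
                                              order=1, mode="nearest")
                    rms = np.sqrt(np.mean((sampled - lola_heights) ** 2))
                    if rms < best[0]:
                        best = (rms, dr, dc)
            return best  # (RMS residual in metres, row shift, col shift) in TC pixels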

  18. Resolution-independent surface rendering using programmable graphics hardware

    DOEpatents

    Loop, Charles T.; Blinn, James Frederick

    2008-12-16

    Surfaces defined by a Bezier tetrahedron, and in particular quadric surfaces, are rendered on programmable graphics hardware. Pixels are rendered through triangular sides of the tetrahedra and locations on the shapes, as well as surface normals for lighting evaluations, are computed using pixel shader computations. Additionally, vertex shaders are used to aid interpolation over a small number of values as input to the pixel shaders. Through this, rendering of the surfaces is performed independently of viewing resolution, allowing for advanced level-of-detail management. By individually rendering tetrahedrally-defined surfaces which together form complex shapes, the complex shapes can be rendered in their entirety.

  19. Fully automated registration of first-pass myocardial perfusion MRI using independent component analysis.

    PubMed

    Milles, J; van der Geest, R J; Jerosch-Herold, M; Reiber, J H C; Lelieveldt, B P F

    2007-01-01

    This paper presents a novel method for registration of cardiac perfusion MRI. The presented method successfully corrects for breathing motion without any manual interaction, using Independent Component Analysis (ICA) to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of ICA, and used to compute the displacement caused by breathing for each frame. Qualitative and quantitative validation of the method is carried out using 46 clinical-quality, short-axis perfusion MR datasets comprising 100 images each. Validation experiments showed a reduction of the average LV motion from 1.26 ± 0.87 to 0.64 ± 0.46 pixels. Time-intensity curves are also improved after registration, with the average error between the registered data and the manual gold standard reduced from 2.65 ± 7.89% to 0.87 ± 3.88%. We conclude that this fully automatic ICA-based method shows excellent accuracy, robustness and computation speed, adequate for use in a clinical environment.

  20. 75 FR 3416 - Fisheries in the Western Pacific; Pelagic Fisheries; Vessel Identification Requirements

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-21

    ... other registration number) on the port and starboard sides of the deckhouse or hull, and on an...) to display its IRCS on the port and starboard sides of the hull or superstructure, and on a deck... port and starboard sides of the deckhouse or hull, and on an appropriate weather deck, so as to be...

  1. Students: Design a T-Shirt or Submit a Video to Win 2013 Fall Meeting Registration

    NASA Astrophysics Data System (ADS)

    Noreika, J. Matt

    2013-05-01

    For the second consecutive year, AGU is holding its annual Student T-Shirt Design and Student Video competitions. The two contests, running simultaneously, offer AGU student members a chance to express their creative side. The winners will receive free registration for the 2013 Fall Meeting in San Francisco, Calif.

  2. Minimization of color shift generated in RGBW quad structure.

    NASA Astrophysics Data System (ADS)

    Kim, Hong Chul; Yun, Jae Kyeong; Baek, Heume-Il; Kim, Ki Duk; Oh, Eui Yeol; Chung, In Jae

    2005-03-01

    The purpose of RGBW Quad Structure Technology is to realize higher brightness than that of a normal panel (RGB stripe structure) by adding a white sub-pixel to the existing RGB stripe structure. However, there is a side effect called 'color shift' resulting from the increased brightness. This side effect degrades general color characteristics through changes in 'Hue', 'Brightness' and 'Saturation' compared with the existing RGB stripe structure. In particular, skin-tone colors tend to get darker relative to a normal panel. We've tried to minimize 'color shift' through the use of a LUT (Look Up Table) for linear arithmetic processing of the input data, data bit expansion to 12 bits to minimize arithmetic tolerance, and a brightness weight of the white sub-pixel on each R, G, B pixel. The objective of this study is to minimize the Δu'v' value (commonly used to represent a color difference), the quantitative measure of the color difference between the RGB stripe structure and the RGBW quad structure, and keep it below the 0.01 level (from the existing 0.02 or higher) using the Macbeth colorchecker, a general reference for color characteristics.
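
    The Δu'v' figure of merit used above is the Euclidean distance between CIE 1976 u'v' chromaticity coordinates. A small Python sketch (assuming the two colors are given as XYZ tristimulus values; the function name is illustrative):

        import numpy as np

        def delta_u_prime_v_prime(xyz_1, xyz_2):
            # u' = 4X / (X + 15Y + 3Z), v' = 9Y / (X + 15Y + 3Z)
            def uv_prime(xyz):
                x, y, z = xyz
                denom = x + 15.0 * y + 3.0 * z
                return 4.0 * x / denom, 9.0 * y / denom
            u1, v1 = uv_prime(xyz_1)
            u2, v2 = uv_prime(xyz_2)
            return float(np.hypot(u1 - u2, v1 - v2))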

  3. The use of virtual fiducials in image-guided kidney surgery

    NASA Astrophysics Data System (ADS)

    Glisson, Courtenay; Ong, Rowena; Simpson, Amber; Clark, Peter; Herrell, S. D.; Galloway, Robert

    2011-03-01

    The alignment of image-space to physical-space lies at the heart of all image-guided procedures. In intracranial surgery, point-based registrations can be used with either skin-affixed or bone-implanted extrinsic objects called fiducial markers. The advantages of point-based registration techniques are that they are robust, fast, and have a well-developed mathematical foundation for the assessment of registration quality. In abdominal image-guided procedures such techniques have not been successful. It is difficult to accurately locate sufficient homologous intrinsic points in image-space and physical-space, and the implantation of extrinsic fiducial markers would constitute "surgery before the surgery." Image-space to physical-space registration for abdominal organs has therefore been dominated by surface-based registration techniques which are iterative, prone to local minima, sensitive to initial pose, and sensitive to percentage coverage of the physical surface. In our work in image-guided kidney surgery we have developed a composite approach using "virtual fiducials." In an open kidney surgery, the perirenal fat is removed and the surface of the kidney is dotted using a surgical marker. A laser range scanner (LRS) is used to obtain a surface representation and a matching high definition photograph. A surface-to-surface registration is performed using a modified iterative closest point (ICP) algorithm. The dots are extracted from the high definition image and assigned the three-dimensional values from the LRS pixels over which they lie. As the surgery proceeds, we can then use point-based registrations to re-register the spaces and track deformations due to vascular clamping and surgical tractions.
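
    Once the inked dots have homologous 3-D coordinates in both spaces, the point-based re-registration can be solved in closed form. One standard least-squares rigid solution via the SVD (a Python/NumPy sketch, not necessarily the authors' exact formulation) is:

        import numpy as np

        def rigid_point_registration(source_pts, target_pts):
            # source_pts, target_pts: N x 3 arrays of corresponding fiducial positions.
            # Returns rotation R and translation t such that R @ p + t maps source onto target.
            src_c = source_pts.mean(axis=0)
            tgt_c = target_pts.mean(axis=0)
            H = (source_pts - src_c).T @ (target_pts - tgt_c)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
            R = Vt.T @ D @ U.T
            t = tgt_c - R @ src_c
            return R, t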

  4. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2002-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.
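
    As a rough illustration of the block-translation step described above (the patent does not prescribe a particular matching criterion, so the sum-of-squared-differences search used here is an assumption), in Python:

        import numpy as np

        def block_translation(key_block, new_field, top_left, search=8):
            # Exhaustive integer-pixel search for the displacement of one nested pixel
            # block from the key video field within a +/- `search` pixel window.
            r0, c0 = top_left
            h, w = key_block.shape
            best, best_shift = np.inf, (0, 0)
            for dr in range(-search, search + 1):
                for dc in range(-search, search + 1):
                    if r0 + dr < 0 or c0 + dc < 0:
                        continue
                    cand = new_field[r0 + dr:r0 + dr + h, c0 + dc:c0 + dc + w]
                    if cand.shape != key_block.shape:
                        continue
                    ssd = np.sum((cand.astype(float) - key_block) ** 2)
                    if ssd < best:
                        best, best_shift = ssd, (dr, dc)
            return best_shift  # (row, col) translation of the block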

  5. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2003-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.

  6. Preliminary Results of 3D-DDTC Pixel Detectors for the ATLAS Upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    La Rosa, Alessandro; /CERN; Boscardin, M.

    2012-04-04

    3D Silicon sensors fabricated at FBK-irst with the Double-side Double Type Column (DDTC) approach and columnar electrodes only partially etched through p-type substrates were tested in the laboratory and in a 1.35 Tesla magnetic field with a 180 GeV pion beam at the CERN SPS. The substrate thickness of the sensors is about 200 μm, and different column depths are available, with overlaps between junction columns (etched from the front side) and ohmic columns (etched from the back side) in the range from 110 μm to 150 μm. The devices under test were bump bonded to the ATLAS Pixel readout chip (FEI3) at SELEX SI (Rome, Italy). We report leakage current and noise measurements, results of functional tests with ²⁴¹Am γ-ray sources, charge collection tests with a ⁹⁰Sr β-source, and an overview of preliminary results from the CERN beam test.

  7. Laboratory and testbeam results for thin and epitaxial planar sensors for HL-LHC

    DOE PAGES

    Bubna, M.; Bolla, G.; Bortoletto, D.; ...

    2015-08-03

    The High-Luminosity LHC (HL-LHC) upgrade of the CMS pixel detector will require the development of novel pixel sensors which can withstand the increase in instantaneous luminosity to L = 5 × 10³⁴ cm⁻²s⁻¹ and collect ~3000 fb⁻¹ of data. The innermost layer of the pixel detector will be exposed to doses of about 10¹⁶ n_eq/cm². Hence, new pixel sensors with improved radiation hardness need to be investigated. A variety of silicon materials (Float-zone, Magnetic Czochralski and Epitaxially grown silicon), with thicknesses from 50 μm to 320 μm in p-type and n-type substrates, have been fabricated using single-sided processing. The effect of reducing the sensor active thickness to improve radiation hardness by using various techniques (deep diffusion, wafer thinning, or growing epitaxial silicon on a handle wafer) has been studied. Furthermore, the results for electrical characterization, charge collection efficiency, and position resolution of various n-on-p pixel sensors with different substrates and different pixel geometries (different bias dot gaps and pixel implant sizes) will be presented.

  8. A 75-ps Gated CMOS Image Sensor with Low Parasitic Light Sensitivity

    PubMed Central

    Zhang, Fan; Niu, Hanben

    2016-01-01

    In this study, a 40 × 48 pixel global shutter complementary metal-oxide-semiconductor (CMOS) image sensor with an adjustable shutter time as low as 75 ps was implemented using a 0.5-μm mixed-signal CMOS process. The implementation consisted of a continuous contact ring around each p+/n-well photodiode in the pixel array in order to apply sufficient light shielding. The parasitic light sensitivity of the in-pixel storage node was measured to be 1/8.5 × 10⁷ when illuminated by a 405-nm diode laser and 1/1.4 × 10⁴ when illuminated by a 650-nm diode laser. The pixel pitch was 24 μm, the size of the square p+/n-well photodiode in each pixel was 7 μm per side, the measured random readout noise was 217 e− rms, and the measured dynamic range of the pixel of the designed chip was 5500:1. The type of gated CMOS image sensor (CIS) that is proposed here can be used in ultra-fast framing cameras to observe non-repeatable fast-evolving phenomena. PMID:27367699

  9. A 75-ps Gated CMOS Image Sensor with Low Parasitic Light Sensitivity.

    PubMed

    Zhang, Fan; Niu, Hanben

    2016-06-29

    In this study, a 40 × 48 pixel global shutter complementary metal-oxide-semiconductor (CMOS) image sensor with an adjustable shutter time as low as 75 ps was implemented using a 0.5-μm mixed-signal CMOS process. The implementation consisted of a continuous contact ring around each p+/n-well photodiode in the pixel array in order to apply sufficient light shielding. The parasitic light sensitivity of the in-pixel storage node was measured to be 1/8.5 × 10⁷ when illuminated by a 405-nm diode laser and 1/1.4 × 10⁴ when illuminated by a 650-nm diode laser. The pixel pitch was 24 μm, the size of the square p+/n-well photodiode in each pixel was 7 μm per side, the measured random readout noise was 217 e(-) rms, and the measured dynamic range of the pixel of the designed chip was 5500:1. The type of gated CMOS image sensor (CIS) that is proposed here can be used in ultra-fast framing cameras to observe non-repeatable fast-evolving phenomena.

  10. Development of 4-Sides Buttable CdTe-ASIC Hybrid Module for X-ray Flat Panel Detector

    NASA Astrophysics Data System (ADS)

    Tamaki, Mitsuru; Mito, Yoshio; Shuto, Yasuhiro; Kiyuna, Tatsuya; Yamamoto, Masaya; Sagae, Kenichi; Kina, Tooru; Koizumi, Tatsuhiro; Ohno, Ryoichi

    2009-08-01

    A 4-sides buttable CdTe-ASIC hybrid module suitable for use in an X-ray flat panel detector (FPD) has been developed by applying through silicon via (TSV) technology to the readout ASIC. The ASIC has 128 × 256 channels of charge integration type readout circuitry and an area of 12.9 mm × 25.7 mm. The CdTe sensor of 1 mm thickness, having the same area and a pixel pitch of 100 μm, was fabricated from a Cl-doped CdTe single crystal grown by the traveling heater method (THM). The CdTe pixel sensor was then hybridized with the ASIC using bump-bonding technology. The basic performance of this 4-sides buttable module was evaluated by taking X-ray images, and it was compared with that of a commercially available indirect type CsI(Tl) FPD. A prototype CdTe FPD was made by assembling 9 pieces of the 4-sides buttable modules into a 3 × 3 array in which the neighboring modules were mounted on the interface board. The FPD covers an active area of 77 mm × 39 mm. The results showed the great potential of this 4-sides buttable module for a new real-time X-ray FPD with high spatial resolution.

  11. Chromatic Modulator for a High-Resolution CCD or APS

    NASA Technical Reports Server (NTRS)

    Hartley, Frank; Hull, Anthony

    2008-01-01

    A chromatic modulator has been proposed to enable the separate detection of the red, green, and blue (RGB) color components of the same scene by a single charge-coupled device (CCD), active-pixel sensor (APS), or similar electronic image detector. Traditionally, the RGB color-separation problem in an electronic camera has been solved by use of either (1) fixed color filters over three separate image detectors; (2) a filter wheel that repeatedly imposes a red, then a green, then a blue filter over a single image detector; or (3) different fixed color filters over adjacent pixels. The use of separate image detectors necessitates precise registration of the detectors and the use of complicated optics; filter wheels are expensive and add considerably to the bulk of the camera; and fixed pixelated color filters reduce spatial resolution and introduce color-aliasing effects. The proposed chromatic modulator would not exhibit any of these shortcomings. The proposed chromatic modulator would be an electromechanical device fabricated by micromachining. It would include a filter having a spatially periodic pattern of RGB strips at a pitch equal to that of the pixels of the image detector. The filter would be placed in front of the image detector, supported at its periphery by a spring suspension and electrostatic comb drive. The spring suspension would bias the filter toward a middle position in which each filter strip would be registered with a row of pixels of the image detector. Hard stops would limit the excursion of the spring suspension to precisely one pixel row above and one pixel row below the middle position. In operation, the electrostatic comb drive would be actuated to repeatedly snap the filter to the upper extreme, middle, and lower extreme positions. This action would repeatedly place a succession of the differently colored filter strips in front of each pixel of the image detector. To simplify the processing, it would be desirable to encode information on the color of the filter strip over each row (or at least over some representative rows) of pixels at a given instant of time in synchronism with the pixel output at that instant.

  12. Characterisation of a novel reverse-biased PPD CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Stefanov, K. D.; Clarke, A. S.; Ivory, J.; Holland, A. D.

    2017-11-01

    A new pinned photodiode (PPD) CMOS image sensor (CIS) has been developed and characterised. The sensor can be fully depleted by means of a reverse bias applied to the substrate, and the principle of operation is applicable to very thick sensitive volumes. Additional n-type implants under the pixel p-wells, called the Deep Depletion Extension (DDE), have been added in order to eliminate the large parasitic substrate current that would otherwise be present in a normal device. The first prototype has been manufactured on 18 μm thick, 1000 Ω·cm epitaxial silicon wafers using a 180 nm PPD image sensor process at TowerJazz Semiconductor. The chip contains arrays of 10 μm and 5.4 μm pixels, with variations of the shape, size and depth of the DDE implant. Back-side illuminated (BSI) devices were manufactured in collaboration with Teledyne e2v, and characterised together with the front-side illuminated (FSI) variants. The presented results show that the devices could be reverse-biased without parasitic leakage currents, in good agreement with simulations. The new 10 μm pixels in both BSI and FSI variants exhibit nearly identical photo response to the reference non-modified pixels, as characterised with the photon transfer curve. Different techniques were used to measure the depletion depth in FSI and BSI chips, and the results are consistent with the expected full depletion.

  13. A Multispectral Image Creating Method for a New Airborne Four-Camera System with Different Bandpass Filters

    PubMed Central

    Li, Hanlun; Zhang, Aiwu; Hu, Shaoxing

    2015-01-01

    This paper describes an airborne high resolution four-camera multispectral system which mainly consists of four identical monochrome cameras equipped with four interchangeable bandpass filters. For this multispectral system, an automatic multispectral data composing method was proposed. The homography registration model was chosen, and the scale-invariant feature transform (SIFT) and random sample consensus (RANSAC) were used to generate matching points. For the difficult registration problem between visible band images and near-infrared band images in cases lacking manmade objects, we presented an effective method based on the structural characteristics of the system. Experiments show that our method can acquire high quality multispectral images and the band-to-band alignment error of the composed multiple spectral images is less than 2.5 pixels. PMID:26205264
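
    The homography/SIFT/RANSAC pipeline described above can be sketched with OpenCV as follows (Python; the function and variable names are illustrative, and the ratio-test and RANSAC thresholds are typical defaults rather than the values used in the paper):

        import cv2
        import numpy as np

        def register_band_pair(band_src, band_dst):
            # Detect SIFT keypoints, match them, and fit a homography with RANSAC;
            # requires OpenCV >= 4.4 and at least four good matches.
            sift = cv2.SIFT_create()
            kp1, des1 = sift.detectAndCompute(band_src, None)
            kp2, des2 = sift.detectAndCompute(band_dst, None)
            matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
            good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test
            src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            h, w = band_dst.shape[:2]
            return cv2.warpPerspective(band_src, H, (w, h))  # source band resampled onto the destination grid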

  14. Saturn-lit Tethys

    NASA Image and Video Library

    2017-08-21

    NASA's Cassini gazes across the icy rings of Saturn toward the icy moon Tethys, whose night side is illuminated by Saturnshine, or sunlight reflected by the planet. Tethys was on the far side of Saturn with respect to Cassini here; an observer looking upward from the moon's surface toward Cassini would see Saturn's illuminated disk filling the sky. Tethys was brightened by a factor of two in this image to increase its visibility. A sliver of the moon's sunlit northern hemisphere is seen at top. A bright wedge of Saturn's sunlit side is seen at lower left. This view looks toward the sunlit side of the rings from about 10 degrees above the ring plane. The image was taken in visible light with the Cassini spacecraft wide-angle camera on May 13, 2017. The view was acquired at a distance of approximately 750,000 miles (1.2 million kilometers) from Saturn and at a Sun-Saturn-spacecraft, or phase, angle of 140 degrees. Image scale is 43 miles (70 kilometers) per pixel on Saturn. The distance to Tethys was about 930,000 miles (1.5 million kilometers). The image scale on Tethys is about 56 miles (90 kilometers) per pixel. https://photojournal.jpl.nasa.gov/catalog/PIA21342

  15. Influence of the number of elongated fiducial markers on the localization accuracy of the prostate

    NASA Astrophysics Data System (ADS)

    de Boer, Johan; de Bois, Josien; van Herk, Marcel; Sonke, Jan-Jakob

    2012-10-01

    Implanting fiducial markers for localization purposes has become an accepted practice in radiotherapy for prostate cancer. While many correction strategies correct for translations only, advanced correction protocols also require knowledge of the rotation of the prostate. For this purpose, typically, three or more markers are implanted. Elongated fiducial markers provide more information about their orientation than traditional round or cylindrical markers. Potentially, fewer markers are required. In this study, we evaluate the effect of the number of elongated markers on the localization accuracy of the prostate. To quantify the localization error, we developed a model that estimates, at arbitrary locations in the prostate, the registration error caused by translational and rotational uncertainties of the marker registration. Every combination of one, two and three markers was analysed for a group of 24 patients. The average registration errors at the prostate surface were 0.3-0.8 mm and 0.4-1 mm for registrations on, respectively, three markers and two markers located on different sides of the prostate. Substantial registration errors (2.0-2.2 mm) occurred at the prostate surface contralateral to the markers when two markers were implanted on the same side of the prostate or only one marker was used. In conclusion, there is no benefit in using three elongated markers: two markers accurately localize the prostate if they are implanted at some distance from each other.

  16. Bi-cubic interpolation for shift-free pan-sharpening

    NASA Astrophysics Data System (ADS)

    Aiazzi, Bruno; Baronti, Stefano; Selva, Massimo; Alparone, Luciano

    2013-12-01

    Most pan-sharpening techniques require re-sampling of the multi-spectral (MS) image to match the size of the panchromatic (Pan) image before the geometric details of the Pan are injected into the MS image. This operation is usually performed in a separable fashion by means of symmetric digital low-pass filtering kernels with odd lengths that utilize piecewise local polynomials, typically implementing linear or cubic interpolation functions. Conversely, constant, i.e. nearest-neighbour, and quadratic kernels, implementing zero- and second-degree polynomials, respectively, introduce shifts in the magnified images, which are sub-pixel in the case of interpolation by an even factor, as is most often the case. However, in standard satellite systems, the point spread functions (PSF) of the MS and Pan instruments are centered in the middle of each pixel. Hence, commercial MS and Pan data products, whose scale ratio is an even number, are relatively shifted by an odd number of half pixels. Filters of even lengths may be exploited to compensate for the half-pixel shifts between the MS and Pan sampling grids. In this paper, it is shown that separable polynomial interpolations of odd degrees are feasible with linear-phase kernels of even lengths. The major benefit is that bi-cubic interpolation, which is known to represent the best trade-off between performance and computational complexity, can be applied to commercial MS + Pan datasets without the need to perform a further half-pixel registration after interpolation to align the expanded MS with the Pan image.
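
    The half-pixel issue above can also be made explicit by resampling the MS band on a grid that is expanded and shifted at the same time. The sketch below (Python/SciPy cubic resampling; the 0.5-pixel offset and the ratio are assumptions standing in for the even-length kernels analysed in the paper) illustrates the idea:

        import numpy as np
        from scipy.ndimage import map_coordinates

        def upsample_with_offset(ms_band, ratio=4, offset=0.5):
            # Expand by `ratio` with cubic interpolation on a grid displaced by
            # `offset` low-resolution pixels, so the expanded MS grid lines up with
            # a Pan grid whose pixel centres are shifted by half an MS pixel.
            h, w = ms_band.shape
            rows = (np.arange(h * ratio) + 0.5) / ratio - 0.5 + offset
            cols = (np.arange(w * ratio) + 0.5) / ratio - 0.5 + offset
            grid = np.meshgrid(rows, cols, indexing="ij")
            return map_coordinates(ms_band, grid, order=3, mode="nearest")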

  17. Thin hybrid pixel assembly with backside compensation layer on ROIC

    NASA Astrophysics Data System (ADS)

    Bates, R.; Buttar, C.; McMullen, T.; Cunningham, L.; Ashby, J.; Doherty, F.; Gray, C.; Pares, G.; Vignoud, L.; Kholti, B.; Vahanen, S.

    2017-01-01

    The entire ATLAS inner tracking system will be replaced for operation at the HL-LHC. This will include a significantly larger pixel detector of approximately 15 m². For this project, it is critical to reduce the mass of the hybrid pixel modules, and this requires thinning both the sensor and readout chips to about 150 micrometres each. The thinning of the silicon chips leads to low bump yield for SnAg bumps due to poor co-planarity of the two chips at the solder reflow stage, creating dead zones within the pixel array. In the case of the ATLAS FEI4 pixel readout chip thinned to 100 micrometres, the chip is concave, with the front side in compression, with a bow of +100 micrometres at room temperature which varies to a bow of -175 micrometres at the SnAg solder reflow temperature, caused by the CTE mismatch between the materials in the CMOS stack and the silicon substrate. A new wafer-level process that addresses the issue of low bump yield by controlling the chip bow has been developed. A back-side dielectric and metal stack of SiN and Al:Si has been deposited on the readout chip wafer to dynamically compensate for the stress of the front-side stack. In keeping with a 3D process, the materials used are compatible with Through-Silicon Via (TSV) technology in a TSV-last approach, which is under development for this chip. It is demonstrated that the amplitude of the correction can be manipulated by the deposition conditions and thickness of the SiN/Al:Si stack. The bow magnitude over the temperature range for the best sample to date is reduced by almost a factor of 4, and the sign of the bow (shape of the die) remains constant. Further development of the backside deposition conditions is ongoing with the target of close to zero bow at the solder reflow temperature and a minimal bow magnitude throughout the temperature range. Assemblies produced from FEI4 readout wafers thinned to 100 micrometres with the backside compensation layer have been made for the first time and demonstrate bond yields close to 100%.

  18. Chromatic Modulator for High Resolution CCD or APS Devices

    NASA Technical Reports Server (NTRS)

    Hartley, Frank T. (Inventor); Hull, Anthony B. (Inventor)

    2003-01-01

    A system for providing high-resolution color separation in electronic imaging. Comb drives controllably oscillate a red-green-blue (RGB) color strip filter system (or otherwise) over an electronic imaging system such as a charge-coupled device (CCD) or active pixel sensor (APS). The color filter is modulated over the imaging array at a rate three or more times the frame rate of the imaging array. In so doing, the underlying active imaging elements are then able to detect separate color-separated images, which are then combined to provide a color-accurate frame which is then recorded as the representation of the recorded image. High pixel resolution is maintained. Registration is obtained between the color strip filter and the underlying imaging array through the use of electrostatic comb drives in conjunction with a spring suspension system.

  19. Subpixel resolution from multiple images

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Kanefsky, Rob; Stutz, John; Kraft, Richard

    1994-01-01

    Multiple images taken from similar locations and under similar lighting conditions contain similar, but not identical, information. Slight differences in instrument orientation and position produce mismatches between the projected pixel grids. These mismatches ensure that any point on the ground is sampled differently in each image. If all the images can be registered with respect to each other to a small fraction of a pixel accuracy, then the information from the multiple images can be combined to increase linear resolution by roughly the square root of the number of images. In addition, the gray-scale resolution of the composite image is also improved. We describe methods for multiple image registration and combination, and discuss some of the problems encountered in developing and extending them. We display test results with 8:1 resolution enhancement, and Viking Orbiter imagery with 2:1 and 4:1 enhancements.
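
    A naive way to combine registered frames on a finer grid is shift-and-add: project each low-resolution sample onto an upsampled grid according to its estimated sub-pixel shift and average the contributions. The Python/NumPy sketch below is illustrative only; it leaves unfilled cells at zero rather than interpolating, unlike the full method described above:

        import numpy as np

        def shift_and_add(images, shifts, factor=2):
            # `shifts` holds the (row, col) offset of each frame, in low-resolution
            # pixels, relative to the first frame.
            h, w = images[0].shape
            acc = np.zeros((h * factor, w * factor))
            cnt = np.zeros_like(acc)
            rows, cols = np.indices((h, w))
            for img, (dr, dc) in zip(images, shifts):
                r = np.clip(np.round((rows + dr) * factor).astype(int), 0, h * factor - 1)
                c = np.clip(np.round((cols + dc) * factor).astype(int), 0, w * factor - 1)
                np.add.at(acc, (r, c), img)
                np.add.at(cnt, (r, c), 1)
            return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)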

  20. Three-dimensional quantification of orthodontic root resorption with time-lapsed imaging of micro-computed tomography in a rodent model.

    PubMed

    Yang, Chongshi; Zhang, Yuanyuan; Zhang, Yan; Fan, Yubo; Deng, Feng

    2015-01-01

    Although various X-ray approaches have been widely used to monitor root resorption after orthodontic treatment, a non-invasive and accurate method is highly desirable for long-term follow-up. The aim of this study was to build a non-invasive method to quantify longitudinal orthodontic root resorption with time-lapsed images of micro-computed tomography (micro-CT) in a rodent model. Twenty male Sprague Dawley (SD) rats (aged 6-8 weeks, weighing 180-220 g) were used in this study. A 25 g orthodontic force generated by a nickel-titanium coil spring was applied to the right maxillary first molar of each rat, while the contralateral first molar served as a control. Micro-CT scans were performed at day 0 (before orthodontic load) and days 3, 7, 14, and 28 after orthodontic load. Resorption of the mesial roots of the maxillary first molars on both sides was calculated from the micro-CT images with a registration algorithm via reconstruction, superimposition and partition operations. Obvious resorption of the mesial root of the maxillary first molar could be detected at day 14 and day 28 on the orthodontic side. Most of the resorption occurred in the apical region on the distal side and the cervical region on the mesiolingual side. Normal development of the molar roots was identified from day 0 to day 28 on the control side, concentrated in the apical region. This non-invasive 3D quantification method with a registration algorithm can be used in longitudinal studies of root resorption. Obvious root resorption in the rat molar can be observed three-dimensionally at day 14 and day 28 after orthodontic load. This indicates that a registration algorithm combined with time-lapsed images has potential clinical application in the detection and quantification of root contour.

  1. TU-F-BRF-03: Effect of Radiation Therapy Planning Scan Registration On the Dose in Lung Cancer Patient CT Scans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cunliffe, A; Contee, C; White, B

    Purpose: To characterize the effect of deformable registration of serial computed tomography (CT) scans on the radiation dose calculated from a treatment planning scan. Methods: Eighteen patients who received curative doses (≥60Gy, 2Gy/fraction) of photon radiation therapy for lung cancer treatment were retrospectively identified. For each patient, a diagnostic-quality pre-therapy (4–75 days) CT scan and a treatment planning scan with an associated dose map calculated in Pinnacle were collected. To establish baseline correspondence between scan pairs, a researcher manually identified anatomically corresponding landmark point pairs between the two scans. Pre-therapy scans were co-registered with planning scans (and associated dose maps) using the Plastimatch demons and Fraunhofer MEVIS deformable registration algorithms. Landmark points in each pretherapy scan were automatically mapped to the planning scan using the displacement vector field output from both registration algorithms. The absolute difference in planned dose (|ΔD|) between manually and automatically mapped landmark points was calculated. Using regression modeling, |ΔD| was modeled as a function of the distance between manually and automatically matched points (registration error, E), the dose standard deviation (SD-dose) in the eight-pixel neighborhood, and the registration algorithm used. Results: 52–92 landmark point pairs (median: 82) were identified in each patient's scans. Average |ΔD| across patients was 3.66Gy (range: 1.2–7.2Gy). |ΔD| was significantly reduced by 0.53Gy using Plastimatch demons compared with Fraunhofer MEVIS. |ΔD| increased significantly as a function of E (0.39Gy/mm) and SD-dose (2.23Gy/Gy). Conclusion: An average error of <4Gy in radiation dose was introduced when points were mapped between CT scan pairs using deformable registration. Dose differences following registration were significantly increased when the Fraunhofer MEVIS registration algorithm was used, spatial registration errors were larger, and dose gradient was higher (i.e., higher SD-dose). To our knowledge, this is the first study to directly compute dose errors following deformable registration of lung CT scans.
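
    The regression described above can be reproduced in outline with an ordinary least-squares fit of |ΔD| against E and SD-dose (a Python/NumPy sketch; the published model also includes the registration algorithm as a categorical factor, omitted here for brevity):

        import numpy as np

        def fit_dose_error_model(delta_d, reg_error, sd_dose):
            # Fit |dD| = b0 + b1 * E + b2 * SD-dose by ordinary least squares.
            X = np.column_stack([np.ones_like(reg_error), reg_error, sd_dose])
            coeffs, *_ = np.linalg.lstsq(X, delta_d, rcond=None)
            return coeffs  # [intercept (Gy), Gy per mm of registration error, Gy per Gy of SD-dose]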

  2. Improved Band-to-Band Registration Characterization for VIIRS Reflective Solar Bands Based on Lunar Observations

    NASA Technical Reports Server (NTRS)

    Wang, Zhipeng; Xiong, Xiaoxiong; Li, Yonghong

    2015-01-01

    Spectral bands of the Visible Infrared Imaging Radiometer Suite (VIIRS) instrument aboard the Suomi National Polar-orbiting Partnership (S-NPP) satellite are spatially co-registered. The accuracy of the band-to-band registration (BBR) is one of the key spatial parameters that must be characterized. Unlike its predecessor, the Moderate Resolution Imaging Spectroradiometer (MODIS), VIIRS has no on-board calibrator specifically designed to perform on-orbit BBR characterization. To circumvent this problem, a BBR characterization method for VIIRS reflective solar bands (RSB) based on regularly-acquired lunar images has been developed. While its results can satisfactorily demonstrate that the long-term stability of the BBR is well within ±0.1 moderate resolution band pixels, undesired seasonal oscillations have been observed in the trending. The oscillations are most obvious between the visible/near-infrared bands and the short/middle-wave infrared bands. This paper investigates the oscillations and identifies their cause as the band spectral dependence of the centroid position and the seasonal rotation of the lunar images over calibration events. Accordingly, an improved algorithm is proposed to quantify the rotation and compensate for its impact. After the correction, the seasonal oscillation in the resulting BBR is reduced from up to 0.05 moderate resolution band pixels to around 0.01 moderate resolution band pixels. After removing this spurious seasonal oscillation, the BBR, as well as its long-term drift, are well determined.

  3. An improved method for precise automatic co-registration of moderate and high-resolution spacecraft imagery

    NASA Technical Reports Server (NTRS)

    Bryant, Nevin A.; Logan, Thomas L.; Zobrist, Albert L.

    2006-01-01

    Improvements to the automated co-registration and change-detection software package AFIDS (Automatic Fusion of Image Data System) have recently completed development for, and validation by, NGA/GIAT. The improvements involve the integration of the AFIDS ultra-fine gridding technique for horizontal displacement compensation with the recently evolved use of Rational Polynomial Functions/Coefficients (RPFs/RPCs) for indexing image raster pixel positions to latitude/longitude. Mapping and orthorectification (correction for elevation effects) of satellite imagery defies exact projective solutions because the data are not obtained from a single point (like a camera), but as a continuous process along the orbital path. Standard image processing techniques can apply approximate solutions, but advances in the state of the art had to be made for precision change-detection and time-series applications where relief offsets become a controlling factor. The earlier AFIDS procedure required the availability of a camera model and knowledge of the satellite platform ephemerides. The recent design advances connect the spacecraft sensor Rational Polynomial Function, a deductively developed model, with the AFIDS ultra-fine grid, an inductively developed representation of the relationship between raster pixel position and latitude/longitude. As a result, RPCs can be updated by AFIDS, a situation often necessary due to the accuracy limits of spacecraft navigation systems. An example of precision change detection from Quickbird will be presented.

  4. Investigation of Parallax Issues for Multi-Lens Multispectral Camera Band Co-Registration

    NASA Astrophysics Data System (ADS)

    Jhan, J. P.; Rau, J. Y.; Haala, N.; Cramer, M.

    2017-08-01

    Multi-lens multispectral cameras (MSCs), such as the Micasense Rededge and Parrot Sequoia, record multispectral information through separate lenses. Their light weight and small size make them well suited to mounting on an Unmanned Aerial System (UAS) to collect high-spatial-resolution images for vegetation investigation. However, because the multi-sensor geometry of the multi-lens structure induces significant band misregistration effects in the original images, band co-registration is necessary in order to obtain accurate spectral information. A robust and adaptive band-to-band image transform (RABBIT) is proposed to perform band co-registration of multi-lens MSCs. The first step is to obtain the camera rig information from camera system calibration and to utilize the calibrated results for image transformation and lens distortion correction. Since the calibration uncertainty leads to different amounts of systematic error, the last step is to optimize the results in order to achieve better co-registration accuracy. Because parallax can cause significant band misregistration effects when images are acquired closer to the targets, four datasets acquired from the Rededge and Sequoia were used to evaluate the performance of RABBIT, including aerial and close-range imagery. The results for the aerial images show that RABBIT can achieve sub-pixel accuracy, which is suitable for the band co-registration of any multi-lens MSC. In addition, the close-range images show the same performance when band co-registration is focused on a specific target for 3D modelling, or when the target is equidistant from the camera.

  5. Automated Registration of Multimodal Optic Disc Images: Clinical Assessment of Alignment Accuracy.

    PubMed

    Ng, Wai Siene; Legg, Phil; Avadhanam, Venkat; Aye, Kyaw; Evans, Steffan H P; North, Rachel V; Marshall, Andrew D; Rosin, Paul; Morgan, James E

    2016-04-01

    To determine the accuracy of automated alignment algorithms for the registration of optic disc images obtained by 2 different modalities: fundus photography and scanning laser tomography. Images obtained with the Heidelberg Retina Tomograph II and paired photographic optic disc images of 135 eyes were analyzed. Three state-of-the-art automated registration techniques, Regional Mutual Information, rigid Feature Neighbourhood Mutual Information (FNMI), and nonrigid FNMI (NRFNMI), were used to align these image pairs. Alignment of each composite picture was assessed on a 5-point grading scale: "Fail" (no alignment of vessels, with no vessel contact), "Weak" (vessels have slight contact), "Good" (vessels with <50% contact), "Very Good" (vessels with >50% contact), and "Excellent" (complete alignment). Custom software generated an image mosaic in which the modalities were interleaved as a series of alternate 5×5-pixel blocks. These were graded independently by 3 clinically experienced observers. A total of 810 image pairs were assessed. All 3 registration techniques achieved a score of "Good" or better in >95% of the image sets. NRFNMI had the highest percentage of "Excellent" (mean: 99.6%; range, 95.2% to 99.6%), followed by Regional Mutual Information (mean: 81.6%; range, 86.3% to 78.5%) and FNMI (mean: 73.1%; range, 85.2% to 54.4%). Automated registration of optic disc images by different modalities is a feasible option for clinical application. All 3 methods provided useful levels of alignment, but the NRFNMI technique consistently outperformed the others and is recommended as a practical approach to the automated registration of multimodal disc images.

  6. Improving multispectral satellite image compression using onboard subpixel registration

    NASA Astrophysics Data System (ADS)

    Albinet, Mathieu; Camarero, Roberto; Isnard, Maxime; Poulet, Christophe; Perret, Jokin

    2013-09-01

    Future CNES earth observation missions will have to deal with an ever increasing telemetry data rate due to improvements in resolution and the addition of spectral bands. Current CNES image compressors implement a discrete wavelet transform (DWT) followed by a bit-plane encoder (BPE), but only on a mono-spectral basis, and do not profit from the multispectral redundancy of the observed scenes. Recent CNES studies have proven a substantial gain in the achievable compression ratio, +20% to +40% on selected scenarios, by implementing a multispectral compression scheme based on a Karhunen-Loeve transform (KLT) followed by the classical DWT+BPE. But such results can be achieved only on perfectly registered bands; a registration error as small as 0.5 pixel ruins all the benefits of multispectral compression. In this work, we first study the possibility of implementing multi-band subpixel onboard registration based on registration grids generated on-the-fly by the satellite attitude control system and simplified resampling and interpolation techniques. Indeed, band registration is usually performed on the ground using sophisticated techniques too computationally intensive for onboard use. This fully quantized algorithm is tuned to meet acceptable registration performance within stringent image quality criteria, with the objective of onboard real-time processing. In a second part, we describe an FPGA implementation developed to evaluate the design complexity and, by extrapolation, the data rate achievable on a space-qualified ASIC. Finally, we present the impact of this approach on the processing chain, not only onboard but also on the ground, and its impact on the design of the instrument.
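
    The spectral decorrelation step of the KLT+DWT+BPE scheme mentioned above can be illustrated with a plain principal-component (Karhunen-Loeve) transform across co-registered bands (a Python/NumPy sketch, not the operational CNES implementation):

        import numpy as np

        def spectral_klt(bands):
            # bands: array of shape (n_bands, rows, cols), assumed perfectly co-registered.
            n, r, c = bands.shape
            flat = bands.reshape(n, -1).astype(float)
            mean = flat.mean(axis=1, keepdims=True)
            cov = np.cov(flat - mean)                  # n_bands x n_bands covariance
            eigvals, eigvecs = np.linalg.eigh(cov)
            order = np.argsort(eigvals)[::-1]          # strongest component first
            eigvecs = eigvecs[:, order]
            components = eigvecs.T @ (flat - mean)     # decorrelated spectral components
            return components.reshape(n, r, c), mean, eigvecs  # keep mean/eigvecs for inversion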

  7. Automated Image Registration Using Morphological Region of Interest Feature Extraction

    NASA Technical Reports Server (NTRS)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2005-01-01

    With the recent explosion in the amount of remotely sensed imagery and the corresponding interest in temporal change detection and modeling, image registration has become increasingly important as a necessary first step in the integration of multi-temporal and multi-sensor data for applications such as the analysis of seasonal and annual global climate changes, as well as land use/cover changes. The task of image registration can be divided into two major components: (1) the extraction of control points or features from images; and (2) the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual control feature extraction can be subjective and extremely time consuming, and often results in few usable points. Automated feature extraction is a solution to this problem, where desired target features are invariant, and represent evenly distributed landmarks such as edges, corners and line intersections. In this paper, we develop a novel automated registration approach based on the following steps. First, a mathematical morphology (MM)-based method is used to obtain a scale-orientation morphological profile at each image pixel. Next, a spectral dissimilarity metric such as the spectral information divergence is applied for automated extraction of landmark chips, followed by an initial approximate matching. This initial condition is then refined using a hierarchical robust feature matching (RFM) procedure. Experimental results reveal that the proposed registration technique offers a robust solution in the presence of seasonal changes and other interfering factors. Keywords: Automated image registration; multi-temporal imagery; mathematical morphology; robust feature matching.

  8. Microgap flat panel display

    DOEpatents

    Wuest, C.R.

    1998-12-08

    A microgap flat panel display is disclosed which includes a thin gas-filled display tube that utilizes switched X-Y "pixel" strips to trigger electron avalanches and activate a phosphor at a given location on a display screen. The panel utilizes the principle of electron multiplication in a gas subjected to a high electric field to provide sufficient electron current to activate standard luminescent phosphors located on an anode. The X-Y conductive strips, a few microns wide, may, for example, be deposited on opposite sides of a thin insulating substrate, or on one side of adjacent substrates, and function as a cathode. The X-Y strips are separated from the anode by a gap filled with a suitable gas. Electrical bias is selectively switched onto X and Y strips to activate a "pixel" in the region where these strips overlap. A small amount of a long-lived radioisotope is used to initiate an electron avalanche in the overlap region when bias is applied. The avalanche travels through the gas-filled gap and activates a luminescent phosphor of a selected color. The bias is adjusted to give a proportional electron multiplication to control brightness for a given pixel.

  9. Model-based registration of multi-rigid-body for augmented reality

    NASA Astrophysics Data System (ADS)

    Ikeda, Sei; Hori, Hajime; Imura, Masataka; Manabe, Yoshitsugu; Chihara, Kunihiro

    2009-02-01

    Geometric registration between a virtual object and the real space is the most basic problem in augmented reality. Model-based tracking methods allow us to estimate the three-dimensional (3-D) position and orientation of a real object by using a textured 3-D model instead of a visual marker. However, it is difficult to apply existing model-based tracking methods to objects that have movable parts, such as the display of a mobile phone, because these methods assume a single rigid-body model. In this research, we propose a novel model-based registration method for multi-rigid-body objects. For each frame, the 3-D models of each rigid part of the object are first rendered according to the motion and transformation estimated from the previous frame. Second, control points are determined by detecting the edges of the rendered image and sampling pixels on these edges. Motion and transformation are then simultaneously calculated from the distances between the edges and the control points. The validity of the proposed method is demonstrated through experiments using synthetic videos.

  10. Investigation of several aspects of LANDSAT-4 data quality

    NASA Technical Reports Server (NTRS)

    Wrigley, R. C. (Principal Investigator)

    1983-01-01

    No insurmountable problems in change detection analysis were found when comparing portions of scenes collected simultaneously by the LANDSAT 4 MSS and either LANDSAT 2 or 3. The cause of the periodic noise in LANDSAT 4 MSS images, which had an RMS value of approximately 2 DN, should be corrected in the LANDSAT D instrument before its launch. Analysis of the P-tape of the Arkansas scene shows bands within the same focal plane to be very well registered, except for the thermal band, which was misregistered by approximately three 28.5-meter pixels in both directions. It is possible to derive tight confidence bounds for the registration errors. Preliminary analyses of the Sacramento and Arkansas scenes reveal a very high degree of consistency with earlier results for bands 3 vs 1, 3 vs 4, and 3 vs 5. Results are presented in table form. It is suggested that attention be given to the standard deviations of registration errors to judge whether or not they will be within specification once any known mean registration errors are corrected. Techniques used for MTF analysis of a Washington scene produced noisy results.

  11. MRI signal intensity based B-spline nonrigid registration for pre- and intraoperative imaging during prostate brachytherapy.

    PubMed

    Oguro, Sota; Tokuda, Junichi; Elhawary, Haytham; Haker, Steven; Kikinis, Ron; Tempany, Clare M C; Hata, Nobuhiko

    2009-11-01

    To apply an intensity-based nonrigid registration algorithm to MRI-guided prostate brachytherapy clinical data and to assess its accuracy. A nonrigid registration of preoperative MRI to intraoperative MRI images was carried out in 16 cases using a Basis-Spline algorithm in a retrospective manner. The registration was assessed qualitatively by experts' visual inspection and quantitatively by measuring the Dice similarity coefficient (DSC) for total gland (TG), central gland (CG), and peripheral zone (PZ), the mutual information (MI) metric, and the fiducial registration error (FRE) between corresponding anatomical landmarks for both the nonrigid and a rigid registration method. All 16 cases were successfully registered in less than 5 min. After the nonrigid registration, DSC values for TG, CG, PZ were 0.91, 0.89, 0.79, respectively, the MI metric was -0.19 +/- 0.07 and FRE presented a value of 2.3 +/- 1.8 mm. All the metrics were significantly better than in the case of rigid registration, as determined by one-sided t-tests. The intensity-based nonrigid registration method using clinical data was demonstrated to be feasible and showed statistically improved metrics when compared to only rigid registration. The method is a valuable tool to integrate pre- and intraoperative images for brachytherapy.
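
    The Dice similarity coefficient reported above is a simple overlap measure between two binary segmentations. A minimal sketch of how it is typically computed from organ masks follows; the function and argument names are illustrative, not taken from the paper.

      import numpy as np

      def dice_coefficient(mask_a, mask_b):
          """Dice similarity coefficient between two binary segmentation masks."""
          a = np.asarray(mask_a, dtype=bool)
          b = np.asarray(mask_b, dtype=bool)
          intersection = np.logical_and(a, b).sum()
          total = a.sum() + b.sum()
          # DSC = 2 |A and B| / (|A| + |B|); two empty masks count as perfect overlap.
          return 2.0 * intersection / total if total > 0 else 1.0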

  12. Panorama imaging for image-to-physical registration of narrow drill holes inside spongy bones

    NASA Astrophysics Data System (ADS)

    Bergmeier, Jan; Fast, Jacob Friedemann; Ortmaier, Tobias; Kahrs, Lüder Alexander

    2017-03-01

    Image-to-physical registration based on volumetric data like computed tomography on the one side and intraoperative endoscopic images on the other side is an important method for various surgical applications. In this contribution, we present methods to generate panoramic views from endoscopic recordings for image-to-physical registration of narrow drill holes inside spongy bone. One core application is the registration of drill poses inside the mastoid during minimally invasive cochlear implantations. Besides the development of image processing software for registration, investigations are performed on a miniaturized optical system achieving 360° radial imaging in one shot by extending a conventional, small, rigid, rod lens endoscope. A reflective cone geometry is used to deflect radially incoming light rays into the endoscope optics. Therefore, a cone mirror is mounted in front of a conventional 0° endoscope. Furthermore, panoramic images of inner drill hole surfaces in artificial bone material are created. Prior to drilling, cone beam computed tomography data is acquired from this artificial bone and simulated endoscopic views are generated from this data. A qualitative and quantitative image comparison of the resulting views in terms of image-to-image registration is performed. First results show that downsizing of the panoramic optics to a diameter of 3 mm is possible. Conventional rigid rod lens endoscopes can be extended to produce suitable panoramic one-shot image data. Using unrolling and stitching methods, images of the inner drill hole surface similar to computed tomography image data of the same surface were created. Registration is performed on ten perturbations of the search space and results in target registration errors of (0.487 +/- 0.438) mm at the entry point and (0.957 +/- 0.948) mm at the exit, as well as an angular error of (1.763 +/- 1.536)°. The results show the suitability of this image data for image-to-image registration. Analysis of the error components in different directions reveals a strong influence of the pattern structure, meaning that higher pattern diversity results in smaller errors.
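
    The unrolling step described above maps the annular view produced by the cone mirror onto a rectangular panoramic strip. The sketch below shows one plausible way to do this with a simple polar-to-Cartesian resampling; the function name, the nearest-neighbour sampling and the assumption of a known optical-axis centre are illustrative choices, not the authors' implementation.

      import numpy as np

      def unroll_radial_image(img, center, r_min, r_max, n_theta=720):
          """Unroll a 360° radial (cone-mirror) view into a panoramic strip.

          img: 2-D grayscale endoscopic frame; center: (row, col) of the optical axis;
          r_min, r_max: radii in pixels bounding the useful annulus.
          Nearest-neighbour sampling keeps the sketch short; real use would interpolate.
          """
          radii = np.arange(r_min, r_max)
          thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
          rr = radii[:, None] * np.sin(thetas)[None, :] + center[0]
          cc = radii[:, None] * np.cos(thetas)[None, :] + center[1]
          rows = np.clip(np.round(rr).astype(int), 0, img.shape[0] - 1)
          cols = np.clip(np.round(cc).astype(int), 0, img.shape[1] - 1)
          return img[rows, cols]   # shape (n_radii, n_theta): radius vs. angle panorama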

  13. Low-resistivity photon-transparent window attached to photo-sensitive silicon detector

    DOEpatents

    Holland, Stephen Edward

    2000-02-15

    The invention comprises a combination of a low resistivity, or electrically conducting, silicon layer that is transparent to long or short wavelength photons and is attached to the backside of a photon-sensitive layer of silicon, such as a silicon wafer or chip. The window is applied to photon sensitive silicon devices such as photodiodes, charge-coupled devices, active pixel sensors, low-energy x-ray sensors and other radiation detectors. The silicon window is applied to the back side of a photosensitive silicon wafer or chip so that photons can illuminate the device from the backside without interference from the circuit printed on the frontside. A voltage sufficient to fully deplete the high-resistivity photosensitive silicon volume of charge carriers is applied between the low-resistivity back window and the front, patterned, side of the device. This allows photon-induced charge created at the backside to reach the front side of the device and to be processed by any circuitry attached to the front side. Using the inventive combination, the photon sensitive silicon layer does not need to be thinned beyond standard fabrication methods in order to achieve full charge-depletion in the silicon volume. In one embodiment, the inventive backside window is applied to high resistivity silicon to allow backside illumination while maintaining charge isolation in CCD pixels.

  14. MRI-based registration of pelvic alignment affected by altered pelvic floor muscle characteristics.

    PubMed

    Bendová, Petra; Růzicka, Pavel; Peterová, Vera; Fricová, Martina; Springrová, Ingrid

    2007-11-01

    Pelvic floor muscles have the potential to influence relative pelvic alignment. Side asymmetry in pelvic floor muscle tension is claimed to induce pelvic malalignment. However, its nature and amplitude are not clear, and there is a need for a non-invasive and reliable assessment method. An intervention experiment of unilateral pelvic floor muscle activation in healthy females was performed using image data for intra-subject comparison of normal and altered configurations of the bony pelvis. Sequential magnetic resonance imaging of 14 females in the supine position was performed with a 1.5 T static body coil in coronal orientation. The intervention, surface functional electrostimulation, was applied to activate the pelvic floor muscles on the right side. Spatial coordinates of 23 pelvic landmarks were localized in each subject and registered by a specially designed magnetic resonance image data processing tool (MPT2006), which interfaced individual error calculation, data registration, analysis and 3D visualization. The effect of the intervention was large (Cohen's d=1.34). We found significant differences in quantity (P<0.01) and quality (P=0.02) between normal and induced pelvic displacements. After pelvic floor muscle activation on the right side, pelvic structures shifted most frequently to the right side in the ventro-caudal direction. The right femoral head, the right innominate and the coccyx showed the largest displacements. The consequences arising from the capacity of pelvic floor muscles to displace pelvic bony structures are important to consider not only in the management of malalignment syndrome but also in the treatment of incontinence. The study has demonstrated benefits associated with processing of magnetic resonance image data within the pelvic region with high localization and registration reliability.

  15. Geometric error characterization and error budgets. [thematic mapper

    NASA Technical Reports Server (NTRS)

    Beyer, E.

    1982-01-01

    Procedures used in characterizing geometric error sources for a spaceborne imaging system are described, using the LANDSAT D thematic mapper ground segment processing as the prototype. Software was tested through simulation and is undergoing tests with the operational hardware as part of the prelaunch system evaluation. Geometric accuracy specifications, geometric correction, and control point processing are discussed. Cross-track and along-track errors are tabulated for the thematic mapper, the spacecraft, and ground processing to show the temporal registration error budget in pixels (42.5 microrad) at the 90% confidence level.

  16. A New Two-Color Infrared Photodetector Design Using INGAAS/INALAS Coupled Quantum Wells

    DTIC Science & Technology

    1999-08-01

    that spans the mid-wave infrared (MWIR) and the long-wave infrared (LWIR) atmospheric transmission windows of 3 to 5 and 8 to 12 µm, respectively... This leads to natural pixel registration in an FPA application. QWIP FPAs operating in two LWIR bands have been demonstrated [2], and, recently, the ... color FPA with simultaneous readout of an LWIR (9-µm peak) and an MWIR (5.1-µm peak) band was tested [3] and shown to ...

  17. Remote sensing: Physical principles, sensors and products, and the LANDSAT

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Steffen, C. A.; Lorenzzetti, J. A.; Stech, J. L.; Desouza, R. C. M.

    1981-01-01

    Techniques of data acquisition by remote sensing are introduced in this teaching aid. The properties of the elements involved (radiant energy, topography, atmospheric attenuation, surfaces, and sensors) are covered. Radiometers, photography, scanners, and radar are described, as well as their products. Aspects of the LANDSAT system examined include the characteristics of the satellite and its orbit, the multispectral band scanner, and the return beam vidicon. Pixels (picture elements), pattern registration, and the characteristics, reception, and processing of LANDSAT imagery are also considered.

  18. Shifting scintillator neutron detector

    DOEpatents

    Clonts, Lloyd G; Cooper, Ronald G; Crow, Jr., Morris Lowell; Hannah, Bruce W; Hodges, Jason P; Richards, John D; Riedel, Richard A

    2014-03-04

    Provided are sensors and methods for detecting thermal neutrons. Provided is an apparatus having a scintillator for absorbing a neutron, the scintillator having a back side for discharging a scintillation light of a first wavelength in response to the absorbed neutron, an array of wavelength-shifting fibers proximate to the back side of the scintillator for shifting the scintillation light of the first wavelength to light of a second wavelength, the wavelength-shifting fibers being disposed in a two-dimensional pattern and defining a plurality of scattering plane pixels where the wavelength-shifting fibers overlap, a plurality of photomultiplier tubes, in coded optical communication with the wavelength-shifting fibers, for converting the light of the second wavelength to an electronic signal, and a processor for processing the electronic signal to identify one of the plurality of scattering plane pixels as indicative of a position within the scintillator where the neutron was absorbed.

  19. Geometric processing of digital images of the planets

    NASA Technical Reports Server (NTRS)

    Edwards, Kathleen

    1987-01-01

    New procedures and software have been developed for geometric transformation of images to support digital cartography of the planets. The procedures involve the correction of spacecraft camera orientation of each image with the use of ground control and the transformation of each image to a Sinusoidal Equal-Area map projection with an algorithm which allows the number of transformation calculations to vary as the distortion varies within the image. When the distortion is low in an area of an image, few transformation computations are required, and most pixels can be interpolated. When distortion is extreme, the location of each pixel is computed. Mosaics are made of these images and stored as digital databases. Completed Sinusoidal databases may be used for digital analysis and registration with other spatial data. They may also be reproduced as published image maps by digitally transforming them to appropriate map projections.
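
    The Sinusoidal Equal-Area projection used for these mosaics has simple closed-form forward equations; the adaptive scheme that varies the number of transformation calculations with local distortion is not reproduced here. The following sketch shows only the standard forward mapping; the function name, argument units and default central meridian are assumptions.

      import numpy as np

      def sinusoidal_forward(lon_deg, lat_deg, lon0_deg=0.0, radius=1.0):
          """Forward Sinusoidal (Sanson-Flamsteed) equal-area projection.

          Returns map coordinates (x, y) in the same units as `radius`.
          """
          lon = np.radians(lon_deg)
          lat = np.radians(lat_deg)
          lon0 = np.radians(lon0_deg)
          x = radius * (lon - lon0) * np.cos(lat)   # east-west distance shrinks with latitude
          y = radius * lat
          return x, y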

  20. Spatial and Time Coincidence Detection of the Decay Chain of Short-Lived Radioactive Nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Granja, Carlos; Jakubek, Jan; Platkevic, Michal

    The quantum counting position-sensitive pixel detector Timepix, with per-pixel energy and time resolution, enables the detection of radioactive ions and the registration of the consecutive decay chain by simultaneous position and time correlation. This spatial and timing coincidence technique in the same sensor is demonstrated by the registration of the decay chain ⁸He → β ⁸Li and ⁸Li → β⁻ ⁸Be → α + α and by the measurement of the β decay half-lives. Radioactive ions, selectively obtained from the Lohengrin fission fragment spectrometer installed at the High Flux Reactor of the ILL Grenoble, are delivered to the Timepix silicon sensor, where decays of the implanted ions and daughter nuclei are registered and visualized. We measure decay lifetimes in the range ≥ μs with precision limited only by counting statistics.

  1. Scanning the pressure-induced distortion of fingerprints.

    PubMed

    Mil'shtein, S; Doshi, U

    2004-01-01

    Fingerprint recognition technology is an important part of criminal investigations; it is the basis of some security systems and an important tool of government operations such as the Immigration and Naturalization Services, registration procedures in the Armed Forces, and so forth. After the tragic events of September 11, 2001, the importance of reliable fingerprint recognition technology became even more obvious. In the current study, pressure-induced changes of the distances between ridges of a fingerprint were measured. Using calibrated silicon pressure sensors, we scanned the distribution of pressure across a finger pixel by pixel, and also generated maps of the average pressure distribution during fingerprinting. Emulating the fingerprinting procedure employed with widely used optical scanners, we found that on average the distance between ridges decreases by about 20% when a finger is positioned on a scanner. Controlled loading of a finger demonstrated that it is impossible to reproduce the same distribution of pressure across a given finger during repeated fingerprinting procedures.

  2. Impact of LANDSAT MSS sensor differences on change detection analysis

    NASA Technical Reports Server (NTRS)

    Likens, W. C.; Wrigley, R. C.

    1983-01-01

    Some 512 by 512 pixel subwindows from simultaneously acquired scene pairs obtained by the LANDSAT 2, 3 and 4 multispectral band scanners were coregistered, using the LANDSAT 4 scenes as the base to which the other images were registered. Scattergrams between the coregistered scenes (a form of contingency analysis) were used to radiometrically compare data from the various sensors. Mode values were derived and used to visually fit a linear regression. Root mean square errors of the registration varied between 0.1 and 1.5 pixels. There appear to be no major problems preventing the use of LANDSAT 4 MSS with previous MSS sensors for change detection, provided the noise interference can be removed or minimized. Data normalizations for change detection should be based on the data rather than solely on calibration information. This allows simultaneous normalization of the atmosphere as well as the radiometry.
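
    The data-based normalization recommended above amounts to fitting a gain and offset that map one sensor's digital numbers onto the other's using co-registered pixels. The abstract describes a visual fit through scattergram modes; the sketch below substitutes an ordinary least-squares fit for illustration, and the function and argument names are assumptions.

      import numpy as np

      def fit_radiometric_normalization(band_ref, band_new):
          """Fit DN_ref ~ gain * DN_new + offset from two co-registered band images."""
          x = np.asarray(band_new, dtype=np.float64).ravel()
          y = np.asarray(band_ref, dtype=np.float64).ravel()
          gain, offset = np.polyfit(x, y, 1)     # least-squares line through the scattergram
          return gain, offset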

  3. Polar Dunes Resolved by the Mars Orbiter Laser Altimeter Gridded Topography and Pulse Widths

    NASA Technical Reports Server (NTRS)

    Neumann, Gregory A.

    2003-01-01

    The Mars Orbiter Laser Altimeter (MOLA) polar data have been refined to the extent that many features poorly imaged by the Viking Orbiters are now resolved in densely gridded altimetry. Individual linear polar dunes with spacings of 0.5 km or more can be seen, as well as sparsely distributed and partially mantled dunes. The refined altimetry will enable measurements of the extent and possibly the volume of the north polar ergs. MOLA pulse widths have been recalibrated using inflight data, and a robust algorithm was applied to solve for the surface optical impulse response. It shows the surface root-mean-square (RMS) roughness at the 75-m-diameter MOLA footprint scale, together with a geological map. While the roughness is of vital interest for landing site safety studies, a variety of geomorphological studies may also be performed. Pulse widths corrected for regional slope clearly delineate the extent of the polar dunes. The MOLA PEDR profile data have now been re-released in their entirety (Version L). The final Mission Experiment Gridded Data Records (MEGDRs) are now provided at up to 128 pixels per degree globally. Densities as high as 512 pixels per degree are available in a polar stereographic projection. A large computational effort has been expended in improving the accuracy of the MOLA altimetry itself, both in improved orbital modeling and in after-the-fact adjustment of tracks to improve their registration at crossovers. The current release adopts the IAU2000 rotation model and cartographic frame recommended by the Mars Cartography Working Group. Adoption of the current standard will allow registration of images and profiles globally with an uncertainty of less than 100 m. The MOLA detector is still operational and is currently collecting radiometric data at 1064 nm. Seasonal images of the reflectivity of the polar caps can be generated with a resolution of about 300 m per pixel.

  4. High-speed X-ray imaging pixel array detector for synchrotron bunch isolation

    DOE PAGES

    Philipp, Hugh T.; Tate, Mark W.; Purohit, Prafull; ...

    2016-01-28

    A wide-dynamic-range imaging X-ray detector designed for recording successive frames at rates up to 10 MHz is described. X-ray imaging with frame rates of up to 6.5 MHz have been experimentally verified. The pixel design allows for up to 8–12 frames to be stored internally at high speed before readout, which occurs at a 1 kHz frame rate. An additional mode of operation allows the integration capacitors to be re-addressed repeatedly before readout which can enhance the signal-to-noise ratio of cyclical processes. This detector, along with modern storage ring sources which provide short (10–100 ps) and intense X-ray pulses at megahertz rates, opens new avenues for the study of rapid structural changes in materials. The detector consists of hybridized modules, each of which is comprised of a 500 µm-thick silicon X-ray sensor solder bump-bonded, pixel by pixel, to an application-specific integrated circuit. The format of each module is 128 × 128 pixels with a pixel pitch of 150 µm. In the prototype detector described here, the three-side buttable modules are tiled in a 3 × 2 array with a full format of 256 × 384 pixels. Lastly, we detail the characteristics, operation, testing and application of the detector.

  5. High-speed X-ray imaging pixel array detector for synchrotron bunch isolation

    PubMed Central

    Philipp, Hugh T.; Tate, Mark W.; Purohit, Prafull; Shanks, Katherine S.; Weiss, Joel T.; Gruner, Sol M.

    2016-01-01

    A wide-dynamic-range imaging X-ray detector designed for recording successive frames at rates up to 10 MHz is described. X-ray imaging with frame rates of up to 6.5 MHz have been experimentally verified. The pixel design allows for up to 8–12 frames to be stored internally at high speed before readout, which occurs at a 1 kHz frame rate. An additional mode of operation allows the integration capacitors to be re-addressed repeatedly before readout which can enhance the signal-to-noise ratio of cyclical processes. This detector, along with modern storage ring sources which provide short (10–100 ps) and intense X-ray pulses at megahertz rates, opens new avenues for the study of rapid structural changes in materials. The detector consists of hybridized modules, each of which is comprised of a 500 µm-thick silicon X-ray sensor solder bump-bonded, pixel by pixel, to an application-specific integrated circuit. The format of each module is 128 × 128 pixels with a pixel pitch of 150 µm. In the prototype detector described here, the three-side buttable modules are tiled in a 3 × 2 array with a full format of 256 × 384 pixels. The characteristics, operation, testing and application of the detector are detailed. PMID:26917125

  6. High-speed X-ray imaging pixel array detector for synchrotron bunch isolation.

    PubMed

    Philipp, Hugh T; Tate, Mark W; Purohit, Prafull; Shanks, Katherine S; Weiss, Joel T; Gruner, Sol M

    2016-03-01

    A wide-dynamic-range imaging X-ray detector designed for recording successive frames at rates up to 10 MHz is described. X-ray imaging with frame rates of up to 6.5 MHz have been experimentally verified. The pixel design allows for up to 8-12 frames to be stored internally at high speed before readout, which occurs at a 1 kHz frame rate. An additional mode of operation allows the integration capacitors to be re-addressed repeatedly before readout which can enhance the signal-to-noise ratio of cyclical processes. This detector, along with modern storage ring sources which provide short (10-100 ps) and intense X-ray pulses at megahertz rates, opens new avenues for the study of rapid structural changes in materials. The detector consists of hybridized modules, each of which is comprised of a 500 µm-thick silicon X-ray sensor solder bump-bonded, pixel by pixel, to an application-specific integrated circuit. The format of each module is 128 × 128 pixels with a pixel pitch of 150 µm. In the prototype detector described here, the three-side buttable modules are tiled in a 3 × 2 array with a full format of 256 × 384 pixels. The characteristics, operation, testing and application of the detector are detailed.

  7. High-speed X-ray imaging pixel array detector for synchrotron bunch isolation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Philipp, Hugh T.; Tate, Mark W.; Purohit, Prafull

    A wide-dynamic-range imaging X-ray detector designed for recording successive frames at rates up to 10 MHz is described. X-ray imaging with frame rates of up to 6.5 MHz have been experimentally verified. The pixel design allows for up to 8–12 frames to be stored internally at high speed before readout, which occurs at a 1 kHz frame rate. An additional mode of operation allows the integration capacitors to be re-addressed repeatedly before readout which can enhance the signal-to-noise ratio of cyclical processes. This detector, along with modern storage ring sources which provide short (10–100 ps) and intense X-ray pulses at megahertz rates, opens new avenues for the study of rapid structural changes in materials. The detector consists of hybridized modules, each of which is comprised of a 500 µm-thick silicon X-ray sensor solder bump-bonded, pixel by pixel, to an application-specific integrated circuit. The format of each module is 128 × 128 pixels with a pixel pitch of 150 µm. In the prototype detector described here, the three-side buttable modules are tiled in a 3 × 2 array with a full format of 256 × 384 pixels. Lastly, we detail the characteristics, operation, testing and application of the detector.

  8. Fractal analysis of INSAR and correlation with graph-cut based image registration for coastline deformation analysis: post seismic hazard assessment of the 2011 Tohoku earthquake region

    NASA Astrophysics Data System (ADS)

    Dutta, P. K.; Mishra, O. P.

    2012-04-01

    Satellite imagery of the 2011 earthquake off the Pacific coast of Tohoku has provided an opportunity to conduct image transformation analyses employing multi-temporal image retrieval techniques. In this study, we used a new image segmentation algorithm to image coastline deformation by adopting a graph-cut energy minimization framework. Comprehensive analysis of the available InSAR images using coastline deformation analysis helped extract disaster information for the affected region of the 2011 Tohoku tsunamigenic earthquake source zone. We attempted to correlate fractal analysis of seismic clustering behavior with image processing analogies, and our observations suggest that an increase in the fractal dimension distribution is associated with clustering of events that may determine the level of devastation of the region. The implementation of the graph-cut based image registration technique helps us detect the devastation across the coastline of Tohoku through changes in pixel intensity, which drive a regional segmentation of the change in the coastal boundary after the tsunami. The study applies transformation parameters to remotely sensed images by manually segmenting the image and recovering the translation parameters from two images that differ by a rotation. Based on the satellite image analysis through image segmentation, it is found that an area of 0.997 sq km in the Honshu region was the maximum damage zone, localized in the coastal belt of the NE Japan forearc region. The analysis, implemented in MATLAB, suggests that the proposed graph-cut algorithm is robust and more accurate than other image registration methods. The analysis shows that the method can give a realistic estimate of the recovered deformation fields, in pixels, corresponding to coastline change, which may help formulate assessment strategies for post-disaster needs assessment of coastal belts damaged by strong shaking and tsunamis under disaster risk mitigation programs.
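
    The fractal dimension of the seismic clustering discussed above is commonly estimated with a box-counting procedure over the event locations. The sketch below is a generic box-counting estimator rather than the authors' exact procedure; the function name, the choice of box sizes and the 2-D point-set input are assumptions.

      import numpy as np

      def box_counting_dimension(points, box_sizes):
          """Estimate the box-counting (fractal) dimension of a 2-D point set.

          points: array of shape (n, 2), e.g. event epicentre coordinates;
          box_sizes: iterable of box edge lengths to test.
          """
          points = np.asarray(points, dtype=np.float64)
          origin = points.min(axis=0)
          counts = []
          for s in box_sizes:
              idx = np.floor((points - origin) / s).astype(int)
              counts.append(len({tuple(i) for i in idx}))    # number of occupied boxes
          # The dimension is the slope of log(count) versus log(1 / box size).
          slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes, dtype=np.float64)),
                                np.log(counts), 1)
          return slope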

  9. Vision based tunnel inspection using non-rigid registration

    NASA Astrophysics Data System (ADS)

    Badshah, Amir; Ullah, Shan; Shahzad, Danish

    2015-04-01

    The growing number of long tunnels across the globe has increased the need for safety measurements and inspections of tunnels. To avoid serious damage, tunnel inspection is highly recommended at regular intervals of time to find any deformations or cracks in good time. While following the stringent safety and tunnel accessibility standards, conventional geodetic surveying using civil engineering techniques and other manual and mechanical methods is time consuming and disrupts routine operation. An automatic tunnel inspection based on image processing techniques using non-rigid registration is proposed. There are many other image processing methods used for image registration purposes. Most of them operate on images in the spatial domain, for example finding edges and corners by the Harris edge detection method. These methods are quite time consuming and fail in some cases, for example for blurred or noisy images. Because such methods use image features directly in the process, they are known as correlation by image features. The other class of methods is featureless correlation, in which the images are converted into the frequency domain and then correlated with each other. The shift in the spatial domain is the same as in the frequency domain, but the processing is considerably faster than in the spatial domain. In the proposed method, a modified normalized phase correlation is used to find any shift between two images. As pre-processing, the tunnel images, i.e. reference and template, are divided into small patches. All corresponding patches are registered by the proposed modified normalized phase correlation. Applying the proposed algorithm yields the pixel displacements between the images, and these pixel shifts are then converted to measuring units such as mm or cm. After the complete process, any shift in the tunnel at the described points is located.
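
    The authors use a modified normalized phase correlation whose details are not given in the abstract. To illustrate the featureless, frequency-domain approach they build on, the sketch below implements plain normalized phase correlation for integer-pixel shift estimation between two equally sized patches; the function name and the small regularizing constant are assumptions.

      import numpy as np

      def phase_correlation_shift(reference, template):
          """Estimate the integer-pixel translation between two same-sized patches."""
          f_ref = np.fft.fft2(reference)
          f_tpl = np.fft.fft2(template)
          cross_power = f_ref * np.conj(f_tpl)
          cross_power /= np.abs(cross_power) + 1e-12       # keep phase, drop magnitude
          corr = np.fft.ifft2(cross_power).real
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          # Peaks beyond half the patch size correspond to negative shifts.
          shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
          return tuple(shifts)   # (row_shift, col_shift)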

  10. SU-E-J-42: Motion Adaptive Image Filter for Low Dose X-Ray Fluoroscopy in the Real-Time Tumor-Tracking Radiotherapy System.

    PubMed

    Miyamoto, N; Ishikawa, M; Sutherland, K; Suzuki, R; Matsuura, T; Takao, S; Toramatsu, C; Nihongi, H; Shimizu, S; Onimaru, R; Umegaki, K; Shirato, H

    2012-06-01

    In the real-time tumor-tracking radiotherapy system, fiducial markers are detected by X-ray fluoroscopy. The fluoroscopic parameters should be set as low as possible in order to reduce unnecessary imaging dose. However, the fiducial markers could not be recognized due to the effect of statistical noise in low dose imaging. Image processing is envisioned as a solution to improve image quality and to maintain tracking accuracy. In this study, a recursive image filter adapted to target motion is proposed. A fluoroscopy system was used for the experiment. A spherical gold marker was used as a fiducial marker. About 450 fluoroscopic images of the marker were recorded. In order to mimic the respiratory motion of the marker, the images were shifted sequentially. The tube voltage, current and exposure duration were fixed at 65 kV, 50 mA and 2.5 msec, respectively, as the low dose imaging condition. The tube current was 100 mA for high dose imaging. A pattern recognition score (PRS) ranging from 0 to 100 and the image registration error were investigated by performing template pattern matching on each sequential image. The results with and without image processing were compared. In low dose imaging, the image registration error and the PRS without image processing were 2.15±1.21 pixels and 46.67±6.40, respectively. With image processing, they were 1.48±0.82 pixels and 67.80±4.51, respectively. There was no significant difference in the image registration error and the PRS between the results of low dose imaging with image processing and those of high dose imaging without image processing. The results showed that the recursive filter is effective for maintaining marker tracking stability and accuracy in low dose fluoroscopy. © 2012 American Association of Physicists in Medicine.
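
    The motion-adaptive recursive filter itself is not specified in the abstract. The sketch below shows the underlying idea with a plain, non-adaptive recursive (exponentially weighted) temporal filter; the function name and the fixed alpha are assumptions, and a motion-adaptive variant would raise alpha wherever large inter-frame change indicates marker motion.

      import numpy as np

      def recursive_temporal_filter(frames, alpha=0.3):
          """Recursive temporal averaging of noisy fluoroscopic frames.

          frames: iterable of 2-D images; alpha: weight given to the newest frame.
          """
          filtered = None
          out = []
          for frame in frames:
              frame = np.asarray(frame, dtype=np.float64)
              if filtered is None:
                  filtered = frame
              else:
                  filtered = alpha * frame + (1.0 - alpha) * filtered
              out.append(filtered.copy())
          return out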

  11. MRI Signal Intensity Based B-Spline Nonrigid Registration for Pre- and Intraoperative Imaging During Prostate Brachytherapy

    PubMed Central

    Oguro, Sota; Tokuda, Junichi; Elhawary, Haytham; Haker, Steven; Kikinis, Ron; Tempany, Clare M.C.; Hata, Nobuhiko

    2009-01-01

    Purpose To apply an intensity-based nonrigid registration algorithm to MRI-guided prostate brachytherapy clinical data and to assess its accuracy. Materials and Methods A nonrigid registration of preoperative MRI to intraoperative MRI images was carried out in 16 cases using a Basis-Spline algorithm in a retrospective manner. The registration was assessed qualitatively by experts’ visual inspection and quantitatively by measuring the Dice similarity coefficient (DSC) for total gland (TG), central gland (CG), and peripheral zone (PZ), the mutual information (MI) metric, and the fiducial registration error (FRE) between corresponding anatomical landmarks for both the nonrigid and a rigid registration method. Results All 16 cases were successfully registered in less than 5 min. After the nonrigid registration, DSC values for TG, CG, PZ were 0.91, 0.89, 0.79, respectively, the MI metric was −0.19 ± 0.07 and FRE presented a value of 2.3 ± 1.8 mm. All the metrics were significantly better than in the case of rigid registration, as determined by one-sided t-tests. Conclusion The intensity-based nonrigid registration method using clinical data was demonstrated to be feasible and showed statistically improved metrics when compared to only rigid registration. The method is a valuable tool to integrate pre- and intraoperative images for brachytherapy. PMID:19856437

  12. The Development of the CMS Zero Degree Calorimeters to Derive the Centrality of AA Collisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Jeffrey Scott

    The centrality of PbPb collisions is derived using correlations from the zero degree calorimeter (ZDC) signal and pixel multiplicity at the Compact Muon Solenoid (CMS) Experiment using data from the heavy ion run in 2010. The method to derive the centrality takes the two-dimensional correlation between the ZDC and pixels and linearizes it for sorting events. The initial method for deriving the centrality at CMS uses the energy deposit in the HF detector, and it is compared to the centrality derived by the correlations in ZDC and pixel multiplicity. This comparison highlights the similarities between the results of both methods in central collisions, as expected, and deviations in the results in peripheral collisions. The ZDC signals in peripheral collisions are selected by low pixel multiplicity to obtain a ZDC neutron spectrum, which is used to effectively gain match both sides of the ZDC.

  13. Large Format CMOS-based Detectors for Diffraction Studies

    NASA Astrophysics Data System (ADS)

    Thompson, A. C.; Nix, J. C.; Achterkirchen, T. G.; Westbrook, E. M.

    2013-03-01

    Complementary Metal Oxide Semiconductor (CMOS) devices are rapidly replacing CCD devices in many commercial and medical applications. Recent developments in CMOS fabrication have improved their radiation hardness, device linearity, readout noise and thermal noise, making them suitable for x-ray crystallography detectors. Large-format (e.g. 10 cm × 15 cm) CMOS devices with a pixel size of 100 μm × 100 μm are now becoming available that can be butted together on three sides, so that very large area detectors can be made with no dead regions. Like CCD systems, our CMOS systems use a GdOS:Tb scintillator plate to convert stopping x-rays into visible light, which is then transferred with a fiber-optic plate to the sensitive surface of the CMOS sensor. The amount of light per x-ray on the sensor is much higher in the CMOS system than in a CCD system because the fiber-optic plate is only 3 mm thick, while on a CCD system it is highly tapered and much longer. A CMOS sensor is an active pixel matrix in which every pixel is controlled and read out independently of all other pixels. This allows these devices to be read out while the sensor is collecting charge in all the other pixels. For x-ray diffraction detectors this is a major advantage, since image frames can be collected continuously at up to 20 Hz while the crystal is rotated. A complete diffraction dataset can be collected over five times faster than with CCD systems, with lower radiation exposure to the crystal. In addition, since the data are taken in fine-phi-slice mode, the 3D angular position of diffraction peaks is improved. We have developed a cooled six-sensor CMOS detector with an active area of 28.2 × 29.5 cm with 100 μm × 100 μm pixels and a readout rate of 20 Hz. The detective quantum efficiency exceeds 60% over the range 8-12 keV. One-, two- and twelve-sensor systems are also being developed for a variety of scientific applications. Since the sensors are buttable on three sides, even larger systems could be built at reasonable cost.

  14. MARS: a mouse atlas registration system based on a planar x-ray projector and an optical camera

    NASA Astrophysics Data System (ADS)

    Wang, Hongkai; Stout, David B.; Taschereau, Richard; Gu, Zheng; Vu, Nam T.; Prout, David L.; Chatziioannou, Arion F.

    2012-10-01

    This paper introduces a mouse atlas registration system (MARS), composed of a stationary top-view x-ray projector and a side-view optical camera, coupled to a mouse atlas registration algorithm. This system uses the x-ray and optical images to guide a fully automatic co-registration of a mouse atlas with each subject, in order to provide anatomical reference for small animal molecular imaging systems such as positron emission tomography (PET). To facilitate the registration, a statistical atlas that accounts for inter-subject anatomical variations was constructed based on 83 organ-labeled mouse micro-computed tomography (CT) images. The statistical shape model and conditional Gaussian model techniques were used to register the atlas with the x-ray image and optical photo. The accuracy of the atlas registration was evaluated by comparing the registered atlas with the organ-labeled micro-CT images of the test subjects. The results showed excellent registration accuracy of the whole-body region, and good accuracy for the brain, liver, heart, lungs and kidneys. In its implementation, the MARS was integrated with a preclinical PET scanner to deliver combined PET/MARS imaging, and to facilitate atlas-assisted analysis of the preclinical PET images.

  15. MARS: a mouse atlas registration system based on a planar x-ray projector and an optical camera.

    PubMed

    Wang, Hongkai; Stout, David B; Taschereau, Richard; Gu, Zheng; Vu, Nam T; Prout, David L; Chatziioannou, Arion F

    2012-10-07

    This paper introduces a mouse atlas registration system (MARS), composed of a stationary top-view x-ray projector and a side-view optical camera, coupled to a mouse atlas registration algorithm. This system uses the x-ray and optical images to guide a fully automatic co-registration of a mouse atlas with each subject, in order to provide anatomical reference for small animal molecular imaging systems such as positron emission tomography (PET). To facilitate the registration, a statistical atlas that accounts for inter-subject anatomical variations was constructed based on 83 organ-labeled mouse micro-computed tomography (CT) images. The statistical shape model and conditional Gaussian model techniques were used to register the atlas with the x-ray image and optical photo. The accuracy of the atlas registration was evaluated by comparing the registered atlas with the organ-labeled micro-CT images of the test subjects. The results showed excellent registration accuracy of the whole-body region, and good accuracy for the brain, liver, heart, lungs and kidneys. In its implementation, the MARS was integrated with a preclinical PET scanner to deliver combined PET/MARS imaging, and to facilitate atlas-assisted analysis of the preclinical PET images.

  16. Improved Space Object Orbit Determination Using CMOS Detectors

    NASA Astrophysics Data System (ADS)

    Schildknecht, T.; Peltonen, J.; Sännti, T.; Silha, J.; Flohrer, T.

    2014-09-01

    CMOS sensors, or in general Active Pixel Sensors (APS), are rapidly replacing CCDs in the consumer camera market. Due to significant technological advances during the past years, these devices are starting to compete with CCDs also for demanding scientific imaging applications, in particular in the astronomy community. CMOS detectors offer a series of inherent advantages compared to CCDs, due to the structure of their basic pixel cells, each of which contains its own amplifier and readout electronics. The most prominent advantages for space object observations are the extremely fast and flexible readout capabilities, feasibility of electronic shuttering and precise epoch registration, and the potential to perform image processing operations on-chip and in real time. The major challenges and design drivers for ground-based and space-based optical observation strategies have been analyzed. CMOS detector characteristics were critically evaluated and compared with the established CCD technology, especially with respect to the above-mentioned observations. Similarly, the desirable on-chip processing functionalities which would further enhance object detection and image segmentation were identified. Finally, we simulated several observation scenarios for ground- and space-based sensors by assuming different observation and sensor properties. We introduce the analyzed end-to-end simulations of the ground- and space-based strategies in order to investigate the orbit determination accuracy and its sensitivity to different values of the frame rate, pixel scale, and astrometric and epoch registration accuracies. Two cases were simulated: a survey using a ground-based sensor to observe objects in LEO for surveillance applications, and a statistical survey with a space-based sensor orbiting in LEO observing small-size debris in LEO. The ground-based LEO survey uses a dynamical fence close to the Earth shadow a few hours after sunset. For the space-based scenario, a sensor in a sun-synchronous LEO orbit, always pointing in the anti-sun direction to achieve optimum illumination conditions for small LEO debris, was simulated. For the space-based scenario the simulations showed a 20-130% improvement in the accuracy of all orbital parameters when varying the frame rate from 1/3 fps, which is the fastest rate for a typical CCD detector, to 50 fps, which represents the highest rate of scientific CMOS cameras. Changing the epoch registration accuracy from a typical 20.0 ms for a mechanical shutter to 0.025 ms, the theoretical value for the electronic shutter of a CMOS camera, improved the orbit accuracy by 4 to 190%. The ground-based scenario also benefits from the specific CMOS characteristics, but to a lesser extent.

  17. Investigation of thin n-in-p planar pixel modules for the ATLAS upgrade

    NASA Astrophysics Data System (ADS)

    Savic, N.; Beyer, J.; La Rosa, A.; Macchiolo, A.; Nisius, R.

    2016-12-01

    In view of the High Luminosity upgrade of the Large Hadron Collider (HL-LHC), planned to start around 2023-2025, the ATLAS experiment will undergo a replacement of the Inner Detector. A higher luminosity will imply higher irradiation levels and hence will demand more radiation hardness, especially in the inner layers of the pixel system. The n-in-p silicon technology is a promising candidate to instrument this region, also thanks to its cost-effectiveness, because it requires only single-sided processing, in contrast to the n-in-n pixel technology presently employed in the LHC experiments. In addition, thin sensors were found to ensure radiation hardness at high fluences. An overview is given of recent results obtained with non-irradiated and irradiated n-in-p planar pixel modules. The focus will be on n-in-p planar pixel sensors with an active thickness of 100 and 150 μm recently produced at ADVACAM. To maximize the active area of the sensors, slim and active edges are implemented. The performance of these modules is investigated at beam tests and the results on edge efficiency will be shown.

  18. Self-amplified CMOS image sensor using a current-mode readout circuit

    NASA Astrophysics Data System (ADS)

    Santos, Patrick M.; de Lima Monteiro, Davies W.; Pittet, Patrick

    2014-05-01

    The feature size of CMOS processes has decreased during the past few years, and problems such as reduced dynamic range have become more significant in voltage-mode pixels, even though the integration of more functionality inside the pixel has become easier. This work makes a contribution on both sides: the possibility of a high signal excursion range using current-mode circuits, together with added functionality by performing signal amplification inside the pixel. The classic 3T pixel architecture was rebuilt with small modifications to integrate a transconductance amplifier providing a current as an output. A matrix of these new pixels operates as a whole like a large transistor sourcing an amplified current that is used for signal processing. This current is controlled by the intensity of the light received by the matrix, modulated pixel by pixel. The output current can be controlled by the biasing circuits to achieve a very large range of output signal levels. It can also be controlled through the matrix size, and this permits a very high degree of freedom in the signal level, observing the current densities inside the integrated circuit. In addition, the matrix can operate at very small integration times. Its applications would be those in which fast image processing and high signal amplification are required and low resolution is not a major problem, such as UV image sensors. Simulation results are presented to support operation, control, design, signal excursion levels and linearity for a matrix of pixels conceived using this new sensor concept.

  19. Feature-based pairwise retinal image registration by radial distortion correction

    NASA Astrophysics Data System (ADS)

    Lee, Sangyeol; Abràmoff, Michael D.; Reinhardt, Joseph M.

    2007-03-01

    Fundus camera imaging is widely used to document disorders such as diabetic retinopathy and macular degeneration. Multiple retinal images can be combined together through a procedure known as mosaicing to form an image with a larger field of view. Mosaicing typically requires multiple pairwise registrations of partially overlapped images. We describe a new method for pairwise retinal image registration. The proposed method is unique in that the radial distortion due to image acquisition is corrected prior to the geometric transformation. Vessel lines are detected using the Hessian operator and are used as input features to the registration. Since the overlapping region is typically small in a retinal image pair, only a few correspondences are available, thus limiting the applicable model to an affine transform at best. To recover the distortion due to the curved surface of the retina and the lens optics, a combined approach of an affine model with a radial distortion correction is proposed. The parameters of the image acquisition and radial distortion models are estimated during an optimization step that uses Powell's method driven by the vessel line distance. Experimental results using 20 pairs of green channel images acquired from three subjects with a fundus camera confirmed that the affine model with distortion correction could register retinal image pairs to within 1.88 +/- 0.35 pixels accuracy (mean +/- standard deviation), as assessed by vessel line error, which is 17% better than the affine-only approach. Because the proposed method needs only two correspondences, it can be applied to obtain good registration accuracy even in the case of small overlap between retinal image pairs.
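
    The combined model described above pairs a radial distortion term with an affine transform. The exact parameterization used by the authors is not given in the abstract, so the sketch below assumes the common first-order model x_d = x_u (1 + k1 r^2) followed by a 2-D affine map; the function name, the distortion centre at the origin and the 2x3 affine layout are illustrative assumptions.

      import numpy as np

      def apply_radial_then_affine(points, k1, affine):
          """Map points through a one-parameter radial distortion, then an affine transform.

          points: array (n, 2) expressed relative to the distortion centre;
          k1: radial distortion coefficient; affine: 2x3 matrix [A | t].
          """
          pts = np.asarray(points, dtype=np.float64)
          r2 = np.sum(pts ** 2, axis=1, keepdims=True)
          distorted = pts * (1.0 + k1 * r2)          # first-order radial distortion
          A = np.asarray(affine, dtype=np.float64)[:, :2]
          t = np.asarray(affine, dtype=np.float64)[:, 2]
          return distorted @ A.T + t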

  20. Modeling susceptibility difference artifacts produced by metallic implants in magnetic resonance imaging with point-based thin-plate spline image registration.

    PubMed

    Pauchard, Y; Smith, M; Mintchev, M

    2004-01-01

    Magnetic resonance imaging (MRI) suffers from geometric distortions arising from various sources. One such source is the non-linearities associated with the presence of metallic implants, which can profoundly distort the obtained images. These non-linearities result in pixel shifts and intensity changes in the vicinity of the implant, often precluding any meaningful assessment of the entire image. This paper presents a method for correcting these distortions based on non-rigid image registration techniques. Two images from a modelled three-dimensional (3D) grid phantom were subjected to point-based thin-plate spline registration. The reference image (without distortions) was obtained from a grid model including a spherical implant, and the corresponding test image containing the distortions was obtained using a previously reported technique for spatial modelling of magnetic susceptibility artifacts. After identifying the non-recoverable area in the distorted image, the calculated spline model was able to quantitatively account for the distortions, thus facilitating their compensation. Upon completion of the compensation procedure, the non-recoverable area was removed from the reference image and the latter was compared to the compensated image. A quantitative assessment of the goodness of the proposed compensation technique is presented.
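
    Point-based thin-plate spline registration, as used above, fits a smooth warp that interpolates a set of corresponding landmarks by combining an affine part with radial basis functions U(r) = r^2 log r^2. The sketch below is a minimal 2-D implementation of that standard formulation, not the authors' code; the function names and the optional regularization term are assumptions.

      import numpy as np

      def tps_kernel(r2):
          """Thin-plate spline radial basis U(r) = r^2 log(r^2), with U(0) = 0."""
          safe = np.where(r2 > 0.0, r2, 1.0)
          return r2 * np.log(safe)

      def tps_fit(src, dst, reg=0.0):
          """Fit a 2-D thin-plate spline that maps src landmarks onto dst landmarks."""
          src = np.asarray(src, dtype=np.float64)
          dst = np.asarray(dst, dtype=np.float64)
          n = src.shape[0]
          d2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
          K = tps_kernel(d2) + reg * np.eye(n)       # bending block, optionally regularized
          P = np.hstack([np.ones((n, 1)), src])      # affine block [1, x, y]
          L = np.zeros((n + 3, n + 3))
          L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
          rhs = np.zeros((n + 3, 2))
          rhs[:n] = dst
          params = np.linalg.solve(L, rhs)
          return src, params[:n], params[n:]         # landmarks, non-affine weights, affine part

      def tps_transform(points, src, weights, affine):
          """Warp arbitrary points with a fitted thin-plate spline."""
          points = np.asarray(points, dtype=np.float64)
          d2 = np.sum((points[:, None, :] - src[None, :, :]) ** 2, axis=-1)
          P = np.hstack([np.ones((points.shape[0], 1)), points])
          return tps_kernel(d2) @ weights + P @ affine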

  1. Can direct electron detectors outperform phosphor-CCD systems for TEM?

    NASA Astrophysics Data System (ADS)

    Moldovan, G.; Li, X.; Kirkland, A.

    2008-08-01

    A new generation of imaging detectors is being considered for application in TEM, but which device architectures can provide the best images? Monte Carlo simulations of the electron-sensor interaction are used here to calculate the expected modulation transfer of monolithic active pixel sensors (MAPS), hybrid active pixel sensors (HAPS) and double-sided silicon strip detectors (DSSD), showing that ideal and nearly ideal transfer can be obtained using DSSD and MAPS sensors. These results strongly support the replacement of current phosphor screen and charge-coupled device imaging systems with such new, directly exposed, position-sensitive electron detectors.

  2. The High Resolution Stereo Camera (HRSC) of Mars Express and its approach to science analysis and mapping for Mars and its satellites

    NASA Astrophysics Data System (ADS)

    Gwinner, K.; Jaumann, R.; Hauber, E.; Hoffmann, H.; Heipke, C.; Oberst, J.; Neukum, G.; Ansan, V.; Bostelmann, J.; Dumke, A.; Elgner, S.; Erkeling, G.; Fueten, F.; Hiesinger, H.; Hoekzema, N. M.; Kersten, E.; Loizeau, D.; Matz, K.-D.; McGuire, P. C.; Mertens, V.; Michael, G.; Pasewaldt, A.; Pinet, P.; Preusker, F.; Reiss, D.; Roatsch, T.; Schmidt, R.; Scholten, F.; Spiegel, M.; Stesky, R.; Tirsch, D.; van Gasselt, S.; Walter, S.; Wählisch, M.; Willner, K.

    2016-07-01

    The High Resolution Stereo Camera (HRSC) of ESA's Mars Express is designed to map and investigate the topography of Mars. The camera, in particular its Super Resolution Channel (SRC), also obtains images of Phobos and Deimos on a regular basis. As HRSC is a push broom scanning instrument with nine CCD line detectors mounted in parallel, its unique feature is the ability to obtain along-track stereo images and four colors during a single orbital pass. The sub-pixel accuracy of 3D points derived from stereo analysis allows producing DTMs with grid size of up to 50 m and height accuracy on the order of one image ground pixel and better, as well as corresponding orthoimages. Such data products have been produced systematically for approximately 40% of the surface of Mars so far, while global shape models and a near-global orthoimage mosaic could be produced for Phobos. HRSC is also unique because it bridges between laser altimetry and topography data derived from other stereo imaging instruments, and provides geodetic reference data and geological context to a variety of non-stereo datasets. This paper, in addition to an overview of the status and evolution of the experiment, provides a review of relevant methods applied for 3D reconstruction and mapping, and respective achievements. We will also review the methodology of specific approaches to science analysis based on joint analysis of DTM and orthoimage information, or benefitting from high accuracy of co-registration between multiple datasets, such as studies using multi-temporal or multi-angular observations, from the fields of geomorphology, structural geology, compositional mapping, and atmospheric science. Related exemplary results from analysis of HRSC data will be discussed. After 10 years of operation, HRSC covered about 70% of the surface by panchromatic images at 10-20 m/pixel, and about 97% at better than 100 m/pixel. As the areas with contiguous coverage by stereo data are increasingly abundant, we also present original data related to the analysis of image blocks and address methodology aspects of newly established procedures for the generation of multi-orbit DTMs and image mosaics. The current results suggest that multi-orbit DTMs with grid spacing of 50 m can be feasible for large parts of the surface, as well as brightness-adjusted image mosaics with co-registration accuracy of adjacent strips on the order of one pixel, and at the highest image resolution available. These characteristics are demonstrated by regional multi-orbit data products covering the MC-11 (East) quadrangle of Mars, representing the first prototype of a new HRSC data product level.

  3. Theory of dispersive microlenses

    NASA Technical Reports Server (NTRS)

    Herman, B.; Gal, George

    1993-01-01

    A dispersive microlens is a miniature optical element which simultaneously focuses and disperses light. Arrays of dispersive microlenses have potential applications in multicolor focal planes. They have a 100 percent optical fill factor and can focus light down to detectors of diffraction spot size, freeing up areas on the focal plane for on-chip analog signal processing. Use of dispersive microlenses allows inband color separation within a pixel and perfect scene registration. A dual-color separation has the potential for temperature discrimination. We discuss the design of dispersive microlenses and present sample results for efficient designs.

  4. Improved image quality using monolithic scintillator detectors with dual-sided readout in a whole-body TOF-PET ring: a simulation study.

    PubMed

    Tabacchini, Valerio; Surti, Suleman; Borghi, Giacomo; Karp, Joel S; Schaart, Dennis R

    2017-02-13

    We have recently built and characterized the performance of a monolithic scintillator detector based on a 32 mm  ×  32 mm  ×  22 mm LYSO:Ce crystal read out by digital silicon photomultiplier (dSiPM) arrays coupled to the crystal front and back surfaces in a dual-sided readout (DSR) configuration. The detector spatial resolution appeared to be markedly better than that of a detector consisting of the same crystal with conventional back-sided readout (BSR). Here, we aim to evaluate the influence of this difference in the detector spatial response on the quality of reconstructed images, so as to quantify the potential benefit of the DSR approach for high-resolution, whole-body time-of-flight (TOF) positron emission tomography (PET) applications. We perform Monte Carlo simulations of clinical PET systems based on BSR and DSR detectors, using the results of our detector characterization experiments to model the detector spatial responses. We subsequently quantify the improvement in image quality obtained with DSR compared to BSR, using clinically relevant metrics such as the contrast recovery coefficient (CRC) and the area under the localized receiver operating characteristic curve (ALROC). Finally, we compare the results with simulated rings of pixelated detectors with DOI capability. Our results show that the DSR detector produces significantly higher CRC and increased ALROC values than the BSR detector. The comparison with pixelated systems indicates that one would need to choose a crystal size of 3.2 mm with three DOI layers to match the performance of the BSR detector, while a pixel size of 1.3 mm with three DOI layers would be required to get on par with the DSR detector.

  5. Improved image quality using monolithic scintillator detectors with dual-sided readout in a whole-body TOF-PET ring: a simulation study

    NASA Astrophysics Data System (ADS)

    Tabacchini, Valerio; Surti, Suleman; Borghi, Giacomo; Karp, Joel S.; Schaart, Dennis R.

    2017-03-01

    We have recently built and characterized the performance of a monolithic scintillator detector based on a 32 mm  ×  32 mm  ×  22 mm LYSO:Ce crystal read out by digital silicon photomultiplier (dSiPM) arrays coupled to the crystal front and back surfaces in a dual-sided readout (DSR) configuration. The detector spatial resolution appeared to be markedly better than that of a detector consisting of the same crystal with conventional back-sided readout (BSR). Here, we aim to evaluate the influence of this difference in the detector spatial response on the quality of reconstructed images, so as to quantify the potential benefit of the DSR approach for high-resolution, whole-body time-of-flight (TOF) positron emission tomography (PET) applications. We perform Monte Carlo simulations of clinical PET systems based on BSR and DSR detectors, using the results of our detector characterization experiments to model the detector spatial responses. We subsequently quantify the improvement in image quality obtained with DSR compared to BSR, using clinically relevant metrics such as the contrast recovery coefficient (CRC) and the area under the localized receiver operating characteristic curve (ALROC). Finally, we compare the results with simulated rings of pixelated detectors with DOI capability. Our results show that the DSR detector produces significantly higher CRC and increased ALROC values than the BSR detector. The comparison with pixelated systems indicates that one would need to choose a crystal size of 3.2 mm with three DOI layers to match the performance of the BSR detector, while a pixel size of 1.3 mm with three DOI layers would be required to get on par with the DSR detector.

  6. Enter AGU student contest to win free Fall Meeting registration

    NASA Astrophysics Data System (ADS)

    Smedley, Kara

    2012-07-01

    AGU is excited to announce its first Student Video and Student T-shirt Design competitions. This is an opportunity for students to display their artistic sides and share their creativity and love of science with the world. Entries could highlight an aspect of Earth or space science in an educational and/or entertaining way or showcase a career path in geophysical sciences. Winners of these student-only competitions will be awarded free registration to the 2012 Fall Meeting in San Francisco, Calif.

  7. A 100 Mfps image sensor for biological applications

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Shimonomura, Kazuhiro; Nguyen, Anh Quang; Takehara, Kosei; Kamakura, Yoshinari; Goetschalckx, Paul; Haspeslagh, Luc; De Moor, Piet; Dao, Vu Truong Son; Nguyen, Hoang Dung; Hayashi, Naoki; Mitsui, Yo; Inumaru, Hideo

    2018-02-01

    Two ultrahigh-speed CCD image sensors with different characteristics were fabricated for applications to advanced scientific measurement apparatuses. The sensors are BSI MCG (Backside-illuminated Multi-Collection-Gate) image sensors with multiple collection gates around the center of the front side of each pixel, placed like petals of a flower. One has five collection gates and one drain gate at the center, and can capture five consecutive frames at 100 Mfps with a pixel count of about 600 kpixels (512 x 576 x 2 pixels). In-pixel signal accumulation is possible for repetitive image capture of reproducible events. The target application is FLIM. The other is equipped with four collection gates each connected to an in-situ CCD memory with 305 elements, which enables capture of 1,220 (4 x 305) consecutive images at 50 Mfps. The CCD memory is folded and looped with the first element connected to the last element, which also makes in-pixel signal accumulation possible. The sensor is a small test sensor with 32 x 32 pixels. The target applications are imaging TOF MS, pulse neutron tomography and dynamic PSP. The paper also briefly explains an expression for the temporal resolution of silicon image sensors theoretically derived by the authors in 2017. It is shown that the image sensor designed based on the theoretical analysis achieves imaging of consecutive frames at a frame interval of 50 ps.

  8. Photon-counting hexagonal pixel array CdTe detector: Spatial resolution characteristics for image-guided interventional applications

    PubMed Central

    Shrestha, Suman; Karellas, Andrew; Shi, Linxi; Gounis, Matthew J.; Bellazzini, Ronaldo; Spandre, Gloria; Brez, Alessandro; Minuti, Massimo

    2016-01-01

    Purpose: High-resolution, photon-counting, energy-resolved detector with fast-framing capability can facilitate simultaneous acquisition of precontrast and postcontrast images for subtraction angiography without pixel registration artifacts and can facilitate high-resolution real-time imaging during image-guided interventions. Hence, this study was conducted to determine the spatial resolution characteristics of a hexagonal pixel array photon-counting cadmium telluride (CdTe) detector. Methods: A 650 μm thick CdTe Schottky photon-counting detector capable of concurrently acquiring up to two energy-windowed images was operated in a single energy-window mode to include photons of 10 keV or higher. The detector had hexagonal pixels with apothem of 30 μm resulting in pixel pitch of 60 and 51.96 μm along the two orthogonal directions. The detector was characterized at IEC-RQA5 spectral conditions. Linear response of the detector was determined over the air kerma rate relevant to image-guided interventional procedures ranging from 1.3 nGy/frame to 91.4 μGy/frame. Presampled modulation transfer was determined using a tungsten edge test device. The edge-spread function and the finely sampled line spread function accounted for hexagonal sampling, from which the presampled modulation transfer function (MTF) was determined. Since detectors with hexagonal pixels require resampling to square pixels for distortion-free display, the optimal square pixel size was determined by minimizing the root-mean-squared-error of the aperture functions for the square and hexagonal pixels up to the Nyquist limit. Results: At Nyquist frequencies of 8.33 and 9.62 cycles/mm along the apothem and orthogonal to the apothem directions, the modulation factors were 0.397 and 0.228, respectively. For the corresponding axis, the limiting resolution defined as 10% MTF occurred at 13.3 and 12 cycles/mm, respectively. Evaluation of the aperture functions yielded an optimal square pixel size of 54 μm. After resampling to 54 μm square pixels using trilinear interpolation, the presampled MTF at Nyquist frequency of 9.26 cycles/mm was 0.29 and 0.24 along the orthogonal directions and the limiting resolution (10% MTF) occurred at approximately 12 cycles/mm. Visual analysis of a bar pattern image showed the ability to resolve close to 12 line-pairs/mm and qualitative evaluation of a neurovascular nitinol-stent showed the ability to visualize its struts at clinically relevant conditions. Conclusions: Hexagonal pixel array photon-counting CdTe detector provides high spatial resolution in single-photon counting mode. After resampling to optimal square pixel size for distortion-free display, the spatial resolution is preserved. The dual-energy capabilities of the detector could allow for artifact-free subtraction angiography and basis material decomposition. The proposed high-resolution photon-counting detector with energy-resolving capability can be of importance for several image-guided interventional procedures as well as for pediatric applications. PMID:27147324

  9. Photon-counting hexagonal pixel array CdTe detector: Spatial resolution characteristics for image-guided interventional applications.

    PubMed

    Vedantham, Srinivasan; Shrestha, Suman; Karellas, Andrew; Shi, Linxi; Gounis, Matthew J; Bellazzini, Ronaldo; Spandre, Gloria; Brez, Alessandro; Minuti, Massimo

    2016-05-01

    High-resolution, photon-counting, energy-resolved detector with fast-framing capability can facilitate simultaneous acquisition of precontrast and postcontrast images for subtraction angiography without pixel registration artifacts and can facilitate high-resolution real-time imaging during image-guided interventions. Hence, this study was conducted to determine the spatial resolution characteristics of a hexagonal pixel array photon-counting cadmium telluride (CdTe) detector. A 650 μm thick CdTe Schottky photon-counting detector capable of concurrently acquiring up to two energy-windowed images was operated in a single energy-window mode to include photons of 10 keV or higher. The detector had hexagonal pixels with apothem of 30 μm resulting in pixel pitch of 60 and 51.96 μm along the two orthogonal directions. The detector was characterized at IEC-RQA5 spectral conditions. Linear response of the detector was determined over the air kerma rate relevant to image-guided interventional procedures ranging from 1.3 nGy/frame to 91.4 μGy/frame. Presampled modulation transfer was determined using a tungsten edge test device. The edge-spread function and the finely sampled line spread function accounted for hexagonal sampling, from which the presampled modulation transfer function (MTF) was determined. Since detectors with hexagonal pixels require resampling to square pixels for distortion-free display, the optimal square pixel size was determined by minimizing the root-mean-squared-error of the aperture functions for the square and hexagonal pixels up to the Nyquist limit. At Nyquist frequencies of 8.33 and 9.62 cycles/mm along the apothem and orthogonal to the apothem directions, the modulation factors were 0.397 and 0.228, respectively. For the corresponding axis, the limiting resolution defined as 10% MTF occurred at 13.3 and 12 cycles/mm, respectively. Evaluation of the aperture functions yielded an optimal square pixel size of 54 μm. After resampling to 54 μm square pixels using trilinear interpolation, the presampled MTF at Nyquist frequency of 9.26 cycles/mm was 0.29 and 0.24 along the orthogonal directions and the limiting resolution (10% MTF) occurred at approximately 12 cycles/mm. Visual analysis of a bar pattern image showed the ability to resolve close to 12 line-pairs/mm and qualitative evaluation of a neurovascular nitinol-stent showed the ability to visualize its struts at clinically relevant conditions. Hexagonal pixel array photon-counting CdTe detector provides high spatial resolution in single-photon counting mode. After resampling to optimal square pixel size for distortion-free display, the spatial resolution is preserved. The dual-energy capabilities of the detector could allow for artifact-free subtraction angiography and basis material decomposition. The proposed high-resolution photon-counting detector with energy-resolving capability can be of importance for several image-guided interventional procedures as well as for pediatric applications.

  10. Median filters as a tool to determine dark noise thresholds in high resolution smartphone image sensors for scientific imaging

    NASA Astrophysics Data System (ADS)

    Igoe, Damien P.; Parisi, Alfio V.; Amar, Abdurazaq; Rummenie, Katherine J.

    2018-01-01

    An evaluation of the use of median filters in the reduction of dark noise in smartphone high resolution image sensors is presented. The Sony Xperia Z1 employed has a maximum image sensor resolution of 20.7 Mpixels, with each pixel having a side length of just over 1 μm. The large number of photosites provides an image sensor with very high sensitivity but also makes it prone to noise effects such as hot pixels. Similar to earlier research with older models of smartphone, no appreciable temperature effects were observed in the overall average pixel values for images taken in ambient temperatures between 5 °C and 25 °C. In this research, hot-pixels are defined as pixels with intensities above a specific threshold. The threshold is determined using the distribution of pixel values of a set of images with uniform statistical properties associated with the application of median-filters of increasing size. An image with uniform statistics was employed as a training set from 124 dark images, and the threshold was determined to be 9 digital numbers (DN). The threshold remained constant for multiple resolutions and did not appreciably change even after a year of extensive field use and exposure to solar ultraviolet radiation. Although the temperature effects' uniformity masked an increase in hot-pixel occurrences, the total number of occurrences represented less than 0.1% of the total image. Hot-pixels were removed by applying a median filter, with an optimum filter size of 7 × 7; similar trends were observed for four additional smartphone image sensors used for validation. Hot-pixels were also reduced by decreasing image resolution. This research outlines a methodology to characterise the dark noise behavior of high resolution image sensors for use in scientific investigations, especially as pixel sizes decrease.
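
    The following is a minimal sketch (not the authors' code) of the hot-pixel handling described above: pixels in a dark frame that exceed the local median-filtered value by a threshold are flagged and replaced. The 7 × 7 kernel and 9 DN threshold follow the values quoted in the abstract; the function and variable names are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter

def find_hot_pixels(dark_frame, kernel_size=7, threshold_dn=9):
    """Return a boolean mask of hot pixels and a corrected frame."""
    background = median_filter(dark_frame, size=kernel_size)
    hot_mask = (dark_frame.astype(np.int32) - background) > threshold_dn
    corrected = np.where(hot_mask, background, dark_frame)
    return hot_mask, corrected

# Example with synthetic data: a low dark level plus injected hot pixels.
rng = np.random.default_rng(0)
dark = rng.poisson(2, size=(512, 512)).astype(np.uint16)
dark[rng.integers(0, 512, 50), rng.integers(0, 512, 50)] += 40
mask, cleaned = find_hot_pixels(dark)
print("hot-pixel fraction:", mask.mean())
```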

  11. Dunes of the Southern Highlands

    NASA Image and Video Library

    2017-03-23

    Sand dunes are scattered across Mars and one of the larger populations exists in the Southern hemisphere, just west of the Hellas impact basin. The Hellespontus region features numerous collections of dark, dune formations that collect both within depressions such as craters, and among "extra-crater" plains areas. This image displays the middle portion of a large dune field composed primarily of crescent-shaped "barchan" dunes. Here, the steep, sunlit side of the dune, called a slip face, indicates the down-wind side of the dune and direction of its migration. Other long, narrow linear dunes known as "seif" dunes are also here and in other locales to the east. NB: "Seif" comes from the Arabic word meaning "sword." The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 25.5 centimeters (10 inches) per pixel (with 1 x 1 binning); objects on the order of 77 centimeters (30.3 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21571

  12. Coregistration of high-resolution Mars orbital images

    NASA Astrophysics Data System (ADS)

    Sidiropoulos, Panagiotis; Muller, Jan-Peter

    2015-04-01

    The systematic orbital imaging of the Martian surface started 4 decades ago with NASA's Viking Orbiter 1 & 2 missions, which were launched in August 1975 and acquired orbital images of the planet between 1976 and 1980. The result of this reconnaissance was the first medium-resolution (i.e. ≤ 300 m/pixel) global map of Mars, as well as a variety of high-resolution images (reaching up to 8 m/pixel) of special regions of interest. Over the last two decades NASA has sent 3 more spacecraft with onboard instruments for high-resolution orbital imaging: Mars Global Surveyor (MGS) carrying the Mars Orbital Camera - Narrow Angle (MOC-NA), Mars Odyssey carrying the Thermal Emission Imaging System - Visual (THEMIS-VIS), and the Mars Reconnaissance Orbiter (MRO) carrying two distinct high-resolution cameras, the Context Camera (CTX) and the High-Resolution Imaging Science Experiment (HiRISE). Moreover, since 2004 ESA has operated the multispectral High Resolution Stereo Camera (HRSC) onboard Mars Express, with resolution up to 12.5 m/pixel. Overall, this set of cameras has acquired more than 400,000 high-resolution images, i.e. with resolution better than 100 m and as fine as 25 cm/pixel. Notwithstanding the high spatial resolution of the available NASA orbital products, their areo-referencing accuracy is often very poor. Due to pointing inconsistencies, usually from errors in roll attitude, the acquired products may actually image areas tens of kilometers away from the point that they are nominally looking at. On the other hand, since 2004 the ESA Mars Express has been acquiring stereo images through the High Resolution Stereo Camera (HRSC), with resolution that is usually 12.5-25 metres per pixel. The achieved coverage is more than 64% for images with resolution finer than 20 m/pixel, while for ~40% of Mars, Digital Terrain Models (DTMs) have been produced which are co-registered with MOLA [Gwinner et al., 2010]. The HRSC images and DTMs represent the best available 3D reference frame for Mars, showing co-registration with MOLA < 25 m (loc. cit.). In our work, the reference generated by HRSC terrain-corrected orthorectified images is used as a common reference frame to co-register all available high-resolution orbital NASA products into a common 3D coordinate system, thus allowing the examination of changes that happen on the surface of Mars over time (such as seasonal flows [McEwen et al., 2011] or new impact craters [Byrne et al., 2009]). To automate this otherwise tedious manual task, we have developed an automatic co-registration pipeline that produces orthorectified versions of the NASA images in realistic time (i.e. from ~15 minutes to 10 hours per image depending on size). In the first step of this pipeline, tie-points are extracted from the target NASA image and the reference HRSC image or image mosaic. Subsequently, the HRSC areo-reference information is used to transform the HRSC tie-point pixel coordinates into 3D "world" coordinates. This way, a correspondence between the pixel coordinates of the target NASA image and the 3D "world" coordinates is established for each tie-point. This set of correspondences is used to estimate a non-rigid, 3D-to-2D transformation model, which transforms the target image into the HRSC reference coordinate system. Finally, correlation of the transformed target image and the HRSC image is employed to fine-tune the orthorectification results, thus generating results with sub-pixel accuracy. 
This method, which has been shown to be accurate, fast, robust to resolution differences, and reliable when dealing with partially degraded data, will be presented along with example co-registration results achieved by using it. Acknowledgements: The research leading to these results has received partial funding from the STFC "MSSL Consolidated Grant" ST/K000977/1 and partial support from the European Union's Seventh Framework Programme (FP7/2007-2013) under iMars grant agreement n° 607379. References: [1] K. F. Gwinner, et al. (2010) Topography of Mars from global mapping by HRSC high-resolution digital terrain models and orthoimages: characteristics and performance. Earth and Planetary Science Letters 294, 506-519, doi:10.1016/j.epsl.2009.11.007. [2] A. McEwen, et al. (2011) Seasonal flows on warm martian slopes. Science, 333 (6043): 740-743. [3] S. Byrne, et al. (2009) Distribution of mid-latitude ground ice on mars from new impact craters. Science, 325(5948):1674-1676.
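
    As a hedged sketch of the tie-point fitting step only (the full pipeline above also extracts tie-points automatically and refines the result by image correlation), the code below fits a 2nd-order polynomial mapping from target-image pixel coordinates to reference map coordinates by least squares. The polynomial order, names, and synthetic tie-points are illustrative assumptions, not the pipeline's actual model.

```python
import numpy as np

def poly2_design(xy):
    """Design matrix for a 2nd-order bivariate polynomial."""
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_polynomial_mapping(target_px, reference_xy):
    """Least-squares fit of a polynomial warp from tie-point correspondences."""
    A = poly2_design(target_px)
    coeffs, *_ = np.linalg.lstsq(A, reference_xy, rcond=None)
    return coeffs                      # shape (6, 2)

def apply_mapping(coeffs, target_px):
    return poly2_design(target_px) @ coeffs

# Synthetic tie-points: pixel coordinates in the target image and matching
# reference coordinates (here generated from a fake affine truth plus noise).
rng = np.random.default_rng(1)
tp_target = rng.uniform(0, 5000, size=(200, 2))
true_affine = np.array([[12.5, 3.0], [0.02, 25.0], [25.0, -0.01]])
tp_reference = poly2_design(tp_target)[:, :3] @ true_affine + rng.normal(0, 0.3, (200, 2))

C = fit_polynomial_mapping(tp_target, tp_reference)
residuals = apply_mapping(C, tp_target) - tp_reference
print("RMS residual:", np.sqrt((residuals**2).mean()))
```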

  13. 32 CFR 1288.5 - Procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... validation sticker. (1) One decal will be affixed to the left front bumper (operator's side) of a four-wheel... registration information. Evidence of compliance will be documented by the issuance and display of a new 3-year...

  14. Registration of 3D ultrasound computer tomography and MRI for evaluation of tissue correspondences

    NASA Astrophysics Data System (ADS)

    Hopp, T.; Dapp, R.; Zapf, M.; Kretzek, E.; Gemmeke, H.; Ruiter, N. V.

    2015-03-01

    3D Ultrasound Computer Tomography (USCT) is a new imaging method for breast cancer diagnosis. In the current state of development it is essential to correlate USCT with a known imaging modality like MRI to evaluate how different tissue types are depicted. Due to different imaging conditions, e.g. with the breast subject to buoyancy in USCT, a direct correlation is demanding. We present a 3D image registration method to reduce positioning differences and allow direct side-by-side comparison of USCT and MRI volumes. It is based on a two-step approach including a buoyancy simulation with a biomechanical model and free-form deformations using cubic B-Splines for surface refinement. Simulation parameters are optimized patient-specifically in a simulated annealing scheme. The method was evaluated with in-vivo datasets, resulting in an average registration error below 5 mm. Corresponding tissue structures can thereby be located in the same or nearby slices in both modalities, and three-dimensional non-linear deformations due to buoyancy are reduced. Image fusion of MRI volumes and USCT sound speed volumes was performed for intuitive display. By applying the registration to data of our first in-vivo study with the KIT 3D USCT, we could correlate several tissue structures in MRI and USCT images and learn how connective tissue, carcinomas and breast implants observed in the MRI are depicted in the USCT imaging modes.
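
    Below is a generic simulated-annealing loop of the kind used to tune patient-specific simulation parameters, shown as a hedged sketch: the cost function here is only a stand-in for the surface mismatch between the buoyancy-simulated breast and the USCT segmentation, which the paper computes from its biomechanical model. All names and numeric settings are illustrative.

```python
import numpy as np

def simulated_annealing(cost, x0, step=0.1, t0=1.0, cooling=0.95, n_iter=500, seed=0):
    """Minimize cost(x) by random perturbations with a cooling temperature."""
    rng = np.random.default_rng(seed)
    x, best = np.asarray(x0, float), np.asarray(x0, float)
    fx = fbest = cost(x)
    t = t0
    for _ in range(n_iter):
        cand = x + rng.normal(0.0, step, size=x.shape)
        fc = cost(cand)
        # Accept improvements always, worse moves with a temperature-dependent probability.
        if fc < fx or rng.random() < np.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x.copy(), fx
        t *= cooling
    return best, fbest

# Stand-in cost: distance of (stiffness, density offset) from an arbitrary target pair.
cost = lambda p: float((p[0] - 3.2) ** 2 + (p[1] + 0.5) ** 2)
params, residual = simulated_annealing(cost, x0=[1.0, 0.0])
print("optimized parameters:", np.round(params, 2), "residual:", round(residual, 4))
```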

  15. Fourier-based automatic alignment for improved Visual Cryptography schemes.

    PubMed

    Machizaud, Jacques; Chavel, Pierre; Fournel, Thierry

    2011-11-07

    In Visual Cryptography, several images, called "shadow images", that separately contain no information, are overlapped to reveal a shared secret message. We develop a method to digitally register one printed shadow image acquired by a camera with a purely digital shadow image stored in memory. Using Fourier techniques derived from Fourier Optics concepts, the idea is to enhance and exploit the quasi-periodicity of the shadow images, which are composed of a random distribution of black and white patterns on a periodic sampling grid. The advantage is to speed up the security control or the access time to the message, in particular in the cases of a small pixel size or of large numbers of pixels. Furthermore, the interest of visual cryptography can be increased by embedding the initial message in two shadow images that do not have identical mathematical supports, making manual registration impractical. Experimental results demonstrate the successful operation of the method, including the possibility to directly project the result onto the printed shadow image.

  16. Methods of editing cloud and atmospheric layer affected pixels from satellite data

    NASA Technical Reports Server (NTRS)

    Nixon, P. R.; Wiegand, C. L.; Richardson, A. J.; Johnson, M. P. (Principal Investigator)

    1982-01-01

    Subvisible cirrus clouds (SCi) were easily distinguished in mid-infrared (MIR) TIROS-N daytime data from south Texas and northeast Mexico. The MIR (3.55-3.93 micrometer) pixel digital count means of the SCi affected areas were more than 3.5 standard deviations on the cold side of the scene means. (These standard deviations were made free of the effects of unusual instrument error by factoring out the Ch 3 MIR noise on the basis of detailed examination of noisy and noise-free pixels). SCi affected areas in the IR Ch 4 (10.5-11.5 micrometer) appeared cooler than the general scene, but were not as prominent as in Ch 3, being less than 2 standard deviations from the scene mean. Ch 3 and 4 standard deviations and coefficients of variation are not reliable indicators, by themselves, of the presence of SCi because land features can have similar statistical properties.

  17. Geometric Corrections for Topographic Distortion from Side Scan Sonar Data Obtained by ANKOU System

    NASA Astrophysics Data System (ADS)

    Yamamoto, Fujio; Kato, Yukihiro; Ogasawara, Shohei

    The ANKOU is a newly developed, full ocean depth, long-range vector side scan sonar system. The system provides real-time vector side scan sonar data to produce backscattering images and bathymetric maps for seafloor swaths up to 10 km on either side of the ship's centerline. Complete geometric corrections are made using towfish attitude data and by correcting cross-track distortions known as foreshortening and layover, which are caused by violation of the flat-bottom assumption. Foreshortening and layover refer to pixels which have been placed at an incorrect cross-track distance. Our correction of this topographic distortion is accomplished by interpolating a bathymetric profile and ANKOU phase data. We applied these processing techniques to ANKOU backscattering data obtained off the Boso Peninsula, and confirmed their efficiency and utility for making geometric corrections of side scan sonar data.

  18. Image registration reveals central lens thickness minimally increases during accommodation

    PubMed Central

    Schachar, Ronald A; Mani, Majid; Schachar, Ira H

    2017-01-01

    Purpose To evaluate anterior chamber depth, central crystalline lens thickness and lens curvature during accommodation. Setting California Retina Associates, El Centro, CA, USA. Design Healthy volunteer, prospective, clinical research swept-source optical coherence biometric image registration study of accommodation. Methods Ten subjects (4 females and 6 males) with an average age of 22.5 years (range: 20–26 years) participated in the study. A 45° beam splitter attached to a Zeiss IOLMaster 700 (Carl Zeiss Meditec Inc., Jena, Germany) biometer enabled simultaneous imaging of the cornea, anterior chamber, entire central crystalline lens and fovea in the dilated right eyes of subjects before, and during focus on a target 11 cm from the cornea. Images with superimposable foveal images, obtained before and during accommodation, that met all of the predetermined alignment criteria were selected for comparison. This registration requirement assured that changes in anterior chamber depth and central lens thickness could be accurately and reliably measured. The lens radii of curvatures were measured with a pixel stick circle. Results Images from only 3 of 10 subjects met the predetermined criteria for registration. Mean anterior chamber depth decreased, −67 μm (range: −0.40 to −110 μm), and mean central lens thickness increased, 117 μm (range: 100–130 μm). The lens surfaces steepened, anterior greater than posterior, while the lens, itself, did not move or shift its position as appeared from the lack of movement of the lens nucleus, during 7.8 diopters of accommodation, (range: 6.6–9.7 diopters). Conclusion Image registration, with stable invariant references for image correspondence, reveals that during accommodation a large increase in lens surface curvatures is associated with only a small increase in central lens thickness and no change in lens position. PMID:28979092

  19. Comparison of demons deformable registration-based methods for texture analysis of serial thoracic CT scans

    NASA Astrophysics Data System (ADS)

    Cunliffe, Alexandra R.; Al-Hallaq, Hania A.; Fei, Xianhan M.; Tuohy, Rachel E.; Armato, Samuel G.

    2013-02-01

    To determine how 19 image texture features may be altered by three image registration methods, "normal" baseline and follow-up computed tomography (CT) scans from 27 patients were analyzed. Nineteen texture feature values were calculated in over 1,000 32×32-pixel regions of interest (ROIs) randomly placed in each baseline scan. All three methods used demons registration to map baseline scan ROIs to anatomically matched locations in the corresponding transformed follow-up scan. For the first method, the follow-up scan transformation was subsampled to achieve a voxel size identical to that of the baseline scan. For the second method, the follow-up scan was transformed through affine registration to achieve global alignment with the baseline scan. For the third method, the follow-up scan was directly deformed to the baseline scan using demons deformable registration. Feature values in matched ROIs were compared using Bland–Altman 95% limits of agreement. For each feature, the range spanned by the 95% limits was normalized to the mean feature value to obtain the normalized range of agreement, nRoA. Wilcoxon signed-rank tests were used to compare nRoA values across features for the three methods. Significance for individual tests was adjusted using the Bonferroni method. nRoA was significantly smaller for affine-registered scans than for the resampled scans (p=0.003), indicating lower feature value variability between baseline and follow-up scan ROIs using this method. For both of these methods, however, nRoA was significantly higher than when feature values were calculated directly on demons-deformed follow-up scans (p<0.001). Across features and methods, nRoA values remained below 26%.
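
    The agreement metric described above can be computed as in the following hedged sketch (an assumed implementation, not the authors' code): the Bland–Altman 95% limits of agreement between baseline and follow-up feature values are normalized by the mean feature value to give the normalized range of agreement (nRoA).

```python
import numpy as np

def normalized_range_of_agreement(baseline_values, followup_values):
    """nRoA in percent: width of the 95% limits of agreement over the mean feature value."""
    baseline_values = np.asarray(baseline_values, dtype=float)
    followup_values = np.asarray(followup_values, dtype=float)
    diffs = followup_values - baseline_values
    loa_half_width = 1.96 * diffs.std(ddof=1)        # half-width of 95% limits of agreement
    range_of_agreement = 2.0 * loa_half_width        # upper limit minus lower limit
    mean_feature_value = np.mean((baseline_values + followup_values) / 2.0)
    return 100.0 * range_of_agreement / abs(mean_feature_value)

# Example: a feature that changes little between scans yields a small nRoA.
rng = np.random.default_rng(2)
base = rng.normal(100.0, 10.0, 1000)
follow = base + rng.normal(0.0, 2.0, 1000)
print(f"nRoA = {normalized_range_of_agreement(base, follow):.1f}%")
```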

  20. Ocean Thermal Feature Recognition, Discrimination and Tracking Using Infrared Satellite Imagery

    DTIC Science & Technology

    1991-06-01

    [Only fragmentary table-of-contents text is available for this record; the recoverable caption reads "Ideal feature space mapping from pattern tile - search tile comparison" (Figure 2.6), and the pixel side dimension is given in meters.]

  1. WE-G-BRD-01: Diffusion Weighted MRI for Response Assessment of Inoperable Lung Tumors for Patients Undergoing SBRT Treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tyagi, N; Wengler, K; Yorke, E

    2014-06-15

    Purpose: To investigate early changes in tumor Apparent Diffusion Coefficients (ADC) derived from diffusion weighted (DW)-MRI of lung cancer patients undergoing SBRT, as a possible early predictor of treatment response. Methods: DW-MRI scans were performed in this prospective phase I IRB-approved study of inoperable lung tumors at various time-points during the course of SBRT treatment. Axial DW scans using multiple b-values ranging from 0–1000 s/mm² were acquired in treatment position on a 3T Philips MR scanner during simulation, one hour after the first fraction (8 Gy), after a total of 5 fractions (40 Gy), and 4 weeks after SBRT delivery. A monoexponential model based on a least-squares fit over all b-values was applied on a pixel-by-pixel basis and the ADC was calculated. GTVs drawn on 4DCT for planning were mapped onto the T2w MRI (acquired at exhale) after deformable registration. These volumes were then mapped onto the DWI scan for ADC calculation after rigid registration between the anatomical and diffusion scans. T2w scans at follow-up time points were deformably registered to the pretreatment T2 scan. Results: The first two patients in this study were analyzed. Median ADC values were 1.48, 1.48, 1.62 and 1.83 ×10⁻³ mm²/s at pretreatment, after 8 Gy, after 40 Gy, and 4 weeks post-treatment for the first patient, and 1.57, 1.53, 1.66 and 1.72 ×10⁻³ mm²/s for the second patient. ADC increased more markedly at 4 weeks after treatment than immediately post-treatment, implying that the late ADC value may be a better predictor of tumor response to SBRT. The fraction of tumor pixels at high ADC values increased at 4 weeks post-treatment. Conclusion: The observed increase in ADC values before the end of radiotherapy may be a surrogate for tumor response, but further patient accrual will be necessary to determine its value.
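
    A pixel-wise monoexponential ADC fit of the kind described above can be written as a log-linear least-squares problem. The sketch below is a hedged illustration; the study's exact fitting code and b-value set are not specified beyond the 0–1000 s/mm² range, and all names are placeholders.

```python
import numpy as np

def fit_adc(signal_stack, b_values):
    """signal_stack: (n_b, ny, nx) DW images; returns an ADC map in mm^2/s."""
    b = np.asarray(b_values, dtype=float)
    logs = np.log(np.clip(signal_stack, 1e-6, None)).reshape(len(b), -1)
    # Least squares for log S = log S0 - b * ADC at every pixel simultaneously.
    A = np.column_stack([np.ones_like(b), -b])
    coeffs, *_ = np.linalg.lstsq(A, logs, rcond=None)
    return coeffs[1].reshape(signal_stack.shape[1:])

# Synthetic example: a uniform true ADC of 1.5e-3 mm^2/s plus noise.
b_vals = [0, 200, 400, 600, 800, 1000]
s0, ny, nx = 1000.0, 32, 32
stack = np.stack([s0 * np.exp(-b * 1.5e-3) for b in b_vals])[:, None, None]
stack = np.broadcast_to(stack, (len(b_vals), ny, nx)).copy()
stack += np.random.default_rng(3).normal(0, 2.0, stack.shape)
print("median ADC:", np.median(fit_adc(stack, b_vals)))
```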

  2. An automatic optimum number of well-distributed ground control lines selection procedure based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Yavari, Somayeh; Valadan Zoej, Mohammad Javad; Salehi, Bahram

    2018-05-01

    Selecting an optimum number and a good distribution of ground control information is important for reaching accurate and robust registration results. This paper proposes a new general procedure based on a Genetic Algorithm (GA) which is applicable to all kinds of features (point, line, and areal features); however, linear features, due to their unique characteristics, are of interest in this investigation. The method is called the Optimum number of Well-Distributed ground control Information Selection (OWDIS) procedure. Using this method, a population of binary chromosomes is randomly initialized. Ones indicate the presence of a pair of conjugate lines as a GCL and zeros specify their absence. The chromosome length is set equal to the number of all conjugate lines. For each chromosome, the unknown parameters of a proper mathematical model can be calculated using the selected GCLs (the ones in that chromosome). Then, a limited number of Check Points (CPs) are used to evaluate the Root Mean Square Error (RMSE) of each chromosome as its fitness value. The procedure continues until a stopping criterion is reached. The number and positions of ones in the best chromosome indicate the selected GCLs among all conjugate lines. To evaluate the proposed method, a GeoEye and an Ikonos image are used over different areas of Iran. Comparing the results obtained by the proposed method in a traditional RFM with conventional methods that use all conjugate lines as GCLs shows a five-fold accuracy improvement (to pixel-level accuracy) as well as the strength of the proposed method. To prevent over-parametrization errors in a traditional RFM due to the selection of a high number of improperly correlated terms, an optimized line-based RFM is also proposed. The results show the superiority of the combination of the proposed OWDIS method with an optimized line-based RFM in terms of increasing the accuracy to better than 0.7 pixel, improving reliability, and reducing systematic errors. These results also demonstrate the high potential of linear features as reliable control features for reaching sub-pixel accuracy in registration applications.
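
    The selection loop described above can be sketched as a textbook genetic algorithm over binary chromosomes. In the hedged example below, a user-supplied function stands in for fitting the registration model from the selected GCLs and returning the RMSE at independent check points; the stub fitness, operator settings, and names are illustrative, not the paper's line-based RFM.

```python
import numpy as np

rng = np.random.default_rng(4)

def evolve(n_lines, rmse_of_subset, pop_size=40, generations=100,
           crossover_p=0.8, mutation_p=0.02):
    """Binary-chromosome GA: each bit marks whether a conjugate line pair is used as a GCL."""
    pop = rng.integers(0, 2, size=(pop_size, n_lines))
    for _ in range(generations):
        fitness = np.array([rmse_of_subset(chrom) for chrom in pop])
        parents = pop[np.argsort(fitness)[:pop_size // 2]]   # lower RMSE is better
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            if rng.random() < crossover_p:                    # one-point crossover
                cut = rng.integers(1, n_lines)
                child = np.concatenate([a[:cut], b[cut:]])
            else:
                child = a.copy()
            flip = rng.random(n_lines) < mutation_p           # bit-flip mutation
            child[flip] ^= 1
            children.append(child)
        pop = np.vstack([parents, children])
    fitness = np.array([rmse_of_subset(chrom) for chrom in pop])
    return pop[np.argmin(fitness)], fitness.min()

# Stub fitness: a real implementation would fit the line-based model from the
# selected GCLs and evaluate the RMSE at check points.
def dummy_rmse(chrom):
    return abs(chrom.sum() - 12) + rng.normal(0, 0.1)

best, best_rmse = evolve(n_lines=60, rmse_of_subset=dummy_rmse)
print("selected GCLs:", int(best.sum()), "RMSE:", round(float(best_rmse), 3))
```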

  3. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology.

    PubMed

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-02-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D Registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512×512 pixels size from a CT dataset of 53 MB size at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. Copyright © 2011. Published by Elsevier GmbH.

  4. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology

    PubMed Central

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-01-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512 × 512 pixels size from a CT dataset of 53 MB size at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches – namely so-called wobbled splatting – to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. PMID:21782399

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, M.; Al-Dayeh, L.; Patel, P.

    It is well known that even small movements of the head can lead to artifacts in fMRI. Corrections for these movements are usually made by a registration algorithm which accounts for translational and rotational motion of the head under a rigid body assumption. The brain, however, is not entirely rigid, and images are prone to local deformations due to CSF motion, susceptibility effects, local changes in blood flow, and inhomogeneities in the magnetic and gradient fields. Since nonrigid body motion is not adequately corrected by approaches relying on simple rotational and translational corrections, we have investigated a general approach where an n-th order polynomial is used to map all images onto a common reference image. The coefficients of the polynomial transformation were determined through minimization of the ratio of the variance to the mean of each pixel. Simulation studies were conducted to validate the technique. Results of experimental studies using the polynomial transformation for 2D and 3D registration show a lower variance-to-mean ratio compared to simple rotational and translational corrections.
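
    The registration criterion described above can be sketched as follows (a hedged illustration, not the authors' implementation): each frame is warped by a low-order polynomial (a 1st-order, i.e. affine, warp here for brevity) and the coefficients are found by minimizing the mean over pixels of the temporal variance-to-mean ratio of the registered stack. The optimizer, polynomial order, synthetic data, and names are illustrative choices.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize

def warp_affine(img, params):
    """params = [a0, a1, a2, b0, b1, b2]: x' = a0 + a1*x + a2*y, y' = b0 + b1*x + b2*y."""
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    a0, a1, a2, b0, b1, b2 = params
    xs = a0 + a1 * xx + a2 * yy
    ys = b0 + b1 * xx + b2 * yy
    return map_coordinates(img, [ys, xs], order=1, mode="nearest")

def variance_to_mean_cost(params, moving, fixed_stack):
    stack = np.concatenate([fixed_stack, warp_affine(moving, params)[None]], axis=0)
    ratio = stack.var(axis=0) / np.clip(stack.mean(axis=0), 1e-6, None)
    return ratio.mean()

# Synthetic fMRI-like example: reference frames share a bright blob; the moving
# frame contains the same blob shifted by a few pixels.
rng = np.random.default_rng(5)
yy, xx = np.mgrid[0:64, 0:64]
base = 100 + 40 * np.exp(-((yy - 32) ** 2 + (xx - 40) ** 2) / 60.0)
fixed = base + rng.normal(0, 1, size=(4, 64, 64))
moving = np.roll(base, shift=(2, -3), axis=(0, 1)) + rng.normal(0, 1, (64, 64))

identity = np.array([0, 1, 0, 0, 0, 1], dtype=float)
res = minimize(variance_to_mean_cost, identity, args=(moving, fixed), method="Powell")
print("estimated warp parameters:", np.round(res.x, 2))
```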

  6. Image Geometric Corrections for a New EMCCD-based Dual Modular X-ray Imager

    PubMed Central

    Qu, Bin; Huang, Ying; Wang, Weiyuan; Cartwright, Alexander N.; Titus, Albert H.; Bednarek, Daniel R.; Rudin, Stephen

    2012-01-01

    An EMCCD-based dual modular x-ray imager was recently designed and developed from the component level, providing a high dynamic range of 53 dB and an effective pixel size of 26 μm for angiography and fluoroscopy. The unique 2×1 array design efficiently increased the clinical field of view, and also can be readily expanded to an M×N array implementation. Due to the alignment mismatches between the EMCCD sensors and the fiber optic tapers in each module, the output images or video sequences result in a misaligned 2048×1024 digital display if uncorrected. In this paper, we present a method for correcting display registration using a custom-designed two layer printed circuit board. This board was designed with grid lines to serve as the calibration pattern, and provides an accurate reference and sufficient contrast to enable proper display registration. Results show an accurate and fine stitching of the two outputs from the two modules. PMID:22254882

  7. Path to the Dark Side

    NASA Image and Video Library

    2015-03-09

    The moon Iapetus, like the "force" in Star Wars, has both a light side and a dark side. Scientists think that Iapetus' (914 miles or 1471 kilometers across) dark/light asymmetry was actually created by material migrating away from the dark side. For a simulation of how scientists think the asymmetry formed, see Thermal Runaway Model . Lit terrain seen here is on the Saturn-facing hemisphere of Iapetus. North on Iapetus is up and rotated 43 degrees to the right. The image was taken in green light with the Cassini spacecraft narrow-angle camera on Jan. 4, 2015. The view was acquired at a distance of approximately 2.5 million miles (4 million kilometers) from Iapetus. Image scale is 15 miles (24 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/pia18307

  8. Lung texture in serial thoracic CT scans: Assessment of change introduced by image registration1

    PubMed Central

    Cunliffe, Alexandra R.; Al-Hallaq, Hania A.; Labby, Zacariah E.; Pelizzari, Charles A.; Straus, Christopher; Sensakovic, William F.; Ludwig, Michelle; Armato, Samuel G.

    2012-01-01

    Purpose: The aim of this study was to quantify the effect of four image registration methods on lung texture features extracted from serial computed tomography (CT) scans obtained from healthy human subjects. Methods: Two chest CT scans acquired at different time points were collected retrospectively for each of 27 patients. Following automated lung segmentation, each follow-up CT scan was registered to the baseline scan using four algorithms: (1) rigid, (2) affine, (3) B-splines deformable, and (4) demons deformable. The registration accuracy for each scan pair was evaluated by measuring the Euclidean distance between 150 identified landmarks. On average, 1432 spatially matched 32 × 32-pixel region-of-interest (ROI) pairs were automatically extracted from each scan pair. First-order, fractal, Fourier, Laws’ filter, and gray-level co-occurrence matrix texture features were calculated in each ROI, for a total of 140 features. Agreement between baseline and follow-up scan ROI feature values was assessed by Bland–Altman analysis for each feature; the range spanned by the 95% limits of agreement of feature value differences was calculated and normalized by the average feature value to obtain the normalized range of agreement (nRoA). Features with small nRoA were considered “registration-stable.” The normalized bias for each feature was calculated from the feature value differences between baseline and follow-up scans averaged across all ROIs in every patient. Because patients had “normal” chest CT scans, minimal change in texture feature values between scan pairs was anticipated, with the expectation of small bias and narrow limits of agreement. Results: Registration with demons reduced the Euclidean distance between landmarks such that only 9% of landmarks were separated by ≥1 mm, compared with rigid (98%), affine (95%), and B-splines (90%). Ninety-nine of the 140 (71%) features analyzed yielded nRoA > 50% for all registration methods, indicating that the majority of feature values were perturbed following registration. Nineteen of the features (14%) had nRoA < 15% following demons registration, indicating relative feature value stability. Student's t-tests showed that the nRoA of these 19 features was significantly larger when rigid, affine, or B-splines registration methods were used compared with demons registration. Demons registration yielded greater normalized bias in feature value change than B-splines registration, though this difference was not significant (p = 0.15). Conclusions: Demons registration provided higher spatial accuracy between matched anatomic landmarks in serial CT scans than rigid, affine, or B-splines algorithms. Texture feature changes calculated in healthy lung tissue from serial CT scans were smaller following demons registration compared with all other algorithms. Though registration altered the values of the majority of texture features, 19 features remained relatively stable after demons registration, indicating their potential for detecting pathologic change in serial CT scans. Combined use of accurate deformable registration using demons and texture analysis may allow for quantitative evaluation of local changes in lung tissue due to disease progression or treatment response. PMID:22894392

  9. Background Registration-Based Adaptive Noise Filtering of LWIR/MWIR Imaging Sensors for UAV Applications

    PubMed Central

    Kim, Byeong Hak; Kim, Min Young; Chae, You Seong

    2017-01-01

    Unmanned aerial vehicles (UAVs) are equipped with optical systems including an infrared (IR) camera such as electro-optical IR (EO/IR), target acquisition and designation sights (TADS), or forward looking IR (FLIR). However, images obtained from IR cameras are subject to noise such as dead pixels, lines, and fixed pattern noise. Nonuniformity correction (NUC) is a widely employed method to reduce noise in IR images, but it has limitations in removing noise that occurs during operation. Methods have been proposed to overcome the limitations of the NUC method, such as two-point correction (TPC) and scene-based NUC (SBNUC). However, these methods still suffer from unfixed pattern noise. In this paper, a background registration-based adaptive noise filtering (BRANF) method is proposed to overcome the limitations of conventional methods. The proposed BRANF method utilizes background registration processing and robust principal component analysis (RPCA). In addition, image quality verification methods are proposed that can measure the noise filtering performance quantitatively without ground truth images. Experiments were performed for performance verification with middle wave infrared (MWIR) and long wave infrared (LWIR) images obtained from practical military optical systems. As a result, it is found that the image quality improvement rate of BRANF is 30% higher than that of conventional NUC. PMID:29280970
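
    The RPCA step can be illustrated with the generic principal component pursuit formulation shown below (a textbook sketch via inexact augmented Lagrangian iterations, not the authors' implementation): each column of M holds a registered frame, L captures the static background as a low-rank component, and S holds the sparse residual containing moving targets and unfixed pattern noise. All parameters and names are assumptions.

```python
import numpy as np

def robust_pca(M, n_iter=200, tol=1e-7):
    """Decompose M into low-rank L and sparse S via principal component pursuit."""
    n1, n2 = M.shape
    lam = 1.0 / np.sqrt(max(n1, n2))
    mu = n1 * n2 / (4.0 * np.abs(M).sum())
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    shrink = lambda X, tau: np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)
    for _ in range(n_iter):
        # Singular value thresholding for the low-rank part.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt
        # Soft thresholding for the sparse part.
        S = shrink(M - L + Y / mu, lam / mu)
        Y += mu * (M - L - S)
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S

# Example: 20 flattened 32x32 frames with a fixed background plus rare bright spikes.
rng = np.random.default_rng(6)
background = rng.normal(50, 5, size=(32 * 32, 1)) @ np.ones((1, 20))
spikes = (rng.random(background.shape) < 0.01) * 40.0
L, S = robust_pca(background + spikes)
print("rank(L) =", np.linalg.matrix_rank(L), " nonzeros in S:", int((np.abs(S) > 1).sum()))
```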

  10. Background Registration-Based Adaptive Noise Filtering of LWIR/MWIR Imaging Sensors for UAV Applications.

    PubMed

    Kim, Byeong Hak; Kim, Min Young; Chae, You Seong

    2017-12-27

    Unmanned aerial vehicles (UAVs) are equipped with optical systems including an infrared (IR) camera such as electro-optical IR (EO/IR), target acquisition and designation sights (TADS), or forward looking IR (FLIR). However, images obtained from IR cameras are subject to noise such as dead pixels, lines, and fixed pattern noise. Nonuniformity correction (NUC) is a widely employed method to reduce noise in IR images, but it has limitations in removing noise that occurs during operation. Methods have been proposed to overcome the limitations of the NUC method, such as two-point correction (TPC) and scene-based NUC (SBNUC). However, these methods still suffer from unfixed pattern noise. In this paper, a background registration-based adaptive noise filtering (BRANF) method is proposed to overcome the limitations of conventional methods. The proposed BRANF method utilizes background registration processing and robust principal component analysis (RPCA). In addition, image quality verification methods are proposed that can measure the noise filtering performance quantitatively without ground truth images. Experiments were performed for performance verification with middle wave infrared (MWIR) and long wave infrared (LWIR) images obtained from practical military optical systems. As a result, it is found that the image quality improvement rate of BRANF is 30% higher than that of conventional NUC.

  11. A method to detect layover and shadow based on distributed spaceborne single-baseline InSAR

    NASA Astrophysics Data System (ADS)

    Yun, Ren; Huanxin, Zou; Shilin, Zhou; Hao, Sun; Kefeng, Ji

    2014-03-01

    Layover and shadow are inevitable phenomena in InSAR; they seriously disrupt the continuity of interferometric phase images and complicate the subsequent phase unwrapping. It is therefore important to detect layover and shadow. This paper presents an approach to detecting layover and shadow using the auto-correlation matrix and the amplitudes of the two images. The method makes full use of the spatial information of neighboring pixels and effectively detects layover and shadow regions even in the case of low registration accuracy. Experimental results on simulated data verify the effectiveness of the algorithm.

  12. Tracker-on-C for cone-beam CT-guided surgery: evaluation of geometric accuracy and clinical applications

    NASA Astrophysics Data System (ADS)

    Reaungamornrat, S.; Otake, Y.; Uneri, A.; Schafer, S.; Mirota, D. J.; Nithiananthan, S.; Stayman, J. W.; Khanna, A. J.; Reh, D. D.; Gallia, G. L.; Taylor, R. H.; Siewerdsen, J. H.

    2012-02-01

    Conventional surgical tracking configurations carry a variety of limitations in line-of-sight, geometric accuracy, and mismatch with the surgeon's perspective (for video augmentation). With increasing utilization of mobile C-arms, particularly those allowing cone-beam CT (CBCT), there is opportunity to better integrate surgical trackers at bedside to address such limitations. This paper describes a tracker configuration in which the tracker is mounted directly on the C-arm. To maintain registration within a dynamic coordinate system, a reference marker visible across the full C-arm rotation is implemented, and the "Tracker-on-C" configuration is shown to provide improved target registration error (TRE) over a conventional in-room setup - (0.9+/-0.4) mm vs (1.9+/-0.7) mm, respectively. The system can also generate digitally reconstructed radiographs (DRRs) from the perspective of a tracked tool ("x-ray flashlight"), the tracker, or the C-arm ("virtual fluoroscopy"), with geometric accuracy in virtual fluoroscopy of (0.4+/-0.2) mm. Using a video-based tracker, planning data and DRRs can be superimposed on the video scene from a natural perspective over the surgical field, with geometric accuracy of (0.8+/-0.3) pixels for planning data overlay and (0.6+/-0.4) pixels for DRR overlay across all C-arm angles. The field-of-view of fluoroscopy or CBCT can also be overlaid on real-time video ("Virtual Field Light") to assist C-arm positioning. The fixed transformation between the x-ray image and tracker facilitated quick, accurate intraoperative registration. The workflow and precision associated with a variety of realistic surgical tasks were significantly improved using the Tracker-on-C - for example, nearly a factor of 2 reduction in time required for C-arm positioning, reduction or elimination of dose in "hunting" for a specific fluoroscopic view, and confident placement of the x-ray FOV on the surgical target. The proposed configuration streamlines the integration of C-arm CBCT with real-time tracking and demonstrated utility in a spectrum of image-guided interventions (e.g., spine surgery) benefiting from improved accuracy, enhanced visualization, and reduced radiation exposure.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vedantham, Srinivasan; Shrestha, Suman; Karellas, Andrew, E-mail: andrew.karellas@umassmed.edu

    Purpose: High-resolution, photon-counting, energy-resolved detector with fast-framing capability can facilitate simultaneous acquisition of precontrast and postcontrast images for subtraction angiography without pixel registration artifacts and can facilitate high-resolution real-time imaging during image-guided interventions. Hence, this study was conducted to determine the spatial resolution characteristics of a hexagonal pixel array photon-counting cadmium telluride (CdTe) detector. Methods: A 650 μm thick CdTe Schottky photon-counting detector capable of concurrently acquiring up to two energy-windowed images was operated in a single energy-window mode to include photons of 10 keV or higher. The detector had hexagonal pixels with apothem of 30 μm resulting in pixel pitch of 60 and 51.96 μm along the two orthogonal directions. The detector was characterized at IEC-RQA5 spectral conditions. Linear response of the detector was determined over the air kerma rate relevant to image-guided interventional procedures ranging from 1.3 nGy/frame to 91.4 μGy/frame. Presampled modulation transfer was determined using a tungsten edge test device. The edge-spread function and the finely sampled line spread function accounted for hexagonal sampling, from which the presampled modulation transfer function (MTF) was determined. Since detectors with hexagonal pixels require resampling to square pixels for distortion-free display, the optimal square pixel size was determined by minimizing the root-mean-squared-error of the aperture functions for the square and hexagonal pixels up to the Nyquist limit. Results: At Nyquist frequencies of 8.33 and 9.62 cycles/mm along the apothem and orthogonal to the apothem directions, the modulation factors were 0.397 and 0.228, respectively. For the corresponding axis, the limiting resolution defined as 10% MTF occurred at 13.3 and 12 cycles/mm, respectively. Evaluation of the aperture functions yielded an optimal square pixel size of 54 μm. After resampling to 54 μm square pixels using trilinear interpolation, the presampled MTF at Nyquist frequency of 9.26 cycles/mm was 0.29 and 0.24 along the orthogonal directions and the limiting resolution (10% MTF) occurred at approximately 12 cycles/mm. Visual analysis of a bar pattern image showed the ability to resolve close to 12 line-pairs/mm and qualitative evaluation of a neurovascular nitinol-stent showed the ability to visualize its struts at clinically relevant conditions. Conclusions: Hexagonal pixel array photon-counting CdTe detector provides high spatial resolution in single-photon counting mode. After resampling to optimal square pixel size for distortion-free display, the spatial resolution is preserved. The dual-energy capabilities of the detector could allow for artifact-free subtraction angiography and basis material decomposition. The proposed high-resolution photon-counting detector with energy-resolving capability can be of importance for several image-guided interventional procedures as well as for pediatric applications.

  14. 2D Measurements of the Balmer Series in Proto-MPEX using a Fast Visible Camera Setup

    NASA Astrophysics Data System (ADS)

    Lindquist, Elizabeth G.; Biewer, Theodore M.; Ray, Holly B.

    2017-10-01

    The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device with densities up to 10²⁰ m⁻³ and temperatures up to 20 eV. Broadband spectral measurements show that the visible emission spectra are solely due to the Balmer lines of deuterium. Monochromatic and RGB color Sanstreak SC1 Edgertronic fast visible cameras capture high-speed video of plasmas in Proto-MPEX. The color camera is equipped with a long-pass 450 nm filter and an internal Bayer filter to view the Dα line at 656 nm on the red channel and the Dβ line at 486 nm on the blue channel. The monochromatic camera has a 434 nm narrow bandpass filter to view the Dγ intensity. In the setup, a 50/50 beam splitter is used so that both cameras image the same region of the plasma discharge. Camera images were aligned to each other by viewing a grid, ensuring 1-pixel registration between the two cameras. A uniform-intensity calibrated white light source was used to perform a pixel-to-pixel relative and an absolute intensity calibration for both cameras. Python scripts combined the dual-camera data, rendering the Dα, Dβ, and Dγ intensity ratios. Observations from Proto-MPEX discharges will be presented. This work was supported by U.S. D.O.E. contract DE-AC05-00OR22725.
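
    An illustrative sketch of the calibration and line-ratio step is given below (an assumed implementation, not the scripts used in the experiment): a pixel-to-pixel relative (flat-field) correction from the uniform white-light exposure, followed by computation of the Balmer intensity ratios from the aligned red (Dα), blue (Dβ), and monochromatic (Dγ) channels. The function names, scale factors, and synthetic frames are placeholders.

```python
import numpy as np

def flat_field_correct(frame, white_reference, dark_frame):
    """Relative pixel-to-pixel correction derived from a uniform-intensity exposure."""
    gain = white_reference - dark_frame
    gain = gain / gain.mean()
    return (frame - dark_frame) / np.clip(gain, 1e-6, None)

def balmer_ratios(d_alpha, d_beta, d_gamma, absolute_scale=(1.0, 1.0, 1.0)):
    """Return D_alpha/D_beta and D_gamma/D_beta ratio images after absolute scaling."""
    a, b, g = (img * s for img, s in zip((d_alpha, d_beta, d_gamma), absolute_scale))
    b = np.clip(b, 1e-6, None)
    return a / b, g / b

# Synthetic frames standing in for the aligned camera channels.
rng = np.random.default_rng(7)
shape = (256, 256)
dark = rng.normal(5, 0.5, shape)
white = rng.normal(200, 2, shape)
alpha = flat_field_correct(rng.normal(120, 3, shape), white, dark)
beta = flat_field_correct(rng.normal(60, 3, shape), white, dark)
gamma = flat_field_correct(rng.normal(30, 3, shape), white, dark)
r_ab, r_gb = balmer_ratios(alpha, beta, gamma)
print("mean D_alpha/D_beta:", round(float(r_ab.mean()), 2))
```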

  15. ALOS PALSAR Winter Coherence and Summer Intensities for Large Scale Forest Monitoring in Siberia

    NASA Astrophysics Data System (ADS)

    Thiel, Christian; Thiel, Carolin; Santoro, Maurizio; Schmullius, Christiane

    2008-11-01

    In this paper summer intensity and winter coherence images are used for large-scale forest monitoring. The intensities (FBD HH/HV) were acquired during summer 2007 and feature the K&C intensity stripes [1]. The processing consisted of radiometric calibration, orthorectification, and topographic normalisation. The coherence was estimated from interferometric pairs with 46-day repeat-pass intervals. The pairs were acquired during the winters of 2006/2007 and 2007/2008; suitable weather conditions were reported for both winters. Interferometric processing consisted of SLC co-registration at sub-pixel level, common-band filtering in range and azimuth, and generation of a differential interferogram, which was used in the coherence estimation procedure based on adaptive estimation. All images were geocoded using SRTM data. The pixel size of the final SAR products is 50 m x 50 m. It has already been demonstrated that forest and non-forest can be clearly separated using PALSAR intensities and winter coherence [2]. When both data types are combined, hardly any overlap of the class signatures is detected, even though the analysis was conducted at pixel level and no speckle filter was applied. Thus, the delineation of a forest cover mask can be executed operationally. The major remaining difficulty is the definition of a biomass threshold above which regrowing forest is classified as forest.
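
    For context, the coherence magnitude used here is normally estimated with the standard windowed estimator shown below (a textbook formula, not code taken from the processing chain above). The inputs s1 and s2 are assumed to be co-registered, common-band-filtered SLCs after removal of the differential interferometric phase; the window size and synthetic example are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def _boxcar(x, window):
    return uniform_filter(x, size=window, mode="nearest")

def coherence(s1, s2, window=5):
    """|<s1 s2*>| / sqrt(<|s1|^2> <|s2|^2>) over a sliding boxcar window."""
    cross = s1 * np.conj(s2)
    cross_avg = _boxcar(cross.real, window) + 1j * _boxcar(cross.imag, window)
    p1 = _boxcar(np.abs(s1) ** 2, window)
    p2 = _boxcar(np.abs(s2) ** 2, window)
    return np.abs(cross_avg) / np.sqrt(np.clip(p1 * p2, 1e-12, None))

# Example with partially correlated synthetic speckle (expected coherence ~0.8).
rng = np.random.default_rng(8)
shape = (200, 200)
common = (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)
noise = (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)
slc1 = common
slc2 = 0.8 * common + 0.6 * noise
print("median coherence:", round(float(np.median(coherence(slc1, slc2))), 2))
```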

  16. GPUs benchmarking in subpixel image registration algorithm

    NASA Astrophysics Data System (ADS)

    Sanz-Sabater, Martin; Picazo-Bueno, Jose Angel; Micó, Vicente; Ferrerira, Carlos; Granero, Luis; Garcia, Javier

    2015-05-01

    Image registration techniques are used in different scientific fields, such as medical imaging or optical metrology. The most straightforward way to calculate the shift between two images is cross correlation, taking the location of the highest value in the correlation image. The shift resolution is then given in whole pixels, which may not be enough for certain applications. Better results can be achieved by interpolating both images up to the desired resolution and applying the same technique described before, but the memory needed by the system is significantly higher. To avoid this memory cost, we implement a subpixel shifting method based on the FFT. Subpixel shifts of the original images can be achieved by multiplying their discrete Fourier transforms by linear phases with different slopes. This method is time consuming because each candidate shift requires new calculations. The algorithm, being highly parallelizable, is very suitable for high performance computing systems. GPU (Graphics Processing Unit) accelerated computing became very popular more than ten years ago because GPUs provide hundreds of computational cores on a reasonably cheap card. In our case, we register the shift between two images, obtaining a first approximation by FFT-based correlation and then refining it to subpixel accuracy using the technique described before; we consider this a 'brute force' method. We therefore present a benchmark of the algorithm consisting of a first approach (pixel resolution) followed by subpixel refinement, decreasing the shift step in every loop to achieve high resolution in a few steps. The program is executed on three different computers. Finally, we present the results of the computation with different kinds of CPUs and GPUs, checking the accuracy of the method and the time consumed on each computer, and discussing the advantages and disadvantages of using GPUs.
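    A compact sketch of the phase-ramp idea described above, shifting an image by a fractional amount in the Fourier domain and scoring candidate shifts against the reference, is given below. The coarse-to-fine loop of the paper is reduced to a single brute-force refinement pass, and the function names, step size, and search span are assumptions; the GPU offload is not shown.

      import numpy as np

      def fourier_shift(img, dy, dx):
          # Shift an image by a (possibly subpixel) amount via a linear phase ramp.
          fy = np.fft.fftfreq(img.shape[0])[:, None]
          fx = np.fft.fftfreq(img.shape[1])[None, :]
          phase = np.exp(-2j * np.pi * (fy * dy + fx * dx))
          return np.real(np.fft.ifft2(np.fft.fft2(img) * phase))

      def ncc(a, b):
          # Normalized correlation used to rate a candidate shift.
          a, b = a - a.mean(), b - b.mean()
          return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

      def refine_shift(ref, mov, coarse, step=0.1, span=1.0):
          # Brute-force subpixel refinement around a whole-pixel estimate (dy, dx).
          best_shift, best_score = coarse, -np.inf
          offsets = np.arange(-span, span + step / 2, step)
          for ddy in offsets:
              for ddx in offsets:
                  cand = (coarse[0] + ddy, coarse[1] + ddx)
                  s = ncc(ref, fourier_shift(mov, *cand))
                  if s > best_score:
                      best_shift, best_score = cand, s
          return best_shift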

  17. Image cross-correlation using COSI-Corr: A versatile technique to monitor and quantify surface deformation in space and time

    NASA Astrophysics Data System (ADS)

    Leprince, S.; Ayoub, F.; Avouac, J.

    2011-12-01

    We have developed a suite of algorithms for precise Co-registration of Optically Sensed Images and Correlation (COSI-Corr) which were implemented in a software package first released to the academic community in 2007. Its capability for accurate surface deformation measurement has proved useful for a wide variety of applications. We present the fundamental principles of COSI-Corr, which are the key ingredients to achieve sub-pixel registration and sub-pixel measurement accuracy, and we show how they can be applied to various types of images to extract 2D, 3D, or even 4D deformation fields of a given surface. Examples are drawn from recent collaborative studies and include: (1) The study of the Icelandic Krafla rifting crisis that occurred from 1975 to 1984, where we used a combination of archived airborne photographs, declassified spy satellite imagery, and modern satellite acquisitions to propose a detailed 2D displacement field of the ground; (2) The estimation of glacial velocities from fast New Zealand glaciers using successive ASTER acquisitions; (3) The derivation of sand dune migration rates; (4) The estimation of ocean swell velocity taking advantage of the short time delay between the acquisition of different spectral bands on the SPOT 5 satellite; (5) The derivation of the full 3D ground displacement field induced by the 2010 Mw 7.2 El Mayor-Cucapah Earthquake, as recorded from pre- and post-event lidar acquisitions; (6) And the estimation of 2D in-plane deformation of mechanical samples under stress in the lab. Finally, we conclude by highlighting the potential future applications and implications of applying such correlation techniques on a large scale to provide global monitoring of our environment.

  18. MRO's High Resolution Imaging Science Experiment (HiRISE): Polar Science Expectations

    NASA Technical Reports Server (NTRS)

    McEwen, A.; Herkenhoff, K.; Hansen, C.; Bridges, N.; Delamere, W. A.; Eliason, E.; Grant, J.; Gulick, V.; Keszthelyi, L.; Kirk, R.

    2003-01-01

    The Mars Reconnaissance Orbiter (MRO) is expected to launch in August 2005, arrive at Mars in March 2006, and begin the primary science phase in November 2006. MRO will carry a suite of remote-sensing instruments and is designed to routinely point off-nadir to precisely target locations on Mars for high-resolution observations. The mission will have a much higher data return than any previous planetary mission, with 34 Tbits of returned data expected in the first Mars year in the mapping orbit (255 x 320 km). The HiRISE camera features a 0.5 m telescope, 12 m focal length, and 14 CCDs. We expect to acquire approximately 10,000 observations in the primary science phase (approximately 1 Mars year), including approximately 2,000 images for 1,000 stereo targets. Each observation will be accompanied by an approximately 6 m/pixel image over a 30 x 45 km region acquired by MRO's context imager. Many HiRISE images will be full resolution in the center portion of the swath width and binned (typically 4x4) on the sides. This provides two levels of context, so we step out from 0.3 m/pixel to 1.2 m/pixel to 6 m/pixel (at 300 km altitude). We expect to cover approximately 1% of Mars at better than 1.2 m/pixel, approximately 0.1% at 0.3 m/pixel, approximately 0.1% in 3 colors, and approximately 0.05% in stereo. Our major challenge is to find the key contacts, exposures, and type morphologies to observe.

  19. Quantitative Spatial and Temporal Analysis of Fluorescein Angiography Dynamics in the Eye

    PubMed Central

    Hui, Flora; Nguyen, Christine T. O.; Bedggood, Phillip A.; He, Zheng; Fish, Rebecca L.; Gurrell, Rachel; Vingrys, Algis J.; Bui, Bang V.

    2014-01-01

    Purpose We describe a novel approach to analyze fluorescein angiography to investigate fluorescein flow dynamics in the rat posterior retina as well as identify abnormal areas following laser photocoagulation. Methods Experiments were undertaken in adult Long Evans rats. Using a rodent retinal camera, videos were acquired at 30 frames per second for 30 seconds following intravenous introduction of sodium fluorescein in a group of control animals (n = 14). Videos were image registered and analyzed using principal component analysis across all pixels in the field. This returns fluorescence intensity profiles from which the half-rise (time to 50% brightness), the half-fall (time for 50% decay), and an offset (plateau level of fluorescence) are extracted. We applied this analysis to video fluorescein angiography data collected 30 minutes following laser photocoagulation in a separate group of rats (n = 7). Results Pixel-by-pixel analysis of video angiography clearly delineates differences in the temporal profiles of arteries, veins and capillaries in the posterior retina. We find no difference in half-rise, half-fall or offset amongst the four quadrants (inferior, nasal, superior, temporal). We also found little difference with eccentricity. By expressing the parameters at each pixel as a function of the number of standard deviations from the average of the entire field, we could clearly identify the spatial extent of the laser injury. Conclusions This simple registration and analysis provides a way to monitor the size of vascular injury, to highlight areas of subtle vascular leakage and to quantify vascular dynamics not possible using current fluorescein angiography approaches. This can be applied in both laboratory and clinical settings for in vivo dynamic fluorescent imaging of vasculature. PMID:25365578
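    A rough per-pixel sketch of the timing parameters named above (half-rise, half-fall, and plateau offset) is given below. The frame rate, the two-second plateau window, and the crossing conventions are assumptions; the published analysis may define these quantities differently.

      import numpy as np

      def fluorescein_timings(trace, fps=30.0):
          # Half-rise, half-fall and plateau offset for one pixel's time course.
          t = np.arange(trace.size) / fps
          peak_idx = int(np.argmax(trace))
          peak = trace[peak_idx]
          offset = float(trace[-int(2 * fps):].mean())   # plateau: mean of last 2 s
          half_rise = t[np.argmax(trace >= 0.5 * peak)]  # first 50%-brightness crossing
          half_level = offset + 0.5 * (peak - offset)    # 50% decay toward the offset
          fall_rel = int(np.argmax(trace[peak_idx:] <= half_level))
          half_fall = t[peak_idx + fall_rel] - t[peak_idx]
          return half_rise, half_fall, offset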

  20. Indium antimonide large-format detector arrays

    NASA Astrophysics Data System (ADS)

    Davis, Mike; Greiner, Mark

    2011-06-01

    Large format infrared imaging sensors are required to achieve simultaneously high resolution and wide field of view image data. Infrared sensors are generally required to be cooled from room temperature to cryogenic temperatures in less than 10 min, thousands of times during their lifetime. The challenge is to remove mechanical stress, which is due to different materials with different coefficients of expansion, over a very wide temperature range and, at the same time, to provide high-sensitivity, high-resolution image data. These challenges are met by developing a hybrid where the indium antimonide detector elements (pixels) are unconnected islands that essentially float on a silicon substrate and form a near perfect match to the silicon read-out circuit. Since the pixels are unconnected and isolated from each other, the array is reticulated. This paper shows that the front-side-illuminated, reticulated-element indium antimonide focal planes developed at L-3 Cincinnati Electronics are robust, approach the background-limited sensitivity, and provide the resolution expected of a reticulated pixel array.

  1. Clementine High Resolution Camera Mosaicking Project. Volume 21; CL 6021; 80 deg S to 90 deg S Latitude, North Periapsis; 1

    NASA Technical Reports Server (NTRS)

    Malin, Michael; Revine, Michael; Boyce, Joseph M. (Technical Monitor)

    1998-01-01

    This compact disk (CD) is part of the Clementine I high resolution (HiRes) camera lunar image mosaics developed by Malin Space Science Systems (MSSS). These mosaics were developed through calibration and semi-automated registration against the recently released geometrically and photometrically controlled Ultraviolet/Visible (UV/Vis) Basemap Mosaic, which is available through the PDS, as CD-ROM volumes CL_3001-3015. The HiRes mosaics are compiled from non-uniformity corrected, 750 nanometer ("D") filter high resolution observations from the HiRes imaging system onboard the Clementine Spacecraft. The geometric control is provided by the U. S. Geological Survey (USGS) Clementine Basemap Mosaic compiled from the 750 nm Ultraviolet/Visible Clementine imaging system. Calibration was achieved by removing the image nonuniformity largely caused by the HiRes system's light intensifier. Also provided are offset and scale factors, achieved by a fit of the HiRes data to the corresponding photometrically calibrated UV/Vis basemap that approximately transform the 8-bit HiRes data to photometric units. The mosaics on this CD are compiled from polar data (latitudes greater than 80 degrees), and are presented in the stereographic projection at a scale of 30 m/pixel at the pole, a resolution 5 times greater than that (150 m/pixel) of the corresponding UV/Vis polar basemap. This 5:1 scale ratio is in keeping with the sub-polar mosaic, in which the HiRes and UV/Vis mosaics had scales of 20 m/pixel and 100 m/pixel, respectively. The equal-area property of the stereographic projection made this preferable for the HiRes polar mosaic rather than the basemap's orthographic projection. Thus, a necessary first step in constructing the mosaic was the reprojection of the UV/Vis basemap to the stereographic projection. The HiRes polar data can be naturally grouped according to the orbital periapsis, which was in the south during the first half of the mapping mission and in the north during the second half. Images in each group have generally uniform intrinsic resolution, illumination, exposure and gain. Rather than mingle data from the two periapsis epochs, separate mosaics are provided for each, a total of 4 polar mosaics. The mosaics are divided into 100 square tiles of 2250 pixels (approximately 2.2 deg near the pole) on a side. Not all squares of this grid contain HiRes mosaic data, some inevitably since a square is not a perfect representation of a (latitude) circle, others due to the lack of HiRes data. This CD also contains ancillary data files that support the HiRes mosaic. These files include browse images with UV/Vis context stored in a Joint Photographic Experts Group (JPEG) format, index files ('imgindx.tab' and 'srcindx.tab') that tabulate the contents of the CD, and documentation files. For more information on the contents and organization of the CD volume set refer to the "FILES, DIRECTORIES AND DISK CONTENTS" section of this document. The image files are organized according to NASA's Planetary Data System (PDS) standards. An image file (tile) is organized as a PDS labeled file containing an "image object".

  2. The performance analysis of three-dimensional track-before-detect algorithm based on Fisher-Tippett-Gnedenko theorem

    NASA Astrophysics Data System (ADS)

    Cho, Hoonkyung; Chun, Joohwan; Song, Sungchan

    2016-09-01

    The tracking of dim moving targets in infrared image sequences in the presence of high clutter and noise has recently been under intensive investigation. The track-before-detect (TBD) algorithm, which processes the image sequence over a number of frames before decisions on the target track and existence, is known to be especially attractive in very low SNR environments (⩽ 3 dB). In this paper, we briefly present a three-dimensional (3-D) TBD with dynamic programming (TBD-DP) algorithm using multiple IR image sensors. Since the traditional two-dimensional TBD algorithm cannot track and detect motion along the viewing direction, we use 3-D TBD with multiple sensors and also rigorously analyze the detection performance (false alarm and detection probabilities) based on the Fisher-Tippett-Gnedenko theorem. The 3-D TBD-DP algorithm, which does not require a separate image registration step, uses the pixel intensity values jointly read off from multiple image frames to compute the merit function required in the DP process. Therefore, we also establish the relationship between the pixel coordinates of each image frame and the reference coordinates.
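    For readers unfamiliar with dynamic-programming TBD, a deliberately simplified two-dimensional, single-sensor sketch of the merit-function accumulation is given below; the paper's 3-D multi-sensor formulation and the Fisher-Tippett-Gnedenko threshold analysis are not reproduced, and the per-frame transition window q is an assumption.

      import numpy as np

      def tbd_dp_merit(frames, q=1):
          # frames : (T, H, W) array of pixel intensities.
          # q      : maximum per-frame target motion in pixels (transition window).
          # Returns the accumulated merit map; a detection is declared where it
          # exceeds a threshold, and the track is recovered by backtracking.
          T, H, W = frames.shape
          merit = frames[0].astype(float)
          for t in range(1, T):
              padded = np.pad(merit, q, mode="constant", constant_values=-np.inf)
              best_prev = np.full((H, W), -np.inf)
              for dy in range(-q, q + 1):
                  for dx in range(-q, q + 1):
                      window = padded[q + dy:q + dy + H, q + dx:q + dx + W]
                      best_prev = np.maximum(best_prev, window)
              merit = frames[t] + best_prev
          return merit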

  3. Integration of SAR and DEM data: Geometrical considerations

    NASA Technical Reports Server (NTRS)

    Kropatsch, Walter G.

    1991-01-01

    General principles for integrating data from different sources are derived from the experience of registering SAR images with digital elevation model (DEM) data. The integration consists of establishing geometrical relations between the data sets that allow us to accumulate information from both data sets for any given object point (e.g., elevation, slope, backscatter of ground cover, etc.). Since the geometries of the two data sets are completely different, they cannot be compared on a pixel-by-pixel basis. The presented approach detects instances of higher level features in both data sets independently and performs the matching at the high level. Besides the efficiency of this general strategy, it further allows the integration of additional knowledge sources: world knowledge and sensor characteristics are also useful sources of information. The SAR features layover and shadow can be detected easily in SAR images. An analytical method to find such regions in a DEM additionally needs the parameters of the SAR sensor's flight path and the range projection model. The generation of the SAR layover and shadow maps is summarized and new extensions to this method are proposed.

  4. Heterogeneous CPU-GPU moving targets detection for UAV video

    NASA Astrophysics Data System (ADS)

    Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan

    2017-07-01

    Moving target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras carried by UAVs. Moving targets occupy only a small fraction of the pixels in HD video taken by a UAV, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost of the algorithm prevents running it at the full resolution of the frame. Hence, to solve the problem of moving target detection in UAV video, we propose a heterogeneous CPU-GPU moving target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. To achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, and the average processing time is 52.16 ms per frame, which is fast enough to solve the problem.
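    A sketch of the background-registration-plus-frame-differencing idea is given below, using OpenCV feature matching and a homography to compensate the moving background before differencing. This is an assumed CPU-only illustration; the paper's CPU-GPU partitioning, parameter choices, and post-processing are not shown.

      import cv2
      import numpy as np

      def moving_target_mask(prev_gray, curr_gray, diff_thresh=25):
          # Register the previous frame to the current one with a homography,
          # then difference the registered frames to expose moving targets.
          orb = cv2.ORB_create(1000)
          kp1, des1 = orb.detectAndCompute(prev_gray, None)
          kp2, des2 = orb.detectAndCompute(curr_gray, None)
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:300]
          src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
          prev_warped = cv2.warpPerspective(prev_gray, H,
                                            (curr_gray.shape[1], curr_gray.shape[0]))
          diff = cv2.absdiff(curr_gray, prev_warped)
          _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
          return mask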

  5. Wrinkle Ridges and Pit Craters

    NASA Image and Video Library

    2016-10-19

    Tectonic stresses highly modified this area of Ganges Catena, north of Valles Marineris. The long, skinny ridges (called "wrinkle ridges") are evidence of compressional stresses in Mars' crust that created a crack (fault) where one side was pushed on top of the other side, also known as a thrust fault. As shown by cross-cutting relationships, however, extensional stresses have more recently pulled the crust of Mars apart in this region. (HiRISE imaged this area in 2-by-2 binning mode, so each pixel represents an area of 50 x 50 centimeters.) http://photojournal.jpl.nasa.gov/catalog/PIA21112

  6. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem, and not all cameras need to be calibrated, except a few selected master cameras. SURF is used to find matched pairs of image key points from different cameras, and then the camera pose is estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is implemented to blend the images. Simulation results demonstrate the efficiency of our method.

  8. Fully automated motion correction in first-pass myocardial perfusion MR image sequences.

    PubMed

    Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F

    2008-11-01

    This paper presents a novel method for registration of cardiac perfusion magnetic resonance imaging (MRI). The presented method is capable of automatically registering perfusion data, using independent component analysis (ICA) to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of that ICA. This reference image is used in a two-pass registration framework. Qualitative and quantitative validation of the method is carried out using 46 clinical quality, short-axis, perfusion MR datasets comprising 100 images each. Despite varying image quality and motion patterns in the evaluation set, validation of the method showed a reduction of the average left ventricle (LV) motion from 1.26+/-0.87 to 0.64+/-0.46 pixels. Time-intensity curves are also improved after registration, with the average error between registered data and the manual gold standard reduced from 2.65+/-7.89% to 0.87+/-3.88%. Comparison of clinically relevant parameters computed using registered data and the manual gold standard shows good agreement. Additional tests with a simulated free-breathing protocol showed robustness against considerable deviations from a standard breathing protocol. We conclude that this fully automatic ICA-based method shows accuracy, robustness and computation speed adequate for use in a clinical environment.
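    A rough sketch of the idea of decomposing a perfusion series with ICA and rebuilding a smooth, time-varying reference from the components and their time courses is given below. It uses scikit-learn's FastICA as a stand-in and omits the two-pass registration itself; the component count and preprocessing are assumptions, and this is not the authors' implementation.

      import numpy as np
      from sklearn.decomposition import FastICA

      def ica_reference_series(frames, n_components=4):
          # frames : (T, H, W) perfusion image series.
          # Returns a series of the same shape in which each frame is the ICA
          # reconstruction, i.e. a model of the contrast-driven intensity changes
          # that can serve as the time-varying registration reference.
          T, H, W = frames.shape
          X = frames.reshape(T, H * W).astype(float)
          ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
          sources = ica.fit_transform(X)           # per-component time courses
          recon = ica.inverse_transform(sources)   # back-projection to image space
          return recon.reshape(T, H, W)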

  9. Nonrigid liver registration for image-guided surgery using partial surface data: a novel iterative approach

    NASA Astrophysics Data System (ADS)

    Rucker, D. Caleb; Wu, Yifei; Ondrake, Janet E.; Pheiffer, Thomas S.; Simpson, Amber L.; Miga, Michael I.

    2013-03-01

    In the context of open abdominal image-guided liver surgery, the efficacy of an image-guidance system relies on its ability to (1) accurately depict tool locations with respect to the anatomy, and (2) maintain the work flow of the surgical team. Laser-range scanned (LRS) partial surface measurements can be taken intraoperatively with relatively little impact on the surgical work flow, as opposed to other intraoperative imaging modalities. Previous research has demonstrated that this kind of partial surface data may be (1) used to drive a rigid registration of the preoperative CT image volume to intraoperative patient space, and (2) extrapolated and combined with a tissue-mechanics-based organ model to drive a non-rigid registration, thus compensating for organ deformations. In this paper we present a novel approach for intraoperative nonrigid liver registration which iteratively reconstructs a displacement field on the posterior side of the organ in order to minimize the error between the deformed model and the intraoperative surface data. Experimental results with a phantom liver undergoing large deformations demonstrate that this method achieves target registration errors (TRE) with a mean of 4.0 mm in the prediction of a set of 58 locations inside the phantom, which represents a 50% improvement over rigid registration alone, and a 44% improvement over the prior non-iterative single-solve method of extrapolating boundary conditions via a surface Laplacian.

  10. Method for detecting an image of an object

    DOEpatents

    Chapman, Leroy Dean; Thomlinson, William C.; Zhong, Zhong

    1999-11-16

    A method for detecting an absorption, refraction and scatter image of an object by independently analyzing, detecting, digitizing, and combining images acquired on a high and a low angle side of a rocking curve of a crystal analyzer. An x-ray beam which is generated by any suitable conventional apparatus can be irradiated upon either a Bragg type crystal analyzer or a Laue type crystal analyzer. Images of the absorption, refraction and scattering effects are detected, such as on an image plate, and then digitized. The digitized images are simultaneously solved, preferably on a pixel-by-pixel basis, to derive a combined visual image which has dramatically improved contrast and spatial resolution over an image acquired through conventional radiology methods.
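    The pixel-by-pixel solution of the two rocking-curve images sketched in this abstract follows the pattern of diffraction-enhanced imaging, where the low- and high-angle intensities are combined using the analyzer reflectivity and its slope at the two working points. The sketch below uses the standard two-equation DEI solution as an illustration; the reflectivity values, slopes, and array names are assumptions, and the patented method may differ in detail.

      import numpy as np

      def dei_decompose(i_low, i_high, r_low, r_high, dr_low, dr_high):
          # i_low, i_high   : images taken on the low/high-angle side of the rocking curve.
          # r_low, r_high   : analyzer reflectivity at the two working points.
          # dr_low, dr_high : rocking-curve slope (dR/dtheta) at those points.
          # Solves, per pixel, I = I_R * (R + dR/dtheta * dtheta) for the apparent
          # absorption image I_R and the refraction-angle image dtheta.
          denom = i_low * dr_high - i_high * dr_low
          i_r = denom / (r_low * dr_high - r_high * dr_low)
          dtheta = (i_high * r_low - i_low * r_high) / np.where(denom == 0, np.nan, denom)
          return i_r, dtheta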

  11. Goodbye to the Dark Side

    NASA Image and Video Library

    2017-10-02

    Stunning views like this image of Saturn's night side are only possible thanks to our robotic emissaries like Cassini. Until future missions are sent to Saturn, Cassini's image-rich legacy must suffice. Because Earth is closer to the Sun than Saturn, observers on Earth only see Saturn's day side. With spacecraft, we can capture views (and data) that are simply not possible from Earth, even with the largest telescopes. This view looks toward the sunlit side of the rings from about 7 degrees above the ring plane. The image was taken in visible light with the wide-angle camera on NASA's Cassini spacecraft on June 7, 2017. The view was obtained at a distance of approximately 751,000 miles (1.21 million kilometers) from Saturn. Image scale is 45 miles (72 kilometers) per pixel. The Cassini spacecraft ended its mission on Sept. 15, 2017. https://photojournal.jpl.nasa.gov/catalog/PIA21350

  12. Effect of deformable registration on the dose calculated in radiation therapy planning CT scans of lung cancer patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cunliffe, Alexandra R.; Armato, Samuel G.; White, Bradley

    2015-01-15

    Purpose: To characterize the effects of deformable image registration of serial computed tomography (CT) scans on the radiation dose calculated from a treatment planning scan. Methods: Eighteen patients who received curative doses (≥60 Gy, 2 Gy/fraction) of photon radiation therapy for lung cancer treatment were retrospectively identified. For each patient, a diagnostic-quality pretherapy (4–75 days) CT scan and a treatment planning scan with an associated dose map were collected. To establish correspondence between scan pairs, a researcher manually identified anatomically corresponding landmark point pairs between the two scans. Pretherapy scans then were coregistered with planning scans (and associated dose maps) using the demons deformable registration algorithm and two variants of the Fraunhofer MEVIS algorithm (“Fast” and “EMPIRE10”). Landmark points in each pretherapy scan were automatically mapped to the planning scan using the displacement vector field output from each of the three algorithms. The Euclidean distance between manually and automatically mapped landmark points (d_E) and the absolute difference in planned dose (|ΔD|) were calculated. Using regression modeling, |ΔD| was modeled as a function of d_E, dose (D), dose standard deviation (SD_dose) in an eight-pixel neighborhood, and the registration algorithm used. Results: Over 1400 landmark point pairs were identified, with 58–93 (median: 84) points identified per patient. Average |ΔD| across patients was 3.5 Gy (range: 0.9–10.6 Gy). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, with an average d_E across patients of 5.2 mm (compared with >7 mm for the other two algorithms). Consequently, average |ΔD| was also lowest using the Fraunhofer MEVIS EMPIRE10 algorithm. |ΔD| increased significantly as a function of d_E (0.42 Gy/mm), D (0.05 Gy/Gy), SD_dose (1.4 Gy/Gy), and the algorithm used (≤1 Gy). Conclusions: An average error of <4 Gy in radiation dose was introduced when points were mapped between CT scan pairs using deformable registration, with the majority of points yielding dose-mapping error <2 Gy (approximately 3% of the total prescribed dose). Registration accuracy was highest using the Fraunhofer MEVIS EMPIRE10 algorithm, resulting in the smallest errors in mapped dose. Dose differences following registration increased significantly with increasing spatial registration errors, dose, and dose gradient (i.e., SD_dose). This model provides a measurement of the uncertainty in the radiation dose when points are mapped between serial CT scans through deformable registration.
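    The regression described above (|ΔD| modeled on registration error, dose, local dose variability, and the algorithm used) can be sketched as an ordinary least-squares fit. The column layout, indicator coding of the algorithm, and the use of plain numpy are assumptions; the published model may have used a different formulation.

      import numpy as np

      def fit_dose_error_model(d_e, dose, sd_dose, algo, abs_delta_d):
          # Least-squares fit of |dD| = b0 + b1*d_E + b2*D + b3*SD_dose + b4*algo,
          # where algo is an indicator (or ordinal code) for the registration algorithm.
          X = np.column_stack([np.ones_like(d_e), d_e, dose, sd_dose, algo])
          coef, *_ = np.linalg.lstsq(X, abs_delta_d, rcond=None)
          return coef   # intercept plus one slope per predictor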

  13. Rotation and scale invariant shape context registration for remote sensing images with background variations

    NASA Astrophysics Data System (ADS)

    Jiang, Jie; Zhang, Shumei; Cao, Shixiang

    2015-01-01

    Multitemporal remote sensing images generally suffer from background variations, which significantly disrupt traditional region feature and descriptor abstraction, especially between pre- and post-disaster acquisitions, making registration by local features unreliable. Because shapes hold relatively stable information, a rotation and scale invariant shape context based on multiscale edge features is proposed. A multiscale morphological operator is adapted to detect edges of shapes, and an equivalent difference of Gaussian scale space is built to detect local scale invariant feature points along the detected edges. Then, a rotation invariant shape context with improved distance discrimination serves as a feature descriptor. For the distance shape context, a self-adaptive threshold (SAT) distance division coordinate system is proposed, which improves the discriminative property of the feature descriptor at mid-to-long pixel distances from the central point while maintaining it at shorter ones. To achieve rotation invariance, the magnitude of the one-dimensional Fourier transform is applied to calculate the angle shape context. Finally, the residual error is evaluated after obtaining the thin-plate spline transformation between the reference and sensed images. Experimental results demonstrate the robustness, efficiency, and accuracy of this automatic algorithm.

  14. WE-AB-BRA-12: Virtual Endoscope Tracking for Endoscopy-CT Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ingram, W; Rao, A; Wendt, R

    Purpose: The use of endoscopy in radiotherapy will remain limited until we can register endoscopic video to CT using standard clinical equipment. In this phantom study we tested a registration method using virtual endoscopy to measure CT-space positions from endoscopic video. Methods: Our phantom is a contorted clay cylinder with 2-mm-diameter markers in the luminal surface. These markers are visible on both CT and endoscopic video. Virtual endoscope images were rendered from a polygonal mesh created by segmenting the phantom’s luminal surface on CT. We tested registration accuracy by tracking the endoscope’s 6-degree-of-freedom coordinates frame-to-frame in a video recorded as it moved through the phantom, and using these coordinates to measure CT-space positions of markers visible in the final frame. To track the endoscope we used the Nelder-Mead method to search for coordinates that render the virtual frame most similar to the next recorded frame. We measured the endoscope’s initial-frame coordinates using a set of visible markers, and for image similarity we used a combination of mutual information and gradient alignment. CT-space marker positions were measured by projecting their final-frame pixel addresses through the virtual endoscope to intersect with the mesh. Registration error was quantified as the distance between this intersection and the marker’s manually-selected CT-space position. Results: Tracking succeeded for 6 of 8 videos, for which the mean registration error was 4.8±3.5mm (24 measurements total). The mean error in the axial direction (3.1±3.3mm) was larger than in the sagittal or coronal directions (2.0±2.3mm, 1.7±1.6mm). In the other 2 videos, the virtual endoscope got stuck in a false minimum. Conclusion: Our method can successfully track the position and orientation of an endoscope, and it provides accurate spatial mapping from endoscopic video to CT. This method will serve as a foundation for an endoscopy-CT registration framework that is clinically valuable and requires no specialized equipment.
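    A schematic sketch of the frame-to-frame pose search is given below: a Nelder-Mead minimization of a negative image-similarity score over the endoscope's six pose parameters. The render_virtual_frame and similarity callables are placeholders standing in for the virtual-endoscopy renderer and the mutual-information-plus-gradient-alignment metric described above; the tolerances are assumptions.

      import numpy as np
      from scipy.optimize import minimize

      def track_frame(prev_pose, recorded_frame, render_virtual_frame, similarity):
          # Search the 6-DOF pose (x, y, z, roll, pitch, yaw) that renders a
          # virtual frame most similar to the next recorded video frame.
          def cost(pose):
              virtual = render_virtual_frame(pose)          # placeholder renderer
              return -similarity(virtual, recorded_frame)   # e.g. MI + gradient alignment
          result = minimize(cost, x0=np.asarray(prev_pose, dtype=float),
                            method="Nelder-Mead",
                            options={"xatol": 1e-3, "fatol": 1e-4, "maxiter": 500})
          return result.x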

  15. Technology Used for Realization of the Reform in Informal Areas.

    NASA Astrophysics Data System (ADS)

    Qirko, K.

    2008-12-01

    ORGANIZATION OF STRUCTURE AND ADMINISTRATION OF ALUIZNI: Law no. 9482, dated 03.03.2006, "On legalization, urban planning and integration of unauthorized buildings", entered into force on May 15, 2006. The Council of Ministers, with its decision no. 289, dated 17.05.2006, established the Agency for the Legalization, Urbanization, and Integration of the Informal Zones/Buildings (ALUIZNI), with its twelve local bodies. ALUIZNI began its activity under Law no. 9482, dated 03.03.2006, in July 2006. The administration of this agency was completed during this period; it is composed of a General Directorate and twelve regional directorates. As of today, the institution has 300 employees. The administrative structure of ALUIZNI is organized to achieve the objectives of the reform and to solve the problems arising during its completion. The following sectors have been established to achieve the objectives: the sector of compensation of owners; the sector of cartography; the sector of geographic information system (GIS) data elaboration and Information Technology; the sector of urban planning; the sector of registration of legalized properties; and the human resources sector. Following this vision, digital aerial photography of the Republic of Albania is in the process of realization, from which we will obtain, for the first time, an orthophoto and a digital map covering the entire territory of the country. This cartographic product will serve all government and private institutions. All other systems, such as the territory management system, the property registration system, the population registration system, the address system, urban planning studies and systems, and the definition of boundaries of administrative and touristic zones, will be established on the basis of this cartographic system. The cartographic product will have the parameters mentioned below, divided into lots (2.3 MEuro): Lot I covers the urban zone, 1200 km2; it will have a resolution of 8 cm/pixel and will be produced as an orthophoto and a digital vectorized map. Lot II covers the rural zone, 12000 km2; an orthophoto with a resolution of 8 cm/pixel will be produced. Lot III covers the mountainous zone, 15000 km2; an orthophoto with a resolution of 30 cm/pixel will be produced. All the technical documentation of the process will be produced digitally, based on the digital map, and it will form the main databases. We have established the sector of GIS data elaboration and Information Technology with the purpose of assuring transparency and correctness of the process and of providing permanently useful information for various purposes (1.1 MEuro). GIS is a modern technology that elaborates and connects different kinds of information. The main objective of this sector is the establishment of a self-declaration database, with 30 characteristics for each declaration, and a process database, with 40 characteristics for each property, including cartographic, geographic and construction data.

  16. Exploiting Measurement Uncertainty Estimation in Evaluation of GOES-R ABI Image Navigation Accuracy Using Image Registration Techniques

    NASA Technical Reports Server (NTRS)

    Haas, Evan; DeLuccia, Frank

    2016-01-01

    In evaluating GOES-R Advanced Baseline Imager (ABI) image navigation quality, upsampled sub-images of ABI images are translated against downsampled Landsat 8 images of localized, high contrast earth scenes to determine the translations in the East-West and North-South directions that provide maximum correlation. The native Landsat resolution is much finer than that of ABI, and Landsat navigation accuracy is much better than ABI required navigation accuracy and expected performance. Therefore, Landsat images are considered to provide ground truth for comparison with ABI images, and the translations of ABI sub-images that produce maximum correlation with Landsat localized images are interpreted as ABI navigation errors. The measured local navigation errors from registration of numerous sub-images with the Landsat images are averaged to provide a statistically reliable measurement of the overall navigation error of the ABI image. The dispersion of the local navigation errors is also of great interest, since ABI navigation requirements are specified as bounds on the 99.73rd percentile of the magnitudes of per pixel navigation errors. However, the measurement uncertainty inherent in the use of image registration techniques tends to broaden the dispersion in measured local navigation errors, masking the true navigation performance of the ABI system. We have devised a novel and simple method for estimating the magnitude of the measurement uncertainty in registration error for any pair of images of the same earth scene. We use these measurement uncertainty estimates to select the higher quality measurements of local navigation error for inclusion in the statistics. In so doing, we substantially reduce the dispersion in measured local navigation errors, thereby better approximating the true navigation performance of the ABI system.

  17. Image navigation and registration for the geostationary lightning mapper (GLM)

    NASA Astrophysics Data System (ADS)

    van Bezooijen, Roel W. H.; Demroff, Howard; Burton, Gregory; Chu, Donald; Yang, Shu S.

    2016-10-01

    The Geostationary Lightning Mappers (GLM) for the Geostationary Operational Environmental Satellite (GOES) GOES-R series will, for the first time, provide hemispherical lightning information 24 hours a day from longitudes of 75 and 137 degrees west. The first GLM of a series of four is planned for launch in November, 2016. Observation of lightning patterns by GLM holds promise to improve tornado warning lead times to greater than 20 minutes while halving the present false alarm rates. In addition, GLM will improve airline traffic flow management, and provide climatology data allowing us to understand the Earth's evolving climate. The paper describes the method used for translating the pixel position of a lightning event to its corresponding geodetic longitude and latitude, using the J2000 attitude of the GLM mount frame reported by the spacecraft, the position of the spacecraft, and the alignment of the GLM coordinate frame relative to its mount frame. Because the latter alignment will experience seasonal variation, this alignment is determined daily using GLM background images collected over the previous 7 days. The process involves identification of coastlines in the background images and determination of the alignment change necessary to match the detected coastline with the coastline predicted using the GSHHS database. Registration is achieved using a variation of the Lucas-Kanade algorithm where we added a dither and average technique to improve performance significantly. An innovative water mask technique was conceived to enable self-contained detection of clear coastline sections usable for registration. Extensive simulations using accurate visible images from GOES13 and GOES15 have been used to demonstrate the performance of the coastline registration method, the results of which are presented in the paper.

  18. Angiogram, fundus, and oxygen saturation optic nerve head image fusion

    NASA Astrophysics Data System (ADS)

    Cao, Hua; Khoobehi, Bahram

    2009-02-01

    A novel multi-modality optic nerve head image fusion approach has been successfully designed. The new approach has been applied to three ophthalmologic modalities: angiogram, fundus, and oxygen saturation retinal optic nerve head images. It achieves an excellent result by providing visualization of fundus or oxygen saturation images with a complete angiogram overlay. During this study, two contributions have been made in terms of novelty, efficiency, and accuracy. The first contribution is the automated control point detection algorithm for multi-sensor images. The new method employs retinal vasculature and bifurcation features by identifying an initial good guess of the control points using the Adaptive Exploratory Algorithm. The second contribution is the heuristic optimization fusion algorithm. In order to maximize the objective function (Mutual-Pixel-Count), the iterative algorithm adjusts the initial guess of the control points at the sub-pixel level. A refinement of the parameter set is obtained at the end of each loop, and finally an optimal fused image is generated at the end of the iteration. This is the first time the Mutual-Pixel-Count concept has been introduced into the biomedical image fusion area. By locking the images in one place, the fused image allows ophthalmologists to match the same eye over time, get a sense of disease progression, and pinpoint surgical tools. The new algorithm can be easily extended to human or animal 3D eye, brain, or body image registration and fusion.
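    A minimal sketch of a Mutual-Pixel-Count style objective is given below: the number of pixels at which binarized vessel maps of the two modalities overlap after applying a candidate transform. The binarization, the form of the transform, and the function names are assumptions; the paper's exact objective and optimization loop are not reproduced.

      import numpy as np

      def mutual_pixel_count(fixed_vessels, moving_vessels, warp):
          # fixed_vessels, moving_vessels : boolean vessel masks of the two modalities.
          # warp : callable resampling an image into the fixed frame for a candidate
          #        control-point set (e.g. an affine or thin-plate-spline transform).
          warped = warp(moving_vessels.astype(float))
          return int(np.logical_and(fixed_vessels, warped > 0.5).sum())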

  19. Multitemporal Snow Cover Mapping in Mountainous Terrain for Landsat Climate Data Record Development

    NASA Technical Reports Server (NTRS)

    Crawford, Christopher J.; Manson, Steven M.; Bauer, Marvin E.; Hall, Dorothy K.

    2013-01-01

    A multitemporal method to map snow cover in mountainous terrain is proposed to guide Landsat climate data record (CDR) development. The Landsat image archive including MSS, TM, and ETM+ imagery was used to construct a prototype Landsat snow cover CDR for the interior northwestern United States. Landsat snow cover CDRs are designed to capture snow-covered area (SCA) variability at discrete bi-monthly intervals that correspond to ground-based snow telemetry (SNOTEL) snow-water-equivalent (SWE) measurements. The June 1 bi-monthly interval was selected for initial CDR development, and was based on peak snowmelt timing for this mountainous region. Fifty-four Landsat images from 1975 to 2011 were preprocessed; preprocessing included image registration, top-of-the-atmosphere (TOA) reflectance conversion, cloud and shadow masking, and topographic normalization. Snow-covered pixels were retrieved using the normalized difference snow index (NDSI) and unsupervised classification, and pixels having greater (less) than 50% snow cover were classified as presence (absence). A normalized SCA equation was derived to independently estimate SCA given missing image coverage and cloud-shadow contamination. Relative frequency maps of missing pixels were assembled to assess whether systematic biases were embedded within this Landsat CDR. Our results suggest that it is possible to confidently estimate historical bi-monthly SCA from partially cloudy Landsat images. This multitemporal method is intended to guide Landsat CDR development for freshwater-scarce regions of the western US to monitor climate-driven changes in mountain snowpack extent.
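    A minimal sketch of the NDSI retrieval step is given below. The band names and the 0.4 NDSI cutoff are illustrative assumptions (the abstract itself specifies only the 50% per-pixel snow-cover criterion), and the unsupervised classification stage is not shown.

      import numpy as np

      def ndsi_snow_mask(green, swir, threshold=0.4):
          # NDSI = (green - swir) / (green + swir), computed on TOA reflectance bands;
          # pixels above the threshold are flagged as snow-covered.
          ndsi = (green - swir) / np.maximum(green + swir, 1e-6)
          return ndsi > threshold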

  20. A novel snapshot polarimetric imager

    NASA Astrophysics Data System (ADS)

    Wong, Gerald; McMaster, Ciaran; Struthers, Robert; Gorman, Alistair; Sinclair, Peter; Lamb, Robert; Harvey, Andrew R.

    2012-10-01

    Polarimetric imaging (PI) is of increasing importance in determining additional scene information beyond that of conventional images. For very long-range surveillance, image quality is degraded due to turbulence. Furthermore, the high magnification required to create images with sufficient spatial resolution suitable for object recognition and identification requires long focal length optical systems. These are incompatible with the size and weight restrictions for aircraft. Techniques which allow detection and recognition of an object at the single pixel level are therefore likely to provide advance warning of approaching threats or long-range object cueing. PI is a technique that has the potential to detect object signatures at the pixel level. Early attempts to develop PI used rotating polarisers (and spectral filters) which recorded sequential polarized images from which the complete Stokes matrix could be derived. This approach has built-in latency between frames and requires accurate registration of consecutive frames to analyze real-time video of moving objects. Alternatively, multiple optical systems and cameras have been demonstrated to remove latency, but this approach increases the cost and bulk of the imaging system. In our investigation we present a simplified imaging system that divides an image into two orthogonal polarimetric components which are then simultaneously projected onto a single detector array. Thus polarimetric data is recorded without latency in a single snapshot. We further show that, for pixel-level objects, the data derived from only two orthogonal states (H and V) is sufficient to increase the probability of detection whilst reducing false alarms compared to conventional unpolarised imaging.

  1. ChromAIX2: A large area, high count-rate energy-resolving photon counting ASIC for a Spectral CT Prototype

    NASA Astrophysics Data System (ADS)

    Steadman, Roger; Herrmann, Christoph; Livne, Amir

    2017-08-01

    Spectral CT based on energy-resolving photon counting detectors is expected to deliver additional diagnostic value at a lower dose than current state-of-the-art CT [1]. The capability of simultaneously providing a number of spectrally distinct measurements not only allows distinguishing between photo-electric and Compton interactions but also discriminating contrast agents that exhibit a K-edge discontinuity in the absorption spectrum, referred to as K-edge Imaging [2]. Such detectors are based on direct converting sensors (e.g. CdTe or CdZnTe) and high-rate photon counting electronics. To support the development of Spectral CT and show the feasibility of obtaining rates exceeding 10 Mcps/pixel (Poissonian observed count-rate), the ChromAIX ASIC has been previously reported showing 13.5 Mcps/pixel (150 Mcps/mm2 incident) [3]. The ChromAIX has been improved to offer the possibility of a large area coverage detector and increased overall performance. The new ASIC is called ChromAIX2, and delivers count-rates exceeding 15 Mcps/pixel with an rms-noise performance of approximately 260 e-. It has an isotropic pixel pitch of 500 μm in an array of 22×32 pixels and is tile-able on three of its sides. The pixel topology consists of a two-stage amplifier (CSA and Shaper) and a number of test features that allow the ASIC to be thoroughly characterized without a sensor. A total of 5 independent thresholds are also available within each pixel, allowing 5 spectrally distinct measurements to be acquired simultaneously. The ASIC also incorporates a baseline restorer to eliminate excess currents induced by the sensor (e.g. dark current and low frequency drifts) which would otherwise cause an energy estimation error. In this paper we report on the inherent electrical performance of the ChromAIX2 as well as measurements obtained with CZT (CdZnTe)/CdTe sensors and X-rays and radioactive sources.

  2. Construction and tests of a fine granularity lead-scintillating fibers calorimeter

    NASA Astrophysics Data System (ADS)

    Branchini, P.; Ceradini, F.; Corradi, G.; Di Micco, B.; Passeri, A.

    2009-04-01

    We report the construction and the tests of a small prototype of the lead-scintillating fiber calorimeter of the KLOE experiment, instrumented with multianode photomultipliers to obtain a 16 times finer readout granularity. The prototype is 15 cm wide, 15 radiation lengths deep and is made of 200 layers of fibers 50 cm long. On one side it is read out with an array of 3×5 multianode photomultipliers, Hamamatsu type R8900-M16, each segmented with 4×4 anodes, the readout granularity being 240 pixels of 11 × 11 mm2 corresponding to about 64 scintillating fibers each. These are interfaced to the 6 × 6 mm2 pixeled photocathode with truncated pyramid light guides made of Bicron BC-800 plastic to partially transmit the UV light. Each photomultiplier also provides an OR of the last 16 dynodes that is used for triggering. The response of the individual anodes, their relative gain and cross-talk has been measured with the light (440 nm) of a laser illuminating only a few fibers on the side opposite to the readout. We finally present the first results of the calorimeter response to cosmic rays in auto-trigger mode.

  3. A prototype of fine granularity lead-scintillating fiber calorimeter with imaging read out

    NASA Astrophysics Data System (ADS)

    Branchini, P.; Ceradini, F.; Corradi, G.; Di Micco, B.; Passeri, A.

    2009-12-01

    The construction and tests performed on a small prototype of a lead-scintillating fiber calorimeter instrumented with multianode photomultipliers are reported. The prototype is 15 cm wide, 15 radiation lengths deep and is made of 200 layers of 50 cm long fibers. One side of the calorimeter has been instrumented with an array of 3 × 5 multianode R8900-M16 Hamamatsu photomultipliers, each segmented with a matrix of 4 × 4 anodes. The read-out granularity is 240 pixels of 11 × 11 mm2, reading about 64 fibers each. They are interfaced to the 6 × 6 mm2 pixelled photocathode with truncated pyramid light guides made of UV-transparent BC-800 plastic. Moreover, each photomultiplier also provides the OR information of the last 12 dynodes. This information can be useful for trigger purposes. The response of the individual anodes, their relative gain and cross-talk has been measured with a 404 nm picosecond laser illuminating only a few fibers on the opposite side of the read-out. We also present first results of the calorimeter response to cosmic rays and electron beam data collected at the BTF facility in Frascati.

  4. Protein structure determination by electron diffraction using a single three-dimensional nanocrystal.

    PubMed

    Clabbers, M T B; van Genderen, E; Wan, W; Wiegers, E L; Gruene, T; Abrahams, J P

    2017-09-01

    Three-dimensional nanometre-sized crystals of macromolecules currently resist structure elucidation by single-crystal X-ray crystallography. Here, a single nanocrystal with a diffracting volume of only 0.14 µm^3, i.e. no more than 6 × 10^5 unit cells, provided sufficient information to determine the structure of a rare dimeric polymorph of hen egg-white lysozyme by electron crystallography. This is at least an order of magnitude smaller than was previously possible. The molecular-replacement solution, based on a monomeric polyalanine model, provided sufficient phasing power to show side-chain density, and automated model building was used to reconstruct the side chains. Diffraction data were acquired using the rotation method with parallel beam diffraction on a Titan Krios transmission electron microscope equipped with a novel in-house-designed 1024 × 1024 pixel Timepix hybrid pixel detector for low-dose diffraction data collection. Favourable detector characteristics include the ability to accurately discriminate single high-energy electrons from X-rays and count them, fast readout to finely sample reciprocal space and a high dynamic range. This work, together with other recent milestones, suggests that electron crystallography can provide an attractive alternative in determining biological structures.

  5. Protein structure determination by electron diffraction using a single three-dimensional nanocrystal

    PubMed Central

    Clabbers, M. T. B.; van Genderen, E.; Wiegers, E. L.; Gruene, T.; Abrahams, J. P.

    2017-01-01

    Three-dimensional nanometre-sized crystals of macromolecules currently resist structure elucidation by single-crystal X-ray crystallography. Here, a single nanocrystal with a diffracting volume of only 0.14 µm^3, i.e. no more than 6 × 10^5 unit cells, provided sufficient information to determine the structure of a rare dimeric polymorph of hen egg-white lysozyme by electron crystallography. This is at least an order of magnitude smaller than was previously possible. The molecular-replacement solution, based on a monomeric polyalanine model, provided sufficient phasing power to show side-chain density, and automated model building was used to reconstruct the side chains. Diffraction data were acquired using the rotation method with parallel beam diffraction on a Titan Krios transmission electron microscope equipped with a novel in-house-designed 1024 × 1024 pixel Timepix hybrid pixel detector for low-dose diffraction data collection. Favourable detector characteristics include the ability to accurately discriminate single high-energy electrons from X-rays and count them, fast readout to finely sample reciprocal space and a high dynamic range. This work, together with other recent milestones, suggests that electron crystallography can provide an attractive alternative in determining biological structures. PMID:28876237

  6. TH-A-BRF-04: Intra-Fraction Motion Characterization for Early Stage Rectal Cancer Using Cine-MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleijnen, J; Asselen, B; Burbach, M

    2014-06-15

    Purpose: To investigate the intra-fraction motion in patients with early stage rectal cancer using cine-MRI. Methods: Sixteen patients diagnosed with early stage rectal cancer underwent 1.5 T MR imaging prior to each treatment fraction of their short course radiotherapy (n=76). During each scan session, three 2D sagittal cine-MRIs were performed: at the beginning (Start), after 9:30 minutes (Mid), and after 18 minutes (End). Each cine-MRI has a duration of one minute at 2 Hz temporal resolution, resulting in a total of 3:48 hours of cine-MRI. Additionally, standard T2-weighted (T2w) imaging was performed. Clinical target volume (CTV) and tumor (GTV) were delineated on the T2w scan and transferred to the first time-point of each cine-MRI scan. Within each cine-MRI, the first frame was registered to the remaining frames of the scan, using a non-rigid B-spline registration. To investigate potential drifts, a similar registration was performed between the first frames of the Start and End scans. To evaluate the motion, the distances by which the edge pixels of the delineations move in the anterior-posterior (AP) and cranial-caudal (CC) directions were determined using the deformation field of the registrations. The distance which incorporated 95% of these edge pixels (dist95%) was determined within each cine-MRI, and between the Start and End scans, respectively. Results: Within a cine-MRI, we observed an average dist95% for the CTV of 1.3mm/1.5mm (SD=0.7mm/0.6mm) and for the GTV of 1.2mm/1.5mm (SD=0.8mm/0.9mm), in AP/CC, respectively. For the CTV motion between the Start and End scan, an average dist95% of 5.5mm/5.3mm (SD=3.1mm/2.5mm) was found in AP/CC, respectively. For the GTV motion, an average dist95% of 3.6mm/3.9mm (SD=2.2mm/2.5mm) was found in AP/CC, respectively. Conclusion: Although intra-fraction motion within a one minute cine-MRI is limited, substantial intra-fraction motion was observed within the 18 minute time period between the Start and End cine-MRI.
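    A small sketch of the dist95% statistic described above, the displacement that bounds 95% of the delineation's edge pixels in each direction, read off the registration's deformation field, is given below. The array layout and units are assumptions.

      import numpy as np

      def dist95(deformation_field, edge_mask):
          # deformation_field : (H, W, 2) per-pixel (AP, CC) displacements in mm.
          # edge_mask         : boolean (H, W) mask of the delineation's edge pixels.
          ap = np.abs(deformation_field[..., 0][edge_mask])
          cc = np.abs(deformation_field[..., 1][edge_mask])
          return np.percentile(ap, 95), np.percentile(cc, 95)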

  7. Evaluation of experimental UAV video change detection

    NASA Astrophysics Data System (ADS)

    Bartelsen, J.; Saur, G.; Teutsch, C.

    2016-10-01

    During the last ten years, the availability of images acquired from unmanned aerial vehicles (UAVs) has been continuously increasing due to the improvements and economic success of flight and sensor systems. From our point of view, reliable and automatic image-based change detection may contribute to overcoming several challenging problems in military reconnaissance, civil security, and disaster management. Changes within a scene can be caused by functional activities, e.g., footprints or skid marks, excavations, or humidity penetration; these might be recognizable in aerial images, but are easily overlooked when change detection is executed manually. Depending on the circumstances, these kinds of changes may be an indication of sabotage, terroristic activity, or threatening natural disasters. Although image-based change detection is possible from both ground and aerial perspectives, in this paper we primarily address the latter. We have applied an extended approach to change detection as described by Saur and Kruger,1 and Saur et al.2 and have built upon the ideas of Saur and Bartelsen.3 The commercial simulation environment Virtual Battle Space 3 (VBS3) is used to simulate aerial "before" and "after" image acquisition concerning flight path, weather conditions and objects within the scene and to obtain synthetic videos. Video frames, which depict the same part of the scene, including "before" and "after" changes and not necessarily from the same perspective, are registered pixel-wise against each other by a photogrammetric concept, which is based on a homography. The pixel-wise registration is used to apply an automatic difference analysis, which, to a limited extent, is able to suppress typical errors caused by imprecise frame registration, sensor noise, vegetation and especially parallax effects. The primary concern of this paper is to seriously evaluate the possibilities and limitations of our current approach for image-based change detection with respect to the flight path, viewpoint change and parametrization. Hence, based on synthetic "before" and "after" videos of a simulated scene, we estimated the precision and recall of automatically detected changes. In addition, based on our approach, we illustrate results showing change detection in short but real video sequences. Future work will improve the photogrammetric approach for frame registration, and extensive real video material suitable for change detection will be acquired.

  8. Pixel Perfect

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perrine, Kenneth A.; Hopkins, Derek F.; Lamarche, Brian L.

    2005-09-01

    Biologists and computer engineers at Pacific Northwest National Laboratory have specified, designed, and implemented a hardware/software system for performing real-time, multispectral image processing on a confocal microscope. This solution is intended to extend the capabilities of the microscope, enabling scientists to conduct advanced experiments on cell signaling and other kinds of protein interactions. FRET (fluorescence resonance energy transfer) techniques are used to locate and monitor protein activity. In FRET, it is critical that spectral images be precisely aligned with each other despite disturbances in the physical imaging path caused by imperfections in lenses and cameras, and expansion and contraction of materials due to temperature changes. The central importance of this work is therefore automatic image registration. This runs in a framework that guarantees real-time performance (processing pairs of 1024x1024, 8-bit images at 15 frames per second) and enables the addition of other types of advanced image processing algorithms such as image feature characterization. The supporting system architecture consists of a Visual Basic front-end containing a series of on-screen interfaces for controlling various aspects of the microscope and a script engine for automation. One of the controls is an ActiveX component written in C++ for handling the control and transfer of images. This component interfaces with a pair of LVDS image capture boards and a PCI board containing a 6-million gate Xilinx Virtex-II FPGA. Several types of image processing are performed on the FPGA in a pipelined fashion, including the image registration. The FPGA offloads work that would otherwise need to be performed by the main CPU and has a guaranteed real-time throughput. Image registration is performed in the FPGA by applying a cubic warp on one image to precisely align it with the other image. Before each experiment, an automated calibration procedure is run in order to set up the cubic warp. During image acquisitions, the cubic warp is evaluated by way of forward differencing. Unwanted pixelation artifacts are minimized by bilinear sampling. The resulting system is state-of-the-art for biological imaging. Precisely registered images enable the reliable use of FRET techniques. In addition, real-time image processing performance allows computed images to be fed back and displayed to scientists immediately, and the pipelined nature of the FPGA allows additional image processing algorithms to be incorporated into the system without slowing throughput.
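
    The cubic-warp registration above is realized in hardware on the FPGA; a purely illustrative software analogue is sketched below. It assumes the warp is given as two 2D cubic polynomials mapping output pixel coordinates to source coordinates, sampled bilinearly (order=1). The coefficient layout is a hypothetical choice, not the system's calibration format, and forward differencing is replaced by direct evaluation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def apply_cubic_warp(image, coeff_x, coeff_y):
    """Software sketch: map each output pixel (x, y) to a source location
    given by two cubic polynomials, then sample the input bilinearly.

    coeff_x, coeff_y : arrays of 10 coefficients each for the 2D cubic terms
        [1, x, y, x^2, xy, y^2, x^3, x^2 y, x y^2, y^3]
        (hypothetical parameterization; a calibration step would supply them).
    """
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    terms = np.stack([np.ones_like(x), x, y, x*x, x*y, y*y,
                      x**3, x*x*y, x*y*y, y**3])
    src_x = np.tensordot(coeff_x, terms, axes=1)
    src_y = np.tensordot(coeff_y, terms, axes=1)
    # order=1 -> bilinear sampling, which limits pixelation artifacts.
    return map_coordinates(image, [src_y, src_x], order=1, mode='nearest')
```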

  9. Small feature sizes and high aperture ratio organic light-emitting diodes by using laser-patterned polyimide shadow masks

    NASA Astrophysics Data System (ADS)

    Kajiyama, Yoshitaka; Joseph, Kevin; Kajiyama, Koichi; Kudo, Shuji; Aziz, Hany

    2014-02-01

    A shadow mask technique capable of realizing high resolution (>330 pixel-per-inch) and ~100% aperture ratio Organic Light-Emitting Diode (OLED) full color displays is demonstrated. The technique utilizes polyimide contact shadow masks, patterned by laser ablation. Red, green, and blue OLEDs with very small feature sizes (<25 μm) are fabricated side by side on one substrate. OLEDs fabricated via this technique have the same performance as those made by established technology. This technique has a strong potential to achieve high resolution OLED displays via standard vacuum deposition processes even on flexible substrates.

  10. a Portable Pixel Detector Operating as AN Active Nuclear Emulsion and its Application for X-Ray and Neutron Tomography

    NASA Astrophysics Data System (ADS)

    Vykydal, Z.; Jakubek, J.; Holy, T.; Pospisil, S.

    2006-04-01

    This work is devoted to the development of a USB 1.1 (Universal Serial Bus) based readout system for the Medipix2 detector to achieve maximum portability of this position sensitive detecting device. All necessary detector support is integrated into one compact system (80 × 50 × 20 mm³) including the detector bias source (up to 100 V). The readout interface can control external I2C based devices, so in the case of tomography it is easy to synchronize the detector shutter with the stepper motor control. An additional significant advantage of the USB interface is the support of back side pulse processing. This feature makes it possible to determine the energy, in addition to the position, of a heavy charged particle hitting the sensor. Due to the small pixel dimensions it is also possible to distinguish the type of a single quantum of radiation from the track created in the pixel detector, as in the case of an active nuclear emulsion.

  11. First experiences with ARNICA, the ARCETRI observatory imaging camera

    NASA Astrophysics Data System (ADS)

    Lisi, F.; Baffa, C.; Hunt, L.; Maiolino, R.; Moriondo, G.; Stanga, R.

    1994-03-01

    ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near infrared bands between 1.0 and 2.5 micrometers that Arcetri Observatory has designed and built as a common use instrument for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1 arcsec per pixel, with sky coverage of more than 4 arcmin x 4 arcmin on the NICMOS 3 (256 x 256 pixels, 40 micrometer side) detector array. The optical path is compact enough to be enclosed in a 25.4 cm diameter dewar; the working temperature of detector and optics is 76 K. We give an estimate of performance, in terms of sensitivity with an assigned observing time, along with some preliminary considerations on photometric accuracy.

  12. Maskless lithography

    DOEpatents

    Sweatt, William C.; Stulen, Richard H.

    1999-01-01

    The present invention provides a method for maskless lithography. A plurality of individually addressable and rotatable micromirrors together comprise a two-dimensional array of micromirrors. Each micromirror in the two-dimensional array can be envisioned as an individually addressable element in the picture that comprises the circuit pattern desired. As each micromirror is addressed it rotates so as to reflect light from a light source onto a portion of the photoresist coated wafer thereby forming a pixel within the circuit pattern. By electronically addressing a two-dimensional array of these micromirrors in the proper sequence a circuit pattern that is comprised of these individual pixels can be constructed on a microchip. The reflecting surface of the micromirror is configured in such a way as to overcome coherence and diffraction effects in order to produce circuit elements having straight sides.

  13. Maskless lithography

    DOEpatents

    Sweatt, W.C.; Stulen, R.H.

    1999-02-09

    The present invention provides a method for maskless lithography. A plurality of individually addressable and rotatable micromirrors together comprise a two-dimensional array of micromirrors. Each micromirror in the two-dimensional array can be envisioned as an individually addressable element in the picture that comprises the circuit pattern desired. As each micromirror is addressed it rotates so as to reflect light from a light source onto a portion of the photoresist coated wafer thereby forming a pixel within the circuit pattern. By electronically addressing a two-dimensional array of these micromirrors in the proper sequence a circuit pattern that is comprised of these individual pixels can be constructed on a microchip. The reflecting surface of the micromirror is configured in such a way as to overcome coherence and diffraction effects in order to produce circuit elements having straight sides. 12 figs.

  14. Method for maskless lithography

    DOEpatents

    Sweatt, William C.; Stulen, Richard H.

    2000-01-01

    The present invention provides a method for maskless lithography. A plurality of individually addressable and rotatable micromirrors together comprise a two-dimensional array of micromirrors. Each micromirror in the two-dimensional array can be envisioned as an individually addressable element in the picture that comprises the circuit pattern desired. As each micromirror is addressed it rotates so as to reflect light from a light source onto a portion of the photoresist coated wafer thereby forming a pixel within the circuit pattern. By electronically addressing a two-dimensional array of these micromirrors in the proper sequence a circuit pattern that is comprised of these individual pixels can be constructed on a microchip. The reflecting surface of the micromirror is configured in such a way as to overcome coherence and diffraction effects in order to produce circuit elements having straight sides.

  15. Equilibrium radionuclide gated angiography in patients with tricuspid regurgitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Handler, B.; Pavel, D.G.; Pietras, R.

    Equilibrium gated radionuclide angiography was performed in 2 control groups (15 patients with no organic heart disease and 24 patients with organic heart disease but without right- or left-sided valvular regurgitation) and in 9 patients with clinical tricuspid regurgitation. The regurgitant index, or ratio of left to right ventricular stroke counts, was significantly lower in patients with tricuspid regurgitation than in either control group. Time-activity variation over the liver was used to compute a hepatic expansion fraction which was significantly higher in patients with tricuspid regurgitation than in either control group. Fourier analysis of time-activity variation in each pixel was used to generate amplitude and phase images. Only pixels with values for amplitude at least 7% of the maximum in the image were retained in the final display. All patients with tricuspid regurgitation had greater than 100 pixels over the liver automatically retained by the computer. These pixels were of phase comparable to that of the right atrium and approximately 180 degrees out of phase with the right ventricle. In contrast, no patient with no organic heart disease and only 1 of 24 patients with organic heart disease had any pixels retained by the computer. In conclusion, patients with tricuspid regurgitation were characterized on equilibrium gated angiography by an abnormally low regurgitant index (7 of 9 patients) reflecting increased right ventricular stroke volume, increased hepatic expansion fraction (7 of 9 patients), and increased amplitude of count variation over the liver in phase with the right atrium (9 of 9 patients).
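
    The per-pixel Fourier analysis described above can be illustrated with a short sketch: take the first harmonic of each pixel's gated time-activity curve and keep only pixels whose amplitude reaches 7% of the image maximum. Array shapes and names are assumptions for illustration, not the study's software.

```python
import numpy as np

def first_harmonic_images(frames, amp_frac=0.07):
    """frames: array (T, H, W) of gated counts over one cardiac cycle.
    Returns amplitude and phase (degrees) of the first Fourier harmonic
    per pixel, with low-amplitude pixels masked out."""
    spectrum = np.fft.fft(frames, axis=0)
    first = spectrum[1]                      # first harmonic, per pixel
    amplitude = np.abs(first)
    phase = np.degrees(np.angle(first))
    keep = amplitude >= amp_frac * amplitude.max()
    phase = np.where(keep, phase, np.nan)    # retain only pixels >= 7% of max
    return amplitude, phase
```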

  16. Fabrication of a Kilopixel Array of Superconducting Microcalorimeters with Microstripline Wiring

    NASA Technical Reports Server (NTRS)

    Chervenak, James

    2012-01-01

    A document describes the fabrication of a two-dimensional microcalorimeter array that uses microstrip wiring and integrated heat sinking to enable use of high-performance pixel designs at kilopixel scales (32 X 32). Each pixel is the high-resolution design employed in small-array test devices, which consist of a Mo/Au TES (transition edge sensor) on a silicon nitride membrane and an electroplated Bi/Au absorber. The pixel pitch within the array is 300 microns, where absorbers 290 microns on a side are cantilevered over a silicon support grid with 100-micron-wide beams. The high-density wiring and heat sinking are both carried by the silicon beams to the edge of the array. All pixels are wired out to the array edge. ECR (electron cyclotron resonance) oxide underlayer is deposited underneath the sensor layer. The sensor (TES) layer consists of a superconducting underlayer and a normal metal top layer. If the sensor is deposited at high temperature, the ECR oxide can be vacuum annealed to improve film smoothness and etch characteristics. This process is designed to recover high-resolution, single-pixel x-ray microcalorimeter performance within arrays of arbitrarily large format. The critical current limiting parts of the circuit are designed to have simple interfaces that can be independently verified. The lead-to-TES interface is entirely determined in a single layer that has multiple points of interface to maximize critical current. The lead rails that overlap the TES sensor element contact both the superconducting underlayer and the TES normal metal

  17. Remote Attitude Measurement Techniques.

    DTIC Science & Technology

    1982-12-01

    television camera). The incident illumination produces a non-uniformity on the scanned side of the sensitive material which can be modeled as an...to compute the probabilistic attitude matrix. Fourth, the experiment will be conducted with the television camera mounted on a machinist's table, such... the optical axis does not necessarily pass through the center of the lens assembly and impact the center pixel in the active region of

  18. Imaging tissues for biomedical research using the high-resolution micro-tomography system nanotom® m

    NASA Astrophysics Data System (ADS)

    Deyhle, Hans; Schulz, Georg; Khimchenko, Anna; Bikis, Christos; Hieber, Simone E.; Jaquiery, Claude; Kunz, Christoph; Müller-Gerbl, Magdalena; Höchel, Sebastian; Saxer, Till; Stalder, Anja K.; Ilgenstein, Bernd; Beckmann, Felix; Thalmann, Peter; Buscema, Marzia; Rohr, Nadja; Holme, Margaret N.; Müller, Bert

    2016-10-01

    Micro computed tomography (μCT) is well established in virtually all fields of biomedical research, allowing for the non-destructive volumetric visualization of tissue morphology. A variety of specimens can be investigated, ranging from soft to hard tissue to engineered structures like scaffolds. Similarly, the size of the objects of interest ranges from a fraction of a millimeter to several tens of centimeters. While synchrotron radiation-based μCT still offers unrivaled data quality, the ever-improving technology of X-ray tube-based machines offers a valuable and more accessible alternative. The Biomaterials Science Center of the University of Basel operates a nanotom® m (phoenix|x-ray, GE Sensing and Inspection Technologies GmbH, Wunstorf, Germany), with a 180 kV source and a minimal spot size of about 0.9 μm. Through the adjustable focus-specimen and focus-detector distances, the effective pixel size can be adjusted from below 500 nm to about 80 μm. On the high-resolution side, it is for example possible to visualize the tubular network in sub-millimeter thin dentin specimens. It is then possible to locally extract parameters such as tubule diameter, density, or alignment, giving information on cell movements during tooth formation. On the other side, with a horizontal shift of the 3,072 pixel x 2,400 pixel detector, specimens up to 35 cm in diameter can be scanned. It is possible, for example, to scan an entire human knee, albeit with inferior resolution. Lab source μCT machines are thus a powerful and flexible tool for the advancement of biomedical research, and a valuable and more accessible alternative to synchrotron radiation facilities.
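
    The adjustable effective pixel size mentioned above follows from simple cone-beam geometry: detector pixel pitch divided by the geometric magnification set by the focus-detector and focus-specimen distances. A minimal illustration with assumed, round numbers (not instrument specifications):

```python
def effective_pixel_size(detector_pitch_um, focus_specimen_mm, focus_detector_mm):
    """Cone-beam geometry: effective pixel size = detector pitch divided by the
    geometric magnification (focus-detector / focus-specimen distance)."""
    magnification = focus_detector_mm / focus_specimen_mm
    return detector_pitch_um / magnification

# Example with assumed values: 100 um detector pitch, specimen 5 mm from the
# source, detector 500 mm away -> 1 um effective pixel size.
print(effective_pixel_size(100.0, 5.0, 500.0))
```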

  19. Effective deep learning training for single-image super-resolution in endomicroscopy exploiting video-registration-based reconstruction.

    PubMed

    Ravì, Daniele; Szczotka, Agnieszka Barbara; Shakir, Dzhoshkun Ismail; Pereira, Stephen P; Vercauteren, Tom

    2018-06-01

    Probe-based confocal laser endomicroscopy (pCLE) is a recent imaging modality that allows performing in vivo optical biopsies. The design of pCLE hardware, and its reliance on an optical fibre bundle, fundamentally limits the image quality: a few tens of thousands of fibres, each acting as the equivalent of a single-pixel detector, are assembled into a single fibre bundle. Video registration techniques can be used to estimate high-resolution (HR) images by exploiting the temporal information contained in a sequence of low-resolution (LR) images. However, the alignment of LR frames, required for the fusion, is computationally demanding and prone to artefacts. In this work, we propose a novel synthetic data generation approach to train exemplar-based Deep Neural Networks (DNNs). HR pCLE images with enhanced quality are recovered by models trained on pairs of estimated HR images (generated by the video registration algorithm) and realistic synthetic LR images. The performance of three different state-of-the-art DNN techniques was analysed on a Smart Atlas database of 8806 images from 238 pCLE video sequences. The results were validated through an extensive image quality assessment that takes into account different quality scores, including a Mean Opinion Score (MOS). Results indicate that the proposed solution produces an effective improvement in the quality of the reconstructed images. The proposed training strategy and associated DNNs allow us to perform convincing super-resolution of pCLE images.

  20. A practical salient region feature based 3D multi-modality registration method for medical images

    NASA Astrophysics Data System (ADS)

    Hahn, Dieter A.; Wolz, Gabriele; Sun, Yiyong; Hornegger, Joachim; Sauer, Frank; Kuwert, Torsten; Xu, Chenyang

    2006-03-01

    We present a novel representation of 3D salient region features and its integration into a hybrid rigid-body registration framework. We adopt scale, translation and rotation invariance properties of those intrinsic 3D features to estimate a transform between underlying mono- or multi-modal 3D medical images. Our method combines advantageous aspects of both feature- and intensity-based approaches and consists of three steps: an automatic extraction of a set of 3D salient region features on each image, a robust estimation of correspondences and their sub-pixel accurate refinement with outliers elimination. We propose a region-growing based approach for the extraction of 3D salient region features, a solution to the problem of feature clustering and a reduction of the correspondence search space complexity. Results of the developed algorithm are presented for both mono- and multi-modal intra-patient 3D image pairs (CT, PET and SPECT) that have been acquired for change detection, tumor localization, and time based intra-person studies. The accuracy of the method is clinically evaluated by a medical expert with an approach that measures the distance between a set of selected corresponding points consisting of both anatomical and functional structures or lesion sites. This demonstrates the robustness of the proposed method to image overlap, missing information and artefacts. We conclude by discussing potential medical applications and possibilities for integration into a non-rigid registration framework.

  1. Iapetus: Unique Surface Properties and a Global Color Dichotomy from Cassini Imaging

    NASA Astrophysics Data System (ADS)

    Denk, Tilmann; Neukum, Gerhard; Roatsch, Thomas; Porco, Carolyn C.; Burns, Joseph A.; Galuba, Götz G.; Schmedemann, Nico; Helfenstein, Paul; Thomas, Peter C.; Wagner, Roland J.; West, Robert A.

    2010-01-01

    Since 2004, Saturn’s moon Iapetus has been observed repeatedly with the Imaging Science Subsystem of the Cassini spacecraft. The images show numerous impact craters down to the resolution limit of ~10 meters per pixel. Small, bright craters within the dark hemisphere indicate a dark blanket thickness on the order of meters or less. Dark, equator-facing and bright, poleward-facing crater walls suggest temperature-driven water-ice sublimation as the process responsible for local albedo patterns. Imaging data also reveal a global color dichotomy, wherein both dark and bright materials on the leading side have a substantially redder color than the respective trailing-side materials. This global pattern indicates an exogenic origin for the redder leading-side parts and suggests that the global color dichotomy initiated the thermal formation of the global albedo dichotomy.

  2. Iapetus: unique surface properties and a global color dichotomy from Cassini imaging.

    PubMed

    Denk, Tilmann; Neukum, Gerhard; Roatsch, Thomas; Porco, Carolyn C; Burns, Joseph A; Galuba, Götz G; Schmedemann, Nico; Helfenstein, Paul; Thomas, Peter C; Wagner, Roland J; West, Robert A

    2010-01-22

    Since 2004, Saturn's moon Iapetus has been observed repeatedly with the Imaging Science Subsystem of the Cassini spacecraft. The images show numerous impact craters down to the resolution limit of approximately 10 meters per pixel. Small, bright craters within the dark hemisphere indicate a dark blanket thickness on the order of meters or less. Dark, equator-facing and bright, poleward-facing crater walls suggest temperature-driven water-ice sublimation as the process responsible for local albedo patterns. Imaging data also reveal a global color dichotomy, wherein both dark and bright materials on the leading side have a substantially redder color than the respective trailing-side materials. This global pattern indicates an exogenic origin for the redder leading-side parts and suggests that the global color dichotomy initiated the thermal formation of the global albedo dichotomy.

  3. Cloud Motion Vectors from MISR using Sub-pixel Enhancements

    NASA Technical Reports Server (NTRS)

    Davies, Roger; Horvath, Akos; Moroney, Catherine; Zhang, Banglin; Zhu, Yanqiu

    2007-01-01

    The operational retrieval of height-resolved cloud motion vectors by the Multiangle Imaging SpectroRadiometer on the Terra satellite has been significantly improved by using sub-pixel approaches to co-registration and disparity assessment, and by imposing stronger quality control based on the agreement between independent forward and aft triplet retrievals. Analysis of the fore-aft differences indicates that CMVs pass the basic operational quality control 67% of the time, with rms differences of 2.4 m/s in speed, 17 deg in direction, and 290 m in height assignment. The use of enhanced quality control thresholds reduces these rms values to 1.5 m/s, 17 deg and 165 m, respectively, at the cost of reducing coverage to 45%. Use of the enhanced thresholds also eliminates a tendency for the rms differences to increase with height. Comparison of CMVs from an earlier operational version, which had slightly weaker quality control, with 6-hour forecast winds from the Global Modeling and Assimilation Office yielded very low bias values and an rms vector difference that ranged from 5 m/s for low clouds to 10 m/s for high clouds.

  4. Removal of anti-Stokes emission background in STED microscopy by FPGA-based synchronous detection

    NASA Astrophysics Data System (ADS)

    Castello, M.; Tortarolo, G.; Coto Hernández, I.; Deguchi, T.; Diaspro, A.; Vicidomini, G.

    2017-05-01

    In stimulated emission depletion (STED) microscopy, the role of the STED beam is to de-excite, via stimulated emission, the fluorophores that have been previously excited by the excitation beam. This condition, together with specific beam intensity distributions, allows obtaining true sub-diffraction spatial resolution images. However, if the STED beam has a non-negligible probability to excite the fluorophores, a strong fluorescent background signal (anti-Stokes emission) reduces the effective resolution. For STED scanning microscopy, different synchronous detection methods have been proposed to remove this anti-Stokes emission background and recover the resolution. However, every method works only for a specific STED microscopy implementation. Here we present a user-friendly synchronous detection method compatible with any STED scanning microscope. It exploits a data acquisition (DAQ) card based on a field-programmable gate array (FPGA), which is progressively used in STED microscopy. In essence, the FPGA-based DAQ card synchronizes the fluorescent signal registration, the beam deflection, and the excitation beam interruption, providing a fully automatic pixel-by-pixel synchronous detection method. We validate the proposed method in both continuous wave and pulsed STED microscope systems.

  5. Calibration of the venµs super-spectral camera

    NASA Astrophysics Data System (ADS)

    Topaz, Jeremy; Sprecher, Tuvia; Tinto, Francesc; Echeto, Pierre; Hagolle, Olivier

    2017-11-01

    A high-resolution super-spectral camera is being developed by Elbit Systems in Israel for the joint CNES-Israel Space Agency satellite, VENμS (Vegetation and Environment monitoring on a new Micro-Satellite). This camera will have 12 narrow spectral bands in the Visible/NIR region and will give images with 5.3 m resolution from an altitude of 720 km, with an orbit which allows a two-day revisit interval for a number of selected sites distributed over some two-thirds of the earth's surface. The swath width will be 27 km at this altitude. To ensure the high radiometric and geometric accuracy needed to fully exploit such multiple data sampling, careful attention is given in the design to maximize characteristics such as signal-to-noise ratio (SNR), spectral band accuracy, stray light rejection, inter-band pixel-to-pixel registration, etc. For the same reasons, accurate calibration of all the principal characteristics is essential, and this presents some major challenges. The methods planned to achieve the required level of calibration are presented following a brief description of the system design. A fuller description of the system design is given in [2], [3] and [4].

  6. A novel approach for establishing benchmark CBCT/CT deformable image registrations in prostate cancer radiotherapy

    PubMed Central

    Kim, Jinkoo; Kumar, Sanath; Liu, Chang; Zhong, Hualiang; Pradhan, Deepak; Shah, Mira; Cattaneo, Richard; Yechieli, Raphael; Robbins, Jared R.; Elshaikh, Mohamed A.; Chetty, Indrin J.

    2014-01-01

    Purpose Deformable image registration (DIR) is an integral component for adaptive radiation therapy. However, accurate registration between daily cone-beam computed tomography (CBCT) and treatment planning CT is challenging, due to significant daily variations in rectal and bladder fillings as well as the increased noise levels in CBCT images. Another significant challenge is the lack of “ground-truth” registrations in the clinical setting, which is necessary for quantitative evaluation of various registration algorithms. The aim of this study is to establish benchmark registrations of clinical patient data. Materials/Methods Three pairs of CT/CBCT datasets were chosen for this IRB-approved retrospective study. On each image, in order to reduce the contouring uncertainty, ten independent sets of organs were manually delineated by five physicians. The mean contour set for each image was derived from the ten contours. A set of distinctive points (round natural calcifications and 3 implanted prostate fiducial markers) were also manually identified. The mean contours and point features were then incorporated as constraints into a B-spline based DIR algorithm. Further, a rigidity penalty was imposed on the femurs and pelvic bones to preserve their rigidity. A piecewise-rigid registration approach was adapted to account for the differences in femur pose and the sliding motion between bones. For each registration, the magnitude of the spatial Jacobian (|JAC|) was calculated to quantify the tissue compression and expansion. Deformation grids and finite-element-model-based unbalanced energy maps were also reviewed visually to evaluate the physical soundness of the resultant deformations. Organ DICE indices (indicating the degree of overlap between registered organs) and residual misalignments of the fiducial landmarks were quantified. Results Manual organ delineation on CBCT images varied significantly among physicians with overall mean DICE index of only 0.7 among redundant contours. Seminal vesicle contours were found to have the lowest correlation amongst physicians (DICE=0.5). After DIR, the organ surfaces between CBCT and planning CT were in good alignment with mean DICE indices of 0.9 for prostate, rectum, and bladder, and 0.8 for seminal vesicles. The Jacobian magnitudes |JAC| in the prostate, rectum, and seminal vesicles were in the range of 0.4–1.5, indicating mild compression/expansion. The bladder volume differences were larger between CBCT and CT images with mean |JAC| values of 2.2, 0.7, and 1.0 for three respective patients. Bone deformation was negligible (|JAC|=~1.0). The difference between corresponding landmark points between CBCT and CT was less than 1.0 mm after DIR. Conclusions We have presented a novel method of establishing benchmark deformable image registration accuracy between CT and CBCT images in the pelvic region. The method incorporates manually delineated organ surfaces and landmark points as well as pixel similarity in the optimization, while ensuring bone rigidity and avoiding excessive deformation in soft tissue organs. Redundant contouring is necessary to reduce the overall registration uncertainty. PMID:24171908
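
    The |JAC| values reported above are the magnitudes of the determinant of the spatial Jacobian of the deformation, which quantify local compression (<1) and expansion (>1). A minimal sketch of that computation for a dense 3D displacement field is shown below; it assumes the field is sampled on the voxel grid in voxel units and is independent of the constrained B-spline registration used in the study.

```python
import numpy as np

def jacobian_magnitude(disp):
    """disp: displacement field of shape (3, Z, Y, X), in voxel units.
    Returns |JAC| per voxel; values < 1 indicate compression, > 1 expansion."""
    grads = [np.gradient(disp[i]) for i in range(3)]   # d(disp_i)/d(z, y, x)
    jac = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            # Jacobian of the mapping x -> x + disp(x): identity plus gradient.
            jac[..., i, j] = (i == j) + grads[i][j]
    return np.abs(np.linalg.det(jac))
```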

  7. Lunar geodesy and cartography: a new era

    NASA Astrophysics Data System (ADS)

    Duxbury, Thomas; Smith, David; Robinson, Mark; Zuber, Maria T.; Neumann, Gregory; Danton, Jacob; Oberst, Juergen; Archinal, Brent; Glaeser, Philipp

    The Lunar Reconnaissance Orbiter (LRO) ushers in a new era in precision lunar geodesy and cartography. LRO was launched in June, 2009, completed its Commissioning Phase in September 2009 and is now in its Primary Mission Phase on its way to collecting high precision, global topographic and imaging data. Aboard LRO are the Lunar Orbiter Laser Altimeter (LOLA; Smith et al., 2009) and the Lunar Reconnaissance Orbiter Camera (LROC; Robinson et al.). LOLA is a derivative of the successful MOLA at Mars that produced the global reference surface being used for all precision cartographic products. LOLA produces 5 altimetry spots having footprints of 5 m at a frequency of 28 Hz, significantly bettering MOLA, which produced 1 spot having a footprint of 150 m at a frequency of 10 Hz. LROC has twin narrow angle cameras having pixel resolutions of 0.5 meters from a 50 km orbit and a wide-angle camera having a pixel resolution of 75 m in up to 7 color bands. One of the two NACs looks to the right of nadir and the other looks to the left, with a few hundred pixel overlap in the nadir direction. LOLA is mounted on the LRO spacecraft to look nadir, in the overlap region of the NACs. The LRO spacecraft has the ability to look nadir and build up global coverage as well as looking off-nadir to provide stereo coverage and fill in data gaps. The LROC wide-angle camera builds up global stereo coverage naturally from its large field-of-view overlap from orbit to orbit during nadir viewing. To date, the LROC WAC has already produced global stereo coverage of the lunar surface. This report focuses on the registration of LOLA altimetry to the LROC NAC images. LOLA has a dynamic range of tens of km while producing elevation data at sub-meter precision. LOLA also has good return in off-nadir attitudes. Over the LRO mission, multiple LOLA tracks will be in each of the NAC images at the lunar equator and even more tracks in the NAC images nearer the poles. The registration of LOLA altimetry to NAC images is aided by the 5 spots showing regional and local slopes, along and cross-track, that are easily correlated visually to features within the images. One can precisely register each of the 5 LOLA spots to specific pixels in LROC images of distinct features such as craters and boulders. This can be performed routinely for features at the 100 m level and larger. However, even features at the several m level can also be registered if a single LOLA spot probes the depth of a small crater while the other 4 spots are on the surrounding surface, or one spot returns from the top of a small boulder seen by NAC. The automatic registration of LOLA tracks with NAC stereo digital terrain models should provide for even higher accuracy. Also, the LOLA pulse spread of the returned signal, which is sensitive to slopes and roughness, is an additional source of information to help match the LOLA tracks to the images. As the global coverage builds, LOLA will provide absolute coordinates in latitude, longitude and radius of surface features with accuracy at the meter level or better. The NAC images will then be registered to the LOLA reference surface in the production of precision, controlled photomosaics, having spatial resolutions as good as 0.5 m/pixel. For hundreds of strategic sites viewed in stereo, even higher precision and more complete surface coverage is possible for the production of digital terrain models and mosaics.
LRO, with LOLA and LROC, will improve the relative and absolute accuracy of geodesy and cartography by orders of magnitude, ushering in a new era for lunar geodesy and cartography. Robinson, M., et al., Space Sci. Rev., DOI 10.1007/s11214-010-9634-2, 2010-02-23, in press. Smith, D., et al., Space Sci. Rev., DOI 10.1007/s11214-009-9512-y, published online 16 May 2009.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mateos, M-J; Brandan, M-E; Gastelum, A

    Purpose: To evaluate the time evolution of texture parameters, based on the gray level co-occurrence matrix (GLCM), in subtracted images of 17 patients (10 malignant and 7 benign) subjected to contrast-enhanced digital mammography (CEDM). The goal is to determine the sensitivity of texture to iodine uptake at the lesion, and its correlation (or lack of) with mean-pixel-value (MPV). Methods: Acquisition of clinical images followed a single-energy CEDM protocol using Rh/Rh/48 kV plus external 0.5 cm Al from a Senographe DS unit. Prior to the iodine-based contrast medium (CM) administration a mask image was acquired; four CM images were obtained 1, 2, 3, and 5 minutes after CM injection. Temporal series were obtained by logarithmic subtraction of registered CM minus mask images. Regions of interest (ROI) for the lesion were drawn by a radiologist and the texture was analyzed. GLCM was evaluated at a 3 pixel distance, 0° angle, and 64 gray-levels. Pixels identified as registration errors were excluded from the computation. 17 texture parameters were chosen, classified according to similarity into 7 groups, and analyzed. Results: In all cases the texture parameters within a group have similar dynamic behavior. Two texture groups (associated to cluster and sum mean) show a strong correlation with MPV; their average correlation coefficient (ACC) is r²=0.90. Other two groups (contrast, homogeneity) remain constant with time, that is, a low-sensitivity to CM uptake. Three groups (regularity, lacunarity and diagonal moment) are sensitive to CM uptake but less correlated with MPV; their ACC is r²=0.78. Conclusion: This analysis has shown that, at least, groups associated to regularity, lacunarity and diagonal moment offer dynamical information additional to the mean pixel value due to the presence of CM at the lesion. The next step will be the analysis in terms of the lesion pathology. Authors thank PAPIIT-IN105813 for support. Consejo Nacional de Ciencia Y Tecnologia, PAPIIT-IN105813.
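
    For readers wanting to reproduce a GLCM computation at the stated settings (distance 3, angle 0°, 64 gray levels), the sketch below uses scikit-image (graycomatrix/graycoprops; spelled greycomatrix in older releases). It returns only the standard descriptors available in that library; the grouped parameters used in the study (cluster, sum mean, lacunarity, diagonal moment) would need separate formulas, and the quantization step is an assumption.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi, levels=64, distance=3):
    """roi: 2D array of the lesion region. Quantize to `levels` gray levels,
    build a GLCM at the given distance and 0 degrees, and return a few
    standard texture descriptors."""
    q = np.floor(
        (roi - roi.min()) / (np.ptp(roi) + 1e-9) * (levels - 1)
    ).astype(np.uint8)
    glcm = graycomatrix(q, distances=[distance], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    return {p: graycoprops(glcm, p)[0, 0]
            for p in ('contrast', 'homogeneity', 'energy', 'correlation')}
```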

  9. Retinal oxygen saturation evaluation by multi-spectral fundus imaging

    NASA Astrophysics Data System (ADS)

    Khoobehi, Bahram; Ning, Jinfeng; Puissegur, Elise; Bordeaux, Kimberly; Balasubramanian, Madhusudhanan; Beach, James

    2007-03-01

    Purpose: To develop a multi-spectral method to measure oxygen saturation of the retina in the human eye. Methods: Five Cynomolgus monkeys with normal eyes were anesthetized with intramuscular ketamine/xylazine and intravenous pentobarbital. Multi-spectral fundus imaging was performed in five monkeys with a commercial fundus camera equipped with a liquid crystal tuned filter in the illumination light path and a 16-bit digital camera. Recording parameters were controlled with software written specifically for the application. Seven images at successively longer oxygen-sensing wavelengths were recorded within 4 seconds. Individual images for each wavelength were captured in less than 100 msec of flash illumination. Slightly misaligned images of separate wavelengths due to slight eye motion were registered and corrected by translational and rotational image registration prior to analysis. Numerical values of relative oxygen saturation of retinal arteries and veins and the underlying tissue in between the artery/vein pairs were evaluated by an algorithm previously described, but which is now corrected for blood volume from averaged pixels (n > 1000). Color saturation maps were constructed by applying the algorithm at each image pixel using a Matlab script. Results: Both the numerical values of relative oxygen saturation and the saturation maps correspond to the physiological condition, that is, in a normal retina, the artery is more saturated than the tissue and the tissue is more saturated than the vein. With the multi-spectral fundus camera and proper registration of the multi-wavelength images, we were able to determine oxygen saturation in the primate retinal structures on a tolerable time scale which is applicable to human subjects. Conclusions: Seven wavelength multi-spectral imagery can be used to measure oxygen saturation in retinal artery, vein, and tissue (microcirculation). This technique is safe and can be used to monitor oxygen uptake in humans. This work is original and is not under consideration for publication elsewhere.
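
    A simplified version of the frame-alignment step described above, correcting only the translational part of the eye motion by phase cross-correlation against the first wavelength image, might look like the sketch below (scikit-image/SciPy). The rotational correction used in the study is omitted here, and the function and parameter names are illustrative.

```python
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift

def align_to_reference(stack):
    """stack: array (N, H, W) of the N wavelength images; the first image is
    used as the reference. Only translational eye-motion correction is
    sketched; the rotational correction is omitted."""
    ref = stack[0]
    aligned = [ref]
    for img in stack[1:]:
        offset, _, _ = phase_cross_correlation(ref, img, upsample_factor=10)
        aligned.append(shift(img, offset))   # sub-pixel translation
    return np.stack(aligned)
```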

  10. 32 CFR 636.10 - Hunter Army Airfield vehicle registration.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Temporary passes will be conspicuously placed on the left side of the vehicle dashboard between the... registered in one of two places: (1) Exterior, front windshield lower left corner. (2) Front, left bumper of... decal with the month on the left and the year on the right. (4) Decals will not be affixed to any other...

  11. Design and Fabrication of Electrostatically Actuated Silicon Microshutters Arrays

    NASA Technical Reports Server (NTRS)

    Oh, L.; Li, M.; Kim, K.; Kelly, D.; Kutyrev, A.; Moseley, S.

    2017-01-01

    We have developed a new fabrication process to actuate microshutter arrays (MSA) electrostatically at NASA Goddard Space Flight Center. The microshutters are fabricated on silicon with thin silicon nitride membranes. The pixel size of each microshutter is 100 x 200 μm². The microshutters rotate 90 degrees on torsion bars. The selected microshutters are actuated, held, and addressed electrostatically by applying voltages on the electrodes on the front and back sides of the microshutters. The atomic layer deposition (ALD) of aluminum oxide was used to insulate electrodes on the back side of walls; the insulation can withstand over 100 V. The ALD aluminum oxide is dry etched, and then the microshutters are released in vapor HF.

  12. Method for maskless lithography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    The present invention provides a method for maskless lithography. A plurality of individually addressable and rotatable micromirrors together comprise a two-dimensional array of micromirrors. Each micromirror in the two-dimensional array can be envisioned as an individually addressable element in the picture that comprises the circuit pattern desired. As each micromirror is addressed it rotates so as to reflect light from a light source onto a portion of the photoresist coated wafer thereby forming a pixel within the circuit pattern. By electronically addressing a two-dimensional array of these micromirrors in the proper sequence a circuit pattern that is comprised of these individual pixels can be constructed on a microchip. The reflecting surface of the micromirror is configured in such a way as to overcome coherence and diffraction effects in order to produce circuit elements having straight sides.

  13. Maskless lithography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sweatt, W.C.; Stulen, R.H.

    The present invention provides a method for maskless lithography. A plurality of individually addressable and rotatable micromirrors together comprise a two-dimensional array of micromirrors. Each micromirror in the two-dimensional array can be envisioned as an individually addressable element in the picture that comprises the circuit pattern desired. As each micromirror is addressed it rotates so as to reflect light from a light source onto a portion of the photoresist coated wafer thereby forming a pixel within the circuit pattern. By electronically addressing a two-dimensional array of these micromirrors in the proper sequence a circuit pattern that is comprised of these individual pixels can be constructed on a microchip. The reflecting surface of the micromirror is configured in such a way as to overcome coherence and diffraction effects in order to produce circuit elements having straight sides. 12 figs.

  14. Tradeoff between picture element dimensions and noncoherent averaging in side-looking airborne radar

    NASA Technical Reports Server (NTRS)

    Moore, R. K.

    1979-01-01

    An experiment was performed in which three synthetic-aperture images and one real-aperture image were successively degraded in spatial resolution, both retaining the same number of independent samples per pixel and using the spatial degradation to allow averaging of different numbers of independent samples within each pixel. The original and degraded images were provided to three interpreters familiar with both aerial photographs and radar images. The interpreters were asked to grade each image in terms of their ability to interpret various specified features on the image. The numerical interpretability grades were then used as a quantitative measure of the utility of the different kinds of image processing and different resolutions. The experiment demonstrated empirically that the interpretability is related exponentially to the SGL volume which is the product of azimuth, range, and gray-level resolution.

  15. Improving Measurement of Forest Structural Parameters by Co-Registering of High Resolution Aerial Imagery and Low Density LiDAR Data

    PubMed Central

    Huang, Huabing; Gong, Peng; Cheng, Xiao; Clinton, Nick; Li, Zengyuan

    2009-01-01

    Forest structural parameters, such as tree height and crown width, are indispensable for evaluating forest biomass or forest volume. LiDAR is a revolutionary technology for measurement of forest structural parameters; however, the accuracy of crown width extraction is not satisfactory when using a low density LiDAR, especially in high canopy cover forest. We used high resolution aerial imagery with a low density LiDAR system to overcome this shortcoming. Morphological filtering was used to generate a DEM (Digital Elevation Model) and a CHM (Canopy Height Model) from LiDAR data. The LiDAR camera image is matched to the aerial image with an automated keypoint search algorithm. As a result, a high registration accuracy of 0.5 pixels was obtained. A local maximum filter, watershed segmentation, and object-oriented image segmentation are used to obtain tree height and crown width. Results indicate that the camera data collected by the integrated LiDAR system plays an important role in registration with aerial imagery. The synthesis with aerial imagery increases the accuracy of forest structural parameter extraction when compared to only using the low density LiDAR data. PMID:22573971
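
    The treetop and crown extraction chain mentioned above (local maximum filter followed by watershed segmentation on the CHM) can be sketched in a few lines with scikit-image. The height threshold and minimum peak distance below are illustrative parameters, not those used in the study.

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def crowns_from_chm(chm, min_height=2.0, min_distance=5):
    """chm: 2D canopy height model (m). Returns a label image of crowns and
    the treetop coordinates. Parameter values are illustrative."""
    canopy = chm > min_height
    # Treetops: local maxima of the CHM at least `min_distance` pixels apart.
    tops = peak_local_max(chm, min_distance=min_distance,
                          labels=canopy.astype(int))
    markers = np.zeros(chm.shape, dtype=int)
    markers[tuple(tops.T)] = np.arange(1, len(tops) + 1)
    # Marker-controlled watershed on the inverted CHM delineates crowns.
    labels = watershed(-chm, markers, mask=canopy)
    return labels, tops
```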

  16. Alternative Packaging for Back-Illuminated Imagers

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata

    2009-01-01

    An alternative scheme has been conceived for packaging of silicon-based back-illuminated, back-side-thinned complementary metal oxide/semiconductor (CMOS) and charge-coupled-device image-detector integrated circuits, including an associated fabrication process. This scheme and process are complementary to those described in "Making a Back-Illuminated Imager With Back-Side Connections" (NPO-42839), NASA Tech Briefs, Vol. 32, No. 7 (July 2008), page 38. To avoid misunderstanding, it should be noted that in the terminology of imaging integrated circuits, "front side" or "back side" does not necessarily refer to the side that, during operation, faces toward or away from a source of light or other object to be imaged. Instead, "front side" signifies that side of a semiconductor substrate upon which the pixel pattern and the associated semiconductor devices and metal conductor lines are initially formed during fabrication, and "back side" signifies the opposite side. If the imager is of the type called "back-illuminated," then the back side is the one that faces an object to be imaged. Initially, a back-illuminated, back-side-thinned image-detector is fabricated with its back side bonded to a silicon handle wafer. At a subsequent stage of fabrication, the front side is bonded to a glass wafer (for mechanical support) and the silicon handle wafer is etched away to expose the back side. The frontside integrated circuitry includes metal input/output contact pads, which are rendered inaccessible by the bonding of the front side to the glass wafer. Hence, one of the main problems is to make the input/output contact pads accessible from the back side, which is ultimately to be the side accessible to the external world. The present combination of an alternative packaging scheme and associated fabrication process constitute a solution of the problem.

  17. Comparison of MISR and Meteosat-9 cloud-motion vectors

    NASA Astrophysics Data System (ADS)

    Lonitz, Katrin; HorváTh, ÁKos

    2011-12-01

    Stereo motion vectors (SMVs) from the Multiangle Imaging SpectroRadiometer (MISR) were evaluated against Meteosat-9 cloud-motion vectors (CMVs) over a one-year period. In general, SMVs had weaker westerlies and southerlies than CMVs at all latitudes and levels. The E-W wind comparison showed small vertical variations with a mean difference of -0.4 m s⁻¹, -1 m s⁻¹, -0.7 m s⁻¹ and corresponding rmsd of 2.4 m s⁻¹, 3.8 m s⁻¹, 3.5 m s⁻¹ for low-, mid-, and high-level clouds, respectively. The N-S wind discrepancies were larger and steadily increased with altitude, having a mean difference of -0.8 m s⁻¹, -2.9 m s⁻¹, -4.4 m s⁻¹ and rmsd of 3.5 m s⁻¹, 6.9 m s⁻¹, 9.5 m s⁻¹ at low, mid, and high levels. The best overall agreement was found in marine stratocumulus off Namibia, while differences were larger in the Tropics and convective clouds. The SMVs were typically assigned to higher altitudes than CMVs. Attributing each observed height difference to MISR and/or Meteosat-9 retrieval biases will require further research; nevertheless, we already identified a few regions and cloud types where CMV height assignment seemed to be the one in error. In thin mid- and high-level clouds over Africa and Arabia as well as in broken marine boundary layer clouds the 10.8-μm brightness temperature-based heights were often biased low due to radiance contributions from the warm surface. Contrarily, low-level CMVs in the South Atlantic were frequently assigned to mid levels by the CO2-slicing method in multilayer situations. We also noticed an apparent cross-swath dependence in SMVs, whereby retrievals were less accurate on the eastern side of the MISR swath than on the western side. This artifact was traced back to sub-pixel MISR co-registration errors, which introduced cross-swath biases in E-W wind, N-S wind, and height of 0.6 m s⁻¹, 2.6 m s⁻¹, and 210 m.

  18. Report on recent results of the PERCIVAL soft X-ray imager

    NASA Astrophysics Data System (ADS)

    Khromova, A.; Cautero, G.; Giuressi, D.; Menk, R.; Pinaroli, G.; Stebel, L.; Correa, J.; Marras, A.; Wunderer, C. B.; Lange, S.; Tennert, M.; Niemann, M.; Hirsemann, H.; Smoljanin, S.; Reza, S.; Graafsma, H.; Göttlicher, P.; Shevyakov, I.; Supra, J.; Xia, Q.; Zimmer, M.; Guerrini, N.; Marsh, B.; Sedgwick, I.; Nicholls, T.; Turchetta, R.; Pedersen, U.; Tartoni, N.; Hyun, H. J.; Kim, K. S.; Rah, S. Y.; Hoenk, M. E.; Jewell, A. D.; Jones, T. J.; Nikzad, S.

    2016-11-01

    The PERCIVAL (Pixelated Energy Resolving CMOS Imager, Versatile And Large) soft X-ray 2D imaging detector is based on stitched, wafer-scale sensors possessing a thick epi-layer, which together with back-thinning and back-side illumination yields elevated quantum efficiency in the photon energy range of 125-1000 eV. Main application fields of PERCIVAL are foreseen in photon science with FELs and synchrotron radiation. This requires a high dynamic range up to 10⁵ ph @ 250 eV paired with single photon sensitivity with high confidence at moderate frame rates in the range of 10-120 Hz. These figures imply the availability of dynamic gain switching on a pixel-by-pixel basis and a highly parallel, low noise analog and digital readout, which has been realized in the PERCIVAL sensor layout. Different aspects of the detector performance have been assessed using prototype sensors with different pixel and ADC types. This work will report on the recent test results performed on the newest chip prototypes with the improved pixel and ADC architecture. For the target frame rates in the 10-120 Hz range an average noise floor of 14 e⁻ has been determined, indicating the ability of detecting single photons with energies above 250 eV. Owing to the successfully implemented adaptive 3-stage multiple-gain switching, the integrated charge level exceeds 4 · 10⁶ e⁻ or 57000 X-ray photons at 250 eV per frame at 120 Hz. For all gains the noise level remains below the Poisson limit also in high-flux conditions. Additionally, a short overview of the updates on the upcoming 2 Mpixel (P2M) detector system (expected at the end of 2016) will be reported.

  19. SU-D-BRF-03: Improvement of TomoTherapy Megavoltage Topogram Image Quality for Automatic Registration During Patient Localization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholey, J; White, B; Qi, S

    2014-06-01

    Purpose: To improve the quality of mega-voltage orthogonal scout images (MV topograms) for a fast and low-dose alternative technique for patient localization on the TomoTherapy HiART system. Methods: Digitally reconstructed radiographs (DRR) of anthropomorphic head and pelvis phantoms were synthesized from kVCT under TomoTherapy geometry (kV-DRR). Lateral (LAT) and anterior-posterior (AP) aligned topograms were acquired with couch speeds of 1cm/s, 2cm/s, and 3cm/s. The phantoms were rigidly translated in all spatial directions with known offsets in increments of 5mm, 10mm, and 15mm to simulate daily positioning errors. The contrast of the MV topograms was automatically adjusted based on the image intensity characteristics. A low-pass fast Fourier transform filter removed high-frequency noise and a Wiener filter reduced stochastic noise caused by scattered radiation to the detector array. An intensity-based image registration algorithm was used to register the MV topograms to a corresponding kV-DRR by minimizing the mean square error between corresponding pixel intensities. The registration accuracy was assessed by comparing the normalized cross correlation coefficients (NCC) between the registered topograms and the kV-DRR. The applied phantom offsets were determined by registering the MV topograms with the kV-DRR and recovering the spatial translation of the MV topograms. Results: The automatic registration technique provided millimeter accuracy and was robust for the deformed MV topograms for three tested couch speeds. The lowest average NCC for all AP and LAT MV topograms was 0.96 for the head phantom and 0.93 for the pelvis phantom. The offsets were recovered to within 1.6mm and 6.5mm for the processed and the original MV topograms respectively. Conclusion: Automatic registration of the processed MV topograms to a corresponding kV-DRR recovered simulated daily positioning errors that were accurate to the order of a millimeter. These results suggest the clinical use of MV topograms as a promising alternative to MVCT patient alignment.
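
    The preprocessing and intensity-based matching described above can be illustrated with a toy version: Wiener-filter the MV topogram and brute-force the integer-pixel translation that minimizes the mean square error against the kV-DRR. The window size and search range are assumptions, and the clinical workflow additionally handled contrast adjustment and sub-pixel optimization.

```python
import numpy as np
from scipy.signal import wiener

def best_offset(topogram, drr, search=20):
    """Find the translation (in pixels) that minimizes the mean square error
    between a noise-reduced MV topogram and the kV-DRR (same-shape arrays).
    `search` is the half-width of the search window; values are illustrative."""
    topo = wiener(topogram.astype(float), mysize=5)   # stochastic-noise reduction
    best, best_mse = (0, 0), np.inf
    h, w = drr.shape
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(topo, dy, axis=0), dx, axis=1)
            mse = np.mean((shifted[search:h-search, search:w-search]
                           - drr[search:h-search, search:w-search]) ** 2)
            if mse < best_mse:
                best, best_mse = (dy, dx), mse
    return best
```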

  20. Automated geographic registration and radiometric correction for UAV-based mosaics

    NASA Astrophysics Data System (ADS)

    Thomasson, J. Alex; Shi, Yeyin; Sima, Chao; Yang, Chenghai; Cope, Dale A.

    2017-05-01

    Texas A&M University has been operating a large-scale, UAV-based, agricultural remote-sensing research project since 2015. To use UAV-based images in agricultural production, many high-resolution images must be mosaicked together to create an image of an agricultural field. Two key difficulties to science-based utilization of such mosaics are geographic registration and radiometric calibration. In our current research project, image files are taken to the computer laboratory after the flight, and semi-manual pre-processing is implemented on the raw image data, including ortho-mosaicking and radiometric calibration. Ground control points (GCPs) are critical for high-quality geographic registration of images during mosaicking. Applications requiring accurate reflectance data also require radiometric-calibration references so that reflectance values of image objects can be calculated. We have developed a method for automated geographic registration and radiometric correction with targets that are installed semi-permanently at distributed locations around fields. The targets are a combination of black (≈5% reflectance), dark gray (≈20% reflectance), and light gray (≈40% reflectance) sections that provide for a transformation of pixel-value to reflectance in the dynamic range of crop fields. The exact spectral reflectance of each target is known, having been measured with a spectrophotometer. At the time of installation, each target is measured for position with a real-time kinematic GPS receiver to give its precise latitude and longitude. Automated location of the reference targets in the images is required for precise, automated, geographic registration; and automated calculation of the digital-number to reflectance transformation is required for automated radiometric calibration. To validate the system for radiometric calibration, a calibrated UAV-based image mosaic of a field was compared to a calibrated single image from a manned aircraft. Reflectance values in selected zones of each image were strongly linearly related, and the average error of UAV-mosaic reflectances was 3.4% in the red band, 1.9% in the green band, and 1.5% in the blue band. Based on these results, the proposed physical system and automated software for calibrating UAV mosaics show excellent promise.
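
    The digital-number-to-reflectance transformation amounts to an empirical-line fit per band through the three targets. A minimal sketch is given below; the nominal 5/20/40% reflectances are placeholders for the spectrophotometer-measured values, and the function name is illustrative.

```python
import numpy as np

def dn_to_reflectance(band, target_dns, target_refl=(0.05, 0.20, 0.40)):
    """Empirical-line calibration sketch for one band.

    band        : 2D array of digital numbers for the band
    target_dns  : mean DN extracted over the black / dark-gray / light-gray targets
    target_refl : known reflectances of those targets (nominal values here;
                  the measured spectrophotometer values would be used in practice)
    """
    gain, offset = np.polyfit(np.asarray(target_dns, float),
                              np.asarray(target_refl, float), 1)
    return gain * band.astype(float) + offset
```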

  1. Method and system for detecting polygon boundaries of structures in images as particle tracks through fields of corners and pixel gradients

    DOEpatents

    Paglieroni, David W [Pleasanton, CA; Manay, Siddharth [Livermore, CA

    2011-12-20

    A stochastic method and system for detecting polygon structures in images, by detecting a set of best matching corners of predetermined acuteness α of a polygon model from a set of similarity scores based on GDM features of corners, and tracking polygon boundaries as particle tracks using a sequential Monte Carlo approach. The tracking involves initializing polygon boundary tracking by selecting pairs of corners from the set of best matching corners to define a first side of a corresponding polygon boundary; tracking all intermediate sides of the polygon boundaries using a particle filter; and terminating polygon boundary tracking by determining the last side of the tracked polygon boundaries to close the polygon boundaries. The particle tracks are then blended to determine polygon matches, which may be made available, such as to a user, for ranking and inspection.

  2. Processing and characterization of high resolution GaN/InGaN LED arrays at 10 micron pitch for micro display applications

    NASA Astrophysics Data System (ADS)

    Dupré, Ludovic; Marra, Marjorie; Verney, Valentin; Aventurier, Bernard; Henry, Franck; Olivier, François; Tirano, Sauveur; Daami, Anis; Templier, François

    2017-02-01

    We report the fabrication process and characterization of high resolution 873 x 500 pixel emissive arrays based on blue or green GaN/InGaN light emitting diodes (LEDs) at a reduced pixel pitch of 10 μm. A self-aligned process along with a combination of damascene metallization steps is presented as the key to creating a common cathode which is expected to provide good thermal dissipation and prevent voltage drops between the center and side of the micro LED matrix. We will discuss the challenges of a self-aligned technology related to the choice of a good P contact metal and will present our solutions for the realization of the metallic interconnections between the GaN contacts and the higher levels of metallization at such a small pixel pitch. Enhanced control of each technological step allows scalability of the process up to 4 inch LED wafers and production of high quality LED arrays. The very high brightness (up to 10⁷ cd·m⁻²) and good external quantum efficiency (EQE) of the resulting device make these kinds of micro displays suitable for augmented reality or head up display applications.

  3. Convolving optically addressed VLSI liquid crystal SLM

    NASA Astrophysics Data System (ADS)

    Jared, David A.; Stirk, Charles W.

    1994-03-01

    We designed, fabricated, and tested an optically addressed spatial light modulator (SLM) that performs a 3 × 3 kernel image convolution using ferroelectric liquid crystal on VLSI technology. The chip contains a 16 × 16 array of current-mirror-based convolvers with a fixed kernel for finding edges. The pixels are located on 75 micron centers, and the modulators are 20 microns on a side. The array successfully enhanced edges in illumination patterns. We developed a high-level simulation tool (CON) for analyzing the performance of convolving SLM designs. CON has a graphical interface and simulates SLM functions using SPICE-like device models. The user specifies the pixel function along with the device parameters and nonuniformities. We discovered through analysis, simulation, and experiment that the operation of current-mirror-based convolver pixels is degraded at low light levels by the variation of transistor threshold voltages inherent to CMOS chips. To function acceptably, the test SLM required the input image to have a minimum irradiance of 10 μW/cm². The minimum required irradiance can be further reduced by adding a photodarlington near the photodetector or by increasing the size of the transistors used to calculate the convolution.
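
    For reference, the fixed 3 × 3 edge-finding convolution performed optically by the pixel array can be mimicked digitally in a few lines. The Laplacian-style kernel and the synthetic 16 × 16 input below are assumptions for illustration; the chip's actual current-mirror weights are not given in the abstract.

        import numpy as np
        from scipy.signal import convolve2d

        # Hypothetical 3x3 edge-enhancement (Laplacian-like) kernel.
        kernel = np.array([[-1, -1, -1],
                           [-1,  8, -1],
                           [-1, -1, -1]], dtype=float)

        # Synthetic 16x16 illumination pattern with a bright square, standing in for
        # the optically addressed input image.
        image = np.zeros((16, 16))
        image[5:11, 5:11] = 1.0

        # Digital analogue of the SLM's per-pixel convolution.
        edges = convolve2d(image, kernel, mode="same", boundary="symm")
        print(edges.round(1))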

  4. Connecting Swath Satellite Data With Imagery in Mapping Applications

    NASA Astrophysics Data System (ADS)

    Thompson, C. K.; Hall, J. R.; Penteado, P. F.; Roberts, J. T.; Zhou, A. Y.

    2016-12-01

    Visualizations of gridded science data products (referred to as Level 3 or Level 4) typically provide a straightforward correlation between image pixels and the source science data. This direct relationship allows users to make initial inferences based on imagery values, facilitating additional operations on the underlying data values, such as data subsetting and analysis. However, that same pixel-to-data relationship for ungridded science data products (referred to as Level 2) is significantly more challenging. These products, also referred to as "swath products", are in orbital "instrument space" and raster visualization pixels do not directly correlate to science data values. Interpolation algorithms are often employed during the gridding or projection of a science dataset prior to image generation, introducing intermediary values that separate the image from the source data values. NASA's Global Imagery Browse Services (GIBS) is researching techniques for efficiently serving "image-ready" data allowing client-side dynamic visualization and analysis capabilities. This presentation will cover some GIBS prototyping work designed to maintain connectivity between Level 2 swath data and its corresponding raster visualizations. Specifically, we discuss the DAta-to-Image-SYstem (DAISY), an indexing approach for Level 2 swath data, and the mechanisms whereby a client may dynamically visualize the data in raster form.

  5. Registration of 3D and Multispectral Data for the Study of Cultural Heritage Surfaces

    PubMed Central

    Chane, Camille Simon; Schütze, Rainer; Boochs, Frank; Marzani, Franck S.

    2013-01-01

    We present a technique for the multi-sensor registration of featureless datasets based on the photogrammetric tracking of the acquisition systems in use. This method is developed for the in situ study of cultural heritage objects and is tested by digitizing a small canvas successively with a 3D digitization system and a multispectral camera while simultaneously tracking the acquisition systems with four cameras and using a cubic target frame with a side length of 500 mm. The achieved tracking accuracy is better than 0.03 mm spatially and 0.150 mrad angularly. This allows us to seamlessly register the 3D acquisitions and to project the multispectral acquisitions on the 3D model. PMID:23322103

  6. Multiple-Frame Detection of Subpixel Targets in Thermal Image Sequences

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Kremens, Robert

    2013-01-01

    The new technology in this approach combines the subpixel detection information from multiple frames of a sequence to achieve a more sensitive detection result, using only the information found in the images themselves. It is taken as a constraint that the method is automated, robust, and computationally feasible for field networks with constrained computation and data rates. This precludes simply downloading a video stream for pixel-wise co-registration on the ground. It is also important that this method not require precise knowledge of sensor position or direction, because such information is often not available. It is also assumed that the scene in question is approximately planar, which is appropriate for a high-altitude airborne or orbital view.

  7. Spitzer secondary eclipses of Qatar-1b

    NASA Astrophysics Data System (ADS)

    Garhart, Emily; Deming, Drake; Mandell, Avi; Knutson, Heather; Fortney, Jonathan J.

    2018-02-01

    Aims: Previous secondary eclipse observations of the hot Jupiter Qatar-1b in the Ks band suggest that it may have an unusually high day side temperature, indicative of minimal heat redistribution. There have also been indications that the orbit may be slightly eccentric, possibly forced by another planet in the system. We investigate the day side temperature and orbital eccentricity using secondary eclipse observations with Spitzer. Methods: We observed the secondary eclipse with Spitzer/IRAC in subarray mode, in both 3.6 and 4.5 μm wavelengths. We used pixel-level decorrelation to correct for Spitzer's intra-pixel sensitivity variations and thereby obtain accurate eclipse depths and central phases. Results: Our 3.6 μm eclipse depth is 0.149 ± 0.051% and the 4.5 μm depth is 0.273 ± 0.049%. Fitting a blackbody planet to our data and two recent Ks band eclipse depths indicates a brightness temperature of 1506 ± 71 K. Comparison to model atmospheres for the planet indicates that its degree of longitudinal heat redistribution is intermediate between fully uniform and day-side only. The day side temperature of the planet is unlikely to be as high (1885 K) as indicated by the ground-based eclipses in the Ks band, unless the planet's emergent spectrum deviates strongly from model atmosphere predictions. The average central phase for our Spitzer eclipses is 0.4984 ± 0.0017, yielding e cos ω = -0.0028 ± 0.0027. Our results are consistent with a circular orbit, and we constrain e cos ω much more strongly than has been possible with previous observations. Tables of the lightcurve data are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/610/A55
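
    The e cos ω constraint follows from the timing offset of the secondary eclipse. To first order in eccentricity, e cos ω ≈ (π/2)(φ_sec − 0.5), where φ_sec is the eclipse central phase; the short check below applies this standard approximation to the reported phase and agrees with the quoted value within its uncertainty.

        import math

        phase_sec = 0.4984                        # reported eclipse central phase
        ecosw = (math.pi / 2.0) * (phase_sec - 0.5)
        print(f"e cos(omega) ~ {ecosw:+.4f}")     # ~ -0.0025, vs reported -0.0028 +/- 0.0027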

  8. A novel approach for establishing benchmark CBCT/CT deformable image registrations in prostate cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Kim, Jinkoo; Kumar, Sanath; Liu, Chang; Zhong, Hualiang; Pradhan, Deepak; Shah, Mira; Cattaneo, Richard; Yechieli, Raphael; Robbins, Jared R.; Elshaikh, Mohamed A.; Chetty, Indrin J.

    2013-11-01

    Deformable image registration (DIR) is an integral component for adaptive radiation therapy. However, accurate registration between daily cone-beam computed tomography (CBCT) and treatment planning CT is challenging, due to significant daily variations in rectal and bladder fillings as well as the increased noise levels in CBCT images. Another significant challenge is the lack of ‘ground-truth’ registrations in the clinical setting, which is necessary for quantitative evaluation of various registration algorithms. The aim of this study is to establish benchmark registrations of clinical patient data. Three pairs of CT/CBCT datasets were chosen for this institutional review board approved retrospective study. On each image, in order to reduce the contouring uncertainty, ten independent sets of organs were manually delineated by five physicians. The mean contour set for each image was derived from the ten contours. A set of distinctive points (round natural calcifications and three implanted prostate fiducial markers) were also manually identified. The mean contours and point features were then incorporated as constraints into a B-spline based DIR algorithm. Further, a rigidity penalty was imposed on the femurs and pelvic bones to preserve their rigidity. A piecewise-rigid registration approach was adapted to account for the differences in femur pose and the sliding motion between bones. For each registration, the magnitude of the spatial Jacobian (|JAC|) was calculated to quantify the tissue compression and expansion. Deformation grids and finite-element-model-based unbalanced energy maps were also reviewed visually to evaluate the physical soundness of the resultant deformations. Organ DICE indices (indicating the degree of overlap between registered organs) and residual misalignments of the fiducial landmarks were quantified. Manual organ delineation on CBCT images varied significantly among physicians, with an overall mean DICE index of only 0.7 among redundant contours. Seminal vesicle contours were found to have the lowest correlation amongst physicians (DICE = 0.5). After DIR, the organ surfaces between CBCT and planning CT were in good alignment with mean DICE indices of 0.9 for prostate, rectum, and bladder, and 0.8 for seminal vesicles. The Jacobian magnitudes |JAC| in the prostate, rectum, and seminal vesicles were in the range of 0.4-1.5, indicating mild compression/expansion. The bladder volume differences were larger between CBCT and CT images, with mean |JAC| values of 2.2, 0.7, and 1.0 for the three respective patients. Bone deformation was negligible (|JAC| ≈ 1.0). The difference between corresponding landmark points between CBCT and CT was less than 1.0 mm after DIR. We have presented a novel method of establishing benchmark DIR accuracy between CT and CBCT images in the pelvic region. The method incorporates manually delineated organ surfaces and landmark points as well as pixel similarity in the optimization, while ensuring bone rigidity and avoiding excessive deformation in soft tissue organs. Redundant contouring is necessary to reduce the overall registration uncertainty.
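
    The DICE indices quoted above follow the usual overlap definition, DICE = 2|A ∩ B| / (|A| + |B|). A minimal sketch for binary organ masks is given below; the masks themselves are illustrative placeholders rather than the study's contours.

        import numpy as np

        def dice_index(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
            """DICE overlap of two boolean masks: 2*|A & B| / (|A| + |B|)."""
            a, b = mask_a.astype(bool), mask_b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        # Illustrative example: two slightly offset "organ" masks.
        a = np.zeros((64, 64), bool); a[20:40, 20:40] = True
        b = np.zeros((64, 64), bool); b[22:42, 22:42] = True
        print(f"DICE = {dice_index(a, b):.2f}")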

  9. 77 FR 21091 - Alabama Power Company; Notice of Application for Amendment of License and Soliciting Comments...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-09

    ... characters, without prior registration, using the eComment system at http://www.ferc.gov/docs-filing/ecomment... applicant proposes to construct a 20-ft wide and approximately 187-ft long concrete boat ramp with courtesy dock on each side of the ramp. Ninety-four feet of the proposed boat ramp will be within the project...

  10. High resolution phoswich gamma-ray imager utilizing monolithic MPPC arrays with submillimeter pixelized crystals

    NASA Astrophysics Data System (ADS)

    Kato, T.; Kataoka, J.; Nakamori, T.; Kishimoto, A.; Yamamoto, S.; Sato, K.; Ishikawa, Y.; Yamamura, K.; Kawabata, N.; Ikeda, H.; Kamada, K.

    2013-05-01

    We report the development of a high spatial resolution tweezers-type coincidence gamma-ray camera for medical imaging. The camera consists of large-area monolithic Multi-Pixel Photon Counters (MPPCs) and submillimeter pixelized scintillator matrices. The MPPC array has 4 × 4 channels in a three-side buttable, very compact package. For a typical operational gain of 7.5 × 10⁵ at +20 °C, gain fluctuation over the entire MPPC device is only ±5.6%, and dark count rates (as measured at the 1 p.e. level) amount to ≤400 kcps per channel. We selected Ce-doped (Lu,Y)₂(SiO₄)O (Ce:LYSO) and a brand-new scintillator, Ce-doped Gd₃Al₂Ga₃O₁₂ (Ce:GAGG), due to their high light yield and density. To improve the spatial resolution, these scintillators were fabricated into 15 × 15 matrices of 0.5 × 0.5 mm² pixels. The Ce:LYSO and Ce:GAGG scintillator matrices were assembled into phosphor sandwich (phoswich) detectors and then coupled to the MPPC array along with a 1 mm thick acrylic light guide, with summing operational amplifiers that compile the signals into four position-encoded analog outputs being used for signal readout. A spatial resolution of 1.1 mm was achieved with the coincidence imaging system using a ²²Na point source. These results suggest that such gamma-ray imagers offer excellent potential for applications in high-spatial-resolution medical imaging.

  11. Staring at Saturn

    NASA Image and Video Library

    2016-09-15

    NASA's Cassini spacecraft stared at Saturn for nearly 44 hours on April 25 to 27, 2016, to obtain this movie showing just over four Saturn days. With Cassini's orbit being moved closer to the planet in preparation for the mission's 2017 finale, scientists took this final opportunity to capture a long movie in which the planet's full disk fit into a single wide-angle camera frame. Visible at top is the giant hexagon-shaped jet stream that surrounds the planet's north pole. Each side of this huge shape is slightly wider than Earth. The resolution of the 250 natural color wide-angle camera frames comprising this movie is 512x512 pixels, rather than the camera's full resolution of 1024x1024 pixels. Cassini's imaging cameras have the ability to take reduced-size images like these in order to decrease the amount of data storage space required for an observation. The spacecraft began acquiring this sequence of images just after it obtained the images to make a three-panel color mosaic. When it began taking images for this movie sequence, Cassini was 1,847,000 miles (2,973,000 kilometers) from Saturn, with an image scale of 355 kilometers per pixel. When it finished gathering the images, the spacecraft had moved 171,000 miles (275,000 kilometers) closer to the planet, with an image scale of 200 miles (322 kilometers) per pixel. A movie is available at http://photojournal.jpl.nasa.gov/catalog/PIA21047

  12. Multimodal ophthalmic imaging using swept source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Malone, Joseph D.; El-Haddad, Mohamed T.; Tye, Logan A.; Majeau, Lucas; Godbout, Nicolas; Rollins, Andrew M.; Boudoux, Caroline; Tao, Yuankai K.

    2016-03-01

    Scanning laser ophthalmoscopy (SLO) and optical coherence tomography (OCT) benefit clinical diagnostic imaging in ophthalmology by enabling in vivo noninvasive en face and volumetric visualization of retinal structures, respectively. Spectrally encoding methods enable confocal imaging through fiber optics and reduce system complexity. Previous applications in ophthalmic imaging include spectrally encoded confocal scanning laser ophthalmoscopy (SECSLO) and a combined SECSLO-OCT system for image guidance, tracking, and registration. However, spectrally encoded imaging suffers from speckle noise because each spectrally encoded channel is effectively monochromatic. Here, we demonstrate in vivo human retinal imaging using a swept source spectrally encoded scanning laser ophthalmoscope and OCT (SS-SESLO-OCT) at 1060 nm. SS-SESLO-OCT uses a shared 100 kHz Axsun swept source and shared scanner and imaging optics, and both channels are detected simultaneously on a shared, dual-channel high-speed digitizer. SESLO illumination and detection were performed using the single-mode core and multimode inner cladding of a double-clad fiber coupler, respectively, to preserve lateral resolution while improving collection efficiency and reducing speckle contrast at the expense of confocality. Concurrent en face SESLO and cross-sectional OCT images were acquired with 1376 × 500 pixels at 200 frames per second. Our system design is compact and uses a shared light source, imaging optics, and digitizer, which reduces overall system complexity and ensures inherent co-registration between the SESLO and OCT FOVs. En face SESLO images acquired concurrently with OCT cross-sections enable lateral motion tracking and three-dimensional volume registration, with broad applications in multivolume OCT averaging, image mosaicking, and intraoperative instrument tracking.

  13. Image registration algorithm for high-voltage electric power live line working robot based on binocular vision

    NASA Astrophysics Data System (ADS)

    Li, Chengqi; Ren, Zhigang; Yang, Bo; An, Qinghao; Yu, Xiangru; Li, Jinping

    2017-12-01

    In the process of dismounting and assembling the drop switch for the high-voltage electric power live line working (EPL2W) robot, one of the key problems is the precision of positioning for the manipulators, gripper, and the bolts used to fix the drop switch. To solve it, we study the theory of the robot's binocular vision system and the characteristics of dismounting and assembling the drop switch. We propose a coarse-to-fine image registration algorithm based on image correlation, which can improve the positioning precision of the manipulators and bolts significantly. The algorithm performs the following three steps: firstly, the target points are marked respectively in the right and left views, and the system then judges whether the target point in the right view satisfies the lowest registration accuracy by using the similarity of the target points' backgrounds in the right and left views, a typical coarse-to-fine strategy; secondly, the system calculates the epipolar line, a sequence of candidate regions containing matching points is generated from the neighborhood of the epipolar line, and the optimal matching region is determined by correlation matching between the template image in the left view and each region in the sequence; finally, the precise coordinates of the target points in the right and left views are calculated from the optimal match. The experimental results indicate that the positioning accuracy in image coordinates is within 2 pixels, the positioning accuracy in the world coordinate system is within 3 mm, and the positioning accuracy of the binocular vision system satisfies the requirements for dismounting and assembling the drop switch.
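
    The correlation-matching step can be sketched with OpenCV's normalized cross-correlation. The sketch below searches a band of rows around the epipolar line in the right image for the best match to a template cut from the left image; the assumption of rectified images (so the epipolar line is a row), the band width, and the template size are illustrative choices, not the robot's actual implementation.

        import cv2
        import numpy as np

        def match_along_epipolar(left_img, right_img, pt_left, half=15, band=5):
            """Find the best correlation match for a template centred on pt_left,
            restricted to a horizontal band around the same row in the right image
            (assumes rectified images, so the epipolar line is that row)."""
            x, y = pt_left
            template = left_img[y - half:y + half + 1, x - half:x + half + 1]
            strip = right_img[y - half - band:y + half + band + 1, :]
            scores = cv2.matchTemplate(strip, template, cv2.TM_CCOEFF_NORMED)
            _, max_val, _, max_loc = cv2.minMaxLoc(scores)
            # Convert the top-left match location back to a centre point in right-image coords.
            match_x = max_loc[0] + half
            match_y = max_loc[1] + half + (y - half - band)
            return (match_x, match_y), max_val

        # Illustrative usage with synthetic images (replace with the stereo pair).
        left = np.random.randint(0, 255, (480, 640), np.uint8)
        right = np.roll(left, 7, axis=1)            # right view shifted by 7 pixels
        pt, score = match_along_epipolar(left, right, (300, 200))
        print(f"match at {pt}, correlation {score:.2f}")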

  14. Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations.

    PubMed

    Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao

    2017-04-11

    A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of the point clouds and panoramic images based on sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the connecting line between the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the accurate corresponding pixel in the relative panoramic images via a collinear function, and the position and orientation relationship amongst different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (with an average of 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10-0.20 m, and vertical accuracy was approximately 0.01-0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst different sensors, system positioning and vehicle speed, are discussed.

  15. 3D registration of depth data of porous surface coatings based on 3D phase correlation and the trimmed ICP algorithm

    NASA Astrophysics Data System (ADS)

    Loftfield, Nina; Kästner, Markus; Reithmeier, Eduard

    2017-06-01

    A critical factor of endoprostheses is the quality of the tribological pairing. The objective of this research project is to manufacture stochastically porous aluminum oxide surface coatings with high wear resistance and active friction minimization. There are many experimental and computational techniques, from mercury porosimetry to imaging methods, for studying porous materials; however, the characterization of disordered pore networks is still a great challenge. To meet this challenge, we aim to obtain a high-resolution three-dimensional reconstruction of the surface. In this work, the reconstruction is approached by repeatedly milling down the surface by a fixed decrement while measuring each layer using a confocal laser scanning microscope (CLSM). The depth data of the successive layers acquired in this way are then registered pairwise. Within this work a direct registration approach is deployed and implemented in two steps, a coarse and a fine alignment. The coarse alignment of the depth data is limited to a translational shift, which occurs in the horizontal direction because the sample is placed alternately under the CLSM and the milling machine, and in the vertical direction due to the milling process itself. The shift is determined by an approach utilizing 3D phase correlation. The fine alignment is implemented by the Trimmed Iterative Closest Point algorithm, matching the most likely common pixels roughly specified by an estimated overlap rate. With the presented two-step approach, a proper 3D registration of the depth data of successive layers is obtained.
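
    The 3D phase correlation used for the coarse translational alignment can be written directly with FFTs: the inverse transform of the normalized cross-power spectrum of the two volumes peaks at the integer shift. Below is a minimal sketch under that assumption, without windowing or subvoxel refinement.

        import numpy as np

        def phase_correlation_3d(vol_a, vol_b, eps=1e-12):
            """Estimate the integer (dz, dy, dx) translation by which vol_b is shifted
            relative to vol_a, i.e. vol_b ~ np.roll(vol_a, shift, axis=(0, 1, 2))."""
            fa, fb = np.fft.fftn(vol_a), np.fft.fftn(vol_b)
            cross_power = np.conj(fa) * fb
            cross_power /= np.abs(cross_power) + eps          # keep phase only
            corr = np.fft.ifftn(cross_power).real
            shift = np.array(np.unravel_index(np.argmax(corr), corr.shape))
            # Map peaks in the upper half of each axis to negative shifts.
            dims = np.array(vol_a.shape)
            shift[shift > dims // 2] -= dims[shift > dims // 2]
            return tuple(int(v) for v in shift)

        # Illustrative check with a synthetic volume shifted by a known amount.
        rng = np.random.default_rng(1)
        vol = rng.random((32, 64, 64))
        shifted = np.roll(vol, shift=(3, -5, 7), axis=(0, 1, 2))
        print(phase_correlation_3d(vol, shifted))   # expected (3, -5, 7)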

  16. Endoscopic laser range scanner for minimally invasive, image guided kidney surgery

    NASA Astrophysics Data System (ADS)

    Friets, Eric; Bieszczad, Jerry; Kynor, David; Norris, James; Davis, Brynmor; Allen, Lindsay; Chambers, Robert; Wolf, Jacob; Glisson, Courtenay; Herrell, S. Duke; Galloway, Robert L.

    2013-03-01

    Image guided surgery (IGS) has led to significant advances in surgical procedures and outcomes. Endoscopic IGS is hindered, however, by the lack of suitable intraoperative scanning technology for registration with preoperative tomographic image data. This paper describes implementation of an endoscopic laser range scanner (eLRS) system for accurate, intraoperative mapping of the kidney surface, registration of the measured kidney surface with preoperative tomographic images, and interactive image-based surgical guidance for subsurface lesion targeting. The eLRS comprises a standard stereo endoscope coupled to a steerable laser, which scans a laser fan beam across the kidney surface, and a high-speed color camera, which records the laser-illuminated pixel locations on the kidney. Through calibrated triangulation, a dense set of 3-D surface coordinates are determined. At maximum resolution, the eLRS acquires over 300,000 surface points in less than 15 seconds. Lower resolution scans of 27,500 points are acquired in one second. Measurement accuracy of the eLRS, determined through scanning of reference planar and spherical phantoms, is estimated to be 0.38 ± 0.27 mm at a range of 2 to 6 cm. Registration of the scanned kidney surface with preoperative image data is achieved using a modified iterative closest point algorithm. Surgical guidance is provided through graphical overlay of the boundaries of subsurface lesions, vasculature, ducts, and other renal structures labeled in the CT or MR images, onto the eLRS camera image. Depth to these subsurface targets is also displayed. Proof of clinical feasibility has been established in an explanted perfused porcine kidney experiment.

  17. RAMTaB: Robust Alignment of Multi-Tag Bioimages

    PubMed Central

    Raza, Shan-e-Ahmed; Humayun, Ahmad; Abouna, Sylvie; Nattkemper, Tim W.; Epstein, David B. A.; Khan, Michael; Rajpoot, Nasir M.

    2012-01-01

    Background: In recent years, new microscopic imaging techniques have evolved to allow us to visualize several different proteins (or other biomolecules) in a visual field. Analysis of protein co-localization becomes viable because molecules can interact only when they are located close to each other. We present a novel approach to align images in a multi-tag fluorescence image stack. The proposed approach is applicable to multi-tag bioimaging systems which (a) acquire fluorescence images by sequential staining and (b) simultaneously capture a phase contrast image corresponding to each of the fluorescence images. To the best of our knowledge, there is no existing method in the literature that addresses simultaneous registration of multi-tag bioimages and selection of the reference image in order to maximize the overall overlap between the images. Methodology/Principal Findings: We employ a block-based method for registration, which yields a confidence measure to indicate the accuracy of our registration results. We derive a shift metric in order to select the Reference Image with Maximal Overlap (RIMO), in turn minimizing the total amount of non-overlapping signal for a given number of tags. Experimental results show that the Robust Alignment of Multi-Tag Bioimages (RAMTaB) framework is robust to variations in contrast and illumination, yields sub-pixel accuracy, and successfully selects the reference image resulting in maximum overlap. The registration results are also shown to significantly improve any follow-up protein co-localization studies. Conclusions: For the discovery of protein complexes and of functional protein networks within a cell, alignment of the tag images in a multi-tag fluorescence image stack is a key pre-processing step. The proposed framework is shown to produce accurate alignment results on both real and synthetic data. Our future work will use the aligned multi-channel fluorescence image data for normal and diseased tissue specimens to analyze molecular co-expression patterns and functional protein networks. PMID:22363510
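
    The reference-image selection can be illustrated with a toy version of the idea: given pairwise shift estimates between all images in the stack, pick as reference the image whose total shift magnitude to all others is smallest. This is only a stand-in for the paper's shift metric and block-based estimation, with made-up shift values.

        import numpy as np

        def select_reference(pairwise_shifts):
            """pairwise_shifts[i][j] = estimated (dy, dx) of image j relative to image i.
            Return the index whose summed shift magnitude to all other images is minimal."""
            n = len(pairwise_shifts)
            totals = [sum(np.hypot(*pairwise_shifts[i][j]) for j in range(n) if j != i)
                      for i in range(n)]
            return int(np.argmin(totals))

        # Illustrative shifts for a 3-image stack (values are made up).
        shifts = {
            0: {1: (2.0, 1.0), 2: (5.0, -3.0)},
            1: {0: (-2.0, -1.0), 2: (3.0, -4.0)},
            2: {0: (-5.0, 3.0), 1: (-3.0, 4.0)},
        }
        print("reference index:", select_reference(shifts))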

  18. Distant Moons

    NASA Image and Video Library

    2016-08-15

    Saturn's moons Tethys and Hyperion appear to be near neighbors in this Cassini view, even though they are actually 930,000 miles (1.5 million kilometers) apart here. Tethys is the larger body on the left. These two icy moons of Saturn are very different worlds. Hyperion is 170 miles (270 kilometers) across. This view looks toward the trailing side of Tethys. North on Tethys is up and rotated 1 degree to the left. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on Aug. 15, 2015. The view was acquired at a distance of approximately 750,000 miles (1.2 million kilometers) from Tethys. Image scale is 4.4 miles (7.0 kilometers) per pixel. The distance to Hyperion was 1.7 million miles (2.7 million kilometers), with an image scale of 10 miles (16 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20493

  19. Dot Against the Dark

    NASA Image and Video Library

    2014-09-02

    As if trying to get our attention, Mimas is positioned against the shadow of Saturn's rings, bright on dark. As we near summer in Saturn's northern hemisphere, the rings cast ever larger shadows on the planet. With a reflectivity of about 96 percent, Mimas (246 miles, or 396 kilometers across) appears bright against the less-reflective Saturn. This view looks toward the sunlit side of the rings from about 10 degrees above the ringplane. The image was taken with the Cassini spacecraft wide-angle camera on July 13, 2014 using a spectral filter which preferentially admits wavelengths of near-infrared light centered at 752 nanometers. The view was acquired at a distance of approximately 1.1 million miles (1.8 million kilometers) from Saturn and approximately 1 million miles (1.6 million kilometers) from Mimas. Image scale is 67 miles (108 kilometers) per pixel at Saturn and 60 miles (97 kilometers) per pixel at Mimas. http://photojournal.jpl.nasa.gov/catalog/PIA18282

  20. Three-dimensional cross point readout detector design for including depth information

    NASA Astrophysics Data System (ADS)

    Lee, Seung-Jae; Baek, Cheol-Ha

    2018-04-01

    We designed a depth-encoding positron emission tomography (PET) detector using a cross point readout method with wavelength-shifting (WLS) fibers. To evaluate the characteristics of the novel detector module and the PET system, we used the DETECT2000 code to simulate optical photon transport in the crystal array; the GATE simulation toolkit was also used. The detector module is made up of four layers of scintillator arrays, five layers of WLS fiber arrays, and two sensor arrays. The WLS fiber arrays in each layer cross each other to transport light to each sensor array. The two sensor arrays are coupled to the forward and left sides of the WLS fiber array, respectively. Three-dimensional pixel positions were identified using a digital positioning algorithm. All pixels were well decoded, with the system resolution ranging from 2.11 mm to 2.29 mm at full width at half maximum (FWHM).

  1. Peeking over Saturn's Shoulder

    NASA Image and Video Library

    2017-01-16

    No Earth-based telescope could ever capture a view quite like this. Earth-based views can only show Saturn's daylit side, from within about 25 degrees of Saturn's equatorial plane. A spacecraft in orbit, like Cassini, can capture stunning scenes that would be impossible from our home planet. This view looks toward the sunlit side of the rings from about 25 degrees above the ring plane. The image was taken in violet light with the Cassini spacecraft wide-angle camera on Oct. 28, 2016. The view was obtained at a distance of approximately 810,000 miles (1.3 million kilometers) from Saturn. Image scale is 50 miles (80 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20517

  2. CERESVis: A QC Tool for CERES that Leverages Browser Technology for Data Validation

    NASA Astrophysics Data System (ADS)

    Chu, C.; Sun-Mack, S.; Heckert, E.; Chen, Y.; Doelling, D.

    2015-12-01

    In this poster, we are going to present three user interfaces that the CERES team uses to validate pixel-level data. Besides our home-grown tools, we will also present the browser technology that we use to provide interactive interfaces, such as jquery, HighCharts, and Google Earth. We pass data to the users' browsers and use the browsers to do some simple computations. The three user interfaces are: Thumbnails -- displays hundreds of images to allow users to browse 24-hour data files in a few seconds; Multiple synchronized cursors -- allows users to compare multiple images side by side; Bounding boxes and histograms -- allows users to draw multiple bounding boxes on an image while the browser computes and displays the histograms.

  3. Planar Submillimeter-Wave Mixer Technology with Integrated Antenna

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Gautam; Mehdi, Imran; Gill, John J.; Lee, Choonsup; Llombart, Nuria L.; Thomas, Bertrand

    2010-01-01

    High-performance mixers at terahertz frequencies require good matching between the coupling circuits, such as antennas and local oscillators, and the diode embedding impedance. With the availability of amplifiers at submillimeter wavelengths and the need to have multi-pixel imagers and cameras, a planar mixer architecture is required to have an integrated system. An integrated mixer with a planar antenna provides a compact and optimized design at terahertz frequencies. Moreover, it leads to a planar architecture that enables efficient interconnection with submillimeter-wave amplifiers. In this architecture, a planar slot antenna is designed on a thin gallium arsenide (GaAs) membrane in such a way that the beam on either side of the membrane is symmetric and has a good beam profile with high coupling efficiency. A coplanar waveguide (CPW) coupled Schottky diode mixer is designed and integrated with the antenna. In this architecture, the local oscillator (LO) is coupled through one side of the antenna and the RF from the other side, without requiring any beam splitters or diplexers. The intermediate frequency (IF) comes out on a 50-ohm CPW line at the edge of the mixer chip, which can be wire-bonded to external circuits. This unique terahertz mixer has an integrated single planar antenna for coupling both the radio frequency (RF) input and LO injection without any diplexer or beamsplitters. The design utilizes a novel planar slot antenna architecture on a 3-μm-thick GaAs membrane. This work is required to enable future multi-pixel terahertz receivers for astrophysics missions, and lightweight and compact receivers for planetary missions to the outer planets in our solar system. Also, this technology can be used in terahertz radar imaging applications as well as for testing of quantum cascade lasers (QCLs).

  4. An improved schlieren method for measurement and automatic reconstruction of the far-field focal spot

    PubMed Central

    Wang, Zhengzhou; Hu, Bingliang; Yin, Qinye

    2017-01-01

    The schlieren method of measuring far-field focal spots offers many advantages at the Shenguang III laser facility, such as low cost and automatic laser-path collimation. However, current methods of far-field focal spot measurement often suffer from low precision and efficiency when the final focal spot is merged manually, thereby reducing the accuracy of reconstruction. In this paper, we introduce an improved schlieren method to construct the high dynamic-range image of far-field focal spots and improve the reconstruction accuracy and efficiency. First, a detection method based on weak light beam sampling and magnification imaging was designed; images of the main and side lobes of the focused laser irradiance in the far field were obtained using two scientific CCD cameras. Second, using a self-correlation template matching algorithm, a circle the same size as the schlieren ball was extracted from the main-lobe cutout image and used to shift the relative region of the main-lobe cutout image within a 100×100 pixel region. The position with the largest correlation coefficient between the side-lobe cutout image and the main-lobe cutout image with the circle extracted was identified as the best matching point. Finally, the least squares method was used to fit the center of the side-lobe schlieren ball, and the fitting error was less than 1 pixel. The experimental results show that this method enables accurate, high-dynamic-range measurement of a far-field focal spot and automatic image reconstruction. Because the best matching point is obtained through image processing rather than through traditional reconstruction methods based on manual splicing, this method improves the efficiency of focal-spot reconstruction and thus offers better experimental precision. PMID:28207758
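
    The final least-squares fit of the schlieren-ball center can be written as a linear (Kåsa-style) algebraic circle fit, sketched below; in practice the edge points would come from the segmented side-lobe cutout image rather than the synthetic points used here.

        import numpy as np

        def fit_circle(x, y):
            """Algebraic least-squares circle fit: x^2 + y^2 + D*x + E*y + F = 0.
            Returns centre (cx, cy) and radius r."""
            A = np.column_stack([x, y, np.ones_like(x)])
            b = -(x ** 2 + y ** 2)
            (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
            cx, cy = -D / 2.0, -E / 2.0
            r = np.sqrt(cx ** 2 + cy ** 2 - F)
            return cx, cy, r

        # Illustrative noisy points on a circle of radius 30 centred at (120, 80).
        rng = np.random.default_rng(2)
        theta = rng.uniform(0, 2 * np.pi, 200)
        x = 120 + 30 * np.cos(theta) + rng.normal(0, 0.5, theta.size)
        y = 80 + 30 * np.sin(theta) + rng.normal(0, 0.5, theta.size)
        print("centre and radius:", fit_circle(x, y))   # expect roughly (120, 80, 30)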

  5. Collection and corrections of oblique multiangle hyperspectral bidirectional reflectance imagery of the water surface

    NASA Astrophysics Data System (ADS)

    Bostater, Charles R.; Oney, Taylor S.

    2017-10-01

    Hyperspectral images of coastal waters in urbanized regions were collected from fixed platform locations. Surf zone imagery and images of shallow bays, lagoons, and coastal waters are processed to produce bidirectional reflectance factor (BRF) signatures corrected for changing viewing angles. Angular changes as a function of pixel location within a scene are used to estimate changes in pixel size and ground sampling areas. Diffuse calibration targets collected simultaneously from within the image scene provide the necessary information for calculating BRF signatures of the water surface and shorelines. Automated scanning using a pushbroom hyperspectral sensor allows imagery to be collected on the order of one minute or less for different regions of interest. Imagery is then rectified and georeferenced using ground control points within nadir-viewing multispectral imagery via image-to-image registration techniques. This paper demonstrates the above as well as presents how spectra can be extracted along different directions in the imagery. The extraction of BRF spectra along track lines allows the application of derivative reflectance spectroscopy for estimating chlorophyll-a, dissolved organic matter, and suspended matter concentrations at or near the water surface. Imagery is presented demonstrating the techniques to identify subsurface features and targets within the littoral and surf zones.

  6. Multimodal segmentation of optic disc and cup from stereo fundus and SD-OCT images

    NASA Astrophysics Data System (ADS)

    Miri, Mohammad Saleh; Lee, Kyungmoo; Niemeijer, Meindert; Abràmoff, Michael D.; Kwon, Young H.; Garvin, Mona K.

    2013-03-01

    Glaucoma is one of the major causes of blindness worldwide. One important structural parameter for the diagnosis and management of glaucoma is the cup-to-disc ratio (CDR), which tends to become larger as glaucoma progresses. While approaches exist for segmenting the optic disc and cup within fundus photographs, and more recently, within spectral-domain optical coherence tomography (SD-OCT) volumes, no approaches have been reported for the simultaneous segmentation of these structures within both modalities combined. In this work, a multimodal pixel-classification approach for the segmentation of the optic disc and cup within fundus photographs and SD-OCT volumes is presented. In particular, after segmentation of other important structures (such as the retinal layers and retinal blood vessels) and fundus-to-SD-OCT image registration, features are extracted from both modalities and a k-nearest-neighbor classification approach is used to classify each pixel as cup, rim, or background. The approach is evaluated on 70 multimodal image pairs from 35 subjects in a leave-10%-out fashion (by subject). A significant improvement in classification accuracy is obtained using the multimodal approach over that obtained from the corresponding unimodal approach (97.8% versus 95.2%; p < 0.05; paired t-test).
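
    The per-pixel k-nearest-neighbor classification can be illustrated with scikit-learn on a toy feature matrix (rows are pixels, columns are multimodal features). The features, labels, and neighborhood size below are synthetic placeholders, not the study's actual fundus/SD-OCT features.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(3)

        # Synthetic training set: 3000 pixels x 5 features, labels 0=background, 1=rim, 2=cup.
        X_train = rng.normal(size=(3000, 5))
        y_train = rng.integers(0, 3, size=3000)

        # Synthetic "test image" flattened into a pixel-by-feature matrix.
        X_test = rng.normal(size=(64 * 64, 5))

        knn = KNeighborsClassifier(n_neighbors=15)
        knn.fit(X_train, y_train)
        labels = knn.predict(X_test).reshape(64, 64)   # per-pixel cup/rim/background map
        print(np.bincount(labels.ravel(), minlength=3))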

  7. Automated coregistration of MTI spectral bands

    NASA Astrophysics Data System (ADS)

    Theiler, James P.; Galbraith, Amy E.; Pope, Paul A.; Ramsey, Keri A.; Szymanski, John J.

    2002-08-01

    In the focal plane of a pushbroom imager, a linear array of pixels is scanned across the scene, building up the image one row at a time. For the Multispectral Thermal Imager (MTI), each of fifteen different spectral bands has its own linear array. These arrays are pushed across the scene together, but since each band's array is at a different position on the focal plane, a separate image is produced for each band. The standard MTI data products (LEVEL1B_R_COREG and LEVEL1B_R_GEO) resample these separate images to a common grid and produce coregistered multispectral image cubes. The coregistration software employs a direct "dead reckoning" approach. Every pixel in the calibrated image is mapped to an absolute position on the surface of the earth, and these are resampled to produce an undistorted coregistered image of the scene. To do this requires extensive information regarding the satellite position and pointing as a function of time, the precise configuration of the focal plane, and the distortion due to the optics. These must be combined with knowledge about the position and altitude of the target on the rotating ellipsoidal earth. We will discuss the direct approach to MTI coregistration, as well as more recent attempts to tweak the precision of the band-to-band registration using correlations in the imagery itself.

  8. The Dawn Topography Investigation

    NASA Technical Reports Server (NTRS)

    Raymond, C. A.; Jaumann, R.; Nathues, A.; Sierks, H.; Roatsch, T.; Preusker, E; Scholten, F.; Gaskell, R. W.; Jorda, L.; Keller, H.-U.

    2011-01-01

    The objective of the Dawn topography investigation is to derive the detailed shapes of 4 Vesta and 1 Ceres in order to create orthorectified image mosaics for geologic interpretation, as well as to study the asteroids' landforms, interior structure, and the processes that have modified their surfaces over geologic time. In this paper we describe our approaches for producing shape models, plans for acquiring the needed image data for Vesta, and the results of a numerical simulation of the Vesta mapping campaign that quantify the expected accuracy of our results. Multi-angle images obtained by Dawn's framing camera will be used to create topographic models with 100 m/pixel horizontal resolution and 10 m height accuracy at Vesta, and 200 m/pixel horizontal resolution and 20 m height accuracy at Ceres. Two different techniques, stereophotogrammetry and stereophotoclinometry, are employed to model the shape; these models will be merged with the asteroidal gravity fields obtained by Dawn to produce geodetically controlled topographic models for each body. The resulting digital topography models, together with the gravity data, will reveal the tectonic, volcanic and impact history of Vesta, and enable co-registration of data sets to determine Vesta's geologic history. At Ceres, the topography will likely reveal much about processes of surface modification as well as the internal structure and evolution of this dwarf planet.

  9. Visible Human 2.0--the next generation.

    PubMed

    Ratiu, Peter; Hillen, Berend; Glaser, Jack; Jenkins, Donald P

    2003-01-01

    The National Library of Medicine has initiated the development of new anatomical methods and techniques for the acquisition of higher resolution data sets, aiming to address the anatomical artifacts encountered in the development of the Visible Human Male and Female and to ensure enhanced detection of structures, providing data in greater depth and breadth. Given this framework, we acquired a complete data set of the head and neck. CT and MR scans were also obtained with registration hardware inserted prior to imaging. The arterial and venous systems were injected with colorized araldite-F. After freezing, axial cryosectioning and digital photography at 147 microns/voxel resolution was performed. Two slabs of the specimen were acquired with a special tissue harvesting technique. The resulting tissue slices of the whole specimen were stained for different tissue types. The resulting histological material was then scanned at a 60x magnification using the Virtual Slice technology at 2 microns/pixel resolution (each slide approximately 75,000 x 100,000 pixels). In this data set, for the first time anatomy is presented as a continuum from a radiologic granularity of 1 mm/voxel, to a macroscopic resolution of .147 mm/voxel, to microscopic resolution of 2 microns/pixel. The hiatus between gross anatomy and histology has been assumed insurmountable, and until the present time this gap was bridged by extrapolating findings on minute samples. The availability of anatomical data with the fidelity presented will make it possible to perform a seamless study of whole organs at a cellular level and provide a testbed for the validation of histological estimation techniques. A future complete Visible Human created from data acquired at a cellular resolution, aside from its daunting size, will open new possibilities in multiple directions in medical research and simulation.

  10. Multiplexed capillary electrophoresis system

    DOEpatents

    Yeung, Edward S.; Li, Qingbo; Lu, Xiandan

    1998-04-21

    The invention provides a side-entry optical excitation geometry for use in a multiplexed capillary electrophoresis system. A charge-injection device is optically coupled to capillaries in the array such that the interior of a capillary is imaged onto only one pixel. In Sanger-type 4-label DNA sequencing reactions, nucleotide identification ("base calling") is improved by using two long-pass filters to split fluorescence emission into two emission channels. A binary poly(ethyleneoxide) matrix is used in the electrophoretic separations.

  11. Multiplexed capillary electrophoresis system

    DOEpatents

    Yeung, Edward S.; Chang, Huan-Tsang; Fung, Eliza N.; Li, Qingbo; Lu, Xiandan

    1996-12-10

    The invention provides a side-entry optical excitation geometry for use in a multiplexed capillary electrophoresis system. A charge-injection device is optically coupled to capillaries in the array such that the interior of a capillary is imaged onto only one pixel. In Sanger-type 4-label DNA sequencing reactions, nucleotide identification ("base calling") is improved by using two long-pass filters to split fluorescence emission into two emission channels. A binary poly(ethyleneoxide) matrix is used in the electrophoretic separations.

  12. Multiplexed capillary electrophoresis system

    DOEpatents

    Yeung, E.S.; Li, Q.; Lu, X.

    1998-04-21

    The invention provides a side-entry optical excitation geometry for use in a multiplexed capillary electrophoresis system. A charge-injection device is optically coupled to capillaries in the array such that the interior of a capillary is imaged onto only one pixel. In Sanger-type 4-label DNA sequencing reactions, nucleotide identification (``base calling``) is improved by using two long-pass filters to split fluorescence emission into two emission channels. A binary poly(ethyleneoxide) matrix is used in the electrophoretic separations. 19 figs.

  13. Multiplexed capillary electrophoresis system

    DOEpatents

    Yeung, E.S.; Chang, H.T.; Fung, E.N.; Li, Q.; Lu, X.

    1996-12-10

    The invention provides a side-entry optical excitation geometry for use in a multiplexed capillary electrophoresis system. A charge-injection device is optically coupled to capillaries in the array such that the interior of a capillary is imaged onto only one pixel. In Sanger-type 4-label DNA sequencing reactions, nucleotide identification (``base calling``) is improved by using two long-pass filters to split fluorescence emission into two emission channels. A binary poly(ethyleneoxide) matrix is used in the electrophoretic separations. 19 figs.

  14. The influence of car registration year on driver casualty rates in Great Britain.

    PubMed

    Broughton, Jeremy

    2012-03-01

    A previous paper analysed data from the British national road accident reporting system to investigate the influence upon car driver casualty rates of the general type of car being driven and its year of first registration. A statistical model was fitted to accident data from 2001 to 2005, and this paper updates the principal results using accident data from 2003 to 2007. Attention focuses upon the role of year of first registration since this allows the influence of developments in car design upon occupant casualty numbers to be evaluated. Three additional topics are also examined with these accident data. Changes over time in frontal and side impacts are compared. Changes in the combined risk for the two drivers involved in a car-car collision are investigated, being the net result of changes in secondary safety and aggressivity. Finally, the results of the new model relating to occupant protection are related to an index that had been developed previously to analyse changes over time in the secondary safety of the car fleet. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Geocoding and stereo display of tropical forest multisensor datasets

    NASA Technical Reports Server (NTRS)

    Welch, R.; Jordan, T. R.; Luvall, J. C.

    1990-01-01

    Concern about the future of tropical forests has led to a demand for geocoded multisensor databases that can be used to assess forest structure, deforestation, thermal response, evapotranspiration, and other parameters linked to climate change. In response to studies being conducted at the Braulio Carrillo National Park, Costa Rica, digital satellite and aircraft images recorded by Landsat TM, SPOT HRV, Thermal Infrared Multispectral Scanner, and Calibrated Airborne Multispectral Scanner sensors were placed in register using the Landsat TM image as the reference map. Despite problems caused by relief, multitemporal datasets, and geometric distortions in the aircraft images, registration was accomplished to within ±20 m (±1 data pixel). A digital elevation model constructed from a multisensor Landsat TM/SPOT stereopair proved useful for generating perspective views of the rugged, forested terrain.
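
    Control-point registration of the aircraft images to the Landsat TM reference can be illustrated with a least-squares affine fit to ground control points. The point pairs below are placeholders, and a real multisensor registration would also involve resampling and possibly relief-dependent or higher-order terms.

        import numpy as np

        def fit_affine(src, dst):
            """Least-squares 2-D affine transform mapping src (N,2) points onto dst (N,2).
            Returns a 2x3 matrix M such that dst ~ M @ [x, y, 1]^T."""
            src = np.asarray(src, float)
            ones = np.ones((src.shape[0], 1))
            A = np.hstack([src, ones])                     # (N, 3)
            M, *_ = np.linalg.lstsq(A, np.asarray(dst, float), rcond=None)
            return M.T                                     # (2, 3)

        def apply_affine(M, pts):
            pts = np.asarray(pts, float)
            return pts @ M[:, :2].T + M[:, 2]

        # Hypothetical control points: image (column, row) -> reference map coordinates.
        src = [(120, 340), (1510, 355), (1480, 1900), (150, 1875), (800, 1100)]
        dst = [(102, 330), (1492, 351), (1455, 1893), (131, 1862), (780, 1092)]
        M = fit_affine(src, dst)

        residuals = apply_affine(M, src) - np.asarray(dst, float)
        print("RMS residual (pixels):", np.sqrt((residuals ** 2).sum(axis=1).mean()))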

  16. Evaluation of a segment-based LANDSAT full-frame approach to crop area estimation

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator); Hixson, M. M.; Davis, S. M.

    1981-01-01

    As the registration of LANDSAT full frames enters the realm of current technology, sampling methods which utilize data other than the segment data used for LACIE should be examined. The effect of separating the functions of sampling for training and sampling for area estimation was examined. The frame selected for analysis was acquired over north central Iowa on August 9, 1978. A stratification of the full frame was defined. Training data came from segments within the frame. Two classification and estimation procedures were compared: statistics developed on one segment were used to classify that segment, and pooled statistics from the segments were used to classify a systematic sample of pixels. Comparisons to USDA/ESCS estimates illustrate that the full-frame sampling approach can provide accurate and precise area estimates.

  17. Concepts for on board satellite image registration. Volume 4: Impact of data set selection on satellite on board signal processing

    NASA Technical Reports Server (NTRS)

    Ruedger, W. H.; Aanstoos, J. V.; Snyder, W. E.

    1982-01-01

    The NASA NEEDS program goals present a requirement for on-board signal processing to achieve user-compatible, information-adaptive data acquisition. This volume addresses the impact of data set selection on the data formatting required for efficient telemetering of the acquired satellite sensor data. More specifically, the FILE algorithm developed by Martin-Marietta provides a means for determining which pixels from the data stream to retain, which effects an improvement in the achievable system throughput. It will be seen that, based on the lack of statistical stationarity in the spatial distribution of cloud cover, periods exist where data acquisition rates exceed the throughput capability. The study therefore addresses various approaches to data compression and truncation as applicable to this sensor mission.

  18. SpcAudace: Spectroscopic processing and analysis package of Audela software

    NASA Astrophysics Data System (ADS)

    Mauclaire, Benjamin

    2017-11-01

    SpcAudace processes long-slit spectra with automated pipelines and performs astrophysical analysis of the resulting data. These powerful pipelines carry out all the required steps in one pass: standard preprocessing, masking of bad pixels, geometric corrections, registration, optimized spectrum extraction, wavelength calibration, and instrumental response computation and correction. Both high- and low-resolution long-slit spectra are managed for stellar and non-stellar targets. Many types of publication-quality figures can be easily produced: pdf and png plots or annotated time series plots. Astrophysical quantities can be derived from individual spectra or from large numbers of spectra with advanced functions: from line profile characteristics to equivalent width and periodograms. More than 300 documented functions are available and can be used in TCL scripts for automation. SpcAudace is based on the Audela open source software.

  19. CdTe Based Hard X-ray Imager Technology For Space Borne Missions

    NASA Astrophysics Data System (ADS)

    Limousin, Olivier; Delagnes, E.; Laurent, P.; Lugiez, F.; Gevin, O.; Meuris, A.

    2009-01-01

    CEA Saclay has recently developed an innovative technology for CdTe-based pixelated hard X-ray imagers with high spectral performance and high timing resolution for efficient background rejection when the camera is coupled to an active veto shield. This development has been carried out in an R&D program supported by CNES (the French National Space Agency) and has been optimized towards the Simbol-X mission requirements. In the latter telescope, the hard X-ray imager is 64 cm² and is equipped with 625 µm pitch pixels (16384 independent channels) operating at -40 °C in the range of 4 to 80 keV. The camera we demonstrate in this paper consists of a mosaic of 64 independent cameras, divided into 8 independent sectors. Each elementary detection unit, called Caliste, is the hybridization of a 256-pixel Cadmium Telluride (CdTe) detector with full custom front-end electronics into a unique 1 cm² component, juxtaposable on its four sides. Recently, promising results have been obtained from the first micro-camera prototypes, called Caliste 64, and will be presented to illustrate the capabilities of the device as well as the expected performance of an instrument based on it. The modular design of Caliste makes it possible to consider extended developments toward an IXO-type mission, according to its specific scientific requirements.

  20. CdTe focal plane detector for hard x-ray focusing optics

    NASA Astrophysics Data System (ADS)

    Seller, Paul; Wilson, Matthew D.; Veale, Matthew C.; Schneider, Andreas; Gaskin, Jessica; Wilson-Hodge, Colleen; Christe, Steven; Shih, Albert Y.; Gregory, Kyle; Inglis, Andrew; Panessa, Marco

    2015-08-01

    The demand for higher resolution x-ray optics (a few arcseconds or better) in the areas of astrophysics and solar science has, in turn, driven the development of complementary detectors. These detectors should have fine pixels, necessary to appropriately oversample the optics at a given focal length, and an energy response also matched to that of the optics. Rutherford Appleton Laboratory have developed a 3-side buttable, 20 mm x 20 mm CdTe-based detector with 250 μm square pixels (80x80 pixels) which achieves 1 keV FWHM @ 60 keV and gives full spectroscopy between 5 keV and 200 keV. An added advantage of these detectors is that they have a full-frame readout rate of 10 kHz. Working with NASA Goddard Space Flight Center and Marshall Space Flight Center, 4 of these 1 mm thick CdTe detectors are tiled into a 2x2 array for use at the focal plane of a balloon-borne hard-x-ray telescope, and a similar configuration could be suitable for astrophysics and solar space-based missions. This effort encompasses the fabrication and testing of flight-suitable front-end electronics and calibration of the assembled detector arrays. We explain the operation of the pixelated ASIC readout and measurements, front-end electronics development, preliminary X-ray imaging and spectral performance, and plans for full calibration of the detector assemblies. Work done in conjunction with the NASA Centers is funded through the NASA Science Mission Directorate Astrophysics Research and Analysis Program.

  1. A noiseless, kHz frame rate imaging detector for AO wavefront sensors based on MCPs read out with the Medipix2 CMOS pixel chip

    NASA Astrophysics Data System (ADS)

    Vallerga, J. V.; McPhate, J. B.; Tremsin, A. S.; Siegmund, O. H. W.; Mikulec, B.; Clark, A. G.

    2004-12-01

    Future wavefront sensors in adaptive optics (AO) systems for the next generation of large telescopes (> 30 m diameter) will require large formats (512x512), kHz frame rates, low readout noise (<3 electrons), and high optical QE. The current generation of CCDs cannot achieve the first three of these specifications simultaneously. We present a detector scheme that can meet the first three requirements with an optical QE > 40%. This detector consists of a vacuum tube with a proximity-focused GaAs photocathode whose photoelectrons are amplified by microchannel plates and the resulting output charge cloud counted by a pixelated CMOS application specific integrated circuit (ASIC) called the Medipix2 (http://medipix.web.cern.ch/MEDIPIX/). Each 55 micron square pixel of the Medipix2 chip has an amplifier, discriminator, and 14-bit counter, and the 256x256 array can be read out in 287 microseconds. The chip is 3-side abuttable, so a 512x512 array is feasible in one vacuum tube. We will present the first results with an open-faced, demountable version of the detector where we have mounted a pair of MCPs 500 microns above a Medipix2 readout inside a vacuum chamber and illuminated it with UV light. The results include: flat field response, spatial resolution, spatial linearity at the sub-pixel level, and global event counting rate. We will also discuss the vacuum tube design and the fabrication issues associated with the Medipix2 surviving the tube-making process.

  2. CdTe Focal Plane Detector for Hard X-Ray Focusing Optics

    NASA Technical Reports Server (NTRS)

    Seller, Paul; Wilson, Matthew D.; Veale, Matthew C.; Schneider, Andreas; Gaskin, Jessica; Wilson-Hodge, Colleen; Christe, Steven; Shih, Albert Y.; Inglis, Andrew; Panessa, Marco

    2015-01-01

    The demand for higher resolution x-ray optics (a few arcseconds or better) in the areas of astrophysics and solar science has, in turn, driven the development of complementary detectors. These detectors should have fine pixels, necessary to appropriately oversample the optics at a given focal length, and an energy response also matched to that of the optics. Rutherford Appleton Laboratory have developed a 3-side buttable, 20 millimeter x 20 millimeter CdTe-based detector with 250 micrometer square pixels (80 x 80 pixels) which achieves 1 kiloelectronvolt FWHM (Full-Width Half-Maximum) @ 60 kiloelectronvolts and gives full spectroscopy between 5 kiloelectronvolts and 200 kiloelectronvolts. An added advantage of these detectors is that they have a full-frame readout rate of 10 kilohertz. Working with NASA Goddard Space Flight Center and Marshall Space Flight Center, 4 of these 1 millimeter-thick CdTe detectors are tiled into a 2 x 2 array for use at the focal plane of a balloon-borne hard-x-ray telescope, and a similar configuration could be suitable for astrophysics and solar space-based missions. This effort encompasses the fabrication and testing of flight-suitable front-end electronics and calibration of the assembled detector arrays. We explain the operation of the pixelated ASIC readout and measurements, front-end electronics development, preliminary X-ray imaging and spectral performance, and plans for full calibration of the detector assemblies. Work done in conjunction with the NASA Centers is funded through the NASA Science Mission Directorate Astrophysics Research and Analysis Program.

  3. Preliminary GOES-R ABI navigation and registration assessment results

    NASA Astrophysics Data System (ADS)

    Tan, B.; Dellomo, J.; Wolfe, R. E.; Reth, A. D.

    2017-12-01

The US Geostationary Operational Environmental Satellite - R Series (GOES-R) was launched on November 19, 2016, and was designated GOES-16 upon reaching geostationary orbit ten days later. The Advanced Baseline Imager (ABI) is the primary instrument on the GOES-R series for imaging Earth's surface and atmosphere to aid in weather prediction and climate monitoring. We developed algorithms and software for independent verification of the ABI Image Navigation and Registration (INR). Since late January 2017, four INR metrics have been continuously generated to monitor the ABI INR performance: navigation (NAV) error, channel-to-channel registration (CCR) error, frame-to-frame registration (FFR) error, and within-frame registration (WIFR) error. In this paper, we describe the fundamental algorithm used for the image registration and briefly discuss the processing flow of the INR Performance Assessment Tool Set (IPATS) developed for ABI INR. The accuracy assessment shows that the IPATS measurement error is about 1/20 of the size of a pixel. The GOES-16 NAV assessment results, the primary metric, from January to August 2017 are then presented. The INR has improved over time as post-launch tests were performed and corrections were applied. The mean NAV error of the visible and near infrared (VNIR) channels dropped from 20 μrad in January to around 5 μrad (+/-4 μrad, 1 σ) in June, while the mean NAV error of the long wave infrared (LWIR) channels dropped from around 70 μrad in January to around 5 μrad (+/-15 μrad, 1 σ) in June. A full global ABI image is composed of 22 east-west swaths. The swath-wise NAV error analysis shows that there was some variation in the mean swath-wise NAV errors; the variations are as large as about 20% of the scene mean NAV errors. As expected, the swaths over the tropical area have far fewer valid assessments (matchups) than those in mid-latitude regions due to cloud coverage. It was also found that there was a rotation (clocking) of the LWIR focal plane that was seen in both the NAV and CCR results. The rotation was corrected by an INR update in June 2017. Through deep-dive examinations of the scenes with large mean and/or variation in INR errors, we validated that IPATS is an excellent tool for assessing and improving the GOES-16 ABI INR and is also useful for INR long-term monitoring.
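
    For readers unfamiliar with the correlation step referenced above, the following is a minimal sketch of sub-pixel shift estimation between an image chip and a reference chip using FFT cross-correlation with parabolic peak refinement. It is not the IPATS implementation; the function name and the refinement scheme are illustrative only.

```python
import numpy as np

def subpixel_shift(ref, img):
    """Estimate the (row, col) shift that, applied to `img`, aligns it with `ref`."""
    ref = ref - ref.mean()
    img = img - img.mean()
    # Circular cross-correlation via FFT (chips assumed small and well windowed).
    xcorr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)

    def parabolic(c_m, c_0, c_p):
        # Vertex offset of a parabola through three equally spaced samples.
        denom = c_m - 2.0 * c_0 + c_p
        return 0.0 if denom == 0 else 0.5 * (c_m - c_p) / denom

    shift = []
    for axis, p in enumerate(peak):
        unit = np.eye(2, dtype=int)[axis]
        c_0 = xcorr[peak]
        c_m = xcorr[tuple(np.subtract(peak, unit) % xcorr.shape)]
        c_p = xcorr[tuple(np.add(peak, unit) % xcorr.shape)]
        # Wrap peak indices larger than N/2 to negative shifts, then add the
        # fractional refinement.
        p_signed = p - xcorr.shape[axis] if p > xcorr.shape[axis] // 2 else p
        shift.append(p_signed + parabolic(c_m, c_0, c_p))
    return tuple(shift)
```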

  4. Surface Deformation Associated With a Historical Diking Event in Afar From Correlation of Space and Air-Borne Optical Images

    NASA Astrophysics Data System (ADS)

    Harrington, J.; Peltzer, G.; Leprince, S.; Ayoub, F.; Kasser, M.

    2011-12-01

We present new measurements of the surface deformation associated with the rifting event of 1978 in the Asal-Ghoubbet rift, Republic of Djibouti. The Asal-Ghoubbet rift forms a component of the Afar Depression, a broad extensional region at the junction between the Nubia, Arabia, and Somalia plates, which apart from Iceland, is the only spreading center located above sea-level. The 1978 rifting event was marked by a 2-month sequence of small to moderate earthquakes (Mb ~3-5) and a fissural eruption of the Ardukoba Volcano. Deformation in the Asal rift associated with the event included the reactivation of the main bordering faults and the development of numerous open fissures on the rift floor. The movement of the rift shoulders, measured using ground-based geodesy, showed up to 2.5 m of opening in the N40E direction. Our data include historical aerial photographs from 1962 and 1984 (less than 0.8 m/pixel) along the northern border fault, three KH-9 Hexagon (~8 m/pixel) satellite images from 1973, and recently acquired ASTER (15 m/pixel) and SPOT5 (2.5 m/pixel) data. The measurements are made by correlating pre- and post-event images using the COSI-Corr (Co-registration of Optically Sensed Images and Correlation) software developed at Caltech. The ortho-rectification of the images is done with a mosaic of a 10 m resolution digital elevation model, made by the French Institut Geographique National (IGN), and the SRTM and GDEM datasets. Correlation results from the satellite images indicate 2-3 meters of opening across the rift. Preliminary results obtained using the 1962 and 1984 aerial photographs indicate that a large fraction of the opening occurred on or near Fault γ, which borders the rift to the North. These preliminary results are largely consistent with the ground based measurements made after the event. A complete analysis of the aerial photograph coverage will provide a better characterization of the spatial distribution of the deformation throughout the rift.

  5. From the Night Side

    NASA Image and Video Library

    2015-09-14

    The night sides of Saturn and Tethys are dark places indeed. We know that shadows are darker areas than sunlit areas, and in space, with no air to scatter the light, shadows can appear almost totally black. Tethys (660 miles or 1,062 kilometers across) is just barely seen in the lower left quadrant of this image below the ring plane and has been brightened by a factor of three to increase its visibility. The wavy outline of Saturn's polar hexagon is visible at top center. This view looks toward the sunlit side of the rings from about 10 degrees above the ring plane. The image was taken with the Cassini spacecraft wide-angle camera on Jan. 15, 2015 using a spectral filter which preferentially admits wavelengths of near-infrared light centered at 752 nanometers. The view was obtained at a distance of approximately 1.5 million miles (2.4 million kilometers) from Saturn. Image scale is 88 miles (141 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA18333

  6. Surface topography acquisition method for double-sided near-right-angle structured surfaces based on dual-probe wavelength scanning interferometry.

    PubMed

    Zhang, Tao; Gao, Feng; Jiang, Xiangqian

    2017-10-02

This paper proposes an approach to measure double-sided near-right-angle structured surfaces based on dual-probe wavelength scanning interferometry (DPWSI). The principle and mathematical model are discussed, and the measurement system is calibrated with a combination of standard step-height samples for the vertical calibration of both probes and a specially designed calibration artefact for building up the spatial coordinate relationship of the dual-probe measurement system. The topography of the specially designed artefact is acquired by combining the measurement results from a white light scanning interferometer (WLSI) and a scanning electron microscope (SEM) for reference. The relative location of the two probes is then determined with a 3D registration algorithm. Experimental validation of the approach is provided and the results show that the method is able to measure double-sided near-right-angle structured surfaces with nanometer vertical resolution and micrometer lateral resolution.
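
    The abstract does not spell out which 3D registration algorithm relates the two probes; a common choice for such a rigid fit from paired calibration points is the Kabsch/SVD solution sketched below, given purely as an illustration of the technique, not as the authors' code.

```python
import numpy as np

def rigid_fit(P, Q):
    """Find R, t minimising ||R @ P_i + t - Q_i|| for paired Nx3 point sets."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)                     # 3x3 cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    # Correct an improper rotation (reflection) if the determinant is negative.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = Qc - R @ Pc
    return R, t
```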

  7. Extraction of topography from side-looking satellite systems - A case study with SPOT simulation data

    NASA Technical Reports Server (NTRS)

    Ungar, Stephen G.; Merry, Carolyn J.; Mckim, Harlan L.; Irish, Richard; Miller, Michael S.

    1988-01-01

    A simulated data set was used to evaluate techniques for extracting topography from side-looking satellite systems for an area of northwest Washington state. A negative transparency orthophotoquad was digitized at a spacing of 85 microns, resulting in an equivalent ground distance of 9.86 m between pixels and a radiometric resolution of 256 levels. A bilinear interpolation was performed on digital elevation model data to generate elevation data at a 9.86-m resolution. The nominal orbital characteristics and geometry of the SPOT satellite were convoluted with the data to produce simulated panchromatic HRV digital stereo imagery for three different orbital paths and techniques for reconstructing topographic data were developed. Analyses with the simulated HRV data and other data sets show that the method is effective.

  8. Non-uniform dose distributions in cranial radiation therapy

    NASA Astrophysics Data System (ADS)

    Bender, Edward T.

Radiation treatments are often delivered to patients with brain metastases. For those patients who receive radiation to the entire brain, there is a risk of long-term neuro-cognitive side effects, which may be due to damage to the hippocampus. In clinical MRI and CT scans it can be difficult to identify the hippocampus, but once identified it can be partially spared from radiation dose. Using deformable image registration we demonstrate a semi-automatic technique for obtaining an estimated location of this structure in a clinical MRI or CT scan. Deformable image registration is a useful tool in other areas such as adaptive radiotherapy, where the radiation oncology team monitors patients during the course of treatment and adjusts the radiation treatments if necessary when the patient anatomy changes. Deformable image registration is used in this setting, but there is a considerable level of uncertainty. This work represents one of many possible approaches to investigating the nature of these uncertainties utilizing consistency metrics. We will show that metrics such as the inverse consistency error correlate with actual registration uncertainties. Specifically relating to brain metastases, this work investigates where in the brain metastases are likely to form, and how the primary cancer site is related. We will show that the cerebellum is at high risk for metastases and that non-uniform dose distributions may be advantageous when delivering prophylactic cranial irradiation for patients with small cell lung cancer in complete remission.
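
    As an illustration of the inverse consistency idea mentioned above, the sketch below composes hypothetical forward and backward displacement fields and reports the per-pixel residual. Array shapes and the interpolation choice are assumptions, not the author's code.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def inverse_consistency_error(u_fwd, u_bwd):
    """u_fwd, u_bwd: 2-D displacement fields of shape (2, H, W) in pixel units."""
    H, W = u_fwd.shape[1:]
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Forward-map each pixel, then sample the backward field at the mapped point.
    y_f, x_f = yy + u_fwd[0], xx + u_fwd[1]
    u_bwd_at_f = np.stack([
        map_coordinates(u_bwd[0], [y_f, x_f], order=1, mode="nearest"),
        map_coordinates(u_bwd[1], [y_f, x_f], order=1, mode="nearest"),
    ])
    residual = u_fwd + u_bwd_at_f                 # ~0 everywhere for consistent fields
    return np.sqrt((residual ** 2).sum(axis=0))   # per-pixel ICE magnitude
```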

  9. Saturnian Dawn

    NASA Image and Video Library

    2017-06-26

    NASA's Cassini spacecraft peers toward a sliver of Saturn's sunlit atmosphere while the icy rings stretch across the foreground as a dark band. This view looks toward the unilluminated side of the rings from about 7 degrees below the ring plane. The image was taken in green light with the Cassini spacecraft wide-angle camera on March 31, 2017. The view was obtained at a distance of approximately 620,000 miles (1 million kilometers) from Saturn. Image scale is 38 miles (61 kilometers) per pixel. https://photojournal.jpl.nasa.gov/catalog/PIA21334

  10. Capillaries for use in a multiplexed capillary electrophoresis system

    DOEpatents

    Yeung, Edward S.; Chang, Huan-Tsang; Fung, Eliza N.

    1997-12-09

    The invention provides a side-entry optical excitation geometry for use in a multiplexed capillary electrophoresis system. A charge-injection device is optically coupled to capillaries in the array such that the interior of a capillary is imaged onto only one pixel. In Sanger-type 4-label DNA sequencing reactions, nucleotide identification ("base calling") is improved by using two long-pass filters to split fluorescence emission into two emission channels. A binary poly(ethyleneoxide) matrix is used in the electrophoretic separations.

  11. Capillaries for use in a multiplexed capillary electrophoresis system

    DOEpatents

    Yeung, E.S.; Chang, H.T.; Fung, E.N.

    1997-12-09

The invention provides a side-entry optical excitation geometry for use in a multiplexed capillary electrophoresis system. A charge-injection device is optically coupled to capillaries in the array such that the interior of a capillary is imaged onto only one pixel. In Sanger-type 4-label DNA sequencing reactions, nucleotide identification ("base calling") is improved by using two long-pass filters to split fluorescence emission into two emission channels. A binary poly(ethyleneoxide) matrix is used in the electrophoretic separations. 19 figs.

  12. Sentinel 2 global reference image

    NASA Astrophysics Data System (ADS)

    Dechoz, C.; Poulain, V.; Massera, S.; Languille, F.; Greslou, D.; de Lussy, F.; Gaudel, A.; L'Helguen, C.; Picard, C.; Trémas, T.

    2015-10-01

Sentinel-2 is a multispectral, high-resolution, optical imaging mission, developed by the European Space Agency (ESA) in the frame of the Copernicus program of the European Commission. In cooperation with ESA, the Centre National d'Etudes Spatiales (CNES) is responsible for the image quality of the project, and will ensure the CAL/VAL commissioning phase. The Sentinel-2 mission is devoted to the operational monitoring of land and coastal areas, and will provide continuity with SPOT- and Landsat-type data. Sentinel-2 will also deliver information for emergency services. Launched in 2015 and 2016, the two satellites will form a constellation on a polar sun-synchronous orbit, systematically imaging terrestrial surfaces with a revisit time of 5 days, in 13 spectral bands in the visible and shortwave infra-red. Therefore, multi-temporal series of images, taken under the same viewing conditions, will be available. To ensure the multi-temporal registration of the products, specified to be better than 0.3 pixels at 2σ, a Global Reference Image (GRI) will be produced during the CAL/VAL period. This GRI is composed of a set of Sentinel-2 acquisitions whose geometry has been corrected by bundle block adjustment. During L1B processing, Ground Control Points will be taken between this reference image and the Sentinel-2 acquisition being processed, and the geometric model of the image will be corrected, so as to ensure good multi-temporal registration. This paper first details the production of the reference during the CAL/VAL period, and then details the qualification and geolocation performance assessment of the GRI. It finally presents its use in the Level-1 processing chain and gives a first assessment of the multi-temporal registration.

  13. Registration of Vehicle-Borne Point Clouds and Panoramic Images Based on Sensor Constellations

    PubMed Central

    Yao, Lianbi; Wu, Hangbin; Li, Yayun; Meng, Bin; Qian, Jinfei; Liu, Chun; Fan, Hongchao

    2017-01-01

    A mobile mapping system (MMS) is usually utilized to collect environmental data on and around urban roads. Laser scanners and panoramic cameras are the main sensors of an MMS. This paper presents a new method for the registration of the point clouds and panoramic images based on sensor constellation. After the sensor constellation was analyzed, a feature point, the intersection of the connecting line between the global positioning system (GPS) antenna and the panoramic camera with a horizontal plane, was utilized to separate the point clouds into blocks. The blocks for the central and sideward laser scanners were extracted with the segmentation feature points. Then, the point clouds located in the blocks were separated from the original point clouds. Each point in the blocks was used to find the accurate corresponding pixel in the relative panoramic images via a collinear function, and the position and orientation relationship amongst different sensors. A search strategy is proposed for the correspondence of laser scanners and lenses of panoramic cameras to reduce calculation complexity and improve efficiency. Four cases of different urban road types were selected to verify the efficiency and accuracy of the proposed method. Results indicate that most of the point clouds (with an average of 99.7%) were successfully registered with the panoramic images with great efficiency. Geometric evaluation results indicate that horizontal accuracy was approximately 0.10–0.20 m, and vertical accuracy was approximately 0.01–0.02 m for all cases. Finally, the main factors that affect registration accuracy, including time synchronization amongst different sensors, system positioning and vehicle speed, are discussed. PMID:28398256
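
    The collinear mapping from scanner points to panorama pixels is not reproduced in the abstract; the sketch below shows only the generic final step of projecting a 3D point, already expressed in the panoramic camera frame, to equirectangular pixel coordinates. Image dimensions and axis conventions are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def point_to_panorama_pixel(p_cam, width=8192, height=4096):
    """Map a 3D point in the panoramic camera frame to an equirectangular pixel."""
    x, y, z = p_cam
    lon = np.arctan2(x, z)                        # azimuth in [-pi, pi]
    lat = np.arcsin(y / np.linalg.norm(p_cam))    # elevation in [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * width         # column
    v = (0.5 - lat / np.pi) * height              # row
    return u, v
```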

  14. NoRMCorre: An online algorithm for piecewise rigid motion correction of calcium imaging data.

    PubMed

    Pnevmatikakis, Eftychios A; Giovannucci, Andrea

    2017-11-01

Motion correction is a challenging pre-processing problem that arises early in the analysis pipeline of calcium imaging data sequences. The motion artifacts in two-photon microscopy recordings can be non-rigid, arising from the finite time of raster scanning and non-uniform deformations of the brain medium. We introduce an algorithm for fast Non-Rigid Motion Correction (NoRMCorre) based on template matching. NoRMCorre operates by splitting the field of view (FOV) into overlapping spatial patches along all directions. The patches are registered at a sub-pixel resolution for rigid translation against a regularly updated template. The estimated alignments are subsequently up-sampled to create a smooth motion field for each frame that can efficiently approximate non-rigid artifacts in a piecewise-rigid manner. Existing approaches either do not scale well in terms of computational performance or are targeted to non-rigid artifacts arising just from the finite speed of raster scanning, and thus cannot correct for non-rigid motion observable in datasets from a large FOV. NoRMCorre can be run in an online mode, registering streaming data at speeds comparable to or even faster than real time. We evaluate its performance with simple yet intuitive metrics and compare against other non-rigid registration methods on simulated data and in vivo two-photon calcium imaging datasets. Open source Matlab and Python code is also made available. The proposed method and accompanying code can be useful for solving large scale image registration problems in calcium imaging, especially in the presence of non-rigid deformations. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
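
    A minimal sketch of the piecewise-rigid idea, not the NoRMCorre code: estimate one rigid shift per overlapping patch against a template, then up-sample the coarse grid of shifts to a dense per-pixel motion field. Patch size and stride values are illustrative.

```python
import numpy as np
from scipy.ndimage import zoom

def patch_shifts(frame, template, patch=64, stride=48):
    H, W = frame.shape
    rows = range(0, H - patch + 1, stride)
    cols = range(0, W - patch + 1, stride)
    dy = np.zeros((len(rows), len(cols)))
    dx = np.zeros_like(dy)
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            f = frame[r:r + patch, c:c + patch]
            t = template[r:r + patch, c:c + patch]
            # Integer shift of this patch via FFT cross-correlation against the template.
            xc = np.fft.ifft2(np.fft.fft2(t - t.mean()) * np.conj(np.fft.fft2(f - f.mean()))).real
            p = np.unravel_index(np.argmax(xc), xc.shape)
            dy[i, j] = p[0] - patch if p[0] > patch // 2 else p[0]
            dx[i, j] = p[1] - patch if p[1] > patch // 2 else p[1]
    # Up-sample the coarse grid of patch shifts to a smooth per-pixel motion field.
    fy = zoom(dy, (H / dy.shape[0], W / dy.shape[1]), order=1)
    fx = zoom(dx, (H / dx.shape[0], W / dx.shape[1]), order=1)
    return fy, fx
```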

  15. BrachyView: Combining LDR seed positions with transrectal ultrasound imaging in a prostate gel phantom.

    PubMed

    Alnaghy, S; Cutajar, D L; Bucci, J A; Enari, K; Safavi-Naeini, M; Favoino, M; Tartaglia, M; Carriero, F; Jakubek, J; Pospisil, S; Lerch, M; Rosenfeld, A B; Petasecca, M

    2017-02-01

BrachyView is a novel in-body imaging system which aims to provide LDR brachytherapy seed position reconstruction within the prostate in real time. The first prototype is presented in this study: the probe consists of a gamma camera featuring three single-cone pinhole collimators embedded in a tungsten tube, above three high-resolution pixelated detectors (Timepix). The prostate was imaged with a TRUS system using a sagittal crystal with a 2.5 mm slice thickness. Eleven needles containing a total of thirty 0.508 U 125I seeds were implanted under ultrasound guidance. A CT scan was used to localise the seed positions, as well as provide a reference when performing the image co-registration between the BrachyView coordinate system and the TRUS coordinate system. An in-house visualisation software interface was developed to provide a quantitative 3D reconstructed prostate based on the TRUS images and co-registered with the LDR seeds in situ. A rigid body image registration was performed between the BrachyView and TRUS systems, with the BrachyView and CT-derived source locations compared. The reconstructed seed positions determined by the BrachyView probe showed a maximum discrepancy of 1.78 mm, with 75% of the seeds reconstructed within 1 mm of their nominal location. An accurate co-registration between the BrachyView and TRUS coordinate systems was established. The BrachyView system has shown its ability to reconstruct all implanted LDR seeds within a tissue-equivalent prostate gel phantom, providing both anatomical and seed position information in a single interface. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  16. Automated brain tumor segmentation in magnetic resonance imaging based on sliding-window technique and symmetry analysis.

    PubMed

    Lian, Yanyun; Song, Zhijian

    2014-01-01

Brain tumor segmentation from magnetic resonance imaging (MRI) is an important step toward surgical planning, treatment planning, and monitoring of therapy. However, manual tumor segmentation commonly used in the clinic is time-consuming and challenging, and none of the existing automated methods is highly robust, reliable and efficient in clinical application. An accurate and automated segmentation method has been developed for brain tumors that provides reproducible and objective results close to manual segmentation. Based on the symmetry of the human brain, we employed a sliding-window technique and the correlation coefficient to locate the tumor position. At first, the image to be segmented was normalized, rotated, denoised, and bisected. Subsequently, vertical and horizontal sliding-window techniques were applied in turn: two windows, one in the left and one in the right part of the brain image, were moved simultaneously pixel by pixel while the correlation coefficient between them was calculated. The window pair with the minimal correlation coefficient was obtained; the window with the larger average gray value gives the location of the tumor, and its pixel with the largest gray value is the locating point of the tumor. At last, the segmentation threshold was set to the average gray value of the pixels in a square of 10-pixel side length centered on the locating point, and threshold segmentation and morphological operations were used to acquire the final tumor region. The method was evaluated on 3D FSPGR brain MR images of 10 patients. As a result, the average ratio of correct location was 93.4% for 575 slices containing tumor, the average Dice similarity coefficient was 0.77 for one scan, and the average time spent on one scan was 40 seconds. A fully automated, simple and efficient segmentation method for brain tumors is proposed and is promising for future clinical use. The correlation coefficient is a new and effective feature for tumor location.
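
    A minimal sketch of the symmetry-based locating step described above: paired windows slide over the left half and the mirrored right half of a slice, and the pair with the lowest correlation coefficient is flagged. Window size, step and pre-processing are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np

def locate_asymmetry(slice2d, win=32, step=8):
    """Return the window offset (same in both halves) with minimal left/right correlation."""
    H, W = slice2d.shape
    left = slice2d[:, :W // 2]
    right = np.fliplr(slice2d[:, W // 2:])        # mirror the right half about the midline
    best, best_pos = np.inf, None
    for r in range(0, H - win + 1, step):
        for c in range(0, left.shape[1] - win + 1, step):
            a = left[r:r + win, c:c + win].ravel()
            b = right[r:r + win, c:c + win].ravel()
            if a.std() == 0 or b.std() == 0:      # skip flat (background) windows
                continue
            cc = np.corrcoef(a, b)[0, 1]
            if cc < best:
                best, best_pos = cc, (r, c)
    return best_pos, best
```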

  17. Venus' night side atmospheric dynamics using near infrared observations from VEx/VIRTIS and TNG/NICS

    NASA Astrophysics Data System (ADS)

    Mota Machado, Pedro; Peralta, Javier; Luz, David; Gonçalves, Ruben; Widemann, Thomas; Oliveira, Joana

    2016-10-01

We present night side Venus winds based on coordinated observations carried out with Venus Express' VIRTIS instrument and the Near Infrared Camera (NICS) of the Telescopio Nazionale Galileo (TNG). With the NICS camera, we acquired images in the continuum K filter at 2.28 μm, which allows motions at Venus' lower cloud level, close to 48 km altitude, to be monitored. We will present final results of cloud-tracked winds from ground-based TNG observations and from coordinated space-based VEx/VIRTIS observations. Venus' lower cloud deck is centred at 48 km altitude, where fundamental dynamical exchanges that help maintain superrotation are thought to occur. The lower Venusian atmosphere is a strong source of thermal radiation, with the gaseous CO2 component allowing radiation to escape in windows at 1.74 and 2.28 μm. At these wavelengths radiation originates below 35 km and unit opacity is reached at the lower cloud level, close to 48 km. Therefore, it is possible to observe the horizontal cloud structure, with thicker clouds seen silhouetted against the bright thermal background from the low atmosphere. By continuous monitoring of the horizontal cloud structure at 2.28 μm (NICS Kcont filter), it is possible to determine wind fields using the technique of cloud tracking. We acquired a series of short exposures of the Venus disk. Cloud displacements on the night side of Venus were computed taking advantage of a phase correlation semi-automated technique. The apparent diameter of Venus at the observation dates was greater than 32", allowing high spatial precision. The 0.13" pixel scale of the NICS narrow field camera allowed ~3-pixel displacements to be resolved. The absolute spatial resolution on the disk was ~100 km/px at disk center, and the (0.8-1") seeing-limited resolution was ~400 km/px. By co-adding the best images and cross-correlating regions of clouds the effective resolution was significantly better than the seeing-limited resolution. In order to correct for scattered light from the (saturated) day side crescent into the night side, a set of observations with a Brackett-γ filter was performed. Cloud features are invisible at this wavelength, and this technique allowed for a good correction of scattered light.

  18. Segmentation and Visual Analysis of Whole-Body Mouse Skeleton microSPECT

    PubMed Central

    Khmelinskii, Artem; Groen, Harald C.; Baiker, Martin; de Jong, Marion; Lelieveldt, Boudewijn P. F.

    2012-01-01

    Whole-body SPECT small animal imaging is used to study cancer, and plays an important role in the development of new drugs. Comparing and exploring whole-body datasets can be a difficult and time-consuming task due to the inherent heterogeneity of the data (high volume/throughput, multi-modality, postural and positioning variability). The goal of this study was to provide a method to align and compare side-by-side multiple whole-body skeleton SPECT datasets in a common reference, thus eliminating acquisition variability that exists between the subjects in cross-sectional and multi-modal studies. Six whole-body SPECT/CT datasets of BALB/c mice injected with bone targeting tracers 99mTc-methylene diphosphonate (99mTc-MDP) and 99mTc-hydroxymethane diphosphonate (99mTc-HDP) were used to evaluate the proposed method. An articulated version of the MOBY whole-body mouse atlas was used as a common reference. Its individual bones were registered one-by-one to the skeleton extracted from the acquired SPECT data following an anatomical hierarchical tree. Sequential registration was used while constraining the local degrees of freedom (DoFs) of each bone in accordance to the type of joint and its range of motion. The Articulated Planar Reformation (APR) algorithm was applied to the segmented data for side-by-side change visualization and comparison of data. To quantitatively evaluate the proposed algorithm, bone segmentations of extracted skeletons from the correspondent CT datasets were used. Euclidean point to surface distances between each dataset and the MOBY atlas were calculated. The obtained results indicate that after registration, the mean Euclidean distance decreased from 11.5±12.1 to 2.6±2.1 voxels. The proposed approach yielded satisfactory segmentation results with minimal user intervention. It proved to be robust for “incomplete” data (large chunks of skeleton missing) and for an intuitive exploration and comparison of multi-modal SPECT/CT cross-sectional mouse data. PMID:23152834

  19. A New Normalized Difference Cloud Retrieval Technique Applied to Landsat Radiances Over the Oklahoma ARM Site

    NASA Technical Reports Server (NTRS)

Oreopoulos, Lazaros; Cahalan, Robert; Marshak, Alexander; Wen, Guoyong

    1999-01-01

We suggest a new approach to cloud retrieval, using a normalized difference of nadir reflectivities (NDNR) constructed from a non-absorbing and an absorbing (with respect to liquid water) wavelength. Using Monte Carlo simulations we show that this quantity has the potential of removing first-order scattering effects caused by cloud side illumination and shadowing at oblique Sun angles. Application of the technique to TM (Thematic Mapper) radiance observations from Landsat-5 over the Southern Great Plains site of the ARM (Atmospheric Radiation Measurement) program gives very similar regional statistics and histograms, but significant differences at the pixel level. NDNR can also be combined with the inverse NIPA (Nonlocal Independent Pixel Approximation) of Marshak (1998), which is applied for the first time to overcast Landsat subscenes. We demonstrate the sensitivity of the NIPA-retrieved cloud fields to the parameters of the method and discuss practical issues related to the optimal choice of these parameters.
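
    Assuming the standard normalized-difference form (the abstract does not give the exact expression), the quantity would be computed roughly as sketched below, with R_na and R_abs the nadir reflectances at the non-absorbing and liquid-water-absorbing wavelengths.

```python
def ndnr(R_na, R_abs):
    """Normalized difference of nadir reflectivities, assuming the usual (a-b)/(a+b) form."""
    return (R_na - R_abs) / (R_na + R_abs)
```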

  20. ARNICA: the Arcetri Observatory NICMOS3 imaging camera

    NASA Astrophysics Data System (ADS)

    Lisi, Franco; Baffa, Carlo; Hunt, Leslie K.

    1993-10-01

ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near infrared bands between 1.0 and 2.5 micrometers that Arcetri Observatory has designed and built as a general facility for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1" per pixel, with sky coverage of more than 4' x 4' on the NICMOS 3 (256 x 256 pixels, 40 micrometers side) detector array. The optical path is compact enough to be enclosed in a 25.4 cm diameter dewar; the working temperature is 76 K. The camera is remotely controlled by a 486 PC, connected to the array control electronics via a fiber-optics link. A C-language package, running under MS-DOS on the 486 PC, acquires and stores the frames, and controls the timing of the array. We give an estimate of performance, in terms of sensitivity with an assigned observing time, along with some details on the main parameters of the NICMOS 3 detector.

  1. ARNICA, the NICMOS 3 imaging camera of TIRGO.

    NASA Astrophysics Data System (ADS)

    Lisi, F.; Baffa, C.; Hunt, L.; Stanga, R.

ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near infrared bands between 1.0 and 2.5 μm that Arcetri Observatory has designed and built as a general facility for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1″ per pixel, with sky coverage of more than 4′×4′ on the NICMOS 3 (256×256 pixels, 40 μm side) detector array. The camera is remotely controlled by a PC 486, connected to the array control electronics via a fiber-optics link. A C-language package, running under MS-DOS on the PC 486, acquires and stores the frames, and controls the timing of the array. The camera is intended for imaging of large extra-galactic and Galactic fields; a large effort has been dedicated to explore the possibility of achieving precise photometric measurements in the J, H, K astronomical bands, with very promising results.

  2. Surface topography of 1€ coin measured by stereo-PIXE

    NASA Astrophysics Data System (ADS)

    Gholami-Hatam, E.; Lamehi-Rachti, M.; Vavpetič, P.; Grlj, N.; Pelicon, P.

    2013-07-01

We demonstrate the stereo-PIXE method by measuring the surface topography of the relief details on a 1€ coin. Two X-ray elemental maps were simultaneously recorded by two X-ray detectors positioned at the left and the right side of the proton microbeam. The asymmetry of the yields in the pixels of the two X-ray maps occurs due to different photon attenuation on the exit travel path of the characteristic X-rays from the point of emission through the sample into the X-ray detectors. In order to calibrate the inclination angle with respect to the X-ray asymmetry, a flat inclined surface model was at first applied for the sample, in which the matrix composition and the depth elemental concentration profile are known. After that, the yield asymmetry in each image pixel was transferred into the corresponding local inclination angle using the calculated dependence of the asymmetry on the surface inclination. Finally, the quantitative topography profile was revealed by integrating the local inclination angle over the lateral displacement of the probing beam.
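
    A heavily simplified sketch of the reconstruction chain described above. The asymmetry-to-inclination calibration depends on the attenuation model for the known matrix and is represented here only by a generic callable; the function and parameter names are illustrative.

```python
import numpy as np

def stereo_pixe_topography(Y_left, Y_right, angle_from_asym, pixel_size_um):
    """Recover a relative height map from left/right X-ray yield maps."""
    asym = (Y_left - Y_right) / (Y_left + Y_right)   # per-pixel yield asymmetry
    incl = angle_from_asym(asym)                     # local inclination angle (radians), from calibration
    # Integrate the local slope along the lateral (scan) direction to obtain height.
    return np.cumsum(np.tan(incl) * pixel_size_um, axis=1)
```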

  3. Long Divisions

    NASA Image and Video Library

    2016-08-08

    The shadow of Saturn on the rings, which stretched across all of the rings earlier in Cassini's mission (see PIA08362), now barely makes it past the Cassini division. The changing length of the shadow marks the passing of the seasons on Saturn. As the planet nears its northern-hemisphere solstice in May 2017, the shadow will get even shorter. At solstice, the shadow's edge will be about 28,000 miles (45,000 kilometers) from the planet's surface, barely making it past the middle of the B ring. The moon Mimas is a few pixels wide, near the lower left in this image. This view looks toward the sunlit side of the rings from about 35 degrees above the ring plane. The image was taken in visible light with the Cassini spacecraft wide-angle camera on May 21, 2016. The view was obtained at a distance of approximately 2.0 million miles (3.2 million kilometers) from Saturn. Image scale is 120 miles (190 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20494

  4. Tracking Efficiency And Charge Sharing of 3D Silicon Sensors at Different Angles in a 1.4T Magnetic Field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gjersdal, H.; /Oslo U.; Bolle, E.

    2012-05-07

A 3D silicon sensor fabricated at Stanford with electrodes penetrating throughout the entire silicon wafer and with active edges was tested in a 1.4 T magnetic field with a 180 GeV/c pion beam at the CERN SPS in May 2009. The device under test was bump-bonded to the ATLAS pixel FE-I3 readout electronics chip. Three readout electrodes were used to cover the 400 μm long pixel side, resulting in a p-n inter-electrode distance of ~71 μm. Its behavior was compared with that of a planar sensor of the type presently installed in the ATLAS inner tracker. Time over threshold, charge sharing and tracking efficiency data were collected at zero and 15° angles with and without magnetic field. The latter is the angular configuration expected for the modules of the Insertable B-Layer (IBL) currently under study for the LHC phase 1 upgrade expected in 2014.

  5. Radiological and histopathological evaluation of experimentally-induced periapical lesion in rats

    PubMed Central

    TEIXEIRA, Renata Cordeiro; RUBIRA, Cassia Maria Fischer; ASSIS, Gerson Francisco; LAURIS, José Roberto Pereira; CESTARI, Tania Mary; RUBIRA-BULLEN, Izabel Regina Fischer

    2011-01-01

Objective This study evaluated experimentally-induced periapical bone loss sites using digital radiographic and histopathologic parameters. Material and Methods Twenty-seven Wistar rats were submitted to coronal opening of their mandibular right first molars. They were radiographed at 2, 15 and 30 days after the operative procedure with two digital radiographic storage phosphor plates (Digora®). The images were analyzed by creating a region of interest at the periapical region of each tooth (ImageJ) and registering the corresponding pixel values. After the sacrifice, the specimens were submitted to microscopic analysis in order to confirm the pulpal and periapical status of the tooth. Results There was a statistically significant difference between the control and test sides in all the experimental periods regarding the pixel values (two-way ANOVA; p<0.05). Conclusions The microscopic analysis proved that periapical disease developed during the experimental periods, with an evolution from pulpal necrosis to periapical bone resorption. PMID:21922123

  6. Investigation of kinematics of knuckling shot in soccer

    NASA Astrophysics Data System (ADS)

    Asai, T.; Hong, S.

    2017-02-01

    In this study, we use four high-speed video cameras to investigate the swing characteristics of the kicking leg while delivering the knuckling shot in soccer. We attempt to elucidate the impact process of the kicking foot at the instant of its impact with the ball and the technical mechanisms of the knuckling shot via comparison of its curved motion with that of the straight and curved shots. Two high-speed cameras (Fastcam, Photron Inc., Tokyo, Japan; 1000 fps, 1024 × 1024 pixels) are set up 2 m away from the site of impact with a line of sight perpendicular to the kicking-leg side. In addition, two semi-high-speed cameras (EX-F1, Casio Computer Co., Ltd., Tokyo, Japan; 300 fps; 720 × 480 pixels) are positioned, one at the rear and the other on the kicking-leg side, to capture the kicking motion. We observe that the ankle joint at impact in the knuckling shot flexes in an approximate L-shape in a manner similar to the joint flexing for the curve shot. The hip's external rotation torque in the knuckling shot is greater than those of other shots, which suggests the tendency of the kicker to push the heel forward and impact with the inside of the foot. The angle of attack in the knuckling shot is smaller than that in other shots, and we speculate that this small attack angle is a factor in soccer kicks which generate shots with smaller rotational frequencies of the ball.

  7. Improved Space Object Observation Techniques Using CMOS Detectors

    NASA Astrophysics Data System (ADS)

    Schildknecht, T.; Hinze, A.; Schlatter, P.; Silha, J.; Peltonen, J.; Santti, T.; Flohrer, T.

    2013-08-01

CMOS-sensors, or in general Active Pixel Sensors (APS), are rapidly replacing CCDs in the consumer camera market. Due to significant technological advances during the past years these devices start to compete with CCDs also for demanding scientific imaging applications, in particular in the astronomy community. CMOS detectors offer a series of inherent advantages compared to CCDs, due to the structure of their basic pixel cells, which each contain their own amplifier and readout electronics. The most prominent advantages for space object observations are the extremely fast and flexible readout capabilities, feasibility for electronic shuttering and precise epoch registration, and the potential to perform image processing operations on-chip and in real-time. Presently applied and proposed optical observation strategies for space debris surveys and space surveillance applications were analyzed. The major design drivers were identified and potential benefits from using available and future CMOS sensors were assessed. The major challenges and design drivers for ground-based and space-based optical observation strategies have been analyzed. CMOS detector characteristics were critically evaluated and compared with the established CCD technology, especially with respect to the above mentioned observations. Similarly, the desirable on-chip processing functionalities which would further enhance the object detection and image segmentation were identified. Finally, the characteristics of a particular CMOS sensor available at the Zimmerwald observatory were analyzed by performing laboratory test measurements.

  8. Differential standard deviation of log-scale intensity based optical coherence tomography angiography.

    PubMed

    Shi, Weisong; Gao, Wanrong; Chen, Chaoliang; Yang, Victor X D

    2017-12-01

In this paper, a differential standard deviation of log-scale intensity (DSDLI) based optical coherence tomography angiography (OCTA) is presented for calculating microvascular images of human skin. The DSDLI algorithm calculates the variance in difference images of two consecutive log-scale intensity based structural images from the same position along the depth direction to contrast blood flow. The en face microvascular images were then generated by calculating the standard deviation of the differential log-scale intensities within a specific depth range, resulting in an improvement in spatial resolution and SNR in microvascular images compared to speckle variance OCT and the power intensity differential method. The performance of DSDLI was verified by both phantom and in vivo experiments. In the in vivo experiments, a self-adaptive sub-pixel image registration algorithm was performed to remove the bulk motion noise, where a 2D Fourier transform was utilized to generate new images with a spatial interval equal to half of the distance between two pixels in both the fast-scanning and depth directions. The SNRs of signals of flowing particles are improved by 7.3 dB and 6.8 dB on average in the phantom and in vivo experiments, respectively, while the average spatial resolution of images of in vivo blood vessels is increased by 21%. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
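
    A minimal sketch of the DSDLI contrast as described, assuming two consecutive log-scale structural frames acquired at the same position; the exact normalisation and axis conventions of the paper may differ.

```python
def dsdli_enface(logI_a, logI_b, z0, z1):
    """logI_a, logI_b: consecutive log-scale B-scans of shape (depth, width)."""
    d = logI_a - logI_b                  # differential log-scale intensity image
    # Standard deviation of the differential intensities over the chosen depth
    # range gives one en face line for this lateral position.
    return d[z0:z1, :].std(axis=0)
```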

  9. Evaluation of LANDSAT multispectral scanner images for mapping altered rocks in the east Tintic Mountains, Utah

    NASA Technical Reports Server (NTRS)

    Rowan, L. C.; Abrams, M. J. (Principal Investigator)

    1979-01-01

The author has identified the following significant results. Positive findings of earlier evaluations of the color-ratio compositing technique for mapping limonitic altered rocks in south-central Nevada are confirmed, but important limitations in the approach used are pointed out. These limitations arise from environmental, geologic, and image processing factors. The greater vegetation density in the East Tintic Mountains required several modifications in procedures to improve the overall mapping accuracy of the CRC approach. Large format ratio images provide better internal registration of the diazo films and avoid the problems associated with magnifications required in the original procedure. Use of the Linoscan 204 color recognition scanner permits accurate, consistent extraction of the green pixels representing limonitic bedrock, yielding maps that can be used for mapping at large scales as well as for small-scale reconnaissance.

  10. Coherent multiscale image processing using dual-tree quaternion wavelets.

    PubMed

    Chan, Wai Lam; Choi, Hyeokho; Baraniuk, Richard G

    2008-07-01

    The dual-tree quaternion wavelet transform (QWT) is a new multiscale analysis tool for geometric image features. The QWT is a near shift-invariant tight frame representation whose coefficients sport a magnitude and three phases: two phases encode local image shifts while the third contains image texture information. The QWT is based on an alternative theory for the 2-D Hilbert transform and can be computed using a dual-tree filter bank with linear computational complexity. To demonstrate the properties of the QWT's coherent magnitude/phase representation, we develop an efficient and accurate procedure for estimating the local geometrical structure of an image. We also develop a new multiscale algorithm for estimating the disparity between a pair of images that is promising for image registration and flow estimation applications. The algorithm features multiscale phase unwrapping, linear complexity, and sub-pixel estimation accuracy.

  11. Image stack alignment in full-field X-ray absorption spectroscopy using SIFT_PyOCL.

    PubMed

    Paleo, Pierre; Pouyet, Emeline; Kieffer, Jérôme

    2014-03-01

Full-field X-ray absorption spectroscopy experiments allow the acquisition of millions of spectra within minutes. However, the construction of the hyperspectral image requires an image alignment procedure with sub-pixel precision. While the image correlation algorithm was originally used for image re-alignment using translations, the Scale Invariant Feature Transform (SIFT) algorithm (which is by design robust to rotation, illumination change, translation and scaling) presents an additional advantage: the alignment can be limited to a region of interest of any arbitrary shape. In this context, a Python module, named SIFT_PyOCL, has been developed. It implements a parallel version of the SIFT algorithm in OpenCL, providing high-speed image registration and alignment both on processors and graphics cards. The performance of the algorithm allows online processing of large datasets.
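
    For orientation, the sketch below shows the same feature-based alignment idea using OpenCV's SIFT rather than the SIFT_PyOCL API (whose OpenCL interface is not reproduced here); the ratio-test threshold and the affine model are illustrative choices.

```python
import cv2
import numpy as np

def align_to_reference(ref, frame):
    """Warp `frame` onto `ref` using SIFT matches and a similarity transform."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref, None)
    kp2, des2 = sift.detectAndCompute(frame, None)
    matches = cv2.BFMatcher().knnMatch(des2, des1, k=2)
    good = []
    for pair in matches:
        # Lowe ratio test to keep only distinctive matches.
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])
    src = np.float32([kp2[m.queryIdx].pt for m in good])
    dst = np.float32([kp1[m.trainIdx].pt for m in good])
    M, _ = cv2.estimateAffinePartial2D(src, dst)   # translation + rotation + scale
    h, w = ref.shape[:2]
    return cv2.warpAffine(frame, M, (w, h))
```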

  12. Using MASHA+TIMEPIX Setup for Registration Beta Decay Isotopes Produced in Heavy Ion Induced Reactions

    NASA Astrophysics Data System (ADS)

    Rodin, A. M.; Belozerov, A. V.; Chernysheva, E. V.; Dmitriev, S. N.; Gulyaev, A. V.; Gulyaeva, A. V.; Itkis, M. G.; Novoselov, A. S.; Oganessian, Yu. Ts.; Salamatin, V. S.; Stepantsov, S. V.; Vedeneev, V. Yu.; Yukhimchuk, S. A.; Krupa, L.; Granja, C.; Pospisil, S.; Kliman, J.; Motycak, S.; Sivacek, I.

    2015-06-01

Radon and mercury isotopes were produced in multi-nucleon transfer (48Ca + 232Th) and complete fusion (48Ca + naturalNd) reactions, respectively. The isotopes with given masses were detected using two detectors: a multi-strip detector of the well type (made by CANBERRA) and a position-sensitive quantum counting hybrid pixel detector of the TIMEPIX type. The isotopes implanted into the detectors then emit alpha- and beta-particles until reaching long-lived isotopes. The positions of the isotopes, the tracks, and the time and energy of the beta-particles were measured and analyzed. New software for particle recognition and data analysis of the experimental results was developed and used. It was shown that the MASHA+TIMEPIX setup is a powerful instrument for the investigation of neutron-rich isotopes far from the stability limits.

  13. The algorithm of fast image stitching based on multi-feature extraction

    NASA Astrophysics Data System (ADS)

    Yang, Chunde; Wu, Ge; Shi, Jing

    2018-05-01

This paper proposes an improved image registration method combining Hu-invariant-moment-based contour information and feature point detection, aiming to solve problems of traditional image stitching algorithms such as the time-consuming feature point extraction process, redundant invalid information, and inefficiency. First, the neighborhood of pixels is used to extract contour information, with the Hu invariant moments employed as a similarity measure so that SIFT feature points are extracted only in similar regions. Then the Euclidean distance is replaced with the Hellinger kernel function to improve the initial matching efficiency and obtain fewer mismatched points, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and the improved multiresolution fusion algorithm is used to fuse the mosaic images and realize seamless stitching. Experimental results confirm the high accuracy and efficiency of the method proposed in this paper.
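
    A minimal sketch of the Hu-moment similarity gate described above, using OpenCV's matchShapes (which compares contours via their Hu moments) to pre-select region pairs before the more expensive SIFT matching; the threshold value is illustrative, not the paper's.

```python
import cv2

def similar_contour_pairs(contours_a, contours_b, thresh=0.1):
    """Return index pairs of contours whose Hu-moment shape distance is below `thresh`."""
    pairs = []
    for i, ca in enumerate(contours_a):
        for j, cb in enumerate(contours_b):
            d = cv2.matchShapes(ca, cb, cv2.CONTOURS_MATCH_I1, 0.0)  # Hu-moment distance
            if d < thresh:
                pairs.append((i, j, d))
    return pairs
```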

  14. Removing ring artefacts from synchrotron radiation-based hard x-ray tomography data

    NASA Astrophysics Data System (ADS)

    Thalmann, Peter; Bikis, Christos; Schulz, Georg; Paleo, Pierre; Mirone, Alessandro; Rack, Alexander; Siegrist, Stefan; Cörek, Emre; Huwyler, Jörg; Müller, Bert

    2017-09-01

In hard X-ray microtomography, ring artefacts regularly originate from incorrectly functioning pixel elements on the detector or from particles and scratches on the scintillator. We show that, due to the high sensitivity of contemporary beamline setups, further causes that induce inhomogeneities in the impinging wavefronts have to be considered. In this study we propose a method to correct the resulting failure of simple flat-field approaches. The main steps of the pipeline are (i) registration of the reference images with the radiographs (projections), (ii) integration of the flat-field corrected projections over the acquisition angle, (iii) high-pass filtering of the integrated projection, and (iv) subtraction of the filtered data from the flat-field corrected projections. The performance of the protocol is tested on data sets acquired at beamline ID19 at the ESRF using single-distance phase tomography.
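
    A minimal sketch of steps (ii)-(iv) of the pipeline, assuming the projections are already flat-field corrected and the reference-image registration of step (i) has been done; the Gaussian width used for the high-pass step is illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def remove_ring_bias(proj, sigma=15.0):
    """proj: flat-field corrected projections, shape (n_angles, rows, cols)."""
    mean_proj = proj.mean(axis=0)                 # (ii) integrate over the acquisition angle
    low_pass = gaussian_filter(mean_proj, sigma)
    residual = mean_proj - low_pass               # (iii) high-pass of the integrated projection
    return proj - residual[None, :, :]            # (iv) subtract from every projection
```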

  15. Calibration of fluorescence resonance energy transfer in microscopy

    DOEpatents

Youvan, Douglas C.; Silva, Christopher M.; Bylina, Edward J.; Coleman, William J.; Dilworth, Michael R.; Yang, Mary M.

    2003-12-09

    Imaging hardware, software, calibrants, and methods are provided to visualize and quantitate the amount of Fluorescence Resonance Energy Transfer (FRET) occurring between donor and acceptor molecules in epifluorescence microscopy. The MicroFRET system compensates for overlap among donor, acceptor, and FRET spectra using well characterized fluorescent beads as standards in conjunction with radiometrically calibrated image processing techniques. The MicroFRET system also provides precisely machined epifluorescence cubes to maintain proper image registration as the sample is illuminated at the donor and acceptor excitation wavelengths. Algorithms are described that pseudocolor the image to display pixels exhibiting radiometrically-corrected fluorescence emission from the donor (blue), the acceptor (green) and FRET (red). The method is demonstrated on samples exhibiting FRET between genetically engineered derivatives of the Green Fluorescent Protein (GFP) bound to the surface of Ni chelating beads by histidine-tags.

  16. Calibration of fluorescence resonance energy transfer in microscopy

    DOEpatents

    Youvan, Douglas C.; Silva, Christopher M.; Bylina, Edward J.; Coleman, William J.; Dilworth, Michael R.; Yang, Mary M.

    2002-09-24

    Imaging hardware, software, calibrants, and methods are provided to visualize and quantitate the amount of Fluorescence Resonance Energy Transfer (FRET) occurring between donor and acceptor molecules in epifluorescence microscopy. The MicroFRET system compensates for overlap among donor, acceptor, and FRET spectra using well characterized fluorescent beads as standards in conjunction with radiometrically calibrated image processing techniques. The MicroFRET system also provides precisely machined epifluorescence cubes to maintain proper image registration as the sample is illuminated at the donor and acceptor excitation wavelengths. Algorithms are described that pseudocolor the image to display pixels exhibiting radiometrically-corrected fluorescence emission from the donor (blue), the acceptor (green) and FRET (red). The method is demonstrated on samples exhibiting FRET between genetically engineered derivatives of the Green Fluorescent Protein (GFP) bound to the surface of Ni chelating beads by histidine-tags.

  17. Singular Stokes-polarimetry as new technique for metrology and inspection of polarized speckle fields

    NASA Astrophysics Data System (ADS)

    Soskin, Marat S.; Denisenko, Vladimir G.; Egorov, Roman I.

    2004-08-01

Polarimetry is an effective technique for the characterization of polarized light fields. It was shown recently that the most complete "finger-print" of light fields of arbitrary complexity is the network of polarization singularities: C points with circular polarization and L lines with variable azimuth. The new singular Stokes-polarimetry (SSP) was elaborated for such measurements. It allows the azimuth, eccentricity and handedness of the elliptical vibrations to be determined in each pixel of the receiving CCD camera, over a range of megapixels. It is based on precise measurement of the full set of Stokes parameters with the help of high-quality analyzers and quarter-wave plates with λ/500 precision and 4" adjustment. The matrices of obtained data are processed on a PC by special programs to find the positions of polarization singularities and other needed topological features. The developed SSP technique was proved successfully by measurements of the topology of polarized speckle fields produced by multimode "photonic-crystal" fibers, double-side-rubbed polymer films, and biomedical samples. Each singularity is localized with a precision of +/- 1 pixel, compared with the ~500-pixel dimensions of a typical speckle. It was confirmed that the network of topological features appearing in a polarized light field after its interaction with the specimen under inspection is an exact individual "passport" for its characterization. Therefore, SSP can be used for smart materials characterization. The presented data show that the SSP technique is promising for local analysis of the properties and defects of thin films, liquid crystal cells, optical elements, biological samples, etc. It is able to discover heterogeneities and defects which essentially define the merits of the specimens under inspection and cannot be checked by usual polarimetry methods. The detected extra-high sensitivity of the polarization singularity positions and network to any change of sample position or deformation opens quite new possibilities for sensing deformations and displacements of the inspected elements in the sub-micron range.
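
    For reference, the per-pixel ellipse parameters recoverable from a full Stokes measurement follow the standard relations sketched below (the ellipticity expression is valid for fully polarized light); this is textbook material, not the authors' processing code.

```python
import numpy as np

def ellipse_params(S0, S1, S2, S3):
    """Per-pixel polarization ellipse parameters from Stokes images S0..S3."""
    azimuth = 0.5 * np.arctan2(S2, S1)                          # orientation of the ellipse
    ellipticity = 0.5 * np.arcsin(S3 / np.maximum(S0, 1e-12))   # sign gives handedness
    dop = np.sqrt(S1**2 + S2**2 + S3**2) / np.maximum(S0, 1e-12)  # degree of polarization
    return azimuth, ellipticity, dop
```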

  18. A longitudinal study of neuromelanin-sensitive magnetic resonance imaging in Parkinson's disease.

    PubMed

    Matsuura, Keita; Maeda, Masayuki; Tabei, Ken-Ichi; Umino, Maki; Kajikawa, Hiroyuki; Satoh, Masayuki; Kida, Hirotaka; Tomimoto, Hidekazu

    2016-10-28

    Neuromelanin-sensitive MR imaging (NMI) is an increasingly powerful tool for the diagnosis of Parkinson's disease (PD). This study was undertaken to evaluate longitudinal changes on NMI in PD patients. We examined longitudinal changes on NMI in 14 PD patients. The area and contrast ratio (CR) of the substantia nigra pars compacta (SNc) were comparatively analyzed. The total area and CR of the SNc upon follow-up NMI were significantly smaller than those on initial NMI (from 33.5±18.9 pixels and 6.35±2.86% to 21.5±16.7 pixels and 4.19±2.11%; Wilcoxon signed-rank test, p<0.001 and p=0.022, respectively). The area and CR of the dominant side SNc upon initial NMI were significantly greater than those on follow-up NMI (from 15.3±9.1 pixels and 6.5±2.7% to 7.9±8.5 pixels and 3.7±2.9%; Wilcoxon signed-rank test, p=0.002 and p=0.007, respectively). On a case-by-case basis, the area of the SNc invariably decreased upon follow-up NMI in all patients. We further demonstrated that the total area and CR of the SNc negatively correlated with disease duration (Pearson correlation coefficient, r=-0.63, p<0.001 and r=-0.41, p=0.031, respectively). In area analyses, our results demonstrated very high intraclass correlation coefficients for both intra- and inter-rater reliability. NMI is a useful and reliable tool for detecting neuropathological changes over time in PD patients. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. SU-E-T-291: Dosimetric Accuracy of Multitarget Single Isocenter Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tannazi, F; Huang, M; Thomas, E

    2015-06-15

Purpose: To evaluate the accuracy of single-isocenter multiple-target VMAT radiosurgery (SIMT-VMAT-SRS) by analysis of pre-treatment verification measurements. Methods: Our QA procedure used a phantom having a coronal plane for EDR2 film and a 0.125 cm3 ionization chamber. Film measurements were obtained for the largest and smallest targets for each plan. An ionization chamber measurement (ICM) was obtained for sufficiently large targets. Films were converted to dose using a patient-specific calibration curve and compared to treatment planning system calculations. Alignment error was estimated using image registration. The gamma index was calculated for 3%/3 mm and 3%/1 mm criteria. The median dose in the target region and, for plans having an ICM, the average dose in the central 5 mm was calculated. Results: The average equivalent target diameter of the 48 targets was 15 mm (3–43 mm). Twenty of the 24 plans had an ICM for the plan corresponding to the largest target (diameter 11–43 mm); the mean ratio of chamber reading to expected dose (ED) and the mean ratio of film to ED (averaged over the central 5 mm) were 1.001 (0.025 SD) and 1.000 (0.029 SD), respectively. For all plans, the mean film to ED ratio (from the median dose in the target region) was 0.997 (0.027 SD). The mean registration vector was (0.15, 0.29) mm, with an average magnitude of 0.96 mm. Before (after) registration, the average fraction of pixels having gamma < 1 was 99.3% (99.6%) and 89.1% (97.6%) for 3%/3 mm and 3%/1 mm, respectively. Conclusion: Our results demonstrate the dosimetric accuracy of SIMT-VMAT-SRS for targets as small as 3 mm. Film dosimetry provides accurate assessment of the absolute dose delivered to targets too small for an ionization chamber measurement; however, the relatively large registration vector indicates that image guidance should replace laser-based setup for patient-specific evaluation of geometric accuracy.
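
    A brute-force sketch of the 2-D gamma index used in such film comparisons, with the dose-difference criterion given as a fraction of the maximum dose and the distance-to-agreement expressed in pixels; real QA software uses faster, interpolated implementations.

```python
import numpy as np

def gamma_pass_rate(dose_eval, dose_ref, dd=0.03, dta_px=3, search=5):
    """Fraction of reference pixels with gamma <= 1 for the given criteria."""
    H, W = dose_ref.shape
    norm = dose_ref.max()
    passed, total = 0, 0
    for r in range(H):
        for c in range(W):
            if dose_ref[r, c] < 0.1 * norm:       # skip the low-dose region
                continue
            total += 1
            best = np.inf
            for dr in range(-search, search + 1):
                for dc in range(-search, search + 1):
                    rr, cc = r + dr, c + dc
                    if not (0 <= rr < H and 0 <= cc < W):
                        continue
                    dose_term = (dose_eval[rr, cc] - dose_ref[r, c]) / (dd * norm)
                    dist_term = np.hypot(dr, dc) / dta_px
                    best = min(best, dose_term**2 + dist_term**2)
            passed += best <= 1.0                  # gamma <= 1 iff the squared sum <= 1
    return passed / max(total, 1)
```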

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, M; Brezovich, I; Duan, J

    Purpose: To demonstrate a patient-specific, image-guided quality assurance method that tests both dosimetric and geometric accuracy for single-isocenter multiple-target VMAT radiosurgery (SIMT-VMAT-SRS). Method: We used a new film type, EBT-XD (optimal range 0.4–40 Gy), and an in-house PMMA phantom having a coronal plane for film and a 0.125 cm³ ionization chamber (IC). The phantom contained fiducial features for kV image-guided setup and for accurate film marking. Five patient plans with multiple targets ranging from 3 to 21 mm in diameter and prescribed doses from 14 to 18 Gy were selected. Two verification plans were generated for each case, with the film plane passing through the center of the largest and smallest targets. For the four largest targets we obtained an IC measurement. For each case, a calibration film was irradiated using a custom-designed step pattern. The films were scanned using a flatbed color scanner and converted to dose using the calibration film and the three-channel calibration method. Image registration was performed between film and treatment planning system calculations to evaluate the geometric accuracy. Results: The mean registration vector had an average magnitude of 0.47 mm (range 0.13 mm to 0.64 mm). For the four largest targets, the mean ratios of the IC and film measurements to expected dose were 0.990 (range 0.968 to 1.009) and 1.032 (1.021 to 1.046), respectively. The fraction of pixels having gamma index < 1 for criteria of 3%/3 mm, 3%/2 mm, and 3%/1 mm was 98.8%, 97.5%, and 87.2% before geometric registration and 99.1%, 98.3%, and 94.8% after registration. Conclusion: We have demonstrated that an image-guided QA method can assess both geometric and dosimetric accuracy. The phantom was positioned with sub-millimeter accuracy. Absolute film dosimetry using EBT-XD film was sufficiently accurate for assessment of dose to multiple targets too small for IC measurement in SRS VMAT plans.

  1. Demonstration of the CDMA-mode CAOS smart camera.

    PubMed

    Riza, Nabeel A; Mazhar, Mohsin A

    2017-12-11

    Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled, factor-of-200 optical attenuation of the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, the image data provided by this CMOS sensor are used to acquire a more robust, un-attenuated true target image of the focused zone using the time-modulated CDMA mode of the CAOS camera. Using four different bright-light test target scenes, a proof-of-concept visible-band CAOS smart camera is successfully demonstrated operating in the CDMA mode with Walsh-design CAOS pixel codes of up to 4096 bits in length at a maximum 10 kHz code bit rate, giving a 0.4096 s CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one micro-mirror square pixel of 13.68 μm per side. The CDMA mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, un-spoiled, bright-light, spectrally diverse targets.
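
    A minimal sketch of the CDMA-style encode/decode idea described above: several pixel irradiances are time-encoded with rows of a Walsh-Hadamard matrix, summed on a single point detector, and recovered by time-domain correlation. Values are illustrative, and bipolar ±1 codes are used here for simplicity (a real DMD implements unipolar on/off codes); this is not the camera's actual DSP chain.

```python
import numpy as np
from scipy.linalg import hadamard

# Toy setup: 4 "CAOS pixels" with unknown irradiances, each assigned a Walsh code
n_pixels, code_len = 4, 8
H = hadamard(code_len)                  # entries are +1/-1
codes = H[1:n_pixels + 1]               # skip the all-ones (DC) row

irradiance = np.array([3.0, 0.5, 7.2, 1.1])   # hypothetical pixel values
detector = codes.T @ irradiance               # time series seen by the point detector
detector += np.random.default_rng(0).normal(0, 0.05, code_len)  # detector noise

# Time-domain correlation DSP: correlate the detector signal with each pixel code
recovered = codes @ detector / code_len
print(np.round(recovered, 2))           # close to the original irradiances
```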

  2. A pixelated x-ray detector for diffraction imaging at next-generation high-rate FEL sources

    NASA Astrophysics Data System (ADS)

    Lodola, L.; Ratti, L.; Comotti, D.; Fabris, L.; Grassi, M.; Malcovati, P.; Manghisoni, M.; Re, V.; Traversi, G.; Vacchi, C.; Batignani, G.; Bettarini, S.; Forti, F.; Casarosa, G.; Morsani, F.; Paladino, A.; Paoloni, E.; Rizzo, G.; Benkechkache, M. A.; Dalla Betta, G.-F.; Mendicino, R.; Pancheri, L.; Verzellesi, G.; Xu, H.

    2017-08-01

    The PixFEL collaboration has developed the building blocks for an X-ray imager to be used in applications at FELs. In particular, slim edge pixel detectors with high detection efficiency over a broad energy range, from 1 to 12 keV, have been developed. Moreover, a multichannel readout chip, called PFM2 (PixFEL front-end Matrix 2) and consisting of 32 × 32 cells, has been designed and fabricated in a 65 nm CMOS technology. The pixel pitch is 110 μm, and the overall area is around 16 mm². In the chip, different solutions have been implemented for the readout channel, which includes a charge sensitive amplifier (CSA) with dynamic signal compression, a time-variant shaper and an A-to-D converter with 10-bit resolution. The CSA can be configured in four different gain modes, so as to comply with photon energies in the 1 to 10 keV range. The paper will describe in detail the channel architecture and present the results from the characterization of PFM2. It will discuss the design of a new version of the chip, called PFM3, suitable for post-processing with peripheral, under-pad through silicon vias (TSVs), which are needed to develop four-side buttable chips and cover large surfaces with minimum inactive area.

  3. Data registration without explicit correspondence for adjustment of camera orientation parameter estimation

    NASA Astrophysics Data System (ADS)

    Barsai, Gabor

    Creating accurate, current digital maps and 3-D scenes is a high priority in today's fast-changing environment. The nation's maps are in a constant state of revision, with many alterations or new additions each day. Digital maps have become quite common; Google Maps, MapQuest and others are examples, and these also have 3-D viewing capability. Many details are now included, such as the height of low bridges, in the attribute data for the objects displayed on digital maps and scenes. To expedite the updating of these datasets, they should be created autonomously, without human intervention, from data streams. Though systems exist that attain fast, or even real-time, mapping and reconstruction performance, they are typically restricted to creating sketches from the data stream, not accurate maps or scenes. The ever-increasing amount of image data available from private companies, governments and the internet suggests that the development of an automated system is of utmost importance. The proposed framework can create 3-D views autonomously, which extends the functionality of digital mapping. The first step in creating 3-D views is to reconstruct the scene of the area to be mapped. To reconstruct a scene from heterogeneous sources, the data have to be registered, either to each other or, preferably, to a general, absolute coordinate system. Registering an image is based on the reconstruction of the geometric relationship of the image to the coordinate system at the time of imaging. Registration is the process of determining the geometric transformation parameters of a dataset in one coordinate system, the source, with respect to the other coordinate system, the target. The advantage of fusing these datasets by registration manifests itself in the complementary information that different modality datasets contain. The complementary characteristics of these systems can be fully utilized only after successful registration of the photogrammetric and alternative data relative to a common reference frame. This research provides a novel approach to finding registration parameters without the explicit use of conjugate points, instead using conjugate features. These features are open or closed free-form linear features, and there is no need for a parametric or any other type of representation of them. The proposed method uses different modality datasets of the same area: lidar data, image data and GIS data. There are two datasets: one from the Ohio State University and the other from San Bernardino, California. The reconstruction of scenes from imagery and range data, using laser and radar data, has been an active research area in the fields of photogrammetry and computer vision. Automation, or even just reduced human intervention, would have a great impact on alleviating the "bottleneck" that characterizes the current state of creating knowledge from data. Pixels or laser points, the output of the sensor, represent a discretization of the real world. By themselves, these data points do not contain representative information. The values that are associated with them, intensity values and coordinates, do not define an object, and thus accurate maps are not possible from data alone. Data are not an end product, nor do they directly provide answers to applications, although implicitly the information about the object in question is contained in the data.
In some form, the data from the initial acquisition by the sensor have to be further processed to create usable information, and this information has to be combined with facts, procedures and heuristics that can be used to make inferences for reconstruction. Reconstructing a scene perfectly, whether it is an urban or rural scene, requires prior knowledge and heuristics. Buildings usually have smooth surfaces, and many buildings are blocky with orthogonal, straight edges and sides; streets are smooth; vegetation is rough, with trees and bushes of different shapes and sizes. This research provides a path to fusing data from lidar, GIS and digital multispectral images and reconstructing a precise 3-D scene model, without human intervention, regardless of the type of data or the features in the data. The data are initially registered to each other using GPS/INS initial positional values; then conjugate features are found in the datasets to refine the registration. The novelty of the research is that no conjugate points are necessary in the various datasets, and registration is performed without human intervention. The proposed system uses the original lidar and GIS data and finds edges of buildings with the help of the digital images, utilizing the exterior orientation parameters to project the lidar points onto the edge-extracted image/map. These edge points are then used to orient and locate the datasets in correct positions with respect to each other.
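
    Projecting lidar points onto an image using exterior orientation parameters, as described above, is the standard photogrammetric collinearity mapping. The sketch below is a minimal illustration of that mapping, assuming an omega-phi-kappa rotation convention and purely illustrative orientation values; it is not the dissertation's implementation, and sign conventions vary between software packages.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation from object space to camera space (omega-phi-kappa convention)."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Rz = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project_points(points_xyz, cam_xyz, omega, phi, kappa, focal_mm):
    """Collinearity equations: object-space points -> image-plane coordinates (mm)."""
    R = rotation_matrix(omega, phi, kappa)
    d = (points_xyz - cam_xyz) @ R.T          # point coordinates in the camera frame
    x = -focal_mm * d[:, 0] / d[:, 2]
    y = -focal_mm * d[:, 1] / d[:, 2]
    return np.column_stack([x, y])

# Hypothetical lidar edge points (m) and an illustrative exterior orientation
lidar_pts = np.array([[500.0, 620.0, 95.0], [510.0, 625.0, 98.0]])
img_xy = project_points(lidar_pts, cam_xyz=np.array([505.0, 600.0, 1500.0]),
                        omega=0.01, phi=-0.02, kappa=0.5, focal_mm=153.0)
print(img_xy)
```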

  4. Spatial patterns of aquatic habitat richness in the Upper Mississippi River floodplain, USA

    USGS Publications Warehouse

    De Jager, Nathan R.; Rohweder, Jason J.

    2012-01-01

    Interactions among hydrology and geomorphology create shifting mosaics of aquatic habitat patches in large river floodplains (e.g., main and side channels, floodplain lakes, and shallow backwater areas) and the connectivity among these habitat patches underpins high levels of biotic diversity and productivity. However, the diversity and connectivity among the habitats of most floodplain rivers have been negatively impacted by hydrologic and structural modifications that support commercial navigation and control flooding. We therefore tested the hypothesis that the rate of increase in patch richness (# of types) with increasing scale reflects anthropogenic modifications to habitat diversity and connectivity in a large floodplain river, the Upper Mississippi River (UMR). To do this, we calculated the number of aquatic habitat patch types within neighborhoods surrounding each of the ≈19 million 5-m aquatic pixels of the UMR for multiple neighborhood sizes (1–100 ha). For all of the 87 river-reach focal areas we examined, changes in habitat richness (R) with increasing neighborhood length (L, # pixels) were characterized by a fractal-like power function R = L^z (R² > 0.92). The exponent (z) measures the rate of increase in habitat richness with neighborhood size and is related to a fractal dimension. Variation in z reflected fundamental changes to spatial patterns of aquatic habitat richness in this river system. With only a few exceptions, z exceeded the river-wide average of 0.18 in focal areas where side channels, contiguous floodplain lakes, and contiguous shallow-water areas exceeded 5%, 5%, and 10% of the floodplain, respectively. In contrast, z was always less than 0.18 for focal areas where impounded water exceeded 40% of floodplain area. Our results suggest that rehabilitation efforts that target areas with <5% of the floodplain in side channels, <5% in floodplain lakes, and/or <10% in shallow-water areas could improve habitat diversity across multiple scales in the UMR.
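
    The power-law scaling R = L^z can be estimated from richness counts by a linear fit in log-log space. The snippet below is a minimal sketch with made-up richness values (a constant term is included in the fit); the study's GIS computation over ~19 million pixels is not reproduced.

```python
import numpy as np

# Hypothetical richness counts R measured at increasing neighborhood lengths L (pixels)
L = np.array([20, 45, 100, 200, 450, 1000, 2000])
R = np.array([1.6, 1.9, 2.3, 2.6, 3.0, 3.5, 3.9])

# R = L**z is linear in log-log space: log R = z * log L + c
z, c = np.polyfit(np.log(L), np.log(R), 1)
pred = np.exp(c) * L**z
ss_res = np.sum((np.log(R) - np.log(pred))**2)
ss_tot = np.sum((np.log(R) - np.log(R).mean())**2)
print(f"z = {z:.3f}, R^2 = {1 - ss_res / ss_tot:.3f}")
```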

  5. Moving object detection in top-view aerial videos improved by image stacking

    NASA Astrophysics Data System (ADS)

    Teutsch, Michael; Krüger, Wolfgang; Beyerer, Jürgen

    2017-08-01

    Image stacking is a well-known method that is used to improve the quality of images in video data. A set of consecutive images is aligned by applying image registration and warping. In the resulting image stack, each pixel has redundant information about its intensity value. This redundant information can be used to suppress image noise, resharpen blurry images, or even enhance the spatial image resolution as done in super-resolution. Small moving objects in the videos usually get blurred or distorted by image stacking and thus need to be handled explicitly. We use image stacking in an innovative way: image registration is applied to small moving objects only, and image warping blurs the stationary background that surrounds the moving objects. Our video data are coming from a small fixed-wing unmanned aerial vehicle (UAV) that acquires top-view gray-value images of urban scenes. Moving objects are mainly cars but also other vehicles such as motorcycles. The resulting images, after applying our proposed image stacking approach, are used to improve baseline algorithms for vehicle detection and segmentation. We improve precision and recall by up to 0.011, which corresponds to a reduction of the number of false positive and false negative detections by more than 3 per second. Furthermore, we show how our proposed image stacking approach can be implemented efficiently.
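
    The sketch below illustrates classical image stacking (align frames by known shifts, then take a pixel-wise median) on synthetic frames; the shifts would normally come from image registration, and the paper's object-centric variant, which stacks around moving objects rather than the background, is not reproduced.

```python
import numpy as np

def stack_frames(frames, shifts):
    """Align integer-shifted frames to the first frame and take a pixel-wise median.

    frames : list of 2-D arrays (consecutive video frames)
    shifts : list of (dy, dx) integer offsets of each frame relative to frame 0,
             assumed known here (in practice estimated by image registration)
    """
    aligned = [np.roll(f, (-dy, -dx), axis=(0, 1)) for f, (dy, dx) in zip(frames, shifts)]
    return np.median(np.stack(aligned), axis=0)   # median suppresses per-frame noise

# Toy example: a smooth background observed with small known shifts and noise
rng = np.random.default_rng(1)
base = np.add.outer(np.arange(64.0), np.arange(64.0))
shifts = [(0, 0), (1, 0), (0, 2), (2, 1)]
frames = [np.roll(base, s, axis=(0, 1)) + rng.normal(0, 5, base.shape) for s in shifts]
stacked = stack_frames(frames, shifts)
print("residual noise:", np.std(frames[0] - base).round(2), "->", np.std(stacked - base).round(2))
```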

  6. Validation of early GOES-16 ABI on-orbit geometrical calibration accuracy using SNO method

    NASA Astrophysics Data System (ADS)

    Yu, Fangfang; Shao, Xi; Wu, Xiangqian; Kondratovich, Vladimir; Li, Zhengping

    2017-09-01

    The Advanced Baseline Imager (ABI) onboard the GOES-16 satellite, which was launched on 19 November 2016, is the first next-generation geostationary weather instrument in the western hemisphere. It has 16 spectral solar reflective and emissive bands located in three focal plane modules (FPM): one visible and near infrared (VNIR) FPM, one midwave infrared (MWIR) FPM, and one longwave infrared (LWIR) FPM. All the ABI bands are geometrically calibrated with new techniques based on Kalman filtering and the Global Positioning System (GPS), which determine an accurate spacecraft attitude and orbit configuration to meet the challenging image navigation and registration (INR) requirements of ABI data. This study validates the ABI navigation and band-to-band registration (BBR) accuracies using spectrally matched pixels of Suomi National Polar-orbiting Partnership (SNPP) Visible Infrared Imaging Radiometer Suite (VIIRS) M-band data and ABI images from Simultaneous Nadir Observation (SNO) scenes. The preliminary results showed that, during the ABI post-launch product test (PLPT) period, the ABI BBR errors in the y-direction (along the VIIRS track direction) were smaller than in the x-direction (along the VIIRS scan direction). Variations in the ABI BBR calibration residuals and in the navigation difference relative to VIIRS can be observed. Note that ABI is not yet operational and the data are experimental and still under testing. Effort is ongoing to improve the ABI data quality.

  7. Investigation of Thermal Expansion of a Glass Ceramic Material with an Extra-Low Thermal Linear Expansion Coefficient

    NASA Astrophysics Data System (ADS)

    Kompan, T. A.; Korenev, A. S.; Lukin, A. Ya.

    2008-10-01

    The artificial material sitall CO-115M was developed purposely as a material with extra-low thermal expansion. The controlled crystallization of an aluminosilicate glass melt leads to the formation of a mixture of β-spodumene, β-eucryptite, and β-silica anisotropic microcrystals in a matrix of residual glass. Due to the small size of the microcrystals, the material is homogeneous and transparent. The specific lattice anharmonicity of these microcrystalline materials results in a close-to-zero average thermal linear expansion coefficient (TLEC) for the sitall material. The thermal expansion coefficient of this material was measured using an interferometric method in line with the classical approach of Fizeau. To obtain the highest accuracy, registration of the light intensity of the total interference field was used, and the parameters of the interference pattern were then calculated. Due to the large amount of information in the interference pattern, the error of the calculated fringe position was less than the size of a pixel of the optical registration system. The thermal expansion coefficient of sitall CO-115M and its temperature dependence were measured. A TLEC value of about 3 × 10⁻⁸ K⁻¹ to 5 × 10⁻⁸ K⁻¹ in the temperature interval from -20 °C to +60 °C was obtained. A special investigation was carried out to show the homogeneity of the material.
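
    The claim that fringe positions can be located to better than a pixel rests on fitting the recorded interference pattern rather than reading single pixels. The sketch below is a minimal, generic illustration of sub-pixel localization by a three-point parabolic fit around an intensity maximum of a simulated fringe; the actual method fits the full interference field and is not reproduced here.

```python
import numpy as np

def subpixel_peak(intensity):
    """Locate an intensity peak to sub-pixel precision with a 3-point parabola fit."""
    i = int(np.argmax(intensity))
    y0, y1, y2 = intensity[i - 1], intensity[i], intensity[i + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)   # vertex offset from sample i
    return i + delta

# Simulated fringe: a cosine sampled on a pixel grid, true peak at 25.37 px
x = np.arange(100)
fringe = 1 + np.cos(2 * np.pi * (x - 25.37) / 40.0)
print(f"estimated peak position: {subpixel_peak(fringe):.3f} px")
```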

  8. Dawn Survey Orbit Image 42

    NASA Image and Video Library

    2015-08-06

    This image of Ceres, taken by NASA's Dawn spacecraft, features a large, steep-sided mountain and several intriguing bright spots. The mountain's height is estimated to be about 4 miles (6 kilometers), which is a revision of the previous estimate of 3 miles (5 kilometers). It is the highest point seen on Ceres so far. The image was obtained on June 25, 2015 from an altitude of 2,700 miles (4,400 kilometers) above Ceres and has a resolution of 1,400 feet (410 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA19615

  9. Global Map of Pluto

    NASA Image and Video Library

    2015-07-27

    The science team of NASA's New Horizons mission has produced an updated global map of the dwarf planet Pluto. The map includes all resolved images of the surface acquired between July 7-14, 2015, at pixel resolutions ranging from 40 kilometers (24 miles) on the Charon-facing hemisphere (left and right sides of the map) to 400 meters (1,250 feet) on the anti-Charon facing hemisphere (map center). Many additional images are expected in fall of 2015 and these will be used to complete the global map. http://photojournal.jpl.nasa.gov/catalog/PIA19858

  10. A robust sebum, oil, and particulate pollution model for assessing cleansing efficacy of human skin.

    PubMed

    Peterson, G; Rapaka, S; Koski, N; Kearney, M; Ortblad, K; Tadlock, L

    2017-06-01

    With increasing concerns over the rise of atmospheric particulate pollution globally and its impact on systemic health and skin ageing, we have developed a pollution model to mimic particulate matter trapped in sebum and oils, creating a robust (difficult to remove) surrogate for dirty, polluted skin. The objective was to evaluate the cleansing efficacy/protective effect of a sonic brush vs. manual cleansing against particulate pollution (trapped in grease/oil typical of human sebum). The pollution model (Sebollution; sebum pollution model; SPM) consists of atmospheric particulate matter/pollution combined with grease/oils typical of human sebum. Twenty subjects between the ages of 18 and 65 were enrolled in a single-centre cleansing study comparing the sonic cleansing brush (normal speed) with manual cleansing. Equal amounts of SPM were applied to the centre of each cheek (left and right). The method of cleansing (sonic vs. manual) was randomized to the side of the face (left or right) for each subject. Each side was cleansed for five seconds using the sonic cleansing device with a sensitive brush head or manually, using equal amounts of water and a gel cleanser. Photographs (VISIA-CR, Canfield Imaging, NJ, USA) were taken at baseline (before application of the SPM), after application of SPM (pre-cleansing), and following cleansing. Image analysis (ImageJ, NIH, Bethesda, MD, USA) was used to quantify colour intensity (amount of particulate pollutants on the skin) using a scale of 0 to 255 (0 = all black pixels; 255 = all white pixels). Differences between the baseline and post-cleansing values (pixels) are reported as the amount of SPM remaining following each method of cleansing. Using a robust cleansing protocol to assess removal of pollutants (SPM; atmospheric particulate matter trapped in grease/oil), the sonic brush removed significantly more SPM than manual cleansing (P < 0.001). While extreme in colour, this pollution method easily allows assessment of efficacy through image analysis. © 2016 The Authors. International Journal of Cosmetic Science published by John Wiley & Sons Ltd on behalf of Society of Cosmetic Scientists and the Société Française de Cosmétologie.
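
    A toy version of the reported grey-level metric (mean intensity on the 0-255 scale, differenced between baseline and post-cleansing) is sketched below; the synthetic arrays are stand-ins for the VISIA-CR photographs, and this is not the ImageJ workflow used in the study.

```python
import numpy as np

def mean_intensity(image):
    """Mean grey level of a region of interest on the 0 (black) to 255 (white) scale."""
    return float(np.mean(image))

rng = np.random.default_rng(0)
baseline = rng.normal(200, 10, (100, 100)).clip(0, 255)     # clean cheek ROI
post_clean = rng.normal(185, 10, (100, 100)).clip(0, 255)   # some dark SPM remains

# Residual soil: how far the cleansed skin still is from its own baseline
residual = mean_intensity(baseline) - mean_intensity(post_clean)
print(f"SPM remaining (grey-level units): {residual:.1f}")
```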

  11. Development of Camera Model and Geometric Calibration/validation of Xsat IRIS Imagery

    NASA Astrophysics Data System (ADS)

    Kwoh, L. K.; Huang, X.; Tan, W. J.

    2012-07-01

    XSAT, launched on 20 April 2011, is the first micro-satellite designed and built in Singapore. It orbits the Earth at an altitude of 822 km in a sun-synchronous orbit. The satellite carries a multispectral camera, IRIS, with three spectral bands - 0.52~0.60 μm for green, 0.63~0.69 μm for red and 0.76~0.89 μm for NIR - at 12 m resolution. In the design of the IRIS camera, the three bands were acquired by three lines of CCDs (NIR, red and green). These CCDs were physically separated in the focal plane and their first pixels were not absolutely aligned. The micro-satellite platform was also not stable enough to allow co-registration of the three bands with a simple linear transformation. In the camera model developed, this platform instability was compensated with 3rd- to 4th-order polynomials for the satellite's roll, pitch and yaw attitude angles. With the camera model, camera parameters such as the band-to-band separations, the alignment of the CCDs relative to each other, and the focal length of the camera can be validated or calibrated. The results of calibration with more than 20 images showed that the band-to-band along-track separations agreed well with the pre-flight values provided by the vendor (0.093° and 0.046° for the NIR vs. red and green vs. red CCDs, respectively). The cross-track alignments were 0.05 pixel and 5.9 pixels for the NIR vs. red and green vs. red CCDs, respectively. The focal length was found to be shorter by about 0.8%. This was attributed to the lower temperature at which XSAT is currently operating. With the calibrated parameters and the camera model, a geometric level 1 multispectral image with RPCs can be generated and, if required, orthorectified imagery can also be produced.
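
    A minimal sketch of the kind of low-order polynomial attitude compensation described above, fitting a 3rd-order polynomial to hypothetical per-line roll residuals with NumPy; the actual IRIS camera model and its estimation from ground control points are not reproduced here.

```python
import numpy as np

# Hypothetical per-line roll residuals (degrees) estimated from ground control points
rng = np.random.default_rng(2)
line = np.linspace(0, 5000, 21)                      # image line number
roll = (0.002 + 1e-6 * line - 3e-10 * line**2 + 2e-14 * line**3
        + rng.normal(0, 5e-5, line.size))            # smooth drift plus measurement noise

# Compensate platform instability with a 3rd-order polynomial in line number
coeffs = np.polyfit(line, roll, deg=3)
roll_model = np.polyval(coeffs, line)
print("RMS residual after fit (deg):", np.std(roll - roll_model).round(6))
```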

  12. ISCE: A Modular, Reusable Library for Scalable SAR/InSAR Processing

    NASA Astrophysics Data System (ADS)

    Agram, P. S.; Lavalle, M.; Gurrola, E. M.; Sacco, G. F.; Rosen, P. A.

    2016-12-01

    Traditional community SAR/InSAR processing software tools have primarily focused on differential interferometry and Solid Earth applications. The InSAR Scientific Computing Environment (ISCE) was specifically designed to support the Earth Sciences user community as well as large-scale operational processing tasks, thanks to its two-layered (Python+C/Fortran) architecture and modular framework. ISCE is freely distributed as a source tarball, allowing advanced users to modify and extend it for their research purposes and to develop exploratory applications, while providing a relatively simple user interface for novice users to perform routine data analysis efficiently. The modular design of the ISCE library also enables easier development of applications to address the needs of the Ecosystems, Cryosphere and Disaster Response communities in addition to the traditional Solid Earth applications. In this talk, we would like to emphasize the broader purview of the ISCE library and some of its unique features that set it apart from other freely available community software like GMTSAR and DORIS, including: Support for multiple geometry regimes - Native Doppler (ALOS-1) as well as Zero Doppler (ESA missions) systems. Support for data acquired by airborne platforms - e.g., JPL's UAVSAR and AirMOSS, DLR's F-SAR. Radiometric Terrain Correction - Auxiliary output layers from the geometry modules include projection angles, incidence angles, shadow-layover masks. Dense pixel offsets - Parallelized amplitude cross correlation for cryosphere/ionospheric correction applications. Rubber sheeting - Pixel-by-pixel offset fields for resampling slave imagery for geometric co-registration/ionospheric corrections. Preliminary Tandem-X processing support - Bistatic geometry modules. Extensibility to support other non-Solid Earth missions - Modules can be directly adopted for use with other SAR missions, e.g., SWOT. Preliminary support for multi-dimensional data products - multi-polarization, multi-frequency, multi-temporal, multi-baseline stacks via the PLANT and GIAnT toolboxes. Rapid prototyping - Geometry manipulation functionality at the Python level allows users to prototype and test processing modules at the interpreter level before optimal implementation in C/C++/Fortran.
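
    Dense pixel offsets and rubber sheeting rest on patch-wise amplitude cross-correlation between master and slave images. The sketch below is a minimal FFT-based illustration of estimating an integer patch offset; it is not ISCE code, and real workflows add correlation-surface oversampling for subpixel precision.

```python
import numpy as np

def patch_offset(master, slave):
    """Integer (dy, dx) shift such that slave ~= roll(master, shift), via FFT correlation."""
    m = master - master.mean()
    s = slave - slave.mean()
    xcorr = np.fft.ifft2(np.fft.fft2(s) * np.conj(np.fft.fft2(m))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    shape = np.array(xcorr.shape)
    offset = np.array(peak)
    offset[offset > shape // 2] -= shape[offset > shape // 2]   # wrap large shifts negative
    return int(offset[0]), int(offset[1])

rng = np.random.default_rng(3)
master = rng.random((64, 64))
slave = np.roll(master, (3, -5), axis=(0, 1))    # known shift of (3, -5) pixels
print(patch_offset(master, slave))               # expect (3, -5)
```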

  13. Novel Fluorescein Angiography-Based Computer-Aided Algorithm for Assessment of Retinal Vessel Permeability

    PubMed Central

    Chassidim, Yoash; Parmet, Yisrael; Tomkins, Oren; Knyazer, Boris; Friedman, Alon; Levy, Jaime

    2013-01-01

    Purpose To present a novel method for quantitative assessment of retinal vessel permeability using a fluorescein angiography-based computer algorithm. Methods Twenty-one subjects (13 with diabetic retinopathy, 8 healthy volunteers) underwent fluorescein angiography (FA). Image pre-processing included removal of non-retinal and noisy images and registration to achieve spatial and temporal pixel-based analysis. Permeability was assessed for each pixel by computing intensity kinetics normalized to arterial values. A linear curve was fitted and the slope value was assigned, color-coded and displayed. The initial FA studies and the computed permeability maps were interpreted in a masked and randomized manner by three experienced ophthalmologists for statistical validation of diagnostic accuracy and efficacy. Results Permeability maps were successfully generated for all subjects. For healthy volunteers, permeability values showed a normal distribution with a comparable range between subjects. Based on the mean cumulative histogram for the healthy population, a threshold (99.5%) for pathological permeability was determined. Clear differences were found between patients and healthy subjects in the number and spatial distribution of pixels with pathological vascular leakage. The computed maps improved the discrimination between patients and healthy subjects, achieved sensitivity and specificity of 0.974 and 0.833, respectively, and significantly improved the consensus among raters for the localization of pathological regions. Conclusion The new algorithm allows quantification of retinal vessel permeability and provides objective, more sensitive and accurate evaluation than the present subjective clinical diagnosis. Future studies with a larger patient cohort and different retinal pathologies are awaited to further validate this new approach and its role in diagnosis and treatment follow-up. Successful evaluation of vasculature permeability may be used for the early diagnosis of brain microvascular pathology and potentially predict associated neurological sequelae. Finally, the algorithm could be implemented for intraoperative evaluation of microvascular integrity in other organs or during animal experiments. PMID:23626701
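
    The per-pixel permeability measure described above is the slope of a line fitted to each pixel's intensity over time, after normalization to arterial values. The sketch below illustrates only that slope computation on synthetic, already-registered frames with an assumed arterial mask; the published pipeline's preprocessing, registration, color coding and thresholding are not reproduced.

```python
import numpy as np

def slope_map(frames, times, arterial_mask):
    """Per-pixel slope of fluorescein intensity over time, normalized to arterial values.

    frames        : array (T, H, W) of registered FA frames
    times         : array (T,) of acquisition times
    arterial_mask : boolean (H, W) mask of pixels on a major artery
    """
    arterial = frames[:, arterial_mask].mean(axis=1)     # mean arterial intensity per frame
    norm = frames / arterial[:, None, None]
    t = times - times.mean()                             # closed-form least-squares slope
    y = norm - norm.mean(axis=0)
    return np.tensordot(t, y, axes=1) / np.sum(t**2)

# Toy data: a 'leaky' region whose normalized intensity keeps rising over time
T, H, W = 10, 32, 32
times = np.linspace(0, 300, T)                           # seconds
frames = np.ones((T, H, W)) * 100.0
frames[:, 8:16, 8:16] += 0.05 * times[:, None, None]     # pathological leakage
mask = np.zeros((H, W), bool); mask[:, :2] = True        # pretend the left edge is an artery
print("max slope:", slope_map(frames, times, mask).max().round(5))
```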

  14. System and method for phase retrieval for radio telescope and antenna control

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H. (Inventor)

    2013-01-01

    Disclosed herein are systems, methods, and non-transitory computer-readable storage media for radio phase retrieval. A system practicing the method gathers first data from radio waves associated with an object observed via a first aperture, gathers second data from radio waves associated with the object observed via an introduced second aperture associated with the first aperture, generates reduced noise data by incoherently subtracting the second data from the first data, and performs phase retrieval for the radio waves by modeling the reduced noise data using a single Fourier transform. The first and second apertures are at different positions, such as side by side. This approach can include determining a value Q which represents a ratio of wavelength times a focal ratio divided by pixel spacing. This information can be used to accurately measure and correct alignment errors or other optical system flaws in the apertures.
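
    As a small worked example of the sampling parameter described above (Q as the ratio of wavelength times focal ratio to pixel spacing), with purely illustrative values:

```python
# Q = wavelength * focal_ratio / pixel_spacing (all lengths in the same units)
wavelength_m = 0.21            # 21 cm hydrogen line, an illustrative radio wavelength
focal_ratio = 0.4              # dish F-number, hypothetical
pixel_spacing_m = 0.05         # effective detector/sample spacing, hypothetical

Q = wavelength_m * focal_ratio / pixel_spacing_m
print(f"Q = {Q:.2f}")          # Q = 2 corresponds to critical (Nyquist) sampling of the PSF
```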

  15. The Halo

    NASA Image and Video Library

    2013-12-23

    NASA's Cassini spacecraft looks towards the dark side of Saturn's largest moon, Titan, capturing the blue halo caused by a haze layer that hovers high in the moon's atmosphere. The haze that permeates Titan's atmosphere scatters sunlight and produces the orange color seen here. More on Titan's orange and blue hazes can be found at PIA14913. This view looks towards the side of Titan (3,200 miles or 5,150 kilometers across) that leads in its orbit around Saturn. North on Titan is up and rotated 40 degrees to the left. Images taken using red, green and blue spectral filters were combined to create this natural-color view. The images were taken with the Cassini spacecraft narrow-angle camera on Nov. 3, 2013. The view was acquired at a distance of approximately 2.421 million miles (3.896 million kilometers) from Titan. Image scale is 14 miles (23 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA17180

  16. Global and Local Features Based Classification for Bleed-Through Removal

    NASA Astrophysics Data System (ADS)

    Hu, Xiangyu; Lin, Hui; Li, Shutao; Sun, Bin

    2016-12-01

    The text on one side of historical documents often seeps through and appears on the other side, so bleed-through is a common problem in historical document images. It makes the document images hard to read and the text difficult to recognize. To improve image quality and readability, the bleed-through has to be removed. This paper proposes a bleed-through removal method based on global and local feature extraction. A Gaussian mixture model is used to obtain the global features of the images. Local features are extracted from the patch around each pixel. Then, an extreme learning machine classifier is used to classify the scanned images into the foreground text and the bleed-through component. Experimental results on real document image datasets show that the proposed method outperforms state-of-the-art bleed-through removal methods and preserves the text strokes well.
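
    Of the stages described, the sketch below illustrates only the global step: fitting a Gaussian mixture to pixel grey levels and reading off the darkest component as foreground text. The data are synthetic and scikit-learn's GaussianMixture stands in for the paper's GMM; the local patch features and the extreme learning machine classifier are omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic grey levels: bright background, faint bleed-through, dark foreground text
background = rng.normal(220, 8, 6000)
bleed = rng.normal(170, 12, 2500)
text = rng.normal(80, 15, 1500)
pixels = np.concatenate([background, bleed, text]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(pixels)
labels = gmm.predict(pixels)

# The darkest component (lowest mean grey level) is taken as the foreground text
text_component = np.argmin(gmm.means_.ravel())
print("estimated text fraction:", np.mean(labels == text_component).round(3))
```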

  17. On-board landmark navigation and attitude reference parallel processor system

    NASA Technical Reports Server (NTRS)

    Gilbert, L. E.; Mahajan, D. T.

    1978-01-01

    An approach to autonomous navigation and attitude reference for earth observing spacecraft is described along with the landmark identification technique based on a sequential similarity detection algorithm (SSDA). Laboratory experiments undertaken to determine if better than one pixel accuracy in registration can be achieved consistent with onboard processor timing and capacity constraints are included. The SSDA is implemented using a multi-microprocessor system including synchronization logic and chip library. The data is processed in parallel stages, effectively reducing the time to match the small known image within a larger image as seen by the onboard image system. Shared memory is incorporated in the system to help communicate intermediate results among microprocessors. The functions include finding mean values and summation of absolute differences over the image search area. The hardware is a low power, compact unit suitable to onboard application with the flexibility to provide for different parameters depending upon the environment.
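
    A minimal, sequential sketch of an SSDA-style template match with early abandoning of poor candidate positions; the multi-microprocessor parallelization and shared-memory details described in the report are not modeled, and the abort threshold is illustrative.

```python
import numpy as np

def ssda_match(image, template, abort_threshold):
    """Sequential similarity detection: best (row, col) of `template` within `image`.

    The accumulation of absolute differences at a candidate position stops early
    once it exceeds `abort_threshold`, which is what makes SSDA cheap on average.
    """
    th, tw = template.shape
    best_pos, best_err = None, np.inf
    flat_t = template.ravel()
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            err = 0.0
            window = image[r:r + th, c:c + tw].ravel()
            for a, b in zip(window, flat_t):
                err += abs(float(a) - float(b))
                if err > abort_threshold:       # early abandon
                    break
            if err < best_err:
                best_err, best_pos = err, (r, c)
    return best_pos, best_err

rng = np.random.default_rng(4)
scene = rng.integers(0, 255, (40, 40)).astype(float)
tmpl = scene[12:20, 25:33].copy()               # known landmark location (12, 25)
print(ssda_match(scene, tmpl, abort_threshold=500.0))
```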

  18. Robust Approach for Nonuniformity Correction in Infrared Focal Plane Array.

    PubMed

    Boutemedjet, Ayoub; Deng, Chenwei; Zhao, Baojun

    2016-11-10

    In this paper, we propose a new scene-based nonuniformity correction technique for infrared focal plane arrays. Our work is based on the use of two well-known scene-based methods, namely an adaptive method and an interframe registration-based method exploiting a pure-translation motion model between frames. The two approaches have their benefits and drawbacks, which make them extremely effective under certain conditions and poorly suited to others. Following on from that, we developed a method robust to the various conditions that may slow or affect the correction process, by elaborating a decision criterion that adapts the process to the most effective technique to ensure fast and reliable correction. In addition, problems such as bad pixels and ghosting artifacts are also dealt with to enhance the overall quality of the correction. The performance of the proposed technique is investigated and compared to the two state-of-the-art techniques cited above.

  19. Robust Approach for Nonuniformity Correction in Infrared Focal Plane Array

    PubMed Central

    Boutemedjet, Ayoub; Deng, Chenwei; Zhao, Baojun

    2016-01-01

    In this paper, we propose a new scene-based nonuniformity correction technique for infrared focal plane arrays. Our work is based on the use of two well-known scene-based methods, namely an adaptive method and an interframe registration-based method exploiting a pure-translation motion model between frames. The two approaches have their benefits and drawbacks, which make them extremely effective under certain conditions and poorly suited to others. Following on from that, we developed a method robust to the various conditions that may slow or affect the correction process, by elaborating a decision criterion that adapts the process to the most effective technique to ensure fast and reliable correction. In addition, problems such as bad pixels and ghosting artifacts are also dealt with to enhance the overall quality of the correction. The performance of the proposed technique is investigated and compared to the two state-of-the-art techniques cited above. PMID:27834893

  20. Scene-based nonuniformity correction and enhancement: pixel statistics and subpixel motion.

    PubMed

    Zhao, Wenyi; Zhang, Chao

    2008-07-01

    We propose a framework for scene-based nonuniformity correction (NUC) and nonuniformity correction and enhancement (NUCE) that is required for focal-plane array-like sensors to obtain clean and enhanced-quality images. The core of the proposed framework is a novel registration-based nonuniformity correction super-resolution (NUCSR) method that is bootstrapped by statistical scene-based NUC methods. Based on a comprehensive imaging model and an accurate parametric motion estimation, we are able to remove severe/structured nonuniformity and in the presence of subpixel motion to simultaneously improve image resolution. One important feature of our NUCSR method is the adoption of a parametric motion model that allows us to (1) handle many practical scenarios where parametric motions are present and (2) carry out perfect super-resolution in principle by exploring available subpixel motions. Experiments with real data demonstrate the efficiency of the proposed NUCE framework and the effectiveness of the NUCSR method.
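
    The framework above bootstraps a registration-based correction with statistical scene-based NUC. The sketch below shows only a simple statistical stand-in (a constant-statistics style gain/offset correction on simulated frames), labeled as such; it is not the paper's NUCSR algorithm and ignores the super-resolution step entirely.

```python
import numpy as np

def constant_statistics_nuc(frames):
    """Estimate per-pixel gain/offset so that every detector element has the same
    temporal mean and standard deviation (the constant-statistics assumption)."""
    pixel_mean = frames.mean(axis=0)
    pixel_std = frames.std(axis=0) + 1e-9
    gain = pixel_std.mean() / pixel_std
    offset = pixel_mean.mean() - gain * pixel_mean
    return gain * frames + offset

# Simulated focal plane array with fixed-pattern gain/offset nonuniformity
rng = np.random.default_rng(5)
true_gain = rng.normal(1.0, 0.1, (32, 32))
true_offset = rng.normal(0.0, 5.0, (32, 32))
scene = rng.uniform(50, 200, (200, 32, 32))          # moving, well-mixed scene
raw = true_gain * scene + true_offset
corrected = constant_statistics_nuc(raw)
print("fixed-pattern noise before:", raw.mean(axis=0).std().round(2),
      "after:", corrected.mean(axis=0).std().round(2))
```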

  1. Dual-Modality Small Animal Imaging System*

    NASA Astrophysics Data System (ADS)

    Ranck, Amoreena; Feldmann, John; Saunders, Robert S.; Welsh, Robert E.; Bradley, Eric L.; Saha, Margaret S.; Kross, Brian; Majewski, Stan; Popov, Vladimir; Weisenberger, Andrew G.; Wojcik, Randolph

    2000-10-01

    We describe preliminary results from an imaging system consisting of an array of position-sensitive photomultiplier tubes (PSPMTs) viewing pixelated scintillators and a small fluoroscopic x-ray system (Lixi, Inc.). The PSPMT detectors are used to follow the uptake of ligands tagged principally with ^125I, which emits photons in the 30 keV region. The fluoroscope allows the superposition of structural information on the pattern of the radioligands. This "dual modality" technique permits more accurate tracking of the tagged material in the animal under study. Small sources give fiducial information on both x-ray and radioligand pictures, allowing close registration of the two views of the system under study. Improvements to this system incorporating a very versatile rotatable gantry capable of supporting a wide range of detection systems simultaneously will be described. *Supported in part by The American Diabetes Association, The Jeffress Trust, The National Science Foundation, The Department of Energy, and The Howard Hughes Foundation

  2. Decision-Level Fusion of Spatially Scattered Multi-Modal Data for Nondestructive Inspection of Surface Defects

    PubMed Central

    Heideklang, René; Shokouhi, Parisa

    2016-01-01

    This article focuses on the fusion of flaw indications from multi-sensor nondestructive materials testing. Because each testing method makes use of a different physical principle, a multi-method approach has the potential of effectively differentiating actual defect indications from the many false alarms, thus enhancing detection reliability. In this study, we propose a new technique for aggregating scattered two- or three-dimensional sensory data. Using a density-based approach, the proposed method explicitly addresses localization uncertainties such as registration errors. This feature marks one of the major advantages of this approach over pixel-based image fusion techniques. We provide guidelines on how to set all the key parameters and demonstrate the technique’s robustness. Finally, we apply our fusion approach to experimental data and demonstrate its capability to locate small defects by substantially reducing false alarms under conditions where no single-sensor method is adequate. PMID:26784200

  3. Self calibrating autoTRAC

    NASA Technical Reports Server (NTRS)

    Everett, Louis J.

    1994-01-01

    The work reported here demonstrates how to automatically compute the position and attitude of a targeting reflective alignment concept (TRAC) camera relative to the robot end effector. In the robotics literature this is known as the sensor registration problem. The registration problem is important to solve if TRAC images need to be related to robot position. Previously, when TRAC operated on the end of a robot arm, the camera had to be precisely located at the correct orientation and position. If this location is in error, then the robot may not be able to grapple an object even though the TRAC sensor indicates it should. In addition, if the camera is significantly far from the alignment it is expected to be at, TRAC may give incorrect feedback for the control of the robot. A simple example is if the robot operator thinks the camera is right side up but the camera is actually upside down, the camera feedback will tell the operator to move in an incorrect direction. The automatic calibration algorithm requires the operator to translate and rotate the robot arbitrary amounts along (about) two coordinate directions. After the motion, the algorithm determines the transformation matrix from the robot end effector to the camera image plane. This report discusses the TRAC sensor registration problem.

  4. Three-dimensional measurement of small inner surface profiles using feature-based 3-D panoramic registration

    PubMed Central

    Gong, Yuanzheng; Seibel, Eric J.

    2017-01-01

    Rapid development in the performance of sophisticated optical components, digital image sensors, and computer abilities along with decreasing costs has enabled three-dimensional (3-D) optical measurement to replace more traditional methods in manufacturing and quality control. The advantages of 3-D optical measurement, such as noncontact, high accuracy, rapid operation, and the ability for automation, are extremely valuable for inline manufacturing. However, most of the current optical approaches are eligible for exterior instead of internal surfaces of machined parts. A 3-D optical measurement approach is proposed based on machine vision for the 3-D profile measurement of tiny complex internal surfaces, such as internally threaded holes. To capture the full topographic extent (peak to valley) of threads, a side-view commercial rigid scope is used to collect images at known camera positions and orientations. A 3-D point cloud is generated with multiview stereo vision using linear motion of the test piece, which is repeated by a rotation to form additional point clouds. Registration of these point clouds into a complete reconstruction uses a proposed automated feature-based 3-D registration algorithm. The resulting 3-D reconstruction is compared with x-ray computed tomography to validate the feasibility of our proposed method for future robotically driven industrial 3-D inspection. PMID:28286351

  5. Three-dimensional measurement of small inner surface profiles using feature-based 3-D panoramic registration

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Seibel, Eric J.

    2017-01-01

    Rapid development in the performance of sophisticated optical components, digital image sensors, and computer abilities along with decreasing costs has enabled three-dimensional (3-D) optical measurement to replace more traditional methods in manufacturing and quality control. The advantages of 3-D optical measurement, such as noncontact, high accuracy, rapid operation, and the ability for automation, are extremely valuable for inline manufacturing. However, most of the current optical approaches are eligible for exterior instead of internal surfaces of machined parts. A 3-D optical measurement approach is proposed based on machine vision for the 3-D profile measurement of tiny complex internal surfaces, such as internally threaded holes. To capture the full topographic extent (peak to valley) of threads, a side-view commercial rigid scope is used to collect images at known camera positions and orientations. A 3-D point cloud is generated with multiview stereo vision using linear motion of the test piece, which is repeated by a rotation to form additional point clouds. Registration of these point clouds into a complete reconstruction uses a proposed automated feature-based 3-D registration algorithm. The resulting 3-D reconstruction is compared with x-ray computed tomography to validate the feasibility of our proposed method for future robotically driven industrial 3-D inspection.

  6. Model-based estimation with boundary side information or boundary regularization [cardiac emission CT].

    PubMed

    Chiao, P C; Rogers, W L; Fessler, J A; Clinthorne, N H; Hero, A O

    1994-01-01

    The authors have previously developed a model-based strategy for joint estimation of myocardial perfusion and boundaries using ECT (emission computed tomography). They have also reported difficulties with boundary estimation in low contrast and low count rate situations. Here they propose using boundary side information (obtainable from high resolution MRI and CT images) or boundary regularization to improve both perfusion and boundary estimation in these situations. To fuse boundary side information into the emission measurements, the authors formulate a joint log-likelihood function to include auxiliary boundary measurements as well as ECT projection measurements. In addition, they introduce registration parameters to align auxiliary boundary measurements with ECT measurements and jointly estimate these parameters with other parameters of interest from the composite measurements. In simulated PET O-15 water myocardial perfusion studies using a simplified model, the authors show that the joint estimation improves perfusion estimation performance and gives boundary alignment accuracy of <0.5 mm even at 0.2 million counts. They implement boundary regularization through formulating a penalized log-likelihood function. They also demonstrate in simulations that simultaneous regularization of the epicardial boundary and myocardial thickness gives comparable perfusion estimation accuracy with the use of boundary side information.

  7. Association between the location of diverticular disease and the irritable bowel syndrome: a multicenter study in Japan.

    PubMed

    Yamada, Eiji; Inamori, Masahiko; Uchida, Eri; Tanida, Emiko; Izumi, Motoyoshi; Takeshita, Kimiya; Fujii, Tetsuro; Komatsu, Kazuto; Hamanaka, Jun; Maeda, Shin; Kanesaki, Akira; Matsuhashi, Nobuyuki; Nakajima, Atsushi

    2014-12-01

    No previous reports have shown an association between location of diverticular disease (DD) and the irritable bowel syndrome (IBS). We included 1,009 consecutive patients undergoing total colonoscopy in seven centers in Japan from June 2013 to September 2013. IBS was diagnosed using Rome III criteria, and diverticulosis was diagnosed by colonoscopy with transparent soft-short-hood. Left-sided colon was defined as sigmoid colon, descending colon, and rectum. Right-sided colon was defined as cecum, ascending colon, and transverse colon. We divided the patients into IBS and non-IBS groups and compared characteristics. Patient characteristics included mean age, 64.2±12.9 years and male:female ratio, 1.62:1. Right-sided DD was identified in 21.6% of subjects. Left-sided and bilateral DD was identified in 6.6 and 12.0% of subjects, respectively. IBS was observed in 7.5% of subjects. Multiple logistic regression analysis showed left-sided DD (odds ratio, 3.1; 95% confidence interval (CI): 1.4-7.1; P=0.0060) and bilateral DD (odds ratio, 2.6; 95% CI, 1.3-5.2; P=0.0070) were independent risk factors for IBS. Right-sided DD was not a risk factor for IBS. Our data showed that the presence of left-sided and bilateral DD, but not right-sided disease, was associated with a higher risk of IBS, indicating that differences in pathological factors caused by the location of the DD are important in the development of IBS. Clarifying the specific changes associated with left-sided DD could provide a better understanding of the pathogenic mechanisms of IBS (Trial registration # R000012739).

  8. Outcomes of the Bowel Cancer Screening Programme (BCSP) in England after the first 1 million tests

    PubMed Central

    Patnick, Julietta; Nickerson, Claire; Coleman, Lynn; Rutter, Matt D; von Wagner, Christian

    2012-01-01

    Introduction The Bowel Cancer Screening Programme in England began operating in 2006 with the aim of full roll out across England by December 2009. Subjects aged 60–69 are being invited to complete three guaiac faecal occult blood tests (6 windows) every 2 years. The programme aims to reduce mortality from colorectal cancer by 16% in those invited for screening. Methods All subjects eligible for screening in the National Health Service in England are included on one database, which is populated from National Health Service registration data covering about 98% of the population of England. This analysis is only of subjects invited to participate in the first (prevalent) round of screening. Results By October 2008 almost 2.1 million had been invited to participate, with tests being returned by 49.6% of men and 54.4% of women invited. Uptake ranged between 55–60% across the four provincial hubs which administer the programme but was lower in the London hub (40%). Of the 1.08 million returning tests 2.5% of men and 1.5% of women had an abnormal test. 17 518 (10 608 M, 6910 F) underwent investigation, with 98% having a colonoscopy as their first investigation. Cancer (n=1772) and higher risk adenomas (n=6543) were found in 11.6% and 43% of men and 7.8% and 29% of women investigated, respectively. 71% of cancers were ‘early’ (10% polyp cancer, 32% Dukes A, 30% Dukes B) and 77% were left-sided (29% rectal, 45% sigmoid) with only 14% being right-sided compared with expected figures of 67% and 24% for left and right side from UK cancer registration. Conclusion In this first round of screening in England uptake and fecal occult blood test positivity was in line with that from the pilot and the original European trials. Although there was the expected improvement in cancer stage at diagnosis, the proportion with left-sided cancers was higher than expected. PMID:22156981

  9. Ladar range image denoising by a nonlocal probability statistics algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi

    2013-01-01

    Based on the characteristics of coherent ladar range images and on the framework of nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF), while NLPS uses the maximum of the marginal PDF. In the algorithm, similar blocks are found by block matching and form a group. Pixels in the group are analyzed by probability statistics, and the gray value with the maximum probability is used as the estimated value of the current pixel. Simulated coherent ladar range images with different carrier-to-noise ratios and a real coherent ladar range image with 8 gray scales are denoised by this algorithm, and the results are compared with those of the median filter, the multitemplate order mean filter, NLM, the median nonlocal mean filter and its incorporation of anatomical side information, and the unsupervised information-theoretic adaptive filter. The range-abnormality noise and Gaussian noise in coherent ladar range images are effectively suppressed by NLPS.
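
    A minimal sketch of the core NLPS idea as described above: group similar blocks by matching, then replace the current pixel with the most probable gray value in the group (the maximum of the marginal histogram). Block size, search window, group size and the synthetic image are illustrative; this is not the published implementation.

```python
import numpy as np

def nlps_pixel(image, r, c, block=3, search=7, n_similar=8, n_levels=8):
    """Denoise one interior pixel of a small-gray-scale range image with a mode estimate."""
    half, s = block // 2, search // 2
    ref = image[r - half:r + half + 1, c - half:c + half + 1]
    candidates = []
    for i in range(r - s, r + s + 1):
        for j in range(c - s, c + s + 1):
            blk = image[i - half:i + half + 1, j - half:j + half + 1]
            if blk.shape == ref.shape:
                candidates.append((np.abs(blk - ref).sum(), blk))   # block-matching distance
    candidates.sort(key=lambda t: t[0])
    group = np.array([blk for _, blk in candidates[:n_similar]])
    # Maximum of the marginal PDF = most frequent gray level in the matched group
    hist = np.bincount(group.astype(int).ravel(), minlength=n_levels)
    return int(np.argmax(hist))

img = np.full((21, 21), 5, dtype=int)
img[10, 10] = 0                                 # an isolated range-anomaly pixel
print(nlps_pixel(img, 10, 10))                  # restored to the dominant gray level 5
```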

  10. Novel algorithm by low complexity filter on retinal vessel segmentation

    NASA Astrophysics Data System (ADS)

    Rostampour, Samad

    2011-10-01

    This article presents a new method to detect blood vessels in the retina from digital images. Retinal vessel segmentation is important for detecting side effects of diabetes, because diabetes can form new capillaries which are very brittle. The research has been done in two phases: preprocessing and processing. The preprocessing phase consists of applying a new filter that produces a suitable output: it shows vessels in a dark colour on a white background and creates a good contrast between vessels and background. The complexity is very low and extra images are eliminated. The second, processing phase uses a Bayesian method, a supervised classification approach. This method uses the mean and variance of pixel intensities to calculate class probabilities. Finally, the pixels of the image are divided into two classes: vessels and background. The images used are from the DRIVE database. After performing this operation, the calculation gives an average efficiency of 95 percent. The method was also applied to an external sample, outside the DRIVE database, which exhibits retinopathy, and a perfect result was obtained.
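
    A minimal two-class Gaussian (naive Bayes) classifier on pixel intensity, mirroring the described use of class means and variances to compute probabilities. Training labels, intensities and priors are synthetic stand-ins, not DRIVE data, and the vessel-enhancing prefilter is not reproduced.

```python
import numpy as np

def fit_gaussian(values):
    return values.mean(), values.var() + 1e-9

def log_likelihood(x, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

rng = np.random.default_rng(7)
# Synthetic training intensities after filtering: vessels dark, background bright
vessel_train = rng.normal(60, 15, 500)
backgr_train = rng.normal(180, 20, 5000)
prior_vessel = len(vessel_train) / (len(vessel_train) + len(backgr_train))

mv, vv = fit_gaussian(vessel_train)
mb, vb = fit_gaussian(backgr_train)

# Classify every pixel of a synthetic test image by maximum posterior probability
test = rng.normal(180, 20, (64, 64))
test[30:34, :] = rng.normal(60, 15, (4, 64))        # a dark horizontal "vessel"
post_vessel = log_likelihood(test, mv, vv) + np.log(prior_vessel)
post_backgr = log_likelihood(test, mb, vb) + np.log(1 - prior_vessel)
vessel_mask = post_vessel > post_backgr
print("vessel pixels found:", int(vessel_mask.sum()), "of", 4 * 64)
```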

  11. Superlattice Barrier Infrared Detector Development at the Jet Propulsion Laboratory

    NASA Technical Reports Server (NTRS)

    Ting, David Z.; Soibel, Alexander; Rafol, Sir B.; Nguyen, Jean; Hoglund, Linda; Khoshakhlagh, Arezou; Keo, Sam A.; Liu, John K.; Mumolo, Jason M.

    2011-01-01

    We report recent efforts in achieving state-of-the-art performance in type-II superlattice-based infrared photodetectors using the barrier infrared detector architecture. We used photoluminescence measurements to evaluate detector material and studied the influence of the material quality on the intensity of the photoluminescence. We performed direct noise measurements of the superlattice detectors and demonstrated that, while intrinsic 1/f noise is absent in the superlattice heterodiode, side-wall leakage current can become a source of strong frequency-dependent noise. We developed an effective dry etching process for these complex antimonide-based superlattices that enabled us to fabricate single-pixel devices as well as large-format focal plane arrays. We describe the demonstration of a 1024x1024 pixel long-wavelength infrared focal plane array based on the complementary barrier infrared detector (CBIRD) design. An 11.5 micron cutoff focal plane without anti-reflection coating has yielded a noise equivalent differential temperature of 53 mK at an operating temperature of 80 K, with a 300 K background and cold stop. Imaging results from a recent 10 μm cutoff focal plane array are also presented.

  12. Shape measurement of objects with large discontinuities and surface isolations using complementary grating projection

    NASA Astrophysics Data System (ADS)

    Hao, Yudong; Zhao, Yang; Li, Dacheng

    1999-11-01

    Grating projection 3D profilometry has three major problems that have to be handled with great care: local shadows, phase discontinuities and surface isolations. Carrying no information, shadow areas give us no clue about the profile there. Phase discontinuities often baffle phase unwrappers because they may be generated for several reasons that are difficult to distinguish. Spatial phase unwrapping will inevitably fail if the object under test has surface isolations. In this paper, a complementary grating projection profilometry is reported which attempts to tackle the three aforementioned problems simultaneously. This technique involves projecting two grating patterns from both sides of the CCD camera. Phase unwrapping is carried out pixel by pixel using the two phase maps, based on the excess fraction method, which is immune to phase discontinuities and surface isolations. Complementary projection makes sure that no area in the visible volume of the CCD is devoid of fringe information, although in some cases a small area of the reconstructed profile is of low accuracy compared with others. The system calibration procedures and measurement results are presented in detail, and possible improvements are discussed.

  13. Overview of the Micro Vertex Detector for the P bar ANDA experiment

    NASA Astrophysics Data System (ADS)

    Calvo, Daniela; P¯ANDA MVD Group

    2017-02-01

    The P bar ANDA experiment is devoted to studying interactions between cooled antiproton beams and a fixed target of hydrogen or heavier nuclei (the interaction rate is about 10⁷ events/s). The innermost tracker of P bar ANDA is the Micro Vertex Detector (MVD), specially designed to provide the secondary vertex resolution needed for the discrimination of short-lived charmonium states. Hybrid epitaxial silicon pixels and double-sided silicon microstrips will equip four barrels, arranged around the interaction point, and six forward disks. The experiment features a triggerless architecture with a master clock of 160 MHz; therefore the MVD has to run with continuous data transmission in which the hits need precise timestamps. The energy loss of the particles in the sensor will be measured as well. The challenging requirement of a triggerless readout suggested the development of custom readout chips for both the pixel (ToPix) and microstrip (PASTA) devices. To validate the components and the triggerless readout architecture, prototypes have been built and tested. After an overview of the MVD, the technological aspects and performance of some prototypes will be reported.

  14. Compression of Encrypted Images Using Set Partitioning In Hierarchical Trees Algorithm

    NASA Astrophysics Data System (ADS)

    Sarika, G.; Unnithan, Harikuttan; Peter, Smitha

    2011-10-01

    When it is desired to transmit redundant data over an insecure channel, it is customary to encrypt the data. For encrypted real-world sources such as gray-scale images, the use of Markov properties in the Slepian-Wolf decoder does not work well. In this paper we propose a method for compressing an encrypted image. In the encoder section, the image is first encrypted and then undergoes compression in resolution. The cipher function scrambles only the pixel values but does not shuffle the pixel locations. After downsampling, each sub-image is encoded independently and the resulting syndrome bits are transmitted. The received image undergoes joint decryption and decompression in the decoder section; using local statistics based on the image, it is recovered. Here the decoder gets only a lower-resolution version of the image. In addition, this method provides only partial access to the current source at the decoder side, which improves the decoder's learning of the source statistics. The source dependency is exploited to improve the compression efficiency. This scheme provides better coding efficiency and lower computational complexity.

  15. VizieR Online Data Catalog: Times of transits and occultations of WASP-12b (Patra+, 2017)

    NASA Astrophysics Data System (ADS)

    Patra, K. C.; Winn, J. N.; Holman, M. J.; Yu, L.; Deming, D.; Dai, F.

    2017-08-01

    Between 2016 October and 2017 February, we observed seven transits of WASP-12 with the 1.2m telescope at the Fred Lawrence Whipple Observatory on Mt. Hopkins, Arizona. Images were obtained with the KeplerCam detector through a Sloan r'-band filter. The typical exposure time was 15 s, chosen to give a signal-to-noise ratio of about 200 for WASP-12. The field of view of this camera is 23.1' on a side. We used 2×2 binning, giving a pixel scale of 0.68''. We measured two new occultation times based on hitherto unpublished Spitzer observations in 2013 December (program 90186, P.I. Todorov). Two different transits were observed, one at 3.6μm and one at 4.5μm. The data take the form of a time series of 32×32-pixel subarray images, with an exposure time of 2.0 s per image. The data were acquired over a wide range of orbital phases, but for our purpose, we analyzed only the ~14000 images within 4 hr of each occultation. (1 data file).

  16. Evaluation of resolution and periodic errors of a flatbed scanner used for digitizing spectroscopic photographic plates

    PubMed Central

    Wyatt, Madison; Nave, Gillian

    2017-01-01

    We evaluated the use of a commercial flatbed scanner for digitizing photographic plates used for spectroscopy. The scanner has a bed size of 420 mm by 310 mm and a pixel size of about 0.0106 mm. Our tests show that the closest line pairs that can be resolved with the scanner are 0.024 mm apart, only slightly larger than the Nyquist resolution of 0.021 mm expected from the 0.0106 mm pixel size. We measured periodic errors in the scanner using both a calibrated length scale and a photographic plate. We find no noticeable periodic errors in the direction parallel to the linear detector in the scanner, but errors with an amplitude of 0.03 mm to 0.05 mm in the direction perpendicular to the detector. We conclude that large periodic errors in measurements of spectroscopic plates using flatbed scanners can be eliminated by scanning the plates with the dispersion direction parallel to the linear detector, i.e. by placing the plate along the short side of the scanner. PMID:28463262

  17. Worlds Apart

    NASA Image and Video Library

    2015-10-12

    Although Mimas and Pandora, shown here, both orbit Saturn, they are very different moons. Pandora, "small" by moon standards (50 miles or 81 kilometers across), is elongated and irregular in shape. Mimas (246 miles or 396 kilometers across), a "medium-sized" moon, formed into a sphere due to the self-gravity resulting from its larger mass. The shapes of moons can teach us much about their history. For example, one explanation for Pandora's elongated shape and low density is that it may have formed by gathering ring particles onto a dense core. This view looks toward the unilluminated side of the rings from 0.26 degrees below the ring plane. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on July 26, 2015. The view was obtained at a distance of approximately 485,000 miles (781,000 kilometers) from Pandora. Image scale is 3 miles (5 kilometers) per pixel. Mimas is 904,000 miles (1.4 million kilometers) from the spacecraft in this image. The scale on Mimas is 5.4 miles (8.4 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA18339

  18. Film cameras or digital sensors? The challenge ahead for aerial imaging

    USGS Publications Warehouse

    Light, D.L.

    1996-01-01

    Cartographic aerial cameras continue to play the key role in producing quality products for the aerial photography business, and specifically for the National Aerial Photography Program (NAPP). One NAPP photograph taken with cameras capable of 39 lp/mm system resolution can contain the equivalent of 432 million pixels at 11 μm spot size, and the cost is less than $75 per photograph to scan and output the pixels on a magnetic storage medium. On the digital side, solid state charge coupled device linear and area arrays can yield quality resolution (7 to 12 μm detector size) and a broader dynamic range. If linear arrays are to compete with film cameras, they will require precise attitude and positioning of the aircraft so that the lines of pixels can be unscrambled and put into a suitable homogeneous scene that is acceptable to an interpreter. Area arrays need to be much larger than currently available to image scenes competitive in size with film cameras. Analysis of the relative advantages and disadvantages of the two systems shows that the analog approach is more economical at present. However, as arrays become larger, attitude sensors become more refined, global positioning system coordinate readouts become commonplace, and storage capacity becomes more affordable, the digital camera may emerge as the imaging system for the future. Several technical challenges must be overcome if digital sensors are to advance to where they can support mapping, charting, and geographic information system applications.
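
    As a back-of-envelope check of the 432-million-pixel figure, assuming the standard 9 in × 9 in (228.6 mm) aerial film frame, which the abstract does not state explicitly:

      # Rough check of the pixel-equivalent figure quoted above, assuming the
      # standard 9 in x 9 in (228.6 mm) NAPP film frame and an 11 micrometer
      # scanning spot (the frame size is an assumption, not stated in the abstract).
      frame_mm = 228.6                              # 9-inch square aerial film format
      spot_mm = 0.011                               # 11 micrometer scanning spot
      pixels_per_side = frame_mm / spot_mm          # ~20,800
      total_pixels = pixels_per_side ** 2           # ~4.32e8
      print(f"{total_pixels/1e6:.0f} million pixels")   # -> 432 million pixels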

  19. Fusion: ultra-high-speed and IR image sensors

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Dao, V. T. S.; Nguyen, Quang A.; Kimata, M.

    2015-08-01

    Most targets of ultra-high-speed video cameras operating at more than 1 Mfps, such as combustion, crack propagation, collision, plasma, spark discharge, an air bag in a car accident and a tire under sudden braking, generate sudden heat. Researchers in these fields require tools to measure the high-speed motion and heat simultaneously. Ultra-high frame rate imaging is achieved by an in-situ storage image sensor. Each pixel of the sensor is equipped with multiple memory elements to record a series of image signals simultaneously at all pixels. Image signals stored in each pixel are read out after an image capturing operation. In 2002, we developed an in-situ storage image sensor operating at 1 Mfps 1). However, the fill factor of the sensor was only 15% due to a light shield covering the wide in-situ storage area. Therefore, in 2011, we developed a backside illuminated (BSI) in-situ storage image sensor to increase the sensitivity, with a 100% fill factor and a very high quantum efficiency 2). The sensor also achieved a much higher frame rate, 16.7 Mfps, thanks to the wiring on the front side having more freedom 3). The BSI structure has another advantage in that it presents fewer difficulties in attaching an additional layer on the backside, such as scintillators. This paper proposes the development of an ultra-high-speed IR image sensor combining advanced nanotechnologies for IR imaging with the in-situ storage technology for ultra-high-speed imaging, with a discussion of issues in the integration.

  20. Glacier velocity Changes at Novaya Zemlya revealed by ALOS1 and ALOS2

    NASA Astrophysics Data System (ADS)

    Konuma, Y.; Furuya, M.

    2016-12-01

    Matsuo and Heki (2013) revealed substantial ice-mass loss at Novaya Zemlya using the Gravity Recovery And Climate Experiment (GRACE). In addition, elevation thinning (Moholdt et al., 2012) and glacier retreat (Carr et al., 2014) have been reported. Melkonian et al. (2016) showed velocity maps of the coastal area of Novaya Zemlya using Worldview, Landsat, ASTER and TerraSAR-X images. However, the entire distribution of ice speed and its temporal evolution remain unclear. In this study, we measured the glacier velocities using the L-band SAR sensors onboard ALOS1 and ALOS2. We analyzed the data using a pixel-offset tracking technique. We could observe the entire glaciated region in the 2007-2008 and 2008-2009 winters. In particular, we could examine the velocities in the middle of the glaciated region from 2006 to 2015 thanks to the availability of high-temporal-resolution SAR data. As a result, we found that most glaciers in Novaya Zemlya have been accelerating since the 1990s (Strozzi et al., 2008). In particular, Shokalskogo glacier has dramatically accelerated, from a maximum of 300 m a⁻¹ in 1998 to a maximum of 600 m a⁻¹ in 2015. Additionally, it turns out that there are marked differences in glacier velocities between the Barents Sea side and the Kara Sea side: the averaged maximum speed of the glaciers on the Barents Sea side was approximately two times that on the Kara Sea side. We speculate that the causes are differences in the topography under the calving fronts and in sea-ice concentration. While each side has many calving glaciers, the fjord distribution on the Barents Sea side is much broader than on the Kara Sea side. Moreover, sea-ice concentration in the Barents Sea is lower than in the Kara Sea, which might affect the distribution of glacier speeds.
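
    A minimal sketch of the offset-tracking step for a single patch pair, using FFT-based cross-correlation of intensity patches (a generic formulation, not the authors' processing chain):

      import numpy as np

      def patch_offset(master, slave):
          """Integer-pixel displacement of a feature in `slave` relative to
          `master` (2-D arrays of equal shape), from the peak of their
          circular cross-correlation."""
          m = (master - master.mean()) / (master.std() + 1e-12)
          s = (slave - slave.mean()) / (slave.std() + 1e-12)
          # circular cross-correlation via FFT; peak gives the shift of slave vs master
          cc = np.fft.ifft2(np.fft.fft2(s) * np.conj(np.fft.fft2(m))).real
          peak = np.unravel_index(np.argmax(cc), cc.shape)
          # map wrapped indices to signed offsets
          offsets = [p if p <= n // 2 else p - n for p, n in zip(peak, cc.shape)]
          return tuple(offsets)   # (row, col) shift; divide by time separation for velocity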

  1. Quality assessment of the Harmonized Landsat and Sentinel-2 (HLS) data set

    NASA Astrophysics Data System (ADS)

    Masek, J. G.; Claverie, M.; Ju, J.; Vermote, E.

    2017-12-01

    The Harmonized Landsat and Sentinel-2 (HLS) project is a NASA initiative aiming to produce a compatible surface reflectance (SR) data set from a virtual constellation consisting of the US Landsat-8 and the European Sentinel-2 satellites. The creation of such a long-term surface reflectance data record requires the development and implementation of quality assessment (QA) methods to evaluate the quality of the product. QA is built as an integral part of the HLS production chain. The QA includes three components: (i) the comparison of the HLS data with MODIS data, (ii) an analysis of the geometric accuracy of the Landsat-8 OLI and Sentinel-2 MSI Level-1 products, and (iii) an evaluation of the temporal consistency of the HLS products. The methodology of the cross-comparison of the HLS product with MODIS products was introduced by Claverie et al. (2015, RSE, vol. 169). It consists of comparing HLS SR (L30 products for Landsat-8 and S30 products for Sentinel-2) with MODIS SR (MOD09CMG), after adjustment of sun-view geometry and bandpass differences. The overall uncertainties and biases between MODIS and HLS SR do not exceed, depending on the band (excluding blue bands), 9% and 3%, respectively. No significant spatial or temporal patterns were identified. The most important source of uncertainty comes from cloud detection omissions in the MSI data. The geometric accuracy of the HLS and Level-1 products was assessed and improved using the automated registration and orthorectification package AROP (Gao et al., 2009, SPIE JARS, vol. 3). The use of AROP reduces the geometric co-registration error in Level-1 products by about 40% and 60% during HLS processing of OLI and MSI, respectively. The final CE-90 values are 6.2 m and 18.8 m for HLS MSI (computed with 10 m pixels) and OLI (30 m pixels), respectively. Finally, the time series (TS) smoothness of the data set was analyzed by computing the time series noise (Vermote et al., 2009, TGRS, vol. 47). We showed that the major issue is related to the MSI cloud mask quality. After filtering TS outliers, we demonstrated that the HLS TS noise (i.e., including MSI and OLI data) does not exceed 0.006 for the visible bands and 0.014 for the NIR and SWIR bands.
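
    One common formulation of such a time-series noise statistic measures the departure of each observation from the linear interpolation of its two temporal neighbours; the exact HLS/Vermote et al. (2009) definition may differ in weighting and normalisation, so the sketch below is illustrative only:

      import numpy as np

      def ts_noise(t, rho):
          """Time-series noise as the mean absolute departure of each
          observation from the linear interpolation of its two temporal
          neighbours (an assumed, generic smoothness statistic).

          t   : acquisition dates (days), strictly increasing
          rho : surface reflectance at those dates
          """
          t, rho = np.asarray(t, float), np.asarray(rho, float)
          w = (t[2:] - t[1:-1]) / (t[2:] - t[:-2])          # interpolation weights
          interp = w * rho[:-2] + (1.0 - w) * rho[2:]       # predicted middle values
          return np.mean(np.abs(rho[1:-1] - interp))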

  2. Initial clinical observations of intra- and interfractional motion variation in MR-guided lung SBRT.

    PubMed

    Thomas, David H; Santhanam, Anand; Kishan, Amar U; Cao, Minsong; Lamb, James; Min, Yugang; O'Connell, Dylan; Yang, Yingli; Agazaryan, Nzhde; Lee, Percy; Low, Daniel

    2018-02-01

    To evaluate variations in intra- and interfractional tumour motion, and the effect on internal target volume (ITV) contour accuracy, using deformable image registration of real-time two-dimensional sagittal cine-mode MRI acquired during lung stereotactic body radiation therapy (SBRT) treatments. Five lung tumour patients underwent free-breathing SBRT treatments on the ViewRay system, with dose prescribed to a planning target volume (defined as a 3-6 mm expansion of the 4DCT-ITV). Sagittal-slice cine-MR images (3.5 × 3.5 mm² pixels) were acquired through the centre of the tumour at 4 frames per second throughout the treatments (3-4 fractions of 21-32 min). Gross tumour volumes (GTVs) were contoured on the first frame of the MR cine and tracked for the first 20 min of each treatment using offline optical-flow-based deformable registration implemented on a GPU cluster. A ground-truth ITV (MR-ITV(20 min)) was formed by taking the union of tracked GTV contours. Pseudo-ITVs were generated from unions of the GTV contours tracked over 10 s segments of image data (MR-ITV(10 s)). Differences were observed in the magnitude of median tumour displacement between days of treatment. MR-ITV(10 s) areas were as small as 46% of the MR-ITV(20 min). An ITV offers a "snapshot" of breathing motion for the brief period of time the tumour is imaged on a specific day. Real-time MRI over prolonged periods of time and over multiple treatment fractions shows that ITV size varies. Further work is required to investigate the dosimetric effect of these results. Advances in knowledge: Five lung tumour patients underwent free-breathing MRI-guided SBRT treatments, and their tumours were tracked using deformable registration of cine-mode MRI. The results indicate that variability of both intra- and interfractional breathing amplitude should be taken into account during planning of lung radiotherapy.
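
    The ITV construction described above amounts to a union of tracked GTV contours; a minimal sketch, with binary masks standing in for the tracked contours (an assumed representation):

      import numpy as np

      def itv_area(gtv_masks, pixel_area_mm2=3.5 * 3.5):
          """Area of the union of tracked GTV masks (an ITV 'snapshot').

          gtv_masks : boolean array (n_frames, H, W), one tracked GTV mask
                      per cine frame.
          """
          union = np.any(gtv_masks, axis=0)
          return union.sum() * pixel_area_mm2

      # pseudo-ITV from the first 10 s (40 frames at 4 fps) versus the
      # 20-minute ITV, giving a ratio like the 46% quoted above:
      # ratio = itv_area(masks[:40]) / itv_area(masks)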

  3. 3D Cloud Tomography, Followed by Mean Optical and Microphysical Properties, with Multi-Angle/Multi-Pixel Data

    NASA Astrophysics Data System (ADS)

    Davis, A. B.; von Allmen, P. A.; Marshak, A.; Bal, G.

    2010-12-01

    The geometrical assumption in all operational cloud remote sensing algorithms is that clouds are plane-parallel slabs, which applies relatively well to the most uniform stratus layers. Its benefit is to justify using classic 1D radiative transfer (RT) theory, where angular details (solar, viewing, azimuthal) are fully accounted for and precise phase functions can be used, to generate the look-up tables used in the retrievals. Unsurprisingly, these algorithms catastrophically fail when applied to cumulus-type clouds, which are highly 3D. This is unfortunate for the cloud-process modeling community that may thrive on in situ airborne data, but would very much like to use satellite data for more than illustrations in their presentations and publications. So, how can we obtain quantitative information from space-based observations of finite aspect ratio clouds? Cloud base/top heights, vertically projected area, mean liquid water content (LWC), and volume-averaged droplet size would be a good start. Motivated by this science need, we present a new approach suitable for sparse cumulus fields where we turn the tables on the standard procedure in cloud remote sensing. We make no a priori assumption about cloud shape, save an approximately flat base, but use brutal approximations about the RT that is necessarily 3D. Indeed, the first order of business is to roughly determine the cloud's outer shape in one of two ways, which we will frame as competing initial guesses for the next phase of shape refinement and volume-averaged microphysical parameter estimation. Both steps use multi-pixel/multi-angle techniques amenable to MISR data, the latter adding a bi-spectral dimension using collocated MODIS data. One approach to rough cloud shape determination is to fit the multi-pixel/multi-angle data with a geometric primitive such as a scalene hemi-ellipsoid with 7 parameters (translation in 3D space, 3 semi-axes, 1 azimuthal orientation); for the radiometry, a simple radiosity-type model is used where the cloud surface "emits" either reflected (sunny-side) or transmitted (shady-side) light at different levels. As it turns out, the reflected/transmitted light ratio yields an approximate cloud optical thickness. Another approach is to invoke tomography techniques to define the volume occupied by the cloud using, as it were, cloud masks for each direction of observation. In the shape and opacity refinement phase, initial guesses along with solar and viewing geometry information are used to predict radiance in each pixel using a fast diffusion model for the 3D RT in MISR's non-absorbing red channel (275 m resolution). Refinement is constrained and stopped when optimal resolution is reached. Finally, multi-pixel/mono-angle MODIS data for the same cloud (at comparable 250 m resolution) reveals the desired droplet size information, hence the volume-averaged LWC. This is an ambitious remote sensing science project drawing on cross-disciplinary expertise gained in medical imaging using both X-ray and near-IR sources and detectors. It is high risk but with potentially high returns not only for the cloud modeling community but also aerosol and surface characterization in the presence of broken 3D clouds.

  4. Ahuna Mons: Side View

    NASA Image and Video Library

    2016-09-01

    Ceres' lonely mountain, Ahuna Mons, is seen in this simulated perspective view. The elevation has been exaggerated by a factor of two. The view was made using enhanced-color images from NASA's Dawn mission. Images taken using blue (440 nanometers), green (750 nanometers) and infrared (960 nanometers) spectral filters were combined to create the view. The spacecraft's framing camera took the images from Dawn's low-altitude mapping orbit, from an altitude of 240 miles (385 kilometers) in August 2016. The resolution of the component images is 120 feet (35 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20915

  5. Oxo Crater: Side View

    NASA Image and Video Library

    2016-09-01


  6. Method of fabricating a microelectronic device package with an integral window

    DOEpatents

    Peterson, Kenneth A.; Watson, Robert D.

    2003-01-01

    A method of fabricating a microelectronic device package with an integral window for providing optical access through an aperture in the package. The package is made of a multilayered insulating material, e.g., a low-temperature cofired ceramic (LTCC) or high-temperature cofired ceramic (HTCC). The window is inserted between personalized layers of ceramic green tape during stackup and registration. Then, during baking and firing, the integral window is simultaneously bonded to the sintered ceramic layers of the densified package. Next, the microelectronic device is flip-chip bonded to cofired thick-film metallized traces on the package, so that the light-sensitive side is optically accessible through the window. Finally, a cover lid is attached to the opposite side of the package. The result is a compact, low-profile, flip-chip-bonded, hermetically sealed package having an integral window.

  7. Flight performance of an advanced CZT imaging detector in a balloon-borne wide-field hard X-ray telescope—ProtoEXIST1

    NASA Astrophysics Data System (ADS)

    Hong, J.; Allen, B.; Grindlay, J.; Barthelemy, S.; Baker, R.; Garson, A.; Krawczynski, H.; Apple, J.; Cleveland, W. H.

    2011-10-01

    We successfully carried out the first high-altitude balloon flight of a wide-field hard X-ray coded-aperture telescope, ProtoEXIST1, which was launched from the Columbia Scientific Balloon Facility at Ft. Sumner, New Mexico on October 9, 2009. ProtoEXIST1 is the first implementation of an advanced CdZnTe (CZT) imaging detector in our ongoing program to establish the technology required for next generation wide-field hard X-ray telescopes such as the High Energy Telescope (HET) in the Energetic X-ray Imaging Survey Telescope (EXIST). The CZT detector plane in ProtoEXIST1 consists of an 8×8 array of closely tiled 2 cm×2 cm×0.5 cm thick pixellated CZT crystals, each with 8×8 pixels, mounted on a set of readout electronics boards and covering a 256 cm2 active area with 2.5 mm pixels. A tungsten mask, mounted 90 cm above the detector, provides shadowgrams of X-ray sources in the 30-600 keV band for imaging, allowing a fully coded field of view of 9°×9° (and 19°×19° for 50% coding fraction) with an angular resolution of 20′. In order to reduce the background radiation, the detector is surrounded by semi-graded (Pb/Sn/Cu) passive shields on the four sides all the way to the mask. On the back side, a 26 cm×26 cm×2 cm CsI(Na) active shield provides signals to tag charged-particle-induced events as well as ≳100 keV background photons from below. The flight duration was only about 7.5 h due to strong winds (60 knots) at float altitude (38-39 km). Throughout the flight, the CZT detector performed excellently. The telescope observed Cyg X-1, a bright black hole binary system, for ~1 h at the end of the flight. Despite a few problems with the pointing and aspect systems that caused the telescope to track about 6.4° off the target, the analysis of the Cyg X-1 data revealed an X-ray source at 7.2σ in the 30-100 keV energy band at the expected location from the optical images taken by the onboard daytime star camera. The success of this first flight is very encouraging for the future development of the advanced CZT imaging detectors (ProtoEXIST2, with 0.6 mm pixels), which will take advantage of the modularization architecture employed in ProtoEXIST1.

  8. A street rubbish detection algorithm based on Sift and RCNN

    NASA Astrophysics Data System (ADS)

    Yu, XiPeng; Chen, Zhong; Zhang, Shuo; Zhang, Ting

    2018-02-01

    This paper presents a street rubbish detection algorithm based on image registration with SIFT features and an RCNN. Firstly, rubbish region proposals are obtained on the real-time street image, and a convolutional neural network (CNN) is set up and trained on a sample set consisting of rubbish and non-rubbish images. Secondly, for every clean street image, SIFT features are extracted and image registration with the real-time street image is performed to obtain the differential image; the differential image filters out much of the background information, and region proposals where rubbish may appear are then obtained on the differential image by the selective search algorithm. The CNN model is then used to classify the image pixel data in each region proposal on the real-time street image; according to the output vector of the CNN, it is judged whether rubbish is present in the proposal, and if so, the region proposal is marked on the real-time street image. This algorithm avoids a large number of false detections caused by detection on the whole image, because the CNN is applied only to region proposals on the real-time street image where rubbish may appear. Unlike traditional object detection algorithms based on region proposals, the proposals are obtained on the differential image rather than on the whole real-time street image, and the number of invalid proposals is greatly reduced. The algorithm achieves a high mean average precision (mAP).
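
    A minimal OpenCV sketch of the registration-and-differencing step (the ratio-test threshold, RANSAC tolerance and other settings are illustrative, not taken from the paper):

      import cv2
      import numpy as np

      def difference_image(clean, live):
          """Register the clean street image to the live image with SIFT +
          RANSAC homography, then return the absolute difference image.
          clean, live : grayscale uint8 images of the same scene."""
          sift = cv2.SIFT_create()
          k1, d1 = sift.detectAndCompute(clean, None)
          k2, d2 = sift.detectAndCompute(live, None)
          matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
          good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
          src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
          dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
          warped = cv2.warpPerspective(clean, H, (live.shape[1], live.shape[0]))
          return cv2.absdiff(live, warped)   # region proposals are then searched here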

  9. Multisensor data fusion across time and space

    NASA Astrophysics Data System (ADS)

    Villeneuve, Pierre V.; Beaven, Scott G.; Reed, Robert A.

    2014-06-01

    Field measurement campaigns typically deploy numerous sensors having different sampling characteristics in the spatial, temporal, and spectral domains. Data analysis and exploitation are made more difficult and time consuming when the sample data grids of the sensors do not align. This report summarizes our recent effort to demonstrate the feasibility of a processing chain capable of "fusing" image data from multiple independent and asynchronous sensors into a form amenable to analysis and exploitation using commercially available tools. Two important technical issues were addressed in this work: 1) image spatial registration onto a common pixel grid, and 2) image temporal interpolation onto a common time base. The first step leverages existing image matching and registration algorithms. The second step relies upon a new and innovative use of optical flow algorithms to perform accurate temporal upsampling of slower frame rate imagery. Optical flow field vectors were first derived from high-frame-rate, high-resolution imagery, and then used as a basis for temporal upsampling of the slower frame rate sensor's imagery. Optical flow field values are computed using a multi-scale image pyramid, thus allowing for more extreme object motion. This involves preprocessing imagery to varying resolution scales and initializing new flow vector estimates using those from the previous coarser-resolution level. Overall performance of this processing chain is demonstrated using sample data involving complex motion observed by multiple sensors mounted to the same base, including a high-speed visible camera and a coarser-resolution LWIR camera.
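
    A minimal sketch of the flow-based temporal upsampling step, assuming both sensors have already been resampled to a common pixel grid; Farneback flow and the simple backward-warping approximation below are stand-ins for the multi-scale method described, not the report's implementation:

      import cv2
      import numpy as np

      def upsample_slow_frame(fast_t0, fast_t1, slow_t0, alpha):
          """Warp the slow sensor's frame at t0 toward time t0 + alpha*(t1-t0),
          using dense optical flow computed on the co-registered fast sensor.

          fast_t0, fast_t1 : consecutive high-frame-rate frames (uint8, HxW)
          slow_t0          : slow sensor frame resampled to the same pixel grid
          alpha            : fraction of the frame interval (0..1)
          """
          flow = cv2.calcOpticalFlowFarneback(fast_t0, fast_t1, None,
                                              0.5, 3, 15, 3, 5, 1.2, 0)
          h, w = flow.shape[:2]
          grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
          # backward map: sample slow_t0 at positions displaced by -alpha*flow
          map_x = (grid_x - alpha * flow[..., 0]).astype(np.float32)
          map_y = (grid_y - alpha * flow[..., 1]).astype(np.float32)
          return cv2.remap(slow_t0, map_x, map_y, cv2.INTER_LINEAR)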

  10. QWT: Retrospective and New Applications

    NASA Astrophysics Data System (ADS)

    Xu, Yi; Yang, Xiaokang; Song, Li; Traversoni, Leonardo; Lu, Wei

    The quaternion wavelet transform (QWT) has attracted much attention in recent years as a new image analysis tool. In most cases, it is an extension of the real wavelet transform and the complex wavelet transform (CWT) using quaternion algebra and the 2D Hilbert transform of filter theory, where an analytic signal representation is desirable to retrieve a phase-magnitude description of intrinsically 2D geometric structures in a grayscale image. In the context of color image processing, however, it is adapted to analyze the image pattern and color information as a whole unit by mapping sequential color pixels to a quaternion-valued vector signal. This paper provides a retrospective of the QWT and investigates its potential use in the domains of image registration, image fusion, and color image recognition. We indicate that it is important for the QWT to incorporate a mechanism for adaptive-scale representation of geometric features, which is further clarified through two application instances: uncalibrated stereo matching and optical flow estimation. Moreover, a quaternionic phase congruency model is defined based on the analytic signal representation so as to operate as an invariant feature detector for image registration. To achieve better localization of edges and textures in the image fusion task, we incorporate a directional filter bank (DFB) into the quaternion wavelet decomposition scheme to greatly enhance the direction selectivity and anisotropy of the QWT. Finally, the strong potential of the QWT in color image recognition is materialized in a chromatic face recognition system by establishing invariant color features. Extensive experimental results are presented to highlight the exciting properties of the QWT.

  11. Retinal slit lamp video mosaicking.

    PubMed

    De Zanet, Sandro; Rudolph, Tobias; Richa, Rogerio; Tappeiner, Christoph; Sznitman, Raphael

    2016-06-01

    To this day, the slit lamp remains the first tool used by an ophthalmologist to examine patient eyes. Imaging of the retina poses, however, a variety of problems, namely a shallow depth of focus, reflections from the optical system, a small field of view and non-uniform illumination. For ophthalmologists, the use of slit lamp images for documentation and analysis purposes remains extremely challenging due to large image artifacts. For this reason, we propose an automatic retinal slit lamp video mosaicking method, which enlarges the field of view and reduces the amount of noise and reflections, thus enhancing image quality. Our method is composed of three parts: (i) viable content segmentation, (ii) global registration and (iii) image blending. Frame content is segmented using gradient boosting with custom pixel-wise features. Speeded-up robust features are used for finding pair-wise translations between frames, with robust random sample consensus estimation and graph-based simultaneous localization and mapping for global bundle adjustment. Foreground-aware blending based on feathering merges video frames into comprehensive mosaics. Foreground is segmented successfully with an area under the receiver operating characteristic curve of 0.9557. Mosaicking results and state-of-the-art methods were compared and rated by ophthalmologists, showing a strong preference for the large field of view provided by our method. The proposed method for global registration of retinal slit lamp images into comprehensive mosaics improves over state-of-the-art methods and is preferred qualitatively.

  12. Layer by layer: complex analysis with OCT technology

    NASA Astrophysics Data System (ADS)

    Florin, Christian

    2017-03-01

    Standard visualisation systems capture two-dimensional images and require more or less fast image-processing systems. The ASP array (active sensor pixel array) opens a new world in imaging: each pixel is provided with its own lens and its own signal pre-processing. The OCT technology works in real time with high accuracy. In ASP array systems, data acquisition and signal processing functionalities are integrated at the pixel level. For the extraction of interferometric features, the time-of-flight (TOF) principle is used. The ASP architecture offers demodulation of the optical signal within a pixel at up to 100 kHz and reconstruction of its amplitude and phase. The dynamics of image capture with the ASP array are higher by two orders of magnitude than with conventional image sensors. The OCT technology allows topographic imaging in real time with an extremely high geometric spatial resolution. The optical path length is varied by an axial movement of the reference mirror. The carrier frequency of the amplitude-modulated optical signal is proportional to the scan rate, and the signal contains the depth information; each maximum of the signal envelope corresponds to a reflection (or scattering) within the sample. The ASP array produces 300 × 300 axial interferograms simultaneously, which touch each other on all sides. The signal demodulation for detecting the envelope is not limited by the frame rate of the ASP array, in contrast to standard OCT systems. When an optical signal arrives at a pixel of the ASP array, an electrical signal is generated. The background is faded out to avoid saturation of the pixels by high light intensity. The sampled signal is continuously multiplied by a reference signal of the same frequency along two paths whose phases are shifted by 90 degrees from each other, and each product is averaged. The outputs of the two paths are routed to the PC, where the envelope amplitude and phase are used to compute a three-dimensional tomographic image. For 3D measurement, specially designed ASP arrays with a very high image rate are available. When ASP arrays are coupled with the OCT method, layer thicknesses can be determined without contact, sealing seams can be inspected, and geometric shapes can be measured. From a stack of hundreds of single OCT images, images of interest can be selected and fed to the computer for analysis.
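
    The per-pixel demodulation described above (multiplication by two reference signals in quadrature followed by averaging) can be sketched as below; the carrier frequency and the simple moving-average low-pass filter are illustrative assumptions:

      import numpy as np

      def iq_demodulate(signal, fs, f_carrier):
          """Recover envelope and phase of an amplitude-modulated carrier by
          quadrature (I/Q) demodulation, analogous to the on-pixel processing.

          signal    : sampled interferometric signal (1-D array)
          fs        : sampling rate [Hz]
          f_carrier : carrier frequency set by the reference-mirror scan [Hz]
          """
          t = np.arange(signal.size) / fs
          i_path = signal * np.cos(2 * np.pi * f_carrier * t)   # in-phase mixer
          q_path = signal * np.sin(2 * np.pi * f_carrier * t)   # quadrature mixer
          # moving-average low-pass filter standing in for the on-pixel averaging
          n = max(1, int(fs / f_carrier))
          kernel = np.ones(n) / n
          i_lp = np.convolve(i_path, kernel, mode="same")
          q_lp = np.convolve(q_path, kernel, mode="same")
          envelope = 2.0 * np.hypot(i_lp, q_lp)    # factor 2 restores the amplitude
          phase = np.arctan2(q_lp, i_lp)
          return envelope, phase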

  13. Looking Forward - A Next Generation of Thermal Infrared Planetary Instruments

    NASA Astrophysics Data System (ADS)

    Christensen, P. R.; Hamilton, V. E.; Edwards, C. S.; Spencer, J. R.

    2017-12-01

    Thermal infrared measurements have provided important information about the physical properties of planetary surfaces beginning with the initial Mariner spacecraft in the early 1960s. These infrared measurements will continue into the future with a series of instruments that are now on their way or in development and that will explore a suite of asteroids, Europa, and Mars. These instruments are being developed at Arizona State University and are next-generation versions of the TES, Mini-TES, and THEMIS infrared spectrometers and imagers. The OTES instrument on OSIRIS-REx, which was launched in September 2016, will map the surface of the asteroid Bennu down to a resolution of 40 m/pixel at seven times of day. This multiple-time-of-day coverage will be used to produce global thermal inertia maps that will be used to determine the particle size distribution, which will in turn help select a safe and appropriate sample site. The EMIRS instrument, which is being built in partnership with the UAE's MBRSC for the Emirates Mars Mission, will measure martian surface temperatures at 200-300 km/pixel scale over the full diurnal cycle - the first time the full diurnal temperature cycle will have been observed since the Viking mission. The E-THEMIS instrument on the Europa Clipper mission will provide global mapping at 5-10 km/pixel scale at multiple times of day, and local observations down to resolutions of 50 m/pixel. These measurements will have a precision of 0.2 K for a 90 K scene, and will be used to map the thermal inertia and block abundances across Europa and to identify areas of localized endogenic heat. These observations will be used to investigate the physical processes of surface formation and evolution and to help select the landing site of a future Europa lander. Finally, the LTES instrument on the Lucy mission will measure temperatures on the day and night sides of the target Trojan asteroids, again providing insights into their surface properties and evolution processes.

  14. Simulation study of light transport in laser-processed LYSO:Ce detectors with single-side readout

    NASA Astrophysics Data System (ADS)

    Bläckberg, L.; El Fakhri, G.; Sabet, H.

    2017-11-01

    A tightly focused pulsed laser beam can locally modify the crystal structure inside the bulk of a scintillator. The result is the incorporation of so-called optical barriers with a refractive index different from that of the crystal bulk, which can be used to redirect the scintillation light and control the light spread in the detector. We here systematically study the scintillation light transport in detectors fabricated using the laser-induced optical barrier technique, and objectively compare their potential performance characteristics with those of the two mainstream detector types: monolithic and mechanically pixelated arrays. Among countless optical barrier patterns, we explore barriers arranged in a pixel-like pattern extending all the way or halfway through a 20 mm thick LYSO:Ce crystal. We analyze the performance of the detectors coupled to MPPC arrays in terms of light response functions, flood maps, line profiles, and light collection efficiency. Our results show that laser-processed detectors with both barrier patterns constitute a new detector category with a behavior between those of the two standard detector types. Results show that when the barrier-crystal interface is smooth, no DOI information can be obtained regardless of the barrier refractive index (RI). However, with a rough barrier-crystal interface we can extract multiple levels of DOI. A lower barrier RI results in larger light confinement, leading to better transverse resolution. Furthermore, we see that the laser-processed crystals have the potential to increase the light collection efficiency, which could lead to improved energy resolution and potentially better timing resolution due to higher signals. For a laser-processed detector with smooth barrier-crystal interfaces the light collection efficiency is simulated to be >42%, and for rough interfaces >73%. The corresponding numbers are 39% for a monolithic crystal with polished surfaces and 71% with rough surfaces, and 35% for a mechanically pixelated array with polished pixel surfaces and 59% with rough surfaces.

  15. Simulation study of light transport in laser-processed LYSO:Ce detectors with single-side readout.

    PubMed

    Bläckberg, L; El Fakhri, G; Sabet, H

    2017-10-19

    A tightly focused pulsed laser beam can locally modify the crystal structure inside the bulk of a scintillator. The result is the incorporation of so-called optical barriers with a refractive index different from that of the crystal bulk, which can be used to redirect the scintillation light and control the light spread in the detector. We here systematically study the scintillation light transport in detectors fabricated using the laser-induced optical barrier technique, and objectively compare their potential performance characteristics with those of the two mainstream detector types: monolithic and mechanically pixelated arrays. Among countless optical barrier patterns, we explore barriers arranged in a pixel-like pattern extending all the way or halfway through a 20 mm thick LYSO:Ce crystal. We analyze the performance of the detectors coupled to MPPC arrays in terms of light response functions, flood maps, line profiles, and light collection efficiency. Our results show that laser-processed detectors with both barrier patterns constitute a new detector category with a behavior between those of the two standard detector types. Results show that when the barrier-crystal interface is smooth, no DOI information can be obtained regardless of the barrier refractive index (RI). However, with a rough barrier-crystal interface we can extract multiple levels of DOI. A lower barrier RI results in larger light confinement, leading to better transverse resolution. Furthermore, we see that the laser-processed crystals have the potential to increase the light collection efficiency, which could lead to improved energy resolution and potentially better timing resolution due to higher signals. For a laser-processed detector with smooth barrier-crystal interfaces the light collection efficiency is simulated to be >42%, and for rough interfaces >73%. The corresponding numbers are 39% for a monolithic crystal with polished surfaces and 71% with rough surfaces, and 35% for a mechanically pixelated array with polished pixel surfaces and 59% with rough surfaces.

  16. Quantitative evaluation for accumulative calibration error and video-CT registration errors in electromagnetic-tracked endoscopy.

    PubMed

    Liu, Sheena Xin; Gutiérrez, Luis F; Stanton, Doug

    2011-05-01

    Electromagnetic (EM)-guided endoscopy has demonstrated its value in minimally invasive interventions. Accuracy evaluation of the system is of paramount importance to clinical applications. Previously, a number of researchers have reported the results of calibrating the EM-guided endoscope; however, the accumulated errors of an integrated system, which ultimately reflect intra-operative performance, have not been characterized. To fill this gap, we propose a novel system to perform this evaluation and use a 3D metric to reflect the intra-operative procedural accuracy. This paper first presents a portable design and a method for calibration of an electromagnetic (EM)-tracked endoscopy system. An evaluation scheme is then described that uses the calibration results and EM-CT registration to enable real-time data fusion between CT and endoscopic video images. We present quantitative evaluation results for estimating the accuracy of this system using eight internal fiducials as targets on an anatomical phantom: the error is obtained by comparing the positions of these targets in CT space, EM space and endoscopy image space. To obtain a 3D error estimate, the 3D locations of the targets in the endoscopy image space are reconstructed from stereo views of the EM-tracked monocular endoscope. Thus, the accumulated errors are evaluated in a controlled environment, where ground truth information is available and systematic performance (including the calibration error) can be assessed. We obtain a mean in-plane error on the order of 2 pixels. To evaluate the data integration performance for virtual navigation, the target video-CT registration error (TRE) is measured as the 3D Euclidean distance between the 3D-reconstructed targets of the endoscopy video images and the targets identified in CT. The 3D error (TRE) encapsulates the EM-CT registration error, EM-tracking error, fiducial localization error, and optical-EM calibration error. We present in this paper our calibration method and a virtual navigation evaluation system for quantifying the overall errors of the intra-operative data integration. We believe this phantom not only offers good insight into the systematic errors encountered in all phases of an EM-tracked endoscopy procedure but also can provide quality control of laboratory experiments for endoscopic procedures before the experiments are transferred from the laboratory to human subjects.
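
    A generic sketch of a point-based rigid (Kabsch/SVD) alignment followed by the 3D TRE computation described above; this is a textbook formulation under assumed inputs, not necessarily the registration used in the paper:

      import numpy as np

      def rigid_fit(src, dst):
          """Least-squares rigid transform (R, t) mapping src -> dst (Nx3 arrays)."""
          c_src, c_dst = src.mean(0), dst.mean(0)
          H = (src - c_src).T @ (dst - c_dst)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
          R = Vt.T @ D @ U.T
          return R, c_dst - R @ c_src

      def target_registration_error(targets_video, targets_ct, R, t):
          """3D Euclidean distance between transformed video-space targets and
          their CT-space counterparts (one value per target)."""
          mapped = targets_video @ R.T + t
          return np.linalg.norm(mapped - targets_ct, axis=1)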

  17. Habitat Distribution on the Inner Continental Shelf of Northern South Carolina Based on Sidescan Sonar and Submarine Video Data

    NASA Astrophysics Data System (ADS)

    Ojeda, G. Y.; Gayes, P. T.; van Dolah, R. F.; Schwab, W. C.

    2002-12-01

    Assessment of the extent and variability of benthic habitats is an important mission of biologists and marine scientists, and is of particular relevance in monitoring and maintaining the offshore resources of coastal nations. Mapping 'hard bottoms', in particular, is of critical importance because these are the areas that support sessile benthic habitats and associated fisheries. To quantify the extent and distribution of habitats offshore of northern South Carolina, we used a spatially quantitative approach that involved textural analysis of sidescan sonar images and training of an artificial neural network classifier. This approach was applied to a 2 m-pixel image mosaic of sonar data collected by the USGS in 1999 and 2000. The entire mosaic covered some 686 km² and extended between the ~6 m and ~10+ m isobaths off the Grand Strand region of South Carolina. Bottom video transects across selected sites provided 2,119 point observations, which were used for image-to-ground control as well as training of the neural network classifier. A sensitivity study of 52 space-domain textural features indicated that 12 of them provided reasonable discriminating power between two end-member bottom types: hard bottom and sand. The selected features were calculated over 5 by 5 pixel windows of the image where video point observations existed. These feature vectors were then fed to a 3-layer neural network classifier trained with a Levenberg-Marquardt backpropagation algorithm. Registration and display of the output habitat map were performed in GIS. Results of our classification indicate that outcropping Tertiary and Cretaceous strata are exposed over a significant portion of northern South Carolina's inner shelf, consistent with a sediment-starved margin. The combined surface extent classified as hard bottom was 405 km² (59% of the imaged area), while 281 km² (41% of the area) were classified as sand. In addition, our results provided constraints on the spatial continuity of nearshore benthic habitats. The median surface areas of the regions classified as hard bottom (n = 190,521) and sand (n = 234,946) were both equal to the output cell size (100 m²), confirming the 'patchy' nature of these habitats and suggesting that these medians probably represent upper bounds rather than estimates of the typical extent of individual patches. Furthermore, comparison of the interpretive habitat map with available swath bathymetry data suggests a positive correlation between bathymetric highs and the major sandy-bottom areas interpreted with our routine. In contrast, the location of hard bottom areas does not appear to be significantly correlated with major bathymetric features. Our findings are in agreement with published qualitative estimates of hard bottom areas on the neighboring North Carolina inner shelf.
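
    A minimal sketch of the classification step: texture features computed over 5 × 5 windows at the video ground-truth points and fed to a small neural network. The hand-picked features and the scikit-learn optimizer below are stand-ins for the study's 12 selected features and Levenberg-Marquardt training:

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      def window_features(img, points, half=2):
          """Simple space-domain texture features from (2*half+1)^2 windows
          centred on ground-truth points given as (row, col) pairs."""
          feats = []
          for r, c in points:
              w = img[r - half:r + half + 1, c - half:c + half + 1].astype(float)
              grad_r, grad_c = np.gradient(w)
              feats.append([w.mean(), w.std(), w.max() - w.min(),
                            np.abs(grad_r).mean(), np.abs(grad_c).mean()])
          return np.array(feats)

      # X: feature vectors at the 2,119 video control points, y: 0 = sand, 1 = hard bottom
      # clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X, y)
      # habitat = clf.predict(window_features(mosaic, all_window_centres))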

  18. Z-Earth: 4D topography from space combining short-baseline stereo and lidar

    NASA Astrophysics Data System (ADS)

    Dewez, T. J.; Akkari, H.; Kaab, A. M.; Lamare, M. L.; Doyon, G.; Costeraste, J.

    2013-12-01

    The advent of the free-of-charge global topographic data sets SRTM and ASTER GDEM has enabled testing a host of geoscience hypotheses. Availability of such data is now considered standard, and though resolved at 30 m to 90 m pixel size, they are today regarded as obsolete and inappropriate given the regularly updated sub-meter imagery available through web services like Google Earth. Two features will thus help meet the current topographic data needs of the geoscience communities: field-scale-compatible elevation data sets (i.e. meter-scale digital models with sub-meter elevation precision) and provision for regularly updated topography to tackle earth surface changes in 4D, while retaining the key for success: data availability at no charge. A new spaceborne instrumental concept called Z-Earth has undergone a phase 0 study at CNES, the French space agency, to fulfill these aims. The scientific communities backing this proposal are those of natural hazards, glaciology and biomass. The system under study combines a short-baseline native stereo imager and a lidar profiler. This combination provides spatially resolved elevation swaths together with absolute along-track elevation control point profiles. Acquisition is designed for a revisit time better than a year. Intended products not only target single-pass digital surface models, color orthoimages and small-footprint full-waveform lidar profiles to update existing topographic coverage, but also time series of them. 3D change detection targets centimetre-scale horizontal precision and metric vertical precision, in complement to the now traditional spectral change detection. To assess the actual concept value, two real-size experiments were carried out. We used sub-meter-scale Pleiades panchromatic stereo images to generate digital surface models and checked them against dense airborne lidar coverages, one heliborne set purposely flown in Corsica (50-100 pts/sq. m) and a second one retrieved from OpenTopography.org (~10 pts/sq. m). In Corsica, over a challenging 45-degree-grade, tree-covered mountainside, the Pleiades 2-m-grid-posting digital surface model described the topography with a median error of -4.75 m +/- 2.59 m (NMAD). A planimetric bias between the two datasets was found to be about 7 m to the south. This planimetric misregistration, though well within Pleiades specifications, partly explains the dramatic effect on the elevation differences. In the Redmond area (eastern Oregon), a very gentle desert landscape, the elevation differences also contained a vertical median bias of -4.02 m +/- 1.22 m (NMAD), even though here sub-pixel planimetric registration between the stereo DSM and the lidar coverage was enforced. This real-size experiment hints that sub-meter accuracy for 2-m-grid-posting DSMs is an achievable goal when combining stereo imaging and lidar.
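
    The NMAD spread statistic quoted above is the median absolute deviation of the elevation differences, scaled to a Gaussian-equivalent standard deviation:

      import numpy as np

      def nmad(dh):
          """Normalised median absolute deviation of elevation differences
          (1.4826 scales the MAD to a Gaussian-equivalent standard deviation)."""
          dh = np.asarray(dh, float)
          return 1.4826 * np.median(np.abs(dh - np.median(dh)))

      # e.g. robust bias and spread of DSM-minus-lidar differences:
      # bias, spread = np.median(dh), nmad(dh)   # -> values like -4.75 m, 2.59 m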

  19. Elbow flexor and extensor muscle weakness in lateral epicondylalgia.

    PubMed

    Coombes, Brooke K; Bisset, Leanne; Vicenzino, Bill

    2012-05-01

    To evaluate whether deficits of elbow flexor and extensor muscle strength exist in lateral epicondylalgia (LE) in comparison with a healthy control population. Cross-sectional study. 150 participants with unilateral LE were compared with 54 healthy control participants. Maximal isometric elbow flexion and extension strength were measured bilaterally using a purpose-built standing frame such that gripping was avoided. The authors found significant side differences in elbow extensor (-6.54 N, 95% CI -11.43 to -1.65, p=0.008, standardised mean difference (SMD) -0.45) and flexor muscle strength (-11.26 N, 95% CI -19.59 to -2.94, p=0.009, SMD -0.46) between the LE and control groups. Within the LE group, only the elbow extensor muscle strength deficit between sides was significant (affected-unaffected: -2.94 N, 95% CI -5.44 to -0.44). Small but significant deficits of elbow extensor and flexor muscle strength exist in the affected arm in unilateral LE in comparison with healthy controls. Notably, comparing elbow strength between the affected and unaffected sides in unilateral epicondylalgia is likely to underestimate these deficits. Trial registration: Australian New Zealand Clinical Trials Registry ACTRN12609000051246.

  20. Probing the Physics of Narrow-line Regions in Active Galaxies. IV. Full Data Release of the Siding Spring Southern Seyfert Spectroscopic Snapshot Survey (S7)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Adam D.; Dopita, Michael A.; Davies, Rebecca

    We present the second and final data release of the Siding Spring Southern Seyfert Spectroscopic Snapshot Survey (S7). Data are presented for 63 new galaxies not included in the first data release, and we provide 2D emission-line fitting products for the full S7 sample of 131 galaxies. The S7 uses the WiFeS instrument on the ANU 2.3 m telescope to obtain spectra with a spectral resolution of R = 7000 in the red (540-700 nm) and R = 3000 in the blue (350-570 nm), over an integral field of 25 × 38 arcsec2 with 1 × 1 arcsec2 spatial pixels. The S7 contains both the largest sample of active galaxies and the highest spectral resolution of any comparable integral field survey to date. The emission-line fitting products include line fluxes, velocities, and velocity dispersions across the WiFeS field of view, and an artificial neural network has been used to determine the optimal number of Gaussian kinematic components for emission lines in each spaxel. Broad Balmer lines are subtracted from the spectra of nuclear spatial pixels in Seyfert 1 galaxies before fitting the narrow lines. We bin nuclear spectra and measure reddening-corrected nuclear fluxes of strong narrow lines for each galaxy. The nuclear spectra are classified on optical diagnostic diagrams, where the strength of the coronal line [Fe vii] λ6087 is shown to be correlated with [O iii]/Hβ. Maps revealing gas excitation and kinematics are included for the entire sample, and we provide notes on the newly observed objects.

  1. Reflectivity quenching of ESR multilayer polymer film reflector in optically bonded scintillator arrays

    NASA Astrophysics Data System (ADS)

    Loignon-Houle, Francis; Pepin, Catherine M.; Charlebois, Serge A.; Lecomte, Roger

    2017-04-01

    The 3M-ESR multilayer polymer film is a widely used reflector in scintillation detector arrays. As specified in the datasheet and confirmed experimentally by measurements in air, it is highly reflective (>98%) over the entire visible spectrum (400-1000 nm) for all angles of incidence. Despite these outstanding characteristics, it was previously found that light crosstalk between pixels in a bonded LYSO scintillator array with ESR reflector can be as high as ∼30-35%. This unexplained light crosstalk motivated further investigation of ESR optical performance. Analytical simulation of a multilayer structure emulating the ESR reflector showed that the film becomes highly transparent to incident light at large angles when surrounded on both sides by materials of refractive index higher than air. Monte Carlo simulations indicate that a considerable fraction (∼25-35%) of scintillation photons are incident at these leaking angles in high-aspect-ratio LYSO scintillation crystals. The film transparency was investigated experimentally by measuring the scintillation light transmission through the ESR film sandwiched between a scintillation crystal and a photodetector with or without layers of silicone grease. Strong light leakage, up to nearly 30%, was measured through the reflector when coated on both sides with silicone, thus elucidating the major cause of light crosstalk in bonded arrays. The reflector transparency was confirmed experimentally for angles of incidence larger than 60° using a custom-designed setup allowing illumination of the bonded ESR film at selected grazing angles. The unsuspected ESR film transparency can be beneficial for detector arrays exploiting light-sharing schemes, but it is highly detrimental for scintillator arrays designed for individual pixel readout.

  2. Probing the Physics of Narrow-line Regions in Active Galaxies. IV. Full Data Release of the Siding Spring Southern Seyfert Spectroscopic Snapshot Survey (S7)

    NASA Astrophysics Data System (ADS)

    Thomas, Adam D.; Dopita, Michael A.; Shastri, Prajval; Davies, Rebecca; Hampton, Elise; Kewley, Lisa; Banfield, Julie; Groves, Brent; James, Bethan L.; Jin, Chichuan; Juneau, Stéphanie; Kharb, Preeti; Sairam, Lalitha; Scharwächter, Julia; Shalima, P.; Sundar, M. N.; Sutherland, Ralph; Zaw, Ingyin

    2017-09-01

    We present the second and final data release of the Siding Spring Southern Seyfert Spectroscopic Snapshot Survey (S7). Data are presented for 63 new galaxies not included in the first data release, and we provide 2D emission-line fitting products for the full S7 sample of 131 galaxies. The S7 uses the WiFeS instrument on the ANU 2.3 m telescope to obtain spectra with a spectral resolution of R = 7000 in the red (540-700 nm) and R = 3000 in the blue (350-570 nm), over an integral field of 25 × 38 arcsec2 with 1 × 1 arcsec2 spatial pixels. The S7 contains both the largest sample of active galaxies and the highest spectral resolution of any comparable integral field survey to date. The emission-line fitting products include line fluxes, velocities, and velocity dispersions across the WiFeS field of view, and an artificial neural network has been used to determine the optimal number of Gaussian kinematic components for emission-lines in each spaxel. Broad Balmer lines are subtracted from the spectra of nuclear spatial pixels in Seyfert 1 galaxies before fitting the narrow lines. We bin nuclear spectra and measure reddening-corrected nuclear fluxes of strong narrow lines for each galaxy. The nuclear spectra are classified on optical diagnostic diagrams, where the strength of the coronal line [Fe vii] λ6087 is shown to be correlated with [O III]/Hβ. Maps revealing gas excitation and kinematics are included for the entire sample, and we provide notes on the newly observed objects.

  3. Landscape West of Bosporos Rupes

    NASA Image and Video Library

    2006-04-07

    This image was taken in the mid-latitudes of Mars' southern hemisphere near the giant Argyre impact basin. It is located just to the west of a prominent scarp known as Bosporos Rupes. The left side of the image shows cratered plains. Some of the craters are heavily mantled and indistinct, whereas others exhibit sharp rims and dramatic topography. The largest crater in this half of the image is about 2.5 kilometers (1.5 miles) wide. Mounds and ridges, which may be remnants of an ice-rich deposit, are visible on its floor. Three sinuous valleys occupy the center of the image. Valleys such as these were first observed in data returned by the NASA Mariner 9 spacecraft, which reached Mars in 1971. The right side of the image shows part of an impact crater that is approximately 20 kilometers (12 miles) in diameter. The furrowed appearance of the crater's inner wall suggests that it has been extensively modified, perhaps by landslides and flowing water. Like other craters in the area, the floor of this crater has a rough and dissected texture that is often attributed to the loss of ice-rich material. This image was taken by the High Resolution Imaging Science Experiment (HiRISE) camera onboard NASA's Mars Reconnaissance Orbiter spacecraft on March 24, 2006. The image is centered at 40.64 degrees south latitude, 303.49 degrees east longitude. The image is oriented such that north is 7 degrees to the left of up. The range to the target was 2,044 kilometers (1,270 miles). At this distance the image scale is 2.04 meters (6.69 feet) per pixel, so objects as small as 6.1 meters (20 feet) are resolved. In total this image is 40.90 kilometers (25.41 miles) or 20,081 pixels wide and 11.22 kilometers (6.97 miles) or 5,523 pixels high. The image was taken at a local Mars time of 07:30 and the scene is illuminated from the upper right with a solar incidence angle of 81.4 degrees, thus the sun was about 8.6 degrees above the horizon. At an Ls of 29 degrees (with Ls an indicator of Mars' position in its orbit around the sun), the season on Mars is southern autumn. http://photojournal.jpl.nasa.gov/catalog/PIA08047

  4. Improving the space surveillance telescope's performance using multi-hypothesis testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zingarelli, J. Chris; Cain, Stephen; Pearce, Eric

    2014-05-01

    The Space Surveillance Telescope (SST) is a Defense Advanced Research Projects Agency program designed to detect objects in space like near Earth asteroids and space debris in the geosynchronous Earth orbit (GEO) belt. Binary hypothesis test (BHT) methods have historically been used to facilitate the detection of new objects in space. In this paper a multi-hypothesis detection strategy is introduced to improve the detection performance of SST. In this context, the multi-hypothesis testing (MHT) determines if an unresolvable point source is in either the center, a corner, or a side of a pixel in contrast to BHT, which only tests whether an object is in the pixel or not. The images recorded by SST are undersampled such as to cause aliasing, which degrades the performance of traditional detection schemes. The equations for the MHT are derived in terms of signal-to-noise ratio (S/N), which is computed by subtracting the background light level around the pixel being tested and dividing by the standard deviation of the noise. A new method for determining the local noise statistics that rejects outliers is introduced in combination with the MHT. An experiment using observations of a known GEO satellite are used to demonstrate the improved detection performance of the new algorithm over algorithms previously reported in the literature. The results show a significant improvement in the probability of detection by as much as 50% over existing algorithms. In addition to detection, the S/N results prove to be linearly related to the least-squares estimates of point source irradiance, thus improving photometric accuracy.
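    As an illustration of the S/N definition above (background subtracted from the test pixel, divided by the local noise), the following sketch estimates the background and noise from an annulus around the pixel with simple sigma-clipping to reject outliers. It is a minimal interpretation of the described approach, not the SST pipeline; the window sizes, clipping threshold, and function name are illustrative assumptions.

```python
import numpy as np

def local_snr(image, row, col, inner=2, outer=6, clip_sigma=3.0, n_iter=3):
    """Estimate the S/N of the pixel at (row, col) using a local background annulus.

    The background and noise are estimated from pixels between `inner` and `outer`
    (Chebyshev distance) around the test pixel, with iterative sigma-clipping to
    reject outliers such as nearby stars."""
    r0, r1 = max(row - outer, 0), min(row + outer + 1, image.shape[0])
    c0, c1 = max(col - outer, 0), min(col + outer + 1, image.shape[1])
    patch = image[r0:r1, c0:c1].astype(float)

    # Mask out the inner core so the source itself does not bias the background.
    rr, cc = np.mgrid[r0:r1, c0:c1]
    annulus = patch[np.maximum(np.abs(rr - row), np.abs(cc - col)) > inner]

    # Iterative sigma-clipping: reject samples far from the running median.
    for _ in range(n_iter):
        med, std = np.median(annulus), annulus.std()
        annulus = annulus[np.abs(annulus - med) < clip_sigma * std]

    background, noise = np.median(annulus), annulus.std()
    return (image[row, col] - background) / noise
```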

  5. CCD imaging sensors

    NASA Technical Reports Server (NTRS)

    Janesick, James R. (Inventor); Elliott, Stythe T. (Inventor)

    1989-01-01

    A method for promoting quantum efficiency (QE) of a CCD imaging sensor for UV, far UV and low energy x-ray wavelengths by overthinning the back side beyond the interface between the substrate and the photosensitive semiconductor material, and flooding the back side with UV prior to using the sensor for imaging. This UV flooding promotes an accumulation layer of positive states in the oxide film over the thinned sensor to greatly increase QE for either frontside or backside illumination. A permanent or semipermanent image (analog information) may be stored in a frontside SiO2 layer over the photosensitive semiconductor material using implanted ions for a permanent storage and intense photon radiation for a semipermanent storage. To read out this stored information, the gate potential of the CCD is biased more negative than that used for normal imaging, and excess charge current thus produced through the oxide is integrated in the pixel wells for subsequent readout by charge transfer from well to well in the usual manner.

  6. Fabrication of double-sided thallium bromide strip detectors

    NASA Astrophysics Data System (ADS)

    Hitomi, Keitaro; Nagano, Nobumichi; Onodera, Toshiyuki; Kim, Seong-Yun; Ito, Tatsuya; Ishii, Keizo

    2016-07-01

    Double-sided strip detectors were fabricated from thallium bromide (TlBr) crystals grown by the traveling-molten zone method using zone-purified materials. The detectors had three 3.4-mm-long, 1-mm-wide strips and a surrounding electrode placed orthogonally on opposite surfaces of the crystals, which were approximately 6.5×6.5 mm2 in area and 5 mm in thickness. Excellent charge transport properties for both electrons and holes were observed from the TlBr crystals. The mobility-lifetime products for electrons and holes in the detector were measured to be ~3×10-3 cm2/V and ~1×10-3 cm2/V, respectively. The 137Cs spectra corresponding to the gamma-ray interaction position were obtained from the detector. An energy resolution of 3.4% of full width at half maximum for 662-keV gamma rays was obtained from one "pixel" (an intersection of the strips) of the detector at room temperature.

  7. Performance enhancement of uncooled infrared focal plane array by integrating metamaterial absorber

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Wei; Wen, Yongzheng; Yu, Xiaomei, E-mail: yuxm@pku.edu.cn

    2015-03-16

    This letter presents an infrared (IR) focal plane array (FPA) with a metamaterial absorber (MMA) integrated to enhance its performance. A glass substrate, on which arrays of bimaterial cantilevers are fabricated as the thermal-sensitive pixels by a polyimide surface sacrificial process, is employed to allow optical readout from the back side of the substrate, whereas the IR wave radiates onto the FPA from the front side, which avoids the energy loss caused by the silicon substrate in previous works. This structure also facilitates the integration of the MMA by introducing a layer of periodic square resonators atop the SiNx structural layer to form a metal/dielectric/metal stack with the gold mirror functioning as the ground plane. A comparative experiment was carried out on FPAs that use the MMA and ordinary SiNx as the absorbers, respectively. The performance improvement was verified by the evaluation of the absorbers as well as the imaging results of both FPAs.

  8. Segmentation of Oil Spills on Side-Looking Airborne Radar Imagery with Autoencoders.

    PubMed

    Gallego, Antonio-Javier; Gil, Pablo; Pertusa, Antonio; Fisher, Robert B

    2018-03-06

    In this work, we use deep neural autoencoders to segment oil spills from Side-Looking Airborne Radar (SLAR) imagery. Synthetic Aperture Radar (SAR) has been much exploited for ocean surface monitoring, especially for oil pollution detection, but few approaches in the literature use SLAR. Our sensor consists of two SAR antennas mounted on an aircraft, enabling a quicker response than satellite sensors for emergency services when an oil spill occurs. Experiments on TERMA radar were carried out to detect oil spills on Spanish coasts using deep selectional autoencoders and RED-nets (very deep Residual Encoder-Decoder Networks). Different configurations of these networks were evaluated and the best topology significantly outperformed previous approaches, correctly detecting 100% of the spills and obtaining an F1 score of 93.01% at the pixel level. The proposed autoencoders perform accurately in SLAR imagery that has artifacts and noise caused by the aircraft maneuvers, in different weather conditions and with the presence of look-alikes due to natural phenomena such as shoals of fish and seaweed.

  9. On the creation of high spatial resolution imaging spectroscopy data from multi-temporal low spatial resolution imagery

    NASA Astrophysics Data System (ADS)

    Yao, Wei; van Aardt, Jan; Messinger, David

    2017-05-01

    The Hyperspectral Infrared Imager (HyspIRI) mission aims to provide global imaging spectroscopy data especially to the benefit of ecosystem studies. The onboard spectrometer will collect radiance spectra from the visible to short wave infrared (VSWIR) regions (400-2500 nm). The mission calls for fine spectral resolution (10 nm band width) and as such will enable scientists to perform material characterization, species classification, and even sub-pixel mapping. However, the global coverage requirement results in a relatively low spatial resolution (GSD 30m), which restricts applications to objects of similar scales. We therefore have focused on the assessment of sub-pixel vegetation structure from spectroscopy data in past studies. In this study, we investigate the development or reconstruction of higher spatial resolution imaging spectroscopy data via fusion of multi-temporal data sets to address the drawbacks implicit in low spatial resolution imagery. The projected temporal resolution of the HyspIRI VSWIR instrument is 15 days, which implies that we have access to as many as six data sets for an area over the course of a growth season. Previous studies have shown that select vegetation structural parameters, e.g., leaf area index (LAI) and gross ecosystem production (GEP), are relatively constant in summer and winter for temperate forests; we therefore consider the data sets collected in summer to be from a similar, stable forest structure. The first step, prior to fusion, involves registration of the multi-temporal data. A data fusion algorithm then can be applied to the pre-processed data sets. The approach hinges on an algorithm that has been widely applied to fuse RGB images. Ideally, if we have four images of a scene which all meet the following requirements - i) they are captured with the same camera configurations; ii) the pixel size of each image is x; and iii) at least r2 images are aligned on a grid of x/r - then a high-resolution image, with a pixel size of x/r, can be reconstructed from the multi-temporal set. The algorithm was applied to data from NASA's classic Airborne Visible and Infrared Imaging Spectrometer (AVIRIS-C; GSD 18m), collected between 2013-2015 (summer and fall) over our study area (NEON's Southwest Pacific Domain; Fresno, CA) to generate higher spatial resolution imagery (GSD 9m). The reconstructed data set was validated via comparison to NEON's imaging spectrometer (NIS) data (GSD 1m). The results showed that the algorithm worked well with the AVIRIS-C data and could be applied to the HyspIRI data.
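    The reconstruction condition described above (at least r2 images aligned on a grid of x/r) can be illustrated with a toy interleaving scheme: if the sub-pixel sampling phases are known and fall exactly on the fine grid, the low-resolution frames can simply be slotted into the high-resolution grid. This is a hedged numpy sketch of that idea, not the fusion algorithm used in the study; the function name and synthetic test data are assumptions.

```python
import numpy as np

def interleave_superres(frames, offsets, r=2):
    """Reconstruct an r-times finer grid from r**2 co-registered low-res frames.

    frames  : list of 2D arrays, all the same shape (H, W).
    offsets : list of (dy, dx) sampling phases in fine-grid units (0 .. r-1).
    Returns an (H*r, W*r) array where each fine-grid cell is filled by the frame
    whose sampling phase lands on it."""
    H, W = frames[0].shape
    hi = np.zeros((H * r, W * r), dtype=float)
    for frame, (dy, dx) in zip(frames, offsets):
        hi[dy::r, dx::r] = frame
    return hi

# Synthetic check: a fine-grid "truth" sampled at four phases is recovered exactly.
truth = np.random.rand(64, 64)
frames = [truth[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]
offsets = [(dy, dx) for dy in (0, 1) for dx in (0, 1)]
assert np.allclose(interleave_superres(frames, offsets, r=2), truth)
```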

  10. Interferometric phase measurement techniques for coherent beam combining

    NASA Astrophysics Data System (ADS)

    Antier, Marie; Bourderionnet, Jérôme; Larat, Christian; Lallier, Eric; Primot, Jérôme; Brignon, Arnaud

    2015-03-01

    Coherent beam combining of fiber amplifiers provides an attractive means of reaching high laser power. In an interferometric phase measurement, the beams issued from each fiber to be combined are imaged onto a sensor and interfere with a reference plane wave. This registration of interference patterns on a camera allows the measurement of the exact phase error of each fiber beam in a single shot. Therefore, this method is a promising candidate for combining a very large number of fibers. Based on this technique, several architectures can be proposed to coherently combine a high number of fibers. The first one, based on digital holography, directly transfers the image from the camera to a spatial light modulator (SLM). The generated hologram is used to compensate the phase errors induced by the amplifiers. This architecture therefore provides collective phase measurement and correction. Unlike previous digital holography techniques, the probe beams measuring the phase errors between the fibers co-propagate with the phase-locked signal beams. This architecture is compatible with the use of multi-stage isolated amplifying fibers. In that case, only 20 pixels per fiber on the SLM are needed to obtain a residual phase shift error below λ/10 rms. The second proposed architecture calculates the correction applied to each fiber channel by tracking the relative position of the interference fringes. In this case, a phase modulator is placed on each channel. In that configuration, only 8 pixels per fiber on the camera are required for stable closed-loop operation with a residual phase error of λ/20 rms, which demonstrates the scalability of this concept.

  11. A morphing-based scheme for large deformation analysis with stereo-DIC

    NASA Astrophysics Data System (ADS)

    Genovese, Katia; Sorgente, Donato

    2018-05-01

    A key step in the DIC-based image registration process is the definition of the initial guess for the non-linear optimization routine aimed at finding the parameters describing the pixel subset transformation. This initialization may prove very challenging and possibly fail when dealing with pairs of largely deformed images, such as those obtained from two angled views of non-flat objects or from the temporal undersampling of rapidly evolving phenomena. To address this problem, we developed a procedure that generates a sequence of intermediate synthetic images for gradually tracking the pixel subset transformation between the two extreme configurations. To this end, a suitable image warping function is defined over the entire image domain through the adoption of a robust feature-based algorithm followed by a NURBS-based interpolation scheme. This allows a fast and reliable estimation of the initial guess of the deformation parameters for the subsequent refinement stage of the DIC analysis. The proposed method is described step-by-step by illustrating the measurement of the large and heterogeneous deformation of a circular silicone membrane undergoing axisymmetric indentation. A comparative analysis of the results is carried out by taking as a benchmark a standard reference-updating approach. Finally, the morphing scheme is extended to the most general case of the correspondence search between two largely deformed textured 3D geometries. The feasibility of this latter approach is demonstrated on a very challenging case: the full-surface measurement of the severe deformation (> 150% strain) suffered by an aluminum sheet blank subjected to a pneumatic bulge test.

  12. A novel automated method for doing registration and 3D reconstruction from multi-modal RGB/IR image sequences

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2016-09-01

    In recent years, the use of multi-modal camera rigs consisting of an RGB sensor and an infrared (IR) sensor has become increasingly popular in surveillance and robotics applications. The advantages of using multi-modal camera rigs include improved foreground/background segmentation, a wider range of lighting conditions under which the system works, and richer information (e.g. visible light and heat signature) for target identification. However, the traditional computer vision method of mapping pairs of images using pixel intensities or image features is often not possible with an RGB/IR image pair. We introduce a novel method to overcome the lack of common features in RGB/IR image pairs by using a variational methods optimization algorithm to map the optical flow fields computed from different wavelength images. This results in the alignment of the flow fields, which in turn produces correspondences similar to those found in a stereo RGB/RGB camera rig using pixel intensities or image features. In addition to aligning the different wavelength images, these correspondences are used to generate dense disparity and depth maps. We obtain accuracies similar to other multi-modal image alignment methodologies as long as the scene contains sufficient depth variations, although a direct comparison is not possible because of the lack of standard image sets from moving multi-modal camera rigs. We test our method on synthetic optical flow fields and on real image sequences that we created with a multi-modal binocular stereo RGB/IR camera rig. We determine our method's accuracy by comparing against a ground truth.

  13. Dense soft tissue 3D reconstruction refined with super-pixel segmentation for robotic abdominal surgery.

    PubMed

    Penza, Veronica; Ortiz, Jesús; Mattos, Leonardo S; Forgione, Antonello; De Momi, Elena

    2016-02-01

    Single-incision laparoscopic surgery decreases postoperative infections, but introduces limitations in the surgeon's maneuverability and in the surgical field of view. This work aims at enhancing intra-operative surgical visualization by exploiting the 3D information about the surgical site. An interactive guidance system is proposed wherein the pose of preoperative tissue models is updated online. A critical process involves the intra-operative acquisition of tissue surfaces. It can be achieved using stereoscopic imaging and 3D reconstruction techniques. This work contributes to this process by proposing new methods for improved dense 3D reconstruction of soft tissues, which allows more accurate deformation identification and facilitates the registration process. Two methods for soft tissue 3D reconstruction are proposed: Method 1 follows the traditional approach of the block matching algorithm. Method 2 performs a nonparametric modified census transform to be more robust to illumination variation. The simple linear iterative clustering (SLIC) super-pixel algorithm is exploited for disparity refinement by filling holes in the disparity images. The methods were validated using two video datasets from the Hamlyn Centre, achieving an accuracy of 2.95 and 1.66 mm, respectively. A comparison with ground-truth data demonstrated that the disparity refinement procedure (1) increases the number of reconstructed points by up to 43% and (2) does not affect the accuracy of the 3D reconstructions significantly. Both methods give results that compare favorably with the state-of-the-art methods. The computational time constrains their real-time applicability, but it can be greatly improved by using a GPU implementation.
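    Method 2 above relies on a modified census transform for illumination-robust matching costs. The sketch below shows one common form of that transform (comparing each 3x3 neighborhood sample against the neighborhood mean); it is an illustrative assumption, not necessarily the exact variant used by the authors.

```python
import numpy as np

def modified_census_transform(img):
    """3x3 modified census transform: each pixel is described by a 9-bit code
    comparing every neighborhood sample against the neighborhood mean, which is
    less sensitive to illumination changes than raw intensities.  Borders are
    handled by wrap-around and then zeroed out."""
    img = img.astype(float)
    shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    neigh = np.stack([np.roll(np.roll(img, dy, axis=0), dx, axis=1)
                      for dy, dx in shifts])
    mean = neigh.mean(axis=0)
    bits = (neigh > mean).astype(np.uint32)
    weights = (1 << np.arange(9, dtype=np.uint32)).reshape(9, 1, 1)
    codes = (bits * weights).sum(axis=0)
    codes[0, :] = codes[-1, :] = 0
    codes[:, 0] = codes[:, -1] = 0
    return codes
```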

  14. Hierarchical imaging of the human knee

    NASA Astrophysics Data System (ADS)

    Schulz, Georg; Götz, Christian; Deyhle, Hans; Müller-Gerbl, Magdalena; Zanette, Irene; Zdora, Marie-Christine; Khimchenko, Anna; Thalmann, Peter; Rack, Alexander; Müller, Bert

    2016-10-01

    Among the clinically relevant imaging techniques, computed tomography (CT) reaches the best spatial resolution. Sub-millimeter voxel sizes are regularly obtained. For investigations at the true micrometer level, lab-based μCT has become the gold standard. The aim of the present study is the hierarchical investigation of a human knee post mortem using hard X-ray μCT. After the visualization of the entire knee using a clinical CT with a spatial resolution in the sub-millimeter range, a hierarchical imaging study was performed using a laboratory μCT system nanotom m. Due to the size of the whole knee the pixel length could not be reduced below 65 μm. These first two data sets were directly compared after a rigid registration using a cross-correlation algorithm. The μCT data set allowed an investigation of the trabecular structures of the bones. The further reduction of the pixel length down to 25 μm could be achieved by removing the skin and soft tissues and measuring the tibia and the femur separately. True micrometer resolution could be achieved after extracting cylinders of several millimeters in diameter from the two bones. The high resolution scans revealed the mineralized cartilage zone including the tide mark line as well as individual calcified chondrocytes. The visualization of soft tissues, including cartilage, was achieved with X-ray grating interferometry (XGI) at the ESRF and Diamond Light Source. Whereas the high-energy measurements at ESRF allowed the simultaneous visualization of soft and hard tissues, the low-energy results from Diamond Light Source made individual chondrocytes within the cartilage visible.

  15. Graphical user interface for a dual-module EMCCD x-ray detector array

    NASA Astrophysics Data System (ADS)

    Wang, Weiyuan; Ionita, Ciprian; Kuhls-Gilcrist, Andrew; Huang, Ying; Qu, Bin; Gupta, Sandesh K.; Bednarek, Daniel R.; Rudin, Stephen

    2011-03-01

    A new Graphical User Interface (GUI) was developed using Laboratory Virtual Instrumentation Engineering Workbench (LabVIEW) for a high-resolution, high-sensitivity Solid State X-ray Image Intensifier (SSXII), which is a new x-ray detector for radiographic and fluoroscopic imaging, consisting of an array of Electron-Multiplying CCDs (EMCCDs) each having a variable on-chip electron-multiplication gain of up to 2000x to reduce the effect of readout noise. To enlarge the field-of-view (FOV), each EMCCD sensor is coupled to an x-ray phosphor through a fiberoptic taper. Two EMCCD camera modules are used in our prototype to form a computer-controlled array; however, larger arrays are under development. The new GUI provides patient registration, EMCCD module control, image acquisition, and patient image review. Images from the array are stitched into a 2kx1k pixel image that can be acquired and saved at a rate of 17 Hz (faster with pixel binning). When reviewing the patient's data, the operator can select images from the patient's directory tree listed by the GUI and cycle through the images using a slider bar. Commonly used camera parameters including exposure time, trigger mode, and individual EMCCD gain can be easily adjusted using the GUI. The GUI is designed to accommodate expansion of the EMCCD array to even larger FOVs with more modules. The high-resolution, high-sensitivity EMCCD modular-array SSXII imager with the new user-friendly GUI should enable angiographers and interventionalists to visualize smaller vessels and endovascular devices, helping them to make more accurate diagnoses and to perform more precise image-guided interventions.

  16. Surging Across the Rings

    NASA Image and Video Library

    2007-07-26

    A surge in brightness appears on the rings directly opposite the Sun from the Cassini spacecraft. This "opposition surge" travels across the rings as the spacecraft watches. This view looks toward the sunlit side of the rings from about 9 degrees below the ringplane. The image was taken in visible light with the Cassini spacecraft wide-angle camera on June 12, 2007 using a spectral filter sensitive to wavelengths of infrared light centered at 853 nanometers. The view was acquired at a distance of approximately 524,374 kilometers (325,830 miles) from Saturn. Image scale is 31 kilometers (19 miles) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA08992

  17. Belle II silicon vertex detector

    NASA Astrophysics Data System (ADS)

    Adamczyk, K.; Aihara, H.; Angelini, C.; Aziz, T.; Babu, V.; Bacher, S.; Bahinipati, S.; Barberio, E.; Baroncelli, Ti.; Baroncelli, To.; Basith, A. K.; Batignani, G.; Bauer, A.; Behera, P. K.; Bergauer, T.; Bettarini, S.; Bhuyan, B.; Bilka, T.; Bosi, F.; Bosisio, L.; Bozek, A.; Buchsteiner, F.; Casarosa, G.; Ceccanti, M.; Červenkov, D.; Chendvankar, S. R.; Dash, N.; Divekar, S. T.; Doležal, Z.; Dutta, D.; Enami, K.; Forti, F.; Friedl, M.; Hara, K.; Higuchi, T.; Horiguchi, T.; Irmler, C.; Ishikawa, A.; Jeon, H. B.; Joo, C. W.; Kandra, J.; Kang, K. H.; Kato, E.; Kawasaki, T.; Kodyš, P.; Kohriki, T.; Koike, S.; Kolwalkar, M. M.; Kvasnička, P.; Lanceri, L.; Lettenbicher, J.; Maki, M.; Mammini, P.; Mayekar, S. N.; Mohanty, G. B.; Mohanty, S.; Morii, T.; Nakamura, K. R.; Natkaniec, Z.; Negishi, K.; Nisar, N. K.; Onuki, Y.; Ostrowicz, W.; Paladino, A.; Paoloni, E.; Park, H.; Pilo, F.; Profeti, A.; Rashevskaya, I.; Rao, K. K.; Rizzo, G.; Rozanska, M.; Sandilya, S.; Sasaki, J.; Sato, N.; Schultschik, S.; Schwanda, C.; Seino, Y.; Shimizu, N.; Stypula, J.; Suzuki, J.; Tanaka, S.; Tanida, K.; Taylor, G. N.; Thalmeier, R.; Thomas, R.; Tsuboyama, T.; Uozumi, S.; Urquijo, P.; Vitale, L.; Volpi, M.; Watanuki, S.; Watson, I. J.; Webb, J.; Wiechczynski, J.; Williams, S.; Würkner, B.; Yamamoto, H.; Yin, H.; Yoshinobu, T.; Belle II SVD Collaboration

    2016-09-01

    The Belle II experiment at the SuperKEKB collider in Japan is designed to indirectly probe new physics using approximately 50 times the data recorded by its predecessor. An accurate determination of the decay-point position of subatomic particles such as beauty and charm hadrons as well as a precise measurement of low-momentum charged particles will play a key role in this pursuit. These will be accomplished by an inner tracking device comprising two layers of pixelated silicon detector and four layers of silicon vertex detector based on double-sided microstrip sensors. We describe herein the design, prototyping and construction efforts of the Belle-II silicon vertex detector.

  18. Smectites on Cape York, Matijevic Hill, Mars, Observed and Characterized by Crism and Opportunity

    NASA Technical Reports Server (NTRS)

    Arvidson, R.; Bennett, K.; Catalano, J.; Fraeman, A.; Gellert, R.; Guinness, E.; Morris, R.; Murchie, S.; Smith, M.; Squyres, S.; hide

    2013-01-01

    Opportunity has conducted an extensive "walk-about" and set of in-situ measurements on strata exposed on the inboard side of Cape York, a segment of the dissected rim of the Noachian-age approx.22 km wide Endeavour crater [1] (Fig. 1). The specific region for the observations (Matijevic Hill) was chosen based on along track oversampled (ATO) CRISM hyperspectral observations (processed to 5 m/pixel) that showed the presence of exposures of Fe/Mg smectite phyllosilicates. We describe the first ground-based observations of phyllosilicates on Mars and discuss implications based on the combined CRISM and Opportunity measurements.

  19. SU-D-BRB-02: Investigations of Secondary Ion Distributions in Carbon Ion Therapy Using the Timepix Detector.

    PubMed

    Gwosch, K; Hartmann, B; Jakubek, J; Granja, C; Soukup, P; Jaekel, O; Martisikova, M

    2012-06-01

    Due to the high conformity of carbon ion therapy, unpredictable changes in the patient's geometry or deviations from the planned beam properties can result in changes of the dose distribution. PET has been used successfully to monitor the actual dose distribution in the patient. However, it suffers from biological washout processes and low detection efficiency. The purpose of this contribution is to investigate the potential of beam monitoring by detection of prompt secondary ions emerging from a homogeneous phantom, simulating a patient's head. Measurements were performed at the Heidelberg Ion-Beam Therapy Center (Germany) using a carbon ion pencil beam incident on a cylindrical PMMA phantom (16 cm diameter). For registration of the secondary ions, the Timepix detector was used. This pixelated silicon detector allows position-resolved measurements of individual ions (256×256 pixels, 55 μm pitch). To track the secondary ions we used several parallel detectors (3D voxel detector). For monitoring of the beam in the phantom, we analyzed the directional distribution of the registered ions. This distribution shows a clear dependence on the initial beam energy, width and position. Range differences of 1.7 mm were detectable, as well as vertical and horizontal shifts of the beam position by 1 mm. To estimate the clinical potential of this method, we measured the yield of secondary ions emerging from the phantom for a beam energy of 226 MeV/u. The differential distribution of secondary ions as a function of the angle from the beam axis for angles between 0 and 90° will be presented. In this setup the total yield in the forward hemisphere was found to be on the order of 10-1 secondary ions per primary carbon ion. The presented measurements show that tracking of secondary ions provides a promising method for non-invasive monitoring of ion beam parameters for clinically relevant carbon ion fluences. Research with the pixel detectors was carried out in the frame of the Medipix Collaboration. © 2012 American Association of Physicists in Medicine.

  20. Improvement in Recursive Hierarchical Segmentation of Data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2006-01-01

    A further modification has been made in the algorithm and implementing software reported in Modified Recursive Hierarchical Segmentation of Data (GSC-14681-1), NASA Tech Briefs, Vol. 30, No. 6 (June 2006), page 51. That software performs recursive hierarchical segmentation of data having spatial characteristics (e.g., spectral-image data). The output of a prior version of the software contained artifacts, including spurious segmentation-image regions bounded by processing-window edges. The modification for suppressing the artifacts, mentioned in the cited article, was addition of a subroutine that analyzes data in the vicinities of seams to find pairs of regions that tend to lie adjacent to each other on opposite sides of the seams. Within each such pair, pixels in one region that are more similar to pixels in the other region are reassigned to the other region. The present modification provides for a parameter ranging from 0 to 1 for controlling the relative priority of merges between spatially adjacent and spatially non-adjacent regions. At 1, spatially-adjacent- and spatially-non-adjacent-region merges have equal priority. At 0, only spatially-adjacent-region merges (no spectral clustering) are allowed. Between 0 and 1, spatially-adjacent-region merges have priority over spatially-non-adjacent ones.

  1. A Dragonfly-Shaped Crater

    NASA Image and Video Library

    2017-02-10

    The broader scene for this image is the fluidized ejecta from Bakhuysen Crater to the southwest, but there's something very interesting going on here on a much smaller scale. A small impact crater, about 25 meters in diameter, with a gouged-out trench extends to the south. The ejecta (rocky material ejected from the crater) mostly extends to the east and west of the crater. This "butterfly" ejecta is very common for craters formed at low impact angles. Taken together, these observations suggest that the crater-forming impactor came in at a low angle from the north, hit the ground and ejected material to the sides. The top of the impactor may have sheared off ("decapitating" the impactor) and continued downrange, forming the trench. We can't prove that's what happened, but this explanation is consistent with the observations. Regardless of how it formed, it's quite an interesting-looking "dragonfly" crater. The map is projected here at a scale of 50 centimeters (19.69 inches) per pixel. [The original image scale is 55.7 centimeters (21.92 inches) per pixel (with 2 x 2 binning); objects on the order of 167 centimeters (65.7 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21454

  2. Efficient reversible data hiding in encrypted image with public key cryptosystem

    NASA Astrophysics Data System (ADS)

    Xiang, Shijun; Luo, Xinrong

    2017-12-01

    This paper proposes a new reversible data hiding scheme for encrypted images by using the homomorphic and probabilistic properties of the Paillier cryptosystem. The proposed method can embed additional data directly into an encrypted image without any preprocessing operations on the original image. By selecting two pixels as a group for encryption, the data hider can retrieve the absolute differences of groups of two pixels by employing a modular multiplicative inverse method. Additional data can be embedded into the encrypted image by shifting the histogram of the absolute differences using the homomorphic property in the encrypted domain. On the receiver side, a legal user can extract the marked histogram in the encrypted domain in the same way as in the data hiding procedure. Then, the hidden data can be extracted from the marked histogram and the encrypted version of the original image can be restored by using inverse histogram shifting operations. In addition, the marked absolute differences can be computed after decryption for extraction of the additional data and restoration of the original image. Compared with previous state-of-the-art works, the proposed scheme effectively avoids preprocessing operations before encryption and can efficiently embed and extract data in the encrypted domain. The experiments on standard test images also confirm the effectiveness of the proposed scheme.
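    To make the homomorphic property the scheme relies on concrete, here is a toy Paillier example (tiny primes, for illustration only): the product of two ciphertexts decrypts to the sum of the plaintexts, which is what allows histogram operations to be performed directly in the encrypted domain. This is not the paper's embedding procedure, just a minimal sketch of the underlying cryptosystem property; the key sizes and pixel values are assumptions.

```python
from math import gcd
import random

# Toy Paillier cryptosystem (demo primes only; real keys use >1024-bit primes).
p, q = 293, 433
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Two pixel values encrypted as a group; multiplying the ciphertexts yields an
# encryption of their sum, without ever exposing the individual pixels.
pix1, pix2 = 187, 54
assert decrypt((encrypt(pix1) * encrypt(pix2)) % n2) == pix1 + pix2
```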

  3. Track reconstruction in the inhomogeneous magnetic field for Vertex Detector of NA61/SHINE experiment at CERN SPS

    NASA Astrophysics Data System (ADS)

    Merzlaya, Anastasia; NA61/SHINE Collaboration

    2017-01-01

    The heavy-ion programme of the NA61/SHINE experiment at the CERN SPS is expanding to allow precise measurements of exotic particles with decay lengths of a few hundred microns. A Vertex Detector for open charm measurements at the SPS is being constructed by the NA61/SHINE Collaboration to meet the challenges of high spatial resolution for secondary vertices and efficient track registration. This task is addressed by using coordinate-sensitive CMOS Monolithic Active Pixel Sensors with an extremely low material budget in the new Vertex Detector. A small-acceptance version of the Vertex Detector is being tested this year; later it will be expanded to a large-acceptance version. Simulation studies will be presented. A method of track reconstruction in the inhomogeneous magnetic field for the Vertex Detector was developed and implemented. Numerical calculations show the possibility of high-precision measurements of strange and multi-strange particles, as well as heavy flavours such as charmed particles, in heavy-ion collisions.

  4. Results of calibrations of the NOAA-11 AVHRR made by reference to calibrated SPOT imagery at White Sands, N.M

    NASA Technical Reports Server (NTRS)

    Nianzeng, Che; Grant, Barbara G.; Flittner, David E.; Slater, Philip N.; Biggar, Stuart F.; Jackson, Ray D.; Moran, M. S.

    1991-01-01

    The calibration method reported here makes use of the reflectances of several large, uniform areas determined from calibrated and atmospherically corrected SPOT Haute Resolution Visible (HRV) scenes of White Sands, New Mexico. These reflectances were used to predict the radiances in the first two channels of the NOAA-11 Advanced Very High Resolution Radiometer (AVHRR). The digital counts in the AVHRR image corresponding to these known reflectance areas were determined by the use of two image registration techniques. The plots of digital counts versus pixel radiance provided the calibration gains and offsets for the AVHRR. A reduction in the gains of 4 and 13 percent in channels 1 and 2 respectively was found during the period 1988-11-19 to 1990-6-21. An error budget is presented for the method and is extended to the case of cross-calibrating sensors on the same orbital platform in the Earth Observing System (EOS) era.
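    The gains and offsets mentioned above come from plots of digital counts versus predicted radiance; a minimal sketch of that linear fit is shown below. The numerical values and variable names are placeholders, not data from the paper.

```python
import numpy as np

# Predicted in-band radiances for the reference areas and the corresponding
# mean AVHRR digital counts extracted over those areas (placeholder numbers).
radiance = np.array([45.2, 78.9, 112.4, 150.1])
counts   = np.array([310.0, 520.0, 735.0, 970.0])

# Counts are modeled as a linear function of radiance: counts = gain * L + offset.
gain, offset = np.polyfit(radiance, counts, deg=1)
print(f"gain = {gain:.3f} counts per radiance unit, offset = {offset:.1f} counts")

# The calibration is then inverted to convert raw counts back to radiance.
def counts_to_radiance(dn):
    return (dn - offset) / gain
```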

  5. Using WorldView-2 Imagery to Track Flooding in Thailand in a Multi-Asset Sensorweb

    NASA Technical Reports Server (NTRS)

    McLaren, David; Doubleday, Joshua; Chien, Steve

    2012-01-01

    For the flooding seasons of 2011-2012, multiple space assets were used in a "sensorweb" to track major flooding in Thailand. WorldView-2 data used in this effort provided very high spatial resolution (2 m/pixel) multispectral imagery (8 bands spanning 0.45-1.05 micrometers) from which mostly automated workflows derived surface water extent and volumetric water information for use by a range of NGOs and national authorities. We first describe how WorldView-2 and its data were integrated into the overall flood tracking sensorweb. We next describe the Support Vector Machine learning techniques used to derive surface water extent classifiers. Then we describe the fusion of surface water extent and digital elevation map (DEM) data to derive volumetric water calculations. Finally, we discuss key future work such as speeding up the workflows and automating the data registration process (the only portion of the workflow requiring human input).
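    As a rough illustration of the Support Vector Machine classification step mentioned above, the sketch below trains a per-pixel water/non-water classifier on labeled band vectors and applies it to a scene. The features, labels, and parameters are placeholders, not those used in the actual workflow.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: (n_pixels, 8) WorldView-2 band values for labeled training pixels,
# y: 1 = water, 0 = non-water.  Random placeholders stand in for real labels.
rng = np.random.default_rng(0)
X = rng.random((500, 8))
y = (X[:, 6] > 0.5).astype(int)   # pretend one NIR-like band separates water

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X, y)

# Classify a full scene: reshape (rows, cols, bands) -> (n_pixels, bands) and back.
scene = rng.random((100, 120, 8))
water_mask = clf.predict(scene.reshape(-1, 8)).reshape(100, 120)
```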

  6. Extracting flat-field images from scene-based image sequences using phase correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caron, James N., E-mail: Caron@RSImd.com; Montes, Marcos J.; Obermark, Jerome L.

    Flat-field image processing is an essential step in producing high-quality and radiometrically calibrated images. Flat-fielding corrects for variations in the gain of focal plane array electronics and unequal illumination from the system optics. Typically, a flat-field image is captured by imaging a radiometrically uniform surface. The flat-field image is normalized and removed from the images. There are circumstances, such as with remote sensing, where a flat-field image cannot be acquired in this manner. For these cases, we developed a phase-correlation method that allows the extraction of an effective flat-field image from a sequence of scene-based displaced images. The method uses sub-pixel phase correlation image registration to align the sequence in order to estimate the static scene. The scene is removed from the sequence, producing a sequence of misaligned flat-field images. An average flat-field image is derived from the realigned flat-field sequence.
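    A hedged sketch of the described procedure is given below: frames are registered by sub-pixel phase correlation, the static scene is estimated by median-combining the aligned frames, and the scene is then divided out of each frame in its own geometry so the per-frame flat-field estimates can be averaged directly (which removes the explicit realignment step). The function choices, parameters, and the shift sign convention of the libraries used are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

def estimate_flat_field(frames, upsample=10):
    """Extract an approximate flat-field from displaced frames of a static scene."""
    ref = frames[0]
    # NOTE: verify the sign convention between phase_cross_correlation and
    # ndimage.shift for your library versions; it is assumed consistent here.
    shifts = [phase_cross_correlation(ref, f, upsample_factor=upsample)[0]
              for f in frames]

    # Align all frames to the reference and median-combine to estimate the scene.
    aligned = [shift(f, s, order=1, mode="nearest") for f, s in zip(frames, shifts)]
    scene = np.median(aligned, axis=0)

    flats = []
    for f, s in zip(frames, shifts):
        # Map the scene estimate back into each frame's own (detector) geometry
        # and divide it out, leaving the fixed-pattern flat-field response.
        scene_in_frame = shift(scene, -np.asarray(s), order=1, mode="nearest")
        flats.append(f / np.clip(scene_in_frame, 1e-6, None))

    flat = np.mean(flats, axis=0)
    return flat / flat.mean()   # normalize to unit mean
```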

  7. Scalable Algorithms for Global Scale Remote Sensing Applications

    NASA Astrophysics Data System (ADS)

    Vatsavai, R. R.; Bhaduri, B. L.; Singh, N.

    2015-12-01

    The recent decade has witnessed major changes on the Earth, for example, deforestation, varying cropping and human settlement patterns, and crippling damage due to disasters. Accurate assessment of damage caused by major natural and anthropogenic disasters is becoming critical due to increases in human and economic losses. This increase in loss of life and severe damage can be attributed to the growing population, as well as human migration to the disaster-prone regions of the world. Rapid assessment of these changes and dissemination of accurate information are critical for creating an effective emergency response. Change detection using high-resolution satellite images is a primary tool in assessing damage, monitoring biomass and critical infrastructure, and identifying new settlements. Existing change detection methods suffer from registration errors and are often based on pixel-wise (location-wise) comparison of spectral observations from a single sensor. In this paper we present a novel probabilistic change detection framework based on patch comparison, and a GPU implementation that supports near real-time rapid damage exploration.

  8. Analysis on the use of Multi-Sequence MRI Series for Segmentation of Abdominal Organs

    NASA Astrophysics Data System (ADS)

    Selver, M. A.; Selvi, E.; Kavur, E.; Dicle, O.

    2015-01-01

    Segmentation of abdominal organs from MRI data sets is a challenging task due to various limitations and artefacts. During routine clinical practice, radiologists use multiple MR sequences in order to analyze different anatomical properties. These sequences have different characteristics in terms of acquisition parameters (such as contrast mechanisms and pulse sequence designs) and image properties (such as pixel spacing, slice thicknesses and dynamic range). For a complete understanding of the data, computational techniques should combine the information coming from these various MRI sequences. These sequences are not acquired in parallel but in a sequential manner (one after another). Therefore, patient movements and respiratory motions change the position and shape of the abdominal organs. In this study, the magnitude of these effects is measured using three different symmetric surface distance metrics applied to three-dimensional data acquired from various MRI sequences. The results are compared to intra- and inter-observer differences, and discussions on using multiple MRI sequences for segmentation and the need for registration are presented.
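    For reference, one common way to compute a symmetric surface distance between two binary segmentations is sketched below using distance transforms; the exact metrics used in the study may differ, and the function names are illustrative.

```python
import numpy as np
from scipy import ndimage

def surface_voxels(mask):
    """Boundary voxels of a binary mask: the mask minus its erosion."""
    return mask & ~ndimage.binary_erosion(mask)

def average_symmetric_surface_distance(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """Mean of the distances from each surface voxel of A to the surface of B
    and vice versa, in physical units given by the voxel spacing."""
    surf_a = surface_voxels(mask_a.astype(bool))
    surf_b = surface_voxels(mask_b.astype(bool))

    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)

    d_ab = dist_to_b[surf_a]   # A-surface -> B-surface distances
    d_ba = dist_to_a[surf_b]   # B-surface -> A-surface distances
    return (d_ab.sum() + d_ba.sum()) / (d_ab.size + d_ba.size)
```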

  9. Diffraction-limited lucky imaging with a 12" commercial telescope

    NASA Astrophysics Data System (ADS)

    Baptista, Brian J.

    2014-08-01

    Here we demonstrate a novel lucky imaging camera which is designed to produce diffraction-limited imaging using small telescopes similar to ones used by many academic institutions for outreach and/or student training. We present a design that uses a Meade 12" SCT paired with an Andor iXon fast readout EMCCD. The PSF of the telescope is matched to the pixel size of the EMCCD by adding a simple, custom-fabricated, intervening optical system. We demonstrate performance of the system by observing both astronomical and terrestrial targets. The astronomical application requires simpler data reconstruction techniques as compared to the terrestrial case. We compare different lucky imaging registration and reconstruction algorithms for use with this imager for both astronomical and terrestrial targets. We also demonstrate how this type of instrument would be useful for both undergraduate and graduate student training. As an instructional aide, the instrument can provide a hands-on approach for teaching instrument design, standard data reduction techniques, lucky imaging data processing, and high resolution imaging concepts.
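    A minimal sketch of the basic lucky-imaging reconstruction loop (frame selection by a sharpness proxy, registration on the brightest pixel, shift-and-add) is shown below. It is a simplified stand-in for the registration and reconstruction algorithms compared in the paper; the quality metric and parameters are assumptions.

```python
import numpy as np
from scipy.ndimage import shift

def lucky_image(frames, keep_fraction=0.01):
    """Naive lucky-imaging reconstruction: rank frames by a sharpness proxy
    (peak pixel over total flux), keep the best few percent, register them on
    their brightest pixel, and shift-and-add."""
    frames = np.asarray(frames, dtype=float)
    quality = frames.max(axis=(1, 2)) / frames.sum(axis=(1, 2))
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = frames[np.argsort(quality)[::-1][:n_keep]]

    # Register each selected frame so its brightest pixel lands at the center.
    center = np.array(best[0].shape) // 2
    stack = []
    for f in best:
        peak = np.unravel_index(np.argmax(f), f.shape)
        stack.append(shift(f, center - np.array(peak), order=1, mode="constant"))
    return np.mean(stack, axis=0)
```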

  10. Automated corresponding point candidate selection for image registration using wavelet transformation neural network with rotation invariant inputs and context information about neighboring candidates

    NASA Astrophysics Data System (ADS)

    Okumura, Hiroshi; Suezaki, Masashi; Sueyasu, Hideki; Arai, Kohei

    2003-03-01

    An automated method that can select corresponding point candidates is developed. This method has the following three features: 1) employment of the RIN-net for corresponding point candidate selection; 2) employment of multi-resolution analysis with the Haar wavelet transformation for improvement of selection accuracy and noise tolerance; 3) employment of context information about corresponding point candidates for screening of selected candidates. Here, the 'RIN-net' means a back-propagation trained feed-forward 3-layer artificial neural network that is fed rotation invariants as input data. In our system, pseudo Zernike moments are employed as the rotation invariants. The RIN-net has an N x N pixel field of view (FOV). Some experiments are conducted to evaluate the corresponding point candidate selection capability of the proposed method by using various kinds of remotely sensed images. The experimental results show the proposed method requires fewer training patterns and less training time, and achieves higher selection accuracy than the conventional method.

  11. Video Image Tracking Engine

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Bryan, ThomasC. (Inventor); Book, Michael L. (Inventor)

    2004-01-01

    A method and system for processing an image including capturing an image and storing the image as image pixel data. Each image pixel datum is stored in a respective memory location having a corresponding address. Threshold pixel data is selected from the image pixel data and linear spot segments are identified from the threshold pixel data selected. The positions of only a first pixel and a last pixel for each linear segment are saved. Movement of one or more objects is tracked by comparing the positions of first and last pixels of a linear segment present in the captured image with respective first and last pixel positions in subsequent captured images. Alternatively, additional data for each linear data segment is saved, such as the sum of pixels and the weighted sum of pixels (i.e., each threshold pixel value is multiplied by that pixel's x-location).
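    A small sketch of the segment-summary idea described above is shown below: threshold the image, find contiguous runs in each row, and keep only the first/last pixel positions plus the optional sum and x-weighted sum per run. It is an illustration of the concept, not the patented hardware implementation; names and parameters are assumptions.

```python
import numpy as np

def extract_row_segments(image, threshold):
    """Find contiguous runs of above-threshold pixels in each image row and keep
    only a compact summary per run: first pixel, last pixel, sum of pixel values,
    and x-weighted sum (useful for centroiding)."""
    segments = []
    mask = image > threshold
    for y, row in enumerate(mask):
        # Run boundaries: +1 marks a run start, -1 marks the pixel after a run end.
        edges = np.diff(np.concatenate(([0], row.astype(np.int8), [0])))
        starts = np.flatnonzero(edges == 1)
        ends = np.flatnonzero(edges == -1) - 1
        for x0, x1 in zip(starts, ends):
            values = image[y, x0:x1 + 1].astype(float)
            segments.append({
                "first": (y, x0),
                "last": (y, x1),
                "sum": values.sum(),
                "weighted_sum": (values * np.arange(x0, x1 + 1)).sum(),
            })
    return segments
```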

  12. Spatial clustering of pixels of a multispectral image

    DOEpatents

    Conger, James Lynn

    2014-08-19

    A method and system for clustering the pixels of a multispectral image is provided. A clustering system computes a maximum spectral similarity score for each pixel that indicates the similarity between that pixel and its most similar neighboring pixel. To determine the maximum similarity score for a pixel, the clustering system generates a similarity score between that pixel and each of its neighboring pixels and then selects the similarity score that represents the highest similarity as the maximum similarity score. The clustering system may apply a filtering criterion based on the maximum similarity score so that pixels with similarity scores below a minimum threshold are not clustered. The clustering system changes the current pixel values of the pixels in a cluster based on an averaging of the original pixel values of the pixels in the cluster.
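    The neighbor-similarity computation described above can be sketched as follows, using cosine similarity between spectra as an assumed similarity measure (the record does not specify one); edge pixels are handled only approximately via wrap-around.

```python
import numpy as np

def max_neighbor_similarity(cube):
    """For each pixel of a multispectral cube (H, W, bands), return the highest
    cosine similarity between its spectrum and any of its 8 neighbors."""
    H, W, _ = cube.shape
    norm = cube / (np.linalg.norm(cube, axis=2, keepdims=True) + 1e-12)
    best = np.full((H, W), -np.inf)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(norm, dy, axis=0), dx, axis=1)
            sim = (norm * shifted).sum(axis=2)   # cosine similarity per pixel
            best = np.maximum(best, sim)
    return best

# Pixels whose best neighbor similarity falls below a minimum threshold are
# left unclustered, mirroring the filtering criterion described above.
cube = np.random.rand(50, 60, 6)
clusterable = max_neighbor_similarity(cube) >= 0.95
```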

  13. Dental non-linear image registration and collection method with 3D reconstruction and change detection

    NASA Astrophysics Data System (ADS)

    Rahmes, Mark; Fagan, Dean; Lemieux, George

    2017-03-01

    The capability of a software algorithm to automatically align same-patient dental bitewing and panoramic x-rays over time is complicated by differences in collection perspectives. We successfully used image correlation with an affine transform for each pixel to discover common image borders, followed by a non-linear homography perspective adjustment to closely align the images. However, significant improvements in image registration could be realized if images were collected from the same perspective, thus facilitating change analysis. The perspective differences due to current dental image collection devices are so significant that straightforward change analysis is not possible. To address this, a new custom dental tray could be used to provide the standard reference needed for consistent positioning of a patient's mouth. Similar to sports mouth guards, the dental tray could be fabricated in standard sizes from plastic and use integrated electronics that have been miniaturized. In addition, the x-ray source needs to be consistently positioned in order to collect images with similar angles and scales. Solving this pose correction is similar to solving for collection angle in aerial imagery for change detection. A standard collection system would provide a method for consistent source positioning using real-time sensor position feedback from a digital x-ray image reference. Automated, robotic sensor positioning could replace manual adjustments. Given an image set from a standard collection, a disparity map between images can be created using parallax from overlapping viewpoints to enable change detection. This perspective data can be rectified and used to create a three-dimensional dental model reconstruction.

  14. Change detection of medical images using dictionary learning techniques and PCA

    NASA Astrophysics Data System (ADS)

    Nika, Varvara; Babyn, Paul; Zhu, Hongmei

    2014-03-01

    Automatic change detection methods for identifying the changes of serial MR images taken at different times are of great interest to radiologists. The majority of existing change detection methods in medical imaging, and those of brain images in particular, include many preprocessing steps and rely mostly on statistical analysis of MRI scans. Although most methods utilize registration software, tissue classification remains a difficult and overwhelming task. Recently, dictionary learning techniques have been used in many areas of image processing, such as image surveillance, face recognition, remote sensing, and medical imaging. In this paper we present the Eigen-Block Change Detection algorithm (EigenBlockCD). It performs local registration and identifies the changes between consecutive MR images of the brain. Blocks of pixels from the baseline scan are used to train local dictionaries that are then used to detect changes in the follow-up scan. We use PCA to reduce the dimensionality of the local dictionaries and the redundancy of the data. Choosing the appropriate distance measure significantly affects the performance of our algorithm. We examine the differences between the L1 and L2 norms as two possible similarity measures in the EigenBlockCD. We show the advantages of the L2 norm over the L1 norm theoretically and numerically. We also demonstrate the performance of the EigenBlockCD algorithm for detecting changes of MR images and compare our results with those provided in recent literature. Experimental results with both simulated and real MRI scans show that the EigenBlockCD outperforms the previous methods. It detects clinical changes while ignoring the changes due to the patient's position and other acquisition artifacts.
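    A rough sketch inspired by the EigenBlockCD idea is given below: for each block of the follow-up scan, a local dictionary of nearby baseline blocks is reduced with PCA and the reconstruction residual serves as a change score. The block size, search radius, number of components, and overall structure are illustrative assumptions rather than the authors' algorithm.

```python
import numpy as np

def eigenblock_change_score(baseline, followup, block=8, search=2, n_comp=4):
    """Block-wise change score: for each non-overlapping block of the follow-up
    image, collect nearby baseline blocks (local dictionary), keep their top
    principal components, and use the reconstruction residual of the follow-up
    block as the change score."""
    H, W = baseline.shape
    scores = np.zeros((H // block, W // block))
    for bi in range(H // block):
        for bj in range(W // block):
            y, x = bi * block, bj * block
            target = followup[y:y + block, x:x + block].ravel().astype(float)

            # Local dictionary: baseline blocks shifted within +/- `search` pixels.
            atoms = []
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= H - block and 0 <= xx <= W - block:
                        atoms.append(baseline[yy:yy + block, xx:xx + block].ravel())
            A = np.array(atoms, dtype=float)

            # PCA of the local dictionary (mean removal + truncated SVD).
            mean = A.mean(axis=0)
            _, _, Vt = np.linalg.svd(A - mean, full_matrices=False)
            basis = Vt[:n_comp]

            # Residual after projecting the follow-up block onto the local basis.
            centered = target - mean
            recon = basis.T @ (basis @ centered)
            scores[bi, bj] = np.linalg.norm(centered - recon)
    return scores
```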

  15. A radiation detector design mitigating problems related to sawed edges

    NASA Astrophysics Data System (ADS)

    Aurola, A.; Marochkin, V.; Tuuva, T.

    2014-12-01

    In pixelated silicon radiation detectors that are utilized for the detection of UV, visible, and in particular Near Infra-Red (NIR) light, it is desirable to utilize a relatively thick, fully depleted Back-Side Illuminated (BSI) detector design providing 100% Fill Factor (FF), low Cross-Talk (CT), and high Quantum Efficiency (QE). The optimal thickness of such detectors is typically between 40 μm and 300 μm, and thus it is more or less mandatory to thin the detector wafer from the backside after the front side of the detector has been processed and before a conductive layer is formed on the backside. A TAIKO thinning process is optimal for such a thickness range since neither a support substrate on the front side nor lithographic steps on the backside are required. The conductive backside layer should, however, be homogeneous throughout the wafer and it should be biased from the front side of the detector. In order to provide good QE for blue and UV light, the conductive backside layer should be of the opposite doping type to the substrate. The problem with a homogeneous backside layer of the opposite doping type is that a lot of leakage current is typically generated at the sawed chip edges, which may increase the dark noise and the power consumption. These problems are substantially mitigated with a proposed detector edge arrangement, for which 2D simulation results are presented in this paper.

  16. Reflective photovoltaics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lentine, Anthony L.; Nielson, Gregory N.; Cruz-Campa, Jose Luis

    A photovoltaic module includes colorized reflective photovoltaic cells that act as pixels. The colorized reflective photovoltaic cells are arranged so that reflections from the photovoltaic cells or pixels visually combine into an image on the photovoltaic module. The colorized photovoltaic cell or pixel is composed of a set of 100 to 256 base color sub-pixel reflective segments or sub-pixels. The color of each pixel is determined by the combination of base color sub-pixels forming the pixel. As a result, each pixel can have a wide variety of colors using a set of base colors, which are created from sub-pixel reflective segments having standard film thicknesses.

  17. Improving the Space Surveillance Telescope's Performance Using Multi-Hypothesis Testing

    NASA Astrophysics Data System (ADS)

    Zingarelli, J. Chris; Pearce, Eric; Lambour, Richard; Blake, Travis; Peterson, Curtis J. R.; Cain, Stephen

    2014-05-01

    The Space Surveillance Telescope (SST) is a Defense Advanced Research Projects Agency program designed to detect objects in space like near Earth asteroids and space debris in the geosynchronous Earth orbit (GEO) belt. Binary hypothesis test (BHT) methods have historically been used to facilitate the detection of new objects in space. In this paper a multi-hypothesis detection strategy is introduced to improve the detection performance of SST. In this context, the multi-hypothesis testing (MHT) determines if an unresolvable point source is in either the center, a corner, or a side of a pixel in contrast to BHT, which only tests whether an object is in the pixel or not. The images recorded by SST are undersampled such as to cause aliasing, which degrades the performance of traditional detection schemes. The equations for the MHT are derived in terms of signal-to-noise ratio (S/N), which is computed by subtracting the background light level around the pixel being tested and dividing by the standard deviation of the noise. A new method for determining the local noise statistics that rejects outliers is introduced in combination with the MHT. An experiment using observations of a known GEO satellite are used to demonstrate the improved detection performance of the new algorithm over algorithms previously reported in the literature. The results show a significant improvement in the probability of detection by as much as 50% over existing algorithms. In addition to detection, the S/N results prove to be linearly related to the least-squares estimates of point source irradiance, thus improving photometric accuracy. The views expressed are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government.

  18. First results on DEPFET Active Pixel Sensors fabricated in a CMOS foundry—a promising approach for new detector development and scientific instrumentation

    NASA Astrophysics Data System (ADS)

    Aschauer, S.; Majewski, P.; Lutz, G.; Soltau, H.; Holl, P.; Hartmann, R.; Schlosser, D.; Paschen, U.; Weyers, S.; Dreiner, S.; Klusmann, M.; Hauser, J.; Kalok, D.; Bechteler, A.; Heinzinger, K.; Porro, M.; Titze, B.; Strüder, L.

    2017-11-01

    DEPFET Active Pixel Sensors (APS) have been introduced as focal plane detectors for X-ray astronomy already in 1996. Fabricated on high resistivity, fully depleted silicon and back-illuminated they can provide high quantum efficiency and low noise operation even at very high read rates. In 2009 a new type of DEPFET APS, the DSSC (DEPFET Sensor with Signal Compression) was developed, which is dedicated to high-speed X-ray imaging at the European X-ray free electron laser facility (EuXFEL) in Hamburg. In order to resolve the enormous contrasts occurring in Free Electron Laser (FEL) experiments, this new DSSC-DEPFET sensor has the capability of nonlinear amplification, that is, high gain for low intensities in order to obtain single-photon detection capability, and reduced gain for high intensities to achieve high dynamic range for several thousand photons per pixel and frame. We call this property "signal compression". Starting in 2015, we have been fabricating DEPFET sensors in an industrial scale CMOS foundry maintaining the outstanding proven DEPFET properties and adding new capabilities due to the industrial-scale CMOS process. We will highlight these additional features and describe the progress achieved so far. In a first attempt on double-sided polished 725 μm thick 200 mm high resistivity float zone silicon wafers all relevant device related properties have been measured, such as leakage current, depletion voltage, transistor characteristics, noise and energy resolution for X-rays and the nonlinear response. The smaller feature size provided by the new technology allows for an advanced design and significant improvements in device performance. A brief summary of the present status will be given as well as an outlook on next steps and future perspectives.

  19. Recent developments in OLED-based chemical and biological sensors

    NASA Astrophysics Data System (ADS)

    Shinar, Joseph; Zhou, Zhaoqun; Cai, Yuankun; Shinar, Ruth

    2007-09-01

    Recent developments in the structurally integrated OLED-based platform of luminescent chemical and biological sensors are reviewed. In this platform, an array of OLED pixels, which is structurally integrated with the sensing elements, is used as the photoluminescence (PL) excitation source. The structural integration is achieved by fabricating the OLED array and the sensing element on opposite sides of a common glass substrate or on two glass substrates that are attached back-to-back. As it does not require optical fibers, lenses, or mirrors, it results in a uniquely simple, low-cost, and potentially rugged geometry. The recent developments on this platform include the following: (1) Enhancing the performance of gas-phase and dissolved oxygen sensors. This is achieved by (a) incorporating high-dielectric TiO2 nanoparticles in the oxygen-sensitive Pt and Pd octaethylporphyrin (PtOEP and PdOEP, respectively)-doped polystyrene (PS) sensor films, and (b) embedding the oxygen-sensitive dyes in a matrix of polymer blends such as PS:polydimethylsiloxane (PDMS). (2) Developing sensor arrays for simultaneous detection of multiple serum analytes, including oxygen, glucose, lactate, and alcohol. The sensing element for each analyte consists of a PtOEP-doped PS oxygen sensor, and a solution containing the oxidase enzyme specific to the analyte. Each sensing element is coupled to two individually addressable OLED pixels and a Si photodiode photodetector (PD). (3) Enhancing the integration of the platform, whereby a PD array is also structurally integrated with the OLED array and sensing elements. This enhanced integration is achieved by fabricating an array of amorphous or nanocrystalline Si-based PDs, followed by fabrication of the OLED pixels in the gaps between these Si PDs.

  20. Correlation of Condylar Guidance Determined by Panoramic Radiographs to One Determined by Conventional Methods.

    PubMed

    Godavarthi, A Sowjanya; Sajjan, M C Suresh; Raju, A V Rama; Rajeshkumar, P; Premalatha, Averneni; Chava, Narayana

    2015-08-01

    To evaluate the feasibility of using panoramic radiographs as an alternative to an interocclusal recording method for determining the condylar guidance in dentate and edentulous conditions. 20 dentulous individuals aged 20-30 years and 20 edentulous patients aged 40-65 years were selected. An interocclusal bite registration was done in protrusive position for all the subjects. Orthopantomographs were made for all patients in open mouth position. A Hanau articulator was modified to record the angulations to an accuracy of 1°. Tracing of the glenoid fossa on the radiograph was done to measure the condylar guidance angles. Readings were recorded and analyzed by Friedman's test and the t-test. Condylar guidance values obtained by the interocclusal and radiographic methods in dentate individuals on the right and left sides were 40.55°, 37.1°, 40.15°, and 34.75°, respectively. In the edentulous individuals, the corresponding values on the right and left sides were 36.7°, 36.1°, 35.95°, and 33.6°, respectively. The difference was statistically significant (P < 0.001) in the dentate group and was not statistically significant (P = 0.6493) in the edentulous group. The panoramic radiograph can be used as an alternative to the interocclusal technique only in edentulous patients. Further studies comparing panoramic radiographs to jaw-tracking devices would substantiate the results of this study.
