Sample records for space-based multi-angle imaging

  1. Imaging of earthquake faults using small UAVs as a pathfinder for air and space observations

    USGS Publications Warehouse

    Donnellan, Andrea; Green, Joseph; Ansar, Adnan; Aletky, Joseph; Glasscoe, Margaret; Ben-Zion, Yehuda; Arrowsmith, J. Ramón; DeLong, Stephen B.

    2017-01-01

    Large earthquakes cause billions of dollars in damage and extensive loss of life and property. Geodetic and topographic imaging provide measurements of transient and long-term crustal deformation needed to monitor fault zones and understand earthquakes. Earthquake-induced strain and rupture characteristics are expressed in topographic features imprinted on the landscapes of fault zones. Small UAVs provide an efficient and flexible means to collect multi-angle imagery to reconstruct fine scale fault zone topography and provide surrogate data to determine requirements for and to simulate future platforms for air- and space-based multi-angle imaging.

  2. Sua Pan surface bidirectional reflectance: a validation experiment of the Multi-angle Imaging SpectroRadiometer (MISR) during SAFARI 2000

    NASA Technical Reports Server (NTRS)

    Abdou, Wedad A.; Pilorz, Stuart H.; Helmlinger, Mark C.; Diner, David J.; Conel, James E.; Martonchik, John V.; Gatebe, Charles K.; King, Michael D.; Hobbs, Peter V.

    2004-01-01

    The Southern Africa Regional Science Initiative (SAFARI 2000) dry season campaign was carried out during August and September 2000 at the peak of biomass burning. The intensive ground-based and airborne measurements in this campaign provided a unique opportunity to validate space sensors, such as the Multi-angle Imaging SpectroRadiometer (MISR), onboard NASA's EOS Terra platform.

  3. The Physics of Imaging with Remote Sensors: Photon State Space & Radiative Transfer

    NASA Technical Reports Server (NTRS)

    Davis, Anthony B.

    2012-01-01

    Standard (mono-pixel, steady-source) retrieval methodology is reaching its fundamental limit, even with access to multi-angle/multi-spectral photo-polarimetry. Two emerging new classes of retrieval algorithm are worth nurturing: multi-pixel and time-domain techniques, along with wave-radiometry transition regimes and cross-fertilization with bio-medical imaging. Physics-based remote sensing is framed around three questions: What is "photon state space"? What is "radiative transfer"? Is "the end" in sight? Two wide-open frontiers are illustrated with examples and variations.

  4. Eyjafjallajokull Volcano Plume Particle-Type Characterization from Space-Based Multi-angle Imaging

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph A.; Limbacher, James

    2012-01-01

    The Multi-angle Imaging SpectroRadiometer (MISR) Research Aerosol algorithm makes it possible to study individual aerosol plumes in considerable detail. From the MISR data for two optically thick, near-source plumes from the spring 2010 eruption of the Eyjafjallajökull volcano, we map aerosol optical depth (AOD) gradients and changing aerosol particle types with this algorithm; several days downwind, we identify the occurrence of volcanic ash particles and retrieve AOD, demonstrating the extent and the limits of ash detection and mapping capability with the multi-angle, multi-spectral imaging data. Retrieved volcanic plume AOD and particle microphysical properties are distinct from background values near-source, as well as for overwater cases several days downwind. The results also provide some indication that as they evolve, plume particles brighten, and average particle size decreases. Such detailed mapping offers context for suborbital plume observations having much more limited sampling. The MISR Standard aerosol product identified trends in plume properties similar to those from the Research algorithm, though with much smaller differences compared to background, and it does not resolve plume structure. Better optical analogs of non-spherical volcanic ash, and coincident suborbital data to validate the satellite retrieval results, are the factors most important for further advancing the remote sensing of volcanic ash plumes from space.

  5. Multi-Angle View of the Canary Islands

    NASA Technical Reports Server (NTRS)

    2000-01-01

    A multi-angle view of the Canary Islands in a dust storm, 29 February 2000. At left is a true-color image taken by the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite. This image was captured by the MISR camera looking at a 70.5-degree angle to the surface, ahead of the spacecraft. The middle image was taken by the MISR downward-looking (nadir) camera, and the right image is from the aftward 70.5-degree camera. The images are reproduced using the same radiometric scale, so variations in brightness, color, and contrast represent true variations in surface and atmospheric reflectance with angle. Windblown dust from the Sahara Desert is apparent in all three images, and is much brighter in the oblique views. This illustrates how MISR's oblique imaging capability makes the instrument a sensitive detector of dust and other particles in the atmosphere. Data for all channels are presented in a Space Oblique Mercator map projection to facilitate their co-registration. The images are about 400 km (250 miles) wide, with a spatial resolution of about 1.1 kilometers (1,200 yards). North is toward the top. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  6. Space-Based Remote Sensing of Atmospheric Aerosols: The Multi-Angle Spectro-Polarimetric Frontier

    NASA Technical Reports Server (NTRS)

    Kokhanovsky, A. A.; Davis, A. B.; Cairns, B.; Dubovik, O.; Hasekamp, O. P.; Sano, I.; Mukai, S.; Rozanov, V. V.; Litvinov, P.; Lapyonok, T.

    2015-01-01

    A review of optical instrumentation, forward modeling, and inverse-problem solution for polarimetric aerosol remote sensing from space is presented. Special emphasis is given to the description of current airborne and satellite imaging polarimeters and to modern satellite aerosol retrieval algorithms based on measurements of the Stokes vector of reflected solar light as detected on a satellite. Various underlying surface reflectance models are discussed and evaluated.

  7. Distinguishing remobilized ash from erupted volcanic plumes using space-borne multi-angle imaging.

    PubMed

    Flower, Verity J B; Kahn, Ralph A

    2017-10-28

    Volcanic systems are comprised of a complex combination of ongoing eruptive activity and secondary hazards, such as remobilized ash plumes. Similarities in the visual characteristics of remobilized and erupted plumes, as imaged by satellite-based remote sensing, complicate the accurate classification of these events. The stereo imaging capabilities of the Multi-angle Imaging SpectroRadiometer (MISR) were used to determine the altitude and distribution of suspended particles. Remobilized ash shows distinct dispersion, with particles distributed within ~1.5 km of the surface. Particle transport is consistently constrained by local topography, limiting dispersion pathways downwind. The MISR Research Aerosol (RA) retrieval algorithm was used to assess plume particle microphysical properties. Remobilized ash plumes displayed a dominance of large particles with consistent absorption and angularity properties, distinct from emitted plumes. The combination of vertical distribution, topographic control, and particle microphysical properties makes it possible to distinguish remobilized ash flows from eruptive plumes, globally.

  8. Up Close to Mimas

    NASA Technical Reports Server (NTRS)

    2005-01-01

    During its approach to Mimas on Aug. 2, 2005, the Cassini spacecraft narrow-angle camera obtained multi-spectral views of the moon from a range of 228,000 kilometers (142,500 miles).

    This is a narrow-angle, clear-filter image that was processed to enhance the contrast in brightness and sharpness of visible features.

    Herschel crater, a 140-kilometer-wide (88-mile) impact feature with a prominent central peak, is visible in the upper right of this image.

    This image was obtained when the Cassini spacecraft was above 25 degrees south latitude, 134 degrees west longitude. The Sun-Mimas-spacecraft angle was 45 degrees and north is at the top.

    The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging operations center is based at the Space Science Institute in Boulder, Colo.

    For more information about the Cassini-Huygens mission visit http://saturn.jpl.nasa.gov . The Cassini imaging team homepage is at http://ciclops.org .

  9. Integrated large view angle hologram system with multi-slm

    NASA Astrophysics Data System (ADS)

    Yang, ChengWei; Liu, Juan

    2017-10-01

    Recently, holographic display has attracted much attention for its ability to generate real-time 3D reconstructed images. Computer-generated holography (CGH) provides an effective way to produce holograms, and a spatial light modulator (SLM) is used to reconstruct the image. However, the reconstruction system is usually heavy and complex, and the view angle is limited by the pixel size and spatial bandwidth product (SBP) of the SLM. In this paper, a light, portable holographic display system is proposed by integrating the optical elements and host computer units, which significantly reduces the space occupied in the horizontal direction. The CGH is produced based on Fresnel diffraction and the point-source method. To reduce memory usage and image distortion, an optimized accurate compressed look-up table (AC-LUT) method is used to compute the hologram. In the system, six SLMs are concatenated into a curved plane, each loading a phase-only hologram of the object from a different angle, so the horizontal view angle of the reconstructed image can be expanded to about 21.8°.
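
    As a rough illustration of the view-angle limit mentioned above, the sketch below computes the diffraction-limited viewing angle of a single phase-only SLM from its pixel pitch, and the tiled angle for six SLMs. The wavelength and pixel-pitch values are assumptions for illustration; the abstract does not state them.

```python
import math

# Illustrative values only; the abstract does not give the SLM pixel pitch
# or the laser wavelength, so these numbers are assumptions.
wavelength = 532e-9   # green laser, metres
pixel_pitch = 8.0e-6  # SLM pixel pitch, metres
num_slms = 6          # SLMs concatenated on a curved plane (from the abstract)

# Maximum diffraction (viewing) angle of a single SLM, set by the pixel pitch:
# theta_max = 2 * arcsin(lambda / (2 * p))
single_view_deg = 2.0 * math.degrees(math.asin(wavelength / (2.0 * pixel_pitch)))

# If each SLM contributes a different angular sector of the object,
# tiling N SLMs expands the horizontal view angle roughly N-fold.
total_view_deg = num_slms * single_view_deg

print(f"single-SLM view angle ~ {single_view_deg:.2f} deg")
print(f"six tiled SLMs        ~ {total_view_deg:.1f} deg")  # on the order of the ~21.8 deg reported
```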

  10. An empirical study on the utility of BRDF model parameters and topographic parameters for mapping vegetation in a semi-arid region with MISR imagery

    USDA-ARS?s Scientific Manuscript database

    Multi-angle remote sensing has proven useful for mapping vegetation community types in desert regions. Based on Multi-angle Imaging Spectro-Radiometer (MISR) multi-angular images, this study compares roles played by Bidirectional Reflectance Distribution Function (BRDF) model parameters with th...

  11. Hawaii

    Atmospheric Science Data Center

    2014-05-15

    Big Island, Hawaii. Multi-angle Imaging SpectroRadiometer (MISR) images of the Big Island of Hawaii, April - June 2000. The images have been rotated so that ... NASA's Goddard Space Flight Center, Greenbelt, MD. The MISR data were obtained from the NASA Langley Research Center Atmospheric Science ...

  12. Three-dimensional face model reproduction method using multiview images

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio

    1991-11-01

    This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images, synthesize images of the model virtually viewed from different angles, and with natural shadow to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from front- and side-views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, the prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined by using face direction, namely rotation angle, which is detected based on the extracted feature points. After the 3D model is established, the new images are synthesized by mapping facial texture onto the model.

  13. An enhanced multi-view vertical line locus matching algorithm of object space ground primitives based on positioning consistency for aerial and space images

    NASA Astrophysics Data System (ADS)

    Zhang, Ka; Sheng, Yehua; Wang, Meizhen; Fu, Suxia

    2018-05-01

    The traditional multi-view vertical line locus (TMVLL) matching method is an object-space-based method that is commonly used to directly acquire spatial 3D coordinates of ground objects in photogrammetry. However, the TMVLL method can only obtain one elevation and lacks an accurate means of validating the matching results. In this paper, we propose an enhanced multi-view vertical line locus (EMVLL) matching algorithm based on positioning consistency for aerial or space images. The algorithm involves three components: confirming candidate pixels of the ground primitive in the base image, multi-view image matching based on the object space constraints for all candidate pixels, and validating the consistency of the object space coordinates with the multi-view matching result. The proposed algorithm was tested using actual aerial images and space images. Experimental results show that the EMVLL method successfully solves the problems associated with the TMVLL method, and has greater reliability, accuracy and computing efficiency.
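
    For readers unfamiliar with the vertical line locus idea, the following is a minimal sketch of the object-space search it rests on: candidate elevations along the vertical line through a ground primitive are projected into every image, and the elevation with the best multi-view agreement is kept. The `project` and `patch_corr` callables are hypothetical placeholders for the images' orientation model and a patch-similarity measure; this is not the EMVLL implementation, which adds candidate-pixel confirmation and positioning-consistency validation.

```python
import numpy as np

def vertical_line_locus(x, y, z_candidates, images, project, patch_corr, win=5):
    """Minimal sketch of a multi-view vertical line locus search.

    x, y         : planimetric object-space coordinates of the ground primitive
    z_candidates : elevations to test along the vertical line
    images       : list of 2D image arrays
    project      : hypothetical callable (X, Y, Z, image_index) -> (row, col),
                   i.e. the collinearity/orientation model of each image
    patch_corr   : callable returning the mean pairwise similarity of patches
    Returns the elevation with the highest multi-view agreement.
    """
    best_z, best_score = None, -np.inf
    for z in z_candidates:
        patches = []
        for i, img in enumerate(images):
            r, c = project(x, y, z, i)
            r, c = int(round(r)), int(round(c))
            h = win // 2
            if r - h < 0 or c - h < 0 or r + h >= img.shape[0] or c + h >= img.shape[1]:
                break  # candidate falls outside this image; skip this elevation
            patches.append(img[r - h:r + h + 1, c - h:c + h + 1].astype(float))
        else:
            score = patch_corr(patches)  # e.g. mean pairwise normalised cross-correlation
            if score > best_score:
                best_z, best_score = z, score
    return best_z, best_score
```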

  14. Validation of multi-angle imaging spectroradiometer aerosol products in China

    Treesearch

    J. Liu; X. Xia; Z. Li; P. Wang; M. Min; WeiMin Hao; Y. Wang; J. Xin; X. Li; Y. Zheng; Z. Chen

    2010-01-01

    Based on AErosol RObotic NETwork and Chinese Sun Hazemeter Network data, the Multi-angle Imaging SpectroRadiometer (MISR) level 2 aerosol optical depth (AOD) products are evaluated in China. The MISR retrievals depict well the temporal aerosol trend in China with correlation coefficients exceeding 0.8 except for stations located in northeast China and at the...

  15. Desert Dust Aerosol Air Mass Mapping in the Western Sahara, Using Particle Properties Derived from Space-Based Multi-Angle Imaging

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph; Petzold, Andreas; Wendisch, Manfred; Bierwirth, Eike; Dinter, Tilman; Esselborn, Michael; Fiebig, Marcus; Heese, Birgit; Knippertz, Peter; Mueller, Detlef

    2008-01-01

    Coincident observations made over the Moroccan desert during the Sahara mineral dust experiment (SAMUM) 2006 field campaign are used both to validate aerosol amount and type retrieved from multi-angle imaging spectroradiometer (MISR) observations, and to place the suborbital aerosol measurements into the satellite's larger regional context. On three moderately dusty days during which coincident observations were made, MISR mid-visible aerosol optical thickness (AOT) agrees with field measurements point-by-point to within 0.05 to 0.1. This is about as well as can be expected given spatial sampling differences; the space-based observations capture AOT trends and variability over an extended region. The field data also validate MISR's ability to distinguish and to map aerosol air masses, from the combination of retrieved constraints on particle size, shape and single-scattering albedo. For the three study days, the satellite observations (1) highlight regional gradients in the mix of dust and background spherical particles, (2) identify a dust plume most likely part of a density flow and (3) show an aerosol air mass containing a higher proportion of small, spherical particles than the surroundings, that appears to be aerosol pollution transported from several thousand kilometres away.

  16. Desert Dust Air Mass Mapping in the Western Sahara, using Particle Properties Derived from Space-based Multi-angle Imaging

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph; Petzold, Andreas; Wendisch, Manfred; Bierwirth, Eike; Dinter, Tilman; Fiebig, Marcus; Schladitz, Alexander; von Hoyningen-Huene, Wolfgang

    2008-01-01

    Coincident observations made over the Moroccan desert during the SAhara Mineral dUst experiMent (SAMUM) 2006 field campaign are used both to validate aerosol amount and type retrieved from Multi-angle Imaging SpectroRadiometer (MISR) observations, and to place the sub-orbital aerosol measurements into the satellite's larger regional context. On three moderately dusty days for which coincident observations were made, MISR mid-visible aerosol optical thickness (AOT) agrees with field measurements point-by-point to within 0.05 to 0.1. This is about as well as can be expected given spatial sampling differences; the space-based observations capture AOT trends and variability over an extended region. The field data also validate MISR's ability to distinguish and to map aerosol air masses, from the combination of retrieved constraints on particle size, shape, and single-scattering albedo. For the three study days, the satellite observations (a) highlight regional gradients in the mix of dust and background spherical particles, (b) identify a dust plume most likely part of a density flow, and (c) show an air mass containing a higher proportion of small, spherical particles than the surroundings, that appears to be aerosol pollution transported from several thousand kilometers away.

  17. Keyhole imaging method for dynamic objects behind the occlusion area

    NASA Astrophysics Data System (ADS)

    Hao, Conghui; Chen, Xi; Dong, Liquan; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Hui, Mei; Liu, Xiaohua; Wu, Hong

    2018-01-01

    A method of keyhole imaging based on a camera array is realized to obtain video imagery behind a keyhole in a shielded space at a relatively long distance. The multi-angle video images are obtained by using a 2×2 CCD camera array to capture the scene behind the keyhole from four directions, and are saved as frame sequences. This paper presents a method of video frame alignment. In order to remove the non-target area outside the aperture, we use the Canny operator and morphological methods to detect the image edges and fill the images. The stitching of the four images is accomplished on the basis of a two-image stitching algorithm: the SIFT method is adopted for the initial matching of images, and the RANSAC algorithm is then applied to eliminate wrong matching points and to obtain a homography matrix. A method of optimizing the transformation matrix is also proposed. Finally, the video image with a larger field of view behind the keyhole can be synthesized from the image frame sequence in which every single frame is stitched. The results show that the video is clear and natural, the brightness transition is smooth, there are no obvious artificial stitching marks, and the method can be applied in different engineering environments.
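
    The core two-image stitching step described above (SIFT matching, RANSAC homography estimation, warping) can be sketched with OpenCV as below. This is a hedged illustration, not the authors' code: frame alignment, Canny-based aperture masking, the transformation-matrix optimization, and the four-image combination are omitted.

```python
import cv2
import numpy as np

def stitch_pair(img1, img2, ratio=0.75, ransac_thresh=4.0):
    """Sketch of the two-image stitching step: SIFT matches -> RANSAC homography -> warp."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)

    # Initial matching with Lowe's ratio test
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(d2, d1, k=2)
    good = [m for m, n in raw if m.distance < ratio * n.distance]

    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects wrong matching points and estimates the homography matrix
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)

    # Warp img2 into img1's frame and paste img1 on top (simple overlay blend)
    h1, w1 = img1.shape[:2]
    h2, w2 = img2.shape[:2]
    pano = cv2.warpPerspective(img2, H, (w1 + w2, max(h1, h2)))
    pano[:h1, :w1] = img1
    return pano
```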

  18. Development of the algorithm of measurement data and tomographic section reconstruction results processing for evaluating the respiratory activity of the lungs using the multi-angle electric impedance tomography

    NASA Astrophysics Data System (ADS)

    Aleksanyan, Grayr; Shcherbakov, Ivan; Kucher, Artem; Sulyz, Andrew

    2018-04-01

    Continuous monitoring of the patient's breathing by the multi-angle electrical impedance tomography method provides images of conductivity change in the chest cavity throughout the monitoring period. Direct analysis of these images is difficult due to the large amount of information and the low resolution of images obtained by multi-angle electrical impedance tomography. This work presents a method for obtaining a graph of the respiratory activity of the lungs based on the results of continuous lung monitoring using the multi-angle electrical impedance tomography method. The method makes it possible to obtain a graph of the respiratory activity of the left and right lungs separately, as well as a summary graph, to which spirography processing methods can be applied.
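
    One simple way to realize such a respiratory-activity graph is to sum the reconstructed conductivity change inside left- and right-lung region masks for every frame of the monitoring sequence, as in the sketch below. The array and mask names are hypothetical, and the paper's actual aggregation may differ.

```python
import numpy as np

def respiratory_curves(frames, left_mask, right_mask):
    """Sketch: reduce a sequence of EIT conductivity-change images to
    per-frame respiratory activity values for each lung.

    frames     : array of shape (n_frames, H, W) with impedance/conductivity change
    left_mask  : boolean (H, W) mask of the left-lung region (assumed given)
    right_mask : boolean (H, W) mask of the right-lung region (assumed given)
    """
    left = frames[:, left_mask].sum(axis=1)    # left-lung activity per frame
    right = frames[:, right_mask].sum(axis=1)  # right-lung activity per frame
    total = left + right                       # summary graph
    return left, right, total
```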

  19. Angular difference feature extraction for urban scene classification using ZY-3 multi-angle high-resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Huang, Xin; Chen, Huijun; Gong, Jianya

    2018-01-01

    Spaceborne multi-angle images with a high-resolution are capable of simultaneously providing spatial details and three-dimensional (3D) information to support detailed and accurate classification of complex urban scenes. In recent years, satellite-derived digital surface models (DSMs) have been increasingly utilized to provide height information to complement spectral properties for urban classification. However, in such a way, the multi-angle information is not effectively exploited, which is mainly due to the errors and difficulties of the multi-view image matching and the inaccuracy of the generated DSM over complex and dense urban scenes. Therefore, it is still a challenging task to effectively exploit the available angular information from high-resolution multi-angle images. In this paper, we investigate the potential for classifying urban scenes based on local angular properties characterized from high-resolution ZY-3 multi-view images. Specifically, three categories of angular difference features (ADFs) are proposed to describe the angular information at three levels (i.e., pixel, feature, and label levels): (1) ADF-pixel: the angular information is directly extrapolated by pixel comparison between the multi-angle images; (2) ADF-feature: the angular differences are described in the feature domains by comparing the differences between the multi-angle spatial features (e.g., morphological attribute profiles (APs)). (3) ADF-label: label-level angular features are proposed based on a group of urban primitives (e.g., buildings and shadows), in order to describe the specific angular information related to the types of primitive classes. In addition, we utilize spatial-contextual information to refine the multi-level ADF features using superpixel segmentation, for the purpose of alleviating the effects of salt-and-pepper noise and representing the main angular characteristics within a local area. The experiments on ZY-3 multi-angle images confirm that the proposed ADF features can effectively improve the accuracy of urban scene classification, with a significant increase in overall accuracy (3.8-11.7%) compared to using the spectral bands alone. Furthermore, the results indicated the superiority of the proposed ADFs in distinguishing between the spectrally similar and complex man-made classes, including roads and various types of buildings (e.g., high buildings, urban villages, and residential apartments).
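
    As an illustration of the pixel-level case (ADF-pixel), the sketch below compares two co-registered multi-angle images pixel by pixel; the signed and normalised differences shown are plausible choices, not necessarily the exact features defined in the paper.

```python
import numpy as np

def adf_pixel(nadir, off_nadir, eps=1e-6):
    """Sketch of a pixel-level angular difference feature (ADF-pixel):
    per-pixel comparison of two co-registered multi-angle images.
    The normalisation is one plausible choice, not necessarily the paper's."""
    nadir = nadir.astype(np.float64)
    off_nadir = off_nadir.astype(np.float64)
    diff = nadir - off_nadir                                  # signed angular difference
    ratio = (nadir - off_nadir) / (nadir + off_nadir + eps)   # normalised difference
    return diff, ratio
```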

  20. Stereoscopic Retrieval of Smoke Plume Heights and Motion from Space-Based Multi-Angle Imaging, Using the MISR INteractive eXplorer(MINX)

    NASA Technical Reports Server (NTRS)

    Nelson, David L.; Kahn, Ralph A.

    2014-01-01

    Airborne particles (desert dust, wildfire smoke, volcanic effluent, urban pollution) affect Earth's climate as well as air quality and health. They are found in the atmosphere all over the planet, but vary immensely in amount and properties with season and location. Most aerosol particles are injected into the near-surface boundary layer, but some, especially wildfire smoke, desert dust and volcanic ash, can be injected higher into the atmosphere, where they can stay aloft longer, travel farther, produce larger climate effects, and possibly affect human and ecosystem health far downwind. So monitoring aerosol injection height globally can make important contributions to climate science and air quality studies. The Multi-angle Imaging Spectro-Radiometer (MISR) is a spaceborne instrument designed to study Earth's clouds, aerosols, and surface. Since late February 2000 it has been retrieving aerosol particle amount and properties, as well as cloud height and wind data, globally, about once per week. The MINX visualization and analysis tool complements the operational MISR data products, enabling users to retrieve heights and winds locally for detailed studies of smoke plumes, at higher spatial resolution and with greater precision than the operational product and other space-based, passive remote sensing techniques. MINX software is being used to provide plume height statistics for climatological studies as well as to investigate the dynamics of individual plumes, and to provide parameterizations for climate modeling.

  1. Design of a multi-channel free space optical interconnection component

    NASA Astrophysics Data System (ADS)

    Jia, Da-Gong; Zhang, Pei-Song; Jing, Wen-Cai; Tan, Jun; Zhang, Hong-Xia; Zhang, Yi-Mo

    2008-11-01

    A multi-channel free-space optical interconnection component, a fiber-optic rotary joint, was designed using a Dove prism. When the Dove prism is rotated by an angle α around the longitudinal axis, the image rotates by 2α. The optical interconnection component consists of the signal transmission system, the Dove prism, and the driving mechanism. Planetary gears are used to achieve the 2:1 speed ratio between the overall optical interconnection component and the Dove prism. C-lenses are employed to couple the different optical signals in the signal transmission system. The coupling loss between the receiving fiber of the stationary part and the transmitting fiber of the rotary part is measured.
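
    The two design relations stated above can be written compactly; the derotation condition in the comment is my own restatement of why the 2:1 planetary-gear ratio is used.

```latex
% Image rotation produced by a Dove prism rotated by \alpha about its long axis:
\theta_{\mathrm{image}} = 2\alpha
% Derotation condition (my framing): if the rotary part turns by \phi, driving the
% prism at \alpha = \phi/2 keeps the transmitted image orientation fixed, hence the
% 2:1 speed ratio realised by the planetary gears.
```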

  2. Analysis of the multigroup model for muon tomography based threat detection

    NASA Astrophysics Data System (ADS)

    Perry, J. O.; Bacon, J. D.; Borozdin, K. N.; Fabritius, J. M.; Morris, C. L.

    2014-02-01

    We compare different algorithms for detecting a 5 cm tungsten cube using cosmic ray muon technology. In each case, a simple tomographic technique was used for position reconstruction, but the scattering angles were used differently to obtain a density signal. Receiver operating characteristic curves were used to compare images made using average angle squared, median angle squared, average of the squared angle, and a multi-energy group fit of the angular distributions for scenes with and without a 5 cm tungsten cube. The receiver operating characteristic curves show that the multi-energy group treatment of the scattering angle distributions is the superior method for image reconstruction.
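
    The density signals being compared are simple statistics of the scattering-angle samples accumulated in each reconstructed voxel. The sketch below gives one plausible reading of three of them (the multi-energy-group fit of the angular distributions is omitted); the exact definitions used in the paper may differ.

```python
import numpy as np

def density_signals(theta):
    """Sketch: scattering-angle statistics used as per-voxel density signals.
    `theta` holds the scattering angles (radians) of muons assigned to one voxel."""
    theta = np.asarray(theta, dtype=float)
    return {
        "mean_sq_angle": np.mean(theta ** 2),      # average of the squared angle
        "sq_mean_angle": np.mean(theta) ** 2,      # average angle, squared
        "sq_median_angle": np.median(theta) ** 2,  # median angle, squared
    }
```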

  3. Shape and rotational elements of comet 67P/Churyumov-Gerasimenko derived by stereo-photogrammetric analysis of OSIRIS NAC image data

    NASA Astrophysics Data System (ADS)

    Preusker, Frank; Scholten, Frank; Matz, Klaus-Dieter; Roatsch, Thomas; Willner, Konrad; Hviid, Stubbe; Knollenberg, Jörg; Kührt, Ekkehard; Sierks, Holger

    2015-04-01

    The European Space Agency's Rosetta spacecraft is equipped with the OSIRIS imaging system, which consists of a wide-angle and a narrow-angle camera (WAC and NAC). After the approach phase, Rosetta was inserted into a descent trajectory of comet 67P/Churyumov-Gerasimenko (C-G) in early August 2014. Until early September, OSIRIS acquired several hundred NAC images of C-G's surface at different scales (from ~5 m/pixel during approach to ~0.9 m/pixel during descent). In that one-month observation period, the surface was imaged several times within different mapping sequences. With the comet's rotation period of ~12.4 h and the low spacecraft velocity (< 1 m/s), the entire NAC dataset provides multiple NAC stereo coverage, adequate for stereo-photogrammetric (SPG) analysis towards the derivation of 3D surface models. We constrained the OSIRIS NAC images with our stereo requirements (15° < stereo angles < 45°, incidence angles < 85°, emission angles < 45°, differences in illumination < 10°, scale better than 5 m/pixel) and extracted about 220 NAC images that provide at least triple stereo image coverage for the entire illuminated surface in about 250 independent multi-stereo image combinations. For each image combination we determined tie points by multi-image matching in order to set up a 3D control network and a dense surface point cloud for the precise reconstruction of C-G's shape. The control point network defines the input for a stereo-photogrammetric least squares adjustment. Based on the statistical analysis of adjustments we first refined C-G's rotational state (pole orientation and rotational period) and its behavior over time. Based upon this description of the orientation of C-G's body-fixed reference frame, we derived corrections for the nominal navigation data (pointing and position) within a final stereo-photogrammetric block adjustment where the mean 3D point accuracy of more than 100 million surface points has been improved from ~10 m to the sub-meter range. We finally applied point filtering and interpolation techniques to these surface 3D points and show the resulting SPG-based 3D surface model with a lateral sampling rate of about 2 m.

  4. Snowstorm Along the China-Mongolia-Russia Borders

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Heavy snowfall on March 12, 2004, across north China's Inner Mongolia Autonomous Region, Mongolia and Russia, caused train and highway traffic to stop for several days along the Russia-China border. This pair of images from the Multi-angle Imaging SpectroRadiometer (MISR) highlights the snow and surface properties across the region on March 13. The left-hand image is a multi-spectral false-color view made from the near-infrared, red, and green bands of MISR's vertical-viewing (nadir) camera. The right-hand image is a multi-angle false-color view made from the red band data of the 46-degree aftward camera, the nadir camera, and the 46-degree forward camera.

    About midway between the frozen expanse of China's Hulun Nur Lake (along the right-hand edge of the images) and Russia's Torey Lakes (above image center) is a dark linear feature that corresponds with the China-Mongolia border. In the upper portion of the images, many small plumes of black smoke rise from coal and wood fires and blow toward the southeast over the frozen lakes and snow-covered grasslands. Along the upper left-hand portion of the images, in Russia's Yablonovyy mountain range and the Onon River Valley, the terrain becomes more hilly and forested. In the nadir image, vegetation appears in shades of red, owing to its high near-infrared reflectivity. In the multi-angle composite, open-canopy forested areas are indicated by green hues. Since this is a multi-angle composite, the green color arises not from the color of the leaves but from the architecture of the surface cover. The green areas appear brighter at the nadir angle than at the oblique angles because more of the snow-covered surface in the gaps between the trees is visible. Color variations in the multi-angle composite also indicate angular reflectance properties for areas covered by snow and ice. The light blue color of the frozen lakes is due to the increased forward scattering of smooth ice, and light orange colors indicate rougher ice or snow, which scatters more light in the backward direction.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire Earth between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 22525. The panels cover an area of about 355 kilometers x 380 kilometers, and utilize data from blocks 50 to 52 within World Reference System-2 path 126.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  5. Method of lungs regional ventilation function assessment on the basis of continuous lung monitoring results using multi-angle electrical impedance tomography

    NASA Astrophysics Data System (ADS)

    Aleksanyan, Grayr; Shcherbakov, Ivan; Kucher, Artem; Sulyz, Andrew

    2018-04-01

    During continuous monitoring of the lungs with the multi-angle electrical impedance tomography method, a large array of images of impedance changes in the patient's chest cavity is accumulated. This article proposes a method for evaluating the regional ventilation function of the lungs based on the results of such continuous monitoring, which allows a single image of the thoracic cavity to be formed from the large array of impedance-change images. In the presence of pathologies in the lungs (neoplasms, fluid, pneumothorax, etc.), air filling in the affected areas will be disrupted, which will be displayed in the generated image. When continuous monitoring is conducted in several sections, a three-dimensional picture of the air filling of the thoracic cavity can be obtained.

  6. Assessing the Tundra-taiga Boundary with Multi-Sensor Satellite Data

    NASA Technical Reports Server (NTRS)

    Ranson, K. J.; Sun, G.; Kharuk, V. I.; Kovacs, K.

    2004-01-01

    Monitoring the dynamics of the circumpolar boreal forest (taiga) and Arctic tundra boundary is important for understanding the causes and consequences of changes observed in these areas. This ecotone, the world's largest, stretches for over 13,400 km and marks the transition between the northern limits of forests and the southern margin of the tundra. Because of the inaccessibility and large extent of this zone, remote sensing data can play an important role in mapping the characteristics and monitoring the dynamics. Basic understanding of the capabilities of existing spaceborne instruments for these purposes is required. In this study we examined the use of several remote sensing techniques for identifying the existing tundra-taiga ecotone. These include Landsat-7, MISR, MODIS and RADARSAT data. Historical cover maps, recent forest stand measurements and high-resolution IKONOS images were used for local ground truth. It was found that a tundra-taiga transitional area can be characterized using multi-spectral Landsat ETM+ summer images, multi-angle MISR red band reflectance images, RADARSAT images with a larger incidence angle, or multi-temporal and multi-spectral MODIS data. Because of different resolutions and spectral regions covered, the transition zone maps derived from different data types were not identical, but the general patterns were consistent.

  7. Coupled Retrieval of Aerosol Properties and Surface Reflection Using the Airborne Multi-angle SpectroPolarimetric Imager (AirMSPI)

    NASA Astrophysics Data System (ADS)

    Xu, F.; van Harten, G.; Kalashnikova, O. V.; Diner, D. J.; Seidel, F. C.; Garay, M. J.; Dubovik, O.

    2016-12-01

    The Airborne Multi-angle SpectroPolarimetric Imager (AirMSPI) [1] has been flying aboard the NASA ER-2 high altitude aircraft since October 2010. In step-and-stare operation mode, AirMSPI acquires radiance and polarization data at 355, 380, 445, 470*, 555, 660*, 865*, and 935 nm (* denotes polarimetric bands). The imaged area covers about 10 km by 10 km and is observed from 9 view angles between ±67° off of nadir. We have developed an efficient and flexible code that uses the information content of AirMSPI data for a coupled retrieval of aerosol properties and surface reflection. The retrieval was built based on the multi-pixel optimization concept [2], with the use of a hybrid radiative transfer model [3] that combines the Markov Chain [4] and adding/doubling methods [5]. The convergence and robustness of our algorithm are ensured by applying constraints on (a) the spectral variation of the Bidirectional Polarization Distribution Function (BPDF) and angular shape of the Bidirectional Reflectance Distribution Function (BRDF); (b) the spectral variation of aerosol optical properties; and (c) the spatial variation of aerosol parameters across neighboring image pixels. Our retrieval approach has been tested using over 20 AirMSPI datasets having low to moderately high aerosol loadings (0.02 < AOD at 550 nm < 0.45) and acquired during several field campaigns. Results are compared with AERONET aerosol reference data. We also explore the benefits of AirMSPI's ultraviolet and polarimetric bands as well as the use of multiple view angles. References: [1]. D. J. Diner, et al. Atmos. Meas. Tech. 6, 1717 (2013). [2]. O. Dubovik et al. Atmos. Meas. Tech. 4, 975 (2011). [3]. F. Xu et al. Atmos. Meas. Tech. 9, 2877 (2016). [4]. F. Xu et al. Opt. Lett. 36, 2083 (2011). [5]. J. E. Hansen and L.D. Travis. Space Sci. Rev. 16, 527 (1974).

  8. Hybrid Image Fusion for Sharpness Enhancement of Multi-Spectral Lunar Images

    NASA Astrophysics Data System (ADS)

    Awumah, Anna; Mahanti, Prasun; Robinson, Mark

    2016-10-01

    Image fusion enhances the sharpness of a multi-spectral (MS) image by incorporating spatial details from a higher-resolution panchromatic (Pan) image [1,2]. Known applications of image fusion for planetary images are rare, although image fusion is well-known for its applications to Earth-based remote sensing. In a recent work [3], six different image fusion algorithms were implemented and their performances were verified with images from the Lunar Reconnaissance Orbiter (LRO) Camera. The image fusion procedure obtained a high-resolution multi-spectral (HRMS) product from the LRO Narrow Angle Camera (used as Pan) and LRO Wide Angle Camera (used as MS) images. The results showed that the Intensity-Hue-Saturation (IHS) algorithm yields a high-spatial-quality product while the Wavelet-based image fusion algorithm best preserves spectral quality among all the algorithms. In this work we show the results of a hybrid IHS-Wavelet image fusion algorithm when applied to LROC MS images. The hybrid method provides the best HRMS product, both in terms of spatial resolution and preservation of spectral details. Results from hybrid image fusion can enable new science and increase the science return from existing LROC images. [1] Pohl, C., and John L. Van Genderen. "Review article multisensor image fusion in remote sensing: concepts, methods and applications." International Journal of Remote Sensing 19.5 (1998): 823-854. [2] Zhang, Yun. "Understanding image fusion." Photogramm. Eng. Remote Sens 70.6 (2004): 657-661. [3] Mahanti, Prasun et al. "Enhancement of spatial resolution of the LROC Wide Angle Camera images." Archives, XXIII ISPRS Congress Archives (2016).
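
    For context, the IHS half of such a hybrid scheme amounts to replacing the intensity of the upsampled multi-spectral image with the high-resolution panchromatic band. The sketch below shows plain IHS-style pansharpening using OpenCV's HSV conversion as a stand-in for an IHS transform; the wavelet detail injection that makes the published method hybrid is not reproduced here.

```python
import cv2
import numpy as np

def ihs_pansharpen(ms_rgb, pan):
    """Sketch of plain IHS pansharpening: upsample the MS image to the Pan grid,
    convert to HSV (an IHS-like space), swap in the Pan band as the intensity
    channel, and convert back. Not the hybrid IHS-Wavelet method of the paper."""
    pan = pan.astype(np.float32)
    pan = cv2.normalize(pan, None, 0, 255, cv2.NORM_MINMAX)

    # Upsample the MS image to the panchromatic resolution
    ms_up = cv2.resize(ms_rgb, (pan.shape[1], pan.shape[0]), interpolation=cv2.INTER_CUBIC)
    hsv = cv2.cvtColor(ms_up.astype(np.uint8), cv2.COLOR_RGB2HSV)

    hsv[..., 2] = pan.astype(np.uint8)        # replace intensity with the Pan band
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
```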

  9. An all-reflective wide-angle flat-field telescope for space

    NASA Technical Reports Server (NTRS)

    Hallam, K. L.; Howell, B. J.; Wilson, M. E.

    1984-01-01

    An all-reflective wide-angle flat-field telescope (WAFFT) designed and built at Goddard Space Flight Center demonstrates the markedly improved wide-angle imaging capability which can be achieved with a design based on a recently announced class of unobscured 3-mirror optical systems. Astronomy and earth observation missions in space dictate the necessity or preference for wide-angle all-reflective systems which can provide UV through IR wavelength coverage and tolerate the space environment. An initial prototype unit has been designed to meet imaging requirements suitable for monitoring the ultraviolet sky from space. The unobscured f/4, 36 mm efl system achieves a full 20 x 30 deg field of view with resolution over a flat focal surface that is well matched for use with advanced ultraviolet image array detectors. Aspects of the design and fabrication approach, which have especially important bearing on the system solution, are reviewed; and test results are compared with the analytic performance predictions. Other possible applications of the WAFFT class of imaging system are briefly discussed. The exceptional wide-angle, high quality resolution, and very wide spectral coverage of the WAFFT-type optical system could make it a very important tool for future space research.

  10. A target field design of open multi-purpose RF coil for musculoskeletal MR imaging at 3T.

    PubMed

    Gao, Fei; Zhang, Rui; Zhou, Diange; Wang, Xiaoying; Huang, Kefu; Zhang, Jue

    2016-10-01

    Musculoskeletal MR imaging in multi-angle situations plays an increasingly important role in assessing joint and muscle tissue. However, there are still limitations due to the closed structures of most conventional RF coils. In this study, a time-harmonic target-field method was employed to design an open multi-purpose coil (OMC) for multi-angle musculoskeletal MR imaging. The phantom imaging results suggested that the proposed OMC can achieve a homogeneously distributed magnetic field and a high signal-to-noise ratio (SNR) of 239.04±0.83 in the region of interest (ROI). The maximum temperature in the heating hazard test was 16°C lower than the limit specified in the standard, which indicated the safety of the designed OMC. Furthermore, to demonstrate the effectiveness of the proposed OMC for musculoskeletal MR imaging, especially for multi-angle imaging, a healthy volunteer was examined with MR imaging of the elbow, ankle and knee using the OMC. The in vivo imaging results showed that the proposed OMC is effective for MR imaging of musculoskeletal tissues at different body parts, with satisfactory B1 field homogeneity and SNR. Moreover, the open structure of the OMC provides a large joint movement region. The proposed open multi-purpose coil is feasible for musculoskeletal MR imaging and, potentially, is better suited to the evaluation of musculoskeletal tissues under multi-angle conditions. Copyright © 2016. Published by Elsevier Inc.

  11. Compensating Atmospheric Turbulence Effects at High Zenith Angles with Adaptive Optics Using Advanced Phase Reconstructors

    NASA Astrophysics Data System (ADS)

    Roggemann, M.; Soehnel, G.; Archer, G.

    Atmospheric turbulence degrades the resolution of images of space objects far beyond that predicted by diffraction alone. Adaptive optics telescopes have been widely used for compensating these effects, but as users seek to extend the envelopes of operation of adaptive optics telescopes to more demanding conditions, such as daylight operation, and operation at low elevation angles, the level of compensation provided will degrade. We have been investigating the use of advanced wave front reconstructors and post detection image reconstruction to overcome the effects of turbulence on imaging systems in these more demanding scenarios. In this paper we show results comparing the optical performance of the exponential reconstructor, the least squares reconstructor, and two versions of a reconstructor based on the stochastic parallel gradient descent algorithm in a closed loop adaptive optics system using a conventional continuous facesheet deformable mirror and a Hartmann sensor. The performance of these reconstructors has been evaluated under a range of source visual magnitudes and zenith angles ranging up to 70 degrees. We have also simulated satellite images, and applied speckle imaging, multi-frame blind deconvolution algorithms, and deconvolution algorithms that presume the average point spread function is known to compute object estimates. Our work thus far indicates that the combination of adaptive optics and post detection image processing will extend the useful envelope of the current generation of adaptive optics telescopes.

  12. MISR Where on Earth…? Mystery Image Quiz #29

    Atmospheric Science Data Center

    2017-09-07

    ... ready for a challenge? Become a geographical detective and solve the latest mystery quiz from NASA's MISR (Multi-angle Imaging SpectroRadiometer).

  13. Development of an Aerosol Opacity Retrieval Algorithm for Use with Multi-Angle Land Surface Images

    NASA Technical Reports Server (NTRS)

    Diner, D.; Paradise, S.; Martonchik, J.

    1994-01-01

    In 1998, the Multi-angle Imaging SpectroRadiometer (MISR) will fly aboard the EOS-AM1 spacecraft. MISR will enable unique methods for retrieving the properties of atmospheric aerosols, by providing global imagery of the Earth at nine viewing angles in four visible and near-IR spectral bands. As part of the MISR algorithm development, theoretical methods of analyzing multi-angle, multi-spectral data are being tested using images acquired by the airborne Advanced Solid-State Array Spectroradiometer (ASAS). In this paper we derive a method to be used over land surfaces for retrieving the change in opacity between spectral bands, which can then be used in conjunction with an aerosol model to derive a bound on absolute opacity.

  14. Mystery #28

    Atmospheric Science Data Center

    2017-06-14

    ... ready for a challenge? Become a geographical detective and solve the latest mystery quiz from NASA's MISR (Multi-angle Imaging SpectroRadiometer).

  15. Image degradation characteristics and restoration based on regularization for diffractive imaging

    NASA Astrophysics Data System (ADS)

    Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun

    2017-11-01

    The diffractive membrane optical imaging system is an important development direction for ultra-large-aperture, lightweight space cameras. However, physics-based studies of diffractive imaging degradation characteristics and of corresponding image restoration methods remain scarce. In this paper, a model of image quality degradation for the diffractive imaging system is first deduced mathematically from diffraction theory, and the degradation characteristics are analyzed. On this basis, a novel regularization model of image restoration that contains multiple prior constraints is established. An approach is then presented for solving the resulting equation, in which multiple norms and multiple regularization parameters (the priors' parameters) coexist. Subsequently, a space-variant-PSF image restoration method for large-aperture diffractive imaging systems is proposed, combined with a block-wise treatment of isoplanatic regions. Experimentally, the proposed algorithm demonstrates its capacity to achieve multi-objective improvement, including MTF enhancement, dispersion correction, noise and artifact suppression, and preservation of image detail, and produces satisfactory visual quality. This can provide a scientific basis for applications and has potential application prospects for future space applications of diffractive membrane imaging technology.
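
    A much simplified, single-prior analogue of the restoration step is frequency-domain deconvolution with a Tikhonov penalty, sketched below. The paper's model combines multiple priors and a space-variant PSF handled block-wise over isoplanatic regions; none of that is attempted here.

```python
import numpy as np

def tikhonov_deconvolve(blurred, psf, lam=1e-2):
    """Sketch of single-prior (Tikhonov/Wiener-like) deconvolution in the
    frequency domain. `psf` is assumed to be zero-padded and aligned to the
    image grid; `lam` is the single regularization parameter."""
    H = np.fft.fft2(psf, s=blurred.shape)        # PSF -> optical transfer function
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)  # regularised inverse filter
    return np.real(np.fft.ifft2(X))
```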

  16. Multi-angle Imaging SpectroRadiometer (MISR) Design Issues Influenced by Performance Requirements

    NASA Technical Reports Server (NTRS)

    Bruegge, C. J.; White, M. L.; Chrien, N. C. L.; Villegas, E. B.; Raouf, N.

    1993-01-01

    The design of an Earth Remote Sensing Sensor, such as the Multi-angle Imaging SpectroRadiometer (MISR), begins with a set of science requirements and is quickly followed by a set of instrument specifications.

  17. The Study of Residential Areas Extraction Based on GF-3 Texture Image Segmentation

    NASA Astrophysics Data System (ADS)

    Shao, G.; Luo, H.; Tao, X.; Ling, Z.; Huang, Y.

    2018-04-01

    The study uses standard-stripe, dual-polarization SAR images from GF-3 as the basic data, and compares and analyzes processes and methods for extracting residential areas based on texture segmentation of GF-3 images. GF-3 image processing includes radiometric calibration, complex-data conversion, multi-look processing and image filtering; a suitability analysis of different filtering methods showed that the Kuan filter is effective for extracting residential areas. Texture feature vectors were then calculated and analyzed using the Gray Level Co-occurrence Matrix (GLCM); the parameters examined include the moving-window size, step size and angle, and a window size of 11*11, a step of 1, and an angle of 0° proved effective and optimal for extracting residential areas. The GLCM texture images were segmented with the Fractal Net Evolution Approach (FNEA), and the residential areas were extracted by threshold setting. The extraction result was verified and assessed with a confusion matrix: overall accuracy is 0.897 and kappa is 0.881. For comparison, residential areas were also extracted by SVM classification of the GF-3 images; the overall accuracy is 0.09 lower than that of the extraction method based on GF-3 texture image segmentation. We conclude that residential area extraction based on multi-scale segmentation of GF-3 SAR texture images is simple and highly accurate. Since it is difficult to obtain multi-spectral remote sensing images in southern China, where the weather is cloudy and rainy throughout the year, this work has reference significance.
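
    The GLCM texture computation with the reported parameters (11*11 window, step 1, angle 0°) can be sketched with scikit-image as below. Which GLCM statistic the study used for its feature vectors is not stated, so contrast is an assumption, and the brute-force sliding window is written for clarity rather than speed.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture_image(img, win=11, step=1, angle=0.0, levels=32, prop="contrast"):
    """Sketch: per-pixel GLCM texture (window 11*11, step 1, angle 0 rad, as reported).
    The GLCM statistic (`contrast`) and the number of gray levels are assumptions."""
    # Quantize the image to `levels` gray levels, as required by graycomatrix
    img = np.uint8(np.floor(img.astype(float) / img.max() * (levels - 1)))
    h = win // 2
    out = np.zeros(img.shape, dtype=float)
    for r in range(h, img.shape[0] - h):
        for c in range(h, img.shape[1] - h):
            window = img[r - h:r + h + 1, c - h:c + h + 1]
            glcm = graycomatrix(window, distances=[step], angles=[angle],
                                levels=levels, symmetric=True, normed=True)
            out[r, c] = graycoprops(glcm, prop)[0, 0]
    return out
```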

  18. IoSiS: a radar system for imaging of satellites in space

    NASA Astrophysics Data System (ADS)

    Jirousek, M.; Anger, S.; Dill, S.; Schreiber, E.; Peichl, M.

    2017-05-01

    Space debris is nowadays one of the main threats to satellite systems, especially in low Earth orbit (LEO). More than 700,000 debris objects with the potential to destroy or damage a satellite are estimated to exist. The effects of an impact often cannot be identified directly from the ground, and high-resolution radar images are helpful in analyzing possible damage. DLR is therefore currently developing a radar system called IoSiS (Imaging of Satellites in Space), based on an existing steering antenna structure and our multi-purpose high-performance radar system GigaRad for experimental investigations. GigaRad is a multi-channel system operating at X band and using a bandwidth of up to 4.4 GHz in the IoSiS configuration, providing fully separated transmit (TX) and receive (RX) channels and separate antennas. For the observation of small satellites or space debris, a high-power traveling-wave-tube amplifier (TWTA) is mounted close to the TX antenna feed. For the experimental phase, IoSiS uses a 9 m TX and a 1 m RX antenna mounted on a common steerable positioner. High-resolution radar images are obtained by using inverse synthetic aperture radar (ISAR) techniques. Guided tracking of known objects during an overpass allows wide azimuth observation angles, so that high azimuth resolution, comparable to the range resolution, can be achieved. This paper outlines the main technical characteristics of the IoSiS radar system, including the basic setup of the antenna, the radar instrument with its RF error correction, and the measurement strategy. A short description of a simulation tool for the whole instrument and the expected images is also given.

  19. A Radiative Analysis of Angular Signatures and Oblique Radiance Retrievals over the Polar Regions from the Multi-Angle Imaging Spectroradiometer

    ERIC Educational Resources Information Center

    Wilson, Michael Jason

    2009-01-01

    This dissertation studies clouds over the polar regions using the Multi-angle Imaging SpectroRadiometer (MISR) on-board EOS-Terra. Historically, low thin clouds have been problematic for satellite detection, because these clouds have similar brightness and temperature properties to the surface they overlay. However, the oblique angles of MISR…

  20. Multi-Feature Based Information Extraction of Urban Green Space Along Road

    NASA Astrophysics Data System (ADS)

    Zhao, H. H.; Guan, H. Y.

    2018-04-01

    Green space along roads in a QuickBird image was studied based on multi-feature marks in the frequency domain. The magnitude spectrum of green space along roads was analysed, and recognition marks for the tonal feature, the contour feature and the road were built up from the distribution of frequency channels. Gabor filters in the frequency domain were used to detect the features based on these recognition marks. The detected features were combined as multi-feature marks, and watershed-based image segmentation was conducted to complete the extraction of green space along roads. The segmentation results were evaluated by F-measure, with P = 0.7605, R = 0.7639, F = 0.7622.
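
    The reported F value is consistent with the usual harmonic mean of precision and recall:

```latex
F = \frac{2PR}{P + R} = \frac{2 \times 0.7605 \times 0.7639}{0.7605 + 0.7639} \approx 0.7622
```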

  1. Multi exposure image fusion algorithm based on YCbCr space

    NASA Astrophysics Data System (ADS)

    Yang, T. T.; Fang, P. Y.

    2018-05-01

    To address the difficulty of optimizing both scene detail and visual effect in high-dynamic-range image synthesis, we propose a multi-exposure image fusion algorithm that processes low dynamic range images in YCbCr space and applies weighted blending to the luminance and chrominance (color-difference) components separately. The experimental results show that the method retains the color of the fused image while balancing details in the bright and dark regions of the high dynamic range image.

  2. Physical Interpretation of the Correlation Between Multi-Angle Spectral Data and Canopy Height

    NASA Technical Reports Server (NTRS)

    Schull, M. A.; Ganguly, S.; Samanta, A.; Huang, D.; Shabanov, N. V.; Jenkins, J. P.; Chiu, J. C.; Marshak, A.; Blair, J. B.; Myneni, R. B.

    2007-01-01

    Recent empirical studies have shown that multi-angle spectral data can be useful for predicting canopy height, but the physical reason for this correlation was not understood. We follow the concept of canopy spectral invariants, specifically escape probability, to gain insight into the observed correlation. Airborne Multi-angle Imaging SpectroRadiometer (AirMISR) and airborne Laser Vegetation Imaging Sensor (LVIS) data acquired during a NASA Terrestrial Ecology Program aircraft campaign underlie our analysis. Two multivariate linear regression models were developed to estimate LVIS height measures from 28 AirMISR multi-angle spectral reflectances and from the spectrally invariant escape probability at 7 AirMISR view angles. Both models achieved nearly the same accuracy, suggesting that canopy spectral invariant theory can explain the observed correlation. We hypothesize that the escape probability is sensitive to the aspect ratio (crown diameter to crown height). The multi-angle spectral data alone therefore may not provide enough information to retrieve canopy height globally.

  3. Description of patellar movement by 3D parameters obtained from dynamic CT acquisition

    NASA Astrophysics Data System (ADS)

    de Sá Rebelo, Marina; Moreno, Ramon Alfredo; Gobbi, Riccardo Gomes; Camanho, Gilberto Luis; de Ávila, Luiz Francisco Rodrigues; Demange, Marco Kawamura; Pecora, Jose Ricardo; Gutierrez, Marco Antonio

    2014-03-01

    The patellofemoral joint is critical in the biomechanics of the knee. Patellofemoral instability is a condition that generates pain and functional impairment and often requires surgery as part of orthopedic treatment. The dynamics of the patellofemoral joint have been analyzed with several medical imaging modalities. The clinical parameters assessed are mainly based on 2D measurements, such as the patellar tilt angle and the lateral shift, among others, and the acquisition protocols are mostly performed with the leg held static at fixed angles. A helical multi-slice CT scanner can capture and display the joint's movement performed actively by the patient; however, the orthopedic applications of this scanner have not yet been standardized or become widespread. In this work we present a method to evaluate the biomechanics of the patellofemoral joint during active contraction using multi-slice CT images. This approach can greatly improve the analysis of patellar instability by displaying the physiology during muscle contraction. The movement was evaluated by computing its 3D displacements and rotations between different knee angles. The first processing step registered the images at both angles based on the femur's position. The transformation matrix of the patella was then calculated, which provided the rotations and translations performed by the patella from its position in the first image to its position in the second image. Analysis of these parameters for all frames provided real 3D information about the patellar displacement.
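
    Once the patella's rigid transform between two knee angles is available (after femur-based registration), the reported 3D rotations and translations can be read directly off the 4x4 matrix. The sketch below uses SciPy; the Euler-angle convention is an assumption, since the abstract does not specify one.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def decompose_rigid_transform(T, euler_seq="xyz"):
    """Sketch: read 3D rotations and translations off a 4x4 rigid transform
    describing the patella's move between two knee angles (after femur-based
    registration). The Euler-angle sequence is an assumption."""
    T = np.asarray(T, dtype=float)
    R = T[:3, :3]                 # rotation part
    t = T[:3, 3]                  # translation part (same units as the CT spacing)
    angles_deg = Rotation.from_matrix(R).as_euler(euler_seq, degrees=True)
    return angles_deg, t
```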

  4. TRIO (Triplet Ionospheric Observatory) Mission

    NASA Astrophysics Data System (ADS)

    Lee, D.; Seon, J.; Jin, H.; Kim, K.; Lee, J.; Jang, M.; Pak, S.; Kim, K.; Lin, R. P.; Parks, G. K.; Halekas, J. S.; Larson, D. E.; Eastwood, J. P.; Roelof, E. C.; Horbury, T. S.

    2009-12-01

    Triplets of identical cubesats will be built to carry out the following scientific objectives: i) multi-point observations of ionospheric ENA (Energetic Neutral Atom) imaging, ii) ionospheric signatures of suprathermal electrons and ions associated with auroral acceleration as well as electron microbursts, and iii) complementary measurements of magnetic fields for the particle data. Each satellite, a cubesat for ion, neutral, electron, and magnetic fields (CINEMA), is equipped with a suprathermal electron, ion, neutral (STEIN) instrument and a 3-axis magnetometer based on magnetoresistive sensors. TRIO is being developed by three institutes: i) two CINEMA by Kyung Hee University (KHU) under the WCU program, ii) one CINEMA by UC Berkeley under NSF support, and iii) three magnetometers by Imperial College, respectively. Multi-spacecraft observations with the STEIN instruments will provide i) stereo ENA imaging over a wide range of local times, which is sensitive to the evolution of ring current phase space distributions, ii) suprathermal electron measurements with narrow spacings, which reveal the differential signature of accelerated electrons driven by Alfven waves and/or double layer formation in the ionosphere between the acceleration region and the aurora, and iii) suprathermal ion precipitation when the storm-time ring current appears. In addition, multi-spacecraft magnetic field measurements in low Earth orbit will allow the tracking of the phase fronts of ULF waves, FTEs, and quasi-periodic reconnection events between ground-based magnetometer data and upstream satellite data.

  5. Ten Years of MISR Observations from Terra: Looking Back, Ahead, and in Between

    NASA Technical Reports Server (NTRS)

    Diner, David J.; Ackerman, Thomas P.; Braverman, Amy J.; Bruegge, Carol J.; Chopping, Mark J.; Clothiaux, Eugene E.; Davies, Roger; Di Girolamo, Larry; Kahn, Ralph A.; Knyazikhin, Yuri; hide

    2010-01-01

    The Multi-angle Imaging SpectroRadiometer (MISR) instrument has been collecting global Earth data from NASA's Terra satellite since February 2000. With its nine along-track view angles, four visible/near-infrared spectral bands, intrinsic spatial resolution of 275 m, and stable radiometric and geometric calibration, no instrument that combines MISR's attributes has previously flown in space. The more than 10-year (and counting) MISR data record provides unprecedented opportunities for characterizing long-term trends in aerosol, cloud, and surface properties, and includes 3-D textural information conventionally thought to be accessible only to active sensors.

  6. The effects of navigator distortion and noise level on interleaved EPI DWI reconstruction: a comparison between image- and k-space-based method.

    PubMed

    Dai, Erpeng; Zhang, Zhe; Ma, Xiaodong; Dong, Zijing; Li, Xuesong; Xiong, Yuhui; Yuan, Chun; Guo, Hua

    2018-03-23

    To study the effects of 2D navigator distortion and noise level on interleaved EPI (iEPI) DWI reconstruction, using either the image- or k-space-based method. The 2D navigator acquisition was adjusted by reducing its echo spacing in the readout direction and undersampling in the phase encoding direction. A POCS-based reconstruction using the image-space sampling function (IRIS) algorithm (POCSIRIS) was developed to reduce the impact of navigator distortion. POCSIRIS was then compared with the original IRIS algorithm and a SPIRiT-based k-space algorithm under different navigator distortion and noise levels. Reducing the navigator distortion can improve the reconstruction of iEPI DWI. The proposed POCSIRIS and SPIRiT-based algorithms are more tolerant of different navigator distortion levels than the original IRIS algorithm. SPIRiT may be hindered by low SNR of the navigator. Multi-shot iEPI DWI reconstruction can be improved by reducing the 2D navigator distortion. Different reconstruction methods show variable sensitivity to navigator distortion and noise levels. Furthermore, the findings can be valuable in applications such as simultaneous multi-slice accelerated iEPI DWI and multi-slab diffusion imaging. © 2018 International Society for Magnetic Resonance in Medicine.

  7. Improved estimation of leaf area index and leaf chlorophyll content of a potato crop using multi-angle spectral data - potential of unmanned aerial vehicle imagery

    NASA Astrophysics Data System (ADS)

    Roosjen, Peter P. J.; Brede, Benjamin; Suomalainen, Juha M.; Bartholomeus, Harm M.; Kooistra, Lammert; Clevers, Jan G. P. W.

    2018-04-01

    In addition to single-angle reflectance data, multi-angular observations can be used as an additional information source for the retrieval of properties of an observed target surface. In this paper, we studied the potential of multi-angular reflectance data for improving leaf area index (LAI) and leaf chlorophyll content (LCC) estimation by numerical inversion of the PROSAIL model. The potential for improvement of LAI and LCC was evaluated for both measured and simulated data. The measured data were collected on 19 July 2016 by a frame camera mounted on an unmanned aerial vehicle (UAV) over a potato field, where eight experimental plots of 30 × 30 m were designed with different fertilization levels. Dozens of viewing angles, covering the hemisphere up to around 30° from nadir, were obtained through the large forward and sideways overlap of the collected images. Simultaneously with the UAV flight, in situ measurements of LAI and LCC were performed. Inversion of the PROSAIL model was done based on nadir data and on the multi-angular data collected by the UAV. Inversion based on the multi-angular data performed slightly better than inversion based on nadir data, indicated by a decrease in RMSE from 0.70 to 0.65 m2/m2 for the estimation of LAI, and from 17.35 to 17.29 μg/cm2 for the estimation of LCC, when moving from nadir to multi-angular data. In addition to inversions based on measured data, we simulated several datasets at different multi-angular configurations and compared the accuracy of their inversions with the inversion of data simulated at the nadir position. In general, the results based on simulated (synthetic) data indicated that the most accurate estimates were obtained when more viewing angles, better-distributed viewing angles, and viewing angles extending to larger zenith angles were available for inversion. Interestingly, even the multi-angular sampling configuration captured by the UAV platform (view zenith angles up to 30°) already yielded a large improvement compared to using only spectra simulated at the nadir position. The results of this study show that the estimation of LAI and LCC by numerical inversion of the PROSAIL model can be improved when multi-angular observations are introduced. However, for the potato crop, PROSAIL inversion of the measured data showed only moderate accuracy and slight improvements.
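
    A hedged sketch of numerical model inversion in the spirit described above: estimate LAI and LCC by minimizing the misfit between measured multi-angular reflectances and a canopy forward model. The `forward_model` callable and the `toy_model` below are hypothetical stand-ins for PROSAIL, which takes many more parameters:

    ```python
    # Sketch of bounded least-squares inversion of a canopy reflectance model.
    import numpy as np
    from scipy.optimize import minimize

    def invert(measured, view_angles, forward_model):
        """measured: array (n_angles, n_bands); forward_model(lai, lcc, angles) -> same shape."""
        def cost(params):
            lai, lcc = params
            simulated = forward_model(lai, lcc, view_angles)
            return np.sum((simulated - measured) ** 2)        # least-squares misfit

        # Bounded optimization over plausible LAI (m2/m2) and LCC (ug/cm2) ranges.
        result = minimize(cost, x0=[3.0, 40.0], bounds=[(0.0, 8.0), (0.0, 100.0)],
                          method="L-BFGS-B")
        return result.x                                       # estimated (LAI, LCC)

    if __name__ == "__main__":
        # Toy forward model standing in for PROSAIL, purely for demonstration.
        def toy_model(lai, lcc, angles):
            angular = 1.0 - np.exp(-0.5 * lai) * np.cos(np.radians(angles))
            bands = np.array([0.05, 0.08 + 0.002 * lcc, 0.30])
            return np.outer(angular, bands)

        angles = np.array([0.0, 10.0, 20.0, 30.0])
        measured = toy_model(4.2, 55.0, angles)               # synthetic "observation"
        print(invert(measured, angles, toy_model))
    ```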

  8. Polarimetric analysis of a CdZnTe spectro-imager under multi-pixel irradiation conditions

    NASA Astrophysics Data System (ADS)

    Pinto, M.; da Silva, R. M. Curado; Maia, J. M.; Simões, N.; Marques, J.; Pereira, L.; Trindade, A. M. F.; Caroli, E.; Auricchio, N.; Stephen, J. B.; Gonçalves, P.

    2016-12-01

    So far, polarimetry in high-energy astrophysics has been insufficiently explored due to the complexity of the required detection, electronic and signal processing systems. However, its importance is today largely recognized by the astrophysical community, and the next generation of high-energy space instruments will therefore certainly provide polarimetric observations contemporaneously with spectroscopy and imaging. We have been participating in high-energy observatory proposals submitted to ESA Cosmic Vision calls, such as GRI (Gamma-Ray Imager), DUAL and ASTROGAM, where the main instrument was a spectro-imager with polarimetric capabilities. More recently, the H2020 AHEAD project was launched with the objective of promoting more coherent and mature future high-energy space mission proposals. In this context of high-energy proposal development, we have tested a CdZnTe detection plane prototype polarimeter under a partially polarized gamma-ray beam generated from an aluminum target irradiated by a 22Na (511 keV) radioactive source. The polarized beam cross section was 1 cm2, allowing the irradiation of a wide multi-pixelated area in which all the pixels operate simultaneously as scatterers and absorbers. The methods implemented to analyze such multi-pixel irradiation are similar to those required to analyze a spectro-imager polarimeter operating in space, since celestial source photons irradiate its full pixelated area. Correction methods to mitigate systematic errors inherent to CdZnTe and to the experimental conditions were also implemented. The polarization level (~40%) and the polarization angle (precision of ±5° up to ±9°) obtained under multi-pixel irradiation conditions are presented and compared with simulated data.

  9. Kinematics of reflections in subsurface offset and angle-domain image gathers

    NASA Astrophysics Data System (ADS)

    Dafni, Raanan; Symes, William W.

    2018-05-01

    Seismic migration in the angle domain generates multiple images of the earth's interior in which reflection takes place at different scattering angles. Mechanically, the angle-dependent reflection is restricted to happen instantaneously and at a fixed point in space: an incident wave hits a discontinuity in the subsurface medium and instantly generates a scattered wave at the same common point of interaction. Alternatively, the angle-domain image may be associated with space-shift (subsurface offset) extended migration that artificially splits the reflection geometry, meaning that incident and scattered waves interact at some offset distance. The geometric differences between the two approaches amount to contradictory angle-domain behaviour and unlike kinematic descriptions. We present a phase-space depiction of migration methods extended by this peculiar subsurface offset split and stress its profound dissimilarity. In spite of being in radical contradiction with the general physics, the subsurface offset reveals a link to some valuable angle-domain quantities via post-migration transformations. The angle quantities are indicated by the direction normal to the subsurface offset extended image. They specifically define the local dip and scattering angles if the velocity at the split reflection coordinates is the same for the incident and scattered wave pairs. Otherwise, the reflector normal is not a bisector of the opening angle, but of the corresponding slowness vectors. This evidence, together with the distinguished geometry configuration, fundamentally differentiates the angle-domain decomposition based on the subsurface offset split from the conventional decomposition at a common reflection point. An asymptotic simulation of angle-domain moveout curves in layered media exposes the notion of split versus common reflection point geometry. Traveltime inversion methods that involve subsurface offset extended migration must accommodate the split geometry in the inversion scheme for robust and successful convergence to the optimal velocity model.

  10. Multi-angle lensless digital holography for depth resolved imaging on a chip.

    PubMed

    Su, Ting-Wei; Isikman, Serhan O; Bishara, Waheb; Tseng, Derek; Erlinger, Anthony; Ozcan, Aydogan

    2010-04-26

    A multi-angle lensfree holographic imaging platform that can accurately characterize both the axial and lateral positions of cells located within multi-layered micro-channels is introduced. In this platform, lensfree digital holograms of the micro-objects on the chip are recorded at different illumination angles using partially coherent illumination. These digital holograms shift laterally on the sensor plane as the illumination angle of the source is tilted. Since the exact amount of this lateral shift of each object hologram can be calculated with an accuracy that beats the diffraction limit of light, the height of each cell above the substrate can be determined over a large field of view without the use of any lenses. We demonstrate the proof of concept of this multi-angle lensless imaging platform by using light emitting diodes to characterize various sized microparticles located on a chip with sub-micron axial and lateral localization over an approximately 60 mm² field of view. Furthermore, we successfully apply this lensless imaging approach to simultaneously characterize blood samples located in multi-layered micro-channels in terms of the counts, individual thicknesses and volumes of the cells at each layer. Because this platform does not require any lenses, lasers or other bulky optical/mechanical components, it provides a compact and high-throughput alternative to conventional approaches for cytometry and diagnostics applications involving lab-on-a-chip systems.
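
    A toy illustration of the height-from-shift principle described above: tilting the illumination by an angle θ shifts an object's in-line hologram laterally by roughly z·tan(θ), where z is the object-to-sensor distance (refraction in the chip's cover glass is ignored in this sketch):

    ```python
    # Toy sketch: object height above the sensor from the lateral hologram shift
    # observed between two illumination angles. Cover-glass refraction is ignored.
    import numpy as np

    def object_height(shift_um, theta1_deg, theta2_deg):
        """shift_um: lateral hologram shift (um) between the two illumination angles."""
        t1, t2 = np.tan(np.radians([theta1_deg, theta2_deg]))
        return shift_um / (t2 - t1)           # distance above the sensor plane, um

    # Example: a 150 um shift between 0 deg and 30 deg illumination
    print(f"z ~= {object_height(150.0, 0.0, 30.0):.1f} um")
    ```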

  11. A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.

    PubMed

    Qian, Shuo; Sheng, Yang

    2011-11-01

    Photogrammetry has become an effective method for the determination of electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of a photogrammetry system for EEG localization. Methods in previous studies are all based on the use of either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study aims to present a novel photogrammetry system that can realize simultaneous acquisition of multi-angle head images from a single camera position. With two planar mirrors aligned at an angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. The elapsed time of the whole localization procedure is about 3 min, and the camera calibration computation takes about 1 min after the measurement of calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG positioning.

  12. A method for automatic grain segmentation of multi-angle cross-polarized microscopic images of sandstone

    NASA Astrophysics Data System (ADS)

    Jiang, Feng; Gu, Qing; Hao, Huizhen; Li, Na; Wang, Bingqian; Hu, Xiumian

    2018-06-01

    Automatic grain segmentation of sandstone partitions the mineral grains of a thin section into separate regions, which is the first step for computer-aided mineral identification and sandstone classification. Sandstone microscopic images contain a large number of mixed mineral grains, and the differences among adjacent grains, i.e., quartz, feldspar and lithic grains, are usually ambiguous, which makes grain segmentation difficult. In this paper, we take advantage of multi-angle cross-polarized microscopic images and propose a method for grain segmentation with high accuracy. The method consists of two stages: in the first stage, we enhance the SLIC (Simple Linear Iterative Clustering) algorithm, named MSLIC, to make use of multi-angle images and segment the images into boundary-adherent superpixels. In the second stage, we propose a region merging technique that combines coarse and fine merging algorithms. The coarse merging merges adjacent superpixels with less evident boundaries, and the fine merging merges ambiguous superpixels using spatially enhanced fuzzy clustering. Experiments are designed on 9 sets of multi-angle cross-polarized images taken from the three major types of sandstones. The results demonstrate both the effectiveness and potential of the proposed method, compared to the available segmentation methods.
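
    A rough sketch of the segmentation front end, standing in for the authors' MSLIC: the multi-angle frames are stacked as one multi-channel image and passed to standard SLIC, and per-superpixel mean feature vectors are computed for a subsequent merging stage (omitted here). The `channel_axis` argument assumes scikit-image 0.19 or later:

    ```python
    # Sketch: standard SLIC over a stack of multi-angle cross-polarized frames,
    # followed by per-superpixel mean features that a later region merge could use.
    import numpy as np
    from skimage.segmentation import slic

    def superpixels_multiangle(frames, n_segments=800, compactness=10.0):
        """frames: list of (H, W, 3) float images of the same thin section
        taken at different polarizer angles."""
        stack = np.concatenate(frames, axis=-1)               # (H, W, 3*N) feature image
        labels = slic(stack, n_segments=n_segments,
                      compactness=compactness, channel_axis=-1)
        # Mean multi-angle feature vector per superpixel (input to later merging).
        feats = np.array([stack[labels == k].mean(axis=0)
                          for k in np.unique(labels)])
        return labels, feats
    ```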

  13. Quantifying Astronaut Tasks: Robotic Technology and Future Space Suit Design

    NASA Technical Reports Server (NTRS)

    Newman, Dava

    2003-01-01

    The primary aim of this research effort was to advance the current understanding of astronauts' capabilities and limitations in space-suited EVA by developing models of the constitutive and compatibility relations of a space suit, based on experimental data gained from human test subjects as well as a 12 degree-of-freedom human-sized robot, and utilizing these fundamental relations to estimate a human factors performance metric for space suited EVA work. The three specific objectives are to: 1) Compile a detailed database of torques required to bend the joints of a space suit, using realistic, multi-joint human motions. 2) Develop a mathematical model of the constitutive relations between space suit joint torques and joint angular positions, based on experimental data and compare other investigators' physics-based models to experimental data. 3) Estimate the work envelope of a space suited astronaut, using the constitutive and compatibility relations of the space suit. The body of work that makes up this report includes experimentation, empirical and physics-based modeling, and model applications. A detailed space suit joint torque-angle database was compiled with a novel experimental approach that used space-suited human test subjects to generate realistic, multi-joint motions and an instrumented robot to measure the torques required to accomplish these motions in a space suit. Based on the experimental data, a mathematical model is developed to predict joint torque from the joint angle history. Two physics-based models of pressurized fabric cylinder bending are compared to experimental data, yielding design insights. The mathematical model is applied to EVA operations in an inverse kinematic analysis coupled to the space suit model to calculate the volume in which space-suited astronauts can work with their hands, demonstrating that operational human factors metrics can be predicted from fundamental space suit information.
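
    A simple illustration of building an empirical torque-angle relation from measured data, here with a cubic polynomial fit to made-up values; the report's actual model also depends on the joint-angle history (hysteresis), which is omitted:

    ```python
    # Sketch: fit an empirical constitutive relation tau(theta) for one suit joint.
    import numpy as np

    angle_deg = np.array([0, 15, 30, 45, 60, 75, 90])            # joint angles (deg)
    torque_nm = np.array([0.0, 1.2, 2.9, 5.1, 8.4, 12.6, 18.0])  # illustrative torques (N*m)

    coeffs = np.polyfit(angle_deg, torque_nm, deg=3)   # cubic torque-angle fit
    tau = np.poly1d(coeffs)
    print(f"Predicted suit-joint torque at 50 deg: {tau(50):.1f} N*m")
    ```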

  14. A Novel Multi-Aperture Based Sun Sensor Based on a Fast Multi-Point MEANSHIFT (FMMS) Algorithm

    PubMed Central

    You, Zheng; Sun, Jian; Xing, Fei; Zhang, Gao-Fei

    2011-01-01

    With the current widespread interest in the development and applications of micro/nanosatellites, there is a need for small, high-accuracy satellite attitude determination systems, because the star trackers widely used on large satellites are large and heavy, and therefore not suitable for installation on micro/nanosatellites. A sun sensor plus magnetometer has proven to be a better alternative, but conventional sun sensors have low accuracy and cannot meet the requirements of micro/nanosatellite attitude determination systems, so the development of a small, high-accuracy sun sensor with high reliability is very significant. This paper presents a multi-aperture based sun sensor, which is composed of a micro-electro-mechanical system (MEMS) mask with 36 apertures and an active pixel sensor (APS) CMOS placed below the mask at a certain distance. A novel fast multi-point MEANSHIFT (FMMS) algorithm is proposed to improve the accuracy and reliability, the two key performance features, of an APS sun sensor. When sunlight illuminates the sensor, a sun spot array image is formed on the APS detector. The sun angles can then be derived by analyzing the aperture image locations on the detector via the FMMS algorithm. With this system, the centroid accuracy of the sun image can reach 0.01 pixels, without increasing the weight and power consumption, even when some missing apertures and bad pixels appear on the detector due to aging of the devices and operation in a harsh space environment, while the pointing accuracy of a single-aperture sun sensor using the conventional correlation algorithm is only 0.05 pixels. PMID:22163770
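
    A hedged sketch of intensity-weighted mean-shift centroiding on a single sun-spot sub-image; the FMMS algorithm runs such refinement jointly over all 36 aperture spots, which is not reproduced here:

    ```python
    # Sketch: refine one sun-spot centroid by iterating the intensity-weighted mean
    # of pixels inside a circular window (a basic mean-shift step).
    import numpy as np

    def meanshift_centroid(img, x0, y0, radius=5, n_iter=20, tol=1e-3):
        """Refine an initial spot position (x0, y0) on a 2-D intensity image."""
        yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
        x, y = float(x0), float(y0)
        for _ in range(n_iter):
            mask = (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
            w = img * mask                                   # intensities inside the window
            if w.sum() == 0:
                break
            x_new = (w * xx).sum() / w.sum()                 # intensity-weighted mean position
            y_new = (w * yy).sum() / w.sum()
            converged = np.hypot(x_new - x, y_new - y) < tol
            x, y = x_new, y_new
            if converged:
                break
        return x, y
    ```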

  15. Science Discoveries Enabled by Hosting Optical Imagers on Commercial Satellite Constellations

    NASA Astrophysics Data System (ADS)

    Erlandson, R. E.; Kelly, M. A.; Hibbitts, C.; Kumar, C.; Dyrud, L. P.

    2012-12-01

    The advent of commercial space activities that utilize large space-based constellations provides a new and cost-effective opportunity to acquire multi-point observations. Previously, a custom-designed space-based constellation, while technically feasible, would require a substantial monetary investment. However, commercial industry is now entertaining the concept of hosting payloads on its space-based constellations, resulting in low-cost access to space. Examples include the low-Earth-orbit Iridium Next constellation as well as communication satellites in geostationary orbit. In some of these constellations data distribution can be provided in real time, a feature relevant to applications in the areas of space weather and disaster monitoring. From the perspective of new scientific discoveries enabled by low-cost access to space, the cost and thus the value proposition is dramatically changed. For example, a constellation of sixty-six satellites (Iridium Next) hosting a single-band or multi-spectral imager can now provide observations of the aurora with a spatial resolution of a few hundred meters at all local times and in both hemispheres simultaneously. Remote sensing of clouds is another example where it is now possible to acquire global imagery at resolutions between 100-1000 m. Finally, land-use imagery is another example where one can use either imaging or spectrographic imagers to solve a multitude of problems. In this work, we will discuss measurement architectures and the multi-disciplinary scientific discoveries that are enabled by large space-based constellations.

  16. On-Line Multi-Damage Scanning Spatial-Wavenumber Filter Based Imaging Method for Aircraft Composite Structure.

    PubMed

    Ren, Yuanqiang; Qiu, Lei; Yuan, Shenfang; Bao, Qiao

    2017-05-11

    Structural health monitoring (SHM) of aircraft composite structure is helpful to increase reliability and reduce maintenance costs. Due to the great effectiveness in distinguishing particular guided wave modes and identifying the propagation direction, the spatial-wavenumber filter technique has emerged as an interesting SHM topic. In this paper, a new scanning spatial-wavenumber filter (SSWF) based imaging method for multiple damages is proposed to conduct on-line monitoring of aircraft composite structures. Firstly, an on-line multi-damage SSWF is established, including the fundamental principle of SSWF for multiple damages based on a linear piezoelectric (PZT) sensor array, and a corresponding wavenumber-time imaging mechanism by using the multi-damage scattering signal. Secondly, through combining the on-line multi-damage SSWF and a PZT 2D cross-shaped array, an image-mapping method is proposed to conduct wavenumber synthesis and convert the two wavenumber-time images obtained by the PZT 2D cross-shaped array to an angle-distance image, from which the multiple damages can be directly recognized and located. In the experimental validation, both simulated multi-damage and real multi-damage introduced by repeated impacts are performed on a composite plate structure. The maximum localization error is less than 2 cm, which shows good performance of the multi-damage imaging method. Compared with the existing spatial-wavenumber filter based damage evaluation methods, the proposed method requires no more than the multi-damage scattering signal and can be performed without depending on any wavenumber modeling or measuring. Besides, this method locates multiple damages by imaging instead of the geometric method, which helps to improve the signal-to-noise ratio. Thus, it can be easily applied to on-line multi-damage monitoring of aircraft composite structures.

  17. On-Line Multi-Damage Scanning Spatial-Wavenumber Filter Based Imaging Method for Aircraft Composite Structure

    PubMed Central

    Ren, Yuanqiang; Qiu, Lei; Yuan, Shenfang; Bao, Qiao

    2017-01-01

    Structural health monitoring (SHM) of aircraft composite structure is helpful to increase reliability and reduce maintenance costs. Due to the great effectiveness in distinguishing particular guided wave modes and identifying the propagation direction, the spatial-wavenumber filter technique has emerged as an interesting SHM topic. In this paper, a new scanning spatial-wavenumber filter (SSWF) based imaging method for multiple damages is proposed to conduct on-line monitoring of aircraft composite structures. Firstly, an on-line multi-damage SSWF is established, including the fundamental principle of SSWF for multiple damages based on a linear piezoelectric (PZT) sensor array, and a corresponding wavenumber-time imaging mechanism by using the multi-damage scattering signal. Secondly, through combining the on-line multi-damage SSWF and a PZT 2D cross-shaped array, an image-mapping method is proposed to conduct wavenumber synthesis and convert the two wavenumber-time images obtained by the PZT 2D cross-shaped array to an angle-distance image, from which the multiple damages can be directly recognized and located. In the experimental validation, both simulated multi-damage and real multi-damage introduced by repeated impacts are performed on a composite plate structure. The maximum localization error is less than 2 cm, which shows good performance of the multi-damage imaging method. Compared with the existing spatial-wavenumber filter based damage evaluation methods, the proposed method requires no more than the multi-damage scattering signal and can be performed without depending on any wavenumber modeling or measuring. Besides, this method locates multiple damages by imaging instead of the geometric method, which helps to improve the signal-to-noise ratio. Thus, it can be easily applied to on-line multi-damage monitoring of aircraft composite structures. PMID:28772879

  18. Multi-Spectral Stereo Atmospheric Remote Sensing (STARS) for Retrieval of Cloud Properties and Cloud-Motion Vectors

    NASA Astrophysics Data System (ADS)

    Kelly, M. A.; Boldt, J.; Wilson, J. P.; Yee, J. H.; Stoffler, R.

    2017-12-01

    The multi-spectral STereo Atmospheric Remote Sensing (STARS) concept has the objective to provide high-spatial and -temporal-resolution observations of 3D cloud structures related to hurricane development and other severe weather events. The rapid evolution of severe weather demonstrates a critical need for mesoscale observations of severe weather dynamics, but such observations are rare, particularly over the ocean where extratropical and tropical cyclones can undergo explosive development. Coincident space-based measurements of wind velocity and cloud properties at the mesoscale remain a great challenge, but are critically needed to improve the understanding and prediction of severe weather and cyclogenesis. STARS employs a mature stereoscopic imaging technique on two satellites (e.g. two CubeSats, two hosted payloads) to simultaneously retrieve cloud motion vectors (CMVs), cloud-top temperatures (CTTs), and cloud geometric heights (CGHs) from multi-angle, multi-spectral observations of cloud features. STARS is a pushbroom system based on separate wide-field-of-view co-boresighted multi-spectral cameras in the visible, midwave infrared (MWIR), and longwave infrared (LWIR) with high spatial resolution (better than 1 km). The visible system is based on a pan-chromatic, low-light imager to resolve cloud structures under nighttime illumination down to ¼ moon. The MWIR instrument, which is being developed as a NASA ESTO Instrument Incubator Program (IIP) project, is based on recent advances in MWIR detector technology that requires only modest cooling. The STARS payload provides flexible options for spaceflight due to its low size, weight, power (SWaP) and very modest cooling requirements. STARS also meets AF operational requirements for cloud characterization and theater weather imagery. In this paper, an overview of the STARS concept, including the high-level sensor design, the concept of operations, and measurement capability will be presented.

  19. Multi-viewer tracking integral imaging system and its viewing zone analysis.

    PubMed

    Park, Gilbae; Jung, Jae-Hyun; Hong, Keehoon; Kim, Yunhee; Kim, Young-Hoon; Min, Sung-Wook; Lee, Byoungho

    2009-09-28

    We propose a multi-viewer tracking integral imaging system for viewing angle and viewing zone improvement. In the tracking integral imaging system, the pickup angles in each elemental lens of the lens array are decided by the positions of the viewers, which means the elemental image can be generated for each viewer to provide a wider viewing angle and a larger viewing zone. Our tracking integral imaging system is implemented with an infrared camera and infrared light emitting diodes, which can track the viewers' exact positions robustly. For multiple viewers to watch integrated three-dimensional images in the tracking integral imaging system, it is necessary to formulate the relationship between the positions of the multiple viewers and the elemental images. We analyzed this relationship and the conditions for multiple viewers, and verified them by implementing a two-viewer tracking integral imaging system.

  20. Large-viewing-angle electroholography by space projection

    NASA Astrophysics Data System (ADS)

    Sato, Koki; Obana, Kazuki; Okumura, Toshimichi; Kanaoka, Takumi; Nishikawa, Satoko; Takano, Kunihiko

    2004-06-01

    A hologram provides a full-parallax 3D image, which appears more natural because focusing and convergence coincide. We aim at a practical electro-holography system, since the viewing angle of conventional electro-holography is very small owing to the limited display pixel size. We are developing a new method that achieves a large viewing angle by space projection. A white laser illuminates a single DMD panel displaying a time-shared computer-generated hologram (CGH) of the three RGB colors. A 3D space screen formed by very small water particles reconstructs the 3D image with a large viewing angle through scattering from the water particles.

  1. Wavelength scanning achieves pixel super-resolution in holographic on-chip microscopy

    NASA Astrophysics Data System (ADS)

    Luo, Wei; Göröcs, Zoltan; Zhang, Yibo; Feizi, Alborz; Greenbaum, Alon; Ozcan, Aydogan

    2016-03-01

    Lensfree holographic on-chip imaging is a potent solution for high-resolution and field-portable bright-field imaging over a wide field-of-view. Previous lensfree imaging approaches utilize a pixel super-resolution technique, which relies on sub-pixel lateral displacements between the lensfree diffraction patterns and the image sensor's pixel-array, to achieve sub-micron resolution under unit magnification using state-of-the-art CMOS imager chips, commonly used in, e.g., mobile phones. Here we report, for the first time, a wavelength-scanning based pixel super-resolution technique in lensfree holographic imaging. We developed an iterative super-resolution algorithm, which generates high-resolution reconstructions of the specimen from low-resolution (i.e., under-sampled) diffraction patterns recorded at multiple wavelengths within a narrow spectral range (e.g., 10-30 nm). Compared with lateral shift-based pixel super-resolution, this wavelength scanning approach does not require any physical shifts in the imaging setup, and the resolution improvement is uniform in all directions across the sensor-array. Our wavelength scanning super-resolution approach can also be integrated with multi-height and/or multi-angle on-chip imaging techniques to obtain even higher resolution reconstructions. For example, using wavelength scanning together with multi-angle illumination, we achieved a half-pitch resolution of 250 nm, corresponding to a numerical aperture of 1. In addition to pixel super-resolution, the small scanning steps in wavelength also enable us to robustly unwrap phase, revealing the specimen's optical path length in our reconstructed images. We believe that this new wavelength-scanning based pixel super-resolution approach can provide competitive microscopy solutions for high-resolution and field-portable imaging needs, potentially impacting tele-pathology applications in resource-limited settings.

  2. Image quality improvement in MDCT cardiac imaging via SMART-RECON method

    NASA Astrophysics Data System (ADS)

    Li, Yinsheng; Cao, Ximiao; Xing, Zhanfeng; Sun, Xuguang; Hsieh, Jiang; Chen, Guang-Hong

    2017-03-01

    Coronary CT angiography (CCTA) is a challenging imaging task currently limited by the achievable temporal resolution of modern Multi-Detector CT (MDCT) scanners. In this paper, the recently proposed SMART-RECON method has been applied to MDCT-based CCTA imaging to improve image quality without any prior knowledge of cardiac motion. After the prospective ECG-gated data acquisition from a short-scan angular span, the acquired data were sorted into several sub-sectors of view angles, each corresponding to one quarter of the short-scan angular range. Information about the cardiac motion was thus encoded into the data in each view-angle sub-sector. The SMART-RECON algorithm was then applied to jointly reconstruct several image volumes, each of which is temporally consistent with the data acquired in the corresponding view-angle sub-sector. Extensive numerical simulations were performed to validate the proposed technique and investigate its performance dependence.

  3. Aerosol Airmass Type Mapping Over the Urban Mexico City Region From Space-based Multi-angle Imaging

    NASA Technical Reports Server (NTRS)

    Patadia, F.; Kahn, R. A.; Limbacher, J. A.; Burton, S. P.; Ferrare, R. A.; Hostetler, C. A.; Hair, J. W.

    2013-01-01

    Using Multi-angle Imaging SpectroRadiometer (MISR) and sub-orbital measurements from the 2006 INTEX-B/MILAGRO field campaign, in this study we explore MISR's ability to map different aerosol air mass types over the Mexico City metropolitan area. The aerosol air mass distinctions are based on shape, size and single scattering albedo retrievals from the MISR Research Aerosol Retrieval algorithm. In this region, the research algorithm identifies dust-dominated aerosol mixtures based on non-spherical particle shape, whereas spherical biomass burning and urban pollution particles are distinguished by particle size. Two distinct aerosol air mass types based on retrieved particle microphysical properties, and four spatially distributed aerosol air masses, are identified in the MISR data on 6 March 2006. The aerosol air mass type identification results are supported by coincident, airborne high-spectral-resolution lidar (HSRL) measurements. Aerosol optical depth (AOD) gradients are also consistent between the MISR and sub-orbital measurements, but particles having single-scattering albedo of approx. 0.7 at 558 nm must be included in the retrieval algorithm to produce good absolute AOD comparisons over pollution-dominated aerosol air masses. The MISR standard V22 AOD product, at 17.6 km resolution, captures the observed AOD gradients qualitatively, but retrievals at this coarse spatial scale and with limited spherical absorbing particle options underestimate AOD and do not retrieve particle properties adequately over this complex urban region. However, we demonstrate how AOD and aerosol type mapping can be accomplished with MISR data over complex urban regions, provided the retrieval is performed at sufficiently high spatial resolution, and with a rich enough set of aerosol components and mixtures.

  4. Simulation of Tomographic Reconstruction of Magnetosphere Plasma Distribution By Multi-spacecraft Systems.

    NASA Astrophysics Data System (ADS)

    Kunitsyn, V.; Nesterov, I.; Andreeva, E.; Zelenyi, L.; Veselov, M.; Galperin, Y.; Buchner, J.

    A satellite radiotomography method for electron density distributions was recently proposed for a closely-spaced multi-spacecraft group of high-altitude satellites to study the physics of the reconnection process. The original idea of the ROY project is to use a constellation of spacecraft (one main and several sub-satellites) in order to carry out closely-spaced multipoint measurements and 2D tomographic reconstruction of the electron density in the space between the main satellite and the subsatellites. The distances between the satellites were chosen to vary from dozens to a few hundreds of kilometers. The easiest data interpretation is achieved when the subsatellites are placed along the plasma streamline. Then, whenever a plasma density irregularity moves between the main satellite and the subsatellites, it is scanned in different directions and a 2D distribution of the plasma can be obtained from these projections. However, in general the subsatellites are not placed exactly along the plasma streamline. A method of plasma velocity determination relative to multi-spacecraft systems is considered, and the possibilities of 3D tomographic imaging using multi-spacecraft systems are analyzed. The modeling has shown that an efficient scheme for 3D tomographic imaging would be to place the spacecraft in different planes so that the angle between the planes is not more than ten degrees. Work is supported by INTAS PROJECT 2000-465.

  5. A calibrated iterative reconstruction for quantitative photoacoustic tomography using multi-angle light-sheet illuminations

    NASA Astrophysics Data System (ADS)

    Wang, Yihan; Lu, Tong; Zhang, Songhe; Song, Shaoze; Wang, Bingyuan; Li, Jiao; Zhao, Huijuan; Gao, Feng

    2018-02-01

    Quantitative photoacoustic tomography (q-PAT) is a nontrivial technique that can be used to reconstruct absorption images with high spatial resolution. Several approaches have been investigated using point sources or fixed-angle illuminations. However, in practical applications these schemes normally suffer from low signal-to-noise ratio (SNR) or poor quantification, especially for large-size domains, due to the ANSI safety limit on the incident fluence and the incompleteness of the data acquisition. We herein present a q-PAT implementation that uses multi-angle light-sheet illuminations and a calibrated iterative multi-angle reconstruction. The approach acquires more complete information on the intrinsic absorption, and SNR-boosted photoacoustic signals at selected planes, from the multi-angle wide-field light-sheet excitations. Therefore, the sliced absorption maps over the whole body can be recovered in a measurement-flexible, noise-robust and computationally economical way. The proposed approach is validated by a phantom experiment, exhibiting promising performance in image fidelity and quantitative accuracy.

  6. Russian Arctic

    Atmospheric Science Data Center

    2013-04-16

    ... faint greenish hue in the multi-angle composite. This subtle effect suggests that the nadir camera is observing more of the brighter ... energy and water at the Earth's surface, and for preserving biodiversity. The Multi-angle Imaging SpectroRadiometer observes the daylit ...

  7. Combined Infrared Stereo and Laser Ranging Cloud Measurements from Shuttle Mission STS-85

    NASA Technical Reports Server (NTRS)

    Lancaster, Redgie S.; Spinhirne, James D.; O'C. Starr, David (Technical Monitor)

    2001-01-01

    Multi-angle remote sensing provides a wealth of information for earth and climate monitoring, and as technology advances so do the options for developing instrumentation versatile enough to meet the demands associated with these types of measurements. In the current work, the multi-angle measurement capability of the Infrared Spectral Imaging Radiometer is demonstrated. This instrument flew as part of mission STS-85 of the space shuttle Columbia in 1997 and was the first earth-observing radiometer to incorporate an uncooled microbolometer array detector as its image sensor. Specifically, a method for computing cloud-top height from the multi-spectral stereo measurements acquired during this flight has been developed, and the results demonstrate that a vertical precision of 10.6 km was achieved. Further, the accuracy of these measurements is confirmed by comparison with coincident direct laser ranging measurements from the Shuttle Laser Altimeter. Mission STS-85 was the first space flight to combine laser ranging and thermal IR camera systems for cloud remote sensing.

  8. A classification model of Hyperion image base on SAM combined decision tree

    NASA Astrophysics Data System (ADS)

    Wang, Zhenghai; Hu, Guangdao; Zhou, YongZhang; Liu, Xin

    2009-10-01

    Monitoring the Earth using imaging spectrometers has necessitated more accurate analyses and new applications of remote sensing. A very high dimensional input space requires an exponentially large amount of data to adequately and reliably represent the classes in that space; moreover, as the input dimensionality increases, the hypothesis space grows exponentially, which makes classification performance highly unreliable. Classification of hyperspectral images is therefore challenging, and new algorithms have to be developed for hyperspectral data classification. The Spectral Angle Mapper (SAM) is a physically based spectral classifier that uses an n-dimensional angle to match pixels to reference spectra. The algorithm determines the spectral similarity between two spectra by calculating the angle between them, treating them as vectors in a space with dimensionality equal to the number of bands. The key difficulty is that the SAM threshold must be defined manually, and the classification precision depends on how well that threshold is chosen. To resolve this problem, this paper proposes a new automatic classification model for remote sensing imagery using SAM combined with a decision tree, which chooses an appropriate SAM threshold automatically and improves the classification precision of SAM based on the analysis of field spectra. The test area, located in Heqing, Yunnan, was imaged by the EO-1 Hyperion imaging spectrometer using 224 bands in the visible and near infrared. The area included limestone areas, rock fields, soil and forests, and was classified into four different vegetation and soil types. The results show that this method chooses an appropriate SAM threshold and effectively eliminates the disturbance and influence of unwanted objects, thereby improving the classification precision. Compared with the likelihood classification using field survey data, the classification precision of this model is 9.9% higher.
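
    A minimal sketch of the Spectral Angle Mapper combined with a decision tree as described above; coupling the two by training the tree on per-reference spectral angles from field-spectrum pixels is an assumption, since the abstract does not specify how they are combined:

    ```python
    # Sketch: SAM angles as features for a decision tree, replacing a hand-set threshold.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def spectral_angles(pixels, references):
        """pixels: (n, bands); references: (k, bands). Returns (n, k) angles in radians."""
        p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
        r = references / np.linalg.norm(references, axis=1, keepdims=True)
        cos = np.clip(p @ r.T, -1.0, 1.0)
        return np.arccos(cos)                      # spectral angle to each reference

    def train_sam_tree(train_pixels, train_labels, references):
        angles = spectral_angles(train_pixels, references)
        tree = DecisionTreeClassifier(max_depth=4).fit(angles, train_labels)
        return tree   # later: tree.predict(spectral_angles(image_pixels, references))
    ```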

  9. Extra Solar Planet Science With a Non Redundant Mask

    NASA Astrophysics Data System (ADS)

    Minto, Stefenie Nicolet; Sivaramakrishnan, Anand; Greenbaum, Alexandra; St. Laurent, Kathryn; Thatte, Deeparshi

    2017-01-01

    To detect faint planetary companions near a much brighter star at the resolution limit of the James Webb Space Telescope (JWST), the Near-Infrared Imager and Slitless Spectrograph (NIRISS) will use a non-redundant aperture mask (NRM) for high-contrast imaging. I simulated NIRISS data of stars with and without planets, and ran these through the code that measures interferometric image properties to determine how sensitive planet detection is to our knowledge of instrumental parameters, starting with the pixel scale. I measured the position angle, distance, and contrast ratio of the planet (with respect to the star) to characterize the binary pair. To organize these data I am creating programs that will automatically and systematically explore multi-dimensional instrument parameter spaces and binary characteristics. In the future my code will also be applied to explore other parameters we can simulate.

  10. Large area mapping of southwestern forest crown cover, canopy height, and biomass using the NASA Multiangle Imaging Spectro-Radiometer

    Treesearch

    Mark Chopping; Gretchen G. Moisen; Lihong Su; Andrea Laliberte; Albert Rango; John V. Martonchik; Debra P. C. Peters

    2008-01-01

    A rapid canopy reflectance model inversion experiment was performed using multi-angle reflectance data from the NASA Multi-angle Imaging Spectro-Radiometer (MISR) on the Earth Observing System Terra satellite, with the goal of obtaining measures of forest fractional crown cover, mean canopy height, and aboveground woody biomass for large parts of south-eastern Arizona...

  11. Stereoscopic Height and Wind Retrievals for Aerosol Plumes with the MISR INteractive eXplorer (MINX)

    NASA Technical Reports Server (NTRS)

    Nelson, D.L.; Garay, M.J.; Kahn, Ralph A.; Dunst, Ben A.

    2013-01-01

    The Multi-angle Imaging SpectroRadiometer (MISR) instrument aboard the Terra satellite acquires imagery at 275-m resolution at nine angles ranging from 0° (nadir) to 70° off-nadir. This multi-angle capability facilitates the stereoscopic retrieval of heights and motion vectors for clouds and aerosol plumes. MISR's operational stereo product uses this capability to retrieve cloud heights and winds for every satellite orbit, yielding global coverage every nine days. The MISR INteractive eXplorer (MINX) visualization and analysis tool complements the operational stereo product by providing users the ability to retrieve heights and winds locally for detailed studies of smoke, dust and volcanic ash plumes, as well as clouds, at higher spatial resolution and with greater precision than is possible with the operational product or with other space-based, passive, remote sensing instruments. This ability to investigate plume geometry and dynamics is becoming increasingly important as climate and air quality studies require greater knowledge about the injection of aerosols and the location of clouds within the atmosphere. MINX incorporates features that allow users to customize their stereo retrievals for optimum results under varying aerosol and underlying surface conditions. This paper discusses the stereo retrieval algorithms and retrieval options in MINX, and provides appropriate examples to explain how the program can be used to achieve the best results.
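
    A toy illustration of the stereo-height geometry underlying such retrievals: for a feature seen at two along-track view angles, the along-track parallax is roughly h·(tan θ₁ − tan θ₂). Real MINX retrievals solve jointly for height and along-track wind, which this wind-free sketch ignores:

    ```python
    # Sketch: feature height from along-track parallax between two view angles,
    # neglecting along-track wind advection between camera acquisitions.
    import numpy as np

    def stereo_height(parallax_m, theta1_deg, theta2_deg):
        t1, t2 = np.tan(np.radians([theta1_deg, theta2_deg]))
        return parallax_m / (t1 - t2)              # feature height above the surface, m

    # Example: 2600 m of apparent parallax between 45.6 deg and 26.1 deg cameras
    print(f"feature height ~= {stereo_height(2600.0, 45.6, 26.1):.0f} m")
    ```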

  12. Hurricane Earl Multi-level Winds

    NASA Image and Video Library

    2010-09-02

    NASA's Multi-angle Imaging SpectroRadiometer (MISR) instrument captured this image of Hurricane Earl on Aug. 30, 2010. At this time, Hurricane Earl was a Category 3 storm. The hurricane's eye is just visible on the right edge of the MISR image swath.

  13. Ocean Remote Sensing from Chinese Spaceborne Microwave Sensors

    NASA Astrophysics Data System (ADS)

    Yang, J.

    2017-12-01

    GF-3 (GF stands for GaoFen, which means High Resolution in Chinese) is China's first C-band multi-polarization high-resolution microwave remote sensing satellite. It was successfully launched on Aug. 10, 2016 at the Taiyuan Satellite Launch Center. The synthetic aperture radar (SAR) on board GF-3 works at incidence angles ranging from 20 to 50 degrees with several polarization modes, including single-polarization, dual-polarization and quad-polarization. GF-3 also carries the SAR with the most imaging modes in the world: 12 modes, including traditional ones such as stripmap and ScanSAR and new ones such as spotlight, wave and global modes. GF-3 is thus a multi-functional satellite for both land and ocean observation, achieved by switching among the different imaging modes. TG-2 (TG stands for TianGong, which means Heavenly Palace in Chinese) is a Chinese space laboratory which was launched on 15 Sep. 2016 from the Jiuquan Satellite Launch Centre aboard a Long March 2F rocket. The onboard Interferometric Imaging Radar Altimeter (InIRA) is a new-generation radar altimeter developed by China and the first wide-swath imaging radar altimeter on orbit, integrating interferometry, synthetic-aperture, and height-tracking techniques at small incidence angles with a swath of 30 km. The InIRA was switched on to acquire data during this mission on 22 September. This paper gives some preliminary results for the quantitative remote sensing of ocean winds and waves from the GF-3 SAR and the TG-2 InIRA. Quantitative analysis and ocean wave spectrum retrieval are carried out from the SAR imagery. The image spectra, which contain ocean wave information, are first estimated from the image modulation using the fast Fourier transform. The wave spectra are then retrieved from the image spectra based on Hasselmann's classical quasi-linear SAR-ocean wave mapping model and the estimation of three modulation transfer functions (MTFs): tilt, hydrodynamic and velocity bunching modulation. The wind speed is retrieved from InIRA data using a Ku-band low-incidence backscatter model (KuLMOD), which relates the backscattering coefficients to the wind speeds and incidence angles. The ocean wave spectra are retrieved linearly from image spectra first extracted from the InIRA data, using a procedure similar to that for the GF-3 SAR data.
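
    A hedged sketch of wind-speed retrieval by inverting a backscatter model: given a measured sigma0 at a known incidence angle, find the wind speed at which the model matches. The simple log-linear form below is a hypothetical stand-in for the KuLMOD geophysical model function named above:

    ```python
    # Sketch: 1-D inversion of a backscatter geophysical model function for wind speed.
    import numpy as np
    from scipy.optimize import brentq

    def gmf_sigma0_db(wind_speed, incidence_deg):
        """Placeholder model: sigma0 falls with incidence and rises with wind speed."""
        return 20.0 - 1.0 * incidence_deg + 3.0 * np.log10(wind_speed + 0.1)

    def retrieve_wind(sigma0_db, incidence_deg, lo=0.5, hi=40.0):
        f = lambda u: gmf_sigma0_db(u, incidence_deg) - sigma0_db
        return brentq(f, lo, hi)       # wind speed (m/s) at which the model matches the data

    # Example: measured sigma0 of 16 dB at 6 deg incidence
    print(f"retrieved wind: {retrieve_wind(16.0, 6.0):.1f} m/s")
    ```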

  14. MALIBU: A High Spatial Resolution Multi-Angle Imaging Unmanned Airborne System to Validate Satellite-derived BRDF/Albedo Products

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Roman, M. O.; Pahlevan, N.; Stachura, M.; McCorkel, J.; Bland, G.; Schaaf, C.

    2016-12-01

    Albedo is a key climate forcing variable that governs the absorption of incoming solar radiation and its ultimate transfer to the atmosphere. Albedo contributes significant uncertainties to the simulation of climate change; as such, it is defined by the Global Climate Observing System (GCOS) as a terrestrial essential climate variable (ECV) required by global and regional climate and biogeochemical models. NASA Goddard Space Flight Center's Multi AngLe Imaging Bidirectional Reflectance Distribution Function small-UAS (MALIBU) is part of a series of pathfinder missions to develop enhanced multi-angular remote sensing techniques using small Unmanned Aircraft Systems (sUAS). The MALIBU instrument package includes two multispectral imagers oriented at two different viewing geometries (i.e., port and starboard sides) to capture vegetation optical properties and structural characteristics. This is achieved by analyzing the surface reflectance anisotropy signal (i.e., the BRDF shape) obtained from the combination of surface reflectances from different view-illumination angles and spectral channels. Satellite measures of surface albedo from MODIS, VIIRS, and Landsat have been evaluated by comparison with spatially representative albedometer data from sparsely distributed flux towers at fixed heights. However, the mismatch between the footprint of the ground measurements and the satellite footprint challenges validation efforts, especially for heterogeneous landscapes. BRDF (Bidirectional Reflectance Distribution Function) models of surface anisotropy have only been evaluated with airborne BRDF data over a very few locations. The MALIBU platform, which acquires extremely high resolution, sub-meter measures of surface anisotropy and surface albedo, can thus serve as an important source of reference data to enable global land product validation efforts and to resolve the errors and uncertainties in the various existing products generated by NASA and its national and international partners.

  15. Karymsky volcano eruptive plume properties based on MISR multi-angle imagery and the volcanological implications

    NASA Astrophysics Data System (ADS)

    Flower, Verity J. B.; Kahn, Ralph A.

    2018-03-01

    Space-based operational instruments are in unique positions to monitor volcanic activity globally, especially in remote locations or where suborbital observing conditions are hazardous. The Multi-angle Imaging SpectroRadiometer (MISR) provides hyper-stereo imagery, from which the altitude and microphysical properties of suspended atmospheric aerosols can be derived. These capabilities are applied to plumes emitted at Karymsky volcano from 2000 to 2017. Observed plumes from Karymsky were emitted predominantly to an altitude of 2-4 km, with occasional events exceeding 6 km. MISR plume observations were most common when volcanic surface manifestations, such as lava flows, were identified by satellite-based thermal anomaly detection. The analyzed plumes predominantly contained large (1.28 µm effective radius), strongly absorbing particles indicative of ash-rich eruptions. Differences between the retrievals for Karymsky volcano's ash-rich plumes and the sulfur-rich plumes emitted during the 2014-2015 eruption of Holuhraun (Iceland) highlight the ability of MISR to distinguish particle types from such events. Observed plumes ranged from 30 to 220 km in length and were imaged at a spatial resolution of 1.1 km. Retrieved particle properties display evidence of downwind particle fallout, particle aggregation and chemical evolution. In addition, changes in plume properties retrieved from the remote-sensing observations over time are interpreted in terms of shifts in eruption dynamics within the volcano itself, corroborated to the extent possible with suborbital data. Plumes emitted at Karymsky prior to 2010 display mixed emissions of ash and sulfate particles. After 2010, all plumes contain consistent particle components, indicative of entering an ash-dominated regime. Post-2010 event timing, relative to eruption phase, was found to influence the optical properties of observed plume particles, with light absorption varying in a consistent sequence as each respective eruption phase progressed.

  16. Karymsky volcano eruptive plume properties based on MISR multi-angle imagery, and volcanological implications.

    PubMed

    Flower, Verity J B; Kahn, Ralph A

    2018-01-01

    Space-based, operational instruments are in unique positions to monitor volcanic activity globally, especially in remote locations or where suborbital observing conditions are hazardous. The Multi-angle Imaging SpectroRadiometer (MISR) provides hyper-stereo imagery, from which the altitude and microphysical properties of suspended atmospheric aerosols can be derived. These capabilities are applied to plumes emitted at Karymsky volcano from 2000 to 2017. Observed plumes from Karymsky were emitted predominantly to an altitude of 2-4 km, with occasional events exceeding 6 km. MISR plume observations were most common when volcanic surface manifestations, such as lava flows, were identified by satellite-based thermal anomaly detection. The analyzed plumes predominantly contained large (1.28 µm effective radius), strongly absorbing particles indicative of ash-rich eruptions. Differences between the retrievals for Karymsky volcano's ash-rich plumes and the sulfur-rich plumes emitted during the 2014-2015 eruption of Holuhraun (Iceland) highlight the ability of MISR to distinguish particle types from such events. Observed plumes ranged from 30 to 220 km in length, and were imaged at a spatial resolution of 1.1 km. Retrieved particle properties display evidence of downwind particle fallout, particle aggregation and chemical evolution. In addition, changes in plume properties retrieved from the remote-sensing observations over time are interpreted in terms of shifts in eruption dynamics within the volcano itself, corroborated to the extent possible with suborbital data. Plumes emitted at Karymsky prior to 2010 display mixed emissions of ash and sulfate particles. After 2010, all plumes contain consistent particle components, indicative of entering an ash-dominated regime. Post-2010 event timing, relative to eruption phase, was found to influence the optical properties of observed plume particles, with light-absorption varying in a consistent sequence as each respective eruption phase progressed.

  17. Mimas Showing False Colors #1

    NASA Technical Reports Server (NTRS)

    2005-01-01

    False color images of Saturn's moon, Mimas, reveal variation in either the composition or texture across its surface.

    During its approach to Mimas on Aug. 2, 2005, the Cassini spacecraft narrow-angle camera obtained multi-spectral views of the moon from a range of 228,000 kilometers (142,500 miles).

    The image at the left is a narrow angle clear-filter image, which was separately processed to enhance the contrast in brightness and sharpness of visible features. The image at the right is a color composite of narrow-angle ultraviolet, green, infrared and clear filter images, which have been specially processed to accentuate subtle changes in the spectral properties of Mimas' surface materials. To create this view, three color images (ultraviolet, green and infrared) were combined into a single black and white picture that isolates and maps regional color differences. This 'color map' was then superimposed over the clear-filter image at the left.

    The combination of color map and brightness image shows how the color differences across the Mimas surface materials are tied to geological features. Shades of blue and violet in the image at the right are used to identify surface materials that are bluer in color and have a weaker infrared brightness than average Mimas materials, which are represented by green.

    Herschel crater, a 140-kilometer-wide (88-mile) impact feature with a prominent central peak, is visible in the upper right of each image. The unusual bluer materials are seen to broadly surround Herschel crater. However, the bluer material is not uniformly distributed in and around the crater. Instead, it appears to be concentrated on the outside of the crater and more to the west than to the north or south. The origin of the color differences is not yet understood. It may represent ejecta material that was excavated from inside Mimas when the Herschel impact occurred. The bluer color of these materials may be caused by subtle differences in the surface composition or the sizes of grains making up the icy soil.

    The images were obtained when the Cassini spacecraft was above 25 degrees south, 134 degrees west latitude and longitude. The Sun-Mimas-spacecraft angle was 45 degrees and north is at the top.

    The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging operations center is based at the Space Science Institute in Boulder, Colo.

    For more information about the Cassini-Huygens mission visit http://saturn.jpl.nasa.gov . The Cassini imaging team homepage is at http://ciclops.org .

  18. Effects of anode geometry on forward wide-angle neon ion emissions in 3.5 kJ plasma focus device by novel mega-size panorama polycarbonate image detectors

    NASA Astrophysics Data System (ADS)

    Sohrabi, M.; Soltani, Z.; Sarlak, Z.

    2018-03-01

    Forward wide-angle neon ion emissions in a 3.5 kJ plasma focus device (PFD) were studied using five different anode top geometries: hollow-end cylinder, solid triangle, solid hemisphere, hollow-end cone and flat-end cone. Position-sensitive mega-size panorama polycarbonate ion image detectors (MS-PCID), developed by dual-cell circular mega-size electrochemical etching (MS-ECE) systems, were applied for processing wide-angle neon ion images on MS-PCIDs exposed on the PFD cylinder top base under a single pinch shot. The images can be simply observed, analyzed and relatively quantified in terms of ion emission angular distributions, even by the unaided eye. By analysis of the forward neon ion emission images, the ion emission yields, ion emission angular distributions, iso-fluence ion contours and solid angles of ion emissions in 4π PFD space were determined. The neon ion emission yields on the PFD cylinder top base are, in increasing order, ~2.1×10^9, ~2.2×10^9, ~2.8×10^9, ~2.9×10^9, and ~3.5×10^9 neon ions/shot for the five stated anode top geometries, respectively. The panorama neon ion images, as diagnosed even by the unaided eye, demonstrate the lowest and highest ion yields from the hollow-end cylinder and flat-end cone anode tops, respectively. Relative, dynamic, qualitative neon ion spectrometry was also performed by unaided-eye inspection, indicating the relative energies of the neon ions as they appear. The study also demonstrates the unique power of the MS-PCID/MS-ECE imaging system as an advanced, state-of-the-art ion imaging method for wide-angle dynamic parametric studies in PFD space and other ion study applications.

  19. A multi-mode manipulator display system for controlling remote robotic systems

    NASA Technical Reports Server (NTRS)

    Massimino, Michael J.; Meschler, Michael F.; Rodriguez, Alberto A.

    1994-01-01

    The objective and contribution of the research presented in this paper is to provide a Multi-Mode Manipulator Display System (MMDS) to assist a human operator with the control of remote manipulator systems. Such systems include space-based manipulators such as the space shuttle remote manipulator system (SRMS) and future ground-controlled teleoperated and telescience space systems. The MMDS contains a number of display modes and submodes which display position-control cues and position data in graphical formats, based primarily on manipulator position and joint angle data. The MMDS is therefore not dependent on visual information for input and can assist the operator especially when visual feedback is inadequate. This paper provides descriptions of the new modes and experiment results to date.

  20. Smoke from Fires in Southern Mexico

    NASA Technical Reports Server (NTRS)

    2002-01-01

    On May 2, 2002, numerous fires in southern Mexico sent smoke drifting northward over the Gulf of Mexico. These views from the Multi-angle Imaging SpectroRadiometer illustrate the smoke extent over parts of the Gulf and the southern Mexican states of Tabasco, Campeche and Chiapas. At the same time, dozens of other fires were also burning in the Yucatan Peninsula and across Central America. A similar situation occurred in May and June of 1998, when Central American fires resulted in air quality warnings for several U.S. States.

    The image on the left is a natural color view acquired by MISR's vertical-viewing (nadir) camera. Smoke is visible, but sunglint in some ocean areas makes detection difficult. The middle image, on the other hand, is a natural color view acquired by MISR's 70-degree backward-viewing camera; its oblique view angle simultaneously suppresses sunglint and enhances the smoke. A map of aerosol optical depth, a measurement of the abundance of atmospheric particulates, is provided on the right. This quantity is retrieved using an automated computer algorithm that takes advantage of MISR's multi-angle capability. Areas where no retrieval occurred are shown in black.

    The images each represent an area of about 380 kilometers x 1550 kilometers and were captured during Terra orbit 12616.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  1. New Satellite Project Aerosol-UA: Remote Sensing of Aerosols in the Terrestrial Atmosphere

    NASA Technical Reports Server (NTRS)

    Milinevsky, G.; Yatskiv, Ya.; Degtyaryov, O.; Syniavskyi, I.; Mishchenko, Michael I.; Rosenbush, V.; Ivanov, Yu.; Makarov, A.; Bovchaliuk, A.; Danylevsky, V.; hide

    2016-01-01

    We discuss the development of the Ukrainian space project Aerosol-UA, which has the following three main objectives: (1) to monitor the spatial distribution of key characteristics of terrestrial tropospheric and stratospheric aerosols; (2) to provide a comprehensive observational database enabling accurate quantitative estimates of the aerosol contribution to the energy budget of the climate system; and (3) to quantify the contribution of anthropogenic aerosols to climate and ecological processes. The remote sensing concept of the project is based on precise orbital measurements of the intensity and polarization of sunlight scattered by the atmosphere and the surface with a scanning polarimeter accompanied by a wide-angle multispectral imager-polarimeter. Preparations have already been made for the development of the instrument suite for the Aerosol-UA project, in particular, of the multi-channel scanning polarimeter (ScanPol) designed for remote sensing studies of the global distribution of aerosol and cloud properties (such as particle size, morphology, and composition) in the terrestrial atmosphere by polarimetric and spectrophotometric measurements of the scattered sunlight in a wide range of wavelengths and viewing directions from which a scene location is observed. ScanPol is accompanied by a multispectral wide-angle imager-polarimeter (MSIP) that serves to collect information on cloud conditions and Earth's surface image. Various components of the ScanPol polarimeter have been prototyped, including the opto-mechanical and electronic assemblies and the scanning mirror controller. Preliminary synthetic data simulations for the retrieval of aerosol parameters over land surfaces have been performed using the Generalized Retrieval of Aerosol and Surface Properties (GRASP) algorithm. Methods for the validation of satellite data using ground-based observations of aerosol properties are also discussed. We anticipate that designing, building, and launching into orbit a multi-functional high-precision scanning polarimeter and an imager-polarimeter will make a significant contribution to the study of natural and anthropogenic aerosols and their climatic and ecological effects.

  2. Overcoming turbulence-induced space-variant blur by using phase-diverse speckle.

    PubMed

    Thelen, Brian J; Paxman, Richard G; Carrara, David A; Seldin, John H

    2009-01-01

    Space-variant blur occurs when imaging through volume turbulence over sufficiently large fields of view. Space-variant effects are particularly severe in horizontal-path imaging, slant-path (air-to-ground or ground-to-air) geometries, and ground-based imaging of low-elevation satellites or astronomical objects. In these geometries, the isoplanatic angle can be comparable to or even smaller than the diffraction-limited resolution angle. We report on a postdetection correction method that seeks to correct for the effects of space-variant aberrations, with the goal of reconstructing near-diffraction-limited imagery. Our approach has been to generalize the method of phase-diverse speckle (PDS) by using a physically motivated distributed-phase-screen model. Simulation results are presented that demonstrate the reconstruction of near-diffraction-limited imagery under both matched and mismatched model assumptions. In addition, we present evidence that PDS could be used as a beaconless wavefront sensor in a multiconjugate adaptive optics system when imaging extended scenes.

  3. Eyjafjallajökull Ash Plume Particle Properties

    NASA Image and Video Library

    2010-04-21

    As NASA's Terra satellite flew over Iceland's erupting Eyjafjallajökull volcano, its Multi-angle Imaging SpectroRadiometer instrument acquired 36 near-simultaneous images of the ash plume, covering nine view angles in each of four wavelengths.

  4. Mimas Showing False Colors #2

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This false color image of Saturn's moon Mimas reveals variation in either the composition or texture across its surface.

    During its approach to Mimas on Aug. 2, 2005, the Cassini spacecraft narrow-angle camera obtained multi-spectral views of the moon from a range of 228,000 kilometers (142,500 miles).

    This image is a color composite of narrow-angle ultraviolet, green, infrared and clear filter images, which have been specially processed to accentuate subtle changes in the spectral properties of Mimas' surface materials. To create this view, three color images (ultraviolet, green and infrared) were combined into a single black and white picture that isolates and maps regional color differences; this color map was then superimposed over the clear-filter image to create the final product.

    Shades of blue and violet in this image are used to identify surface materials that are bluer in color and have a weaker infrared brightness than average Mimas materials, which are represented by green.

    Herschel crater, a 140-kilometer-wide (88-mile) impact feature with a prominent central peak, is visible in the upper right of the image. The unusual bluer materials are seen to broadly surround Herschel crater. However, the bluer material is not uniformly distributed in and around the crater. Instead, it appears to be concentrated on the outside of the crater and more to the west than to the north or south. The origin of the color differences is not yet understood. It may represent ejecta material that was excavated from inside Mimas when the Herschel impact occurred. The bluer color of these materials may be caused by subtle differences in the surface composition or the sizes of grains making up the icy soil.

    This image was obtained when the Cassini spacecraft was above 25 degrees south, 134 degrees west latitude and longitude. The Sun-Mimas-spacecraft angle was 45 degrees and north is at the top.

    The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging operations center is based at the Space Science Institute in Boulder, Colo.

    For more information about the Cassini-Huygens mission visit http://saturn.jpl.nasa.gov . The Cassini imaging team homepage is at http://ciclops.org .

  5. 3D Cloud Tomography, Followed by Mean Optical and Microphysical Properties, with Multi-Angle/Multi-Pixel Data

    NASA Astrophysics Data System (ADS)

    Davis, A. B.; von Allmen, P. A.; Marshak, A.; Bal, G.

    2010-12-01

    The geometrical assumption in all operational cloud remote sensing algorithms is that clouds are plane-parallel slabs, which applies relatively well to the most uniform stratus layers. Its benefit is to justify using classic 1D radiative transfer (RT) theory, where angular details (solar, viewing, azimuthal) are fully accounted for and precise phase functions can be used, to generate the look-up tables used in the retrievals. Unsurprisingly, these algorithms catastrophically fail when applied to cumulus-type clouds, which are highly 3D. This is unfortunate for the cloud-process modeling community that may thrive on in situ airborne data, but would very much like to use satellite data for more than illustrations in their presentations and publications. So, how can we obtain quantitative information from space-based observations of finite aspect ratio clouds? Cloud base/top heights, vertically projected area, mean liquid water content (LWC), and volume-averaged droplet size would be a good start. Motivated by this science need, we present a new approach suitable for sparse cumulus fields where we turn the tables on the standard procedure in cloud remote sensing. We make no a priori assumption about cloud shape, save an approximately flat base, but use brutal approximations about the RT that is necessarily 3D. Indeed, the first order of business is to roughly determine the cloud's outer shape in one of two ways, which we will frame as competing initial guesses for the next phase of shape refinement and volume-averaged microphysical parameter estimation. Both steps use multi-pixel/multi-angle techniques amenable to MISR data, the latter adding a bi-spectral dimension using collocated MODIS data. One approach to rough cloud shape determination is to fit the multi-pixel/multi-angle data with a geometric primitive such as a scalene hemi-ellipsoid with 7 parameters (translation in 3D space, 3 semi-axes, 1 azimuthal orientation); for the radiometry, a simple radiosity-type model is used where the cloud surface "emits" either reflected (sunny-side) or transmitted (shady-side) light at different levels. As it turns out, the reflected/transmitted light ratio yields an approximate cloud optical thickness. Another approach is to invoke tomography techniques to define the volume occupied by the cloud using, as it were, cloud masks for each direction of observation. In the shape and opacity refinement phase, initial guesses along with solar and viewing geometry information are used to predict radiance in each pixel using a fast diffusion model for the 3D RT in MISR's non-absorbing red channel (275 m resolution). Refinement is constrained and stopped when optimal resolution is reached. Finally, multi-pixel/mono-angle MODIS data for the same cloud (at comparable 250 m resolution) reveals the desired droplet size information, hence the volume-averaged LWC. This is an ambitious remote sensing science project drawing on cross-disciplinary expertise gained in medical imaging using both X-ray and near-IR sources and detectors. It is high risk but with potentially high returns not only for the cloud modeling community but also aerosol and surface characterization in the presence of broken 3D clouds.
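    A minimal sketch of the 7-parameter scalene hemi-ellipsoid shape primitive mentioned above (3 translation parameters, 3 semi-axes, 1 azimuthal orientation), written here as a simple point-containment test; the function and parameter names are illustrative assumptions, not the authors' code, and the radiosity and diffusion-model refinement steps are not reproduced.

    import numpy as np

    def in_hemi_ellipsoid(points, center, semi_axes, azimuth_rad):
        """Test which 3-D points fall inside a scalene hemi-ellipsoid.

        The 7 shape parameters named in the abstract:
          center      : (x0, y0, z0) translation, z0 being the flat cloud base
          semi_axes   : (a, b, c) semi-axis lengths
          azimuth_rad : rotation of the horizontal axes about the vertical
        """
        p = np.atleast_2d(points) - np.asarray(center, dtype=float)
        # Rotate into the ellipsoid's principal frame (about the z axis only).
        ca, sa = np.cos(azimuth_rad), np.sin(azimuth_rad)
        x = ca * p[:, 0] + sa * p[:, 1]
        y = -sa * p[:, 0] + ca * p[:, 1]
        z = p[:, 2]
        a, b, c = semi_axes
        inside_full = (x / a) ** 2 + (y / b) ** 2 + (z / c) ** 2 <= 1.0
        return inside_full & (z >= 0.0)        # keep only the upper half: flat base

    # Example: a 2 km tall, 3 km x 1.5 km cloud, base at 1 km altitude,
    # rotated 30 degrees, tested against two sample points (km units assumed).
    pts = np.array([[0.5, 0.2, 1.8], [4.0, 0.0, 1.5]])
    print(in_hemi_ellipsoid(pts, center=(0.0, 0.0, 1.0),
                            semi_axes=(3.0, 1.5, 2.0),
                            azimuth_rad=np.radians(30.0)))   # -> [ True False]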

  6. Lensless transport-of-intensity phase microscopy and tomography with a color LED matrix

    NASA Astrophysics Data System (ADS)

    Zuo, Chao; Sun, Jiasong; Zhang, Jialin; Hu, Yan; Chen, Qian

    2015-07-01

    We demonstrate lensless quantitative phase microscopy and diffraction tomography based on a compact on-chip platform, using only a CMOS image sensor and a programmable color LED array. Based on multi-wavelength transport-of-intensity phase retrieval and multi-angle illumination diffraction tomography, this platform offers high-quality, depth-resolved images with a lateral resolution of ~3.7 µm and an axial resolution of ~5 µm, over a large imaging field of view (FOV) of 24 mm². The resolution and FOV can be further improved straightforwardly by using a larger image sensor with smaller pixels. This compact, low-cost, robust, portable platform with decent imaging performance may offer a cost-effective tool for telemedicine, or for reducing health care costs for point-of-care diagnostics in resource-limited environments.
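    As a rough illustration of the transport-of-intensity idea behind this platform, the sketch below solves the TIE under the common uniform-intensity approximation, dI/dz = -(λ/2π) I0 ∇²φ, with an FFT-based inverse Laplacian; this is a generic textbook formulation, not the authors' multi-wavelength algorithm, and the regularization constant and synthetic test field are assumptions.

    import numpy as np

    def tie_phase_uniform(dI_dz, I0, wavelength, pixel_size, eps=1e-9):
        """Recover phase from the axial intensity derivative via the
        transport-of-intensity equation, assuming near-uniform intensity I0:
            dI/dz = -(wavelength / (2*pi)) * I0 * laplacian(phase),
        solved with an FFT-based inverse Laplacian (eps regularizes k = 0)."""
        ny, nx = dI_dz.shape
        kx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel_size)
        ky = 2 * np.pi * np.fft.fftfreq(ny, d=pixel_size)
        k2 = kx[np.newaxis, :] ** 2 + ky[:, np.newaxis] ** 2
        lap_phi = -(2 * np.pi / (wavelength * I0)) * dI_dz   # = laplacian(phase)
        phase = np.real(np.fft.ifft2(np.fft.fft2(lap_phi) / -(k2 + eps)))
        return phase - phase.mean()           # phase is only defined up to a constant

    # Synthetic check: Gaussian phase bump, analytic Laplacian as the "measured" dI/dz.
    N, dx, lam, I0, w = 256, 1.0e-6, 0.5e-6, 1.0, 20.0e-6
    y, x = (np.indices((N, N)) - N / 2) * dx
    r2 = x ** 2 + y ** 2
    phi = 2.0 * np.exp(-r2 / w ** 2)
    lap_phi_true = phi * (4.0 * r2 / w ** 4 - 4.0 / w ** 2)
    dI_dz = -(lam / (2 * np.pi)) * I0 * lap_phi_true
    phi_rec = tie_phase_uniform(dI_dz, I0, lam, dx)
    print(np.max(np.abs(phi_rec - (phi - phi.mean()))))      # small vs. the 2-rad peak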

  7. Joint aerosol and water-leaving radiance retrieval from Airborne Multi-angle SpectroPolarimeter Imager

    NASA Astrophysics Data System (ADS)

    Xu, F.; Dubovik, O.; Zhai, P.; Kalashnikova, O. V.; Diner, D. J.

    2015-12-01

    The Airborne Multiangle SpectroPolarimetric Imager (AirMSPI) [1] has been flying aboard the NASA ER-2 high-altitude aircraft since October 2010. In step-and-stare operation mode, AirMSPI typically acquires observations of a target area at 9 view angles between ±67° off the nadir. Its spectral channels are centered at 355, 380, 445, 470*, 555, 660*, and 865* nm, where the asterisk denotes the polarimetric bands. In order to retrieve information from the AirMSPI observations, we developed an efficient and flexible retrieval code that jointly retrieves aerosol properties and water-leaving radiance. The forward model employs a coupled Markov Chain (MC) [2] and adding/doubling [3] radiative transfer method which is fully linearized and integrated with a multi-patch retrieval algorithm to obtain aerosol and water-leaving radiance/Chl-a information. Various constraints are imposed to improve convergence and retrieval stability. We tested the aerosol and water-leaving radiance retrievals using the AirMSPI radiance and polarization measurements by comparing the retrieved aerosol concentration, size distribution, water-leaving radiance, and chlorophyll concentration to the values reported by the USC SeaPRISM AERONET-OC site off the coast of Southern California. In addition, the MC-based retrievals of aerosol properties were compared with GRASP [4-5] retrievals for selected cases. The MC-based retrieval approach was then used to systematically explore the benefits of AirMSPI's ultraviolet and polarimetric channels, the use of multiple view angles, and constraints provided by inclusion of bio-optical models of the water-leaving radiance. References [1]. D. J. Diner, et al. Atmos. Meas. Tech. 6, 1717 (2013). [2]. F. Xu et al. Opt. Lett. 36, 2083 (2011). [3]. J. E. Hansen and L.D. Travis. Space Sci. Rev. 16, 527 (1974). [4]. O. Dubovik et al. Atmos. Meas. Tech., 4, 975 (2011). [5]. O. Dubovik et al. SPIE: Newsroom, DOI:10.1117/2.1201408.005558 (2014).
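    A fully linearized forward model with a priori constraints, as described above, lends itself to a regularized Gauss-Newton update; the sketch below shows one such step with an invented linear "multi-angle" forward model, purely to illustrate the joint-retrieval machinery rather than the actual MC/adding-doubling code, and all matrices and state names are assumptions.

    import numpy as np

    def gauss_newton_step(x, y, forward, jacobian, Se_inv, Sa_inv, xa):
        """One regularized Gauss-Newton update for a linearized retrieval:
        minimize (y - F(x))^T Se^-1 (y - F(x)) + (x - xa)^T Sa^-1 (x - xa),
        where Se_inv is the inverse measurement-error covariance and
        (Sa_inv, xa) express the a priori constraint."""
        K = jacobian(x)                      # linearized forward model at x
        r = y - forward(x)
        lhs = K.T @ Se_inv @ K + Sa_inv
        rhs = K.T @ Se_inv @ r - Sa_inv @ (x - xa)
        return x + np.linalg.solve(lhs, rhs)

    # Toy example: 2 unknowns (say, an aerosol loading and a water-leaving
    # radiance scale) observed through a made-up 3-angle linear forward model.
    A = np.array([[1.0, 0.3], [0.8, 0.6], [0.5, 1.0]])
    x_true = np.array([0.25, 0.02])
    y = A @ x_true + np.random.default_rng(0).normal(0.0, 1e-3, 3)
    x = np.zeros(2)
    for _ in range(5):
        x = gauss_newton_step(x, y, lambda v: A @ v, lambda v: A,
                              Se_inv=np.eye(3) / 1e-6,
                              Sa_inv=np.eye(2) * 1e-2, xa=np.zeros(2))
    print(x)                                 # close to x_true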

  8. Uniting Satellite Data With Health Records to Address the Societal Impacts of Particulate Air Pollution: NASA's Multi-Angle Imager for Aerosols

    NASA Astrophysics Data System (ADS)

    Nastan, A.; Diner, D. J.

    2017-12-01

    Epidemiological studies have demonstrated convincingly that airborne particulate matter has a major impact on human health, particularly in urban areas. However, providing an accurate picture of the health effects of various particle mixtures — distinguished by size, shape, and composition — is difficult due to the constraints of currently available measurement tools and the heterogeneity of atmospheric chemistry and human activities over space and time. The Multi-Angle Imager for Aerosols (MAIA) investigation, currently in development as part of NASA's Earth Venture Instrument Program, will address this issue through a powerful combination of technologies and informatics. Atmospheric measurements collected by the MAIA satellite instrument featuring multiangle and innovative polarimetric imaging capabilities will be combined with available ground monitor data and a chemical transport model to produce maps of speciated particulate matter at 1 km spatial resolution for a selected set of globally distributed cities. The MAIA investigation is also original in integrating data providers (atmospheric scientists), data users (epidemiologists), and stakeholders (public health experts) into a multidisciplinary science team that will tailor the observation and analysis strategy within each target area to improve our understanding of the linkages between different particle types and adverse human health outcomes.

  9. Imaging Multi-Order Fabry-Perot Spectrometer (IMOFPS) for spaceborne measurements of CO

    NASA Astrophysics Data System (ADS)

    Johnson, Brian R.; Kampe, Thomas U.; Cook, William B.; Miecznik, Grzegorz; Novelli, Paul C.; Snell, Hilary E.; Turner-Valle, Jennifer A.

    2003-11-01

    An instrument concept for an Imaging Multi-Order Fabry-Perot Spectrometer (IMOFPS) has been developed for measuring tropospheric carbon monoxide (CO) from space. The concept is based upon a correlation technique similar in nature to multi-order Fabry-Perot (FP) interferometer or gas filter radiometer techniques, which simultaneously measure atmospheric emission from several infrared vibration-rotation lines of CO. Correlation techniques provide a multiplex advantage for increased throughput, high spectral resolution and the selectivity necessary for profiling tropospheric CO. Use of unconventional multilayer interference filter designs leads to improvement in CO spectral line correlation compared with the traditional FP multi-order technique, approaching the theoretical performance of gas filter correlation radiometry. In this implementation, however, the gas cell is replaced with a simple, robust solid interference filter. In addition to measuring CO, the correlation filter technique can be applied to measurements of other important gases such as carbon dioxide, nitrous oxide and methane. Imaging the scene onto a 2-D detector array enables a limited range of spectral sampling owing to the field-angle dependence of the filter transmission function. An innovative anamorphic optical system provides a relatively large instrument field of view for imaging along the orthogonal direction across the detector array. An important advantage of the IMOFPS concept is that it is a small, low-mass, high-spectral-resolution spectrometer with no moving parts. A small correlation spectrometer like IMOFPS would be well suited for global observations of CO2, CO, and CH4 from low Earth orbit, or regional observations from geostationary orbit. A prototype instrument is in development for flight demonstration on an airborne platform, with potential applications to atmospheric chemistry, wildfire and biomass burning, and chemical dispersion monitoring.

  10. Super-resolution mapping using multi-viewing CHRIS/PROBA data

    NASA Astrophysics Data System (ADS)

    Dwivedi, Manish; Kumar, Vinay

    2016-04-01

    High-spatial-resolution remote sensing (RS) data provide detailed information which ensures high-definition visual image analysis of earth surface features. These data sets also support improved information extraction capabilities at a fine scale. In order to improve the spatial resolution of coarser-resolution RS data, the Super Resolution Reconstruction (SRR) technique, which operates on multi-angular image sequences, has become widely acknowledged. In this study, multi-angle CHRIS/PROBA data of the Kutch area are used for SR image reconstruction to enhance the spatial resolution from 18 m to 6 m, in the hope of obtaining a better land cover classification. Various SR approaches, namely Projection onto Convex Sets (POCS), Robust, Iterative Back Projection (IBP), Non-Uniform Interpolation and Structure-Adaptive Normalized Convolution (SANC), were chosen for this study. Subjective assessment through visual interpretation shows substantial improvement in land cover details. Quantitative measures including peak signal-to-noise ratio and structural similarity are used for the evaluation of image quality. It was observed that the SANC SR technique, using the Vandewalle algorithm for low-resolution image registration, outperformed the other techniques. An SVM-based classifier was then used to classify both the SRR output and data resampled to 6 m spatial resolution using bi-cubic interpolation. A comparative analysis between the classified bicubic-interpolated and SR-derived CHRIS/PROBA images shows that the SR-derived classification improves overall accuracy by 10-12%. The results demonstrate that SR methods are able to improve the spatial detail of multi-angle images as well as the classification accuracy.
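    A minimal sketch of the iterative back-projection (IBP) idea, one of the SR approaches listed above, assuming a simple block-averaging downsampling model and perfectly registered inputs; the real CHRIS/PROBA processing additionally requires sub-pixel registration of the multi-angle views (e.g., via the Vandewalle algorithm), which is omitted here, and all parameter values are illustrative.

    import numpy as np

    def downsample(img, f):
        """Simulate the low-resolution imaging model: f x f block averaging."""
        h, w = img.shape
        return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

    def upsample(img, f):
        """Back-project a low-resolution error map by pixel replication."""
        return np.kron(img, np.ones((f, f)))

    def ibp_super_resolve(lr_images, factor, n_iter=30, step=1.0):
        """Iterative back-projection: refine a high-resolution estimate so its
        simulated low-resolution versions match the observed low-resolution images."""
        hr = upsample(np.mean(lr_images, axis=0), factor)    # initial guess
        for _ in range(n_iter):
            for lr in lr_images:
                error = lr - downsample(hr, factor)
                hr = hr + step * upsample(error, factor) / len(lr_images)
        return hr

    # Toy example: three aligned 6x6 observations of an 18x18 scene (factor 3,
    # mirroring the 18 m -> 6 m goal); real multi-angle data would be mis-registered.
    rng = np.random.default_rng(1)
    truth = rng.random((18, 18))
    lr_obs = [downsample(truth, 3) + rng.normal(0.0, 0.01, (6, 6)) for _ in range(3)]
    hr_est = ibp_super_resolve(lr_obs, factor=3)
    print(np.abs(downsample(hr_est, 3) - np.mean(lr_obs, axis=0)).max())  # small residual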

  11. Space-based infrared scanning sensor LOS determination and calibration using star observation

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Xu, Zhan; An, Wei; Deng, Xin-Pu; Yang, Jun-Gang

    2015-10-01

    This paper provides a novel methodology for removing sensor bias from a space-based infrared (IR) system (SBIRS) through the use of stars detected in the background field of the sensor. A space-based IR system uses the line of sight (LOS) to a target for target location. LOS determination and calibration are key preconditions for accurate location and tracking of targets in a space-based IR system, and LOS calibration of a scanning sensor is one of the main difficulties. Subsequent changes of the sensor bias are not taken into account in the conventional LOS determination and calibration process. Based on an analysis of the imaging process of the scanning sensor, a theoretical model for estimating the bias angles from star observations is proposed: a process model of the bias angles and an observation model of the stars are established, an extended Kalman filter (EKF) estimates the bias angles, and the sensor LOS is then calibrated. Time-domain simulation results indicate that the proposed method provides high precision and smooth performance for sensor LOS determination and calibration. The timeliness and precision requirements of the target tracking process in a space-based IR tracking system can be met with the proposed algorithm.
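    A generic Kalman-filter sketch of the underlying idea: treat slowly varying bias angles as the state, propagate them with a random-walk process model, and update them from star observations. The small-angle measurement model used below (the observed star offset equals the bias) is a deliberate simplification for illustration, not the paper's scanning-sensor geometry, and all noise values are invented.

    import numpy as np

    def ekf_bias_update(x, P, z, h, H, R, Q):
        """One predict/update cycle of an (extended) Kalman filter for sensor
        bias angles x with covariance P, given a star measurement z, measurement
        model h with Jacobian H, measurement noise R and process noise Q."""
        # Predict: bias angles modeled as a slowly drifting random walk.
        x_pred, P_pred = x, P + Q
        # Update with the star observation.
        y = z - h(x_pred)                        # innovation
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
        x_new = x_pred + K @ y
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    # Toy run: two bias angles (radians). Small-angle model: h(x) = x, H = I.
    rng = np.random.default_rng(2)
    true_bias = np.array([3e-4, -2e-4])
    x, P = np.zeros(2), np.eye(2) * 1e-6
    H, R, Q = np.eye(2), np.eye(2) * (5e-5) ** 2, np.eye(2) * 1e-12
    for _ in range(200):                         # 200 star sightings
        z = true_bias + rng.normal(0.0, 5e-5, 2)
        x, P = ekf_bias_update(x, P, z, h=lambda v: v, H=H, R=R, Q=Q)
    print(x)                                     # converges near true_bias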

  12. Discriminative Multi-View Interactive Image Re-Ranking.

    PubMed

    Li, Jun; Xu, Chang; Yang, Wankou; Sun, Changyin; Tao, Dacheng

    2017-07-01

    Given unreliable visual patterns and insufficient query information, content-based image retrieval is often suboptimal and requires image re-ranking using auxiliary information. In this paper, we propose discriminative multi-view interactive image re-ranking (DMINTIR), which integrates user relevance feedback capturing users' intentions and multiple features that sufficiently describe the images. In DMINTIR, heterogeneous property features are incorporated in the multi-view learning scheme to exploit their complementarities. In addition, a discriminatively learned weight vector is obtained to reassign updated scores and target images for re-ranking. Compared with other multi-view learning techniques, our scheme not only generates a compact representation in the latent space from the redundant multi-view features but also maximally preserves the discriminative information in feature encoding by the large-margin principle. Furthermore, the generalization error bound of the proposed algorithm is theoretically analyzed and shown to be improved by the interactions between the latent space and discriminant function learning. Experimental results on two benchmark data sets demonstrate that our approach boosts baseline retrieval quality and is competitive with other state-of-the-art re-ranking strategies.

  13. The extent of visual space inferred from perspective angles

    PubMed Central

    Erkelens, Casper J.

    2015-01-01

    Retinal images are perspective projections of the visual environment. Perspective projections do not explain why we perceive perspective in 3-D space. Analysis of underlying spatial transformations shows that visual space is a perspective transformation of physical space if parallel lines in physical space vanish at finite distance in visual space. Perspective angles, i.e., the angle perceived between parallel lines in physical space, were estimated for rails of a straight railway track. Perspective angles were also estimated from pictures taken from the same point of view. Perspective angles between rails ranged from 27% to 83% of their angular size in the retinal image. Perspective angles prescribe the distance of vanishing points of visual space. All computed distances were shorter than 6 m. The shallow depth of a hypothetical space inferred from perspective angles does not match the depth of visual space, as it is perceived. Incongruity between the perceived shape of a railway line on the one hand and the experienced ratio between width and length of the line on the other hand is huge, but apparently so unobtrusive that it has remained unnoticed. The incompatibility between perspective angles and perceived distances casts doubt on evidence for a curved visual space that has been presented in the literature and was obtained from combining judgments of distances and angles with physical positions. PMID:26034567

  14. Fluctuations of Lake Eyre, South Australia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Lake Eyre is a large salt lake situated between two deserts in one of Australia's driest regions. However, this low-lying lake attracts run-off from one of the largest inland drainage systems in the world. The drainage basin is very responsive to rainfall variations, and changes dramatically with Australia's inter-annual weather fluctuations. When Lake Eyre fills, as it did in 1989, it is temporarily Australia's largest lake, and becomes dense with birds, frogs and colorful plant life. The lake responds to extended dry periods (often associated with El Niño events) by drying completely.

    These four images from the Multi-angle Imaging SpectroRadiometer contrast the lake area at the start of the austral summers of 2000 and 2002. The top two panels portray the region as it appeared on December 9, 2000. Heavy rains in the first part of 2000 caused both the north and south sections of the lake to fill partially and the northern part of the lake still contained significant standing water by the time these data were acquired. The bottom panels were captured on November 29, 2002. Rainfall during 2002 was significantly below average ( http://www.bom.gov.au/ ), although showers occurring in the week before the image was acquired helped alleviate this condition slightly.

    The left-hand panels portray the area as it appeared to MISR's vertical-viewing (nadir) camera, and are false-color views comprised of data from the near-infrared, green and blue channels. Here, wet and/or moist surfaces appear blue-green, since water selectively absorbs longer wavelengths such as near-infrared. The right-hand panels are multi-angle composites created with red band data from MISR's 60-degree forward, nadir and 60-degree backward-viewing cameras, displayed as red, green and blue, respectively. In these multi-angle composites, color variations serve as a proxy for changes in angular reflectance, and indicate textural properties of the surface related to roughness and/or moisture content. Data from the two dates were processed identically to preserve relative variations in brightness between them. Wet surfaces or areas with standing water appear green due to the effect of sunglint at the nadir camera view angle. Dry, salt-encrusted parts of the lake appear bright white or gray. Purple areas have enhanced forward scattering, possibly as a result of surface moistness. Some variations exhibited by the multi-angle composites are not discernible in the nadir multi-spectral images and vice versa, suggesting that the combination of angular and spectral information is a more powerful diagnostic of surface conditions than either technique by itself.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbits 5194 and 15679. The panels cover an area of 146 kilometers x 122 kilometers, and utilize data from blocks 113 to 114 within World Reference System-2 path 100.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory,Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  15. Atherosclerosis imaging using 3D black blood TSE SPACE vs 2D TSE

    PubMed Central

    Wong, Stephanie K; Mobolaji-Iawal, Motunrayo; Arama, Leron; Cambe, Joy; Biso, Sylvia; Alie, Nadia; Fayad, Zahi A; Mani, Venkatesh

    2014-01-01

    AIM: To compare 3D black blood turbo spin echo (TSE) sampling perfection with application-optimized contrast using different flip angle evolution (SPACE) vs 2D TSE in evaluating atherosclerotic plaques in multiple vascular territories. METHODS: The carotid, aortic, and femoral arterial walls of 16 patients at risk for cardiovascular or atherosclerotic disease were studied using both 3D black blood magnetic resonance imaging SPACE and conventional 2D multi-contrast TSE sequences using a consolidated imaging approach in the same imaging session. Qualitative and quantitative analyses were performed on the images. Agreement of morphometric measurements between the two imaging sequences was assessed using a two-sample t-test, calculation of the intra-class correlation coefficient, and by the method of linear regression and Bland-Altman analyses. RESULTS: No statistically significant qualitative differences were found between the 3D SPACE and 2D TSE techniques for images of the carotids and aorta. For images of the femoral arteries, however, there were statistically significant differences in all four qualitative scores between the two techniques. Using the current approach, 3D SPACE is suboptimal for femoral imaging. However, this may be due to coils not being optimized for femoral imaging. Quantitatively, in our study, higher mean total vessel area measurements for the 3D SPACE technique across all three vascular beds were observed. No significant differences in lumen area for both the right and left carotids were observed between the two techniques. Overall, a significant correlation existed between measures obtained with the two approaches. CONCLUSION: Qualitative and quantitative measurements between 3D SPACE and 2D TSE techniques are comparable. 3D SPACE may be a feasible approach in the evaluation of cardiovascular patients. PMID:24876923

  16. The application analysis of the multi-angle polarization technique for ocean color remote sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Yongchao; Zhu, Jun; Yin, Huan; Zhang, Keli

    2017-02-01

    The multi-angle polarization technique, which uses the intensity of polarized radiation as the observed quantity, is a new remote sensing means for earth observation. With this method, not only can multi-angle light intensity data be provided, but the multi-angle information of polarized radiation can also be obtained. The technique may therefore solve problems that cannot be solved with traditional remote sensing methods. Nowadays, the multi-angle polarization technique has become one of the hot topics in the field of international quantitative remote sensing research. In this paper, we first introduce the principles of the multi-angle polarization technique; the state of basic research and engineering applications is then summarized and analysed for 1) polarization-based removal of sun glitter, 2) ocean color remote sensing based on polarization, 3) oil spill detection using the polarization technique, and 4) ocean aerosol monitoring based on polarization. Finally, based on the previous work, we briefly present the problems and prospects of the multi-angle polarization technique for China's ocean color remote sensing.

  17. SU-E-J-110: A Novel Level Set Active Contour Algorithm for Multimodality Joint Segmentation/Registration Using the Jensen-Rényi Divergence.

    PubMed

    Markel, D; Naqa, I El; Freeman, C; Vallières, M

    2012-06-01

    To present a novel joint segmentation/registration framework for multimodality image-guided and adaptive radiotherapy. A major challenge to this framework is the sensitivity of many segmentation or registration algorithms to noise. Presented is a level set active contour based on the Jensen-Rényi (JR) divergence to achieve improved noise robustness in a multi-modality imaging space. It was found that the JR divergence, when used for segmentation, has improved robustness to noise compared with mutual information or other entropy-based metrics. The MI metric failed at around 2/3 the noise power of the JR divergence. The JR divergence metric is useful for the task of joint segmentation/registration of multimodality images and shows improved results compared with entropy-based metrics. The algorithm can be easily modified to incorporate non-intensity-based images, which would allow applications in multi-modality and texture analysis. © 2012 American Association of Physicists in Medicine.
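    The Jensen-Rényi divergence named above has a standard definition built from Rényi entropy, JR = H_alpha(sum_i w_i p_i) - sum_i w_i H_alpha(p_i); a small sketch for intensity histograms follows, with the value of alpha (kept in (0, 1) so the divergence is nonnegative) and the equal weights chosen purely for illustration.

    import numpy as np

    def renyi_entropy(p, alpha):
        """Renyi entropy H_alpha(p) = log(sum p_i^alpha) / (1 - alpha), alpha != 1."""
        p = p[p > 0]
        return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

    def jensen_renyi(distributions, alpha=0.5, weights=None):
        """Jensen-Renyi divergence of a set of probability distributions:
        JR = H_alpha(mixture) - weighted mean of H_alpha(p_i).
        With 0 < alpha < 1 it is nonnegative, zero when the distributions coincide,
        and large when they differ."""
        P = np.asarray(distributions, dtype=float)
        P = P / P.sum(axis=1, keepdims=True)
        w = np.full(len(P), 1.0 / len(P)) if weights is None else np.asarray(weights)
        mixture = (w[:, None] * P).sum(axis=0)
        return renyi_entropy(mixture, alpha) - np.sum(
            w * np.array([renyi_entropy(p, alpha) for p in P]))

    # Example: histograms inside vs. outside a tentative contour. Dissimilar
    # histograms give a larger divergence, which an active contour can maximize.
    inside = np.array([10.0, 40.0, 35.0, 10.0, 5.0])
    outside = np.array([5.0, 10.0, 20.0, 40.0, 25.0])
    print(jensen_renyi([inside, outside]))   # > 0
    print(jensen_renyi([inside, inside]))    # ~ 0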

  18. Joint sparse reconstruction of multi-contrast MRI images with graph based redundant wavelet transform.

    PubMed

    Lai, Zongying; Zhang, Xinlin; Guo, Di; Du, Xiaofeng; Yang, Yonggui; Guo, Gang; Chen, Zhong; Qu, Xiaobo

    2018-05-03

    Multi-contrast images in magnetic resonance imaging (MRI) provide abundant contrast information reflecting the characteristics of the internal tissues of human bodies, and thus have been widely utilized in clinical diagnosis. However, long acquisition time limits the application of multi-contrast MRI. One efficient way to accelerate data acquisition is to under-sample the k-space data and then reconstruct images with a sparsity constraint. However, image quality is compromised at high acceleration factors if images are reconstructed individually. We aim to improve the images with a jointly sparse reconstruction and a graph-based redundant wavelet transform (GBRWT). First, a sparsifying transform, GBRWT, is trained to reflect the similarity of tissue structures in multi-contrast images. Second, joint multi-contrast image reconstruction is formulated as an ℓ2,1-norm optimization problem under GBRWT representations. Third, the optimization problem is numerically solved using a derived alternating direction method. Experimental results on synthetic and in vivo MRI data demonstrate that the proposed joint reconstruction method can achieve lower reconstruction errors and better preserve image structures than the compared joint reconstruction methods. Moreover, the proposed method outperforms single-image reconstruction with a joint sparsity constraint of multi-contrast images. The proposed method explores the joint sparsity of multi-contrast MRI images under the graph-based redundant wavelet transform and realizes joint sparse reconstruction of multi-contrast images. Experiments demonstrate that the proposed method outperforms the compared joint reconstruction methods as well as individual reconstructions. With this high-quality image reconstruction method, it is possible to achieve high acceleration factors by exploiting the complementary information provided by multi-contrast MRI.
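    The ℓ2,1 penalty named above couples coefficients of the same transform location across contrasts, and its proximal operator is row-wise soft thresholding; the sketch below shows that operator in isolation, as a generic illustration rather than the paper's full alternating-direction solver (the GBRWT itself is not reproduced, and the example matrix is invented).

    import numpy as np

    def prox_l21(W, tau):
        """Proximal operator of tau * ||W||_{2,1}, where row i of W holds the
        transform coefficients of one location across all contrasts.
        Rows are shrunk jointly: a row survives only if its 2-norm exceeds tau."""
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
        return W * scale

    # Example: 4 coefficient locations x 3 contrasts. The first two rows are
    # strong in every contrast (shared structure) and are kept; the weak,
    # inconsistent rows are suppressed together.
    W = np.array([[3.0, 2.5, 2.8],
                  [2.0, 2.2, 1.9],
                  [0.1, 0.0, 0.2],
                  [0.0, 0.3, 0.1]])
    print(prox_l21(W, tau=0.5))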

  19. The Beagle 2 Stereo Camera System: Scientific Objectives and Design Characteristics

    NASA Astrophysics Data System (ADS)

    Griffiths, A.; Coates, A.; Josset, J.; Paar, G.; Sims, M.

    2003-04-01

    The Stereo Camera System (SCS) will provide wide-angle (48 degree) multi-spectral stereo imaging of the Beagle 2 landing site in Isidis Planitia with an angular resolution of 0.75 milliradians. Based on the SpaceX Modular Micro-Imager, the SCS is composed of twin cameras (with 1024 by 1024 pixel frame transfer CCDs) and twin filter wheel units (with a combined total of 24 filters). The primary mission objective is to construct a digital elevation model of the area in reach of the lander's robot arm. The SCS specifications and the following baseline studies are described: panoramic RGB colour imaging of the landing site and panoramic multi-spectral imaging at 12 distinct wavelengths to study the mineralogy of the landing site, and solar observations to measure water vapour absorption and the atmospheric dust optical density. Also envisaged are multi-spectral observations of Phobos & Deimos (observations of the moons relative to background stars will be used to determine the lander's location and orientation relative to the Martian surface), monitoring of the landing site to detect temporal changes, observation of the actions and effects of the other PAW experiments (including rock texture studies with a close-up lens) and collaborative observations with the Mars Express orbiter instrument teams. Due to be launched in May of this year, the total system mass is 360 g, the required volume envelope is 747 cm^3 and the average power consumption is 1.8 W. A 10 Mbit/s RS422 bus connects each camera to the lander common electronics.

  20. 3D Reconstruction of Space Objects from Multi-Views by a Visible Sensor

    PubMed Central

    Zhang, Haopeng; Wei, Quanmao; Jiang, Zhiguo

    2017-01-01

    In this paper, a novel 3D reconstruction framework is proposed to recover the 3D structural model of a space object from its multi-view images captured by a visible sensor. Given an image sequence, this framework first estimates the relative camera poses and recovers the depths of the surface points by the structure from motion (SFM) method, then the patch-based multi-view stereo (PMVS) algorithm is utilized to generate a dense 3D point cloud. To resolve the wrong matches arising from the symmetric structure and repeated textures of space objects, a new strategy is introduced, in which images are added to SFM in imaging order. Meanwhile, a refining process exploiting the structural prior knowledge that most sub-components of artificial space objects are composed of basic geometric shapes is proposed and applied to the recovered point cloud. The proposed reconstruction framework is tested on both simulated image datasets and real image datasets. Experimental results illustrate that the recovered point cloud models of space objects are accurate and have a complete coverage of the surface. Moreover, outliers and points with severe noise are effectively filtered out by the refinement, resulting in a distinct improvement of the structure and visualization of the recovered points. PMID:28737675
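    A bare-bones OpenCV sketch of the two-view core of the SFM stage (essential matrix, relative pose, triangulation); feature matching, the imaging-order insertion strategy, the PMVS densification and the shape-prior refinement described above are all omitted, and the intrinsics matrix in the usage comment is invented for illustration.

    import numpy as np
    import cv2

    def two_view_reconstruction(pts1, pts2, K):
        """Estimate the relative camera pose from matched 2-D points (Nx2 float
        arrays) in two images and triangulate sparse 3-D structure; this is the
        two-view building block that incremental SFM repeats image by image."""
        E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at origin
        P2 = K @ np.hstack([R, t])                           # second camera pose
        X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # homogeneous 4xN
        return R, t, (X_h[:3] / X_h[3]).T                    # Nx3 points

    # Hypothetical usage with matched keypoints (e.g., from SIFT/ORB matching);
    # the focal length and principal point below are made up:
    # K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
    # R, t, points3d = two_view_reconstruction(pts1, pts2, K)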

  1. Methods, systems and apparatus for controlling third harmonic voltage when operating a multi-phase machine in an overmodulation region

    DOEpatents

    Perisic, Milun; Kinoshita, Michael H; Ranson, Ray M; Gallegos-Lopez, Gabriel

    2014-06-03

    Methods, system and apparatus are provided for controlling third harmonic voltages when operating a multi-phase machine in an overmodulation region. The multi-phase machine can be, for example, a five-phase machine in a vector controlled motor drive system that includes a five-phase PWM controlled inverter module that drives the five-phase machine. Techniques for overmodulating a reference voltage vector are provided. For example, when the reference voltage vector is determined to be within the overmodulation region, an angle of the reference voltage vector can be modified to generate a reference voltage overmodulation control angle, and a magnitude of the reference voltage vector can be modified, based on the reference voltage overmodulation control angle, to generate a modified magnitude of the reference voltage vector. By modifying the reference voltage vector, voltage command signals that control a five-phase inverter module can be optimized to increase output voltages generated by the five-phase inverter module.

  2. Cloud information content analysis of multi-angular measurements in the oxygen A-band: application to 3MI and MSPI

    NASA Astrophysics Data System (ADS)

    Merlin, G.; Riedi, J.; Labonnote, L. C.; Cornet, C.; Davis, A. B.; Dubuisson, P.; Desmons, M.; Ferlay, N.; Parol, F.

    2015-12-01

    The vertical distribution of cloud cover has a significant impact on a large number of meteorological and climatic processes. Cloud top altitude and cloud geometrical thickness are therefore essential parameters. Previous studies established the possibility of retrieving those parameters from multi-angular oxygen A-band measurements. Here we perform a study and comparison of the performance of future instruments. The 3MI (Multi-angle, Multi-channel and Multi-polarization Imager) instrument developed by EUMETSAT, which is an extension of the POLDER/PARASOL instrument, and MSPI (Multi-angle SpectroPolarimetric Imager), developed by NASA's Jet Propulsion Laboratory, will measure total and polarized light reflected by the Earth's atmosphere-surface system in several spectral bands (from UV to SWIR) and several viewing geometries. Those instruments should provide opportunities to observe the links between cloud structures and the anisotropy of the solar radiation reflected to space. Specific algorithms will need to be developed in order to take advantage of the new capabilities of these instruments. However, prior to this effort, we need to understand, through a theoretical Shannon information content analysis, the limits and advantages of these new instruments for retrieving liquid and ice cloud properties, and especially, in this study, the amount of information coming from the A-band channel on the cloud top altitude (CTOP) and geometrical thickness (CGT). We compare the information content of the 3MI A-band in two configurations and that of MSPI. Quantitative information content estimates show that the retrieval of CTOP with high accuracy is possible in almost all cases investigated. The retrieval of CGT seems less easy but possible for optically thick clouds above a black surface, at least when CGT > 1-2 km.
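    Shannon information content analyses of this kind typically evaluate the standard optimal-estimation expression H = 0.5 log2 det(I + Sa K^T Se^-1 K); the sketch below computes it for invented Jacobians, only to illustrate how adding channels or view angles enlarges K and hence H, and none of the numbers reflect actual 3MI or MSPI characteristics.

    import numpy as np

    def shannon_information_content(K, Se, Sa):
        """Shannon information content (bits) of a measurement with Jacobian K,
        measurement-error covariance Se and a priori covariance Sa:
            H = 0.5 * log2 det(I + Sa K^T Se^-1 K)."""
        n = Sa.shape[0]
        M = np.eye(n) + Sa @ K.T @ np.linalg.inv(Se) @ K
        _, logdet = np.linalg.slogdet(M)
        return 0.5 * logdet / np.log(2.0)

    # Invented example: state = (cloud top altitude, geometrical thickness).
    # A single-angle Jacobian vs. a three-angle Jacobian over the same state.
    Sa = np.diag([2.0 ** 2, 1.5 ** 2])                    # a priori variances (km^2)
    Se1, K1 = np.eye(1) * 0.02 ** 2, np.array([[0.05, 0.01]])
    Se3 = np.eye(3) * 0.02 ** 2
    K3 = np.array([[0.05, 0.01], [0.04, 0.03], [0.06, 0.02]])
    print(shannon_information_content(K1, Se1, Sa))       # fewer bits
    print(shannon_information_content(K3, Se3, Sa))       # more bits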

  3. Russia

    Atmospheric Science Data Center

    2013-04-16

    article title: Smoke and Clouds over Russia ... Multi-angle Imaging SpectroRadiometer (MISR) images of Russia's far east Khabarovsk region. The images were acquired on May 13, 2001 ...

  4. A fast screening protocol for carotid plaques imaging using 3D multi-contrast MRI without contrast agent.

    PubMed

    Zhang, Na; Zhang, Lei; Yang, Qi; Pei, Anqi; Tong, Xiaoxin; Chung, Yiu-Cho; Liu, Xin

    2017-06-01

    To implement a fast (~15min) MRI protocol for carotid plaque screening using 3D multi-contrast MRI sequences without contrast agent on a 3Tesla MRI scanner. 7 healthy volunteers and 25 patients with clinically confirmed transient ischemic attack or suspected cerebrovascular ischemia were included in this study. The proposed protocol, including 3D T1-weighted and T2-weighted SPACE (variable-flip-angle 3D turbo spin echo), and T1-weighted magnetization prepared rapid acquisition gradient echo (MPRAGE) was performed first and was followed by 2D T1-weighted and T2-weighted turbo spin echo, and post-contrast T1-weighted SPACE sequences. Image quality, number of plaques, and vessel wall thicknesses measured at the intersection of the plaques were evaluated and compared between sequences. Average examination time of the proposed protocol was 14.6min. The average image quality scores of 3D T1-weighted, T2-weighted SPACE, and T1-weighted magnetization prepared rapid acquisition gradient echo were 3.69, 3.75, and 3.48, respectively. There was no significant difference in detecting the number of plaques and vulnerable plaques using pre-contrast 3D images with or without post-contrast T1-weighted SPACE. The 3D SPACE and 2D turbo spin echo sequences had excellent agreement (R=0.96 for T1-weighted and 0.98 for T2-weighted, p<0.001) regarding vessel wall thickness measurements. The proposed protocol demonstrated the feasibility of attaining carotid plaque screening within a 15-minute scan, which provided sufficient anatomical coverage and critical diagnostic information. This protocol offers the potential for rapid and reliable screening for carotid plaques without contrast agent. Copyright © 2016. Published by Elsevier Inc.

  5. California Fires

    Atmospheric Science Data Center

    2014-05-15

    ... Lightning strikes sparked more than a thousand fires in northern California. This image was captured by the Multi-angle Imaging SpectroRadiometer (MISR) instrument's nadir ...

  6. Multi-layer Clouds Over the South Indian Ocean

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The complex structure and beauty of polar clouds are highlighted by these images acquired by the Multi-angle Imaging SpectroRadiometer (MISR) on April 23, 2003. These clouds occur at multiple altitudes and exhibit a noticeable cyclonic circulation over the Southern Indian Ocean, to the north of Enderby Land, East Antarctica.

    The image at left was created by overlying a natural-color view from MISR's downward-pointing (nadir) camera with a color-coded stereo height field. MISR retrieves heights by a pattern recognition algorithm that utilizes multiple view angles to derive cloud height and motion. The opacity of the height field was then reduced until the field appears as a translucent wash over the natural-color image. The resulting purple, cyan and green hues of this aesthetic display indicate low, medium or high altitudes, respectively, with heights ranging from less than 2 kilometers (purple) to about 8 kilometers (green). In the lower right corner, the edge of the Antarctic coastline and some sea ice can be seen through some thin, high cirrus clouds.

    The right-hand panel is a natural-color image from MISR's 70-degree backward viewing camera. This camera looks backwards along the path of Terra's flight, and in the southern hemisphere the Sun is in front of this camera. This perspective causes the cloud-tops to be brightly outlined by the sun behind them, and enhances the shadows cast by clouds with significant vertical structure. An oblique observation angle also enhances the reflection of light by atmospheric particles, and accentuates the appearance of polar clouds. The dark ocean and sea ice that were apparent through the cirrus clouds at the bottom right corner of the nadir image are overwhelmed by the brightness of these clouds at the oblique view.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously from pole to pole, and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17794. The panels cover an area of 335 kilometers x 605 kilometers, and utilize data from blocks 142 to 145 within World Reference System-2 path 155.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  7. Spiral Flow Phantom for Ultrasound Flow Imaging Experimentation.

    PubMed

    Yiu, Billy Y S; Yu, Alfred C H

    2017-12-01

    As new ultrasound flow imaging methods are being developed, there is a growing need to devise appropriate flow phantoms that can holistically assess the accuracy of the derived flow estimates. In this paper, we present a novel spiral flow phantom design whose Archimedean spiral lumen naturally gives rise to multi-directional flow over all possible angles (i.e., from 0° to 360°). Developed using lost-core casting principles, the phantom geometry comprised a three-loop spiral (4-mm diameter and 5-mm pitch), and it was set to operate in steady flow mode (3 mL/s flow rate). After characterizing the flow pattern within the spiral vessel using computational fluid dynamics (CFD) simulations, the phantom was applied to evaluate the performance of color flow imaging (CFI) and high-frame-rate vector flow imaging. Significant spurious coloring artifacts were found when using CFI to visualize flow in the spiral phantom. In contrast, using vector flow imaging (least-squares multi-angle Doppler based on a three-transmit and three-receive configuration), we observed consistent depiction of flow velocity magnitude and direction within the spiral vessel lumen. The spiral flow phantom was also found to be a useful tool in facilitating demonstration of dynamic flow visualization based on vector projectile imaging. Overall, these results demonstrate the spiral flow phantom's practical value in analyzing the efficacy of ultrasound flow estimation methods.
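    Least-squares multi-angle Doppler, as referenced above, recovers an in-plane velocity vector from several one-dimensional Doppler projections; the sketch below shows that solve with assumed steering angles and a made-up velocity, operating on a single sample rather than on per-pixel beamformed data.

    import numpy as np

    def velocity_from_doppler(projections, angles_rad):
        """Least-squares estimate of a 2-D velocity vector (vx, vz) from Doppler
        velocity projections measured along several steering directions."""
        # Each row maps (vx, vz) onto the measurement direction of one angle.
        A = np.column_stack([np.sin(angles_rad), np.cos(angles_rad)])
        v, *_ = np.linalg.lstsq(A, np.asarray(projections), rcond=None)
        return v

    # Example: a true velocity of 0.30 m/s lateral and 0.10 m/s axial, observed
    # with three steering angles (-15, 0, +15 degrees) plus measurement noise.
    angles = np.radians([-15.0, 0.0, 15.0])
    v_true = np.array([0.30, 0.10])
    A = np.column_stack([np.sin(angles), np.cos(angles)])
    meas = A @ v_true + np.random.default_rng(3).normal(0.0, 0.005, 3)
    print(velocity_from_doppler(meas, angles))    # close to (0.30, 0.10)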

  8. The high resolution stereo camera (HRSC): acquisition of multi-spectral 3D-data and photogrammetric processing

    NASA Astrophysics Data System (ADS)

    Neukum, Gerhard; Jaumann, Ralf; Scholten, Frank; Gwinner, Klaus

    2017-11-01

    At the Institute of Space Sensor Technology and Planetary Exploration of the German Aerospace Center (DLR), the High Resolution Stereo Camera (HRSC) has been designed for international missions to the planet Mars. For more than three years an airborne version of this camera, the HRSC-A, has been successfully applied in many flight campaigns and in a variety of different applications. It combines 3D capabilities and high resolution with multispectral data acquisition. Variable resolutions can be generated, depending on the camera control settings. A high-end GPS/INS system in combination with the multi-angle image information yields precise, high-frequency orientation data for the acquired image lines. In order to handle these data, a completely automated photogrammetric processing system has been developed, which allows the generation of multispectral 3D image products for large areas, with planimetric and height accuracies in the decimeter range. This accuracy has been confirmed by detailed investigations.

  9. Height and Motion of the Chikurachki Eruption Plume

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The height and motion of the ash and gas plume from the April 22, 2003, eruption of the Chikurachki volcano is portrayed in these views from the Multi-angle Imaging SpectroRadiometer (MISR). Situated within the northern portion of the volcanically active Kuril Island group, the Chikurachki volcano is an active stratovolcano on Russia's Paramushir Island (just south of the Kamchatka Peninsula).

    In the upper panel of the still image pair, this scene is displayed as a natural-color view from MISR's vertical-viewing (nadir) camera. The white and brownish-grey plume streaks several hundred kilometers from the eastern edge of Paramushir Island toward the southeast. The darker areas of the plume typically indicate volcanic ash, while the white portions of the plume indicate entrained water droplets and ice. According to the Kamchatkan Volcanic Eruptions Response Team (KVERT), the temperature of the plume near the volcano on April 22 was -12 °C.

    The lower panel shows heights derived from automated stereoscopic processing of MISR's multi-angle imagery, in which the plume is determined to reach heights of about 2.5 kilometers above sea level. Heights for clouds above and below the eruption plume were also retrieved, including the high-altitude cirrus clouds in the lower left (orange pixels). The distinctive patterns of these features provide sufficient spatial contrast for MISR's stereo height retrieval to perform automated feature matching between the images acquired at different view angles. Places where clouds or other factors precluded a height retrieval are shown in dark gray.

    The multi-angle 'fly-over' animation (below) allows the motion of the plume and of the surrounding clouds to be directly observed. The frames of the animation consist of data acquired by the 70-degree, 60-degree, 46-degree and 26-degree forward-viewing cameras in sequence, followed by the images from the nadir camera and each of the four backward-viewing cameras, ending with the view from the 70-degree backward camera.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously from pole to pole, and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17776. The panels cover an area of approximately 296 kilometers x 216 kilometers (still images) and 185 kilometers x 154 kilometers (animation), and utilize data from blocks 50 to 51 within World Reference System-2 path 100.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.


  10. A method of directly extracting multiwave angle-domain common-image gathers

    NASA Astrophysics Data System (ADS)

    Han, Jianguang; Wang, Yun

    2017-10-01

    Angle-domain common-image gathers (ADCIGs) provide an effective means for migration velocity analysis and amplitude-versus-angle analysis in oil and gas seismic exploration. On the basis of multi-component Gaussian beam prestack depth migration (GB-PSDM), an alternative method of directly extracting multiwave ADCIGs is presented in this paper. We first introduce multi-component GB-PSDM, in which wavefield separation is performed to obtain separated PP- and PS-wave seismic records before migration imaging of the multiwave seismic data. Then, the principle of extracting PP- and PS-ADCIGs using GB-PSDM is presented. The propagation angle can be obtained from the real-valued travel time of the Gaussian beam in the course of GB-PSDM, and it is used to calculate the incidence and reflection angles. Two kinds of ADCIGs can be extracted for the PS-wave: one is the P-wave incidence ADCIG and the other is the S-wave reflection ADCIG. In this paper, we use the incident angle to plot the ADCIGs for both PP- and PS-waves. Finally, tests on synthetic examples show that the method introduced here is accurate and effective.
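
    As a rough, generic illustration of how an angle-domain gather is assembled once propagation angles are available (this is not the GB-PSDM implementation described in the record), the sketch below computes the incidence angle at an image point from the source- and receiver-side ray directions and bins the migrated amplitude by that angle. The ray vectors, bin edges and depth index are hypothetical.

      import numpy as np

      def incidence_angle(p_src, p_rec):
          """Half the opening angle between source- and receiver-side ray
          direction vectors at an image point, in degrees."""
          a = np.asarray(p_src, float) / np.linalg.norm(p_src)
          b = np.asarray(p_rec, float) / np.linalg.norm(p_rec)
          return 0.5 * np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

      def accumulate_adcig(contributions, n_depth, angle_bins):
          """contributions: iterable of (depth_index, p_src, p_rec, amplitude).
          Returns an (n_depth, n_angle_bins) angle-domain common-image gather."""
          gather = np.zeros((n_depth, len(angle_bins) - 1))
          for iz, p_src, p_rec, amp in contributions:
              theta = incidence_angle(p_src, p_rec)
              ia = np.searchsorted(angle_bins, theta) - 1
              if 0 <= ia < gather.shape[1]:
                  gather[iz, ia] += amp
          return gather

      # Hypothetical usage: one contribution with ~15 degree incidence at depth index 10.
      bins = np.arange(0.0, 61.0, 5.0)
      adcig = accumulate_adcig([(10, (0.26, 0.97), (-0.26, 0.97), 1.0)], 50, bins)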

  11. Multi-angle nuclear imaging apparatus and method

    DOEpatents

    Anger, Hal O. [Berkeley, CA

    1980-04-08

    Nuclear imaging apparatus for obtaining multi-plane readouts of radioactive material in a human or animal subject. A probe disposed in the vicinity of the subject is provided for receiving radiation from radiating sources in the subject and for forming a probe radiation image. The probe has a collimator with different portions thereof having holes disposed at different angles. A single scintillation crystal overlies the collimator for receiving radiation passing through the collimator and producing scintillations to provide the probe image. An array of photomultiplier tubes overlies the single crystal for observing the probe image and providing electrical outputs. Conversion apparatus is provided for converting the electrical outputs representing the probe image into optical images displayed on the screen of a cathode ray tube. Divider apparatus is provided for dividing the probe radiation image into a plurality of areas with the areas corresponding to different portions of the collimator having holes disposed at different angles. A light sensitive medium is provided for receiving optical images. Apparatus is provided for causing relative movement between the probe and the subject. Apparatus is also provided for causing relative movement between the optical image on the screen and the light sensitive medium which corresponds to the relative movement between the probe and the subject whereby there is produced on the light sensitive medium a plurality of images that portray the subject as seen from different angles corresponding to the portions of the collimator having holes at different angles.

  12. Multi-angle nuclear imaging apparatus and method

    DOEpatents

    Anger, H.O.

    1980-04-08

    A nuclear imaging apparatus is described for obtaining multi-plane readouts of radioactive material in a human or animal subject. A probe disposed in the vicinity of the subject is provided for receiving radiation from radiating sources in the subject and for forming a probe radiation image. The probe has a collimator with different portions having holes disposed at different angles. A single scintillation crystal overlies the collimator for receiving radiation passing through the collimator and producing scintillations to provide the probe image. An array of photomultiplier tubes overlies the single crystal for observing the probe image and providing electrical outputs. Conversion apparatus is provided for converting the electrical outputs representing the probe image into optical images displayed on the screen of a cathode ray tube. Divider apparatus is provided for dividing the probe radiation image into a plurality of areas with the areas corresponding to different portions of the collimator having holes disposed at different angles. A light sensitive medium is provided for receiving optical images. Apparatus is provided for causing relative movement between the probe and the subject. Apparatus is also provided for causing relative movement between the optical image on the screen and the light sensitive medium which corresponds to the relative movement between the probe and the subject whereby there is produced on the light sensitive medium a plurality of images that portray the subject as seen from different angles corresponding to the portions of the collimator having holes at different angles. 11 figs.

  13. Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System

    NASA Astrophysics Data System (ADS)

    Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki

    In this paper, we present an automatic vision-based traffic sign recognition system that can detect and classify traffic signs at long distance under different lighting conditions. To realize this purpose, the traffic sign recognition is developed in an originally proposed dual-focal active camera system. In this system, a telephoto camera is equipped as an assistant to a wide angle camera. The telephoto camera can capture a high accuracy image of an object of interest in the field of view of the wide angle camera. The image from the telephoto camera provides enough information for recognition when the traffic sign appears at too low a resolution in the wide angle camera image. In the proposed system, traffic sign detection and classification are processed separately on the different images from the wide angle camera and the telephoto camera. In addition, in order to detect traffic signs against complex backgrounds under different lighting conditions, we propose a type of color transformation that is invariant to lighting changes. This color transformation highlights the pattern of traffic signs by reducing the complexity of the background. Based on the color transformation, a multi-resolution detector with cascade mode is trained and used to locate traffic signs at low resolution in the image from the wide angle camera. After detection, the system actively captures a high accuracy image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on information from the wide angle camera. In classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high accuracy image from the telephoto camera. Finally, based on the proposed system, a set of experiments in the domain of traffic sign recognition is presented. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution under different lighting conditions.
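
    The record does not give the exact form of its lighting-invariant color transformation; as a hedged illustration of the general idea of discarding brightness while keeping chromatic information, the sketch below maps RGB to normalized chromaticity and emphasizes the red component typical of many sign borders. The function name and the brightness-scaling test are illustrative assumptions only.

      import numpy as np

      def red_chromaticity_map(rgb):
          """Normalized-chromaticity red emphasis: r/(r+g+b) minus the larger of
          the other two chromaticities. Roughly invariant to brightness scaling."""
          rgb = rgb.astype(np.float64)
          s = rgb.sum(axis=2) + 1e-6            # avoid division by zero
          r = rgb[..., 0] / s
          g = rgb[..., 1] / s
          b = rgb[..., 2] / s
          return np.clip(r - np.maximum(g, b), 0.0, 1.0)

      # Scaling the whole image by a constant brightness factor leaves the map almost unchanged.
      img = np.random.randint(0, 256, (120, 160, 3)).astype(np.float64)
      m1 = red_chromaticity_map(img)
      m2 = red_chromaticity_map(0.5 * img)
      assert np.allclose(m1, m2, atol=1e-3)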

  14. Multi-access laser communications terminal

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The Optical Multi-Access (OMA) Terminal is capable of establishing up to six simultaneous high-data-rate communication links between low-Earth-orbit satellites and a host satellite at synchronous orbit with only one 16-inch-diameter antenna on the synchronous satellite. The advantage over equivalent RF systems in space weight, power, and swept volume is great when applied to NASA satellite communications networks. A photograph of the 3-channel prototype constructed under the present contract to demonstrate the feasibility of the concept is presented. The telescope has a 10-inch clear aperture and a 22 deg full field of view. It consists of 4 refractive elements to achieve a telecentric focus, i.e., the focused beam is normal to the focal plane at all field angles. This feature permits image pick-up optics in the focal plane to track satellite images without tilting their optic axes to accommodate field angle. The geometry of the imager-pick-up concept and the coordinate system of the swinging arm and disk mechanism for image pick-up are shown. Optics in the arm relay the telescope focus to a communications and tracking receiver and introduce the transmitted beacon beam on a path collinear with the receive path. The electronic circuits for the communications and tracking receivers are contained on the arm and disk assemblies and relay signals to an associated PC-based operator's console for control of the arm and disk motor drive through a flexible cable which permits +/- 240 deg travel for each arm and disk assembly. Power supplies and laser transmitters are mounted in the cradle for the telescope. A single-mode fiber in the cable is used to carry the laser transmitter signal to the arm optics. The promise of the optical multi-access terminal towards which the prototype effort worked is shown. The emphasis in the prototype development was the demonstration of the unique aspect of the concept, and where possible, cost avoidance compromises were implemented in areas already proven on other programs. The design details are described in section 2, the prototype test results in section 3, additional development required in section 4, and conclusions in section 5.

  15. Mystery #19 Answer

    Atmospheric Science Data Center

    2013-04-22

    Article title: MISR Mystery Image Quiz #19: Black Sea. This natural-color image of the Black Sea from the Multi-angle Imaging SpectroRadiometer (MISR) represents an area of ...

  16. Tropical Cyclone Monty Strikes Western Australia

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The Multi-angle Imaging SpectroRadiometer (MISR) acquired these natural color images and cloud top height measurements for Monty before and after the storm made landfall over the remote Pilbara region of Western Australia, on February 29 and March 2, 2004 (shown as the left and right-hand image sets, respectively). On February 29, Monty was upgraded to category 4 cyclone status. After traveling inland about 300 kilometers to the south, the cyclonic circulation had decayed considerably, although category 3 force winds were reported on the ground. Some parts of the drought-affected Pilbara region received more than 300 millimeters of rainfall, and serious and extensive flooding has occurred.

    The natural color images cover much of the same area, although the right-hand panels are offset slightly to the east. Automated stereoscopic processing of data from multiple MISR cameras was utilized to produce the cloud-top height fields. The distinctive spatial patterns of the clouds provide the necessary contrast to enable automated feature matching between images acquired at different view angles. The height retrievals are at this stage uncorrected for the effects of the high winds associated with cyclone rotation. Areas where heights could not be retrieved are shown in dark gray.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbits 22335 and 22364. The panels cover an area of about 380 kilometers x 985 kilometers, and utilize data from blocks 105 to 111 within World Reference System-2 paths 115 and 113.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  17. A Spectralon BRF Data Base for MISR Calibration Application

    NASA Technical Reports Server (NTRS)

    Bruegge, C.; Chrien, N.; Haner, D.

    1999-01-01

    The Multi-angle Imaging SpectroRadiometer (MISR) is an Earth observing sensor which will provide global retrievals of aerosols, clouds, and land surface parameters. Instrument specifications require high accuracy absolute calibration, as well as accurate camera-to-camera, band-to-band and pixel-to-pixel relative response determinations.

  18. ESTIMATING GROUND LEVEL PM 2.5 IN THE EASTERN UNITED STATES USING SATELLITE REMOTE SENSING

    EPA Science Inventory

    An empirical model based on the regression between daily average fine particle (PM2.5) concentrations and aerosol optical thickness (AOT) measurements from the Multi-angle Imaging SpectroRadiometer (MISR) was developed and tested using data from the eastern United States during ...

  19. Study on pixel matching method of the multi-angle observation from airborne AMPR measurements

    NASA Astrophysics Data System (ADS)

    Hou, Weizhen; Qie, Lili; Li, Zhengqiang; Sun, Xiaobing; Hong, Jin; Chen, Xingfeng; Xu, Hua; Sun, Bin; Wang, Han

    2015-10-01

    For the along-track scanning mode, the same place along the ground track can be detected by the Advanced Multi-angular Polarized Radiometer (AMPR) at several different scanning angles from -55 to 55 degrees, which provides a possible means of obtaining multi-angular detection for some nearby pixels. However, owing to the ground sample spacing and the spatial footprint of the detection, the differing footprint sizes cannot guarantee the spatial matching of partly overlapping pixels, which becomes a bottleneck for the effective use of AMPR's multi-angular information to study aerosol and surface polarized properties. Based on our definition and calculation of the pixel coincidence rate for multi-angular detection, an effective pixel matching method for multi-angle observations is presented to solve the spatial matching problem for airborne AMPR. The shape of each AMPR pixel is assumed to be an ellipse, with major and minor axes that depend on the flying attitude and each scanning angle. By defining a coordinate system and origin, latitude and longitude can be transformed into Euclidean distances, and the pixel coincidence rate of two nearby ellipses can be calculated. By traversing each ground pixel, pixels with a high coincidence rate can be selected and merged; with further quality control of the observation data, a ground pixel dataset with multi-angular detection can thus be obtained and analyzed, providing support for multi-angular and polarized retrieval algorithm research in the next study.
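
    To make the pixel coincidence rate concrete, the sketch below estimates the overlap of two elliptical footprints by Monte Carlo sampling and normalizes it by the smaller footprint area; the exact definition used for AMPR may differ, and the footprint parameters here are hypothetical.

      import numpy as np

      def inside_ellipse(pts, center, a, b, theta):
          """Boolean mask: points inside an ellipse with semi-axes a, b,
          rotated by theta (radians) about its center."""
          d = pts - center
          c, s = np.cos(theta), np.sin(theta)
          x = c * d[:, 0] + s * d[:, 1]
          y = -s * d[:, 0] + c * d[:, 1]
          return (x / a) ** 2 + (y / b) ** 2 <= 1.0

      def coincidence_rate(e1, e2, n=200_000, seed=0):
          """Overlap area of two ellipses divided by the smaller ellipse area."""
          rng = np.random.default_rng(seed)
          c1, a1, b1, _ = e1
          r = max(a1, b1)
          # Uniform samples inside the first ellipse via rejection from its bounding box.
          pts = c1 + rng.uniform(-r, r, size=(n, 2))
          pts = pts[inside_ellipse(pts, *e1)]
          frac = np.mean(inside_ellipse(pts, *e2))          # fraction of ellipse-1 area shared
          area1 = np.pi * a1 * b1
          area2 = np.pi * e2[1] * e2[2]
          return frac * area1 / min(area1, area2)

      # Hypothetical footprints of two nearby scan pixels (km in a local tangent plane).
      e_a = (np.array([0.0, 0.0]), 1.2, 0.8, 0.0)
      e_b = (np.array([0.3, 0.1]), 1.3, 0.9, 0.1)
      print(round(coincidence_rate(e_a, e_b), 3))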

  20. An algorithm for pavement crack detection based on multiscale space

    NASA Astrophysics Data System (ADS)

    Liu, Xiang-long; Li, Qing-quan

    2006-10-01

    Conventional human-visual and manual field pavement crack detection methods are costly, time-consuming, dangerous, labor-intensive and subjective. They possess various drawbacks, such as high variability in the measurement results, an inability to provide meaningful quantitative information, near-inevitable inconsistencies in crack details over space and across evaluations, and long measurement cycles. With the development of public transportation and the growth of material flow systems, the conventional methods can no longer meet demand; automatic pavement-condition data gathering and analysis systems have therefore become the focus of attention in the field, and developments in computer technology, digital image acquisition, image processing and multi-sensor technology have made such systems possible. However, the complexity of the image processing has remained the bottleneck of the whole system. Accordingly, a robust and highly efficient parallel pavement crack detection algorithm based on multi-scale space is proposed in this paper. The proposed method is based on the facts that: (1) crack pixels in pavement images are darker than their surroundings and continuous; and (2) the threshold values of gray-level pavement images are strongly related to the mean value and standard deviation of the pixel-gray intensities. The multi-scale space method is used to improve the data processing speed and minimize the effects of image noise. Experimental results demonstrate the advantages of the approach: (1) it can correctly discover tiny cracks, even in very noisy pavement images; (2) its efficiency and accuracy are superior; and (3) its application-dependent nature can simplify the design of the entire system.
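
    A minimal sketch of the statistics-based thresholding idea described above (crack pixels darker than their surroundings, with the threshold tied to the image mean and standard deviation), applied over a simple Gaussian scale space as a stand-in for the paper's multi-scale scheme; the constant k and the synthetic image are illustrative assumptions.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def crack_mask_multiscale(gray, sigmas=(0.0, 1.0, 2.0), k=1.5):
          """Flag pixels darker than (mean - k*std) at any smoothing scale.
          gray: 2-D float array of pixel intensities."""
          mask = np.zeros(gray.shape, dtype=bool)
          for sigma in sigmas:
              img = gaussian_filter(gray, sigma) if sigma > 0 else gray
              thresh = img.mean() - k * img.std()       # global statistics at this scale
              mask |= img < thresh                      # dark, crack-like pixels
          return mask

      # Hypothetical pavement patch: bright noisy background with a darker, continuous line.
      img = np.full((100, 100), 180.0) + np.random.normal(0, 5, (100, 100))
      img[50, 10:90] -= 60.0                            # synthetic crack
      print(crack_mask_multiscale(img)[50, 40:45])      # mostly True along the crack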

  1. Three-dimensional super-resolved live cell imaging through polarized multi-angle TIRF.

    PubMed

    Zheng, Cheng; Zhao, Guangyuan; Liu, Wenjie; Chen, Youhua; Zhang, Zhimin; Jin, Luhong; Xu, Yingke; Kuang, Cuifang; Liu, Xu

    2018-04-01

    Measuring three-dimensional nanoscale cellular structures is challenging, especially when the structure is dynamic. Owing to the informative total internal reflection fluorescence (TIRF) imaging obtained under varied illumination angles, multi-angle (MA) TIRF has been shown to offer nanoscale axial and subsecond temporal resolution. However, conventional MA-TIRF still performs poorly in lateral resolution and fails to characterize the depth image in densely distributed regions. Here, we emphasize lateral super-resolution in MA-TIRF, achieved simply by introducing polarization modulation into the illumination procedure. Equipped with a sparsity-based and accelerated proximal algorithm, we recover a more precise 3D sample structure than previous methods, enabling live cell imaging with a temporal resolution of 2 s and resolving mitochondrial fission and fusion processes at high resolution. We have also shared the recovery program, which is, to the best of our knowledge, the first open-source recovery code for MA-TIRF.

  2. Using the Wiener estimator to determine optimal imaging parameters in a synthetic-collimator SPECT system used for small animal imaging

    NASA Astrophysics Data System (ADS)

    Lin, Alexander; Johnson, Lindsay C.; Shokouhi, Sepideh; Peterson, Todd E.; Kupinski, Matthew A.

    2015-03-01

    In synthetic-collimator SPECT imaging, two detectors are placed at different distances behind a multi-pinhole aperture. This configuration allows for image detection at different magnifications and photon energies, resulting in higher overall sensitivity while maintaining high resolution. Image multiplexing, the undesired overlapping between images due to photon origin uncertainty, may occur in both detector planes and is often present in the second detector plane due to its greater magnification. However, artifact-free image reconstruction is possible by combining data from both the front detector (little to no multiplexing) and the back detector (noticeable multiplexing). When the two detectors are used in tandem, spatial resolution is increased, allowing for a higher sensitivity-to-detector-area ratio. Due to the variability in detector distances and pinhole spacings found in synthetic-collimator SPECT systems, a large parameter space must be examined to determine optimal imaging configurations. We chose to assess image quality based on the task of estimating activity in various regions of a mouse brain. Phantom objects were simulated using mouse brain data from the Magnetic Resonance Microimaging Neurological Atlas (MRM NeAt) and projected at different angles through models of a synthetic-collimator SPECT system, which was developed by collaborators at Vanderbilt University. Uptake in the different brain regions was modeled as being normally distributed about predetermined means and variances. We computed the performance of the Wiener estimator for the task of estimating activity in different regions of the mouse brain. Our results demonstrate the utility of the method for optimizing synthetic-collimator system design.
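
    For readers unfamiliar with the Wiener (linear minimum mean-square-error) estimator used for the region-activity task, here is a compact sketch under simplified assumptions: region activities theta have a known prior mean and covariance, the data g are a linear projection H theta plus zero-mean noise, and the estimate is theta_hat = theta_bar + K (g - g_bar) with K the Wiener gain. The system matrix and statistics below are placeholders, not the mouse-brain model from the record.

      import numpy as np

      def wiener_estimate(g, H, mu_theta, C_theta, C_noise):
          """LMMSE / Wiener estimate of theta from g = H @ theta + noise."""
          g_bar = H @ mu_theta
          C_g = H @ C_theta @ H.T + C_noise              # data covariance
          C_tg = C_theta @ H.T                           # cross-covariance of theta and g
          K = C_tg @ np.linalg.inv(C_g)                  # Wiener gain
          return mu_theta + K @ (g - g_bar)

      # Hypothetical 3-region phantom imaged through a 5-bin linear system.
      rng = np.random.default_rng(1)
      H = rng.uniform(0.0, 1.0, size=(5, 3))
      mu, C_theta = np.array([10.0, 20.0, 15.0]), np.diag([4.0, 9.0, 4.0])
      C_noise = 0.5 * np.eye(5)
      theta_true = rng.multivariate_normal(mu, C_theta)
      g = H @ theta_true + rng.multivariate_normal(np.zeros(5), C_noise)
      print(wiener_estimate(g, H, mu, C_theta, C_noise))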

  3. Florida

    Atmospheric Science Data Center

    2014-05-15

    Multi-angle Imaging SpectroRadiometer (MISR) images of Florida ... Atmospheric Science Data Center in Hampton, VA. Photo credit: NASA/GSFC/LaRC/JPL, MISR Science Team.

  4. Research on fusion algorithm of polarization image in tetrolet domain

    NASA Astrophysics Data System (ADS)

    Zhang, Dexiang; Yuan, BaoHong; Zhang, Jingjing

    2015-12-01

    Tetrolets are Haar-type wavelets whose supports are tetrominoes, i.e., shapes made by connecting four equal-sized squares. A fusion method for polarization images based on the tetrolet transform is proposed. First, the polarization magnitude image and the polarization angle image are decomposed into low-frequency and high-frequency coefficients at multiple scales and directions using the tetrolet transform. For the low-frequency coefficients, the average fusion rule is used. For the directional high-frequency coefficients, in view of the differences in edge distribution among the high-frequency sub-band images, the better coefficients are selected for fusion using a region spectral entropy algorithm. Finally, the fused image is obtained by applying the inverse transform to the fused tetrolet coefficients. Experimental results show that the proposed method can detect image features more effectively and that the fused image has better subjective visual quality.
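
    As a hedged sketch of the fusion rules described above, the code below uses a Haar wavelet from PyWavelets as a stand-in for the tetrolet transform and a max-magnitude rule as a stand-in for the region spectral entropy selection; both substitutions are simplifications, not the paper's method.

      import numpy as np
      import pywt

      def fuse_pair(img_a, img_b):
          """Fuse two registered images: average the approximation coefficients,
          keep the detail coefficient with the larger magnitude at each position."""
          cA_a, details_a = pywt.dwt2(img_a, 'haar')
          cA_b, details_b = pywt.dwt2(img_b, 'haar')
          cA_f = 0.5 * (cA_a + cA_b)                       # average rule for low frequencies
          details_f = tuple(
              np.where(np.abs(da) >= np.abs(db), da, db)   # max-magnitude selection rule
              for da, db in zip(details_a, details_b)
          )
          return pywt.idwt2((cA_f, details_f), 'haar')

      # Hypothetical polarization magnitude and angle images.
      pol_mag = np.random.rand(64, 64)
      pol_ang = np.random.rand(64, 64)
      fused = fuse_pair(pol_mag, pol_ang)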

  5. A Summer View of Russia's Lena Delta and Olenek

    NASA Technical Reports Server (NTRS)

    2004-01-01

    These views of the Russian Arctic were acquired by NASA's Multi-angle Imaging SpectroRadiometer (MISR) instrument on July 11, 2004, when the brief arctic summer had transformed the frozen tundra and the thousands of lakes, channels, and rivers of the Lena Delta into a fertile wetland, and when the usual blanket of thick snow had melted from the vast plains and taiga forests. This set of three images cover an area in the northern part of the Eastern Siberian Sakha Republic. The Olenek River wends northeast from the bottom of the images to the upper left, and the top portions of the images are dominated by the delta into which the mighty Lena River empties when it reaches the Laptev Sea. At left is a natural color image from MISR's nadir (vertical-viewing) camera, in which the rivers appear murky due to the presence of sediment, and photosynthetically-active vegetation appears green. The center image is also from MISR's nadir camera, but is a false color view in which the predominant red color is due to the brightness of vegetation at near-infrared wavelengths. The most photosynthetically active parts of this area are the Lena Delta, in the lower half of the image, and throughout the great stretch of land that curves across the Olenek River and extends northeast beyond the relatively barren ranges of the Volyoi mountains (the pale tan-colored area to the right of image center).

    The right-hand image is a multi-angle false-color view made from the red band data of the 60° backward, nadir, and 60° forward cameras, displayed as red, green and blue, respectively. Water appears blue in this image because sun glitter makes smooth, wet surfaces look brighter at the forward camera's view angle. Much of the landscape and many low clouds appear purple since these surfaces are both forward and backward scattering, and clouds that are further from the surface appear in a different spot for each view angle, creating a rainbow-like appearance. However, the vegetated region that is darker green in the natural color nadir image also appears to exhibit a faint greenish hue in the multi-angle composite. A possible explanation for this subtle green effect is that the taiga forest trees (or dwarf-shrubs) are not too dense here. Since the nadir camera is more likely to observe any gaps between the trees or shrubs, and since the vegetation is not as bright (in the red band) as the underlying soil or surface, the brighter underlying surface results in an area that is relatively brighter at the nadir view angle.

    The Multiangle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82° north and 82° south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 24273. The panels cover an area of about 230 kilometers x 420 kilometers, and utilize data from blocks 30 to 34 within World Reference System-2 path 134.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  6. The Hyper-Angular Rainbow Polarimeter (HARP) CubeSat Observatory and the Characterization of Cloud Properties

    NASA Astrophysics Data System (ADS)

    Neilsen, T. L.; Martins, J. V.; Fernandez Borda, R. A.; Weston, C.; Frazier, C.; Cieslak, D.; Townsend, K.

    2015-12-01

    The Hyper-Angular Rainbow Polarimeter (HARP) instrument is a wide field-of-view imager that splits three spatially identical images onto three independent polarizers and detector arrays. This technique achieves simultaneous imagery of the same ground target in three polarization states and is the key innovation for achieving high polarimetric accuracy with no moving parts. The spacecraft is a 3U CubeSat with 3-axis stabilization designed to keep the imaging optics pointing nadir during data collection while maximizing solar panel sun pointing otherwise. The hyper-angular capability is achieved by acquiring overlapping images at very fast speeds. An imaging polarimeter with hyper-angular capability can make a strong contribution to characterizing cloud properties. Non-polarized multi-angle measurements have been shown to be sensitive to thin cirrus and can be used to provide a climatology of these clouds. Adding polarization and increasing the number of observation angles allows for the retrieval of the complete size distribution of cloud droplets, including accurate information on the width of the droplet distribution in addition to the currently retrieved effective radius. The HARP mission is funded by the NASA Earth Science Technology Office as part of the In-Space Validation of Earth Science Technologies (InVEST) program. The HARP instrument is designed and built by a team of students and professionals led by Dr. Vanderlei Martins at the University of Maryland, Baltimore County. The HARP spacecraft is designed and built by a team of students and professionals at the Space Dynamics Laboratory.

  7. Multiangular Contributions for Discriminate Seasonal Structural Changes in the Amazon Rainforest Using MODIS MAIAC Data

    NASA Astrophysics Data System (ADS)

    Moura, Y. M.; Hilker, T.; Galvão, L. S.; Santos, J. R.; Lyapustin, A.; Sousa, C. H. R. D.; McAdam, E.

    2014-12-01

    The sensitivity of the Amazon rainforests to climate change has received great attention from the scientific community due to the important role that this vegetation plays in the global carbon, water and energy cycles. The spatial and temporal variability of tropical forests across Amazonia, and their phenological, ecological and edaphic cycles, are still poorly understood. The objective of this work was to infer seasonal and spatial variability of forest structure in the Brazilian Amazon based on the anisotropy of multi-angle satellite observations. We used observations from the Moderate Resolution Imaging Spectroradiometer (MODIS/Terra and Aqua) processed with the new Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm to investigate how the multi-angular spectral response from satellite imagery can be used to analyze structural variability of Amazon rainforests. We calculated differences between forward- and backscatter reflectance by modeling the bidirectional reflectance distribution function to infer seasonal and spatial changes in vegetation structure. Changes in anisotropy were larger during the dry season than during the wet season, suggesting intra-annual changes in vegetation structure and density. However, there were marked differences in timing and amplitude depending on forest type. For instance, differences between the reflectance hotspot and darkspot showed more anisotropy in the open ombrophilous forest than in the dense ombrophilous forest. Our results show that multi-angle data can be useful for analyzing structural differences among forest types and for discriminating different seasonal effects within the Amazon basin. Multi-angle data could also help resolve uncertainties about the sensitivity of different tropical forest types to light versus rainfall. In conclusion, multi-angular information, as expressed by the anisotropy of spectral reflectance, may complement conventional studies and provide significant improvements over approaches based on vegetation indices alone.

  8. James Bay

    Atmospheric Science Data Center

    2013-04-17

    The first images taken by NASA's Multi-angle Imaging ... many of MISR's new and unique capabilities," said Dr. David J. Diner, MISR principal investigator of NASA's Jet Propulsion Laboratory ...

  9. Object Manifold Alignment for Multi-Temporal High Resolution Remote Sensing Images Classification

    NASA Astrophysics Data System (ADS)

    Gao, G.; Zhang, M.; Gu, Y.

    2017-05-01

    Classification of multi-temporal remote sensing images is very useful for monitoring land cover changes. Traditional approaches in this field mainly face limited labelled samples and spectral drift of image information. As spatial resolution improves, the "pepper and salt" effect appears, and classification results are affected when pixelwise classification algorithms that ignore the spatial relationships among pixels are applied to high-resolution satellite images. To classify multi-temporal high resolution images under limited labelled samples, spectral drift and the "pepper and salt" problem, an object-based manifold alignment method is proposed. First, the multi-temporal multispectral images are segmented into superpixels by simple linear iterative clustering (SLIC). Second, features obtained from the superpixels are formed into vectors. Third, a majority-voting manifold alignment method aimed at the high resolution problem is proposed and used to map the vector data into an alignment space. Finally, all the data in the alignment space are classified using a KNN method. Multi-temporal images from different areas and from the same area are both considered in this paper. In the experiments, two groups of multi-temporal HR images collected by the Chinese GF-1 and GF-2 satellites are used for performance evaluation. Experimental results indicate that the proposed method not only significantly outperforms traditional domain adaptation methods in classification accuracy, but also effectively overcomes the "pepper and salt" problem.

  10. Asteroid (4179) Toutatis size determination via optical images observed by the Chang'e-2 probe

    NASA Astrophysics Data System (ADS)

    Liu, P.; Huang, J.; Zhao, W.; Wang, X.; Meng, L.; Tang, X.

    2014-07-01

    This work is a physical and statistical study of the asteroid (4179) Toutatis using optical images obtained by a solar panel monitor camera of the Chang'e-2 probe on Dec. 13, 2012 [1]. In the imaging strategy, the camera is focused at infinity. This was specially designed for the probe, with the solar panel monitor's principal axis pointing along the relative velocity direction of the probe and Toutatis. The imaging strategy provides a dedicated way to resolve the size from multi-frame optical images. The inherent features of the data are: (1) almost no rotation was recorded, because the 5.41-7.35 Earth-day rotation period and the small amount of elapsed imaging time (only minutes) keep the object in a fixed position and orientation in the images; (2) the sharpness of the upper-left boundary and the vagueness of the lower-right boundary, resulting from the direction of the Sun-Asteroid-Probe (SAP) angle, cause varying accuracy in locating points at different parts of Toutatis. A common view is that direct, accurate measurements of asteroid shapes, sizes, and pole positions are now possible for larger asteroids that can be spatially resolved using the Hubble Space Telescope or large ground-based telescopes equipped with adaptive optics. For a rather complex planetary/asteroid probe study, these measurements certainly need continuous validation in a variety of ways [2]. Based on engineering parameters of the probe during the fly-by, the target spatial resolving and measuring procedures are described in the paper. The estimated result is the optically perceptible size at the flyby epoch under the solar phase angles during imaging. The perceptible size measured from the optical observations and the size derived from the radar observations by Ostro et al. in 1995 [3] are found to be close to one another.

  11. A state space based approach to localizing single molecules from multi-emitter images.

    PubMed

    Vahid, Milad R; Chao, Jerry; Ward, E Sally; Ober, Raimund J

    2017-01-28

    Single molecule super-resolution microscopy is a powerful tool that enables imaging at sub-diffraction-limit resolution. In this technique, subsets of stochastically photoactivated fluorophores are imaged over a sequence of frames and accurately localized, and the estimated locations are used to construct a high-resolution image of the cellular structures labeled by the fluorophores. Available localization methods typically first determine the regions of the image that contain emitting fluorophores through a process referred to as detection. Then, the locations of the fluorophores are estimated accurately in an estimation step. We propose a novel localization method which combines the detection and estimation steps. The method models the given image as the frequency response of a multi-order system obtained with a balanced state space realization algorithm based on the singular value decomposition of a Hankel matrix, and determines the locations of intensity peaks in the image as the pole locations of the resulting system. The locations of the most significant peaks correspond to the locations of single molecules in the original image. Although the accuracy of the location estimates is reasonably good, we demonstrate that, by using the estimates as the initial conditions for a maximum likelihood estimator, refined estimates can be obtained that have a standard deviation close to the Cramér-Rao lower bound-based limit of accuracy. We validate our method using both simulated and experimental multi-emitter images.
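
    The record's detection step rests on a balanced state space realization built from the SVD of a Hankel matrix, with intensity peaks appearing as pole locations of the resulting system. A much-simplified 1-D illustration of that Hankel/SVD pole-estimation idea (not the paper's 2-D image formulation) is sketched below for a signal that is a sum of damped exponentials.

      import numpy as np

      def estimate_poles(signal, order):
          """Estimate pole locations of a sum-of-damped-exponentials signal from
          the shift structure of the SVD of its Hankel matrix."""
          n = len(signal)
          rows = n // 2
          H = np.array([signal[i:i + n - rows + 1] for i in range(rows)])  # H[i, j] = signal[i + j]
          U, _, _ = np.linalg.svd(H, full_matrices=False)
          Ur = U[:, :order]                                # dominant signal subspace
          A = np.linalg.pinv(Ur[:-1]) @ Ur[1:]             # shift-invariance relation
          return np.linalg.eigvals(A)                      # poles of the underlying system

      # Two hypothetical damped exponentials with poles 0.9*exp(+-0.4j).
      k = np.arange(64)
      y = 2.0 * (0.9 ** k) * np.cos(0.4 * k)
      print(np.sort_complex(estimate_poles(y, order=2)))   # ~0.829 +- 0.350j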

  12. Design of motion adjusting system for space camera based on ultrasonic motor

    NASA Astrophysics Data System (ADS)

    Xu, Kai; Jin, Guang; Gu, Song; Yan, Yong; Sun, Zhiyuan

    2011-08-01

    The drift angle is the transverse intersection angle of the image motion vector of a space camera. Adjusting this angle reduces its influence on image quality. The ultrasonic motor (USM) is a new type of actuator driven by ultrasonic waves excited in piezoelectric ceramics, and it has many advantages over conventional electromagnetic motors. In this paper, improvements were designed for the control system of the drift adjusting mechanism. The drift adjusting system was designed around the ultrasonic motor T-60 and is composed of the drift adjusting mechanical frame, the ultrasonic motor, the ultrasonic motor driver, the photoelectric encoder and the drift adjusting controller. A TMS320F28335 DSP was adopted as the calculation and control processor, the photoelectric encoder was used as the sensor of the position closed-loop system, and a voltage driving circuit was designed as the ultrasonic wave generator. The mathematical model of the drive circuit of the ultrasonic motor T-60 was built using MATLAB modules. In order to verify the validity of the drift adjusting system, a disturbance source was introduced and simulation analysis was performed. The motor drive control system for the drift adjusting system was designed with improved PID control. The drift angle adjusting system has advantages such as small volume, simple configuration, high position control precision, fine repeatability, a self-locking property and low power consumption. The results show that the system can accomplish the drift angle adjusting mission well.
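
    The record states that the drive control uses improved PID control implemented on a DSP with a MATLAB-modeled plant; purely as an illustration of the discrete PID logic, the sketch below closes a position loop around a toy first-order plant. The gains, plant gain and limits are illustrative assumptions, not values from the paper.

      class PID:
          """Discrete PID controller for a drift-angle position loop."""
          def __init__(self, kp, ki, kd, dt, out_limit):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.out_limit = out_limit
              self.integral = 0.0
              self.prev_err = 0.0

          def step(self, setpoint, measurement):
              err = setpoint - measurement
              self.integral += err * self.dt
              deriv = (err - self.prev_err) / self.dt
              self.prev_err = err
              u = self.kp * err + self.ki * self.integral + self.kd * deriv
              return max(-self.out_limit, min(self.out_limit, u))   # clamp drive command

      # Toy first-order plant standing in for the geared ultrasonic-motor stage.
      pid = PID(kp=8.0, ki=2.0, kd=0.1, dt=0.001, out_limit=5.0)
      angle, target = 0.0, 1.5              # degrees
      for _ in range(3000):
          rate = 2.0 * pid.step(target, angle)   # deg/s per unit command (assumed plant gain)
          angle += rate * pid.dt
      print(round(angle, 3))                # settles near the 1.5 degree target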

  13. EFFECTS OF X-RAY BEAM ANGLE AND GEOMETRIC DISTORTION ON WIDTH OF EQUINE THORACOLUMBAR INTERSPINOUS SPACES USING RADIOGRAPHY AND COMPUTED TOMOGRAPHY-A CADAVERIC STUDY.

    PubMed

    Djernaes, Julie D; Nielsen, Jon V; Berg, Lise C

    2017-03-01

    The widths of spaces between the thoracolumbar processi spinosi (interspinous spaces) are frequently assessed using radiography in sports horses; however effects of varying X-ray beam angles and geometric distortion have not been previously described. The aim of this prospective, observational study was to determine whether X-ray beam angle has an effect on apparent widths of interspinous spaces. Thoracolumbar spine specimens were collected from six equine cadavers and left-right lateral radiographs and sagittal and dorsal reconstructed computed tomographic (CT) images were acquired. Sequential radiographs were acquired with each interspinous space in focus. Measurements were performed for each interspinous space in the focus position and up to eight angled positions as the interspinous space moved away from focus (±). Focus position measurements were compared to matching sagittal CT measurements. Effect of geometric distortion was evaluated by comparing the interspinous space in radiographs with sagittal and dorsal reconstructed CT images. A total of 49 interspinous spaces were sampled, yielding 274 measurements. X-ray beam angle significantly affected measured width of interspinous spaces in position +3 (P = 0.038). Changes in width did not follow a consistent pattern. Interspinous space widths in focus position were significantly smaller in radiographs compared to matching reconstructed CT images for backs diagnosed with kissing spine syndrome (P < 0.001). Geometric distortion markedly affected appearance of interspinous space width between planes. In conclusion, X-ray beam angle and geometric distortion influence radiographically measured widths of interspinous spaces in the equine thoracolumbar spine, and this should be taken into consideration when evaluating sport horses. © 2016 American College of Veterinary Radiology.

  14. Arctic Refuge

    Atmospheric Science Data Center

    2014-05-15

    Article title: Summer in the Arctic National Wildlife Refuge. This colorful image of the Arctic National Wildlife Refuge and the Beaufort Sea was acquired by the Multi-angle Imaging ...

  15. Retrieval Algorithm for Broadband Albedo at the Top of the Atmosphere

    NASA Astrophysics Data System (ADS)

    Lee, Sang-Ho; Lee, Kyu-Tae; Kim, Bu-Yo; Zo, ll-Sung; Jung, Hyun-Seok; Rim, Se-Hun

    2018-05-01

    The objective of this study is to develop an algorithm that retrieves the broadband albedo at the top of the atmosphere (TOA albedo) for radiation budget and climate analysis of Earth's atmosphere using Geostationary Korea Multi-Purpose Satellite/Advanced Meteorological Imager (GK-2A/AMI) data. Because the GK-2A satellite will launch in 2018, we used data from the Japanese weather satellite Himawari-8 and its onboard sensor, the Advanced Himawari Imager (AHI), which has sensor properties and an observation area similar to those of GK-2A. TOA albedo was retrieved from the reflectances and regression coefficients of shortwave channels 1 to 6 of AHI. The regression coefficients were calculated using results from the radiative transfer model SBDART and ridge regression. SBDART was used to simulate the relationship between TOA albedo and the reflectance of each channel under different atmospheric conditions (solar zenith angle, viewing zenith angle, relative azimuth angle, surface type, and absence/presence of clouds). The TOA albedo from Himawari-8/AHI was compared with that from the Clouds and the Earth's Radiant Energy System (CERES) sensor on the National Aeronautics and Space Administration (NASA) Terra satellite. The correlation coefficients between the two datasets, for the week containing the first day of every month between 1 August 2015 and 1 July 2016, were high, ranging between 0.934 and 0.955, with root mean square errors in the 0.053-0.068 range.
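
    A minimal sketch of the narrowband-to-broadband step: channel reflectances are regressed against radiative-transfer-simulated TOA albedo with a closed-form ridge fit. The data here are random placeholders (six columns standing in for AHI channels 1-6), and the per-geometry, per-scene-type stratification described in the record is omitted.

      import numpy as np

      def ridge_fit(X, y, lam=1e-3):
          """Closed-form ridge regression: coefficients b minimizing ||X b - y||^2 + lam*||b||^2."""
          n_features = X.shape[1]
          return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

      # Placeholder "simulated" dataset: 6 channel reflectances -> broadband TOA albedo.
      rng = np.random.default_rng(0)
      refl = rng.uniform(0.0, 1.0, size=(500, 6))           # channel reflectances from an RT model
      true_w = np.array([0.15, 0.20, 0.25, 0.15, 0.15, 0.10])
      albedo = refl @ true_w + rng.normal(0, 0.005, 500)    # simulated broadband albedo

      coeffs = ridge_fit(refl, albedo, lam=1e-2)
      print(np.round(coeffs, 3))                            # close to the assumed weights
      est = refl @ coeffs                                   # TOA albedo estimated from reflectances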

  16. Gravity Waves Ripple over Marine Stratocumulus Clouds

    NASA Technical Reports Server (NTRS)

    2004-01-01

    In this natural-color image from the Multi-angle Imaging SpectroRadiometer (MISR), a fingerprint-like gravity wave feature occurs over a deck of marine stratocumulus clouds. Similar to the ripples that occur when a pebble is thrown into a still pond, such 'gravity waves' sometimes appear when the relatively stable and stratified air masses associated with stratocumulus cloud layers are disturbed by a vertical trigger from the underlying terrain, or by a thunderstorm updraft or some other vertical wind shear. The stratocumulus cellular clouds that underlie the wave feature are associated with sinking air that is strongly cooled at the level of the cloud-tops -- such clouds are common over mid-latitude oceans when the air is unperturbed by cyclonic or frontal activity. This image is centered over the Indian Ocean (at about 38.9° South, 80.6° East), and was acquired on October 29, 2003.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82° north and 82° south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 20545. The image covers an area of 245 kilometers x 378 kilometers, and uses data from blocks 121 to 122 within World Reference System-2 path 134.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  17. Mystery #11 Answer

    Atmospheric Science Data Center

    2013-04-22

    Article title: MISR Mystery Image Quiz #11: Queensland, Australia. These Multi-angle Imaging SpectroRadiometer (MISR) images of ... MISR Team. Text acknowledgment: Clare Averill, David J. Diner, Graham Bothwell (Jet Propulsion Laboratory).

  18. A multi-modal stereo microscope based on a spatial light modulator.

    PubMed

    Lee, M P; Gibson, G M; Bowman, R; Bernet, S; Ritsch-Marte, M; Phillips, D B; Padgett, M J

    2013-07-15

    Spatial Light Modulators (SLMs) can emulate the classic microscopy techniques, including differential interference (DIC) contrast and (spiral) phase contrast. Their programmability entails the benefit of flexibility or the option to multiplex images, for single-shot quantitative imaging or for simultaneous multi-plane imaging (depth-of-field multiplexing). We report the development of a microscope sharing many of the previously demonstrated capabilities, within a holographic implementation of a stereo microscope. Furthermore, we use the SLM to combine stereo microscopy with a refocusing filter and with a darkfield filter. The instrument is built around a custom inverted microscope and equipped with an SLM which gives various imaging modes laterally displaced on the same camera chip. In addition, there is a wide angle camera for visualisation of a larger region of the sample.

  19. Multi-Objective Optimization of Spacecraft Trajectories for Small-Body Coverage Missions

    NASA Technical Reports Server (NTRS)

    Hinckley, David, Jr.; Englander, Jacob; Hitt, Darren

    2017-01-01

    Visual coverage of surface elements of a small-body object requires multiple images to be taken that meet many requirements on their viewing angles, illumination angles, times of day, and combinations thereof. Designing trajectories capable of maximizing total possible coverage may not be useful since the image target sequence and the feasibility of said sequence given the rotation-rate limitations of the spacecraft are not taken into account. This work presents a means of optimizing, in a multi-objective manner, surface target sequences that account for such limitations.

  20. Saskatchewan and Manitoba

    NASA Technical Reports Server (NTRS)

    2001-01-01

    Surface brightness contrasts accentuated by a thin layer of snow enable a network of rivers, roads, and farmland boundaries to stand out clearly in these MISR images of southeastern Saskatchewan and southwestern Manitoba. The lefthand image is a multi-spectral false-color view made from the near-infrared, red, and green bands of MISR's vertical-viewing (nadir) camera. The righthand image is a multi-angle false-color view made from the red band data of the 60-degree aftward camera, the nadir camera, and the 60-degree forward camera. In each image, the selected channels are displayed as red, green, and blue, respectively. The data were acquired April 17, 2001 during Terra orbit 7083, and cover an area measuring about 285 kilometers x 400 kilometers. North is at the top.

    The junction of the Assiniboine and Qu'Apelle Rivers in the bottom part of the images is just east of the Saskatchewan-Manitoba border. During the growing season, the rich, fertile soils in this area support numerous fields of wheat, canola, barley, flaxseed, and rye. Beef cattle are raised in fenced pastures. To the north, the terrain becomes more rocky and forested. Many frozen lakes are visible as white patches in the top right. The narrow linear, north-south trending patterns about a third of the way down from the upper right corner are snow-filled depressions alternating with vegetated ridges, most probably carved by glacial flow.

    In the lefthand image, vegetation appears in shades of red, owing to its high near-infrared reflectivity. In the righthand image, several forested regions are clearly visible in green hues. Since this is a multi-angle composite, the green arises not from the color of the leaves but from the architecture of the surface cover. Progressing southeastward along the Manitoba Escarpment, the forested areas include the Pasquia Hills, the Porcupine Hills, Duck Mountain Provincial Park, and Riding Mountain National Park. The forests are brighter in the nadir than at the oblique angles, probably because more of the snow-covered surface is visible in the gaps between the trees. In contrast, the valley between the Pasquia and Porcupine Hills near the top of the images appears bright red in the lefthand image (indicating high vegetation abundance) but shows a mauve color in the multi-angle view. This means that it is darker in the nadir than at the oblique angles. Examination of imagery acquired after the snow has melted should establish whether this difference is related to the amount of snow on the surface or is indicative of a different type of vegetation structure.

    Saskatchewan and Manitoba are believed to derive their names from the Cree words for the winding and swift-flowing waters of the Saskatchewan River and for a narrows on Lake Manitoba where the roaring sound of wind and water evoked the voice of the Great Spirit. They are two of Canada's Prairie Provinces; Alberta is the third.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  1. Evaluation of AirMSPI photopolarimetric retrievals of smoke properties with in-situ observations collected during the ImPACT-PM field campaign

    NASA Astrophysics Data System (ADS)

    Kalashnikova, O. V.; Garay, M. J.; Xu, F.; Seidel, F.; Diner, D. J.; Seinfeld, J.; Bates, K. H.; Kong, W.; Kenseth, C.; Cappa, C. D.

    2017-12-01

    We introduce and evaluate an approach for obtaining closure between in situ and polarimetric remote sensing observations of smoke properties obtained during the collocated CIRPAS Twin Otter and ER-2 aircraft measurements of the Lebec fire event on July 8, 2016. We investigate the utility of multi-angle, spectropolarimetric remote sensing imagery to evaluate the relative contribution of organic, non-organic and black carbon particles to smoke particulate composition. The remote sensing data were collected during the Imaging Polarimetric and Characterization of Tropospheric Particulate Matter (ImPACT-PM) field campaign by the Airborne Multiangle SpectroPolarimetric Imager (AirMSPI), which flew on NASA's high-altitude ER-2 aircraft. The ImPACT-PM field campaign was a joint JPL/Caltech effort to combine measurements from the Terra Multi-angle Imaging SpectroRadiometer (MISR), AirMSPI, in situ airborne measurements, and a chemical transport model to validate remote sensing retrievals of different types of airborne particulate matter with a particular emphasis on carbonaceous aerosols. The in-situ aerosol data were collected with a suite of Caltech instruments on board the CIRPAS Twin Otter aircraft and included the Aerosol Mass Spectrometer (AMS), the Differential Mobility Analyzer (DMA), and the Single Particle Soot Photometer (SP-2). The CIRPAS Twin Otter aircraft was also equipped with the Particle Soot Absorption Photometer (PSAP), a nephelometer, a particle counter, and meteorological sensors. We found that the multi-angle polarimetric observations are capable of monitoring fire particulate emissions by particle type as inferred from the in-situ airborne measurements. Modeling of retrieval sensitivities shows that the characterization of black carbon is the most challenging. The work aims at evaluating multi-angle, spectropolarimetric capabilities for particulate matter characterization in support of the Multi-Angle Imager for Aerosols (MAIA) satellite investigation, which is currently in development under NASA's third Earth Venture Instrument Program.

  2. 3D imaging of translucent media with a plenoptic sensor based on phase space optics

    NASA Astrophysics Data System (ADS)

    Zhang, Xuanzhe; Shu, Bohong; Du, Shaojun

    2015-05-01

    Traditional stereo imaging technology does not work for dynamic translucent media, because such media show no obvious characteristic patterns and using multiple cameras is not permitted in most cases. Phase space optics can solve the problem by extracting depth information directly from the "space-spatial frequency" distribution of the target, obtained with a single-lens plenoptic sensor. This paper discusses the representation of depth information in phase space data and the corresponding calculation algorithms for different degrees of transparency. A 3D imaging example of a waterfall is given at the end.

  3. Anthropometric body measurements based on multi-view stereo image reconstruction.

    PubMed

    Li, Zhaoxin; Jia, Wenyan; Mao, Zhi-Hong; Li, Jie; Chen, Hsin-Chen; Zuo, Wangmeng; Wang, Kuanquan; Sun, Mingui

    2013-01-01

    Anthropometric measurements, such as the circumferences of the hip, arm, leg and waist, waist-to-hip ratio, and body mass index, are of high significance in obesity and fitness evaluation. In this paper, we present a home based imaging system capable of conducting anthropometric measurements. Body images are acquired at different angles using a home camera and a simple rotating disk. Advanced image processing algorithms are utilized for 3D body surface reconstruction. A coarse body shape model is first established from segmented body silhouettes. Then, this model is refined through an inter-image consistency maximization process based on an energy function. Our experimental results using both a mannequin surrogate and a real human body validate the feasibility of the proposed system.

  4. Anthropometric Body Measurements Based on Multi-View Stereo Image Reconstruction*

    PubMed Central

    Li, Zhaoxin; Jia, Wenyan; Mao, Zhi-Hong; Li, Jie; Chen, Hsin-Chen; Zuo, Wangmeng; Wang, Kuanquan; Sun, Mingui

    2013-01-01

    Anthropometric measurements, such as the circumferences of the hip, arm, leg and waist, waist-to-hip ratio, and body mass index, are of high significance in obesity and fitness evaluation. In this paper, we present a home based imaging system capable of conducting automatic anthropometric measurements. Body images are acquired at different angles using a home camera and a simple rotating disk. Advanced image processing algorithms are utilized for 3D body surface reconstruction. A coarse body shape model is first established from segmented body silhouettes. Then, this model is refined through an inter-image consistency maximization process based on an energy function. Our experimental results using both a mannequin surrogate and a real human body validate the feasibility of the proposed system. PMID:24109700

  5. Automatic lumbar vertebrae detection based on feature fusion deep learning for partial occluded C-arm X-ray images.

    PubMed

    Yang Li; Wei Liang; Yinlong Zhang; Haibo An; Jindong Tan

    2016-08-01

    Automatic and accurate lumbar vertebrae detection is an essential step of image-guided minimally invasive spine surgery (IG-MISS). However, traditional methods still require human intervention due to the similarity of vertebrae, abnormal pathological conditions and uncertain imaging angles. In this paper, we present a novel convolutional neural network (CNN) model to automatically detect lumbar vertebrae in C-arm X-ray images. Training data are augmented by DRR, and automatic segmentation of the ROI reduces the computational complexity. Furthermore, a feature fusion deep learning (FFDL) model is introduced to combine two types of features of lumbar vertebrae X-ray images, using a Sobel kernel and Gabor kernels to obtain the contour and texture of the lumbar vertebrae, respectively. Comprehensive qualitative and quantitative experiments demonstrate that our proposed model performs more accurately in abnormal cases with pathologies and surgical implants in multi-angle views.
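
    As a small illustration of the contour/texture feature pairing the FFDL model combines (the network itself is omitted), the sketch below stacks a Sobel gradient-magnitude map and one Gabor response with the raw image as input channels; the filter settings and patch are illustrative assumptions.

      import numpy as np
      from scipy import ndimage
      from skimage.filters import gabor

      def contour_texture_stack(gray):
          """Stack Sobel gradient-magnitude (contour) and Gabor (texture) responses
          as channels of one feature tensor for a downstream CNN."""
          gx = ndimage.sobel(gray, axis=1)
          gy = ndimage.sobel(gray, axis=0)
          contour = np.hypot(gx, gy)
          texture_real, _ = gabor(gray, frequency=0.2)            # one orientation/frequency for brevity
          return np.stack([gray, contour, texture_real], axis=0)  # (channels, H, W)

      # Hypothetical C-arm image patch.
      patch = np.random.rand(128, 128)
      features = contour_texture_stack(patch)
      print(features.shape)   # (3, 128, 128)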

  6. UAV-based urban structural damage assessment using object-based image analysis and semantic reasoning

    NASA Astrophysics Data System (ADS)

    Fernandez Galarreta, J.; Kerle, N.; Gerke, M.

    2015-06-01

    Structural damage assessment is critical after disasters but remains a challenge. Many studies have explored the potential of remote sensing data, but limitations of vertical data persist. Oblique imagery has been identified as more useful, though the multi-angle imagery also adds a new dimension of complexity. This paper addresses damage assessment based on multi-perspective, overlapping, very high resolution oblique images obtained with unmanned aerial vehicles (UAVs). 3-D point-cloud assessment for the entire building is combined with detailed object-based image analysis (OBIA) of façades and roofs. This research focuses not on automatic damage assessment, but on creating a methodology that supports the often ambiguous classification of intermediate damage levels, aiming at producing comprehensive per-building damage scores. We identify completely damaged structures in the 3-D point cloud, and for all other cases provide the OBIA-based damage indicators to be used as auxiliary information by damage analysts. The results demonstrate the usability of the 3-D point-cloud data to identify major damage features. Also the UAV-derived and OBIA-processed oblique images are shown to be a suitable basis for the identification of detailed damage features on façades and roofs. Finally, we also demonstrate the possibility of aggregating the multi-perspective damage information at building level.

  7. Non-astigmatic imaging with matched pairs of spherically bent reflectors

    DOEpatents

    Bitter, Manfred Ludwig [Princeton, NJ; Hill, Kenneth Wayne [Plainsboro, NJ; Scott, Steven Douglas [Wellesley, MA; Feder, Russell [Newton, PA; Ko, Jinseok [Cambridge, MA; Rice, John E [N. Billerica, MA; Ince-Cushman, Alexander Charles [New York, NY; Jones, Frank [Manalapan, NJ

    2012-07-10

    Arrangements for the point-to-point imaging of a broad spectrum of electromagnetic radiation and ultrasound at large angles of incidence employ matched pairs of spherically bent reflectors to eliminate astigmatic imaging errors. Matched pairs of spherically bent crystals or spherically bent multi-layers are used for X-rays and EUV radiation; and matched pairs of spherically bent mirrors appropriate for the type of radiation are used with microwaves, infrared and visible light, or ultrasound. The arrangements encompass two cases, in which the Bragg angle (the complement of the angle of incidence in optics) is either between 45° and 90° on both crystals/mirrors, or between 0° and 45° on the first crystal/mirror and between 45° and 90° on the second crystal/mirror, where the angles of convergence and divergence are equal. For X-rays and EUV radiation, the Bragg condition is also satisfied on both spherically bent crystals/multi-layers.
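    As a reminder of why a single spherical reflector is astigmatic at oblique incidence, the standard single-mirror focal relations are summarized below; the formulas themselves are textbook optics, and the final pairing remark is only a paraphrase of the patent's stated goal, not its detailed geometry.

```latex
% Meridional and sagittal focal lengths of one spherically bent reflector of
% radius R, in terms of the Bragg angle \theta_B (the complement of the angle
% of incidence). The astigmatic difference vanishes only at normal incidence
% (\theta_B = 90^\circ); the matched-pair arrangements are chosen so that the
% residual meridional and sagittal focusing errors of the two reflectors
% compensate, yielding a stigmatic point-to-point image.
\[
  f_m = \frac{R}{2}\sin\theta_B, \qquad
  f_s = \frac{R}{2\sin\theta_B}, \qquad
  f_s - f_m = \frac{R}{2}\,\frac{\cos^{2}\theta_B}{\sin\theta_B}.
\]
```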

  8. Celtic Sea

    Atmospheric Science Data Center

    2013-04-17

    article title: Coccoliths in the Celtic Sea ... This image is a natural-color view of the Celtic Sea and English Channel regions, and was acquired by the Multi-angle Imaging ...

  9. Fast data reconstructed method of Fourier transform imaging spectrometer based on multi-core CPU

    NASA Astrophysics Data System (ADS)

    Yu, Chunchao; Du, Debiao; Xia, Zongze; Song, Li; Zheng, Weijian; Yan, Min; Lei, Zhenggang

    2017-10-01

    An imaging spectrometer acquires a two-dimensional spatial image and a one-dimensional spectrum at the same time, which makes it highly useful for color and spectral measurements, true-color image synthesis, military reconnaissance and similar applications. To realize fast reconstruction of Fourier transform imaging spectrometer data, this paper designs an optimized reconstruction algorithm based on OpenMP parallel computing, which was further applied to the processing of data from the HyperSpectral Imager on the Chinese `HJ-1' satellite. The results show that the multi-core parallel approach makes effective use of the multi-core CPU hardware resources and significantly improves the efficiency of spectrum reconstruction. If the technique is applied to workstations with more cores, real-time processing of Fourier transform imaging spectrometer data on a single computer becomes possible.
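    The parallelism described is OpenMP loop parallelism in the authors' implementation; purely as an illustration, the sketch below expresses the same row-parallel FFT recovery in Python with a process pool (the function names and the Hann apodization are assumptions, not details from the paper).

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def _recover_spectrum(interferogram_row):
    """Recover one spatial row: apodize, then FFT the interferogram along the
    optical-path-difference axis and keep the magnitude spectrum."""
    window = np.hanning(interferogram_row.shape[-1])
    return np.abs(np.fft.rfft(interferogram_row * window, axis=-1))

def reconstruct_cube(interferograms, workers=8):
    """interferograms: array (rows, cols, opd_samples) from the spectrometer.
    Rows are processed independently, mirroring the OpenMP loop parallelism."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        spectra = list(pool.map(_recover_spectrum, interferograms))
    return np.stack(spectra)          # (rows, cols, spectral_bins)
```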

  10. Status of the Multi-Angle SpectroRadiometer Instrument for EOS- AM1 and Its Application to Remote Sensing of Aerosols

    NASA Technical Reports Server (NTRS)

    Diner, D. J.; Abdou, W. A.; Bruegge, C. J.; Conel, J. E.; Kahn, R. A.; Martonchik, J. V.; Paradise, S. R.; West, R. A.

    1995-01-01

    The Multi-Angle Imaging SpectroRadiometer (MISR) is being developed at JPL for the AM1 spacecraft in the Earth Observing System (EOS) series. This paper reports on the progress of instrument fabrication and testing, and it discusses the strategy to use the instrument for studying tropospheric aerosols.

  11. A Strengthening Eastern Pacific Storm

    NASA Technical Reports Server (NTRS)

    2006-01-01

    These July 11, 2006 images are from the Multi-angle Imaging SpectroRadiometer (MISR) instrument aboard NASA's Terra satellite. They show then-Tropical Storm Bud as it was intensifying into a hurricane, which it became later that day. The true-color image at left is next to an image of cloud heights on the right. Two-dimensional maps of cloud heights such as these give scientists an opportunity to compare their models against actual hurricane observations.

    At the time of these images, Bud was located near 14.4 degrees north latitude and 112.5 degrees west longitude, or about 620 miles (1000 kilometers) southwest of Cabo San Lucas, Baja California, Mexico.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, Md. JPL is a division of the California Institute of Technology.

  12. Where on Earth...? MISR Mystery Image Quiz #24: Shandong Province, China

    NASA Image and Video Library

    2010-11-03

    This image of Shandong Province, China was acquired by the Multi-angle Imaging SpectroRadiometer instrument aboard NASA's Terra spacecraft. This image is from the MISR Where on Earth...? Mystery Quiz #24.

  13. Yugoslavia

    Atmospheric Science Data Center

    2013-04-17

    These Multi-angle Imaging SpectroRadiometer (MISR) nadir camera images of Yugoslavia were acquired on July 28, 2000 during ... typically bright as a result of reflection from the plants' cell walls, to the brightness in the red. In the middle "false color" image, ...

  14. Enhanced iris recognition method based on multi-unit iris images

    NASA Astrophysics Data System (ADS)

    Shin, Kwang Yong; Kim, Yeong Gon; Park, Kang Ryoung

    2013-04-01

    For the purpose of biometric person identification, iris recognition uses the unique characteristics of the patterns of the iris; that is, the eye region between the pupil and the sclera. When obtaining an iris image, the iris's image is frequently rotated because of the user's head roll toward the left or right shoulder. As the rotation of the iris image leads to circular shifting of the iris features, the accuracy of iris recognition is degraded. To solve this problem, conventional iris recognition methods use shifting of the iris feature codes to perform the matching. However, this increases the computational complexity and level of false acceptance error. To solve these problems, we propose a novel iris recognition method based on multi-unit iris images. Our method is novel in the following five ways compared with previous methods. First, to detect both eyes, we use Adaboost and a rapid eye detector (RED) based on the iris shape feature and integral imaging. Both eyes are detected using RED in the approximate candidate region that consists of the binocular region, which is determined by the Adaboost detector. Second, we classify the detected eyes into the left and right eyes, because the iris patterns in the left and right eyes in the same person are different, and they are therefore considered as different classes. We can improve the accuracy of iris recognition using this pre-classification of the left and right eyes. Third, by measuring the angle of head roll using the two center positions of the left and right pupils, detected by two circular edge detectors, we obtain the information of the iris rotation angle. Fourth, in order to reduce the error and processing time of iris recognition, adaptive bit-shifting based on the measured iris rotation angle is used in feature matching. Fifth, the recognition accuracy is enhanced by the score fusion of the left and right irises. Experimental results on the iris open database of low-resolution images showed that the averaged equal error rate of iris recognition using the proposed method was 4.3006%, which is lower than that of other methods.
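    The adaptive bit-shifting idea (the fourth point above) can be pictured as a Hamming-distance match restricted to circular shifts near the one predicted by the measured head-roll angle. All names and the bits-per-degree conversion below are hypothetical, not values from the paper.

```python
import numpy as np

def hamming(a, b):
    """Fractional Hamming distance between two binary iris codes."""
    return np.count_nonzero(a != b) / a.size

def match_iris(code_probe, code_gallery, roll_deg, bits_per_degree=2, slack=2):
    """Adaptive bit-shifting match between two 1-D binary iris codes.

    Instead of testing every possible circular shift, only shifts around the
    one predicted from the measured head-roll angle are tried, which reduces
    both computation and the chance of false acceptance."""
    predicted = int(round(roll_deg * bits_per_degree))
    best = 1.0
    for s in range(predicted - slack, predicted + slack + 1):
        best = min(best, hamming(np.roll(code_probe, s), code_gallery))
    return best   # smaller = better match
```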

  15. Multi-operational tuneable Q-switched mode-locking Er fibre laser

    NASA Astrophysics Data System (ADS)

    Qamar, F. Z.

    2018-01-01

    A wavelength-spacing tuneable, Q-switched mode-locking (QML) erbium-doped fibre laser based on non-linear polarization rotation controlled by four waveplates and a cube polarizer is proposed. A mode-locked pulse train using two quarter-wave plates and a half-wave plate (HWP) is obtained first, and then an extra HWP is inserted into the cavity to produce different operation regimes. The evolutions of temporal and spectral dynamics with different orientation angles of the extra HWP are investigated. A fully modulated stable QML pulse train is observed experimentally. This is, to the author’s best knowledge, the first experimental work reporting QML operation without adding an extra saturable absorber inside the laser cavity. Multi-wavelength pulse laser operation, multi-pulse train continuous-wave mode-locking operation and pulse-splitting operations are also reported at certain HWP angles. The observed operational dynamics are interpreted as a mutual interaction of dispersion, non-linear effect and insertion loss. This work provides a new mechanism for fabricating cheap tuneable multi-wavelength lasers with QML pulses.

  16. Wave analysis of a plenoptic system and its applications

    NASA Astrophysics Data System (ADS)

    Shroff, Sapna A.; Berkner, Kathrin

    2013-03-01

    Traditional imaging systems directly image a 2D object plane on to the sensor. Plenoptic imaging systems contain a lenslet array at the conventional image plane and a sensor at the back focal plane of the lenslet array. In this configuration the data captured at the sensor is not a direct image of the object. Each lenslet effectively images the aperture of the main imaging lens at the sensor. Therefore the sensor data retains angular light-field information which can be used for a posteriori digital computation of multi-angle images and axially refocused images. If a filter array, containing spectral filters or neutral density or polarization filters, is placed at the pupil aperture of the main imaging lens, then each lenslet images the filters on to the sensor. This enables the digital separation of multiple filter modalities giving single snapshot, multi-modal images. Due to the diversity of potential applications of plenoptic systems, their investigation is increasing. As the application space moves towards microscopes and other complex systems, and as pixel sizes become smaller, the consideration of diffraction effects in these systems becomes increasingly important. We discuss a plenoptic system and its wave propagation analysis for both coherent and incoherent imaging. We simulate a system response using our analysis and discuss various applications of the system response pertaining to plenoptic system design, implementation and calibration.

  17. A rapid and robust gradient measurement technique using dynamic single-point imaging.

    PubMed

    Jang, Hyungseok; McMillan, Alan B

    2017-09-01

    We propose a new gradient measurement technique based on dynamic single-point imaging (SPI), which allows simple, rapid, and robust measurement of k-space trajectory. To enable gradient measurement, we utilize the variable field-of-view (FOV) property of dynamic SPI, which is dependent on gradient shape. First, one-dimensional (1D) dynamic SPI data are acquired from a targeted gradient axis, and then relative FOV scaling factors between 1D images or k-spaces at varying encoding times are found. These relative scaling factors are the relative k-space position that can be used for image reconstruction. The gradient measurement technique also can be used to estimate the gradient impulse response function for reproducible gradient estimation as a linear time invariant system. The proposed measurement technique was used to improve reconstructed image quality in 3D ultrashort echo, 2D spiral, and multi-echo bipolar gradient-echo imaging. In multi-echo bipolar gradient-echo imaging, measurement of the k-space trajectory allowed the use of a ramp-sampled trajectory for improved acquisition speed (approximately 30%) and more accurate quantitative fat and water separation in a phantom. The proposed dynamic SPI-based method allows fast k-space trajectory measurement with a simple implementation and no additional hardware for improved image quality. Magn Reson Med 78:950-962, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  18. Hi, Hokusai!

    NASA Image and Video Library

    2017-12-08

    This dramatic image features Hokusai in the foreground, famous for its extensive set of rays, some of which extend for over a thousand kilometers across Mercury's surface. The extensive, bright rays indicate that Hokusai is one of the youngest large craters on Mercury. This image was acquired as part of MDIS's high-incidence-angle base map. The high-incidence-angle base map complements the surface morphology base map of MESSENGER's primary mission that was acquired under generally more moderate incidence angles. High incidence angles, achieved when the Sun is near the horizon, result in long shadows that accentuate the small-scale topography of geologic features. The high-incidence-angle base map was acquired with an average resolution of 200 meters/pixel. The MESSENGER spacecraft is the first ever to orbit the planet Mercury, and the spacecraft's seven scientific instruments and radio science investigation are unraveling the history and evolution of the Solar System's innermost planet. During the first two years of orbital operations, MESSENGER acquired over 150,000 images and extensive other data sets. MESSENGER is capable of continuing orbital operations until early 2015. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington

  19. Multi-energy CT based on a prior rank, intensity and sparsity model (PRISM).

    PubMed

    Gao, Hao; Yu, Hengyong; Osher, Stanley; Wang, Ge

    2011-11-01

    We propose a compressive sensing approach for multi-energy computed tomography (CT), namely the prior rank, intensity and sparsity model (PRISM). To further compress the multi-energy image for allowing the reconstruction with fewer CT data and less radiation dose, the PRISM models a multi-energy image as the superposition of a low-rank matrix and a sparse matrix (with row dimension in space and column dimension in energy), where the low-rank matrix corresponds to the stationary background over energy that has a low matrix rank, and the sparse matrix represents the rest of distinct spectral features that are often sparse. Distinct from previous methods, the PRISM utilizes the generalized rank, e.g., the matrix rank of tight-frame transform of a multi-energy image, which offers a way to characterize the multi-level and multi-filtered image coherence across the energy spectrum. Besides, the energy-dependent intensity information can be incorporated into the PRISM in terms of the spectral curves for base materials, with which the restoration of the multi-energy image becomes the reconstruction of the energy-independent material composition matrix. In other words, the PRISM utilizes prior knowledge on the generalized rank and sparsity of a multi-energy image, and intensity/spectral characteristics of base materials. Furthermore, we develop an accurate and fast split Bregman method for the PRISM and demonstrate the superior performance of the PRISM relative to several competing methods in simulations.
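    To make the low-rank-plus-sparse idea concrete, here is a bare proximal-alternation sketch of splitting a multi-energy image matrix into the two components. It is not the PRISM split Bregman solver: it ignores the CT forward model, the tight-frame generalized rank, and the base-material spectral curves, and the thresholds are arbitrary.

```python
import numpy as np

def soft(x, tau):
    """Element-wise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def low_rank_plus_sparse(M, tau_l=1.0, tau_s=0.1, n_iter=50):
    """Split a multi-energy image matrix M (pixels x energy bins) into a
    low-rank background L and a sparse spectral-feature component S."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # Singular-value thresholding keeps L low rank.
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * soft(sig, tau_l)) @ Vt
        # Soft thresholding keeps S sparse.
        S = soft(M - L, tau_s)
    return L, S
```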

  20. India: Gujarat

    Atmospheric Science Data Center

    2013-04-16

    ... Gujarat), and in areas close to the earthquake epicenter.  Research uses the unique capabilities of the Multi-angle Imaging ... Indo-Pakistani border, which were not easily accessible to survey teams on the ground. Changes in reflection at different view angles ...

  1. Characteristics of mist 3D screen for projection type electro-holography

    NASA Astrophysics Data System (ADS)

    Sato, Koki; Okumura, Toshimichi; Kanaoka, Takumi; Koizumi, Shinya; Nishikawa, Satoko; Takano, Kunihiko

    2006-01-01

    The defining feature of a holographic image is full-parallax 3D. A more natural 3D image is obtained in this case because focusing and convergence coincide with each other. We are working toward a practical electro-holography system, because in conventional electro-holography the image viewing angle is very small owing to the limited display pixel size. We are now developing a new method that obtains a large viewing angle by space projection. A white-color laser illuminates a single DMD panel (a time-shared CGH of the three RGB colors). A 3D space screen formed from very small water particles is used to reconstruct the 3D image with a large viewing angle through scattering from the water particles.

  2. MISR Global Images See the Light of Day

    NASA Technical Reports Server (NTRS)

    2002-01-01

    As of July 31, 2002, global multi-angle, multi-spectral radiance products are available from the MISR instrument aboard the Terra satellite. Measuring the radiative properties of different types of surfaces, clouds and atmospheric particulates is an important step toward understanding the Earth's climate system. These images are among the first planet-wide summary views to be publicly released from the Multi-angle Imaging SpectroRadiometer experiment. Data for these images were collected during the month of March 2002, and each pixel represents monthly-averaged daylight radiances from an area measuring 1/2 degree in latitude by 1/2 degree in longitude.

    The top panel is from MISR's nadir (vertical-viewing) camera and combines data from the red, green and blue spectral bands to create a natural color image. The central view combines near-infrared, red, and green spectral data to create a false-color rendition that enhances highly vegetated terrain. It takes 9 days for MISR to view the entire globe, and only areas within 8 degrees of latitude of the north and south poles are not observed due to the Terra orbit inclination. Because a single pole-to-pole swath of MISR data is just 400 kilometers wide, multiple swaths must be mosaiced to create these global views. Discontinuities appear in some cloud patterns as a consequence of changes in cloud cover from one day to another.

    The lower panel is a composite in which red, green, and blue radiances from MISR's 70-degree forward-viewing camera are displayed in the northern hemisphere, and radiances from the 70-degree backward-viewing camera are displayed in the southern hemisphere. At the March equinox (spring in the northern hemisphere, autumn in the southern hemisphere), the Sun is near the equator. Therefore, both oblique angles are observing the Earth in 'forward scattering', particularly at high latitudes. Forward scattering occurs when you (or MISR) observe an object with the Sun at a point in the sky that is in front of you. Relative to the nadir view, this geometry accentuates the appearance of polar clouds, and can even reveal clouds that are invisible in the nadir direction. In relatively clear ocean areas, the oblique-angle composite is generally brighter than its nadir counterpart due to enhanced reflection of light by atmospheric particulates.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  3. Hurricanes Frances and Ivan

    Atmospheric Science Data Center

    2014-05-15

    NASA's Multi-angle Imaging SpectroRadiometer (MISR) captured these images and cloud-top height retrievals of Hurricane ... especially on the 24 to 48 hour timescale vital for disaster planning. To improve the operational models used to make hurricane ...

  4. Radiometric stability of the Multi-angle Imaging SpectroRadiometer (MISR) following 15 years on-orbit

    NASA Astrophysics Data System (ADS)

    Bruegge, Carol J.; Val, Sebastian; Diner, David J.; Jovanovic, Veljko; Gray, Ellyn; Di Girolamo, Larry; Zhao, Guangyu

    2014-09-01

    The Multi-angle Imaging SpectroRadiometer (MISR) has successfully operated on the EOS/Terra spacecraft since 1999. It consists of nine cameras pointing from nadir to 70.5° view angle, with four spectral channels per camera. Specifications call for a radiometric uncertainty of 3% absolute and 1% relative to the other cameras. To accomplish this, MISR utilizes an on-board calibrator (OBC) to measure camera response changes. Once every two months the two Spectralon panels are deployed to direct solar light into the cameras. Six photodiode sets measure the illumination levels, which are compared to MISR raw digital numbers, thus determining the radiometric gain coefficients used in Level 1 data processing. Although panel stability is not required, there has been little detectable change in panel reflectance, attributed to careful preflight handling techniques. The cameras themselves have degraded in radiometric response by 10% since launch, but calibration updates using the detector-based scheme have compensated for these drifts and allowed the radiance products to meet accuracy requirements. Validation using Sahara desert observations shows that there has been a drift of ~1% in the reported nadir-view radiance over a decade, common to all spectral bands.

  5. Multiple incidence angle SIR-B experiment over Argentina

    NASA Technical Reports Server (NTRS)

    Cimino, Jobea; Casey, Daren; Wall, Stephen; Brandani, Aldo; Domik, Gitta; Leberl, Franz

    1986-01-01

    The Shuttle Imaging Radar (SIR-B), the second synthetic aperture radar (SAR) to fly aboard a shuttle, was launched on October 5, 1984. One of the primary goals of the SIR-B experiment was to use multiple incidence angle radar images to distinguish different terrain types through the use of their characteristic backscatter curves. This goal was accomplished in several locations including the Chubut Province of southern Argentina. Four descending image acquisitions were collected providing a multiple incidence angle image set. The data were first used to assess stereo-radargrammetric techniques. A digital elevation model was produced using the optimum pair of multiple incidence angle images. This model was then used to determine the local incidence angle of each picture element to generate curves of relative brightness vs. incidence angle. Secondary image products were also generated using the multi-angle data. The results of this work indicate that: (1) various forest species and various structures of a single species may be discriminated using multiple incidence angle radar imagery, and (2) it is essential to consider the variation in backscatter due to a variable incidence angle when analyzing and comparing data collected at varying frequencies and polarizations.

  6. Super-resolution imaging using multi- electrode CMUTs: theoretical design and simulation using point targets.

    PubMed

    You, Wei; Cretu, Edmond; Rohling, Robert

    2013-11-01

    This paper investigates a low computational cost, super-resolution ultrasound imaging method that leverages the asymmetric vibration mode of CMUTs. Instead of focusing on the broadband received signal on the entire CMUT membrane, we utilize the differential signal received on the left and right part of the membrane obtained by a multi-electrode CMUT structure. The differential signal reflects the asymmetric vibration mode of the CMUT cell excited by the nonuniform acoustic pressure field impinging on the membrane, and has a resonant component in immersion. To improve the resolution, we propose an imaging method as follows: a set of manifold matrices of CMUT responses for multiple focal directions are constructed off-line with a grid of hypothetical point targets. During the subsequent imaging process, the array sequentially steers to multiple angles, and the amplitudes (weights) of all hypothetical targets at each angle are estimated in a maximum a posteriori (MAP) process with the manifold matrix corresponding to that angle. Then, the weight vector undergoes a directional pruning process to remove the false estimation at other angles caused by the side lobe energy. Ultrasound imaging simulation is performed on ring and linear arrays with a simulation program adapted with a multi-electrode CMUT structure capable of obtaining both average and differential received signals. Because the differential signals from all receiving channels form a more distinctive temporal pattern than the average signals, better MAP estimation results are expected than using the average signals. The imaging simulation shows that using differential signals alone or in combination with the average signals produces better lateral resolution than the traditional phased array or using the average signals alone. This study is an exploration into the potential benefits of asymmetric CMUT responses for super-resolution imaging.

  7. ACTIM: an EDA initiated study on spectral active imaging

    NASA Astrophysics Data System (ADS)

    Steinvall, O.; Renhorn, I.; Ahlberg, J.; Larsson, H.; Letalick, D.; Repasi, E.; Lutzmann, P.; Anstett, G.; Hamoir, D.; Hespel, L.; Boucher, Y.

    2010-10-01

    This paper describes ongoing work from an EDA-initiated study on active imaging, with an emphasis on multi- or broadband spectral lasers and receivers. Present laser-based imaging and mapping systems are mostly based on fixed-frequency lasers. On the other hand, great progress has recently been made in passive multi- and hyperspectral imaging, with applications ranging from environmental monitoring and geology to mapping, military surveillance, and reconnaissance. Databases of spectral signatures allow discrimination between different materials in the scene. Present multi- and hyperspectral sensors mainly operate in the visible and short wavelength region (0.4-2.5 μm) and rely on solar radiation, which brings shortcomings due to shadows, clouds, illumination angles and the lack of night operation. Active spectral imaging, however, largely overcomes these difficulties through complete control of the illumination. Active illumination enables spectral night and low-light operation, besides a robust way of obtaining polarization and high-resolution 2D/3D information. Recent development of broadband lasers and advanced imaging 3D focal plane arrays has led to new opportunities for advanced spectral and polarization imaging with high range resolution. Fusing the knowledge of ladar and passive spectral imaging will result in new capabilities in the field of EO sensing, to be shown in the study. We present an overview of technology, systems and applications for active spectral imaging and propose future activities in connection with some prioritized applications.

  8. Retrospective multi-phase non-contrast-enhanced magnetic resonance angiography (ROMANCE MRA) for robust angiogram separation in the presence of cardiac arrhythmia.

    PubMed

    Kim, Hahnsung; Park, Suhyung; Kim, Eung Yeop; Park, Jaeseok

    2018-09-01

    To develop a novel, retrospective multi-phase non-contrast-enhanced MRA (ROMANCE MRA) in a single acquisition for robust angiogram separation even in the presence of cardiac arrhythmia. In the proposed ROMANCE MRA, data were continuously acquired over all cardiac phases using retrospective, multi-phase flow-sensitive single-slab 3D fast spin echo (FSE) with variable refocusing flip angles, while an external pulse oximeter was in sync with pulse repetitions in FSE to record real-time information on cardiac cycles. Data were then sorted into k-bin space using the real-time cardiac information. Angiograms were reconstructed directly from k-bin space by solving a constrained optimization problem with both subtraction-induced sparsity and low rank priors. Peripheral MRA was performed in normal volunteers with/without caffeine consumption and a volunteer with cardiac arrhythmia using conventional fresh blood imaging (FBI) and the proposed ROMANCE MRA for comparison. The proposed ROMANCE MRA shows superior performance in accurately delineating both major and small vessel branches with robust background suppression if compared with conventional FBI. Even in the presence of irregular heartbeats, the proposed method exhibits clear depiction of angiograms over conventional methods within clinically reasonable imaging time. We successfully demonstrated the feasibility of the proposed ROMANCE MRA in generating robust angiograms with background suppression. © 2018 International Society for Magnetic Resonance in Medicine.

  9. Anatahan Island

    Atmospheric Science Data Center

    2013-04-19

    ... This natural-color image of Anatahan Island from the Multi-angle ... (Acro Service Corporation/Jet Propulsion Laboratory), David J. Diner (Jet Propulsion Laboratory).

  10. Manhole Cover Detection Using Vehicle-Based Multi-Sensor Data

    NASA Astrophysics Data System (ADS)

    Ji, S.; Shi, Y.; Shi, Z.

    2012-07-01

    A new method combining multi-view matching and feature extraction techniques is developed to detect manhole covers in streets, using close-range images together with GPS/IMU and LiDAR data. Covers are an important road-traffic target, as are traffic signs, traffic lights and zebra crossings, but they have more uniform shapes. However, differing shooting angles and distances, ground materials, complex street scenes (especially shadows), and cars on the road strongly affect the cover detection rate. This paper introduces a new method for edge detection and feature extraction in order to overcome these difficulties and greatly improve the detection rate. The LiDAR data are used for scene segmentation, and the street scene and cars are excluded from the roads. An edge detection method based on Canny, which is sensitive to arcs and ellipses, is applied to the segmented road scene; areas of interest containing arcs are extracted and fitted to ellipses. The ellipses are then resampled for invariance to shooting angle and distance, and matched to adjacent images to further check whether they are covers. More than 1000 images with different scenes are used in our tests, and the detection rate is analyzed. The results verify that our method has advantages in correct cover detection in complex street scenes.
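    A minimal OpenCV sketch of the Canny-plus-ellipse-fitting stage described above; the thresholds, minimum area and elongation test are illustrative, not the paper's parameters, and the subsequent multi-view verification is omitted.

```python
import cv2

def detect_cover_candidates(road_bgr, min_area=300):
    """Find elliptical manhole-cover candidates in an already segmented road image."""
    gray = cv2.cvtColor(road_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    # OpenCV 4.x return signature: (contours, hierarchy).
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

    candidates = []
    for c in contours:
        if len(c) < 5 or cv2.contourArea(c) < min_area:
            continue                              # fitEllipse needs at least 5 points
        (cx, cy), axes, angle = cv2.fitEllipse(c)
        # Reject extremely elongated arcs that cannot be projected circles.
        if max(axes) > 0 and min(axes) / max(axes) > 0.3:
            candidates.append(((cx, cy), axes, angle))  # centre, axis lengths, rotation
    return candidates
```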

  11. Toward high-resolution global topography of Mercury from MESSENGER orbital stereo imaging: A prototype model for the H6 (Kuiper) quadrangle

    NASA Astrophysics Data System (ADS)

    Preusker, Frank; Stark, Alexander; Oberst, Jürgen; Matz, Klaus-Dieter; Gwinner, Klaus; Roatsch, Thomas; Watters, Thomas R.

    2017-08-01

    We selected approximately 10,500 narrow-angle camera (NAC) and wide-angle camera (WAC) images of Mercury acquired from orbit by MESSENGER's Mercury Dual Imaging System (MDIS) with an average resolution of 150 m/pixel to compute a digital terrain model (DTM) for the H6 (Kuiper) quadrangle, which extends from 22.5°S to 22.5°N and from 288.0°E to 360.0°E. From the images, we identified about 21,100 stereo image combinations consisting of at least three images each. We applied sparse multi-image matching to derive approximately 250,000 tie-points representing 50,000 ground points. We used the tie-points to carry out a photogrammetric block adjustment, which improves the image pointing and the accuracy of the ground point positions in three dimensions from about 850 m to approximately 55 m. We then applied high-density (pixel-by-pixel) multi-image matching to derive about 45 billion tie-points. Benefitting from improved image pointing data achieved through photogrammetric block adjustment, we computed about 6.3 billion surface points. By interpolation, we generated a DTM with a lateral spacing of 221.7 m/pixel (192 pixels per degree) and a vertical accuracy of about 30 m. The comparison of the DTM with Mercury Laser Altimeter (MLA) profiles obtained over four years of MESSENGER orbital operations reveals that the DTM is geometrically very rigid. It may be used as a reference to identify MLA outliers (e.g., when MLA operated at its ranging limit) or to map offsets of laser altimeter tracks, presumably caused by residual spacecraft orbit and attitude errors. After the relevant outlier removals and corrections, MLA profiles show excellent agreement with topographic profiles from H6, with a root mean square height difference of only 88 m.

  12. The Hyper-Angular Rainbow Polarimeter (HARP) CubeSat Observatory and the Characterization of Cloud Properties

    NASA Astrophysics Data System (ADS)

    Neilsen, T. L.; Martins, J. V.; Fish, C. S.; Fernandez Borda, R. A.

    2014-12-01

    The Hyper-Angular Rainbow Polarimeter (HARP) instrument is a wide field-of-view imager that splits three spatially identical images onto three independent polarizers and detector arrays. This technique achieves simultaneous imagery of the same ground target in three polarization states and is the key innovation for achieving high polarimetric accuracy with no moving parts. The spacecraft consists of a 3U CubeSat with 3-axis stabilization designed to keep the imaging optics pointing nadir during data collection while maximizing solar panel sun pointing otherwise. The hyper-angular capability is achieved by acquiring overlapping images at very fast speeds. An imaging polarimeter with hyper-angular capability can make a strong contribution to characterizing cloud properties. Non-polarized multi-angle measurements have been shown to be sensitive to thin cirrus and can be used to provide a climatology of these clouds. Adding polarization and increasing the number of observation angles allows for the retrieval of the complete size distribution of cloud droplets, including accurate information on the width of the droplet distribution in addition to the currently retrieved effective radius. The HARP mission is funded by the NASA Earth Science Technology Office as part of the In-Space Validation of Earth Science Technologies (InVEST) program. The HARP instrument is designed and built by a team of students and professionals led by Dr. Vanderlei Martins at the University of Maryland, Baltimore County. The HARP spacecraft is designed and built by a team of students and professionals at the Space Dynamics Laboratory.

  13. Research and implementation of finger-vein recognition algorithm

    NASA Astrophysics Data System (ADS)

    Pang, Zengyao; Yang, Jie; Chen, Yilei; Liu, Yin

    2017-06-01

    In finger vein image preprocessing, finger angle correction and ROI extraction are important parts of the system. In this paper, we propose an angle correction algorithm based on the centroid of the vein image, and extract the ROI region according to a bidirectional gray projection method. Inspired by the fact that features in vein areas have a valley-like appearance, a novel method is proposed to extract the center and width of the palm vein based on multi-directional gradients, which is computationally simple, fast and stable. On this basis, an encoding method is designed to determine the gray value distribution of the texture image. This algorithm effectively overcomes errors at the edges of texture extraction. Finally, the system achieves higher robustness and recognition accuracy by utilizing fuzzy threshold determination and a global gray value matching algorithm. Experimental results on pairs of matched palm images show that the proposed method has an EER of 3.21% and extracts features at a speed of 27 ms per image. It can be concluded that the proposed algorithm has clear advantages in grain extraction efficiency, matching accuracy and algorithm efficiency.

  14. Augmentation of linear facial anthropometrics through modern morphometrics: a facial convexity example.

    PubMed

    Wei, R; Claes, P; Walters, M; Wholley, C; Clement, J G

    2011-06-01

    The facial region has traditionally been quantified using linear anthropometrics. These are well established in dentistry, but require expertise to be used effectively. The aim of this study was to augment the utility of linear anthropometrics by applying them in conjunction with modern 3-D morphometrics. Facial images of 75 males and 94 females aged 18-25 years with self-reported Caucasian ancestry were used. An anthropometric mask was applied to establish corresponding quasi-landmarks on the images in the dataset. A statistical face-space, encoding shape covariation, was established. The facial median plane was extracted facilitating both manual and automated indication of commonly used midline landmarks. From both indications, facial convexity angles were calculated and compared. The angles were related to the face-space using a regression based pathway enabling the visualization of facial form associated with convexity variation. Good agreement between the manual and automated angles was found (Pearson correlation: 0.9478-0.9474, Dahlberg root mean squared error: 1.15°-1.24°). The population mean angle was 166.59°-166.29° (SD 5.09°-5.2°) for males-females. The angle-pathway provided valuable feedback. Linear facial anthropometrics can be extended when used in combination with a face-space derived from 3-D scans and the exploration of property pathways inferred in a statistically verifiable way. © 2011 Australian Dental Association.
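    The convexity angle itself is a simple three-landmark computation. The sketch below assumes the common glabella-subnasale-pogonion definition, which may differ from the exact landmark set used in the study; coordinates in the usage comment are made up.

```python
import numpy as np

def convexity_angle(glabella, subnasale, pogonion):
    """Facial convexity angle at the subnasale, in degrees, from three 3-D
    midline landmarks."""
    g, sn, pg = map(np.asarray, (glabella, subnasale, pogonion))
    v1, v2 = g - sn, pg - sn
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Example with made-up coordinates (millimetres):
# angle = convexity_angle((0, 45, 0), (0, 0, 12), (0, -40, 2))
```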

  15. Arizona Fires

    Atmospheric Science Data Center

    2014-05-15

    This image and accompanying animation from NASA's Multi-angle Imaging SpectroRadiometer (MISR) instrument on the Terra ... and is currently the second largest fire in Arizona history. More than 2,000 people are working to contain the fire, which is being ...

  16. Larsen B Ice Shelf

    Atmospheric Science Data Center

    2013-04-16

    article title: Unique Views of a Shattered Ice Shelf ... views of the breakup of the northern section of the Larsen B ice shelf are shown in this image pair from the Multi-angle Imaging ...

  17. Automatic method for estimation of in situ effective contact angle from X-ray micro tomography images of two-phase flow in porous media.

    PubMed

    Scanziani, Alessio; Singh, Kamaljit; Blunt, Martin J; Guadagnini, Alberto

    2017-06-15

    Multiphase flow in porous media is strongly influenced by the wettability of the system, which affects the arrangement of the interfaces of different phases residing in the pores. We present a method for estimating the effective contact angle, which quantifies the wettability and controls the local capillary pressure within the complex pore space of natural rock samples, based on the physical constraint of constant curvature of the interface between two fluids. This algorithm is able to extract a large number of measurements from a single rock core, resulting in a characteristic distribution of effective in situ contact angle for the system, that is modelled as a truncated Gaussian probability density distribution. The method is first validated on synthetic images, where the exact angle is known analytically; then the results obtained from measurements within the pore space of rock samples imaged at a resolution of a few microns are compared to direct manual assessment. Finally the method is applied to X-ray micro computed tomography (micro-CT) scans of two Ketton cores after waterflooding, that display water-wet and mixed-wet behaviour. The resulting distribution of in situ contact angles is characterized in terms of a mixture of truncated Gaussian densities. Crown Copyright © 2017. Published by Elsevier Inc. All rights reserved.
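    In a single 2-D slice, the constant-curvature constraint reduces the measurement to the angle between the circular interface fit and the local solid surface at the three-phase contact point. The sketch below is a simplified illustration of that geometry only, not the authors' full 3-D algorithm, and its orientation conventions are arbitrary.

```python
import numpy as np

def contact_angle_2d(circle_center, contact_point, solid_direction):
    """Angle (degrees) between the fitted circular fluid/fluid interface and the
    local solid surface direction at the contact point in a 2-D slice.
    Which of the two supplementary angles corresponds to the contact angle
    depends on the orientation chosen for the two vectors; the paper's 3-D
    procedure resolves this with phase labels."""
    c = np.asarray(circle_center, float)
    p = np.asarray(contact_point, float)
    s = np.asarray(solid_direction, float)

    radial = (p - c) / np.linalg.norm(p - c)
    tangent = np.array([-radial[1], radial[0]])   # tangent to the circle at p
    s = s / np.linalg.norm(s)

    cosang = np.dot(tangent, s)
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```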

  18. Nicaraguan Volcanoes, 26 February 2000

    NASA Image and Video Library

    2000-04-19

    The true-color image at left is a downward-looking (nadir) view of the area around the San Cristobal volcano, which erupted the previous day. This image is oriented with east at the top and north at the left. The right image is a stereo anaglyph of the same area, created from red band multi-angle data taken by the 45.6-degree aftward and 70.5-degree aftward cameras on the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite. View this image through red/blue 3D glasses, with the red filter over the left eye. A plume from San Cristobal (approximately at image center) is much easier to see in the anaglyph, due to three effects: the long viewing path through the atmosphere at the oblique angles, the reduced reflection from the underlying water, and the 3D stereoscopic height separation. In this image, the plume floats between the surface and the overlying cumulus clouds. A second plume is also visible in the upper right (southeast of San Cristobal). This very thin plume may originate from the Masaya volcano, which is continually degassing at a low rate. The spatial resolution is 275 meters (300 yards). http://photojournal.jpl.nasa.gov/catalog/PIA02600
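    Compositing a red/cyan anaglyph from two co-registered view-angle images is a simple channel assignment; this sketch assumes both bands are already scaled to [0, 1], and the channel assignment (left eye driven by the red channel) follows the caption's viewing instruction rather than any documented MISR processing code.

```python
import numpy as np

def multi_angle_anaglyph(band_left_eye, band_right_eye):
    """Build a red/cyan anaglyph from two co-registered single-band views taken
    at different along-track angles. Inputs are 2-D arrays scaled to [0, 1];
    view with the red filter over the left eye."""
    left = np.clip(band_left_eye, 0.0, 1.0)
    right = np.clip(band_right_eye, 0.0, 1.0)
    # Left-eye image drives the red channel, right-eye image the green and blue.
    return np.dstack([left, right, right])
```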

  19. MISR Images Forest Fires and Hurricane

    NASA Technical Reports Server (NTRS)

    2000-01-01

    These images show forest fires raging in Montana and Hurricane Hector swirling in the Pacific. These two unrelated, large-scale examples of nature's fury were captured by the Multi-angle Imaging SpectroRadiometer (MISR) during a single orbit of NASA's Terra satellite on August 14, 2000.

    In the left image, huge smoke plumes rise from devastating wildfires in the Bitterroot Mountain Range near the Montana-Idaho border. Flathead Lake is near the upper left, and the Great Salt Lake is at the bottom right. Smoke accumulating in the canyons and plains is also visible. This image was generated from the MISR camera that looks forward at a steep angle (60 degrees); the instrument has nine different cameras viewing Earth at different angles. The smoke is far more visible when seen at this highly oblique angle than it would be in a conventional, straight-downward (nadir) view. The wide extent of the smoke is evident from comparison with the image on the right, a view of Hurricane Hector acquired from MISR's nadir-viewing camera. Both images show an area of approximately 400 kilometers (250 miles) in width and about 850 kilometers (530 miles) in length.

    When this image of Hector was taken, the eastern Pacific tropical cyclone was located approximately 1,100 kilometers (680 miles) west of the southern tip of Baja California, Mexico. The eye is faintly visible and measures 25 kilometers (16 miles) in diameter. The storm was beginning to weaken, and 24 hours later the National Weather Service downgraded Hector from a hurricane to a tropical storm.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

    For more information: http://www-misr.jpl.nasa.gov

  20. SU-C-207-01: Four-Dimensional Inverse Geometry Computed Tomography: Concept and Its Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, K; Kim, D; Kim, T

    2015-06-15

    Purpose: In the past few years, the inverse geometry computed tomography (IGCT) system has been developed to overcome shortcomings of a conventional computed tomography (CT) system, such as the scatter problem induced by large detector size and cone-beam artifacts. In this study, we present the concept of a four-dimensional (4D) IGCT system that retains these advantages while adding temporal resolution for dynamic studies and reduction of motion artifacts. Methods: Contrary to a conventional CT system, the projection data at a certain angle in IGCT are a group of fractionated narrow cone-beam projections, a projection group (PG), acquired from a multi-source array whose sources are operated sequentially with an extremely short time gap. For 4D IGCT imaging, time-related data acquisition parameters were determined by combining the multi-source scanning time for collecting one PG with a conventional 4D CBCT data acquisition sequence. Over a gantry rotation, the acquired PGs from the multi-source array were tagged with time and angle for 4D image reconstruction. Acquired PGs were sorted into 10 phases and image reconstruction was performed independently at each phase. An image reconstruction algorithm based on filtered backprojection was used in this study. Results: The 4D IGCT produced uniform images without cone-beam artifact, in contrast to the 4D CBCT image. In addition, the 4D IGCT images of each phase had no significant motion-induced artifact compared with 3D CT. Conclusion: The 4D IGCT images appear to give relatively accurate dynamic information on patient anatomy, given that the results were more robust to motion artifact than 3D CT. The approach should therefore be useful for dynamic studies and respiratory-correlated radiation therapy. This work was supported by the Industrial R&D program of MOTIE/KEIT [10048997, Development of the core technology for integrated therapy devices based on real-time MRI guided tumor tracking] and the Mid-career Researcher Program (2014R1A2A1A10050270) through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning.

  1. Summer Harvest in Saratov, Russia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Russia's Saratov Oblast (province) is located in the southeastern portion of the East-European plain, in the Lower Volga River Valley. Southern Russia produces roughly 40 percent of the country's total agricultural output, and Saratov Oblast is the largest producer of grain in the Volga region. Vegetation changes in the province's agricultural lands between spring and summer are apparent in these images acquired on May 31 and July 18, 2002 (upper and lower image panels, respectively) by the Multi-angle Imaging SpectroRadiometer (MISR).

    The left-hand panels are natural color views acquired by MISR's vertical-viewing (nadir) camera. Less vegetation and more earth tones (indicative of bare soils) are apparent in the summer image (lower left). Farmers in the region utilize staggered sowing to help stabilize yields, and a number of different stages of crop maturity can be observed. The main crop is spring wheat, cultivated under non-irrigated conditions. A short growing season and relatively low and variable rainfall are the major limitations to production. Saratov city is apparent as the light gray pixels on the left (west) bank of the Volga River. Riparian vegetation along the Volga exhibits dark green hues, with some new growth appearing in summer.

    The right-hand panels are multi-angle composites created with red band data from MISR's 60-degree backward, nadir and 60-degree forward-viewing cameras displayed as red, green and blue respectively. In these images, color variations serve as a proxy for changes in angular reflectance, and the spring and summer views were processed identically to preserve relative variations in brightness between the two dates. Urban areas and vegetation along the Volga banks look similar in the two seasonal multi-angle composites. The agricultural areas, on the other hand, look strikingly different. This can be attributed to differences in brightness and texture between bare soil and vegetated land. The chestnut-colored soils in this region are brighter in MISR's red band than the vegetation. Because plants have vertical structure, the oblique cameras observe a greater proportion of vegetation relative to the nadir camera, which sees more soil. In spring, therefore, the scene is brightest in the vertical view and thus appears with an overall greenish hue. In summer, the soil characteristics play a greater role in governing the appearance of the scene, and the angular reflectance is now brighter at the oblique view angles (displayed as red and blue), thus imparting a pink color to much of the farmland and a purple color to areas along the banks of several narrow rivers. The unusual appearance of the clouds is due to geometric parallax which splits the imagery into spatially separated components as a consequence of their elevation above the surface.
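    The multi-angle composites described above amount to a channel assignment; a minimal sketch follows, assuming the three red-band camera views are already co-registered and scaled to a common stretch (the array names are mine).

```python
import numpy as np

def multi_angle_composite(red_60back, red_nadir, red_60fwd):
    """Display red-band data from the 60-degree backward, nadir and 60-degree
    forward cameras as the R, G and B channels respectively, as described for
    the Saratov composites. Inputs are co-registered 2-D arrays in [0, 1];
    use one common stretch for scenes that are to be compared."""
    stack = [np.clip(a, 0.0, 1.0) for a in (red_60back, red_nadir, red_60fwd)]
    return np.dstack(stack)
```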

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously from pole to pole, and views almost the entire globe every 9 days. These images are a portion of the data acquired during Terra orbits 13033 and 13732, and cover an area of about 173 kilometers x 171 kilometers. They utilize data from blocks 49 to 50 within World Reference System-2 path 170.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  2. Greenland's Coast in Holiday Colors

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Vibrant reds, emerald greens, brilliant whites, and pastel blues adorn this view of the area surrounding the Jakobshavn Glacier on the western coast of Greenland. The image is a false-color (near-infrared, green, blue) view acquired by the Multi-angle Imaging SpectroRadiometer's nadir camera. The brightness of vegetation in the near-infrared contributes to the reddish hues; glacial silt gives rise to the green color of the water; and blue-colored melt ponds are visible in the bright white ice. A scattering of small icebergs in Disco Bay adds a touch of glittery sparkle to the scene.

    The large island in the upper left is called Qeqertarsuaq. To the east of this island, and just above image center, is the outlet of the fast-flowing Jakobshavn (or Ilulissat) glacier. Jakobshavn is considered to have the highest iceberg production of all Greenland glaciers and is a major drainage outlet for a large portion of the western side of the ice sheet. Icebergs released from the glacier drift slowly with the ocean currents and pose hazards for shipping along the coast.

    The Multi-angle Imaging SpectroRadiometer views the daylit Earth continuously and the entire globe between 82 degrees north and 82 degrees south latitude is observed every 9 days. These data products were generated from a portion of the imagery acquired on June 18, 2003 during Terra orbit 18615. The image cover an area of about 254 kilometers x 210 kilometers, and use data from blocks 34 to 35 within World Reference System-2 path 10.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  3. Scandinavia and the Baltic Region

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Data from the Multi-angle Imaging SpectroRadiometer's vertical-viewing (nadir) camera were combined to create this cloud-free natural-color mosaic of Scandinavia and the Baltic region. The image extends from 64°N, 0°E in the northwest to 56°N, 32°E in the southeast, and has been draped over a shaded relief Digital Terrain Elevation Model from the United States Geological Survey. It is displayed in an equidistant conic projection.

    The image area includes southern Norway, Sweden and Finland, northern Denmark, Estonia, Latvia and part of western Russia. Norway's rugged western coastline is deeply indented by fjords. Elongated lakes, formed by glacial erosion and deposition, are characteristic of the entire region, and are particularly dense throughout Finland and Sweden. Numerous islands are present, and a virtually continuous chain of small, scattered islands occur between Sweden and Finland. The northern and eastern waters of the Baltic Sea are almost fresh, since the Baltic receives saltwater only from the narrow and shallow sounds between Denmark and Sweden that connect it to the North Sea. Most of the major cities within the image area are coastal, including St. Petersburg, Stockholm, Helsinki, Riga, and Oslo.

    The Multi-angle Imaging SpectroRadiometer (MISR) observes the daylit Earth continuously from pole to pole, and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  4. Smoke from Station Fire Blankets Southern California

    NASA Image and Video Library

    2009-09-01

    The Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite captured this Aug. 30 image of smoke plumes from the Station and other wildfires burning throughout Southern California.

  5. Sparsity-based multi-height phase recovery in holographic microscopy

    NASA Astrophysics Data System (ADS)

    Rivenson, Yair; Wu, Yichen; Wang, Hongda; Zhang, Yibo; Feizi, Alborz; Ozcan, Aydogan

    2016-11-01

    High-resolution imaging of densely connected samples such as pathology slides using digital in-line holographic microscopy requires the acquisition of several holograms, e.g., at >6-8 different sample-to-sensor distances, to achieve robust phase recovery and coherent imaging of specimen. Reducing the number of these holographic measurements would normally result in reconstruction artifacts and loss of image quality, which would be detrimental especially for biomedical and diagnostics-related applications. Inspired by the fact that most natural images are sparse in some domain, here we introduce a sparsity-based phase reconstruction technique implemented in wavelet domain to achieve at least 2-fold reduction in the number of holographic measurements for coherent imaging of densely connected samples with minimal impact on the reconstructed image quality, quantified using a structural similarity index. We demonstrated the success of this approach by imaging Papanicolaou smears and breast cancer tissue slides over a large field-of-view of ~20 mm2 using 2 in-line holograms that are acquired at different sample-to-sensor distances and processed using sparsity-based multi-height phase recovery. This new phase recovery approach that makes use of sparsity can also be extended to other coherent imaging schemes, involving e.g., multiple illumination angles or wavelengths to increase the throughput and speed of coherent imaging.
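    The sparsity constraint is applied in the wavelet domain between propagation steps; below is a hedged single-step sketch using PyWavelets. The wavelet, decomposition level and threshold are illustrative, and the surrounding multi-height propagation loop is omitted entirely.

```python
import numpy as np
import pywt

def wavelet_sparsity_step(field, wavelet="db4", level=3, tau=0.02):
    """Soft-threshold the real and imaginary parts of a complex field in the
    wavelet domain, the kind of sparsity-enforcing step used between
    holographic back-propagations."""
    def shrink(img):
        coeffs = pywt.wavedec2(img, wavelet, level=level)
        out = [coeffs[0]] + [
            tuple(pywt.threshold(c, tau, mode="soft") for c in detail)
            for detail in coeffs[1:]
        ]
        return pywt.waverec2(out, wavelet)

    real = shrink(np.real(field))
    imag = shrink(np.imag(field))
    # waverec2 can pad by a pixel; crop back to the original size.
    return (real + 1j * imag)[: field.shape[0], : field.shape[1]]
```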

  6. Andes

    Atmospheric Science Data Center

    2013-04-18

    ... Arequipa, provide a striking demonstration of the power of water erosion. This image pair was acquired by the Multi-angle Imaging ... stereo image in 3-D requires red/blue glasses with the red filter placed over your left eye. Two main erosion formations can be seen. ...

  7. Where on Earth...? MISR Mystery Image Quiz #13: Western Uzbekistan and Northeastern Turkmenistan

    NASA Image and Video Library

    2003-03-19

    Acquired by the Multi-angle Imaging SpectroRadiometer instrument aboard NASA Terra spacecraft, this image is from the MISR Where on Earth...? Mystery Quiz #13. The location is Western Uzbekistan and Northeastern Turkmenistan.

  8. Aero-optical effects of an optical seeker with a supersonic jet for hypersonic vehicles in near space.

    PubMed

    Guo, Guangming; Liu, Hong; Zhang, Bin

    2016-06-10

    The aero-optical effects of an optical seeker with a supersonic jet for hypersonic vehicles in near space were investigated by three suites of cases, in which the altitude, angle of attack, and Mach number were varied in a large range. The direct simulation Monte Carlo based on the Boltzmann equation was used for flow computations and the ray-tracing method was used to simulate beam transmission through the nonuniform flow field over the optical window. Both imaging displacement and phase deviation were proposed as evaluation parameters, and along with Strehl ratio they were used to quantitatively evaluate aero-optical effects. The results show that aero-optical effects are quite weak when the altitude is greater than 30 km, the imaging displacement is related to the incident angle of a beam, and it is minimal when the incident angle is approximately 15°. For reducing the aero-optical effects, the optimal location of an aperture should be in the middle of the optical window.

  9. Co-Registration of Terrestrial and Uav-Based Images - Experimental Results

    NASA Astrophysics Data System (ADS)

    Gerke, M.; Nex, F.; Jende, P.

    2016-03-01

    For many applications within urban environments the combined use of images taken from the ground and from unmanned aerial platforms is of interest: while the airborne perspective captures the upper parts of objects, including roofs, the ground images complement the data with lateral views to retrieve a complete visualisation or 3D reconstruction of areas of interest. The automatic co-registration of air- and ground-based images is still a challenge and cannot be considered solved. The main obstacle is that objects are photographed from quite different angles, and hence state-of-the-art tie point measurement approaches cannot cope with the induced perspective transformation. One important first step towards a solution is to use airborne images taken under slant directions. Those oblique views not only help to connect vertical images and horizontal views but also provide image information from 3D structures not visible from the other two directions. According to our experience, however, careful planning and many images taken under different viewing angles are still needed to support automatic matching across all images and a complete bundle block adjustment. Nevertheless, the entire process remains quite sensitive - the removal of a single image might lead to a completely different or wrong solution, or to a separation of image blocks. In this paper we analyse the impact different parameters and strategies have on the solution. These are (a) the tie point matcher used and (b) the software used for bundle adjustment. Using the data provided in the context of the ISPRS benchmark on multi-platform photogrammetry, we systematically address the mentioned influences. Concerning the tie-point matching we test the standard SIFT point extractor and descriptor, but also the SURF and ASIFT approaches, the ORB technique, as well as (A)KAZE, which is based on a nonlinear scale space. In terms of pre-processing we analyse the Wallis filter. Results show that in more challenging situations, in this case for data captured from different platforms on different days, most approaches do not perform well. Wallis filtering proved to be most helpful, especially for the SIFT approach. The commercial software pix4dmapper succeeds in the overall bundle adjustment only for some configurations, and notably not for the entire image block provided.

  10. Generic framework for vessel detection and tracking based on distributed marine radar image data

    NASA Astrophysics Data System (ADS)

    Siegert, Gregor; Hoth, Julian; Banyś, Paweł; Heymann, Frank

    2018-04-01

    Situation awareness is understood as a key requirement for safe and secure shipping at sea. The primary sensor for maritime situation assessment is still the radar, with the AIS introduced as a supplemental service only. In this article, we present a framework to assess the current situation picture based on marine radar image processing. Essentially, the framework comprises a centralized IMM-JPDA multi-target tracker in combination with a fully automated scheme for track management, i.e., target acquisition and track depletion. This tracker is conditioned on measurements extracted from radar images. To gain a more robust and complete situation picture, we exploit the aspect angle diversity of multiple marine radars by fusing them prior to the tracking process. Due to the generic structure of the proposed framework, different techniques for radar image processing can be implemented and compared, namely the BLOB detector and SExtractor. The overall framework performance in terms of multi-target state estimation is compared for both methods based on a dedicated measurement campaign in the Baltic Sea involving multiple static and mobile targets.
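
    A minimal sketch of the kind of blob extraction that turns a radar intensity image into point measurements for such a multi-target tracker is given below; the relative threshold, minimum blob size, and use of scipy.ndimage are illustrative assumptions, not the detector configuration used in the paper:

      import numpy as np
      from scipy import ndimage

      def extract_blob_centroids(radar_img, rel_thresh=0.6, min_pixels=20):
          """Return (row, col) centroids of bright connected regions."""
          mask = radar_img >= rel_thresh * radar_img.max()
          labels, n = ndimage.label(mask)                     # connected components
          centroids = []
          for i in range(1, n + 1):
              region = labels == i
              if region.sum() >= min_pixels:                  # suppress speckle-sized blobs
                  centroids.append(ndimage.center_of_mass(region))
          return np.array(centroids)                          # fed to the tracker as measurements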

  11. NASA Tech Briefs, March 2009

    NASA Technical Reports Server (NTRS)

    2009-01-01

    Topics covered include: Improved Instrument for Detecting Water and Ice in Soil; Real-Time Detection of Dust Devils from Pressure Readings; Determining Surface Roughness in Urban Areas Using Lidar Data; DSN Data Visualization Suite; Hamming and Accumulator Codes Concatenated with MPSK or QAM; Wide-Angle-Scanning Reflectarray Antennas Actuated by MEMS; Biasable Subharmonic Membrane Mixer for 520 to 600 GHz; Hardware Implementation of Serially Concatenated PPM Decoder; Symbolic Processing Combined with Model-Based Reasoning; Presentation Extensions of the SOAP; Spreadsheets for Analyzing and Optimizing Space Missions; Processing Ocean Images to Detect Large Drift Nets; Alternative Packaging for Back-Illuminated Imagers; Diamond Machining of an Off-Axis Biconic Aspherical Mirror; Laser Ablation Increases PEM/Catalyst Interfacial Area; Damage Detection and Self-Repair in Inflatable/Deployable Structures; Polyimide/Glass Composite High-Temperature Insulation; Nanocomposite Strain Gauges Having Small TCRs; Quick-Connect Windowed Non-Stick Penetrator Tips for Rapid Sampling; Modeling Unsteady Cavitation and Dynamic Loads in Turbopumps; Continuous-Flow System Produces Medical-Grade Water; Discrimination of Spore-Forming Bacilli Using spoIVA; nBn Infrared Detector Containing Graded Absorption Layer; Atomic References for Measuring Small Accelerations; Ultra-Broad-Band Optical Parametric Amplifier or Oscillator; Particle-Image Velocimeter Having Large Depth of Field; Enhancing SERS by Means of Supramolecular Charge Transfer; Improving 3D Wavelet-Based Compression of Hyperspectral Images; Improved Signal Chains for Readout of CMOS Imagers; SOI CMOS Imager with Suppression of Cross-Talk; Error-Rate Bounds for Coded PPM on a Poisson Channel; Biomorphic Multi-Agent Architecture for Persistent Computing; and Using Covariance Analysis to Assess Pointing Performance.

  12. Descriptive anatomy of the interscalene triangle and the costoclavicular space and their relationship to thoracic outlet syndrome: a study of 60 cadavers.

    PubMed

    Dahlstrom, Kelly A; Olinger, Anthony B

    2012-06-01

    Thoracic outlet syndrome classically results from constrictions in 1 or more of 3 specific anatomical locations: the interscalene triangle, costoclavicular space, and coracopectoral tunnel. Magnetic resonance and computed tomographic imaging studies suggest that, of the 3 potential locations for constriction, the costoclavicular space is the most susceptible to compression. This study of human cadavers aims to expand on the descriptive anatomy of the interscalene triangle and associated costoclavicular space. The interscalene angle, interscalene triangle base, and costoclavicular space were measured on 120 sides of embalmed human cadavers. Linear distances and angles were measured using a caliper and protractor, respectively. The data were analyzed by calculating the mean, range, and standard deviation. The range for the interscalene base was 0 to 21.0 mm with a mean of 10.7 mm. For the interscalene angle, the range was 4° to 22° with a mean of 11.3°. Measurements for the costoclavicular space ranged from 6 to 30.9 mm with a mean of 13.5 mm. No significant differences were observed between left and right interscalene triangles or costoclavicular spaces; furthermore, there were no differences between the sexes concerning these 2 locations. Copyright © 2012 National University of Health Sciences. Published by Mosby, Inc. All rights reserved.

  13. Study on polarization image methods in turbid medium

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Mo, Chunhe; Liu, Boyu; Duan, Jin; Zhang, Su; Zhu, Yong

    2014-11-01

    Polarization imaging detection can acquire multi-dimensional polarization information in addition to the traditional intensity image, and thus improves the probability of target detection and recognition. For targets in turbid media, fusing polarization images helps to obtain higher-quality images. Using laser polarization imaging at visible wavelengths, the corresponding linearly polarized intensity images were obtained by rotating the angle of a polarizer, and polarization parameters were measured for targets in turbid media with concentrations ranging from 5% to 10%. Image fusion techniques were then applied to the acquired polarization images; several fusion methods were compared for their performance on polarization images of targets in turbid media, and the resulting images and data tables are given and analysed. Pixel-level, feature-level and decision-level fusion algorithms were used to fuse the degree-of-linear-polarization (DOLP) images. The results show that as the polarization angle increases, the polarization image becomes increasingly blurred and its quality degrades, while the contrast of the fused image is clearly improved over that of a single image. Finally, the reasons for the increase in contrast of the fused polarization image are analysed.
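
    For reference, the degree-of-linear-polarization image that the fusion operates on can be formed from four polarizer orientations as sketched below (a standard Stokes-parameter construction; the variable names and the assumption of 0°/45°/90°/135° acquisitions are illustrative):

      import numpy as np

      def dolp_image(I0, I45, I90, I135, eps=1e-9):
          """Degree of linear polarization from four polarizer-angle intensity images."""
          S0 = 0.5 * (I0 + I45 + I90 + I135)   # total intensity
          S1 = I0 - I90                        # 0° vs. 90° preference
          S2 = I45 - I135                      # +45° vs. -45° preference
          return np.sqrt(S1**2 + S2**2) / (S0 + eps)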

  14. Feature point based 3D tracking of multiple fish from multi-view images

    PubMed Central

    Qian, Zhi-Ming

    2017-01-01

    A feature point based method is proposed for tracking multiple fish in 3D space. First, a simplified representation of the object is realized through construction of two feature point models based on its appearance characteristics. After feature points are classified into occluded and non-occluded types, matching and association are performed, respectively. Finally, the object's motion trajectory in 3D space is obtained through integrating multi-view tracking results. Experimental results show that the proposed method can simultaneously track 3D motion trajectories for up to 10 fish accurately and robustly. PMID:28665966

  15. Feature point based 3D tracking of multiple fish from multi-view images.

    PubMed

    Qian, Zhi-Ming; Chen, Yan Qiu

    2017-01-01

    A feature point based method is proposed for tracking multiple fish in 3D space. First, a simplified representation of the object is realized through construction of two feature point models based on its appearance characteristics. After feature points are classified into occluded and non-occluded types, matching and association are performed, respectively. Finally, the object's motion trajectory in 3D space is obtained through integrating multi-view tracking results. Experimental results show that the proposed method can simultaneously track 3D motion trajectories for up to 10 fish accurately and robustly.

  16. Greener Pastures in Northern Queensland, Australia

    NASA Technical Reports Server (NTRS)

    2004-01-01

    After a 19 month rainfall deficiency, heavy rainfall during January 2004 brought drought relief to much of northern Queensland. Local graziers hope for good long-term responses in pasture growth from the heavy rains. These images and maps from the Multi-angle Imaging SpectroRadiometer (MISR) portray part of Australia's Mitchell Grasslands bioregion before summer rainfall, on October 18, 2003 (left) and afterwards, on February 7, 2004 (right).

    The top pair of images shows natural-color views from MISR's nadir camera. The green areas in the post-rainfall image highlight the growth of vegetation. The middle panels show the reflectivity of the surface over the photosynthetically active radiation (PAR) region of visible light (400-700 nm), expressed as a directional-hemispherical reflectance (DHR-PAR), or albedo. The portion of the radiation that is not reflected back to the atmosphere or space is absorbed by either the vegetation or the soil. The fraction of PAR absorbed by green vegetation, known as FPAR, is shown in the bottom panels. FPAR is one of the quantities that establishes the photosynthetic and carbon uptake efficiency of live vegetation. MISR's FPAR product makes use of aerosol retrievals to correct for atmospheric scattering and absorption effects, and uses plant canopy structural models to determine the partitioning of solar radiation. Both of these aspects are facilitated by the multiangular nature of the MISR measurements.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbits 20397 and 22028. The panels cover an area of about 290 kilometers x 228 kilometers, and utilize data from blocks 106 to 108 within World Reference System-2 path 96.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  17. Detection and Characterization of Brown Dwarfs and Exoplanets with a Tunable Filter for Space Applications [Detection et caracterisation de naines brunes et exoplanetes avec un filtre accordable pour applications dans l'espace]

    NASA Astrophysics Data System (ADS)

    Ingraham, Patrick Jon

    This thesis determines the capability of detecting faint companions in the presence of speckle noise when performing space-based high-contrast imaging through spectral differential imagery (SDI) using a low-order Fabry-Perot etalon as a tunable filter. The performance of such a tunable filter is illustrated through the Tunable Filter Imager (TFI), an instrument designed for the James Webb Space Telescope (JWST). Using a TFI prototype etalon and a custom-designed test bed, the etalon's ability to perform speckle suppression through SDI is demonstrated experimentally. Improvements in contrast vary with separation, ranging from a factor of ~10 at working angles greater than 11 λ/D and increasing up to a factor of ~60 at 5 λ/D. These measurements are consistent with a Fresnel optical propagation model which shows the speckle suppression capability is limited by the test bed and not the etalon. This result demonstrates that a tunable filter is an attractive option to perform high-contrast imaging through SDI. To explore the capability of space-based SDI using an etalon, we perform an end-to-end Fresnel propagation of JWST and TFI. Using this simulation, a contrast improvement ranging from a factor of ~7 to ~100 is predicted, depending on the instrument's configuration. The performance of roll-subtraction is simulated and compared to that of SDI. The SDI capability of the Near-Infrared Imager and Slitless Spectrograph (NIRISS), the science instrument module to replace TFI in the JWST Fine Guidance Sensor, is also determined. Using low-resolution, multi-band (0.85-2.4 μm) multi-object spectroscopy, 104 objects towards the central region of the Orion Nebula Cluster have been assigned spectral types, including 7 new brown dwarfs and 4 new planetary-mass candidates. These objects are useful for determining the substellar initial mass function and for testing evolutionary and atmospheric models of young stellar and substellar objects. Using the measured H-band magnitudes, combined with our determined extinction values, the classified objects are used to create a Hertzsprung-Russell diagram for the cluster. Our results indicate a single epoch of star formation beginning ~1 Myr ago. The initial mass function of the cluster is derived and found to be consistent with the values determined for other young clusters and the galactic disk.

  18. Secure distribution for high resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Liu, Jin; Sun, Jing; Xu, Zheng Q.

    2010-09-01

    The use of remote sensing images collected by space platforms is becoming more and more widespread. The increasing value of space data and its use in critical scenarios call for the adoption of proper security measures to protect these data against unauthorized access and fraudulent use. In this paper, based on the characteristics of remote sensing image data and the application requirements for secure distribution, a secure distribution method is proposed, comprising user and region classification, hierarchical control and key generation, and region-based multi-level encryption. Combining these three parts, the same multi-level-encrypted remote sensing image can be distributed to users with different permissions through multicast, while each user obtains only the level of information allowed by their own decryption keys. This meets the access control and security needs of high resolution remote sensing image distribution well. The experimental results prove the effectiveness of the proposed method, which is suitable for practical use in the secure transmission of remote sensing images containing confidential information over the internet.

  19. MISR Multi-angle Views of Sunday Morning Fires

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Hot, dry Santa Ana winds began blowing through the Los Angeles and San Diego areas on Sunday October 21, 2007. Wind speeds ranging from 30 to 50 mph were measured in the area, with extremely low relative humidities. These winds, coupled with exceptionally dry conditions due to lack of rainfall resulted in a number of fires in the Los Angeles and San Diego areas, causing the evacuation of more than 250,000 people.

    These two images show the Southern California coast from Los Angeles to San Diego from two of the nine cameras on the Multi-angle Imaging SpectroRadiometer (MISR) instrument on the NASA EOS Terra satellite. These images were obtained around 11:35 a.m. PDT on Sunday morning, October 21, 2007 and show a number of plumes extending out over the Pacific ocean. In addition, locations identified as potential hot spots from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on the same satellite are outlined in red.

    The left image is from MISR's nadir-looking camera, and the plumes appear very faint. The image on the right is from MISR's 60° forward-looking camera, which accentuates the amount of light scattered by aerosols in the atmosphere, including smoke and dust. Both these images are false color and contain information from MISR's red, green, blue and near-infrared wavelengths, which makes vegetated land appear greener than it would naturally. Notice in the right-hand image that the color of the plumes associated with the MODIS hot spots is bluish, while plumes not associated with hot spots appear more yellow. This is because the latter plumes are composed of dust kicked up by the strong Santa Ana winds. In some locations along Interstate 5 on this date, visibility was severely reduced due to blowing dust. MISR's multiangle and multispectral capability gives it the ability to distinguish smoke from dust in this situation.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These images were generated from a portion of the imagery acquired during Terra orbit 41713, and use data from blocks 63 to 66 within World Reference System-2 path 40.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. The MISR data were obtained from the NASA Langley Research Center Atmospheric Science Data Center. JPL is a division of the California Institute of Technology.

  20. Measurements of Multi-star Systems LEO 5 and MKT 13

    NASA Astrophysics Data System (ADS)

    AlZaben, Faisal; Priest, Allen; Priest, Stephen; Qiu, Rex; Boyce, Grady; Boyce, Pat

    2016-04-01

    We report measurements of the position angles and separations of two multi-star systems observed during the fall of 2015. Image data was obtained using an online 17-inch iTelescope system in Nerpio, Spain. Image data was analyzed using Maxim DL Pro 6 and Mira Pro x64 software tools at the Army and Navy Academy in Carlsbad, California. Our measurements of the LEO 5 system are consistent with historical data, although inconclusive as to the nature of the system. Our measurements and the historical data for the MKT 13 system show a consistent linearity in the position angle and separation.

  1. New Mexico: Los Alamos

    Atmospheric Science Data Center

    2014-05-15

    Los Alamos, New Mexico. Multi-angle views of the Fire in Los Alamos, New Mexico, May 9, 2000. These true-color images covering north-central New Mexico ...

  2. Mystery #14

    Atmospheric Science Data Center

    2013-04-22

    ... play geographical detective! This natural-color image from the Multi-angle Imaging SpectroRadiometer (MISR) instrument on the Terra ... type of clouds pictured here are often associated with lightning and sustained rainstorms lasting several hours or more. 5. ...

  3. Multi-class segmentation of neuronal electron microscopy images using deep learning

    NASA Astrophysics Data System (ADS)

    Khobragade, Nivedita; Agarwal, Chirag

    2018-03-01

    Study of connectivity of neural circuits is an essential step towards a better understanding of functioning of the nervous system. With the recent improvement in imaging techniques, high-resolution and high-volume images are being generated requiring automated segmentation techniques. We present a pixel-wise classification method based on Bayesian SegNet architecture. We carried out multi-class segmentation on serial section Transmission Electron Microscopy (ssTEM) images of Drosophila third instar larva ventral nerve cord, labeling the four classes of neuron membranes, neuron intracellular space, mitochondria and glia / extracellular space. Bayesian SegNet was trained using 256 ssTEM images of 256 x 256 pixels and tested on 64 different ssTEM images of the same size, from the same serial stack. Due to high class imbalance, we used a class-balanced version of Bayesian SegNet by re-weighting each class based on their relative frequency. We achieved an overall accuracy of 93% and a mean class accuracy of 88% for pixel-wise segmentation using this encoder-decoder approach. On evaluating the segmentation results using similarity metrics like SSIM and Dice Coefficient, we obtained scores of 0.994 and 0.886 respectively. Additionally, we used the network trained using the 256 ssTEM images of Drosophila third instar larva for multi-class labeling of ISBI 2012 challenge ssTEM dataset.
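
    One common way to implement the class re-weighting mentioned above is median-frequency balancing; the sketch below is illustrative and may differ from the exact weighting used in the paper:

      import numpy as np

      def class_balance_weights(label_maps, num_classes=4, eps=1e-12):
          """label_maps: integer array (N, H, W) with values in [0, num_classes)."""
          counts = np.bincount(label_maps.ravel(), minlength=num_classes).astype(float)
          freq = counts / counts.sum()                    # relative frequency per class
          return np.median(freq) / np.maximum(freq, eps)  # rare classes get larger loss weights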

  4. Off-resonance artifacts correction with convolution in k-space (ORACLE).

    PubMed

    Lin, Wei; Huang, Feng; Simonotto, Enrico; Duensing, George R; Reykowski, Arne

    2012-06-01

    Off-resonance artifacts hinder the wider applicability of echo-planar imaging and non-Cartesian MRI methods such as radial and spiral. In this work, a general and rapid method is proposed for off-resonance artifacts correction based on data convolution in k-space. The acquired k-space is divided into multiple segments based on their acquisition times. Off-resonance-induced artifact within each segment is removed by applying a convolution kernel, which is the Fourier transform of an off-resonance correcting spatial phase modulation term. The field map is determined from the inverse Fourier transform of a basis kernel, which is calibrated from data fitting in k-space. The technique was demonstrated in phantom and in vivo studies for radial, spiral and echo-planar imaging datasets. For radial acquisitions, the proposed method allows the self-calibration of the field map from the imaging data, when an alternating view-angle ordering scheme is used. An additional advantage for off-resonance artifacts correction based on data convolution in k-space is the reusability of convolution kernels to images acquired with the same sequence but different contrasts. Copyright © 2011 Wiley-Liss, Inc.

  5. Illumination Invariant Change Detection (iicd): from Earth to Mars

    NASA Astrophysics Data System (ADS)

    Wan, X.; Liu, J.; Qin, M.; Li, S. Y.

    2018-04-01

    Multi-temporal Earth observation and Mars orbital imagery data with frequent repeat coverage provide great capability for planetary surface change detection. When comparing two images taken at different times of day or in different seasons for change detection, the variation of topographic shades and shadows caused by the change of sunlight angle can be so significant that it overwhelms the real object and environmental changes, making automatic detection unreliable. An effective change detection algorithm therefore has to be robust to illumination variation. This paper presents our research on developing and testing an Illumination Invariant Change Detection (IICD) method based on the robustness of phase correlation (PC) to variation in solar illumination for image matching. The IICD is based on two key functions: i) initial change detection based on a saliency map derived from pixel-wise dense PC matching and ii) change quantization, which combines change type identification, motion estimation and precise appearance change identification. Experiments using multi-temporal Landsat 7 ETM+ satellite images, RapidEye satellite images and Mars HiRISE images demonstrate that our frequency-based image matching method can reach sub-pixel accuracy, and thus the proposed IICD method can effectively detect and precisely segment large-scale changes such as landslides as well as small object changes such as a Mars rover, under daily and seasonal sunlight changes.
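
    The core of the PC matching step can be sketched as below; this is a generic integer-shift phase correlation (the sub-pixel peak refinement the paper relies on is omitted), with the normalisation constant as an illustrative assumption:

      import numpy as np

      def phase_correlation_shift(patch_a, patch_b, eps=1e-12):
          """Estimate the (row, col) translation of patch_b relative to patch_a."""
          Fa, Fb = np.fft.fft2(patch_a), np.fft.fft2(patch_b)
          cross = Fa * np.conj(Fb)
          corr = np.fft.ifft2(cross / (np.abs(cross) + eps)).real  # phase-only correlation surface
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          shape = np.array(corr.shape, dtype=float)
          shift = np.array(peak, dtype=float)
          mask = shift > shape / 2                                 # wrap to signed shifts
          shift[mask] -= shape[mask]
          return shift

    Because the cross-power spectrum is normalised to unit magnitude, the peak location depends on the phase (i.e., geometry) of the two patches rather than their absolute brightness, which is what makes the matching largely insensitive to illumination differences between acquisition times.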

  6. Habitable Exoplanet Imaging Mission (HabEx): Architecture of the 4m Mission Concept

    NASA Astrophysics Data System (ADS)

    Kuan, Gary M.; Warfield, Keith R.; Mennesson, Bertrand; Kiessling, Alina; Stahl, H. Philip; Martin, Stefan; Shaklan, Stuart B.; amini, rashied

    2018-01-01

    The Habitable Exoplanet Imaging Mission (HabEx) study is tasked by NASA to develop a scientifically compelling and technologically feasible exoplanet direct imaging mission concept, with extensive general astrophysics capabilities, for the 2020 Decadal Survey in Astrophysics. The baseline architecture of this space-based observatory concept encompasses an unobscured 4m diameter aperture telescope flying in formation with a 72-meter diameter starshade occulter. This large aperture, ultra-stable observatory concept extends and enhances upon the legacy of the Hubble Space Telescope by allowing us to probe even fainter objects and peer deeper into the Universe in the same ultraviolet, visible, and near infrared wavelengths, and gives us the capability, for the first time, to image and characterize potentially habitable, Earth-sized exoplanets orbiting nearby stars. Revolutionary direct imaging of exoplanets will be undertaken using a high-contrast coronagraph and a starshade imager. General astrophysics science will be undertaken with two world-class instruments – a wide-field workhorse camera for imaging and multi-object grism spectroscopy, and a multi-object, multi-resolution ultraviolet spectrograph. This poster outlines the baseline architecture of the HabEx flagship mission concept.

  7. Space based optical staring sensor LOS determination and calibration using GCPs observation

    NASA Astrophysics Data System (ADS)

    Chen, Jun; An, Wei; Deng, Xinpu; Yang, Jungang; Sha, Zhichao

    2016-10-01

    Line of sight (LOS) attitude determination and calibration is a key prerequisite for tracking and locating targets in space-based infrared (IR) surveillance systems (SBIRS), and the LOS determination and calibration of a staring sensor is one of the difficulties. This paper provides a novel methodology for removing staring sensor bias through the use of Ground Control Points (GCPs) detected in the background field of the sensor. Based on studying the imaging model and characteristics of the staring sensor of the geostationary Earth orbit (GEO) component of SBIRS, a real-time LOS attitude determination and calibration algorithm using landmark control points is proposed. The influential factors contributing to the staring sensor LOS attitude error (including thermal distortion error, assembly error, and so on) are modeled as an equivalent bias angle of the LOS attitude. By establishing the observation equation of the GCPs and the state transition equation of the bias angle, and using an extended Kalman filter (EKF), real-time estimation of the bias angle and high-precision sensor LOS attitude determination and calibration are achieved. The simulation results show that the precision and timeliness of the proposed algorithm meet the requirements of the target tracking and location process in a space-based infrared surveillance system.

  8. Closed-form solution for the Wigner phase-space distribution function for diffuse reflection and small-angle scattering in a random medium.

    PubMed

    Yura, H T; Thrane, L; Andersen, P E

    2000-12-01

    Within the paraxial approximation, a closed-form solution for the Wigner phase-space distribution function is derived for diffuse reflection and small-angle scattering in a random medium. This solution is based on the extended Huygens-Fresnel principle for the optical field, which is widely used in studies of wave propagation through random media. The results are general in that they apply to both an arbitrary small-angle volume scattering function and arbitrary (real) ABCD optical systems. Furthermore, they are valid in both the single- and multiple-scattering regimes. Some general features of the Wigner phase-space distribution function are discussed, and analytic results are obtained for various types of scattering functions in the asymptotic limit s ≫ 1, where s is the optical depth. In particular, explicit results are presented for optical coherence tomography (OCT) systems. On this basis, a novel way of creating OCT images based on measurements of the momentum width of the Wigner phase-space distribution is suggested, and the advantage over conventional OCT images is discussed. Because all previous published studies regarding the Wigner function are carried out in the transmission geometry, it is important to note that the extended Huygens-Fresnel principle and the ABCD matrix formalism may be used successfully to describe this geometry (within the paraxial approximation). Therefore for completeness we present in an appendix the general closed-form solution for the Wigner phase-space distribution function in ABCD paraxial optical systems for direct propagation through random media, and in a second appendix absorption effects are included.

  9. Comparison of centric and reverse-centric trajectories for highly accelerated three-dimensional saturation recovery cardiac perfusion imaging.

    PubMed

    Wang, Haonan; Bangerter, Neal K; Park, Daniel J; Adluru, Ganesh; Kholmovski, Eugene G; Xu, Jian; DiBella, Edward

    2015-10-01

    Highly undersampled three-dimensional (3D) saturation-recovery sequences are affected by k-space trajectory since the magnetization does not reach steady state during the acquisition and the slab excitation profile yields different flip angles in different slices. This study compares centric and reverse-centric 3D cardiac perfusion imaging. An undersampled (98 phase encodes) 3D ECG-gated saturation-recovery sequence that alternates centric and reverse-centric acquisitions each time frame was used to image phantoms and in vivo subjects. Flip angle variation across the slices was measured, and contrast with each trajectory was analyzed via Bloch simulation. Significant variations in flip angle were observed across slices, leading to larger signal variation across slices for the centric acquisition. In simulation, severe transient artifacts were observed when using the centric trajectory with higher flip angles, placing practical limits on the maximum flip angle used. The reverse-centric trajectory provided less contrast, but was more robust to flip angle variations. Both of the k-space trajectories can provide reasonable image quality. The centric trajectory can have higher CNR, but is more sensitive to flip angle variation. The reverse-centric trajectory is more robust to flip angle variation. © 2014 Wiley Periodicals, Inc.
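
    The two phase-encode orderings compared here can be illustrated with simple index generators (a minimal sketch, ignoring the second phase-encode dimension and all MR signal physics):

      import numpy as np

      def centric_order(n_encodes):
          """Centre line first, then alternating outwards: 0, +1, -1, +2, -2, ..."""
          offsets = np.arange(n_encodes)
          signed = np.where(offsets % 2 == 0, offsets // 2, -(offsets // 2 + 1))
          return n_encodes // 2 + signed

      def reverse_centric_order(n_encodes):
          """Edges first, centre of k-space acquired last."""
          return centric_order(n_encodes)[::-1]

      # e.g. centric_order(6) -> [3, 2, 4, 1, 5, 0]

    Because the centre of k-space dominates image contrast, the centric ordering fixes contrast early after the saturation pulse, whereas the reverse-centric ordering samples the centre after the magnetization has evolved further; this is the trade-off the study quantifies.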

  10. A scale space feature based registration technique for fusion of satellite imagery

    NASA Technical Reports Server (NTRS)

    Raghavan, Srini; Cromp, Robert F.; Campbell, William C.

    1997-01-01

    Feature based registration is one of the most reliable methods to register multi-sensor images (both active and passive imagery) since features are often more reliable than intensity or radiometric values. The only situation where a feature based approach will fail is when the scene is completely homogeneous or densely textured, in which case a combination of feature and intensity based methods may yield better results. In this paper, we present some preliminary results of testing our scale space feature based registration technique, a modified version of the feature based method developed earlier for classification of multi-sensor imagery. The proposed approach removes the sensitivity to parameter selection experienced in the earlier version, as explained later.

  11. Multi-modal molecular diffuse optical tomography system for small animal imaging

    PubMed Central

    Guggenheim, James A.; Basevi, Hector R. A.; Frampton, Jon; Styles, Iain B.; Dehghani, Hamid

    2013-01-01

    A multi-modal optical imaging system for quantitative 3D bioluminescence and functional diffuse imaging is presented, which has no moving parts and uses mirrors to provide multi-view tomographic data for image reconstruction. It is demonstrated that through the use of trans-illuminated spectral near infrared measurements and spectrally constrained tomographic reconstruction, recovered concentrations of absorbing agents can be used as prior knowledge for bioluminescence imaging within the visible spectrum. Additionally, the first use of a recently developed multi-view optical surface capture technique is shown and its application to model-based image reconstruction and free-space light modelling is demonstrated. The benefits of model-based tomographic image recovery as compared to 2D planar imaging are highlighted in a number of scenarios where the internal luminescence source is not visible or is confounding in 2D images. The results presented show that the luminescence tomographic imaging method produces 3D reconstructions of individual light sources within a mouse-sized solid phantom that are accurately localised to within 1.5 mm for a range of target locations and depths, indicating sensitivity and accurate imaging throughout the phantom volume. Additionally, the total reconstructed luminescence source intensity is consistent to within 15%, which is a dramatic improvement upon standard bioluminescence imaging. Finally, results from a heterogeneous phantom with an absorbing anomaly are presented, demonstrating the use and benefits of a multi-view, spectrally constrained coupled imaging system that provides accurate 3D luminescence images. PMID:24954977

  12. Automatic measurement of pennation angle and fascicle length of gastrocnemius muscles using real-time ultrasound imaging.

    PubMed

    Zhou, Guang-Quan; Chan, Phoebe; Zheng, Yong-Ping

    2015-03-01

    Muscle imaging is a promising field of research for understanding the biological and bioelectrical characteristics of muscles through the observation of muscle architectural change. Sonomyography (SMG) is a technique which can quantify the real-time architectural change of muscles under different contractions and motions with ultrasound imaging. The pennation angle and fascicle length are two crucial SMG parameters for understanding contraction mechanics at the muscle level, but they have to be manually detected on ultrasound images frame by frame. In this study, we proposed an automatic method to quantitatively identify the pennation angle and fascicle length of the gastrocnemius (GM) muscle based on multi-resolution analysis and line feature extraction, which could overcome the limitations of tedious and time-consuming manual measurement. The method starts by convolving the image with a Gabor wavelet specially designed to enhance the detection of line-like structures in GM ultrasound images. The resulting image is then used to detect the fascicles and aponeuroses for calculating the pennation angle and fascicle length, taking their distribution in the ultrasound image into consideration. The performance of this method was tested on computer-simulated images and experimental images obtained in vivo from normal subjects. Tests on synthetic images showed that the method could identify the fascicle orientation with an average error of less than 0.1°. The results of the in vivo experiment showed good agreement between the results obtained by the automatic and the manual measurements (r=0.94±0.03, p<0.001, and r=0.95±0.02, p<0.001). Furthermore, a significant correlation between the ankle angle and the pennation angle (r=0.89±0.05; p<0.001) and fascicle length (r=-0.90±0.04; p<0.001) was found for ankle plantar flexion. This study demonstrated that the proposed method was able to automatically measure the pennation angle and fascicle length from GM ultrasound images, which makes it feasible to investigate muscle-level mechanics more comprehensively in vivo. Copyright © 2014 Elsevier B.V. All rights reserved.
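
    A minimal sketch of the underlying idea, estimating the dominant fascicle orientation by sweeping a Gabor filter over candidate angles, is given below; the fixed filter frequency, the angle grid, and the use of scikit-image are assumptions for illustration, not the paper's multi-resolution pipeline:

      import numpy as np
      from skimage.filters import gabor

      def dominant_fascicle_angle(img, frequency=0.1, angles_deg=np.arange(0, 180, 2)):
          """Return the orientation (degrees) with the strongest Gabor response."""
          best_angle, best_energy = 0.0, -np.inf
          for angle in angles_deg:
              real, imag = gabor(img, frequency=frequency, theta=np.deg2rad(angle))
              energy = np.sum(real**2 + imag**2)        # total filter response energy
              if energy > best_energy:
                  best_angle, best_energy = angle, energy
          return best_angle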

  13. California: San Joaquin Valley

    Atmospheric Science Data Center

    2014-05-15

    ... This illustration features Multi-angle Imaging SpectroRadiometer ... quadrant is a map of haze amount determined from automated processing of the MISR imagery. Low amounts of haze are shown in blue, and a ...

  14. MISR High-Resolution, Cross-Track Winds for Hurricane Ida

    NASA Image and Video Library

    2009-11-10

    This image shows data acquired by JPL's Multi-angle Imaging SpectroRadiometer instrument onboard NASA's Terra satellite on Sunday, Nov. 8, 2009, as it passed over Hurricane Ida while the storm was situated between western Cuba and the Yucatan Peninsula.

  15. Utah: Salt Lake City

    Atmospheric Science Data Center

    2014-05-15

    Snow-Covered Peaks of the Wasatch and Uinta Mountains ... edge of the Rocky Mountains and eastern rim of the Great Basin. This early-winter image pair was acquired by the Multi-angle Imaging ...

  16. A multimodal image sensor system for identifying water stress in grapevines

    NASA Astrophysics Data System (ADS)

    Zhao, Yong; Zhang, Qin; Li, Minzan; Shao, Yongni; Zhou, Jianfeng; Sun, Hong

    2012-11-01

    Water stress is one of the most common limitations on fruit growth, and water is the most limiting resource for crop growth. In grapevines, as in other fruit crops, fruit quality benefits from a certain level of water deficit, which helps to balance vegetative and reproductive growth and the flow of carbohydrates to reproductive structures. In this paper, a multi-modal sensor system was designed to measure the reflectance signature of grape plant surfaces and identify different water stress levels. The system was equipped with one 3CCD camera (three channels in R, G, and IR) and can capture and analyze the grape canopy from its reflectance features to identify different water stress levels. The core technology of this multi-modal sensor system could further be used in a decision support system that combines multi-modal sensory data to improve plant stress detection and identify the causes of stress. The images were taken by the multi-modal sensor, which outputs images in near-infrared, green and red spectral bands. Based on the analysis of the acquired images, color features based on color space and reflectance features based on image processing methods were calculated. The results showed that these parameters have potential as water stress indicators. More experiments and analysis are needed to validate this conclusion.

  17. The Focal Surface of the JEM-EUSO Telescope

    NASA Technical Reports Server (NTRS)

    Kawasaki, Yoshiya

    2007-01-01

    The Extreme Universe Space Observatory onboard JEM/EP (JEM-EUSO) is a space mission to study extremely high-energy cosmic rays. The JEM-EUSO instrument is a wide-angle refractive telescope operating in the near-ultraviolet wavelength region to observe time-resolved atmospheric fluorescence images of extensive air showers from the International Space Station. The focal surface is a spherical curved surface, and its area amounts to about 4.5 square meters. The focal surface detector is covered with about 6,000 multi-anode photomultipliers (MAPMTs). It consists of Photo-Detector Modules (PDMs), each of which consists of 9 Elementary Cells (ECs); each EC contains 4 of the MAPMTs. Therefore, about 1,500 ECs, or about 160 PDMs, are arranged over the whole focal surface of JEM-EUSO. The EC is the basic unit of the front-end electronics, and the PDM is the basic unit of the data acquisition system.

  18. [Evaluation of the resolving power of different angles in MPR images of 16DAS-MDCT].

    PubMed

    Kimura, Mikio; Usui, Junshi; Nozawa, Takeo

    2007-03-20

    In this study, we evaluated the resolving power of three-dimensional (3D) multiplanar reformation (MPR) images at various angles using 16-data-acquisition-system multi-detector row computed tomography (16DAS-MDCT). We reconstructed the MPR images using data with a 0.75 mm slice thickness of the axial image in this examination. To evaluate resolving power, we used an original new phantom (RC phantom) that can be positioned at any slice angle in MPR images. We measured the modulation transfer function (MTF) using the pre-sampling MTF measurement method and the Fourier transform of image data from a square wave chart. The scan and image reconstruction conditions adopted in this study correspond to those we use for three-dimensional computed tomographic angiography (3D-CTA) examinations of the head in our hospital. The MTF of MPR images showed minimum values at slice angles parallel to the axial slice, and maximum values at the sagittal and coronal slice angles, which are parallel to the Z-axis. With an oblique MPR image, MTF did not change with angle changes in the oblique sagittal slice plane, but in the oblique coronal slice plane, MTF increased as the tilt angle increased from the axial plane toward the Z plane. As a result, we could evaluate the resolving power of a head 3D image by measuring the MTF of the axial image and the sagittal or coronal image.

  19. Two Perspectives on Forest Fire

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Multi-angle Imaging Spectroradiometer (MISR) images of smoke plumes from wildfires in western Montana acquired on August 14, 2000. A portion of Flathead Lake is visible at the top, and the Bitterroot Range traverses the images. The left view is from MISR's vertical-viewing (nadir) camera. The right view is from the camera that looks forward at a steep angle (60 degrees). The smoke location and extent are far more visible when seen at this highly oblique angle. However, vegetation is much darker in the forward view. A brown burn scar is located nearly in the exact center of the nadir image, while in the high-angle view it is shrouded in smoke. Also visible in the center and upper right of the images, and more obvious in the clearer nadir view, are checkerboard patterns on the surface associated with land ownership boundaries and logging. Compare these images with the high resolution infrared imagery captured nearby by Landsat 7 half an hour earlier. Images by NASA/GSFC/JPL, MISR Science Team.

  20. Optical/Infrared Properties of Atmospheric Aerosols with an In-Situ, Multi-Wavelength, Multichannel Nephelometer System.

    DTIC Science & Technology

    1985-04-01

    ... from any angle of approach. An angular impetus is imparted to the particle motion via the eight evenly spaced entrance vanes. As the particles ... measurement cycle. [The remainder of this excerpt is OCR residue from a figure: labels for the vane assembly, protective housing, aerodynamic flow deflector, outer tube, and the Wedding PM-10 inlet (Figure 3).]

  1. A brain MRI bias field correction method created in the Gaussian multi-scale space

    NASA Astrophysics Data System (ADS)

    Chen, Mingsheng; Qin, Mingxin

    2017-07-01

    A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to subsequent image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by convolving the inhomogeneous MR image with a two-dimensional Gaussian function. In the multi-scale Gaussian space, the method retrieves the image details from the difference between the original image and the convolved image. Then, it obtains an image whose inhomogeneity is eliminated by the weighted sum of the image details in each layer of the space. Next, the bias-field-corrected MR image is retrieved after a gamma (γ) correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 MRI and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated the superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
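
    A minimal sketch of the multi-scale idea described above is shown below; the smoothing scales, equal layer weights, and gamma value are illustrative assumptions rather than the parameters derived in the paper:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def multiscale_bias_correct(img, sigmas=(4, 8, 16, 32), gamma=0.8):
          """Remove slowly varying bias by summing per-scale detail images, then gamma-correct."""
          img = np.asarray(img, dtype=float)
          details = [img - gaussian_filter(img, s) for s in sigmas]  # detail recovered at each scale
          corrected = sum(details) / len(sigmas)                     # equal-weight sum of details
          corrected -= corrected.min()
          corrected /= corrected.max() + 1e-9                        # normalise to [0, 1]
          return corrected ** gamma                                  # contrast/brightness adjustment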

  2. A multi-directional backlight for a wide-angle, glasses-free three-dimensional display.

    PubMed

    Fattal, David; Peng, Zhen; Tran, Tho; Vo, Sonny; Fiorentino, Marco; Brug, Jim; Beausoleil, Raymond G

    2013-03-21

    Multiview three-dimensional (3D) displays can project the correct perspectives of a 3D image in many spatial directions simultaneously. They provide a 3D stereoscopic experience to many viewers at the same time with full motion parallax and do not require special glasses or eye tracking. None of the leading multiview 3D solutions is particularly well suited to mobile devices (watches, mobile phones or tablets), which require the combination of a thin, portable form factor, a high spatial resolution and a wide full-parallax view zone (for short viewing distance from potentially steep angles). Here we introduce a multi-directional diffractive backlight technology that permits the rendering of high-resolution, full-parallax 3D images in a very wide view zone (up to 180 degrees in principle) at an observation distance of up to a metre. The key to our design is a guided-wave illumination technique based on light-emitting diodes that produces wide-angle multiview images in colour from a thin planar transparent lightguide. Pixels associated with different views or colours are spatially multiplexed and can be independently addressed and modulated at video rate using an external shutter plane. To illustrate the capabilities of this technology, we use simple ink masks or a high-resolution commercial liquid-crystal display unit to demonstrate passive and active (30 frames per second) modulation of a 64-view backlight, producing 3D images with a spatial resolution of 88 pixels per inch and full-motion parallax in an unprecedented view zone of 90 degrees. We also present several transparent hand-held prototypes showing animated sequences of up to six different 200-view images at a resolution of 127 pixels per inch.

  3. Development of the MAMA Detectors for the Hubble Space Telescope Imaging Spectrograph

    NASA Technical Reports Server (NTRS)

    Timothy, J. Gethyn

    1997-01-01

    The development of the Multi-Anode Microchannel Array (MAMA) detector systems started in the early 1970's in order to produce multi-element detector arrays for use in spectrographs for solar studies from the Skylab-B mission. Development of the MAMA detectors for spectrographs on the Hubble Space Telescope (HST) began in the late 1970's, and reached its culmination with the successful installation of the Space Telescope Imaging Spectrograph (STIS) on the second HST servicing mission (STS-82 launched 11 February 1997). Under NASA Contract NAS5-29389 from December 1986 through June 1994 we supported the development of the MAMA detectors for STIS, including complementary sounding rocket and ground-based research programs. This final report describes the results of the MAMA detector development program for STIS.

  4. A fuzzy feature fusion method for auto-segmentation of gliomas with multi-modality diffusion and perfusion magnetic resonance images in radiotherapy.

    PubMed

    Guo, Lu; Wang, Ping; Sun, Ranran; Yang, Chengwen; Zhang, Ning; Guo, Yu; Feng, Yuanming

    2018-02-19

    The diffusion and perfusion magnetic resonance (MR) images can provide functional information about the tumour and enable more sensitive detection of the tumour extent. We aimed to develop a fuzzy feature fusion method for auto-segmentation of gliomas in radiotherapy planning using multi-parametric functional MR images including the apparent diffusion coefficient (ADC), fractional anisotropy (FA) and relative cerebral blood volume (rCBV). For each functional modality, one histogram-based fuzzy model was created to transform the image volume into a fuzzy feature space. Based on the fused result of the three fuzzy feature spaces, regions with a high possibility of belonging to tumour were generated automatically. The auto-segmentations of tumour in structural MR images were added to the final auto-segmented gross tumour volume (GTV). For evaluation, one radiation oncologist delineated GTVs for nine patients with all modalities. Comparisons between manually delineated and auto-segmented GTVs showed that the mean volume difference was 8.69% (±5.62%); the mean Dice's similarity coefficient (DSC) was 0.88 (±0.02); and the mean sensitivity and specificity of the auto-segmentation were 0.87 (±0.04) and 0.98 (±0.01), respectively. High accuracy and efficiency can be achieved with the new method, which shows the potential of utilizing functional multi-parametric MR images for target definition in precision radiation treatment planning for patients with gliomas.
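
    An illustrative sketch of the general idea, mapping each functional map into a fuzzy "tumour-likeness" space and fusing the three spaces, is given below; the sigmoid membership centred on a histogram percentile and the averaging fusion rule are stand-in assumptions, not the specific fuzzy models and fusion rule of the paper:

      import numpy as np

      def fuzzy_membership(volume, percentile=90, softness=None):
          """Map voxel intensities to [0, 1] with a sigmoid around a histogram percentile."""
          centre = np.percentile(volume, percentile)
          softness = softness or 0.1 * (volume.max() - volume.min() + 1e-9)
          return 1.0 / (1.0 + np.exp(-(volume - centre) / softness))

      def fuse_fuzzy_spaces(adc, fa, rcbv, threshold=0.6):
          """Average the three fuzzy feature spaces and threshold to a candidate tumour mask."""
          fused = (fuzzy_membership(adc) + fuzzy_membership(fa) + fuzzy_membership(rcbv)) / 3.0
          return fused >= threshold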

  5. Design considerations for a C-shaped PET system, dedicated to small animal brain imaging, using GATE Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Efthimiou, N.; Papadimitroulas, P.; Kostou, T.; Loudos, G.

    2015-09-01

    Commercial clinical and preclinical PET scanners rely on a full cylindrical geometry for whole-body scans as well as for dedicated organs. In this study we propose the construction of a low-cost dual-head C-shaped PET system dedicated to small animal brain imaging. Monte Carlo simulation studies were performed using the GATE toolkit to evaluate the optimum design in terms of sensitivity, distortions in the FOV and spatial resolution. The PET model is based on SiPMs and BGO pixelated arrays. Four different configurations, with C-angles of 0°, 15°, 30° and 45° within the modules, were considered. Geometrical phantoms were used for the evaluation process. STIR software, extended by an efficient multi-threaded ray tracing technique, was used for the image reconstruction. The algorithm automatically adjusts the size of the FOV according to the shape of the detector's geometry. The results showed an improvement in sensitivity of ∼15% for the 45° C-angle compared to the 0° case. The spatial resolution was found to be 2 mm for the 45° C-angle.

  6. Hurricane Lili

    Atmospheric Science Data Center

    2014-05-15

    Hurricane Lili Heads for Louisiana Landfall ... Characteristics of a strengthening Category 3 Hurricane Lili are apparent in these images from the Multi-angle Imaging ... (MISR), including a well-developed clearing at the hurricane eye. When these views were acquired on October 2, 2002, Lili was ...

  7. Mystery #21 Answer

    Atmospheric Science Data Center

    2013-04-22

    MISR Mystery Image Quiz #21: Actinoform Clouds ... This mystery concerns a particular type of cloud, one example of which was imaged by the Multi-angle Imaging SpectroRadiometer (MISR) ... ) These clouds are commonly tracked using propeller-driven research aircraft. Answer: C is True. The weather satellite, TIROS ...

  8. Attenuation of multiples in image space

    NASA Astrophysics Data System (ADS)

    Alvarez, Gabriel F.

    In complex subsurface areas, attenuation of 3D specular and diffracted multiples in data space is difficult and inaccurate. In those areas, image space is an attractive alternative. There are several reasons: (1) migration increases the signal-to-noise ratio of the data; (2) primaries are mapped to coherent events in Subsurface Offset Domain Common Image Gathers (SODCIGs) or Angle Domain Common Image Gathers (ADCIGs); (3) image space is regular and smaller; (4) attenuating the multiples in data space leaves holes in the frequency-Wavenumber space that generate artifacts after migration. I develop a new equation for the residual moveout of specular multiples in ADCIGs and use it for the kernel of an apex-shifted Radon transform to focus and separate the primaries from specular and diffracted multiples. Because of small amplitude, phase and kinematic errors in the multiple estimate, we need adaptive matching and subtraction to estimate the primaries. I pose this problem as an iterative least-squares inversion that simultaneously matches the estimates of primaries and multiples to the data. Standard methods match only the estimate of the multiples. I demonstrate with real and synthetic data that the method produces primaries and multiples with little cross-talk. In 3D, the multiples exhibit residual moveout in SODCIGs in in-line and cross-line offsets. They map away from zero subsurface offsets when migrated with the faster velocity of the primaries. In ADCIGs the residual moveout of the primaries as a function of the aperture angle, for a given azimuth, is flat for those angles that illuminate the reflector. The multiples have residual moveout towards increasing depth for increasing aperture angles at all azimuths. As a function of azimuth, the primaries have better azimuth resolution than the multiples at larger aperture angles. I show, with a real 3D dataset, that even below salt, where illumination is poor, the multiples are well attenuated in ADCIGs with the new Radon transform in planes of azimuth-stacked ADCIGs. The angle stacks of the estimated primaries show little residual multiple energy.

  9. Submarine harbor navigation using image data

    NASA Astrophysics Data System (ADS)

    Stubberud, Stephen C.; Kramer, Kathleen A.

    2017-01-01

    The process of ingress and egress of a United States Navy submarine is a human-intensive process that requires numerous individuals to monitor locations and watch for hazards. Sailors pass vocal information to the bridge, where it is processed manually. There is interest in using video imaging of the periscope view to more automatically provide navigation within harbors and other points of ingress and egress. In this paper, video-based navigation is examined as a target-tracking problem. While some image-processing methods claim to provide range information, the moving-platform problem and weather concerns, such as fog, reduce the effectiveness of these range estimates. The video-navigation problem then becomes an angle-only tracking problem. Angle-only tracking is known to be fraught with difficulties, due to the fact that the unobservable space is not the null space. When using a Kalman filter estimator to perform the tracking, significant errors can arise which could endanger the submarine. This work analyzes the performance of the Kalman filter when angle-only measurements are used to provide the target tracks, and addresses estimation unobservability and the minimal set of requirements that are needed to address it in this complex but real-world problem. Three major issues are addressed: knowledge of the locations of navigation beacons/landmarks, the minimal number of these beacons needed to maintain the course, and the update rates of the angles to the landmarks as the periscope rotates and landmarks become obscured due to blockage and weather. The goal is to address the problem of navigation to and from the docks, while maintaining the traverse of the harbor channel according to maritime rules, relying solely on the image-based data. The minimal number of beacons is considered. For this effort, the image correlation from frame to frame is assumed to be achieved perfectly. Variation in the update rates and the dropping of data due to rotation and obscuration are considered. The analysis is based on a simple straight-line channel harbor entry to the dock, similar to a submarine entering the submarine port in San Diego.
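
    For context, the measurement update of a bearing-only (angle-only) extended Kalman filter against one surveyed landmark is sketched below; the 2D constant-velocity state, the noise value, and the angle convention are illustrative assumptions, not the configuration analysed in the paper:

      import numpy as np

      def ekf_bearing_update(x, P, bearing_meas, landmark, R=np.deg2rad(0.5)**2):
          """x = [px, py, vx, vy]; landmark = (lx, ly); bearing measured CCW from the x-axis."""
          dx, dy = landmark[0] - x[0], landmark[1] - x[1]
          r2 = dx * dx + dy * dy
          h = np.arctan2(dy, dx)                              # predicted bearing to the landmark
          H = np.array([[dy / r2, -dx / r2, 0.0, 0.0]])       # Jacobian of the bearing wrt the state
          innov = np.arctan2(np.sin(bearing_meas - h), np.cos(bearing_meas - h))  # wrapped residual
          S = H @ P @ H.T + R
          K = P @ H.T / S                                     # Kalman gain (S is 1x1)
          x_new = x + (K * innov).ravel()
          P_new = (np.eye(4) - K @ H) @ P
          return x_new, P_new

    Each bearing constrains the state along only one direction, which is why the number of visible landmarks and the rate at which their angles can be re-measured dominate the observability of the estimate, as the paper's analysis explores.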

  10. A digital 3D atlas of the marmoset brain based on multi-modal MRI.

    PubMed

    Liu, Cirong; Ye, Frank Q; Yen, Cecil Chern-Chyi; Newman, John D; Glen, Daniel; Leopold, David A; Silva, Afonso C

    2018-04-01

    The common marmoset (Callithrix jacchus) is a New-World monkey of growing interest in neuroscience. Magnetic resonance imaging (MRI) is an essential tool to unveil the anatomical and functional organization of the marmoset brain. To facilitate identification of regions of interest, it is desirable to register MR images to an atlas of the brain. However, currently available atlases of the marmoset brain are mainly based on 2D histological data, which are difficult to apply to 3D imaging techniques. Here, we constructed a 3D digital atlas based on high-resolution ex-vivo MRI images, including magnetization transfer ratio (a T1-like contrast), T2w images, and multi-shell diffusion MRI. Based on the multi-modal MRI images, we manually delineated 54 cortical areas and 16 subcortical regions on one hemisphere of the brain (the core version). The 54 cortical areas were merged into 13 larger cortical regions according to their locations to yield a coarse version of the atlas, and also parcellated into 106 sub-regions using a connectivity-based parcellation method to produce a refined atlas. Finally, we compared the new atlas set with existing histology atlases and demonstrated its applications in connectome studies, and in resting state and stimulus-based fMRI. The atlas set has been integrated into the widely-distributed neuroimaging data analysis software AFNI and SUMA, providing a readily usable multi-modal template space with multi-level anatomical labels (including labels from the Paxinos atlas) that can facilitate various neuroimaging studies of marmosets. Published by Elsevier Inc.

  11. Mickey Mouse Spotted on Mercury!

    NASA Image and Video Library

    2012-06-15

    NASA image acquired: June 03, 2012. This scene is to the northwest of the recently named crater Magritte, in Mercury's south. The image is not map projected; the larger crater actually sits to the north of the two smaller ones. The shadowing helps define the striking "Mickey Mouse" resemblance, created by the accumulation of craters over Mercury's long geologic history. This image was acquired as part of MDIS's high-incidence-angle base map. The high-incidence-angle base map is a major mapping activity in MESSENGER's extended mission and complements the surface morphology base map of MESSENGER's primary mission, which was acquired under generally more moderate incidence angles. High incidence angles, achieved when the Sun is near the horizon, result in long shadows that accentuate the small-scale topography of geologic features. The high-incidence-angle base map is being acquired with an average resolution of 200 meters/pixel. The MESSENGER spacecraft is the first ever to orbit the planet Mercury, and the spacecraft's seven scientific instruments and radio science investigation are unraveling the history and evolution of the Solar System's innermost planet. During the one-year primary mission, MESSENGER acquired 88,746 images and extensive other data sets. MESSENGER is now in a yearlong extended mission, during which plans call for the acquisition of more than 80,000 additional images to support MESSENGER's science goals. Credit: NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington

  12. Ash from Kilauea Eruption Viewed by NASA's MISR

    Atmospheric Science Data Center

    2018-06-07

    Ash from Kilauea Eruption Viewed by NASA's MISR. Ash ... Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite captured this view of the island as it passed overhead. ...

  13. Mystery #16 Answer

    Atmospheric Science Data Center

    2013-04-22

    ... were acquired by the Multi-angle Imaging SpectroRadiometer (MISR) during October and November 2003. The two images represent about 310 ... obtain calcium from the seawater and carbon dioxide from cell respiration, and bring these products into the internal tissues of the ...

  14. Wide Angle Movie

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This brief movie illustrates the passage of the Moon through the Saturn-bound Cassini spacecraft's wide-angle camera field of view as the spacecraft passed by the Moon on the way to its closest approach with Earth on August 17, 1999. From beginning to end of the sequence, 25 wide-angle images (with a spatial image scale of about 14 miles, or about 23 kilometers, per pixel) were taken over the course of 7 and 1/2 minutes through a series of narrow and broadband spectral filters and polarizers, ranging from the violet to the near-infrared regions of the spectrum, to calibrate the spectral response of the wide-angle camera. The exposure times range from 5 milliseconds to 1.5 seconds. Two of the exposures were smeared and have been discarded and replaced with nearby images to make a smooth movie sequence. All images were scaled so that the brightness of Crisium basin, the dark circular region in the upper right, is approximately the same in every image. The imaging data were processed and released by the Cassini Imaging Central Laboratory for Operations (CICLOPS) at the University of Arizona's Lunar and Planetary Laboratory, Tucson, AZ.

    Photo Credit: NASA/JPL/Cassini Imaging Team/University of Arizona

    Cassini, launched in 1997, is a joint mission of NASA, the European Space Agency and the Italian Space Agency. The mission is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Space Science, Washington, DC. JPL is a division of the California Institute of Technology, Pasadena, CA.

  15. Winter precipitation particle size distribution measurement by Multi-Angle Snowflake Camera

    NASA Astrophysics Data System (ADS)

    Huang, Gwo-Jong; Kleinkort, Cameron; Bringi, V. N.; Notaroš, Branislav M.

    2017-12-01

    From the radar meteorology viewpoint, the most important properties for quantitative precipitation estimation of winter events are the 3D shape, size, and mass of precipitation particles, as well as the particle size distribution (PSD). In order to measure these properties precisely, optical instruments may be the best choice. The Multi-Angle Snowflake Camera (MASC) is a relatively new instrument equipped with three high-resolution cameras that capture winter precipitation particle images from three non-parallel angles, in addition to measuring the particle fall speed using two pairs of infrared motion sensors. However, results from the MASC have so far usually been presented as monthly or seasonal statistics, with particle sizes given as histograms; no previous studies have used the MASC for a single-storm study, and none have used the MASC to measure the PSD. We propose a methodology for obtaining the winter precipitation PSD measured by the MASC, and present and discuss the development, implementation, and application of the new technique for PSD computation based on MASC images. Overall, this is the first study of the MASC-based PSD. We present PSD MASC experiments and results for segments of two snow events to demonstrate the performance of our PSD algorithm. The results show that the self-consistency of the MASC-measured single-camera PSDs is good. To cross-validate the PSD measurements, we compare the MASC mean PSD (averaged over the three cameras) with a collocated 2D Video Disdrometer, and observe good agreement between the two sets of results.
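
    The abstract does not give the PSD formula itself; the sketch below shows one common disdrometer-style way to turn per-particle maximum dimensions and fall speeds into a size distribution N(D), assuming an effective sensing area and observation interval. Function names and all numbers are illustrative and are not the MASC-specific algorithm.

```python
import numpy as np

def psd_from_particles(diameters_mm, fallspeeds_ms, area_m2, dt_s, bin_edges_mm):
    """Disdrometer-style PSD estimate N(D) in m^-3 mm^-1.  Each particle
    contributes 1 / (A * v * dt * dD): the inverse of the air volume it
    samples during the interval, per unit size-bin width."""
    widths = np.diff(bin_edges_mm)
    idx = np.digitize(diameters_mm, bin_edges_mm) - 1
    psd = np.zeros(len(widths))
    for k, v in zip(idx, np.asarray(fallspeeds_ms, float)):
        if 0 <= k < len(widths) and v > 0:
            psd[k] += 1.0 / (area_m2 * v * dt_s * widths[k])
    return psd

# Illustrative usage: four made-up particles observed over 60 seconds
edges = np.arange(0.5, 10.5, 0.5)                        # size bins in mm
n_d = psd_from_particles([1.2, 2.4, 2.6, 4.1], [1.0, 1.1, 0.9, 1.3],
                         area_m2=0.01, dt_s=60.0, bin_edges_mm=edges)
```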

  16. Ground Optical Signal Processing Architecture for Contributing SSA Space Based Sensor Data

    NASA Astrophysics Data System (ADS)

    Koblick, D.; Klug, M.; Goldsmith, A.; Flewelling, B.; Jah, M.; Shanks, J.; Piña, R.

    2014-09-01

    The main objective of the DARPA program Orbit Outlook (O^2) is to improve the metric tracking and detection performance of the Space Surveillance Network (SSN) by adding a diverse low-cost network of contributing sensors to the Space Situational Awareness (SSA) mission. In order to accomplish this objective, not only must a sensor be in constant communication with a planning and scheduling system to process tasking requests, there must also be an underlying framework to provide useful data products, such as angles-only measurements. Existing optical signal processing implementations such as the Optical Processing Architecture at Lincoln (OPAL) are capable of converting mission data collections to angles-only observations, but may be difficult for many users to obtain, support, and customize for low-cost missions and demonstration programs. The Ground Optical Signal Processing Architecture (GOSPA) ingests raw imagery and telemetry data from a space-based electro-optical sensor and performs a background removal process to remove anomalous pixels, interpolate over bad pixels, and suppress dominant temporal noise. After background removal, the streak end points and target centroids are located using a corner detection algorithm developed by the Air Force Research Laboratory. These identified streak locations are then fused with the corresponding spacecraft telemetry data to determine the Right Ascension and Declination measurements with respect to time. To demonstrate the performance of GOSPA, non-rate tracking collections against a satellite in Geosynchronous Orbit are simulated from a visible optical imaging sensor in a polar Low Earth Orbit. Stars, noise, and bad pixels are added to the simulated images based on look angles and sensor parameters. These collections are run through the GOSPA framework to provide angles-only measurements to the Air Force Research Laboratory Constrained Admissible Region Multiple Hypothesis Filter (CAR-MHF), in which an Initial Orbit Determination is performed and compared to truth data.

  17. Numerical analysis of deposition frequency for successive droplets coalescence dynamics

    NASA Astrophysics Data System (ADS)

    Cheng, Xiaoding; Zhu, Yunlong; Zhang, Lei; Zhang, Dingyi; Ku, Tao

    2018-04-01

    A pseudopotential based multi-relaxation-time lattice Boltzmann model is employed to investigate the dynamic behaviors of successive droplets' impact and coalescence on a solid surface. The effects of deposition frequency on the morphology of the formed line are investigated with a zero receding contact angle by analyzing the droplet-to-droplet coalescence dynamics. Two collision modes (in-phase mode and out-of-phase mode) between the pre-deposited bead and the subsequent droplet are identified depending on the deposition frequency. A uniform line can be obtained at the optimal droplet spacing in the in-phase mode (Δt* < 1.875). However, a scalloped line pattern is formed in the out-of-phase mode (Δt* > 1.875). It is found that decreasing the droplet spacing or advancing contact angle can improve the smoothness of line in the out-of-phase mode. Furthermore, the effects of deposition frequency on the morphology of the formed lines are validated to be applicable to cases with a finite receding contact angle.

  18. Pore-scale Simulation and Imaging of Multi-phase Flow and Transport in Porous Media (Invited)

    NASA Astrophysics Data System (ADS)

    Crawshaw, J.; Welch, N.; Daher, I.; Yang, J.; Shah, S.; Grey, F.; Boek, E.

    2013-12-01

    We combine multi-scale imaging and computer simulation of multi-phase flow and reactive transport in rock samples to enhance our fundamental understanding of long-term CO2 storage in rock formations. The imaging techniques include Confocal Laser Scanning Microscopy (CLSM), micro-CT and medical CT scanning, with spatial resolutions ranging from sub-micron to mm, respectively. First, we report a new sample preparation technique to study micro-porosity in carbonates using CLSM in 3 dimensions. Second, we use micro-CT scanning to generate high-resolution 3D pore space images of carbonate and cap rock samples. In addition, we employ micro-CT to image the processes of evaporation in fractures and cap rock degradation due to exposure to CO2 flow. Third, we use medical CT scanning to image spontaneous imbibition in carbonate rock samples. Our imaging studies are complemented by computer simulations of multi-phase flow and transport, using the 3D pore space images obtained from the scanning experiments. We have developed a massively parallel lattice-Boltzmann (LB) code to calculate the single-phase flow field in these pore space images. The resulting flow fields are then used to calculate hydrodynamic dispersion using a novel scheme to predict probability distributions for molecular displacements using the LB method and a streamline algorithm, modified for optimal solid boundary conditions. We calculate solute transport on pore-space images of rock cores with an increasing degree of heterogeneity: a bead pack, Bentheimer sandstone and Portland carbonate. We observe that for homogeneous rock samples, such as bead packs, the displacement distribution remains Gaussian as time increases. In the more heterogeneous rocks, on the other hand, the displacement distribution develops a stagnant part. We observe that the fraction of trapped solute increases from the beadpack (0 %) to Bentheimer sandstone (1.5 %) to Portland carbonate (8.1 %), in excellent agreement with PFG-NMR experiments. We then use our preferred multi-phase model to directly calculate flow in pore space images of two different sandstones and observe excellent agreement with experimental relative permeabilities. We also calculate cluster size distributions in good agreement with experimental studies. Our analysis shows that the simulations are able to predict both multi-phase flow and transport properties directly on large 3D pore space images of real rocks. [Figure: pore space images (left) and velocity distributions (right); Yang and Boek, 2013]

  19. A Herschel resolved debris disc around HD 105211

    NASA Astrophysics Data System (ADS)

    Hengst, S.; Marshall, J. P.; Horner, J.; Marsden, S. C.

    2017-07-01

    Debris discs are the dusty aftermath of planet formation processes around main-sequence stars. Analysis of these discs is often hampered by the absence of any meaningful constraint on the location and spatial extent of the disc around its host star. Multi-wavelength, resolved imaging ameliorates the degeneracies inherent in the modelling process, making such data indispensable in the interpretation of these systems. The Herschel Space Observatory observed HD 105211 (η Cru, HIP 59072) with its Photodetector Array Camera and Spectrometer (PACS) instrument in three far-infrared wavebands (70, 100 and 160 μm). Here we combine these data with ancillary photometry spanning optical to far-infrared wavelengths in order to determine the extent of the circumstellar disc. The spectral energy distribution and multi-wavelength resolved emission of the disc are simultaneously modelled using radiative transfer and imaging codes. Analysis of the Herschel/PACS images reveals the presence of extended structure in all three PACS images. From a radiative transfer model we derive a disc extent of 87.0 ± 2.5 au, with an inclination of 70.7 ± 2.2° to the line of sight and a position angle of 30.1 ± 0.5°. Deconvolution of the Herschel images reveals a potential asymmetry, but this remains uncertain, as a combined radiative transfer and image analysis replicates both the structure and the emission of the disc using a single axisymmetric annulus.

  20. Evaluation of SIR-A space radar for geologic interpretation: United States, Panama, Colombia, and New Guinea

    NASA Technical Reports Server (NTRS)

    Macdonald, H.; Waite, W. P.; Kaupp, V. H.; Bridges, L. C.; Storm, M.

    1983-01-01

    Comparisons between LANDSAT MSS imagery, and aircraft and space radar imagery from different geologic environments in the United States, Panama, Colombia, and New Guinea demonstrate the interdependence of radar system geometry and terrain configuration for optimum retrieval of geologic information. Illustrations suggest that in the case of space radars (SIR-A in particular), the ability to acquire multiple look-angle/look-direction radar images of a given area is more valuable for landform mapping than further improvements in spatial resolution. Radar look-angle is concluded to be one of the most important system parameters of a space radar designed to be used for geologic reconnaissance mapping. The optimum set of system parameters must be determined for imaging different classes of landform features and tailoring the look-angle to local topography.

  1. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    PubMed Central

    Lu, Yu; Wang, Keyi; Fan, Gongshu

    2016-01-01

    A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensor during radiometric response calibration, to eliminate the influence of the focusing effect on uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results, and is used to blend images so that panoramas reflect the objective luminance more faithfully. This compensates for the limitation of stitching approaches that make images look realistic only through smoothing. The dynamic range limitation can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens. The dynamic range is expanded by 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857
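
    As a rough illustration of how such calibration results are typically applied before stitching, the sketch below performs dark-frame subtraction, optional response linearization via a measured lookup table, and flat-field (vignetting) division. The function and its arguments are hypothetical and not the authors' pipeline.

```python
import numpy as np

def correct_sensor_image(raw, dark, flat, response_lut=None):
    """Minimal photometric correction sketch: subtract the dark frame,
    optionally linearize the response with a lookup table measured from
    an integrating-sphere calibration, then divide by a normalized flat
    field to remove vignetting so stitched tiles share a common scale."""
    img = raw.astype(float) - dark.astype(float)
    if response_lut is not None:
        # response_lut: (N, 2) array mapping raw digital numbers (increasing)
        # to linearized radiance units
        img = np.interp(img, response_lut[:, 0], response_lut[:, 1])
    flat_norm = flat.astype(float) / np.median(flat)
    return img / np.clip(flat_norm, 1e-6, None)
```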

  2. Mystery #21

    Atmospheric Science Data Center

    2013-04-22

    MISR Mystery Image Quiz #21   ... This mystery concerns a particular type of cloud, one example of which was imaged by the Multi-angle Imaging SpectroRadiometer (MISR) ... ) These clouds are commonly tracked using propeller-driven research aircraft. 3.   Two of these statements are false. Which one is ...

  3. Active illuminated space object imaging and tracking simulation

    NASA Astrophysics Data System (ADS)

    Yue, Yufang; Xie, Xiaogang; Luo, Wen; Zhang, Feizhou; An, Jianzhu

    2016-10-01

    Simulation of ground-based optical imaging of a space target in orbit, and of its extraction under laser illumination conditions, is discussed. Based on the orbit and corresponding attitude of a satellite, a 3D image rendering of it was built. A general simulation platform was developed, adaptable to different 3D satellite models and to the relative positions of the satellite and the ground-based detector system. A unified parallel projection technology is proposed in this paper. Furthermore, we note that the random optical distribution under laser illumination is a challenge for object discrimination, the high randomness of the active laser illumination speckles being the primary factor. The combined effects of a multi-frame accumulation process and several tracking methods, such as Meanshift tracking, contour poid, and filter deconvolution, were simulated. Comparison of the results illustrates that the union of multi-frame accumulation and contour poid is preferable for laser actively illuminated images, providing high tracking precision and stability for multiple object attitudes.

  4. Multi-Sensor Approach for Assessing the Taiga-Tundra Boundary

    NASA Technical Reports Server (NTRS)

    Ranson, K. J.; Sun, G.; Kharuk, V. I.; Kovacs, K.

    2003-01-01

    Monitoring the dynamics of the tundra-taiga boundary is critical for our understanding of the causes and consequences of the changes in this area. Because of its inaccessibility, remote sensing data will play an important role. In this study we examined the use of several remote sensing techniques for identifying the existing tundra-taiga ecotone, including Landsat, MISR and RADARSAT data. High-resolution IKONOS images were used for local ground truth. It was found that on Landsat ETM+ summer images, reflectance from tundra and taiga is similar at band 4 (NIR), but differs at other bands, such as the red and MIR bands. When the incidence angle is small, C-band HH-pol backscattering coefficients from both tundra and taiga are relatively high. The backscattering from tundra targets decreases faster than that from taiga targets as the incidence angle increases, because the tundra targets appear smoother than taiga. Because of the shading effect of the vegetation, the MISR data, both the multi-spectral data at nadir viewing and the multi-angle data in the red and NIR bands, clearly show the transition zone.

  5. [Carotid plaque assessment using inversion recovery T1 weighted-3 dimensions variable refocus flip angle turbo spin echo sampling perfection with application optimized contrast using different angle evolutions black blood imaging].

    PubMed

    Inoue, Yuji; Yoneyama, Masami; Nakamura, Masanobu; Ozaki, Satoshi; Ito, Kenjiro; Hiura, Mikio

    2012-01-01

    Vulnerable plaque can induce ischemic symptoms, and magnetic resonance imaging of the carotid artery is valuable for detecting such plaque. The magnetization prepared rapid acquisition with gradient echo (MPRAGE) method can detect hemorrhagic vulnerable plaque as a high-intensity signal; however, blood flow is not sufficiently masked by this method. Sufficient plaque contrast in T1-weighted images (T1WI) could not be obtained with black blood imaging (BBI) by the sampling perfection with application optimized contrast using different angle evolutions (SPACE) turbo spin echo (TSE) method. In addition, artifacts caused by slow flow are a problem. Considering these limitations of plaque imaging, we examined a modified BBI inversion recovery (IR)-SPACE sequence, in which an IR pulse was added to the SPACE method so that the T1WI plaque contrast was optimized, and we investigated the application of this method to plaque imaging. In phantom imaging, clear T1WI plaque contrast was obtained by choosing an appropriate inversion time (TI) for the corresponding repetition time. In clinical cases, blood flow was sufficiently masked by the IR-SPACE method, and plaque images were obtained as clearly as with the MPRAGE method. Since BBI with the IR-SPACE method derives from both the IR pulse and the flow-void effect, this method can achieve a reliable blood flow masking effect. The present study suggests that the SPACE method may be applicable to estimating the properties of carotid artery plaque.

  6. Western United States and Southwestern Canada

    NASA Technical Reports Server (NTRS)

    2003-01-01

    This natural-color image from the Multi-angle Imaging SpectroRadiometer (MISR) captures the beauty of the western United States and Canada. Data from 45 swaths from MISR's vertical-viewing (nadir) camera were combined to create this cloud-free mosaic. The image extends from 48°N, 128°W in the northwest to 32°N, 104°W in the southeast, and has been draped over a shaded relief Digital Terrain Elevation Model from the United States Geological Survey.

    The image area includes much of British Columbia, Alberta and Saskatchewan in the north, and extends southward to California, Arizona and New Mexico. The snow-capped Rocky Mountains are a prominent feature extending through British Columbia, Montana, Wyoming, Colorado and New Mexico. Many major rivers originate in the Columbia Plateau region of Washington, Oregon and Idaho. The Colorado Plateau region is characterized by the vibrant red-colored rocks of the Painted Desert in Utah and Arizona; in New Mexico, White Sands National Park is the large white feature in the southeast corner of the image, with the Malpais lava flow just to its north. The southwest is dominated by the Mojave Desert of California and Nevada, California's San Joaquin Valley, the Los Angeles basin and the Pacific Ocean.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously from pole to pole, and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. This data product was generated from a portion of the imagery acquired during 2000-2002. The panels utilize data from blocks 45 to 65 within World Reference System-2 paths 31 to 53.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  7. Southern Quebec in Late Winter

    NASA Technical Reports Server (NTRS)

    2002-01-01

    These images of Canada's Quebec province were acquired by the Multi-angle Imaging SpectroRadiometer on March 4, 2001. The region's forests are a mixture of coniferous and hardwood trees, and 'sugar-shack' festivities are held at this time of year to celebrate the beginning of maple syrup production. The large river visible in the images is the northeast-flowing St. Lawrence. The city of Montreal is located near the lower left corner, and Quebec City, at the upper right, is near the mouth of the partially ice-covered St. Lawrence Seaway.

    Both spectral and angular information are retrieved for every scene observed by MISR. The left-hand image was acquired by the instrument's vertical-viewing (nadir) camera, and is a false-color spectral composite from the near-infrared, red, and blue bands. The right-hand image is a false-color angular composite using red band data from the 60-degree backward-viewing, nadir, and 60-degree forward-viewing cameras. In each case, the individual channels of data are displayed as red, green, and blue, respectively.

    Much of the ground remains covered or partially covered with snow. Vegetation appears red in the left-hand image because of its high near-infrared brightness. In the multi-angle composite, vegetated areas appear in shades of green because they are brighter at nadir, possibly as a result of an underlying blanket of snow which is more visible from this direction. Enhanced forward scatter from the smooth water surface results in bluer hues, whereas urban areas look somewhat orange, possibly due to the effect of vertical structures which preferentially backscatter sunlight.

    The data were acquired during Terra orbit 6441, and cover an area measuring 275 kilometers x 310 kilometers.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  8. Colorado

    Atmospheric Science Data Center

    2014-05-15

    ... the Multi-angle Imaging SpectroRadiometer (MISR). On the left, a natural-color view acquired by MISR's vertical-viewing (nadir) camera ... Gunnison River at the city of Grand Junction. The striking "L" shaped feature in the lower image center is a sandstone monocline known as ...

  9. Assessment of pituitary micro-lesions using 3D sampling perfection with application-optimized contrasts using different flip-angle evolutions.

    PubMed

    Wang, Jing; Wu, Yue; Yao, Zhenwei; Yang, Zhong

    2014-12-01

    The aim of this study was to explore the value of the three-dimensional sampling perfection with application-optimized contrasts using different flip-angle evolutions (3D-SPACE) sequence in the assessment of pituitary micro-lesions. Coronal 3D-SPACE as well as routine T1- and dynamic contrast-enhanced (DCE) T1-weighted images of the pituitary gland were retrospectively acquired at 3.0 T in 52 patients (48 women and four men; mean age, 32 years; age range, 17-50 years) with clinically suspected pituitary abnormality. The interobserver agreement of the assessment results was analyzed with kappa statistics. Qualitative analyses were compared using the Wilcoxon signed-rank test. There was good interobserver agreement of the independent evaluations for 3D-SPACE images (κ = 0.892) and fair agreement for routine MR images (κ = 0.649). At 3.0 T, 3D-SPACE provided significantly better images than routine MR imaging in terms of the boundary of the pituitary gland, definition of pituitary lesions, and overall image quality. The evaluation of pituitary micro-lesions using combined routine and 3D-SPACE MR imaging was superior to that using only routine or only 3D-SPACE imaging. The 3D-SPACE sequence can be used for appropriate and successful evaluation of the pituitary gland. We suggest the 3D-SPACE sequence as a powerful supplemental sequence in MR examinations with suspected pituitary micro-lesions.
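
    A minimal sketch of the two statistics mentioned above (interobserver kappa and the Wilcoxon signed-rank test), using standard scipy/scikit-learn routines on made-up reader scores; the numbers are purely illustrative and unrelated to the study data.

```python
import numpy as np
from scipy.stats import wilcoxon
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal image-quality scores (1-4) from two readers for
# 3D-SPACE and from one reader for the routine sequence (illustrative only).
reader1_space = np.array([4, 4, 3, 4, 4, 3, 4, 4, 3, 4])
reader2_space = np.array([4, 4, 3, 4, 3, 3, 4, 4, 3, 4])
reader1_routine = np.array([3, 3, 2, 3, 3, 2, 3, 3, 2, 3])

# Interobserver agreement (Cohen's kappa) for the 3D-SPACE scores
kappa = cohen_kappa_score(reader1_space, reader2_space)

# Paired comparison of 3D-SPACE vs. routine scores from the same reader
stat, p_value = wilcoxon(reader1_space, reader1_routine)
print(f"kappa = {kappa:.3f}, Wilcoxon p = {p_value:.4f}")
```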

  10. Flexible feature-space-construction architecture and its VLSI implementation for multi-scale object detection

    NASA Astrophysics Data System (ADS)

    Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans

    2018-04-01

    Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.
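
    For readers unfamiliar with the normalization such an L1-norm circuit supports, here is a minimal software sketch of block-based L1 normalization of cell histograms (the kind of per-block operation used in HOG pipelines); the function name, block layout and parameters are illustrative assumptions, not the hardware design.

```python
import numpy as np

def l1_block_normalize(cell_hists, block=(2, 2), eps=1e-5):
    """L1-norm block normalization of cell-based gradient histograms.
    cell_hists: array of shape (n_cells_y, n_cells_x, n_bins).
    Returns concatenated, per-block L1-normalized descriptors, which makes
    the features less sensitive to illumination and gain changes."""
    ny, nx, nb = cell_hists.shape
    by, bx = block
    descriptors = []
    for y in range(ny - by + 1):
        for x in range(nx - bx + 1):
            v = cell_hists[y:y + by, x:x + bx, :].ravel()
            descriptors.append(v / (np.abs(v).sum() + eps))  # L1 normalization
    return np.concatenate(descriptors)
```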

  11. Assessment of capabilities of multiangle imaging photo-polarimetry for atmospheric correction in presence of absorbing aerosols

    NASA Astrophysics Data System (ADS)

    Kalashnikova, O. V.; Garay, M. J.; Xu, F.; Seidel, F. C.; Diner, D. J.

    2015-12-01

    Satellite remote sensing of ocean color is a critical tool for assessing the productivity of marine ecosystems and monitoring changes resulting from climatic or environmental influences. Yet water-leaving radiance comprises less than 10% of the signal measured from space, making correction for absorption and scattering by the intervening atmosphere imperative. Traditional ocean color retrieval algorithms utilize a standard set of aerosol models and the assumption of negligible water-leaving radiance in the near-infrared. Modern improvements have been developed to handle absorbing aerosols such as urban particulates in coastal areas and transported desert dust over the open ocean, where ocean fertilization can impact biological productivity at the base of the marine food chain. Even so, imperfect knowledge of the absorbing aerosol optical properties or their height distribution results in well-documented sources of error. In the UV, the problems of UV-enhanced absorption and of the nonsphericity of certain aerosol types are amplified due to the increased Rayleigh and aerosol optical depth, especially at off-nadir view angles. Multi-angle spectro-polarimetric measurements have been advocated as an additional tool to better understand and retrieve the aerosol properties needed for atmospheric correction of ocean color retrievals. The central concern of this work is to assess how neglecting the UV-enhanced absorption of carbonaceous particles and not accounting for dust nonsphericity affect the uncertainty of water-leaving radiance measurements. In addition, we evaluate the polarimetric sensitivity to absorbing aerosol properties in light of measurement uncertainties achievable for the next generation of multi-angle polarimetric imaging instruments, and demonstrate advantages and disadvantages of wavelength selection in the UV/VNIR range. The phase matrices for the spherical smoke particles were calculated using a standard Mie code, while those for non-spherical dust particles were calculated using the numerical approach described by Dubovik et al., 2006. A vector Markov Chain radiative transfer code including bio-optical models was used to evaluate TOA and water-leaving radiances.

  12. Airborne Sea of Dust over China

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Dust covered northern China in the last week of March during some of the worst dust storms to hit the region in a decade. The dust obscuring China's Inner Mongolian and Shanxi Provinces on March 24, 2002, is compared with a relatively clear day (October 31, 2001) in these images from the Multi-angle Imaging SpectroRadiometer's vertical-viewing (nadir) camera aboard NASA's Terra satellite. Each image represents an area of about 380 by 630 kilometers (236 by 391 miles). In the image from late March, shown on the right, wave patterns in the yellowish cloud liken the storm to an airborne ocean of dust. The veil of particulates obscures features on the surface north of the Yellow River (visible in the lower left). The area shown lies near the edge of the Gobi desert, a few hundred kilometers west of Beijing. Dust originates from the desert and travels east across northern China toward the Pacific Ocean. For especially severe storms, fine particles can travel as far as North America. The Multi-angle Imaging SpectroRadiometer, built and managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., is one of five Earth-observing instruments aboard the Terra satellite, launched in December 1999. The instrument acquires images of Earth at nine angles simultaneously, using nine separate cameras pointed forward, downward and backward along its flight path. The change in reflection at different view angles affords the means to distinguish different types of atmospheric particles, cloud forms and land surface covers. Image courtesy NASA/GSFC/LaRC/JPL, MISR Team

  13. SU-F-J-183: Interior Region-Of-Interest Tomography by Using Inverse Geometry System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, K; Kim, D; Kang, S

    2016-06-15

    Purpose: The inverse geometry computed tomography (IGCT) system, composed of multiple sources and a small detector, has several merits compared to conventional cone-beam computed tomography (CBCT), such as reduction of scatter effects and large volumetric imaging within one rotation without cone-beam artifacts. Using this multi-source characteristic, we present a selective, multiple interior region-of-interest (ROI) imaging method based on a designed source on-off sequence for IGCT. Methods: All of the IGCT sources are operated one by one sequentially, and each projection, in the shape of a narrow cone-beam, covers its own partial volume of the full field of view (FOV) determined by the system geometry. Thus, by controlling the multi-source operation, limited irradiation within the ROI is possible and selective Radon-space data for ROI imaging can be acquired without additional X-ray filtration. With this feature, we designed a source on-off sequence for multi-ROI IGCT imaging, and projections of ROI-IGCT were generated using the on-off sequence. Multi-ROI IGCT images were reconstructed using a filtered back-projection algorithm. All imaging processes in this study were performed using a digital phantom and patient CT data. ROI-IGCT images of the phantom were compared to the CBCT image and to the phantom data for image quality evaluation. Results: Image quality of ROI-IGCT was comparable to that of CBCT. Moreover, in axial planes distal from the FOV center, i.e., in the large cone-angle region, ROI-IGCT showed uniform image quality without the significant cone-beam artifact seen in CBCT. Conclusion: ROI-IGCT showed comparable image quality and has the capability to provide multiple ROI images within one rotation. Projection of ROI-IGCT is performed by selective irradiation, hence unnecessary imaging dose to non-interest regions can be reduced. In this regard, it appears useful for diagnostic or image-guidance purposes in radiotherapy, such as low-dose target localization and patient alignment. This research was supported by the Mid-career Researcher Program through NRF funded by the Ministry of Science, ICT & Future Planning of Korea (NRF-2014R1A2A1A10050270) and by the Radiation Technology R&D program through the National Research Foundation of Korea funded by the Ministry of Science, ICT & Future Planning (No. 2013M2A2A7038291)

  14. Multi-source and multi-angle remote sensing image data collection, application and sharing of Beichuan National Earthquake Ruins Museum

    NASA Astrophysics Data System (ADS)

    Lin, Yueguan; Wang, Wei; Wen, Qi; Huang, He; Lin, Jingli; Zhang, Wei

    2015-12-01

    The Ms 8.0 Wenchuan earthquake that occurred on May 12, 2008 brought huge casualties and property losses to the Chinese people, and Beichuan County was destroyed in the earthquake. In order to preserve a site for commemoration, for public science education, and for earthquake science research, the Beichuan National Earthquake Ruins Museum has been built on the ruins of Beichuan County. Based on the demand for digital preservation of the earthquake ruins park and on the research and data needs of earthquake damage assessment, we compiled a data set for the Beichuan National Earthquake Ruins Museum, including satellite remote sensing imagery, airborne remote sensing imagery, ground photogrammetry data and ground-acquired data. At the same time, in order to better serve earthquake science research, we designed sharing concepts and schemes for this scientific data set.

  15. WindCam and MSPI: two cloud and aerosol instrument concepts derived from Terra/MISR heritage

    NASA Astrophysics Data System (ADS)

    Diner, David J.; Mischna, Michael; Chipman, Russell A.; Davis, Ab; Cairns, Brian; Davies, Roger; Kahn, Ralph A.; Muller, Jan-Peter; Torres, Omar

    2008-08-01

    The Multi-angle Imaging SpectroRadiometer (MISR) has been acquiring global cloud and aerosol data from polar orbit since February 2000. MISR acquires moderately high-resolution imagery at nine view angles from nadir to 70.5°, in four visible/near-infrared spectral bands. Stereoscopic parallax, time lapse among the nine views, and the variation of radiance with angle and wavelength enable retrieval of geometric cloud and aerosol plume heights, height-resolved cloud-tracked winds, and aerosol optical depth and particle property information. Two instrument concepts based upon MISR heritage are in development. The Cloud Motion Vector Camera, or WindCam, is a simplified version comprised of a lightweight, compact, wide-angle camera to acquire multiangle stereo imagery at a single visible wavelength. A constellation of three WindCam instruments in polar Earth orbit would obtain height-resolved cloud-motion winds with daily global coverage, making it a low-cost complement to a spaceborne lidar wind measurement system. The Multiangle SpectroPolarimetric Imager (MSPI) is aimed at aerosol and cloud microphysical properties, and is a candidate for the National Research Council Decadal Survey's Aerosol-Cloud-Ecosystem (ACE) mission. MSPI combines the capabilities of MISR with those of other aerosol sensors, extending the spectral coverage to the ultraviolet and shortwave infrared and incorporating high-accuracy polarimetric imaging. Based on requirements for the nonimaging Aerosol Polarimeter Sensor on NASA's Glory mission, a degree of linear polarization uncertainty of 0.5% is specified within a subset of the MSPI bands. We are developing a polarization imaging approach using photoelastic modulators (PEMs) to accomplish this objective.

  16. The Large UV/Optical/Infrared Surveyor (LUVOIR): Decadal Mission concept design update

    NASA Astrophysics Data System (ADS)

    Bolcar, Matthew R.; Aloezos, Steve; Bly, Vincent T.; Collins, Christine; Crooke, Julie; Dressing, Courtney D.; Fantano, Lou; Feinberg, Lee D.; France, Kevin; Gochar, Gene; Gong, Qian; Hylan, Jason E.; Jones, Andrew; Linares, Irving; Postman, Marc; Pueyo, Laurent; Roberge, Aki; Sacks, Lia; Tompkins, Steven; West, Garrett

    2017-09-01

    In preparation for the 2020 Astrophysics Decadal Survey, NASA has commissioned the study of four large mission concepts, including the Large Ultraviolet / Optical / Infrared (LUVOIR) Surveyor. The LUVOIR Science and Technology Definition Team (STDT) has identified a broad range of science objectives including the direct imaging and spectral characterization of habitable exoplanets around sun-like stars, the study of galaxy formation and evolution, the epoch of reionization, star and planet formation, and the remote sensing of Solar System bodies. NASA's Goddard Space Flight Center (GSFC) is providing the design and engineering support to develop executable and feasible mission concepts that are capable of achieving the identified science objectives. We present an update on the first of two architectures being studied: a 15-meter-diameter segmented-aperture telescope with a suite of serviceable instruments operating over a range of wavelengths between 100 nm and 2.5 μm. Four instruments are being developed for this architecture: an optical / near-infrared coronagraph capable of 10^-10 contrast at inner working angles as small as 2 λ/D; the LUVOIR UV Multi-object Spectrograph (LUMOS), which will provide low- and medium-resolution UV (100 - 400 nm) multi-object imaging spectroscopy in addition to far-UV imaging; the High Definition Imager (HDI), a high-resolution wide-field-of-view NUV-Optical-IR imager; and a UV spectro-polarimeter being contributed by Centre National d'Etudes Spatiales (CNES). A fifth instrument, a multi-resolution optical-NIR spectrograph, is planned as part of a second architecture to be studied in late 2017.

  17. Polarimetric Observations of the Lunar Surface

    NASA Astrophysics Data System (ADS)

    Kim, S.

    2017-12-01

    Polarimetric images contain valuable information on the lunar surface, such as the grain size and porosity of the regolith, from which one can estimate the space weathering environment on the lunar surface. Surprisingly, polarimetric observation has never been conducted from lunar orbit before. A Wide-Angle Polarimetric Camera (PolCam) has recently been selected as one of three Korean science instruments onboard the Korea Pathfinder Lunar Orbiter (KPLO), which is planned for launch in 2019/2020 as the first Korean lunar mission. PolCam will obtain 80 m-resolution polarimetric images of the whole lunar surface between -70° and +70° latitudes at the 320, 430 and 750 nm bands for phase angles up to 115°. I will also discuss previous polarimetric studies of the lunar surface based on our ground-based observations.

  18. Resolving the Aerosol Piece of the Global Climate Picture

    NASA Astrophysics Data System (ADS)

    Kahn, R. A.

    2017-12-01

    Factors affecting our ability to calculate climate forcing and estimate model predictive skill include the direct radiative effects of aerosols and their indirect effects on clouds. Several decades of Earth-observing satellite observations have produced a global aerosol column-amount (AOD) record, but an aerosol microphysical property record required for climate and many air quality applications is lacking. Surface-based photometers offer qualitative aerosol-type classification, and several space-based instruments map aerosol air-mass types under favorable conditions. However, aerosol hygroscopicity, mass extinction efficiency (MEE), and quantitative light absorption must be obtained from in situ measurements. Completing the aerosol piece of the climate picture requires three elements: (1) continuing global AOD and qualitative type mapping from space-based, multi-angle imagers and aerosol vertical distribution from near-source stereo imaging and downwind lidar, (2) systematic, quantitative in situ observations of particle properties unobtainable from space, and (3) continuing transport modeling to connect observations to sources, and to extrapolate limited sampling in space and time. At present, the biggest challenges to producing the needed aerosol data record are: filling gaps in particle property observations, maintaining global observing capabilities, and putting the pieces together. Obtaining the PDFs of key particle properties, adequately sampled, is now the leading observational deficiency. One simplifying factor is that, for a given aerosol source and season, aerosol amounts often vary, but particle properties tend to be repeatable. SAM-CAAM (Systematic Aircraft Measurements to Characterize Aerosol Air Masses), a modest aircraft payload deployed frequently, could fill this gap, adding value to the entire satellite data record, improving aerosol property assumptions in retrieval algorithms, and providing MEEs to translate between remote-sensing optical constraints and the aerosol mass book-kept in climate models [Kahn et al., BAMS 2017]. This will also improve connections between remote-sensing particle types and those defined in models. The remaining challenge, maintaining global observing capabilities, requires continued community effort and good budgetary fortune.

  19. Automated comprehensive Adolescent Idiopathic Scoliosis assessment using MVC-Net.

    PubMed

    Wu, Hongbo; Bailey, Chris; Rasoulinejad, Parham; Li, Shuo

    2018-05-18

    Automated quantitative estimation of spinal curvature is an important task for the ongoing evaluation and treatment planning of Adolescent Idiopathic Scoliosis (AIS). It addresses the widely accepted disadvantages of manual Cobb angle measurement (time-consuming and unreliable), which is currently the gold standard for AIS assessment. Attempts have been made to improve the reliability of automated Cobb angle estimation. However, it is very challenging to achieve accurate and robust estimation of Cobb angles due to the need to correctly identify all the required vertebrae in both Anterior-posterior (AP) and Lateral (LAT) view x-rays. The challenge is especially evident in LAT x-rays, where occlusion of vertebrae by the ribcage occurs. We therefore propose a novel Multi-View Correlation Network (MVC-Net) architecture that provides a fully automated end-to-end framework for spinal curvature estimation in multi-view (both AP and LAT) x-rays. The proposed MVC-Net uses our newly designed multi-view convolution layers to incorporate joint features of multi-view x-rays, which allows the network to mitigate the occlusion problem by utilizing the structural dependencies of the two views. The MVC-Net consists of three closely linked components: (1) a series of X-modules for joint representation of spinal structure, (2) a Spinal Landmark Estimator network for robust spinal landmark estimation, and (3) a Cobb Angle Estimator network for accurate Cobb angle estimation. By utilizing an iterative multi-task training algorithm to train the Spinal Landmark Estimator and Cobb Angle Estimator in tandem, the MVC-Net leverages the multi-task relationship between landmark and angle estimation to reliably detect all the required vertebrae for accurate Cobb angle estimation. Experimental results on 526 x-ray images from 154 patients show an impressive 4.04° Circular Mean Absolute Error (CMAE) in AP Cobb angle and 4.07° CMAE in LAT Cobb angle estimation, which demonstrates the MVC-Net's capability of robust and accurate estimation of Cobb angles in multi-view x-rays. Our method therefore provides clinicians with a framework for efficient, accurate, and reliable estimation of spinal curvature for comprehensive AIS assessment. Copyright © 2018. Published by Elsevier B.V.
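
    The error metric quoted above, Circular Mean Absolute Error (CMAE), can be computed as follows under the standard convention of wrapping angular differences; the exact convention used by the authors may differ, and the example values are made up.

```python
import numpy as np

def circular_mae(pred_deg, true_deg):
    """Circular Mean Absolute Error between predicted and ground-truth angles
    in degrees.  Differences are wrapped so that, e.g., 359 deg and 1 deg are
    2 deg apart rather than 358 deg."""
    diff = np.deg2rad(np.asarray(pred_deg) - np.asarray(true_deg))
    wrapped = np.arctan2(np.sin(diff), np.cos(diff))
    return np.rad2deg(np.mean(np.abs(wrapped)))

# Illustrative usage with made-up Cobb angles
print(circular_mae([32.0, 48.5, 10.0], [29.0, 52.0, 12.5]))  # 3.0 deg
```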

  20. Image-based 3D reconstruction and virtual environmental walk-through

    NASA Astrophysics Data System (ADS)

    Sun, Jifeng; Fang, Lixiong; Luo, Ying

    2001-09-01

    We present a 3D reconstruction method that combines geometry-based modeling, image-based modeling and rendering techniques. The first component is an interactive geometry modeling method which recovers the basic geometry of the photographed scene. The second component is a model-based stereo algorithm. We discuss the image processing problems and algorithms involved in walking through a virtual space, and then design and implement a high-performance multi-threaded wandering algorithm. The applications range from architectural planning and archaeological reconstruction to virtual environments and cinematic special effects.

  1. Seasonal Surface Changes in Namibia and Central Angola

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Brightness variations in the terrain along a portion of southwestern Africa are displayed in these views from the Multi-angle Imaging SpectroRadiometer (MISR). The panels portray an area that includes Namibia's Skeleton Coast and Etosha National Park as well as Angola's Cuando Cubango. The top panels were acquired on March 6, 2001, during the region's wet season, and the bottom panels were acquired on September 1, 2002, during the dry season. Corresponding changes in the abundance of vegetation are apparent. The images on the left are natural color (red, green, blue) images from MISR's vertical-viewing (nadir) camera. The images on the right represent one of MISR's derived surface products.

    The radiance (light intensity) in each pixel of the so-called 'top-of-atmosphere' images on the left includes light that is reflected by the Earth's surface in addition to light that is transmitted and reflected by the atmosphere. The amount of radiation reflected by the surface into all upward directions, as opposed to any single direction, is important when studying Earth's energy budget. A quantity called the surface 'directional hemispherical reflectance' (DHR), sometimes called the 'black-sky albedo', captures this information, and is depicted in the images on the right. MISR's multi-angle views lead to more accurate estimates of the amount of radiation reflected into all directions than can be obtained as a result of looking at a single (e.g., vertically downward) view angle. Furthermore, to generate this surface product accurately, it is necessary to compensate for the effects of the intervening atmosphere, and MISR provides the ability to characterize and account for scattering of light by airborne particulates (aerosols).

    The DHR is called a hemispherical reflectance because it measures the amount of radiation reflected into all upward directions, and which therefore traverses an imaginary hemisphere situated above each surface point. The 'directional' part of the name describes the illumination geometry, and indicates that in the absence of an intervening atmosphere, light from the Sun illuminates the surface from a single direction (that is, there is no diffuse skylight, hence the 'black-sky' terminology). The DHR is retrieved over land surfaces in each of MISR's four wavelength bands, and the images on the right are red, green, blue spectral composites. Regions where DHR could not be derived, either due to an inability to retrieve the necessary atmospheric characteristics or due to the presence of clouds, are shown in dark gray.
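
    For reference, the black-sky albedo described above is conventionally written as the cosine-weighted integral of the bidirectional reflectance factor (BRF) over the upward viewing hemisphere for a single direct-illumination direction; the expression below is the standard textbook form, not a description of MISR's operational algorithm:

```latex
\mathrm{DHR}(\theta_0,\lambda) \;=\; \frac{1}{\pi}\int_{0}^{2\pi}\!\int_{0}^{\pi/2}
\mathrm{BRF}(\theta_0,\theta,\phi,\lambda)\,\cos\theta\,\sin\theta\;d\theta\,d\phi ,
```

    where θ0 is the solar zenith angle, θ the view zenith angle, φ the relative azimuth, and λ the wavelength band.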

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbits 6466 and 14388. The panels cover an area of about 380 kilometers x 760 kilometers, and utilize data from blocks 102 to 107 within World Reference System-2 path 181.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory,Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center,Greenbelt, MD. JPL is a division of the California Institute of Technology.

  2. Sensitivity images for multi-view ultrasonic array inspection

    NASA Astrophysics Data System (ADS)

    Budyn, Nicolas; Bevan, Rhodri; Croxford, Anthony J.; Zhang, Jie; Wilcox, Paul D.; Kashubin, Artem; Cawley, Peter

    2018-04-01

    The multi-view total focusing method (TFM) is an imaging technique for ultrasonic full matrix array data that typically exploits ray paths with zero, one or two internal reflections in the inspected object and for all combinations of longitudinal and transverse modes. The fusion of this vast quantity of views is expected to increase the reliability of ultrasonic inspection; however, it is not trivial to determine which views and which areas are the most suited for the detection of a given type and orientation of defect. This work introduces sensitivity images that give the expected response of a defect in any part of the inspected object and for any view. These images are based on a ray-based analytical forward model. They can be used to determine which views and which areas lead to the highest probability of detection of the defect. They can also be used for quantitatively analyzing the effects of the parameters of the inspection (probe angle and position, for example) on the overall probability of detection. Finally, they can be used to rescale TFM images so that the different views have comparable amplitudes. This methodology is applied to experimental data and discussed.
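
    The simplest member of the multi-view family is the direct (single longitudinal-mode) total focusing method; a minimal delay-and-sum sketch is shown below to make the imaging step concrete. The array geometry, variable names and single-view restriction are illustrative assumptions; the sensitivity images in the abstract additionally involve a ray-based analytical forward model of the defect response.

```python
import numpy as np

def tfm_image(fmc, probe_x, grid_x, grid_z, c, fs):
    """Total focusing method for a contact array, direct (L-L) view only.
    fmc:     full matrix capture array, shape (n_tx, n_rx, n_samples),
             assuming transmitters and receivers are the same elements.
    probe_x: element x-positions (m) as a NumPy array; grid_x, grid_z: image
             axes (m); c: wave speed (m/s); fs: sampling frequency (Hz)."""
    n_tx, n_rx, n_samp = fmc.shape
    X, Z = np.meshgrid(grid_x, grid_z, indexing="ij")
    image = np.zeros(X.shape)
    # Distance from every array element to every image pixel
    dist = np.sqrt((X[None, ...] - probe_x[:, None, None]) ** 2 + Z[None, ...] ** 2)
    for tx in range(n_tx):
        for rx in range(n_rx):
            # Time of flight transmitter -> pixel -> receiver, as a sample index
            idx = np.rint((dist[tx] + dist[rx]) / c * fs).astype(int)
            idx = np.clip(idx, 0, n_samp - 1)
            image += fmc[tx, rx, idx]      # delay-and-sum
    return np.abs(image)
```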

  3. Multi-Contrast Multi-Atlas Parcellation of Diffusion Tensor Imaging of the Human Brain

    PubMed Central

    Tang, Xiaoying; Yoshida, Shoko; Hsu, John; Huisman, Thierry A. G. M.; Faria, Andreia V.; Oishi, Kenichi; Kutten, Kwame; Poretti, Andrea; Li, Yue; Miller, Michael I.; Mori, Susumu

    2014-01-01

    In this paper, we propose a novel method for parcellating the human brain into 193 anatomical structures based on diffusion tensor images (DTIs). This was accomplished in the setting of multi-contrast diffeomorphic likelihood fusion using multiple DTI atlases. DTI images are modeled as high-dimensional fields, with each voxel exhibiting a vector-valued feature comprising mean diffusivity (MD), fractional anisotropy (FA), and fiber angle. For each structure, the probability distribution of each element in the feature vector is modeled as a mixture of Gaussians, the parameters of which are estimated from the labeled atlases. The structure-specific feature vector is then used to parcellate the test image. For each atlas, a likelihood is iteratively computed based on the structure-specific feature vector. The likelihoods from multiple atlases are then fused. The updating and fusing of the likelihoods is achieved via the expectation-maximization (EM) algorithm for maximum a posteriori (MAP) estimation problems. We first demonstrate the performance of the algorithm by examining the parcellation accuracy of 18 structures from 25 subjects with a varying degree of structural abnormality. Dice values ranging from 0.8 to 0.9 were obtained. In addition, a strong correlation was found between the volumes of the automated and the manual parcellations. Then, we present scan-rescan reproducibility based on another dataset of 16 DTI images - an average of 3.73%, 1.91%, and 1.79% for volume, mean FA, and mean MD, respectively. Finally, the range of anatomical variability in the normal population was quantified for each structure. PMID:24809486
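
    The Dice overlap used in the evaluation above has the standard definition 2|A∩M| / (|A|+|M|); a minimal per-structure implementation is sketched below with made-up label maps (the function and example are illustrative, not the authors' evaluation code).

```python
import numpy as np

def dice_coefficient(auto_labels, manual_labels, structure_id):
    """Dice overlap between the automated and manual parcellation for one
    structure: 1.0 means perfect overlap, 0.0 means no overlap."""
    a = (np.asarray(auto_labels) == structure_id)
    m = (np.asarray(manual_labels) == structure_id)
    denom = a.sum() + m.sum()
    return 2.0 * np.logical_and(a, m).sum() / denom if denom else np.nan

# Illustrative usage on tiny label maps
auto = np.array([[1, 1, 2], [1, 2, 2]])
manual = np.array([[1, 1, 1], [1, 2, 2]])
print(dice_coefficient(auto, manual, structure_id=1))  # 0.857...
```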

  4. Multi-objective optimization and design for free piston Stirling engines based on the dimensionless power

    NASA Astrophysics Data System (ADS)

    Mou, Jian; Hong, Guotong

    2017-02-01

    In this paper, the dimensionless power is used to optimize free piston Stirling engines (FPSEs). The dimensionless power is defined as the ratio of heat power loss to output work. The heat power losses include losses in the expansion space, heater, regenerator, cooler and compression space, and each loss is calculated by an empirical formula. The output work is calculated with the adiabatic model. The results show that 82.66% of the losses come from the expansion space and 54.59% of the expansion-space heat losses come from the shuttle loss. At different pressures, the optimum bore-stroke ratio, heat source temperature, phase angle and frequency take different values; the optimum phase angle increases with pressure, but the optimum frequency drops as pressure increases. However, no matter what the heat source temperature, initial pressure and frequency are, the optimum ratio of piston stroke to displacer stroke is always about 0.8. A three-dimensional diagram is used to analyse the Stirling engine; from it, the optimum phase angle, frequency and heat source temperature can be obtained simultaneously. This study offers some guidance for the design and optimization of FPSEs.
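
    A minimal sketch of the bookkeeping behind the dimensionless power follows, assuming hypothetical loss values and output work; the empirical loss formulas and the adiabatic model themselves are not reproduced.

    ```python
    def dimensionless_power(losses, output_work):
        """Ratio of total heat power loss to output work, per the definition above.
        `losses` maps loss source -> value; all values share one consistent unit."""
        return sum(losses.values()) / output_work

    # hypothetical numbers, purely to show the bookkeeping
    losses = {"expansion space": 250.0, "heater": 20.0, "regenerator": 15.0,
              "cooler": 10.0, "compression space": 7.5}
    print(dimensionless_power(losses, output_work=1000.0))  # -> 0.3025
    ```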

  5. Limited-angle multi-energy CT using joint clustering prior and sparsity regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Huayu; Xing, Yuxiang

    2016-03-01

    In this article, we present an easy-to-implement multi-energy CT scanning strategy and a corresponding reconstruction method, which facilitate spectral CT imaging by improving data efficiency by a factor of the number of energy channels without introducing visible limited-angle artifacts caused by reducing projection views. Leveraging the structural coherence at different energies, we first pre-reconstruct a prior structure-information image using projection data from all energy channels. Then, we perform k-means clustering on the prior image to generate a sparse dictionary representation of the image, which serves as a structure-information constraint. We combine this constraint with a conventional compressed sensing method and propose a new model which we refer to as Joint Clustering Prior and Sparsity Regularization (CPSR). CPSR is a convex problem and we solve it by the Alternating Direction Method of Multipliers (ADMM). We verify our CPSR reconstruction method with a numerical simulation experiment. A dental phantom with complicated structures of teeth and soft tissue is used. X-ray beams from three spectra of different peak energies (120 kVp, 90 kVp, 60 kVp) irradiate the phantom to form tri-energy projections. Projection data covering only 75° from each energy spectrum are collected for reconstruction. Independent reconstruction for each energy causes severe limited-angle artifacts even with the help of compressed sensing approaches. Our CPSR provides images free of limited-angle artifacts, and all edge details are well preserved in our experimental study.
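
    As an illustration of the clustering-prior step only, the sketch below runs a plain intensity k-means on a pre-reconstructed prior image and returns a piecewise-constant structural image. The sparse dictionary construction, the compressed-sensing term, and the ADMM solver of CPSR are not shown, and the function and parameter names are assumptions.

    ```python
    import numpy as np

    def clustering_prior(prior_image, k=4, n_iter=50, seed=0):
        """Intensity k-means on a pre-reconstructed (all-energy) prior image.

        Returns a piecewise-constant image built from cluster means plus the label map;
        a stand-in for the structure constraint used by the CPSR-style reconstruction.
        """
        rng = np.random.default_rng(seed)
        pix = prior_image.ravel().astype(float)
        centers = rng.choice(pix, size=k, replace=False)
        for _ in range(n_iter):
            # assign every pixel to its nearest cluster center, then update the centers
            labels = np.argmin(np.abs(pix[:, None] - centers[None, :]), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = pix[labels == j].mean()
        return centers[labels].reshape(prior_image.shape), labels.reshape(prior_image.shape)
    ```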

  6. Large micromirror array for multi-object spectroscopy in space

    NASA Astrophysics Data System (ADS)

    Canonica, Michael; Zamkotsian, Frédéric; Lanzoni, Patrick; Noell, Wilfried

    2017-11-01

    Multi-object spectroscopy (MOS) is a powerful tool for space and ground-based telescopes for the study of the formation and evolution of galaxies. This technique requires a programmable slit mask for astronomical object selection. We are engaged in a European development of micromirror arrays (MMA), called MIRA, for generating reflective slit masks in future MOS. The 100 x 200 μm2 micromirrors are electrostatically tilted, providing a precise angle. The main requirements are cryogenic environment capabilities, a precise and uniform tilt angle over the whole device, uniformity of the mirror voltage-tilt hysteresis and low mirror deformation. A first MMA with single-crystal silicon micromirrors was successfully designed, fabricated and tested. A new generation of micromirror arrays composed of 2048 micromirrors (32 x 64) and modelled for individual addressing was fabricated using fusion and eutectic wafer-level bonding. These micromirrors without coating show a peak-to-valley deformation of less than 10 nm and a tilt angle of 24° at an actuation voltage of 130 V. Individual addressing capability of each mirror has been demonstrated using a line-column algorithm based on an optimized voltage-tilt hysteresis. Devices are currently packaged, wire-bonded and integrated with dedicated electronics to demonstrate the individual actuation of all micromirrors on an array. An operational test of this large array with gold-coated mirrors has been performed at cryogenic temperature (162 K): the micromirrors were actuated successfully before, during and after the cryogenic experiment. The micromirror surface deformation measured at cryogenic temperature is below 30 nm peak-to-valley.

  7. Digital sun sensor multi-spot operation.

    PubMed

    Rufino, Giancarlo; Grassi, Michele

    2012-11-28

    The operation and testing of a multi-spot digital sun sensor for precise sun-line determination is described. The image-forming system consists of an opaque mask with multiple pinhole apertures producing multiple, simultaneous, spot-like images of the sun on the focal plane. The sun-line precision can be improved by averaging multiple simultaneous measurements. Nevertheless, operating the sensor over a wide field of view requires acquiring and processing images in which the number of sun spots and the related intensity level vary widely. To this end, a reliable and robust image acquisition procedure based on a variable shutter time has been adopted, as well as a calibration function that also exploits knowledge of the sun-spot array size. The main focus of the present paper is the experimental validation of the wide-field-of-view operation of the sensor using a sensor prototype and a laboratory test facility. Results demonstrate that high measurement precision is maintained even at large off-boresight angles.
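
    A minimal sketch of the multi-spot averaging idea follows, assuming a thresholded focal-plane frame and a mask-to-detector distance expressed in pixels; the variable-shutter acquisition procedure and the calibration function of the actual sensor are not modeled, and the per-aperture offsets of a real calibration are ignored.

    ```python
    import numpy as np
    from scipy import ndimage

    def sun_line_from_spots(frame, focal_length_px, threshold):
        """Average the sun-line estimate over all sun spots in one frame.

        frame           : 2-D focal-plane image containing multiple pinhole sun spots
        focal_length_px : mask-to-detector distance expressed in pixels (assumption)
        threshold       : intensity threshold separating spots from background
        Returns averaged off-boresight angles (alpha, beta) in radians, measured
        from the frame centre, and the number of detected spots.
        """
        mask = frame > threshold
        labels, n_spots = ndimage.label(mask)
        centroids = np.array(ndimage.center_of_mass(frame, labels, range(1, n_spots + 1)))
        cy, cx = (np.array(frame.shape) - 1) / 2.0
        # each spot gives an independent sun-line estimate; averaging reduces noise
        alpha = np.arctan2(centroids[:, 1] - cx, focal_length_px)
        beta = np.arctan2(centroids[:, 0] - cy, focal_length_px)
        return alpha.mean(), beta.mean(), n_spots
    ```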

  8. Shock Capturing with PDE-Based Artificial Viscosity for an Adaptive, Higher-Order Discontinuous Galerkin Finite Element Method

    DTIC Science & Technology

    2008-06-01

    Geometry Interpolation The function space, VpH, consists of discontinuous, piecewise polynomials. This work used a polynomial basis for VpH such... between a piecewise-constant and smooth variation of viscosity in both a one-dimensional and multi-dimensional setting. Before continuing with the... inviscid, transonic flow past a NACA 0012 at zero angle of attack and freestream Mach number of M∞ = 0.95. The

  9. MSVAT-SPACE-STIR and SEMAC-STIR for Reduction of Metallic Artifacts in 3T Head and Neck MRI.

    PubMed

    Hilgenfeld, T; Prager, M; Schwindling, F S; Nittka, M; Rammelsberg, P; Bendszus, M; Heiland, S; Juerchott, A

    2018-05-24

    The incidence of metallic dental restorations and implants is increasing, and head and neck MR imaging is becoming challenging with regard to artifacts. Our aim was to evaluate whether multiple-slab acquisition with view angle tilting gradient based on a sampling perfection with application-optimized contrasts by using different flip angle evolution (MSVAT-SPACE)-STIR and slice-encoding for metal artifact correction (SEMAC)-STIR are beneficial for artifact suppression compared with SPACE-STIR and TSE-STIR in vitro and in vivo. At 3T, 3D artifacts of 2 dental implants, supporting different single crowns, were evaluated. Image quality was evaluated quantitatively (normalized signal-to-noise ratio) and qualitatively (2 reads by 2 blinded radiologists). Feasibility was tested in vivo in 5 volunteers and 5 patients. The maximum achievable resolution and the normalized signal-to-noise ratio of MSVAT-SPACE-STIR were higher compared with SEMAC-STIR. Performance in terms of artifact correction depended on the material composition. For highly paramagnetic materials, SEMAC-STIR was superior to MSVAT-SPACE-STIR (27.8% smaller artifact volume) and TSE-STIR (93.2% less slice distortion). However, MSVAT-SPACE-STIR reduced the artifact size compared with SPACE-STIR by 71.5%. For low-paramagnetic materials, MSVAT-SPACE-STIR performed as well as SEMAC-STIR. Furthermore, MSVAT-SPACE-STIR decreased artifact volume by 69.5% compared with SPACE-STIR. The image quality of all sequences did not differ systematically. In vivo results were comparable with in vitro results. Regarding susceptibility artifacts and acquisition time, MSVAT-SPACE-STIR might be advantageous over SPACE-STIR for high-resolution and isotropic head and neck imaging. Only for materials with high susceptibility differences relative to soft tissue might the use of SEMAC-STIR be beneficial. Within limited acquisition times, SEMAC-STIR cannot exploit its full advantage over TSE-STIR regarding artifact suppression. © 2018 by American Journal of Neuroradiology.

  10. The holographic display of three-dimensional medical objects through the usage of a shiftable cylindrical lens

    NASA Astrophysics Data System (ADS)

    Teng, Dongdong; Liu, Lilin; Zhang, Yueli; Pang, Zhiyong; Wang, Biao

    2014-09-01

    Through the creative usage of a shiftable cylindrical lens, a wide-view-angle holographic display system is developed for medical object display in real three-dimensional (3D) space based on a time-multiplexing method. The two-dimensional (2D) source images for all computer generated holograms (CGHs) needed by the display system are only one group of computerized tomography (CT) or magnetic resonance imaging (MRI) slices from the scanning device. Complicated 3D message reconstruction on the computer is not necessary. A pelvis is taken as the target medical object to demonstrate this method and the obtained horizontal viewing angle reaches 28°.

  11. Simultaneous identification of optical constants and PSD of spherical particles by multi-wavelength scattering-transmittance measurement

    NASA Astrophysics Data System (ADS)

    Zhang, Jun-You; Qi, Hong; Ren, Ya-Tao; Ruan, Li-Ming

    2018-04-01

    An accurate and stable identification technique is developed to retrieve the optical constants and particle size distributions (PSDs) of a particle system simultaneously from multi-wavelength scattering-transmittance signals using an improved quantum particle swarm optimization algorithm. Mie theory is used to calculate the directional laser intensity scattered by particles and the spectral collimated transmittance. Sensitivity and objective-function distribution analyses were conducted to evaluate the mathematical properties (i.e. ill-posedness and multimodality) of the inverse problems under three different optical signal combinations (i.e. the single-wavelength multi-angle light scattering signal; the single-wavelength multi-angle light scattering and spectral transmittance signal; and the multi-angle light scattering and spectral transmittance signal). It was found that the best global convergence performance is obtained by using the multi-wavelength scattering-transmittance signals. Meanwhile, the present technique has been tested under different levels of Gaussian measurement noise to prove its feasibility in a large solution space. All the results show that the inverse technique using multi-wavelength scattering-transmittance signals is effective and suitable for retrieving the optical complex refractive indices and PSD of a particle system simultaneously.

  12. The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehoucq, R. B.; Reu, P. L.; Turner, D. Z.

    Here, this work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. 2009). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and the software settings such as subset size and spacing.

  13. The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation

    DOE PAGES

    Lehoucq, R. B.; Reu, P. L.; Turner, D. Z.

    2017-11-27

    Here, this work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. 2009). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and the software settings such as subset size and spacing.
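
    The sketch below computes an illustrative alignment measure, the mean absolute cosine of the angle between an assumed motion direction and the image gradients over a subset. It is only a proxy for the correction factor derived in the paper, whose exact integral form is not reproduced, and the motion vector is taken as an input assumption rather than estimated.

    ```python
    import numpy as np

    def cosine_alignment(image, motion, subset):
        """Mean |cos(angle)| between an assumed motion direction and the image
        gradients over a DIC subset.

        image  : 2-D reference image
        motion : (2,) assumed motion vector (du, dv); an input assumption here
        subset : (row_slice, col_slice) defining the subset of interest
        """
        gy, gx = np.gradient(image.astype(float))   # gradients along rows, columns
        gx, gy = gx[subset], gy[subset]
        gnorm = np.hypot(gx, gy) + 1e-12
        m = np.asarray(motion, float)
        m = m / (np.linalg.norm(m) + 1e-12)
        cos_theta = (gx * m[0] + gy * m[1]) / gnorm
        return np.abs(cos_theta).mean()
    ```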

  14. A High Fidelity Approach to Data Simulation for Space Situational Awareness Missions

    NASA Astrophysics Data System (ADS)

    Hagerty, S.; Ellis, H., Jr.

    2016-09-01

    Space Situational Awareness (SSA) is vital to maintaining our Space Superiority. A high fidelity, time-based simulation tool, PROXOR™ (Proximity Operations and Rendering), supports SSA by generating realistic mission scenarios including sensor frame data with corresponding truth. This is a unique and critical tool for supporting mission architecture studies, new capability (algorithm) development, current/future capability performance analysis, and mission performance prediction. PROXOR™ provides a flexible architecture for sensor and resident space object (RSO) orbital motion and attitude control that simulates SSA, rendezvous and proximity operations scenarios. The major elements of interest are based on the ability to accurately simulate all aspects of the RSO model, viewing geometry, imaging optics, sensor detector, and environmental conditions. These capabilities enhance the realism of mission scenario models and generated mission image data. As an input, PROXOR™ uses a library of 3-D satellite models containing 10+ satellites, including low-earth orbit (e.g., DMSP) and geostationary (e.g., Intelsat) spacecraft, where the spacecraft surface properties are those of actual materials and include Phong and Maxwell-Beard bidirectional reflectance distribution function (BRDF) coefficients for accurate radiometric modeling. We calculate the inertial attitude, the changing solar and Earth illumination angles of the satellite, and the viewing angles from the sensor as we propagate the RSO in its orbit. The synthetic satellite image is rendered at high resolution and aggregated to the focal plane resolution resulting in accurate radiometry even when the RSO is a point source. The sensor model includes optical effects from the imaging system [point spread function (PSF) includes aberrations, obscurations, support structures, defocus], detector effects (CCD blooming, left/right bias, fixed pattern noise, image persistence, shot noise, read noise, and quantization noise), and environmental effects (radiation hits with selectable angular distributions and 4-layer atmospheric turbulence model for ground based sensors). We have developed an accurate flash Light Detection and Ranging (LIDAR) model that supports reconstruction of 3-dimensional information on the RSO. PROXOR™ contains many important imaging effects such as intra-frame smear, realized by oversampling the image in time and capturing target motion and jitter during the integration time.

  15. Morocco and Algeria

    Atmospheric Science Data Center

    2013-04-15

    ... mosaic of southwestern Europe and northwestern Morocco and Algeria. The image extends from 48°N, 16°W in the northwest to 32°N, 8°E in ... corner. The rugged Atlas Mountain ranges traverse northern Algeria and Morocco. The Multi-angle Imaging SpectroRadiometer (MISR) ...

  16. The development of a specialized processor for a space-based multispectral earth imager

    NASA Astrophysics Data System (ADS)

    Khedr, Mostafa E.

    2008-10-01

    This work was done in the Department of Computer Engineering, Lvov Polytechnic National University, Lvov, Ukraine, as a thesis entitled "Space Imager Computer System for Raw Video Data Processing" [1]. This work describes the synthesis and practical implementation of a specialized computer system for raw data control and processing onboard a satellite multispectral earth imager. This computer system is intended for satellites with resolution in the range of one meter with 12-bit precision. The design is based mostly on general off-the-shelf components such as FPGAs, plus custom-designed software for interfacing with a PC and test equipment. The designed system was successfully manufactured and is now fully functioning in orbit.

  17. One-way acoustic mirror based on anisotropic zero-index media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gu, Zhong-ming; Liang, Bin, E-mail: liangbin@nju.edu.cn, E-mail: jccheng@nju.edu.cn; Yang, Jing

    2015-11-23

    We have designed a one-way acoustic mirror comprising anisotropic zero-index media. For an acoustic beam incident at a particular angle, the designed structure behaves like a highly efficient mirror that redirects almost all the incident energy into another direction predicted by Snell's law, while becoming virtually transparent to beams propagating in reverse along this output path. Furthermore, the mirror can be tailored to work at an arbitrary incident angle by simply adjusting its geometry. Our design, with unidirectional reflection functionality and a flexible working angle, may offer possibilities for spatial isolation and have deep implications in various scenarios such as ultrasound imaging or noise control.

  18. Fully refocused multi-shot spatiotemporally encoded MRI: robust imaging in the presence of metallic implants.

    PubMed

    Ben-Eliezer, Noam; Solomon, Eddy; Harel, Elad; Nevo, Nava; Frydman, Lucio

    2012-12-01

    An approach has been recently introduced for acquiring arbitrary 2D NMR spectra or images in a single scan, based on the use of frequency-swept RF pulses for the sequential excitation and acquisition of the spins' response. This spatiotemporal-encoding (SPEN) approach enables a unique, voxel-by-voxel refocusing of all frequency shifts in the sample, for all instants throughout the data acquisition. The present study investigates the use of this full-refocusing aspect of SPEN-based imaging in the multi-shot MRI of objects subject to sizable field inhomogeneities that complicate conventional imaging approaches. 2D MRI experiments were performed at 7 T on phantoms and on mice in vivo, focusing on imaging in proximity to metallic objects. Fully refocused SPEN-based spin echo imaging sequences were implemented, using both Cartesian and back-projection trajectories, and compared with k-space encoded spin echo imaging schemes collected on identical samples under equal bandwidths and acquisition timing conditions. In all cases assayed, the fully refocused spatiotemporally encoded experiments evidenced a ca. 50% reduction in signal dephasing in the proximity of the metal, as compared to analogous results stemming from the k-space encoded spin echo counterparts. The results of this study suggest that SPEN-based acquisition schemes carry the potential to overcome strong field inhomogeneities, of the kind that currently preclude high-field, high-resolution tissue characterizations in the neighborhood of metallic implants.

  19. A Query Expansion Framework in Image Retrieval Domain Based on Local and Global Analysis

    PubMed Central

    Rahman, M. M.; Antani, S. K.; Thoma, G. R.

    2011-01-01

    We present an image retrieval framework based on automatic query expansion in a concept feature space by generalizing the vector space model of information retrieval. In this framework, images are represented by vectors of weighted concepts similar to the keyword-based representation used in text retrieval. To generate the concept vocabularies, a statistical model is built by utilizing Support Vector Machine (SVM)-based classification techniques. The images are represented as “bag of concepts” that comprise perceptually and/or semantically distinguishable color and texture patches from local image regions in a multi-dimensional feature space. To explore the correlation between the concepts and overcome the assumption of feature independence in this model, we propose query expansion techniques in the image domain from a new perspective based on both local and global analysis. For the local analysis, the correlations between the concepts based on the co-occurrence pattern, and the metrical constraints based on the neighborhood proximity between the concepts in encoded images, are analyzed by considering local feedback information. We also analyze the concept similarities in the collection as a whole in the form of a similarity thesaurus and propose an efficient query expansion based on the global analysis. The experimental results on a photographic collection of natural scenes and a biomedical database of different imaging modalities demonstrate the effectiveness of the proposed framework in terms of precision and recall. PMID:21822350
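
    A small sketch of the global-analysis expansion step follows, assuming a precomputed concept-concept similarity thesaurus and a weighted bag-of-concepts query vector; the SVM-based concept generation, the local feedback analysis, and the paper's exact weighting scheme are not reproduced, and all names are illustrative.

    ```python
    import numpy as np

    def expand_query(query_vec, concept_sim, top_k=5, weight=0.5):
        """Global-analysis style query expansion over a concept similarity thesaurus.

        query_vec   : (n_concepts,) weighted "bag of concepts" query vector
        concept_sim : (n_concepts, n_concepts) concept-concept similarity matrix
        """
        scores = concept_sim @ query_vec                  # relevance of each concept to the query
        candidates = np.where(query_vec == 0, scores, -np.inf)   # only concepts not yet in the query
        expand = np.argsort(candidates)[::-1][:top_k]
        expand = expand[np.isfinite(candidates[expand])]
        expanded = query_vec.astype(float).copy()
        expanded[expand] = weight * candidates[expand]    # add the most correlated concepts
        return expanded / (np.linalg.norm(expanded) + 1e-12)
    ```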

  20. Characterization of a compact 6-band multifunctional camera based on patterned spectral filters in the focal plane

    NASA Astrophysics Data System (ADS)

    Torkildsen, H. E.; Hovland, H.; Opsahl, T.; Haavardsholm, T. V.; Nicolas, S.; Skauli, T.

    2014-06-01

    In some applications of multi- or hyperspectral imaging, it is important to have a compact sensor. The most compact spectral imaging sensors are based on spectral filtering in the focal plane. For hyperspectral imaging, it has been proposed to use a "linearly variable" bandpass filter in the focal plane, combined with scanning of the field of view. As the image of a given object in the scene moves across the field of view, it is observed through parts of the filter with varying center wavelength, and a complete spectrum can be assembled. However if the radiance received from the object varies with viewing angle, or with time, then the reconstructed spectrum will be distorted. We describe a camera design where this hyperspectral functionality is traded for multispectral imaging with better spectral integrity. Spectral distortion is minimized by using a patterned filter with 6 bands arranged close together, so that a scene object is seen by each spectral band in rapid succession and with minimal change in viewing angle. The set of 6 bands is repeated 4 times so that the spectral data can be checked for internal consistency. Still the total extent of the filter in the scan direction is small. Therefore the remainder of the image sensor can be used for conventional imaging with potential for using motion tracking and 3D reconstruction to support the spectral imaging function. We show detailed characterization of the point spread function of the camera, demonstrating the importance of such characterization as a basis for image reconstruction. A simplified image reconstruction based on feature-based image coregistration is shown to yield reasonable results. Elimination of spectral artifacts due to scene motion is demonstrated.

  1. Multiphase flow predictions from carbonate pore space images using extracted network models

    NASA Astrophysics Data System (ADS)

    Al-Kharusi, Anwar S.; Blunt, Martin J.

    2008-06-01

    A methodology to extract networks from pore space images is used to make predictions of multiphase transport properties for subsurface carbonate samples. The extraction of the network model is based on the computation of the location and sizes of pores and throats to create a topological representation of the void space of three-dimensional (3-D) rock images, using the concept of maximal balls. In this work, we follow a multistaged workflow. We start with a 2-D thin-section image; convert it statistically into a 3-D representation of the pore space; extract a network model from this image; and finally, simulate primary drainage, waterflooding, and secondary drainage flow processes using a pore-scale simulator. We test this workflow for a reservoir carbonate rock. The network-predicted absolute permeability is similar to the core plug measured value and the value computed on the 3-D void space image using the lattice Boltzmann method. The predicted capillary pressure during primary drainage agrees well with a mercury-air experiment on a core sample, indicating that we have an adequate representation of the rock's pore structure. We adjust the contact angles in the network to match the measured waterflood and secondary drainage capillary pressures. We infer a significant degree of contact angle hysteresis. We then predict relative permeabilities for primary drainage, waterflooding, and secondary drainage that agree well with laboratory measured values. This approach can be used to predict multiphase transport properties when wettability and pore structure vary in a reservoir, where experimental data is scant or missing. There are shortfalls to this approach, however. We compare results from three networks, one of which was derived from a section of the rock containing vugs. Our method fails to predict properties reliably when an unrepresentative image is processed to construct the 3-D network model. This occurs when the image volume is not sufficient to represent the geological variations observed in a core plug sample.

  2. Multi-color, rotationally resolved photometry of asteroid 21 Lutetia from OSIRIS/Rosetta observations

    NASA Astrophysics Data System (ADS)

    Lamy, P. L.; Faury, G.; Jorda, L.; Kaasalainen, M.; Hviid, S. F.

    2010-10-01

    Context. Asteroid 21 Lutetia is the second target of the Rosetta space mission. Extensive pre-encounter, space-, and ground-based observations are being performed to prepare for the flyby in July 2010. Aims: The aim of this article is to accurately characterize the photometric properties of this asteroid over a broad spectral range from the ultraviolet to the near-infrared and to search for evidence of surface inhomogeneities. Methods: The asteroid was imaged on 2 and 3 January 2007 with the Narrow Angle Camera (NAC) of the Optical, Spectroscopic, and Infrared Remote Imaging System (OSIRIS) during the cruise phase of the Rosetta spacecraft. The geometric conditions were such that the aspect angle was 44° (i.e., mid-northern latitudes) and the phase angle 22.4°. Lutetia was continuously monitored over 14.3 h, thus exceeding one rotational period and a half, with twelve filters whose spectral coverage extended from 271 to 986 nm. An accurate photometric calibration was obtained from the observations of a solar analog star, 16 Cyg B. Results: High-quality light curves in the U, B, V, R and I photometric bands were obtained. Once they were merged with previous light curves from over some 45 years, the sidereal period is accurately determined: Prot = 8.168271 ± 0.000002 h. Color variations with rotational phase are marginally detected with the ultraviolet filter centered at 368 nm but are absent in the other visible and near-infrared filters. The albedo is directly determined from the observed maximum cross-section obtained from an elaborated shape model that results from a combination of adaptive-optics imaging and light curve inversion. Using current solutions for the phase function, we find geometric albedos pV = 0.130 ± 0.014 when using the linear phase function and pV(H-G) = 0.180 ± 0.018 when using the (H-G) phase function, which incorporates the opposition effect. The spectral variation of the reflectance indicates a steady decrease with decreasing wavelength rather than a sharp fall-off. Photometric tables (Tables 4 to 8) are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/521/A19

  3. Knowledge-based vision for space station object motion detection, recognition, and tracking

    NASA Technical Reports Server (NTRS)

    Symosek, P.; Panda, D.; Yalamanchili, S.; Wehner, W., III

    1987-01-01

    Computer vision, especially color image analysis and understanding, has much to offer in the area of the automation of Space Station tasks such as construction, satellite servicing, rendezvous and proximity operations, inspection, experiment monitoring, data management and training. Knowledge-based techniques improve the performance of vision algorithms for unstructured environments because of their ability to deal with imprecise a priori information or inaccurately estimated feature data and still produce useful results. Conventional techniques using statistical and purely model-based approaches lack flexibility in dealing with the variabilities anticipated in the unstructured viewing environment of space. Algorithms developed under NASA sponsorship for Space Station applications to demonstrate the value of a hypothesized architecture for a Video Image Processor (VIP) are presented. Approaches to the enhancement of the performance of these algorithms with knowledge-based techniques and the potential for deployment of highly-parallel multi-processor systems for these algorithms are discussed.

  4. New Approaches to the Use and Integration of Multi-Sensor Remote Sensing for Historic Resource Identification and Evaluation

    DTIC Science & Technology

    2006-11-10

    features based on shape are easy to come by. The Great Pyramids at Giza are instantly identified from space, even at the very coarse spatial... Pyramids at Giza, Egypt, are recognized by their triangular faces in this 1 m resolution Ikonos image, as are nearby rectangular tombs (credit: Space

  5. Phase-Imaging with a Sharpened Multi-Walled Carbon Nanotube AFM Tip: Investigation of Low-k Dielectric Polymer Hybrids

    NASA Technical Reports Server (NTRS)

    Nguyen, Cattien V.; Stevens, Ramsey M.; Meyyappan, M.; Volksen, Willi; Miller, Robert D.

    2005-01-01

    Phase shift tapping mode scanning force microscopy (TMSFM) has evolved into a very powerful technique for the nanoscale surface characterization of compositional variations in heterogeneous samples. The phase shift signal measures the difference between the phase angle of the excitation signal and the phase angle of the cantilever response. The signal correlates to the tip-sample inelastic interactions, identifying the different chemical and/or physical properties of surfaces. In general, the resolution and quality of scanning probe microscopic images are highly dependent on the size of the scanning probe tip. In improving AFM tip technology, we recently developed a technique for sharpening the tip of a multi-walled carbon nanotube (CNT) AFM tip, reducing the radius of curvature of the CNT tip to less than 5 nm while still maintaining the inherent stability of multi-walled CNT tips. Herein we report the use of sharpened CNT AFM tips for phase-imaging of polymer hybrids, a precursor for generating nanoporous low-k dielectrics for on-chip interconnect applications. Using sharpened CNT tips, we obtained phase-contrast images having domains less than 10 nm. In contrast, conventional Si tips and unsharpened CNT tips (radius greater than 15 nm) were not able to resolve the nanoscale domains in the polymer hybrid films. Clearly, the size of the CNT tip contributes significantly to the resolution of phase-contrast imaging. In addition, a study on the nonlinear tapping dynamics of the multi-walled CNT tip indicates that the multi-walled CNT tip is immune to conventional imaging instabilities related to the coexistence of attractive and repulsive tapping regimes. This factor may also contribute to the phase-contrast image quality of multi-walled CNT AFM tips. This presentation will also offer data in support of the stability of the CNT tip for phase shift TMSFM.

  6. Evaluation of hydrocephalus patients with 3D-SPACE technique using variant FA mode at 3T.

    PubMed

    Algin, Oktay

    2018-06-01

    The major advantages of the three-dimensional sampling perfection with application-optimized contrasts using different flip-angle evolution (3D-SPACE) technique are its high resistance to artifacts arising from radiofrequency or static field inhomogeneities; its ability to provide images with sub-millimeter voxel size, with isotropic three-dimensional data that allow reformatted images in any plane at lower specific absorption rate values, which is crucial when examining complex cerebrospinal-fluid-containing structures; and its acquisition time of approximately 5 min for scanning the entire cranium. Recent data reveal that T2-weighted (T2W) 3D-SPACE with variant flip-angle mode (VFAM) imaging allows fast and accurate evaluation of hydrocephalus patients both pre-operatively and post-operatively for monitoring treatment. For a better assessment of these patients, radiologists and neurosurgeons should be aware of the details and implications of the 3D-SPACE technique, and they should follow the updates in this field. There could be a misconception about the difference between T2W-VFAM and routine heavily T2W 3D-SPACE images: T2W 3D-SPACE with VFAM imaging is only a subtype of the 3D-SPACE technique. In this review, we describe the details of T2W 3D-SPACE with VFAM imaging and comprehensively review its recent applications.

  7. A Rigid Image Registration Based on the Nonsubsampled Contourlet Transform and Genetic Algorithms

    PubMed Central

    Meskine, Fatiha; Chikr El Mezouar, Miloud; Taleb, Nasreddine

    2010-01-01

    Image registration is a fundamental task used in image processing to match two or more images taken at different times, from different sensors or from different viewpoints. The objective is to find in a huge search space of geometric transformations, an acceptable accurate solution in a reasonable time to provide better registered images. Exhaustive search is computationally expensive and the computational cost increases exponentially with the number of transformation parameters and the size of the data set. In this work, we present an efficient image registration algorithm that uses genetic algorithms within a multi-resolution framework based on the Non-Subsampled Contourlet Transform (NSCT). An adaptable genetic algorithm for registration is adopted in order to minimize the search space. This approach is used within a hybrid scheme applying the two techniques fitness sharing and elitism. Two NSCT based methods are proposed for registration. A comparative study is established between these methods and a wavelet based one. Because the NSCT is a shift-invariant multidirectional transform, the second method is adopted for its search speeding up property. Simulation results clearly show that both proposed techniques are really promising methods for image registration compared to the wavelet approach, while the second technique has led to the best performance results of all. Moreover, to demonstrate the effectiveness of these methods, these registration techniques have been successfully applied to register SPOT, IKONOS and Synthetic Aperture Radar (SAR) images. The algorithm has been shown to work perfectly well for multi-temporal satellite images as well, even in the presence of noise. PMID:22163672

  8. A rigid image registration based on the nonsubsampled contourlet transform and genetic algorithms.

    PubMed

    Meskine, Fatiha; Chikr El Mezouar, Miloud; Taleb, Nasreddine

    2010-01-01

    Image registration is a fundamental task used in image processing to match two or more images taken at different times, from different sensors or from different viewpoints. The objective is to find in a huge search space of geometric transformations, an acceptable accurate solution in a reasonable time to provide better registered images. Exhaustive search is computationally expensive and the computational cost increases exponentially with the number of transformation parameters and the size of the data set. In this work, we present an efficient image registration algorithm that uses genetic algorithms within a multi-resolution framework based on the Non-Subsampled Contourlet Transform (NSCT). An adaptable genetic algorithm for registration is adopted in order to minimize the search space. This approach is used within a hybrid scheme applying the two techniques fitness sharing and elitism. Two NSCT based methods are proposed for registration. A comparative study is established between these methods and a wavelet based one. Because the NSCT is a shift-invariant multidirectional transform, the second method is adopted for its search speeding up property. Simulation results clearly show that both proposed techniques are really promising methods for image registration compared to the wavelet approach, while the second technique has led to the best performance results of all. Moreover, to demonstrate the effectiveness of these methods, these registration techniques have been successfully applied to register SPOT, IKONOS and Synthetic Aperture Radar (SAR) images. The algorithm has been shown to work perfectly well for multi-temporal satellite images as well, even in the presence of noise.
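
    The sketch below shows the registration-by-genetic-algorithm idea on raw pixel intensities for a rigid (tx, ty, angle) transform. It is a toy, single-resolution version that omits the NSCT decomposition, fitness sharing, and the paper's specific GA operators, and all parameter values are assumptions; in a real use the fitness would be evaluated on multi-resolution transform coefficients rather than raw pixels.

    ```python
    import numpy as np
    from scipy import ndimage

    def transform(img, tx, ty, angle_deg):
        """Rigidly transform an image: rotate about its centre, then translate."""
        out = ndimage.rotate(img, angle_deg, reshape=False, order=1, mode="nearest")
        return ndimage.shift(out, (ty, tx), order=1, mode="nearest")

    def fitness(ref, mov, params):
        """Negative sum of squared differences (higher is better)."""
        tx, ty, ang = params
        return -np.sum((ref.astype(float) - transform(mov, tx, ty, ang)) ** 2)

    def ga_register(ref, mov, pop_size=30, n_gen=40, bounds=(-10, 10, -15, 15), seed=0):
        """Toy GA over rigid parameters (tx, ty, angle in degrees)."""
        rng = np.random.default_rng(seed)
        lo_t, hi_t, lo_a, hi_a = bounds
        pop = np.column_stack([rng.uniform(lo_t, hi_t, pop_size),
                               rng.uniform(lo_t, hi_t, pop_size),
                               rng.uniform(lo_a, hi_a, pop_size)])
        for _ in range(n_gen):
            fit = np.array([fitness(ref, mov, p) for p in pop])
            parents = pop[np.argsort(fit)[::-1][:pop_size // 2]]   # elitist selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                children.append((a + b) / 2 + rng.normal(0, [0.5, 0.5, 1.0]))  # crossover + mutation
            pop = np.vstack([parents, children])
        fit = np.array([fitness(ref, mov, p) for p in pop])
        return pop[np.argmax(fit)]                                  # best (tx, ty, angle)
    ```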

  9. Assessing the Altitude and Dispersion of Volcanic Plumes Using MISR Multi-angle Imaging from Space: Sixteen Years of Volcanic Activity in the Kamchatka Peninsula, Russia

    NASA Technical Reports Server (NTRS)

    Flower, Verity J. B.; Kahn, Ralph A.

    2017-01-01

    Volcanic eruptions represent a significant source of atmospheric aerosols and can display local, regional and global effects, impacting earth systems and human populations. In order to assess the relative impacts of these events, accurate plume injection altitude measurements are needed. In this work, volcanic plumes generated from seven Kamchatka Peninsula volcanoes (Shiveluch, Kliuchevskoi, Bezymianny, Tolbachik, Kizimen, Karymsky and Zhupanovsky) were identified using over 16 years of Multi-angle Imaging SpectroRadiometer (MISR) measurements. Eighty-eight volcanic plumes were observed by MISR, capturing 3-25% of reported events at individual volcanoes. Retrievals were most successful where high-intensity events persisted over a period of weeks to months. Compared with existing ground and airborne observations, and alternative satellite-based reports compiled by the Global Volcanism Program (GVP), MISR plume height retrievals showed general consistency; the comparison reports appear to be skewed towards the region of highest concentration observed in the MISR-constrained vertical plume extent. The report observations display less discrepancy with MISR toward the end of the analysis period, with improvements in the suborbital data likely the result of the deployment of new instrumentation. Conversely, the general consistency of MISR plume heights with conventionally reported observations supports the use of MISR in the ongoing assessment of volcanic activity globally, especially where other types of volcanic plume observations are unavailable. Differences between the northern (Shiveluch, Kliuchevskoi, Bezymianny and Tolbachik) and southern (Kizimen, Karymsky and Zhupanovsky) volcanoes broadly correspond to the Central Kamchatka Depression (CKD) and Eastern Volcanic Front (EVF), respectively, geological sub-regions of Kamchatka distinguished by varying magma composition. For example, by comparison with reanalysis-model simulations of local meteorological conditions, CKD plumes were generally less constrained by mid-tropospheric (< 6 km) layers of vertical stability above the boundary layer, suggesting that these eruptions were more energetic than those in the EVF.

  10. The Multi-Angle Imager for Aerosols (MAIA) Instrument, the Satellite-Based Element of an Investigation to Benefit Public Health

    NASA Astrophysics Data System (ADS)

    Diner, D. J.

    2016-12-01

    Maps of airborne particulate matter (PM) derived from satellite instruments, including MISR and MODIS, have provided key contributions to many health-related investigations. Although it is well established that PM exposure increases the risks of cardiovascular and respiratory disease, adverse birth outcomes, and premature deaths, our understanding of the relative toxicity of specific PM types—mixtures having different size distributions and compositions—is relatively poor. To address this, the Multi-Angle Imager for Aerosols (MAIA) investigation was proposed to NASA's third Earth Venture Instrument (EVI-3) solicitation. MAIA was selected for funding in March 2016. The satellite-based MAIA instrument is one element of the scientific investigation, which will combine WRF-Chem transport model estimates of the abundances of different aerosol types with the data acquired from Earth orbit. Geostatistical models derived from collocated surface and MAIA retrievals will be used to relate retrieved fractional column aerosol optical depths to near-surface concentrations of major PM constituents. Epidemiological analyses of geocoded birth, death, and hospital records will be used to associate exposure to PM types with adverse health outcomes. The MAIA instrument obtains its sensitivity to particle type by building upon the legacies of many satellite sensors; observing in the UV, visible, near-IR, and shortwave-IR regions of the electromagnetic spectrum; acquiring images at multiple angles of view; determining the degree to which the scattered light is polarized; and integrating these capabilities at moderately high spatial resolution. The instrument concept is based on the first and second generation Airborne Multiangle SpectroPolarimetric Imagers, AirMSPI and AirMSPI-2. MAIA incorporates a pair of pushbroom cameras on a two-axis gimbal to provide regional multiangle observations of selected, globally distributed target areas. A set of Primary Target Areas (PTAs) on five continents includes major population centers covering a range of PM concentrations and particle types. MAIA will also collect aerosol and cloud observations over regions of interest to the radiation science, climate, and environmental science communities. Launch of the MAIA instrument is planned for early in the next decade.

  11. Single Still Image

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This narrow angle image taken by Cassini's camera system of the Moon is one of the best of a sequence of narrow angle frames taken as the spacecraft passed by the Moon on the way to its closest approach with Earth on August 17, 1999. The 80 millisecond exposure was taken through a spectral filter centered at 0.33 microns; the filter bandpass was 85 Angstroms wide. The spatial scale of the image is about 1.4 miles per pixel (about 2.3 kilometers). The imaging data were processed and released by the Cassini Imaging Central Laboratory for Operations (CICLOPS) at the University of Arizona's Lunar and Planetary Laboratory, Tucson, AZ.

    Photo Credit: NASA/JPL/Cassini Imaging Team/University of Arizona

    Cassini, launched in 1997, is a joint mission of NASA, the European Space Agency and Italian Space Agency. The mission is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Space Science, Washington DC. JPL is a division of the California Institute of Technology, Pasadena, CA.

  12. Calibration Plans for the Multi-angle Imaging SpectroRadiometer (MISR)

    NASA Astrophysics Data System (ADS)

    Bruegge, C. J.; Duval, V. G.; Chrien, N. L.; Diner, D. J.

    1993-01-01

    The EOS Multi-angle Imaging SpectroRadiometer (MISR) will study the ecology and climate of the Earth through acquisition of global multi-angle imagery. The MISR employs nine discrete cameras, each a push-broom imager. Of these, four point forward, four point aft and one views the nadir. Absolute radiometric calibration will be obtained pre-flight using high quantum efficiency (HQE) detectors and an integrating sphere source. After launch, instrument calibration will be provided using HQE detectors in conjunction with deployable diffuse calibration panels. The panels will be deployed at time intervals of one month and used to direct sunlight into the cameras, filling their fields-of-view and providing through-the-optics calibration. Additional techniques will be utilized to reduce systematic errors, and provide continuity as the methodology changes with time. For example, radiation-resistant photodiodes will also be used to monitor panel radiant exitance. These data will be acquired throughout the five-year mission, to maintain calibration in the latter years when it is expected that the HQE diodes will have degraded. During the mission, it is planned that the MISR will conduct semi-annual ground calibration campaigns, utilizing field measurements and higher resolution sensors (aboard aircraft or in-orbit platforms) to provide a check of the on-board hardware. These ground calibration campaigns are limited in number, but are believed to be the key to the long-term maintenance of MISR radiometric calibration.

  13. A new star tracker concept for satellite attitude determination based on a multi-purpose panoramic camera

    NASA Astrophysics Data System (ADS)

    Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele; Pernechele, Claudio; Dionisio, Cesare

    2017-11-01

    This paper presents an innovative algorithm developed for attitude determination of a space platform. The algorithm exploits images taken from a multi-purpose panoramic camera equipped with hyper-hemispheric lens and used as star tracker. The sensor architecture is also original since state-of-the-art star trackers accurately image as many stars as possible within a narrow- or medium-size field-of-view, while the considered sensor observes an extremely large portion of the celestial sphere but its observation capabilities are limited by the features of the optical system. The proposed original approach combines algorithmic concepts, like template matching and point cloud registration, inherited from the computer vision and robotic research fields, to carry out star identification. The final aim is to provide a robust and reliable initial attitude solution (lost-in-space mode), with a satisfactory accuracy level in view of the multi-purpose functionality of the sensor and considering its limitations in terms of resolution and sensitivity. Performance evaluation is carried out within a simulation environment in which the panoramic camera operation is realistically reproduced, including perturbations in the imaged star pattern. Results show that the presented algorithm is able to estimate attitude with accuracy better than 1° with a success rate around 98% evaluated by densely covering the entire space of the parameters representing the camera pointing in the inertial space.

  14. Distinguishing Clouds from Ice over the East Siberian Sea, Russia

    NASA Technical Reports Server (NTRS)

    2002-01-01

    As a consequence of its capability to retrieve cloud-top elevations, stereoscopic observations from the Multi-angle Imaging SpectroRadiometer (MISR) can discriminate clouds from snow and ice. The central portion of Russia's East Siberian Sea, including one of the New Siberian Islands, Novaya Sibir, are portrayed in these views from data acquired on May 28, 2002.

    The left-hand image is a natural color view from MISR's nadir camera. On the right is a height field retrieved using automated computer processing of data from multiple MISR cameras. Although both clouds and ice appear white in the natural color view, the stereoscopic retrievals are able to identify elevated clouds based on the geometric parallax which results when they are observed from different angles. Owing to their elevation above sea level, clouds are mapped as green and yellow areas, whereas land, sea ice, and very low clouds appear blue and purple. Purple, in particular, denotes elevations very close to sea level. The island of Novaya Sibir is located in the lower left of the images. It can be identified in the natural color view as the dark area surrounded by an expanse of fast ice. In the stereo map the island appears as a blue region indicating its elevation of less than 100 meters above sea level. Areas where the automated stereo processing failed due to lack of sufficient spatial contrast are shown in dark gray. The northern edge of the Siberian mainland can be found at the very bottom of the panels, and is located a little over 250 kilometers south of Novaya Sibir. Pack ice containing numerous fragmented ice floes surrounds the fast ice, and narrow areas of open ocean are visible.

    The East Siberian Sea is part of the Arctic Ocean and is ice-covered most of the year. The New Siberian Islands are almost always covered by snow and ice, and tundra vegetation is very scant. Despite continuous sunlight from the end of April until the middle of August, the ice between the island and the mainland typically remains until August or September.

    The Multi-angle Imaging SpectroRadiometer views almost the entire Earth every 9 days. These images were acquired during Terra orbit 12986 and cover an area of about 380 kilometers x 1117 kilometers. They utilize data from blocks 24 to 32 within World Reference System-2 path 117.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  15. The algorithm of motion blur image restoration based on PSF half-blind estimation

    NASA Astrophysics Data System (ADS)

    Chen, Da-Ke; Lin, Zhe

    2011-08-01

    A novel algorithm for motion-blurred image restoration, based on half-blind estimation of the point spread function (PSF) with the Hough transform, is introduced on the basis of a full analysis of the principle of the TDICCD camera, addressing the problem that using a vertical uniform linear motion estimate as the initial PSF value in the IBD algorithm leads to distortion in the restored image. Firstly, the mathematical model of image degradation is established using the a priori information of multi-frame images, and two parameters (motion blur length and angle) that have a crucial influence on PSF estimation are then determined accordingly. Finally, the restored image is obtained through multiple iterations in the Fourier domain starting from the initial PSF estimate obtained by the above method. Experimental results show that the proposed algorithm not only effectively solves the image distortion problem caused by relative motion between the TDICCD camera and moving objects, but also clearly restores the detailed characteristics of the original image.
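
    As context for the two PSF parameters the paper estimates, the sketch below builds a linear motion-blur PSF from a (length, angle) pair and applies a plain Fourier-domain Wiener restoration. The Hough-transform estimation of those parameters and the paper's iterative PSF refinement are not implemented, and the constant noise-to-signal parameter `k` is an assumption.

    ```python
    import numpy as np

    def motion_psf(length, angle_deg, shape):
        """Linear motion-blur PSF of given length [px] and angle, embedded in `shape`."""
        psf = np.zeros(shape)
        cy, cx = shape[0] // 2, shape[1] // 2
        for s in np.linspace(-(length - 1) / 2, (length - 1) / 2, int(length) * 4):
            y = int(round(cy + s * np.sin(np.radians(angle_deg))))
            x = int(round(cx + s * np.cos(np.radians(angle_deg))))
            if 0 <= y < shape[0] and 0 <= x < shape[1]:
                psf[y, x] = 1.0
        return psf / psf.sum()

    def wiener_restore(blurred, length, angle_deg, k=0.01):
        """Fourier-domain Wiener restoration given (length, angle) estimates for the PSF."""
        H = np.fft.fft2(np.fft.ifftshift(motion_psf(length, angle_deg, blurred.shape)))
        G = np.fft.fft2(blurred)
        F_hat = np.conj(H) / (np.abs(H) ** 2 + k) * G
        return np.real(np.fft.ifft2(F_hat))
    ```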

  16. Coherent Doppler lidar for automated space vehicle rendezvous, stationkeeping and capture

    NASA Technical Reports Server (NTRS)

    Bilbro, James A.

    1991-01-01

    The inherent spatial resolution of laser radar makes ladar or lidar an attractive candidate for Automated Rendezvous and Capture application. Previous applications were based on incoherent lidar techniques, requiring retro-reflectors on the target vehicle. Technology improvements (reduced size, no cryogenic cooling requirement) have greatly enhanced the construction of coherent lidar systems. Coherent lidar permits the acquisition of non-cooperative targets at ranges that are limited by the detection capability rather than by the signal-to-noise ratio (SNR) requirements. The sensor can provide translational state information (range, velocity, and angle) by direct measurement and, when used with any array detector, also can provide attitude information by Doppler imaging techniques. Identification of the target is accomplished by scanning with a high pulse repetition frequency (dependent on the SNR). The system performance is independent of range and should not be constrained by sun angle. An initial effort to characterize a multi-element detection system has resulted in a system that is expected to work to a minimum range of 1 meter. The system size, weight and power requirements are dependent on the operating range; 10 km range requires a diameter of 3 centimeters with overall size at 3 x 3 x 15 to 30 cm, while 100 km range requires a 30 cm diameter.

  17. The Density-wave Theory and Spiral Structures by Looking at Spiral Arms through a Multi-wavelength Study

    NASA Astrophysics Data System (ADS)

    Pour-Imani, Hamed; Kennefick, Daniel; Kennefick, Julia; Shameer Abdeen, Mohammad; Monson, Erick; Shields, Douglas William; Davis, Benjamin L.

    2018-01-01

    The density-wave theory of spiral structure, though first proposed as long ago as the mid-1960s by C.C. Lin and F. Shu, continues to be challenged by rival theories, such as the manifold theory. One proposed test between these theories is that, in the density-wave theory, the pitch angle of a galaxy's spiral arms should vary with the wavelength of the image, whereas in the manifold theory it should not. The reason is that stars are born in the density wave but move out of it as they age. In this study, we combined a large sample size with a wide range of wavelengths to investigate this issue. For each galaxy, we used images in the FUV (151 nm), U-band, H-alpha, optical B-band, and infrared 3.6 and 8.0 μm. We measured the pitch angle with the 2DFFT and Spirality codes (Davis et al. 2012; Shields et al. 2015). We find that the B-band and 3.6 μm images have smaller pitch angles than the infrared 8.0 μm image in all cases, in agreement with the prediction of the density-wave theory. We also find that the pitch angles at FUV and H-alpha are close to the measurements made at 8.0 μm. The far-ultraviolet band at 151 nm shows very young, very bright UV stars still in the star-forming region (they are so bright as to be visible there and so short-lived that they never move out of it). We find that for both sets of measurements (2DFFT and Spirality) the 8.0 μm, H-alpha and ultraviolet images agree in their pitch angle measurements, suggesting that they are, in fact, sensitive to the same region. By contrast, the 3.6 μm and B-band images are uniformly tighter in pitch angle than these wavelengths, suggesting that the density-wave picture is correct.

  18. Multi-Shot Sensitivity-Encoded Diffusion Data Recovery Using Structured Low-Rank Matrix Completion (MUSSELS)

    PubMed Central

    Mani, Merry; Jacob, Mathews; Kelley, Douglas; Magnotta, Vincent

    2017-01-01

    Purpose To introduce a novel method for the recovery of multi-shot diffusion weighted (MS-DW) images from echo-planar imaging (EPI) acquisitions. Methods Current EPI-based MS-DW reconstruction methods rely on the explicit estimation of the motion-induced phase maps to recover artifact-free images. In the new formulation, the k-space data of the artifact-free DWI is recovered using a structured low-rank matrix completion scheme, which does not require explicit estimation of the phase maps. The structured matrix is obtained as the lifting of the multi-shot data. The smooth phase-modulations between shots manifest as null-space vectors of this matrix, which implies that the structured matrix is low-rank. The missing entries of the structured matrix are filled in using a nuclear-norm minimization algorithm subject to the data-consistency. The formulation enables the natural introduction of smoothness regularization, thus enabling implicit motion-compensated recovery of the MS-DW data. Results Our experiments on in-vivo data show effective removal of artifacts arising from inter-shot motion using the proposed method. The method is shown to achieve better reconstruction than the conventional phase-based methods. Conclusion We demonstrate the utility of the proposed method to effectively recover artifact-free images from Cartesian fully/under-sampled and partial Fourier acquired data without the use of explicit phase estimates. PMID:27550212
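
    The sketch below shows only the low-rank projection at the heart of such schemes, applied to a matrix whose rows are the individual shots. MUSSELS itself lifts the data into a block-Hankel structured matrix and alternates this projection with data-consistency updates, which are omitted here; the threshold value and array layout are assumptions.

    ```python
    import numpy as np

    def low_rank_project(kspace_shots, threshold):
        """Soft-threshold the singular values of a shot-stacked k-space matrix.

        kspace_shots : (n_shots, n_kspace) complex k-space data, one row per shot
        threshold    : singular-value soft threshold
        The smooth inter-shot phase modulations make this matrix (approximately)
        low rank, so shrinking its small singular values suppresses shot-to-shot
        phase inconsistency.
        """
        U, s, Vh = np.linalg.svd(kspace_shots, full_matrices=False)
        s = np.maximum(s - threshold, 0.0)
        return (U * s) @ Vh
    ```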

  19. Optimization and validation of accelerated golden-angle radial sparse MRI reconstruction with self-calibrating GRAPPA operator gridding.

    PubMed

    Benkert, Thomas; Tian, Ye; Huang, Chenchan; DiBella, Edward V R; Chandarana, Hersh; Feng, Li

    2018-07-01

    Golden-angle radial sparse parallel (GRASP) MRI reconstruction requires gridding and regridding to transform data between radial and Cartesian k-space. These operations are repeatedly performed in each iteration, which makes the reconstruction computationally demanding. This work aimed to accelerate GRASP reconstruction using self-calibrating GRAPPA operator gridding (GROG) and to validate its performance in clinical imaging. GROG is an alternative gridding approach based on parallel imaging, in which k-space data acquired on a non-Cartesian grid are shifted onto a Cartesian k-space grid using information from multicoil arrays. For iterative non-Cartesian image reconstruction, GROG is performed only once as a preprocessing step. Therefore, the subsequent iterative reconstruction can be performed directly in Cartesian space, which significantly reduces the computational burden. Here, a framework combining GROG with GRASP (GROG-GRASP) is first optimized and then compared with standard GRASP reconstruction in 22 prostate patients. GROG-GRASP achieved an approximately 4.2-fold reduction in reconstruction time compared with GRASP (approximately 333 min for GRASP versus 78 min for GROG-GRASP) while maintaining image quality (structural similarity index ≈ 0.97 and root mean square error ≈ 0.007). Visual image quality assessment by two experienced radiologists did not show significant differences between the two reconstruction schemes. With a graphics processing unit implementation, image reconstruction time can be further reduced to approximately 14 min. GRASP reconstruction can thus be substantially accelerated using GROG. This framework is promising toward broader clinical application of GRASP and other iterative non-Cartesian reconstruction methods. Magn Reson Med 80:286-293, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
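
    The GROG step itself can be sketched in a few lines: a k-space sample acquired off the Cartesian grid is shifted onto the nearest grid point by applying fractional powers of unit-shift GRAPPA operators along each axis. The snippet below is conceptual only; `gx` and `gy` are assumed to be coil-by-coil unit-shift operators calibrated beforehand, and an eigendecomposition stands in for a numerically robust matrix fractional power.

    ```python
    import numpy as np

    def grog_shift(sample, gx, gy, dx, dy):
        # Shift one multicoil k-space sample (vector of coil values) by the
        # fractional grid offsets (dx, dy) using the GRAPPA operators gx, gy.
        def frac_power(G, d):
            w, V = np.linalg.eig(G)
            return V @ np.diag(w.astype(complex) ** d) @ np.linalg.inv(V)
        return frac_power(gx, dx) @ (frac_power(gy, dy) @ sample)
    ```

    Because this shift is applied once, as a preprocessing step, every subsequent GRASP iteration can rely on plain Cartesian FFTs instead of repeated gridding and regridding, which is where the reported speed-up comes from.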

  20. An Overview of SIMBIOS Program Activities and Accomplishments. Chapter 1

    NASA Technical Reports Server (NTRS)

    Fargion, Giulietta S.; McClain, Charles R.

    2003-01-01

    The SIMBIOS Program was conceived in 1994 as a result of a NASA management review of the agency's strategy for monitoring the bio-optical properties of the global ocean through space-based ocean color remote sensing. At that time, the NASA ocean color flight manifest included two data buy missions, the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) and Earth Observing System (EOS) Color, and three sensors, two Moderate Resolution Imaging Spectroradiometers (MODIS) and the Multi-angle Imaging Spectro-Radiometer (MISR), scheduled for flight on the EOS-Terra and EOS-Aqua satellites. The review led to a decision that the international assemblage of ocean color satellite systems provided ample redundancy to assure continuous global coverage, with no need for the EOS Color mission. At the same time, it was noted that non-trivial technical difficulties attended the challenge (and opportunity) of combining ocean color data from this array of independent satellite systems to form consistent and accurate global bio-optical time series products. Thus, it was announced at the October 1994 EOS Interdisciplinary Working Group meeting that some of the resources budgeted for EOS Color should be redirected into an intercalibration and validation program (McClain et al., 2002).

  1. MISR Views Northern Australia

    NASA Technical Reports Server (NTRS)

    2000-01-01

    MISR images of tropical northern Australia acquired on June 1, 2000 (Terra orbit 2413) during the long dry season. Left: color composite of vertical (nadir) camera blue, green, and red band data. Right: multi-angle composite of red band data only from the cameras viewing 60 degrees aft, 60 degrees forward, and nadir. Color and contrast have been enhanced to accentuate subtle details. In the left image, color variations indicate how different parts of the scene reflect light differently at blue, green, and red wavelengths; in the right image color variations show how these same scene elements reflect light differently at different angles of view. Water appears in blue shades in the right image, for example, because glitter makes the water look brighter at the aft camera's view angle. The prominent inland water body is Lake Argyle, the largest human-made lake in Australia, which supplies water for the Ord River Irrigation Area and the town of Kununurra (pop. 6500) just to the north. At the top is the southern edge of Joseph Bonaparte Gulf; the major inlet at the left is Cambridge Gulf, the location of the town of Wyndham (pop. 850), the port for this region. This area is sparsely populated, and is known for its remote, spectacular mountains and gorges. Visible along much of the coastline are intertidal mudflats of mangroves and low shrubs; to the south the terrain is covered by open woodland merging into open grassland in the lower half of the pictures.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  2. Comparison of Orbiter PRCS Plume Flow Fields Using CFD and Modified Source Flow Codes

    NASA Technical Reports Server (NTRS)

    Rochelle, Wm. C.; Kinsey, Robin E.; Reid, Ethan A.; Stuart, Phillip C.; Lumpkin, Forrest E.

    1997-01-01

    The Space Shuttle Orbiter will use Reaction Control System (RCS) jets for docking with the planned International Space Station (ISS). During approach and backout maneuvers, plumes from these jets could cause high pressure, heating, and thermal loads on ISS components. The objective of this paper is to present comparisons of RCS plume flow fields used to calculate these ISS environments. Because of the complexities of 3-D plumes with variable scarf-angle and multi-jet combinations, NASA/JSC developed a plume flow-field methodology for all of these Orbiter jets. The RCS Plume Model (RPM), which includes effects of scarfed nozzles and dual jets, was developed as a modified source-flow engineering tool to rapidly generate plume properties and impingement environments on ISS components. This paper presents flow-field properties from four PRCS jets: F3U low scarf-angle single jet, F3F high scarf-angle single jet, DTU zero scarf-angle dual jet, and F1F/F2F high scarf-angle dual jet. The RPM results compared well with plume flow fields computed using four CFD programs: General Aerodynamic Simulation Program (GASP), Cartesian (CART), Unified Solution Algorithm (USA), and Reacting and Multi-phase Program (RAMP). Good comparisons of predicted pressures are shown with STS 64 Shuttle Plume Impingement Flight Experiment (SPIFEX) data.

  3. Design of a concise Féry-prism hyperspectral imaging system based on multi-configuration

    NASA Astrophysics Data System (ADS)

    Dong, Wei; Nie, Yun-feng; Zhou, Jin-song

    2013-08-01

    In order to meet the needs of space-borne and airborne hyperspectral imaging systems for light weight, simplicity and high spatial resolution, a novel design of a Féry-prism hyperspectral imaging system based on the Zemax multi-configuration method is presented. The structure is well arranged by analyzing optical monochromatic aberrations theoretically, and the resulting optical layout is concise. The design is based on an Offner relay configuration in which the secondary mirror is replaced by a Féry prism with curved surfaces and a reflective front face. By reflection, the light beam passes through the Féry prism twice, which improves spectral resolution and enhances image quality at the same time. The result shows that the system achieves light weight and simplicity compared to other hyperspectral imaging systems. Composed of merely two spherical mirrors and one achromatized Féry prism performing both dispersion and imaging functions, the structure is concise and compact. The average spectral resolution is 6.2 nm; the MTFs over the 0.45-1.00 μm spectral range are greater than 0.75 and the RMS values are less than 2.4 μm; the maximal smile is less than 10% of a pixel, while the keystone is less than 2.8% of a pixel; image quality approaches the diffraction limit. The design result shows that a hyperspectral imaging system with one modified Féry prism substituting for the secondary mirror of an Offner relay configuration is feasible from the perspective of both theory and practice, and possesses the merits of a simple structure, convenient optical alignment, good image quality, high spatial and spectral resolution, and adjustable dispersive nonlinearity. The system satisfies the requirements of airborne or space-borne hyperspectral imaging systems.

  4. Optical system design with wide field of view and high resolution based on monocentric multi-scale construction

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Wang, Hu; Xiao, Nan; Shen, Yang; Xue, Yaoke

    2018-03-01

    With the gradual maturation of related technology in the field of optoelectronic information, there is great demand for optical systems with both high resolution and a wide field of view (FOV). However, as illustrated in conventional applied optics, there is a contradiction between these two characteristics: the FOV and imaging resolution limit each other. Here, based on a study of typical wide-FOV optical system designs, we propose the monocentric multi-scale design method to solve this problem. Consisting of a concentric spherical lens and a series of micro-lens arrays, this system offers an effective improvement in imaging quality. As an example, we designed a typical imaging system with a focal length of 35 mm, an instantaneous field angle of 14.7", and a full FOV of 120°. Analysis of the imaging quality demonstrates that, across the FOV, all MTF values at the Nyquist sampling frequency of 200 lp/mm are higher than 0.4, in good accordance with the design goals.

  5. Low Clouds

    Atmospheric Science Data Center

    2013-04-19

    article title:  Indian Ocean Clouds ... Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's polar-orbiting Terra spacecraft. The area covered by the image is 247.5 ... during the last decade. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Science Mission ...

  6. Implementation of a rotational ultrasound biomicroscopy system equipped with a high-frequency angled needle transducer--ex vivo ultrasound imaging of porcine ocular posterior tissues.

    PubMed

    Bok, Tae-Hoon; Kim, Juho; Bae, Jinho; Lee, Chong Hyun; Paeng, Dong-Guk

    2014-09-24

    The mechanical scanning of a single-element transducer has been mostly utilized for high-frequency ultrasound imaging. However, it requires space for the mechanical motion of the transducer. In this paper, a rotational scanning ultrasound biomicroscopy (UBM) system equipped with a high-frequency angled needle transducer is designed and implemented in order to minimize the space required. It was applied to ex vivo ultrasound imaging of porcine posterior ocular tissues through a minimal incision hole of 1 mm in diameter. The retina and sclera of one eye were visualized in the relative rotation angle range of 270°~330° and at a distance range of 6~7 mm, whereas the tissues of the other eye were observed in a relative angle range of 160°~220° and at a distance range of 7.5~9 mm. The layer between retina and sclera appeared to be bent because the distance between the transducer tip and the layer varied while the transducer was rotated. Certain features of the rotation system, such as the optimal scanning angle, step angle and data length, need to be improved to ensure higher accuracy and precision. Moreover, the focal length should be considered for the image quality. This implementation represents the first report of a rotational scanning UBM system.

  7. Implementation of a Rotational Ultrasound Biomicroscopy System Equipped with a High-Frequency Angled Needle Transducer — Ex Vivo Ultrasound Imaging of Porcine Ocular Posterior Tissues

    PubMed Central

    Bok, Tae-Hoon; Kim, Juho; Bae, Jinho; Lee, Chong Hyun; Paeng, Dong-Guk

    2014-01-01

    The mechanical scanning of a single-element transducer has been mostly utilized for high-frequency ultrasound imaging. However, it requires space for the mechanical motion of the transducer. In this paper, a rotational scanning ultrasound biomicroscopy (UBM) system equipped with a high-frequency angled needle transducer is designed and implemented in order to minimize the space required. It was applied to ex vivo ultrasound imaging of porcine posterior ocular tissues through a minimal incision hole of 1 mm in diameter. The retina and sclera of one eye were visualized in the relative rotation angle range of 270° ∼ 330° and at a distance range of 6 ∼ 7 mm, whereas the tissues of the other eye were observed in a relative angle range of 160° ∼ 220° and at a distance range of 7.5 ∼ 9 mm. The layer between retina and sclera appeared to be bent because the distance between the transducer tip and the layer varied while the transducer was rotated. Certain features of the rotation system, such as the optimal scanning angle, step angle and data length, need to be improved to ensure higher accuracy and precision. Moreover, the focal length should be considered for the image quality. This implementation represents the first report of a rotational scanning UBM system. PMID:25254305

  8. Direct Parametric Image Reconstruction in Reduced Parameter Space for Rapid Multi-Tracer PET Imaging.

    PubMed

    Cheng, Xiaoyin; Li, Zhoulei; Liu, Zhen; Navab, Nassir; Huang, Sung-Cheng; Keller, Ulrich; Ziegler, Sibylle; Shi, Kuangyu

    2015-02-12

    The separation of multiple PET tracers within an overlapping scan based on intrinsic differences of tracer pharmacokinetics is challenging, due to the limited signal-to-noise ratio (SNR) of PET measurements and the high complexity of fitting models. In this study, we developed a direct parametric image reconstruction (DPIR) method for estimating kinetic parameters and recovering single-tracer information from rapid multi-tracer PET measurements. This is achieved by integrating a multi-tracer model in a reduced parameter space (RPS) into dynamic image reconstruction. This new RPS model is reformulated from an existing multi-tracer model and contains fewer parameters for kinetic fitting. Ordered-subsets expectation-maximization (OSEM) was employed to approximate the log-likelihood function with respect to the kinetic parameters. To incorporate the multi-tracer model, an iterative weighted nonlinear least squares (WNLS) method was employed. The proposed multi-tracer DPIR (MT-DPIR) algorithm was evaluated on dual-tracer PET simulations ([18F]FDG and [11C]MET) as well as on preclinical PET measurements ([18F]FLT and [18F]FDG). The performance of the proposed algorithm was compared to the indirect parameter estimation method with the original dual-tracer model. The respective contributions of the RPS technique and the DPIR method to the performance of the new algorithm were analyzed in detail. For the preclinical evaluation, the tracer separation results were compared with single [18F]FDG scans of the same subjects measured 2 days before the dual-tracer scan. The results of the simulation and preclinical studies demonstrate that the proposed MT-DPIR method can improve the separation of multiple tracers for PET image quantification and kinetic parameter estimation.
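
    The separation principle can be illustrated with a toy model: if each tracer's tissue response is approximated by a known kinetic signature and the second tracer is injected after a fixed delay, the combined time-activity curve can be split by weighted linear least squares. This is only a sketch of the idea, not the MT-DPIR algorithm; the single-exponential signatures, rate constants and helper name below are hypothetical.

    ```python
    import numpy as np

    def separate_two_tracers(tac, t, k1=0.10, k2=0.02, delay=20.0, weights=None):
        # tac: measured dual-tracer time-activity curve sampled at times t (min);
        # tracer 2 is assumed to be injected 'delay' minutes after tracer 1.
        g1 = np.exp(-k1 * t)                                        # signature of tracer 1
        g2 = np.where(t >= delay, np.exp(-k2 * (t - delay)), 0.0)   # signature of tracer 2
        A = np.stack([g1, g2], axis=1)
        w = np.ones_like(t) if weights is None else weights
        a, *_ = np.linalg.lstsq(A * w[:, None], tac * w, rcond=None)  # weighted fit
        return a[0] * g1, a[1] * g2                                 # recovered single-tracer curves
    ```

    In the actual method the fit is nonlinear, runs inside the reconstruction, and uses the reduced-parameter kinetic model rather than fixed signatures.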

  9. Analytical Bistatic k Space Images Compared to Experimental Swept Frequency EAR Images

    NASA Technical Reports Server (NTRS)

    Shaeffer, John; Cooper, Brett; Hom, Kam

    2004-01-01

    A case study of flat plate scattering images obtained by the analytical bistatic k-space and experimental swept-frequency ISAR methods is presented. The key advantage of the bistatic k-space image is that only a single excitation is required, i.e., one frequency / one angle. This means that prediction approaches such as MOM only need to compute one solution at a single frequency. Bistatic image Fourier-transform data are obtained by computing the scattered field at various bistatic positions about the body in k-space. Experimental image Fourier-transform data are obtained from the measured response to a bandwidth of frequencies over a target rotation range.
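
    Under the Born approximation, each bistatic scattered-field sample taken with incident direction i_hat and scattering direction s_hat at wavenumber k corresponds to one spatial-frequency component of the target reflectivity at K = k (s_hat - i_hat), so an image can be formed by a direct inverse Fourier sum over the sampled K points. The two-dimensional toy below illustrates that mapping; it is a sketch with hypothetical argument names, not the paper's code.

    ```python
    import numpy as np

    def bistatic_kspace_image(E, scat_angles, k, inc_angle, grid=128, extent=1.0):
        # E: complex scattered-field samples at bistatic angles scat_angles (rad),
        # all at a single wavenumber k and a single incident direction inc_angle.
        i_hat = np.array([np.cos(inc_angle), np.sin(inc_angle)])
        s_hat = np.stack([np.cos(scat_angles), np.sin(scat_angles)], axis=1)
        K = k * (s_hat - i_hat)                      # sampled spatial frequencies
        xs = np.linspace(-extent, extent, grid)
        X, Y = np.meshgrid(xs, xs)
        img = np.zeros_like(X, dtype=complex)
        for En, Kn in zip(E, K):                     # direct (slow) inverse Fourier sum
            img += En * np.exp(1j * (Kn[0] * X + Kn[1] * Y))
        return np.abs(img) / len(E)
    ```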

  10. Impacts of Cross-Platform Vicarious Calibration on the Deep Blue Aerosol Retrievals for Moderate Resolution Imaging Spectroradiometer Aboard Terra

    NASA Technical Reports Server (NTRS)

    Jeong, Myeong-Jae; Hsu, N. Christina; Kwiatkowska, Ewa J.; Franz, Bryan A.; Meister, Gerhard; Salustro, Clare E.

    2012-01-01

    The retrieval of aerosol properties from spaceborne sensors requires highly accurate and precise radiometric measurements, thus placing stringent requirements on sensor calibration and characterization. For the Terra/Moderate Resolution Imaging Spectroradiometer (MODIS), the characteristics of the detectors of certain bands, particularly band 8 (B8; 412 nm), have changed significantly over time, leading to increased calibration uncertainty. In this paper, we explore the possibility of utilizing a cross-calibration method, developed for characterizing the Terra/MODIS detectors in the ocean bands by the National Aeronautics and Space Administration Ocean Biology Processing Group, to improve aerosol retrieval over bright land surfaces. We found that the Terra/MODIS B8 reflectance corrected using the cross-calibration method resulted in significant improvements in the retrieved aerosol optical thickness when compared with that from the Multi-angle Imaging SpectroRadiometer, Aqua/MODIS, and the Aerosol Robotic Network. The method reported in this paper is implemented for the operational processing of the Terra/MODIS Deep Blue aerosol products.

  11. Material identification in x-ray microscopy and micro CT using multi-layer, multi-color scintillation detectors

    PubMed Central

    Modgil, Dimple; Rigie, David S.; Wang, Yuxin; Xiao, Xianghui; Vargas, Phillip A.; La Rivière, Patrick J.

    2015-01-01

    We demonstrate that a dual-layer, dual-color scintillator construct for microscopic CT, originally proposed to increase sensitivity in synchrotron imaging, can also be used to perform material quantification and classification when coupled with polychromatic illumination. We consider two different approaches to data handling: (1) a data-domain material decomposition whose estimation performance can be characterized by the Cramer-Rao lower bound formalism but which requires careful calibration and (2) an image-domain material classification approach that is more robust to calibration errors. The data-domain analysis indicates that useful levels of SNR (>5) could be achieved in one second or less at typical bending magnet fluxes for relatively large amounts of contrast (several mm path length, such as in a fluid flow experiment) and at typical undulator fluxes for small amounts of contrast (tens of microns path length, such as in an angiography experiment). The tools introduced could of course be used to study and optimize parameters for a wider range of potential applications. The image domain approach was analyzed in terms of its ability to distinguish different elemental stains by characterizing the angle between the lines traced out in a two-dimensional space of effective attenuation coefficient in the front and back layer images. This approach was implemented at a synchrotron and the results were consistent with simulation predictions. PMID:26422059
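
    In its simplest linearized form, the data-domain decomposition amounts to inverting a small system of Beer-Lambert equations: each scintillator layer sees the object through a different effective spectrum, so the two log-attenuation measurements can be solved for the path lengths of two basis materials. This is only a sketch; the numbers below are placeholders, not calibrated values, and the paper's actual analysis uses a polychromatic model characterized with the Cramer-Rao bound.

    ```python
    import numpy as np

    # effective attenuation coefficients (1/mm) of materials 1 and 2 as seen by
    # the front and back scintillator layers (placeholder values)
    mu = np.array([[0.030, 0.012],    # front layer: [material 1, material 2]
                   [0.018, 0.009]])   # back layer
    I0 = np.array([1.0e6, 4.0e5])     # open-beam signal in each layer
    I = np.array([2.4e5, 1.5e5])      # measured signal behind the object

    # linearized Beer-Lambert model: -ln(I / I0) = mu @ t; solve for path lengths t
    t = np.linalg.solve(mu, -np.log(I / I0))
    print("material path lengths (mm):", t)
    ```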

  12. Material identification in x-ray microscopy and micro CT using multi-layer, multi-color scintillation detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Modgil, Dimple; Rigie, David S.; Wang, Yuxin

    We demonstrate that a dual-layer, dual-color scintillator construct for microscopic CT, originally proposed to increase sensitivity in synchrotron imaging, can also be used to perform material quantification and classification when coupled with polychromatic illumination. We consider two different approaches to data handling: (1) a data-domain material decomposition whose estimation performance can be characterized by the Cramer-Rao lower bound formalism but which requires careful calibration and (2) an image-domain material classification approach that is more robust to calibration errors. The data-domain analysis indicates that useful levels of SNR (>5) could be achieved in one second or less at typical bending magnet fluxes for relatively large amounts of contrast (several mm path length, such as in a fluid flow experiment) and at typical undulator fluxes for small amounts of contrast (tens of microns path length, such as in an angiography experiment). The tools introduced could of course be used to study and optimize parameters for a wider range of potential applications. The image domain approach was analyzed in terms of its ability to distinguish different elemental stains by characterizing the angle between the lines traced out in a two-dimensional space of effective attenuation coefficient in the front and back layer images. This approach was implemented at a synchrotron and the results were consistent with simulation predictions.

  13. Material identification in x-ray microscopy and micro CT using multi-layer, multi-color scintillation detectors

    DOE PAGES

    Modgil, Dimple; Rigie, David S.; Wang, Yuxin; ...

    2015-09-30

    We demonstrate that a dual-layer, dual-color scintillator construct for microscopic CT, originally proposed to increase sensitivity in synchrotron imaging, can also be used to perform material quantification and classification when coupled with polychromatic illumination. We consider two different approaches to data handling: (1) a data-domain material decomposition whose estimation performance can be characterized by the Cramer-Rao lower bound formalism but which requires careful calibration and (2) an image-domain material classification approach that is more robust to calibration errors. The data-domain analysis indicates that useful levels of SNR (>5) could be achieved in one second or less at typical bending magnet fluxes for relatively large amounts of contrast (several mm path length, such as in a fluid flow experiment) and at typical undulator fluxes for small amounts of contrast (tens of microns path length, such as in an angiography experiment). The tools introduced could of course be used to study and optimize parameters for a wider range of potential applications. The image domain approach was analyzed in terms of its ability to distinguish different elemental stains by characterizing the angle between the lines traced out in a two-dimensional space of effective attenuation coefficient in the front and back layer images. This approach was implemented at a synchrotron and the results were consistent with simulation predictions.

  14. Multi-pinhole collimator design for small-object imaging with SiliSPECT: a high-resolution SPECT

    NASA Astrophysics Data System (ADS)

    Shokouhi, S.; Metzler, S. D.; Wilson, D. W.; Peterson, T. E.

    2009-01-01

    We have designed a multi-pinhole collimator for a dual-headed, stationary SPECT system that incorporates high-resolution silicon double-sided strip detectors. The compact camera design of our system enables imaging at source-collimator distances between 20 and 30 mm. Our analytical calculations show that using knife-edge pinholes with small opening angles or cylindrically shaped pinholes in a focused, multi-pinhole configuration in combination with this camera geometry can generate narrow sensitivity profiles across the field of view that can be useful for imaging small objects at high sensitivity and resolution. The current prototype system uses two collimators, each containing 127 cylindrically shaped pinholes that are focused toward a target volume. Our goal is to image small objects such as a mouse brain, which could find applications in molecular imaging.

  15. Recovery of a spectrum based on a compressive-sensing algorithm with weighted principal component analysis

    NASA Astrophysics Data System (ADS)

    Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang

    2017-07-01

    The purpose of this study is to improve reconstruction precision and to better reproduce the colors of spectral image surfaces. A new spectral reflectance reconstruction algorithm based on iterative thresholding combined with a weighted principal component space is presented in this paper, with the principal components weighted by visual features serving as the sparse basis. Different numbers of color cards are selected as the training samples, a multispectral image is the testing sample, and the color differences of the reconstructions are compared. The channel response values are obtained with a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on the weighted principal component space is superior in performance to that based on the traditional principal component space. Therefore, the color difference obtained using the compressive-sensing algorithm with weighted principal component analysis is less than that obtained using the algorithm with traditional principal component analysis, and better reconstructed color consistency with human eye vision is achieved.
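
    The recovery step described above can be sketched as iterative soft thresholding in the weighted principal-component basis: the reflectance is written as x = B c, where B holds the visually weighted principal components, and the coefficients c are estimated from the camera channel responses y = A x with a sparsity penalty. The snippet is a minimal sketch with hypothetical names; A and B are assumed to be known from calibration and training, and it is not the authors' implementation.

    ```python
    import numpy as np

    def ista_weighted_pca(A, B, y, lam=1e-3, n_iter=200):
        # Recover spectral reflectance x = B @ c from channel responses y = A @ x,
        # with an L1 prior on the weighted principal-component coefficients c.
        M = A @ B                                    # combined sensing matrix
        step = 1.0 / np.linalg.norm(M, 2) ** 2       # 1 / Lipschitz constant of the data term
        c = np.zeros(B.shape[1])
        for _ in range(n_iter):
            z = c - step * (M.T @ (M @ c - y))       # gradient step on the data term
            c = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
        return B @ c                                 # reconstructed reflectance
    ```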

  16. Reducing the uncertainty in the fidelity of seismic imaging results

    NASA Astrophysics Data System (ADS)

    Zhou, H. W.; Zou, Z.

    2017-12-01

    A key aspect in geoscientific inversion is quantifying the quality of the results. In seismic imaging, we must quantify the uncertainty of every imaging result based on field data, because data noise and methodology limitations may produce artifacts. Detection of artifacts is therefore an important aspect in uncertainty quantification in geoscientific inversion. Quantifying the uncertainty of seismic imaging solutions means assessing their fidelity, which defines the truthfulness of the imaged targets in terms of their resolution, position error and artifact. Key challenges to achieving the fidelity of seismic imaging include: (1) the difficulty of telling signal from artifact and noise; (2) limitations in signal-to-noise ratio and seismic illumination; and (3) the multi-scale nature of the data space and model space. Most seismic imaging studies of the Earth's crust and mantle have employed inversion or modeling approaches. Though they are in opposite directions of mapping between the data space and model space, both inversion and modeling seek the best model to minimize the misfit in the data space, which unfortunately is not the output space. The fact that the selection and uncertainty of the output model are not judged in the output space has exacerbated the nonuniqueness problem for inversion and modeling. In contrast, the practice in exploration seismology has long established a two-fold approach to seismic imaging: using velocity model building to establish the long-wavelength reference velocity models, and using seismic migration to map the short-wavelength reflectivity structures. Most interestingly, seismic migration maps the data into an output space called imaging space, where the output reflection images of the subsurface are formed based on an imaging condition. A good example is the reverse time migration, which seeks the reflectivity image as the best fit in the image space between the extrapolation of time-reversed waveform data and the prediction based on the estimated velocity model and source parameters. I will illustrate the benefits of deciding the best output result in the output space for inversion, using examples from seismic imaging.

  17. Texas Fires

    Atmospheric Science Data Center

    2014-05-15

    ... one-year drought on record and the warmest month in Texas history. The Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra spacecraft passed over the wildfires at 12:05 p.m. CDT on ...

  18. Appalachian Mountains

    Atmospheric Science Data Center

    2014-05-15

    ... Multi-angle views of the Appalachian Mountains, March 6, 2000. ... Center Atmospheric Science Data Center in Hampton, VA. Photo credit: NASA/GSFC/LaRC/JPL, MISR Science Team

  19. Flat Panel Space Based Space Surveillance Sensor

    NASA Astrophysics Data System (ADS)

    Kendrick, R.; Duncan, A.; Wilm, J.; Thurman, S. T.; Stubbs, D. M.; Ogden, C.

    2013-09-01

    Traditional electro-optical (EO) imaging payloads consist of an optical telescope to collect the light from the object scene and map the photons to an image plane to be digitized by a focal plane detector array. The size, weight, and power (SWaP) for the traditional EO imager is dominated by the optical telescope, driven primarily by the large optics, large stiff structures, and the thermal control needed to maintain precision free-space optical alignments. We propose a non-traditional Segmented Planar Imaging Detector for EO Reconnaissance (SPIDER) imager concept that is designed to substantially reduce SWaP, by at least an order of magnitude. SPIDER maximizes performance by providing a larger effective diameter (resolution) while minimizing mass and cost. SPIDER replaces the traditional optical telescope and digital focal plane detector array with a densely packed interferometer array based on emerging photonic integrated circuit (PIC) technologies. Lenslets couple light from the object into a set of waveguides on a PIC. Light from each lenslet is distributed among different waveguides by both field angle and optical frequency, and the lenslets are paired up to form unique interferometer baselines by combining light from different waveguides. The complex spatial coherence of the object (for each field angle, frequency, and baseline) is measured with a balanced four-quadrature detection scheme. By the van Cittert-Zernike theorem, each measurement corresponds to a unique Fourier component of the incoherent object intensity distribution. Finally, an image reconstruction algorithm is used to invert all the data and form an image. Our approach replaces the large optics and structures required by a conventional telescope with PICs that are accommodated by standard lithographic fabrication techniques (e.g., CMOS fabrication). The standard EO payload integration and test process, which involves precision alignment and test of optical components to form a diffraction-limited telescope, is therefore replaced by in-process integration and test as part of the PIC fabrication, which substantially reduces the associated schedule and cost. The low profile and low SWaP of a SPIDER system enable high-resolution imaging with a payload that is similar in size and aspect ratio to a solar panel. This allows high-resolution, low-cost options for space-based space surveillance telescopes. The low-SWaP design enables hosted payloads and CubeSat designs, as well as lower-cost traditional bus options. We present a description of the concept and preliminary simulation and experimental data that demonstrate the imaging capabilities of the SPIDER technique.
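
    The last step of the chain above is essentially interferometric imaging: by the van Cittert-Zernike theorem each baseline's complex visibility is one Fourier component of the scene, so a crude "dirty" image can be formed by summing the measured components back over the field of view. The snippet is a toy illustration of that relation with hypothetical argument names, not the SPIDER processing pipeline; a regularized inversion would replace the direct sum in practice.

    ```python
    import numpy as np

    def dirty_image(vis, u, v, grid=128, fov=1e-3):
        # vis: complex visibilities measured on baselines (u, v), in wavelengths;
        # fov: half-width of the reconstructed field of view, in radians.
        l = np.linspace(-fov, fov, grid)
        L, M = np.meshgrid(l, l)
        img = np.zeros_like(L, dtype=complex)
        for Vn, un, vn in zip(vis, u, v):            # one Fourier component per baseline
            img += Vn * np.exp(2j * np.pi * (un * L + vn * M))
        return img.real / len(vis)
    ```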

  20. Image Recommendation Algorithm Using Feature-Based Collaborative Filtering

    NASA Astrophysics Data System (ADS)

    Kim, Deok-Hwan

    As the multimedia contents market continues its rapid expansion, the amount of image contents used in mobile phone services, digital libraries, and catalog service is increasing remarkably. In spite of this rapid growth, users experience high levels of frustration when searching for the desired image. Even though new images are profitable to the service providers, traditional collaborative filtering methods cannot recommend them. To solve this problem, in this paper, we propose feature-based collaborative filtering (FBCF) method to reflect the user's most recent preference by representing his purchase sequence in the visual feature space. The proposed approach represents the images that have been purchased in the past as the feature clusters in the multi-dimensional feature space and then selects neighbors by using an inter-cluster distance function between their feature clusters. Various experiments using real image data demonstrate that the proposed approach provides a higher quality recommendation and better performance than do typical collaborative filtering and content-based filtering techniques.
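
    The neighbour-selection idea can be sketched compactly: each user's recent purchases are summarized by clusters in the visual feature space, neighbours are the users whose clusters lie closest, and their items are pooled into a ranked recommendation list. The code below is a simplified sketch with hypothetical names, collapsing each user's feature clusters to a single centroid rather than using the paper's full inter-cluster distance function.

    ```python
    import numpy as np

    def recommend_fbcf(user_feats, other_feats, other_items, k=5, top_n=10):
        # user_feats: (n_i, d) features of the target user's purchased images;
        # other_feats: list of (n_j, d) arrays, one per candidate neighbour;
        # other_items: list of item-id lists aligned with other_feats.
        centroid = user_feats.mean(axis=0)
        dists = np.array([np.linalg.norm(centroid - f.mean(axis=0))
                          for f in other_feats])          # centroid-to-centroid distance
        neighbours = np.argsort(dists)[:k]
        pool = {}
        for j in neighbours:                              # pool the neighbours' items
            for item in other_items[j]:
                pool[item] = pool.get(item, 0) + 1
        return sorted(pool, key=pool.get, reverse=True)[:top_n]
    ```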

  1. Investigation on the separability of slums by multi-aspect TerraSAR-X dual-co-polarized high resolution spotlight images based on the multi-scale evaluation of local distributions

    NASA Astrophysics Data System (ADS)

    Schmitt, Andreas; Sieg, Tobias; Wurm, Michael; Taubenböck, Hannes

    2018-02-01

    Following recent advances in distinguishing settlements vs. non-settlement areas from latest SAR data, the question arises whether a further automatic intra-urban delineation and characterization of different structural types is possible. This paper studies the appearance of the structural type 'slums' in high resolution SAR images. Geocoded Kennaugh elements are used as backscatter information and Schmittlet indices as descriptor of local texture. Three cities with a significant share of slums (Cape Town, Manila, Mumbai) are chosen as test sites. These are imaged by TerraSAR-X in the dual-co-polarized high resolution spotlight mode in any available aspect angle. Representative distributions are estimated and fused by a robust approach. Our observations identify a high similarity of slums throughout all three test sites. The derived similarity maps are validated with reference data sets from visual interpretation and ground truth. The final validation strategy is based on completeness and correctness versus other classes in relation to the similarity. High accuracies (up to 87%) in identifying morphologic slums are reached for Cape Town. For Manila (up to 60%) and Mumbai (up to 54%), the distinction is more difficult due to their complex structural configuration. Concluding, high resolution SAR data can be suitable to automatically trace potential locations of slums. Polarimetric information and the incidence angle seem to have a negligible impact on the results whereas the intensity patterns and the passing direction of the satellite are playing a key role. Hence, the combination of intensity images (brightness) acquired from ascending and descending orbits together with Schmittlet indices (spatial pattern) promises best results. The transfer from the automatically recognized physical similarity to the semantic interpretation remains challenging.

  2. Improved integral images compression based on multi-view extraction

    NASA Astrophysics Data System (ADS)

    Dricot, Antoine; Jung, Joel; Cagnazzo, Marco; Pesquet, Béatrice; Dufaux, Frédéric

    2016-09-01

    Integral imaging is a technology based on plenoptic photography that captures and samples the light-field of a scene through a micro-lens array. It provides views of the scene from several angles and therefore is foreseen as a key technology for future immersive video applications. However, integral images have a large resolution and a structure based on micro-images which is challenging to encode. A compression scheme for integral images based on view extraction has previously been proposed, with average BD-rate gains of 15.7% (up to 31.3%) reported over HEVC when using one single extracted view. As the efficiency of the scheme depends on a tradeoff between the bitrate required to encode the view and the quality of the image reconstructed from the view, it is proposed to increase the number of extracted views. Several configurations are tested with different positions and different number of extracted views. Compression efficiency is increased with average BD-rate gains of 22.2% (up to 31.1%) reported over the HEVC anchor, with a realistic runtime increase.

  3. Automatic classification and detection of clinically relevant images for diabetic retinopathy

    NASA Astrophysics Data System (ADS)

    Xu, Xinyu; Li, Baoxin

    2008-03-01

    We propose a novel approach to automatic classification of Diabetic Retinopathy (DR) images and retrieval of clinically-relevant DR images from a database. Given a query image, our approach first classifies the image into one of the three categories: microaneurysm (MA), neovascularization (NV) and normal, and then it retrieves DR images that are clinically-relevant to the query image from an archival image database. In the classification stage, the query DR images are classified by the Multi-class Multiple-Instance Learning (McMIL) approach, where images are viewed as bags, each of which contains a number of instances corresponding to non-overlapping blocks, and each block is characterized by low-level features including color, texture, histogram of edge directions, and shape. McMIL first learns a collection of instance prototypes for each class that maximizes the Diverse Density function using the Expectation-Maximization algorithm. A nonlinear mapping is then defined using the instance prototypes and maps every bag to a point in a new multi-class bag feature space. Finally, a multi-class Support Vector Machine is trained in the multi-class bag feature space. In the retrieval stage, we retrieve images from the archival database that bear the same label as the query image and that are the top K nearest neighbors of the query image in terms of similarity in the multi-class bag feature space. The classification approach achieves high classification accuracy, and the retrieval of clinically-relevant images not only facilitates utilization of the vast amount of hidden diagnostic knowledge in the database, but also improves the efficiency and accuracy of DR lesion diagnosis and assessment.

  4. NASA Tech Briefs, August 2012

    NASA Technical Reports Server (NTRS)

    2012-01-01

    Topics covered include: Mars Science Laboratory Drill; Ultra-Compact Motor Controller; A Reversible Thermally Driven Pump for Use in a Sub-Kelvin Magnetic Refrigerator; Shape Memory Composite Hybrid Hinge; Binding Causes of Printed Wiring Assemblies with Card-Loks; Coring Sample Acquisition Tool; Joining and Assembly of Bulk Metallic Glass Composites Through Capacitive Discharge; 670-GHz Schottky Diode-Based Subharmonic Mixer with CPW Circuits and 70-GHz IF; Self-Nulling Lock-in Detection Electronics for Capacitance Probe Electrometer; Discontinuous Mode Power Supply; Optimal Dynamic Sub-Threshold Technique for Extreme Low Power Consumption for VLSI; Hardware for Accelerating N-Modular Redundant Systems for High-Reliability Computing; Blocking Filters with Enhanced Throughput for X-Ray Microcalorimetry; High-Thermal-Conductivity Fabrics; Imidazolium-Based Polymeric Materials as Alkaline Anion-Exchange Fuel Cell Membranes; Electrospun Nanofiber Coating of Fiber Materials: A Composite Toughening Approach; Experimental Modeling of Sterilization Effects for Atmospheric Entry Heating on Microorganisms; Saliva Preservative for Diagnostic Purposes; Hands-Free Transcranial Color Doppler Probe; Aerosol and Surface Parameter Retrievals for a Multi-Angle, Multiband Spectrometer LogScope; TraceContract; AIRS Maps from Space Processing Software; POSTMAN: Point of Sail Tacking for Maritime Autonomous Navigation; Space Operations Learning Center; OVERSMART Reporting Tool for Flow Computations Over Large Grid Systems; Large Eddy Simulation (LES) of Particle-Laden Temporal Mixing Layers; Projection of Stabilized Aerial Imagery Onto Digital Elevation Maps for Geo-Rectified and Jitter-Free Viewing; Iterative Transform Phase Diversity: An Image-Based Object and Wavefront Recovery; 3D Drop Size Distribution Extrapolation Algorithm Using a Single Disdrometer; Social Networking Adapted for Distributed Scientific Collaboration; General Methodology for Designing Spacecraft Trajectories; Hemispherical Field-of-View Above-Water Surface Imager for Submarines; and Quantum-Well Infrared Photodetector (QWIP) Focal Plane Assembly.

  5. Closed Large Cell Clouds

    Atmospheric Science Data Center

    2013-04-19

    article title:  Closed Large Cell Clouds in the South Pacific ... the Multi-angle Imaging SpectroRadiometer (MISR) provide an example of very large scale closed cells, and can be contrasted with the  ... MD. The MISR data were obtained from the NASA Langley Research Center Atmospheric Science Data Center in Hampton, VA. Image ...

  6. Accurate joint space quantification in knee osteoarthritis: a digital x-ray tomosynthesis phantom study

    NASA Astrophysics Data System (ADS)

    Sewell, Tanzania S.; Piacsek, Kelly L.; Heckel, Beth A.; Sabol, John M.

    2011-03-01

    The current imaging standard for diagnosis and monitoring of knee osteoarthritis (OA) is projection radiography. However radiographs may be insensitive to markers of early disease such as osteophytes and joint space narrowing (JSN). Relative to standard radiography, digital X-ray tomosynthesis (DTS) may provide improved visualization of the markers of knee OA without the interference of superimposed anatomy. DTS utilizes a series of low-dose projection images over an arc of +/-20 degrees to reconstruct tomographic images parallel to the detector. We propose that DTS can increase accuracy and precision in JSN quantification. The geometric accuracy of DTS was characterized by quantifying joint space width (JSW) as a function of knee flexion and position using physical and anthropomorphic phantoms. Using a commercially available digital X-ray system, projection and DTS images were acquired for a Lucite rod phantom with known gaps at various source-object-distances, and angles of flexion. Gap width, representative of JSW, was measured using a validated algorithm. Over an object-to-detector-distance range of 5-21cm, a 3.0mm gap width was reproducibly measured in the DTS images, independent of magnification. A simulated 0.50mm (+/-0.13) JSN was quantified accurately (95% CI 0.44-0.56mm) in the DTS images. Angling the rods to represent knee flexion, the minimum gap could be precisely determined from the DTS images and was independent of flexion angle. JSN quantification using DTS was insensitive to distance from patient barrier and flexion angle. Potential exists for the optimization of DTS for accurate radiographic quantification of knee OA independent of patient positioning.

  7. Monitoring Change Through Hierarchical Segmentation of Remotely Sensed Image Data

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Lawrence, William T.

    2005-01-01

    NASA's Goddard Space Flight Center has developed a fast and effective method for generating image segmentation hierarchies. These segmentation hierarchies organize image data in a manner that makes their information content more accessible for analysis. Image segmentation enables analysis through the examination of image regions rather than individual image pixels. In addition, the segmentation hierarchy provides additional analysis clues through the tracing of the behavior of image region characteristics at several levels of segmentation detail. The potential for extracting the information content from imagery data based on segmentation hierarchies has not been fully explored for the benefit of the Earth and space science communities. This paper explores the potential of exploiting these segmentation hierarchies for the analysis of multi-date data sets, and for the particular application of change monitoring.

  8. A new multi-angle remote sensing framework for scaling vegetation properties from tower-based spectro-radiometers to next generation "CubeSat"-satellites.

    NASA Astrophysics Data System (ADS)

    Hilker, T.; Hall, F. G.; Dyrud, L. P.; Slagowski, S.

    2014-12-01

    Frequent earth observations are essential for assessing the risks involved with global climate change, its feedbacks on carbon, energy and water cycling, and its consequences for life on Earth. Often, satellite remote sensing is the only practical way to provide such observations at comprehensive spatial scales, but relationships between land surface parameters and remotely sensed observations are mostly empirical and cannot easily be scaled across larger areas or over longer time intervals. For instance, optically based methods frequently depend on extraneous effects that are unrelated to the surface property of interest, including the sun-sensor geometry or background reflectance. As an alternative to traditional, mono-angle techniques, multi-angle remote sensing can help overcome some of these limitations by allowing vegetation properties to be derived from comprehensive reflectance models that describe changes in surface parameters based on physical principles and radiative transfer theory. Recent theoretical and experimental research has shown that multi-angle techniques can be used to robustly infer and scale the photosynthetic rate of vegetation and its biochemical and structural composition from remote sensing. Multi-angle remote sensing could therefore revolutionize estimates of the terrestrial carbon uptake, as scaling of primary productivity may provide a quantum leap in understanding the spatial and temporal complexity of terrestrial earth science. Here, we introduce a framework linking next-generation tower-based instruments to a novel constellation of nano-satellites (Figure 1) that will allow us to systematically scale vegetation parameters from stand to global levels. We provide technical insights and the scientific rationale, and present results. We conclude that future earth observation from multi-angle satellite constellations, supported by tower-based remote sensing, will open new opportunities for earth system science and earth system modeling.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamamoto, Seiichi, E-mail: s-yama@met.nagoya-u.ac.jp; Okumura, Satoshi; Komori, Masataka

    We developed a prototype positron emission tomography (PET) system based on a new concept called Open-close PET, which has two modes: an open mode and a close mode. In the open mode, the detector ring is separated into two halved rings; the subject is imaged through the open space and a projection image is formed. In the close mode, the detector ring is closed into a regular circular ring, the subject can be imaged without an open space, and reconstructed images can therefore be made without artifacts. The block detector of the Open-close PET system consists of two scintillator blocks that use two types of gadolinium orthosilicate (GSO) scintillators with different decay times, angled optical-fiber-based image guides, and a flat-panel photomultiplier tube. The GSO pixel size was 1.6 × 2.4 × 7 mm for the fast (35 ns) and 1.6 × 2.4 × 8 mm for the slow (60 ns) GSOs. These GSOs were arranged into an 11 × 15 matrix and optically coupled in the depth direction to form a depth-of-interaction detector. The angled optical-fiber-based image guides were used to arrange the two scintillator blocks at 22.5° so that they can be arranged in a hexadecagonal shape with eight block detectors to simplify the reconstruction algorithm. The detector ring was divided into two halves to realize the open mode and set on a mechanical stand with which the distance between the two parts can be manually changed. The spatial resolution in the close mode was 2.4-mm FWHM, and the sensitivity was 1.7% at the center of the field of view. In both the close and open modes, we made sagittal (y-z plane) projection images between the two halved detector rings. We obtained reconstructed and projection images of 18F-NaF rat studies and proton-irradiated phantom images. These results indicate that our developed Open-close PET is useful for applications such as proton therapy as well as other applications such as molecular imaging.

  10. Cloud Spirals and Outflow in Tropical Storm Katrina

    NASA Technical Reports Server (NTRS)

    2005-01-01

    On Tuesday, August 30, 2005, NASA's Multi-angle Imaging SpectroRadiometer retrieved cloud-top heights and cloud-tracked wind velocities for Tropical Storm Katrina, as the center of the storm was situated over the Tennessee valley. At this time Katrina was weakening and no longer classified as a hurricane, and would soon become an extratropical depression. Measurements such as these can help atmospheric scientists compare results of computer-generated hurricane simulations with observed conditions, ultimately allowing them to better represent and understand physical processes occurring in hurricanes.

    Because air currents are influenced by the Coriolis force (caused by the rotation of the Earth), Northern Hemisphere hurricanes are characterized by an inward counterclockwise (cyclonic) rotation towards the center. It is less widely known that, at high altitudes, outward-spreading bands of cloud rotate in a clockwise (anticyclonic) direction. The image on the left shows the retrieved cloud-tracked winds as red arrows superimposed across the natural color view from MISR's nadir (vertical-viewing) camera. Both the counter-clockwise motion for the lower-level storm clouds and the clockwise motion for the upper clouds are apparent in these images. The speeds for the clockwise upper level winds have typical values between 40 and 45 m/s (144-162 km/hr). The low level counterclockwise winds have typical values between 7 and 24 m/s (25-86 km/hr), weakening with distance from the storm center. The image on the right displays the cloud-top height retrievals. Areas where cloud heights could not be retrieved are shown in dark gray. Both the wind velocity vectors and the cloud-top height field were produced by automated computer recognition of displacements in spatial features within successive MISR images acquired at different view angles and at slightly different times.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously, viewing the entire globe between 82° north and 82° south latitude every nine days. This image covers an area of about 380 kilometers by 1970 kilometers. These data products were generated from a portion of the imagery acquired during Terra orbit 30324 and utilize data from blocks 55-68 within World Reference System-2 path 22.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Science Mission Directorate, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is managed for NASA by the California Institute of Technology.

  11. An advanced scanning method for space-borne hyper-spectral imaging system

    NASA Astrophysics Data System (ADS)

    Wang, Yue-ming; Lang, Jun-Wei; Wang, Jian-Yu; Jiang, Zi-Qing

    2011-08-01

    Space-borne hyper-spectral imagery is an important means for studies and applications in earth science. High cost efficiency can be achieved through optimized system design. In this paper, an advanced scanning method is proposed that helps realize an imaging system with both high temporal and high spatial resolution. The revisit frequency and effective working time of space-borne hyper-spectral imagers can be greatly improved by adopting a two-axis scanning system if spatial resolution and radiometric accuracy are not stringently demanded. In order to avoid the quality degradation caused by image rotation, a two-axis rotation scheme is presented based on the analysis and simulation of the two-dimensional scanning motion path and its features. Further improvement of the imager's detection ability under conditions of small solar altitude angle and low surface reflectance can be realized by ground motion compensation on the pitch axis. The structure and control performance are also described. An intelligent integration of two-dimensional scanning and image motion compensation is elaborated in this paper. With this technology, sun-synchronous hyper-spectral imagers are able to revisit hot spots quickly, acquiring hyper-spectral images with both high spatial and high temporal resolution, which enables rapid response to emergencies. The result has reference value for developing operational space-borne hyper-spectral imagers.

  12. Results from a multi aperture Fizeau interferometer ground testbed: demonstrator for a future space-based interferometer

    NASA Astrophysics Data System (ADS)

    Baccichet, Nicola; Caillat, Amandine; Rakotonimbahy, Eddy; Dohlen, Kjetil; Savini, Giorgio; Marcos, Michel

    2016-08-01

    In the framework of the European FP7-FISICA (Far Infrared Space Interferometer Critical Assessment) program, we developed a miniaturized version of the hyper-telescope to demonstrate multi-aperture interferometry on ground. This setup would be ultimately integrated into a CubeSat platform, therefore providing the first real demonstrator of a multi aperture Fizeau interferometer in space. In this paper, we describe the optical design of the ground testbed and the data processing pipeline implemented to reconstruct the object image from interferometric data. As a scientific application, we measured the Sun diameter by fitting a limb-darkening model to our data. Finally, we present the design of a CubeSat platform carrying this miniature Fizeau interferometer, which could be used to monitor the Sun diameter over a long in-orbit period.

  13. Intracranial cerebrospinal fluid spaces imaging using a pulse-triggered three-dimensional turbo spin echo MR sequence with variable flip-angle distribution.

    PubMed

    Hodel, Jérôme; Silvera, Jonathan; Bekaert, Olivier; Rahmouni, Alain; Bastuji-Garin, Sylvie; Vignaud, Alexandre; Petit, Eric; Durning, Bruno; Decq, Philippe

    2011-02-01

    To assess the three-dimensional turbo spin echo with variable flip-angle distribution magnetic resonance sequence (SPACE: Sampling Perfection with Application optimised Contrast using different flip-angle Evolution) for the imaging of intracranial cerebrospinal fluid (CSF) spaces. We prospectively investigated 18 healthy volunteers and 25 patients, 20 with communicating hydrocephalus (CH), five with non-communicating hydrocephalus (NCH), using the SPACE sequence at 1.5 T. Volume rendering views of both intracranial and ventricular CSF were obtained for all patients and volunteers. The subarachnoid CSF distribution was qualitatively evaluated on volume rendering views using a four-point scale. The CSF volumes within total, ventricular and subarachnoid spaces were calculated as well as the ratio between ventricular and subarachnoid CSF volumes. Three different patterns of subarachnoid CSF distribution were observed. In healthy volunteers, we found narrowed CSF spaces within the occipital area. A diffuse narrowing of the subarachnoid CSF spaces was observed in patients with NCH, whereas patients with CH exhibited narrowed CSF spaces within the high midline convexity. The ratios between ventricular and subarachnoid CSF volumes were significantly different among the volunteers, patients with CH and patients with NCH. The assessment of CSF space volume and distribution may help to characterise hydrocephalus.

  14. Video sensor with range measurement capability

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Briscoe, Jeri M. (Inventor); Corder, Eric L. (Inventor); Broderick, David J. (Inventor)

    2008-01-01

    A video sensor device is provided which incorporates a rangefinder function. The device includes a single video camera and a fixed laser spaced a predetermined distance from the camera for, when activated, producing a laser beam. A diffractive optic element divides the beam so that multiple light spots are produced on a target object. A processor calculates the range to the object based on the known spacing and angles determined from the light spots on the video images produced by the camera.
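
    The range computation described in the abstract reduces to triangulation between the camera and the known laser offset. A minimal sketch, assuming the laser beam is parallel to the camera's optical axis so that the spot's pixel offset behaves like a stereo disparity (function and parameter names are illustrative, not from the patent):

    ```python
    def spot_range(pixel_disparity, focal_length_px, baseline_m):
        # pixel_disparity: laser-spot offset from the principal point, in pixels
        # focal_length_px: camera focal length expressed in pixels
        # baseline_m: known spacing between the laser and the camera axis, in metres
        return focal_length_px * baseline_m / pixel_disparity

    # e.g. a 40-pixel disparity with an 800 px focal length and a 0.10 m baseline
    print(spot_range(40.0, 800.0, 0.10))   # -> 2.0 metres
    ```

    With several spots produced by the diffractive element, averaging the per-spot ranges can help reduce sensitivity to centroiding errors.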

  15. Axial traction magnetic resonance imaging (MRI) of the glenohumeral joint in healthy volunteers: initial experience.

    PubMed

    Garwood, Elisabeth R; Souza, Richard B; Zhang, Amy; Zhang, Alan L; Ma, C Benjamin; Link, Thomas M; Motamedi, Daria

    Evaluate technical feasibility and potential applications of glenohumeral (GH) joint axial traction magnetic resonance imaging (MRI) in healthy volunteers. Eleven shoulders were imaged in neutral and with 4kg axial traction at 3T. Quantitative measurements were assessed. Axial traction was well tolerated. There was statistically significant widening of the superior GH joint space (p=0.002) and acromial angle (p=0.017) with traction. Inter-rater agreement was high. GH joint axial traction MRI is technically feasible and well tolerated in volunteers. Traction of the capsule, widening of the superior GH joint space and acromial angle were observed. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Multi-beam laser altimeter

    NASA Technical Reports Server (NTRS)

    Bufton, Jack L.; Harding, David J.; Ramos-Izquierdo, Luis

    1993-01-01

    Laser altimetry provides a high-resolution, high-accuracy method for measurement of the elevation and horizontal variability of Earth-surface topography. The basis of the measurement is the timing of the round-trip propagation of short-duration pulses of laser radiation between a spacecraft and the Earth's surface. Vertical resolution of the altimetry measurement is determined primarily by laser pulsewidth, surface-induced spreading in time of the reflected pulse, and the timing precision of the altimeter electronics. With conventional gain-switched pulses from solid-state lasers and sub-nsec resolution electronics, sub-meter vertical range resolution is possible from orbital altitudes of several hundred kilometers. Horizontal resolution is a function of laser beam footprint size at the surface and the spacing between successive laser pulses. Laser divergence angle and altimeter platform height above the surface determine the laser footprint size at the surface, while laser pulse repetition-rate, laser transmitter beam configuration, and altimeter platform velocity determine the space between successive laser pulses. Multiple laser transmitters in a single altimeter instrument provide across-track and along-track coverage that can be used to construct a range image of the Earth's surface. Other aspects of the multi-beam laser altimeter are discussed.
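
    A minimal sketch of the relations described above (range from round-trip time, footprint from divergence angle and altitude, along-track spacing from platform speed and pulse rate); the orbital numbers below are hypothetical and only illustrate the geometry, not any specific instrument.

```python
C = 299_792_458.0  # speed of light, m/s

def altimeter_geometry(round_trip_s, divergence_rad, altitude_m,
                       ground_speed_m_s, pulse_rate_hz):
    """Back-of-the-envelope laser-altimeter quantities (illustrative only)."""
    range_m = C * round_trip_s / 2.0                  # range from pulse timing
    footprint_m = altitude_m * divergence_rad         # small-angle footprint diameter
    along_track_m = ground_speed_m_s / pulse_rate_hz  # spacing between pulses
    return range_m, footprint_m, along_track_m

# Hypothetical orbit: 600 km altitude, 100 urad divergence, 7 km/s ground speed, 40 Hz
rng_m, fp_m, dx_m = altimeter_geometry(2.0 * 600e3 / C, 100e-6, 600e3, 7000.0, 40.0)
print(f"range {rng_m / 1e3:.0f} km, footprint {fp_m:.0f} m, pulse spacing {dx_m:.0f} m")
```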

  17. Interleaved diffusion-weighted EPI improved by adaptive partial-Fourier and multi-band multiplexed sensitivity-encoding reconstruction

    PubMed Central

    Chang, Hing-Chiu; Guhaniyogi, Shayan; Chen, Nan-kuei

    2014-01-01

    Purpose We report a series of techniques to reliably eliminate artifacts in interleaved echo-planar imaging (EPI) based diffusion weighted imaging (DWI). Methods First, we integrate the previously reported multiplexed sensitivity encoding (MUSE) algorithm with a new adaptive Homodyne partial-Fourier reconstruction algorithm, so that images reconstructed from interleaved partial-Fourier DWI data are free from artifacts even in the presence of either a) motion-induced k-space energy peak displacement, or b) susceptibility field gradient induced fast phase changes. Second, we generalize the previously reported single-band MUSE framework to multi-band MUSE, so that both through-plane and in-plane aliasing artifacts in multi-band multi-shot interleaved DWI data can be effectively eliminated. Results The new adaptive Homodyne-MUSE reconstruction algorithm reliably produces high-quality and high-resolution DWI, eliminating residual artifacts in images reconstructed with previously reported methods. Furthermore, the generalized MUSE algorithm is compatible with multi-band and high-throughput DWI. Conclusion The integration of the multi-band and adaptive Homodyne-MUSE algorithms significantly improves the spatial-resolution, image quality, and scan throughput of interleaved DWI. We expect that the reported reconstruction framework will play an important role in enabling high-resolution DWI for both neuroscience research and clinical uses. PMID:24925000

  18. Three-dimensional single-cell imaging with X-ray waveguides in the holographic regime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krenkel, Martin; Toepperwien, Mareike; Alves, Frauke

    X-ray tomography at the level of single biological cells is possible in a low-dose regime, based on full-field holographic recordings, with phase contrast originating from free-space wave propagation. Building upon recent progress in cellular imaging based on the illumination by quasi-point sources provided by X-ray waveguides, here this approach is extended in several ways. First, the phase-retrieval algorithms are extended by an optimized deterministic inversion, based on a multi-distance recording. Second, different advanced forms of iterative phase retrieval are used, operational for single-distance and multi-distance recordings. Results are compared for several different preparations of macrophage cells, for different staining and labelling. As a result, it is shown that phase retrieval is no longer a bottleneck for holographic imaging of cells, and how advanced schemes can be implemented to cope also with high noise and inconsistencies in the data.

  19. Three-dimensional single-cell imaging with X-ray waveguides in the holographic regime

    DOE PAGES

    Krenkel, Martin; Toepperwien, Mareike; Alves, Frauke; ...

    2017-06-29

    X-ray tomography at the level of single biological cells is possible in a low-dose regime, based on full-field holographic recordings, with phase contrast originating from free-space wave propagation. Building upon recent progress in cellular imaging based on the illumination by quasi-point sources provided by X-ray waveguides, here this approach is extended in several ways. First, the phase-retrieval algorithms are extended by an optimized deterministic inversion, based on a multi-distance recording. Second, different advanced forms of iterative phase retrieval are used, operational for single-distance and multi-distance recordings. Results are compared for several different preparations of macrophage cells, for different staining and labelling. As a result, it is shown that phase retrieval is no longer a bottleneck for holographic imaging of cells, and how advanced schemes can be implemented to cope also with high noise and inconsistencies in the data.

  20. Optimal distance of multi-plane sensor in three-dimensional electrical impedance tomography.

    PubMed

    Hao, Zhenhua; Yue, Shihong; Sun, Benyuan; Wang, Huaxiang

    2017-12-01

    Electrical impedance tomography (EIT) is a visual imaging technique for obtaining the conductivity and permittivity distributions in the domain of interest. As an advanced technique, EIT has the potential to be a valuable tool for continuous bedside monitoring of pulmonary function. EIT applications in any three-dimensional (3D) field are limited by 3D effects, i.e. the distribution of the electric field spreads far beyond the electrode plane. The 3D effects can result in measurement errors and image distortion. An important way to overcome the 3D effects is to use multiple groups of sensors. The aim of this paper is to find the best spatial resolution of the EIT image over various electrode planes, select an optimal plane spacing in a 3D EIT sensor, and provide guidance for 3D EIT electrode placement in monitoring lung function. In simulation and experiment, several typical conductivity distribution models, such as one rod (central, midway and edge), two rods and three rods, are set at different plane spacings between the two electrode planes. A Tikhonov regularization algorithm is utilized for reconstructing the images; the relative error and the correlation coefficient are utilized for evaluating the image quality. Based on numerical simulation and experimental results, the image performance at different spacing conditions is evaluated. The results demonstrate that there exists an optimal plane spacing between the two electrode planes for a 3D EIT sensor. The selection of the optimal plane spacing between the electrode planes is then suggested for the electrode placement of a multi-plane EIT sensor.
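
    As a rough illustration of the reconstruction and evaluation steps named above, the sketch below solves a linearized Tikhonov-regularized problem and reports the relative error and correlation coefficient; a random matrix stands in for the true EIT sensitivity (Jacobian) matrix, so this is not the authors' 3D forward model.

```python
import numpy as np

def tikhonov_reconstruct(J, b, lam):
    """Linearized Tikhonov reconstruction: argmin ||J x - b||^2 + lam^2 ||x||^2."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam ** 2 * np.eye(n), J.T @ b)

# Toy problem: a random matrix stands in for an EIT sensitivity (Jacobian) matrix
rng = np.random.default_rng(1)
J = rng.normal(size=(208, 400))                     # e.g. 208 measurements, 400 pixels
x_true = np.zeros(400)
x_true[180:200] = 1.0                               # small conductivity perturbation
b = J @ x_true + rng.normal(0.0, 0.01, 208)         # noisy boundary measurements

x_hat = tikhonov_reconstruct(J, b, lam=1.0)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)   # relative error
corr = np.corrcoef(x_hat, x_true)[0, 1]                             # correlation coefficient
print(f"relative error {rel_err:.2f}, correlation {corr:.2f}")
```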

  1. Amery Ice Shelf

    Atmospheric Science Data Center

    2013-04-16

    ... funded by NASA and undertaken by the Scripps Institution of Oceanography and the Australian Antarctic Division. The Multi-angle Imaging ... Laboratory), and Helen A. Fricker (Scripps Institution of Oceanography). Other formats available at JPL Oct 6, ...

  2. Johannesburg

    Atmospheric Science Data Center

    2013-04-15

    ... coming from there), the discovery of now-famous hominid fossils at the Sterkfontein Caves, and the convening of the world's ... the outstanding universal value of the paleo-anthropological fossils found there. These views from the Multi-angle Imaging ...

  3. Diagnostic discrepancies in retinopathy of prematurity classification

    PubMed Central

    Campbell, J. Peter; Ryan, Michael C.; Lore, Emily; Tian, Peng; Ostmo, Susan; Jonas, Karyn; Chan, R.V. Paul; Chiang, Michael F.

    2016-01-01

    Objective To identify the most common areas for discrepancy in retinopathy of prematurity (ROP) classification between experts. Design Prospective cohort study. Subjects, Participants, and/or Controls 281 infants were identified as part of a multi-center, prospective, ROP cohort study from 7 participating centers. Each site had participating ophthalmologists who provided the clinical classification after routine examination using binocular indirect ophthalmoscopy (BIO), and obtained wide-angle retinal images, which were independently classified by two study experts. Methods Wide-angle retinal images (RetCam; Clarity Medical Systems, Pleasanton, CA) were obtained from study subjects, and two experts evaluated each image using a secure web-based module. Image-based classifications for zone, stage, plus disease, overall disease category (no ROP, mild ROP, Type II or pre-plus, and Type I) were compared between the two experts, and to the clinical classification obtained by BIO. Main Outcome Measures Inter-expert image-based agreement and image-based vs. ophthalmoscopic diagnostic agreement using absolute agreement and weighted kappa statistic. Results 1553 study eye examinations from 281 infants were included in the study. Experts disagreed on the stage classification in 620/1553 (40%) of comparisons, plus disease classification (including pre-plus) in 287/1553 (18%), zone in 117/1553 (8%), and overall ROP category in 618/1553 (40%). However, agreement for presence vs. absence of type 1 disease was >95%. There were no differences between image-based and clinical classification except for zone III disease. Conclusions The most common area of discrepancy in ROP classification is stage, although inter-expert agreement for clinically-significant disease such as presence vs. absence of type 1 and type 2 disease is high. There were no differences between image-based grading and the clinical exam in the ability to detect clinically-significant disease. This study provides additional evidence that image-based classification of ROP reliably detects clinically significant levels of ROP with high accuracy compared to the clinical exam. PMID:27238376

  4. Multi-Contrast Imaging and Digital Refocusing on a Mobile Microscope with a Domed LED Array.

    PubMed

    Phillips, Zachary F; D'Ambrosio, Michael V; Tian, Lei; Rulison, Jared J; Patel, Hurshal S; Sadras, Nitin; Gande, Aditya V; Switz, Neil A; Fletcher, Daniel A; Waller, Laura

    2015-01-01

    We demonstrate the design and application of an add-on device for improving the diagnostic and research capabilities of CellScope--a low-cost, smartphone-based point-of-care microscope. We replace the single LED illumination of the original CellScope with a programmable domed LED array. By leveraging recent advances in computational illumination, this new device enables simultaneous multi-contrast imaging with brightfield, darkfield, and phase imaging modes. Further, we scan through illumination angles to capture lightfield datasets, which can be used to recover 3D intensity and phase images without any hardware changes. This digital refocusing procedure can be used for either 3D imaging or software-only focus correction, reducing the need for precise mechanical focusing during field experiments. All acquisition and processing is performed on the mobile phone and controlled through a smartphone application, making the computational microscope compact and portable. Using multiple samples and different objective magnifications, we demonstrate that the performance of our device is comparable to that of a commercial microscope. This unique device platform extends the field imaging capabilities of CellScope, opening up new clinical and research possibilities.

  5. Multi-Contrast Imaging and Digital Refocusing on a Mobile Microscope with a Domed LED Array

    PubMed Central

    Phillips, Zachary F.; D'Ambrosio, Michael V.; Tian, Lei; Rulison, Jared J.; Patel, Hurshal S.; Sadras, Nitin; Gande, Aditya V.; Switz, Neil A.; Fletcher, Daniel A.; Waller, Laura

    2015-01-01

    We demonstrate the design and application of an add-on device for improving the diagnostic and research capabilities of CellScope—a low-cost, smartphone-based point-of-care microscope. We replace the single LED illumination of the original CellScope with a programmable domed LED array. By leveraging recent advances in computational illumination, this new device enables simultaneous multi-contrast imaging with brightfield, darkfield, and phase imaging modes. Further, we scan through illumination angles to capture lightfield datasets, which can be used to recover 3D intensity and phase images without any hardware changes. This digital refocusing procedure can be used for either 3D imaging or software-only focus correction, reducing the need for precise mechanical focusing during field experiments. All acquisition and processing is performed on the mobile phone and controlled through a smartphone application, making the computational microscope compact and portable. Using multiple samples and different objective magnifications, we demonstrate that the performance of our device is comparable to that of a commercial microscope. This unique device platform extends the field imaging capabilities of CellScope, opening up new clinical and research possibilities. PMID:25969980

  6. Self-Organizing-Map Program for Analyzing Multivariate Data

    NASA Technical Reports Server (NTRS)

    Li, P. Peggy; Jacob, Joseph C.; Block, Gary L.; Braverman, Amy J.

    2005-01-01

    SOM_VIS is a computer program for analysis and display of multidimensional sets of Earth-image data typified by the data acquired by the Multi-angle Imaging Spectro-Radiometer [MISR (a spaceborne instrument)]. In SOM_VIS, an enhanced self-organizing-map (SOM) algorithm is first used to project a multidimensional set of data into a nonuniform three-dimensional lattice structure. The lattice structure is mapped to a color space to obtain a color map for an image. The Voronoi cell-refinement algorithm is used to map the SOM lattice structure to various levels of color resolution. The final result is a false-color image in which similar colors represent similar characteristics across all its data dimensions. SOM_VIS provides a control panel for selection of a subset of suitably preprocessed MISR radiance data, and a control panel for choosing parameters to run SOM training. SOM_VIS also includes a component for displaying the false-color SOM image, a color map for the trained SOM lattice, a plot showing an original input vector in 36 dimensions of a selected pixel from the SOM image, the SOM vector that represents the input vector, and the Euclidean distance between the two vectors.
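
    A minimal self-organizing-map sketch in the spirit of the description above: multi-band vectors are mapped onto a 3-D lattice whose node indices are then used as RGB colours. The lattice size, training schedule, and random stand-in data are assumptions; SOM_VIS's enhanced SOM and Voronoi cell-refinement steps are not reproduced here.

```python
import numpy as np

def train_som(data, grid=(4, 4, 4), iters=2000, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal self-organizing map with a 3-D lattice of code vectors. Each
    node's (i, j, k) lattice index can later be scaled to an (R, G, B) colour,
    so pixels with similar multi-band spectra get similar colours."""
    rng = np.random.default_rng(seed)
    nodes = np.stack(np.meshgrid(*[np.arange(g) for g in grid], indexing="ij"),
                     axis=-1).reshape(-1, len(grid)).astype(float)  # lattice coords
    weights = rng.normal(size=(nodes.shape[0], data.shape[1]))      # code vectors
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))           # best-matching unit
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        d2 = ((nodes - nodes[bmu]) ** 2).sum(axis=1)
        weights += lr * np.exp(-d2 / (2.0 * sigma ** 2))[:, None] * (x - weights)
    return nodes, weights

# Toy stand-in for 36-band MISR radiance pixels
rng = np.random.default_rng(2)
pixels = rng.normal(size=(5000, 36))
nodes, weights = train_som(pixels)
bmu = np.argmin(((pixels[:, None, :] - weights[None]) ** 2).sum(-1), axis=1)
colors = nodes[bmu] / nodes.max(axis=0)       # lattice index -> RGB in [0, 1]
print(colors.shape)                           # (5000, 3): one false colour per pixel
```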

  7. Computation of Femoral Canine Morphometric Parameters in Three-Dimensional Geometrical Models.

    PubMed

    Savio, Gianpaolo; Baroni, Teresa; Concheri, Gianmaria; Baroni, Ermenegildo; Meneghello, Roberto; Longo, Federico; Isola, Maurizio

    2016-11-01

    To define and validate a method for the measurement of 3-dimensional (3D) morphometric parameters in polygonal mesh models of canine femora. Ex vivo/computerized model. Sixteen femora from 8 medium to large-breed canine cadavers (mean body weight 28.3 kg, mean age 5.3 years). Femora were measured with a 3D scanner, obtaining 3D meshes. A computer-aided design-based (CAD) software tool was purposely developed, which allowed automatic calculation of morphometric parameters on a mesh model. Anatomic and mechanical lateral proximal femoral angles (aLPFA and mLPFA), anatomic and mechanical lateral distal femoral angles (aLDFA and mLDFA), femoral neck angle (FNA), femoral torsion angle (FTA), and femoral varus angle (FVA) were measured in 3D space. Angles were also measured on projected planes and radiographic images. Mean (SD) femoral angles (degrees) measured in 3D space were: aLPFA 115.2 (3.9), mLPFA 105.5 (4.2), aLDFA 88.6 (4.5), mLDFA 93.4 (3.9), FNA 129.6 (4.3), FTA 45 (4.5), and FVA -1.4 (4.5). On projection planes, aLPFA was 103.7 (5.9), mLPFA 98.4 (5.3), aLDFA 88.3 (5.5), mLDFA 93.6 (4.2), FNA 132.1 (3.5), FTA 19.1 (5.7), and FVA -1.7 (5.5). With radiographic imaging, aLPFA was 109.6 (5.9), mLPFA 105.3 (5.2), aLDFA 92.6 (3.8), mLDFA 96.9 (2.9), FNA 120.2 (8.0), FTA 30.2 (5.7), and FVA 2.6 (3.8). The proposed method gives reliable and consistent information about 3D bone conformation. Results are obtained automatically and depend only on femur morphology, avoiding any operator-related bias. Angles in 3D space are different from those measured with standard radiographic methods, mainly due to the different definition of femoral axes.

  8. Flying Over Mimas

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This movie was made of narrow-angle images taken over a period of seven hours during Cassini's close encounter with Saturn's moon Mimas on Aug. 2, 2005.

    In the movie the moon appears to rotate through about 115 degrees and the range varies from 253,000 to 64,000 kilometers (158,000 to 40,000 miles). The image scale in the final pan across the surface is about 760 meters (about 2,500 feet) per pixel.

    The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging operations center is based at the Space Science Institute in Boulder, Colo.

    For more information about the Cassini-Huygens mission visit http://saturn.jpl.nasa.gov . The Cassini imaging team homepage is at http://ciclops.org .

  9. Casting Light and Shadows on a Saharan Dust Storm

    NASA Technical Reports Server (NTRS)

    2003-01-01

    On March 2, 2003, near-surface winds carried a large amount of Saharan dust aloft and transported the material westward over the Atlantic Ocean. These observations from the Multi-angle Imaging SpectroRadiometer (MISR) aboard NASA's Terra satellite depict an area near the Cape Verde Islands (situated about 700 kilometers off of Africa's western coast) and provide images of the dust plume along with measurements of its height and motion. Tracking the three-dimensional extent and motion of air masses containing dust or other types of aerosols provides data that can be used to verify and improve computer simulations of particulate transport over large distances, with application to enhancing our understanding of the effects of such particles on meteorology, ocean biological productivity, and human health.

    MISR images the Earth by measuring the spatial patterns of reflected sunlight. In the upper panel of the still image pair, the observations are displayed as a natural-color snapshot from MISR's vertical-viewing (nadir) camera. High-altitude cirrus clouds cast shadows on the underlying ocean and dust layer, which are visible in shades of blue and tan, respectively. In the lower panel, heights derived from automated stereoscopic processing of MISR's multi-angle imagery show the cirrus clouds (yellow areas) to be situated about 12 kilometers above sea level. The distinctive spatial patterns of these clouds provide the necessary contrast to enable automated feature matching between images acquired at different view angles. For most of the dust layer, which is spatially much more homogeneous, the stereoscopic approach was unable to retrieve elevation data. However, the edges of shadows cast by the cirrus clouds onto the dust (indicated by blue and cyan pixels) provide sufficient spatial contrast for a retrieval of the dust layer's height, and indicate that the top of the layer is only about 2.5 kilometers above sea level.

    Motion of the dust and clouds is directly observable with the assistance of the multi-angle 'fly-over' animation (Below). The frames of the animation consist of data acquired by the 70-degree, 60-degree, 46-degree and 26-degree forward-viewing cameras in sequence, followed by the images from the nadir camera and each of the four backward-viewing cameras, ending with 70-degree backward image. Much of the south-to-north shift in the position of the clouds is due to geometric parallax between the nine view angles (rather than true motion), whereas the west-to-east motion is due to actual motion of the clouds over the seven minutes during which all nine cameras observed the scene. MISR's automated data processing retrieved a primarily westerly (eastward) motion of these clouds with speeds of 30-40 meters per second. Note that there is much less geometric parallax for the cloud shadows owing to the relatively low altitude of the dust layer upon which the shadows are cast (the amount of parallax is proportional to elevation and a feature at the surface would have no geometric parallax at all); however, the westerly motion of the shadows matches the actual motion of the clouds. The automated processing was not able to resolve a velocity for the dust plume, but by manually tracking dust features within the plume images that comprise the animation sequence we can derive an easterly (westward) speed of about 16 meters per second. These analyses and visualizations of the MISR data demonstrate that not only are the cirrus clouds and dust separated significantly in elevation, but they exist in completely different wind regimes, with the clouds moving toward the east and the dust moving toward the west.
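
    The height-from-parallax idea described above can be sketched as below, assuming a single oblique view compared against nadir and neglecting (or separately supplying) along-track advection; the operational MISR retrieval uses several cameras and solves for height and motion jointly, so this is only illustrative.

```python
import numpy as np

def feature_height(disparity_m, view_zenith_deg, time_gap_s=0.0, along_track_speed=0.0):
    """Height of a feature from its along-track parallax in one oblique view
    relative to the nadir view; motion during the time gap can optionally be
    removed if the advection speed is known (here assumed, not retrieved)."""
    motion_m = along_track_speed * time_gap_s
    return (disparity_m - motion_m) / np.tan(np.radians(view_zenith_deg))

# Hypothetical case: a cloud edge displaced 12.4 km between the nadir and the
# 46-degree forward camera, assuming no along-track advection
print(f"{feature_height(12.4e3, 46.0) / 1e3:.1f} km")   # roughly cirrus altitude
```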

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17040. The panels cover an area of about 312 kilometers x 242 kilometers, and use data from blocks 74 to 77 within World Reference System-2 path 207.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  10. Oil Fire Plumes Over Baghdad

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Dark smoke from oil fires extends for about 60 kilometers south of Iraq's capital city of Baghdad in these images acquired by the Multi-angle Imaging SpectroRadiometer (MISR) on April 2, 2003. The thick, almost black smoke is apparent near image center and contains chemical and particulate components hazardous to human health and the environment.

    The top panel is from MISR's vertical-viewing (nadir) camera. Vegetated areas appear red here because this display is constructed using near-infrared, red and blue band data, displayed as red, green and blue, respectively, to produce a false-color image. The bottom panel is a combination of two camera views of the same area and is a 3-D stereo anaglyph in which red band nadir camera data are displayed as red, and red band data from the 60-degree backward-viewing camera are displayed as green and blue. Both panels are oriented with north to the left in order to facilitate stereo viewing. Viewing the 3-D anaglyph with red/blue glasses (with the red filter placed over the left eye and the blue filter over the right) makes it possible to see the rising smoke against the surface terrain. This technique helps to distinguish features in the atmosphere from those on the surface. In addition to the smoke, several high, thin cirrus clouds (barely visible in the nadir view) are readily observed using the stereo image.
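
    A small sketch of the anaglyph construction described above, assuming two co-registered single-band images (the nadir red band driving the red channel, the oblique red band driving green and blue); the percentile contrast stretch is an arbitrary choice for display, not part of the MISR product.

```python
import numpy as np

def make_anaglyph(nadir_red, oblique_red):
    """Red/cyan stereo anaglyph from two co-registered single-band images: the
    nadir image drives the red channel, the oblique (e.g. 60-degree backward)
    image drives green and blue."""
    def to_uint8(img):
        lo, hi = np.percentile(img, (2, 98))          # simple contrast stretch
        return np.clip((img - lo) / (hi - lo) * 255.0, 0, 255).astype(np.uint8)
    r = to_uint8(nadir_red)
    gb = to_uint8(oblique_red)
    return np.dstack([r, gb, gb])                     # H x W x 3 for red/blue glasses

# Toy example with random stand-in imagery (real use: co-registered MISR red bands)
rng = np.random.default_rng(3)
anaglyph = make_anaglyph(rng.random((128, 128)), rng.random((128, 128)))
print(anaglyph.shape, anaglyph.dtype)
```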

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 17489. The panels cover an area of about 187 kilometers x 123 kilometers, and use data from blocks 63 to 65 within World Reference System-2 path 168.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  11. Application of separable parameter space techniques to multi-tracer PET compartment modeling.

    PubMed

    Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J

    2016-02-07

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
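
    The separable (variable-projection) idea can be illustrated on a toy two-exponential model: for each candidate set of nonlinear rate constants the linear amplitudes are profiled out by linear least squares, and the reduced objective is searched exhaustively. The model and grid below are illustrative stand-ins, not the paper's multi-tracer compartment equations.

```python
import numpy as np
from itertools import product

def separable_objective(rates, t, y):
    """Profile out the linear amplitudes for fixed nonlinear rates: build the
    exponential basis, solve the linear least-squares problem, and return the
    residual norm together with the fitted amplitudes."""
    A = np.exp(-np.outer(t, rates))                    # one basis column per rate
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.linalg.norm(A @ coeffs - y), coeffs

# Two-exponential toy model standing in for two-tracer kinetics
t = np.linspace(0.0, 60.0, 121)
y = 3.0 * np.exp(-0.05 * t) + 1.0 * np.exp(-0.5 * t)
y += np.random.default_rng(4).normal(0.0, 0.02, t.size)

# Exhaustive search over the (now low-dimensional) nonlinear parameter space
grid = np.linspace(0.01, 1.0, 60)
best = min((separable_objective(np.array(k), t, y)[0], k)
           for k in product(grid, grid) if k[0] < k[1])
print("best rates:", best[1])                          # close to (0.05, 0.5)
```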

  12. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Jeff L.; Morey, A. Michael; Kadrmas, Dan J.

    2016-02-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.

  13. MISR INTEX-B Products

    Atmospheric Science Data Center

    2016-11-25

    ... scales and assess their impact on air quality and climate. Phase B will be performed March 1-31, 2006 and it will focus on Mexico City pollution outflow. The Multi-angle Imaging SpectroRadiometer (MISR) team ...

  14. A method of solving tilt illumination for multiple distance phase retrieval

    NASA Astrophysics Data System (ADS)

    Guo, Cheng; Li, Qiang; Tan, Jiubin; Liu, Shutian; Liu, Zhengjun

    2018-07-01

    Multiple distance phase retrieval is a technique of using a series of intensity patterns to reconstruct a complex-valued image of an object. However, tilt illumination originating from the off-axis displacement of incident light significantly impairs its imaging quality. To eliminate this effect, we use cross-correlation calibration to estimate the oblique angle of the incident light and a Fourier-based strategy to correct the tilted illumination effect. Compared with other methods, both binary and biological objects are stably reconstructed in simulation and experiment. This work provides a simple but beneficial method to solve the problem of tilt illumination for lens-free multi-distance systems.
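
    A minimal sketch of the cross-correlation calibration step: an FFT-based cross-correlation recovers the lateral shift of a pattern between two recordings, and the shift divided by the plane separation gives a rough tilt angle. The plane spacing and pixel size below are hypothetical, and the paper's Fourier-based correction of the tilted illumination is not reproduced.

```python
import numpy as np

def estimate_shift(img_a, img_b):
    """Integer-pixel shift of img_b relative to img_a via FFT cross-correlation."""
    xcorr = np.fft.ifft2(np.conj(np.fft.fft2(img_a)) * np.fft.fft2(img_b))
    peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
    return [p - s if p > s // 2 else p for p, s in zip(peak, xcorr.shape)]  # signed

# Toy check: shift a random pattern by (5, -3) pixels and recover the shift
rng = np.random.default_rng(5)
a = rng.random((256, 256))
b = np.roll(a, (5, -3), axis=(0, 1))
dy, dx = estimate_shift(a, b)
print(dy, dx)                                   # expected: 5 -3

# With two recording planes separated by dz, the lateral shift gives a rough tilt angle
dz_um, pixel_um = 2000.0, 4.0                   # hypothetical spacing and pixel size
tilt = np.arctan(np.hypot(dy, dx) * pixel_um / dz_um)
print(f"estimated tilt: {np.degrees(tilt):.2f} deg")
```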

  15. Computer image generation: Reconfigurability as a strategy in high fidelity space applications

    NASA Technical Reports Server (NTRS)

    Bartholomew, Michael J.

    1989-01-01

    The demand for realistic, high fidelity, computer image generation systems to support space simulation is well established. However, as the number and diversity of space applications increase, the complexity and cost of computer image generation systems also increase. One strategy used to harmonize cost with varied requirements is establishment of a reconfigurable image generation system that can be adapted rapidly and easily to meet new and changing requirements. The reconfigurability strategy is discussed through the life cycle of system conception, specification, design, implementation, operation, and support for high fidelity computer image generation systems. The discussion is limited to those issues directly associated with reconfigurability and adaptability of a specialized scene generation system in a multi-faceted space applications environment. Examples and insights gained through the recent development and installation of the Improved Multi-function Scene Generation System at Johnson Space Center, Systems Engineering Simulator are reviewed and compared with current simulator industry practices. The results are clear: the strategy of reconfigurability applied to space simulation requirements provides a viable path to supporting diverse applications with an adaptable computer image generation system.

  16. Multi-use lunar telescopes

    NASA Technical Reports Server (NTRS)

    Drummond, Mark; Hine, Butler; Genet, Russell; Genet, David; Talent, David; Boyd, Louis; Trueblood, Mark; Filippenko, Alexei V. (Editor)

    1991-01-01

    The objective of multi-use telescopes is to reduce the initial and operational costs of space telescopes to the point where a fair number of telescopes, a dozen or so, would be affordable. The basic approach is to develop a common telescope, control system, and power and communications subsystem that can be used with a wide variety of instrument payloads, i.e., imaging CCD cameras, photometers, spectrographs, etc. By having such a multi-use and multi-user telescope, a common practice for earth-based telescopes, development cost can be shared across many telescopes, and the telescopes can be produced in economical batches.

  17. Ground Testing of Prototype Hardware and Processing Algorithms for a Wide Area Space Surveillance System (WASSS)

    DTIC Science & Technology

    2013-09-01

    Ground testing of prototype hardware and processing algorithms for a Wide Area Space Surveillance System (WASSS) Neil Goldstein, Rainer A... at Magdalena Ridge Observatory using the prototype Wide Area Space Surveillance System (WASSS) camera, which has a 4 x 60 field-of-view, < 0.05... objects with larger-aperture cameras. The sensitivity of the system depends on multi-frame averaging and a Principal Component Analysis based image

  18. Test technology on divergence angle of laser range finder based on CCD imaging fusion

    NASA Astrophysics Data System (ADS)

    Shi, Sheng-bing; Chen, Zhen-xing; Lv, Yao

    2016-09-01

    Laser range finders are fitted to many kinds of weapons, such as tanks, ships and aircraft, and are an important component of fire control systems. The divergence angle is an important performance parameter that reflects the horizontal resolving power of a laser range finder, and it is a required item in appraisal tests. In this paper, to test the divergence angle of a laser range finder with high accuracy, a divergence angle test system based on CCD imaging is designed; the divergence angle is obtained by fusing images acquired at different attenuation levels, which solves the problem that CCD characteristics influence the divergence angle measurement.
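
    One common way to measure divergence from a CCD image is a second-moment (D4-sigma) spot width divided by the collimator focal length; the sketch below illustrates that on a clean synthetic spot. The focal length and pixel pitch are assumptions, and the paper's multi-attenuation image fusion is not reproduced (real frames would first need background subtraction).

```python
import numpy as np

def d4sigma_width(image, pixel_um):
    """Second-moment (D4-sigma) beam width along x, in micrometres. Real CCD
    frames would first need background subtraction and ROI selection."""
    w = np.clip(image, 0.0, None)
    ys, xs = np.indices(image.shape)
    cx = (xs * w).sum() / w.sum()
    var_x = (((xs - cx) ** 2) * w).sum() / w.sum()
    return 4.0 * np.sqrt(var_x) * pixel_um

# Clean synthetic far-field spot: Gaussian with sigma = 20 px on a 512 x 512 CCD
xs, ys = np.meshgrid(np.arange(512), np.arange(512))
spot = np.exp(-((xs - 256.0) ** 2 + (ys - 256.0) ** 2) / (2.0 * 20.0 ** 2))

# Spot imaged at the focal plane of a collimator: divergence ~ spot width / focal length
width_um = d4sigma_width(spot, pixel_um=5.5)
focal_mm = 1000.0
print(f"{width_um / 1e3 / focal_mm * 1e3:.2f} mrad full-angle divergence")
```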

  19. Testing ElEvoHI on a multi-point in situ detected Coronal Mass Ejection

    NASA Astrophysics Data System (ADS)

    Amerstorfer, Tanja; Möstl, Christian; Hess, Phillip; Mays, M. Leila; Temmer, Manuela

    2017-04-01

    The Solar TErrestrial RElations Observatory (STEREO) has provided us a deep insight into the interplanetary propagation of coronal mass ejections (CMEs). Especially the wide-angle heliospheric imagers (HI) enabled the development of a multitude of methods for analyzing the evolution of CMEs through interplanetary (IP) space. Methods able to forecast arrival times and speeds at Earth (or other targets) take advantage of following a CME's path of propagation up to 1 AU. However, these methods have not been able to reduce today's errors in arrival time forecasts to less than ±6 hours, and arrival speeds are mostly overestimated by some 100 km s-1. One reason for that is the assumption of constant propagation speed, which is clearly incorrect for most CMEs—especially for those being faster than the ambient solar wind. ElEvoHI, the Ellipse Evolution model (ElEvo) based on HI observations, is a new prediction tool, which uses the benefits of different methods and observations. It provides the possibility to adjust the CME frontal shape (angular width, ellipse aspect ratio) and the direction of motion for each CME event individually. This information can be gained from Graduated Cylindrical Shell (GCS) flux-rope fitting within coronagraph images. Using the Ellipse Conversion (ElCon) method, the observed HI elongation angle is converted into a unit of distance, which reveals the kinematics of the event. After fitting the time-distance profile of the CME using the drag-based equation of motion, where real-time in situ solar wind speed from 1 AU is used as additional input, we obtain all input parameters needed to run a forecast using the ElEvo model and to predict arrival times and speeds at any target of interest in IP space. Here, we present a test on a slow CME event of 3 November 2010, detected in situ by the aligned spacecraft MESSENGER and STEREO Behind. We obtain the shape of the CME front from a cut of the 3D GCS CME shape with the ecliptic plane, resulting in an almost ideal ElEvoHI forecast of arrival time and speed at 1 AU.
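
    ElEvoHI's kinematic fit relies on the drag-based equation of motion; a minimal sketch of the standard analytic drag-based model is given below with hypothetical launch distance, speed, ambient wind, and drag parameter (the ellipse front evolution and the HI elongation-to-distance conversion are not included).

```python
import numpy as np

AU_KM = 1.496e8

def drag_based_model(t_s, r0_km, v0_kms, w_kms, gamma_per_km):
    """Analytic drag-based model for CME propagation: the CME speed relaxes
    towards the ambient solar-wind speed w under an aerodynamic-drag term."""
    s = np.sign(v0_kms - w_kms)
    x = 1.0 + s * gamma_per_km * (v0_kms - w_kms) * t_s
    v = (v0_kms - w_kms) / x + w_kms
    r = (s / gamma_per_km) * np.log(x) + w_kms * t_s + r0_km
    return r, v

# Hypothetical slow CME: launched at 20 solar radii with 400 km/s into a
# 450 km/s ambient wind, drag parameter 0.2e-7 km^-1
t = np.arange(0.0, 5 * 24 * 3600.0, 3600.0)        # 5 days, hourly steps
r, v = drag_based_model(t, 20 * 6.96e5, 400.0, 450.0, 0.2e-7)
hit = np.argmax(r >= AU_KM)                        # first step beyond 1 AU
print(f"arrival after ~{t[hit] / 3600:.0f} h at {v[hit]:.0f} km/s")
```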

  20. MAPPING ANNUAL MEAN GROUND-LEVEL PM2.5 CONCENTRATIONS USING MULTIANGLE IMAGING SPECTRORADIOMETER AEROSOL OPTICAL THICKNESS OVER THE CONTIGUOUS UNITED STATES

    EPA Science Inventory

    We present a simple approach to estimating ground-level fine particle (PM2.5, particles smaller than 2.5 um in diameter) concentration using global atmospheric chemistry models and aerosol optical thickness (AOT) measurements from the Multi-angle Imaging SpectroRadiometer (MISR)...

  1. Treatment response assessment of radiofrequency ablation for hepatocellular carcinoma: usefulness of virtual CT sonography with magnetic navigation.

    PubMed

    Minami, Yasunori; Kitai, Satoshi; Kudo, Masatoshi

    2012-03-01

    Virtual CT sonography using magnetic navigation provides cross-sectional images of CT volume data corresponding to the angle of the transducer in the magnetic field in real time. The purpose of this study was to clarify the value of this virtual CT sonography for the treatment response of radiofrequency ablation for hepatocellular carcinoma. Sixty-one patients with 88 HCCs measuring 0.5-1.3 cm (mean±SD, 1.0±0.3 cm) were treated by radiofrequency ablation. For early treatment response, dynamic CT was performed 1-5 days (median, 2 days) after ablation. We compared early treatment response between axial CT images and multi-angle CT images using virtual CT sonography. Residual tumor stains on axial CT images and multi-angle CT images were detected in 11.4% (10/88) and 13.6% (12/88) after the first session of RFA, respectively (P=0.65). Two patients were diagnosed as showing hyperemia enhancement after the initial radiofrequency ablation on axial CT images and shortly showed local tumor progression because of unnoticed residual tumors. Only virtual CT sonography with magnetic navigation retrospectively showed the residual tumor as circular enhancement. In the safety margin analysis, 10 patients were excluded because of residual tumors. A safety margin of more than 5 mm was determined on virtual CT sonographic images and transverse CT images in 71.8% (56/78) and 82.1% (64/78), respectively (P=0.13). The safety margin was overestimated on axial CT images in 8 nodules. Virtual CT sonography with magnetic navigation was useful in evaluating the treatment response of radiofrequency ablation therapy for hepatocellular carcinoma.

  2. High resolution human diffusion tensor imaging using 2-D navigated multi-shot SENSE EPI at 7 Tesla

    PubMed Central

    Jeong, Ha-Kyu; Gore, John C.; Anderson, Adam W.

    2012-01-01

    The combination of parallel imaging with partial Fourier acquisition has greatly improved the performance of diffusion-weighted single-shot EPI and is the preferred method for acquisitions at low to medium magnetic field strength such as 1.5 or 3 Tesla. Increased off-resonance effects and reduced transverse relaxation times at 7 Tesla, however, generate more significant artifacts than at lower magnetic field strength and limit data acquisition. Additional acceleration of k-space traversal using a multi-shot approach, which acquires a subset of k-space data after each excitation, reduces these artifacts relative to conventional single-shot acquisitions. However, corrections for motion-induced phase errors are not straightforward in accelerated, diffusion-weighted multi-shot EPI because of phase aliasing. In this study, we introduce a simple acquisition and corresponding reconstruction method for diffusion-weighted multi-shot EPI with parallel imaging suitable for use at high field. The reconstruction uses a simple modification of the standard SENSE algorithm to account for shot-to-shot phase errors; the method is called Image Reconstruction using Image-space Sampling functions (IRIS). Using this approach, reconstruction from highly aliased in vivo image data using 2-D navigator phase information is demonstrated for human diffusion-weighted imaging studies at 7 Tesla. The final reconstructed images show submillimeter in-plane resolution with no ghosts and much reduced blurring and off-resonance artifacts. PMID:22592941

  3. Naphthalene Planar Laser-Induced Fluorescence Imaging of Orion Multi-Purpose Crew Vehicle Heat Shield Ablation Products

    NASA Astrophysics Data System (ADS)

    Combs, Christopher S.; Clemens, Noel T.; Danehy, Paul M.

    2013-11-01

    The Orion Multi-Purpose Crew Vehicle (MPCV) calls for an ablative heat shield. In order to better design this heat shield and others that will undergo planetary entry, an improved understanding of the ablation process is required. Given that ablation is a multi-physics process involving heat and mass transfer, codes aiming to predict heat shield ablation are in need of experimental data pertaining to the turbulent transport of ablation products for validation. At The University of Texas at Austin, a technique is being developed that uses planar laser-induced fluorescence (PLIF) of a low-temperature sublimating ablator (naphthalene) to visualize the transport of ablation products in a supersonic flow. Since ablation at reentry temperatures can be difficult to recreate in a laboratory setting it is desirable to create a limited physics problem and simulate the ablation process at relatively low temperature conditions using naphthalene. A scaled Orion MPCV model with a solid naphthalene heat shield has been tested in a Mach 5 wind tunnel at various angles of attack in the current work. PLIF images have shown high concentrations of scalar in the capsule wake region, intermittent turbulent structures on the heat shield surface, and interesting details of the capsule shear layer structure. This work was supported by a NASA Office of the Chief Technologist's Space Technology Research Fellowship (NNX11AN55H).

  4. A method for simultaneous echo planar imaging of hyperpolarized 13C pyruvate and 13C lactate

    NASA Astrophysics Data System (ADS)

    Reed, Galen D.; Larson, Peder E. Z.; von Morze, Cornelius; Bok, Robert; Lustig, Michael; Kerr, Adam B.; Pauly, John M.; Kurhanewicz, John; Vigneron, Daniel B.

    2012-04-01

    A rapid echo planar imaging sequence for dynamic imaging of [1-13C] lactate and [1-13C] pyruvate simultaneously was developed. Frequency-based separation of these metabolites was achieved by spatial shifting in the phase-encoded direction with the appropriate choice of echo spacing. Suppression of the pyruvate-hydrate and alanine resonances is achieved through an optimized spectral-spatial RF waveform. Signal sampling efficiency as a function of pyruvate and lactate excitation angle was simulated using two site exchange models. Dynamic imaging is demonstrated in a transgenic mouse model, and phantom validations of the RF pulse frequency selectivity were performed.

  5. BATMAN flies: a compact spectro-imager for space observation

    NASA Astrophysics Data System (ADS)

    Zamkotsian, Frederic; Ilbert, Olivier; Zoubian, Julien; Delsanti, Audrey; Boissier, Samuel; Lancon, Ariane

    2014-08-01

    BATMAN flies is a compact spectro-imager based on MOEMS for generating reconfigurable slit masks, and feeding two arms in parallel. The FOV is 25 x 12 arcmin2 for a 1m telescope, in the infrared (0.85-1.7μm) at 500-1000 spectral resolution. Unique science cases for space observation are achievable with this deep spectroscopic multi-survey instrument: a deep survey of high-z galaxies down to H=25 on 5 deg2 with continuum detection and all z>7 candidates at H=26.2 over 5 deg2; a deep survey of young stellar clusters in nearby galaxies; a deep survey of ALL known Kuiper Belt objects down to H=22. A pathfinder towards BATMAN in space is already running with ground-based demonstrators.

  6. [Usefulness of curved coronal MPR imaging for the diagnosis of cervical radiculopathy].

    PubMed

    Inukai, Chikage; Inukai, Takashi; Matsuo, Naoki; Shimizu, Ikuo; Goto, Hisaharu; Takagi, Teruhide; Takayasu, Masakazu

    2010-03-01

    In surgical treatment of cervical radiculopathy, localization of the responsible lesions by various imaging modalities is essential. Among them, MRI is non-invasive and plays a primary role in the assessment of spinal radicular symptoms. However, demonstration of nerve root compression is sometimes difficult by the conventional methods of MRI, such as T1 weighted (T1W) and T2 weighted (T2W) sagittal or axial images. We have applied a new technique of curved coronal multiplanar reconstruction (MPR) imaging for the diagnosis of cervical radiculopathy. Ten patients (4 male, 6 female) aged between 31 and 79 years, who had a clinical diagnosis of cervical radiculopathy, were included in this study. Seven patients underwent anterior key-hole foraminotomy to decompress the nerve root with successful results. All the patients had 3D MRI studies, such as true fast imaging with steady-state precession (FISP), 3D T2W sampling perfection with application optimized contrasts using different flip angle evolution (SPACE), and 3D multi-echo data image combination (MEDIC) imaging, in addition to the routine MRI (1.5 T Avanto, Siemens, Germany) with a phased array coil. The curved coronal MPR images were produced from these MRI data using a workstation. The nerve root compression was diagnosed by curved coronal MPR images in all the patients. The compression sites were compatible with those of the operative findings in 7 patients, who underwent surgical treatment. The MEDIC images demonstrated the nerve root most clearly, followed by the 3D SPACE images. The curved coronal MPR imaging is useful for accurate localization of the compressing lesions in patients with cervical radiculopathy.

  7. An Imaging System for Satellite Hypervelocity Impact Debris Characterization

    NASA Astrophysics Data System (ADS)

    Moraguez, M.; Liou, J.; Fitz-Coy, N.; Patankar, K.; Cowardin, H.

    This paper discusses the design of an automated imaging system for size characterization of debris produced by the DebriSat hypervelocity impact test. The goal of the DebriSat project is to update satellite breakup models. A representative LEO satellite, DebriSat, was constructed and subjected to a hypervelocity impact test. The impact produced an estimated 85,000 debris fragments. The size distribution of these fragments is required to update the current satellite breakup models. An automated imaging system was developed for the size characterization of the debris fragments. The system uses images taken from various azimuth and elevation angles around the object to produce a 3D representation of the fragment via a space carving algorithm. The system consists of N point-and-shoot cameras attached to a rigid support structure that defines the elevation angle for each camera. The debris fragment is placed on a turntable that is incrementally rotated to desired azimuth angles. The number of images acquired can be varied based on the desired resolution. Appropriate background and lighting is used for ease of object detection. The system calibration and image acquisition process are automated to result in push-button operations. However, for quality assurance reasons, the system is semi-autonomous by design to ensure operator involvement. This paper describes the imaging system setup, calibration procedure, repeatability analysis, and the results of the debris characterization.
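
    A compact sketch of silhouette-based space carving as described above: voxels whose projection falls outside the object silhouette in any view are removed. The orthographic projection, synthetic spherical silhouettes, and the chosen view angles are all simplifications and are not the DebriSat system's calibrated camera model.

```python
import numpy as np

def carve(voxels, views):
    """Silhouette-based space carving (orthographic sketch): a voxel survives
    only if its projection lands inside the object silhouette in every view."""
    keep = np.ones(len(voxels), dtype=bool)
    for R, silhouette, scale, origin in views:
        uv = (voxels @ R.T)[:, :2] * scale + origin          # project to image plane
        ij = np.round(uv).astype(int)
        inside = ((ij[:, 0] >= 0) & (ij[:, 0] < silhouette.shape[1]) &
                  (ij[:, 1] >= 0) & (ij[:, 1] < silhouette.shape[0]))
        ok = np.zeros(len(voxels), dtype=bool)
        ok[inside] = silhouette[ij[inside, 1], ij[inside, 0]]
        keep &= ok
    return keep

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

# Voxel grid and synthetic silhouettes of a radius-10 sphere (a circle in every view)
g = np.arange(-15, 16, dtype=float)
voxels = np.stack(np.meshgrid(g, g, g, indexing="ij"), -1).reshape(-1, 3)
sil = np.zeros((64, 64), dtype=bool)
yy, xx = np.indices(sil.shape)
sil[(xx - 32) ** 2 + (yy - 32) ** 2 <= 10 ** 2] = True

views = [(rot_x(np.radians(el)) @ rot_z(np.radians(az)), sil, 1.0, np.array([32.0, 32.0]))
         for az in (0, 45, 90, 135) for el in (0, 60)]
kept = carve(voxels, views)
print(kept.sum(), "of", len(voxels), "voxels kept")      # approximates the sphere's volume
```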

  8. An Imaging System for Satellite Hypervelocity Impact Debris Characterization

    NASA Technical Reports Server (NTRS)

    Moraguez, Matthew; Patankar, Kunal; Fitz-Coy, Norman; Liou, J.-C.; Cowardin, Heather

    2015-01-01

    This paper discusses the design of an automated imaging system for size characterization of debris produced by the DebriSat hypervelocity impact test. The goal of the DebriSat project is to update satellite breakup models. A representative LEO satellite, DebriSat, was constructed and subjected to a hypervelocity impact test. The impact produced an estimated 85,000 debris fragments. The size distribution of these fragments is required to update the current satellite breakup models. An automated imaging system was developed for the size characterization of the debris fragments. The system uses images taken from various azimuth and elevation angles around the object to produce a 3D representation of the fragment via a space carving algorithm. The system consists of N point-and-shoot cameras attached to a rigid support structure that defines the elevation angle for each camera. The debris fragment is placed on a turntable that is incrementally rotated to desired azimuth angles. The number of images acquired can be varied based on the desired resolution. Appropriate background and lighting is used for ease of object detection. The system calibration and image acquisition process are automated to result in push-button operations. However, for quality assurance reasons, the system is semi-autonomous by design to ensure operator involvement. This paper describes the imaging system setup, calibration procedure, repeatability analysis, and the results of the debris characterization.

  9. Effective increase in beam emittance by phase-space expansion using asymmetric Bragg diffraction.

    PubMed

    Chu, Chia-Hung; Tang, Mau-Tsu; Chang, Shih-Lin

    2015-08-24

    We propose an innovative method to extend the utilization of the phase space downstream of a synchrotron light source for X-ray transmission microscopy. Based on the dynamical theory of X-ray diffraction, asymmetrically cut perfect crystals are applied to reshape the position-angle-wavelength space of the light source, by which the usable phase space of the source can be magnified by over one hundred times, thereby "phase-space-matching" the source with the objective lens of the microscope. The method's validity is confirmed using SHADOW code simulations, and aberration through an optical lens such as a Fresnel zone plate is examined via matrix optics for nano-resolution X-ray images.

  10. Least squares restoration of multi-channel images

    NASA Technical Reports Server (NTRS)

    Chin, Roland T.; Galatsanos, Nikolas P.

    1989-01-01

    In this paper, a least squares filter for the restoration of multichannel imagery is presented. The restoration filter is based on a linear, space-invariant imaging model and makes use of an iterative matrix inversion algorithm. The restoration utilizes both within-channel (spatial) and cross-channel information as constraints. Experiments using color images (three-channel imagery with red, green, and blue components) were performed to evaluate the filter's performance and to compare it with other monochrome and multichannel filters.

  11. Simulating multi-spacecraft Heliospheric Imager observations for tomographic reconstruction of interplanetary CMEs

    NASA Astrophysics Data System (ADS)

    Barnes, D.

    2017-12-01

    The multiple, spatially separated vantage points afforded by the STEREO and SOHO missions provide physicists with a means to infer the three-dimensional structure of the solar corona via tomographic imaging. The reconstruction process combines these multiple projections of the optically thin plasma to constrain its three-dimensional density structure and has been successfully applied to the low corona using the STEREO and SOHO coronagraphs. However, the technique is also possible at larger, inter-planetary distances using wide-angle imagers, such as the STEREO Heliospheric Imagers (HIs), to observe faint solar wind plasma and Coronal Mass Ejections (CMEs). Limited small-scale structure may be inferred from only three, or fewer, viewpoints and the work presented here is done with the aim of establishing techniques for observing CMEs with upcoming and future HI-like technology. We use simulated solar wind densities to compute realistic white-light HI observations, with which we explore the requirements of such instruments for determining solar wind plasma density structure via tomography. We exploit this information to investigate the optimal orbital characteristics, such as spacecraft number, separation, inclination and eccentricity, necessary to perform the technique with HIs. Further to this we argue that tomography may be greatly enhanced by means of improved instrumentation; specifically, the use of wide-angle imagers capable of measuring polarised light. This work has obvious space weather applications, serving as a demonstration for potential future missions (such as at L1 and L5) and will prove timely in fully exploiting the science return from the upcoming Solar Orbiter and Parker Solar Probe missions.

  12. Automated brain tumor segmentation using spatial accuracy-weighted hidden Markov Random Field.

    PubMed

    Nie, Jingxin; Xue, Zhong; Liu, Tianming; Young, Geoffrey S; Setayesh, Kian; Guo, Lei; Wong, Stephen T C

    2009-09-01

    A variety of algorithms have been proposed for brain tumor segmentation from multi-channel sequences; however, most of them require isotropic or pseudo-isotropic resolution of the MR images. Although co-registration and interpolation of low-resolution sequences, such as T2-weighted images, onto the space of the high-resolution image, such as the T1-weighted image, can be performed prior to the segmentation, the results are usually limited by partial volume effects due to interpolation of low-resolution images. To improve the quality of tumor segmentation in clinical applications where low-resolution sequences are commonly used together with high-resolution images, we propose an algorithm based on a Spatial accuracy-weighted Hidden Markov random field and Expectation maximization (SHE) approach for both automated tumor and enhanced-tumor segmentation. SHE incorporates the spatial interpolation accuracy of low-resolution images into the optimization procedure of the Hidden Markov Random Field (HMRF) to segment tumor using multi-channel MR images with different resolutions, e.g., high-resolution T1-weighted and low-resolution T2-weighted images. In experiments, we evaluated this algorithm using a set of simulated multi-channel brain MR images with known ground-truth tissue segmentation and also applied it to a dataset of MR images obtained during clinical trials of brain tumor chemotherapy. The results show that more accurate tumor segmentation results can be obtained compared with conventional multi-channel segmentation algorithms.

  13. Ice Types in the Beaufort Sea, Alaska

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Determining the amount and type of sea ice in the polar oceans is crucial to improving our knowledge and understanding of polar weather and long term climate fluctuations. These views from two satellite remote sensing instruments, the synthetic aperture radar (SAR) on board the RADARSAT satellite and the Multi-angle Imaging SpectroRadiometer (MISR), illustrate different methods that may be used to assess sea ice type. Sea ice in the Beaufort Sea off the north coast of Alaska was classified in these concurrent images acquired March 19, 2001 and mapped to the same geographic area.

    To identify sea ice types, the National Oceanic and Atmospheric Administration (NOAA) National Ice Center constructs ice charts using several data sources including RADARSAT SAR images such as the one shown at left. SAR classifies sea ice types primarily by how the surface and subsurface roughness influence radar backscatter. In the SAR image, white lines delineate different sea ice zones as identified by the National Ice Center. Regions of mostly multi-year ice (A) are separated from regions with large amounts of first year and younger ice (B-D), and the dashed white line at bottom marks the coastline. In general, sea ice types that exhibit increased radar backscatter appear bright in SAR and are identified as rougher, older ice types. Younger, smoother ice types appear dark to SAR. Near the top of the SAR image, however, red arrows point to bright areas in which large, crystalline 'frost flowers' have formed on young, thin ice, causing this young ice type to exhibit an increased radar backscatter. Frost flowers are strongly backscattering at radar wavelengths (cm) due to both surface roughness and the high salinity of frost flowers, which causes them to be highly reflective to radar energy.

    Surface roughness is also registered by MISR, although the roughness observed is at a different spatial scale. Older, rougher ice areas are predominantly backward scattering to the MISR cameras, whereas younger, smoother ice types are predominantly forward scattering. The MISR map at right was generated using a statistical classification routine (called ISODATA) and analyzed using ice charts from the National Ice Center. Five classes of sea ice were found based upon the classification of MISR angular data. These are described, based on interpretation of the SAR image, by the image key. Very smooth ice areas that are predominantly forward scattering are colored red. Frost flowers are largely smooth to the MISR visible band sensor and are mapped as forward scattering. Areas mapped as blue are predominantly backward scattering, and the other three classes have statistically distinct angular signatures and fall within the middle of the forward/backward scattering continuum. Some areas that may be first year or younger ice between the multi-year ice floes are not discernible to SAR, illustrating how MISR can potentially make a unique contribution to sea ice mapping.
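
    As a stand-in for the ISODATA classification mentioned above, the sketch below clusters per-pixel multi-angle feature vectors with plain k-means (ISODATA's split and merge heuristics are omitted); the feature vectors are synthetic, not MISR radiances.

```python
import numpy as np

def kmeans(X, k=5, iters=50, seed=0):
    """Plain k-means, the clustering core of ISODATA (the split/merge
    heuristics of full ISODATA are omitted in this sketch)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Toy stand-in for per-pixel multi-angle feature vectors (e.g. 9 MISR view angles)
rng = np.random.default_rng(7)
pixels = np.vstack([rng.normal(m, 0.05, size=(2000, 9)) for m in (0.2, 0.4, 0.6, 0.8, 1.0)])
labels, _ = kmeans(pixels, k=5)
print(np.bincount(labels))        # number of pixels assigned to each ice class
```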

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees north and 82 degrees south latitude. This data product was generated from a portion of the imagery acquired during Terra orbit 6663. The MISR image has been cropped to include an area that is 200 kilometers wide, and utilizes data from blocks 30 to 33 within World Reference System-2 path 71.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  14. Flooding in the Aftermath of Hurricane Katrina

    NASA Technical Reports Server (NTRS)

    2005-01-01

    These views of the Louisiana and Mississippi regions were acquired before and one day after Katrina made landfall along the Gulf of Mexico coast, and highlight many of the changes to the rivers and vegetation that occurred between the two views. The images were acquired by NASA's Multi-angle Imaging SpectroRadiometer (MISR) on August 14 and August 30, 2005. These multiangular, multispectral false-color composites were created using red band data from MISR's 46° backward- and forward-viewing cameras, and near-infrared data from MISR's nadir camera. Such a display causes water bodies and inundated soil to appear in blue and purple hues, and highly vegetated areas to appear bright green. The scene differentiation is a result of both spectral effects (living vegetation is highly reflective at near-infrared wavelengths whereas water is absorbing) and of angular effects (wet surfaces preferentially forward scatter sunlight). The two images were processed identically and extend from the regions of Greenville, Mississippi (upper left) to Mobile Bay, Alabama (lower right).

    Numerous rivers along the Mississippi coast that were not apparent in the pre-Katrina image are now visible; the most dramatic of these is a new inlet in the Pascagoula River. The post-Katrina flooding along the edges of Lake Pontchartrain and the city of New Orleans is also apparent. In addition, the agricultural lands along the Mississippi floodplain in the upper left exhibit stronger near-infrared brightness before Katrina. After Katrina, many of these agricultural areas exhibit a stronger signal to MISR's oblique cameras, indicating the presence of inundated soil throughout the floodplain. Note that clouds appear in a different spot for each view angle due to a parallax effect resulting from their height above the surface.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously, viewing the entire globe between 82° north and 82° south latitude every nine days. Each image covers an area of about 380 kilometers by 410 kilometers. The data products were generated from a portion of the imagery acquired during Terra orbits 30091 and 30324 and utilize data from blocks 64-67 within World Reference System-2 path 22.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Science Mission Directorate, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is managed for NASA by the California Institute of Technology.

  15. Study on feasibility of laser reflective tomography with satellite-accompany

    NASA Astrophysics Data System (ADS)

    Gu, Yu; Hu, Yi-hua; Hao, Shi-qi; Gu, You-lin; Zhao, Nan-xiang; Wang, Yang-yang

    2015-10-01

    Laser reflective tomography is a long-range, high-resolution active detection technology whose advantage is that the spatial resolution is independent of the imaging distance. An accompanying satellite is a satellite that encircles a target spacecraft. When the accompanying satellite is used to detect the target spacecraft, multi-angle echo data can be obtained and used for reflective tomography imaging. The feasibility of this detection mode is studied in this article. An accompanying-orbit model was established for a horizontal circular formation, and the accompanying-flight parameters were defined. A simulation of satellite-to-satellite reflective tomography imaging with an accompanying satellite was carried out. The operating mode of reflective tomographic data acquisition from a monostatic laser radar was discussed and designed. The flight period, which equals the time needed to collect echo data from all directions, is one of the important accompanying-flight parameters. The azimuth angle determines the plane of image formation while the elevation angle determines the projection direction. Both the azimuth and elevation angles guide the satellite attitude stability controller in order to keep the laser radar spot pointed at the target. The influence of the distance between the accompanying satellite and the target satellite on the tomographic imaging time was analyzed. The influences of flight period, azimuth angle, and elevation angle on tomographic imaging were analyzed as well. Simulation results showed that accompanying-satellite laser reflective tomography is a feasible and effective method for satellite-to-satellite detection.
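
    The reconstruction step alone can be sketched as filtered back-projection over the azimuths sampled during the accompanying orbit; this is a generic tomography illustration (using a standard phantom as a stand-in reflectivity map), not the authors' simulation, and orbit design, attitude control, and echo preprocessing are omitted.

    ```python
    # Generic reflective-tomography reconstruction sketch: build a sinogram of
    # projections over the sampled azimuths, then invert with filtered
    # back-projection.
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon

    target = shepp_logan_phantom()                        # stand-in reflectivity map
    angles = np.linspace(0.0, 180.0, 60, endpoint=False)  # azimuths over the orbit
    sinogram = radon(target, theta=angles)                # simulated projection data
    reconstruction = iradon(sinogram, theta=angles)       # filtered back-projection
    ```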

  16. A Population-Based Assessment of the Agreement Between Grading of Goniophotographic Images and Gonioscopy in the Chinese-American Eye Study (CHES).

    PubMed

    Murakami, Yohko; Wang, Dandan; Burkemper, Bruce; Lin, Shan C; Varma, Rohit

    2016-08-01

    To compare grading of goniophotographic images and gonioscopy in assessing the iridocorneal angle. In a population-based, cross-sectional study, participants underwent gonioscopy and goniophotographic imaging during the same visit. The iridocorneal angle was classified as closed if the posterior trabecular meshwork could not be seen. A single masked observer graded the goniophotographic images, and each eye was classified as having angle closure based on the number of closed quadrants. Agreement between the methods was analyzed by calculating kappa (κ) and first-order agreement coefficient (AC1) statistics and by comparing areas under receiver operating characteristic curves (AUC). A total of 4149 Chinese Americans (3994 eyes) were included in this study. The agreement for angle closure diagnosis between gonioscopy and goniophotographic (EyeCam) grading was moderate to excellent (κ = 0.60, AC1 0.90, AUC 0.76-0.80). Detection of iridocorneal angle closure based on goniophotographic imaging shows moderate to very good agreement with angle closure assessment using gonioscopy.
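
    For reference, the two agreement statistics named above can be computed from a 2x2 agreement table as in the sketch below; the counts shown are illustrative, not the CHES data.

    ```python
    # Cohen's kappa and Gwet's AC1 for a binary (closed / not closed) rating by
    # two methods; the 2x2 counts below are hypothetical.
    import numpy as np

    def kappa_and_ac1(table):
        """table: 2x2 array; rows = gonioscopy (closed, open), cols = photo grading."""
        table = np.asarray(table, dtype=float)
        n = table.sum()
        po = np.trace(table) / n                       # observed agreement
        p_row = table.sum(axis=1) / n
        p_col = table.sum(axis=0) / n
        pe_kappa = (p_row * p_col).sum()               # chance agreement (Cohen)
        kappa = (po - pe_kappa) / (1 - pe_kappa)
        pi = (p_row[0] + p_col[0]) / 2                 # mean 'closed' prevalence
        pe_ac1 = 2 * pi * (1 - pi)                     # chance agreement (Gwet, binary)
        ac1 = (po - pe_ac1) / (1 - pe_ac1)
        return kappa, ac1

    print(kappa_and_ac1([[310, 120], [140, 3424]]))    # hypothetical counts
    ```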

  17. From Wheatstone to Cameron and beyond: overview in 3-D and 4-D imaging technology

    NASA Astrophysics Data System (ADS)

    Gilbreath, G. Charmaine

    2012-02-01

    This paper reviews three-dimensional (3-D) and four-dimensional (4-D) imaging technology, from Wheatstone through today, with some prognostications for near future applications. This field is rich in variety, subject specialty, and applications. A major trend, multi-view stereoscopy, is moving the field forward to real-time wide-angle 3-D reconstruction as breakthroughs in parallel processing and multi-processor computers enable very fast processing. Real-time holography meets 4-D imaging reconstruction at the goal of achieving real-time, interactive, 3-D imaging. Applications to telesurgery and telemedicine as well as to the needs of the defense and intelligence communities are also discussed.

  18. Massive Gas Cloud Around Jupiter

    NASA Technical Reports Server (NTRS)

    2003-01-01

    An innovative instrument on NASA's Cassini spacecraft makes the space environment around Jupiter visible, revealing a donut-shaped gas cloud encircling the planet.

    The image was taken with the energetic neutral atom imaging technique by the Magnetospheric Imaging Instrument on Cassini as the spacecraft flew past Jupiter in early 2001 at a distance of about 10 million kilometers (6 million miles). This technique provides information about a source by detecting neutral atoms emitted by the source, comparable to how a camera reveals information about an object by detecting photons coming from the object.

    The central object in this image represents energetic neutral atom emissions from Jupiter itself. The outer two objects represent emissions from a donut-shaped cloud, or torus, that shares an orbit with Jupiter's moon Europa. The cloud's emissions appear dot-like because of the viewing angle. The torus is viewed edge-on, and the image is brightest at the line-of-sight angles that pass through the greatest volume of it.

    Cassini is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, Calif., manages Cassini for NASA's Office of Space Science, Washington, D.C.

  19. Partially-overlapped viewing zone based integral imaging system with super wide viewing angle.

    PubMed

    Xiong, Zhao-Long; Wang, Qiong-Hua; Li, Shu-Li; Deng, Huan; Ji, Chao-Chao

    2014-09-22

    In this paper, we analyze the relationship between the viewer and the viewing zones of an integral imaging (II) system and present a partially-overlapped viewing zone (POVZ) based integral imaging system with a super wide viewing angle. In the proposed system, the viewing angle can be wider than that of the conventional tracking-based II system. In addition, the POVZ eliminates the flipping and time delay of the 3D scene as well. The proposed II system has a super wide viewing angle of 120° without flipping, about twice as wide as that of the conventional system.

  20. Relationship between iris surface features and angle width in Asian eyes.

    PubMed

    Sidhartha, Elizabeth; Nongpiur, Monisha Esther; Cheung, Carol Y; He, Mingguang; Wong, Tien Yin; Aung, Tin; Cheng, Ching-Yu

    2014-10-23

    To examine the associations between iris surface features and anterior chamber angle width in Asian eyes. In this prospective cross-sectional study, we recruited 600 subjects from a large population-based study, the Singapore Epidemiology of Eye Diseases (SEED) study. We obtained standardized digital slit-lamp iris photographs and graded the iris crypts (by number and size), furrows (by number and circumferential extent), and color (higher grade denoting darker iris). Vertical and horizontal cross-sections of the anterior chamber were imaged using anterior segment optical coherence tomography. Angle opening distance (AOD), angle recess area (ARA), and trabecular-iris space area (TISA) were measured using customized software. Associations of the angle width with the iris surface features in the subjects' right eyes were assessed using linear regression analysis. A total of 464 eyes of 464 subjects (mean age: 57.5 ± 8.6 years) had complete and gradable data for crypts and color, and 423 eyes had gradable data for furrows. After adjustment for age, sex, ethnicity, pupil size, and corneal arcus, a higher crypt grade was independently associated with wider AOD750 (β [change in angle width per grade higher] = 0.018, P = 0.023), ARA750 (β = 0.022, P = 0.049), and TISA750 (β = 0.011, P = 0.019), and a darker iris was associated with narrower ARA750 (β = -0.025, P = 0.044) and TISA750 (β = -0.013, P = 0.011). Iris surface features, assessed and measured from slit-lamp photographs, correlated well with anterior chamber angle width; irises with more crypts and lighter color were associated with wider angles. These findings may provide another imaging modality to assess angle closure risk based on iris surface features. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  1. MISR: protection from ourselves

    NASA Technical Reports Server (NTRS)

    Nolan, T.; Varanasi, P.

    2004-01-01

    Outlines lessons learned by the Instrument Operations Team of the Multi-angle Imaging SpectroRadiometer (MISR) mission on NASA/JPL's Terra satellite. It tells the story of "MISR: Protection from Ourselves!" and describes, in detail, how the MISR instrument survived operator errors.

  2. Level 2 Ancillary Products and Datasets Algorithm Theoretical Basis

    NASA Technical Reports Server (NTRS)

    Diner, D.; Abdou, W.; Gordon, H.; Kahn, R.; Knyazikhin, Y.; Martonchik, J.; McDonald, D.; McMuldroch, S.; Myneni, R.; West, R.

    1999-01-01

    This Algorithm Theoretical Basis (ATB) document describes the algorithms used to generate the parameters of certain ancillary products and datasets used during Level 2 processing of Multi-angle Imaging SpectroRadiometer (MISR) data.

  3. Hurricane Katrina

    Atmospheric Science Data Center

    2014-05-15

    ... Katrina is one of the most powerful and destructive storms on record for the Atlantic Basin. The animation progresses from ... tops" are also characteristic of strong and rapidly growing storms. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth ...

  4. Two wide-angle imaging neutral-atom spectrometers (TWINS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McComas, D.J.; Blake, B.; Burch, J.

    1998-11-01

    Two Wide-angle Imaging Neutral-atom Spectrometers (TWINS) is a revolutionary new mission designed to stereoscopically image the magnetosphere in charge exchange neutral atoms for the first time. The authors propose to fly two identical TWINS instruments as a mission of opportunity on two widely-spaced high-altitude, high-inclination US Government spacecraft. Because the spacecraft are funded independently, TWINS can provide a vast quantity of high priority science observations (as identified in an ongoing new missions concept study and the Sun-Earth Connections Roadmap) at a small fraction of the cost of a dedicated mission. Because stereo observations of the near-Earth space environs will provide a particularly graphic means for visualizing the magnetosphere in action, and because of the dedication and commitment of the investigator team to the principles of carrying space science to the broader audience, TWINS will also be an outstanding tool for public education and outreach.

  5. Volga Delta and the Caspian Sea

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Russia's Volga River is the largest river system in Europe, draining over 1.3 million square kilometers of catchment area into the Caspian Sea. The brackish Caspian is Earth's largest landlocked water body, and its isolation from the world's oceans has enabled the preservation of several unique animal and plant species. The Volga provides most of the Caspian's fresh water and nutrients, and also discharges large amounts of sediment and industrial waste into the relatively shallow northern part of the sea. These images of the region were captured by the Multi-angle Imaging SpectroRadiometer on October 5, 2001, during Terra orbit 9567. Each image represents an area of approximately 275 kilometers x 376 kilometers.

    The left-hand image is from MISR's nadir (vertical-viewing) camera, and shows how light is reflected at red, green, and blue wavelengths. The right-hand image is a false color composite of red-band imagery from MISR's 60-degree backward, nadir, and 60-degree forward-viewing cameras, displayed as red, green, and blue, respectively. Here, color variations indicate how light is reflected at different angles of view. Water appears blue in the right-hand image, for example, because sun glitter makes smooth, wet surfaces look brighter at the forward camera's view angle. The rougher-textured vegetated wetlands near the coast exhibit preferential backscattering, and consequently appear reddish. A small cloud near the center of the delta separates into red, green, and blue components due to geometric parallax associated with its elevation above the surface.

    Other notable features within the images include several linear features located near the Volga Delta shoreline. These long, thin lines are artificially maintained shipping channels, dredged to depths of at least 2 meters. The crescent-shaped Kulaly Island, also known as Seal Island, is visible near the right-hand edge of the images.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  6. [Multi-channel motion signal acquisition system and experimental results].

    PubMed

    Zhong, Sheng; Yi, Wanguan; Deng, Ke; Zhan, Kai; Wen, Huiying; Chen, Xin

    2014-09-01

    For the study of muscle function and characteristics during exercise, a multi-channel data acquisition system was developed; the overall design of the system, its hardware composition, and its functions are described in detail. Synchronous acquisition and storage of the surface EMG signal, joint angle signal, plantar pressure signal, and ultrasonic image were achieved, and initial results are presented.

  7. Multi-Angle Snowflake Camera Value-Added Product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shkurko, Konstantin; Garrett, T.; Gaustad, K

    The Multi-Angle Snowflake Camera (MASC) addresses a need for high-resolution multi-angle imaging of hydrometeors in freefall with simultaneous measurement of fallspeed. As illustrated in Figure 1, the MASC consists of three cameras, separated by 36°, each pointing at an identical focal point approximately 10 cm away. Located immediately above each camera, a light aims directly at the center of depth of field for its corresponding camera. The focal point at which the cameras are aimed lies within a ring through which hydrometeors fall. The ring houses a system of near-infrared emitter-detector pairs, arranged in two arrays separated vertically by 32 mm. When hydrometeors pass through the lower array, they simultaneously trigger all cameras and lights. Fallspeed is calculated from the time it takes to traverse the distance between the upper and lower triggering arrays. The trigger electronics filter out ambient light fluctuations associated with varying sunlight and shadows. The microprocessor onboard the MASC controls the camera system and communicates with the personal computer (PC). The image data is sent via a FireWire 800 line, and fallspeed (and camera control) is sent via a Universal Serial Bus (USB) line that relies on RS232-over-USB serial conversion. See Table 1 for specific details on the MASC located at the Oliktok Point Mobile Facility on the North Slope of Alaska. The value-added product (VAP) detailed in this documentation analyzes the raw data (Section 2.0) using Python: image processing relies on the OpenCV library, and derived aggregate statistics are computed by averaging over the processed images. See Sections 4.1 and 4.2 for more details on what variables are computed.
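
    The fallspeed calculation described above reduces to dividing the 32 mm array separation by the transit time between the upper- and lower-array trigger events, as in this minimal sketch (function and variable names are assumptions, not the VAP code).

    ```python
    # Minimal fallspeed sketch: vertical array separation divided by transit time.
    ARRAY_SEPARATION_M = 0.032  # vertical spacing of the emitter-detector arrays

    def fallspeed(t_upper_s: float, t_lower_s: float) -> float:
        """Return fallspeed in m/s from the two trigger timestamps (seconds)."""
        dt = t_lower_s - t_upper_s
        if dt <= 0:
            raise ValueError("lower array must trigger after the upper array")
        return ARRAY_SEPARATION_M / dt

    print(fallspeed(0.000, 0.016))  # e.g. 16 ms transit time -> 2.0 m/s
    ```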

  8. MISR at 15: Multiple Perspectives on Our Changing Earth

    NASA Astrophysics Data System (ADS)

    Diner, D. J.; Ackerman, T. P.; Braverman, A. J.; Bruegge, C. J.; Chopping, M. J.; Clothiaux, E. E.; Davies, R.; Di Girolamo, L.; Garay, M. J.; Jovanovic, V. M.; Kahn, R. A.; Kalashnikova, O.; Knyazikhin, Y.; Liu, Y.; Marchand, R.; Martonchik, J. V.; Muller, J. P.; Nolin, A. W.; Pinty, B.; Verstraete, M. M.; Wu, D. L.

    2014-12-01

    Launched aboard NASA's Terra satellite in December 1999, the Multi-angle Imaging SpectroRadiometer (MISR) instrument has opened new vistas in remote sensing of our home planet. Its 9 pushbroom cameras provide as many view angles ranging from 70 degrees forward to 70 degrees backward along Terra's flight track, in four visible and near-infrared spectral bands. MISR's well-calibrated, accurately co-registered, and moderately high spatial resolution radiance images have been coupled with novel data processing algorithms to mine the information content of angular reflectance anisotropy and multi-camera stereophotogrammetry, enabling new perspectives on the 3-D structure and dynamics of Earth's atmosphere and surface in support of climate and environmental research. Beginning with "first light" in February 2000, the nearly 15-year (and counting) MISR observational record provides an unprecedented data set with applications to multiple disciplines, documenting regional, global, short-term, and long-term changes in aerosol optical depths, aerosol type, near-surface particulate pollution, spectral top-of-atmosphere and surface albedos, aerosol plume-top and cloud-top heights, height-resolved cloud fractions, atmospheric motion vectors, and the structure of vegetated and ice-covered terrains. Recent computational advances include aerosol retrievals at finer spatial resolution than previously possible, and production of near-real time tropospheric winds with a latency of less than 3 hours, making possible for the first time the assimilation of MISR data into weather forecast models. In addition, recent algorithmic and technological developments provide the means of using and acquiring multi-angular data in new ways, such as the application of optical tomography to map 3-D atmospheric structure; building smaller multi-angle instruments in the future; and extending the multi-angular imaging methodology to the ultraviolet, shortwave infrared, and polarimetric realms. Such advances promise further enhancements to the observational power of the remote sensing approaches that MISR has pioneered.

  9. Multi-position photovoltaic assembly

    DOEpatents

    Dinwoodie, Thomas L.

    2003-03-18

    The invention is directed to a PV assembly, for use on a support surface, comprising a base, a PV module, a multi-position module support assembly, securing the module to the base at shipping and inclined-use angles, a deflector, a multi-position deflector support securing the deflector to the base at deflector shipping and deflector inclined-use angles, the module and deflector having opposed edges defining a gap therebetween. The invention permits transport of the PV assemblies in a relatively compact form, thus lowering shipping costs, while facilitating installation of the PV assemblies with the PV module at the proper inclination.

  10. Field-Portable Pixel Super-Resolution Colour Microscope

    PubMed Central

    Greenbaum, Alon; Akbari, Najva; Feizi, Alborz; Luo, Wei; Ozcan, Aydogan

    2013-01-01

    Based on partially-coherent digital in-line holography, we report a field-portable microscope that can render lensfree colour images over a wide field-of-view of e.g., >20 mm2. This computational holographic microscope weighs less than 145 grams with dimensions smaller than 17×6×5 cm, making it especially suitable for field settings and point-of-care use. In this lensfree imaging design, we merged a colorization algorithm with a source shifting based multi-height pixel super-resolution technique to mitigate ‘rainbow’ like colour artefacts that are typical in holographic imaging. This image processing scheme is based on transforming the colour components of an RGB image into YUV colour space, which separates colour information from brightness component of an image. The resolution of our super-resolution colour microscope was characterized using a USAF test chart to confirm sub-micron spatial resolution, even for reconstructions that employ multi-height phase recovery to handle dense and connected objects. To further demonstrate the performance of this colour microscope Papanicolaou (Pap) smears were also successfully imaged. This field-portable and wide-field computational colour microscope could be useful for tele-medicine applications in resource poor settings. PMID:24086742

  11. Field-portable pixel super-resolution colour microscope.

    PubMed

    Greenbaum, Alon; Akbari, Najva; Feizi, Alborz; Luo, Wei; Ozcan, Aydogan

    2013-01-01

    Based on partially-coherent digital in-line holography, we report a field-portable microscope that can render lensfree colour images over a wide field-of-view of e.g., >20 mm(2). This computational holographic microscope weighs less than 145 grams with dimensions smaller than 17×6×5 cm, making it especially suitable for field settings and point-of-care use. In this lensfree imaging design, we merged a colorization algorithm with a source shifting based multi-height pixel super-resolution technique to mitigate 'rainbow' like colour artefacts that are typical in holographic imaging. This image processing scheme is based on transforming the colour components of an RGB image into YUV colour space, which separates colour information from brightness component of an image. The resolution of our super-resolution colour microscope was characterized using a USAF test chart to confirm sub-micron spatial resolution, even for reconstructions that employ multi-height phase recovery to handle dense and connected objects. To further demonstrate the performance of this colour microscope Papanicolaou (Pap) smears were also successfully imaged. This field-portable and wide-field computational colour microscope could be useful for tele-medicine applications in resource poor settings.

  12. Accuracy of Shack-Hartmann wavefront sensor using a coherent wound fibre image bundle

    NASA Astrophysics Data System (ADS)

    Zheng, Jessica R.; Goodwin, Michael; Lawrence, Jon

    2018-03-01

    Shack-Hartmann wavefront sensors using wound fibre image bundles are desired for multi-object adaptive optical systems, positioned by Starbugs, to provide a large multiplex. The use of a large-sized wound fibre image bundle provides the flexibility to use more sub-apertures per wavefront sensor for ELTs. These compact wavefront sensors take advantage of large focal surfaces such as that of the Giant Magellan Telescope. The focus of this paper is to study the effect of wound fibre image bundle structure defects on the centroid measurement accuracy of a Shack-Hartmann wavefront sensor. We use the first-moment centroid method to estimate the centroid of a focused Gaussian beam sampled by a simulated bundle. Spot estimation accuracy with a wound fibre image bundle, and the impact of its structure on wavefront measurement accuracy statistics, are addressed. Our results show that when the measurement signal-to-noise ratio is high, the centroid measurement accuracy is dominated by the wound fibre image bundle structure, e.g. tile angle and gap spacing. For measurements with a low signal-to-noise ratio, accuracy is influenced by the read noise of the detector instead of the wound fibre image bundle structure defects. We demonstrate this both with simulation and experimentally. We provide a statistical model of the centroid and wavefront error of a wound fibre image bundle found through experiment.
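
    The first-moment centroid estimator referred to above is simply an intensity-weighted mean of pixel coordinates; the sketch below demonstrates it on a synthetic Gaussian spot (the spot size and noise level are illustrative, not the paper's simulated bundle).

    ```python
    # First-moment (centre-of-mass) centroid of a sub-aperture spot image.
    import numpy as np

    def first_moment_centroid(spot):
        """Return (x, y) centroid of a 2-D intensity image in pixel coordinates."""
        spot = np.clip(spot, 0, None)           # negative noise would bias the moment
        total = spot.sum()
        y_idx, x_idx = np.indices(spot.shape)
        return (x_idx * spot).sum() / total, (y_idx * spot).sum() / total

    # Synthetic Gaussian spot centred at (x, y) = (12.3, 8.7) with read noise
    y, x = np.indices((24, 24))
    spot = np.exp(-((x - 12.3) ** 2 + (y - 8.7) ** 2) / (2 * 2.0 ** 2))
    spot += np.random.default_rng(0).normal(0, 0.01, spot.shape)
    print(first_moment_centroid(spot))
    ```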

  13. Identification of geostationary satellites using polarization data from unresolved images

    NASA Astrophysics Data System (ADS)

    Speicher, Andy

    In order to protect critical military and commercial space assets, the United States Space Surveillance Network must have the ability to positively identify and characterize all space objects. Unfortunately, positive identification and characterization of space objects is a manual and labor intensive process today since even large telescopes cannot provide resolved images of most space objects. Since resolved images of geosynchronous satellites are not technically feasible with current technology, another method of distinguishing space objects was explored that exploits the polarization signature from unresolved images. The objective of this study was to collect and analyze visible-spectrum polarization data from unresolved images of geosynchronous satellites taken over various solar phase angles. Different collection geometries were used to evaluate the polarization contribution of solar arrays, thermal control materials, antennas, and the satellite bus as the solar phase angle changed. Since materials on space objects age due to the space environment, it was postulated that their polarization signature may change enough to allow discrimination of identical satellites launched at different times. The instrumentation used in this experiment was a United States Air Force Academy (USAFA) Department of Physics system that consists of a 20-inch Ritchey-Chretien telescope and a dual focal plane optical train fed with a polarizing beam splitter. A rigorous calibration of the system was performed that included corrections for pixel bias, dark current, and response. Additionally, the two channel polarimeter was calibrated by experimentally determining the Mueller matrix for the system and relating image intensity at the two cameras to Stokes parameters S0 and S1. After the system calibration, polarization data was collected during three nights on eight geosynchronous satellites built by various manufacturers and launched several years apart. Three pairs of the eight satellites were identical buses to determine if identical buses could be correctly differentiated. When Stokes parameters were plotted against time and solar phase angle, the data indicates that there were distinguishing features in S0 (total intensity) and S1 (linear polarization) that may lead to positive identification or classification of each satellite.
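
    Conceptually, once the two-channel polarimeter is calibrated, the Stokes parameters quoted above follow from sums and differences of the channel intensities; the sketch below shows that last step only, with a single balance gain g standing in for the full Mueller-matrix correction (an assumption, not the actual calibration).

    ```python
    # Sketch: S0 and S1 from the two channels of a polarizing-beam-splitter
    # polarimeter, after background correction and channel balancing.
    def stokes_s0_s1(i_parallel, i_perpendicular, g=1.0):
        """i_parallel, i_perpendicular: background-corrected channel intensities."""
        i_perp = g * i_perpendicular          # balance the two channels
        s0 = i_parallel + i_perp              # total intensity
        s1 = i_parallel - i_perp              # linear polarization (0/90 degrees)
        return s0, s1

    s0, s1 = stokes_s0_s1(1520.0, 1380.0)
    print(s0, s1, s1 / s0)                    # degree of linear polarization along S1
    ```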

  14. Cloud Height Maps for Hurricanes Frances and Ivan

    NASA Technical Reports Server (NTRS)

    2004-01-01

    NASA's Multi-angle Imaging SpectroRadiometer (MISR) captured these images and cloud-top height retrievals of Hurricane Frances on September 4, 2004, when the eye sat just off the coast of eastern Florida, and Hurricane Ivan on September 5th, after this cyclone had devastated Grenada and was heading toward the central and western Caribbean. Hurricane Frances made landfall in the early hours of September 5, and was downgraded to Tropical Storm status as it swept inland through the Florida panhandle and continued northward. On the heels of Frances is Hurricane Ivan, which is on record as the strongest tropical cyclone to form at such a low latitude in the Atlantic, and was the most powerful hurricane to have hit the Caribbean in nearly a decade.

    The ability of forecasters to predict the intensity and amount of rainfall associated with hurricanes still requires improvement, especially on the 24 to 48 hour timescale vital for disaster planning. To improve the operational models used to make hurricane forecasts, scientists need to better understand the multi-scale interactions at the cloud, mesoscale and synoptic scales that lead to hurricane intensification and dissipation, and the various physical processes that affect hurricane intensity and rainfall distributions. Because these uncertainties with regard to how to represent cloud processes still exist, it is vital that the model findings be evaluated against hurricane observations whenever possible. Two-dimensional maps of cloud height such as those shown here offer an unprecedented opportunity for comparing simulated cloud fields against actual hurricane observations.

    The left-hand panel in each image pair is a natural color view from MISR's nadir camera. The right-hand panels are cloud-top height retrievals produced by automated computer recognition of the distinctive spatial features between images acquired at different view angles. These results indicate that at the time that these images were acquired, clouds within Frances and Ivan had attained altitudes of 15 kilometers and 16 kilometers above sea level, respectively. The height fields pictured here are uncorrected for the effects of cloud motion. Wind-corrected heights (which have higher accuracy but sparser spatial coverage) are within about 1 kilometer of the heights shown here.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82° north and 82° south latitude. These data products were generated from a portion of the imagery acquired during Terra orbits 25081 and 25094. The panels cover an area of 380 kilometers x 924 kilometers, and utilize data from within blocks 65 to 87 within World Reference System-2 paths 14 and 222, respectively.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  15. a Comparison Study of Different Kernel Functions for Svm-Based Classification of Multi-Temporal Polarimetry SAR Data

    NASA Astrophysics Data System (ADS)

    Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.

    2014-10-01

    In this paper, a framework based on Support Vector Machines (SVM) is developed for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include linear, polynomial, and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with an RBF kernel applied to three dates of data increases the Overall Accuracy (OA) by up to 3% in comparison to a linear kernel function, and by up to 1% in comparison to a 3rd-degree polynomial kernel function.
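
    The kernel comparison can be reproduced in outline with scikit-learn, as in the hedged sketch below; the feature matrix and labels are random placeholders standing in for the multi-temporal H/A/alpha features and crop classes.

    ```python
    # Kernel comparison sketch for an SVM crop classifier (placeholder data).
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 9))             # e.g. 3 dates x 3 decomposition features
    y = rng.integers(0, 4, size=300)          # 4 crop classes (placeholder labels)

    for name, clf in [("linear", SVC(kernel="linear")),
                      ("poly-3", SVC(kernel="poly", degree=3)),
                      ("RBF",    SVC(kernel="rbf", gamma="scale"))]:
        pipe = make_pipeline(StandardScaler(), clf)
        acc = cross_val_score(pipe, X, y, cv=5).mean()
        print(f"{name:7s} overall accuracy: {acc:.3f}")
    ```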

  16. A Web-GIS Procedure Based on Satellite Multi-Spectral and Airborne LIDAR Data to Map the Road blockage Due to seismic Damages of Built-Up Urban Areas

    NASA Astrophysics Data System (ADS)

    Costanzo, Antonio; Montuori, Antonio; Silva, Juan Pablo; Silvestri, Malvina; Musacchio, Massimo; Buongiorno, Maria Fabrizia; Stramondo, Salvatore

    2016-08-01

    In this work, a web-GIS procedure to map the risk of road blockage in urban environments through the combined use of space-borne and airborne remote sensing sensors is presented. The methodology involves (1) building a geo-database through the integration of space-borne multispectral images and airborne LiDAR data products; (2) modeling building vulnerability based on the corresponding 3D geometry and construction-period information; and (3) GIS-based mapping of road closure due to seismic-related building collapses, based on the characteristic building height and the width of the road. Experimental results, gathered for the Cosenza urban area, demonstrate the benefits of both the proposed approach and the GIS-based integration of multi-platform remote sensing sensors and techniques for seismic road assessment purposes.

  17. 3D bubble reconstruction using multiple cameras and space carving method

    NASA Astrophysics Data System (ADS)

    Fu, Yucheng; Liu, Yang

    2018-07-01

    An accurate measurement of bubble shape and size has a significant value in understanding the behavior of bubbles that exist in many engineering applications. Past studies usually use one or two cameras to estimate bubble volume, surface area, among other parameters. The 3D bubble shape and rotation angle are generally not available in these studies. To overcome this challenge and obtain more detailed information of individual bubbles, a 3D imaging system consisting of four high-speed cameras is developed in this paper, and the space carving method is used to reconstruct the 3D bubble shape based on the recorded high-speed images from different view angles. The proposed method can reconstruct the bubble surface with minimal assumptions. A benchmarking test is performed in a 3 cm  ×  1 cm rectangular channel with stagnant water. The results show that the newly proposed method can measure the bubble volume with an error of less than 2% compared with the syringe reading. The conventional two-camera system has an error around 10%. The one-camera system has an error greater than 25%. The visualization of a 3D bubble rising demonstrates the wall influence on bubble rotation angle and aspect ratio. This also explains the large error that exists in the single camera measurement.
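
    The core of the space carving step can be sketched as follows: a voxel is retained only if its projection lands inside the bubble silhouette in every camera view. The projection matrices and silhouette masks are assumed inputs; this is an illustrative sketch, not the authors' implementation.

    ```python
    # Space-carving sketch: keep a voxel only if every camera sees it inside
    # the bubble silhouette.
    import numpy as np

    def carve(voxels, projections, silhouettes):
        """voxels: (N, 3) points; projections: list of 3x4 camera matrices;
        silhouettes: list of binary images (1 = inside bubble)."""
        keep = np.ones(len(voxels), dtype=bool)
        homog = np.hstack([voxels, np.ones((len(voxels), 1))])      # (N, 4)
        for P, sil in zip(projections, silhouettes):
            uvw = homog @ P.T                                       # (N, 3)
            u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
            v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
            h, w = sil.shape
            inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
            hit = np.zeros(len(voxels), dtype=bool)
            hit[inside] = sil[v[inside], u[inside]] > 0
            keep &= hit
        return voxels[keep]      # remaining voxels approximate the bubble volume
    ```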

  18. Multi-pose facial correction based on Gaussian process with combined kernel function

    NASA Astrophysics Data System (ADS)

    Shi, Shuyan; Ji, Ruirui; Zhang, Fan

    2018-04-01

    In order to improve the recognition rate across various postures, this paper proposes a method of facial correction based on a Gaussian process, which builds a nonlinear regression model between the front and the side face with a combined kernel function. Face images with horizontal angles from -45° to +45° can be properly corrected to front faces. Finally, a Support Vector Machine is employed for face recognition. Experiments on the CAS-PEAL-R1 face database show that the Gaussian process can weaken the influence of pose changes and improve the accuracy of face recognition to a certain extent.
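
    A minimal sketch of Gaussian-process regression with a combined kernel is given below, using scikit-learn; the toy side-face/front-face features, the particular kernel mix, and all names are assumptions standing in for the paper's model.

    ```python
    # Gaussian-process regression with a combined (RBF + linear + noise) kernel,
    # mapping toy "side face" features to "front face" features.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, DotProduct, WhiteKernel

    rng = np.random.default_rng(0)
    X_side = rng.normal(size=(200, 20))                  # features of side-face images
    X_front = (X_side @ rng.normal(size=(20, 20))) * 0.5 + np.tanh(X_side)  # toy target

    kernel = 1.0 * RBF(length_scale=1.0) + DotProduct() + WhiteKernel(1e-3)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X_side[:150], X_front[:150])
    pred = gp.predict(X_side[150:])
    print("mean reconstruction error:", np.abs(pred - X_front[150:]).mean())
    ```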

  19. The Retrieval of Aerosol Optical Thickness Using the MERIS Instrument

    NASA Astrophysics Data System (ADS)

    Mei, L.; Rozanov, V. V.; Vountas, M.; Burrows, J. P.; Levy, R. C.; Lotz, W.

    2015-12-01

    Retrieval of aerosol properties from satellite instruments without shortwave-IR spectral information, multi-viewing, polarization, and/or high-temporal observation ability is a challenging problem for spaceborne aerosol remote sensing. However, space-based instruments like the MEdium Resolution Imaging Spectrometer (MERIS) and its successor, the Ocean and Land Colour Instrument (OLCI), with high calibration accuracy and high spatial resolution, provide unique abilities for obtaining valuable aerosol information for a better understanding of the impact of aerosols on climate, which is still one of the largest uncertainties in evaluating global climate change. In this study, a new Aerosol Optical Thickness (AOT) retrieval algorithm (XBAER: eXtensible Bremen AErosol Retrieval) is presented. XBAER utilizes a global surface spectral library database for the determination of surface properties, while the MODIS Collection 6 aerosol type treatment is adapted for aerosol type selection. In order to take the surface Bidirectional Reflectance Distribution Function (BRDF) effect into account for the MERIS reduced-resolution (1 km) retrieval, a modified Ross-Li model is used. The AOT is determined in the algorithm using lookup tables, including polarization, created with the radiative transfer model SCIATRAN 3.4, by minimizing the difference between the atmospherically corrected surface reflectance for a given AOT and the surface reflectance calculated from the spectral library. Global comparisons with the operational MODIS C6 product, the Multi-angle Imaging SpectroRadiometer (MISR) product, and the Advanced Along-Track Scanning Radiometer (AATSR) aerosol product, and validation using the AErosol RObotic NETwork (AERONET), show promising results. The current XBAER algorithm is only valid for aerosol remote sensing over land; a similar method will be extended to ocean later.

  20. Parity-Time Symmetric Nonlocal Metasurfaces: All-Angle Negative Refraction and Volumetric Imaging

    NASA Astrophysics Data System (ADS)

    Monticone, Francesco; Valagiannopoulos, Constantinos A.; Alù, Andrea

    2016-10-01

    Lens design for focusing and imaging has been optimized through centuries of developments; however, conventional lenses, even in their most ideal realizations, still suffer from fundamental limitations, such as limits in resolution and the presence of optical aberrations, which are inherent to the laws of refraction. In addition, volume-to-volume imaging of three-dimensional regions of space is not possible with systems based on conventional refractive optics, which are inherently limited to plane-to-plane imaging. Although some of these limitations have been at least theoretically relaxed with the advent of metamaterials, several challenges still stand in the way of ideal imaging of three-dimensional regions of space. Here, we show that the concept of parity-time symmetry, combined with tailored nonlocal responses, enables overcoming some of these challenges, and we propose the design of a loss-immune, linear, transversely invariant, planarized metamaterial lens, with reduced aberrations and the potential to realize volume-to-volume imaging.

  1. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing

    PubMed Central

    Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie

    2016-01-01

    Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the subsequent image-based feature extraction. Then, an emerging approach in the field of image processing for feature extraction, speeded-up robust features (SURF), is employed to automatically extract fault features from the transformed bi-spectrum contour map and form a high-dimensional feature vector. To reduce the dimensionality of the feature vector, thus highlighting the main fault features and reducing subsequent computing resources, t-Distributed Stochastic Neighbor Embedding is adopted. Finally, a probabilistic neural network is introduced for fault identification. Two typical types of rotating machinery, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery. PMID:27711246

  2. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing.

    PubMed

    Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie

    2016-01-01

    Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery have gained wide attention for their significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the subsequent image-based feature extraction. Then, an emerging approach in the field of image processing for feature extraction, speeded-up robust features (SURF), is employed to automatically extract fault features from the transformed bi-spectrum contour map and form a high-dimensional feature vector. To reduce the dimensionality of the feature vector, thus highlighting the main fault features and reducing subsequent computing resources, t-Distributed Stochastic Neighbor Embedding is adopted. Finally, a probabilistic neural network is introduced for fault identification. Two typical types of rotating machinery, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed image-processing-based method achieves high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery.

  3. Sharpening Ejecta Patterns: Investigating Spectral Fidelity After Controlled Intensity-Hue-Saturation Image Fusion of LROC Images of Fresh Craters

    NASA Astrophysics Data System (ADS)

    Awumah, A.; Mahanti, P.; Robinson, M. S.

    2017-12-01

    Image fusion is often used in Earth-based remote sensing applications to merge spatial details from a high-resolution panchromatic (Pan) image with the color information from a lower-resolution multi-spectral (MS) image, resulting in a high-resolution multi-spectral image (HRMS). Previously, the performance of six well-known image fusion methods was compared using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images (1). Results showed that the Intensity-Hue-Saturation (IHS) method provided the best spatial performance but deteriorated the spectral content. In general, there was a trade-off between spatial enhancement and spectral fidelity in the fusion process; the more spatial detail from the Pan fused with the MS image, the more spectrally distorted the final HRMS. In this work, we control the amount of spatial detail fused (from the LROC NAC images to WAC images) using a controlled IHS method (2), to investigate the spatial variation in spectral distortion on fresh crater ejecta. In the controlled IHS method (2), the percentage of the Pan component merged with the MS is varied. The fraction of spatial detail from the Pan that is used is determined by a control parameter whose value may be varied between 1 (no Pan utilized) and infinity (entire Pan utilized). An HRMS color composite image (red=415nm, green=321/415nm, blue=321/360nm (3)) was used to assess performance (via visual inspection and metric-based evaluations) at each tested value of the control parameter (1 to 10, after which spectral distortion saturates, in 0.01 increments) within three regions: crater interiors, ejecta blankets, and the background material surrounding the craters. Increasing the control parameter introduced increased spatial sharpness and spectral distortion in all regions, but to varying degrees. Crater interiors suffered the most color distortion, while ejecta experienced less color distortion. The controlled IHS method is therefore desirable for resolution enhancement of fresh crater ejecta; larger values of the control parameter may be used to sharpen MS images of ejecta patterns with less impact on color distortion than in the uncontrolled IHS fusion process. References: (1) Prasun et al. (2016) ISPRS. (2) Choi, Myungjin (2006) IEEE. (3) Denevi et al. (2014) JGR.
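
    A hedged sketch of a controlled IHS-style injection is shown below: the Pan-minus-intensity detail is scaled by a factor derived from the control parameter t (no injection at t = 1, full injection as t grows). The exact weighting used by Choi (2006) may differ; the array names are assumptions.

    ```python
    # Controlled IHS-style pansharpening sketch: inject a scaled fraction of the
    # Pan-minus-intensity detail into each MS band.
    import numpy as np

    def controlled_ihs(ms, pan, t):
        """ms: (rows, cols, bands) MS image upsampled to the Pan grid; pan: (rows, cols)."""
        intensity = ms.mean(axis=2)                  # simple IHS intensity component
        alpha = 1.0 - 1.0 / float(t)                 # 0 at t = 1, approaches 1 as t grows
        detail = alpha * (pan - intensity)
        return np.clip(ms + detail[:, :, None], 0.0, None)
    ```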

  4. WE-G-BRD-07: Automated MR Image Standardization and Auto-Contouring Strategy for MRI-Based Adaptive Brachytherapy for Cervix Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saleh, H Al; Erickson, B; Paulson, E

    Purpose: MRI-based adaptive brachytherapy (ABT) is an emerging treatment modality for patients with gynecological tumors. However, MR image intensity non-uniformities (IINU) can vary from fraction to fraction, complicating image interpretation and auto-contouring accuracy. We demonstrate here an automated MR image standardization and auto-contouring strategy for MRI-based ABT of cervix cancer. Methods: MR image standardization consisted of: 1) IINU correction using the MNI N3 algorithm, 2) noise filtering using anisotropic diffusion, and 3) signal intensity normalization using the volumetric median. This post-processing chain was implemented as a series of custom Matlab and Java extensions in MIM (v6.4.5, MIM Software) and was applied to 3D T2 SPACE images of six patients undergoing MRI-based ABT at 3T. Coefficients of variation (CV=σ/µ) were calculated for both original and standardized images and compared using Mann-Whitney tests. Patient-specific cumulative MR atlases of bladder, rectum, and sigmoid contours were constructed throughout ABT, using original and standardized MR images from all previous ABT fractions. Auto-contouring was performed in MIM two ways: 1) best-match of one atlas image to the daily MR image, 2) multi-match of all previous fraction atlas images to the daily MR image. Dice’s Similarity Coefficients (DSCs) were calculated for auto-generated contours relative to reference contours for both original and standardized MR images and compared using Mann-Whitney tests. Results: Significant improvements in CV were detected following MR image standardization (p=0.0043), demonstrating an improvement in MR image uniformity. DSCs consistently increased for auto-contoured bladder, rectum, and sigmoid following MR image standardization, with the highest DSCs detected when the combination of MR image standardization and multi-match cumulative atlas-based auto-contouring was utilized. Conclusion: MR image standardization significantly improves MR image uniformity. The combination of MR image standardization and multi-match cumulative atlas-based auto-contouring produced the highest DSCs and is a promising strategy for MRI-based ABT for cervix cancer.
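
    A heavily simplified illustration of the three-step standardization chain is sketched below using generic numpy/scipy surrogates: a smooth-field division stands in for the MNI N3 bias correction, Gaussian smoothing stands in for anisotropic diffusion, and the volume is divided by its median. This is not the Matlab/Java MIM implementation described above.

    ```python
    # Simplified standardization sketch: bias flattening, smoothing, and
    # volumetric-median normalization (surrogates for N3, anisotropic
    # diffusion, and the normalization step described in the abstract).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def standardize(volume, mask, bias_sigma=20.0, smooth_sigma=0.8):
        """volume: 3-D MR array; mask: boolean array of voxels inside the body."""
        vol = volume.astype(float)
        bias = gaussian_filter(vol, bias_sigma) + 1e-6      # crude slow IINU field estimate
        corrected = vol / (bias / bias[mask].mean())        # flatten the field, keep scale
        filtered = gaussian_filter(corrected, smooth_sigma) # noise suppression
        return filtered / np.median(filtered[mask])         # volumetric-median normalization
    ```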

  5. An efficient multi-resolution GA approach to dental image alignment

    NASA Astrophysics Data System (ADS)

    Nassar, Diaa Eldin; Ogirala, Mythili; Adjeroh, Donald; Ammar, Hany

    2006-02-01

    Automating the process of postmortem identification of individuals using dental records is receiving increased attention in forensic science, especially with the large volume of victims encountered in mass disasters. Dental radiograph alignment is a key step required for automating the dental identification process. In this paper, we address the problem of dental radiograph alignment using a Multi-Resolution Genetic Algorithm (MR-GA) approach. We use the location and orientation information of edge points as features; we assume that affine transformations suffice to restore geometric discrepancies between two images of a tooth; we efficiently search the 6D space of affine parameters using a GA progressively across multi-resolution image versions; and we use a Hausdorff distance measure to compute the similarity between a reference tooth and a query tooth subject to a possible alignment transform. Testing results based on 52 tooth-pair images suggest that our algorithm converges to reasonable solutions in more than 85% of the test cases, with most of the error in the remaining cases due to excessive misalignments.
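
    The fitness evaluation at the heart of the search can be sketched as follows: apply a candidate affine transform to the query tooth's edge points and score it with the symmetric Hausdorff distance to the reference edge points, lower being better. The GA itself and the multi-resolution schedule are omitted, and the parameter layout is an assumption.

    ```python
    # Fitness sketch for the affine-alignment search: Hausdorff distance between
    # transformed query edge points and reference edge points.
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def fitness(params, query_edges, reference_edges):
        """params = (a11, a12, a21, a22, tx, ty) for a 2-D affine transform;
        query_edges, reference_edges: (N, 2) arrays of edge-point coordinates."""
        a11, a12, a21, a22, tx, ty = params
        A = np.array([[a11, a12], [a21, a22]])
        transformed = query_edges @ A.T + np.array([tx, ty])
        # symmetric Hausdorff distance between the two edge-point sets
        return max(directed_hausdorff(transformed, reference_edges)[0],
                   directed_hausdorff(reference_edges, transformed)[0])
    ```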

  6. Space Radar Image of Rondonia Rain Cell

    NASA Image and Video Library

    1999-04-15

    This multi-frequency space radar image of a tropical rainforest in western Brazil shows rapidly changing land use patterns and it also demonstrates the capability of the different radar frequencies to detect and penetrate heavy rainstorms.

  7. Condensing Heat Exchanger with Hydrophilic Antimicrobial Coating

    NASA Technical Reports Server (NTRS)

    Thomas, Christopher M. (Inventor); Ma, Yonghui (Inventor)

    2014-01-01

    A multi-layer antimicrobial hydrophilic coating is applied to a substrate of anodized aluminum, although other materials may form the substrate. A silver layer is sputtered onto a thoroughly clean anodized surface of the aluminum to about 400 nm thickness. A layer of crosslinked, silicon-based macromolecular structure about 10 nm thickness overlies the silver layer, and the outermost surface of the layer of crosslinked, silicon-based macromolecular structure is hydroxide terminated to produce a hydrophilic surface with a water drop contact angle of less than 10.degree.. The coated substrate may be one of multiple fins in a condensing heat exchanger for use in the microgravity of space, which has narrow channels defined between angled fins such that the surface tension of condensed water moves water by capillary flow to a central location where it is pumped to storage. The antimicrobial coating prevents obstruction of the capillary passages.

  8. Appalachia Snow

    Atmospheric Science Data Center

    2014-05-15

    ... 7, 2002. The Appalachians are bounded by the Blue Ridge mountain belt along the east and the Appalachian Plateau along the west. ... tip, near the Great Smoky Mountains (the dark-colored range at lower right). The Multi-angle Imaging SpectroRadiometer observes ...

  9. South Africa

    Atmospheric Science Data Center

    2013-04-16

    ... blooms of phytoplankton caused a rapid reduction in the oxygen concentration of nearshore waters. The lobsters (or crayfish, as they ... known locally) moved toward the breaking surf in search of oxygen, but were stranded by the retreating tide. The Multi-angle Imaging ...

  10. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    PubMed Central

    Zhang, Jeff L; Morey, A Michael; Kadrmas, Dan J

    2016-01-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg–Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models. PMID:26788888
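
    The separable-parameter-space idea can be illustrated with a toy variable-projection fit: for each candidate pair of nonlinear rate constants, the linear amplitudes are solved in closed form by least squares, so an exhaustive search only needs to cover the nonlinear dimensions. The two-exponential model below is a stand-in for the full multi-tracer compartment model, not the paper's formulation.

    ```python
    # Toy variable-projection fit: grid over nonlinear rates, closed-form
    # least-squares solve for the linear amplitudes at each grid point.
    import numpy as np

    t = np.linspace(0.1, 60.0, 120)
    true = 5.0 * np.exp(-0.08 * t) + 2.0 * np.exp(-0.5 * t)
    y = true + np.random.default_rng(0).normal(0, 0.05, t.size)

    best = (np.inf, None, None)
    for k1 in np.linspace(0.01, 0.2, 40):            # nonlinear grid, tracer 1
        for k2 in np.linspace(0.2, 1.0, 40):         # nonlinear grid, tracer 2
            basis = np.column_stack([np.exp(-k1 * t), np.exp(-k2 * t)])
            amps, *_ = np.linalg.lstsq(basis, y, rcond=None)   # linear sub-problem
            resid = np.sum((y - basis @ amps) ** 2)
            if resid < best[0]:
                best = (resid, (k1, k2), amps)
    print("rates:", best[1], "amplitudes:", best[2])
    ```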

  11. Imaging Modalities Relevant to Intracranial Pressure Assessment in Astronauts: A Case-Based Discussion

    NASA Technical Reports Server (NTRS)

    Sargsyan, Ashot E.; Kramer, Larry A.; Hamilton, Douglas R.; Hamilton, Douglas R.; Fogarty, Jennifer; Polk, J. D.

    2010-01-01

    Introduction: Intracranial pressure (ICP) elevation has been inferred or documented in a number of space crewmembers. Recent advances in noninvasive imaging technology offer new possibilities for ICP assessment. Most International Space Station (ISS) partner agencies have adopted a battery of occupational health monitoring tests including magnetic resonance imaging (MRI) pre- and postflight, and high-resolution sonography of the orbital structures in all mission phases, including during flight. We hypothesize that joint consideration of data from the two techniques has the potential to improve the quality and continuity of crewmember monitoring and care. Methods: Specially designed MRI and sonographic protocols were used to image the eyes and optic nerves (ON), including the meningeal sheaths. Specific crewmembers' multi-modality imaging data were analyzed to identify points of mutual validation as well as unique features of a complementary nature. Results and Conclusion: Magnetic resonance imaging (MRI) and high-resolution sonography are both tomographic methods; however, images obtained by the two modalities are based on different physical phenomena and use different acquisition principles. Consideration of the images acquired by these two modalities allows cross-validating findings related to the volume and fluid content of the ON subarachnoid space, the shape of the globe, and other anatomical features of the orbit. Each of the imaging modalities also has unique advantages, making them complementary techniques.

  12. Results from an experiment that collected visible-light polarization data using unresolved imagery for classification of geosynchronous satellites

    NASA Astrophysics Data System (ADS)

    Speicher, Andy; Matin, Mohammad; Tippets, Roger; Chun, Francis; Strong, David

    2015-05-01

    In order to protect critical military and commercial space assets, the United States Space Surveillance Network must have the ability to positively identify and characterize all space objects. Unfortunately, positive identification and characterization of space objects is a manual and labor intensive process today since even large telescopes cannot provide resolved images of most space objects. The objective of this study was to collect and analyze visible-spectrum polarization data from unresolved images of geosynchronous satellites taken over various solar phase angles. Different collection geometries were used to evaluate the polarization contribution of solar arrays, thermal control materials, antennas, and the satellite bus as the solar phase angle changed. Since materials on space objects age due to the space environment, their polarization signature may change enough to allow discrimination of identical satellites launched at different times. Preliminary data suggests this optical signature may lead to positive identification or classification of each satellite by an automated process on a shorter timeline. The instrumentation used in this experiment was a United States Air Force Academy (USAFA) Department of Physics system that consists of a 20-inch Ritchey-Chrétien telescope and a dual focal plane optical train fed with a polarizing beam splitter. Following a rigorous calibration, polarization data was collected during two nights on eight geosynchronous satellites built by various manufacturers and launched several years apart. When Stokes parameters were plotted against time and solar phase angle, the data indicates that a polarization signature from unresolved images may have promise in classifying specific satellites.
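    For context, the reduction from measured intensities to linear Stokes parameters can be sketched with the textbook four-orientation formulas; the USAFA system's actual beam-splitter geometry and calibration are not modeled here.

```python
# Minimal sketch: linear Stokes parameters and degree/angle of linear
# polarization from intensities measured at four polarizer orientations
# (0, 45, 90, 135 degrees). This is the standard reduction; the instrument's
# actual calibration pipeline is not reproduced.
import numpy as np

def linear_stokes(I0, I45, I90, I135):
    S0 = I0 + I90                      # total intensity
    S1 = I0 - I90                      # horizontal vs. vertical preference
    S2 = I45 - I135                    # +45 vs. -45 degree preference
    dolp = np.hypot(S1, S2) / S0       # degree of linear polarization
    aop = 0.5 * np.arctan2(S2, S1)     # angle of linear polarization (rad)
    return S0, S1, S2, dolp, aop

# Hypothetical photometry of one unresolved satellite at one epoch.
print(linear_stokes(1.00, 0.55, 0.40, 0.85))
```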

  13. Atmospheric correction for remote sensing image based on multi-spectral information

    NASA Astrophysics Data System (ADS)

    Wang, Yu; He, Hongyan; Tan, Wei; Qi, Wenwen

    2018-03-01

    The light collected by spaceborne remote sensors must transit through the Earth's atmosphere. All satellite images are affected at some level by lightwave scattering and absorption from aerosols, water vapor and particulates in the atmosphere. For generating high-quality scientific data, atmospheric correction is required to remove atmospheric effects and to convert digital number (DN) values to surface reflectance (SR). Every optical satellite in orbit observes the Earth through the same atmosphere, but each satellite image is impacted differently because atmospheric conditions are constantly changing. The detailed physics-based radiative transfer model 6SV requires key ancillary information about the atmospheric conditions at acquisition time. This paper investigates the simultaneous retrieval of atmospheric radiation parameters from the multi-spectral information itself, in order to improve estimates of surface reflectance through physics-based atmospheric correction. Ancillary information on the aerosol optical depth (AOD) and total water vapor (TWV), derived from the multi-spectral information based on specific spectral properties, was used for the 6SV model. The experimentation was carried out on images from Sentinel-2, which carries a Multispectral Instrument (MSI) recording in 13 spectral bands covering a wide range of wavelengths from 440 up to 2200 nm. The results suggest that per-pixel atmospheric correction through the 6SV model, integrating AOD and TWV derived from multispectral information, is better suited for accurate analysis of satellite images and quantitative remote sensing applications.
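    As an illustration of the per-pixel correction step, 6S/6SV runs are commonly summarized by three per-band coefficients; the sketch below uses that convention with placeholder values, and the coefficient names and calibration constants are assumptions rather than values from this study.

```python
# Sketch of the per-pixel correction step, assuming the 6SV run has been
# summarized by the usual three coefficients (here named xa, xb, xc) for a
# given band, geometry, AOD and water vapor. Coefficient values below are
# placeholders, not outputs of an actual 6SV run.
import numpy as np

def dn_to_surface_reflectance(dn, gain, offset, xa, xb, xc):
    radiance = gain * dn + offset          # DN -> TOA radiance (sensor calibration)
    y = xa * radiance - xb                 # atmospherically corrected intermediate
    return y / (1.0 + xc * y)              # surface reflectance

dn = np.array([[1200, 1350], [1100, 1500]], dtype=float)
print(dn_to_surface_reflectance(dn, gain=0.01, offset=0.0,
                                xa=0.0025, xb=0.12, xc=0.18))
```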

  14. A signature dissimilarity measure for trabecular bone texture in knee radiographs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woloszynski, T.; Podsiadlo, P.; Stachowiak, G. W.

    Purpose: The purpose of this study is to develop a dissimilarity measure for the classification of trabecular bone (TB) texture in knee radiographs. Problems associated with the traditional extraction and selection of texture features and with the invariance to imaging conditions such as image size, anisotropy, noise, blur, exposure, magnification, and projection angle were addressed. Methods: In the method developed, called a signature dissimilarity measure (SDM), a sum of earth mover's distances calculated for roughness and orientation signatures is used to quantify dissimilarities between textures. Scale-space theory was used to ensure scale and rotation invariance. The effects of image size, anisotropy, noise, and blur on the SDM developed were studied using computer generated fractal texture images. The invariance of the measure to image exposure, magnification, and projection angle was studied using x-ray images of human tibia head. For the studies, Mann-Whitney tests with a significance level of 0.01 were used. A comparison study between the performance of the SDM-based classification system and two other systems in the classification of Brodatz textures and the detection of knee osteoarthritis (OA) was conducted. The other systems are based on weighted neighbor distance using compound hierarchy of algorithms representing morphology (WND-CHARM) and local binary patterns (LBP). Results: Results obtained indicate that the SDM developed is invariant to image exposure (2.5-30 mA s), magnification (×1.00-×1.35), noise associated with film graininess and quantum mottle (<25%), blur generated by a sharp film screen, and image size (>64×64 pixels). However, the measure is sensitive to changes in projection angle (>5°), image anisotropy (>30°), and blur generated by a regular film screen. For the classification of Brodatz textures, the SDM-based system produced comparable results to the LBP system. For the detection of knee OA, the SDM-based system achieved 78.8% classification accuracy and outperformed the WND-CHARM system (64.2%). Conclusions: The SDM is well suited for the classification of TB texture images in knee OA detection and may be useful for the texture classification of medical images in general.
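    A minimal sketch of the SDM idea, assuming the roughness and orientation signatures are 1-D histograms and using scipy's 1-D Wasserstein distance as the earth mover's distance; the paper's exact signature construction is not reproduced.

```python
# Sketch of the SDM idea: sum of earth mover's distances between roughness
# and orientation signatures of two textures. Signatures are assumed here to
# be 1-D histograms over scale (roughness) and angle (orientation).
import numpy as np
from scipy.stats import wasserstein_distance

def sdm(sig_a, sig_b):
    """sig_* = (roughness_hist, orientation_hist), each a 1-D weight array
    defined over fixed bin centers."""
    (ra, oa), (rb, ob) = sig_a, sig_b
    r_bins = np.arange(ra.size)
    o_bins = np.arange(oa.size)
    d_rough = wasserstein_distance(r_bins, r_bins, ra, rb)
    d_orient = wasserstein_distance(o_bins, o_bins, oa, ob)
    return d_rough + d_orient

a = (np.array([0.1, 0.4, 0.3, 0.2]), np.array([0.25, 0.25, 0.25, 0.25]))
b = (np.array([0.3, 0.3, 0.2, 0.2]), np.array([0.10, 0.20, 0.30, 0.40]))
print(sdm(a, b))
```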

  15. Independent polarization and multi-band THz absorber based on Jerusalem cross

    NASA Astrophysics Data System (ADS)

    Arezoomand, Afsaneh Saee; Zarrabi, Ferdows B.; Heydari, Samaneh; Gandji, Navid P.

    2015-10-01

    In this paper, we present the design and simulation of single- and multi-band perfect metamaterial absorbers (MA) in the THz region based on a Jerusalem cross (JC) and metamaterial loads in the unit cells. The structures consist of dual metallic layers allowing near-perfect absorption, with absorption peaks of more than 99%. In this novel design, four different shapes of Jerusalem cross are presented, and by adding L-, U- and W-shaped loads to the first structure we obtain a dual-band absorber. In addition, by proper placement of these loads, we are able to control the second absorption resonance at 0.9, 0.7 and 0.85 THz, respectively. On the other hand, the first resonance remains nearly stable between 0.53 and 0.58 THz. The proposed absorber is insensitive to the polarization angle over a broad range, and the modeled surface currents confirm this behavior for the prototype MA. The LC resonances of the Jerusalem cross and the modified structures are extracted from an equivalent circuit. As a result, the proposed MA is useful for THz medical imaging and communication systems, and the dual-band absorber has applications in many scientific and technological areas.

  16. Angle- and polarization-insensitive, small area, subtractive color filters via a-Si nanopillar arrays (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Fountaine, Katherine T.; Ito, Mikinori; Pala, Ragip; Atwater, Harry A.

    2016-09-01

    Spectrally-selective nanophotonic and plasmonic structures enjoy widespread interest for application as color filters in imaging devices, due to their potential advantages over traditional organic dyes and pigments. Organic dyes are straightforward to implement with predictable optical performance at large pixel size, but suffer from inherent optical cross-talk and stability (UV, thermal, humidity) issues and also exhibit increasingly unpredictable performance as pixel size approaches dye molecule size. Nanophotonic and plasmonic color filters are more robust, but often have polarization- and angle-dependent optical response and/or require large-range periodicity. Herein, we report on design and fabrication of polarization- and angle-insensitive CYM color filters based on a-Si nanopillar arrays as small as 1 μm², supported by experiment, simulation, and analytic theory. Analytic waveguide and Mie theories explain the color filtering mechanism (efficient coupling into, and interband transition-mediated attenuation of, waveguide-like modes) and also guided the FDTD simulation-based optimization of nanopillar array dimensions. The designed a-Si nanopillar arrays were fabricated using e-beam lithography and reactive ion etching; and were subsequently optically characterized, revealing the predicted polarization- and angle-insensitive (±40°) subtractive filter responses. Cyan, yellow, and magenta color filters have each been demonstrated. The effects of nanopillar array size and inter-array spacing were investigated both experimentally and theoretically to probe the issues of ever-shrinking pixel sizes and cross-talk, respectively. Results demonstrate that these nanopillar arrays maintain their performance down to 1 μm² pixel sizes with no inter-array spacing. These concepts and results along with color-processed images taken with a fabricated color filter array will be presented and discussed.

  17. Narrow Angle movie

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This brief three-frame movie of the Moon was made from three Cassini narrow-angle images as the spacecraft passed by the Moon on the way to its closest approach with Earth on August 17, 1999. The purpose of this particular set of images was to calibrate the spectral response of the narrow-angle camera and to test its 'on-chip summing mode' data compression technique in flight. From left to right, they show the Moon in the green, blue and ultraviolet regions of the spectrum in 40, 60 and 80 millisecond exposures, respectively. All three images have been scaled so that the brightness of Crisium basin, the dark circular region in the upper right, is the same in each image. The spatial scale in the blue and ultraviolet images is 1.4 miles per pixel (2.3 kilometers). The original scale in the green image (which was captured in the usual manner and then reduced in size by 2x2 pixel summing within the camera system) was 2.8 miles per pixel (4.6 kilometers). It has been enlarged for display to the same scale as the other two. The imaging data were processed and released by the Cassini Imaging Central Laboratory for Operations (CICLOPS) at the University of Arizona's Lunar and Planetary Laboratory, Tucson, AZ.

    Photo Credit: NASA/JPL/Cassini Imaging Team/University of Arizona

    Cassini, launched in 1997, is a joint mission of NASA, the European Space Agency and Italian Space Agency. The mission is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Space Science, Washington DC. JPL is a division of the California Institute of Technology, Pasadena, CA.

  18. Multi-atlas learner fusion: An efficient segmentation approach for large-scale data.

    PubMed

    Asman, Andrew J; Huo, Yuankai; Plassard, Andrew J; Landman, Bennett A

    2015-12-01

    We propose multi-atlas learner fusion (MLF), a framework for rapidly and accurately replicating the highly accurate, yet computationally expensive, multi-atlas segmentation framework based on fusing local learners. In the largest whole-brain multi-atlas study yet reported, multi-atlas segmentations are estimated for a training set of 3464 MR brain images. Using these multi-atlas estimates we (1) estimate a low-dimensional representation for selecting locally appropriate example images, and (2) build AdaBoost learners that map a weak initial segmentation to the multi-atlas segmentation result. Thus, to segment a new target image we project the image into the low-dimensional space, construct a weak initial segmentation, and fuse the trained, locally selected, learners. The MLF framework cuts the runtime on a modern computer from 36 h down to 3-8 min - a 270× speedup - by completely bypassing the need for deformable atlas-target registrations. Additionally, we (1) describe a technique for optimizing the weak initial segmentation and the AdaBoost learning parameters, (2) quantify the ability to replicate the multi-atlas result with mean accuracies approaching the multi-atlas intra-subject reproducibility on a testing set of 380 images, (3) demonstrate significant increases in the reproducibility of intra-subject segmentations when compared to a state-of-the-art multi-atlas framework on a separate reproducibility dataset, (4) show that under the MLF framework the large-scale data model significantly improves the segmentation over the small-scale model, and (5) indicate that the MLF framework has performance comparable to state-of-the-art multi-atlas segmentation algorithms without using non-local information. Copyright © 2015 Elsevier B.V. All rights reserved.
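    A toy sketch of the learner-fusion step, assuming per-voxel features from a weak initial segmentation and synthetic labels standing in for multi-atlas estimates; the low-dimensional atlas selection and locally trained learners of the actual MLF framework are not shown.

```python
# Toy sketch of the learner-fusion idea: an AdaBoost classifier maps features
# derived from a weak initial segmentation (plus image intensity) to the
# multi-atlas label. Data here are synthetic; the actual MLF framework adds
# low-dimensional atlas selection and local learners, which are not shown.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
n_voxels = 5000
intensity = rng.normal(0, 1, n_voxels)
weak_prob = np.clip(0.6 * (intensity > 0) + rng.normal(0.2, 0.2, n_voxels), 0, 1)
X = np.column_stack([intensity, weak_prob])          # per-voxel features
y_multi_atlas = (intensity + 0.3 * rng.normal(size=n_voxels) > 0).astype(int)

clf = AdaBoostClassifier(n_estimators=50).fit(X, y_multi_atlas)
# At test time: project the new image, build the weak segmentation, then
# predict an approximate multi-atlas label without any deformable registrations.
print("training agreement with multi-atlas labels:",
      clf.score(X, y_multi_atlas))
```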

  19. A 2D multi-term time and space fractional Bloch-Torrey model based on bilinear rectangular finite elements

    NASA Astrophysics Data System (ADS)

    Qin, Shanlin; Liu, Fawang; Turner, Ian W.

    2018-03-01

    The consideration of diffusion processes in magnetic resonance imaging (MRI) signal attenuation is classically described by the Bloch-Torrey equation. However, many recent works highlight the distinct deviation in MRI signal decay due to anomalous diffusion, which motivates the fractional order generalization of the Bloch-Torrey equation. In this work, we study the two-dimensional multi-term time and space fractional diffusion equation generalized from the time and space fractional Bloch-Torrey equation. By using the Galerkin finite element method with a structured mesh consisting of rectangular elements to discretize in space and the L1 approximation of the Caputo fractional derivative in time, a fully discrete numerical scheme is derived. A rigorous analysis of stability and error estimation is provided. Numerical experiments in the square and L-shaped domains are performed to give an insight into the efficiency and reliability of our method. Then the scheme is applied to solve the multi-term time and space fractional Bloch-Torrey equation, which shows that the extra time derivative terms impact the relaxation process.
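    For reference, the L1 approximation referred to above is, in its standard uniform-grid form (notation assumed; the paper's discretization details may differ):

```latex
% Standard L1 approximation of the Caputo derivative of order 0 < \alpha < 1
% on a uniform grid t_n = n\tau.
\[
{}^{C}D_t^{\alpha} u(t_n) \;\approx\;
\frac{\tau^{-\alpha}}{\Gamma(2-\alpha)}
\sum_{k=0}^{n-1} b_k \bigl[u(t_{n-k}) - u(t_{n-k-1})\bigr],
\qquad
b_k = (k+1)^{1-\alpha} - k^{1-\alpha},
\]
% with truncation error of order O(\tau^{2-\alpha}).
```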

  20. USPIO-enhanced 3D-cine self-gated cardiac MRI based on a stack-of-stars golden angle short echo time sequence: Application on mice with acute myocardial infarction.

    PubMed

    Trotier, Aurélien J; Castets, Charles R; Lefrançois, William; Ribot, Emeline J; Franconi, Jean-Michel; Thiaudière, Eric; Miraux, Sylvain

    2016-08-01

    To develop and assess a 3D-cine self-gated method for cardiac imaging of murine models. A 3D stack-of-stars (SOS) short echo time (STE) sequence with a navigator echo was performed at 7T on healthy mice (n = 4) and mice with acute myocardial infarction (MI) (n = 4) injected with ultrasmall superparamagnetic iron oxide (USPIO) nanoparticles. In all, 402 spokes were acquired per stack with the incremental or the golden angle method using an angle increment of (360/402)° or 222.48°, respectively. A cylindrical k-space was filled and repeated with a maximum number of repetitions (NR) of 10. 3D cine cardiac images at 156 μm resolution were reconstructed retrospectively and compared for the two methods in terms of contrast-to-noise ratio (CNR). The golden angle images were also reconstructed with NR = 10, 6, and 3, to assess cardiac functional parameters (ejection fraction, EF) on both animal models. The combination of 3D SOS-STE and USPIO injection allowed us to optimize the identification of cardiac peaks on the navigator signal and generate high CNR between blood and myocardium (15.3 ± 1.0). The golden angle method resulted in a more homogeneous distribution of the spokes inside a stack (P < 0.05), enabling the acquisition time to be reduced to 15 minutes. EF was significantly different between healthy and MI mice (P < 0.05). The method proposed here showed that 3D-cine images could be obtained without electrocardiogram or respiratory gating in mice. It allows precise measurement of cardiac functional parameters even on MI mice. J. Magn. Reson. Imaging 2016;44:355-365. © 2016 Wiley Periodicals, Inc.
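    The difference between the two spoke orderings can be sketched numerically; the uniformity metric below (largest angular gap after a truncated acquisition) is our own illustration, not the statistic used in the study.

```python
# Sketch comparing azimuthal spoke coverage for the two orderings described
# above: incremental increments of (360/402) degrees versus the golden-angle
# increment of 222.48 degrees, for 402 spokes per stack.
import numpy as np

n_spokes = 402
increments = {"incremental": 360.0 / n_spokes, "golden": 222.48}

def largest_gap(n_acquired, step_deg):
    """Largest gap (deg) between adjacent spoke directions, folding spokes
    onto 0-180 degrees since a spoke and its opposite sample the same line."""
    ang = np.sort((np.arange(n_acquired) * step_deg) % 180.0)
    gaps = np.diff(np.concatenate([ang, [ang[0] + 180.0]]))
    return gaps.max()

for name, step in increments.items():
    # After only a fraction of the spokes (e.g. a shortened scan), golden-angle
    # ordering already covers the angular range far more evenly.
    print(name, "largest gap after 100 spokes: %.2f deg" % largest_gap(100, step))
```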

  1. The Panchromatic STARBurst IRregular Dwarf Survey (STARBIRDS): Observations and Data Archive

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen B. W.; Mitchell, Noah P.; Skillman, Evan D.

    2015-06-01

    Understanding star formation in resolved low mass systems requires the integration of information obtained from observations at different wavelengths. We have combined new and archival multi-wavelength observations on a set of 20 nearby starburst and post-starburst dwarf galaxies to create a data archive of calibrated, homogeneously reduced images. Named the panchromatic “STARBurst IRregular Dwarf Survey” archive, the data are publicly accessible through the Mikulski Archive for Space Telescopes. This first release of the archive includes images from the Galaxy Evolution Explorer Telescope (GALEX), the Hubble Space Telescope (HST), and the Spitzer Space Telescope (Spitzer) Multiband Imaging Photometer instrument. The data sets include flux-calibrated, background-subtracted images that are registered to the same world coordinate system. Additionally, a set of images is available in which all images are cropped to match the HST field of view. The GALEX and Spitzer images are available with foreground and background contamination masked. Larger GALEX images extending to 4 times the optical extent of the galaxies are also available. Finally, HST images convolved with a 5″ point spread function and rebinned to the larger pixel scale of the GALEX and Spitzer 24 μm images are provided. Future additions are planned that will include data at other wavelengths such as Spitzer IRAC, ground-based Hα, Chandra X-ray, and Green Bank Telescope H i imaging. Based on observations made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA), and the Canadian Astronomy Data Centre (CADC/NRC/CSA).

  2. Geolocation error tracking of ZY-3 three line cameras

    NASA Astrophysics Data System (ADS)

    Pan, Hongbo

    2017-01-01

    The high-accuracy geolocation of high-resolution satellite images (HRSIs) is a key issue for mapping and integrating multi-temporal, multi-sensor images. In this manuscript, we propose a new geometric frame for analysing the geometric error of a stereo HRSI, in which the geolocation error can be divided into three parts: the epipolar direction, cross base direction, and height direction. With this frame, we proved that the height error of three line cameras (TLCs) is independent of nadir images, and that the terrain effect has a limited impact on the geolocation errors. For ZY-3 error sources, the drift error in both the pitch and roll angle and its influence on the geolocation accuracy are analysed. Epipolar and common tie-point constraints are proposed to study the bundle adjustment of HRSIs. Epipolar constraints explain that the relative orientation can reduce the number of compensation parameters in the cross base direction and have a limited impact on the height accuracy. The common tie points adjust the pitch-angle errors to be consistent with each other for TLCs. Therefore, free-net bundle adjustment of a single strip cannot significantly improve the geolocation accuracy. Furthermore, the epipolar and common tie-point constraints cause the error to propagate into the adjacent strip when multiple strips are involved in the bundle adjustment, which results in the same attitude uncertainty throughout the whole block. Two adjacent strips (Orbit 305 and Orbit 381, covering 7 and 12 standard scenes, respectively) and 308 ground control points (GCPs) were used for the experiments. The experiments validate the aforementioned theory. The planimetric and height root mean square errors were 2.09 and 1.28 m, respectively, when two GCPs were placed at the beginning and end of the block.

  3. A Population-Based Assessment of the Agreement Between Grading of Goniophotographic Images and Gonioscopy in the Chinese-American Eye Study (CHES)

    PubMed Central

    Murakami, Yohko; Wang, Dandan; Burkemper, Bruce; Lin, Shan C.; Varma, Rohit

    2016-01-01

    Purpose To compare grading of goniophotographic images and gonioscopy in assessing the iridocorneal angle. Methods In a population-based, cross-sectional study, participants underwent gonioscopy and goniophotographic imaging during the same visit. The iridocorneal angle was classified as closed if the posterior trabecular meshwork could not be seen. A single masked observer graded the goniophotographic images, and each eye was classified as having angle closure based on the number of closed quadrants. Agreement between the methods was analyzed by calculating kappa (κ) and first-order agreement coefficient (AC1) statistics and comparison of area under receiver operating characteristic curves (AUC). Results A total of 4149 Chinese Americans (3994 eyes) were included in this study. The agreement for angle closure diagnosis between gonioscopy and EyeCam was moderate to excellent (κ = 0.60, AC1 0.90, AUC 0.76–0.80). Conclusions Detection of iridocorneal angle closure based on goniophotographic imaging shows moderate to very good agreement with angle closure assessment using gonioscopy. PMID:27571018
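    A minimal sketch of the per-eye agreement computation using Cohen's kappa from scikit-learn; the labels are synthetic stand-ins rather than CHES data, and the AC1 and AUC analyses are not reproduced.

```python
# Minimal sketch of the agreement calculation between gonioscopy and
# goniophotography gradings using Cohen's kappa. Labels below are synthetic.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
gonioscopy = rng.integers(0, 2, 500)                 # 1 = angle closure
flip = rng.random(500) < 0.15                        # simulated disagreement
goniophoto = np.where(flip, 1 - gonioscopy, gonioscopy)
print("kappa:", cohen_kappa_score(gonioscopy, goniophoto))
```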

  4. Macromolecular Topography Leaps into the Digital Age

    NASA Technical Reports Server (NTRS)

    Lovelace, J.; Bellamy, H.; Snell, E. H.; Borgstahl, G.

    2003-01-01

    A low-cost, real-time digital topography system is under development which will replace x-ray film and nuclear emulsion plates. The imaging system is based on an inexpensive surveillance camera that offers a 1000x1000 array of 8 μm square pixels, anti-blooming circuitry, and very quick read out. Currently, the system directly converts x-rays to an image with no phosphor. The system is small and light and can be easily adapted to work with other crystallographic equipment. Preliminary images have been acquired of cubic insulin at the NSLS x26c beam line. NSLS x26c was configured for unfocused monochromatic radiation. Six reflections were collected with stills spaced from 0.002 to 0.001 degrees apart across the entire oscillation range over which the reflections were in diffracting condition. All of the reflections were rotated to the vertical to reduce Lorentz and beam related effects. This particular CCD is designed for short exposure applications (much less than 1 sec) and so has a relatively high dark current leading to noisy raw images. The images are processed to remove background and other system noise with a multi-step approach including the use of wavelets, histogram, and mean window filtering. After processing, animations were constructed with the corresponding reflection profile to show the diffraction of the crystal volume vs. the oscillation angle, as well as composite images showing the parts of the crystal with the strongest diffraction for each reflection. The final goal is to correlate features seen in reflection profiles captured with fine phi slicing to those seen in the topography images. With this development macromolecular topography finally comes into the digital age.

  5. Development of Multi-Field of view-Multiple-Scattering-Polarization Lidar : analysis of angular resolved backscattered signals

    NASA Astrophysics Data System (ADS)

    Makino, T.; Okamoto, H.; Sato, K.; Tanaka, K.; Nishizawa, T.; Sugimoto, N.; Matsui, I.; Jin, Y.; Uchiyama, A.; Kudo, R.

    2014-12-01

    We have developed a new type of ground-based lidar, the Multi-Field of view-Multiple-Scattering-Polarization Lidar (MFMSPL), to analyze the multiple scattering contribution due to low-level clouds. One issue with ground-based lidar is that strong attenuation of the lidar signals limits penetration to an optical thickness of about 3, so that only the cloud bottom part can be observed. In order to overcome this problem, we have proposed the MFMSPL, which has been designed to observe a degree of multiple scattering contribution similar to that expected from the space-borne lidar CALIOP on the CALIPSO satellite. The system consists of eight detectors; four telescopes for parallel channels and four for perpendicular channels. The four pairs of telescopes have been mounted with four different off-beam angles, ranging from -5 to 35 mrad, where the angle is defined as the one between the direction of the laser beam and the direction of the telescope. Consequently, a footprint (100 m) similar in size to that of CALIOP can be achieved in the MFMSPL observations when the clouds are located at an altitude of about 1 km. The use of multiple fields of view enables measurement of the depolarization ratio from optically thick clouds. The outer receivers with larger angles generally detect backscattered signals from clouds located at higher altitudes due to the enhanced multiple scattering, compared with the inner receiver that detects signals only from cloud bottom portions. Therefore, MFMSPL observations are expected to provide information on cloud microphysics from optically thicker regions than a conventional lidar with a small FOV. The MFMSPL has been continuously operated in Tsukuba, Japan since June 2014. Initial analyses have indicated performance consistent with theoretical estimates from backward Monte Carlo simulations. The depolarization ratio from the deeper part of the clouds detected by the receiver with a large off-beam angle showed much larger values than that from the receiver with a small angle. The calibration procedures and a summary of initial observations will be presented. The observed data obtained by the MFMSPL will be used to develop and evaluate retrieval algorithms for cloud microphysics applied to the CALIOP data.

  6. Complex Contact Angles Calculated from Capillary Rise Measurements on Rock Fracture Faces

    NASA Astrophysics Data System (ADS)

    Perfect, E.; Gates, C. H.; Brabazon, J. W.; Santodonato, L. J.; Dhiman, I.; Bilheux, H.; Bilheux, J. C.; Lokitz, B. S.

    2017-12-01

    Contact angles for fluids in unconventional reservoir rocks are needed for modeling hydraulic fracturing leakoff and subsequent oil and gas extraction. Contact angle measurements for wetting fluids on rocks are normally performed using polished flat surfaces. However, such prepared surfaces are not representative of natural rock fracture faces, which have been shown to be rough over multiple scales. We applied a variant of the Wilhelmy plate method for determining contact angle from the height of capillary rise on a vertical surface to the wetting of rock fracture faces by water in the presence of air. Cylindrical core samples (5.05 cm long × 2.54 cm diameter) of Mancos shale and 6 other rock types were investigated. Mode I fractures were created within the cores using the Brazilian method. Each fractured core was then separated into halves exposing the fracture faces. One fracture face from each rock type was oriented parallel to a collimated neutron beam in the CG-1D imaging instrument at ORNL's High Flux Isotope Reactor. Neutron radiography was performed using the multi-channel plate detector with a spatial resolution of 50 μm. Images were acquired every 60 s after a water reservoir contacted the base of the fracture face. The images were normalized to the initial dry condition so that the upward movement of water on the fracture face was clearly visible. The height of wetting at equilibrium was measured on the normalized images using ImageJ. Contact angles were also measured on polished flat surfaces using the conventional sessile drop method. Equilibrium capillary rise on the exposed fracture faces was up to 8.5 times greater than that predicted for polished flat surfaces from the sessile drop measurements. These results indicate that rock fracture faces are hyperhydrophilic (i.e., the height of capillary rise is greater than that predicted for a contact angle of zero degrees). The use of complex numbers permitted calculation of imaginary contact angles for such surfaces. This analysis yielded a continuum of contact angles (real above, and imaginary below, zero degrees) that can be used to investigate relationships with properties such as surface roughness and porosity. It should be noted that these are preliminary, unreplicated results and further research will be needed to verify them and refine the approach.
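    The abstract does not state the working equation, so the textbook meniscus-rise relation on a vertical wall is shown below only to illustrate why measured rises above the zero-degree prediction push the contact angle out of the real range.

```latex
% Equilibrium meniscus rise on a vertical flat wall (gamma = surface tension,
% rho = liquid density, g = gravity); shown only for illustration, as the
% authors' exact formulation is not given in the abstract.
\[
h(\theta) \;=\; \sqrt{\frac{2\gamma\,(1-\sin\theta)}{\rho g}},
\qquad
h_{\max}^{\,\theta \ge 0} \;=\; h(0) \;=\; \sqrt{\frac{2\gamma}{\rho g}} .
\]
% Measured rises up to 8.5 times the theta = 0 prediction cannot be matched
% by any real, non-negative contact angle; extending theta to complex values
% (the "imaginary contact angles" of the abstract) restores a formal solution.
```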

  7. Touch the Invisible Sky: A multi-wavelength Braille book featuring NASA images

    NASA Astrophysics Data System (ADS)

    Steel, S.; Grice, N.; Daou, D.

    2008-06-01

    Multi-wavelength astronomy - the study of the Universe at wavelengths beyond the visible, has revolutionised our understanding and appreciation of the cosmos. Hubble, Chandra and Spitzer are examples of powerful, space-based telescopes that complement each other in their observations spanning the electromagnetic spectrum. While several Braille books on astronomical topics have been published, to this point, no printed material accessible to the sight disabled or Braille reading public has been available on the topic of multi-wavelength astronomy. Touch the Invisible Sky presents the first printed introduction to modern, multi-wavelength astronomy studies to the disabled sight community. On a more fundamental level, tactile images of a Universe that had, until recently, been invisible to all, sighted or non-sighted, is an important learning message on how science and technology broadens our senses and our understanding of the natural world.

  8. Comparing observer models and feature selection methods for a task-based statistical assessment of digital breast tomosynthesis in reconstruction space

    NASA Astrophysics Data System (ADS)

    Park, Subok; Zhang, George Z.; Zeng, Rongping; Myers, Kyle J.

    2014-03-01

    A task-based assessment of image quality [1] for digital breast tomosynthesis (DBT) can be done in either the projected or reconstructed data space. As the choice of observer models and feature selection methods can vary depending on the type of task and data statistics, we previously investigated the performance of two channelized-Hotelling observer models in conjunction with 2D Laguerre-Gauss (LG) and two implementations of partial least squares (PLS) channels along with that of the Hotelling observer in binary detection tasks involving DBT projections [2, 3]. The difference in these observers lies in how the spatial correlation in DBT angular projections is incorporated in the observer's strategy to perform the given task. In the current work, we extend our method to the reconstructed data space of DBT. We investigate how various model observers including the aforementioned compare for performing the binary detection of a spherical signal embedded in structured breast phantoms with the use of DBT slices reconstructed via filtered back projection. We explore how well the model observers incorporate the spatial correlation between different numbers of reconstructed DBT slices while varying the number of projections. For this, relatively small and large scan angles (24° and 96°) are used for comparison. Our results indicate that 1) given a particular scan angle, the number of projections needed to achieve the best performance for each observer is similar across all observer/channel combinations, i.e., Np = 25 for scan angle 96° and Np = 13 for scan angle 24°, and 2) given these sufficient numbers of projections, the number of slices for each observer to achieve the best performance differs depending on the channel/observer types, which is more pronounced in the narrow scan angle case.
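    A generic channelized Hotelling observer computation, for context: channel the images, estimate the channelized covariance, and apply the Hotelling template. The channels and data below are random placeholders, not the LG or PLS channels and DBT data of the study.

```python
# Generic channelized Hotelling observer (CHO) sketch for a binary detection
# task with synthetic data and placeholder channels.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_chan, n_train = 64 * 64, 10, 200
U = rng.normal(size=(n_pix, n_chan))            # placeholder channel matrix

signal = np.zeros(n_pix); signal[2016:2080] = 0.5   # toy signal profile
g_absent = rng.normal(size=(n_train, n_pix))
g_present = g_absent + signal                       # signal-known-exactly case

v_a, v_p = g_absent @ U, g_present @ U              # channel outputs
K = 0.5 * (np.cov(v_a, rowvar=False) + np.cov(v_p, rowvar=False))
w = np.linalg.solve(K, v_p.mean(0) - v_a.mean(0))   # Hotelling template

t_a, t_p = v_a @ w, v_p @ w                         # observer test statistics
snr = (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_p.var() + t_a.var()))
print("CHO detectability (SNR):", snr)
```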

  9. An efficient sampling algorithm for uncertain abnormal data detection in biomedical image processing and disease prediction.

    PubMed

    Liu, Fei; Zhang, Xi; Jia, Yan

    2015-01-01

    In this paper, we propose a computer information processing algorithm that can be used for biomedical image processing and disease prediction. A biomedical image is considered a data object in a multi-dimensional space. Each dimension is a feature that can be used for disease diagnosis. We introduce a new concept of the top (k1,k2) outlier. It can be used to detect abnormal data objects in the multi-dimensional space. This technique focuses on uncertain space, where each data object has several possible instances with distinct probabilities. We design an efficient sampling algorithm for the top (k1,k2) outlier in uncertain space. Some improvement techniques are used for acceleration. Experiments show our methods' high accuracy and high efficiency.

  10. 2.5D multi-view gait recognition based on point cloud registration.

    PubMed

    Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan

    2014-03-28

    This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting the gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) to generate multi-view training galleries is proposed. The concept of a density and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space to enable data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM.

  11. Sensitivity of multiangle photo-polarimetry to absorbing aerosol vertical layering and properties: Quantifying measurement uncertainties for ACE requirements

    NASA Astrophysics Data System (ADS)

    Kalashnikova, O. V.; Garay, M. J.; Davis, A. B.; Natraj, V.; Diner, D. J.; Tanelli, S.; Martonchik, J. V.; JPL Team

    2011-12-01

    The impact of tropospheric aerosols on climate can vary greatly based upon relatively small variations in aerosol properties, such as composition, shape and size distributions, as well as vertical layering. Multi-angle polarimetric measurements have been advocated in recent years as an additional tool to better understand and retrieve the aerosol properties needed for improved predictions of aerosol radiative forcing on climate. The central concern of this work is the assessment of the effects of absorbing aerosol properties under the measurement uncertainties achievable for future-generation multi-angle polarimetric imaging instruments under ACE mission requirements. As guidelines, the on-orbit performance of MISR for multi-angle intensity measurements and the reported polarization sensitivities of a MSPI prototype were adopted. In particular, we will focus on sensitivities to absorbing aerosol layering and observation-constrained refractive indices (resulting in various single scattering albedos (SSA)) of both spherical and non-spherical absorbing aerosol types. We conducted modeling experiments to determine how the measured Stokes vector elements are affected in the UV-NIR range by the vertical distribution, mixing and layering of smoke and dust aerosols, and by aerosol SSA, under the assumptions of black and polarizing ocean surfaces. We use vector successive-orders-of-scattering (SOS) and VLIDORT radiative transfer codes, which show excellent agreement. Based on our sensitivity studies we will demonstrate advantages and disadvantages of wavelength selection in the UV-NIR range for accessing absorbing aerosol properties. Polarized UV channels do not show a particular advantage for absorbing aerosol property characterization because the molecular signal dominates. Polarimetric SSA sensitivity is small but still needs to be considered in future polarimetric retrievals under ACE-defined uncertainties.

  12. [The research on bidirectional reflectance computer simulation of forest canopy at pixel scale].

    PubMed

    Song, Jin-Ling; Wang, Jin-Di; Shuai, Yan-Min; Xiao, Zhi-Qiang

    2009-08-01

    Computer simulation uses computer graphics to generate a realistic 3D structural scene of vegetation and simulates the canopy radiation regime using the radiosity method. In the present paper, the authors extend the computer simulation model to simulate forest canopy bidirectional reflectance at pixel scale. Trees, however, are usually complex structures, tall and with many branches, so hundreds of thousands or even millions of facets are needed to build a realistic forest scene, and it is difficult for the radiosity method to compute so many facets. In order to make the radiosity method able to simulate the forest scene at pixel scale, the authors propose to simplify the structure of the forest crowns by abstracting each crown as an ellipsoid. Based on the optical characteristics of the tree components and on the internal transport of photons in a real crown, the authors assign optical characteristics to the ellipsoid surface facets. In the computer simulation of the forest, following the idea of geometric-optics models, a gap model is incorporated to obtain the forest canopy bidirectional reflectance at pixel scale. Comparing the computer simulation results with the GOMS model and with Multi-angle Imaging SpectroRadiometer (MISR) multi-angle remote sensing data, the simulation results are in agreement with the GOMS simulation results and the MISR BRF. Some problems remain to be solved, but the authors conclude that the study has important value for the application of multi-angle remote sensing and the inversion of vegetation canopy structure parameters.

  13. Pine Island Glacier, Antarctica, MISR Multi-angle Composite

    Atmospheric Science Data Center

    2013-12-17

    ... A large iceberg has finally separated from the calving front ... next due to stereo parallax. This parallax is used in MISR processing to retrieve cloud heights over snow and ice. Additionally, a plume ...

  14. Global Albedo

    Atmospheric Science Data Center

    2013-04-19

    ... the albedo. Bright surfaces have albedo near unity, and dark surfaces have albedo near zero. The DHR refers to the amount of spectral ... Atmospheric Science Data Center's MISR Level 3 Imagery web site. The Multi-angle Imaging SpectroRadiometer observes the daylit ...

  15. A method of measuring micro-impulse with torsion pendulum based on multi-beam laser heterodyne

    NASA Astrophysics Data System (ADS)

    Li, Yan-Chao; Wang, Chun-Hui

    2012-02-01

    In this paper, we propose a novel method of multi-beam laser heterodyne measurement for micro-impulse. The measurement of the micro-impulse, which is converted into the measurement of the small tuning angle of the torsion pendulum, is realized by considering the interaction between the pulse laser and the working medium. Based on the Doppler effect and heterodyne technology, the information regarding the small tuning angle is loaded onto the frequency difference of the multi-beam laser heterodyne signal by the frequency modulation of the oscillating mirror, thereby yielding many values of the small tuning angle simultaneously after demodulation of the multi-beam laser heterodyne signal. Processing these values by weighted averaging, the small tuning angle can be obtained accurately and the value of the micro-impulse can then be calculated. Using Polyvinylchlorid+2%C as a working medium, the method is used in MATLAB to simulate the micro-impulse generated by the interaction between the pulse laser and the working medium; the result shows that the relative error of this method is just 0.5%.

  16. Sensitivity analysis of observed reflectivity to ice particle surface roughness using MISR satellite observations

    NASA Astrophysics Data System (ADS)

    Bell, A.; Hioki, S.; Wang, Y.; Yang, P.; Di Girolamo, L.

    2016-12-01

    Previous studies found that including ice particle surface roughness in forward light scattering calculations significantly reduces the differences between observed and simulated polarimetric and radiometric observations. While it is suggested that some degree of roughness is desirable, the appropriate degree of surface roughness to be assumed in operational cloud property retrievals, and the sensitivity of retrieval products to this assumption, remain uncertain. In an effort to resolve this ambiguity, we will present a sensitivity analysis of space-borne multi-angle observations of reflectivity to varying degrees of surface roughness. This process is twofold. First, sampling information and statistics from the Multi-angle Imaging SpectroRadiometer (MISR) sensor aboard the Terra platform will be used to define the most common viewing geometries. Using these geometries, reflectivity will be simulated for multiple degrees of roughness using results from adding-doubling radiative transfer simulations. The sensitivity of simulated reflectivity to surface roughness can then be quantified, thus yielding a more robust retrieval system. Secondly, the sensitivity of the inverse problem will be analyzed. Spherical albedo values will be computed by feeding blocks of MISR data comprising cloudy pixels over ocean into the retrieval system, with assumed values of surface roughness. The sensitivity of spherical albedo to the assumed surface roughness can then be quantified, and the accuracy of retrieved parameters can be determined.

  17. A multi-cone x-ray imaging Bragg crystal spectrometer

    DOE PAGES

    Bitter, M.; Hill, K. W.; Gao, Lan; ...

    2016-08-26

    This article describes a new x-ray imaging Bragg crystal spectrometer, which—in combination with a streak camera or a gated strip detector—can be used for time-resolved measurements of x-ray line spectra at the National Ignition Facility and other high power laser facilities. The main advantage of this instrument is that it produces perfect images of a point source for each wavelength in a selectable spectral range and that the detector plane can be perpendicular to the crystal surface or inclined by an arbitrary angle with respect to the crystal surface. Furthermore, these unique imaging properties are obtained by bending the x-ray diffracting crystal into a certain shape, which is generated by arranging multiple cones with different aperture angles on a common nodal line.

  18. Multi-pass encoding of hyperspectral imagery with spectral quality control

    NASA Astrophysics Data System (ADS)

    Wasson, Steven; Walker, William

    2015-05-01

    Multi-pass encoding is a technique employed in the field of video compression that maximizes the quality of an encoded video sequence within the constraints of a specified bit rate. This paper presents research where multi-pass encoding is extended to the field of hyperspectral image compression. Unlike video, which is primarily intended to be viewed by a human observer, hyperspectral imagery is processed by computational algorithms that generally attempt to classify the pixel spectra within the imagery. As such, these algorithms are more sensitive to distortion in the spectral dimension of the image than they are to perceptual distortion in the spatial dimension. The compression algorithm developed for this research, which uses the Karhunen-Loeve transform for spectral decorrelation followed by a modified H.264/Advanced Video Coding (AVC) encoder, maintains a user-specified spectral quality level while maximizing the compression ratio throughout the encoding process. The compression performance may be considered near-lossless in certain scenarios. For qualitative purposes, this paper presents the performance of the compression algorithm for several Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Hyperion datasets using spectral angle as the spectral quality assessment function. Specifically, the compression performance is illustrated in the form of rate-distortion curves that plot spectral angle versus bits per pixel per band (bpppb).
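    The spectral-angle metric used for quality control is the angle between original and reconstructed pixel spectra; a small sketch, with array shapes chosen only for illustration:

```python
# Sketch of the spectral-angle quality metric referred to above: the angle
# between an original and a reconstructed pixel spectrum.
import numpy as np

def spectral_angle(orig, recon, axis=-1):
    """Per-pixel spectral angle (radians) between two hyperspectral cubes
    of shape (..., n_bands)."""
    num = np.sum(orig * recon, axis=axis)
    den = np.linalg.norm(orig, axis=axis) * np.linalg.norm(recon, axis=axis)
    return np.arccos(np.clip(num / den, -1.0, 1.0))

rng = np.random.default_rng(0)
cube = rng.random((32, 32, 224))                   # e.g. an AVIRIS-like band count
decoded = cube + 0.01 * rng.standard_normal(cube.shape)
print("mean spectral angle (rad):", spectral_angle(cube, decoded).mean())
```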

  19. The Zeldovich approximation and wide-angle redshift-space distortions

    NASA Astrophysics Data System (ADS)

    Castorina, Emanuele; White, Martin

    2018-06-01

    The contribution of line-of-sight peculiar velocities to the observed redshift of objects breaks the translational symmetry of the underlying theory, modifying the predicted 2-point functions. These `wide angle effects' have mostly been studied using linear perturbation theory in the context of the multipoles of the correlation function and power spectrum. In this work we present the first calculation of wide angle terms in the Zeldovich approximation, which is known to be more accurate than linear theory on scales probed by the next generation of galaxy surveys. We present the exact result for dark matter and perturbatively biased tracers as well as the small angle expansion of the configuration- and Fourier-space two-point functions and the connection to the multi-frequency angular power spectrum. We compare different definitions of the line-of-sight direction and discuss how to translate between them. We show that wide angle terms can reach tens of percent of the total signal in a measurement at low redshift in some approximations, and that a generic feature of wide angle effects is to slightly shift the Baryon Acoustic Oscillation scale.

  20. Reflections on current and future applications of multiangle imaging to aerosol and cloud remote sensing

    NASA Astrophysics Data System (ADS)

    Diner, David

    2010-05-01

    The Multi-angle Imaging SpectroRadiometer (MISR) instrument has been collecting global Earth data from NASA's Terra satellite since February 2000. With its 9 along-track view angles, 4 spectral bands, intrinsic spatial resolution of 275 m, and stable radiometric and geometric calibration, no instrument that combines MISR's attributes has previously flown in space, nor is there a similar capability currently available on any other satellite platform. Multiangle imaging offers several tools for remote sensing of aerosol and cloud properties, including bidirectional reflectance and scattering measurements, stereoscopic pattern matching, time lapse sequencing, and potentially, optical tomography. Current data products from MISR employ several of these techniques. Observations of the intensity of scattered light as a function of view angle and wavelength provide accurate measures of aerosol optical depths (AOD) over land, including bright desert and urban source regions. Partitioning of AOD according to retrieved particle classification and incorporation of height information improves the relationship between AOD and surface PM2.5 (fine particulate matter, a regulated air pollutant), constituting an important step toward a satellite-based particulate pollution monitoring system. Stereoscopic cloud-top heights provide a unique metric for detecting interannual variability of clouds and exceptionally high quality and sensitivity for detection and height retrieval for low-level clouds. Using the several-minute time interval between camera views, MISR has enabled a pole-to-pole, height-resolved atmospheric wind measurement system. Stereo imagery also makes possible global measurement of the injection heights and advection speeds of smoke plumes, volcanic plumes, and dust clouds, for which a large database is now available. To build upon what has been learned during the first decade of MISR observations, we are evaluating algorithm updates that not only refine retrieval accuracies but also include enhancements (e.g., finer spatial resolution) that would have been computationally prohibitive just ten years ago. In addition, we are developing technological building blocks for future sensors that enable broader spectral coverage, wider swath, and incorporation of high-accuracy polarimetric imaging. Prototype cameras incorporating photoelastic modulators have been constructed. To fully capitalize on the rich information content of the current and next-generation of multiangle imagers, several algorithmic paradigms currently employed need to be re-examined, e.g., the use of aerosol look-up tables, neglect of 3-D effects, and binary partitioning of the atmosphere into "cloudy" or "clear" designations. Examples of progress in algorithm and technology developments geared toward advanced application of multiangle imaging to remote sensing of aerosols and clouds will be presented.

  1. Bio-inspired multi-mode optic flow sensors for micro air vehicles

    NASA Astrophysics Data System (ADS)

    Park, Seokjun; Choi, Jaehyuk; Cho, Jihyun; Yoon, Euisik

    2013-06-01

    Monitoring wide-field surrounding information is essential for vision-based autonomous navigation in micro-air-vehicles (MAV). Our image-cube (iCube) module, which consists of multiple sensors facing different angles in 3-D space, can be applied to wide field-of-view optic flow estimation (μ-Compound eyes) and to attitude control (μ-Ocelli) in the Micro Autonomous Systems and Technology (MAST) platforms. In this paper, we report an analog/digital (A/D) mixed-mode optic-flow sensor, which generates both optic flows and normal images in different modes for μ-Compound eyes and μ-Ocelli applications. The sensor employs a time-stamp based optic flow algorithm which is modified from the conventional EMD (Elementary Motion Detector) algorithm to give an optimum partitioning of hardware blocks in analog and digital domains as well as adequate allocation of pixel-level, column-parallel, and chip-level signal processing. Temporal filtering, which may require huge hardware resources if implemented in the digital domain, remains in a pixel-level analog processing unit. The rest of the blocks, including feature detection and timestamp latching, are implemented using digital circuits in a column-parallel processing unit. Finally, time-stamp information is decoded into velocity from look-up tables, multiplications, and simple subtraction circuits in a chip-level processing unit, thus significantly reducing core digital processing power consumption. In the normal image mode, the sensor generates 8-b digital images using single slope ADCs in the column unit. In the optic flow mode, the sensor estimates 8-b 1-D optic flows from the integrated mixed-mode algorithm core and 2-D optic flows with external timestamp processing, respectively.

  2. Detection of Neuron Membranes in Electron Microscopy Images Using Multi-scale Context and Radon-Like Features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seyedhosseini, Mojtaba; Kumar, Ritwik; Jurrus, Elizabeth R.

    2011-10-01

    Automated neural circuit reconstruction through electron microscopy (EM) images is a challenging problem. In this paper, we present a novel method that exploits multi-scale contextual information together with Radon-like features (RLF) to learn a series of discriminative models. The main idea is to build a framework which is capable of extracting information about cell membranes from a large contextual area of an EM image in a computationally efficient way. Toward this goal, we extract RLF that can be computed efficiently from the input image and generate a scale-space representation of the context images that are obtained at the output of each discriminative model in the series. Compared to a single-scale model, the use of a multi-scale representation of the context image gives the subsequent classifiers access to a larger contextual area in an effective way. Our strategy is general and independent of the classifier and has the potential to be used in any context based framework. We demonstrate that our method outperforms the state-of-the-art algorithms in detection of neuron membranes in EM images.

  3. Batman flies: a compact spectro-imager for space observation

    NASA Astrophysics Data System (ADS)

    Zamkotsian, Frederic; Ilbert, Olivier; Zoubian, Julien; Delsanti, Audrey; Boissier, Samuel; Lancon, Ariane

    2017-11-01

    Multi-object spectroscopy (MOS) is a key technique for large field of view surveys. MOEMS programmable slit masks could be next-generation devices for selecting objects in future infrared astronomical instrumentation for space telescopes. MOS is used extensively to investigate astronomical objects by optimizing the Signal-to-Noise Ratio (SNR): high precision spectra are obtained and the problems of spectral confusion and background level that occur in slitless spectroscopy are eliminated. Fainter limiting fluxes are reached and the scientific return is maximized both in cosmology and in legacy science. Major telescopes around the world are equipped with MOS in order to simultaneously record several hundred spectra in a single observation run. Next-generation MOS for space, like the Near Infrared Multi-Object Spectrograph (NIRSpec) for the James Webb Space Telescope (JWST), require a programmable multi-slit mask. Conventional masks or complex fiber-optics-based mechanisms are not attractive for space. The programmable multi-slit mask requires remote control of the multi-slit configuration in real time. During the early-phase studies of the European Space Agency (ESA) EUCLID mission, a MOS instrument based on a MOEMS device was assessed. For complexity and cost reasons, slitless spectroscopy was chosen for EUCLID, despite the much higher efficiency of slit spectroscopy. A promising possible solution is the use of MOEMS devices such as micromirror arrays (MMA) [1,2,3] or micro-shutter arrays (MSA) [4]. MMAs are designed for generating reflecting slits, while MSAs generate transmissive slits. In Europe an effort is currently under way to develop single-crystalline silicon micromirror arrays for future generation infrared multi-object spectroscopy (collaboration LAM / EPFL-CSEM) [5,6]. By placing the programmable slit mask in the focal plane of the telescope, the light from selected objects is directed toward the spectrograph, while the light from other objects and from the sky background is blocked. To get more than 2 million independent micromirrors, the only available component is a Digital Micromirror Device (DMD) chip from Texas Instruments (TI) that features 2048 x 1080 mirrors and a 13.68 μm pixel pitch. DMDs have been tested in a space environment (-40°C, vacuum, radiation) by LAM, and no showstopper has been revealed [7]. We present in this paper a DMD-based spectrograph called BATMAN, comprising two arms: a spectroscopic channel and an imaging channel. This instrument is designed to obtain breakthrough results in several science cases, from high-z galaxies to nearby galaxies and Trans-Neptunian Objects of the Kuiper Belt.

  4. A novel point cloud registration using 2D image features

    NASA Astrophysics Data System (ADS)

    Lin, Chien-Chou; Tai, Yen-Chou; Lee, Jhong-Jin; Chen, Yong-Sheng

    2017-01-01

    Since a 3D scanner captures only one view of a 3D object at a time, registration of multiple scans is the key issue in 3D modeling. This paper presents a novel and efficient 3D registration method based on 2D local feature matching. The proposed method transforms the point clouds into 2D bearing angle images and then uses the 2D feature based matching method, SURF, to find matching pixel pairs between two images. The corresponding points of the 3D point clouds can be obtained from those pixel pairs. Since the corresponding pairs are sorted by the distance between their matching features, only the top half of the corresponding pairs are used to find the optimal rotation matrix by least squares approximation. In this paper, the optimal rotation matrix is derived by the orthogonal Procrustes method (SVD-based approach). Therefore, the 3D model of an object can be reconstructed by aligning those point clouds with the optimal transformation matrix. Experimental results show that the accuracy of the proposed method is close to that of ICP, but the computation cost is reduced significantly. The performance is six times faster than the generalized-ICP algorithm. Furthermore, while ICP requires high alignment similarity between two scenes, the proposed method is robust to larger differences in viewing angle.
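    The SVD-based orthogonal Procrustes step can be sketched as follows, with synthetic matched points standing in for the SURF-derived correspondences; the bearing-angle image construction is not reproduced.

```python
# Sketch of the SVD-based orthogonal Procrustes (Kabsch) step: given matched
# 3-D points from two scans, recover the rotation and translation aligning them.
import numpy as np

def rigid_align(P, Q):
    """Return R, t minimizing ||R @ P_i + t - Q_i|| over matched rows of
    P and Q (each of shape (n, 3))."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

rng = np.random.default_rng(0)
P = rng.random((50, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([0.2, -0.1, 0.4])
R_est, t_est = rigid_align(P, Q)
print("rotation error:", np.abs(R_est - R_true).max())
```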

  5. Visualizing transplanted muscle flaps using minimally invasive multi-electrode bioimpedance spectroscopy

    NASA Astrophysics Data System (ADS)

    Gordon, R.; Zorkova, V.; Min, M.; Rätsep, I.

    2010-04-01

    We describe here an imaging system that uses bioimpedance spectroscopy with a multi-electrode array to indicate the state of muscle flap regions under the array. The system is able to differentiate between different health states in the tissue and give early information about the location and size of ischemic sub-regions. The array consists of 4 x 8 electrodes with a spacing of 5 mm between the electrodes (the number of electrodes and the spacing may vary). The electrodes are minimally invasive short stainless steel needles that penetrate 0.3 mm into the tissue with the goal of achieving a wet electrical contact. We combine 32 configurations of 4-electrode multi-frequency impedance measurements to derive a health-state map for the transplanted flap. The imaging method is tested on a model consisting of two tissues, and finite element method (FEM) software (COMSOL Multiphysics) is used to conduct the measurements virtually. Dedicated multichannel bioimpedance measurement equipment that covers the frequency range from 100 Hz to 1 MHz has already been developed and tested.

  6. SPECT3D - A multi-dimensional collisional-radiative code for generating diagnostic signatures based on hydrodynamics and PIC simulation output

    NASA Astrophysics Data System (ADS)

    MacFarlane, J. J.; Golovkin, I. E.; Wang, P.; Woodruff, P. R.; Pereyra, N. A.

    2007-05-01

    SPECT3D is a multi-dimensional collisional-radiative code used to post-process the output from radiation-hydrodynamics (RH) and particle-in-cell (PIC) codes to generate diagnostic signatures (e.g. images, spectra) that can be compared directly with experimental measurements. This ability to post-process simulation code output plays a pivotal role in assessing the reliability of RH and PIC simulation codes and their physics models. SPECT3D has the capability to operate on plasmas in 1D, 2D, and 3D geometries. It computes a variety of diagnostic signatures that can be compared with experimental measurements, including: time-resolved and time-integrated spectra, space-resolved spectra and streaked spectra; filtered and monochromatic images; and X-ray diode signals. Simulated images and spectra can include the effects of backlighters, as well as the effects of instrumental broadening and time-gating. SPECT3D also includes a drilldown capability that shows where frequency-dependent radiation is emitted and absorbed as it propagates through the plasma towards the detector, thereby providing insights on where the radiation seen by a detector originates within the plasma. SPECT3D has the capability to model a variety of complex atomic and radiative processes that affect the radiation seen by imaging and spectral detectors in high energy density physics (HEDP) experiments. LTE (local thermodynamic equilibrium) or non-LTE atomic level populations can be computed for plasmas. Photoabsorption rates can be computed using either escape probability models or, for selected 1D and 2D geometries, multi-angle radiative transfer models. The effects of non-thermal (i.e. non-Maxwellian) electron distributions can also be included. To study the influence of energetic particles on spectra and images recorded in intense short-pulse laser experiments, the effects of both relativistic electrons and energetic proton beams can be simulated. SPECT3D is a user-friendly software package that runs on Windows, Linux, and Mac platforms. A parallel version of SPECT3D is supported for Linux clusters for large-scale calculations. We will discuss the major features of SPECT3D, and present example results from simulations and comparisons with experimental data.

  7. Recognition of rotated images using the multi-valued neuron and rotation-invariant 2D Fourier descriptors

    NASA Astrophysics Data System (ADS)

    Aizenberg, Evgeni; Bigio, Irving J.; Rodriguez-Diaz, Eladio

    2012-03-01

    The Fourier descriptors paradigm is a well-established approach for affine-invariant characterization of shape contours. In the work presented here, we extend this method to images and obtain a 2D Fourier representation that is invariant to image rotation. The proposed technique retains phase uniqueness, and therefore structural image information is not lost. Rotation-invariant phase coefficients were used to train a single multi-valued neuron (MVN) to recognize satellite and human face images rotated by a wide range of angles. Experiments yielded classification rates of 100% and 96.43% for the two data sets, respectively. Recognition performance was additionally evaluated under the effects of lossy JPEG compression and additive Gaussian noise. Preliminary results show that the derived rotation-invariant features combined with the MVN provide a promising scheme for efficient recognition of rotated images.

  8. Tailoring graphene layer-to-layer growth

    NASA Astrophysics Data System (ADS)

    Li, Yongtao; Wu, Bin; Guo, Wei; Wang, Lifeng; Li, Jingbo; Liu, Yunqi

    2017-06-01

    A layered material grown between a substrate and the upper layer involves complex interactions and a confined reaction space, representing an unusual growth mode. Here, we show multi-layer graphene domains grown on liquid or solid Cu by the chemical vapor deposition method via this ‘double-substrate’ mode. We demonstrate the interlayer-induced coupling effect on the twist angle in bi- and multi-layer graphene. We discover dramatic growth disunity for different graphene layers, which is explained by the ideas of a chemical ‘gate’ and a material transport process within a confined space. These key results lead to a consistent framework for understanding the dynamic evolution of multi-layered graphene flakes and tailoring the layer-to-layer growth for practical applications.

  9. The research of multi-frame target recognition based on laser active imaging

    NASA Astrophysics Data System (ADS)

    Wang, Can-jin; Sun, Tao; Wang, Tin-feng; Chen, Juan

    2013-09-01

    Laser active imaging is suited to conditions such as no temperature difference between target and background, pitch-black night, and bad visibility. It can also be used to detect a faint target at long range or a small target in deep space, and it has the advantages of high definition and good contrast; in short, it is largely immune to the environment. However, due to the effects of long distance, limited laser energy, and atmospheric backscatter, it is impossible to illuminate the whole scene at the same time. This means that the target in every single frame is unevenly or only partly illuminated, which makes recognition more difficult. At the same time, the speckle noise that is common in laser active imaging blurs the images. In this paper we study laser active imaging and propose a new target recognition method based on multi-frame images. Firstly, multiple laser pulses are used to obtain sub-images of different parts of the scene. A denoising method combining homomorphic filtering with wavelet-domain SURE is used to suppress speckle noise, and blind deconvolution is introduced to obtain low-noise, clear sub-images. These sub-images are then registered and stitched to form a completely and uniformly illuminated scene image. After that, a new target recognition method based on contour moments is proposed: the Canny operator is used to obtain contours, and for each contour the seven invariant Hu moments are calculated to generate the feature vectors. Finally, the feature vectors are input into a BP neural network with two hidden layers for classification. Experimental results indicate that the proposed algorithm achieves a high recognition rate and satisfactory real-time performance for laser active imaging.
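
    The contour-moment feature extraction described above can be sketched with OpenCV; the image file name and Canny thresholds below are placeholders, and the log-scaling of the Hu moments is a common practice rather than a detail stated in the abstract.

        import cv2
        import numpy as np

        img = cv2.imread("stitched_scene.png", cv2.IMREAD_GRAYSCALE)   # hypothetical stitched scene
        edges = cv2.Canny(img, 50, 150)                                 # edge map (thresholds illustrative)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

        features = []
        for c in contours:
            hu = cv2.HuMoments(cv2.moments(c)).flatten()   # seven invariant Hu moments per contour
            # Log-scale so the feature range suits a neural-network classifier
            hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
            features.append(hu)
        features = np.array(features)                       # feature vectors for the BP network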

  10. Development of a prototype Open-close positron emission tomography system

    NASA Astrophysics Data System (ADS)

    Yamamoto, Seiichi; Okumura, Satoshi; Watabe, Tadashi; Ikeda, Hayato; Kanai, Yasukazu; Toshito, Toshiyuki; Komori, Masataka; Ogata, Yoshimune; Kato, Katsuhiko; Hatazawa, Jun

    2015-08-01

    We developed a prototype positron emission tomography (PET) system based on a new concept called Open-close PET, which has two modes: open and close. In the open mode, the detector ring is separated into two half-rings, the subject is imaged through the open space, and a projection image is formed. In the close mode, the detector ring is closed into a regular circular ring, the subject can be imaged without an open space, and reconstructed images can therefore be made without artifacts. The block detector of the Open-close PET system consists of two scintillator blocks that use two types of gadolinium orthosilicate (GSO) scintillators with different decay times, angled optical-fiber-based image guides, and a flat panel photomultiplier tube. The GSO pixel size was 1.6 × 2.4 × 7 mm and 1.6 × 2.4 × 8 mm for the fast (35 ns) and slow (60 ns) GSOs, respectively. These GSOs were arranged into an 11 × 15 matrix and optically coupled in the depth direction to form a depth-of-interaction detector. The angled optical-fiber-based image guides were used to arrange the two scintillator blocks at 22.5° so that they could be arranged in a hexadecagonal shape with eight block detectors to simplify the reconstruction algorithm. The detector ring was divided into two halves to realize the open mode and set on a mechanical stand with which the distance between the two parts can be manually changed. The spatial resolution in the close mode was 2.4-mm FWHM, and the sensitivity was 1.7% at the center of the field of view. In both the close and open modes, we made sagittal (y-z plane) projection images between the two half detector rings. We obtained reconstructed and projection images in 18F-NaF rat studies and proton-irradiated phantom images. These results indicate that the developed Open-close PET is useful for applications such as proton therapy as well as molecular imaging.

  11. An Innovative Robotic Endoscope Guidance System for Transnasal Sinus and Skull Base Surgery: Proof of Concept.

    PubMed

    Friedrich, D T; Sommer, F; Scheithauer, M O; Greve, J; Hoffmann, T K; Schuler, P J

    2017-12-01

    Objective  Advanced transnasal sinus and skull base surgery remains a challenging discipline for head and neck surgeons. Restricted access and limited space for instrumentation can impede advanced interventions. We therefore present the combination of an innovative robotic endoscope guidance system and a specific endoscope with adjustable viewing angle to facilitate transnasal surgery in a human cadaver model. Materials and Methods  The applicability of the robotic endoscope guidance system with a custom foot pedal controller was tested for advanced transnasal surgery on a fresh-frozen human cadaver head. Visualization was enabled using a commercially available endoscope with adjustable viewing angle (15-90 degrees). Results  Visualization and instrumentation of all paranasal sinuses, including the anterior and middle skull base, were feasible with the presented setup. Control of the robotic endoscope guidance system was precise and effective, and the adjustable endoscope lens extended the view of the surgical field without the frequent endoscope changes required with fixed-viewing-angle endoscopes. Conclusion  The combination of a robotic endoscope guidance system and an advanced endoscope with adjustable viewing angle enables bimanual surgery in transnasal interventions of the paranasal sinuses and the anterior skull base in a human cadaver model. The adjustable lens allows fixed-angle endoscopes to be abandoned, saving time and resources without reducing imaging quality.

  12. Wavelet SVM in Reproducing Kernel Hilbert Space for hyperspectral remote sensing image classification

    NASA Astrophysics Data System (ADS)

    Du, Peijun; Tan, Kun; Xing, Xiaoshi

    2010-12-01

    Combining the Support Vector Machine (SVM) with wavelet analysis, we constructed a wavelet SVM (WSVM) classifier based on wavelet kernel functions in Reproducing Kernel Hilbert Space (RKHS). In conventional kernel theory, SVM faces the bottleneck of kernel parameter selection, which results in time-consuming computation and low classification accuracy. The wavelet kernel in RKHS is a kind of multidimensional wavelet function that can approximate arbitrary nonlinear functions; implications for semiparametric estimation are also discussed in this paper. Airborne Operational Modular Imaging Spectrometer II (OMIS II) hyperspectral remote sensing imagery with 64 bands and Reflective Optics System Imaging Spectrometer (ROSIS) data with 115 bands were used to evaluate the performance and accuracy of the proposed WSVM classifier. The experimental results indicate that the WSVM classifier obtains the highest accuracy when using the Coiflet kernel function in the wavelet transform. In contrast with some traditional classifiers, including Spectral Angle Mapping (SAM), Minimum Distance Classification (MDC), and an SVM classifier using the Radial Basis Function kernel, the proposed wavelet SVM classifier using the wavelet kernel function in Reproducing Kernel Hilbert Space noticeably improves classification accuracy.
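
    A wavelet-kernel SVM of this general form can be sketched with scikit-learn's callable-kernel interface; the product-form Morlet-type mother wavelet and the dilation parameter a below are illustrative stand-ins (the paper's best results used a Coiflet kernel), and the training arrays are hypothetical.

        import numpy as np
        from sklearn.svm import SVC

        def wavelet_kernel(X, Y, a=1.0):
            """Gram matrix of a product-form wavelet kernel:
            K(x, y) = prod_d h((x_d - y_d) / a), with a Morlet-type mother wavelet h."""
            diff = (X[:, None, :] - Y[None, :, :]) / a
            h = np.cos(1.75 * diff) * np.exp(-0.5 * diff ** 2)
            return np.prod(h, axis=2)

        clf = SVC(kernel=lambda X, Y: wavelet_kernel(X, Y, a=2.0))
        # clf.fit(train_spectra, train_labels)     # (n_pixels, n_bands) spectra and class labels
        # predictions = clf.predict(test_spectra)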

  13. A practical salient region feature based 3D multi-modality registration method for medical images

    NASA Astrophysics Data System (ADS)

    Hahn, Dieter A.; Wolz, Gabriele; Sun, Yiyong; Hornegger, Joachim; Sauer, Frank; Kuwert, Torsten; Xu, Chenyang

    2006-03-01

    We present a novel representation of 3D salient region features and its integration into a hybrid rigid-body registration framework. We adopt the scale, translation, and rotation invariance properties of these intrinsic 3D features to estimate a transform between underlying mono- or multi-modal 3D medical images. Our method combines advantageous aspects of both feature- and intensity-based approaches and consists of three steps: automatic extraction of a set of 3D salient region features on each image, robust estimation of correspondences, and their sub-pixel-accurate refinement with outlier elimination. We propose a region-growing-based approach for the extraction of 3D salient region features, a solution to the problem of feature clustering, and a reduction of the correspondence search space complexity. Results of the developed algorithm are presented for both mono- and multi-modal intra-patient 3D image pairs (CT, PET, and SPECT) acquired for change detection, tumor localization, and time-based intra-person studies. The accuracy of the method is clinically evaluated by a medical expert with an approach that measures the distance between a set of selected corresponding points consisting of both anatomical and functional structures or lesion sites. This demonstrates the robustness of the proposed method to image overlap, missing information, and artefacts. We conclude by discussing potential medical applications and possibilities for integration into a non-rigid registration framework.

  14. 3D-shape recognition and size measurement of irregular rough particles using multi-views interferometric out-of-focus imaging.

    PubMed

    Ouldarbi, L; Talbi, M; Coëtmellec, S; Lebrun, D; Gréhan, G; Perret, G; Brunel, M

    2016-11-10

    We realize simplified-tomography experiments on irregular rough particles using interferometric out-of-focus imaging. Using two angles of view, we determine the global 3D-shape, the dimensions, and the 3D-orientation of irregular rough particles whose morphologies belong to families such as sticks, plates, and crosses.

  15. Islands in the Midst of the World

    NASA Technical Reports Server (NTRS)

    2002-01-01

    The Greek islands of the Aegean Sea, scattered across 800 kilometers from north to south between Greece and western Turkey, are uniquely situated at the intersection of Europe, Asia and Africa. This image from the Multi-angle Imaging SpectroRadiometer includes many of the islands of the East Aegean, Sporades, Cyclades, Dodecanese and Crete, as well as part of mainland Turkey. Many sites important to ancient and modern history can be found here. The largest modern city on the Aegean coast is Izmir, which appears as a bright coastal area near the greenish waters of Izmir Bay, about one quarter of the image length from the top, southeast of the large three-pronged island of Lesvos. The coastal areas around this cosmopolitan Turkish city were a center of Ionian culture from the 11th century BC, and at the top of the image (north of Lesvos) once stood the ancient city of Troy.

    The image was acquired before the onset of the winter rains, on September 30, 2001, but dense vegetation is never very abundant in the arid Mediterranean climate. The sharpness and clarity of the view also indicate dry, clear air. Some vegetative differences can be detected between the western and southern islands, such as Crete (the large island along the bottom of the image), and those closer to the Turkish coast, which appear comparatively green. Volcanic activity is evident in the form of the islands of Santorini, a small group of islands shaped like a broken ring situated to the right of and below image center. Santorini's Thera volcano erupted around 1640 BC, and the rim of the caldera collapsed, forming the shape of the islands as they exist today.

    The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously from pole to pole, and views almost the entire globe every 9 days. This natural-color image was acquired by MISR's nadir (vertical-viewing) camera, and is a portion of the data acquired during Terra orbit 9495. The image covers an area of 369 kilometers x 567 kilometers, and utilizes data from blocks 58 to 64 within World Reference System-2 path 181.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  16. Multi-frequency subspace migration for imaging of perfectly conducting, arc-like cracks in full- and limited-view inverse scattering problems

    NASA Astrophysics Data System (ADS)

    Park, Won-Kwang

    2015-02-01

    Multi-frequency subspace migration imaging techniques are usually adopted for the non-iterative imaging of unknown electromagnetic targets, such as cracks in concrete walls or bridges and anti-personnel mines in the ground, in inverse scattering problems. It is confirmed that this technique is very fast, effective, and robust, and that it can be applied not only to full-view but also to limited-view inverse problems if a suitable number of incident fields and corresponding scattered fields are applied and collected. However, in many works the application of such techniques is heuristic. Motivated by this heuristic application, this study analyzes the structure of the imaging functional employed in the subspace migration imaging technique in two-dimensional full- and limited-view inverse scattering problems when the unknown targets are arbitrarily shaped, arc-like perfectly conducting cracks located in two-dimensional homogeneous space. In contrast to the statistical approach based on statistical hypothesis testing, our approach is based on the fact that the subspace migration imaging functional can be expressed as a linear combination of Bessel functions of integer order of the first kind. This follows from the structure of the Multi-Static Response (MSR) matrix collected in the far field at nonzero frequency in either Transverse Magnetic (TM) mode (Dirichlet boundary condition) or Transverse Electric (TE) mode (Neumann boundary condition). The investigation of the expressions of the imaging functionals yields certain properties of subspace migration and explains why multiple frequencies enhance the imaging resolution. In particular, we carefully analyze the subspace migration and confirm some properties of imaging when a small number of incident fields are applied. Consequently, we introduce a weighted multi-frequency imaging functional and confirm that it is an improved version of subspace migration in TM mode. Various results of numerical simulations performed on far-field data affected by large amounts of random noise are similar to the analytical results derived in this study, and they provide a direction for future studies.
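
    For orientation, a single-frequency subspace-migration image of the kind analyzed above is often built from the SVD of the MSR matrix and plane-wave test vectors; the NumPy sketch below follows that generic recipe with hypothetical geometry (directions, search grid, wavenumber, number of retained singular vectors) and is not the exact functional of the paper.

        import numpy as np

        def subspace_migration_map(K, directions, grid, k, n_signal):
            """K: (N, N) multi-static response matrix; directions: (N, 2) unit directions;
            grid: (G, 2) search points; k: wavenumber; n_signal: retained singular vectors."""
            U, s, Vh = np.linalg.svd(K)
            image = np.zeros(len(grid))
            for g, z in enumerate(grid):
                # Plane-wave test (steering) vector evaluated at the search point z
                W = np.exp(1j * k * directions @ z) / np.sqrt(len(directions))
                val = sum((W.conj() @ U[:, m]) * (W.conj() @ Vh[m, :].conj())
                          for m in range(n_signal))
                image[g] = abs(val)
            return image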

  17. MO-G-BRE-03: Automated Continuous Monitoring of Patient Setup with Second-Check Independent Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, X; Fox, T; Schreibmann, E

    2014-06-15

    Purpose: To create a non-supervised quality assurance program to monitor image-based patient setup. The system acts as a secondary check by independently computing shifts and rotations, and it interfaces with Varian's database to verify the therapist's work and warn against sub-optimal setups. Methods: Temporary digitally reconstructed radiographs (DRRs) and OBI radiographic image files created by Varian's treatment console during patient setup are intercepted and used as input to an independent registration module customized for accuracy that determines the optimal rotations and shifts. To deal with the poor quality of OBI images, a histogram equalization of the live images to their DRR counterparts is performed as a pre-processing step. A search for the most sensitive metric was performed by plotting search spaces subject to various translations, and convergence analysis was applied to ensure the optimizer finds the global minimum. The final system configuration uses the NCC metric with 150 histogram bins and a one-plus-one optimizer running for 2000 iterations with customized scales for translations and rotations, in a multi-stage optimization setup that first corrects translations and subsequently rotations. Results: The system was installed clinically to monitor and provide almost real-time feedback on patient positioning. Over a 2-month period, uncorrected pitch values had a mean of 0.016° with a standard deviation of 1.692°, and couch rotations were −0.090° ± 1.547°. The couch shifts were −0.157 ± 0.466 cm vertically, 0.045 ± 0.286 cm laterally, and 0.084 ± 0.501 cm longitudinally. Uncorrected pitch angles were the most common source of discrepancies. Large variations in the pitch angles were correlated with patient motion inside the mask. Conclusion: A system for automated quality assurance of the therapist's registration was designed and tested in clinical practice. The approach complements the clinical software's automated registration in terms of algorithm configuration and performance and constitutes a practical approach to implementing safe and cost-effective radiotherapy.
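
    A rough, generic analog of this second-check registration can be written with SimpleITK; the correlation metric stands in for the NCC metric described above (SimpleITK's correlation metric does not take histogram bins), the file names are placeholders, and the single-stage rigid setup below ignores the multi-stage translation-then-rotation scheme.

        import SimpleITK as sitk

        def second_check_registration(drr_path, obi_path):
            fixed = sitk.ReadImage(drr_path, sitk.sitkFloat32)      # temporary DRR
            moving = sitk.ReadImage(obi_path, sitk.sitkFloat32)     # live OBI radiograph
            # Pre-processing: match the OBI intensity histogram to its DRR counterpart
            moving = sitk.HistogramMatching(moving, fixed)

            reg = sitk.ImageRegistrationMethod()
            reg.SetMetricAsCorrelation()                            # NCC-style similarity
            reg.SetOptimizerAsOnePlusOneEvolutionary(numberOfIterations=2000)
            reg.SetOptimizerScalesFromPhysicalShift()               # scale shifts vs. rotation
            reg.SetInitialTransform(
                sitk.CenteredTransformInitializer(fixed, moving, sitk.Euler2DTransform()))
            reg.SetInterpolator(sitk.sitkLinear)
            return reg.Execute(fixed, moving)                       # optimized rigid transform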

  18. Improvements in Modeling the Collimated Jets of Comet 19P/Borrelly from the Stereo Images of the Deep Space 1 Flyby

    NASA Astrophysics Data System (ADS)

    Melville, Kenneth J.; Farnham, T.; Hoban, S.

    2010-10-01

    On September 22, 2001, the spacecraft Deep Space 1 (DS1), which was primarily designed for testing advanced technologies in space, performed an extended-mission flyby of the comet 19P/Borrelly. This encounter provided scientists with the best images of a comet taken up to that time. The images from the DS1 Miniature Integrated Camera and Spectrometer (MICAS) instrument show features of comet Borrelly's surface, collimated dust jets escaping the nucleus, and the coma of gas and dust that surrounds the nucleus. Properties of the jet, such as its rate and angle of expansion, have been measured accurately owing to the jet's geometric structure and position on the rotation axis of the comet. These measurements have been taken at several points along the spacecraft's approach and flyby, and from additional ground-based images from McDonald Observatory. A model of the jet with similar geometry has been constructed in order to reproduce the observational data found in the flyby images, and other proposed models are tested as well. Once these models have been adjusted to replicate the data, they can be used to investigate the collimation mechanism below the comet's surface that produces the jet. Comet 19P/Borrelly is an ideal test case for this model because of the simple structure of the jet and the wide variety of viewing angles and observation times. Using information from this model, scientists may be able to draw new inferences about the composition and physical structure of other comets. This research was supported by the NASA Planetary Data System: Small Bodies Node, and the College Student Investigator Program at the UMBC Goddard Earth Sciences & Technology Center.

  19. Optimization of Brain T2 Mapping Using Standard CPMG Sequence In A Clinical Scanner

    NASA Astrophysics Data System (ADS)

    Hnilicová, P.; Bittšanský, M.; Dobrota, D.

    2014-04-01

    In magnetic resonance imaging, transverse relaxation time (T2) mapping is a useful quantitative tool enabling enhanced diagnostics of many brain pathologies. The aim of our study was to test the influence of different sequence parameters on the calculated T2 values, including multi-slice measurements, slice position, interslice gap, echo spacing, and pulse duration. Measurements were performed using a standard multi-slice multi-echo CPMG imaging sequence on a 1.5 Tesla routine whole-body MR scanner. We used multiple phantoms with different agarose concentrations (0 % to 4 %) and verified the results on a healthy volunteer. Neither the pulse duration, the size of the interslice gap, nor the slice shift had any impact on T2. The measurement accuracy increased with shorter echo spacing. A standard multi-slice multi-echo CPMG protocol with the shortest echo spacing, the smallest available interslice gap (100 % of slice thickness), and shorter pulse duration was found to be optimal and reliable for calculating T2 maps in the human brain.
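
    For reference, T2 maps of the kind optimized above are typically computed per voxel from the multi-echo CPMG signal via a fit of S(TE) = S0 * exp(-TE / T2); the NumPy sketch below uses a simple log-linear least-squares fit, and the array names and echo times are placeholders.

        import numpy as np

        def fit_t2_map(echoes, echo_times):
            """echoes: (n_echoes, ny, nx) CPMG magnitude images; echo_times: (n_echoes,) in ms.
            Returns a T2 map (ms) from a log-linear fit of S(TE) = S0 * exp(-TE / T2)."""
            n, ny, nx = echoes.shape
            y = np.log(np.maximum(echoes.reshape(n, -1), 1e-6))          # guard against log(0)
            A = np.stack([np.ones(n), -np.asarray(echo_times, float)], axis=1)
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)                 # coef[1] = 1 / T2 per voxel
            with np.errstate(divide="ignore"):
                t2 = 1.0 / coef[1]
            return t2.reshape(ny, nx)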

  20. High speed CMOS acquisition system based on FPGA embedded image processing for electro-optical measurements

    NASA Astrophysics Data System (ADS)

    Rosu-Hamzescu, Mihnea; Polonschii, Cristina; Oprea, Sergiu; Popescu, Dragos; David, Sorin; Bratu, Dumitru; Gheorghiu, Eugen

    2018-06-01

    Electro-optical measurements, i.e., optical waveguide and plasmonic-based electrochemical impedance spectroscopy (P-EIS), are based on the sensitive dependence of the refractive index of electro-optical sensors on surface charge density, modulated by an AC electrical field applied to the sensor surface. Recently, P-EIS has emerged as a new analytical tool that can resolve local impedance with high, optical spatial resolution, without using microelectrodes. This study describes a high-speed image acquisition and processing system for electro-optical measurements, based on a high-speed complementary metal-oxide semiconductor (CMOS) sensor and a field-programmable gate array (FPGA) board. The FPGA is used to configure the CMOS parameters, as well as to receive and locally process the acquired images by performing a Fourier analysis for each pixel, deriving the real and imaginary parts of the Fourier coefficients at the AC field frequencies. An AC field generator, for single- or multi-sine signals, is synchronized with the high-speed acquisition system for phase measurements. The system was successfully used for real-time angle-resolved electro-plasmonic measurements from 30 Hz up to 10 kHz, providing results consistent with those obtained by a conventional electrical impedance approach. The system was able to detect amplitude variations with a relative variation of ±1%, even for rather low sampling rates per period (i.e., 8 samples per period). The PC (personal computer) acquisition and control software allows synchronized acquisition from multiple FPGA boards, making it also suitable for simultaneous angle-resolved P-EIS imaging.
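
    The per-pixel single-frequency Fourier analysis performed on the FPGA amounts to a one-bin DFT (lock-in) of each pixel's time series; the NumPy sketch below illustrates the computation offline, with the frame-stack shape, sampling rate, and drive frequency as assumed values.

        import numpy as np

        def pixelwise_fourier(frames, fs, f_drive):
            """frames: (n_frames, ny, nx) image stack sampled at fs Hz.
            Returns complex per-pixel Fourier coefficients at the AC drive frequency f_drive."""
            n = frames.shape[0]
            t = np.arange(n) / fs
            ref = np.exp(-2j * np.pi * f_drive * t)            # complex reference at f_drive
            # Project every pixel's time series onto the reference (one-bin DFT / lock-in)
            coeff = np.tensordot(ref, frames, axes=(0, 0)) * (2.0 / n)
            return coeff

        # coeff = pixelwise_fourier(stack, fs=80.0, f_drive=10.0)   # e.g. 8 samples per period
        # amplitude, phase = np.abs(coeff), np.angle(coeff)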

  1. On the possibility of ground-based direct imaging detection of extra-solar planets: the case of TWA-7

    NASA Astrophysics Data System (ADS)

    Neuhäuser, R.; Brandner, W.; Eckart, A.; Guenther, E.; Alves, J.; Ott, T.; Huélamo, N.; Fernández, M.

    2000-02-01

    We show that ground-based direct imaging detection of extra-solar planets is possible with current technology. As an example, we present evidence for a possible planetary companion to the young T Tauri star 1RXSJ104230.3-334014 (=TWA-7), discovered by ROSAT as a member of the nearby TW Hya association. In an HST NICMOS F160W image, an object is detected that is more than 9 mag fainter than TWA-7, located 2.445 +/- 0.035'' south-east at a position angle of 142.24 +/- 1.34 deg. One year later, using the ESO-NTT with the SHARP speckle camera, we obtained H- and K-band detections of this faint object at a separation of 2.536 +/- 0.077'' and a position angle of 139.3 +/- 2.1 deg. Given the known proper motion of TWA-7, the two objects may form a common proper motion pair. If the faint object orbits TWA-7, then its apparent magnitudes of H=16.42 +/- 0.11 and K=16.34 +/- 0.15 mag yield absolute magnitudes consistent with a ~ 10^6.5 yr old, ~ 3 M_jup mass object according to the non-gray theory by Burrows et al. (1997). At ~ 55 pc, the angular separation of ~ 2.5'' corresponds to ~ 138 AU, clearly within typical disk sizes. However, the position angles and separations are slightly more consistent with a background object than with a companion. Based on observations obtained at the European Southern Observatory, La Silla (ESO Proposals 62.I-0418 and 63.N-0178), and on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archive at the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.

  2. Limited angle C-arm tomosynthesis reconstruction algorithms

    NASA Astrophysics Data System (ADS)

    Malalla, Nuhad A. Y.; Xu, Shiyu; Chen, Ying

    2015-03-01

    In this paper, C-arm tomosynthesis with a digital detector was investigated as a novel three-dimensional (3D) imaging technique. Digital tomosynthesis is an imaging technique that provides 3D information about an object by reconstructing slices passing through it, based on a series of angular projection views with respect to the object. C-arm tomosynthesis provides two-dimensional (2D) X-ray projection images with rotation (±20° angular range) of both the X-ray source and the detector. In this paper, four representative reconstruction algorithms were investigated: point-by-point back projection (BP), filtered back projection (FBP), simultaneous algebraic reconstruction technique (SART), and maximum likelihood expectation maximization (MLEM). A dataset of 25 projection views of a 3D spherical object located at the center of the C-arm imaging space was simulated from 25 angular locations over a total view angle of 40 degrees. For the reconstructed images, a 3D mesh plot and a 2D line profile of normalized pixel intensities on the in-focus reconstruction plane crossing the center of the object were studied for each reconstruction algorithm. The results demonstrate the capability to generate 3D information from limited-angle C-arm tomosynthesis. Since C-arm tomosynthesis is relatively compact, portable, and can avoid moving patients, it has been investigated for different clinical applications ranging from tumor surgery to interventional radiology, and it is important to evaluate it for such applications.
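
    A highly simplified stand-in for the point-by-point back projection (BP) step is sketched below using NumPy only; it assumes parallel-beam geometry rather than the C-arm cone-beam setup, and is meant only to show how projections from a limited angular range are smeared back across the reconstruction plane. All names and values are placeholders.

        import numpy as np

        def backproject(sinogram, angles_deg, size):
            """Unfiltered back projection over a limited angular range (parallel-beam geometry).
            sinogram: (n_views, n_det) projections; angles_deg: view angles; output: (size, size)."""
            n_det = sinogram.shape[1]
            xs = np.arange(size) - size / 2.0
            X, Y = np.meshgrid(xs, xs)
            recon = np.zeros((size, size))
            for proj, ang in zip(sinogram, np.deg2rad(angles_deg)):
                # Detector coordinate of every image pixel for this view
                s = X * np.cos(ang) + Y * np.sin(ang) + n_det / 2.0
                recon += np.interp(s.ravel(), np.arange(n_det), proj,
                                   left=0.0, right=0.0).reshape(size, size)
            return recon / len(angles_deg)

        # recon = backproject(sino, np.linspace(-20, 20, 25), size=256)   # 25 views over 40 degrees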

  3. Carotid arterial wall MRI at 3T using 3D variable-flip-angle turbo spin-echo (TSE) with flow-sensitive dephasing (FSD).

    PubMed

    Fan, Zhaoyang; Zhang, Zhuoli; Chung, Yiu-Cho; Weale, Peter; Zuehlsdorff, Sven; Carr, James; Li, Debiao

    2010-03-01

    To evaluate the effectiveness of flow-sensitive dephasing (FSD) magnetization preparation in improving blood signal suppression of the three-dimensional (3D) turbo spin-echo (TSE) sequence (SPACE) for isotropic high-spatial-resolution carotid arterial wall imaging at 3T. The FSD-prepared SPACE sequence (FSD-SPACE) was implemented by adding two identical FSD gradient pulses immediately before and after the first 180° refocusing pulse of the SPACE sequence in all three orthogonal directions. Nine healthy volunteers were imaged at 3T with SPACE, FSD-SPACE, and multislice T2-weighted 2D TSE coupled with a saturation band (SB-TSE). Apparent carotid wall-lumen contrast-to-noise ratio (aCNR(w-l)) and apparent lumen area (aLA) at the locations with residual-blood (rb) signal shown on SPACE images were compared between SPACE and FSD-SPACE. Carotid aCNR(w-l), lumen area (LA), and wall area (WA) measured from FSD-SPACE were compared to those measured from SB-TSE. Plaque-mimicking flow artifacts identified in seven carotids on SPACE images were eliminated on FSD-SPACE images. The FSD preparation resulted in slightly reduced aCNR(w-l) (P = 0.025), but significantly improved aCNR between the wall and rb regions (P < 0.001) and larger aLA (P < 0.001). Compared to SB-TSE, FSD-SPACE offered comparable aCNR(w-l) with much higher spatial resolution, shorter imaging time, and larger artery coverage. The LA and WA measurements from the two techniques were in good agreement based on intraclass correlation coefficients (0.988 and 0.949, respectively; P < 0.001) and Bland-Altman analyses. FSD-SPACE is a time-efficient 3D imaging technique for the carotid arterial wall with superior spatial resolution and blood signal suppression.

  4. Pilot Study of the Effects of Ambient Light Level Variation on Spectral Domain Anterior Segment OCT-Derived Angle Metrics in Caucasians versus Asians.

    PubMed

    Dastiridou, Anna; Marion, Kenneth; Niemeyer, Moritz; Francis, Brian; Sadda, Srinivas; Chopra, Vikas

    2018-04-11

    To investigate the effects of ambient light level variation on spectral domain anterior segment optical coherence tomography (SD-OCT)-derived anterior chamber angle metrics in Caucasians versus Asians. Caucasian (n = 24) and Asian participants of Chinese ancestry (n = 24) with open angles on gonioscopy had one eye imaged twice at five strictly controlled ambient light levels. Ethnicity was self-reported. Light levels were strictly controlled using a light meter at 1.0, 0.75, 0.5, 0.25, and 0 foot-candle illumination levels. SD-OCT 5-line raster scans at the inferior 270° irido-corneal angle were measured by two trained, masked graders from the Doheny Image Reading Center using customized Image-J software. Schwalbe's line-angle opening distance (SL-AOD) and SL-trabecular iris space area (SL-TISA) at different light meter readings (LMRs) were compared between the two groups. Baseline light SL-AOD and SL-TISA measured 0.464 ± 0.115 mm / 0.351 ± 0.110 mm^2 and 0.344 ± 0.118 mm / 0.257 ± 0.092 mm^2, respectively, in the Caucasian and the Asian group. SL-AOD and SL-TISA at each LMR were significantly larger in the Caucasian group compared to the Asian group (p < 0.05). Despite this difference in angle size between the groups, there were no statistically significant differences in the degree of change in angle parameters from light to dark (% changes in SL-AOD or SL-TISA between the two groups were statistically similar, with all p-values > 0.3). SL-based angle dimensions using SD-OCT are sensitive to changes in ambient illumination in participants of Caucasian and Asian ancestry. Although Caucasian eyes had larger baseline angle openings under bright light conditions, the light-to-dark change in angle dimensions was similar in the two groups.

  5. Hurricane Debby and the Appalachians Highlight New MISR Data Products

    NASA Technical Reports Server (NTRS)

    2000-01-01

    The MISR team has developed new methods for retrieving information about clouds, airborne particles, and surface properties that capitalize on the instrument's unique, multi-angle imaging approach. This illustration, based upon results contained in sample products that have just been publicly released at the Atmospheric Sciences Data Center (ASDC), highlights some of these new capabilities. The ASDC, located at NASA's Langley Research Center, is the primary processing and archive center for MISR data (http://eosweb.larc.nasa.gov/).

    On August 21, 2000, during Terra orbit 3600, MISR imaged Hurricane Debby in the Atlantic Ocean. The first panel on the left is the MISR downward-looking (nadir) view of the storm's eastern edge. The next two panels show the results of a new approach that uses MISR's stereoscopic observations to retrieve cloud heights and winds. In the middle panel of this set, gradations from low to high cloud are depicted in shades ranging from blue to red. Since it takes seven minutes for all nine MISR cameras to view any location on Earth, and the clouds moved during this time, the data also contain information about wind speed and direction. Derived wind vectors, shown in the third panel, reveal Hurricane Debby's cyclonic motion. The highest wind speed measured is nearly 100 kilometers/hour. MISR obtains this type of information on a global basis, which will help scientists study the relationship between climate change and the three-dimensional characteristics of clouds.

    MISR imaged the eastern United States on March 6, 2000, during Terra orbit 1155. The first panel in the right-hand set is the downward-looking (nadir) view, covering the region from Lake Ontario to northern Georgia and spanning the Appalachian Mountains. The middle panel is the image taken by the forward-viewing 70.5-degree camera. At this increased slant angle, the line of sight through the atmosphere is three times longer, and a thin haze over the Appalachians is significantly more apparent. MISR uses this enhanced sensitivity, along with the variation of brightness with angle, to monitor particulate pollution and to measure haze properties. The third panel shows the airborne particle (aerosol) amount, derived using new methods that take advantage of MISR's moderately high spatial resolution at very oblique angles. The aerosol results are obtained at coarser resolution than the underlying images; gradations from blue to red indicate increasing aerosol abundance. These data indicate how airborne particles are interacting with sunlight, a measure of their impact on Earth's climate.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  6. Multi-functional Infrared Sensor

    DTIC Science & Technology

    2014-05-11

    infrared imaging; perforated gold films with Si3N4 overlayers, studied the fundamental understanding of surface plasmon polariton modes and their...we studied the underlying mechanism of surface plasmon polariton modes and their angle dependence by means of experiment, theory and simulation (In

  7. Chesapeake Bay

    Atmospheric Science Data Center

    2016-06-13

    ... including NASA's high-altitude ER-2 research aircraft and the University of Washington's Convair-580. At the same time, the Multi-angle ... of Cape Henry at the southern end of Chesapeake Bay, though it is not visible at the MISR resolution. The lower right image is a ...

  8. Invited Article: Mask-modulated lensless imaging with multi-angle illuminations

    NASA Astrophysics Data System (ADS)

    Zhang, Zibang; Zhou, You; Jiang, Shaowei; Guo, Kaikai; Hoshino, Kazunori; Zhong, Jingang; Suo, Jinli; Dai, Qionghai; Zheng, Guoan

    2018-06-01

    The use of multiple diverse measurements can make lensless phase retrieval more robust. Conventional diversity functions include aperture diversity, wavelength diversity, translational diversity, and defocus diversity. Here we discuss a lensless imaging scheme that employs multiple spherical-wave illuminations from a light-emitting diode array as diversity functions. In this scheme, we place a binary mask between the sample and the detector for imposing support constraints for the phase retrieval process. This support constraint enforces the light field to be zero at certain locations and is similar to the aperture constraint in Fourier ptychographic microscopy. We use a self-calibration algorithm to correct the misalignment of the binary mask. The efficacy of the proposed scheme is first demonstrated by simulations where we evaluate the reconstruction quality using mean square error and structural similarity index. The scheme is then experimentally tested by recovering images of a resolution target and biological samples. The proposed scheme may provide new insights for developing compact and large field-of-view lensless imaging platforms. The use of the binary mask can also be combined with other diversity functions for better constraining the phase retrieval solution space. We provide the open-source implementation code for the broad research community.
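
    A toy alternating-projection loop can illustrate the two constraints discussed above (the measured detector amplitude and the zero-field support imposed by the binary mask), under strong simplifying assumptions: the geometry is collapsed so that object, illumination, and mask sit in one plane, angular-spectrum propagation is used, and all names and parameters are placeholders rather than the authors' algorithm.

        import numpy as np

        def angular_spectrum(field, dist, wavelength, dx):
            """Propagate a complex field over distance dist with the angular-spectrum method."""
            n = field.shape[0]
            fx = np.fft.fftfreq(n, d=dx)
            FX, FY = np.meshgrid(fx, fx)
            arg = np.maximum(1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2, 0.0)
            H = np.exp(2j * np.pi * dist / wavelength * np.sqrt(arg))
            return np.fft.ifft2(np.fft.fft2(field) * H)

        def retrieve(intensities, illums, mask, z, wavelength, dx, iters=100):
            """intensities[i]: detector intensity under spherical-wave illumination illums[i]
            (complex fields); mask: binary support; z, wavelength, dx: assumed geometry."""
            obj = np.ones_like(mask, dtype=complex)                  # object estimate
            for _ in range(iters):
                for I, illum in zip(intensities, illums):
                    field = obj * illum * mask                       # support: zero outside the mask
                    det = angular_spectrum(field, z, wavelength, dx)
                    det = np.sqrt(I) * np.exp(1j * np.angle(det))    # enforce measured amplitude
                    back = angular_spectrum(det, -z, wavelength, dx)
                    obj = np.where(mask > 0, back / (illum + 1e-12), obj)
            return obj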

  9. Manifold alignment with Schroedinger eigenmaps

    NASA Astrophysics Data System (ADS)

    Johnson, Juan E.; Bachmann, Charles M.; Cahill, Nathan D.

    2016-05-01

    The sun-target-sensor angle can change during aerial remote sensing. In an attempt to compensate for BRDF effects in multi-angular hyperspectral images, the Semi-Supervised Manifold Alignment (SSMA) algorithm pulls data from similar classes together and pushes data from different classes apart. SSMA uses Laplacian Eigenmaps (LE) to preserve the original geometric structure of each local data set independently. In this paper, we replace LE with Spatial-Spectral Schroedinger Eigenmaps (SSSE), which was designed as a semi-supervised enhancement to LE, to extend the SSMA methodology and improve the classification of multi-angular hyperspectral images captured over Hog Island in the Virginia Coast Reserve.
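
    For context, the Laplacian Eigenmaps step that SSSE builds on can be sketched as follows; the k-nearest-neighbour graph and heat-kernel weights are the usual choices and are assumed here, and the data array is a placeholder.

        import numpy as np
        from scipy.spatial.distance import cdist
        from scipy.linalg import eigh

        def laplacian_eigenmaps(X, n_components=2, k=10, sigma=1.0):
            """X: (n_samples, n_features). Returns an (n_samples, n_components) embedding."""
            D = cdist(X, X)
            W = np.exp(-D ** 2 / (2 * sigma ** 2))            # heat-kernel affinities
            idx = np.argsort(D, axis=1)[:, k + 1:]            # drop all but the k nearest neighbours
            for i, far in enumerate(idx):
                W[i, far] = 0.0
            W = np.maximum(W, W.T)                            # symmetrize the graph
            Dg = np.diag(W.sum(axis=1))
            L = Dg - W                                        # graph Laplacian
            # Generalized eigenproblem L f = lambda Dg f; skip the trivial constant eigenvector
            vals, vecs = eigh(L, Dg)
            return vecs[:, 1:n_components + 1]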

  10. Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.

    PubMed

    Liu, Min; Wang, Xueping; Zhang, Hongzhong

    2018-03-01

    In the biomedical field, digital multi-focal images are very important for the documentation and communication of specimen data, because the morphological information for a transparent specimen can be captured in the form of a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We present a deep convolutional neural network (CNN) image-fusion-based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN-based image fusion technique is used to combine the relevant information of the multi-focal images within a given image stack into a single image, which is more informative and complete than any single image in the given stack. In addition, the multi-focal images within a stack are fused along three orthogonal directions, and multiple features extracted from the fused images along different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class, and different classes of objects - we embed the deep CNN-based image fusion method within a multilinear framework to propose an image-fusion-based multilinear classifier. The experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image-fusion-based multilinear classifier can reach a higher classification rate (95.7%) than the previous multilinear approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed approach shows great potential in building an automated nematode taxonomy system for nematologists and is effective for classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.
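
    The CCA step that combines features extracted along different fusion directions can be sketched with scikit-learn; the feature arrays, their dimensions, and the two-direction restriction below are placeholders (the paper fuses along three orthogonal directions).

        import numpy as np
        from sklearn.cross_decomposition import CCA

        def cca_fuse(feats_a, feats_b, n_components=20):
            """feats_a, feats_b: (n_stacks, d) CNN features from images fused along two directions.
            Returns a fused feature vector per stack by concatenating the canonical variates."""
            cca = CCA(n_components=n_components)
            u, v = cca.fit_transform(feats_a, feats_b)    # projections into the correlated subspace
            return np.hstack([u, v])

        # fused = cca_fuse(features_xy, features_xz); train any classifier on `fused`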

  11. Sparse interferometric millimeter-wave array for centimeter-level 100-m standoff imaging

    NASA Astrophysics Data System (ADS)

    Suen, Jonathan Y.; Lubin, Philip M.; Solomon, Steven L.; Ginn, Robert P.

    2013-05-01

    We present work on the development of a long-range standoff concealed weapons detection system capable of imaging under very heavy clothing at distances exceeding 100 m with centimeter-level resolution. The system is based on a combination of phased-array technologies used in radio astronomy and SAR radar, employing a coherent, multi-frequency reconstruction algorithm that can run at up to 1000 Hz frame rates and high SNR with a multi-tone transceiver. We show the flexible design space of our system as well as algorithm development, predicted system performance and impairments, and simulated reconstructed images. The system can be used for a variety of purposes including portal applications, crowd scanning, and tactical situations. Additional uses include seeing through dust and fog.

  12. A Geostatistical Data Fusion Technique for Merging Remote Sensing and Ground-Based Observations of Aerosol Optical Thickness

    NASA Technical Reports Server (NTRS)

    Chatterjee, Abhishek; Michalak, Anna M.; Kahn, Ralph A.; Paradise, Susan R.; Braverman, Amy J.; Miller, Charles E.

    2010-01-01

    Particles in the atmosphere reflect incoming sunlight, tending to cool the Earth below. Some particles, such as soot, also absorb sunlight, which tends to warm the ambient atmosphere. Aerosol optical depth (AOD) is a measure of the amount of particulate matter in the atmosphere and is a key input to computer models that simulate and predict Earth's changing climate. The global AOD products from the Multi-angle Imaging SpectroRadiometer (MISR) and the MODerate resolution Imaging Spectroradiometer (MODIS), both of which fly on the NASA Earth Observing System's Terra satellite, provide complementary views of the particles in the atmosphere. Whereas MODIS offers global coverage about four times as frequently as MISR, the multi-angle data make it possible to separate the surface and atmospheric contributions to the observed top-of-atmosphere radiances, and also to more effectively discriminate particle type. Surface-based AERONET sun photometers retrieve AOD with smaller uncertainties than the satellite instruments, but only at a few fixed locations. There are therefore clear reasons to combine these data sets in a way that takes advantage of their respective strengths. This paper represents an effort at combining the MISR, MODIS, and AERONET AOD products over the continental US, using a common spatial statistical technique called kriging. The technique uses the correlation between the satellite data and the "ground-truth" sun photometer observations to assign uncertainty to the satellite data on a region-by-region basis: the larger the fraction of the sun photometer variance that is duplicated by the satellite data, the higher the confidence assigned to the satellite data in that region. In the Western and Central US, MISR AOD correlations with AERONET are significantly higher than those with MODIS, likely due to bright surfaces in these regions, which pose greater challenges for the single-view MODIS retrievals. In the East, MODIS correlations are higher, due to more frequent sampling of the varying AOD. These results demonstrate how the MISR and MODIS aerosol products are complementary. The underlying technique also provides one method for combining these products in a way that takes advantage of the strengths of each, in the places and times when they are maximal, and in addition yields an estimate of the associated uncertainties in space and time.

  13. High Contrast Tests with a PIAA Coronagraph in Air

    NASA Astrophysics Data System (ADS)

    Totems, J.; Guyon, O.

    2007-06-01

    The Phase-Induced Amplitude Apodization Coronagraph, which allows high contrast imaging with a small inner working angle, is extremely attractive for future space and ground-based high contrast missions. An experiment is currently under development in our lab at the Subaru Telescope in Hilo, Hawaii, to qualify its capabilities. We will describe the optical configuration adopted and our efforts to stabilize the wavefront in order to improve its performance.

  14. Development and assessment of a higher-spatial-resolution (4.4 km) MISR aerosol optical depth product using AERONET-DRAGON data

    NASA Astrophysics Data System (ADS)

    Garay, Michael J.; Kalashnikova, Olga V.; Bull, Michael A.

    2017-04-01

    Since early 2000, the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite has been acquiring data that have been used to produce aerosol optical depth (AOD) and particle property retrievals at 17.6 km spatial resolution. Capitalizing on the capabilities provided by multi-angle viewing, the current operational (Version 22) MISR algorithm performs well, with about 75 % of MISR AOD retrievals globally falling within 0.05 or 20 % × AOD of paired validation data from the ground-based Aerosol Robotic Network (AERONET). This paper describes the development and assessment of a prototype version of a higher-spatial-resolution 4.4 km MISR aerosol optical depth product compared against multiple AERONET Distributed Regional Aerosol Gridded Observations Network (DRAGON) deployments around the globe. In comparisons with AERONET-DRAGON AODs, the 4.4 km resolution retrievals show improved correlation (r = 0.9595), smaller RMSE (0.0768), reduced bias (-0.0208), and a larger fraction within the expected error envelope (80.92 %) relative to the Version 22 MISR retrievals.
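
    The validation statistics quoted above (correlation, RMSE, bias, and fraction within the expected-error envelope of 0.05 or 20 % × AOD, here taken as the larger of the two bounds) are straightforward to compute from collocated pairs; the array names below are placeholders.

        import numpy as np

        def aod_validation_stats(misr_aod, aeronet_aod):
            """Paired 1-D arrays of collocated MISR and AERONET AOD values."""
            diff = misr_aod - aeronet_aod
            r = np.corrcoef(misr_aod, aeronet_aod)[0, 1]
            rmse = np.sqrt(np.mean(diff ** 2))
            bias = np.mean(diff)
            envelope = np.maximum(0.05, 0.20 * aeronet_aod)   # expected-error envelope
            frac_in = np.mean(np.abs(diff) <= envelope)
            return r, rmse, bias, frac_in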

  15. An intelligent space for mobile robot localization using a multi-camera system.

    PubMed

    Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel

    2014-08-15

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.

  16. An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System

    PubMed Central

    Rampinelli, Mariana.; Covre, Vitor Buback.; de Queiroz, Felippe Mendonça.; Vassallo, Raquel Frizera.; Bastos-Filho, Teodiano Freire.; Mazo, Manuel.

    2014-01-01

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization. PMID:25196009

  17. SU-E-J-47: Development of a High-Precision, Image-Guided Radiotherapy, Multi- Purpose Radiation Isocenter Quality-Assurance Calibration and Checking System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, C; Yan, G; Helmig, R

    2014-06-01

    Purpose: To develop a system that can define the radiation isocenter and correlate this information with couch coordinates, laser alignment, optical distance indicator (ODI) settings, optical tracking system (OTS) calibrations, and mechanical isocenter walkout. Methods: Our team developed a multi-adapter, multi-purpose quality assurance (QA) and calibration device that uses an electronic portal imaging device (EPID) and in-house image-processing software to define the radiation isocenter, thereby allowing linear accelerator (Linac) components to be verified and calibrated. Motivated by the concept that each Linac component related to patient setup for image-guided radiotherapy based on cone-beam CT should be calibrated with respect to the radiation isocenter, we designed multiple concentric adapters of various materials and shapes to meet the needs of MV and KV radiation isocenter definition, laser alignment, and OTS calibration. The phantom's ability to accurately define the radiation isocenter was validated on 4 Elekta Linacs using a commercial ball bearing (BB) phantom as a reference. Radiation isocenter walkout and the accuracy of couch coordinates, ODI, and OTS were then quantified with the device. Results: The device was able to define the radiation isocenter within 0.3 mm. Radiation isocenter walkout was within ±1 mm at 4 cardinal angles. By switching adapters, we identified that the accuracy of the couch position digital readout, ODI, OTS, and mechanical isocenter walkout was within sub-mm. Conclusion: This multi-adapter, multi-purpose isocenter phantom can be used to accurately define the radiation isocenter and represents a potential paradigm shift in Linac QA. Moreover, multiple concentric adapters allowed for sub-mm accuracy for the other relevant components. This intuitive and user-friendly design is currently patent pending.

  18. Space-based Coronagraphic Imaging Polarimetry of the TW Hydrae Disk: Shedding New Light on Self-shadowing Effects

    NASA Astrophysics Data System (ADS)

    Poteet, Charles A.; Chen, Christine H.; Hines, Dean C.; Perrin, Marshall D.; Debes, John H.; Pueyo, Laurent; Schneider, Glenn; Mazoyer, Johan; Kolokolova, Ludmilla

    2018-06-01

    We present Hubble Space Telescope Near-Infrared Camera and Multi-Object Spectrometer coronagraphic imaging polarimetry of the TW Hydrae protoplanetary disk. These observations simultaneously measure the total and polarized intensity, allowing direct measurement of the polarization fraction across the disk. In accord with the self-shadowing hypothesis recently proposed by Debes et al., we find that the total and polarized intensity of the disk exhibits strong azimuthal asymmetries at projected distances consistent with the previously reported bright and dark ring-shaped structures (∼45–99 au). The sinusoidal-like variations possess a maximum brightness at position angles near ∼268°–300° and are up to ∼28% stronger in total intensity. Furthermore, significant radial and azimuthal variations are also detected in the polarization fraction of the disk. In particular, we find that regions of lower polarization fraction are associated with annuli of increased surface brightness, suggesting that the relative proportion of multiple-to-single scattering is greater along the ring and gap structures. Moreover, we find strong (∼20%) azimuthal variation in the polarization fraction along the shadowed region of the disk. Further investigation reveals that the azimuthal variation is not the result of disk flaring effects, but is instead from a decrease in the relative contribution of multiple-to-single scattering within the shadowed region. Employing a two-layer scattering surface, we hypothesize that the diminished contribution in multiple scattering may result from shadowing by an inclined inner disk, which prevents direct stellar light from reaching the optically thick underlying surface component.
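
    As a small illustration of the directly measured quantity described above, the per-pixel linear polarization fraction follows immediately from simultaneous Stokes maps; the array names below are placeholders.

        import numpy as np

        def polarization_fraction(stokes_i, stokes_q, stokes_u):
            """Per-pixel linear polarization fraction from Stokes I, Q, U images."""
            polarized_intensity = np.sqrt(stokes_q ** 2 + stokes_u ** 2)
            return np.where(stokes_i > 0, polarized_intensity / stokes_i, np.nan)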

  19. Deep Ocean Tsunami Waves off the Sri Lankan Coast

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The initial tsunami waves resulting from the undersea earthquake that occurred at 00:58:53 UTC (Coordinated Universal Time) on December 26, 2004, off the island of Sumatra, Indonesia, took a little over 2 hours to reach the teardrop-shaped island of Sri Lanka. Additional waves continued to arrive for many hours afterward. At approximately 05:15 UTC, as NASA's Terra satellite passed overhead, the Multi-angle Imaging SpectroRadiometer (MISR) captured this image of deep ocean tsunami waves about 30-40 kilometers from Sri Lanka's southwestern coast. The waves are made visible due to the effects of changes in sea-surface slope on the reflected sunglint pattern, shown here in MISR's 46-degree-forward-pointing camera. Sunglint occurs when sunlight reflects off a water surface in much the same way light reflects off a mirror, and the position of the Sun, angle of observation, and orientation of the sea surface determine how bright each part of the ocean appears in the image. These large wave features were invisible to MISR's nadir (vertical-viewing) camera. The image covers an area of 208 kilometers by 207 kilometers. The greatest impact of the tsunami was generally in an east-west direction, so the havoc caused by the tsunami along the southwestern shores of Sri Lanka was not as severe as along the eastern coast. However, substantial damage did occur in this region, as evidenced by the brownish debris in the water, because tsunami waves can diffract around land masses. The ripple-like wave pattern evident in this MISR image roughly correlates with the undersea boundary of the continental shelf. The surface wave pattern is likely to have been caused by interaction of deep waves with the ocean floor, rather than by the more commonly observed surface waves, which are driven by winds. It is possible that this semi-concentric pattern represents wave reflection from the continental land mass; however, a combination of wave modeling and detailed bathymetric data is required to fully understand the dynamics. Examination of other MISR images of this area, taken under similar illumination conditions, has not uncovered any surface patterns resembling those seen here. This image is an example of how MISR's multi-angular capability provides unique information for understanding how tsunamis propagate. Another application of MISR data enabled scientists to measure the motion of breaking tsunami waves along the eastern shores of Andhra Pradesh, India. The Multi-angle Imaging SpectroRadiometer observes the daylit Earth continuously and every 9 days views the entire globe between 82 degrees North and 82 degrees South latitude. These data products were generated from a portion of the imagery acquired during Terra orbit 26720 and utilize data from blocks 85 to 86 within World Reference System-2 path 142. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. Image courtesy NASA/GSFC/LaRC/JPL, MISR Team. Text by Clare Averill (Raytheon ITSS/JPL); Michael Garay and David J. Diner (JPL, California Institute of Technology); and Vasily Titov (NOAA/Pacific Marine Environmental Laboratory and University of Washington/Joint Institute for the Study of the Atmosphere and Oceans).

  20. The Diagnostic Value of 3-Dimensional Sampling Perfection With Application Optimized Contrasts Using Different Flip Angle Evolutions (SPACE) MRI in Evaluating Lower Extremity Deep Venous Thrombus.

    PubMed

    Wu, Gang; Xie, Ruyi; Zhang, Xiaoli; Morelli, John; Yan, Xu; Zhu, Xiaolei; Li, Xiaoming

    2017-12-01

    The aim of this study was to evaluate the diagnostic performance of noncontrast magnetic resonance imaging utilizing sampling perfection with application optimized contrasts using different flip angle evolutions (SPACE) in detecting deep venous thrombus (DVT) of the lower extremity and evaluating clot burden. This prospective study was approved by the institutional review board. Ninety-four consecutive patients (42 men, 52 women; age range, 14-87 years; average age, 52.7 years) suspected of lower extremity DVT underwent ultrasound (US) and SPACE. The venous visualization score for SPACE was determined by 2 radiologists independently according to a 4-point scale (1-4, poor to excellent). The sensitivity and specificity of SPACE in detecting DVT were calculated based on segment, limb, and patient, with US serving as the reference standard. The clot burden for each segment was scored (0-3, patent to entire segment occlusion). The clot burden score obtained with SPACE was compared with US using a Wilcoxon test based on region, limb, and patient. Interobserver agreement in assessing DVT (absent, nonocclusive, or occlusive) with SPACE was determined by calculating Cohen kappa coefficients. The mean venous visualization score for SPACE was 3.82 ± 0.50 for reader 1 and 3.81 ± 0.50 for reader 2. For reader 1, sensitivity/specificity values of SPACE in detecting DVT were 96.53%/99.90% (segment), 95.24%/99.04% (limb), and 95.89%/95.24% (patient). For reader 2, corresponding values were 97.20%/99.90%, 96.39%/99.05%, and 97.22%/95.45%. The clot burden assessed with SPACE was not significantly different from US (P > 0.05 for region, limb, patient). Interobserver agreement of SPACE in assessing thrombosis was excellent (kappa = 0.894 ± 0.014). Non-contrast-enhanced 3-dimensional SPACE magnetic resonance imaging is highly accurate in detecting lower extremity DVT and reliable in the evaluation of clot burden. SPACE could serve as an important alternative for patients in whom US cannot be performed.
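
    The reported per-segment sensitivity/specificity and the Cohen kappa follow from standard 2x2-table and agreement formulas; the sketch below shows those calculations on hypothetical counts, not the study data.

```python
import numpy as np

def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2x2 table (reference standard = ultrasound)."""
    return tp / (tp + fn), tn / (tn + fp)

def cohen_kappa(confusion):
    """Cohen's kappa for inter-observer agreement; confusion[i, j] counts segments
    rated category i by reader 1 and category j by reader 2
    (e.g. absent / non-occlusive / occlusive thrombus)."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    p_observed = np.trace(confusion) / n
    p_expected = (confusion.sum(axis=1) @ confusion.sum(axis=0)) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical counts for illustration only.
print(sensitivity_specificity(tp=139, fp=1, fn=5, tn=1020))
print(cohen_kappa([[1000, 10, 2], [8, 90, 5], [1, 4, 45]]))
```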

  1. Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery

    PubMed Central

    Fuerst, Bernhard; Tateno, Keisuke; Johnson, Alex; Fotouhi, Javad; Osgood, Greg; Tombari, Federico; Navab, Nassir

    2017-01-01

    Orthopaedic surgeons are still following the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. the pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via an iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surfaces and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of surgical tools occluded by the hand. This proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion. PMID:29184659
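
    Calibration of the RGB-D camera to the CBCT frame rests on rigid point-set registration. Each iteration of an iterative closest point (ICP) scheme solves the closed-form alignment below (Kabsch/SVD, assuming correspondences have already been matched); this is a generic sketch, not the authors' implementation.

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rotation R and translation t such that R @ p + t ~= q for
    corresponding rows p, q of two (N, 3) point arrays.  This is the per-iteration
    alignment step inside ICP; nearest-neighbour matching between iterations is
    omitted here."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ tgt_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = target.mean(axis=0) - r @ source.mean(axis=0)
    return r, t
```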

  2. MAUVE/SWIPE: an imaging instrument concept with multi-angular, -spectral, and -polarized capability for remote sensing of aerosols, ocean color, clouds, and vegetation from space

    NASA Astrophysics Data System (ADS)

    Frouin, Robert; Deschamps, Pierre-Yves; Rothschild, Richard; Stephan, Edward; Leblanc, Philippe; Duttweiler, Fred; Ghaemi, Tony; Riedi, Jérôme

    2006-12-01

    The Monitoring Aerosols in the Ultraviolet Experiment (MAUVE) and the Short-Wave Infrared Polarimeter Experiment (SWIPE) instruments have been designed to collect, from a typical sun-synchronous polar orbit at 800 km altitude, global observations of the spectral, polarized, and directional radiance reflected by the earth-atmosphere system for a wide range of applications. Based on the heritage of the POLDER radiometer, the MAUVE/SWIPE instrument concept combines the merits of TOMS for observing in the ultra-violet, MISR for wide field-of-view range, MODIS for multi-spectral aspects in the visible and near infrared, and the POLDER instrument for polarization. The instruments are camera systems with 2-dimensional detector arrays, allowing a 120-degree field-of-view with adequate ground resolution (i.e., 0.4 or 0.8 km at nadir) from satellite altitude. Multi-angle viewing is achieved by the along-track migration at spacecraft velocity of the 2-dimensional field-of-view. Between the cameras' optical assembly and detector array are two filter wheels, one carrying spectral filters, the other polarizing filters, allowing measurements of the first three Stokes parameters, I, Q, and U, of the incident radiation in 16 spectral bands optimally placed in the interval 350-2200 nm. The spectral range is 350-1050 nm for the MAUVE instrument and 1050-2200 nm for the SWIPE instrument. The radiometric requirements are defined to fully exploit the multi-angular, multi-spectral, and multi-polarized capability of the instruments. These include a wide dynamic range, a signal-to-noise ratio above 500 in all channels at maximum radiance level, i.e., when viewing a surface target of albedo equal to 1, and a noise-equivalent-differential reflectance better than 0.0005 at low signal level for a sun at zenith. To achieve daily global coverage, a pair of MAUVE and SWIPE instruments would be carried by each of two mini-satellites placed on interlaced orbits. The equator crossing time of the two satellites would be adjusted to allow simultaneous observations of the overlapping zone viewed from the two parallel orbits of the twin satellites. Using twin satellites instead of a single satellite would allow measurements in a more complete range of scattering angles. A MAUVE/SWIPE satellite mission would significantly improve the accuracy of ocean color observations from space, and would extend the retrieval of ocean optical properties to the ultra-violet, where they become very sensitive to detritus material and dissolved organic matter. It would also provide a complete description of the scattering and absorption properties of aerosol particles, as well as their size distribution and vertical distribution. Over land, the retrieved bidirectional reflectance function would allow a better classification of terrestrial vegetation and discrimination of surface types. The twin satellite concept, by providing stereoscopic capability, would offer the possibility to analyze the three-dimensional structure and radiative properties of cloud fields.
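
    With a polarizing filter wheel, the first three Stokes parameters follow from radiances measured behind linear polarizers at three orientations. Assuming, for illustration only, an ideal 0°/60°/120° polarizer arrangement, the inversion is:

```python
import numpy as np

def stokes_iqu(s0, s60, s120):
    """Recover I, Q, U from radiances measured behind ideal linear polarizers at
    0, 60 and 120 degrees, using S(theta) = (I + Q cos 2theta + U sin 2theta) / 2."""
    i = 2.0 / 3.0 * (s0 + s60 + s120)
    q = 2.0 / 3.0 * (2.0 * s0 - s60 - s120)
    u = 2.0 / np.sqrt(3.0) * (s60 - s120)
    return i, q, u

def dolp_aolp(i, q, u):
    """Degree and angle (deg) of linear polarization derived from I, Q, U."""
    return np.hypot(q, u) / i, 0.5 * np.degrees(np.arctan2(u, q))
```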

  3. C- and L-band space-borne SAR incidence angle normalization for efficient Arctic sea ice monitoring

    NASA Astrophysics Data System (ADS)

    Mahmud, M. S.; Geldsetzer, T.; Howell, S.; Yackel, J.; Nandan, V.

    2017-12-01

    C-band Synthetic Aperture Radar (SAR) has been widely and effectively used for operational sea ice monitoring, owing to its greater separability between snow-covered first-year (FYI) and multi-year (MYI) ice types during winter. However, during the melt season, the C-band SAR backscatter contrast between FYI and MYI is reduced. To overcome the limitations of C-band, several studies have recommended utilizing L-band SAR, as it has the potential to significantly improve sea ice classification. Given its longer wavelength, L-band can efficiently separate FYI and MYI types, especially during the melt season. Therefore, the combination of C- and L-band SAR is an optimal solution for efficient seasonal sea ice monitoring. As SAR acquires images over a range of incidence angles from near-range to far-range, SAR backscatter varies substantially. To compensate for this variation in SAR backscatter, the incidence angle dependency of C- and L-band SAR backscatter for different FYI and MYI types must be quantified, which is the objective of this study. Time-series SAR imagery from C-band RADARSAT-2 and L-band ALOS PALSAR during the winter months of 2010 across 60 sites over the Canadian Arctic was acquired. Utilizing 15 images for each site during February-March for both C- and L-band SAR, the incidence angle dependency was calculated. Our study reveals that L- and C-band backscatter from FYI and MYI decreases with increasing incidence angle. The mean incidence angle dependencies for FYI and MYI were estimated to be -0.21 dB/1° and -0.30 dB/1°, respectively, from L-band SAR, and -0.22 dB/1° and -0.16 dB/1°, respectively, from C-band SAR. While the incidence angle dependency for FYI was found to be similar at both frequencies, for MYI it was roughly twice as large at L-band as at C-band. After applying the incidence angle normalization method to both C- and L-band SAR images, preliminary results indicate improved separability between FYI and MYI types, with a substantially lower number of mixed pixels, thereby offering more reliable sea ice classification accuracies. Research findings from this study can be utilized to improve seasonal sea ice classification with higher accuracy for operational Arctic sea ice monitoring, especially in regions like the Canadian Arctic, where MYI detection is crucial for safer ship navigation.
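
    The incidence angle normalization described above amounts to shifting each backscatter value along the fitted linear dB-per-degree slope to a common reference angle. A minimal sketch follows; the 30° reference angle is an assumption, while the slope value is one of those reported above.

```python
def normalize_backscatter(sigma0_db, theta_deg, slope_db_per_deg, theta_ref_deg=30.0):
    """Project sigma0 (dB) observed at incidence angle theta onto a reference angle,
    assuming a linear dependence of backscatter on incidence angle.
    slope_db_per_deg is negative (backscatter decreases with incidence angle),
    e.g. about -0.21 dB/deg for first-year ice at L-band in this study."""
    return sigma0_db - slope_db_per_deg * (theta_deg - theta_ref_deg)

# Example: FYI pixel observed at 45 deg incidence, L-band slope -0.21 dB/deg,
# referenced to 30 deg -> -14.85 dB.
print(normalize_backscatter(-18.0, 45.0, -0.21))
```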

  4. Coccoliths in the Celtic Sea

    NASA Technical Reports Server (NTRS)

    2002-01-01

    As the basis of the marine food chain, phytoplankton are important indicators of change in the oceans. These marine flora also extract carbon dioxide from the atmosphere for use in photosynthesis, and play an important role in global climate. Phytoplankton blooms that occur near the surface are readily visible from space, enabling a global estimation of the presence of chlorophyll and other pigments. There are more than 5,000 different species of phytoplankton, however, and it is not always possible to identify the type of phytoplankton present using space-based remote sensing.

    Coccolithophores, however, are a group of phytoplankton that are identifiable from space. These microscopic plants armor themselves with external plates of calcium carbonate. The plates, or coccoliths, give the ocean a milky white or turquoise appearance during intense blooms. The long-term flux of coccoliths to the ocean floor is the main process responsible for the formation of chalk and limestone.

    This image is a natural-color view of the Celtic Sea and English Channel regions, and was acquired by the Multi-angle Imaging SpectroRadiometer's nadir (vertical-viewing) camera on June 4, 2001 during Terra orbit 7778. It represents an area of 380 kilometers x 445 kilometers, and includes portions of southwestern England and northwestern France. The coccolithophore bloom in the lower left-hand corner usually occurs in the Celtic Sea for several weeks in summer. The coccoliths backscatter light from the water column to create a bright optical effect. Other algal and/or phytoplankton blooms can also be discerned along the coasts near Portsmouth, England and Granville, France.

    At full resolution, evidence of human activity is also apparent in this image. White specks associated with ship wakes are present in the open water, and aircraft contrails are visible within the high cirrus clouds over the English Channel.

    MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  5. Focus detection by shearing interference of vortex beams for non-imaging systems.

    PubMed

    Li, Xiongfeng; Zhan, Shichao; Liang, Yiyong

    2018-02-10

    In focus detection for non-imaging systems, common image-based methods are not applicable. Interference techniques are also seldom used, because only the degree of defocus, and hardly any information about its direction, can be derived from the fringe spacing. In this paper, we propose a vortex-beam-based shearing interference system to perform focus detection for a focused laser direct-writing system, where a vortex beam is already involved. Both simulated and experimental results show that fork-like features appear in the interference patterns due to the presence of an optical vortex, which makes it possible to distinguish the degree and direction of defocus simultaneously. The theoretical fringe spacing and resolution of this method are derived. A resolution of 0.79 μm can be achieved under the experimental combination of parameters, and it can be further improved with the help of image processing algorithms and closed-loop control in the future. Finally, the influence of incomplete collimation and of the wedge angle of the shear plate is discussed. This focus detection approach is particularly appropriate for non-imaging systems containing one or more focused vortex beams.

  6. Detection of relationships among multi-modal brain imaging meta-features via information flow.

    PubMed

    Miller, Robyn L; Vergara, Victor M; Calhoun, Vince D

    2018-01-15

    Neuroscientists and clinical researchers are awash in data from an ever-growing number of imaging and other bio-behavioral modalities. This flow of brain imaging data, taken under resting and various task conditions, combines with available cognitive measures, behavioral information, genetic data plus other potentially salient biomedical and environmental information to create a rich but diffuse data landscape. The conditions being studied with brain imaging data are often extremely complex and it is common for researchers to employ more than one imaging, behavioral or biological data modality (e.g., genetics) in their investigations. While the field has advanced significantly in its approach to multimodal data, the vast majority of studies still ignore joint information among two or more features or modalities. We propose an intuitive framework based on conditional probabilities for understanding information exchange between features in what we are calling a feature meta-space; that is, a space consisting of many individual feature spaces. Features can have any dimension and can be drawn from any data source or modality. No a priori assumptions are made about the functional form (e.g., linear, polynomial, exponential) of captured inter-feature relationships. We demonstrate the framework's ability to identify relationships between disparate features of varying dimensionality by applying it to a large multi-site, multi-modal clinical dataset balanced between schizophrenia patients and controls. In our application it exposes both expected (previously observed) relationships, and novel relationships rarely considered or investigated by clinical researchers. To the best of our knowledge there is not presently a comparably efficient way to capture relationships of indeterminate functional form between features of arbitrary dimension and type. We are introducing this method as an initial foray into a space that remains relatively underpopulated. The framework we propose is powerful, intuitive and very efficiently provides a high-level overview of a massive data space. In our application it exposes both expected relationships and relationships very rarely considered worth investigating by clinical researchers. Copyright © 2017 Elsevier B.V. All rights reserved.
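
    The abstract does not give the estimator, but one generic way to score a relationship of unknown functional form between two features — in the spirit of the conditional-probability framework described — is histogram-based mutual information, sketched here purely as an illustration:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of mutual information (nats) between two 1-D features.
    Values well above zero indicate statistical dependence of any functional form."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])))

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
print(mutual_information(x, x**2 + 0.1 * rng.normal(size=5000)))   # nonlinear but dependent
print(mutual_information(x, rng.normal(size=5000)))                 # near zero for independent features
```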

  7. JMISR INteractive eXplorer

    NASA Technical Reports Server (NTRS)

    Nelson, David L.; Diner, David J.; Thompson, Charles K.; Hall, Jeffrey R.; Rheingans, Brian E.; Garay, Michael J.; Mazzoni, Dominic

    2010-01-01

    MISR (Multi-angle Imaging SpectroRadiometer) INteractive eXplorer (MINX) is an interactive visualization program that allows a user to digitize smoke, dust, or volcanic plumes in MISR multiangle images, and automatically retrieve height and wind profiles associated with those plumes. This innovation can perform 9-camera animations of MISR level-1 radiance images to study the 3D relationships of clouds and plumes. MINX also enables archiving MISR aerosol properties and Moderate Resolution Imaging Spectroradiometer (MODIS) fire radiative power along with the heights and winds. It can correct geometric misregistration between cameras by correlating off-nadir camera scenes with corresponding nadir scenes and then warping the images to minimize the misregistration offsets. Plots of BRF (bidirectional reflectance factor) vs. camera angle for points clicked in an image can be displayed. Users get rapid access to map views of MISR path and orbit locations and overflight dates, and past or future orbits can be identified that pass over a specified location at a specified time. Single-camera, level-1 radiance data at 1,100- or 275- meter resolution can be quickly displayed in color using a browse option. This software determines the heights and motion vectors of features above the terrain with greater precision and coverage than previous methods, based on an algorithm that takes wind direction into consideration. Human interpreters can precisely identify plumes and their extent, and wind direction. Overposting of MODIS thermal anomaly data aids in the identification of smoke plumes. The software has been used to preserve graphical and textural versions of the digitized data in a Web-based database.
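
    MINX derives heights from the apparent along-track displacement (parallax) of a feature between MISR cameras. Setting aside the wind correction that MINX also applies, the geometric core reduces to the relation below; the view angle and parallax values are illustrative.

```python
import numpy as np

def stereo_height_km(parallax_km, view_angle_deg, reference_angle_deg=0.0):
    """Feature height above the surface from the along-track parallax between a
    reference (e.g. nadir) camera and an oblique camera:
    h = parallax / (tan(oblique) - tan(reference)).
    A purely geometric simplification; MINX additionally corrects the parallax
    for along-track plume/cloud motion."""
    return parallax_km / (np.tan(np.radians(view_angle_deg)) -
                          np.tan(np.radians(reference_angle_deg)))

# A 2.1 km apparent shift between the nadir and 45.6-degree cameras -> ~2.1 km height.
print(stereo_height_km(2.1, 45.6))
```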

  8. Numerical investigation of three-dimensional pupil model impact on the relative illumination in panomorph lenses

    NASA Astrophysics Data System (ADS)

    Zhuang, Zhenfeng; Thibault, Simon

    2017-11-01

    One of the key issues in conventional wide-angle lenses is the well-known cosine-fourth power law problem, which causes illumination falloff in the image space. This paper explores methods of improving illumination in the image space of panomorph lenses. By tracing skew rays within the defined field of view and pupil diameter, we obtain the actual positions of the three-dimensional pupil models of the entrance pupil (EP) and exit pupil (XP). Based on the law of irradiance transport conservation, the relation between the area of the EP projection and the illumination in the image space is derived to investigate the factors affecting the illumination in the peripheral field. A panomorph lens has been optimized as an example by providing a self-defined operation in the optimization process. The characteristics of the EP and XP in panomorph lenses are qualitatively analyzed. Compared with the conventional design method, the proposed design strategy can enhance the illumination with and without polarized light based on qualitatively evaluating the area of the projected EP. It is demonstrated that this method enables the enhancement of the illumination without additional film coating.
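
    The cosine-fourth falloff cited above is straightforward to quantify; a minimal sketch of the ideal relative illumination versus field angle (before any of the pupil effects the paper exploits) is:

```python
import numpy as np

def relative_illumination(field_angle_deg):
    """Ideal cos^4 relative illumination for a distortion-free lens: image-plane
    irradiance at field angle theta divided by the on-axis irradiance."""
    return np.cos(np.radians(field_angle_deg)) ** 4

for theta in (0, 30, 60, 80):
    print(theta, round(float(relative_illumination(theta)), 3))
# 0 -> 1.0, 30 -> 0.563, 60 -> 0.062, 80 -> 0.001: the falloff wide-angle designs must fight.
```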

  9. Techniques for deriving tissue structure from multiple projection dual-energy x-ray absorptiometry

    NASA Technical Reports Server (NTRS)

    Feldmesser, Howard S. (Inventor); Charles, Jr., Harry K. (Inventor); Beck, Thomas J. (Inventor); Magee, Thomas C. (Inventor)

    2004-01-01

    Techniques for deriving bone properties from images generated by a dual-energy x-ray absorptiometry apparatus include receiving first image data having pixels indicating bone mineral density projected at a first angle of a plurality of projection angles. Second image data and third image data are also received. The second image data indicates bone mineral density projected at a different second angle. The third image data indicates bone mineral density projected at a third angle. The third angle is different from the first angle and the second angle. Principal moments of inertia for a bone in the subject are computed based on the first image data, the second image data and the third image data. The techniques allow high-precision, high-resolution dual-energy x-ray attenuation images to be used for computing principal moments of inertia and strength moduli of individual bones, plus risk of injury and changes in risk of injury to a patient.

  10. Climatology of the Aerosol Optical Depth by Components from the Multi-Angle Imaging Spectroradiometer (MISR) and Chemistry Transport Models

    NASA Technical Reports Server (NTRS)

    Lee, Huikyo; Kalashnikova, Olga V.; Suzuki, Kentaroh; Braverman, Amy; Garay, Michael J.; Kahn, Ralph A.

    2016-01-01

    The Multi-angle Imaging Spectroradiometer (MISR) Joint Aerosol (JOINT_AS) Level 3 product has provided a global, descriptive summary of MISR Level 2 aerosol optical depth (AOD) and aerosol type information for each month over 16+ years since March 2000. Using Version 1 of JOINT_AS, which is based on the operational (Version 22) MISR Level 2 aerosol product, this study analyzes, for the first time, characteristics of observed and simulated distributions of AOD for three broad classes of aerosols: spherical nonabsorbing, spherical absorbing, and nonspherical - near or downwind of their major source regions. The statistical moments (means, standard deviations, and skewnesses) and distributions of AOD by components derived from the JOINT_AS are compared with results from two chemistry transport models (CTMs), the Goddard Chemistry Aerosol Radiation and Transport (GOCART) and SPectral RadIatioN-TrAnSport (SPRINTARS). Overall, the AOD distributions retrieved from MISR and modeled by GOCART and SPRINTARS agree with each other in a qualitative sense. Marginal distributions of AOD for each aerosol type in both MISR and the models show considerable positive skewness, which indicates the importance of including extreme AOD events when comparing satellite retrievals with models. The MISR JOINT_AS product will greatly facilitate comparisons between satellite observations and model simulations of aerosols by type.
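
    The moments compared in the study are standard sample statistics; a short sketch computing them for an AOD sample (synthetic values here, not MISR or model data) is:

```python
import numpy as np

def aod_moments(aod):
    """Mean, standard deviation and skewness of an AOD sample.
    Positive skewness reflects the heavy right tail contributed by extreme-AOD events."""
    aod = np.asarray(aod, dtype=float)
    mean = aod.mean()
    std = aod.std(ddof=1)
    skew = np.mean(((aod - mean) / std) ** 3)
    return mean, std, skew

rng = np.random.default_rng(1)
synthetic_aod = rng.lognormal(mean=-2.0, sigma=0.8, size=10_000)  # log-normal-like AOD sample
print(aod_moments(synthetic_aod))
```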

  11. Optimal graph based segmentation using flow lines with application to airway wall segmentation.

    PubMed

    Petersen, Jens; Nielsen, Mads; Lo, Pechin; Saghir, Zaigham; Dirksen, Asger; de Bruijne, Marleen

    2011-01-01

    This paper introduces a novel optimal graph construction method that is applicable to multi-dimensional, multi-surface segmentation problems. Such problems are often solved by refining an initial coarse surface within the space given by graph columns. Conventional columns are not well suited for surfaces with high curvature or complex shapes but the proposed columns, based on properly generated flow lines, which are non-intersecting, guarantee solutions that do not self-intersect and are better able to handle such surfaces. The method is applied to segment human airway walls in computed tomography images. Comparison with manual annotations on 649 cross-sectional images from 15 different subjects shows significantly smaller contour distances and larger area of overlap than are obtained with recently published graph based methods. Airway abnormality measurements obtained with the method on 480 scan pairs from a lung cancer screening trial are reproducible and correlate significantly with lung function.

  12. Adjoint Methods for Adjusting Three-Dimensional Atmosphere and Surface Properties to Fit Multi-Angle Multi-Pixel Polarimetric Measurements

    NASA Technical Reports Server (NTRS)

    Martin, William G.; Cairns, Brian; Bal, Guillaume

    2014-01-01

    This paper derives an efficient procedure for using the three-dimensional (3D) vector radiative transfer equation (VRTE) to adjust atmosphere and surface properties and improve their fit with multi-angle/multi-pixel radiometric and polarimetric measurements of scattered sunlight. The proposed adjoint method uses the 3D VRTE to compute the measurement misfit function and the adjoint 3D VRTE to compute its gradient with respect to all unknown parameters. In the remote sensing problems of interest, the scalar-valued misfit function quantifies agreement with data as a function of atmosphere and surface properties, and its gradient guides the search through this parameter space. Remote sensing of the atmosphere and surface in a three-dimensional region may require thousands of unknown parameters and millions of data points. Many approaches would require calls to the 3D VRTE solver in proportion to the number of unknown parameters or measurements. To avoid this issue of scale, we focus on computing the gradient of the misfit function as an alternative to the Jacobian of the measurement operator. The resulting adjoint method provides a way to adjust 3D atmosphere and surface properties with only two calls to the 3D VRTE solver for each spectral channel, regardless of the number of retrieval parameters, measurement view angles or pixels. This gives a procedure for adjusting atmosphere and surface parameters that will scale to the large problems of 3D remote sensing. For certain types of multi-angle/multi-pixel polarimetric measurements, this encourages the development of a new class of three-dimensional retrieval algorithms with more flexible parametrizations of spatial heterogeneity, less reliance on data screening procedures, and improved coverage in terms of the resolved physical processes in the Earth's atmosphere.
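
    The key point — the misfit gradient with respect to all parameters from one forward and one adjoint solve — can be illustrated with a toy linear operator standing in for the 3D VRTE; this is an analogy only, not the paper's radiative-transfer implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_params, n_obs = 1000, 200
A = rng.normal(size=(n_obs, n_params))      # toy linear forward model (stand-in for the VRTE)
x = rng.normal(size=n_params)               # current atmosphere/surface parameter estimate
d = rng.normal(size=n_obs)                  # multi-angle/multi-pixel measurements

residual = A @ x - d                        # one "forward solve"
misfit = 0.5 * residual @ residual          # scalar misfit J(x)
gradient = A.T @ residual                   # one "adjoint solve" yields dJ/dx for ALL parameters

# Finite-difference check on a single component (this would need one extra solve per
# parameter, which is exactly the cost the adjoint approach avoids).
eps, k = 1e-6, 7
x_pert = x.copy()
x_pert[k] += eps
misfit_pert = 0.5 * np.sum((A @ x_pert - d) ** 2)
print(gradient[k], (misfit_pert - misfit) / eps)   # the two numbers agree
```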

  13. Real-Space Imaging of the Tailored Plasmons in Twisted Bilayer Graphene

    NASA Astrophysics Data System (ADS)

    Hu, F.; Das, Suprem R.; Luan, Y.; Chung, T.-F.; Chen, Y. P.; Fei, Z.

    2017-12-01

    We report a systematic plasmonic study of twisted bilayer graphene (TBLG)—two graphene layers stacked with a twist angle. Through real-space nanoimaging of TBLG single crystals with a wide distribution of twist angles, we find that TBLG supports confined infrared plasmons that are sensitively dependent on the twist angle. At small twist angles, TBLG has a plasmon wavelength comparable to that of single-layer graphene. At larger twist angles, the plasmon wavelength of TBLG increases significantly with apparently lower damping. Further analysis and modeling indicate that the observed twist-angle dependence of TBLG plasmons in the Dirac linear regime is mainly due to the Fermi-velocity renormalization, a direct consequence of interlayer electronic coupling. Our work unveils the tailored plasmonic characteristics of TBLG and deepens our understanding of the intriguing nano-optical physics in novel van der Waals coupled two-dimensional materials.

  14. Real-Space Imaging of the Tailored Plasmons in Twisted Bilayer Graphene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, F.; Das, Suprem R.; Luan, Y.

    Here, we report a systematic plasmonic study of twisted bilayer graphene (TBLG)—two graphene layers stacked with a twist angle. Through real-space nanoimaging of TBLG single crystals with a wide distribution of twist angles, we find that TBLG supports confined infrared plasmons that are sensitively dependent on the twist angle. At small twist angles, TBLG has a plasmon wavelength comparable to that of single-layer graphene. At larger twist angles, the plasmon wavelength of TBLG increases significantly with apparently lower damping. Further analysis and modeling indicate that the observed twist-angle dependence of TBLG plasmons in the Dirac linear regime is mainly due to the Fermi-velocity renormalization, a direct consequence of interlayer electronic coupling. Our work unveils the tailored plasmonic characteristics of TBLG and deepens our understanding of the intriguing nano-optical physics in novel van der Waals coupled two-dimensional materials.

  15. Real-Space Imaging of the Tailored Plasmons in Twisted Bilayer Graphene

    DOE PAGES

    Hu, F.; Das, Suprem R.; Luan, Y.; ...

    2017-12-13

    Here, we report a systematic plasmonic study of twisted bilayer graphene (TBLG)—two graphene layers stacked with a twist angle. Through real-space nanoimaging of TBLG single crystals with a wide distribution of twist angles, we find that TBLG supports confined infrared plasmons that are sensitively dependent on the twist angle. At small twist angles, TBLG has a plasmon wavelength comparable to that of single-layer graphene. At larger twist angles, the plasmon wavelength of TBLG increases significantly with apparently lower damping. Further analysis and modeling indicate that the observed twist-angle dependence of TBLG plasmons in the Dirac linear regime is mainly due to the Fermi-velocity renormalization, a direct consequence of interlayer electronic coupling. Our work unveils the tailored plasmonic characteristics of TBLG and deepens our understanding of the intriguing nano-optical physics in novel van der Waals coupled two-dimensional materials.

  16. Determining Metacarpophalangeal Flexion Angle Tolerance for Reliable Volumetric Joint Space Measurements by High-resolution Peripheral Quantitative Computed Tomography.

    PubMed

    Tom, Stephanie; Frayne, Mark; Manske, Sarah L; Burghardt, Andrew J; Stok, Kathryn S; Boyd, Steven K; Barnabe, Cheryl

    2016-10-01

    The position-dependence of a method to measure the joint space of metacarpophalangeal (MCP) joints using high-resolution peripheral quantitative computed tomography (HR-pQCT) was studied. Cadaveric MCP joints were imaged at 7 flexion angles between 0 and 30 degrees. The variability in reproducibility for mean, minimum, and maximum joint space widths and volume measurements was calculated for increasing degrees of flexion. Root mean square coefficient of variation values were < 5% under 20 degrees of flexion for mean, maximum, and volumetric joint spaces. Values for minimum joint space width were optimized under 10 degrees of flexion. MCP joint space measurements should be acquired at < 10 degrees of flexion in longitudinal studies.
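
    The reported precision metric is a root-mean-square coefficient of variation over repeated measurements; a sketch of that calculation on hypothetical repeat scans is:

```python
import numpy as np

def rms_cv_percent(repeats):
    """Root-mean-square coefficient of variation (%) across joints.
    `repeats` is an (n_joints, n_repeats) array of repeated joint-space measurements."""
    repeats = np.asarray(repeats, dtype=float)
    cv = repeats.std(axis=1, ddof=1) / repeats.mean(axis=1)   # per-joint CV
    return 100.0 * np.sqrt(np.mean(cv ** 2))

# Hypothetical mean joint-space widths (mm) measured twice per joint at different flexions.
print(rms_cv_percent([[1.72, 1.69], [1.55, 1.60], [1.81, 1.78]]))
```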

  17. Multiview face detection based on position estimation over multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    Huang, Ching-chun; Chou, Jay; Shiu, Jia-Hou; Wang, Sheng-Jyh

    2012-02-01

    In this paper, we propose a multi-view face detection system that locates head positions and indicates the direction of each face in 3-D space over a multi-camera surveillance system. To locate 3-D head positions, conventional methods relied on face detection in 2-D images and projected the face regions back to 3-D space for correspondence. However, the inevitable false face detections and rejections usually degrade the system performance. Instead, our system searches for heads and face directions over the 3-D space using a sliding cube. Each searched 3-D cube is projected onto the 2-D camera views to determine the existence and direction of human faces. Moreover, a pre-processing step that estimates the locations of candidate targets is presented to speed up the search over the 3-D space. In summary, our proposed method can efficiently fuse multi-camera information and suppress the ambiguity caused by detection errors. Our evaluation shows that the proposed approach can efficiently indicate the head position and face direction on real video sequences even under serious occlusion.
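
    Projecting each candidate 3-D cube centre into the calibrated camera views is a standard pinhole-camera operation; a minimal sketch (the intrinsics and pose below are placeholders) is:

```python
import numpy as np

def project_point(X_world, K, R, t):
    """Project a 3-D world point into pixel coordinates with a pinhole camera:
    x ~ K (R X + t), where K is the 3x3 intrinsic matrix and (R, t) the
    world-to-camera pose."""
    X_cam = R @ np.asarray(X_world, dtype=float) + t
    x = K @ X_cam
    return x[:2] / x[2]                       # perspective division -> (u, v) pixels

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])               # placeholder intrinsics
R, t = np.eye(3), np.array([0.0, 0.0, 4.0])   # camera 4 m in front of the world origin
print(project_point([0.2, -0.1, 0.0], K, R, t))   # candidate head position in this view
```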

  18. Trapped Ring Current Ion Dynamics During the 17-18 March 2015 Geomagnetic Storm Obtained from TWINS ENA Images

    NASA Astrophysics Data System (ADS)

    Perez, J. D.; Goldstein, J.; McComas, D. J.; Valek, P. W.; Fok, M. C. H.; Hwang, K. J.

    2015-12-01

    On 17-18 March 2015, there was a large (minimum SYM/H < -200 nT) geomagnetic storm. The Two Wide-Angle Imaging Neutral Atom Spectrometers (TWINS) mission, the first stereoscopic ENA magnetospheric imager, provides global images of the inner magnetosphere from which global distributions of ion flux, energy spectra, and pitch angle distributions are obtained. We will show how the observed ion pressure correlates with SYM/H. Examples of multiple peaks in the ion spatial distribution which may be due to multiple injections and/or energy and pitch angle dependent drift will be illustrated. Energy spectra will be shown to be non-Maxwellian, frequently having two peaks, one in the 10 keV range and another near 40 keV. Pitch angle distributions will be shown to have generally perpendicular anisotropy and that this can be time, space and energy dependent. The results are consistent with Comprehensive Inner Magnetosphere-Ionosphere (CIMI) model simulations.

  19. MISR - Science Data Validation Plan

    NASA Technical Reports Server (NTRS)

    Conel, J.; Ledeboer, W.; Ackerman, T.; Marchand, R.; Clothiaux, E.

    2000-01-01

    This Science Data Validation Plan describes the plans for validating a subset of the Multi-angle Imaging SpectroRadiometer (MISR) Level 2 algorithms and data products and supplying top-of-atmosphere (TOA) radiances to the In-flight Radiometric Calibration and Characterization (IFRCC) subsystem for vicarious calibration.

  20. James Bay

    Atmospheric Science Data Center

    2013-04-17

    article title:  Hudson Bay and James Bay, Canada   ... which scatters more light in the backward direction. This example illustrates how multi-angle viewing can distinguish physical structures ... MD. The MISR data were obtained from the NASA Langley Research Center Atmospheric Science Data Center in Hampton, VA. Image ...
